If Money = Memory, if Society = a Super Computer, if Computation is in Physical Systems, what is a Decentralized Operating System? By Dana Edwards. Posted on Steemit. October 24, 2018.
These concepts are not often discussed, so let's have the discussion from the beginning. The first concept to think about is pancomputationalism, or, put another way, the ubiquitous computers which exist everywhere in our environment. We can, for example, look at physical systems, living and non-living, and see computations taking place all around us. Look at rocks and trees and you can see memory storage. Look at DNA and you can see code; look at viruses and you can see microscopic programmers adding new code to DNA. Even the weather, such as a hurricane, is computing.
If you look at nature you see algorithms. You will also see learners in nature (yes, the same as in AI), and the process is basically the same for all learning. Consider that everything physical is also digital. Consider that the universe is merely information patterns.
If we look at society we can also think of it as a computer. What does society compute, though? One way people talk about a society is as a complex adaptive system, but this is also how people might talk about the human body. The human body computes with the purpose of maintaining homeostasis, persisting through time, and reproducing copies of itself. The human brain computes to promote the survival of the human body. Just as viruses pass code into our DNA, the human brain is infected with mind viruses called memes. Memes are pieces of information which can physically alter how the brain works.
The mind isn't limited to the brain. The mind is all the resources the brain can leverage to compute. In other words, a person has a brain to compute with, but the invention of language allowed a person to compute not just with their own brain but with the environment itself. To draw on a cave wall is to use the cave to extend the memory of the brain. To use mathematics is to use language to enhance the brain's ability to compute, by relying on external storage and symbol manipulation. To use a computer with a programming language is essentially to use mathematics, only instead of writing on the cave wall we are writing in 1s and 0s. The mind exists to augment the brain in a constant feedback loop, where the brain relies on the mind to improve itself and adapt. If there were no external reality, the brain would have no way to evolve and improve itself.
A society, in the strictly human sense of the word, is an aggregation of minds: at minimum, all the human minds in that society. As technology improves, mind capacity increases, because each human can remember more, can access more computational resources, and can in essence use technology to continuously improve their mind and then leverage the improved mind to improve their brain. The Internet is the pinnacle of this kind of progress, but it's obviously not good enough. While the Internet allows for the creation of a global mind by connecting people, things, and minds, it does nothing to actually improve the feedback loop between the mind and the brain, nor does it really offer what could be offered.
Then Bitcoin came into the picture, and perhaps we can think of it as a better memory: a decentralized memory in which, essentially, you can have money. The problem is that money is a very narrow application. It is a start, just as learning to write on the cave wall was a start, but it's not ambitious enough in my opinion.
Humans in the current blockchain or crypto community do not have many ways to exchange human computation. Human computation is just as valuable as non-biological machine computation, because there are some kinds of computations which humans can do quite easily but which non-biological machines still cannot do as well. Translation, for example, is something non-biological machines have a difficult time with but human beings do well. This means a market will be able to form where humans sell their computation to translate things. If we look at Amazon Mechanical Turk we can see many tasks which humans can do but computer AI cannot yet do, such as labeling and classifying data. For things to go to the next level, we will need markets which allow humans to contribute human computation and/or human knowledge in exchange for crypto tokens.
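As a minimal sketch of what such a market could look like (the names, the escrow flow, and the token accounting here are purely my own illustration, not any existing protocol):

```python
# Hypothetical human-computation task market: a requester escrows a token
# bounty on a task, a human worker claims it, and the bounty is released
# to the worker once the result is accepted.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    description: str
    bounty: int                     # tokens escrowed by the requester
    worker: Optional[str] = None
    result: Optional[str] = None

def post_task(market, description, bounty):
    market.append(Task(description, bounty))

def claim_task(task, worker):
    task.worker = worker

def accept_result(task, result, balances):
    task.result = result
    balances[task.worker] = balances.get(task.worker, 0) + task.bounty

market, balances = [], {}
post_task(market, "Translate this paragraph into Spanish", bounty=5)
claim_task(market[0], "worker_1")
accept_result(market[0], "Traduzca este párrafo ...", balances)  # worker_1 earns 5 tokens
```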
The concept of a decentralized operating system is interesting. First, if there is such a thing as social computation (collaborative filtering, subjective ranking, Waze, etc.), then what about the new paradigm of socially dispersed computing?
The question becomes: what do we want to do with this computing power? Will we use it to extend life? Will we use it to spread life into the cosmos? Will we use it to become wise? To become moral? To become rational? If we want to focus on these kinds of concerns, then we definitely need something more than Bitcoin, Ethereum, or even EOS. While EOS does seem to be pursuing the strategy of a decentralized operating system, which seems to be the correct course, it does not get everything right.
One problem is, as I mentioned before, the importance of the feedback loops between minds and brains. The reason I keep returning to the concept of the external or extended mind is the fact that it is the mind which creates the immune system protecting the brain from harmful memes. The brain keeps the body alive. The brain is not really capable of rationality, morality, or logic on its own; it relies on the mind to achieve these. The mind is essentially all the computational resources the brain can leverage.
EOS has a problem in the sense that it doesn't seem to improve the user. The user can connect, join, earn or sell, and participate, but unless the user can become wiser, more rational, and more moral, EOS has limits. EOS does have Everipedia, which is quite interesting, but again there are still problems. What can EOS do to improve the people in a society, and thus improve society, if society is a computer in need of an upgrade?
Well, if society is a computer, first, what does society compute? What should it compute? I don't even know how to answer those questions. I could suggest that if computation is a commodity, along with data, then whatever decentralized operating systems compete and exist will compete for these commodities. The total brain power of a society is just as important as the amount of connectivity. And the mind of a society is its most important part, because it is what can allow the society to become better over time, allow the people in the society to thrive, and allow its life forms to continue to evolve and avoid extinction.
A decentralized operating system, on a technical level, would have a kernel or something similar to it: the resource-management part. For example, Aragon promises to offer a decentralized OS, and it too mentions having a kernel. A true decentralized operating system has to go further and requires autonomous agents. Autonomous agents which can act on behalf of their owners are, philosophically speaking, the extended mind. But the resources of a society are still finite and have to be managed, so a kernel would provide the ability to manage resources.
The total computational ability of a society is likely a massive amount of resources, a lot more than just connecting a bunch of CPUs together. Every member of the society who can compute could participate in a computation market. Of course, as we are beginning to see now, regulators seem concerned about certain kinds of social computations, such as prediction markets. So it is unknown how truly decentralized operating systems would be handled, but my guess is that if designed right they could be pro-social, be capable of producing augmented morality by leveraging mass computation, and, by leveraging human computation, be able to stay compliant. To be compliant is simply to understand the local laws, and these can be programmed into the autonomous agents if people think it necessary.
What is more important is that if a law is clearly bad, and people have enhanced minds, then it will be very clear why the law is bad. This clarity will help people dispute bad laws and seek to change them through the appropriate channels. If there is more wisdom, due to insights from big data, from data scientists, etc., then there can be proposals for law changes which are much wiser and more intelligent. This is something people in the Tauchain community have specifically realized: that technology can be used to improve policy making.
A lot is still unknown, so these writings do not provide clear answers. Consider this just a stream of consciousness about concepts I am deeply contemplating, and also a way to interpret different technologies.
''We live in a world in which no one knows the law.''
Ohad Asor, Sept 11, 2016
I continue here with sharing my current state of grok of the four 'newtau' scriptures published so far. Sorry for the delay; it comes mostly from the effort to contain the outburst of words, catalyzed by the exegesis of such rich content, into a reader-friendly, shorter form.
The subject of this vivisection is, textographically, the first three paragraphs of ''Tau and the Crisis of Truth'', Ohad Asor, Sep 11, 2016.
The four core themes extracted are enumerated below, with a streak of comments of mine, kept modest so as not to sidetrack the thought or spoil the original message:
As a guy who has been immersed in Law for more than a quarter of a century, I can swear with both hands on my heart to the notion of the unknowability of Law.
Since my youth in law school I have asked myself how it is possible at all to have 'rule of law' when every legal system ever known has required humans to operate it!?
It seemed that the only requisite, or categorical, difference between mere arbitrary 'rule of man' and the 'rule of law' was that in some isolated cases some ruling men happened to be internally programmed by their morals to produce the appearance of 'rule of law' by 'rule of man' means.
Otherwise, 'rule of law' done via 'rule of man' poses extremely serious threats of law being used by some to exploit and harm others.
In that line of thought, my conclusion was that the Law is ... yet to come.
What we know as Law is not good networking-protocol software for mankind as such; rather, what we see are comparatively rare examples of individually well-programmed ... lawyers.
It will come on the wings of a technological breakthrough, just as flying came with the invention of airplanes, the moonwalk needed the advent of rocketry, and remembering beyond one's own lifetime needed writing. The Law is an old dream. If we judge by the depth of the abyss of folklore, one of humanity's most ancient dreams, indeed. Needless to repeat, this is what sucked me into Tau as relentlessly as black-hole spaghettification :)
The frustration with Law of the great Franz Kafka, referred to by Ohad and expressed in Kafka's book The Trial, becomes very understandable for Kafka's epoch, which lacked the comforting hope of a technology we already have, the computer, and of the overall progress in logic, mathematics, and engineering, which forms a self-reinforcing loop centered around this sci-tech of artificial cognition.
Similarly to nuclear fusion, which is always a few decades away but whose Fusion gap is noticeably closing nowadays, we are standing at the edge of a Legal gap.
Mankind's heavy involvement in cognition technologies, especially in the last several decades, has outlined multiple promising directions of further development, which seem to bring us closer to the ability to compensate for the fundamental deficiencies of Law and, in fact, to finally bring it into existence.
It took an entire Ohad Asor, however, to identify the major reasons why the Law is still bottlenecked out of our reach, and to propose viable means to bridge us across that Legal gap... The other side is already in sight.
It is, in the first place, the language that is to blame!
The human natural language. Our most important attribute as a species. The maker of mankind. The glue of society. It just emerged; it hasn't been created. It has patterns, vaguely conventional, rather than an intentionally coined set of solid rules. There are no firm rules for changing its rules, either ... Natural human language is mostly a wilderness of untamed, pristine, naked nature, dotted here and there with very expensive and hard-to-install-and-maintain ''artefacts''. Leave it alone, outside the coercion of state mass media, mass education, and national language institutes, and it falls back into a host of unintelligible dialects, even when aided by the mnemonic amplifier which we call writing.
Ambiguity is characteristic of natural language: a feature in poetry and politics, but a deadly bug in logic and law.
We'll put aside for now the postulate of the impossibility of a single universal language, to revisit it later when its exegetic turn comes, in another chapter on another scripture. Likewise, this chapter won't cover the neurological human bottlenecks which Tau targets to overcome. Let's observe the sequence of the author's thoughts and not fast-forward.
Instead, I'll dare to share with you my own hypothesis about why the natural human languages are the way they are. (I'm smiling as I type this, because I can visualize Ohad's reaction upon reading such a frivolous lay narrative. I hope that, being too busy, he actually won't.) To say that human languages are just too complex does not bring us any nearer to a decent explanation. Many logic-based languages are more than a match for the natural human ones in terms of expressiveness and complexity. That shouldn't be the reason.
My suspicion is rather that the natural human languages pose such Moravec hardness because they are not exactly languages. Languages are conveyors of meaning. Human languages convey not meaning, but indexes, addresses, or tags of mind states. The meaning is the mind state. Understanding between humans is a function not only of shared learnt syntax, but also of shared lives: of an aggregation of similar mind states to be referred to by matching word keys.
If this is true, it is another angle for grokking the solution: human users leaning toward the machine by using a human-intelligible Machinish, instead of Tau waiting for the language barrier to be broken and machines starting to speak and listen in Humanish.
In a nutshell, we still await the Law because Law is not doable in Humanish. Bad software. And the other side of the no-law coin is that humans are no cognitive ASICs: we do cognition only incidentally, and in order to do what other animals do, survive. Bad hardware.
In order for law to become law, it must become hands-free.
Not humans reading laws, but laws reading laws.
The technology to enable that looks to be at arm's length.
OK, so far we have butchered the law and the language. What's left?
The nature and essence of human language brought forth one of the most harmful and devastating notions ever. Literally, a thought of mass destruction.
The ''crisis of truth''. The wasteland left by the toxic idea spillover of ''there is no one truth'', or even ''there is no truth'' at all. This is not only an abstract, philosophical problem. Billions of people have actually been killed for somebody else's truth.
Not accidentally, the philosophers who immersed themselves in this pool are nicknamed 'Deconstructivists'. Following their epistemic genealogy back, we see, by the way, that they are rooted in faith rather than in reasoning, but that is another story.
The general problem of truth, of which the problem of law is just a special case, opens up two important aspects:
Number one: all knowledge is conjectural with respect to truth, and truth is an asymptotic boundary, forever to be closed in on but never reached, like the speed of light or absolute zero. Number two: human languages make pretty lousy vehicles to chase truth with.
If words really exist just to match people's thoughts together, then there are thoughts without words and words without thoughts. Words mismatch thoughts, so how can we expect them to bridge thoughts to things? Entire worlds of nonsensical wording emerge, dangerously disturbing the seamless unity of things and thoughts. Truth displaced.
''But can we at least have some island of truth in which social contracts can be useful and make sense?''
This island of shared truth is made of consensus bedrock and synchronization landmass.
Truth and Law self-enforced: from within, instead of by violence from without. And in a self-referential, non-regressive way.
''We therefore remain without any logical basis for the process of rulemaking, not only the crisis of deciding what is legal and what is illegal.''
Peter Suber, with his ''The Paradox of Self-Amendment: A Study of Law, Logic, Omnipotence, and Change'', proposed a rulemaking solution which he called Nomic.
''Nomic is a game in which changing the rules is a move.'' 
The merit of Nomic is that it really eliminates the ills of the infinite regress of laws-for-changing-the-laws-for-changing-the-laws, ad infinitum, by use of transmutable self-referential rules. But Nomic suffers from a number of issues. The first one, in the spotlight of this chapter, is the fact that we still remain within the 'crisis of truth', in which there is no one truth; the other ones, like scalability of sequencing and voting, we'll revisit in their order of appearance in the discussed texts.
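To make Suber's trick concrete, here is a toy sketch of a Nomic-style rule set (entirely my own illustration; the rule texts and the voting threshold are invented):

```python
# Toy Nomic: the rules are plain data, and changing a rule is itself a move,
# judged by the rules as they currently stand.
rules = {
    101: "Players may propose changes to any rule, including this one.",
    102: "A proposal passes by simple majority.",
}

def move_amend(rules, rule_id, new_text, votes_for, votes_total):
    """A move that rewrites a rule, if the current rule 102 lets it pass."""
    if votes_for * 2 > votes_total:          # 'simple majority', per rule 102
        rules[rule_id] = new_text

# Amending the amendment rule itself: self-reference without infinite regress,
# because the procedure for change is just another mutable entry.
move_amend(rules, 102, "A proposal passes by a two-thirds majority.", 3, 4)
```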
The 'newtau' went past the inherent limitations of the Nomic system and resolves the 'crisis of truth' problem.
The next few chapters will dive into Decidability and how it applies to solving the problems described above.
 - https://en.wikipedia.org/wiki/Grok
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-intro
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-the-two-towers
 - http://www.idni.org/blog/tau-and-the-crisis-of-truth.html
 - http://www.behest.io/
 - https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://en.wikipedia.org/wiki/Rule_of_law
 - https://en.wikipedia.org/wiki/Tyrant
 - https://en.wikipedia.org/wiki/Morality
 - https://en.wikipedia.org/wiki/Spaghettification
 - https://en.wikipedia.org/wiki/Franz_Kafka
 - https://en.wikipedia.org/wiki/The_Trial
 - https://www.amazon.com/Merchants-Despair-Environmentalists-Pseudo-Scientists-Antihumanism/dp/159403737X
 - https://en.wikipedia.org/wiki/Language
 - https://en.wikipedia.org/wiki/Official_language
 - https://steemit.com/blockchain/@karov/tau-through-the-moravec-prism
 - https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
 - https://www.etymonline.com/word/manipulation
 - https://en.wikipedia.org/wiki/Deconstruction
 - https://en.wikipedia.org/wiki/Consensus_decision-making
 - https://en.wikipedia.org/wiki/Synchronization
 - http://legacy.earlham.edu/~peters/writing/psa/index.htm
 - https://en.wikipedia.org/wiki/Nomic
 - https://en.wikipedia.org/wiki/Infinite_regress
 - the illustration is a painting, courtesy of its author Georgi Andonov https://www.facebook.com/georgi.andonov.9674?tn-str=*F
Tauchain and the privacy question (benefits of secret contracts and private knowledge). By Dana Edwards. Posted on Steemit. August 21, 2018.
As we can see from the current trend in crypto, there is now a move toward privacy. Most people, in my opinion, underestimate the utility of these cryptographic advances. In this blog post I will highlight a particular advance, enabled by these new cryptographic techniques (and hardware techniques such as trusted execution environments), which can be of massive benefit to long-term believers in Tauchain.
The problem: Anyone can copy the code Ohad writes if it's open source
So we have a problem with Tauchain: all of the code Ohad is writing for TML is open source and on Github. This allows a competitor to simply steal his best ideas and, in a sense, rob the token holders who actually funded the development of the code. This happens very often: we see a new innovation in the crypto space, and soon after we see a new ICO or a new group come out of nowhere acting as if they originated the technology. In some cases the new group may even be much more centralized, more secretive, and very well funded.
The solution: Secret contracts (private source code and execution)
The trusted execution environment allows for the protection of intellectual property on the hardware level, while sMPC (secure multiparty computation) can achieve similar ends on the software level. The idea is that this provides a solution to idea theft: a community can keep certain critical pieces of code, data, algorithms, or other unique features secret. This creates an entirely new way to monetize knowledge, code, and ideas, which Agoras will be uniquely positioned to leverage.
Guy Zyskind of the Enigma Project provides the definition of what secret contracts are and how they work. The Enigma Project deserves credit for introducing this technology and for identifying a major problem in the crypto space. Traditionally, on Ethereum and all other current platforms, when you release a DApp your code has to be open source; it is not possible to create a closed- or private-source decentralized app. In addition, the app has to be executed in the open, so all data running through it is public.
Strategic implementation of private knowledge and source code can allow Tauchain to maintain a dominant position
In most cases the world benefits when knowledge is shared, and in fact I'm in favor, most of the time, of sharing as much knowledge as is safe. The problem with algorithms, source code, and certain kinds of knowledge is that sharing them hands a competitive advantage to people who have more financial resources. These players can simply watch Github and copy. They can hire programmers to compete with the Tauchain and Agoras developers, and as long as the code is open there will be no real reason to buy the Agoras token long term.
What if the Tauchain development team and Agoras developers decide to implement private knowledge bases? What if it becomes possible to run code in a trusted execution environment so that other developers around the world cannot see the code or the algorithms? This would allow Tauchain to build Agoras in such a way that no other project is capable of duplicating it. It would lock the value backed by the community's brainpower into the Agoras token, making it a true knowledge token which cannot simply be copied with ease by another project.
In fact, this is a strategy that developers making apps with Enigma's secret contracts are looking into as we speak. This competitive advantage of secrecy will change the landscape of the crypto space. What does this enable for Agoras? Imagine an encrypted Github which developers can contribute to, but where only those developers can see the code. Imagine that after the code is written, no one else can see it if it is set to run privately. This would allow developers to code in secret and have the code run on computers without anyone knowing what the code is.
This can open up security vulnerabilities, but Tauchain can defend against them. In particular, it matters what is private and what is public: critical aspects can be private, while security-critical areas can always be kept public. There may even be ways to prove that the code doesn't behave in a certain way without actually sharing the code (using advanced cryptography). In fact, my favored way of implementing this feature would be to time-lock the release of the source code by a number of months or years.
The idea isn't to keep things closed or secret forever. Privacy is about access control, and about keeping things secret long enough to maintain a competitive advantage. A time delay to unlock the source code, for example, could work. It is even possible to let the community use puzzle-based time-lock encryption, so that they have to mine to get the source code released early (if there is a serious need or threat). In this way all secret blocks of code could be unlockable, but not for free, which would make it less likely that the community seeks to unlock them unless there is a genuine reason (beyond just stealing ideas).
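For the puzzle-based variant, the classic construction is the Rivest-Shamir-Wagner time-lock puzzle based on repeated squaring. Here is a minimal sketch with toy-sized parameters (a real deployment would use a large RSA modulus and would lock a symmetric key for the source archive, not the secret itself):

```python
# Rivest-Shamir-Wagner style time-lock puzzle: the creator, knowing the
# factorization of n, prepares the puzzle cheaply; anyone else must grind
# through t sequential modular squarings to unlock it.
import math, random

def make_puzzle(secret, t, p, q):
    n, phi = p * q, (p - 1) * (q - 1)
    a = random.randrange(2, n)
    while math.gcd(a, n) != 1:
        a = random.randrange(2, n)
    b = pow(a, pow(2, t, phi), n)      # shortcut: reduce the exponent 2^t mod phi(n)
    return n, a, (secret + b) % n      # the locked value

def solve_puzzle(n, a, t, locked):
    b = a
    for _ in range(t):                 # t squarings, inherently sequential
        b = pow(b, 2, n)
    return (locked - b) % n

n, a, locked = make_puzzle(secret=42, t=10_000, p=1009, q=1013)
assert solve_puzzle(n, a, t=10_000, locked=locked) == 42
```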
What do you think about these ideas? Whether you agree or disagree, comment below. Strategic IP (intellectual property) is used by major corporations to give themselves a competitive advantage. The crypto community can do the same thing in ways the legal mechanisms can't, and in fact it can be done in a fairer and better way, because the people or companies awarded IP rights often aren't the actual inventors. A knowledge economy is fantastic, but if the knowledge is just harvested by big corporations monitoring the wide-open network, then it's going to be hard to bring value to a knowledge token.
UPDATE: Many people ask where to buy Agoras. The problem is that it's not widely available on centralized exchanges; the only exchange I know that has it is Bitshares. So if anyone really wants to buy Agoras (AGRS), the token discussed in this post, feel free to buy it there:
42 million intermediate tokens total. The current price is 0.00010700 BTC, which is around 70 cents. This is the cheapest price I've seen in a while; for a long time it was in the $1.30-$1.50 range. This is a very speculative token at this time, so buy at your own risk; I'm not providing any financial advice. I'm a holder of this token, of course, and have been for years.
Puddu, I., Dmitrienko, A., & Capkun, S. (2017). μchain: How to Forget without Hard Forks. IACR Cryptology ePrint Archive, 2017, 106.
Kaptchuk, G., Miers, I., & Green, M. (2017). Managing Secrets with Consensus Networks: Fairness, Ransomware and Access Control. IACR Cryptology ePrint Archive, 2017, 201.
“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”
― Robert A. Heinlein 
No, it is not a vow that everybody should be everything. It is a reflection of fundamental human fungibility: the average human can be taught to take any human role. The exceptions, the true organic geniuses (those who are hard to replace) and the morons (those who are incapable of replacing), only confirm this general rule of sheer numbers. This is what makes mankind so scalable.
''Know'' is synonymous with ''can''. Literally. Knowledge = technology, even etymologically. Knowledge is praxis. Only. There is no such thing as impractical knowledge: if it is not a skill, it is not knowledge. I mentioned once that we're all AIs. Ref.: feral children.
We are not what we eat; we are what we've learnt. You are what you know/can, and you can what you have learnt. Learning is on the taking side, teaching on the giving side, of one and the same process. We do not seem to have a word to denote the modulus of learning/teaching. But it will come.
We are taught by the others: the society. We are the cherry on top of a layer cake of culture upon nature. We learn by ... living. We acquire skills in a plethora of contexts: family, street, school, job, media ... Learning is not a monopoly of man; countless systems are also learners. Maybe one of the basic definitions of life and intelligence is the ability to learn. A giant topic, yeah. We won't graze here into what learning is, but into how we learn.
Due to our neurological bottlenecks, we spontaneously form hierarchies. This hinders our scalability by forcing humanity to be more or less a fractal of 5. We are close to a number of breakthroughs which will mitigate these innate limitations of ours in a number of ways. But the general case is not the subject of this article: herein we focus on HOW we are taught, how we acquire knowledge, and how this knowledge of ours gets recognized and utilized by society. And the emergent hierarchic structuring is, of course, in full force upon us in teaching, as in everything else social.
So come education, exams, knowledge certification, certified skills application, knowledge-creation verification, job fitness testing, CVs and employer recommendations ... etc., etc. With all the bugs, and the so few features, of this 'map is not the territory' situation.
It is all centralized and hierarchic, exactly like the global fractal of double-entry accountancy ledgers which we call the fiat financial system. In fact it is so interwoven with fiat finance as to be almost inextricable from it. And just as inefficient and imprecise.
In all these years of talking and thinking about Tauchain, I have noticed, and this suspicion of mine is incrementally turning into sheer conviction, that Tau, the upscaler of humanity, is inevitably also the ultimate teaching machine. If education is the facilitation of learning, Tau is the maximizer of learning. By its very construction, it comes out so.
People talk and listen whenever and about whatever they want. Tau has unlimited capacity to listen, attend, remember, and answer, limited only by the hardware capacity allocated. Tau extracts meaning. It purifies the stream, distills it down to the essence. It detects repetitions, contradictions, and all the other conversation bugs so ubiquitous nowadays. It remembers the individual user's changes of opinion, and points them out. It sounds like the best tool to know oneself, and for the others to know you, if you let them.
Your Tau account, or profile, is what you know. You say what you say and also ask: statements and questions. Tau pools you together with the others who state the same and, more importantly, who ask the same type of questions. Knowing what you know, and asking about what you don't know but want to know, maps not only your knowledge state but also your knowledge dynamics: it records and drives how your knowledge changes. You even have access to what you forget, and can recollect it. True real-time knowledge-state reporting, for the first time in human history.
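A toy sketch of the pooling idea as I picture it (entirely my own illustration, not anything from TML): index users both by the statements they assert and by the questions they ask, so shared knowledge and shared gaps become queryable:

```python
# Index users by what they state and by what they ask; matching keys pool
# people with the same knowledge, or with the same gap in it.
from collections import defaultdict

statements = defaultdict(set)   # statement -> users who assert it
questions = defaultdict(set)    # question  -> users who ask it

def say(user, statement): statements[statement].add(user)
def ask(user, question):  questions[question].add(user)

say("alice", "water boils at 100 C at sea level")
ask("bob",   "at what temperature does water boil?")
ask("carol", "at what temperature does water boil?")

print(questions["at what temperature does water boil?"])  # {'bob', 'carol'}: a shared gap
```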
If consciousness is, aside from the clinical state of being merely awake, the post-factum integration of sensorimotor experience, the Accountant of the mind, the speaker of the narrative which is you, then Tau is your consciousness booster. That is: stronger than thought.
The ultimate teaching, the ultimate fair testing or exam, the ultimate real-time comprehensive diploma or certificate, the super-peer-reviewed paper(s) of you as an academic career, the ultimate job interview AND the ultimate ... job of working as yourself, with anything useful you create instantly scarcifiable and monetizable: that is what your Tau account is! And all the rest of accessible society becomes your own workforce, and you theirs. In the billions. In a move. In real time.
Including control over the pathways by which your skills increase, toward the learning directions most productive for you personally, because it aids you in analyzing your Tau history, applying knowledge-maximizer techniques, and participating profitably in the creation of newer, better ones. Maximizer of self. And maximizer of society, making it consist of max-selves. Ever improving. A merger of education with work occupation. Work-as-you-live.
The literal Knowledge Economy, as described by @trafalgar in his article from a few months ago. Where search, creation, reflection, certification, recognition, commercialization, accumulation, modification, improvement ... everything of knowledge is all in one.
And it is not only the Humans' and Tau's lonely job. I foresee the other Machines joining the party. Yes, I mean machines capable of having interests and of asking and seeking answers to palatable questions.
This education amplification coming down the technology way has, of course, been anticipated by many. A few arbitrary examples:
- A distant rough-sketch hint of the inevitable tuition power of Tau is Neal Stephenson's ''The Diamond Age'', with its depicted ''Young Lady's Illustrated Primer'', an interactive networked teaching device.
- Or, if I'm right about the inevitable conquest of the natural languages' territory, a UX like in the film 'Her' (2013).
- Thomas Frey of the futurist DaVinci Institute, in his book ''Epiphany Z'', paid special attention to this: down the way of micro- and nano-education, an effective merger of the processes of education, diploma issuing, job application, examination, and the actual execution of job obligations. Tom does not know about Tau. But I'll tell him.
With a big smile of irony and self-irony, of course ... these examples. Just picking, from here and there, proofs of the giant anticipation of what's to come. To be taken with a few big grains of salt, because the reality will be immensely more powerful.
Tutor, tuition: my emphasis, via exactly this wording, is meant to denote the economic side of learning/teaching. It is about the cost of learning (the association of tuition with fees), about the placement of the acquired skills, about the business organization of those, about the protection of ownership and the security of transactions of knowledge ... Let me introduce here a neologism which reflects the business side of it:
Scrooge Factor 
- Simply denoting the money-making power of a technology's use by a business: the 'money suction power' of a business entity or organization of any kind, coming from the application of a technology, if you will. Technology as socialized knowledge, scaled up over multiple humans, over a society. Of course, the Scrooge Factor can pump in different directions. The Scrooge Factor of traditional hierarchic education, governance, and everything else ... is apparently very often negative: hierarchies decapitalize, dissipate, waste. Orders of magnitude more wasteful than any PoW, but on this, some other time.
So, aside from all the niceties of the abstractions of the full supply and value chains of a Knowledge Economy, let's round up some numbers:
- We know that a true functional semantic search engine alone is worth $10T. Yeah. Tens of trills. Trillions. As per the assessments of Davos WEF attendees of, as far as I remember, 2015 or 2016...
- Also, Bill Gates stated back in 2004: ''If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.''
- Tom Frey also argued that by 2030 the biggest corporation in the world will be an online school. Given the present-day size and growth rate of, say, Amazon, this 'online school' should be in the range of a good many trillions of market cap if it is to be bigger than the biggest corporations. But we do not need such indirect analogies upon analogies to assess the scale; the sheer size of the global education industry is the most eloquent indicator. Note that Tom talks about a 'corporation', i.e., a clumsy and inefficient hierarchic human collective, not a system which does this orders of magnitude more efficiently and powerfully by being intrinsically P2P, i.e., geodesic. Even the best futurologists can be forgiven for failing to predict Tau. :)
And this mind-boggling hail of trillions does not even account for the Hanson Engine factor.
Tau the Tutor ex Machina is just another unintended useful consequence of the overall design.
It is nearly impossible to track and contemplate exactly what all these 'side effects' will be and how they will synergetically boost each other.
With my articles I intend only to touch some lines of the immense phase space of the possibilia, with neither any ambition to think it possible to cover it all, nor for this to represent any form of advice.
Future is incompressible. Compression is comprehension. Comprehensible only by living.
Failure to go the geodesic way of learning will turn these beautiful but chilling words into prophecy:
"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age." H. P. Lovecraft (1926)
Size matters. Some people object that it does not matter but rather has meaning. But meaning always matters, so it is the same.
The bigger the problems one solves, the bigger the gains. Big problems require big solutions. We live in a big universe, and our very survival means dealing with bigger and bigger problems, which require bigger and bigger solutions to cope.
Nevertheless, building big is hard, so we naturally prefer to create small things which can grow: small in the sense of being both understandable and affordable to build. So the best fit is small solutions, cheap and easy to make, which scale out, unfold, unleash into big means to address big problems. Scaling is everything.
Scaling. Scalable! Scalability!!
The root word 'scale' possesses marvelous riches of meaning in the English language, with lots of poetics inside:
- snake-skin epidermals: wisdom, memory, protection, rejuvenation, regeneration, eternity... hen to pan (ἓν τὸ πᾶν), ''the all is one''
- warrior armour: security, defense, power, strength
- weighing scales: a device to measure mass; unit, measure, account
All very Blockchainy wording, without any shadow of doubt.
The scalability issue can be grokked via the following anecdote:
A bunch of workers on a construction site, and a huge log. The onsite manager commands a few of them to lift and move it. They try and object: ''Too heavy!''. The manager adds more and more workers, until they shout back again: ''Too short!''.
A few real examples, the first two bad and the last three excellent:
[a] I won't name this 'crypto'; I'll just say it is named after a mythical element of the universe, according to prescientific gnostic imaginations. Its core value proposition is to shovel meaningful computation into a thread of computation whose own value proposition is to be as random, meaningless, and unidirectional (hard to do, easy to prove) as possibly possible: the blockchain. The theoretically most expensive form of computation. Visualize cars and airplanes made of gold and diamonds, burning the most expensive perfumes. Or the mass production of electricity by raising trillions of cats and hiring trillions of people to pet them, with a grid of pure gold wires to discharge and collect the electrostatics. Choosing the original Satoshi blockchain for their 'experiments', where the futility of such an attempt would have become instantly clear and died out outright due to the impending unbearable cost, would of course have been the fairer way to do it, and would have spared mankind dozens of billions of dollars; but logically they preferred a 'controlled' blockchain of their own, in the sense that the guys with a vested interest in it have the power to hand-drive, stop, restart, and vivisect it. The only use of this 'blockchain supercomputer' is ... tokenomics by layering. Why was it at all necessary for a blockchain advertised as good enough to do all general computation to be made so hairy and bushy with layered tokens?
[b] Another trio of chaps, whom I won't name either, were really in awe of Satoshi's creation, so much so that they not just liked it but wanted it, and decided to have it. For themselves. All of it. So they rebelled, forked out, and provided a 'scaling' errrmm ... uhhh ... solution: increasing the block size. Something which Satoshi meditated on, extensively discussed with his disciples, and not accidentally decided to put the brakes on. Very recently the crypto news headlines said that the blocksize-increase solution providers are eyeing ... layering. Which they had furiously advocated the blocksize increase makes unnecessary. Because it is the solution, isn't it? Or maybe it just was, and is not anymore? Well, I'd say that all the so-called 'alts', each providing a rejuvenated clone of Bitcoin tweaked here and there for momentary ease of difficulty and transaction fees, suffer from one and the same problem: traveling back in time does not tell you the future.
[c] Let's jump half a century back in time. It is the 1960s: the very making of the internet. Computers are already here, and scaled up in numbers enough for their networking to become a problem/juice worth the solution/squeeze. The birth of TCP/IP, and the report of its very makers on the solution for network scaling. Enjoy the ancient wisdom:
Initially, the TCP managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. The Transmission Control Program was split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.
The layering made the Internet as we know it, by the simple trick of just one node needing to permit another. Unstoppable inclusivity!
[d] The Mastercoin / Omni Layer:
«A common analogy that is used to describe the relation of the Omni Layer to bitcoin is that of HTTP to TCP/IP: HTTP, like the Omni Layer, is the application layer to the more fundamental transport and internet layer of TCP/IP, like bitcoin».
[e] The Lightning Network (LN):
The Lightning Network is a "second layer" payment protocol that operates on top of a blockchain (most commonly Bitcoin).
Satoshi spoke of 'payment channels' in his masterpiece, foreseeing the way to scale.
An estimate of the power of LN layering:
''The bitcoin devs accept that eventually larger block sizes will be needed. The current transaction rate isn't going to cut it if people all over the world actually start using bitcoin daily. They estimate that eventually, if everyone in the world uses bitcoin and makes 2 transactions a day, but uses the lightning network, a 133mb blocksize will be needed. Without the lightning network, something like a 200gb (GIGABYTE) size PER BLOCK would be needed to accommodate that much usage.''
Layering upscales it by orders of magnitude in efficiency.
If Bitcoin is the 'first layer' and Omni and Lightning are the 'second layer', I see which one is the 'Zeroth Layer', and I also foresee the inevitable merger, or 'Amalgamation', of all second layers over all blockchains, so the user will be able to transact anything into anything with anybody, without knowing or caring which chain is in use ... I have special nicknames for these and will return to these topics in a series of future posts.
Enough of examples I reckon.
Postel's sacred Principle of Layering comes from the implementation-levels paradigm,
or Abstraction layering:
''separations of concerns to facilitate interoperability and platform independence''
In other words: delegate the task to the layer of the system which does that particular job best. We can generalize this into The Scaling Commandment. Only one is enough:
''Thou shalt not jam it all into a single layer!''
The Layer Cake architecture is literally ubiquitous across the Universe: biology, semantics, informatics ...
It seems that this is, if not the only way, at least THE way to scale.
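As a minimal illustration of the commandment (a toy sketch; the three-letter 'protocols' below merely stand in for real stack layers), each layer wraps or unwraps only its own header and delegates the rest:

```python
# Each layer touches only its own header; composition gives the full stack.
def ip_send(payload):   return b"IP|" + payload
def tcp_send(payload):  return ip_send(b"TCP|" + payload)
def http_send(payload): return tcp_send(b"HTTP|" + payload)

def ip_recv(packet):    return packet.removeprefix(b"IP|")
def tcp_recv(packet):   return ip_recv(packet).removeprefix(b"TCP|")
def http_recv(packet):  return tcp_recv(packet).removeprefix(b"HTTP|")

wire = http_send(b"GET /")             # b'IP|TCP|HTTP|GET /'
assert http_recv(wire) == b"GET /"     # no layer reached across another
```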
Maybe someday we, the Humanity upscaled by Tauchain, will discover ways to scale more powerful than Layering, but it is all we have for now.
Scaling is a problem. Scaling must be scalable, too.
Metascale from here to Eternity.
''Thinking by Machine: A Study of Cybernetics''
by Pierre de Latil 
Published by Houghton Mifflin Company in 1957 (c.1956), Boston.
With a foreword by Isaac Asimov (then only 36 years old)! A recommendation by the legendary mathematician and cyberneticist Norbert Wiener (then 62 years old)! ... A true jewel! The book is described as:
A review of "the last ten years' progress in the development of self-governing machines," describing "the principles that make the most complex automatic machines possible, as well as the fundamentals of their construction."
The nineteen-fifties!! Midway between the first digital computer, made by my half-compatriot John Atanasoff, and the internet. Almost a human generation's span between the former, the book, and the latter event. An epoch so deep in the past that even television, air travel, rockets, and nukes ... were young then.
The same Kondratieff wave phase, by the way, which hints at the historical rhyming of socially important intellectual interests. (On how K-waves imprint on the humanity growth curve: in a series of other posts to come.)
I must admit here that I've never put my hands and eyes on this book. But it is stamped into my mind and memory by Stanislaw Lem, one of the greatest philosophers of the 20th century, working under the disguise of a sci-fi writer for being caught on the wrong side of the Iron Curtain.
''Summa Technologiae'' (1964) is a monumental work of Lem's, where most of the issues discussed sound more contemporary nowadays than they did more than half a century ago when it was written, and in many respects we are still in its deep past ...
... Lem reports and discusses the following from the aforementioned Pierre de Latil book:
''As a starting point will serve a graphic chart classifying effectors, i.e., systems capable of acting, which Pierre de Latil included in his book Artificial Thinking [P. de Latil: Sztuczne myślenie. Warsaw 1958]. He distinguishes three main classes of effectors. To the first, the deterministic effectors, belong simple (like a hammer) and complex devices (adding machine, classical machines) as well as devices coupled to the environment (but without feedback) - e.g. automatic fire alarm. The second class, organized effectors, includes systems with feedback: machines with built-in determinism of action (automatic regulators, e.g., steam engine), machines with variable goals of action (externally conditioned, e.g., electronic brains) and self-programming machines (system capable of self-organization). To the latter group belong the animals and humans. One more degree of freedom can be found in systems which are capable, in order to achieve their goals, to change themselves (de Latil calls this the freedom of the "who", meaning that, while the organization and material of his body "is given" to man, systems of that higher type can - being restricted only with respect to the choice of the building material - radically reconstruct the organization of their own system: as an example may serve a living species during biological evolution). A hypothetical effector of an even higher degree also possesses the freedom of choice of the building material from which "it creates itself". De Latil suggests for such an effector with highest freedom - the mechanism of self-creation of cosmic matter according to Hoyle's theory. It is easy to see that a far less hypothetical and easily verifiable system of that kind is the technological evolution. It displays all the features of a system with feedback, programmed "from within", i.e., self-organizing, additionally equipped with freedom with respect to total self-reconstruction (like a living, evolving species) as well as with respect to the choice of the building material (since a technology has at its disposal everything the universe contains).''
A longish quote, but every word in it is worth it. When I read this as a kid, back in the 1980s, there immediately came to my mind the next, the seventh, logically higher effector class: the worldmaker!!
The degrees of freedom of all the previous six classes, according to de Latil's classical taxonomy, are confined by the rule-set: the local laws of physics.
They are prisoners of a universe. Like birds incapable of reconfiguring their cage into a roomier and cozier one.
If we regard the laws of nature as code or algorithm, my 7th-level effector will be capable of drafting and implementing itself onto newer and stronger algorithmic foundations. (Note the seamlessness between computation and robotics in the Latil/Lem categorization construct: quite logical indeed, having in mind that software is a state of hardware and that matter-form-action are inextricable from each other, but on this in a series of other times and posts ...) Without bond?
So, I wonder:
Where, do you reckon, is Tauchain placed on de Latil's effectors map?
Zooming out is useful. It puts the event networks of our spacetime in perspective, including what the great Jorge Luis Borges called the Orbis Tertius:
''ORBIS TERTIUS. "Tertius" (Latin = third) is an allusion to: World 3: the world of the products of the human mind, defined by Karl Popper.''
Poetically stated, ''retrodiction studies'' enable us to get a glimpse of the ''clear, cold lines of eternity''.
Back in the 20th century, Prof. Robin Hanson put together an extremely insightful and strong document:
Long-Term Growth As A Sequence of Exponential Modes,
The economy grows. [See: Footnote.] Unstoppable.
Hanson's unprecedented contribution was to provide us with a systematic orientation tool on how and why the economy grows.
It accelerates. See:
```
Mode         Doubling     Date Began      Doubles    Doubles    Transition
Grows        Time (DT)    To Dominate     of DT      of WP      CES Power
-----------  -----------  --------------  ---------  ---------  ----------
Brain size   34M yrs      550M B.C.       ?          "16"       ?
Hunters      224K yrs     2000K B.C.      7.3        8.9        ?
Farmers      909 yrs      4856 B.C.       7.9        7.6        2.4
Industry     6.3 yrs      2020 A.D.       7.2        >9.2       0.094
```
(DT = doubling time; WP = world product.)
The model identifies the past economy accelerators as:
- neural networks, evolving to double brain size every 30-ish million years (hinting that a human level of intelligence was an inevitability, give or take 30 million years around the Now, by virtue of the good old 'coin-toss' Darwinian algorithm alone);
- the human as the top-of-the-food-chain predator since around 2,000,000 B.C. (maybe the human mastering of the Fire and the Blade is to blame), compressing the doubling time by over two orders of magnitude, down to a quarter of a million years;
- food production and ecosystem manipulation (or rather the collimation of farming, horse domestication, and writing as accelerator components), leading to fewer than 40 human generations per economy doubling;
- all we know as division of labor, specialization, systematized sci-tech ... industry: the centralized ways of producing and controlling knowledge, leading to another hundreds-fold compression, down to a mere ~decade of economy doubling time.
Recommended: digest each Hanson engine (economy accelerator drive) with Bob Hettinga's 'enzyme':
My observation about networks in general is a rather obvious one when you think about it: our social structures map to our communication structures. As intuitive as it is to understand, this observation provides great insight into where the technology of computer assisted communication will take us in the years ahead.
Connectivity specs as indicator and drive.
Now, when we leave the past and use these models to gaze into the future, the really interesting stuff comes out.
Aside from explaining the overall trajectory of the economy detected by Brad DeLong in his likewise monumental paper, the nucleus of meaning in Robin Hanson's paper is:
Typically, the economy is dominated by one particular mode of economic growth, which produces a constant growth rate. While there are often economic processes which grow exponentially at a rate much faster than that of the economy as a whole, such processes almost always slow down as they become limited by the size of the total economy. Very rarely, however, a faster process reforms the economy so fundamentally that overall economic growth rates accelerate to track this new process. The economy might then be thought of as composed of an old sector and a new sector, a new sector which continues to grow at its same speed even when it comes to dominate the economy.
Visualize a Petri dish whose size, and whose sugar supply, are being expanded by the accelerating growth of the bacterial culture inside it.
Hanson actually predicted, nearly a quarter of a century ago ... something that is relentlessly coming:
In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be ... 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate. ... it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.
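The quoted figures can be re-derived from the table above (a back-of-envelope sketch under Hanson's stated assumption):

```python
# If the next transition shrinks the doubling time (DT) by as many doublings
# as one of the last three transitions did, the industry-era DT of 6.3 years
# collapses to a couple of weeks.
industry_dt_days = 6.3 * 365.25
for doubles_of_dt in (7.3, 7.9, 7.2):
    new_dt_weeks = industry_dt_days / 2 ** doubles_of_dt / 7
    print(f"{doubles_of_dt} doublings of DT -> {new_dt_weeks:.1f} weeks")
# -> roughly 2.1, 1.4 and 2.2 weeks, in line with the quoted 1.3-2.3 week range.
```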
An economy accelerator avalanche is roaring down the slope of time towards us.
A brand new Hanson Engine is about to leave the assembly line.
Tau, is that you?
FOOTNOTE: To wrap up the above statements in the flesh of the deep thesaurus of content on which they lie would conservatively consume hundreds of pages, even if only briefed. I promise to come back to these subtopic meaning expansions (referring back to here) with a series of posts in the months to come, tying up the notions of: economy as a network; the network as a computer; what exactly it processes and outputs; the economy (like the universe, or life) being an endogenously driven, positive-feedback-loop, self-amplifying, non-equilibrium, entropic, combinatorial-explosion system; wealth as economy-complexity growth in relation to GDP size, and the intimate dollars-joules connection in energy intensity; the physical and economic limits of growth; self-reinforcing predator-prey models; knowledge as synonymous with skill; economic cycles upon the DeLong curve ... to name a few. Readers' questions and comments will of course help a lot with subtopic prioritization, and will boost understanding (mine included). Thank you in advance!
NOTE: I currently have the pleasure and honor to be part of the Tau Team, but this post contains ONLY my personal views.
Hans Moravec is the patriarch of robotics. The real one, not the sci-fi father; Asimov was just the prophet in this scheme of things.
Moravec is to Kurzweil what Bitcoin is to Ethereum, and Satoshi to Vitalik.
Sorry for the rough joke. No offence, Ray! Back in the early 2000s I bought your books too.
In my humble opinion, aside from the ''reality intratextualization'' concept, the other wisdom jewel of Moravec's, the fruit of a life devoted to robotics, is Moravec's Paradox.
Explained in his own words:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
or in Steven Pinker's words:
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived...
As I noted in a previous related post of mine, a system's value dynamics is all about how it scales. Preferable, of course, are systems which make more good go around rather than less. And, respectively, come around.
Humanity is a network, and its scaling is hobbled by our innate attentional limitations.
Human social interaction is a skill, and by nature we have only so much of it.
For now, in the good old hierarchic way, we can't deny that we scale satisfactorily well (as compared, let's say, to our DNA-blockchain-fork-out first cousins the chimps) at collaborating efficiently on the successful execution of trivial tasks like empire building or colonization of the Galaxy.
But not all the problems we encounter are simple. In fact most problems are more complex than we are capable of grokking and mastering in the hierarchic collaboration mode, which quickly slams into Shannon's 'brick wall'.
Ohad Asor's Tau is intended to be a humanity upscaler. This project is the first and only one I've discovered so far where this so-obvious (once you know it) problem is even identified, stated, and addressed.
This means uplifting individual humans too, because we are literally AIs serially manufactured by our society (cf. feral children).
It feels easy for us to attend, to remember, to forget, to think, to talk, to work together - which means it is extremely Moravec-hard!
Tau is a unique approach to the Moravec-hardness of these problems, built on the realization that we do not need to waste time and resources mimicking nature, copying ourselves, and creating high-tech homunculi.
The 'problem' is the solution. Don't 'solve' it - just god damn use it!
It is the people who ask questions, upload statements, express tastes and do all that qualia crap humans usually do.
The machine distills the semantic essence of all the shared thought flow, treats it as wish specifications, and automatically converts it into executable code, including amendments to its own code.
As Moravec found out a few decades ago:
The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, suggesting that matching overall human behavior will take about 100 million MIPS of computer power.
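A quick unpacking of the numbers in that quote (my own division, nothing more; the retina figure below is simply what the quote implies):

```python
# Moravec: brain ~ 100,000x the retina => ~100 million MIPS overall.
brain_mips = 100_000_000      # Moravec's estimate for the whole brain
scale_factor = 100_000        # brain-to-retina size ratio from the quote
implied_retina_mips = brain_mips / scale_factor
print(implied_retina_mips)    # 1000.0 -> ~1,000 MIPS attributed to the retina
```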
When these processing brains are really put together in numbers, the result is unprecedented power. An unstoppable force. A glimpse of it by Ohad:
It turns out that under certain assumptions we can reach truly efficiently scaling discussions and information flow, where 10,000 people are actually 100 times more effective than 100 people, in terms of collaborative decision making and collaborative theory formation. But for this we'll need the aid of machines, and we'll also need to help them to help us.
Without applying dehumanizing individual upgrades, without needing to understand and reengineer the billions of years of evolutionary capital - just harnessing it and using it. (Scaling itself must be scalable too, eh?)
In my personal, up-to-date, limited understanding, it seems that it is indeed HUMANITY that is to be known as Tau's 'Zennet Supercomputer', and the machines are the ... collaboration-amplifying medium, the 'internet' of it. (Ohad, correct me if I'm wrong, please.)
Like laser configurations of minds.
With performance stronger than thought.
NOTE: I have the honor to be in the Tau Team, but all reflections in this post are my own personal opinions.
Retrodictive archaeology is so tempting. It is about what it was, what it is, what we knew and what we know.
Here I present another time travel glimpse of mine:
February 1998. Global Information Summit*. Japan. Robert Hettinga** - the patriarch of financial cryptography - wrote:
My realization was, if Moore's Law creates geodesic communications networks, and our social structures -- our institutions, our businesses, our governments -- all map to the way we communicate in large groups, then we are in the process of creating a geodesic society. A society in which communication between any two residents of that society, people, economic entities, pieces of software, whatever, is geodesic: literally, the straightest line across a sphere, rather than hierarchical, through a chain of command, for instance.
A network scales according to the capacity of its switches.
Mankind is a network of interlinked humans routed by ... humans.
The network topology*** of society is dictated by our incapacity to switch - similarly to the way penguin society is shaped by their inability to fly.
Running the Sorites paradox**** in reverse - humanity does not form a sand-heap by adding grains, but fractalizes into groupings of up to just a few individuals.*****
A big body of research on discussions persistently brings back the result that over a threshold of as few as 5 persons, the number of possible social interactions explosively exceeds the participants' capacity to handle the group's traffic of information.
Increase the group size and the 'c factor' - the collective intelligence - abruptly implodes. Below the individual human level. So long, 'wisdom of the crowd'.
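A hedged illustration of why the wall comes so early (standard combinatorics, my own sketch; the 5-person threshold is from the research above): even counting only pairwise channels, links grow as n(n-1)/2, and if every possible subgroup can host its own side-conversation, growth is exponential.

```python
from math import comb

# Pairwise channels in a group of n: C(n, 2) = n*(n-1)/2.
# Possible subgroups of 2+ people that could form: 2**n - n - 1.
for n in (3, 5, 8, 12, 20):
    pairs = comb(n, 2)
    subgroups = 2 ** n - n - 1
    print(f"n={n:2d}: {pairs:3d} pairwise channels, {subgroups:,} possible subgroups")
```

At n=5 there are already 10 channels and 26 possible subgroups; at n=20, over a million subgroups.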
Hierarchy is the only way we know (up to now) for a society to scale. Centralization emerges from our organic switching limitations.
It is fair to say that we have, and have always had, upscaling exosomatic prosthetics: language, writing, institutions, specialization... but at the end of the day, even within these boosters, the social switching is bottlenecked down to just a few humans.
That changed only recently because, you know ... computers. Humans are not only lousy switches, but also tremendously expensive ones to make. Computers are the opposite: their performance/cost ratio relentlessly big-bangs.
Moore's law****** is not only about silicon wafers. It is a megatrend from the very dawn of the universe, as Kurzweil noticed******* a long time ago, which goes up and up across all computronium substrata imaginable or possible.
Non-human computation and automated communication promises to break the social scaling barrier.
Here comes Ohad Asor's Tau.********
The only project I know of which asks the correct questions and looks into doable solutions to humanity's scaling. And the only meaningful identification and treatment of these problems which seems to lead towards fulfilling Bob Hettinga's geodesic visions from a few decades ago.
Of course I do not know it all, but let's say that I intensively search the relevant space.
Tau transcends the human switching limitations in a humane way. Without amalgamating individuals out of existence, which some other discussed approaches - like direct neural interfacing - seem to inevitably imply. For society is ... human beings.
What's the pragmatics of geodesic vs hierarchic?
At what game, really, do the 'flat' p2p networks beat the vertical social configurations?
It is an easy answer. It is pure physics:
A Tauful geodesic society comprises an IMMENSELY richer economy.
Metcalfe's (and Szabo's) law at max!
The combinatorial size of it vastly exceeds the possible arrangements of any traditional social 'pyramid'.
The maximum social diameter becomes ~1.
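To put some rough numbers behind 'immensely richer' (a simplified model of my own: Metcalfe counts the potential pairwise links of a flat network, while a strict hierarchy permits links only along the reporting lines of a tree):

```python
# Potential communication channels: geodesic (flat) network vs hierarchy.
def flat_links(n: int) -> int:
    return n * (n - 1) // 2       # Metcalfe-style: every pair may connect

def tree_links(n: int) -> int:
    return n - 1                  # a tree: every node has one upward link

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: hierarchy {tree_links(n):>9,} links, "
          f"geodesic {flat_links(n):>17,} links")
```

For a million participants, the hierarchy offers under a million channels; the geodesic network, about half a trillion.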
In fact, it seems quite an ancient archetypal vision, the whole thing:
“Imagine a multidimensional spider’s web in the early morning covered with dew drops. And every dew drop contains the reflection of all the other dew drops. And, in each reflected dew drop, the reflections of all the other dew drops in that reflection. And so ad infinitum.” Alan Watts*********
*- http://www.nikkei.co.jp/summit/98summit/english/online/emlasia3.html (the second entry)
**- http://nakamotoinstitute.org/the-geodesic-market/
***- https://en.wikipedia.org/wiki/Network_topology
****- https://en.wikipedia.org/wiki/Sorites_paradox
*****- https://sheilamargolis.com/2011/01/24/what-is-the-optimal-group-size-for-decision-making/
*********- https://en.wikipedia.org/wiki/Indra%27s_net
NOTE: I'm in the Tau Team, but this post expresses only my own associations and interpretations.
The Power of Tau - Scaling the Creation of Knowledge. By Trafalgar. Posted on Steemit. December 31, 2017.
Ohad Asor, creator of Tau Chain/Agoras, has recently published the long awaited blog post detailing his vision for what very likely is the most ambitious project in the crypto space: Tau.
Tau will accelerate human endeavors by overcoming long ingrained limitations in our collaborative processes; limitations which we rarely even question.
The Problem of Social Governance
Take social governance, for example. As individuals, we have opinions over a wide variety of social issues. Perhaps you feel that the speed limit on certain roads is too high, or that programming should be a compulsory subject at public schools, or that everyone would benefit if cryptocurrencies were officially recognized and endorsed by the state.
However, you have no idea how to get these concerns across to the general public. I mean, you could try writing a letter to your local representative or signing a petition, but ultimately that's unlikely to gain much traction. Meanwhile, the very same issues that seem to have divided the nation over the past decade remain at the forefront of our political debate. Immigration, climate change, abortion, gun control etc. are all important issues of course, but very little progress has been made considering the amount of time, resources and attention that have been devoted to them.
So the problem with traditional forms of social governance, such as democratic voting, is apparent: on the one hand, it has difficulty identifying and addressing the wide range of opinions different people hold; on the other hand, even with respect to the small number of issues that do end up bubbling up to the surface, it isn't particularly efficient at detecting consensus.
The central cause of this problem is that current modes of discussion are not scalable. There are inherent limitations in the way we're able to communicate our views to each other; namely, the human ability to comprehend and organize information is the main bottleneck. We cannot possibly follow multiple conversations at once, or recall everyone's propositions once there are more than a handful of people in the mix. This is why most collaborative decision-making bodies in practice are generally quite small in number: the President's cabinet, Supreme Court Justices, the boardroom directors of a Fortune 100 company etc.; you just can't have a productive discussion with 50 people. Our entire civilization is structured around this very limitation: discussions don't scale.
Scaling Collaborative Discussions Under Tau
Imagine if we could overcome this limitation; what would it mean for social governance? By using a self-defining, decidable logic, the Tau network is easily able to keep track of every user's propositions and detect consensus automatically. Note that making a proposition is exactly the same as voting for that very same proposition: when you propose 'dogs should always be on a leash in public unless in a park', you are in effect casting a vote for that proposition. This way, countless issues, regardless of how technical or niche, can be assessed through the network concurrently, and social consensus can be detected on the fly. The Tau network can scale social governance by overcoming one of the greatest limitations in human communication of ideas: it delegates the task of logically making sense of everybody's propositions to the computer. A simple use case of this will be the rules of the Tau network itself: through a self-defining logic, Tau is able to detect consensus among its users from block to block, altering its own rules to conform to the choices of the user base.
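As a loose illustration of the idea (a toy of my own, not Tau's actual logic or data structures, which rest on a decidable, self-defining language), here is how 'a proposition is its own vote' and mechanical consensus detection might look:

```python
from collections import defaultdict

# Toy consensus detector: each proposition doubles as a vote for itself.
# Tau would compare propositions semantically in a decidable logic;
# this sketch naively matches them as strings.
votes = defaultdict(set)

def propose(user, proposition):
    votes[proposition].add(user)

def consensus(users):
    """Return the propositions that every given user has asserted."""
    return [p for p, voters in votes.items() if users <= voters]

propose("alice", "dogs should be on a leash in public unless in a park")
propose("bob",   "dogs should be on a leash in public unless in a park")
propose("bob",   "programming should be compulsory at public schools")
print(consensus({"alice", "bob"}))  # -> only the leash proposition
```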
The benefits of scaling discussions are not limited to just a more efficient form of social governance. Logic isn't merely about detecting surface-level consensus; the network can easily form further deductions from everyone's propositions. If one states 'all men are mortal' and 'Socrates is a man', one can deduce that 'Socrates is mortal.' But deductions can be very deep and non-trivial. Imagine if we had a group of 1000 mathematicians all inputting their mathematical insights as propositions. Tau can rapidly detect who agrees with whom on what, and deduce every logical consequence of their combined wisdom; in effect arriving at new truths and insights. In other words, Tau greatly accelerates the production of new knowledge. This will, of course, also work if you have physicists, doctors, engineers, computer scientists, indeed experts in every field working together on the platform. By scaling collaborative discussions in a logical network, Tau is able to scale the creation of knowledge.
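And a similarly hedged toy for the deduction step, using the post's own syllogism (a minimal forward chainer over 'all A are B' rules; real inference over combined expert knowledge would be far richer):

```python
# Minimal forward chaining: derive new facts until nothing changes.
facts = {("man", "socrates")}        # 'Socrates is a man'
rules = [("man", "mortal")]          # 'all men are mortal'

changed = True
while changed:
    changed = False
    for a, b in rules:
        for pred, subj in list(facts):
            if pred == a and (b, subj) not in facts:
                facts.add((b, subj))     # derives ('mortal', 'socrates')
                changed = True

print(sorted(facts))  # [('man', 'socrates'), ('mortal', 'socrates')]
```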
When Tau comes into effect, any company, government, and indeed any organization not using this new network will be rendered obsolete. Tau aims to become an indispensable technology.
And this is only the alpha of Tau.
I will talk about the beta in a future post. The beta will revolve around not just the scaling of discussions and consensus, but the automation and execution of code based on the results of those discussions. For more information on code synthesis and more, please read Ohad's blog. Also, do check out my introduction to Tau here if you missed it.
You can invest in Tau through buying Agoras tokens on Bittrex.
I am not affiliated or paid by the project. These represent my own subjective views. Tau/Agoras is the only other crypto project apart from Steem in which I see an extraordinary future, and I am merely sharing that with fellow Steemians here.
Ohad Asor's New Tau Blog
IRC Chat: Where you may ask Ohad himself technical questions
Tau Chinese QQ Group: 203884141
Tau Chain – Code or Money (translation). By Ohad Asor. Post translated by Virgilio Leonardo Ruilova. July 14, 2016.
Code or payment: which should come first?
Posted Jul 21, 2015, 7:18 PM by Ohad Asor [Updated Jul 21, 2015, 8:02 PM]
Suppose, hypothetically, that Lisa is a programmer and Bart a businessman.
Bart would like Lisa to develop software for his company. Since they do not know each other, a natural question arises: how can they trust one another? Bart would like to pay once the software is finished and working correctly, while Lisa would like to be paid for her work in advance, to avoid the risk of working without receiving any remuneration. It would be fair to reach some kind of intermediate arrangement that leaves both of them happy; for example, by defining work milestones. However, this does not fully solve the trust issue; it only reduces it somewhat.
How can this problem be solved with decentralized applications?
Thanks to the blockchain algorithm, the trust problem is more or less solved in most cases. With Bitcoin, for example, ownership of the currency can be proven by means of a cryptographic signature, which allows that money to be spent. Could this method help Bart and Lisa?
The blockchain can help only when one vitally important ingredient is in place. It is, on the small scale, a property of the blockchain itself and, on the large scale, a property of the source code. From this arises a much simpler question: when Bart receives the source code, how can he know whether it works properly?
Software verification, together with testing of its correct behavior and quality assurance, is fundamental throughout the life of the software. None of this is foolproof: we routinely face software with programming errors; such instability shows up as behavior that deviates from what one would expect of the software, security breaches, and malfunctions. Producing quality software is a slow, expensive process, subject to guesswork, and with difficult convergence rates.
Computer scientists have long known that this situation stems from the logical nature of programming languages. It would be fantastic if we could formally express the specifications of the software we need, and if the computer could tell us whether a given source code meets those requirements. In fact, this can become a reality if we select a subset of programming languages called Totally Functional Programming Languages (TFPL).
If we consider TFPL code, the provable statements about that kind of code are exactly the true statements; therefore, it is always possible to supply, together with the source code, a proof that it meets given software specifications. This is not the case for fully Turing-complete languages, as mentioned in my previous post.
So Lisa could develop the program in a TFPL, and Bart could verify that it meets the specifications he requested. In many cases, Bart would have to hire a programmer to write those formal specifications, but of course, expressing the specifications is a much easier job than meeting them.
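To make the division of labor concrete, here is a hedged toy (my own illustration; in a real TFPL Lisa would ship a machine-checkable proof valid for all inputs, whereas this sketch merely checks the spec exhaustively over a small bounded domain):

```python
from itertools import product

def spec(xs, out):
    """Bart's formal spec: out is ordered and is a permutation of xs."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(xs) == sorted(out)
    return ordered and permutation

def lisas_sort(xs):
    """Lisa's implementation (insertion sort)."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Bounded exhaustive check over all small inputs: a stand-in for a proof.
ok = all(spec(list(xs), lisas_sort(list(xs)))
         for n in range(5) for xs in product(range(3), repeat=n))
print("spec satisfied on bounded domain:", ok)  # True
```

Note how much shorter the spec is than the implementation: expressing what you want really is easier than achieving it.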
Now that we have solved the verification problem, how do we solve the payment trust problem posed at the beginning of the text?
For the code-versus-payment problem, which is akin to the chicken-and-egg dilemma, Tau-Chain will soon provide a solution.
Tau is a TFPL; that is, it gives TFPL meaning to existing languages (the RDF family of languages, noted for their high human readability). The Tau client serves as a theorem prover and verifier, so it can check assertions about a given piece of Tau-language code. It also works as a blockchain node; the proofs can simply be proofs in the mathematical sense, up to Tau's type system, which covers all finite Turing machines; that is, practically every kind of computer.
So Bart could lock money in a multisig address together with Lisa (the Bitcoin equivalent of a shared bank account requiring both signatures), and specify in Tau that the money will be handed over to Lisa only if she presents a proof that her code meets all the requested specifications. Those proofs will be verified by the whole network, so to speak, or by miners; the specific form again depends on the rules set by the user and on how that particular currency is built on top of Tau. Lisa will thus be securely rewarded with the payment she deserves for her work.
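A hedged sketch of that escrow flow (plain Python, not actual Tau or Bitcoin script; `verify_proof` is a hypothetical stand-in for the network-side proof checking):

```python
from dataclasses import dataclass

@dataclass
class Escrow:
    payer: str
    payee: str
    amount: int
    released: bool = False

    def submit(self, code, proof, spec):
        # Funds release only if the proof that the code meets the
        # spec checks out; otherwise the money stays locked.
        if verify_proof(code, proof, spec):
            self.released = True
        return self.released

def verify_proof(code, proof, spec):
    # Placeholder: a Tau node would verify a mathematical proof here.
    return proof == f"proof-that-{code}-meets-{spec}"

escrow = Escrow(payer="Bart", payee="Lisa", amount=100)
print(escrow.submit("sorter", "proof-that-sorter-meets-sorted-output",
                    "sorted-output"))   # True -> payment released to Lisa
```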
Note that Bart can also be sure that Lisa has not hidden any surprises in the code, since he can automatically verify that the code does not perform undesired actions such as accessing confidential information. We are therefore looking at a new era in the world of computing, where software can be trusted and people can trust one another across a wide range of transactions.
Hard to believe?
We have already finished a very powerful verification engine, which has been put through a great deal of testing. It is not yet in optimal condition, but we keep working to get it there. It needs further development to go beyond an RDF reasoning system and become a full Tau node. We need to finish the constrained type systems, complete the blockchain and DHT integration, and build the genesis block; once all of that is done, voilà!
Author: Ohad Asor
Translation: Virgilio Leonardo Ruilova
Editing and style: Marcela Reyes.
Translation license: Creative Commons – Attribution – ShareAlike
Original license: Copyright, by Ohad Asor
Source: Original post written by Ohad Asor and translated into Spanish by Virgilio Leonardo Ruilova: Tauchain – Código o pago, ¿Qué acción debe realizarse primero?
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.