If Money = Memory, if Society = a Super Computer, if Computation is in Physical Systems, what is a Decentralized Operating System? By Dana Edwards. Posted on Steemit. October 24, 2018.
These concepts are not often discussed, so let's have the discussion from the beginning. The first concept to think about is pancomputationalism, or put another way, the idea that computers exist everywhere in our environment. We can look at physical systems, living and non-living, and see computations taking place all around us. If you look at rocks and trees you can see memory storage. If you look at DNA you can see code, and if you look at viruses you can see microscopic programmers adding new code to DNA. Even the weather, such as a hurricane, is computing.
If you look at nature you see algorithms. You will also see learners in nature (yes, in the same sense as in AI); the process is basically the same for all learning. Consider that everything physical is also digital. Consider that the universe is merely information patterns.
If we look at society we can also think of it as a computer. What does society compute, though? One way people talk about a society is as a complex adaptive system, but this is also how people might talk about the human body. The human body computes with the purpose of maintaining homeostasis: to persist through time and to reproduce copies of itself. The human brain computes to promote the survival of the human body. Just as viruses pass code into our DNA, the human brain is infected with mind viruses called memes. Memes are pieces of information which can physically alter how the brain works.
The mind isn't limited to the brain. The mind is all the resources the brain can leverage to compute. In other words, a person has a brain to compute with, but the invention of language allowed a person to compute not just with their own brain but with the environment itself. To draw on a cave wall is to use the cave to extend the memory of the brain. To use mathematics is to use language to extend the brain's ability to compute by relying on external storage and symbol manipulation. To use a computer with a programming language is essentially to use mathematics, only instead of writing on the cave wall we are writing in 1s and 0s. The mind exists to augment the brain in a constant feedback loop where the brain relies on the mind to improve itself and adapt. If there were no external reality, the brain would have no way to evolve and improve itself.
A society, in the strictly human sense of the word, is an aggregation of minds: at minimum, all the human minds in that society. As technology improves, mind capacity increases, because each human can remember more, can access more computational resources, and can in essence use technology to continuously improve their mind and then leverage the improved mind to improve their brain. The Internet is the pinnacle of this kind of progress, but it's obviously not good enough. While the Internet allows for the creation of a global mind by connecting people, things, and minds, it does nothing to actually improve the feedback loop between the mind and the brain, nor does it offer all that could be offered.
Bitcoin came into the picture, and perhaps we can think of it as a better memory: a decentralized memory in which, essentially, you can have money. The problem is that money is a very narrow application. It is a start, just as learning to write on the cave wall was a start, but in my opinion it's not ambitious enough.
Humans in the current blockchain or crypto community do not have many ways to exchange human computation. Human computation is just as valuable as non-biological machine computation, because there are some kinds of computations which humans can do quite easily and which non-biological machines still cannot do as well. Translation, for example, is something non-biological machines have a difficult time with but human beings do well. This means a market can form where humans sell their computation to translate things. If we look at Amazon Mechanical Turk we can see many tasks which humans can do and computer AI cannot yet do, such as labeling and classifying data. For things to go to the next level we will need markets which allow humans to contribute human computation and/or human knowledge in exchange for crypto tokens.
The concept of a decentralized operating system is interesting. First, if there is such a thing as social computation (collaborative filtering, subjective ranking, Waze, etc.), then what about the new paradigm of social dispersed computing?
The question becomes: what do we want to do with this computing power? Will we use it to extend life? To spread life into the cosmos? To become wise? To become moral? To become rational? If we want to focus on these kinds of concerns then we definitely need something more than Bitcoin, Ethereum, or even EOS. While EOS does seem to be pursuing the strategy of a decentralized operating system, which seems to be the correct course, it does not get everything right.
One problem is, as I mentioned before, the importance of the feedback loops between minds and brains. The reason I keep returning to the concept of the external or extended mind is the fact that it is the mind which creates the immune system protecting the brain from harmful memes. The brain keeps the body alive. The brain is not really capable of rationality, morality, or logic on its own; it relies on the mind to achieve these. The mind is essentially all the computational resources the brain can leverage.
EOS has a problem in the sense that it doesn't seem to improve the user. The user can connect, join, earn, sell, and participate, but unless the user can become wiser, more rational, and more moral, EOS has limits. EOS does have Everipedia, which is quite interesting, but there are still problems. What can EOS do to improve the people in society, and thus improve society itself, if society is a computer in need of an upgrade?
Well, if society is a computer, what does it compute? What should it compute? I don't even know how to answer those questions. I can suggest that if computation is a commodity, along with data, then whichever decentralized operating systems exist will compete for these commodities. The total brain power of a society is just as important as its amount of connectivity. And the mind of a society is its most important part, because it is what allows the society to become better over time, allows the people in it to thrive, and allows its life forms to continue to evolve and avoid extinction.
A decentralized operating system, on a technical level, would have a kernel or something similar to it. This is the resource management component. For example, Aragon promises to offer a decentralized OS and it too mentions having a kernel. A true decentralized operating system has to go further and requires autonomous agents. Autonomous agents which can act on behalf of their owners are, philosophically speaking, the extended mind. But the resources of a society are still finite and have to be managed, so a kernel would provide the ability to manage resources.
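To make the kernel idea concrete, here is a minimal sketch in Python of what a resource-managing kernel for such a system might look like. Everything here (the class, the token-style accounting) is a hypothetical illustration of resource management under finite capacity, not the actual design of Tauchain, Aragon, or EOS:

```python
# Hypothetical sketch of a kernel metering finite resources for agents.
# Apps/agents must request resources; the kernel enforces society's limits.
from dataclasses import dataclass, field

@dataclass
class Kernel:
    capacity: dict                      # e.g. {"cpu": 100, "storage": 50}
    allocations: dict = field(default_factory=dict)

    def request(self, agent_id: str, resource: str, amount: int) -> bool:
        used = sum(a.get(resource, 0) for a in self.allocations.values())
        if used + amount > self.capacity.get(resource, 0):
            return False                # denied: the resource pool is finite
        agent = self.allocations.setdefault(agent_id, {})
        agent[resource] = agent.get(resource, 0) + amount
        return True

kernel = Kernel(capacity={"cpu": 100})
print(kernel.request("agent-1", "cpu", 60))   # True
print(kernel.request("agent-2", "cpu", 60))   # False - over capacity
```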
The total computational ability of a society is likely a massive amount of resources, a lot more than just a bunch of CPUs connected together. Every member of the society who can compute could participate in a computation market. Of course, as we are beginning to see now, regulators seem concerned about certain kinds of social computations, such as prediction markets. So it is unknown how truly decentralized operating systems would be handled, but my guess is that if designed right they could be pro-social, capable of producing augmented morality by leveraging mass computation, and, by leveraging human computation, able to be compliant. To be compliant is simply to understand the local laws, and these can be programmed into the autonomous agents if people think it is necessary.
What is more important is that if a law is clearly bad, and people have enhanced minds, then it will be very clear why the law is bad. This clarity will help people dispute and seek to change bad laws through the appropriate channels. If there is more wisdom, thanks to insights from big data, from data scientists, and so on, then proposals for law changes can be much wiser and more intelligent. This is something people in the Tauchain community have specifically realized: that technology can be used to improve policy making.
A lot is still unknown so these writings do not provide clear answers. Consider this just a stream of consciousness about concepts I am deeply contemplating. This is also a way to interpret different technologies.
Truth vs Consensus
Truth can be thought of either as something we can prove by experiment, or as the result of a consensus. A scientific fact is arrived at through the process of scientific experimentation. A mathematical fact is discovered by finding a proof. Consensus is discovered by analyzing sentiment (or by voting) to determine what the majority currently believes about a subject at a point in time. The truth of the scientists might not match the popular consensus of the time. The mathematical proof might say one thing, yet a majority of people might agree to disagree with the math. We have seen this happen in the past, and this blog post is a discussion of that topic. For Tauchain in particular we have the question: what is the truth, and which matters more? Do we care more about truth or about consensus?
Tauchain offers helpers in the form of reasoners and logic to improve the quality of discussion. These helpers will not necessarily work unless people agree to accept the results they generate. In addition, the biases people inherently hold could influence what they discuss in the first place, which could create a consensus but not necessarily an improvement.
Consensus as Truth
According to the "truth by consensus" paradigm the truth is produced by consensus gentium. Consensus gentium means agreement of the people. In my previous post I discussed exactly this topic: Consensus Morality and Tauchain | Consensus Gentium. To be specific we can think of consensus gentium to mean: "the truth is what everyone currently believes". In this model of truth we can only get the truth by finding out what everyone believes but how do we determine what people believe? It is a challenge to find a way to determine what people actually believe in a blockchain context. One method of attempting this is called Futarchy which provides an economic reward and an economic cost for having correct or incorrect beliefs. In essence under Futarchy the people must bet on their beliefs rather than just vote. Under Futarchy prediction markets are used to apply market elements to produce a market consensus truth.
Consensus gentium in an environment of persecution or coercion can result in widely held "beliefs" which are enforced into existence, such as the belief in geocentrism. Victims of this kind of persecution include Galileo, who was forced to recant his beliefs or face the Inquisition. The ancient Greek philosopher Anaximander proposed that the universe revolved around the Earth, and the idea caught on. Once the idea caught on it became gospel truth, and over time it became blasphemous to dispute it. We continue to see this happen even now in the cryptospace, for example with the beliefs that "code is law" or that "blockchains must be immutable"; these too are beliefs based on a particular set of values which their holders hold dear.
Consensus as a regulative ideal
A descriptive theory is one that tells how things are, while a normative theory tells how things ought to be. Expressed in practical terms, a normative theory, more properly called a policy, tells agents how they ought to act. A policy can be an absolute imperative, telling agents how they ought to act in any case, or it can be a contingent directive, telling agents how they ought to act if they want to achieve a particular goal. A policy is frequently stated in the form of a piece of advice called a heuristic, a maxim, a norm, a rule, a slogan, and so on. Other names for a policy are a recommendation and a regulative principle.
In this case we have a distinction between the way things are and the way things ought to be. Policies can be directed to shape the way things ought to be.
The problem with consensus as truth | argumentum ad populum
If consensus equals truth, then truth can be manufactured by forcing or organizing a consensus, rather than being discovered through experiment or observation, or existing independently of consensus. The principles of mathematics also do not hold under consensus truth, because mathematical propositions build on each other. If the consensus declared that 2+2=5, it would render the practice of mathematics, in which 2+2=4, impossible.
A big problem is coercion. Another big problem is that popular opinion can in fact lead to very bad outcomes. If something is true at a point in time merely because a lot of people believe it, then we are basing our decisions merely on what a lot of people believe. This can result in decisions which satisfy what is popular yet are unwise. A lot of people passionately believe a lot of crazy, wrong things. The question of truth is about what is true even if not many people believe it. Geocentrism turned out to be false even though a lot of people believed it at some point in time. On the other hand, the laws of physics appear to have held for 13 billion years, including during times when a lot of people didn't believe in them.
The State, or the ruling government, has the special role of taking care of the people; however, what distinguishes the Chinese ruling government from other ruling governments is the respectful attitude of the citizens, who regard the government as part of their family. In fact, the ruling government is "the head of the family, the patriarch." Therefore, the Chinese look to the government for guidance as if they are listening to their father who, according to Chinese tradition, enjoys high reverence from the rest of the family. Furthermore, "still another tradition that supports state control of music is the Chinese expectation of a verbal 'message.'" A "verbal message" is the underlying meaning behind people's words. In order to get to the "verbal message," one needs to read into words and ask oneself what the desired or expected response would be.
Tauchain: The Social Dispersed Computer introduced as a Social Network? By Dana Edwards. Posted on Steemit. October 12, 2018.
How might a Tau Operating System, running on a Tau Social Dispersed Computer, actually function?
We know from tauchain.org that the first iteration of Tau is to be a discussion platform not too dissimilar from Facebook. Of course this would simply be the front end, the "face", of what could behind the scenes evolve toward a social dispersed computer complete with a dispersed operating system. The resources have to be managed, and a kernel could provide for this in a manner not dissimilar to what we see with EOS. The Agoras (AGRS) token specifically represents "resources", as it is the tokenization of resources for whichever applications Tauchain will run.
TML provides the basis from which to create the necessary languages to produce a dispersed operating system. Zennet even has an algorithm, which Ohad himself worked on, for calculating resource requirements. All minds will (at least in theory) be able to contribute computational resources to Tauchain.
Because of Zennet there may in fact be no limit to the amount of computational resources we could throw at the supercomputer. It will of course depend on resource management, which is where a kernel likely comes into play, because any smart apps built to run on Tau will have to ask for resources. Resource management is one of the core functions of a kernel and of an operating system, which is why I think Tauchain will likely have one. I think the Ethereum route shows the problems with scaling when applications have to compete for resources in a way the network cannot self-manage. CryptoKitties, for example, can lag the whole Ethereum network, and if this is a computer then a nonsense app could disrupt more critical apps.
A prime example of a potential smart app for Tauchain
An example (which may or may not be feasible) is a health and fitness app. In theory the app could allow any user to provide data such as genetic information, blood test results, exercise tracking, blood pressure, blood sugar, and anything else. All of this could provide a feedback loop back to the patient on how to improve their health over time based on the knowledge in Tau. As technology improves, users could add more devices to provide more data for a better feedback loop, FPGAs could be added to meet the demand for calculations, and storage could be rented as well.
An operating system could give priority to this kind of app by load balancing resources. How would it know to do this? Tau could learn the moral and legal ramifications, and a consensus could emerge that health-related apps deserve premium access to resources because they can save lives.
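As a rough sketch of what such consensus-driven prioritization could look like, here is a toy weighted scheduler in Python. The categories and weights are invented for illustration; a real system would presumably derive them from consensus rather than hard-coding them:

```python
# Toy priority scheduler: requests from higher-priority categories are served first.
import heapq

PRIORITY = {"health": 0, "finance": 1, "games": 2}  # lower number = served first

def schedule(requests):
    """requests: list of (app, category, units); returns the serving order."""
    queue = [(PRIORITY[category], i, app, units)
             for i, (app, category, units) in enumerate(requests)]
    heapq.heapify(queue)
    order = []
    while queue:
        _, _, app, units = heapq.heappop(queue)
        order.append((app, units))
    return order

print(schedule([("cryptokitties", "games", 40), ("heartmonitor", "health", 10)]))
# [('heartmonitor', 10), ('cryptokitties', 40)]
```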
Tauchain Update: Significant code changes in Github and discussion of progress. By Dana Edwards. Posted on Steemit. September 30, 2018.
Just several hours ago, lead developer and founder of the Tauchain project Ohad Asor released his most significant code update yet. This blog post will discuss some of those updates and put them into context. In order to make sense of the current codebase, I will also discuss a bit about the makeup of the code.
The significant breakthrough - Ohad implements the BDD
First, some might be wondering: what is a BDD? A BDD is a data structure called a binary decision diagram. This data structure is, in my opinion, as significant to Tauchain as the "blockchain" data structure was to Bitcoin. For those who do not have a computer science degree, I will elaborate below on what exactly a data structure is before discussing what a BDD is and why it is so significant.
Brief discussion on what a data structure is
In programming, a data structure is a concept which represents a method of organizing data. For example, a blockchain is all about how records are stored as linked blocks. There are other data structures which represent decentralized data management and storage, for instance the distributed hash table.
[Figure: visualization of a blockchain data structure. By Matthäus Wander, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), from Wikimedia Commons]
[Figure: visualization of a hash table. By Jorge Stolfi, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html), from Wikimedia Commons]
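To make the idea concrete, here is a bare-bones Python sketch of the blockchain data structure: records stored as blocks, each linked to its predecessor by a hash. This is only an illustration of the organizing principle, not Bitcoin's actual block format:

```python
# Minimal blockchain-as-data-structure: each block stores data plus the hash
# of the previous block, so past records cannot be silently edited.
import hashlib

def make_block(data: str, prev_hash: str) -> dict:
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256((data + prev_hash).encode()).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
block1 = make_block("alice pays bob", genesis["hash"])
print(block1["prev_hash"] == genesis["hash"])  # True - the blocks form a chain
```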
Really good programmers choose the appropriate data structure to meet the requirements of the project. The BDD was chosen by Ohad specifically because it provides efficiency boosts in a key area necessary for Tauchain to function as intended. Specifically, we know Tauchain requires partial fixed point logic in order to have decidability in PSPACE. We also know Tauchain requires decentralization and efficiency. Efficiency is best understood in terms of the trade-off between time and space: we do not have unlimited time or space, so we must sacrifice one in order to get more of the other.
When we look at the codebase, we know that Ohad can optimize the code either by sacrificing space, in which case the executable will be bigger but the code runs faster, or by sacrificing time, in which case the executable is smaller to save memory but might run slightly slower. This highlights the essential trade-off between time and space when optimizing code, and of course there is more to it, because algorithms within a codebase have to make similar trade-offs.
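A classic small-scale instance of the same trade-off is memoization: spending memory on a cache to save recomputation time. A minimal Python illustration (my own example, not taken from the Tauchain codebase):

```python
# Trading space for time: memoize results so repeated work is avoided.
from functools import lru_cache

@lru_cache(maxsize=None)   # the cache costs memory...
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # ...but this now finishes instantly instead of taking ages
```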
Now what exactly is a BDD (binary decision diagram)?
Now that we understand the basics of efficiency and what a data structure is, we can make a bit more sense of what a BDD is. In order to understand why the BDD as a data structure is so important to Tauchain, we have to remember that Tauchain is about logic. We can take the most basic example, that of Socrates:
A predicate takes an entity or entities in the domain of discourse as input while outputs are either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q. The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and is instantiated as "Plato" in the second sentence. While first-order logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not.
Based on the rules of first-order logic we can have our inputs and receive our outputs. In the most basic example above we can see a bit of how logic works. To elaborate further:
Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a, and on the interpretations of the predicates "is a philosopher" and "is a scholar".
A truth table has one column for each input variable (for example, P and Q), and one final column showing all of the possible results of the logical operation that the table represents (for example, P XOR Q). Each row of the truth table contains one possible configuration of the input variables (for instance, P=true Q=false), and the result of the operation for those values. See the examples below for further clarification. Ludwig Wittgenstein is often credited with inventing the truth table in his Tractatus Logico-Philosophicus, though it appeared at least a year earlier in a paper on propositional logic by Emil Leon Post.
When we are dealing with logic we may find that a truth table helps with visualization.
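For example, the truth table for P XOR Q can be generated in a few lines of Python (my own illustration):

```python
# Print the truth table for P XOR Q.
from itertools import product

print("P      Q      P XOR Q")
for p, q in product([True, False], repeat=2):
    print(f"{str(p):<6} {str(q):<6} {p != q}")
```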
Now with this knowledge we have the most basic Socrates example:
This can be represented via a truth table, and the argument is called a syllogism. To solve it we simply apply a kind of reasoning called deductive reasoning: if "All men are mortal" is true, and "Socrates is a man" is also true, then "Socrates is mortal" must be true. If we were to say all men are mortal but Socrates is immortal, then Socrates cannot be a man. So if Socrates is a man he must be mortal, or else we have what we call a contradiction. Logic is all about avoiding these sorts of contradictions, and binary or boolean logic in particular is about reaching a conclusion which must always be one of two possible values.
If I ask you to play a game which we can guarantee will end in one of exactly two possible outcomes, then we have a good example of a boolean function: 1 or 0, true or false, on or off, a or b.
Some of you may be familiar with the data structure we call a DAG (directed acyclic graph). If you understand that concept, you can visualize a BDD as being very similar to a propositional DAG.
[Figure: a directed acyclic graph. By David Eppstein, CC0, from Wikimedia Commons]
We know that a DAG has a finite number of vertices and edges. We may also be able to visualize topological ordering, and if you remember my post on transitive closure you might also remember the visuals showing how that can work.
A binary decision diagram can represent a truth table:
[Figure: a binary decision diagram representing a truth table. Originally uploaded by IMeowbot at English Wikipedia, GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0/), via Wikimedia Commons]
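In code, the idea is tiny (a toy sketch of mine, not Tauchain's implementation): an internal BDD node tests one variable, the leaves are True/False, and evaluating an assignment means walking the diagram:

```python
# Toy BDD: internal nodes test one variable; shared subgraphs make it a DAG.
# This diagram represents the function (x AND y).
class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high  # low = branch when var is False

def evaluate(node, assignment):
    while isinstance(node, Node):
        node = node.high if assignment[node.var] else node.low
    return node  # a boolean leaf

bdd = Node("x", False, Node("y", False, True))
print(evaluate(bdd, {"x": True, "y": True}))   # True
print(evaluate(bdd, {"x": True, "y": False}))  # False
```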
From these visuals it should now be clear why this is critical to the functioning of Tauchain. The BDD data structure also allows for efficient model checking. To understand why, we have to consider the boolean satisfiability (SAT) problem.
This highlights the fact that BDDs can be used to create a SAT solver.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms.
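For the curious, here is a bare-bones backtracking solver in the DPLL spirit, in Python. It is a teaching sketch (it omits unit propagation and pure-literal elimination), not production code and not anything from the Tauchain repository:

```python
# CNF formulas as lists of clauses; a clause is a list of (variable, polarity)
# literals. Returns a satisfying assignment, or None if unsatisfiable.
def solve(clauses, assignment=None):
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        remaining, satisfied = [], False
        for var, polarity in clause:
            if var in assignment:
                if assignment[var] == polarity:
                    satisfied = True    # clause already true under assignment
                    break
            else:
                remaining.append((var, polarity))
        if satisfied:
            continue
        if not remaining:
            return None                 # empty clause: conflict, backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment               # every clause satisfied
    var = simplified[0][0][0]           # branch on an unassigned variable
    for value in (True, False):
        result = solve(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x OR y) AND (NOT x OR y) is satisfiable, e.g. with y = True:
print(solve([[("x", True), ("y", True)], [("x", False), ("y", True)]]))
```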
Without getting overwhelmed by technical details the key points are below:
To read the code for yourself and track the progress of Tauchain development take a look at Github:
This topic has been loaded in the barrel since - as I see in my draft records - April 2018. It is my free associations on the major topic of the so-called ''tragedy of the commons'', refracted through the prism of the things I had to pass through with Tau in mind. Over the months it replicated itself into numerous subtopics and threatens to grow into several general theories, so I decided I'd better unleash it into the wild and handle it with your help, and if necessary tame and domesticate it and its progeny by the coming power of Tau.
The problem of the 'tragedy of the commons' as a symptom of the more general theme of ownership.
I think I kinda nailed it. It seems this approach brings serious inference power, i.e. via it most of what we know can be derived. Of course it lacks mathematical/logical rigor, but even at such a haiku level of expression it seems to work.
Yes, there is such a word. In linguistics.
Per se, ''clusivity'' is a modulus of inclusion and/or exclusion.
Absolute value in maths denotes 'distance' from zero, regardless of direction, which seems to translate well for depicting the spectrum between 'included' and 'excluded', if we imagine that excluded = -1 is the opposite of included = 1, and zero measures a state of equal clusion. The other, more intuitive and easier-to-grasp way would be the fuzzy-logic one of fractional values from zero to one, where zero is no clusivity and one is full clusivity. Let's say we take one of the possible 'directions', so 0 = complete exclusion, 1 = complete inclusion, with multi-values in between.
Of course, for purely physical reasons, 0 and 1 are asymptotic values: ever to approach, never to reach. And for purely physical, finitist reasons the clusivity fuzzy spectrum is quantized, not smoothly continuous.
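A minimal sketch of that mapping in Python, purely my own formalization of the two scales described above:

```python
# Two equivalent clusivity scales: [-1, 1] (excluded..included) and fuzzy [0, 1].
def to_unit(clusivity: float) -> float:
    """Map clusivity from the [-1, 1] scale to the fuzzy [0, 1] scale."""
    return (clusivity + 1) / 2

print(to_unit(-1.0))  # 0.0 - complete exclusion
print(to_unit(0.0))   # 0.5 - equal clusion
print(to_unit(1.0))   # 1.0 - complete inclusion
```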
Attending to etymology usually pays off, for two reasons:
Thus, we can visualize all languages as a single language, a continuum with mascons of commonality of indexing-meaning pairs. Like a strange form of semantic entanglement, to be inevitably hacked open someday and to give birth to endless valuable technologies...
What does all of this have in common with Commons, Ownership and Tau?
Interestingly, the etymology of 'include' automatically leads to its privatization-publicization functionality.
It is cognate with both:
The private/public ''divide'' as a key/access-driven relation.
Do we ''have the keys''? Or ''are we'' the keys (given non-computerized 'face-control' type of access cases)?
NO. For any entity and for every access, the keys are not the entity, nor are they a property of it.
A key is OUTPUT by us, and fed as INPUT into other systems so that they perform.
Society can be imagined as a network of partially-black boxes, where free will is a function of a box's certainty of self-reflection, and trust is a function of the uncertainty in predicting other boxes' behavior...
We do not know, and in most cases cannot know, what's going on inside the inner workings of other people, organizations, or other artifacts, but we know that by inserting a Key we can make them perform a certain expected, predicted action.
The boxes are said to be partially black, the non-black part denoting the zone of predictability, i.e. ''if I input this into that black box, I know it will return to me this and that specifically''...
A key, be it biometrics, a piece of shaped metal, or a digital string of bits, is a reason which causes: an input which brings the outcome of access.
An important side note: in the case of the key-pair philosophy it is NOT two keys, public & private, but rather a (public) padlock and THE (private) key, so everybody can lock it but only the key-owner can unlock it / access it.
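The padlock metaphor translates directly into code. Here is a small sketch using the Python `cryptography` package: anyone holding the public key can 'lock' (encrypt) a message, but only the private-key holder can 'unlock' (decrypt) it:

```python
# The padlock metaphor: the public key locks, only the private key unlocks.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
padlock = private_key.public_key()       # anyone may hold a copy of this

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
locked = padlock.encrypt(b"meet at dawn", oaep)    # anyone can lock...
print(private_key.decrypt(locked, oaep))           # ...only the key-owner unlocks
```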
You may have noticed one of my many-times-repeated slogans:
LAW IS BETWEEN, CODE IS WITHIN
a slogan coming to delineate the map of Trust, i.e. where force is needed (''I trust you only as much as I can make you to'') versus the self-enforcing systems of blockchain and god knows what other possible systems.
The whole picture is pretty insightful in both the blockchain and the trust (i.e. force) context, when we realize that it is not so much about the de jure but about the purely de facto situation, even when minding the Law. For private-public is a function of the performance and efficiency of the protocol. Including the key-making ones. Including the key-breaking ones.
On the Law and the related trust = enforcement relations to code and protocols, I'll go into detail some other time (actually many times, because it seems the bunch of concepts here has lots of fruitful logical consequences), but the inevitable conclusion seems to be that it is in general a Clusivity thing even in the legal case. For it is a matter of accessing the output of compulsory legal action by inputting a ... key.
The recent EU intellectual property directive is a textbook example of the Fiat approach of external enforcement (as opposed to the cryptographic 'trustless' one). The Fiat way of enforcing ownership rights is also a Clusivity system. The victims of a property-rights breach ACCESS the authorities with their ID information, evidence, and procedural codes, and as output they are supposed to receive enforcement actions against the delinquents. The cost of trust this way can be staggering, and it is apparent that such a system may easily get clogged and implosively unscale.
Tau is mostly about the knowledge economy. An economy without ownership ... is very hard, if not impossible, to imagine. Like the Borg, where there is no 'between' anymore but everything is 'within'; yet even an all-white-boxes system is prone to failures (cancer, say). Especially when we go past the veil of the ideological cliche definitions and take ''to own'' = ''to access'' in the purely factual, physical sense of the word.
In this sense each and every economy is a Clusivity management system.
Tau promises the ultimate Clusivity management.
Tauchain and the privacy question (benefits of secret contracts and private knowledge). By Dana Edwards. Posted on Steemit. August 21, 2018.
As we can see from the current trend in crypto, there is now a move toward privacy. In my opinion most people underestimate the utility of these cryptographic advances. In this blog post I will highlight a particular advance, enabled by new cryptographic techniques (and hardware techniques such as the trusted execution environment), which can be of massive benefit to long-term believers in Tauchain.
The problem: Anyone can copy the code Ohad writes if it's open source
So we have a problem with Tauchain: all of the code Ohad is writing for TML is open source and on Github. This allows a competitor to simply steal his best ideas and, in a sense, rob the token holders who actually funded the development of the code. This happens very often: we see a new innovation in the crypto space, and soon after we see a new ICO or a new group come out of nowhere acting as if they originated the technology. In some cases the new group may even be much more centralized, more secretive, and very well funded.
The solution: Secret contracts (private source code and execution)
The trusted execution environment allows for the protection of intellectual property on the hardware level, while sMPC (secure multiparty computation) can achieve similar ends on the software level. The idea is that this provides a solution to idea theft: a community can keep certain critical pieces of code, data, algorithms, or other unique features secret. This creates an entirely new way to monetize knowledge, code, and ideas, which Agoras will be uniquely positioned to leverage.
Guy Zyskind of the Enigma Project provides the definition of what secret contracts are and how they work. The Enigma Project deserves credit for introducing this technology and for identifying a major problem in the cryptospace. Traditionally, on Ethereum and all other current platforms, when you release a DApp your code has to be open source. It is not possible to create a closed or private source decentralized app. In addition, the app has to be executed in the open, so all data running through it is public.
Strategic implementation of private knowledge and source code can allow Tauchain to maintain a dominant position
In most cases the world benefits when knowledge is shared. In fact, I'm in favor, most of the time, of sharing as much knowledge as is safe. The problem with algorithms, source code, and certain kinds of knowledge is that sharing them hands a competitive advantage to people who have more financial resources. These individuals can simply watch Github and copy. They can hire programmers to compete with the Tauchain and Agoras developers, and as long as the code is open there will be no real reason to buy the Agoras token long term.
What if the Tauchain development team and Agoras developers decide to implement private knowledge bases? What if it becomes possible to run code in a trusted execution environment so that other developers around the world cannot see the code or the algorithms? This would allow Tauchain to build Agoras in such a way that no other project would be capable of duplicating it. This would lock the value backed by the community's brainpower into the Agoras token, making it a true knowledge token which cannot simply be copied with ease by another project.
In fact, this is a strategy that developers making apps with Enigma's Secret Contracts are looking into as we speak. This competitive advantage of secrecy will change the landscape of the cryptospace. What does this enable for Agoras? Imagine an encrypted Github which developers can contribute to, but where only those developers can see the code. Imagine that after the code is written, no one else can see it if it is set to run privately. This would allow developers to code in secret and have the code run on computers without anyone knowing what the code is.
This can open up security vulnerabilities, but Tauchain can defend against them. In particular, it matters what is private and what is public. Critical aspects can be private, while security-critical areas can always be kept public. There may even be ways to prove that the code doesn't behave in a certain way without actually sharing the code (using advanced cryptography). In fact, my favored way of implementing this feature would be to time-lock the release of the source code by a number of months or years.
The idea isn't to keep things closed or secret forever. Privacy is about access control and about keeping things secret long enough to maintain a competitive advantage. A time delay to unlock the source code, for example, could work. It is even possible to let the community use puzzle-based time-lock encryption, so they would have to mine to get the source code released early (if there is a serious need or threat). In this way all secret blocks of code would be unlockable, but not for free, which makes it less likely that the community will seek to unlock them unless there is a genuine reason (beyond just stealing ideas).
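For flavor, here is a toy sketch of the time-lock idea in Python: derive the decryption key from a long chain of inherently sequential hashing, so recovering it takes a predictable amount of work. This is a simplified stand-in for real constructions such as the Rivest-Shamir-Wagner repeated-squaring puzzle (where, unlike here, the publisher has a trapdoor shortcut and needn't do the work themselves):

```python
# Toy time-lock: the key is the n-th sequential hash of a public seed.
# The chain cannot be parallelized, so unlocking takes roughly n hash-times.
import hashlib

def slow_key(seed: bytes, n: int) -> bytes:
    digest = seed
    for _ in range(n):                  # inherently sequential work
        digest = hashlib.sha256(digest).digest()
    return digest

# Publish (seed, n) alongside the encrypted source; anyone can recover the
# key, but only after grinding through all n steps.
key = slow_key(b"public seed", 5_000_000)
print(key.hex()[:16])
```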
What do you think about these ideas? Whether you agree or disagree, comment below. Strategic IP (intellectual property) is used by major corporations to give themselves a competitive advantage. The crypto community can do the same thing in ways the legal mechanisms can't. In fact it can be done in a fairer and better way, because often the people or companies awarded IP rights aren't the actual inventors. A knowledge economy is fantastic, but if the knowledge is just harvested by big corporations monitoring the wide-open network then it's going to be hard to bring value to a knowledge token.
UPDATE: Many people ask where to buy Agoras. The problem is that it's not widely available on centralized exchanges. The only exchange I know of that has it is Bitshares. So if anyone really wants to buy Agoras (AGRS), the token discussed in this post, feel free to buy it at:
42 million intermediate tokens total. The current price is 0.00010700 BTC, which is around 70 cents. This is the cheapest price I've seen in a while; for a long time it was in the $1.30-$1.50 range. This is a very speculative token at this time, so buy at your own risk; I'm not providing any financial advice. I'm of course a holder of this token, and have been for years.
Puddu, I., Dmitrienko, A., & Capkun, S. (2017). μchain: How to Forget without Hard Forks. IACR Cryptology ePrint Archive, 2017, 106.
Kaptchuk, G., Miers, I., & Green, M. (2017). Managing Secrets with Consensus Networks: Fairness, Ransomware and Access Control. IACR Cryptology ePrint Archive, 2017, 201.
Tauchain 101: Essential Reading On One Of The Most Revolutionary Blockchain Projects Under The Radar... By Rok Sivante. Published on Steemit. August 3, 2018.
Amidst countless blockchain projects hyping themselves up as "the next big thing," there are a few that have been working under the radar that hold the promise - not in word, but in substance - of truly being revolutionary game-changers.
Such ventures have not often come into the spotlight. Partly because their founders have focused first on the fundamentals of creating something that speaks for itself, versus the all-too-common approach of prioritizing sensationalistic marketing. And partly because the degree of innovation they represent, in tandem with the complexity in scope of their larger visions and the implications of their success, does not always lend itself to an easy understanding upfront.
One such project, still very early in its development, holds transformative potential no less grand than that of Bitcoin and Ethereum as they birthed and evolved the blockchain landscape:
Until recently, with the launch of a new website that has successfully managed to articulate the project's vision much more clearly, understanding what Tauchain is striving to accomplish was a domain only a very few highly-intelligent, technically-inclined people dared to tread. And prior to December 2018, there was no code - only an unproven concept spearheaded by a single Israeli developer, Ohad Asor, whom nearly all who've managed to connect with him have declared one of the most brilliant geniuses they've ever met, possibly ahead of his time.
Just as Bitcoin introduced the blockchain as an innovation radically altering the trajectory of our societal, economic, and technological evolution - and Ethereum followed suit, expanding upon that vision with entirely new capabilities for developing a range of decentralized applications and smart contracts - so too may Tauchain prove a platform of comparable success, whose impact may bring quantum leaps in the Blockchain Revolution.
How and where to start in describing Tauchain...?
Well, were we to begin with the technical side of things, we'd likely lose 98% of the audience. So perhaps a better starting point might be the bigger picture:
This generalized overview, however, still only barely scratches the surface.
While the intended ends may be that of a generic concept enabling drastically-increased efficiency in global collaboration, the means by which such is to be achieved entails a number of innovative component developments that each hold great significance and implications of their own.
While each may require deeper exploration to better grasp and begin piecing together into the bigger picture, the Tauchain website now offers an overview of key features which account for just some of what differentiates it from other blockchain platforms - and which enable new collaborative capabilities not possible with currently existing technologies:
While it'd be possible to expand upon each in great detail - both as regards the functionality and the implications of their applications - this particular piece of writing is to serve as a basic introduction to some of the best, most easily accessible content written on Tauchain to date.
And as we transition into that content, we shall begin with a quote summarizing the core essence of Tauchain, as approached from but one angle:
This project created by Ohad Asor is really ambitious and aims to create the internet of knowledge.
Some people would label it as an Artificial Intelligence, but according to the creator it is something totally different. To sum up, and so you understand me: Tau-chain is a tool that knows how to interpret any information and deduce any consensus. This tool can be used in any field - judicial, political, academic, social, scientific - and also, without limits, by an assembly of anywhere from 2 people to a million, for example.
~ @capitanart, from "My experience with Tau-chain"
The collection begins with two selections from Steemit's @trafalgar.
If anyone has successfully managed to distill the essence of the Tauchain vision into words that'd serve as a foundational Tauchain 101 intro, it'd have been him in these two excellent pieces:
What Is Tau? - My Only Other Crypto Investment
The Power of Tau - Scaling the Creation of Knowledge
Next, come three short articles from @flis, which may not go into any new details beyond the three above, yet offer a slightly different yet simplified perspective to reinforce the clarification of Tauchain's key concepts:
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building
How Tau-Chain can be implemented in practice
Tau Chain vs. Tezos - which platform will provide a better solution?
~ design credit: @voronoi
Next come a few selections from @dana-edwards, who has likely been the single individual most responsible for translating the highly-complex technical vision of Ohad Asor into a more approachable form, from which non-academics may begin to better understand Tauchain.
Quite possibly the first to write of developments and share outside of the project's IRC channel and Bitcointalk thread, Dana has one of the most comprehensive grasps of the project publicized anywhere, and his writings continue to serve as bridges for more people to discover and deepen their own comprehension of the innovations Tauchain represents, not only for computer science and the blockchain revolution, but for cultural and societal evolution as well.
What follows is a collection of his writings related to the project, which excellently piece together key ideas and insights, from which the gaps may be filled in to grasp a firmer idea of just how significant these developments could be and what the bigger picture of their success might look like:
What Tauchain can do for us: Collaborative Serious Alternate Reality Games
What Tauchain can do for us: Finding the world's biggest problems
Tauchain: The automated programmer
Artificial morality: Moral agents and Tauchain
What Tauchain can do for us: Effective Altruism + Tauchain
Collaborative Alternate Reality Games + Tauchain = UBAs (Universal Basic Assets)?
Tauchain and Tezos, why adaptability is the key to surviving in a fast changing environment
My commentary on Ohad's latest blog post: "Agoras to TML"
The following three pieces are not introductory-level, and will likely require a background in computer programming to understand. However, they are included here for anyone reading who might be interested in diving deeper into the technical side of the project:
Tauchain is not easy to understand but here are some concepts to know to track Ohad's progress
For all who are researching Tauchain (TML) to understand how it works, a nice video!
More on partial evaluation - How does partial evaluation work and why is it important?
~ design credit: @crypticalias
One other writer covering Tauchain needing to be mentioned: @karov.
While not the easiest to read and understand, the Steemit account of Georgi Karov is undoubtedly one of the most consistent sources of coverage on the project.
A lawyer by trade and currently one of the three members of the core team, @karov offers insights into the project that are reliably detailed, expansive into philosophical territory, and fascinating.
Although none of his articles have been included in this introductory collection, those who may be interested in keeping up to date with coverage of the project would be well-advised to follow his Steemit blog - and/or read backwards through the last few months of his posts there, as the blog is nearly entirely Tauchain-related content.
Lastly, though not least:
Coming from one of Steemit's most brilliant early-adopter minds, @kevinwong, this one is a quick read in itself with some key points worth factoring into a proper assessment of the project. And - far lengthier than the post itself - the comments thread also contains some gold:
Is Tauchain Agoras in Good Hands?
And to wrap up with another excellent quote from design consultant to the project, @capitanart - who is another to follow for updates:
The goal of Tau is to create a supermind, to solve the limitations inherent in human communication on a large scale.
Able to deduce consensus and understand discussions, Tau can generate and execute code automatically based on consensus, through a process known as code synthesis. This will greatly accelerate the production of knowledge and streamline most of the large-scale collaborative efforts we can imagine in today's world.
~ design credit: @overdye
''Tau solves the problems from the Tower of Babel to the Tower of Basel''
- an early 21st century yet undisclosable author
Okay, dearest friends, let's pull our sleeves up and start with it. Vivisection of the Scriptures? Revelation by transfiguration? Pulling the Tau out of the ocean of wisdom onto the dry no-Maths-land? I hope not.
The quote above at first glance sounds pompously biblical, but in fact it denotes the crystal-clear, simple, practical and mundane rationale of Tau, which I have already tried to approach from a few angles.
It is about the hierarchic bottleneck of one unscaling Humanity. Take the hint about the leveling of the Towers as a poetic symbol of the elimination of social 'verticality' (the hierarchies as a so-far-necessary evil compensating for certain innate neurological limitations) and of the reforming of the network we are embedded in, usually called mankind or society or economy or world, into one as geodesic as possibly possible. For the sake of its own functional, programmatic optimization.
Notice that the leveling of the towers is not by demolition, but by uplifting the overall landscape to and above the tower tops, turning them into deep roots or support pylons of an asymptotically geodesic society.
Apparently, mentioning the Gate of God denotes the unmixing of languages, and mentioning the apex global fiat settlement institution denotes the excelling of the current fiat procrustics, i.e. the economy aspect.
That is: TML to Agoras. The first and last of the total of six identified aspects or steps of the social choice as addressed by what we call Tau.
''our six steps of language, knowledge, discussion, collaboration, choice, and knowledge economy''
These aspects of course deserve separate zoom-in exegetic chapters, and they'll definitely get them. I promise. And not only they.
Any exegesis of Tau unavoidably must start with scrolling back and tracking down the full history of the development so far. A zoom out to see the full picture and to identify the dominant features of the landscape relief.
You, I reckon, have already noticed this retrodictive inclination of mine: in my mind the notion of a ''Timeline of Development'' cannot by any logic be just a handful of milestone promises thrown into the future; it must also account for the trajectory up to now! No future without past.
It all started as Zennet, continued as Tau-chains, and 'turned' into the so-called 'newtau'.
Wait! A New Tau?
Excuse me, Ohad, but I personally do not buy that, and I have said so many times. There is no old and new Tau. The situation is much more straightforward and grokkable. Here it is:
Lots of guts, balls, butt, brains or whatever human offal... is required for each of us to admit a mistake made in everyday life. Generally, quite some strength is needed even to look at ourselves in the mirror...
It takes a whole Ohad, though, to keep all of one's work totally public and transparent, even down to the full and unedited live record of the infil into an entire branch of mathematics, and then to throw it all away as untauful. We witnessed that, reported in real time!
Did this change the ends? No. But it sorted out the means to the end.
Was it a 'mistake'? In no case. It was duly delivered R&D effort.
Was oldtau looking promising at first glance? Yes, of course it did.
Did it survive Ohad's R&D 'crash-testing'? No, it didn't.
Was it ''juice worth the squeeze''? It was.
Was it a job well done? Absolutely.
The oldtau materials are, for me, legacy jewels. Like those dinosaur bugs trapped in blobs of amber.
Development is a process, not just the shipping of results. The two are related like cooking and serving.
Studying the zoomed-out dev map, we observe these few major landmarks:
The Zennet province is all right. Its gently rolling hills gradually merge into the Tau lands proper, with the inevitable realization that a 'world supercomputer' cannot be a Tauless thing. Zennet lives on in Tau:
''... having a decentralized search engine requires Zennet-like capabilities, the ability to fairly rent (and rent-out) computational resources, under acceptable risk in the user's terms (as a function of cost). Our knowledge market will surely require such capabilities, and is therefore one of the three main ingredients of Agoras... hardware rent market...''
We move on through the oldtau wastelands, where the burnt ruins of MLTT lie scattered. A rough oldtau location-on-the-map indicator is the fall of 2015, with
''Tau as a Generalized Blockchain'' - posted Oct 17, 2015, 6:33 AM [updated Oct 17, 2015, 6:49 AM]
and then we reach the fertile gardens of newtau in the fall of 2017:
''The New Tau'' - posted Dec 31, 2017, 12:27 AM [updated Dec 31, 2017, 12:28 AM]
Hmm. Apparently we crossed a watershed. Which relief feature was it? The ridge of:
''Tau and the Crisis of Truth'' - posted Sep 10, 2016, 8:25 PM [updated Sep 10, 2016, 8:28 PM]
Tau sorts out the Towers. I hope the synopsis in this short chapter of the Exegesis has helped to sort out Tau dev in time, as a navigation lookup tool.
Software is nothing but states of hardware. There is an intimate, deep connection, not yet codified into a neat compact of logic, between Gödel, Heisenberg and the Laws of thermodynamics.
Tau keeps us off these traps.
I do not dare to state that someday we won't have command of the infinities and play with them with the ease of
''... a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.''
In fact, quite the opposite: I'd rather take it as an inevitability that someday we will conquer the Cantor expanses and venture far beyond them. To transcale the transfinite. As Hilbert said:
''Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können. (From the paradise, that Cantor created for us, no-one can expel us.)''
But it takes ... finitary vehicles of DECIDABILITY to conquer the transfinitary outer spaces. Because, in order to dare to dream of taming the infinities, we must first harness and get full command of the finities.
Including ourselves. Tau is ''understanding each other''. Without Tau we are ... others to ourselves.
Imperare sibi maximum imperium est. (To rule oneself is the greatest power.)
A bizarre headline, isn't it? Sorry. It just ... coalesced spontaneously, like ... a protein folding. Let's try to decompress it. Compression is comprehension. Decompression is experience. First, I'll throw in three bold statements - big separate mega-topics which I'll soon revisit by furnishing them with, or backing them by, their due full-fledged Behest.io articles:
1. The World is Fiat
I tend to generalize the term fiat beyond currency to the whole Sollen ('ought') approach to transactors. In my vocabulary, Fiat is an umbrella term for all social interaction which requires external enforcement, i.e. all that is not trustless or self-enforcing in the way morals or blockchain are. The whole system of monetized coercion. Or its reciprocal: coercion-backed fiat monetization. (Note: monetization of coercion vs coercion of monetization are not related by an OR operator; they are a typical chicken-and-egg problem - even the smallest children know that eggs precede chicks!) All that requires trust ...
2. Trust is Force
''You trust 'em only as much as you could make 'em to.''
Coercion or force or violence ... itself IS currency per se: the primordial one, deeply preceding the emergence of Mankind, and whoever manages to rigorize it quantitatively will get and give us a TOE unification of ecology and economics, i.e. instant Nobel prizes! Not sure in which combination of fields. Simultaneously.
3. Money is Mnemonics
E.g. money in all its forms is ... accountancy. Or book-keeping. Ledgers. Logs. Databases. Memory. They are even cognates, those two - money and memory. An ancient truism.
It comes as a necessity from the problem of simultaneity of transactions between autonomous agents; in other words, between automata, or self-thinkers, or those who are black boxes to each other. Regardless of whether the economy is mere barter, or has uplifted one or more of its items to transactor/currency status.
An apparent feature of all accountancy systems is that they possess a cardinality of entries.
Up to now we know of single-entry, double-entry and triple-entry book-keeping systems.
I'm not sure if a 'system' where everybody perceives, remembers and acts upon an isolated, unshared 'ledger' of records of what's owed, contained only in their head, and runs it however they can and want ... counts as zero-entry book-keeping. Pun intended.
Can't wrap my head around negative or fractional numbers of book-keeping entries, nor I know what's the maximum practical and useful number of entries to juggle with. I expect Tau to bring together the, without any shadow of doubt, already available but dispersed across space and time bits and pieces of knowledge on accountancy entries cardinality into a general theory of transaction logging. It is necessary because, you know - an item is money (mnemonic  facility) ... transactor is accelerator , and general theory will give us a tool to know which monetary mechanism design  is the most powerful wealth growth booster.
Satoshi's blockchain is, so far, the first and only instance of a successful implementation of triple-entry book-keeping, where credit and debit records and the receipt are coined into one. A self-enforcing log-book is as much (and no more) a magical, deus ex machina solution as a horseless carriage is vs a 'legacy' cart.
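To make the entry-cardinality talk concrete, here is a minimal sketch of my own (illustrative names and structures only - not any accounting standard, and not Tau code): in double-entry book-keeping each party privately mirrors every transaction as a debit and a credit, while in triple-entry both parties share one sealed receipt, which is roughly what a blockchain transaction is.

```python
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass
class DoubleEntryBook:
    """Double-entry: each party privately records a transaction twice,
    as a debit to one account and a credit to another."""
    owner: str
    entries: list = field(default_factory=list)

    def post(self, debit_account: str, credit_account: str, amount: int):
        self.entries.append(("DR", debit_account, amount))
        self.entries.append(("CR", credit_account, amount))

@dataclass(frozen=True)
class TripleEntryReceipt:
    """Triple-entry: one shared record replaces the two private mirror images.
    The hash below is a stand-in for a real digital signature."""
    payer: str
    payee: str
    amount: int

    def seal(self) -> str:
        return sha256(f"{self.payer}->{self.payee}:{self.amount}".encode()).hexdigest()

# Double-entry: Alice and Bob keep separate books that can silently disagree.
alice, bob = DoubleEntryBook("Alice"), DoubleEntryBook("Bob")
alice.post("expenses", "cash", 100)  # Alice's private view of paying Bob 100
bob.post("cash", "revenue", 100)     # Bob's private view of the same event

# Triple-entry: both parties hold the identical sealed receipt.
receipt = TripleEntryReceipt("Alice", "Bob", 100)
print(receipt.seal()[:16])  # the shared 'third entry'
```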
The total value catered for by blockchain is, expectedly, an impressive grower itself. It took Bitcoin (and its imitations) only 7-8 years to reach ~1% of what took Gold 7-8 THOUSAND years to get.
BUT, we still live in a predominantly Fiat, double-entry book-keeping world:
Visualize the modern world as a forest of centralized 2-entry ledgers:
From the several hundred tree stems - the Central Banks - through the thousands and thousands of commercial banks - fractional reserve franchisees of the Central Banks - down to the credit-debit records of individual humans and firms.
A vast centralized fractal of 2-entry ledgers of ledgers. Linked into one by the global meta-ledgers provided by international institutions like the BIS.
Important Note: ... which I must make here - we've heard lots of crap talk about how Blockchain is against Fiat, how it will replace it, how it frees us from the illths of the ancien régime. NOTHING like that! The truth is that, for now, we do not have even the slightest idea or hint about how we could decentralize or detrust interpersonal voluntary exchange! Geography and history, i.e. nature and culture, are forces to reckon with. The propaganda suggestion that fiat money is kinda fake, printed at a wish, valuable only because we all believe in it ... is one of the biggest pieces of nonsense I've ever heard.
As in any forest, the tree size and power varies. And matters. USD is the Yggdrasil of the meatspace of the global fiat mainstream Schwarzwald! (Just as BTC is in the cyberspace one. It is no coincidence at all that both are such perfect systemic benchmark matches.) In the ocean of fiat, USD is a giant landmass, a Pangea which is nearly impossible to go around. The force of 20 000 golf balls of Plutonium coupled with the same number of office dustbins of LiD. And 1000+ military bases scattered around the world. And comprehensive coverage of the sea routes to guarantee that the global trade goes by the books. And the working supremacy of a law system as an antidote to the internal corruption decay of the system... Shall the USD survive the Blockchaincalypsis? Of course! Taxcoins are always needed. The runaway crypto-fication of the fiat monetary systems only makes the due payments of geopolitical services more and more inescapable. And more and more precise and fair. With higher resolution and lower lag.
Backed by force means that the strongest force is the most trusted. Like all those currencies that belong to the hall of glory of the millennial monumental transactors: the Hellenic drachma (surviving so far as a currency name in the Gulf), the Roman solidus, the Spanish silver dollar, etc. ... used to be mainstreamers for being backed by the biggest force. (Mentioning the Force, we simply can not go without a Star Wars quote, I'm afraid - the best and most inexorable thesaurus of cliches.)
Let's now close the three side notes of the dictionary intro here, and go back to PROCRUSTICS:
First, yes, it is about that antiquity gangsta, the psychopathic dropout of the noble blacksmith's profession - Procrustes.:
''who attacked people by stretching them or cutting off their legs, so as to force them to fit the size of an iron bed.''
Secondly, the etymology turtledoves who explained to us what Behest is clarify that:
Don't look exactly like pigeons, do they?
Thirdly, Procrustics by the great philosopher Stanislaw Lem . This is from Wikipedia:
''In 1959 science fiction novel Eden by Stanislaw Lem, Procrustics is the name of a fictitious information-theory based social engineering discipline of molding groups within a society and ultimately a society as a whole to behave as designed by secretive hidden rulers, to create a hideous form of social control in which the very existence of the governing powers is denied and each individual appears to themselves to be free yet are being manipulated and controlled. One example described in the novel is "concentration camps" without any guards which are designed so that the prisoners stay inside apparently on their "free" will.''
Last but not least, it is no surprise that this meaning-laden word entered the vast fields of mathematics, too, to denote important concepts. Procrustean transformations:
''Hence, it may change the size, but not the shape of an ... object.''
I think this is enough explanation to tie it all up into:
The Fiat is procrustic because it is ripe to be transcaled!
Fiat is saturated. It can not grow the old ways any more. Growing it gets dearer and dearer. It has reached its internal limits.
Fiat (as global fractal integration of all double-entry accountancy books) is a narrative.
Fiat is procrustic because, covering it all being unaffordable, it omits, it cuts off, it keeps out, excludes, discriminates, sequesters ...
As a narrative it tells a story of wealth, but leaves unsung such vast, though present, riches.
The global fiat bards' memory is too weak to memorize it all, and they are not clever enough to distinguish the true from the false entries ...
The fiat Yggdrasil Norns' fingers are too weak to handle all the threads and to manage to interweave them all into the meta-ledger ...
A giant mass of economic data is left lying to waste, unused. And that's REALLY bad, because data about the system state is the fuel of the system's own self-reinforcing positive feedback loop. Yeah, data as the new oil, but literally.
The estimates are that as much as 80% of all economic information stays off the record.
Cf.: Hernando de Soto Polar, who estimated that.:
''The existence of such massive exclusion generates two parallel economies, legal and extra legal. An elite minority enjoys the economic benefits of the law and globalization, while the majority of entrepreneurs are stuck in poverty, where their assets—adding up to more than US$10 trillion worldwide—languish as dead capital in the shadows of the law.''
in his 2000 book ''The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else''.
Cf.: Robert Neuwirth on System D, aka Bazaaristan.:
''Across the globe, 1.8 billion people -- a quarter of the world's population -- work off the books each day. They are paid in cash for the goods they sell and the services they provide, and due to their ubiquity, there's a word for these merchants in nearly every language. As Robert Neuwirth reports, in French colonies, they're known as débrouillards -- self-starters, entrepreneurs, all outside the bureaucratic system. They might be vendors selling revolutionary goods in Egypt's Tahrir Square, Nigerians selling mobile phones, or the guy down the street hawking flowers on the corner. Whoever they are, they work in the world's fastest-growing economy: System D. As Neuwirth writes, System D, slang for "l'economie de la débrouillardise," is the crucial blackmarket, providing opportunities where the regulated global economy has failed. Its value is estimated at roughly $10 trillion, meaning, as Neuwirth points out, that, "If System D were an independent nation, united in a single political structure -- call it the United Street Sellers Republic (USSR) or, perhaps, Bazaaristan -- it would be an economic superpower, the second largest economy in the world." The Organization for Economic Co-operation and Development (OECD) predicts that two-thirds of the world's workers will be employed in System D as soon as 2020.''
Cf.: the world unbanked population phenomenon 
''Two billion people worldwide do not have a bank account or access to a financial institution via a mobile phone, or any other device.''
The ancient worldmap picture up at the bizarre headline denotes exactly this 'Here Be Dragons' situation of the not-yet-Blockchain-boosted Fiat finance.
All these examples demonstrate not a conspiracy of any kind, but mere and obvious fiat unscaling symptomatics.
Probably, in the old centralized way, a double-entry book-keeping system, in order to check, record and run all facts of relevant economic information, would have to consume more than what the economy makes as a whole! :)
This inevitably crosses with the important topic of network scaling effects - for merely linking all the dots automatically means an n^2 bigger economy. Without adding anything new, but just by noticing and accounting for the existing wealth.
We probably have a dozens-of-times bigger economy than we realize! Tantalus suffering.
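A back-of-the-envelope reading of that n^2 claim (my illustration, with made-up round numbers): Metcalfe's law models a network's value as proportional to its number of possible links, n(n-1)/2, so merely bringing unrecorded actors onto the ledger grows the accounted link count quadratically, not linearly.

```python
def links(n: int) -> int:
    """Possible pairwise connections among n nodes - the Metcalfe's law proxy."""
    return n * (n - 1) // 2

# Illustrative numbers only: if ledgers today record 2 billion economic actors
# and the off-the-record economy holds another 2 billion, then accounting for
# everyone roughly quadruples the number of recordable trading relationships.
recorded = 2_000_000_000
everyone = 4_000_000_000
print(links(everyone) / links(recorded))  # ~4.0
```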
On the comparative costs of the accounting systems there are three studies which I particularly value, and which, put into a neat perspective together with the network scaling effects, are definitely the subject of separate near-future blog posts of mine.
Now scroll back up to the ''Important Note'' above, please.
Blockchain is not the Fiat killer. It is its Transcaler!!
And Tauchain being - together with so many other things - the generalization and the generalizer of all possible blockchains in particular and all possible accountancies in general - is the transcaler of the transcalers.
And in effect - the ultimate economy (incl. economy governance) Accelerator.
The power of ambiguity and of ambiguity minimization in communication. By Dana Edwards on Steemit. June 1, 2018.
Formal communication benefits from ambiguity minimization.
So what exactly do I mean by formal communication? Well, when we think of how human beings communicate with machines, it is in a formal language. This formal language requires minimized ambiguity for security analysis (how can we analyze code if we cannot effectively interpret it?). The other problem is that machines require that, for example, if... then... else and similar conditional statements be well defined and unambiguous.
Is it possible to show that a grammar is unambiguous?
To show a grammar is unambiguous you have to argue that for each string in the language there is only one derivation tree. This is how it would be done theoretically speaking.
In computer science, an ambiguous grammar is a context-free grammar for which there exists a string that can have more than one leftmost derivation or parse tree, while an unambiguous grammar is a context-free grammar for which every valid string has a unique leftmost derivation or parse tree. Many languages admit both ambiguous and unambiguous grammars, while some languages admit only ambiguous grammars.
Specifically we know that deterministic context free grammars must be unambiguous. So we know unambiguous grammars exist. It appears the strategy is ambiguity minimization with regard to formal languages (such as computer programming languages).
For computer programming languages, the reference grammar is often ambiguous, due to issues such as the dangling else problem. If present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so the overall phrase grammar is unambiguous. The set of all parse trees for an ambiguous sentence is called a parse forest.
The parse forest is an important concept to note. The set of all possible parse trees for an ambiguous sentence is called a "parse forest". This concept is key to understanding the strategy of ambiguity minimization. So we can in practice minimize ambiguity, and we know for certain that deterministic context free grammars admit an unambiguous grammar, but what does that mean? What are the benefits of unambiguous language in general?
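To make the parse forest concrete, here is a small sketch of mine (a toy grammar of my own, not anything from the quoted material) that brute-force enumerates every derivation of the classic dangling-else sentence; the two trees it prints are exactly a parse forest.

```python
def parses(toks):
    """Return every parse tree for the ambiguous dangling-else grammar:
       Stmt -> 'if' 'E' Stmt | 'if' 'E' Stmt 'else' Stmt | 's'
    Each returned tree covers exactly the whole token list."""
    results = []
    # Base case: a single plain statement.
    if toks == ['s']:
        results.append('s')
    if toks[:2] == ['if', 'E']:
        body = toks[2:]
        # Production without 'else': the remainder is one Stmt.
        for t in parses(body):
            results.append(('if', t))
        # Production with 'else': try splitting the remainder at every 'else'.
        for i, tok in enumerate(body):
            if tok == 'else':
                for t1 in parses(body[:i]):
                    for t2 in parses(body[i + 1:]):
                        results.append(('if-else', t1, t2))
    return results

# The classic ambiguous sentence: if E if E s else s
forest = parses(['if', 'E', 'if', 'E', 's', 'else', 's'])
for tree in forest:
    print(tree)
# Two trees: the 'else' binds to the inner or to the outer 'if'.
```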
A benefit of ambiguity minimization
Simple English is a form of controlled English designed to minimize ambiguity in English. This is important because using Simple English to codify rules or write laws puts them in a language where there is less computational expense (in brain power) to process and interpret the statements.
On one of my older blog posts @omitaylor commented, and in a subsequent post of hers she asked about the topic of love. Specifically, her post was titled: "What Does LOVE Mean To YOU"
Her post highlights the fact that there are different love languages and that we don't all speak the same love language. Ambiguity here is actually not a good thing, but the simple fact is that when someone speaks about love, how do we know they are talking about the same thing? As a result we often seek an agreed-upon or formally defined "love concept" where we all agree it's love. This is not trivial to find, and as a result a topic like love is not easy to discuss in any serious manner. Unambiguous communication, or more precisely minimized ambiguity, would allow Alice to discuss with Bob the topic of love in a way where they both know exactly what the other is referring to in terms of behavioral expectations, emotions/feelings, etc.
If Alice agrees to love Bob then Bob has no way to determine what Alice means unless he and she agree on a mutually defined concept of love. This highlights how agreement requires very good communication and how minimizing ambiguity can be beneficial at least in this example.
Ambiguity minimization makes sense when you are following a principle of computational kindness. That is, if Alice would like to reduce the computational burden on Bob, then she can reduce or minimize the ambiguity of her sentence. This is because in order for Bob to interpret an ambiguous sentence, Bob must in essence sort all possible interpretations of that sentence from most likely to least likely, and before he can even sort he must first search in order to find all possible, or at least plausible, interpretations.
This is very computationally expensive for Bob but very cheap for Alice. Alice knows exactly what she means, but Bob has no clue what Alice REALLY means.
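A toy sketch of that asymmetry (my illustration; the readings and their probabilities are invented): the speaker picks her intended meaning in constant time, while the hearer must enumerate the candidate readings and rank them by plausibility before acting on any of them.

```python
# Candidate readings of "I saw the man with the telescope", with the hearer's
# prior plausibility estimates (invented numbers, for illustration only).
readings = {
    "saw the man who had a telescope": 0.35,
    "used a telescope to see the man": 0.55,
    "cut the man, using the telescope": 0.10,
}

# Alice (speaker): O(1) - she already knows which reading she intended.
intended = "used a telescope to see the man"

# Bob (hearer): must search the whole space and sort it by likelihood.
ranked = sorted(readings.items(), key=lambda kv: kv[1], reverse=True)
best_guess, confidence = ranked[0]
print(best_guess == intended, confidence)  # True 0.55 - but only after the search
```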
A benefit of ambiguity
There are other examples where increasing ambiguity could be beneficial, such as when the communication is less than formal, or to share a stream of consciousness without turning it into a formal communication. Humor, for example, rides on ambiguity, and a good joke may have multiple layers. Art also leverages ambiguity because it's perhaps meant to be interpreted 20 different ways, all to produce a certain desired effect.
Ambiguity allows more meaning to be packed into fewer words. This in a sense is a sort of compression scheme. So if a sentence has multiple possible meanings, the layers of meaning are still finite. It's a fixed number of meanings, and so theoretically speaking a search can be conducted. In fact this is what a human being does when interpreting natural language where a sentence can have multiple meanings (they do a search for all possible interpretations of that sentence). The problem is that this is a computationally expensive process, at least for a human being trying to figure out all possible interpretations of a sentence.
Lawyers, when they do their work, are working with a specific knowledge base of common legal sentences and common interpretations known in their profession, but the rest of us might see a sentence in lawyer-speak and not really know what it means, because we will not know the common interpretations. This is a big problem of course, because to form agreements between two parties both parties need to have a common understanding (a kind of knowledge-symmetric understandability) allowing them both to interpret roughly the same sentence to mean the same thing.
“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”
― Robert A. Heinlein 
No, it is not a vow that everybody be everything. It is a reflection of the fundamental human fungibility. The average human can be taught to take any human role. The exceptions of true organic geniuses (those who are hard to replace) and morons (those who are incapable of replacing) only confirm this general rule of sheer numbers. This is what makes mankind so scalable.
''Know'' is synonymous with ''can''. Literally. Knowledge = technology. Even etymologically. Knowledge is praxis. Only. There ain't such a thing as impractical knowledge. If it is not a skill, it is not knowledge. I mentioned once that we're all AIs. Ref.: feral children.
We are not what we eat, but we are what we've learnt. You are what you know/can. And you can what you have learnt. Learning is on the taking side. Teaching is on the giving side. Of one and the same process. We do not have a word to denote the modulus of learning/teaching, it seems. But it will come.
We are taught by the others, the society. We are the cherry on top of a layer cake of culture onto nature. We are learning by ... living. We acquire skills in a plethora of contexts: from family, street, school, job, media ... Learning is not a monopoly of man; countless systems are also learners. Maybe one of the basic definitions of life and intelligence is the ability to learn. Giant topic, yeah. We won't graze here into what learning is, but into how we learn.
Due to our neurological bottlenecks we spontaneously form hierarchies. This hinders our scalability by forcing humanity to be more or less a fractal of 5. We are close to a number of breakthroughs which could mitigate these innate limitations of ours in a number of ways. But the general case is not the subject of this article - herein we focus on HOW we are taught. How we acquire knowledge, and how this knowledge of ours gets recognized and utilized by society. And the hierarchic emergent structuring is of course in full force upon us in teaching, as in everything else social.
So comes education, so come exams, knowledge certification, certified skills application, knowledge creation verification, job fitness testing, CVs and employer recommendations ... etc., etc. With all the bugs and the so few features of this 'map is not the territory' situation.
It is all centralized and hierarchic - exactly as the global fractal of double-entry accountancy ledgers which we call the fiat financial system is. In fact it is so interwoven with fiat finance that it is almost inextricable from it. And just as inefficient and imprecise.
In all these years of talking and thinking on Tauchain - I noticed - and this suspicion of mine incrementally turns into sheer conviction - that Tau, the upscaler of humanity, is inevitably also the ultimate teaching machine. If education is the facilitation of learning, Tau is the maximizer of learning. By its very construction, it comes out so.
People talk and listen whenever and about whatever they want. Tau has unlimited capacity to listen and attend and remember, and answer - limited only by the hardware capacity allocated. Tau extracts meaning. Purifies the stream, distills it down to the essence. Detects repetitions, contradictions and all the other conversation bugs so ubiquitous nowadays. Remembers changes of opinion of the individual user. And points them out. Sounds like the best tool to know oneself. And for the others to know you, if you let them.
Your Tau account or profile is what you know. You say what you say and also ask - statements and questions. Tau pools you together with the others who state the same and, more importantly, who ask the same type of questions. Knowing what you know, and asking about what you don't know but want to know, maps not only your knowledge state but also your knowledge dynamics. It records and drives how your knowledge changes. You even have access to what you forget, and can recollect it. True real-time knowledge state reporting. For the first time in human history.
If consciousness is - aside from the clinical state of being merely awake - the post-factum integration of sensorimotor experience, the Accountant of the mind, the speaker of the narrative which is you, then Tau is your consciousness booster. That is - stronger than thought.
The ultimate teaching, the ultimate fair testing or exam, the ultimate real-time comprehensive diploma or certificate, the super-peer-reviewed paper(s) of you as an academic career.., the ultimate job interview AND the ultimate ... job of working as yourself, with anything useful you create instantly scarcifiable and monetizable - that is what your Tau account is! And all the rest of accessible society being your own workforce. And you theirs. In the billions. In a move. In real time.
Including control over the pathways of skill growth towards the learning directions most productive personally for you, because it aids you in analyzing the you-Tau history, in applying knowledge-maximizer techniques and in participating profitably in the creation of newer, better ones. Maximizer of self. And maximizer of society, making it consist of max-selfs. Ever improving. A merger of education with work occupation. Work-as-you-live.
The literal Knowledge Economy, as described by @trafalgar in his article from a few months ago. Where search, creation, reflection, certification, recognition, commercialization, accumulation, modification, improvement ... everything of knowledge - is all in one.
And it is not only Humans' and Tau's lonely job. I foresee the other Machines joining the party. Yes, I mean machines capable of having interests and of asking and seeking answers to palatable questions.
This - the education amplification - coming down the technology way, has been, of course, anticipated by many. A few arbitrary examples:
- A distant rough-sketch hint of the inevitable tuition power of Tau is Neal Stephenson's ''The Diamond Age'', with the depicted ''Or, A Young Lady's Illustrated Primer'' as an interactive networked teaching device.
- or, if I'm right about the inevitable conquest of the natural languages territory - a UX like in the film 'Her' (2013).
- Thomas Frey of the futurist DaVinci Institute paid special attention to this in his book ''Epiphany Z''.: down the way of micro- and nano-education, an effective merger of the processes of education, diploma issuing, job application, examination and the actual execution of job obligations. Tom does not know about Tau. But I'll tell him.
With a big smile of irony and self-irony, of course... these examples. Just picking from here and there proofs of the giant anticipation of what's to come. And to be taken with a few big grains of salt. Because the reality will be immensely more powerful.
Tutor, tuition - my emphasis, via using exactly this wording, is meant to denote the economic side of learning/teaching. It is about the cost of learning - the association of tuition with fees - about the placement of the acquired skills, about the business organization of those, about the protection of ownership and the security of transactions of knowledge ... Let me introduce here a neologism to reflect the business side of it:
Scrooge Factor 
- Simply denoting the money-making power of a technology's use by a business. The 'money suction power' of a business entity or organization of any kind, coming from the application of a technology, if you want. Technology as socialized knowledge. Scaled up over multiple humans. Over a society. Of course the Scrooge Factor can pump in different directions. The Scrooge Factor of the traditional hierarchic education, governance and everything ... is apparently very often negative - hierarchies decapitalize, dissipate, waste. Orders of magnitude more wasteful than any PoW, but on this - some other time.
So, aside from all the niceties of the abstractions of the full supply and value chains of a Knowledge economy, let's round up some numbers:
- We know that a true functional semantic search engine alone is worth $10T. Yeah. Tens of trills. Trillions. As per the assessments of Davos WEF attendees from, as far as I remember, 2015 or 2016...
- Also, back in 2004 Bill Gates stated: ''If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.''
- Tom Frey also argued that by 2030 the biggest corporation in the world will be an online school. Given the present-day size and growth rate of, say, Amazon, this 'online school' should be in the range of a good deal of trillions of market cap if it is to be bigger than the biggest corporations. But we do not need such indirect analogies over analogies to assess the scale. The sheer size of the global education industry is the most eloquent indicator. Note that Tom talks about a 'corporation', i.e. a clumsy and inefficient hierarchic human collective. Not a system which does this orders of magnitude more efficiently and powerfully due to being intrinsically P2P, i.e. geodesic. Even the best futurologists can be forgiven for failing to predict Tau. :)
And this mind-boggling hail of trillions does not even account for the Hanson Engine factor.
Tau the Tutor ex Machina is just another unintended useful consequence outta the overall design.
It is nearly impossible to track and contemplate exactly what all these 'side-effects' would be and how they will synergetically boost each other.
With my articles I intend only to touch some lines of the immense phase space of the possibilia, with neither any ambition to think it possible to cover it all, nor for this to represent any form of advice.
Future is incompressible. Compression is comprehension. Comprehensible only by living.
Failure to go the geodesic way of learning will turn these beautiful but chilling words into prophecy:
"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age." - H.P. Lovecraft (1926)
''Thinking by Machine: A Study of Cybernetics''
by Pierre de Latil 
Published by Houghton Mifflin Company in 1957 (c.1956), Boston.
A foreword by Isaac Asimov (then only 36 years old)! A recommendation by the legendary mathematician and cyberneticist Norbert Wiener (then 62 years old)! ... A true jewel! The book is described as:
A review of "the last ten years' progress in the development of self-governing machines," describing "the principles that make the most complex automatic machines possible, as well as the fundamentals of their construction."
Nineteen fifties!! The midway point between the first digital computer, made by my half-compatriot John Atanasoff, and the internet. Almost a human generation's span between the former event, the book, and the latter. An epoch so deep in the past that even television, air travel, rockets and nukes ... were young then.
The same Kondratieff wave phase, btw, which hints towards the historical rhyming of socially important intellectual interests. (On how K-waves imprint on the humanity growth curve - in a series of other posts to come.)
I must admit here that I've never put my hands and eyes on this book. But it is stamped into my mind and memory by Stanislaw Lem - one of the greatest philosophers of the XXth century, working under the disguise of a Sci-Fi writer for having been caught on the wrong side of the Iron Curtain.
''Summa Technologiae'' (1964) is a monumental work of Lem's, where most of the issues discussed sound more contemporary nowadays than they did more than half a century ago when it was written, and for many things we are yet in the deep past ...
... Lem reports and discusses the following from the aforementioned Pierre de Latil's book.:
''As a starting point will serve a graphic chart classifying effectors, i.e., systems capable of acting, which Pierre de Latil included in his book Artificial Thinking [P. de Latil: Sztuczne myślenie. Warsaw 1958]. He distinguishes three main classes of effectors. To the first, the deterministic effectors, belong simple (like a hammer) and complex devices (adding machine, classical machines) as well as devices coupled to the environment (but without feedback) - e.g. automatic fire alarm. The second class, organized effectors, includes systems with feedback: machines with built-in determinism of action (automatic regulators, e.g., steam engine), machines with variable goals of action (externally conditioned, e.g., electronic brains) and self-programming machines (system capable of self-organization). To the latter group belong the animals and humans. One more degree of freedom can be found in systems which are capable, in order to achieve their goals, to change themselves (de Latil calls this the freedom of the "who", meaning that, while the organization and material of his body "is given" to man, systems of that higher type can - being restricted only with respect to the choice of the building material - radically reconstruct the organization of their own system: as an example may serve a living species during biological evolution). A hypothetical effector of an even higher degree also possesses the freedom of choice of the building material from which "it creates itself". De Latil suggests for such an effector with highest freedom - the mechanism of self-creation of cosmic matter according to Hoyle's theory. It is easy to see that a far less hypothetical and easily verifiable system of that kind is the technological evolution. It displays all the features of a system with feedback, programmed "from within", i.e., self-organizing, additionally equipped with freedom with respect to total self-reconstruction (like a living, evolving species) as well as with respect to the choice of the building material (since a technology has at its disposal everything the universe contains).''
Longish quote, but every word in it is worth it. When I read this as a kid back in the 1980s ... immediately there came to my mind the next, the seventh, logically higher effector class.: the worldmaker!!
The degrees of freedom of all the previous six, according to the classical taxonomy of de Latil, are confined by the rule-set, the local laws of physics.
They are prisoners of a universe. Like birds incapable of reconfiguring their cage into a roomier and cozier one.
If we regard the laws of nature as code or algorithm, my 7th-level effector will be capable of drafting and implementing itself onto newer and stronger algorithmic foundations. (Note the seamlessness between computation and robotics in the Latil/Lem categorization construct - quite logical indeed, having in mind that software is a state of hardware, that matter-form-action are inextricable from each other; but on this at other times and in other posts ...). Without bond?
So, I wonder:
Where, do you reckon, is Tauchain placed on de Latil's effectors map?
To zoom out is useful. It puts the event networks of our spacetime in perspective. Including what the great Jorge Luis Borges called the Orbis Tertius:
''ORBIS TERTIUS. "Tertius" (Latin = third) is an allusion to: World 3: the world of the products of the human mind, defined by Karl Popper.''
Poetically stated, ''retrodiction studies'' enable us to get a glimpse of the "clear, cold lines of eternity".
Back in the 20th century, Prof. Robin Hanson put together this extremely insightful and strong document:
Long-Term Growth As A Sequence of Exponential Modes.
Economy grows. [see: Footnote]. Unstoppable.
Hanson's unprecedented contribution was to provide us with a systematic orientation tool on how and why the economy grows.
It accelerates. See:
Mode         Doubling    Date Began      Doubles    Doubles    Transition
Grows        Time (DT)   To Dominate     of DT      of WP      CES Power
----------   ---------   -----------     -------    -------    ----------
Brain size   34M yrs     550M B.C.       ?          "16"       ?
Hunters      224K yrs    2000K B.C.      7.3        8.9        ?
Farmers      909 yrs     4856 B.C.       7.9        7.6        2.4
Industry     6.3 yrs     2020 A.D.       7.2        >9.2       0.094
The model identifies the past economy accelerators as.:
- neural networks, evolving into doubling brain size every 30-ish megayears (hinting that a human level of intelligence is an inevitability: +/-30 million years around the Now, by virtue of the good old 'coin-toss' Darwinian algorithm alone).
- humans as the top-of-the-food-chain predator since around 2,000,000 B.C. (maybe the human mastering of the Fire and the Blade is to blame), compressing the doubling time by over two orders of magnitude, down to a quarter of a million years.
- food production, ecosystem manipulation (or rather the collimation of farming, horse domestication and writing as accelerator components), leading to less than 40 human generations per economy doubling.
- all we know as division of labor, specialization, systematized Sci-Tech ... industry - the centralized ways of producing and controlling knowledge, leading to another hundred-fold compression, down to a mere ~decade of economy doubling time.
Recommended: digest each Hanson (economy accelerator drive, or) Engine with Bob Hettinga's 'enzyme':
My observation about networks in general is a rather obvious one when you think about it: our social structures map to our communication structures. As intuitive as it is to understand, this observation provides great insight into where the technology of computer assisted communication will take us in the years ahead.
Connectivity specs as indicator and drive.
Now, when we leave the past and use these models to gaze into the future, the really interesting stuff comes out.
Aside from explaining the overall trajectory of the economy detected by Brad DeLong in his also-monumental paper, the nucleus of meaning in Robin Hanson's paper is:
Typically, the economy is dominated by one particular mode of economic growth, which produces a constant growth rate. While there are often economic processes which grow exponentially at a rate much faster than that of the economy as a whole, such processes almost always slow down as they become limited by the size of the total economy. Very rarely, however, a faster process reforms the economy so fundamentally that overall economic growth rates accelerate to track this new process. The economy might then be thought of as composed of an old sector and a new sector, a new sector which continues to grow at its same speed even when it comes to dominate the economy.
Visualize: a Petri dish and sugar being expanded in size and quantity by the accelerating growth of the bacterial culture in it.
Hanson actually predicted, nearly a quarter of a century ago ... something that is relentlessly coming.
In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be ... 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate. ... it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.
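The arithmetic behind those week figures can be reproduced from the table above (a sketch of mine, using the table's rounded inputs): take the industry doubling time of 6.3 years and compress it by 2^k for each of the recent 'doubles of DT' values.

```python
WEEKS_PER_YEAR = 52.18                      # average calendar weeks per year
industry_dt_weeks = 6.3 * WEEKS_PER_YEAR    # current doubling time, in weeks

# 'Doubles of DT' values from the table's last three populated rows.
for doubles in (7.9, 7.3, 7.2):
    next_dt = industry_dt_weeks / 2 ** doubles
    print(f"{doubles} doublings of DT -> next doubling time ~ {next_dt:.1f} weeks")

# Prints roughly 1.4, 2.1 and 2.2 weeks - matching Hanson's 1.3 / 2.1 / 2.3
# figures up to the rounding of the table's two-digit inputs.
```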
An economy accelerator avalanche is roaring down the slope of time towards us.
A brand new Hanson Engine is about to leave the assembly line.
Tau, is that you?
FOOTNOTE: To wrap up the above statements in the flesh of the deep thesaurus of content on which they lie would conservatively consume hundreds of pages. Even if only briefed. I promise to come back to these subtopic meaning expansions (by referring back to here) with a series of posts in the months to come, to tie up with the notions of.: the economy as a network, the network as a computer, what exactly it processes and outputs, the economy (like the universe or life) being an endogenously driven, positive-feedback-loop, self-amplifying, non-equilibrium, entropic, combinatorial-explosion system, wealth as economy complexity growth in relation to GDP size and the intimate dollars-joules connection in energy intensity, the physical and economic limits of growth, self-reinforcing predator-prey models, knowledge as synonymous with skill, and so forth, economic cycles upon the DeLong curve ... to name a few. Readers' questions and comments will of course help a lot with the subtopic prioritization, and will boost (incl. my own) understanding. Thank you in advance!
NOTE: I currently have the pleasure and honor to be part of the Tau Team, but this post contains ONLY my personal views.
Tauchain is a profound project that has taken years of deep research and development. Some of the smartest people I've known on this platform highly recommended it, which is why it has been making me do a few things I've not been doing for a while now:-
So one of the first things I noticed in #idni's IRC channel is a cool-looking username "naturalog". While I'm pretty sure it just means natural logarithm, could it be natural OG instead? The natural, original gangsta? In casual parlance of course. Turns out, that's Ohad Asor's (the founder) nickname. What a smooth operator. That username is like wordplay: a mathematician with street cred. Too bad that Steem username is already taken.
The Natural OG
Reading through the logs I soon realised that I can trust his words. Why? Other than his experience, I think it's because I'm somewhat the same in nature. Not that I'm a genius with great knowledge and expertise like he is, but I do appreciate stuff like language, semantics, logic, and such. They're the kind of subjects which I think help shape clear communication. It shows throughout his replies in the logs.
Many might not know it, but everything I say or type usually takes quite some time because I do try to be careful with words. Sometimes I even spend minutes to decide whether or not to say "could" instead of "would", amongst all of the other nuances in communication. Because, what else do we really have between us other than words? This is why writing is almost sacred to me.
The ability to question oneself and question one's choice of words is part of our learning process. Why do we really say what we say, or think what we think? Can't speak for everyone, but I expect introspective, lifelong learners to be more trustworthy when it comes to dealing with complex subjects. Plus, the obvious elements of the project seem to speak more about substance than hype:-
So all things considered, the project is unlikely to be a scam. If you search through the ~28 megabytes worth of IRC chatlogs, you will even find three ultra-rare instances of Ohad Asor aka naturalog mentioning "before it was cool". Look at the image below. Knowing his history and experience, I think it's safe to conclude that this dude is a certified OG. The natural OG. Total man crush! I might even ask him for some dating tips once he's done with the bulk of the development.
If those points above are not enough street cred to establish an OG status, check out this section of the chat log below:-
10:39 < Liaomiao> you must know a lot about blockchain architecture if you came up with some of the ideas behind graphene
Just good to know that he might have had some influence in the creation of Graphene, Dan Larimer's creation for Bitshares that subsequently shaped the inner workings of both Steem and EOS. Impressive indeed. It's a good sign for Tauchain / Idni Agoras. In contrast, I was still riding rollercoasters all day, high on sweet carbonated drinks in Disneyland, at the same age when Ohad Asor was already grinding like an OG, writing production-level software.
So it would seem like my investigation into the heart of Tauchain has quickly turned me into a huge admirer and fan of the project. It has never happened to me before to this extent, but I certainly don't mind given the project's scope and the main developer's character. It's at least a much better story than elevating irrational loonies and sensationalists with no appreciation of well-founded knowledge, which unfortunately is all too common in society these days. If anything would make the world a better place, it would be intellectual curiosity, not intellectual dishonesty.
For now, I'm quite happy to have found the natural OG who has been working quietly behind the scenes. So far it seems to me that it could very well be the next big thing other than Steem communities and SMTs. I'll be posting more about the project in time. As always, thanks for reading.
Website - http://www.idni.org
Github - https://github.com/IDNI/tau
Telegram - https://t.me/tauchain
Reddit (with FAQ) - https://www.reddit.com/r/tauchain/
Coinmarketcap entry - https://coinmarketcap.com/currencies/agoras-tokens/
Here's an hour-long interview with Ohad Asor that you might want to check out.
Not to be taken as financial advice.
What is Tau-Chain?
The purpose of this article is to demonstrate how Tau-Chain (Tau) can be implemented in practice. I have already presented Tau and its four-step roadmap in my previous article, but I think that further explanation about Tau is required to better understand its applications.
Tau is basically a discussion platform (like any other social network you know) with two significant innovations:
*Just to clarify, knowledge can be facts, lines of code, qualitative and quantitative data, etc.
How can Tau be implemented in practice?
Tau will be a free, open-source platform to advance and execute knowledge. Think about it as a one-stop shop that provides free consulting services, in all areas, to large numbers of people. For example, if you would like to start an enterprise but you lack the relevant business skills, Tau can answer your questions and even perform a market research or analysis (if initial data is provided) to evaluate your business opportunity.
In order to better understand how Tau can improve our society, I am providing below a detailed example showing how I see the vision implemented in practice.
Suppose Alice and Olivia are Ph.D. students in computer science who face a problem with their research. They use Tau to discuss the details of their data, findings and hypothesis. Tau will automatically translate this information into its metalanguage, adding Alice and Olivia’s data to the knowledge archive. Tau is basically the third member in the conversation, and can guide Alice and Olivia to advance their research by interpreting the data and suggesting improvements to their findings. If the students would like to implement the research and develop computer software, Tau will assist them with writing the code in the most efficient way. Using Tau, Alice and Olivia can overcome the limits of their knowledge to quickly complete and implement their research.
But how can people profit from sharing their knowledge?
There is another way for Tau to deepen its knowledge and develop better intelligence. Tau can gain knowledge from the Knowledge Marketplace (Agoras), a blockchain based smart contracts platform where individuals are able to generate income by sharing knowledge and information. With every transaction and exchange of knowledge, Tau will be exposed to the data to become more “educated” and accurate, resulting in a better knowledge deduction capability.
I know that smart contract platforms already exist, but they all lack very important capabilities – the ability to auto-verify the data, run quality assurance tests and suggest improvements to eliminate potential disagreements between the parties to a contract. Tau’s artificial intelligence will support the transaction between the two parties, and will make sure that there will be no fraudulent activities, inaccurate information or low-quality services. This will be the only platform where a computer that acts human (without human deficiencies) will supervise and support such transactions.
The following example demonstrates a possible application of Agoras:
Consider Bob, a software developer who has recently signed a smart contract with David to design a new software program. When Bob shares his code in the Knowledge Marketplace (Agoras), Tau verifies the relevance of the code and will even suggest improvements to advance it, eliminating potential disagreements about quality and fraud. Upon Tau’s approval, Bob will receive his reward, as agreed in the contract. Tau will use the final code as additional knowledge to strengthen the platform’s intelligence.
As described above, the compensation mechanism will incentivize users to contribute their knowledge to advance ideas of others. Thus, we create a society in which individuals’ knowledge and expertise become public domain and can be better utilized to promote social health, welfare and resources.
I provided only a few examples of how Tau and Agoras can be implemented in practice. My examples were computer-science related, but you should realize that Tau-Chain can advance ideas and produce knowledge for every collaborative human endeavor across all fields, including science, business and government. Think about a situation where you have a problem and need some help – this is where Tau can assist you with solving your problem and even execute the solution if required and applicable.
Just to clarify, Agoras is also the name of the tokens that users will use in the Knowledge Marketplace (the smart contract platform). Agoras tokens holders will also benefit from developments that will be built as part of Tau’s ecosystem, including a Computational Resource Market (“Zennet”), Distributed Search Engine and a Derivatives Trading Platform.
To end this article, I would like to quote the last paragraph in my previous article, as it is still relevant:
"I foresee huge potential for this project, and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram,Facebook, LinkedIn and Reddit, or read Ohad’s blog for further information."
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building. By Isar Flis. Posted on Steemit. January 8, 2018.
The Crypto-Currency Market
With the fluctuation in the price of Bitcoin, there are more voices claiming that the crypto-currency market is a bubble, warning investors about the risks of investing and possibly losing their funds. One of the claims is that virtual coins have no real value. However, by carefully studying this market, the potential investor will discover that some projects include technology, innovation, true vision and a strong community, thus creating financial value like that of other successful startup companies.
Today, it is difficult to predict which coin will secure a place among the top currencies on Coinmarketcap. There are a large number of projects and buzzwords, used in fancy websites and white papers, which make it challenging to extract the relevant information and make educated investments. In addition, there are projects that work “under-the-radar” and are very technical to comprehend, discouraging potential investors.
I would like to discuss one of these technical projects that works under-the-radar, without a fancy website or extensive marketing campaign but with brilliant innovation and fast-growing community. The name of the project is Tau-Chain (Agoras tokens on Coinmarketcap), developed by Ohad Asor.
Tau is a collaboratively self-amending program designed to scale human collaboration and knowledge building. To further clarify the explanation, think about a platform that can develop any computer program the user desires, based solely on discussions with his or her team about the program’s specification and development. The use of such a platform can change not only the crypto-ecosystem, but all branches of science.
Tau’s vision has a long way to go. However, Ohad has developed a detailed roadmap to achieve his vision. Tau will be developed in four stages, as follows:
Tau Meta Language (TML): TML is the base language that will enable all users to interact with each other, no matter what computer language they speak. Think about it as the technology behind Google Translate, but for computer languages, or as Ohad calls it: “the Internet of Languages”.
Alpha: Alpha is a social platform that promotes discussions between infinite numbers of users. Today, an effective conversation cannot be held when too many people take part in the decision-making process (that is why democracy was created). However, Alpha will be able to scale these discussions and detect logical points of consensus between users, thus enabling better knowledge sharing and construction.
Beta: Beta will advance Alpha to enable the development of computer programs, based on user discussions in the platform. To make this more tangible, think of Wix.com where anyone can easily develop a website, even without the technical expertise. With Beta, the code for any computer program will be developed based on specific instructions that the user provides.
Tau: Tau is where blockchain is introduced, thereby creating a decentralized platform (the Tau-Chain), compared to the centralized Beta. Tau will be self-amending and will be able to deduce knowledge based on the information submitted by its users. In its final stage, Tau will amplify the creation of knowledge for its users, advancing current human-knowledge, research and development in different disciplines, such as physics, mathematics and computer science.
The reasoning behind designing the roadmap in four stages is that each stage can support the advancement of the next one. This year we expect the development of the first two stages, TML and Alpha, to be completed. Using Alpha’s discussion platform, an infinite number of developers can join the project to build Beta, expediting its go-to-market date. After Beta is developed, it will only be a matter of time until Tau is completed as all technical challenges will be resolved using Beta.
The legal entity behind this operation is called “IDNI” (Intelligent Decentralized Networks Initiatives), which is composed of Tau’s development team and support units.
So, what is Agoras?
While Tau creates a true knowledge society, Agoras is about creating true monetary knowledge, by powering the ecosystem built via Tau. Agoras will be used to execute the applications of Tau, Zennet (Computational Resource Market), derivatives trading platform and further developments to be built as part of Tau’s ecosystem.
There are 42 million agoras in total. Most of the tokens were sold by Ohad during 2017. The sold tokens, named IDNI Agoras, represent the future Agoras coins holders will receive upon the completion of Tau (fourth stage), where the blockchain is introduced.
The current price of one IDNI Agoras is around $2 (traded on Bittrex), and it has shown steady growth throughout the development of the project. The initial code that was released as a proof of concept strengthened the confidence of investors in Tau, compared to competing projects.
I foresee huge potential for this project, and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram, Facebook and Reddit, or read Ohad’s blog for further information.
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
The Power of Tau - Scaling the Creation of Knowledge. By Trafalgar. Posted on Steemit. December 31, 2017.
Ohad Asor, creator of Tau Chain/Agoras, has recently published the long awaited blog post detailing his vision for what very likely is the most ambitious project in the crypto space: Tau.
Tau will accelerate human endeavors by overcoming long ingrained limitations in our collaborative processes; limitations which we rarely even question.
The Problem of Social Governance
Take social governance, for example. As individuals, we have opinions over a wide variety of social issues. Perhaps you feel that the speed limit on certain roads is too high, or that programming should be a compulsory subject at public schools, or that everyone would benefit if cryptocurrencies were officially recognized and endorsed by the state.
However, you have no idea how to get these concerns across to the general public. I mean, you could try writing a letter to your local representative or signing a petition, but ultimately that's unlikely to gain much traction. Meanwhile, the very same issues that seem to have divided the nation over the past decade remain at the forefront of our political debate. Immigration, climate change, abortion, gun control etc. are all important issues of course, but very little progress has been made considering the amount of time, resources and attention that has been devoted to them.
So the problem with traditional forms of social governance, such as democratic voting, is apparent: on the one hand it has difficulty identifying and addressing the wide range of opinions different people hold, on the other hand, even with respect to the small number of issues that do end up bubbling up to the surface, it isn't particularly efficient at detecting consensus.
The central cause of this problem is that current modes of discussion are not scalable. There are inherent limitations in the way we're able to communicate our views to each other; namely, the human ability to comprehend and organize information is the main bottleneck. We cannot possibly follow multiple conversations at once, or recall everyone's propositions once there are more than a handful of people in the mix. This is why most collaborative decision-making bodies in practice are generally quite small in number: the President's cabinet, Supreme Court Justices, the boardroom directors of a Fortune 100 company etc.; you just can't have a productive discussion with 50 people. Our entire civilization is structured around this very limitation: discussions don't scale.
Scaling Collaborative Discussions Under Tau
Imagine if we can overcome this limitation; what will it mean for social governance? By using a self-defining, decidable logic, the Tau network is easily able to keep track of every user's propositions and detect consensus automatically. Note that making a proposition is exactly the same as voting for that very same proposition: when you're proposing 'dogs should always be on a leash in public unless in a park', you're in effect putting in a vote for such a proposition. This way, countless issues, regardless of how technical or niche, can be assessed through the network concurrently, and social consensus can be detected on the fly. The Tau network can scale social governance, overcoming one of the greatest limitations in the human communication of ideas by delegating the task of logically making sense of everybody's propositions to the computer. A simple use case of this will be the rules of the Tau network itself: through a self-defining logic, Tau is able to detect consensus among its users from block to block, altering its own rules to conform to the choices of the user base.
The benefits of scaling discussions are not limited to just a more efficient form of social governance. Logic isn't merely about detecting surface-level consensus; the network can easily form further deductions from everyone's propositions. If one states 'all men are mortal' and 'Socrates is a man', one can deduce that 'Socrates is mortal.' But deductions can be very deep and non-trivial. Imagine if we had a group of 1000 mathematicians all inputting their mathematical insight as propositions. Tau can rapidly detect who agrees with whom on what, and deduce every logical consequence of their combined wisdom; in effect arriving at new truths and insights. In other words, Tau greatly accelerates the production of new knowledge. This will, of course, also work if you have physicists, doctors, engineers, computer scientists, indeed experts in every field working together on the platform. By scaling collaborative discussions in a logical network, Tau is able to scale the creation of knowledge.
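A minimal sketch of the kind of mechanical deduction described above (my illustration in ordinary Python; Tau's actual machinery is a decidable formal logic, not this): naive forward chaining over everyone's pooled propositions until no new facts appear.

```python
# Combined propositions from many users: ground facts plus simple Horn rules.
facts = {("man", "socrates")}
rules = [
    # "All men are mortal": if X is a man, then X is mortal.
    (("man",), "mortal"),
]

# Naive forward chaining: apply every rule until a fixed point is reached.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate in premises and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("mortal", "socrates") in facts)  # True - a deduced, never-stated truth
```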
When Tau comes into effect, any company, government, and indeed any organization not using this new network will be rendered obsolete. Tau aims to become an indispensable technology.
And this is only the alpha of Tau.
I will talk about the beta in a future post. The beta will revolve around not just the scaling of discussions and consensus, but the automation and execution of code based on the results of those discussions. For more information on code synthesis and more, please read Ohad's blog. Also, do check out my introduction to Tau here if you missed it.
You can invest in Tau through buying Agoras tokens on Bittrex.
I am not affiliated or paid by the project. These represent my own subjective views. Tau/Agoras is the only other crypto project apart from Steem in which I see an extraordinary future, and I am merely sharing that with fellow Steemians here.
Ohad Asor's New Tau Blog
IRC Chat: Where you may ask Ohad himself technical questions
Tau Chinese QQ Group: 203884141
Ohad Asor the lead developer and founder of Tauchain releases first new blog post in over a year. By Dana Edwards. Posted on Steemit. December 30, 2017.
The new blog post, titled "The New Tau", is available for everyone to read. It speaks on the critical topic of collaborative decision making. This is a topic which I myself have been interested in, and Ohad's solution is different from the usual ones. In my own thinking I was considering an approach based on collaborative filtering, but I realized this would never scale. I then considered one based on using IA (intelligence amplification) by way of personal preference agents; this does scale, but requires that the agents have a lot of data to truly know each user and their preferences. The solution Ohad Asor comes up with attempts to solve many of the same problems, but his solution scales without seeming to require collaborative filtering or any kind of voting as we traditionally think about it.
There are some obvious problems with voting, which many will recognize from Steem, which also relies on collaborative filtering.
Now let's see what Ohad Asor has to say:
In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But on large communities, like a country, we can only think of everyone having a right to vote to some limited number of proposals. We reach those few proposals using hierarchical (rather decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost empty from the vast information gathered in the process. Yet, we don't even dare to imagine an equal right to propose just like an equal right to vote, for everyone, in a way that can actually work. Indeed how can that work, how can a voter go over equally-weighted one million proposals every day?
This in my opinion is very true. In reality we have discussions, and at best we seek to broadcast or share our intentions. Intent casting was actually the basis behind how I thought to solve this problem of social choice, but intent casting even with my best ideas would not have been good enough, because again the typical voter would be uninformed. Unless the typical voter can be continuously educated, which in a complex world may be unrealistic, or the network itself can somehow keep the voter up to date, intent casting barely works. It works well for shopping, where a shopper knows what they want, but not so well when a person doesn't actually know what they want and merely knows what they value. Values are the basis for morality, for ethical systems, and this is the area where Ohad's solution really shines.
Tauchain has the potential to scale not only discussions but also morality, because it will have the built in logic to make sure people can be moral without constant contradiction. The truth is, without this aid, the human being cannot in my opinion actually be moral in decision making, due to the inability to avoid all sorts of contradictions.
All known methods of discussions so far suffer from very poor scaling. Twice more participants is rarely twice the information gain, and when the group is too big (even few dozens), twice more participants may even reduce the overall gain into half and below, not just to not improve it times two.
This is the conclusion that Ohad and I reached separately, and it still holds true. We require the aid of machines in order to scale collaborative decision making. This in my opinion is one of the major philosophical difference makers between the intended design and function of Tauchain and every other crypto platform in development. It is also going to be the difference maker for the community which Tauchain as a technology will serve, because it will enable machines and humans to aid each other for mutual benefit, or symbiosis.
The blog post by Ohad Asor brings forward a very important discussion which has many different angles to it. The angle I focused on with regard to the social choice dilemma is the problem of how we scale morality. In my opinion, if we can scale morality in a decentralized, open source, truly significant manner, then nothing stands in the way of absolute legitimacy, mainstream adoption, and with it a very high yet fairly priced token. The utility value of scaling morality is in my opinion higher than just about anything else we can accomplish with crypto tech and AI. If the morality is better, then the design of future platforms will be greatly improved in terms of how the users are treated, and this in itself could, at least in my opinion, help resolve the debate about whether AI can remain beneficial over a long period of time. I think if we can scale morality in a decentralized way, it will make it easier to design and spread beneficial AI. Crypto-effective altruism could become a new thing if we can solve the deeper, more philosophical problems.
The liquid paradigm, feedback loops, the virtuous cycle and Tauchain. By Dana Edwards. Posted on Steemit. December 31, 2017.
What do I mean by the concept of a "liquid platform"? This is merely a re-articulation of the concept of self amendment and self definition. In other words, it is very much like an autopoietic design. Bruce Lee once said to "be like water", because water can adapt to any environment it is placed in by taking the form of its container.
So by liquid paradigm I mean that the core feature of true next generation platform design is going to be focused on maximum adaptability.
Feedback loops and the virtuous cycle
How can we have a platform which promotes continuous self improvement? If you have a platform with no hard coded "self", then even the design of the platform is under constant negotiation and creation. This is key because it means Tauchain will be able to adapt more quickly than all competing platforms. Quicker than Tezos, because Tezos merely provides self amendment but lacks the virtuous cycle, the meta language, and so on.
The Tau Meta Language allows for self definition at the level of languages. This means even the communication mechanism between humans and machines can be updated continuously. This continuous updating is the key design breakthrough of Tauchain, because it means Tauchain can always remain state of the art in any area. Think of a platform like Wikipedia, where anyone can update any part of it in real time so that every part of it is always state of the art.
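A crude way to picture self definition at the level of languages: if the language's rules are stored as ordinary data, then one of the things an utterance can do is change the rules themselves. The sketch below is purely my own toy analogy, nothing like TML's actual design:

```python
# Toy 'self-amending' interpreter: the rule set is plain data, and the
# language includes a meta-command that rewrites its own rules at runtime.
rules = {"hi": "hello", "bye": "goodbye"}

def interpret(line, rules):
    if line.startswith("define "):           # meta-level: change the language
        key, _, value = line[len("define "):].partition(" = ")
        rules[key.strip()] = value.strip()
        return f"(rule updated: {key.strip()} -> {value.strip()})"
    return rules.get(line, line)             # object-level: use the language

for line in ["hi", "define hi = greetings", "hi"]:
    print(interpret(line, rules))
# hello / (rule updated: hi -> greetings) / greetings
```

The same session that uses the language can also redefine it, which is the essence of the continuous updating described above.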
Starting at languages, a feedback loop can be created between humans and intelligent machines. Humans must make decisions on how to design Tau. These design decisions benefit from the virtuous cycle, because the feedback loop between humans and machines allows the decision making ability itself to be upgraded. This could even allow humans to transcend traditional human capabilities by relying on intelligent machines to assist in design: better designs mean better decision making, which means better future designs, which leads to better decision making again. This is the "virtuous cycle", a feedback loop running from humans to machines to humans and back. The humans improve the quality of the machines by feeding in knowledge and new algorithms, just enough for the machines to become intelligent enough to help the humans help the machines even more efficiently in the next iteration of Tauchain, over and over again.
Humans and machines will seek more good and less bad in the formal specification of Tau itself. Good and bad designs will be defined collaboratively by the human participants by way of intelligent discussion. As discussion scales, bigger crowds mean more human minds involved, which means improved design, which leads to a better and perhaps wiser Tau, which leads to wiser, even more intelligent discussions, which lead to an improved formal specification, and to a better Tau. That is one loop. There is also a loop between improving Tau and improving society, each feeding the other.
Something Revolutionary In the Crypto Space.
The overwhelming majority of new crypto projects out there fall into three main categories.
Now the trillion dollar question is this: is just having a currency, or shoving a Turing complete programming language into the blockchain to allow for smart contracts, truly the best use of this decentralized innovation? Ohad Asor, creator and lead developer of Tau, does not think so.
What Is Tau?
Before I start I have to make a confession: I don't truly understand Tau. But I feel that I don't understand it slightly less than people who don't know about it at all, so I'll have a go at explaining it.
Tau is a platform that is designed to scale human collaboration and knowledge building.
Almost every significant piece of technology to date (that isn't about accelerating physical labor) has been primarily focused on disseminating information or data. The wheel, roads, the telephone, and the internet are all indispensable achievements that have served to get information from A to B.
But the real value isn't in the data itself; it comes from organizing the information within that data into useful knowledge. While the mere distribution of information is an important step in scaling human progress, it's only part of the picture. The next step has typically been up to us, the human actors, to use our little brains to distill that information manually until we produce knowledge.
Tau is the first piece of serious technology that aims to automate not only the collection of information but also the production of knowledge (unless you count Netflix's 'AI' recommending 'The Human Centipede' after your toddler has just watched 'A Bug's Life' as successful knowledge discovery by a machine). Tau is about the industrialization of knowledge creation, taking some of the burden of logical reasoning from us humans and giving it to the machine.
What Can Tau Do?
Ohad has spent years researching and developing Tau. The design is centered around creating a self defining, decidable logic that is expressible in PSPACE (which is mathematically shown to be the most expressive any self defining and decidable language can be), and that will act as a metalanguage for all programming languages defined under Tau.
A trivial example of what this directly enables is secure smart contracts. Smart contracts operating under Tau can never give rise to something like the DAO hack: a decidable programming language means one can anticipate the entire spectrum of possible consequences of the code before running it, allowing us to avoid anything unintended. But reliable and secure smart contracts are only a tiny fraction of what the platform can truly offer.
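To see what decidability buys you, consider a toy pre-deployment check (my own illustration; Tau's actual analysis works at the language level, not by brute enumeration): because every behavior of the contract can be examined in advance, a bad state is found before any money is at risk.

```python
# Toy pre-deployment verification: exhaustively enumerate every possible
# action sequence of a tiny 'contract' and check an invariant (balance
# never goes negative) before the contract is ever run for real.
from itertools import product

def step(balance, action):
    return balance + 1 if action == "deposit" else balance - 1

def verify(start=0, depth=4):
    for actions in product(("deposit", "withdraw"), repeat=depth):
        balance = start
        for action in actions:
            balance = step(balance, action)
            if balance < 0:
                return False, actions  # counterexample: unsafe sequence
    return True, None

print(verify())  # False plus the first unsafe sequence found
```

A Turing complete contract language can never offer this guarantee in general; a decidable one can.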
The power of Tau's design will allow it to boast some truly wondrous features.
Ohad has yet to fully explain how this will be achieved, but by far the most difficult part is creating the initial decidable, self defining logic system that serves as a metalanguage. Many had their doubts but yesterday Ohad announced that the first and most difficult step towards this end has been achieved. The code he has written is a working version of the Tau Meta Language which correctly computed a transitive closure graph. This is a proof of concept of the great things to come!
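For readers wondering what a transitive closure is: given a relation of edges, its transitive closure adds an edge a→c whenever a→b and b→c already exist, repeated until stable. In TML this is expressed logically; a plain Python analogue (my own, just to show the computation) looks like this:

```python
# Transitive closure by fixpoint iteration: join the relation with itself
# until no new pair appears. If 1->2 and 2->3, then 1->3 is derived, etc.
def transitive_closure(edges):
    closure = set(edges)
    while True:
        derived = {(a, d) for (a, b) in closure
                          for (c, d) in closure if b == c}
        if derived <= closure:   # fixpoint reached
            return closure
        closure |= derived

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

It is a canonical first test for a logic language because it requires genuine recursion, which non-recursive query languages cannot express.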
Now that the initial code is released, Ohad is working on a set of explanations about Tau which will outline its features, and how it'll be able to achieve them, in more detail. Tau is notoriously difficult to explain, but it's definitely worth the effort to understand. I'll keep you updated when his explanations are released.
Who Is The Lead Developer Ohad Asor?
Ohad Asor is a programmer, computer scientist, mathematician and logician from Israel. He attended university at the age of 13 and has extensive experience (30+ years) in programming and mathematics.
Most people know me as the clown on here who just writes jokes along the lines of taking his mom to the prom after his cousin rejected him or some shit, but I sat my university entrance exams at 16 and scored in the top 0.5% of Australian Tertiary Admissions Rank and took a prestigious course at a well known university. I only bring this up to show that I've had no shortage of dealings with what ordinarily would be considered to be extremely intelligent people, but Ohad is on a completely different level.
Ohad Asor is, quite frankly, the most intelligent and knowledgeable person with whom I've ever interacted. There are many geniuses and child prodigies out there, but Ohad has spent virtually every waking moment studying up until this point in his life, and he likely has an IQ of over 5 standard deviations above the mean to begin with. I have spoken to him and followed his project over the past 8 months, and my assessment and admiration of his abilities have only increased over this time period.
Here is a short video of him explaining the old design of Tau and some of its features. The information is dated as the new design is far superior, but these features remain.
English is Ohad's second language - His native language is C.
How Do I Invest In Tau?
Tau itself has no tokens, but Ohad is also building Agoras, the first automated marketplace over the Tau collaborative platform. Agoras tokens are currently traded on Bittrex. It has one of the fairest distributions in the cryptosphere; Ohad is reserving only 3% of the tokens for himself. None of that 20% for the founders, 10% for the developers, 20% for the foundation, 15% for the founders' penis enlargement fund bullshit.
Agoras has made considerable gains over the last few weeks, but its total market cap is still under 100 million at the time of writing, which, to me, represents an incredible opportunity for something potentially revolutionary. If we woke up tomorrow without Bitcoin, things would more or less continue as they did, but if we woke up tomorrow without electricity, the world would be an entirely different place. Tau aims to be the latter kind of technology: truly indispensable, a status that no crypto project has yet reached.
This article isn't to be taken as investment advice any more than it is to be construed as advice on how to get out of the friend zone without resorting to chloroform. I'm not affiliated nor paid by the Tau team in any way. I have not made a single crypto recommendation in my 8 months of being here until now. I just wanted to share something that I think has immense potential to be truly revolutionary, and it also happens to be the only other crypto investment I hold other than Steem.
Feel free to ask some questions after and I'll try my best to answer them.
Special thanks to @dana-edwards and the Steemit platform for allowing me to discover this project
Tau QQ Group Number: 203884141
IRC for technical questions only, Ohad will generally reply within a day
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we must list the five roles of knowledge representation, which reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it is substituting for, or acting in place of, something. So if knowledge representation is a surrogate then it must be representing some original. There is of course an issue: a completely accurate representation of an object can only come from the object itself. All other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this into context: if you make a copy of an audio recording, every copy you make is going to contain slightly more artifacts. The same happens with information sent through a wire, where, if not properly amplified, artifacts eventually creep in from copying a transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world.
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
Because every representation is an approximation, selecting a representation unavoidably means making a set of ontological commitments. An ontological commitment is a framework for how we will view the world, such as viewing the world through logic. If we choose to view the world through logic, through rule based systems, then all of our knowledge about the world sits within that framework. We choose our representation technology and thereby commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but theories of intelligent reasoning are recognized to be derived from five fields: mathematical logic of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive reasoning approaches. Deductive reasoning, according to some, is the basis of intelligent reasoning itself. To explore an example of reasoning we can take the Socrates example:
Statement A: True? Y/N?
"All men are mortal"
Statement B: True? Y/N?
"Socrates is a man"
Statement C: True? Y/N?
"Socrates is mortal"
If A is true, and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be resolved using symbol manipulation and knowledge representation. The symbol at play in this example would be implication.
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency, including all forms of computation, whether mechanical or natural in the sense of the computation done by a biological entity, then we may think of knowledge representation as a medium for that computational efficiency. We think of money as a medium of exchange; if we think of the human brain as a type of computer which does human computation, then we may likewise think of knowledge representation as the medium in which that computation is carried out.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course knowledge representation is also part of how we communicate with each other and with machines. Human beings use natural language to convey knowledge, and this natural language includes vocabularies of words with agreed upon meanings. These vocabularies may be found in various dictionaries, including Urban Dictionary, and we rely on these dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations, as described in the examples above. In simpler terms, a knowledge base represents facts about the world in the form of structured and/or unstructured information which can be utilized by a computer system. An artificial intelligence can utilize a knowledge base to solve problems; this particular kind of artificial intelligence is typically called an expert system. In its simplest form, the artificial intelligence reasons over the knowledge base through an inference engine, and through this it can do the sort of computations which are of great utility to problem solvers.
When we think of Wikipedia we are thinking of an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia we can quickly see that one of them is that it's centralized. We also have the problem that the knowledge stored on Wikipedia is not stored in a way which machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, in its current form it cannot act effectively as a decentralized knowledge base. DBpedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDoS or similar attack can censor it.
Decentralized knowledge is important for the world, and a decentralized knowledge base is critical for the development of a decentralized AI. If we are speaking about an expert system, then the knowledge base has to be as large as possible, which means we may need to incentivize human beings to contribute and share their knowledge with this decentralized knowledge base. We also have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate form for it to enter the knowledge base and be used by potential AI.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI, then we must have a way for human beings to convey knowledge to the machines in a form which both can understand. A knowledge representation language makes it possible for a human being to contribute to a knowledge base, and this ultimately allows machines to use their inference engine capabilities to reason from it. With a decentralized knowledge base the barrier to entry is low or non-existent: any human being, perhaps any living being, or even robots can contribute to this shared resource, while both humans and machines gain utility from it. An artificial intelligence which functions like an expert system can use an extremely large knowledge base to solve complex problems, and a decentralized knowledge base, combined with open and decentralized access to this artificial intelligence, can benefit humanity and life on earth in general if used appropriately.
Discussion of example projects.
One of the well known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau there is a simple knowledge representation language under development which resembles simplified controlled English. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
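Since the controlled-English language is still under development, I can only gesture at the idea with a toy of my own (the pattern list and triple format here are invented for illustration and are not Tau's syntax): restricted English sentences are mapped mechanically onto machine-usable triples.

```python
# Toy controlled-English reader: a small fixed set of sentence patterns is
# translated into subject-predicate-object triples for a knowledge base.
import re

PATTERNS = [
    (re.compile(r"^(\w+) is a (\w+)$"),     "isa"),
    (re.compile(r"^every (\w+) is (\w+)$"), "implies"),
]

def to_triple(sentence):
    s = sentence.lower().strip(". ")
    for pattern, predicate in PATTERNS:
        match = pattern.match(s)
        if match:
            return (match.group(1), predicate, match.group(2))
    raise ValueError(f"not in the controlled grammar: {sentence!r}")

kb = [to_triple(s) for s in ["Socrates is a man.", "Every man is mortal."]]
print(kb)  # [('socrates', 'isa', 'man'), ('man', 'implies', 'mortal')]
```

Because the grammar is restricted, every sentence has exactly one machine reading, which is what makes the knowledge base usable by an inference engine.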
Unfortunately, upon reading the Lunyr whitepaper and following their public materials, I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle the concurrency which would probably be necessary for doing AI. I also don't see how Ethereum would be able to do it securely with the current design, although I remain optimistic about Casper. The lack of code on Github and the lack of references to their research do not allow me to completely analyze their approach. Because they are talking about a decentralized knowledge base, their approach will require more than the magic of the market combined with pretty marketing. They will require a knowledge representation language, and they will require a true decentralized knowledge base built into IPFS. This decentralized knowledge base will have to scale with IPFS, and through this maybe they can achieve something, but without a clear plan of action I have to say that today I'm not confident in their approach or in Ethereum's ability to handle it efficiently.
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems).
What is the Knowledge Acquisition Bottleneck problem? By Dana Edwards. Posted on Steemit. March 29, 2017.
Now that we know what knowledge representation is, what knowledge bases are, and how the knowledge base is relied upon in a knowledge based system of artificial intelligence (KR + KB + inference engine), we can move on to discussing one of the open problems.
The Knowledge Acquisition Bottleneck problem.
Many people already know about the familiar Byzantine generals problem in computer science. We also know how the Nakamoto consensus in Bitcoin provided a novel example of a solution. The knowledge acquisition bottleneck problem is one of the problems plaguing AI, and it is what limits, or seems to limit, the strength of artificial intelligence. One of the main problems in artificial intelligence is that knowledge formation typically requires domain experts who can contribute to the knowledge base. The Cyc project attempted to solve the problem of scaling up the knowledge base but is suffering from this bottleneck, as summarized in [Wagner, 2006].
That paper, "Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management", also offers a solution called collaborative conversational knowledge management. This is the same solution which Tauchain will attempt to utilize in a more sophisticated way: Tauchain will allow for collaborative theory formation.
We see this concept in how Wikipedia works to manage knowledge. Wikipedia is certainly not without flaws, but it does manage knowledge, and the paper's conclusion makes a similar point.
Tauchain by design will be collaborative and allow for collaborative theory formation. This would mean anyone will be able to contribute to the knowledge base with relative ease. In addition, it will have knowledge management properties built in, and if the knowledge acquisition bottleneck problem can be solved then it will have a huge impact. For one, the problems which prevent knowledge based AI from scaling could be resolved if this bottleneck is removed.
DARPA has attempted to solve the knowledge acquisition bottleneck problem using high performance knowledge bases (HPKBs) and Rapid Knowledge Formation, yet failed. Cyc has attempted to solve the same problem and has failed. The semantic web has yet to take off because this problem stands in the way. Will Tauchain succeed where these other attempts have failed? I think it is a strong possibility, which is why I'm excited about the implications should Tauchain successfully be built.
Lenat, D. B., Prakash, M., & Shepherd, M. (1985). CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI magazine, 6(4), 65.
Wagner, C. (2006). Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management. Information Resources Management Journal, 19(1), 70-83.
Web 1. https://www.quora.com/What-is-knowledge-acquisition-bottleneck
Web 2. http://www.igi-global.com/dictionary/knowledge-acquisition-bottleneck/49991
Web 3: http://www.tauchain.org
Web 4: https://steemit.com/tauchain/@dana-edwards/how-to-become-a-stakeholder-in-agoras-and-indirectly-tauchain
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: What is the Knowledge Acquisition Bottleneck problem?
Virtualization of contracts with TauChain and Agoras. Video from the Educación Financiera Bitcoin Criptomonedas channel on YouTube. May 15, 2016.
Subject, verb, and predicate. Taking the language of ontologies to unify the languages of:
* Computer Programs.
* Network Protocols.
Ontologies are expressed in the RDF (Resource Description Framework) language family. IDNI proposes a software client that stores an ontology of local rules. Artificial intelligence, ontology, language, "human readable" code, decentralized and equitable democracy.
Fuente / Source: Educación Financiera Bitcoin y Criptomonedas channel on YouTube.
What is Tau?
Tau is a decentralized network that can amend itself based on the decisions of its users. Tau will provide a platform for users to reach agreements and decisions at the largest scale seen so far.
A social platform to reach agreements: Tau is a blockchain based platform that will allow for on-the-fly logical consensus detection, which enables it to scale past some of the largest bottlenecks to human advancement, including social governance and knowledge creation. By using a self defining and decidable logical framework, Tau is the first platform able to gather data voluntarily submitted by its users and logically deduce valuable knowledge over a network secured with blockchain technology. What this means is that, in effect, we can scale collaborative endeavors between thousands of users to greatly accelerate the production of knowledge.
The only dynamic decentralized social network: when Tau's community faces a decision to change Tau or its blockchain protocol, they will just need to express their opinions and perspectives, like we do today in social networks, and Tau will amend itself based on the users' agreement. Considering the perspectives of the entire community (unlike voting) is the only way to reach a decentralized decision that benefits all users. Tau's ability to scale discussions is the only decentralized solution for creating a truly dynamic protocol.
What is Agoras?
Agoras is a cryptocurrency and an integral platform built over the Tau network, and will serve as its primary economy. While Tau creates a true knowledge society, Agoras is about creating true monetary knowledge by powering the ecosystem built via Tau. Agoras will be used to execute the applications of Tau, Zennet (a computational resource market), a derivatives trading platform, and further developments to be built as part of Tau's ecosystem. Through the power of Tau, we envision the possibility of fully autonomous businesses operating over the Agoras virtual economy.
Special fields: Language, Knowledge, Economy, Collaboration, Discussion, Choice, Blockchain, Cryptocurrency, Logic, Dynamic Protocol, Decentralized Network, Internet of Languages
Logo by CapitanArt
Enlaces / Links
Archivos / Archives
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.