Introducing Value Area Networks - Matching participants by shared values. By Dana Edwards. Posted on Steemit. December 14, 2018.
This concept is possible only because of the design of Tauchain presented by Ohad. In his design for Tauchain he highlights the fact that any member of the social network will be allowed to input their worldview. I have previously discussed how moral values could be an important part of Tauchain in this setting.
A Value Area Network is a concept I'm introducing to designate a kind of network where all participants are matched according to shared values. The participants in the network (economic agents, bots, machines, humans, companies, whatever) should in theory be allowed to outline as much of their current values as they wish, and as long as all participants are deemed to be in alignment by the consensus algorithm of Tau they will be considered part of a unified network.
The acronym VAN can be designated to stand for Value Area Network, not to be confused with Value Added Network. Unlike a LAN (Local Area Network), which is based on physical geography, the VAN is based on "social geography". People who are closer to each other socially on the moral and "concerns and values" level would share a sort of common location. In social science the concept of social proximity is defined mostly in geographical terms, but in the digital age, with a technology like Tau in existence, the idea of closeness might not have to be restricted to the geographical definition.
Closeness in terms of how well your values align with another participant in a network would represent a distinct place on a sort of map. This distinct place would be represented or quantified by a score which indicates its location on a spectrum of possible locations. Of course the mathematics behind this would have to be more clearly defined in future posts, but this post is to introduce the concepts for future discussion.
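To make the "score" idea concrete, here is a minimal sketch in Python (my own illustration, not anything specified by Tau): each participant's declared values become a weighted vector, and cosine similarity serves as the closeness measure on the social map. All value names and weights below are hypothetical.

```python
from math import sqrt

def alignment_score(values_a, values_b):
    """Cosine similarity between two value profiles.
    0.0 means no overlap, 1.0 means identically weighted values.
    Each profile maps a value name to a weight for how strongly it is held."""
    shared = set(values_a) & set(values_b)
    dot = sum(values_a[v] * values_b[v] for v in shared)
    norm_a = sqrt(sum(w * w for w in values_a.values()))
    norm_b = sqrt(sum(w * w for w in values_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical value profiles expressed as weights between 0 and 1.
alice = {"privacy": 0.9, "free_speech": 0.8, "sustainability": 0.4}
bob   = {"privacy": 0.7, "free_speech": 0.9, "open_source": 0.6}

print(alignment_score(alice, bob))  # one score locating the pair on the spectrum (~0.83)
```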
My concerns and reasons behind thinking up VANs are based on the fact that while social media today does a pretty good job of connecting billions of people to random people, it does a horrible job of connecting socially compatible people to each other. It's not good enough to connect a bunch of random people. People want to connect to people who have compatible values with their own, even as those values constantly update over time. Tauchain in theory is the only platform which is expected to have the features to make this idea a possibility.
Values in this context could be negotiated or derived from beliefs or worldview using Tau discussion. The values would then update over time as the person updates their beliefs or worldview. One route is the emergent one: letting Tau try to identify the values of the participant based on what the participant said in discussions (avoiding contradictions). The other is to let the participant explicitly enter their current values and over time let Tau help them constantly update that list.
These are features I hope to see developed over Tau in some form some day. If I'm in the position to bring these features into development (provided AGRS works as intended) then this could be one of my contributions. The key mechanism behind this feature would be a novel matchmaking algorithm which leverages the Tau Shared Knowledge Base and reasoning capabilities. The social values map could be deduced from the discussions had over time, or it could simply be a checkbox setting where the participant chooses via checkboxes and sliding scales; a sketch of the grouping step follows below.
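As a rough illustration of what such a matchmaking algorithm might do (again my own sketch, reusing the alignment_score function from above, and not Tau's actual mechanism), participants whose pairwise alignment exceeds a chosen threshold could be grouped into a candidate VAN:

```python
def form_vans(profiles, threshold=0.75):
    """Greedily group participants whose pairwise alignment meets the threshold.
    profiles: dict mapping participant name -> value profile (see alignment_score)."""
    vans = []
    for name, profile in profiles.items():
        placed = False
        for van in vans:
            # Join a VAN only if aligned with every existing member of it.
            if all(alignment_score(profile, profiles[m]) >= threshold for m in van):
                van.append(name)
                placed = True
                break
        if not placed:
            vans.append([name])  # start a new VAN around this participant
    return vans

participants = {"alice": alice, "bob": bob,
                "carol": {"privacy": 0.1, "profit": 0.9}}
print(form_vans(participants))  # [['alice', 'bob'], ['carol']]
```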
The Paradigm of Social Dispersed Computing and the Utility of Agoras. By Dana Edwards. Posted on Steemit. October 12, 2018.
Social Dispersed Computing
What is social dispersed computing? It is an edge oriented computing paradigm which goes beyond cloud and fog computing. To understand social dispersed computing we first have to discuss dispersed computing and how it differs from the previous paradigm of cloud and fog computing. The current trend toward decentralized networks, which we first saw with peer to peer technologies such as Napster, Limewire, and BitTorrent, and later with Bitcoin, has brought us an opportunity to conceive new paradigms. The original model most people are familiar with is the client server model, which was very much limited in that the server was always vulnerable to DDoS attack. The client server model has never been and likely could never be censorship resistant.
In the client server model the server could simply shut down, as was the case with Bitconnect, or it could be raided. The server could also be shut down by hackers who simply flood the site with requests. From the problems the client server model presented we discovered the utility of the peer to peer model. The peer to peer model was all about censorship resistance and promoted a network which was to have no single point of failure (single point of attack) which could result in the shutdown of access points to the information. Among the first applications for these peer to peer networks were file sharing networks and networks such as Freenet/Tor etc. This of course eventually evolved into Bitcoin, which ultimately led to the development of Steem.
In dispersed computing a concept is introduced called "Networked Computation Points". An NCP can execute a function in support of user applications. To elaborate further I'll offer something below.
Consider that every component in a network is a node. Now consider that every component node is an NCP in that it can execute some function to support some user application. If we think of a blockchain, for example, then we know mining would fit into this category because a miner is both a node in the network and can execute a function in support of Bitcoin transactions. Why is any of this important? Parallelism is something we can gain from dispersed computing, and please note that it is distinct from concurrent computing. When we rely on parallelism we can reap performance benefits when executing code by breaking it up into many small tasks which can be performed across many CPUs.
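As a minimal illustration of that performance idea (not specific to any blockchain), here is a sketch that splits a batch of independent tasks across CPU cores using Python's standard library; the verify function is a made-up stand-in for any unit of work:

```python
from concurrent.futures import ProcessPoolExecutor

def verify(tx):
    """Stand-in for an independently executable unit of work,
    e.g. verifying one transaction; here just a dummy computation."""
    return sum(i * i for i in range(tx)) % 97

if __name__ == "__main__":
    tasks = list(range(10_000, 10_100))  # 100 independent small tasks
    # Serial execution: one CPU does everything.
    serial = [verify(t) for t in tasks]
    # Parallel execution: the same tasks spread across all available cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(verify, tasks))
    assert serial == parallel  # same results, potentially much less wall-clock time
```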
EOS attempts to leverage parallelism specifically to enable its performance boost. The benefit is speed and flexibility. Think also of the hardware side, with FPGAs, which can perform tasks similar to a microprocessor. Unlike ASICs, FPGAs would provide generalized, flexible parallel computing. Consider that, just as with mining, a company could add more and more FPGAs to scale an application as needed.
To understand Social Dispersed Computing we have to note that there are other users on the network at any given time. These other users participate to provide resources to the network for the benefit of other users whilst using the network themselves. So on Steem, for example, as you add content you are adding value to Steem in a direct way, but also in a dynamic way. The resources on Steem can also adapt dynamically to demand, provided that the incentive mechanism (Resource Credits) works as intended.
EOS as an example DOSC (Dispersed Operating System Computer)
Because EOS seems to be the first to approach this holistically, I will give credit to the EOS network for pioneering dispersed computing in the crypto space. All resources are representable by tokenization in a dispersed computing network. EOS and even Steem have this. Steem has it in the form of "Resource Credits" which represent the available resources on the Steem network. If more resources are needed then theoretically the resource credits could act as an incentive to provide these resources to the Steem network. This provides a permanent price floor to Steem, represented as the amount of Steem which would have to be purchased in order to have enough resources to run Steem (if I have the correct theoretical understanding). This would put Steem on a trajectory toward dispersed computing.
Operating systems typically sit between the hardware and software as a sort of abstraction layer. This has traditionally been valuable because programmers don't have to speak directly to the hardware and hardware designers don't have to communicate their designs directly to the programmer. In essence the operating system in the traditional model is centralized and made by a company such as Microsoft or Apple. This centralized operating system typically runs on a device or set of devices and provides some standard services such as email, a web browser, and maybe even a Bitcoin wallet.
Typically the most valuable or highest utility software on a computer is considered to be the operating system. On our smartphones this is Android OS and on PCs it may be Windows or Linux. This is of course turned on its head under the new paradigm of dispersed computing and the new conceptual model of the "decentralized" operating system. EOS is the first to attempt a decentralized operating system using current blockchain technology, but the upcoming technology easily eclipses what EOS can do. Tauchain is a technology which, if successful, will leave EOS in the stone age in terms of capability. EOS, while ambitious, has also had its problems with regard to the voting mechanisms and the ease with which collusion can take place.
To better understand how decentralized operating systems emerge, it helps to learn about building blocks such as OSKit and the exokernel:
If we look at OSKit we see that it provides the tools necessary for operating system development. If we look at Tauchain we realize that it strategically provides the most important tool for the development of a decentralized operating system in the form of TML (a partial evaluator). If we think of the primary tool necessary to develop from, we have to initially start with a compiler. A compiler generator is closer to what TML allows with its partial evaluator. More specifically, it is the Futamura projections which can provide the ability to generate compilers.
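To ground the compiler-generation claim, here is a toy sketch (mine, far simpler than TML, and purely illustrative) of the first Futamura projection: partially evaluating an interpreter with respect to a fixed program yields a "compiled" version of that program. Applying the same trick to the specializer itself is what yields a compiler generator.

```python
def interpret(program, x):
    """A tiny interpreter: program is a list of (op, arg) pairs applied to input x."""
    for op, arg in program:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
    return x

def specialize(program):
    """Toy first Futamura projection: partially evaluate the interpreter
    with respect to a fixed program, yielding a standalone compiled function.
    Specialization here just generates Python source for the known ops."""
    body = ["def compiled(x):"]
    for op, arg in program:
        body.append(f"    x = x {'+' if op == 'add' else '*'} {arg}")
    body.append("    return x")
    namespace = {}
    exec("\n".join(body), namespace)  # the per-op dispatch overhead is now gone
    return namespace["compiled"]

prog = [("add", 3), ("mul", 2)]     # the "source program"
compiled = specialize(prog)         # interpreter + program = compiled code
assert interpret(prog, 10) == compiled(10) == 26
```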
If we look at the next most important part of an operating system it is typically the kernel. Let's have a look at what an exokernel is:
Operating systems generally present hardware resources to applications through high-level abstractions such as (virtual) file systems. The idea behind exokernels is to force as few abstractions as possible on application developers, enabling them to make as many decisions as possible about hardware abstractions. Exokernels are tiny, since functionality is limited to ensuring protection and multiplexing of resources, which is considerably simpler than conventional microkernels' implementation of message passing and monolithic kernels' implementation of high-level abstractions.
By Thorben Bochenek [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
From this at minimum we can see that an exokernel is a more efficient and direct way for programmers to communicate with hardware. To be more specific, "programs" communicate with hardware directly by way of an exokernel. We know the most basic function of a kernel in an operating system is the management of resources. We know in a decentralized context that tokenization allows for incentives for the management of resources. When we combine them we get kernel+tokenization as an elementary foundation of an operating system. In a distributed context we could apply a decentralized operating system in such a way that the network could be treated as a unified computer.
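A minimal sketch of the kernel+tokenization idea (purely illustrative; this is not how Steem, EOS, or Tau actually implement it): an allocator that meters a shared resource pool and charges a token balance for access.

```python
class TokenizedKernel:
    """Toy resource manager: tokens meter access to a shared resource pool."""
    def __init__(self, capacity):
        self.capacity = capacity  # total units available (CPU, bandwidth, ...)
        self.balances = {}        # token balances per account

    def deposit(self, account, tokens):
        self.balances[account] = self.balances.get(account, 0) + tokens

    def allocate(self, account, units, price_per_unit=1):
        """Grant resource units only if the account can pay; this is the
        incentive coupling between tokens and resource management."""
        cost = units * price_per_unit
        if self.balances.get(account, 0) < cost or units > self.capacity:
            return False
        self.balances[account] -= cost
        self.capacity -= units
        return True

kernel = TokenizedKernel(capacity=100)
kernel.deposit("alice", 50)
assert kernel.allocate("alice", units=30)      # paid for and granted
assert not kernel.allocate("alice", units=30)  # denied: balance is now 20 < 30
```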
Abstraction is still important, by the way. In an operating system we know the object-oriented way of abstraction: typically the programmer works with the concept of objects. In an "Application Operating Environment", an "Application Object" can be another useful abstraction. Abstraction can of course be taken further, but that is for another blog post.
The Utility of Agoras
Agoras+TML is interesting. Agoras is the resource management component of what may evolve into the Tau Operating System. This Tau Operating System, or TOS, would be vastly superior to EOS or anything else out there because of the unique abilities of Agoras. The main abilities have been announced on the website, such as the knowledge exchange (knowledge market) where humans and machines alike can contribute knowledge to the network in exchange for token rewards. We also know that Agoras will have a more direct resource contribution incentive in the form of the AGRS token, so as to facilitate the sale or trade of storage, bandwidth or computation resources.
The possible (likely?) emergence of the Tau Operating System
In order for Tauchain to evolve into a Dispersed Operating System Computer it will need an equivalent to a kernel: some means of allowing whoever is responsible for the Tauchain network to control and manage the resources of that network. If, for example, the users decide, then by way of discussion there would emerge a formal specification or model of a future iteration of the Tauchain network. This, according to current documents, is what would produce the requirements for the Beta version of the network to apply program synthesis. Program synthesis in essence could produce a kernel, and from there the components of a Tau Operating System could be synthesized in the same way. Just remember that all that I write is purely speculative, as we have no way to predict with certainty the direction the community will take during the alpha.
The Era of Signals and Changing Power Dynamics. By Dana Edwards. Posted on Steemit. October 8, 2018.
The world we live in is rapidly changing. For instance the #MeToo era has arrived. This new era shows us that any individual in any position in society can be brought down. It proves a point that many in the blockchain community may have known instinctively: any individual source of authority and/or power can and may be removed from that position. Some people actively seek these positions of power for their own reasons, and some of them abuse those positions. People who seek power for the wrong reasons and then abuse it are in my opinion a risk which positions of authority bring (a risk which blockchain technology may help reduce).
What are signals and what is signalling theory?
Social desirability bias is a popular topic in academic circles. To explain:
In social science research, social desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad," or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports, especially questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.
People tend to want to be liked/loved. When asked questions on a survey, people may feel pressured to answer in a way which they think will make them be viewed more favorably by others. In other words, rather than answering in the manner which reflects what they truly think or feel, they will assess how others might judge their response and then answer in the way they expect to be judged most favorably.
Social desirability bias is exactly why voting on platforms such as Steem will not work. When voting is public, most of the research seems to show that people will feel pressured to vote not in the way which they really believe or prefer but in the way which they think the whales want them to vote. In other words, because on Steem the whales can reward (or punish) anyone who votes in ways which go against "political sensibilities", it is likely that social desirability bias applies particularly to DPOS style consensus platforms. If there are votes and the votes are not encrypted (secret) then we have no way to determine which votes are legitimate and which votes are the result of signalling (such as virtue signals).
For example when it was Trump vs Hillary the polls suggested Hillary would win. This is likely because social desirability bias made it socially undesirable for anyone to admit they voted for Trump. As a result, people who voted for Trump or who planned to vote for Trump may have said in public that they intended to vote for Hillary. Because the votes in the election are secret, the people who may have seemed like loud Hillary supporters could have been secret Trump supporters in disguise.
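The distortion can be illustrated with a toy simulation (my own, with made-up parameters): voters with a private preference report honestly under a secret ballot, but under a public vote each one hides a stigmatized preference with some probability.

```python
import random

def poll(true_supporters, n, conformity, public):
    """Simulate n voters. true_supporters is the real fraction backing the
    stigmatized option. Under a public vote, each such voter hides their
    preference with probability `conformity` (social desirability bias)."""
    reported = 0
    for _ in range(n):
        prefers = random.random() < true_supporters
        if prefers and public and random.random() < conformity:
            continue  # preference falsification: talks blue, thinks red
        reported += prefers
    return reported / n

random.seed(1)
print("secret ballot:", poll(0.52, 100_000, conformity=0.15, public=False))  # ~0.52
print("public vote:  ", poll(0.52, 100_000, conformity=0.15, public=True))   # ~0.44
```

With these made-up numbers, a real 52% majority polls at roughly 44% in public: the measured signal flips even though no underlying preference changed.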
In some of my previous posts I discuss signalling theory a bit more. In those posts I identified that the behavior of individuals is shaped by how they think other individuals will judge their behaviors. This leads to what I'll call social desirability optimization: adopting behaviors which carry the expected payoff of improved social desirability.
To provide clarity, here is the definition of social desirability:
Social desirability is the tendency for research participants to attempt to act in ways that make them seem desirable to other people.
In other words, people want to be liked. Likeability is a word I can use to simplify the concept of social desirability for readers. In the example of the 2016 election it is clear that supporters of Trump risked a social stigma with severe social consequences if they came out in public support. This high cost of public support is why some believed that there were secret Trump supporters who were simply afraid of "losing face". In the simplest terms, a person can talk red or talk blue depending on where the social stigma is.
One of the stunning conclusions I reached in my own research on this topic is that increasing transparency leads to "preference falsification": a person talking blue while thinking red. If all speech is public (as it is on Steem) then there is the possibility that preference falsification is taking place.
Why is this a major problem in the blockchain community? The evolutionary trajectory of a platform relies entirely on market preferences. If censorship exists and conformist pressures hinder true preference aggregation then the developers (and the community itself) will have no way of knowing which improvements to make or which changes would best satisfy the community.
What is leadership and what is the era of signals?
Before I attempt to discuss leadership I will first explain what I think leadership means. In my opinion the community must always come first. A person who is put into a leadership position is, in my opinion, in what I'll term "the seat of responsibility". This is not an enviable position to be in, but someone has to be in it. For example, a person who receives a security clearance is now in a position of heavy responsibility. The information which they protect is not their secrets but the nation's secrets.
Leadership in my understanding is not about "being in power" but is about serving a community. To be in a "big seat" is to be in a position of responsibility to make decisions on behalf of a community which the chosen person must represent. In other words being in positions of responsibility is entirely about service and not about power. A representative in congress is not in a position of power but in a position to serve their constituents who put them in that position to represent their interests.
In my opinion to be a good leader is to be a great listener. The leader must listen to the community to find out what the community wants and needs, and to determine what the community thinks is right or wrong. The leader then must offer solutions, proposals or policies which satisfy the requirements of the community. What matters more than who is in the seat is the seat itself: the Presidency itself matters more than who is in office. The positions themselves matter more than who is in them. Long after whoever is in these positions is gone, the positions will remain to be filled. Any leader in any position is replaceable by someone else if they fail to lead (whether it be a CEO, a President of a country, a lead developer, or any other kind of community leader).
In my understanding it is like chess, where all pieces on the board can be in various positions. We know in chess that the pawn can be promoted to almost any other piece on the board. The point of this analogy is that individuals, in my opinion, are not likely to remain the source of power in society. The source of power in society is increasingly becoming the community, for better or for worse. To lead is to serve, and to lead effectively is to serve effectively.
To accept a responsibility to serve (to lead) requires seeking feedback from all whom the community servant represents. This does not require voting specifically, but it does require, under any circumstance, a mechanism by which the community can give brutally honest feedback to the system itself. When I say the system itself I do not mean the feedback must go directly to those who serve the system, but that the system must have a means of collecting data, analyzing data, and then informing those who can improve the system about which changes would best satisfy the needs of the community.
In my opinion this is a very data driven process. I do not think leaders can, for example, process big data using their brain power alone. This will require that they harness the power of machines (machine intelligence). There is also risk if all the processing is done by one company (such as Google), just as there is risk if all people rely on Facebook for news and opinions. We can see that Facebook has the ability, rightly or wrongly, to shape elections by deforming the news feed or by allowing certain fake profiles to interact on the site. We see that Facebook can ban crypto ads at will, for example, to enforce certain policies without taking any kind of poll of the community or the users. We simply do not see any poll data from the users indicating that the users were tired of seeing crypto ads.
Summary of thoughts on leadership:
Augmenting the wisdom of the community as a means of better governance
In a world where the community must decide what to do, we have a situation where responsibility is increasingly diffuse. This means that while the signature may come from the face of the community (if it is a human face), it is still the community which has to be capable of wisdom. The problem is most communities in the world do not become wiser as more people join. A bigger community doesn't produce better policies merely by voting together. The problem is that while most people have opinions, it does not mean those opinions are well informed or scientific or wise. The lack of wisdom in a community results in horrible (harmful) policies, overreactions, systemic bias, and more.
The conclusion I have reached so far is that in order to have better governance in an era where the community is the government, the community must be wise. It's not enough to simply give the community unlimited power to shape the future without providing any capacity for the community to be wise, to do research, or to solve problems. Voting in the sense we see in elections does not involve informed voters. Information supplied to voters is almost always subpar, and voters are expected to trust "opinion leaders" and "opinion shapers" who tell them how to vote and why. Often disinformation shapes elections more than scientific evidence, facts, math, or reason.
As we build blockchain technology I think it is critical that we put great emphasis on data analytics. Data analytics will allow our leaders to make better decisions on our behalf. Blockchain technology will have to rely on data analytics to figure out the potential wants and needs of its participants, users, e-citizens, etc. At the same time, private communication will be a necessity, even if just to conduct surveys. The reason is that people will not necessarily provide their real opinion in a survey which is completely transparent. The only solution I could find to the problem of preference falsification is privacy.
Most important of all, those who are put into positions of leadership are in trusted positions. This includes people who moderate forums, people who are lead developers, and people who run exchanges. People in these positions have the responsibility to serve the blockchain community to the best of their ability. The abuse of these positions for personal power or personal gain is a violation of this trust, and in these instances the community can and should select someone else for that position.
Bulbulia, J., & Sosis, R. (2011). Signalling theory and the evolution of religious cooperation. Religion, 41(3), 363-388.
Davis, W. L. (2004). Preference falsification in the economics profession. Econ Journal Watch, 1(2), 359.
Frank, R. H. (1996). The Political Economy of Preference Falsification: Timur Kuran's Private Truths, Public Lies. Journal of Economic Literature, 34(1), 115-123.
Grimm, P. (2010). Social desirability bias. Wiley international encyclopedia of marketing.
Sîrbu, A., Loreto, V., Servedio, V. D., & Tria, F. (2017). Opinion dynamics: models, extensions and external effects. In Participatory Sensing, Opinions and Collective Awareness (pp. 363-401). Springer, Cham.
''We live in a world in which no one knows the law.''
Ohad Asor, Sept 11, 2016
I continue herewith sharing my current state of grok of the, up to now, four scriptures of the so-called newtau. Sorry for the delay, but it comes mostly from the effort to contain the outburst of words, catalyzed by the very exegetic process of such rich content, into a reader-friendly shorter form.
The subject of vivisection textographically identifies as the first three paragraphs of ''Tau and the Crisis of Truth'', Ohad Asor, Sep 11, 2016 .
The four core themes extracted are enumerated below, accompanied by a streak of comments of mine, kept modest so as not to sidetrack the thought or spoil the original message:
As a guy who has been immersed in Law for more than a quarter of a century, I can swear with both hands on my heart to the notion of the unknowability of Law.
Since my youth years in law school I have asked myself how it is possible at all to have 'rule of law' when every legal system ever known has required humans to operate it!?
It seemed that the only requisite or categorical difference between mere arbitrary 'rule of man' and the 'rule of law' was that in some isolated cases some ruling men happened to be internally programmed by their morals to produce 'rule of law' appearance effects by 'rule of man' means.
Otherwise, 'rule of law' done via 'rule of man' poses extremely serious threats of the law being used by some to exploit and harm others.
In that line of thoughts my conclusion was that the Law is ... yet to come.
What we know as Law is not a good networking protocol for mankind as such; rather, we see comparatively rare examples of individually well programmed ... lawyers.
The Law will arrive on the wings of a technological breakthrough, just like flying came with the invention of airplanes, the moonwalk needed the advent of rocketry, and remembering without staying alive needed writing. The Law is an old dream. If we judge by the depth of the abyss of folklore - one of humanity's most ancient dreams, indeed. Needless to repeat that this is what sucked me into Tau as relentlessly as black hole spaghettification :)
The frustration with Law of the great Franz Kafka, referred to by Ohad and expressed in Kafka's book The Trial, becomes very understandable: Kafka's epoch lacked the comforting hope of a technology which we already have - the computers - and of the overall progress in the fields of logic, mathematics and engineering ... forming a self-reinforcing loop centered around this sci-tech of artificial cognition.
Similarly to nuclear fusion, which is always a few decades away yet whose gap closes noticeably nowadays, we are standing on the cliff of a Legal gap.
Mankind's heavy involvement in cognition technologies, especially over the last several decades, has outlined multiple promising directions of further development, which seem to bring us closer to the ability to compensate for the fundamental deficiencies of Law and, in fact, to finally bring it into existence.
It took an entire Ohad Asor, however, to identify the major reasons why the Law is still bottlenecked out of our reach, and to propose viable means to bridge us across that Legal gap... The other side is already in sight.
It is, in the first place, the language that is to blame!
The human natural language. Our most important attribute as a species. The mankind maker. The glue of society. It just emerged; it hasn't been created. It has patterns, vaguely conventional, rather than an intentionally coined set of solid rules. There are no firm rules for changing its rules, either ... The natural human language is mostly a wilderness of untamed pristine naked nature, dotted here and there with very expensive and hard to install and maintain ''artefacts''. Leave it alone, outside the coercion of state mass media, mass education and national language institutes, and it falls back into a host of unintelligible dialects. Even when aided by the mnemonic amplifier which we call writing.
Ambiguity is characteristic of the natural language, a feature in poetry and politics, but a deadly bug in logic and law.
We'll put aside for now the postulate of the impossibility of a single universal language, to revisit it later when its exegetic turn comes, in another chapter on another scripture. Likewise, we'll not cover in this chapter the neurological human bottlenecks which Tau targets to overcome. Let's observe the sequence of the author's thoughts and not fast forward.
Instead, I'll dare to share with you my own hypothesis about why the natural human languages are the way they are. (I'm smiling while I type this, because I can visualize Ohad's reaction upon reading such a frivolous lay narrative. I hope that, being too busy, he actually won't.) To say that human languages are just too complex does not bring us any nearer to a decent explanation. Many logic based languages are more than a match for the natural human ones in terms of expressiveness and complexity. That can't be the reason.
My suspicion is rather that the natural human languages pose such Moravec hardness because they are not exactly languages. Languages are conveyors of meaning. Human languages convey not meaning, but indexes or addresses or tags of mind states. The meaning is the mind state. Understanding between humans is a function not only of shared learnt syntax, but also of shared lives: of an aggregation of similar mind states which are referred to by matching word keys.
If this is true, it is another angle for grokking the solution of human users leaning towards the machine by use of a human intelligible Machinish, instead of Tau waiting for the language barrier to be broken and for machines to start speaking and understanding Humanish.
In a nutshell, we still await the Law because Law is not doable in Humanish. Bad software. And the other side of the no-law coin is that humans are no cognitive ASICs. We do cognition only incidentally and in order to do what other animals do - survive. Bad hardware.
In order for law to become law, it must become hands-free.
Not humans to read laws, but laws to read laws.
The technology to enable that looks to be within arm's reach.
Ok, so far we have butchered the law and the language. What's left?
The nature and essence of human language brought about one of the most harmful and devastating notions ever. Literally, a thought of mass destruction.
The ''crisis of truth''. The wasteland left by the toxic idea spillover of ''there is no one truth'' or even ''there ain't truth at all''. This is not only an abstract, philosophical problem. Billions of people have actually been killed for somebody else's truth.
It is no accident that the philosophers who immersed themselves in this pool are nicknamed 'Deconstructivists'. Tracing back their epistemic genealogy we see, by the way, that they are rooted in faith rather than in reasoning, but that is another story.
The general problem of truth, of which the problem of law is just a special case, opens up two important aspects:
Number one is that all knowledge is conjectural with respect to truth, and that truth is an asymptotic boundary - forever to close in on but never to reach, like the speed of light or absolute zero. Number two is that human languages make pretty lousy vehicles to chase the truth with.
If words really exist just to match people's thoughts together, then there are thoughts without words and words without thoughts. Words mismatch thoughts, so how can we expect them to bridge thoughts to things? Entire worlds of nonsensical wording emerge, dangerously disturbing the seamless unity of things and thoughts. Truth displaced.
''But can we at least have some island of truth in which social contracts can be useful and make sense?''
This island of shared truth is made of consensus  bedrock and synchronization  landmass.
Truth and Law self-enforced. From within, instead of by violence from without. And in a self-referential, non-regressive way.
''We therefore remain without any logical basis for the process of rulemaking, not only the crisis of deciding what is legal and what is illegal.''
Peter Suber, with his ''The Paradox of Self-Amendment: A Study of Law, Logic, Omnipotence, and Change'', proposed a rulemaking solution which he called Nomic.
''Nomic is a game in which changing the rules is a move.'' 
The merit of Nomic is that it really eliminates the ills of the infinite regress of laws-of-changing-the-laws-of-changing-the-laws, ad infinitum, by use of transmutable self-referential rules. But Nomic suffers from a number of issues. The first one, in the spotlight of this chapter, is the fact that we still remain with the ''crisis of truth'' in which there is no one truth; the other ones - like scalability of sequencing and voting - we'll revisit in their order of appearance in the discussed texts.
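A minimal sketch of the Nomic idea (mine, drastically simplified): the rule set is just data, and one legal move is a proposal that rewrites the rules, including the rule governing rule changes itself, so no external meta-law is needed.

```python
class Nomic:
    """Toy self-amending rule set: changing the rules is itself a move."""
    def __init__(self):
        # Rule 0 is the amendment rule; it is itself amendable.
        self.rules = {0: "a proposal passes with a simple majority"}

    def propose(self, rule_id, text, votes_for, votes_total):
        """Judge the proposal under the *current* amendment rule, then
        enact it by rewriting the rule set if it passes."""
        majority = "simple majority" in self.rules[0]
        needed = votes_total // 2 + 1 if majority else (2 * votes_total) // 3 + 1
        if votes_for >= needed:
            self.rules[rule_id] = text
            return True
        return False

game = Nomic()
# A move that amends the amendment rule itself: no regress of meta-rules.
game.propose(0, "a proposal passes with a two-thirds supermajority", 6, 10)
assert "two-thirds" in game.rules[0]
# Subsequent proposals are judged under the amended rule (7 of 10 now needed).
assert not game.propose(1, "players may trade points", 6, 10)
```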
The so-called 'newtau' goes past the inherent limitations of the Nomic system and resolves the 'crisis of truth' problem.
The next few chapters will dive into Decidability and how it applies to provide solution to the problems described above.
 - https://en.wikipedia.org/wiki/Grok
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-intro
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-the-two-towers
 - http://www.idni.org/blog/tau-and-the-crisis-of-truth.html
 - http://www.behest.io/
 - https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://en.wikipedia.org/wiki/Rule_of_law
 - https://en.wikipedia.org/wiki/Tyrant
 - https://en.wikipedia.org/wiki/Morality
 - https://en.wikipedia.org/wiki/Spaghettification
 - https://en.wikipedia.org/wiki/Franz_Kafka
 - https://en.wikipedia.org/wiki/The_Trial
 - https://www.amazon.com/Merchants-Despair-Environmentalists-Pseudo-Scientists-Antihumanism/dp/159403737X
 - https://en.wikipedia.org/wiki/Language
 - https://en.wikipedia.org/wiki/Official_language
 - https://steemit.com/blockchain/@karov/tau-through-the-moravec-prism
 - https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
 - https://www.etymonline.com/word/manipulation
 - https://en.wikipedia.org/wiki/Deconstruction
 - https://en.wikipedia.org/wiki/Consensus_decision-making
 - https://en.wikipedia.org/wiki/Synchronization
 - http://legacy.earlham.edu/~peters/writing/psa/index.htm
 - https://en.wikipedia.org/wiki/Nomic
 - https://en.wikipedia.org/wiki/Infinite_regress
 - the illustration is a painting, courtesy of the author Georgi Andonov: https://www.facebook.com/georgi.andonov.9674
Let's use Tauchain to save our own lives and the lives of others: The life saving potential of Tauchain. By Dana Edwards. Posted on Steemit. September 10, 2018.
In this post I'm going to discuss one of the main reasons why I want Tauchain to exist. It is a reason I think many or perhaps most people can relate to. It starts with the question: how can we save our own lives using our own effort? It evolves into the question: how can we save lives in general by augmenting our efforts as much as technologically feasible?
1 out of 2 (around 50%) will be diagnosed with invasive cancer
The current statistics reveal that, at the high end, we have a 50% chance of developing cancer in our lifetime. This can be lower according to some recent statistics (closer to 30%, or in some cases 40%, but still very high). The fact is, if we are together in a room, then about 1 out of every 3 of us, in the best case, will get cancer someday. And 100% of us will know someone who has cancer someday. So there is a very high chance that someone we care about a lot will develop cancer, and do we want to be in a position where we didn't do all we could to have the capability of saving their life? It could even be you who develops cancer, and would you want to be in the position where you can say you dedicated some of your resources toward finding a cure?
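The "everyone will know someone" point follows from simple probability. Even under the conservative 1-in-3 lifetime risk, the chance that nobody in a small circle of people is ever diagnosed shrinks fast, as this quick calculation shows (it assumes independent risks, which is a simplification):

```python
def p_at_least_one(p, n):
    """Probability that at least one of n people is affected,
    given each has independent lifetime risk p."""
    return 1 - (1 - p) ** n

for n in (5, 10, 25):
    print(n, round(p_at_least_one(1 / 3, n), 3))
# 5 people  -> 0.868
# 10 people -> 0.983
# 25 people -> 1.0 (rounded; actually ~0.99996)
```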
Cancer is one of those global problems that most human beings want to eradicate. It is not politically controversial to want to cure cancer. It is also something that Tauchain can help with because using Tauchain we can scale discussions, define problems in a precise manner, and most importantly leverage the market. The ability to create markets which are smart (meaning which can adapt to regulatory obstacles) is a potentially unique feature of Tauchain.
Some might say that there are already pharmaceutical companies trying to cure cancer or develop anti-aging treatments. Indeed, these companies exist. The problem right now is that they do not have the new business models which Tauchain might make possible. First is the fact that using an ICO you can let future patients/customers own shares in the company. This allows companies which want to create cures to raise the billions of dollars necessary to do expensive trials. In addition, the ability to do research may improve due to the features of Tauchain, so that it becomes cheaper to search for new potential drugs or supplements.
The human genome is very complicated and is an area we know very little about. Cancer is also something we still have to study. One example of an approach to defeating cancer is immunotherapy, but this again is going to require a lot of research into how to reprogram the immune system to identify and destroy cancer. If everyone can help or contribute in some way, the process becomes much cheaper than it is right now, which means the drug or treatment can potentially be cheaper due to lower R&D cost.
Most people want to live long and healthy lives but we still know very little
We know very little about aging. We do have some theories as to what causes aging. We even have some theories on how to slow it down. But we don't understand the mechanism well enough yet to develop a treatment. By aging I'm referring to the process by which cellular function deteriorates over time. We know for example the risk of getting cancer increases with age. But we still are working on the means of developing biomarkers to even determine the age of a person.
What if we could leverage the potential of Tauchain to discover more about the aging process? What if we could develop an anti-aging pill or treatment which we could collaboratively develop and own? What if we could make a profit from every pill sold via tokenization? If this sounds good to you then it might sound good to millions of others who could be encouraged to participate in an ICO to develop a pill or supplement to slow the aging process.
The ethical and rational argument
Some could say that to put an emphasis on saving lives is to seek the greatest good for the greatest number. This emphasis could put Tauchain on a fast track to mainstream adoption, because utility would be measured not just in how profitable it is to hold a token but in the potential lives that could be saved. To align the profit motive with saving as many lives as possible is an easy ethical (and rational) argument to make. People who value life will value any technology which saves lives.
Some projects already exist, which I will list below, that are trying to save lives or end aging. These projects did ICOs over Ethereum and so they are currently Ethereum focused. That being said, there is the possibility that some projects could still leverage Tauchain regardless of whether they originally launched on Ethereum. It is also possible for new projects to launch on Tauchain to attempt the same or similar objectives.
What can Tauchain do?
Grunau, G. L., Gueron, S., Pornov, B., & Linn, S. (2018). The Risk of Cancer Might be Lower Than We Think. Alternatives to Lifetime Risk Estimates. Rambam Maimonides medical journal, 9(1).
How Tauchain and the Exocortex can give anyone a conscience and make anyone more law abiding. By Dana Edwards. Posted on Steemit. September 2, 2018.
First "anyone" is not literal. By anyone I mean anyone with a reasonable level of intelligence who is willing to take the advice generated by the network. The network would include human beings and machines. The network would learn and be more properly defined as a complex adaptive system. Tauchain would enable the emergence of this network. This post is about how the network which can emerge from Tauchain. It is also about how people who intend to be as moral as possible whilst also complying with the law as much as possible might leverage the network. This post assumes that the human brain has a finite memory and comprehension capacity. This post assumes that every human being can benefit from enhancing these naturally limited capacities in areas of legal comprehension and risk literacy (under the assumption that most or perhaps none of us know every law on the books but need to comply with the laws most likely to be aggressively enforced).
The Personal Moral Assistant
The PMA is a concept I've been thinking about for years now: the idea that we can augment our ability to be moral persons. A PMA is a Personal Moral Assistant, and in an ideal world every person born would have one. This would be an interface similar to what we see with Cortana or Siri, where you can ask any question pertaining to whether a particular action is right or wrong. This PMA would solve the problem using the same priorities that you would, and so you would get a definite right or wrong result.
A Personal Moral Assistant is just one primary use case. These personal assistants over Tauchain could also include, for instance, a Personal Compliance Assistant (PCA). This is essentially another bot, but instead of dealing with moral problems this bot would handle compliance. If you're trying to accomplish a goal, this bot would make sure that you do so following all the known laws as your exocortex currently understands them. This would enable people to avoid legal pitfalls whilst chasing opportunities.
Going from poor to rich in this world requires taking risks. There is no way around risk taking if you want to get ahead. Risk literacy is essential, yet very few people who are poor have risk literacy. The PMA might be able to tell a person whether a certain choice aligns with their current values. The PCA might tell a person whether a certain choice complies with the laws. What about opportunities? An opportunity web crawler agent could theoretically search across the entire Internet to find opportunities which match your chosen risk profile.
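A crude sketch of how the PMA/PCA query could work (entirely my own illustration; a real Tau-based assistant would reason over a logical knowledge base rather than a hand-written dictionary, and every name below is hypothetical):

```python
def assess(action, values, laws):
    """Toy personal assistant: checks a proposed action against the user's
    declared values (PMA role) and known legal constraints (PCA role)."""
    verdicts = []
    for value, forbidden in values.items():
        if action in forbidden:
            verdicts.append(f"conflicts with your value '{value}'")
    for law, forbidden in laws.items():
        if action in forbidden:
            verdicts.append(f"may violate '{law}'")
    return verdicts or ["no conflicts found in the current knowledge base"]

my_values = {"honesty": {"mislead_customer"}, "privacy": {"sell_user_data"}}
known_laws = {"consumer protection act": {"mislead_customer"}}

print(assess("sell_user_data", my_values, known_laws))
# ["conflicts with your value 'privacy'"]
```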
What are we doing today?
Today we often have to make choices by trial and error. If we aren't lucky enough to have mentors or people who can guide us, then the only way to learn is to make the common mistakes. When we deal with moral problems today we often rely on holy scripture interpreted by other human beings who are just as flawed as we are. We simply don't have a bot which could interpret the scripture in a completely logical way. In other words, we don't have a digital representation of the minds of our spiritual guides.
We also have a situation where some of us can afford to comply with every law and take the lowest risk approach while others simply don't have the resources available to pay the expensive legal fees. Some people get better legal advice than other people as well. What if we could get at least some level of legal assistance from our intelligent assistant? What if this intelligent assistant can even ask human beings who have legal knowledge to help?
And finally what if we could figure out which risks are worth taking and which are not worth taking? It's one thing to find opportunities but another to be able to assess them. People get scammed because at the end of the day our emotions influence our ability to do proper assessment of opportunities. I'm human and it even happens to me from time to time. What if we could avoid this by using the capabilities of Tauchain to analyze massive amounts of information for us which our brains could never handle?
Opportunity Crawler Bot
I ask a simple hypothetical question: what if you could have set a bot to search the Internet for opportunities that resemble Bitcoin in 2008? What if this bot could be activated and search for an indefinite period of time across an undetermined yet expanding number of networks? If you define "Bitcoin in 2008" in a way which the bot can make sense of, then it could search for anything which meets that criteria. We have this technology now but it's extremely primitive. On Google you can set up alerts for certain things, but what if you could go beyond mere alerts and look for code on Github, certain individuals involved with it, and certain growth patterns?
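A skeletal version of such a bot might look like the sketch below. The criteria, feed entries, and thresholds are all made up for illustration; a real crawler would pull from live APIs and use a far richer feature set.

```python
def matches_profile(opportunity, profile):
    """Return True if an opportunity fits the user's chosen risk profile.
    The criteria here are hypothetical stand-ins."""
    return (opportunity["novelty"] >= profile["min_novelty"]
            and opportunity["community_growth"] >= profile["min_growth"]
            and opportunity["code_activity"] >= profile["min_commits"])

def crawl(feed, profile):
    """Scan a stream of candidate opportunities and keep the matches."""
    return [o["name"] for o in feed if matches_profile(o, profile)]

# Hypothetical feed resembling "Bitcoin in 2008": novel, growing, active code.
feed = [
    {"name": "project-a", "novelty": 0.9, "community_growth": 0.8, "code_activity": 120},
    {"name": "project-b", "novelty": 0.2, "community_growth": 0.1, "code_activity": 3},
]
risk_profile = {"min_novelty": 0.8, "min_growth": 0.5, "min_commits": 50}
print(crawl(feed, risk_profile))  # ['project-a']
```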
A way to think about these bots / intelligent assistants
One way to think about these intelligent assistants is as part of your extended mind. These bots essentially help you to think better and communicate better. It's still you, and what they do on your behalf is essentially as if you did it. The total collection of all of these agents under your control represents your complete exocortex. It will take great responsibility and wisdom to use these abilities in a way which is perceived by the world as ethical, moral, legal, etc. It is for these reasons that I want to initiate a discussion: how would each of you use such technology or such bots if they existed, and how do you think about them?
What is Tauchain & Why It Could Be One of The Greatest Inventions of All Time (Part 1: Introduction). By Kevin Wong. Posted on Steemit. August 28, 2018.
In anticipation of Tau's demo some time around the end of this year, I'll be publishing a series of articles on Steem leading up to its release and beyond. If you would like to get to know what some of us think is going to be one of the greatest inventions of all time, I'd recommend you check out http://www.idni.org. It seems like a foundation that we've missed out on building together since the birth of the Internet.
A close relative of this project is the Semantic Web, although some of us would place Tau as being far more ambitious in scope, oddly in a way that is likely more feasible given its ingenious use of a logic blockchain to power a decentralized social choice platform. I think it's impressive how singular the concept actually is, despite the unavoidable lengthy explanations that come paired with the many first-time features that Tau will provide.
Without further ado, let's explore this world-changing technology that is currently baking in the oven.
What is Tau?
Let's begin by first checking out the opening of IDNI's website at http://idni.org:-
Tau is a decentralized blockchain network intended to solve the bottlenecks inherent in large scale human communication and accelerate productivity in human collaboration using logic based Artificial Intelligence.
Sounds fairly straightforward at first glance, and to me it really stands out in the cryptosphere. We now have billions of people using the Internet every day, yet we still do not have any effective means of discussing and collaborating without being all over the place. Sure, we may have been pouring a lot of our time and effort into various platforms trying to connect with others, but have things been really any different compared to the time before the Internet?
The speed of information propagation has increased by orders of magnitude, and we can reach anyone on the planet now, but it's still really up to us to be present and able to process information in our heads before turning it into relevant knowledge for our networks.
Expanding our social bandwidth.
Turns out, we have been experiencing a lot of trouble coming to terms with the chatter of billions of people in cyberspace. The bottlenecks inherent in our human bandwidth remain unsolved even with near-instantaneous communications. From governments to corporations and blockchain communities, we are all still facing the age-old problem of being unable to scale governance beyond the size of a classroom. It's just difficult to get our points across to many different people, let alone make sense of complex long-term discussions and make network-wide decisions collaboratively.
The introduction to The New Tau written by Ohad Asor explains our situation quite accurately:-
Some of the main problems with collaborative decision making have to do with scales and limits that affect flow and processing of information. Those limits are so believed to be inherent in reality such that they're mostly not considered to possibly be overcomed. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?
So how is Tau actually going to solve our communications bottleneck? It will be through a highly bespoke and non-trivial implementation of logic-based Artificial Intelligence (AI). It's worth noting that AI in this case is more of a marketing buzzword, and it is actually not of the same variety as the commercial implementations of deep machine learning.
The distinction that must be made is that Tau is not the kind of AI that attempts to guess what the world is around it, including our opinions and the things we say or do. Instead, we must make the step towards communicating through Tau, and what we choose to communicate will be as definite as computer programs. It can be thought of as a persistent logic companion that helps us improve and scale our reasoning, logic, and bandwidth.
We can take the time to share what we want to share on the Tau network and most of the logic-based connections and operations will happen in the background over time, even when we're not paying attention in-person. Again, the use of the word AI is a misnomer here because it usually paints the picture of AI agents attempting to mimic human autonomy. That's not what Tau is about. In this case, thinking about Tau as just a logic machine should provide better clarity on what it actually is.
The power of logic.
To expand, here's the second paragraph found in the opening of IDNI's website that explains Tau's paradigm in logic-based communications, http://idni.org:-
Currently, large scale discussions and collaborative efforts carried out directly between people are highly inefficient. To address this problem, we developed a paradigm which we call Human-Machine-Human communication: the core principle is that the users can not only interact with each other but also make their statements clear to their Tau client. Our paradigm enables Tau to deduce areas of consensus among its users in real time, allowing the network to boost communication by acting as an intermediary between humans. It does so by collecting the opinions and preferences its users wish to share and logically constructing opinions into a semantic knowledge base.
Indeed, Tau will offer a semantic social choice platform where we can discuss and store knowledge in a logical universe that helps us organize information, thereby empowering us in highly relevant ways. If you're worried about privacy, know that Tau is first-and-foremost designed as a local client with local processing and storage. The platform itself will be deployed as a decentralized peer-to-peer network, a place where we can connect and share our knowledge-base with anyone we desire.
The only price to pay in all of this is that we must speak in Tau-comprehensible languages, which can always be added and modified over time. A sophisticated language defined over Tau may closely resemble natural languages, but it is really best to think of Tau as a machine that only speaks in logic. Fortunately, logical formalism is something that we can easily deal with.
So it will be up to us to communicate with our local Tau client in a way that makes it understand our worldviews. When the machine understands what we share completely, in some logical, mathematically-verifiable sense, it can then connect our dots with the rest of the Tau network, boosting communications beyond the limits of human bandwidth and scaling our points of discussion, consensus, and collaboration up to an unlimited number of participants.
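To give a feel for what "machine-comprehensible" statements might look like, here is a toy sketch (in Python, not TML syntax, and entirely my own encoding) where shared opinions become facts and rules, and a naive forward-chaining loop derives their logical consequences, such as a point of agreement:

```python
# Facts and Horn-style rules: (conclusion, [premises]). Purely illustrative.
facts = {("supports", "alice", "free_speech"),
         ("supports", "bob", "free_speech")}
rules = [(("agree", "alice", "bob"),
          [("supports", "alice", "free_speech"),
           ("supports", "bob", "free_speech")])]

def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until no new fact appears.
    A real logic engine would unify variables; here everything is ground."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conclusion, premises in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

print(("agree", "alice", "bob") in forward_chain(facts, rules))  # True: consensus found
```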
Code and consciousness.
Finally, we look at the last paragraph of Tau's introduction at http://idni.org:-
Able to deduce consensus and understand discussions, Tau can automatically generate and execute code on consensus basis, through a process known as code synthesis. This will greatly accelerate knowledge production and expedite most large scale collaborative efforts we can imagine in today's world.
Since Tau is a logic blockchain that powers a semantic social choice platform, we can leverage it to have both small and large-scale discussions about program specifications, detect points of consensus, and even generate software in the process. Being able to go from discussions to the realization of decentralized applications would mean inclusive code development for the masses. It's also a unique addition to decentralization that no other blockchain project has even thought about.
Now that we may have come to a better understanding of Tau's emphasis on the use of logic in every part of its being, let's revisit the process description found in The New Tau to get closer to knowing what it really is about:-
We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them, language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism by a careful consideration of these aspects.
In short, Tau is a decentralized peer-to-peer network that takes the shape of a social choice platform, and it can become anything that we want it to be, for as long as it's expressible within the self-defining and decidable logics of FO[PFP] with PSPACE-complexity. This precise specification is required to satisfy the very definition of Tau as seen in the excerpt above. Tau is also intended to be a compiler-compiler.
This is taking application-generality in a completely different direction compared to blockchains that are built specifically with Turing-completeness in mind, like Ethereum. Relevant literature to check out: Finite Model Theory.
Understanding each other.
While it's all highly technical and difficult to grasp in one sitting, perhaps a better way to truly begin to understand Tau is to spend some time studying its main features. Or just wait for the product release. In any case, I will try to explore these topics in the future if my brain can still handle it:-
The more I think about Tau, the more I think that it is (poetically) a logical conclusion to the way the Internet works as a protocol. It even lives and breathes logic. Not just any kind of logic, but specifically logics that can define their own semantics and are decidable. Tau is intelligently designed to be a truly dynamic and ever-evolving blockchain.
When the Tau community intends to make changes to the network's code, rules or protocols, they will simply need to express these opinions and perspectives in a compatible language over the network. The self-defining logic of the Tau blockchain network will enable it to detect the consensus among these opinions and automatically amend its own code to reflect this consensus from block to block. Unlike the common method of voting, Tau's approach will take into account the perspectives of the entire community, where people will be free to vote and to propose what to vote on in real time. This unique ability of Tau is the only decentralized solution for creating a truly dynamic protocol.
Now you might think: Tau seems like a powerful tool but will it be too difficult to use for most people? There might be some learning curve involved for sure, and it'd be similar to learning a new language in the beginning. Those of us who learn to use it well enough to scale our discussions and collaborative works will likely gain a significant edge over those who are not using the platform. I'd imagine plenty of projects and communities around the world being able to overcome some of their obstacles in development through Tau. Hence, it may be fair to expect that market forces will gravitate towards the platform just like how we're all using the Internet these days.
Until the next post.
I've been thinking about Tau almost every day for the past many months, and I will admit that its deeper technicalities are still way out of my league, although I've made sure to word them as broadly as I can. If you like what I do, please consider sharing this post and voting for my witness account on Steem. For more info, check out my recent witness announcement post.
As always, thanks for reading!
Follow me @kevinwong / @etherpunk
Not to be taken as financial advice.
Always do your own research.
Tauchain 101: Essential Reading On One Of The Most Revolutionary Blockchain Projects Under The Radar. By Rok Sivante. Published on Steemit. August 3, 2018.
Amidst countless blockchain projects hyping themselves up as "the next big thing," there are a few that have been working under the radar that hold the promise - not in word, but in substance - of truly being revolutionary game-changers.
Such ventures have not often come into the spotlight yet. Partly, this is because their founders have focused first on the fundamentals of creating something that speaks for itself, versus the all-too-common approach of prioritizing sensationalistic marketing. And partly, it is because the degree of innovation they represent - in tandem with the complexity and scope of their larger visions and the implications of their success - does not always lend itself to an easy understanding upfront.
One such project - still very early on in its development, yet holding transformative potentials no less grand than those of Bitcoin and Ethereum as they birthed and evolved the blockchain landscape - is Tauchain.
Until recently, with the launch of a new website that has successfully managed to articulate the project's vision much more clearly, understanding what Tauchain is striving to accomplish was a domain only a very few highly intelligent, technically inclined readers dared to tread. And prior to December 2018, there was no code - only an unproven concept spearheaded by a single Israeli developer, Ohad Asor, whom nearly everyone who has managed to connect with him has declared to be one of the most brilliant geniuses they've ever met, possibly ahead of his time.
Just as Bitcoin introduced blockchain as an innovation radically altering the trajectory of our societal, economic, and technological evolution - and Ethereum followed suit, expanding upon the vision with entirely new capabilities for developing a range of decentralized applications and smart contracts - so too may Tauchain prove to be a platform of comparable success, whose impact may bring quantum leaps in the Blockchain Revolution.
How and where to start in describing Tauchain...?
Well, were we to begin with the technical side of things, we'd likely lose 98% of the audience. So perhaps a better starting point might be the bigger picture:
This generalized overview, however, still only barely scratches the surface.
While the intended end may be a generic concept enabling drastically increased efficiency in global collaboration, the means by which it is to be achieved entail a number of innovative component developments that each hold great significance and implications of their own.
While each may require deeper exploration to grasp and begin piecing together into the bigger picture, the Tauchain website now offers an overview of key features which account for just some of what differentiates it from other blockchain platforms - and which enable new collaborative capabilities not possible with currently existing technologies:
While it'd be possible to expand upon each in great detail - both in regards to their functionality and the implications of their applications - this particular piece of writing is to serve as a basic introduction to some of the best, most easily accessible content written on Tauchain to date.
And as we transition into that content, we shall begin with a quote summarizing the core essence of Tauchain, as approached from but one angle:
This project created by Ohad Asor is really ambitious and aims to create the internet of knowledge.
Some people would label it as Artificial Intelligence, but according to the creator this is something totally different. Summing up, so you understand me: Tau-chain is a tool that knows how to interpret any information and deduce any consensus. This tool can be used in any field - judicial, political, academic, social, scientific - and with no limits of assembly, from 2 people to a million, for example.
~ @capitanart, from "My experience with Tau-chain"
The collection begins with two selections from Steemit's @trafalgar.
If anyone has successfully managed to distill the essence of the Tauchain vision into words that'd serve as a foundational Tauchain 101 intro, it'd have been him in these two excellent pieces:
What Is Tau? - My Only Other Crypto Investment
The Power of Tau - Scaling the Creation of Knowledge
Next come three short articles from @flis, which may not go into any new details beyond the pieces above, yet offer a slightly different, simplified perspective to reinforce Tauchain's key concepts:
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building
How Tau-Chain can be implemented in practice
Tau Chain vs. Tezos - which platform will provide a better solution?
~ design credit: @voronoi
Next come a few selections from @dana-edwards, who has likely been the single individual most responsible for translating the highly complex technical vision of Ohad Asor into a more approachable form, from which non-academics may begin to better understand Tauchain.
Quite possibly the first to write about developments and share them outside of the project’s IRC channel and Bitcointalk thread, Dana has one of the most comprehensive grasps of the project publicized anywhere, and his writings continue to serve as bridges for more people to discover and deepen their own comprehension of the innovations Tauchain represents - not only for computer science and the blockchain revolution, but for cultural & societal evolution as well.
What follows is a collection of his writings related to the project, which excellently piece together key ideas and insights, from which the gaps may be filled in to grasp a firmer idea of just how significant these developments could be and what the bigger picture of their success might look like:
What Tauchain can do for us: Collaborative Serious Alternate Reality Games
What Tauchain can do for us: Finding the world's biggest problems
Tauchain: The automated programmer
Artificial morality: Moral agents and Tauchain
What Tauchain can do for us: Effective Altruism + Tauchain
Collaborative Alternate Reality Games + Tauchain = UBAs (Universal Basic Assets)?
Tauchain and Tezos, why adaptability is the key to surviving in a fast changing environment
My commentary on Ohad's latest blog post: "Agoras to TML"
The following three pieces are not introductory-level, and will likely require a background in computer programming to understand. However, for anyone reading who might be interested in diving deeper into the technical side of the project, they are included here:
Tauchain is not easy to understand but here are some concepts to know to track Ohad's progress
For all who are researching Tauchain (TML) to understand how it works, a nice video!
More on partial evaluation - How does partial evaluation work and why is it important?
~ design credit: @crypticalias
One other writer covering Tauchain needs to be mentioned: @karov.
While not the easiest to read and understand, the Steemit account of Georgi Karov is undoubtedly one of the most consistent sources of coverage on the project.
A lawyer by trade and currently one of the three members of the core team, @karov's insights into the project are reliably detailed, expansive into philosophical territory, and fascinating.
Although none of his articles have been included in this introductory collection, those interested in keeping up to date with coverage of the project would be well-advised to follow his Steemit blog - and/or read back through the last few months of his posts there, as the blog is nearly entirely Tauchain-related content.
Lastly, though not least:
Coming from one of Steemit's most brilliant early-adopter-minds, @kevinwong, this one is a quick read in itself with some key points worth factoring in to a proper assessment of the project. And - far lengthier than the post itself - the comments thread also contains some gold:
Is Tauchain Agoras in Good Hands?
And to wrap up with another excellent quote from design consultant to the project, @capitanart - who is another to follow for updates:
The goal of Tau is to create a supermind, to solve the limitations inherent in human communication on a large scale.
Able to deduce consensus and understand discussions, Tau can generate and execute code automatically based on consensus, through a process known as code synthesis. This will greatly accelerate the production of knowledge and streamline most of the large-scale collaborative efforts we can imagine in today's world.
~ design credit: @overdye
It's Thursday and I'm back, guys.
It's been a long time, but here I am again :)
This post's theme has been ripening in my head for a long time - something like since 2014.
Recently I got hold of some data to put together the stepping stones for turning my mere suspicion into more of a grounded conclusion.
The problem is that it has also been growing in width and depth with time, so here is a momentary snapshot or sketch-map of it, which I intend to elaborate on further.
I'll start by firing off two slogan-missiles which constitute a super-compression of lotsa research and which will be revisited soon in separate series of articles.
Trust is Force
''you trust 'em only as much as you can make 'em to...''
Money is Mnemonics
Yes, precisely THIS is the core essence and function of ANY monetary system - even the primordial barter one, with its naturally emerging special tokens to mitigate its intrinsic exponential wall of unscalability - to account for, or remember, human activity. That is, money is always work to prove work. Basically, we need to remember because transactions cannot all happen simultaneously.
Which I have already gone over ... and, I beg your pardon - three, not two slogans. The third one is:
Law is Between, Code is Within
I will explain later what I mean and how it ties in with the former two. In a nutshell, it is about enforceability as the essential characteristic of all law; for now I will just hint that the reason why Force (coercion) is deemed fundamentally non-decentralizable is the Pauli exclusion principle, which is kinda a ''location conservation law''.
You already know my taste for epistemological 'archaeology'; that's why I think it is better to carry the story on in chronological order.
Back in 2014 I stumbled upon a series of extremely astute and deeply thought-out articles on the costs of several well-known monetary systems in comparison with Bitcoin, which had just grown enough to become visible to the unaided eye.
I remember I discovered these great articles by the obviously great Hass McCook in the wake of the MtGox boom-and-bust aftershock, when huge anxiety about the 'wastefulness' of Bitcoin mining reigned over public sentiment. (It happens every time the price nears the production cost.)
My search, which hit upon those articles, was driven by the quite legitimate question:
''If crypto is wasteful, then how much does traditional fiat cost us, god damn it?''
Well, the comparison turned out, as I suspected, not at all in favor of either the quite recent demetalized, fractalized-centralized, double-entry bookkeeping debt mnemonics of the banknote monetary system, or the millennia-old 'heavy metal' single-entry money, where physical possession of gold/silver denotes your purchasing power...
And it occurred that it was not at all just about the costs of mining, refining, casting, ink, printing presses, storage, accounting, counterfeiting countermeasures, ... the bill to pay also includes all the social infrastructure and capital devoted to making the system work, and to keeping it ticking ...
Essentially, everything which is known as ... government. All its buildings, all its salaried humans, all their guns, pens, pensions, courts, judges and bailiffs ... everything.
All of that is needed in order for a common Ledger to be built, maintained, broadcast and kept. The difference between government and governance is obvious - the former is the means to an end, the latter is the end. The former is the machine, the latter is the function.
Here is the place to insert three other quick notions which are in the pipeline for revisiting and furnishing with separate articles:
Firstly, Mnemonics is subject to big evolutionary/developmental forces, just like anything else in the combinatorial explosion which the universe, nature, and society are ...
You noticed above the notion of money's emergence kinda coinciding with writing? The Sumerian example.
Writing is a mnemonics amplifier - just like combustion engines are transportation boosters.
The better the memory and memory-sharing system we have at our disposal, the better money we have.
Money is technology.
Secondly, any bookkeeping - regardless of whether we write by hand on a cave wall or papyri, by blade on a wooden stick, or by the most sophisticated laser-quantum methods on the most sophisticated multi-dimensional crystals - is, yeah, a function of writing. We can go even further and state that illiterate verbal folklore - the only thing we had for millions of years - is a form of verbal writing onto each other's short-term/long-term memories, just as photography and sound recording are.
The important thing to note here is that, in the light of this ''Money is Mnemonics'' spell of mine, accountancy systems possess a cardinality of entries.
And it seems that the mega-trend is:
''the more entries handled = the better our money is''
The fiat system - monetary and overall - is double-entry based and relies upon imported trust; blockchain is triple-entry, and trust is built in. Blockchain is not 'trustless' but 'autotrophic' with regard to trust.
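To make the entry-cardinality point concrete, here is a back-of-the-napkin sketch (entirely my own illustration, with a hash standing in for a real cryptographic signature - not any standard ledger implementation) of the triple-entry idea: the third 'entry' is one shared, tamper-evident receipt.

```python
import hashlib

# Single-entry: one private notebook. Double-entry: two books that must
# balance against each other. Triple-entry: payer, payee and the ledger all
# hold the SAME sealed receipt, so trust in the record is built into the record.
def sealed_receipt(payer, payee, amount):
    body = f"{payer}->{payee}:{amount}"
    seal = hashlib.sha256(body.encode()).hexdigest()[:16]  # signature stand-in
    return (body, seal)

shared = sealed_receipt("alice", "bob", 10)
print(shared)  # the identical record sits in Alice's, Bob's and the ledger's books
```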
The third notion turns us back on track with the main theme of this article: the notion of mutual entropy.
The Ledger, no matter which tech it runs on, has as its purpose to define how individual people's activity has to be limited for the sake of collective cooperation and collaboration.
The Ledger - product of the particular kind of Mnemonics in play - literally SHAPES and MAKES the society.
As a kind of Sorites or Holon or Mereological ... generator.
NOW, which costs more? Which is the most wasteful of all the known Ledger or Mnemonic or Monetary systems?
Literally a couple of days ago I stumbled upon ''The $29 trillion cost of trust'' from 24 Jul 2018 by Sinclair Davidson, Mikayla Novak and Jason Potts, which made this long-in-the-making article finally come out.
Now I have finally put my eyes on some numbers to juggle with.
The ecumenical or midgardic GDP is evaluated at roughly ~$100t p.a., rounded up.
There is lots of well-grounded criticism of the ability of the present-day fiat financial system to actually encompass and measure it all - but let's take this conveniently round figure for global GDP.
The total wealth is ~a quarter of a $Quadrillion (giving a total average depreciation/consumption rate of over a third per year).
GDP evaluates the dynamic part. The work.
Almost 1/3rd of all work is devoted to accounting for, or proving, the work!
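For the record, the arithmetic behind that exclamation, taking the two headline figures at face value:

$29 trillion (cost of trust) / $100 trillion (global GDP) ≈ 0.29, i.e. just under 1/3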
Visualize the fiat system as a primitive, primordial, antediluvian or precursor form of PoW.
Funnily enough, this ~1/3rd global proof-of-work, or mnemonic, or governance cost strangely coincides with the energy budget of the brain as a fraction of the total energy a human body dissipates to live.
The last two pieces of research argumentation to close the topic are:
I'm truly impressed by the depth of these two documents - each sentence is backed by several book volumes of profound research.
Paul Sztorc convincingly demonstrates that PoW is the most efficient protocol for decentralization or 'trustlessness'. It appears that 'PoW is the cheapest' not only among blockspace, but the cheapest everywhere and everywhen.
Mr. Game and Watch calculates that if, in the present-day ~$100-trillion global economy, there were nothing but Bitcoin as a form of money, a single BTC would be worth millions of dollars.
''Banknote waste differs from other types of monetary waste in that it is much harder to perceive, by virtue of the complex nature of banknote creation. In contrast, Bitcoin mining directly consumes electricity, and gold mining obviously requires engineers, machinery, armed guards and so forth. At first glance, it seems incredible that impoverished hunter-gatherers would devote some of their precious time to the manufacture of silly beads and shells and other collectibles. And, it seems wasteful indeed, that we humans use our powerful brains primarily to obsess over what other people think of us. All of these activities are wasteful, in a narrow sense, but in a broader sense they maintain the infrastructure required to promote and sustain cooperation. These are social activities – we engage in them because we are not alone.''
Apparently, a monetary system which requires humans in order to function is unscalable - in the pre-Tau era, that is. It is far easier, and unlimited in capacity, to grow our electricity and machinery resources than to replicate humans.
Intuitively, the lower the Cost of Trust, the stronger the society, the bigger and faster-accelerating the growth of the economy, and the higher the affluence and wealth.
If, hypothetically, the Cost of Trust were zero, would the value of the economy be infinite?
The endogenous automation of the production and distribution of trust which the blockchain enables lowers the cost of trust by many orders of magnitude compared with the present hand-driven system. (As an example - 'payment channels', which Satoshi himself posited, and the Lightning Network and the like promise transaction costs hundreds of thousands of times smaller, all internal to the trustless environment of the blockchain, without relying upon human work to prove work ...)
In the end, what does Tauchain have in common with all of that?
Well, lotsa things. I'm light years, if not infinitely far, from any generalization and systematization, but here is an improvised list ... of questions:
Please, you continue ...
 - https://www.thoughtco.com/clay-tokens-mesopotamian-writing-171673
 - http://www.ancientpages.com/2017/07/08/intriguing-sumerian-clay-tokens-ancient-book-keeping-system-used-long-writing-appeared/
 - https://arxiv.org/abs/1703.02572
 - https://steemit.com/tauchain/@karov/scaling-is-layering
 - https://steemit.com/tauchain/@karov/tauchain-transcaling
 - http://www.behest.io/ & https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://en.wikipedia.org/wiki/Pauli_exclusion_principle
 - https://en.wikipedia.org/wiki/Conservation_law
 - https://steemit.com/bitcoin/@karov/bitcoin-retrodictions
 - https://steemit.com/blockchain/@karov/geodesic-by-tau
 - https://www.coindesk.com/microscope-true-costs-gold-production/
 - https://www.coindesk.com/microscope-real-costs-dollar/
 - https://www.coindesk.com/microscope-true-costs-banking/
 - https://www.coindesk.com/microscope-economic-environmental-costs-bitcoin-mining/
 - https://thebitcoin.pub/t/under-the-microscope-conclusions-on-the-costs-of-bitcoin/44457
 - https://en.wikipedia.org/wiki/Mt._Gox
 - https://oracletimes.com/mt-gox-bitcoin-whale-trustee-seized-selling-bitcoin-btc/
 - https://steemit.com/tauchain/@karov/tauchain-the-hanson-engine
 - https://steemit.com/tauchain/@karov/tauchain-as-szabo-booster
 - https://winklevosscapital.com/money-is-broken-but-its-future-is-not/
 - https://en.wikipedia.org/wiki/5D_optical_data_storage
 - https://en.wikipedia.org/wiki/Single-entry_bookkeeping_system
 - https://en.wikipedia.org/wiki/Double-entry_bookkeeping_system
 - https://bitcoinmagazine.com/articles/triple-entry-bookkeeping-bitcoin-1392069656/
 - https://en.wikipedia.org/wiki/Autotroph
 - https://en.wikipedia.org/wiki/Mutual_information
 - https://en.wikipedia.org/wiki/Sorites_paradox
 - https://en.wikipedia.org/wiki/Holon_(philosophy)
 - https://en.wikipedia.org/wiki/Mereology
 - https://medium.com/@cryptoeconomics/the-29-trillion-cost-of-trust-be8ffbd5788d
 - https://en.wikipedia.org/wiki/Ecumene
 - https://en.wikipedia.org/wiki/Midgard
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://en.wikipedia.org/wiki/Proof-of-work_system
 - http://www.pnas.org/content/99/16/10237
 - http://www.truthcoin.info/blog/pow-cheapest/
 - https://www.scribd.com/document/354688866/Bitcoin-A-5-8-Million-Valuation-Crypto-Currency-and-A-New-Era-of-Human-Cooperation
 - http://www.truthcoin.info/blog/blockspace-demand/
 - https://steemit.com/blockchain/@karov/tau-through-the-moravec-prism
 - https://steemit.com/tauchain/@karov/masa-effect-with-tauchain
 - https://steemit.com/tauchain/@karov/tutor-ex-machina
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://bitcoin.org/bitcoin.pdf
 - https://lightning.network/
 - https://steemit.com/tauchain/@karov/tauchain-in-the-algoverse
 - http://www.juliansimon.com/writings/Ultimate_Resource/ & https://orionsarm.com/fm_store/Population.pdf
 - https://en.wikipedia.org/wiki/Cybernetics & https://en.wikipedia.org/wiki/Control_theory
The power of ambiguity and of ambiguity minimization in communication. By Dana Edwards on Steemit. June 1, 2018.
Formal communication benefits from ambiguity minimization.
So what exactly do I mean by formal communication? Well, when we think of how human beings communicate with machines, it is in a formal language. This formal language requires minimized ambiguity for security analysis (how can we analyze code if we cannot reliably interpret it?). The other problem is that machines require that, for example, if... then... else and similar conditional statements be well defined and unambiguous.
Is it possible to show that a grammar is unambiguous?
To show a grammar is unambiguous you have to argue that, for each string in the language, there is only one derivation tree. This is how it would be done, theoretically speaking.
In computer science, an ambiguous grammar is a context-free grammar for which there exists a string that can have more than one leftmost derivation or parse tree, while an unambiguous grammar is a context-free grammar for which every valid string has a unique leftmost derivation or parse tree. Many languages admit both ambiguous and unambiguous grammars, while some languages admit only ambiguous grammars.
Specifically, we know that deterministic context-free grammars are always unambiguous, so we know unambiguous grammars exist. The practical strategy, then, is ambiguity minimization with regard to formal languages (such as computer programming languages).
For computer programming languages, the reference grammar is often ambiguous, due to issues such as the dangling else problem. If present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so the overall phrase grammar is unambiguous. The set of all parse trees for an ambiguous sentence is called a parse forest.
The parse forest is an important concept to note: the set of all possible parse trees for an ambiguous sentence. This concept is key to understanding the strategy of ambiguity minimization. So we can in practice minimize ambiguity, and we know for certain that deterministic context-free languages admit an unambiguous grammar - but what does that mean? What are the benefits of unambiguous language in general?
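To see ambiguity and the parse forest concretely, here is a minimal sketch (my own toy example, not production parser code) that enumerates every parse tree of the grammar E -> E "+" E | "1" for a given token list; more than one tree means the grammar is ambiguous:

```python
# Enumerate the parse forest for the ambiguous grammar  E -> E "+" E | "1".
def parses(tokens):
    """Return every parse tree of `tokens` under E -> E "+" E | "1"."""
    if tokens == ["1"]:
        return ["1"]                       # the lone terminal derivation
    trees = []
    for i, tok in enumerate(tokens):
        if tok == "+":                     # try each "+" as the root operator
            for left in parses(tokens[:i]):
                for right in parses(tokens[i + 1:]):
                    trees.append(("+", left, right))
    return trees

forest = parses(["1", "+", "1", "+", "1"])
print(len(forest))   # 2 -- two distinct parse trees, so the grammar is ambiguous
for tree in forest:
    print(tree)      # ('+', '1', ('+', '1', '1')) and ('+', ('+', '1', '1'), '1')
```

Adding a precedence or associativity rule - say, declaring "+" left-associative - is exactly the kind of context rule the quote above describes: it prunes the forest down to a single tree.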
A benefit of ambiguity minimization
Simple English is a form of controlled English designed to minimize ambiguity in English. This is important because codifying rules or writing laws in Simple English puts them in a form where there is less of a computational expense (in brain power) to process and interpret the statements.
@omitaylor commented on one of my older blog posts, and in one of her own later posts she asked about the topic of love. Specifically, her post was titled: "What Does LOVE Mean To YOU"
Her post highlights the fact that there are different love languages and that we don't all speak the same love language. Ambiguity here is actually not a good thing: when someone speaks about love, how do we know they are talking about the same thing? As a result we often seek an agreed-upon or formally defined "love concept" where we all agree it's love. This is not trivial to find, and as a result a topic like love is not easy to discuss in any serious manner. Unambiguous communication - or, to be more precise, minimized ambiguity - would allow Alice to discuss the topic of love with Bob in a way where they both know exactly what the other is referring to in terms of behavioral expectations, emotions/feelings, etc.
If Alice agrees to love Bob, then Bob has no way to determine what Alice means unless he and she agree on a mutually defined concept of love. This highlights how agreement requires very good communication, and how minimizing ambiguity can be beneficial, at least in this example.
Ambiguity minimization makes sense when you are following a principle of computational kindness. That is, if Alice would like to reduce the computational burden on Bob, she can minimize the ambiguity of her sentence. For Bob to interpret an ambiguous sentence, he must in essence sort all possible interpretations of that sentence from most likely to least likely - and before he can even sort, he must first search to find all possible, or at least plausible, interpretations.
This is very computationally expensive for Bob but very cheap for Alice. Alice knows exactly what she means but Bob has no clue what Alice REALLY means.
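As a rough illustration of that asymmetry (the numbers here are hypothetical), the listener's search space grows exponentially with the number of ambiguous words, while the speaker's effort stays flat:

```python
# Hypothetical illustration of computational kindness: with n ambiguous
# words, each allowing k plausible senses, the listener faces up to k**n
# candidate readings to search and rank, while the speaker considers one.
def candidate_readings(ambiguous_words, senses_per_word):
    return senses_per_word ** ambiguous_words

print(candidate_readings(1, 3))   # 3 readings -- cheap for Bob
print(candidate_readings(8, 3))   # 6561 readings -- Alice saved effort, Bob pays for it
```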
A benefit of ambiguity
There are other examples where increasing ambiguity could be beneficial, such as when the communication is less than formal, or when sharing a stream of consciousness without turning it into a formal communication. Humor, for example, rides on ambiguity, and a good joke may have multiple layers. Art also leverages ambiguity because it's perhaps meant to be interpreted 20 different ways, all to produce a certain desired effect.
Ambiguity allows more meaning to be packed into fewer words - in a sense, a sort of compression scheme. If a sentence has multiple possible meanings, the levels or meanings are still finite. It's a fixed number of meanings, so theoretically speaking a search can be conducted. In fact this is what a human being does when interpreting natural language where a sentence can have multiple meanings (they search for all possible interpretations of that sentence). The problem is that this process is computationally expensive, at least for a human being trying to figure out all possible interpretations of a sentence.
Lawyers, when they do their work, are working with a specific knowledge base of common legal sentences and the common interpretations known in their profession; the rest of us might see a sentence in lawyer-speak and not really know what it means, because we do not know the common interpretations. This is a big problem, of course, because to form agreements between two parties, both parties need a common understanding (a kind of knowledge-symmetric understandability) allowing them both to interpret the same sentence to mean roughly the same thing.
Masa. Masayoshi Son. The master of SoftBank. The Japanese national of Korean background - a really great achievement in that context! The individual with, I suspect, the biggest buying power in all of human spacetime combined - in the world and in history.
Masa's business record is formidable. He's not just a serial and parallel multi-billionaire but a multi-billionaire-breeder - for example, he's THE Jack Ma backer, i.e. THE Alibaba maker. And many more besides ...
He's buying pieces of Google! $32b cash for ARM, an undisclosed number of $b in cash for Boston Dynamics. Et cetera. And Masa definitely knows what he's doing with these bits and pieces - what mosaic he's building with those chunks.
Masa has a vision. A yuuuge vision. Masa has a Vision Fund. So, the vision is fully backed. Backing is what distinguishes a vision from fantasy. The SoftBank Vision Fund's current minimum check size is $100m, by the organization's own rules.
With >$100b of shopping-spree cash in pocket (and we're talking cash, not lower-liquidity assets) and a yuge vision, the already yuge Vision Fund is set to get even yuuuger. 'Cause - you know - trillions are the new billions (and it is not 'just inflation' but absolute, sheer power - productivity beats inflation).
His vision on the philosophic level is, in a nutshell, Vernor Vinge's, Hans Moravec's, Raymond Kurzweil's (and countless others') ... SINGULARITY.
On the pragmatic level it is as simple as it is ingenious - machine productivity and production grow so immense that, inevitably and soon, their output/supply exceeds the cumulative human demand. The machines run out of market!
Solution? As obvious as Frederik Pohl's Midas Plague (1954) - machines doing business with machines (from about minute 09:00 of the vid onwards). Many orders of magnitude more machine-machine collaboration than all the possible machine-human, human-machine or human-human kinds. Trillions and trillions of transhuman chips and bots doing business with each other.
And Masa doesn't just advocate or evangelize this vision behind his Vision - he does it. Now.
In the narrow-minded aspect, it is just a matter of (a little) time before Masa notices my precious Tau and ET3 (which, as I told you, I see as 1, not 2 - explanations to be delivered in future posts).
From a wide-minded perspective ... Well ...
Do you see what I see?
Chatbots porting into Tau.
Masa's chips or bots are in a Moore's-law state of inevitability, i.e. doomed to cross the human-scale barrier and rush even further ahead - to crack even the human natural-language code barrier and do all that a human can do, and more. (On a human-machine-Tau-machine-human sandwiching architecture for direct use of the few-megayears-thin natural-language wealth, and even the few-gigayears-deep non-verbal communication capital - some other time, in some other posts.)
Machine-Tau-Machine is a completely legitimate and unavoidable use and dev mode. Nothing can stop it. (A better Turing Test, anyone?)
In my previous post I explained my understanding of the ingenuity of Ohad's approach towards the Moravec-hardness problem of the human condition - the realization that it is a waste and a side-track to follow the dehumanizing pathways of creating biomimetic cybernetic homunculi to mitigate the limited organic human specifications; instead we use those specifications as they are - Tau is the way for the problem to become the solution. We put to use all the processing and algorithmic capital accumulated over billennia into what we call human.
Is the Tau way on a divergent course from the Masa way? No! Absolutely not.
To make chips or bots of >x100 and >>x100 Einstein intellect is a huge collaborative effort. For machines alone, it'd take a few billion man-years to get there. Humans are needed - to serve as the fulcrum of the effort-amplifying lever.
Tau, with its human-machine-human network topology, makes collaboration - for the first time ever - a really P2P thing, with a social diameter of 1 or even <1 for each and every participant, no matter whether human or machine.
- Tau is a Masa-vision accelerator.
- Tau is the geodesic Agora of all intellects imaginable, no matter 'natural' or 'artificial'.
NOTE: Ohad will most probably disagree with this vision-of-visions-on-visions of mine, but I dared to dare already anyway. Sorry, bro. It is, of course, not an official Tau Team position.
 - https://en.wikipedia.org/wiki/Masayoshi_Son
 - https://en.wikipedia.org/wiki/SoftBank_Group
 - https://en.wikipedia.org/wiki/Koreans_in_Japan#Integration_into_Japanese_society
''Thinking by Machine: A Study of Cybernetics''
by Pierre de Latil 
Published by Houghton Mifflin Company in 1957 (c.1956), Boston.
A foreword by Isaac Asimov (then only 36 years old)! A recommendation by the legendary mathematician and cyberneticist Norbert Wiener (then 62 years old)! ... A true jewel! The book is described as:
A review of "the last ten years' progress in the development of self-governing machines," describing "the principles that make the most complex automatic machines possible, as well as the fundamentals of their construction."
The nineteen fifties!! Midway between the first digital computer, made by my half-compatriot John Atanasoff, and the internet. Almost a human generation span between the former event, the book, and the latter. An epoch so deep in the past that even television, air travel, rockets and nukes ... were young then.
The same Kondratieff wave phase, btw, which hints at the historical rhyming of socially important intellectual interests. (On how K-waves imprint on the humanity growth curve - in a series of other posts to come.)
I must admit here that I've never put my hands and eyes on this book. But it is stamped into my mind and memory by Stanislaw Lem - one of the greatest philosophers of the XXth century, working in the disguise of a sci-fi writer, having been caught on the wrong side of the Iron Curtain.
''Summa Technologiae'' (1964) is a monumental work of Lem's, where most of the issues discussed sound more contemporary nowadays than they did more than half a century ago when it was written - and for many of its topics we are still in the deep past ...
... Lem reports and discusses the following from the aforementioned Pierre de Latil book:
''As a starting point will serve a graphic chart classifying effectors, i.e., systems capable of acting, which Pierre de Latil included in his book Artificial Thinking [P. de Latil: Sztuczne myślenie. Warsaw 1958]. He distinguishes three main classes of effectors. To the first, the deterministic effectors, belong simple (like a hammer) and complex devices (adding machine, classical machines) as well as devices coupled to the environment (but without feedback) - e.g. automatic fire alarm. The second class, organized effectors, includes systems with feedback: machines with built-in determinism of action (automatic regulators, e.g., steam engine), machines with variable goals of action (externally conditioned, e.g., electronic brains) and self-programming machines (system capable of self-organization). To the latter group belong the animals and humans. One more degree of freedom can be found in systems which are capable, in order to achieve their goals, to change themselves (de Latil calls this the freedom of the "who", meaning that, while the organization and material of his body "is given" to man, systems of that higher type can - being restricted only with respect to the choice of the building material - radically reconstruct the organization of their own system: as an example may serve a living species during biological evolution). A hypothetical effector of an even higher degree also possesses the freedom of choice of the building material from which "it creates itself". De Latil suggests for such an effector with highest freedom - the mechanism of self-creation of cosmic matter according to Hoyle's theory. It is easy to see that a far less hypothetical and easily verifiable system of that kind is the technological evolution. It displays all the features of a system with feedback, programmed "from within", i.e., self-organizing, additionally equipped with freedom with respect to total self-reconstruction (like a living, evolving species) as well as with respect to the choice of the building material (since a technology has at its disposal everything the universe contains).''
A longish quote, but every word in it is worth it. When I read this as a kid back in the 1980s ... the next, seventh, logically higher effector class immediately came to my mind: the worldmaker!!
The degrees of freedom of all the previous six classes, according to de Latil's classical taxonomy, are confined by the rule-set - the local laws of physics.
They are prisoners of a universe - like birds incapable of reconfiguring their cage into a roomier and cozier one.
If we regard the laws of nature as code or algorithm, my 7th-level effector will be capable of drafting and implementing itself onto newer and stronger algorithmic foundations. (Note the seamlessness between computation and robotics in the de Latil/Lem categorization construct - quite logical indeed, bearing in mind that software is a state of hardware, and that matter-form-action are inextricable from each other; but on this in a series of other times and posts ...). Without bound?
So, I wonder:
Where, you reckon, is Tauchain placed on de Latil's effectors map?
Hans Moravec is the patriarch of robotics. The real one, not the sci-fi father - Asimov was just the prophet in this scheme of things.
Moravec is to Kurzweil what Bitcoin is to Ethereum, and Satoshi to Vitalik.
Sorry for the rough joke. No offence, Ray! Back in the early 2000s I bought your books too.
In my humble opinion - aside from the ''reality intratextualization'' concept - the other wisdom jewel of Moravec's, fruit of a life devoted to robotics, is Moravec's Paradox.
Explained in his own words:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
or, in Steven Pinker's words:
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived...
As I noted in a previous related post of mine, a system's value dynamics is all about how it scales. Preferable, of course, are systems which make more good go around rather than less - and, respectively, come around.
Humanity is a network, and its scaling is hampered by our innate attentional resource limitations.
Human social interaction is a skill, and we naturally have only so much of it.
For now, in the good old hierarchical way, we can't deny that we scale satisfactorily well (compared, let's say, to our DNA-blockchain fork-out first cousins, the chimps) for collaborating efficiently on the successful execution of trivial tasks like empire building or the colonization of the Galaxy.
But not all the problems we encounter are simple. In fact, most problems are more complex than we are capable of grokking and mastering in the hierarchical collaboration mode, which quickly slams into Shannon's 'brick wall'.
Ohad Asor's Tau is intended to be a humanity upscaler. This project is the first and only one I've discovered so far where this so-obvious (once you know it) problem is even identified, stated and addressed.
This means uplifting the individual humans too, because we are literally AIs serially manufactured by our society (cf. feral children).
It feels easy for us to attend, to remember, to forget, to think, to talk, to work together - so it is extremely Moravec-hard!
Tau is a unique approach to the Moravec-hardness of these problems, in the realization that we do not need to waste time and resources mimicking nature, copying ourselves and creating high-tech homunculi.
The 'problem' is the solution. Don't 'solve' it - just god damn use it!
It is the people who ask questions, upload statements, express tastes and do all that qualia  crap humans usually do.
The machine distills the semantic essence of all the shared thought flow, treats it as wish specs, and automatically converts it into executable code, including its own code's self-amendment.
As Moravec found out a few decades ago:
The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, suggesting that matching overall human behavior will take about 100 million MIPS of computer power.
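The implicit step in that estimate - on Moravec's own figures, which peg the retina's processing at roughly 1,000 MIPS - is a straightforward multiplication:

100,000 (brain-to-retina ratio) x 1,000 MIPS (retina) = 100,000,000 MIPS, i.e. 100 million MIPS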
When these processing brain-things are really put together in numbers, the result is unprecedented power. An unstoppable force. A glimpse into it, from Ohad:
It turns out that under certain assumptions we can reach truly efficiently scaling discussions and information flow, where 10,000 people are actually 100 times more effective than 100 people, in terms of collaborative decision making and collaborative theory formation. But for this we'll need the aid of machines, and we'll also need to help them to help us.
Without applying dehumanizing individual upgrades, without needing to understand and re-engineer the billions of years of evolutionary capital - just harness it and use it. (Scaling itself must be scalable too, ah?)
In my personal, up-to-date, limited understanding, it seems that it is indeed HUMANITY that is to be known as Tau's 'Zennet Supercomputer', and the machines are the ... collab-amplifier media, the 'internet' of it. (Ohad, correct me if I'm wrong, please.)
Like laser configurations of minds.
With performance stronger than thought.
NOTE: I have the honor to be in the Tau Team, but all reflections in this post are personally my opinion.
Tau Chain vs. Tezos - which platform will provide a better solution? By Isar Flis. Posted on Steemit. February 10, 2018.
In this article I would like to discuss the self-amending feature of Tau Chain (Tau), which I believe provides a better solution than the one proposed by Tezos.
A short summary about Tau
Tau will be a blockchain-based computer network aimed at supporting collaboration between people. It will be designed like any other social network you know (Facebook, Twitter, etc.); but on Tau, users can interact with each other using machine-comprehensible languages. Specifically, advanced users will be able to define new knowledge-representation languages simply by translating them into Tau’s metalanguage (TML). As the languages use logic, they will be understandable by both machines and humans.
Since Tau can “understand” the entire conversation, it can also translate the discussions into various languages and discover where people agree or disagree; then, it may present the content of the conversation in different forms (languages or formats) for each user, based on specific requests.
The ability of Tau to logically understand discussions (as it will be translated into its TML) will assist users in four important ways:
*For further information about Tau, please refer to my previous article, explaining Tau and its four-step roadmap.
“Tau, is a discussion about Tau”
Tau is a social platform that will assist users with writing and amending code based on users' discussions about a computer program. But Tau is itself a computer program. Therefore, by discussing Tau, users will be able to amend Tau whenever they (the community) reach an agreement about changing Tau’s protocol.
When Ohad Asor, the founder and developer of Tau Chain, mentioned that “Tau, is a discussion about Tau”, he meant that Tau is what the community decides when they discuss Tau. Meaning, when the community faces a decision, such as what Tau’s block size should be, they will just need to express their opinions and perspectives, as we do today on social networks. Tau will organize the conversation in an efficient way to promote a solution that represents what the community desires. As such, Tau will be the only dynamic decentralized social network.
Why is Tezos developing only a short-term solution?
You probably remember Tezos as one of the biggest ICOs in history, when they raised $232 million (when BTC price was ~$2,500). Like Tau, Tezos is also a dynamic protocol that can change itself based on users' agreements. Tezos considers voting to be the optimal solution to reach a decision between users.
Voting is a good method to include a large number of people in the decision-making process; however, voters have limited influence, as they can only choose between a few solutions/options presented to them. Who will decide when and why the community will vote? Who will decide what solutions the community can vote for? Tezos’ solution is still centralized and is only viable in the short-run. What will happen if some users do not agree with a specific vote? Does that mean that a Tezos fork is inevitable?
Without considering the perspectives of the entire community, we will not be able to reach a decentralized decision that benefits all users. Tau’s ability to scale discussions is the only decentralized solution for creating a truly dynamic protocol. Tau will enable all users to express their opinions by simply discussing or communicating their views. Users will decide when and what to discuss, and Tau will change its protocol based on users' agreements. Thus, Tau will be able to utilize all data in the decision-making process - data that is usually wasted when holding a vote.
To make it more tangible, think about the difference between discussing with your family which movie you’re going to watch and receiving a list of two movies to choose from. The latter might not reflect your taste in movies or how you want to spend your time. This is a low-scale analogy for Tezos’ voting solution. Tezos might provide a solution, but the solution is not optimal. When encountering a large-scale decision, the protocol will be changed based on the vote, but the minority might reject the vote and fork the coin.
Under Tau, the protocol will detect the core consensus among the different perspectives and change accordingly. With the assistance of Tau and its knowledge, users will effectively discuss among themselves how to reach further consensus points. With every consensus point, Tau will change itself accordingly.
*As the community members decide how Tau will be developed, they can adopt majority rule (or a higher bar) as a decision rule. Tau will automatically detect the different perspectives of the community members and will execute their decision to change Tau’s protocol.
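As a toy model of what detecting consensus under such a decision rule could look like (my own sketch over plain strings - Tau itself is meant to reason over logical statements, and this is not its actual algorithm):

```python
from collections import Counter

# Toy consensus detection: each user endorses a set of statements; a
# statement is "consensus" when the fraction of users endorsing it clears
# the community's chosen decision rule (majority by default).
def consensus(opinions, threshold=0.5):
    counts = Counter(s for stmts in opinions.values() for s in stmts)
    n = len(opinions)
    return {s for s, c in counts.items() if c / n > threshold}

opinions = {
    "alice": {"block_size=2MB", "fee=low"},
    "bob":   {"block_size=2MB", "fee=high"},
    "carol": {"block_size=2MB", "fee=low"},
}
print(consensus(opinions))        # {'block_size=2MB', 'fee=low'} under majority rule
print(consensus(opinions, 0.9))   # {'block_size=2MB'} -- a higher bar keeps only unanimity
```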
Another important aspect of Tau (compared to Tezos) is the fact that Tau will present its users with output about all the network input. All the data/opinions/information that users provide during their discussions will be accumulated into the knowledge archive. Tau will utilize its knowledge to provide its users with better access to qualitative and quantitative information. Over Tau, the proposals that users raise (such as suggestions to change the protocol) can be as wise as the information contained in the entire network.
I will end this article by quoting the last paragraph in my first article:
"I foresee huge potential for this project and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram, Facebook, LinkedIn and Reddit, or read Ohad’s blog for further information."
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
Ohad Asor the lead developer and founder of Tauchain releases first new blog post in over a year. By Dana Edwards. Posted on Steemit. December 30, 2017.
The new blog post, titled "The New Tau", is available for everyone to read. The blog post speaks on the critical topic of collaborative decision making. This is a topic which I myself have been interested in, and Ohad's solution is different from the usual ones. In my own thinking I was considering a solution based on collaborative filtering, but I realized this would never scale. I then considered a solution based upon IA (intelligence amplification) by way of personal preference agents; this does scale, but requires that the agents have a lot of data to truly know each user and their preferences. The solution Ohad Asor comes up with attempts to solve many of the same problems, but his solution scales without seeming to require collaborative filtering or any kind of voting as we traditionally think about it.
Let me list some of the obvious problems with voting which many will recognize from Steem which also relies on collaborative filtering:
Now let's see what Ohad Asor has to say:
In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But on large communities, like a country, we can only think of everyone having a right to vote to some limited number of proposals. We reach those few proposals using hierarchical (rather decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost empty from the vast information gathered in the process. Yet, we don't even dare to imagine an equal right to propose just like an equal right to vote, for everyone, in a way that can actually work. Indeed how can that work, how can a voter go over equally-weighted one million proposals every day?
This in my opinion is very true. In reality we have discussions, and at best we seek to broadcast or share our intentions. Intent casting was actually the basis for how I thought to solve this problem of social choice, but I would say intent casting, even with my best ideas, would not have been good enough, because again the typical voter would be uninformed. Unless the typical voter can be educated continuously (which in a complex world may be unrealistic), or the network itself can somehow keep the voter up to date, intent casting barely works. It works well for shopping, where a shopper knows what they want, but not so well when a person doesn't actually know what they want and merely knows what they value. Values are the basis for morality, for ethical systems, and this is the area where Ohad's solution really shines.
Tauchain has the potential to scale not only discussions but also morality, because it will have the built-in logic to make sure people can be moral without constant contradiction. The truth is, without this aid, the human being cannot, in my opinion, actually be moral in decision making, due to the inability to avoid all sorts of contradictions.
All known methods of discussions so far suffer from very poor scaling. Twice more participants is rarely twice the information gain, and when the group is too big (even few dozens), twice more participants may even reduce the overall gain into half and below, not just to not improve it times two.
This is the conclusion that Ohad and I reached separately, and it still holds true: we require the aid of machines in order to scale collaborative decision making. This in my opinion is one of the major difference-makers, philosophically speaking, between the intended design and function of Tauchain and every other crypto platform in development. This also, in my opinion, is going to be the difference-maker for the community which Tauchain as a technology will serve, because it will enable the machines and humans to aid each other for mutual benefit, or symbiosis.
The blog post by Ohad Asor brings forward a very important discussion which has many different angles to it. The angle I focused on with regard to the social choice dilemma is the problem of how we scale morality. In my opinion, if we can scale morality in a decentralized, open source, truly significant manner, then nothing stands in the way of absolute legitimacy, mainstream adoption, and with it a very high yet fairly priced token. The utility value of scaling morality is, in my opinion, higher than just about anything else we can accomplish with crypto tech and AI. If the morality is better, then the design of future platforms will be greatly improved in terms of how the users are treated, and this in itself could, at least in my opinion, help settle the debate about whether AI can remain beneficial over a long period of time. I think if we can scale morality in a decentralized way, it will make it easier to design and spread beneficial AI. Crypto-effective altruism could become a new thing if we can solve the deeper, more philosophical problems.
The liquid paradigm, feedback loops, the virtuous cycle and Tauchain. By Dana Edwards. Posted on Steemit. December 31, 2017.
What do I mean by the concept of a "liquid platform"? This is merely a re-articulation of the concept of self-amendment and self-definition; in other words, it is very much like an autopoietic design. Bruce Lee once said to "be like water", because water can adapt to any environment it is placed in by taking the form of its container.
So by the liquid paradigm I mean that the core feature of truly next-generation platform design is going to be maximum adaptability.
Feedback loops and the virtuous cycle
How can we have a platform which promotes continuous self-improvement? If you have a platform with no hard-coded "self", then even the design of the platform is under constant negotiation and creation. This is key because it means Tauchain will be able to adapt more quickly than all competing platforms - more quickly than Tezos, because Tezos merely provides self-amendment but lacks the virtuous cycle, the metalanguage, etc.
The Tau Meta Language allows for self-definition at the level of languages. This means even the communication mechanism between humans and machines can be updated continuously. This continuous updating is the key design breakthrough of Tauchain, because it means Tauchain will always be state of the art in any area. Think of a platform like Wikipedia, where anyone can update any part of it in real time, continuously, so that every part of it is always state of the art.
Starting at languages, the feedback loop can be created between humans and intelligent machines. Humans must make decisions on how to design Tau. These design decisions benefit from the virtuous cycle, because the feedback loop between humans and machines allows the decision-making ability itself to be upgraded. This could even allow humans to transcend traditional human capabilities by relying on intelligent machines to assist in design: better designs enable better decision making, which enables still better designs, and so on. This is the "virtuous cycle", driven by a feedback loop running from humans to machines to humans and back again. The humans improve the quality of the machines by feeding them knowledge and new algorithms - just enough for the machines to become intelligent enough to help the humans to help the machines even more efficiently in the next iteration of Tauchain, over and over again.
Humans and machines will seek more good and less bad for the formal specification of Tau itself. Good and bad designs will be defined collaboratively by the human participants by way of intelligent discussion. As discussion scales, bigger crowds mean more human minds involved, which means improved design, which leads eventually to a better and perhaps wiser Tau, which of course would lead to wiser, even more intelligent discussions, which can lead to an improved formal specification, and to a better Tau. So that is a loop. It is also a loop between improving Tau, improving society, improving Tau, improving society.
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we must list the five roles of knowledge representation which can reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it is substituting or acting in place of something. So if knowledge representation is a surrogate, then it must be representing some original. There is of course the issue that a completely accurate representation of an object can only come from the object itself; all other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this into context, if you make a copy of an audio recording, every copy you make is going to contain slightly more artifacts. The same thing happens with information sent through a wire: if the signal is not properly amplified, artifacts will eventually creep in from copying the transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world.
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
In other words, the act of selecting a representation is itself the act of making a set of ontological commitments. An ontological commitment is a framework for how we will view the world, such as viewing the world through logic. If we choose to view the world through logic, through rule-based systems, then all of our knowledge about the world sits within that framework. We choose our representation technology and thereby commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but theories of intelligent reasoning are recognized to derive from five fields: mathematical logic of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive reasoning approaches. Deductive reasoning, according to some, is the basis of intelligent reasoning itself. If we want to explore an example of reasoning we can take the Socrates example:
Statement A: True? Y/N?
"All men are mortal"
Statement B: True? Y/N?
"Socrates is a man"
Statement C: True? Y/N?
"Socrates is a mortal"
If A is true, and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be resolved using symbol manipulation and knowledge representation. The key logical symbol at play in this example is implication: if A and B hold, C follows mechanically.
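To make this concrete, here is a minimal sketch in Python (my own illustration, not Tau's actual language or syntax, which is still under development) of how the syllogism can be resolved by pure symbol manipulation, using a naive forward-chaining loop over facts and rules:

```python
# A minimal sketch (hypothetical, not Tau's actual representation):
# the Socrates syllogism encoded as one fact and one rule, resolved
# by naive forward chaining over symbols alone.

facts = {("man", "socrates")}              # Statement B: "Socrates is a man"
rules = [(("man", "X"), ("mortal", "X"))]  # Statement A: "All men are mortal"

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise_pred, _), (concl_pred, _) in rules:
            for (fact_pred, fact_arg) in list(derived):
                if fact_pred == premise_pred:            # premise matches a fact
                    conclusion = (concl_pred, fact_arg)  # bind X to the argument
                    if conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
    return derived

# Statement C, "Socrates is a mortal", follows without any understanding:
print(("mortal", "socrates") in forward_chain(facts, rules))  # True
```

The machine never "understands" mortality; it only manipulates symbols according to the representation, which is exactly the point of this role.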
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency, and think of all forms of computation, whether mechanical or natural in the sense of the sort of computation done by a biological entity, then we may think of knowledge representation as the medium in which that computation happens. Just as we think of money as a medium of exchange, if we think of the human brain as a type of computer which does human computation, then we may think of knowledge representation as the medium of that computation, whose form determines how efficiently the computation can be done.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above quoted paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
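As a small illustration of the point (my own example, not from the paper): the very same knowledge can be stored in representations with very different computational properties, and the choice of representation decides which questions are cheap to answer:

```python
# The same knowledge ("these beings are mortal") in two representations.
# A flat list and a hashed set hold identical content, but a membership
# query costs O(n) in one and O(1) in the other: the representation,
# not the knowledge itself, determines the efficiency of the computation.

mortals_list = ["socrates", "plato", "aristotle"]  # sequential representation
mortals_set = set(mortals_list)                    # indexed (hashed) representation

def is_mortal_list(name):
    return name in mortals_list  # linear scan over every entry

def is_mortal_set(name):
    return name in mortals_set   # single hash lookup

print(is_mortal_list("socrates"), is_mortal_set("socrates"))  # True True
```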
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course knowledge representation is part of how we communicate with each other or with machines. Human beings use natural language to convey knowledge, and this natural language includes vocabularies of words with agreed upon meanings. These vocabularies may be found in various dictionaries, including Urban Dictionary, and we rely on these dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations of the kind described in the examples above. In simpler terms, a knowledge base represents facts about the world in the form of structured and/or unstructured information which can be utilized by a computer system. An artificial intelligence can utilize a knowledge base to solve problems, and this particular kind of artificial intelligence is typically called an expert system. In its simplest form, the artificial intelligence will just reason over this knowledge base through an inference engine, and through this it can do the sort of computations which are of great utility to problem solvers.
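As a rough sketch of the idea (a toy example of my own, not any project's actual engine): an expert system is just a knowledge base of facts and if-then rules, plus an inference engine that chains them together:

```python
# A toy expert system: a knowledge base of facts plus if-then rules,
# and an inference engine that forward-chains until no new facts appear.

knowledge_base = {"has_fever", "has_cough"}

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),  # if both premises hold...
    ({"possible_flu"}, "recommend_rest"),          # ...conclusions can chain
]

def infer(kb, rules):
    """Fire every rule whose premises are all already in the knowledge base."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= kb and conclusion not in kb:
                kb.add(conclusion)
                changed = True
    return kb

print(infer(knowledge_base, rules))
# {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```

The value of such a system grows with the size and quality of the knowledge base, which is why the question of who can contribute to it, and how, matters so much.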
When we think of Wikipedia we are thinking about an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia, we can quickly see that one of them is the fact that it's centralized. We also have the problem that the knowledge stored on Wikipedia is not stored in a way which machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, in its current form it cannot act effectively as a decentralized knowledge base. DBpedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDoS or similar attack can censor it.
Decentralized knowledge is important for the world, and a decentralized knowledge base is critical for the development of a decentralized AI. If we are speaking about an expert system, then the knowledge base would have to be as large as possible, which means we may need to incentivize human beings to contribute and share their knowledge with this decentralized knowledge base. We would also have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate form for it to enter the knowledge base and be used by a potential AI.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI, then we must have a way for human beings to convey knowledge to the machines in a form which both the human beings and the machines can understand. The use of a knowledge representation language makes it possible for a human being to contribute to a knowledge base, which ultimately allows machines to reason from that knowledge base using their inference engine capabilities. In the case of a decentralized knowledge base, the barrier to entry is low or non-existent: any human being, or perhaps any living being or even robots, can contribute to this shared resource, while both humans and machines gain utility from it. An artificial intelligence which functions similarly to an expert system can make use of an extremely large knowledge base to solve complex problems, and a decentralized knowledge base combined with open and decentralized access to this artificial intelligence can benefit humanity, and life on earth in general, if used appropriately.
Discussion of example projects.
One of the well known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau there is a simple knowledge representation language under development which resembles simplified controlled English. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
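To give a feel for what contributing through controlled English might look like (a purely hypothetical sketch of my own; Tau's actual language and syntax are still under development and will certainly differ), a program could translate restricted sentence patterns directly into facts and rules for the knowledge base:

```python
# Hypothetical sketch (not Tau's actual syntax): mapping two controlled-
# English sentence patterns onto facts and rules a machine can reason over.
import re

def parse(sentence):
    """'All Xs are Y' becomes a rule; 'NAME is a Y' becomes a fact."""
    m = re.fullmatch(r"All (\w+?)s? are (\w+)", sentence)
    if m:
        return ("rule", m.group(1).lower(), m.group(2).lower())
    m = re.fullmatch(r"(\w+) is an? (\w+)", sentence)
    if m:
        return ("fact", m.group(2).lower(), m.group(1).lower())
    raise ValueError(f"Not in the controlled fragment: {sentence!r}")

entries = [parse("All humans are mortal"), parse("Socrates is a human")]
facts = {(pred, arg) for kind, pred, arg in entries if kind == "fact"}
rules = [(pre, post) for kind, pre, post in entries if kind == "rule"]

# One round of applying each rule to each matching fact:
for pre, post in rules:
    facts |= {(post, arg) for pred, arg in set(facts) if pred == pre}

print(("mortal", "socrates") in facts)  # True
```

The point of a controlled fragment of English is precisely this: sentences stay readable by humans while remaining unambiguous enough for machines to translate into the knowledge base automatically.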
Unfortunately, upon reading the Lunyr whitepaper and following their public materials, I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle the concurrency which would probably be necessary for doing AI. I also don't see how Ethereum would be able to do it securely with the current design, although I remain optimistic about Casper. The lack of code on Github and the lack of references to their research do not allow me to completely analyze their approach. I can see, based on the fact that they are talking about a decentralized knowledge base, that their approach will require more than the magic of the market combined with pretty marketing. They will require a knowledge representation language, and they will require a true decentralized knowledge base built on IPFS. This decentralized knowledge base will have to scale with IPFS, and through this maybe they can achieve something, but without a clear plan of action I have to say that today I'm not confident in their approach or in Ethereum's ability to handle it efficiently.
Source: Original post written by Dana Edwards. Published on Steemit: The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems).
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.