The Era of Signals and Changing Power Dynamics. By Dana Edwards. Posted on Steemit. October 8, 2018.
The world we live in is rapidly changing. The #MeToo era, for instance, has shown that any individual, in any position in society, can be brought down. It proves a point that many in the blockchain community may have known instinctively: any individual source of authority or power can be removed from that position. Some people actively seek these positions of power for their own reasons, and some of them go on to abuse those positions. People who seek power for the wrong reasons and then abuse it are, in my opinion, a risk inherent in positions of authority (a risk which blockchain technology may help reduce).
What are signals and what is signalling theory?
Social desirability bias is a popular topic in academic circles. To explain:
In social science research, social desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad," or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports, especially questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.
People tend to want to be liked or loved. When asked questions on a survey, people may feel pressured to answer in a way which they think will be viewed more favorably by others. In other words, rather than answering in a manner which reflects what they truly think or feel, they assess how others might judge their response and then answer in the way they expect to be judged most favorably.
Social desirability bias is exactly why public voting on platforms such as Steem will not work. When voting is public, most of the research suggests that people will feel pressured to vote not in the way they really believe or prefer but in the way they think the whales want them to vote. In other words, because on Steem the whales can reward (or punish) anyone who votes against "political sensibilities", social desirability bias likely applies particularly strongly on DPOS-style consensus platforms. If there are votes and the votes are not encrypted (secret), then we have no way to determine which votes are legitimate and which are the result of signalling (such as virtue signals).
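The "encrypted (secret) votes" idea above can be made concrete with a commit-reveal scheme, a standard cryptographic pattern in which voters first publish only a salted hash of their vote and reveal the actual vote after the voting window closes. The sketch below is purely illustrative (it is not how Steem or any DPOS chain actually works, and all names are hypothetical):

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """Commit to a vote by producing a salted hash.
    The digest is published; the salt stays private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + vote).encode()).hexdigest()
    return digest, salt

def reveal_is_valid(digest: str, salt: str, vote: str) -> bool:
    """After voting closes, anyone can verify a revealed vote."""
    return hashlib.sha256((salt + vote).encode()).hexdigest() == digest

# While voting is open, observers (including whales) see only hashes,
# so no one can reward or punish a voter for a specific choice.
digest, salt = commit("proposal-A")
assert reveal_is_valid(digest, salt, "proposal-A")
assert not reveal_is_valid(digest, salt, "proposal-B")
```

The design trade-off is that votes become visible at reveal time; fully secret tallies would need heavier machinery such as homomorphic encryption or zero-knowledge proofs.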
For example, when it was Trump vs. Hillary, the polls suggested Hillary would win. This was likely because social desirability bias made it socially undesirable to admit to voting for Trump. As a result, people who voted for Trump, or who planned to, may have said in public that they intended to vote for Hillary. Because votes in the election are secret, people who seemed like loud Hillary supporters could have been secret Trump supporters in disguise.
In some of my previous posts I discuss signalling theory in more depth.
In these posts I argue that the behavior of individuals is shaped by how they think other individuals will judge that behavior. This applies to what I'll label social desirability optimization: adopting behaviors which carry the expected payoff of improved social desirability.
To provide clarity the definition of social desirability:
Social desirability is the tendency for research participants to attempt to act in ways that make them seem desirable to other people.
In other words, people want to be liked. Likeability is a simpler word I can use for the concept of social desirability. In the 2016 election, supporters of Trump risked a social stigma with severe social consequences if they came out in public support. This high cost of public support is why some believed there were secret Trump supporters who were simply afraid of "losing face". In the simplest terms, a person can talk red or talk blue depending on where the social stigma lies.
One of the striking conclusions I reached in my own research on this topic is that increasing transparency leads to "preference falsification": a person talking blue while thinking red. If all speech is public (as it is on Steem), then preference falsification may well be taking place.
Why is this a major problem in the blockchain community? The evolutionary trajectory of a platform relies entirely on market preferences. If censorship exists and conformist pressures hinder true preference aggregation then the developers (and the community itself) will have no way of knowing which improvements to make or which changes would best satisfy the community.
What is leadership and what is the era of signals?
Before I discuss leadership I will first explain what I think leadership means. In my opinion the community must always come first. A person who is put into a leadership position occupies what I'll term "the seat of responsibility". This is not an enviable position, but someone has to fill it. For example, a person who receives a security clearance is now in a position of heavy responsibility: the information they protect is not their secrets but the nation's secrets.
Leadership, in my understanding, is not about "being in power" but about serving a community. To sit in a "big seat" is to be in a position of responsibility, making decisions on behalf of a community which the chosen person must represent. In other words, positions of responsibility are entirely about service, not power. A representative in Congress is not in a position of power but in a position to serve the constituents who put them there to represent their interests.
In my opinion, to be a good leader is to be a great listener. The leader must listen to the community to find out what it wants and needs, and to determine what it thinks is right or wrong. The leader then must offer solutions, proposals, or policies which satisfy the requirements of the community. What matters more than who is in the seat is the seat itself: the Presidency matters more than who is in office, and the positions themselves matter more than who holds them. Long after whoever holds these positions is gone, the positions will remain to be filled. Any leader in any position is replaceable if they fail to lead, whether a CEO, a President of a country, a lead developer, or any other kind of community leader.
In my understanding it is like chess, where the pieces can occupy various positions and a pawn can even be promoted into a more powerful piece. The point of the analogy is that individuals, in my opinion, are not likely to remain the source of power in society. The source of power is increasingly becoming the community itself, for better or for worse. In my view, to lead is to serve, and to lead effectively is to serve effectively.
To accept a responsibility to serve (to lead) requires seeking feedback from all whom the community servant represents. This does not require voting specifically, but it does require, under any circumstance, a mechanism by which the community can give brutally honest feedback to the system itself. By "the system itself" I do not mean that the feedback must go directly to those who serve the system, but that the system must have a means of collecting data, analyzing it, and then informing those who can improve the system about which changes would best satisfy the needs of the community.
In my opinion this is a very data-driven process. I do not think leaders can, for example, process big data using their brain power alone; they will need to harness the power of machines (machine intelligence). There is also risk if all the processing is done by one company (such as Google), just as there is risk when everyone relies on Facebook for news and opinions. Facebook has the ability, right or wrong, to shape elections by deforming the news feed or by allowing certain fake profiles to interact on the site. Facebook can also ban crypto ads at will, enforcing policies without taking any kind of poll of its users. We simply never saw any poll data indicating that users were tired of seeing crypto ads.
Summary of thoughts on leadership:
Augmenting the wisdom of the community as a means of better governance
In a world where the community must decide what to do, responsibility is increasingly diffuse. This means that while the signature may come from the face of the community (if it has a human face), it is still the community which must be capable of wisdom. The problem is that most communities do not become wiser as more people join. A bigger community doesn't produce better policies merely by voting together. While most people have opinions, that does not mean those opinions are well informed, scientific, or wise. The lack of wisdom in a community results in horrible (harmful) policies, overreactions, systemic bias, and more.
The conclusion I have reached so far is that in order to have better governance in an era where the community is the government, the community must be wise. It's not enough to simply give the community unlimited power to shape the future without providing any capacity for it to be wise, to do research, or to solve problems. Voting in the sense we see in elections does not involve informed voters: the information supplied to voters is almost always subpar, and voters are expected to trust "opinion leaders" and "opinion shapers" who tell them how to vote and why. Often disinformation shapes elections more than scientific evidence, facts, math, or reason.
As we build blockchain technology, I think it is critical that we put great emphasis on data analytics. Data analytics will allow our leaders to make better decisions on our behalf. Blockchain technology will have to rely on data analytics to figure out the potential wants and needs of its participants, users, e-citizens, etc. At the same time, private communication will be a necessity, even if just to conduct surveys, because people will not necessarily give their real opinion in a survey which is completely transparent. The only solution I could find to the problem of preference falsification is privacy.
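One well-known survey technique that combines privacy with honest aggregate results is randomized response: each respondent answers truthfully only with some probability, so any single reply is deniable, yet the true rate of an opinion can be recovered statistically. A minimal sketch in Python (my own illustration; not part of Steem or any existing blockchain):

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """With probability p_truth report the true answer;
    otherwise report a coin flip. Any single reply is
    plausibly deniable."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(replies: list[bool], p_truth: float = 0.75) -> float:
    """Invert the noise: observed = p*true + (1-p)*0.5."""
    observed = sum(replies) / len(replies)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
true_rate = 0.30  # suppose 30% secretly hold the "undesirable" opinion
replies = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]
print(round(estimate_true_rate(replies), 2))  # close to 0.30
```

No individual reply reveals anyone's real view, yet the community-level preference is recoverable, which is exactly the property a transparent platform lacks.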
Most important of all, those who are put into positions of leadership are in positions of trust. This includes moderators of forums, lead developers, and people who run exchanges. People in these positions have the responsibility to serve the blockchain community to the best of their ability. Abusing these positions for personal power or gain is a violation of this trust, and in such instances the community can and should select someone else for the position.
Bulbulia, J., & Sosis, R. (2011). Signalling theory and the evolution of religious cooperation. Religion, 41(3), 363-388.
Davis, W. L. (2004). Preference falsification in the economics profession. Econ Journal Watch, 1(2), 359.
Frank, R. H. (1996). The Political Economy of Preference Falsification: Timur Kuran's Private Truths, Public Lies. Journal of Economic Literature, 34(1), 115-123.
Grimm, P. (2010). Social desirability bias. Wiley international encyclopedia of marketing.
Sîrbu, A., Loreto, V., Servedio, V. D., & Tria, F. (2017). Opinion dynamics: models, extensions and external effects. In Participatory Sensing, Opinions and Collective Awareness (pp. 363-401). Springer, Cham.
How Tauchain and the Exocortex can give anyone a conscience and make anyone more law abiding. By Dana Edwards. Posted on Steemit. September 2, 2018.
First, "anyone" is not literal. By anyone I mean anyone with a reasonable level of intelligence who is willing to take the advice generated by the network. The network would include human beings and machines; it would learn, and would more properly be defined as a complex adaptive system. Tauchain would enable the emergence of this network. This post is about the network which can emerge from Tauchain, and about how people who intend to be as moral as possible, while also complying with the law as much as possible, might leverage it. This post assumes that the human brain has finite memory and comprehension capacity, and that every human being can benefit from enhancing these naturally limited capacities in the areas of legal comprehension and risk literacy (under the assumption that few if any of us know every law on the books, yet we all need to comply with the laws most likely to be aggressively enforced).
The Personal Moral Assistant
The PMA is a concept I've been thinking about for years: the idea that we can augment our ability to be moral persons. A PMA is a personal moral assistant, and in an ideal world every person born would have one. It would be an interface similar to Cortana or Siri, where you can ask any question about whether a particular action is right or wrong. The PMA would solve the problem using the same priorities that you would, so you would get a definite right-or-wrong result.
A Personal Moral Assistant is just one primary use case. These personal assistants over Tauchain could also include, for instance, a Personal Compliance Assistant (PCA). This is essentially another bot, but instead of dealing with moral problems it would handle compliance. If you're trying to accomplish a goal, this bot would make sure you do so following all the known laws as your exocortex currently understands them. This would enable people to avoid legal pitfalls while chasing opportunities.
Going from poor to rich in this world requires taking risks; there is no way around risk taking if you want to get ahead. Risk literacy is essential, and very few people who are poor have it. The PMA might be able to tell a person whether a certain choice aligns with their current values. The PCA might tell a person whether a certain choice complies with the laws. What about opportunities? An opportunity web crawler agent could theoretically search the entire Internet for opportunities which match your chosen risk profile.
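The PMA/PCA idea, checking a proposed action against a set of declared rules, can be sketched as a toy rule engine. Everything below is hypothetical (the rule names, the action format, the thresholds); a real assistant would need a vastly richer representation of values and laws, which is what Tauchain's logic languages would have to provide:

```python
def check_action(action: dict, rules: list) -> list[str]:
    """Return the names of the rules a proposed action violates.
    Each rule is a (name, predicate) pair where the predicate
    returns True when the action violates that rule."""
    return [name for name, violates in rules if violates(action)]

# Hypothetical rules a Personal Compliance Assistant might hold.
rules = [
    ("requires_license",
     lambda a: a.get("activity") == "money_transmission"
               and not a.get("licensed", False)),
    ("exceeds_risk_budget",
     lambda a: a.get("risk", 0.0) > 0.4),
]

action = {"activity": "money_transmission", "licensed": False, "risk": 0.2}
print(check_action(action, rules))  # ['requires_license']
```

The same skeleton serves the moral and the legal case; only the rule set differs, which matches the post's point that the PMA and PCA are two bots built on one mechanism.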
What are we doing today?
Today we often have to make choices by trial and error. If we aren't lucky enough to have mentors or people who can guide us, then the only way to learn is to make the common mistakes. When we deal with moral problems today, we often rely on holy scripture interpreted by other human beings who are just as flawed as we are. We simply don't have a bot which could interpret the scripture in a completely logical way; in other words, we don't have a digital representation of the minds of our spiritual guides.
We also have a situation where some of us can afford to comply with every law and take the lowest-risk approach, while others simply don't have the resources to pay expensive legal fees. Some people get better legal advice than others, too. What if we could get at least some level of legal assistance from our intelligent assistant? What if this assistant could even ask human beings with legal knowledge to help?
And finally, what if we could figure out which risks are worth taking and which are not? It's one thing to find opportunities, but another to be able to assess them. People get scammed because, at the end of the day, our emotions influence our ability to properly assess opportunities. I'm human, and it even happens to me from time to time. What if we could avoid this by using the capabilities of Tauchain to analyze massive amounts of information for us which our brains could never handle?
Opportunity Crawler Bot
I ask a simple hypothetical question: what if you could have set a bot to search the Internet for opportunities that resemble Bitcoin in 2008? What if this bot were activated and searched for an indefinite period of time across an undetermined yet expanding number of networks? If you define "Bitcoin in 2008" in a way the bot can make sense of, then it could search for anything which meets that criteria. We have this technology now, but it's extremely primitive. On Google you can set up alerts for certain things, but what if you could go beyond mere alerts and look for code on GitHub, the individuals involved with it, and certain growth patterns?
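At its core, such a bot boils down to filtering a stream of candidate items against a user-defined profile. A toy sketch, with hard-coded records standing in for the actual crawl and entirely hypothetical matching criteria:

```python
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    name: str
    tags: set[str] = field(default_factory=set)
    contributors: int = 0
    monthly_growth: float = 0.0  # fraction, e.g. 0.5 = 50% growth

def matches_profile(o: Opportunity, required_tags: set[str],
                    min_growth: float) -> bool:
    """A toy filter standing in for defining 'Bitcoin in 2008'
    in a way the bot can make sense of."""
    return required_tags <= o.tags and o.monthly_growth >= min_growth

# In a real crawler these records would be scraped from code
# repositories, forums, and the wider web; here they are hard-coded.
feed = [
    Opportunity("projectX", {"p2p", "open-source", "cryptography"}, 3, 0.6),
    Opportunity("projectY", {"closed-source"}, 40, 0.1),
]
hits = [o.name for o in feed
        if matches_profile(o, {"p2p", "open-source"}, min_growth=0.5)]
print(hits)  # ['projectX']
```

The hard part, of course, is not the filter but turning a fuzzy notion like "resembles Bitcoin in 2008" into machine-checkable criteria, which is precisely where a logic language like Tau's would come in.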
A way to think about these bots / intelligent assistants
One way to think about these intelligent assistants is as part of your extended mind. These bots essentially help you to think and communicate better. It's still you, and what they do on your behalf is essentially as if you did it. The total collection of all the agents under your control represents your complete exocortex. It will take great responsibility and wisdom to use these abilities in a way the world perceives as ethical, moral, and legal. For these reasons, I'd like to open a discussion: if such technology and such bots existed, how would each of you use them, and how would you think about them?
What is Tauchain & Why It Could Be One of The Greatest Inventions of All Time (Part 1: Introduction). By Kevin Wong. Posted on Steemit. August 28, 2018.
In anticipation of Tau's demo some time around the end of this year, I'll be publishing a series of articles on Steem leading up to its release and beyond. If you would like to get to know what some of us think is going to be one of the greatest inventions of all time, I'd recommend you check out http://idni.org. It seems like a foundation we've missed out on building together since the birth of the Internet.
A close resemblance to this project is the Semantic Web, although some of us would place Tau as far more ambitious in scope, oddly in a way that is likely more feasible thanks to its ingenious use of a logic blockchain to power a decentralized social choice platform. I think it's impressive how singular the concept actually is, despite the unavoidably lengthy explanations that come paired with the many first-time features that Tau will provide.
Without further ado, let's explore this world-changing technology that is currently baking in the oven.
What is Tau?
Let's begin by checking out the opening of IDNI's website at http://idni.org:
Tau is a decentralized blockchain network intended to solve the bottlenecks inherent in large scale human communication and accelerate productivity in human collaboration using logic based Artificial Intelligence.
Sounds fairly straightforward at first glance, and to me it really stands out in the cryptosphere. We now have billions of people using the Internet every day, yet we still do not have any effective means of discussing and collaborating without being all over the place. Sure, we may have poured a lot of time and effort into various platforms trying to connect with others, but have things really been any different compared to the time before the Internet?
The speed of information propagation has increased by orders of magnitude, and we can now reach anyone on the planet, but it's still up to us to be present and to process information in our heads before turning it into relevant knowledge for our networks.
Expanding our social bandwidth.
Turns out, we have been experiencing a lot of trouble coming to terms with the chatter of billions of people in cyberspace. The bottlenecks inherent in our human bandwidth remain unsolved even with near-instantaneous communications. From governments to corporations to blockchain communities, we all still face the age-old problem of being unable to scale governance beyond the size of a classroom. It's difficult to get our points across to many different people, let alone to make sense of complex long-term discussions and make network-wide decisions collaboratively.
The introduction to The New Tau, written by Ohad Asor, explains our situation quite accurately:
Some of the main problems with collaborative decision making have to do with scales and limits that affect flow and processing of information. Those limits are so believed to be inherent in reality such that they're mostly not considered to possibly be overcomed. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?
So how is Tau actually going to solve our communications bottleneck? Through a highly bespoke and non-trivial implementation of logic-based Artificial Intelligence (AI). It's worth noting that "AI" here is more of a marketing buzzword: it is not of the same variety as commercial implementations of deep machine learning.
The distinction that must be made is that Tau is not the kind of AI that attempts to guess what the world around it is like, including our opinions and the things we say or do. Instead, we take the step of communicating through Tau, and what we choose to communicate will be as definite as computer programs. It can be thought of as a persistent logic companion that helps us scale our reasoning, logic, and bandwidth.
We can take the time to share what we want on the Tau network, and most of the logic-based connections and operations will happen in the background over time, even when we're not paying attention in person. Again, the word "AI" is a misnomer here because it usually paints a picture of agents attempting to mimic human autonomy. That's not what Tau is about. Thinking of Tau as just a logic machine should provide better clarity on what it actually is.
The power of logic.
To expand, here's the second paragraph from the opening of IDNI's website (http://idni.org), which explains Tau's paradigm of logic-based communication:
Currently, large scale discussions and collaborative efforts carried out directly between people are highly inefficient. To address this problem, we developed a paradigm which we call Human-Machine-Human communication: the core principle is that the users can not only interact with each other but also make their statements clear to their Tau client. Our paradigm enables Tau to deduce areas of consensus among its users in real time, allowing the network to boost communication by acting as an intermediary between humans. It does so by collecting the opinions and preferences its users wish to share and logically constructing opinions into a semantic knowledge base.
Indeed, Tau will offer a semantic social choice platform where we can discuss and store knowledge in a logical universe that helps us organize information, thereby empowering us in highly relevant ways. If you're worried about privacy, know that Tau is first-and-foremost designed as a local client with local processing and storage. The platform itself will be deployed as a decentralized peer-to-peer network, a place where we can connect and share our knowledge-base with anyone we desire.
The only price to pay for all of this is that we must speak in Tau-comprehensible languages, which can always be added and modified over time. A sophisticated language defined over Tau may closely resemble a natural language, but it is really best to think of Tau as a machine that only speaks in logic. Fortunately, logical formalism is something we can learn to deal with.
So it will be up to us to communicate with our local Tau client in a way that lets it understand our worldviews. When the machine understands what we share, in some logical, mathematically verifiable sense, it can then connect our dots with the rest of the Tau network, boosting communication beyond the limits of human bandwidth and scaling our points of discussion, consensus, and collaboration up to any number of participants.
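The "connecting our dots" step, deducing areas of consensus from what users share, can be caricatured as set operations over users' stated commitments. This toy model is my own illustration and is vastly simpler than Tau's actual logic, but it shows the shape of the idea:

```python
# Each user shares a set of statements they assent to; the network's
# "area of consensus" is what every party accepts, and the disputed
# area is what only some parties accept. All statements are made up.
users = {
    "alice": {"block_size=2MB", "fee_burn=on", "dark_mode=on"},
    "bob":   {"block_size=2MB", "fee_burn=off", "dark_mode=on"},
    "carol": {"block_size=2MB", "dark_mode=on"},
}

consensus = set.intersection(*users.values())
disputed = set.union(*users.values()) - consensus

print(sorted(consensus))  # ['block_size=2MB', 'dark_mode=on']
print(sorted(disputed))   # ['fee_burn=off', 'fee_burn=on']
```

In Tau the statements would be sentences in a decidable logic rather than opaque strings, so the client could also detect that "fee_burn=on" and "fee_burn=off" contradict each other, something plain set intersection cannot do.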
Code and consciousness.
Finally, we look at the last paragraph of Tau's introduction at http://idni.org:
Able to deduce consensus and understand discussions, Tau can automatically generate and execute code on consensus basis, through a process known as code synthesis. This will greatly accelerate knowledge production and expedite most large scale collaborative efforts we can imagine in today's world.
Since Tau is a logic blockchain that powers a semantic social choice platform, we can leverage it to hold both small and large-scale discussions about program specifications, detect points of consensus, and even generate software in the process. Being able to go from discussion to the realization of decentralized applications would mean inclusive code development for the masses. It's also a unique addition to decentralization that no other blockchain project has even thought about.
Now that we may have come to a better understanding of Tau's emphasis on the use of logic in every part of its being, let's revisit the process description found in The New Tau to get closer to knowing what it is really about:
We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them, language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism by a careful consideration of these aspects.
In short, Tau is a decentralized peer-to-peer network that takes the shape of a social choice platform, and it can become anything we want it to be, as long as it's expressible within the self-defining and decidable logic of FO[PFP], with PSPACE complexity. This precise specification is required to satisfy the very definition of Tau as seen in the excerpt above. Tau is also intended to be a compiler-compiler.
This takes application-generality in a completely different direction compared to blockchains built specifically with Turing-completeness in mind, like Ethereum. Relevant literature to check out: Finite Model Theory.
Understanding each other.
While it's all highly technical and difficult to grasp in one sitting, perhaps a better way to truly begin to understand Tau is to spend some time studying its main features, or just wait for the product release. In any case, I will try to explore these topics in the future if my brain can still handle it.
The more I think about Tau, the more I think it is (poetically) a logical conclusion to the way the Internet works as a protocol. It even lives and breathes logic. Not just any kind of logic, but specifically logics that can define their own semantics and are decidable. Tau is intelligently designed to be a truly dynamic and ever-evolving blockchain.
When the Tau community intends to change the network's code, rules, or protocols, they will simply need to express these opinions and perspectives in a compatible language over the network. The self-defining logic of the Tau blockchain will enable it to detect the consensus among these opinions and automatically amend its own code to reflect that consensus from block to block. Unlike common voting methods, Tau's approach will take into account the perspectives of the entire community, where people will be free to vote and to propose what to vote for in real time. This unique ability of Tau is, I believe, the only decentralized way to create a truly dynamic protocol.
Now you might think: Tau seems like a powerful tool but will it be too difficult to use for most people? There might be some learning curve involved for sure, and it'd be similar to learning a new language in the beginning. Those of us who learn to use it well enough to scale our discussions and collaborative works will likely gain a significant edge over those who are not using the platform. I'd imagine plenty of projects and communities around the world being able to overcome some of their obstacles in development through Tau. Hence, it may be fair to expect that market forces will gravitate towards the platform just like how we're all using the Internet these days.
Until the next post.
I've been thinking about Tau almost every day for the past many months, and I will admit that its deeper technicalities are still way out of my league, although I've made sure to word them as broadly as I can. If you like what I do, please consider sharing this post and voting for my witness account on Steem. For more info, check out my recent witness announcement post.
As always, thanks for reading!
Tauchain 101: Essential Reading On One Of The Most Revolutionary Blockchain Projects Under The Radar. By Rok Sivante. Published on Steemit. August 3, 2018.
Amidst countless blockchain projects hyping themselves up as "the next big thing," there are a few that have been working under the radar that hold the promise - not in word, but in substance - of truly being revolutionary game-changers.
Such ventures have not often come into the spotlight yet. Partly because their founders have focused first on the fundamentals of creating something that speaks for itself, versus the all-too-common approach of prioritizing sensationalistic marketing. And partly because the degree of innovativeness they represent - in tandem with the complexity and scope of their larger visions and the implications of their success - does not always lend itself to an easy understanding upfront.
One such project - still very early in its development, yet holding transformative potential no less grand than that of Bitcoin and Ethereum - is Tauchain:
Until recently, with the launch of a new website that has successfully managed to articulate the project's vision much more clearly, understanding what Tauchain is striving to accomplish was a domain only a very few highly intelligent, technically inclined people dared to tread. And prior to December 2018, there was no code - only an unproven concept spearheaded by a single Israeli developer, Ohad Asor, whom nearly all who've managed to connect with him have declared to be one of the most brilliant geniuses they've ever met, possibly ahead of his time.
Just as Bitcoin introduced blockchain as an innovation radically altering the trajectory of our societal, economic, and technological evolution - and Ethereum followed suit with upgrades that expanded the vision with entirely new capabilities for developing a range of decentralized applications and smart contracts - so too may Tauchain prove to be a platform of comparable success, whose impact may bring quantum leaps in the Blockchain Revolution.
How and where to start in describing Tauchain...?
Well, were we to begin with the technical side of things, we'd likely lose 98% of the audience. So perhaps a better starting point is the bigger picture:
This generalized overview, however, still only barely scratches the surface.
While the intended end may be the generic concept of enabling drastically increased efficiency in global collaboration, the means by which it is to be achieved entails a number of innovative component developments, each holding great significance and implications of its own.
While each may require deeper exploration to better grasp and begin piecing together into the bigger picture, the Tauchain website now offers an overview of key features which account for just some of what differentiates it from other blockchain platforms - and which enable new collaborative capabilities not possible with existing technologies:
While it'd be possible to expand upon each in great detail - both in regards to the functionality and implications for their applications - this particular piece of writing is to serve as a basic introduction to some of the best, most-easily-accessible content written on Tauchain to-date.
And as we transition into that content, we shall begin with a quote summarizing the core essence of Tauchain, as approached from but one angle:
This project created by Ohad Asor is really ambitious and aims to create the internet of knowledge.
Some people would label it as an Artificial Intelligence, but according to the creator this is something totally different. To sum up, and so you understand me: Tau-chain is a tool that knows how to interpret any information and deduce any consensus. This tool can be used in any field - judicial, political, academic, social, scientific - and, without limits, by any assembly from 2 people to a million, for example.
~ @capitanart, from "My experience with Tau-chain"
The collection begins with two selections from Steemit's @trafalgar.
If anyone has successfully managed to distill the essence of the Tauchain vision into words that'd serve as a foundational Tauchain 101 intro, it'd be him, in these two excellent pieces:
What Is Tau? - My Only Other Crypto Investment
The Power of Tau - Scaling the Creation of Knowledge
Next come three short articles from @flis, which may not go into any new details beyond the pieces above, yet offer a slightly different, simplified perspective to reinforce the clarification of Tauchain's key concepts:
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building
How Tau-Chain can be implemented in practice
Tau Chain vs. Tezos - which platform will provide a better solution?
~ design credit: @voronoi
Next come a few selections from @dana-edwards, likely the single individual who has best translated the highly complex technical vision of Ohad Asor into a more approachable form, from which non-academics may begin to better understand Tauchain.
Quite possibly the first to write of developments and share them outside of the project's IRC channel and Bitcointalk thread, Dana has one of the most comprehensive grasps of the project publicized anywhere, and his writings continue to serve in establishing bridges for more people to discover and deepen their own comprehension of the innovations Tauchain represents - not only for computer science and the blockchain revolution, but for cultural and societal evolution as well.
What follows is a collection of his writings related to the project which excellently piece together key ideas and insights, from which the gaps may be filled in to grasp a firmer idea of just how significant these developments could be and what the bigger picture of their success might look like:
What Tauchain can do for us: Collaborative Serious Alternate Reality Games
What Tauchain can do for us: Finding the world's biggest problems
Tauchain: The automated programmer
Artificial morality: Moral agents and Tauchain
What Tauchain can do for us: Effective Altruism + Tauchain
Collaborative Alternate Reality Games + Tauchain = UBAs (Universal Basic Assets)?
Tauchain and Tezos, why adaptability is the key to surviving in a fast changing environment
My commentary on Ohad's latest blog post: "Agoras to TML"
The following three pieces are not introductory-level, and may require a background in computer programming to understand. However, for anyone reading who might be interested in diving deeper into the technical side of the project, they are included here:
Tauchain is not easy to understand but here are some concepts to know to track Ohad's progress
For all who are researching Tauchain (TML) to understand how it works, a nice video!
More on partial evaluation - How does partial evaluation work and why is it important?
~ design credit: @crypticalias
One other writer covering Tauchain needs to be mentioned: @karov.
While not the easiest to read and understand, the Steemit account of Georgi Karov is undoubtedly one of the most consistent sources of coverage on the project.
A lawyer by-trade and currently one of the three members of the core team, @karov's insights into the project are reliably detailed, expansive into philosophical territory, and fascinating.
Although none of his articles have been included in this introductory collection, those interested in keeping up-to-date with coverage of the project would be well-advised to follow his Steemit blog - and/or read backwards through the last few months of his posts there, as the blog is nearly entirely Tauchain-related content.
Lastly, though not least:
Coming from one of Steemit's most brilliant early-adopter-minds, @kevinwong, this one is a quick read in itself with some key points worth factoring in to a proper assessment of the project. And - far lengthier than the post itself - the comments thread also contains some gold:
Is Tauchain Agoras in Good Hands?
And to wrap up with another excellent quote from design consultant to the project, @capitanart - who is another to follow for updates:
The goal of Tau is to create a supermind, to solve the limitations inherent in human communication on a large scale.
Able to deduce consensus and understand discussions, Tau can generate and execute code automatically based on consensus, through a process known as code synthesis. This will greatly accelerate the production of knowledge and streamline most of the large-scale collaborative efforts we can imagine in today's world.
~ design credit: @overdye
“We are moving into an era where cities will matter more than states and supply chains will be a more important source of power than militaries — whose main purpose will be to protect supply chains rather than borders. Competitive connectivity is the arms race of the 21st century.”
-- Parag Khanna
A network is made of lines and switches, right?
A lot has been said about network scaling effects, including attempts by myself [4-12]... which compels me to introduce the not-so-frivolous notion of network forces.
These forces are expressed in several laws. I thought initially to write 'forces' and 'laws' in quotes here, but I realize they are quite objective and physical emergenta, indeed.
In my ''Geodesic by Tauchain'' article of a couple of months ago I emphasized the Huber-Hettinga Law: how the cost of switching literally defines the 'orographic' topology of a network.
The cheaper the routing - the flatter the network.
Expensive switches = hierarchy, verticality, power, control, obedience, centralization, 'world is fiat', sollen, hence borders instead of bridges, limitations instead of stimuli, exclusivity...
Cheap switching = geodesic society, 'world is flat', horizontality, p2p, decentralization, inclusivity...
The more vertical by centralization a network is, the more it must deplete information: omit, ignore calls from the deeps, or even actively suppress or silence nodes. Cope with the stream by strangling it. Simply due to lesser capacity, fewer degrees of freedom. Geodesic networks possess higher entropy and therefore are richer. They bolster both higher Scrooge and Spawn factors. In other words:
The flatter the network - the richer  it is.
Maybe this explains why the wealthiest-healthiest societies tend to be those with the greatest economic-political freedom.
Naturally the Huber-Hettinga Law led me to the elementary-watson conclusion of the power and value of Tau as the ultimate über-switch. So far so good.
Now let's stare at the Lines. Here comes Nick Szabo.
Nick Szabo - a lawyer AND computer scientist - is a legendary figure from the great 'Archaic era of crypto' - the 1990s, when he, together with the other cypherpunk titans like Tim May, Wei Dai, Bob Hettinga etc., poured in staggering detail the very bedrock foundations of what we now enjoy as Crypto in the post-Satoshi era.
It is THEIR vision come true that we all now live in.
Bitcoin was a detonation of namely that critical mass of fused thoughts, of namely these very smart people, piled up and compressed by the connective network forces of the early internet.
No, I do not mean at all Szabo's most famous thing - the 1994 coining of the term 'smart contracts'. In fact I deeply and strongly reject the very notion of 'smart contracts' as utter nonsense, even as an oxymoron - which is a yuge separate problem, one which I suspect I've nailed, and which I'll address in a series of dedicated articles starting in the upcoming weeks...
I mean something much more valuable, what I call the Szabo Law.
When we hear the phrase 'network effects', the first thing that comes to mind is the famous Metcalfe's law.
''Metcalfe's Law is related to the fact that the number of unique connections in a network of a number of nodes (n) can be expressed mathematically as the triangular number n(n − 1)/2, which is proportional to n² asymptotically (that is, an element of O(n²)).''
In the above order of appearance, these network-force laws quantitatively address the basic properties of a network:
- Huber-Hettinga Law - the cost of switches and routing.
- Metcalfe Law - the number of nodes, i.e. switches defining the number of unique connections or lines.
- Szabo Law - the cost of the lines and connecting.
All these Laws are scaling laws. Before we come back to and continue with Szabo's Law, we have to briefly mention another one:
''So what is “scaling”? In its most elemental form, it simply refers to how systems respond when their sizes change. What happens to cities or companies if their sizes are doubled? What happens to buildings, airplanes, economies, or animals if they are halved? Do cities that are twice as large have approximately twice as many roads and produce double the number of patents? Should the profits of a company twice the size of another company double? Does an animal that is half the mass of another animal require half as much food?''

''With Dirk Helbing (a physicist, now at ETH Zurich) and his student Christian Kuhnert, and later with Luis Bettencourt (a Los Alamos physicist now an SFI Professor), Jose Lobo (an economist, now at ASU), and Debbie Strumsky (UNC-Charlotte), we discovered that cities, like organisms, do indeed exhibit “universal” power law scaling, but with some crucial differences from biological systems. Infrastructural measures, such as numbers of gas stations and lengths of roads and electrical cables, all scale sublinearly with city population size, manifesting economies of scale with a common exponent around 0.85 (rather than the 0.75 observed in biology). More significantly, however, was the emergence of a new phenomenon not observed in biology, namely, superlinear scaling: socioeconomic quantities involving human interaction, such as wages, patents, AIDS cases, and violent crime all scale with a common exponent around 1.15. Thus, on a per capita basis, human interaction metrics (which encompass innovation and wealth creation) systematically increase with city size while, to the same degree, infrastructural metrics manifest increasing savings. Put slightly differently: with every doubling of city size, whether from 20,000 to 40,000 people or 2M to 4M people, socioeconomic quantities – the good, the bad, and the ugly – increase by approximately 15% per person with a concomitant 15% savings on all city infrastructure-related costs.''
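As a quick sanity check on the arithmetic in the quote above, here is a minimal sketch of my own (using the rounded exponents 0.85 and 1.15 from the excerpt) of how a doubling of city size plays out:

```python
# Urban scaling: quantities grow as Y = Y0 * N**beta, with beta ~ 0.85
# for infrastructure (sublinear) and beta ~ 1.15 for socioeconomic
# outputs (superlinear), per the exponents quoted above.

def scale_factor(beta, size_ratio=2.0):
    """Multiplier on a total quantity Y when population grows by size_ratio."""
    return size_ratio ** beta

infra_total = scale_factor(0.85)   # ~1.80x total infrastructure for a 2x city
socio_total = scale_factor(1.15)   # ~2.22x total socioeconomic output for a 2x city

# Per-capita view: divide total growth by the 2x population growth.
print(f"infrastructure per capita: {infra_total / 2:.2f}")  # < 1 -> savings
print(f"socioeconomic per capita:  {socio_total / 2:.2f}")  # > 1 -> gains
```

With these rounded exponents the per-capita shifts come out near 10-11% per doubling, in the same ballpark as the roughly 15% figure the authors quote from their fitted data.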
This probably comes to denote the sheer size of the network in STEM (space, time, energy, mass) - I'm not sure, but I have some strong suspicions about the unity of matter, structure and action, which I will expose and share some other time.
What I call Szabo's Law he reveals in his ''Transportation, divergence, and the industrial revolution'' (Thu, Oct 16, 2014): similarly to Metcalfe's (''double the population, quadruple the economy''), there is a power-law correlation between the cost of connections - or links, or lines - and the value of the network, too:
''Metcalfe's Law states that a value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed is an inverse square of the cost per mile of transportation. Combine this with Metcalfe's Law and we reach a dramatic but solid mathematical conclusion: the potential value of a land transportation network is the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith's observations: the division of labor (and thus value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).''
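Szabo's chain of reasoning above reduces to two lines of arithmetic. A minimal sketch (my own illustration of the quoted argument):

```python
# Szabo's argument: economically reachable nodes ~ 1/cost**2 (inverse-square
# of transportation cost per mile), and Metcalfe value ~ nodes**2, so the
# potential network value scales as the inverse FOURTH power of transport cost.

def potential_value(cost_per_mile):
    """Relative potential value of a land transportation network."""
    reachable_nodes = 1.0 / cost_per_mile ** 2  # inverse-square of cost
    return reachable_nodes ** 2                 # Metcalfe: value ~ n**2

# Halving the cost of transportation:
print(potential_value(0.5) / potential_value(1.0))  # 16.0 -> a 16x gain
```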
My encounter with this article of Nick Szabo's was a goosebumps experience for me, because it coincided with a series of lay rants of mine on the old Zennet IRC chat room of Tau that ''computation = communication = transportation''. Somewhere in 2016, as far as I remember. :)
Maybe it was the last drop that shaped my conviction that through my dedicated involvement in both Tau and ET3, I'm actually working for... one and the same project.
For communication, computation and transportation are all modes of state change. Because information is a verb, not a noun. And software is states of hardware.
''Decentralizing the internet is possible only with decentralized physical infrastructure.'' 
Just like the brain is a network computer of neuron nanocomputers, the emergent composite we colloquially call humanity or mankind or economy or society or world... is a network computer made of all us billions of humans.
Brains do thought, economies do wealth.
Integrated circuitry upon the face of planet Earth as a motherboard. Literally. Humanity's planet-hardware. Parag Khanna's Connectography explained.
The Earth is definitely not our ultimate chip carrier. Probably there ain't a limit at all to our culture-upon-nature hardware upgrades. The universe is our computronium, and we've been here too short a time and haven't seen far enough. Networking is connectomics. And thus it always also is metabolomics.
Remember my last month's  ''Tauchain the Hanson Engine''?
The series of exponentially shortening growth doubling times looks to be driven by transportation technological singularities: domestication of the horse, oceanic navigation, the combustion engine...
In the light of all the net forces summoned above: The planet Earth viewed as a giant computer chip ...
- itself is subject to the relentless network-entropic force of Moore's law
The network forces accelerate what that wealth computer does.
Two quick examples:
A.: The $1500 sandwich  as a proof that trade+production is at least thousands of times stronger in sandwich-making than production alone.
B.: The example of Eric Beinhocker in his 2006 ''The Origin of Wealth'' about two contemporary tribes: the Amazonian Yanomami - a stone-age population to this day - and the Eastcoastian Manhattanites. The former are only about 100 times poorer, but the latter enjoy a billions-of-times-bigger choice of things to have.
Tauchain 'threatens' to affect the parameters of ALL the network-forces formulae mentioned herewith, on a mind-bogglingly big scale.
Simultaneously, orders of magnitude :
- lower switch cost
- higher nodes count 
- lower connection cost
A wealth hypercane recipe. A perfect value storm. The future ain't what it used to be.
Signalling Theory, Radical Transparency, and the death of genuine communication. By Dana Edwards on Steemit. June 7, 2018.
In a private conversation which I cannot repeat in public I was inspired to confront and explain a very important yet not often discussed topic. In my previous posts I have discussed the concepts of "dramaturgy, self monitoring, microsociology and morality". I also initiated a discussion around the concepts of reputational risk and feedback in decentralized governance. The main objective of this blog post is to introduce the concept of signalling theory and to encourage debate around the possible long-term psychological consequences of radical transparency. In my opinion too few people study what the actual effects of radical transparency could be on its practitioners, with far too much emphasis placed on a very narrow law-enforcement or moralist perspective.
It is true that radical transparency could make it easier to enforce social norms (the moralist benefit). It also may help law enforcement whether in a traditional or non-traditional context if social systems have greater levels of accountability. What often is missed is that humans (and most primates) have no experience living without some degree of privacy in their lives. The technological trend of expanding the public space to encompass everywhere and everything (hyper connected yet open) is completely new and I would argue foreign to our species.
My argument is that humans as we currently know them will cease to exist as the implications of radical transparency become apparent. The argument being that once enough people adapt to, for example, all votes being public, then we will have no way to know if a vote is a genuine vote or a virtue-signal vote. We will lose genuine communication in favor of radical transparency (this is based on signalling theory). Of course my argument could be wrong, but its basis is that because honest communication is more likely to be punished harshly, there will be a reinterpretation of communication such that it is not possible for Alice to know the true opinion of Bob or for Bob to know the true opinion of Alice.
How signalling theory works in nature
Those who study the field of communications are probably aware of signalling theory. In my post on the topic of self monitoring and dramaturgy I focused specifically on the perspective of an individual within a society where all actions are to be judged. In this hypothetical society (which mirrors the current trajectory of our own), Alice must in essence monitor her behaviors for how anything she does might be interpreted by her audience. If we as a metaphor think of the audience as the community, and we think of the audience as a sort of jury, then Alice must by her actions consistently prove she is good enough to stay on the "good person" list. This good person list may be a formal list, as with the cyberocratic social credit system we see in China, or it may be an informal secret list which Alice doesn't even know whether she's on (such as how things work in the United States).
Technology currently allows for the creation of these lists. In fact the Enigma project demonstrates this technology is feasible in their latest blog post, which I think is a must-read. The Enigma team calls this proof of concept "Token-Curated Registries". What makes Enigma's proof of concept unique is not that this capacity didn't already exist using less advanced technology, but that these lists can actually be formed in a decentralized context on the blockchain. The idea of an encrypted "good person" list where the votes are private is not at all science fiction. In fact, if you have more interest in the TCR concept, just check out the Meetup.
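To make the TCR idea concrete, here is a toy sketch - a deliberate simplification of my own, not Enigma's actual protocol (their version also encrypts the votes): applicants stake a deposit, token holders stake votes for or against, and the token-weighted majority decides who makes the list.

```python
# A toy token-curated registry (TCR): deposit-gated applications resolved
# by token-weighted voting. Illustrative only; real TCRs add challenge
# periods, vote commitment/reveal, and reward distribution.
from dataclasses import dataclass, field

@dataclass
class Registry:
    min_deposit: int
    members: set = field(default_factory=set)

    def resolve_application(self, candidate, deposit, votes_for, votes_against):
        """Resolve an application; vote weights are total tokens staked per side."""
        if deposit < self.min_deposit:
            return "rejected: insufficient deposit"
        if votes_for > votes_against:
            self.members.add(candidate)
            return "listed"
        return "rejected: voted down (deposit forfeited)"

reg = Registry(min_deposit=100)
print(reg.resolve_application("alice", 150, votes_for=700, votes_against=300))
print(reg.resolve_application("mallory", 150, votes_for=200, votes_against=800))
print(reg.members)  # only the approved candidate is listed
```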
Discussion of Anonymous Benefactor Bot Networks
Let me make it clear that I do not think privacy technology is bad. In fact I offered my own version of a concept which could use TCR, called "Anonymous Benefactor Bot Networks". The reason I contemplated this idea is that I wanted a way for good people to help each other grow, and because I wanted a means of implementing a permissionless form of basic income. I did not pursue the idea because of regulatory uncertainty, but I did introduce it (it leverages what Enigma will allow). In my Anonymous Benefactor Bot Network concept I used a table rather than a "list", but the table could be seen as a more expensive list. The reason I used a table is that when I thought of the concept I was not aware of TCR, but was aware of RDF and Tau (Tauchain).
The point being that lists are extremely powerful, and if you have lists which can be curated in an anonymous or pseudonymous fashion then they can be used for positive ends (to create benefactor bot networks to reward good people) but also in negative ways, such as lists of people deemed "bad" or "evil" for whatever arbitrary reasons the creators of the list define. I will not discuss in this post the differences between a graph and a list, as that is a very nuanced discussion, but there is a difference. In my concept diagram you can also see the "whitelist", which essentially could be implementable using TCR over Enigma (or over Tau).
Genuine communication under radical transparency?
Now we see that it is possible to have a formal specification based on the concept I introduced above "Anonymous Benefactor Bot Networks". This technology can only be properly implementable if there is privacy due to the sensitive political nature, questionable legal ramifications, etc. At the same time if such a technology did exist then we would potentially have clandestine judges who go around essentially deciding who gets put on the secret "good person" list. Remember that the secret lists are encrypted in such a way that people can vote anyone's account onto the list or off so think of it like a magical island which an individual verified account holder must be voted onto because they give enough people the impression that they are a good person.
These lists existing in cyberspace would mean there are potential economic incentives for people to want to always be perceived as a good person. Just as Santa makes a list, so too would the crowd. But what does this mean for genuine communication if everything a rational account holder does is to give off the perception of being a good enough person to make it onto the secret "good person" list? And just as there could be secret "good person" lists to worry about, there could just as easily be secret "bad person" lists - and unlike with the good person lists, there may not be any way to know how not to wind up on the bad person list. The problem with these lists being secret is that the criteria for entry onto any particular list are going to be unknown to Alice.
What is genuine communication anyway? Well, we don't see it defined as "genuine communication" in the technical sense. Wikipedia defines "honest communication" as below:
In biology, signals are traits, including structures and behaviours, that have evolved specifically because they change the behaviour of receivers in ways that benefit the signaller. Traits or actions that benefit the receiver exclusively are called cues. When an alert bird deliberately gives a warning call to a stalking predator and the predator gives up the hunt, the sound is a signal. When a foraging bird inadvertently makes a rustling sound in the leaves that attracts predators and increases the risk of predation, the sound is a 'cue'.
As we can see, animals communicate between each other. When I question what it means to be human under radical transparency it is because human as we know it requires a degree of privacy. The fact is, human is not necessarily a static concept but in order for some of our laws to make sense it requires treating human as if it is some sort of static concept. In reality if we humans are put into a position where we must live our lives in public then we adapt and evolve for those circumstances. The debate is how exactly does this impact the psyche? Honest communication between predator and prey is defined above but what is dishonest communication? That is defined below:
Because there are both mutual and conflicting interests in most animal signalling systems, a central problem in signalling theory is dishonesty or cheating. For example, if foraging birds are safer when they give a warning call, cheats could give false alarms at random, just in case a predator is nearby. But too much cheating could cause the signalling system to collapse. Every dishonest signal weakens the integrity of the signalling system, and so reduces the fitness of the group. An example of dishonest signalling comes from Fiddler crabs such as Uca lactea mjoebergi, which have been shown to bluff (no conscious intention being implied) about their fighting ability. When a claw is lost, a crab occasionally regrows a weaker claw that nevertheless intimidates crabs with smaller but stronger claws. The proportion of dishonest signals is low enough for it not to be worthwhile for crabs to test the honesty of every signal through combat.
Virtue signalling as a means of making it on the secret good person list
Virtue signalling is something which is already part of the popular discussion. It is when someone does a specific deed in order to earn the perception of being a good person. This could take place on Steem if, for example, a person upvotes a certain post which a lot of the high-reputation people upvote, because it will send a particular signal that they vote in a particular way or believe in a particular thing. Since on Steem voting is public, does this mean that when you vote you're not voting what you really think of a post, but voting based on how you want your voting patterns to look to analysts who might in the future use that as a data point to put you on the good person or bad person lists?
Education as a signal
By Yathin sk [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
To put the concept of virtue signalling into perspective for most educated people (particularly university graduates), we have these videos which explain how education is a signal. Peter Thiel discusses this topic, and it is also discussed by The Economics Detective in "The Signalling Model of Education". The Economics Detective goes into the problem of information asymmetry. Radical transparency offers the ideal of information symmetry, but what are the consequences? If we think of education as a signal, then just as where you got your degree from could "say something about you", or the fact that you got a degree at all may "say something about you", the idea is that people pursue degrees to communicate something non-verbally and indirectly to others who might judge them. What if in the future people treat voting patterns on Steem in the same way that degrees are treated, in order to judge the account holder by the quality of their voting patterns? To put it into a wider context: is there anything you can do which doesn't say something about you to those who judge in secret and form secret lists?
If you are more interested in the technical and economic aspects of signalling theory:
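As one classic entry point, here is a toy rendering of Spence's job-market signalling model - my own illustrative sketch with made-up numbers, not drawn from any of the material above: education is costly, but cheaper for high-ability workers, so a degree can separate the two types even if the education itself adds no productivity.

```python
# Spence signalling: employers pay WAGE_HIGH only to workers who acquire
# at least `threshold` units of education; the cost of education differs
# by true ability. (All figures are hypothetical.)
WAGE_HIGH, WAGE_LOW = 200_000, 100_000
COST_HIGH_ABILITY, COST_LOW_ABILITY = 20_000, 60_000  # cost per unit of education

def best_choice(cost_per_unit, threshold):
    """Educate iff the signalled wage net of signalling cost beats WAGE_LOW."""
    payoff_signal = WAGE_HIGH - cost_per_unit * threshold
    return "educate" if payoff_signal > WAGE_LOW else "skip"

# A separating equilibrium: at this threshold only high-ability types find
# the signal worth its cost, so the degree honestly reveals ability.
threshold = 2.0
print(best_choice(COST_HIGH_ABILITY, threshold))  # educate
print(best_choice(COST_LOW_ABILITY, threshold))   # skip
```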
The power of ambiguity and of ambiguity minimization in communication. By Dana Edwards on Steemit. June 1, 2018.
Formal communication benefits from ambiguity minimization.
So what exactly do I mean by formal communication? Well, when we think of how human beings communicate with machines, it is in a formal language. This formal language requires minimized ambiguity for security analysis (how can we analyze code if we cannot effectively interpret it?). The other problem is that machines require that if... then... else and similar conditional statements be well defined and unambiguous.
Is it possible to show that a grammar is unambiguous?
To show a grammar is unambiguous you have to argue that for each string in the language there is only one derivation tree. This is how it would be done theoretically speaking.
In computer science, an ambiguous grammar is a context-free grammar for which there exists a string that can have more than one leftmost derivation or parse tree, while an unambiguous grammar is a context-free grammar for which every valid string has a unique leftmost derivation or parse tree. Many languages admit both ambiguous and unambiguous grammars, while some languages admit only ambiguous grammars.
Specifically we know that deterministic context free grammars must be unambiguous. So we know unambiguous grammars exist. It appears the strategy is ambiguity minimization with regard to formal languages (such as computer programming languages).
For computer programming languages, the reference grammar is often ambiguous, due to issues such as the dangling else problem. If present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so the overall phrase grammar is unambiguous. The set of all parse trees for an ambiguous sentence is called a parse forest.
The parse forest is an important concept to note. The set of all possible parse trees for an ambiguous sentence is called a "parse forest". This concept is key to understanding the strategy of ambiguity minimization. So we can in practice minimize ambiguity, and we know for certain that deterministic context-free grammars admit an unambiguous grammar, but what does that mean? What are the benefits of unambiguous language in general?
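To make the parse-forest idea concrete, here is a small sketch of my own that counts the parse trees which the deliberately ambiguous grammar E → E '+' E | 'a' assigns to a string. Every extra '+' multiplies the number of trees, following the Catalan numbers:

```python
# Count the parse trees ("parse forest" size) that the ambiguous grammar
#   E -> E '+' E | 'a'
# assigns to a token string, by trying every possible top-level '+' split.
from functools import lru_cache

def count_parses(s):
    tokens = tuple(s)

    @lru_cache(maxsize=None)
    def count(i, j):
        """Number of parse trees deriving tokens[i:j] from E."""
        if j - i == 1:
            return 1 if tokens[i] == 'a' else 0
        return sum(count(i, k) * count(k + 1, j)
                   for k in range(i + 1, j - 1) if tokens[k] == '+')

    return count(0, len(tokens))

print(count_parses("a+a"))      # 1 tree - this string is unambiguous
print(count_parses("a+a+a"))    # 2 trees: (a+a)+a and a+(a+a)
print(count_parses("a+a+a+a"))  # 5 trees - growth follows the Catalan numbers
```

A deterministic grammar for the same language (say, the left-associative E → E '+' 'a' | 'a') would give exactly one tree per string; ambiguity minimization is precisely the move from the first grammar toward the second.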
A benefit of ambiguity minimization
Simple English is a form of controlled English designed to minimize ambiguity in English. This is important because using simple English to codify the rules or write the laws puts them in a language where there is less computational expense (in brain power) required to process and interpret the statements.
In one of my older blog posts @omitaylor commented, and in one of her subsequent posts she asked about the topic of love. Specifically, her post was titled "What Does LOVE Mean To YOU".
Her post highlights the fact that there are different love languages and that we don't all speak the same love language. Ambiguity here is actually not a good thing, but the simple fact is: when someone speaks about love, how do we know they are talking about the same thing? As a result we often seek an agreed-upon or formally defined "love concept" where we all agree it's love. This is not trivial to find, and as a result a topic like love is not easy to discuss in any serious manner. Unambiguous communication - or, to be more precise, minimized ambiguity - would allow Alice to discuss with Bob the topic of love in a way where they both know exactly what the other is referring to in terms of behavioral expectations, emotions/feelings, etc.
If Alice agrees to love Bob then Bob has no way to determine what Alice means unless he and she agree on a mutually defined concept of love. This highlights how agreement requires very good communication and how minimizing ambiguity can be beneficial at least in this example.
Ambiguity minimization makes sense when you are following a principle of computational kindness. That is if Alice would like to reduce the computational burden on Bob then she can reduce or minimize the ambiguity of her sentence. This is because in order for Bob to interpret an ambiguous sentence Bob must in essence sort all possible interpretations of that sentence from most likely interpretation to least likely interpretation, and before he can even sort he must first search in order to find all possible or at least plausible interpretations.
This is very computationally expensive for Bob but very cheap for Alice. Alice knows exactly what she means but Bob has no clue what Alice REALLY means.
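The asymmetry between Alice and Bob can be sketched in a few lines of code. This is only a toy illustration (the sentence, the candidate readings, and their probabilities below are hypothetical, invented for this example): Bob must first search for candidate readings, then sort them by likelihood, while Alice's intended meaning costs her nothing.

```python
# Toy sketch of Bob's interpretive burden (hypothetical data, not from
# any real parser): given an ambiguous sentence, he must first SEARCH
# for plausible readings, then SORT them by how likely each one is.

# Hypothetical prior probabilities Bob assigns to each reading.
readings = {
    "I saw the man with the telescope": [
        ("Bob used a telescope to see the man", 0.6),
        ("The man Bob saw was holding a telescope", 0.35),
        ("Bob sawed the man using a telescope", 0.05),
    ]
}

def interpret(sentence):
    """Search for all candidate readings, then rank most-likely first."""
    candidates = readings.get(sentence, [])          # the 'search' step
    return sorted(candidates, key=lambda r: -r[1])   # the 'sort' step

ranked = interpret("I saw the man with the telescope")
print(ranked[0][0])  # Bob's best guess; Alice paid none of this cost
```

The point is only that Bob's cost grows with the number of plausible readings, while Alice's stays constant.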
A benefit of ambiguity
There are other examples where increasing ambiguity could be beneficial, such as when the communication is less than formal, or when sharing a stream of consciousness without turning it into a formal communication. Humor, for example, rides on ambiguity, and a good joke may have multiple layers. Art also leverages ambiguity because it is perhaps meant to be interpreted 20 different ways, all to produce a certain desired effect.
Ambiguity allows more meaning to be packed into fewer words; in a sense it is a sort of compression scheme. If a sentence has multiple possible meanings, those meanings are still finite. It's a fixed number of meanings, so theoretically a search can be conducted. In fact this is what a human being does when interpreting natural language where a sentence can have multiple meanings (they search for all possible interpretations of that sentence). The problem is that this process is computationally expensive, at least for the human being trying to figure out all possible interpretations of a sentence.
Lawyers do their work with a specific knowledge base of common legal sentences and the common interpretations known in their profession, but the rest of us might see a sentence in lawyer-speak and not really know what it means because we do not know those common interpretations. This is a big problem, of course, because to form agreements both parties need a common understanding (a kind of knowledge-symmetric understandability) allowing them both to interpret the same sentence to mean the same thing.
In a recent article of mine I hinted at my strong suspicion that scaling is itself scalable.
''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''
No matter how terrific a grower a system is - as per its own internal algorithmic growth-drive rules - its growth seems inevitably to run into entropic mutualization upon impact with a kind of ... downscaler.
Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically: the faster a thing grows - the sooner comes its encounter with an external and bigger downscaling factor.
This realization, refracted through the prism of our 'reptilian brain' layer and amplified to gargantuan proportions by our inherent social hierarchicity, is the source of the 'Malthusian anxiety' which has led to countless violent deaths over all of human history. Fear feeds anger, so the emotion that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is the major source of what makes us bad to each other.
There is a plethora of examples of very well mathematically and scientifically grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics, and simultaneously all totally wrong for missing the obvious - the factors of externalities, the properties and opportunities of the medium which is consumed and/or created by this growth and which transcend the axiomatics. For growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction.
Let's throw out some such problem-solution couples for clarity:
a. the big cities of the 1890s, sunk knee-deep into beast-of-burden manure, and the super-apocalyptic projections of that, VS Tony Seba's 1-pic-worth-1000-words of the NYC carts-vs-cars situation in 1900-1913 ...
b. the grim visions of the whole of Mankind becoming telephone-switchboard blue-collar workers, the number of which should have exceeded the total world population by now to achieve the same level of telephonization, or
c. the all-librarians world, where it would take more librarians than the whole of mankind to serve the social memory in its paper-and-printed-ink storage mode ...
d. the Club of Rome as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of shit or the 'proofs' of the impossibility of connecting or educating or feeding all - instigating the mass-destruction fear that ''we are running out of everything and will soon all die'', used as justification for mass atrocities, VS Julian Simon's ''The Ultimate Resource'' (1981, 1996). Cf. my accelerando article, and see what precisely is the Factory for the succession of better and better Hanson drives over the last few million years - from the Blade and the Fire to the Tau. It is the same thing whose identification turned Julian Simon from a fanatical Malthusian into a rationally convinced Cornucopian ... the human mind.
e. the predator-prey model, whose brutal flaw this pseudo-haiku I guess depicts best:
''hawk eats chick -> fewer chicks; human eats chick -> more chicks''
for failing to posit and account for the positive feedback loop of predator-over-prey dynamics ...
f. The comment of Daryl Oster, founder of the other passion of mine - ET3 - on the so-called 'saturation' of the scalables (exemplified in the field of transportation, which, btw, being communication ... our social structures map onto the mobility systems we have at our disposal ... ):
''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use). Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by, government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years. The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all of whom fund research centered on 'improving' the status quo. Virtually all universities (for the last 1k+ years) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry.
Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''
g. The Darwinian algo's sqrt(n) VS higher algos - like Metcalfe's n^2. This is not precise; it is metaphorical, meant to indicate the direction or scale of scaling rather than rigorous precision, but ... the former, figuratively speaking, takes 100 times more to put up 10 times more, and the latter takes 10 times more to return 100 times more ...
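The figurative contrast can be made concrete with a toy sketch (my illustration; the 'laws' here are literally just the square root and the square, as the text suggests, not rigorous models of either process):

```python
import math

def darwinian_return(effort):
    """Figurative sqrt-law: output grows as the square root of input."""
    return math.sqrt(effort)

def metcalfe_return(effort):
    """Figurative Metcalfe-law: output grows as the square of input."""
    return effort ** 2

# 100x more effort under the sqrt law yields only 10x more output:
print(darwinian_return(100) / darwinian_return(1))   # 10.0
# 10x more effort under the square law yields 100x more output:
print(metcalfe_return(10) / metcalfe_return(1))      # 100.0
```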
h. Barter vs money. See the bottom of page 5, above the bottom-line notes, about the latter:
simplifies pricing calculations and negotiations from O(n^2) complexity to O(n) complexity
A demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. (Btw, an extremely strong document, where entire books read and internalized stand behind each tight and contentful sentence!)
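The O(n^2)-to-O(n) claim is easy to check numerically. A minimal sketch (the standard counting argument, not code from the cited document): under pure barter every pair of goods needs its own exchange rate, n(n-1)/2 in total, while once one good serves as money only n prices are needed.

```python
def barter_prices(n_goods):
    """Pairwise exchange rates every trader must track under barter."""
    return n_goods * (n_goods - 1) // 2

def money_prices(n_goods):
    """One money price per good once a single good becomes money."""
    return n_goods

for n in (10, 100, 1000):
    print(n, barter_prices(n), money_prices(n))
# At 1000 goods: 499500 barter exchange rates vs just 1000 money prices.
```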
i. The heat death of the universe VS the realization that the 2nd law - the conservation law for entropy/information - does not allow that, the asymptoticity of the fundamental limits of nature, the fact that max entropy grows faster than (and from, and due to) the actual entropy growth, that entropy is not disorder, and that at the end of the day it is an unbounded immortal universe ... cause it's all a combinatorial explosion.
j. The Anthropic principle  and the realization that it is extremely hard if not impossible to posit a lifeless universe  ...
k. The Algoverse - my 'psychedelic' vision  of the asymptotic inexorable hierarchy of the Dirac sea  of lower algos which take everything for almost nothing - up towards giving almost everything for almost nothing - Bucky Fuller's runaway Ephemeralization . Algorithms are things. Objects. Structure. Homoousic or consubstantial to their input and output. Things taking things and making things outta the former. Including other algos of course! Stronger ones.
l. The Masa Effect. The Master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips - with the very same thing which scales supply in such a too-much way.
m. The Pierre de Latil 1950s and Stanislaw Lem 1960s (copied 1:1 by Tegmark) hierarchy. Of degrees of self-creating freedom of Effectors ...
n. Limits of growth - present in any particular moment and in any finitary setting of rules ,  but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light  vs ... photosynthesis.
o. Ray Kurzweil - static vs exponential thinking .
p. Craig Venter's Human Genome project, which when commenced in 1990 was ridiculed as being unbearably expensive and bound to take centuries to finish. And it did - it cost an unbearable-for-1990 fortune and it did take centuries of subjective time, as per the initial projection conditions - being completed in the year 2000.
q. Jeff Bezos' vision of a Solar System-wide Mankind:
''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''
r. The 'wastefulness' of data centers and crypto mining colocation facilities ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum calculation energy.)
s. The log-scale intuitive measure and smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:
"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950 is about 200 Billion man-years ago."
Oops! Go back to Q. With a population of 1 trillion humans, will the 'singularities' occur once a year?!
t. the Tau  !!
I could continue with these examples ... forever [wink] - excuse me if I've bored you - but I think at least that minimum needed to be shown, and it is enough to grok the big picture.
Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.
Size matters. Scaling is the way. But the more general is how a system handles change! This is as fundamental as to be in the very core of definition of life and intelligence .
Tauchain is all about change handling!
Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do:
Dear friends, please, scroll back to Example D. Yes, the human mind transcaler thing. The Ultimate resource thing.
We are the ultimate resource.
We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves).
We as the-I are strong thinkers and creators - immensely more road lies ahead than has been traveled, yes - but still we, as the-I, are the momentary apex of the Effectoring business in the Known universe ... AND simultaneously we as the-We are mediocre to outright dumb.
We are very far from proper scaling together. The Ultimate resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind-laser. Dispersed fissiles, but not a concentration of critical masses.
We as the-We - paradoxically - persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not entertain scaling well at all.
The individual human mind is the unscaled transcaler.
Tau is the upscaler of that transcaler.
I'll introduce herewith another 'poetic' neologism which occurred to me to depict the scaling props of a system - following the Scrooge factor of ''Tauchain - Tutor ex Machina'' - and it is the:
Spawn  factor
- the capacity and ability of a system to grow through, despite, against, across, from and via the changes. Just as cuboid covers all rectangular things - squares, cubes, tesseracts ... regardless of their dimensionality - the Spawn Factor is meant to be a generalization of all orders of scaling. A zillion light years from rigor, of course, as I am at least the same distance from my Leibnizization. For a lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.
Tau transcends the infinite regress of orders of: scaling of scaling of scaling ... by being self-referential. Or recursive. 
What is the Spawn factor of Tau?
If you'll let me, I'll illustrate this with a poetic periphrasis of the famous piece from Frank Herbert:
I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.
Zooming out is useful. It puts the event networks of our spacetime in perspective. Including what the great Jorge Luis Borges called the Orbis Tertius:
''ORBIS TERTIUS. "Tertius" (Latin = third) is an allusion to: World 3: the world of the products of the human mind, defined by Karl Popper.''
Poetically stated, ''retrodiction studies'' enable us to catch a glimpse of the "clear, cold lines of eternity".
Back in the 20th century, Prof. Robin Hanson put together this extremely insightful and strong document:
Long-Term Growth As A Sequence of Exponential Modes,
The economy grows. [see: Footnote]. Unstoppable.
Hanson's unprecedented contribution was to provide us with a systematic orientation tool for how and why the economy grows.
It accelerates. See:
Mode         Doubling     Date Began      Doubles   Doubles   Transition
(Grows)      Time (DT)    To Dominate     of DT     of WP     CES Power
-----------  -----------  --------------  --------  --------  ----------
Brain size   34M yrs      550M B.C.       ?         "16"      ?
Hunters      224K yrs     2000K B.C.      7.3       8.9       ?
Farmers      909 yrs      4856 B.C.       7.9       7.6       2.4
Industry     6.3 yrs      2020 A.D.       7.2       >9.2      0.094
The model identifies the past economy accelerators as:
- neural networks, evolving into a doubling of brain size every 30-ish megayears (hinting that a human level of intelligence was an inevitability: +/-30 million years around the Now, by virtue of the good old 'coin-toss' Darwinian algorithm alone).
- the human as the top-of-the-food-chain predator since around 2,000,000 B.C. (the human mastering of the Fire and the Blade is perhaps to blame), compressing the doubling time by over two orders of magnitude, down to a quarter of a million years.
- food production and ecosystem manipulation (or rather the collimation of farming, horse domestication and writing as accelerator components), leading to fewer than 40 human generations per economy doubling.
- all we know as division of labor, specialization, systematized Sci-Tech ... industry - the centralized ways of producing and controlling knowledge - leading to another hundredfold-plus compression, down to a mere ~decade of economy doubling time.
Recommended: digest each Hanson Engine (economy accelerator drive) with Bob Hettinga's 'ensime':
My observation about networks in general is a rather obvious one when you think about it: our social structures map to our communication structures. As intuitive as it is to understand, this observation provides great insight into where the technology of computer assisted communication will take us in the years ahead.
Connectivity specs as indicator and drive.
Now, when we leave the past and use these models to gaze into the future, the really interesting stuff comes out.
Aside from explaining the overall trajectory of the economy detected by Brad DeLong in his also monumental paper, the nucleus of meaning in Robin Hanson's paper is:
Typically, the economy is dominated by one particular mode of economic growth, which produces a constant growth rate. While there are often economic processes which grow exponentially at a rate much faster than that of the economy as a whole, such processes almost always slow down as they become limited by the size of the total economy. Very rarely, however, a faster process reforms the economy so fundamentally that overall economic growth rates accelerate to track this new process. The economy might then be thought of as composed of an old sector and a new sector, a new sector which continues to grow at its same speed even when it comes to dominate the economy.
Visualize: a Petri dish, and its sugar, being expanded in size and quantity by the accelerating growth of the bacterial culture in it.
Hanson actually predicted, nearly a quarter of a century ago ... something that is relentlessly coming.
In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be ... 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate. ... it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.
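Where does the "two weeks" figure come from? A back-of-envelope reconstruction (my own reading of the table above, not Hanson's actual CES model): divide the industry doubling time of 6.3 years by each of the last three compression ratios between successive growth modes.

```python
# Back-of-envelope reconstruction (my reading of Hanson's table, not his
# CES model): each mode transition compressed the doubling time (DT) by
# a large ratio; apply the last three ratios to the industry DT.

doubling_times_years = {
    "brain": 34e6, "hunters": 224e3, "farmers": 909, "industry": 6.3,
}

ratios = [
    doubling_times_years["brain"] / doubling_times_years["hunters"],
    doubling_times_years["hunters"] / doubling_times_years["farmers"],
    doubling_times_years["farmers"] / doubling_times_years["industry"],
]

WEEKS_PER_YEAR = 52.18
for r in ratios:
    next_dt_weeks = doubling_times_years["industry"] * WEEKS_PER_YEAR / r
    print(round(next_dt_weeks, 1))  # each lands in the 1.3 - 2.3 week range
```

The three results land in roughly the same 1.3 to 2.3 week range as the estimates Hanson quotes.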
An economy accelerator avalanche is roaring down the slope of time towards us.
A brand new Hanson Engine is about to leave the assembly line.
Tau, is that you?
FOOTNOTE: To wrap up the above statements in the flesh of the deep thesaurus of content on which they lie would conservatively consume hundreds of pages. Even if only briefed. I promise to come back to these subtopic meaning expansions (by referring back here) with a series of posts in the months to come, to tie up with the notions of: economy as a network, the network as a computer, what exactly it processes and outputs, the economy (like the universe, or life) being an endogenously driven, positive-feedback-loop, self-amplifying, non-equilibrium, entropic, combinatorial-explosion system, wealth as economy-complexity growth in relation to GDP size and the intimate dollars-joules connection in energy intensity, physical and economic limits of growth, self-reinforcing predator-prey models, knowledge as synonymous with skill, and so forth, economic cycles upon the DeLong curve ... to name a few. Readers' questions and comments will of course help a lot with subtopic prioritization, and will boost (incl. my own) understanding. Thank you in advance!
NOTE: I currently have the pleasure and honor to be part of the Tau Team, but this post contains ONLY my personal views.
Retrodictive archaeology is so tempting. It is about what was, what is, what we knew and what we know.
Here I present another time travel glimpse of mine:
February 1998. Global Information Summit*. Japan. Robert Hettinga** - the patriarch of financial cryptography - wrote:
My realization was, if Moore's Law creates geodesic communications networks, and our social structures -- our institutions, our businesses, our governments -- all map to the way we communicate in large groups, then we are in the process of creating a geodesic society. A society in which communication between any two residents of that society, people, economic entities, pieces of software, whatever, is geodesic: literally, the straightest line across a sphere, rather than hierarchical, through a chain of command, for instance.
A network scales according to the capacity of its switches.
Mankind is a network of interlinked humans routed by ... humans.
The network topology*** of society is dictated by our incapacity to switch - similarly to the way penguin society is shaped by their inability to fly.
Running the Sorites paradox**** in reverse - humanity does not form a sand-heap by adding grains, but fractalizes into groupings of up to just a few individuals.*****
A big body of research on discussions persistently brings back the result that above a threshold of as few as 5 persons, the number of possible social interactions explosively exceeds the participants' capacity to handle the group's traffic of information.
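One rough way to see the explosion (a standard counting argument, not the specific study referenced): count the distinct subgroups of two or more people that can form within a group of n.

```python
def possible_subconversations(n):
    """Count the distinct subgroups of 2 or more people that can form
    within a group of n -- one rough proxy for interaction traffic."""
    return 2 ** n - n - 1  # all subsets, minus singletons and the empty set

for n in range(2, 11):
    print(n, possible_subconversations(n))
# At n=5 there are already 26 possible subgroup interactions;
# by n=10 there are 1013 -- far beyond anyone's attention budget.
```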
Increase the group size and the 'c factor' - the collective intelligence - abruptly implodes. Below the individual human level. So long, 'wisdom of the crowd'.
Hierarchy is the only way we know (up to now) for a society to scale. Centralization as an emergent of organic switching limitations.
It is fair to say that we have, and have had, upscaling exosomatic prosthetics all along: language, writing, institutions, specialization ... but at the end of the day, even within these boosters, social switching is bottlenecked down to just a few humans strong.
Until recently, that is, cause, you know ... computers. Humans are not only lousy switches, but also tremendously expensive ones to make. Computers are the vice versa: their performance/cost relentlessly bigbangs.
Moore's law****** is not only about silicon wafers. It is a megatrend from the very dawn of the universe, as Kurzweil noticed******* a long time ago, which goes up and up across all computronium substrata imaginable or possible.
Non-human computation and automated communication promises to break the social scaling barrier.
Here comes Ohad Asor's Tau.********
It is the only project I know of which asks the correct questions and looks into doable solutions for humanity's scaling. And the only meaningful identification and treatment of these problems which seems to lead towards the fulfillment of Bob Hettinga's geodesic visions from a few decades ago.
Of course I do not know it all, but let's say that I search the relevant space intensively.
Tau transcends the human switching limitations in a humane way. Without amalgamating individuals out of existence, which some other discussed ways - like direct neural interfacing - seem inevitably to imply. For society is ... human beings.
What's the pragmatics of geodesic vs hierarchic?
What game do the 'flat' p2p networks really beat the vertical social configurations at?
It is an easy answer. It is pure physics:
A Tauful geodesic society comprises an IMMENSELY richer economy.
Metcalfe's (and Szabo's) law at max!
The combinatorial size of it vastly exceeds the possible arrangements of any traditional social 'pyramid'.
The maximum social diameter becomes ~1.
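A quick sketch of the diameter claim (my illustration, comparing an idealized binary chain-of-command tree against a fully connected graph; real hierarchies and networks are messier):

```python
import math

def complete_graph_diameter(n):
    """In a geodesic (fully connected) society, every pair is one hop apart."""
    return 1

def binary_hierarchy_diameter(n):
    """In an idealized binary command tree, two leaves in different branches
    must route up to the root and back down the other side."""
    depth = math.floor(math.log2(n))
    return 2 * depth

n = 1000
print(complete_graph_diameter(n))    # 1 hop, any resident to any resident
print(binary_hierarchy_diameter(n))  # 18 hops leaf-to-leaf via the chain of command
```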
In fact, it seems quite an ancient archetypal vision, the whole thing:
“Imagine a multidimensional spider’s web in the early morning covered with dew drops. And every dew drop contains the reflection of all the other dew drops. And, in each reflected dew drop, the reflections of all the other dew drops in that reflection. And so ad infinitum.” Allen Ginsberg*********
*- http://www.nikkei.co.jp/summit/98summit/english/online/emlasia3.html (the second entry)
**- http://nakamotoinstitute.org/the-geodesic-market/
***- https://en.wikipedia.org/wiki/Network_topology
****- https://en.wikipedia.org/wiki/Sorites_paradox
*****- https://sheilamargolis.com/2011/01/24/what-is-the-optimal-group-size-for-decision-making/
*********- https://en.wikipedia.org/wiki/Indra%27s_net (image from: https://mindfulnessforhealing.com/2012/12/29/weaving-a-tapestry-of-wellness/ )
NOTE: I'm in the Tau Team, but this post expresses only my own associations and interpretations.
Tauchain is a profound project that has taken years of deep research and development. Some of the smartest people I've known on this platform highly recommended it, which is why it has been making me do a few things I've not been doing for a while now:-
So one of the first things I noticed in #idni's IRC channel is a cool-looking username, "naturalog". While I'm pretty sure it just means natural logarithm, could it be natural OG instead? The natural, original gangsta? In casual parlance, of course. Turns out, that's Ohad Asor's (the founder's) nickname. What a smooth operator. That username is like wordplay: a mathematician with street cred. Too bad that Steem username is already taken.
The Natural OG
Reading through the logs I soon realised that I can trust his words. Why? Other than his experience, I think it's because I'm somewhat the same in nature. Not that I'm a genius with great knowledge and expertise like he is, but I do appreciate stuff like language, semantics, logic, and such. They're the kind of subjects which I think help shape clear communication. It shows throughout his replies in the logs.
Many might not know it, but everything I say or type usually takes quite some time because I do try to be careful with words. Sometimes I even spend minutes to decide whether or not to say "could" instead of "would", amongst all of the other nuances in communication. Because, what else do we really have between us other than words? This is why writing is almost sacred to me.
The ability to question oneself and one's choice of words is part of our learning process. Why do we really say what we say, or think what we think? I can't speak for everyone, but I expect introspective, lifelong learners to be more trustworthy when it comes to dealing with complex subjects. Plus, the obvious elements of the project seem to speak more about substance than hype:-
So all things considered, the project is unlikely to be a scam. If you search through the ~28 megabytes worth of IRC chatlogs, you will even find three ultra-rare instances of Ohad Asor aka naturalog mentioning "before it was cool". Look at the image below. Knowing his history and experience, I think it's safe to conclude that this dude is a certified OG. The natural OG. Total man crush! I might even ask him for some dating tips once he's done with the bulk of the development.
If those points above are not enough street cred to establish an OG status, check out this section of the chat log below:-
10:39 < Liaomiao> you must know a lot about blockchain architecture if you came up with some of the ideas behind graphene
Just good to know that he might have had some influence on the creation of Graphene, Dan Larimer's creation for Bitshares that subsequently shaped the inner workings of both Steem and EOS. Impressive indeed. It's a good sign for Tauchain / Idni Agoras. In contrast, I was still riding rollercoasters all day, high on sweet carbonated drinks in Disneyland, at the same age when Ohad Asor was already grinding like an OG, writing production-level software.
So it would seem like my investigation into the heart of Tauchain has quickly turned me into a huge admirer and fan of the project. It has never happened to me before to this extent, but I certainly don't mind given the project's scope and the main developer's character. It's at least a much better story than elevating irrational loonies and sensationalists with no appreciation of well-founded knowledge, which unfortunately is all too common in society these days. If anything would make the world a better place, it would be intellectual curiosity, not intellectual dishonesty.
For now, I'm quite happy to have found the natural OG who has been working quietly behind the scenes. So far it seems to me that it could very well be the next big thing other than Steem communities and SMTs. I'll be posting more about the project in time. As always, thanks for reading.
Website - http://www.idni.org
Github - https://github.com/IDNI/tau
Telegram - https://t.me/tauchain
Reddit (with FAQ) - https://www.reddit.com/r/tauchain/
Coinmarketcap entry - https://coinmarketcap.com/currencies/agoras-tokens/
Here's an hour-long interview with Ohad Asor that you might want to check out.
Not to be taken as financial advice.
The Power of Tau - Scaling the Creation of Knowledge. By Trafalgar. Posted on Steemit. December 31, 2017.
Ohad Asor, creator of Tau Chain/Agoras, has recently published the long awaited blog post detailing his vision for what very likely is the most ambitious project in the crypto space: Tau.
Tau will accelerate human endeavors by overcoming long ingrained limitations in our collaborative processes; limitations which we rarely even question.
The Problem of Social Governance
Take social governance, for example. As individuals, we have opinions over a wide variety of social issues. Perhaps you feel that the speed limit on certain roads is too high, or that programming should be a compulsory subject at public schools, or that everyone would benefit if cryptocurrencies were officially recognized and endorsed by the state.
However, you have no idea how to get these concerns across to the general public. I mean, you could try writing a letter to your local representative or signing a petition, but ultimately that's unlikely to gain much traction. Meanwhile, the very same issues that seem to have divided the nation over the past decade remain at the forefront of our political debate. Immigration, climate change, abortion, gun control etc. are all important issues of course, but very little progress has been made considering the amount of time, resources and attention that has been devoted to them.
So the problem with traditional forms of social governance, such as democratic voting, is apparent: on the one hand it has difficulty identifying and addressing the wide range of opinions different people hold, on the other hand, even with respect to the small number of issues that do end up bubbling up to the surface, it isn't particularly efficient at detecting consensus.
The central cause of this problem is that current modes of discussion are not scalable. There are inherent limitations in the way we're able to communicate our views to each other; namely, the human ability to comprehend and organize information is the main bottleneck. We cannot possibly follow multiple conversations at once, or recall everyone's propositions once there are more than a handful of people in the mix. This is why most collaborative decision-making bodies in practice are generally quite small in number: the President's cabinet, the Supreme Court Justices, the boardroom directors of a Fortune 100 company etc.; you just can't have a productive discussion with 50 people. Our entire civilization is structured around this very limitation: discussions don't scale.
Scaling Collaborative Discussions Under Tau
Imagine if we could overcome this limitation; what would it mean for social governance? By using a self-defining, decidable logic, the Tau network is easily able to keep track of every user's propositions and detect consensus automatically. Note that making a proposition is exactly the same as voting for that very same proposition: when you propose 'dogs should always be on a leash in public unless in a park' you are in effect putting in a vote for that proposition. This way, countless issues, regardless of how technical or niche, can be assessed through the network concurrently, and social consensus can be detected on the fly. The Tau network can scale social governance, overcoming one of the greatest limitations in human communication of ideas, by delegating the task of logically making sense of everybody's propositions to the computer. A simple use case of this will be the rules of the Tau network itself: through a self-defining logic, Tau is able to detect consensus among its users from block to block, altering its own rules to conform to the choices of the user base.
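The "proposing is voting" idea can be sketched minimally (the users, propositions, and threshold below are hypothetical illustrations; the real Tau operates over logical formulas, not plain strings):

```python
# Toy sketch of 'propose == vote': tally identical propositions across
# users and surface any that cross an agreement threshold -- no separate
# ballot step needed. All data here is hypothetical.

from collections import Counter

proposals = {
    "alice": {"dogs on leash in public", "lower speed limit on Main St"},
    "bob":   {"dogs on leash in public", "teach programming in schools"},
    "carol": {"dogs on leash in public", "lower speed limit on Main St"},
}

def detect_consensus(proposals, threshold):
    """Return every proposition asserted by at least `threshold` users."""
    votes = Counter(p for user_props in proposals.values() for p in user_props)
    return {p for p, n in votes.items() if n >= threshold}

print(detect_consensus(proposals, threshold=3))
# {'dogs on leash in public'} -- agreed by all three users
```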
The benefits of scaling discussions are not limited to a more efficient form of social governance. Logic isn't merely about detecting surface-level consensus; the network can also derive further deductions from everyone's propositions. If one states 'all men are mortal' and 'Socrates is a man', one can deduce that 'Socrates is mortal.' But deductions can be very deep and nontrivial. Imagine a group of 1,000 mathematicians all inputting their mathematical insight as propositions. Tau can rapidly detect who agrees with whom on what, and deduce every logical consequence of their combined wisdom, in effect arriving at new truths and insights. In other words, Tau greatly accelerates the production of new knowledge. This will, of course, also work with physicists, doctors, engineers, computer scientists, indeed experts in every field, working together on the platform. By scaling collaborative discussions in a logical network, Tau is able to scale the creation of knowledge.
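The Socrates deduction above can be sketched as a tiny forward-chaining loop. Again, this is a hypothetical illustration, not how Tau is implemented: rules here are simple (premises, conclusion) pairs already instantiated to individuals, and the loop applies them until no new facts appear.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premise-set -> conclusion) until fixpoint.

    A rule fires when all its premises are already known facts;
    its conclusion is then added to the fact set.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"man(socrates)"}
# 'all men are mortal', instantiated for Socrates:
rules = [({"man(socrates)"}, "mortal(socrates)")]

derived = forward_chain(facts, rules)
print("mortal(socrates)" in derived)  # True
```

With many participants contributing facts and rules, the same loop surfaces conclusions no single contributor stated explicitly, which is the sense in which combining propositions produces new knowledge.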
When Tau comes into effect, any company, government, and indeed any organization not using this new network will be rendered obsolete. Tau aims to become an indispensable technology.
And this is only the alpha of Tau.
I will talk about the beta in a future post. The beta will revolve around not just the scaling of discussions and consensus, but the automation and execution of code based on the results of those discussions. For more information on code synthesis and more, please read Ohad's blog. Also, do check out my introduction to Tau here if you missed it.
You can invest in Tau through buying Agoras tokens on Bittrex.
I am not affiliated or paid by the project. These represent my own subjective views. Tau/Agoras is the only other crypto project apart from Steem in which I see an extraordinary future, and I am merely sharing that with fellow Steemians here.
Ohad Asor's New Tau Blog
IRC Chat: Where you may ask Ohad himself technical questions
Tau Chinese QQ Group: 203884141
The liquid paradigm, feedback loops, the virtuous cycle and Tauchain. By Dana Edwards. Posted on Steemit. December 31, 2017.
What do I mean by the concept of a "liquid platform"? This is merely a re-articulation of the concepts of self-amendment and self-definition; in other words, it is very much like an autopoietic design. Bruce Lee once said to "be like water", because water can adapt to any environment it is placed in by taking the form of its container.
So by the liquid paradigm I mean that the core feature of truly next-generation platform design is going to be maximum adaptability.
Feedback loops and the virtuous cycle
How can we have a platform which promotes continuous self-improvement? If a platform has no hard-coded "self", then even the design of the platform is under constant negotiation and creation. This is key because it means Tauchain will be able to adapt more quickly than all competing platforms, including Tezos, which merely provides self-amendment but lacks the virtuous cycle, the meta language, and so on.
The Tau Meta Language allows for self-definition at the level of languages. This means even the communication mechanism between humans and machines can be updated continuously. This continuous updating is the key design breakthrough of Tauchain because it means Tauchain will always be state of the art in any area. Think of a platform like Wikipedia, where anyone can update any part of it in real time, so that every part of it is always state of the art.
Starting at languages, a feedback loop can be created between humans and intelligent machines. Humans must make decisions on how to design Tau, and these decisions benefit from the virtuous cycle because the feedback loop between humans and machines allows the decision-making ability itself to be upgraded. This could even allow humans to transcend traditional human capabilities by relying on intelligent machines to assist in design: better designs lead to better decision making, which leads to better future designs, and so on. This is the "virtuous cycle", a feedback loop running from humans to machines to humans and back again. The humans improve the quality of the machines by feeding them knowledge and new algorithms, just enough for the machines to become intelligent enough to help the humans help the machines even more efficiently in the next iteration of Tauchain, over and over again.
Humans and machines will seek more good and less bad for the formal specification of Tau itself. Good and bad designs will be defined collaboratively by the human participants through intelligent discussion. As discussion scales, bigger crowds mean more human minds involved, which means improved design, which eventually leads to a better and perhaps wiser Tau, which in turn leads to wiser, more intelligent discussions, an improved formal specification, and a better Tau. So that is one loop. There is also a loop between improving Tau and improving society: a better Tau improves society, and a better society improves Tau.
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.