The Paradigm of Social Dispersed Computing and the Utility of Agoras. By Dana Edwards. Posted on Steemit. October 12, 2018.
Social Dispersed Computing
What is social dispersed computing? It is an edge-oriented computing paradigm which goes beyond cloud and fog computing. To understand social dispersed computing we first have to discuss dispersed computing and how it differs from the previous paradigm of cloud and fog computing. The current trend toward decentralized networks, which we first saw with peer-to-peer technologies such as Napster, Limewire, and BitTorrent, and later with Bitcoin, has brought us the opportunity to conceive entirely new paradigms. The original model most people are familiar with is the client-server model, which was always limited in that the server was vulnerable to DDoS attack. The client-server model has never been, and could likely never be, censorship resistant.
In the client-server model the server could simply shut down, as was the case with Bitconnect, or it could be raided. The server could also be shut down by hackers who simply flood the site with requests. From the problems the client-server model presented we discovered the utility of the peer-to-peer model. The peer-to-peer model was all about censorship resistance and promoted a network with no single point of failure (single point of attack) which could result in the shutdown of access points to the information. Among the first applications for these peer-to-peer networks were file sharing networks and networks such as Freenet and Tor. This of course eventually evolved into Bitcoin, which ultimately led to the development of Steem.
In dispersed computing a concept is introduced called "Networked Computation Points". An NCP can execute a function in support of user applications. To elaborate further I'll offer an example below.
Consider that every component in a network is a node. Now consider that every component node is an NCP, in that it can execute some function to support some user application. If we think of a blockchain, for example, then mining fits into this category because a miner is both a node in the network and can execute a function in support of Bitcoin transactions. Why is any of this important? Parallelism is something we can gain from dispersed computing, and please note that it is distinct from concurrent computing. When we rely on parallelism we can reap performance benefits when executing code by breaking it up into many small tasks which can be performed across many CPUs.
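The idea of breaking one large computation into many small, independent tasks spread across CPUs can be sketched as follows. This is a local illustration only: real dispersed computing would spread such tasks across networked computation points rather than the cores of one machine, and the task itself (summing squares) is an invented stand-in.

```python
# A minimal sketch of parallelism: split a big job into many small,
# independent tasks and farm them out across CPU cores.
from multiprocessing import Pool

def small_task(chunk):
    # Each worker handles one independent piece of the larger computation.
    return sum(x * x for x in chunk)

def run_in_parallel(data, workers=4, chunk_size=250):
    # Break the input into chunks; each chunk is one small task.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(workers) as pool:
        partial_results = pool.map(small_task, chunks)  # tasks run on separate CPUs
    return sum(partial_results)

if __name__ == "__main__":
    total = run_in_parallel(list(range(1000)))
    print(total)  # same answer as a serial sum of squares, computed in parallel
```

The key property that makes this parallel (rather than merely concurrent) is that the chunks do not depend on each other, so they can genuinely run at the same time on different processors.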
EOS attempts to leverage parallelism specifically to enable its performance boost. The benefit is speed and flexibility. Think also of the hardware side, with FPGAs, which can perform tasks similar to those of a microprocessor. FPGAs, unlike ASICs, provide generalized, flexible parallel computing. Consider that, just as with mining, a company could add more and more FPGAs to scale an application as needed.
To understand Social Dispersed Computing we have to note that there are other users at any given time. The other users in the network participate to provide resources to the network for the benefit of other users whilst using the network themselves. So on Steem, for example, as you add content you are adding value to Steem in a direct way, but also in a dynamic way. The resources on Steem can also adapt dynamically to demand, provided that the incentive mechanism (Resource Credits) works as intended.
EOS as an example DOSC (Dispersed Operating System Computer)
Because EOS seems to be the first to approach this holistically, I will give the EOS network credit for pioneering dispersed computing in the crypto space. All resources are representable by tokenization in a dispersed computing network. EOS and even Steem have this. Steem has it in the form of "Resource Credits", which represent the available resources on the Steem network. If more resources are needed then theoretically the Resource Credits could act as an incentive to provide those resources to the Steem network. This provides a permanent price floor for Steem, represented as the amount of Steem which would have to be purchased in order to have enough resources to run Steem (if I have the correct theoretical understanding). This would put Steem on a trajectory toward dispersed computing.
Operating systems typically sit between the hardware and software as a sort of abstraction layer. This traditionally has been valuable because programmers don't have to speak directly to the hardware and hardware designers don't have to communicate their designs directly to the programmer. In essence the operating system in the traditional model is centralized and made by a company such as Microsoft or Apple. This centralized operating system typically runs on a device or set of devices and provides some standard services such as email, a web browser, and maybe even a Bitcoin wallet.
Typically the most valuable or highest-utility software people consider on a computer is the operating system. On our smartphones this is Android and on PCs it may be Windows or Linux. This is of course turned on its head under the new paradigm of dispersed computing and the new conceptual model of the "decentralized" operating system. EOS is the first to attempt a decentralized operating system using current blockchain technology, but upcoming technology easily eclipses what EOS can do. Tauchain is a technology which, if successful, will leave EOS in the stone age in terms of what it will be able to do. EOS, while ambitious, has also had its problems with regard to voting mechanisms and the ease with which collusion can take place.
To better understand how decentralized operating systems emerge learn about:
If we look at OSKit we see that it provides the tools necessary for operating system development. If we look at Tauchain we realize that it is strategically the most important tool for the development of a decentralized operating system, provided in the form of TML (a partial evaluator). If we think of the primary tool necessary to develop from, we have to start with a compiler. A compiler generator is more like what TML allows with its partial evaluator. More specifically, it is the feature of Futamura projections which can provide the ability to generate compilers.
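The first Futamura projection can be illustrated with a toy sketch: specializing an interpreter with respect to a fixed source program yields, in effect, a compiled version of that program. A real partial evaluator (such as TML's) performs deep program transformation; in the sketch below, "specialization" is merely simulated by closing over the program, which captures the idea but not the power, and the tiny language is invented for illustration.

```python
def interpret(program, x):
    """Interpreter for a tiny language: a list of (op, arg) pairs applied to x."""
    for op, arg in program:
        if op == "add":
            x = x + arg
        elif op == "mul":
            x = x * arg
    return x

def specialize(interpreter, program):
    """First Futamura projection (sketch): fix the program, leave input free.

    A genuine partial evaluator would unfold the interpreter's loop and emit
    straight-line residual code; here we only freeze the program argument.
    """
    def compiled(x):
        return interpreter(program, x)
    return compiled

# "Compile" the program (x + 3) * 2 by specializing the interpreter to it.
double_plus_three = specialize(interpret, [("add", 3), ("mul", 2)])
print(double_plus_three(10))  # 26
```

The second and third projections extend the same trick: specializing the partial evaluator to the interpreter yields a compiler, and specializing the partial evaluator to itself yields a compiler generator, which is why a partial evaluator is such a foundational tool.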
If we look at the next most important part of an operating system it is typically the kernel. Let's have a look at what an exokernel is:
Operating systems generally present hardware resources to applications through high-level abstractions such as (virtual) file systems. The idea behind exokernels is to force as few abstractions as possible on application developers, enabling them to make as many decisions as possible about hardware abstractions. Exokernels are tiny, since functionality is limited to ensuring protection and multiplexing of resources, which is considerably simpler than conventional microkernels' implementation of message passing and monolithic kernels' implementation of high-level abstractions.
By Thorben Bochenek [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
From this at minimum we can see that an exokernel is a more efficient and direct way for programmers to communicate with hardware. To be more specific, "programs" communicate with hardware directly by way of an exokernel. We know the most basic function of a kernel in an operating system is the management of resources. We know in a decentralized context that tokenization allows for incentives for management of resources. When we combine them we get kernel+tokenization to produce an elementary foundation of an operating system. In a distributed context we could apply a decentralized operating system in such a way that the network could be treated as a unified computer.
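The "kernel + tokenization" combination above can be sketched as a ledger: the kernel's core job (multiplexing scarce resources) is expressed as token accounting that credits nodes for providing resources and debits them for consuming resources. All names and numbers here are hypothetical illustrations, not the actual mechanism of Steem's Resource Credits or of EOS.

```python
class TokenizedResourceManager:
    """Toy sketch: resource multiplexing driven by token incentives."""

    def __init__(self):
        self.balances = {}   # node -> token balance
        self.capacity = 0    # total resource units available on the network

    def provide(self, node, units, reward_per_unit=1):
        """A node contributes resources and is rewarded in tokens."""
        self.capacity += units
        self.balances[node] = self.balances.get(node, 0) + units * reward_per_unit

    def consume(self, node, units, cost_per_unit=1):
        """A node spends tokens to use resources; refused if it can't pay."""
        cost = units * cost_per_unit
        if self.balances.get(node, 0) < cost or self.capacity < units:
            return False
        self.balances[node] -= cost
        self.capacity -= units
        return True

mgr = TokenizedResourceManager()
mgr.provide("alice", 10)          # alice contributes 10 units, earns 10 tokens
print(mgr.consume("alice", 4))    # True: alice spends 4 tokens for 4 units
print(mgr.consume("bob", 1))      # False: bob holds no tokens
```

The point of the sketch is the feedback loop: because tokens are earned by providing and spent by consuming, the "kernel" can ration resources without any central administrator, which is exactly what a decentralized operating system needs.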
Abstraction is still important by the way. In an operating system we know the object oriented way of abstraction. Typically the programmer works with the concept of objects. In an "Application Operating Environment" an "Application Object" can be another useful abstraction. Abstraction can of course be taken further but that is for another blog post.
The Utility of Agoras
Agoras+TML is interesting. Agoras is the resource management component of what may evolve into the Tau Operating System. This Tau Operating System or TOS is something which would be vastly superior to EOS or anything else out there because of the unique abilities of Agoras. The main abilities have been announced on the website such as the knowledge exchange (knowledge market) where humans and machines alike can contribute knowledge to the network in exchange for the token reward. We also know that Agoras will have a more direct resource contribution incentive property in the form of the AGRS token so as to facilitate the sale or trade of storage, bandwidth or computation resources.
The possible (likely?) emergence of the Tau Operating System
In order for Tauchain to evolve into a Dispersed Operating System Computer it will need an equivalent to a kernel: some means of allowing whoever is responsible for the Tauchain network to control and manage the resources of that network. If, for example, the users decide, then by way of discussion there would be a formal specification or model of a future iteration of the Tauchain network. This, according to current documents, is what would produce the requirements for the Beta version of the network to apply program synthesis. Program synthesis in essence could result in a kernel, and from there the components of a Tau Operating System could be synthesized in the same way. Just remember that all that I write is purely speculative, as we have no way to predict with certainty the direction the community will take during the alpha.
The Era of Signals and Changing Power Dynamics. By Dana Edwards. Posted on Steemit. October 8, 2018.
The world we live in is rapidly changing. For instance the #MeToo era has arrived. This new era shows us that any individual in any position in society can be brought down. It proves a point that many in the blockchain community may have known instinctively: any individual source of authority and/or power can be removed from that position. Some people actively seek these positions of power for their own reasons, and some of these people abuse them. People who seek power for the wrong reasons and then abuse it are, in my opinion, a risk which positions of authority bring (and which blockchain technology may help reduce).
What are signals and what is signalling theory?
Social desirability bias is a popular topic in academic circles. To explain:
In social science research, social desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad," or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports, especially questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.
People tend to want to be liked/loved. When asked questions on a survey, people may feel pressured to answer in a way which they think will be viewed more favorably by others. In other words, rather than answering in a manner which reflects what they truly think or feel, they will assess how others might judge their response and then answer in the way they think will be judged most favorably.
A full video on this topic is below:
Social desirability bias is exactly why voting on platforms such as Steem will not work. When voting is public, most of the research seems to show that people will feel pressured to vote not in the way they really believe or prefer but in the way they think the whales want them to vote. In other words, because on Steem the whales can reward (or punish) anyone who votes in ways which go against "political sensibilities", it is likely that social desirability bias applies particularly on DPOS-style consensus platforms. If there are votes and the votes are not encrypted (secret), then we have no way to determine which votes are legitimate and which votes are the result of signalling (such as virtue signals).
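One standard way to keep votes secret during a voting window, while still making them verifiable afterward, is a commit-reveal scheme: voters first publish only a hash of their vote mixed with a random salt, then reveal the actual vote after voting closes. Nobody can tell how you voted while social pressure could still apply. This is a generic sketch of the technique, not a description of any existing Steem or DPOS mechanism.

```python
import hashlib
import secrets

def commit(vote: str):
    """Commit phase: publish only the digest; keep (vote, salt) private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((vote + salt).encode()).hexdigest()
    return digest, salt

def verify_reveal(digest: str, vote: str, salt: str) -> bool:
    """Reveal phase: anyone can check the revealed vote matches the commitment."""
    return hashlib.sha256((vote + salt).encode()).hexdigest() == digest

# During voting: only the digest is public, so the vote is effectively secret.
digest, salt = commit("red")
# After the deadline: the voter reveals, and anyone can verify.
print(verify_reveal(digest, "red", salt))   # True
print(verify_reveal(digest, "blue", salt))  # False
```

The salt matters: without it, anyone could hash the handful of possible votes and match them against the public digests, defeating the secrecy.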
For example when it was Trump vs Hillary the polls suggested Hillary would win. This is because there likely was social desirability bias which made it socially undesirable for anyone to admit they voted for Trump. As a result people who voted for Trump or who planned to vote for Trump may have said in public that they intended to vote for Hillary. Because the votes in the election are secret the people who may have seemed like loud Hillary supporters could have been secret Trump supporters in disguise.
In some of my previous posts I discuss signalling theory a bit more:
In these posts I have identified that the behavior of individuals is shaped by how they think other individuals will judge their behaviors. This applies to social desirability optimization, which I'll define as adopting behaviors which provide the expected payoff of improved social desirability.
To provide clarity the definition of social desirability:
Social desirability is the tendency for research participants to attempt to act in ways that make them seem desirable to other people.
In other words people want to be liked. Likeability is a word I can use to simplify the concept of social desirability for readers. In the example with the 2016 election it is clear that supporters of Trump would risk a social stigma with severe social consequences if they came out in public support. This high cost of public support is why some believed that there were secret Trump supporters who were simply afraid of "losing face". In the most simple terms a person can talk red or talk blue depending on where the social stigma is.
One of the stunning conclusions I reached in my own research on this topic is that increasing transparency leads to "preference falsification". That is, a person talks blue while thinking red. If all speech is public (as it is on Steem), then there is the possibility that preference falsification is taking place.
Here is a video on the topic of preference falsification:
Why is this a major problem in the blockchain community? The evolutionary trajectory of a platform relies entirely on market preferences. If censorship exists and conformist pressures hinder true preference aggregation then the developers (and the community itself) will have no way of knowing which improvements to make or which changes would best satisfy the community.
What is leadership and what is the era of signals?
Before I attempt to discuss leadership I will first explain what I think leadership means and what it is. In my opinion the community must always come first. A person who is put into a leadership position is in my opinion in what I'll term "the seat of responsibility". This is in my opinion not an enviable position to be in but someone has to be in this position. For example a person who receives a security clearance is now in a position of heavy responsibility. The information which they protect is not their secrets but the nations secrets.
Leadership in my understanding is not about "being in power" but is about serving a community. To be in a "big seat" is to be in a position of responsibility to make decisions on behalf of a community which the chosen person must represent. In other words being in positions of responsibility is entirely about service and not about power. A representative in congress is not in a position of power but in a position to serve their constituents who put them in that position to represent their interests.
In my opinion, to be a good leader is to be a great listener. The leader must listen to the community to find out what the community wants and needs, and to determine what the community thinks is right or wrong. The leader then must offer solutions, proposals, or policies which satisfy the requirements of the community. What matters more than who is in the seat is the seat itself. This means the Presidency itself matters more than who is in office; the positions themselves matter more than who holds them. Long after whoever holds these positions is gone, the positions will remain to be filled. Any leader in any position is replaceable by someone else if they fail to lead (whether a CEO, a President of a country, a lead developer, or any other kind of community leader).
In my understanding it is like chess, where all pieces on the board can occupy various positions. We know that in chess a pawn can be promoted to any other piece except the king. The point of this analogy is that individuals, in my opinion, are not likely to remain the source of power in society. The source of power in society is increasingly becoming the community, for better or for worse. As I see it, to lead is to serve, and to lead effectively is to serve effectively.
To accept a responsibility to serve (to lead) requires seeking feedback from all whom the community servant represents. This does not require voting specifically, but it does require, under any circumstances, a mechanism by which the community can give brutally honest feedback to the system itself. When I say the system itself I do not mean the feedback must go directly to those who serve the system, but that the system must have a means of collecting data, analyzing data, and then informing those who can improve the system about which changes would best satisfy the needs of the community.
In my opinion this is a very data-driven process. I do not think leaders can, for example, process big data using their brain power. This will require that they harness the power of machines (machine intelligence). There is also risk if all the processing is done by one company (such as Google), just as there is risk if all people rely on Facebook for news and opinions. We can see that Facebook has the ability, right or wrong, to shape elections by deforming the news feed or by allowing certain fake profiles to interact on the site. We see that Facebook can ban crypto ads at will, for example, to enforce certain policies without taking any kind of poll of the community or the users. We simply do not see any poll data from users indicating that they were tired of seeing crypto ads.
Summary of thoughts on leadership:
Augmenting the wisdom of the community as a means of better governance
In a world where the community must decide what to do, we have a situation where responsibility is increasingly diffuse. This means that while the signature may come from the face of the community (if it is a human face), it is still the community which has to be capable of wisdom. The problem is that most communities in the world do not become wiser as more people join. A bigger community doesn't produce better policies by merely voting together. The problem is that while most people have opinions, it does not mean those opinions are well informed, scientific, or wise. The lack of wisdom in a community results in horrible (harmful) policies, overreactions, systemic bias, and more.
The conclusion I have reached so far is that in order to have better governance in an era where the community is the government, the community must be wise. It's not enough to simply give the community unlimited power to shape the future without providing any capacity for the community to be wise, to do research, or to solve problems. Voting in the sense we see in elections does not involve informed voters. Information supplied to voters is almost always subpar, and voters are expected to trust "opinion leaders" and "opinion shapers" who tell them how to vote and why. Often disinformation shapes elections more than scientific evidence, facts, math, or reason.
As we build blockchain technology I think it is critical that we put great emphasis on data analytics. Data analytics will allow our leaders to make better decisions on our behalf. Blockchain technology will have to rely on data analytics to figure out the potential wants and needs of its participants, users, e-citizens, etc. At the same time private communication will be a necessity, even if just to conduct surveys. The reason is that people will not necessarily provide their real opinion in a survey which is completely transparent. The only solution I could find to the problem of preference falsification is privacy.
Most important of all, those who are put into positions of leadership are in trusted positions. This includes people who are moderators of forums, people who are lead developers, and people who run exchanges. People in these positions have the responsibility to serve the blockchain community to the best of their ability. The abuse of these positions for personal power or personal gain is a violation of this trust, and in these instances the community can and should select someone else for that position.
Bulbulia, J., & Sosis, R. (2011). Signalling theory and the evolution of religious cooperation. Religion, 41(3), 363-388.
Davis, W. L. (2004). Preference falsification in the economics profession. Econ Journal Watch, 1(2), 359.
Frank, R. H. (1996). The Political Economy of Preference Falsification: Timur Kuran's Private Truths, Public Lies. Journal of Economic Literature, 34(1), 115-123.
Grimm, P. (2010). Social desirability bias. Wiley international encyclopedia of marketing.
Sîrbu, A., Loreto, V., Servedio, V. D., & Tria, F. (2017). Opinion dynamics: models, extensions and external effects. In Participatory Sensing, Opinions and Collective Awareness (pp. 363-401). Springer, Cham.
De Lege Ferenda is a series, like the Tauchain Exegesis: one train of articles.
This is the introductory 'locomotive' article where I attempt to nail down the essential basics. This is nontrivial because it requires compression of a very long stream of thoughts and research, spanning literally decades. In that sense some of the overcompressed categorical statements are also cognitive ''letters of credit'' or ''promissory notes'' - comprising a debt of mine, to be repaid in future, more detailed explanations. I'm afraid this is the only way for my theses and conclusions to be expressed in a reader-friendly way. Of course, questions and comments, as a mutual understanding accelerator, are as always more than welcome.
Three ''angles of attack'', in Roman numerals and capitals in pure Latin (the lingua franca of law :) below:
Maybe I have already tired you with repeating my incantation of:
Law is Between, Code is Within
It is quite multi-dimensional in meaning and multi-disciplinary in consequences, but here it comes to denote the unavoidability of Law, rendered down to the most basic physics we currently know:
This is the way and the reason why Law is enforceable and Code is executable, and the major categorical difference between them which makes the notion of 'code is law' utter nonsense and, it seems, also destroys the very basis of the notion of 'smart contracts'. But this belongs to a bunch of other series of mine to come ...
Even if it were theoretically possible for all effectors to become one, there would still be internal uncertainty fragmentation and thus the unavoidability of enforcement.
Leaving this head-dizzying fundamental cognitive datum  and heading up across the higher abstraction epistemic layers  we reach the surface to take a swallow of fresh air to:
Nothing, read my lips, NO-THING in crypto or blockchain has ever been or could possibly be extralegal.
Cuz there ain't a thing in any blockchain aspect which is not ... physical. Hence nothing is beyond the scope of Law.
Blockchain is most probably the arrival of the expected Hanson engine , or Szabo booster , or ultimate Clusivity management tool . Which makes it extremely important domain for proper legal treatment and regulation - both as taxonomy within the existing institutes of Law  - lex lata, and as creation of novel norms to cater it - lex ferenda .
(as a side note: expectedly, the novel collective mnemonic technologies known under the umbrella term of 'crypto' provide a positive feedback loop which strengthens the Law, too - Tauchain seems to promise the advent of law, at last, as a consistent and decidable set of rules, for the first time ever.)
II. IURIS DICTIO
Law being inherently about physical, is also about spatiotemporal, i.e. about geography / geopolitics. It is always territorial even when it is cross-border applicable by the virtue of international law or internal rules to resolve inter-jurisdictional normative collisions.
The known world (I deliberately do not say: the planet, the Earth, or the globe, because of ... of course - the Outer Space Law!) is tessellated geographically into jurisdictions. Countries or nations. The pattern pixels of the universal human jurisdictional cellularity. But borders do not so much divide as connect.
The world is an internet of jurisdictions, no matter how primitive the networking protocols and architecture still are. And because, due to topological deficiencies, this cannot yet be a geodesic network - some jurisdictions are special. And among the special there are some which are even more special than the merely special ones. The specialness stems from what the enjoyment of a jurisdiction gives to its user.
After decades of observation and practice and comparative studies, I have reached the conclusion that THE jurisdiction is the Principality of Liechtenstein!
A mere enumeration of its features and the sheer lack of bugs would occupy a sizeable volume. Liechtenstein is not just an island-periphery money hideout of an old fat imperial metropole - it is a HUB. It is immersed right in the middle of the healthiest, wealthiest community of the EU.
What starts in Liechtenstein does not stay in Liechtenstein but swiftly propagates into the giant space of the EEA. It is a keyhole jurisdiction straight into this most giant jurisdiction of jurisdictions - so strong in soft power and so influential that even the FAMGA seem to reckon with Europe more than with their own home jurisdiction.
Liechtenstein simultaneously has the deepest and most stable roots in the best of history and geography, and is the most advanced and farthest ahead in crafting legislation of the highest standard of adequacy.
It does in 2018 what I (and just a few others) predicted years ago would happen. We must admit that other jurisdictions do have some timid try-outs of legal codification of the blockchain, but nothing compares with the comprehensive and in-depth approach of the Principality's legislators.
On 28th of August 2018 Liechtenstein published  a draft  of the new Blockchain Act:
<< On 28 August 2018, the Ministry for General Government Affairs and Finance of Liechtenstein published the consultation report on the new Blockchain Act (Act on Transaction Systems based on Trustworthy Technologies (VT) (Blockchain Act; VT Act; VTG)).
The government has decided to regulate not only the current Blockchain-applications (in particular cryptocurrencies and initial coin offerings (ICOs)), but also to establish a legal basis for the entire scope of application of the token economy according to a long-term approach, which should also meet the needs of future generations. >>
The basic provisions of the Liechtenstein Blockchain Act are so far available only in German - a language I am not at all in command of, and one quite indigestible for the Google Translate AI.
The consultation period ends on 16 November 2018, i.e. less than 2 months left from today.
My modest intention is by this De Lege Ferenda series of articles to provide my comments and opinions to 'whom it may concern' on the upcoming Liechtenstein Blockchain Act.
You already know I'm kinda fond of timelining and retrodictions.  :)
Every result has its cause, often hidden in the ocean of data that the past is, and quite hard to distinguish.
US has its Captain America. Liechtenstein is lucky to have its Mr Liechtenstein.
Andreas Erick Johannes Kohl Martinez of the House of Sequence . Remember that name.
Since the dawn of the blockchain era I have been under the strong conviction that Liechtenstein is the true Crypto Valley of the globe. So is Andreas, too. Purely by chance, long before we knew each other, we arrived at this astronomically improbable coincidence, or synchronicity, of this and a multitude of other thoughts.
Society of mind .
[*] - photo attributed to: By Michael Gredenberg - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18962
 - https://en.wikipedia.org/wiki/Lex_ferenda
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-intro
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-the-two-towers
 - https://en.wikipedia.org/wiki/Letter_of_credit
 - https://en.wikipedia.org/wiki/Promissory_note
 - https://en.wikipedia.org/wiki/Angle_of_attack
 - https://en.wikipedia.org/wiki/Lingua_franca
 - http://www.behest.io/
 - https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://steemit.com/tauchain/@karov/tauchain-and-the-cost-of-trust
 - https://en.wikipedia.org/wiki/Pauli_exclusion_principle
 - https://en.wikipedia.org/wiki/Fermion
 - https://en.wikipedia.org/wiki/Enforcement
 - https://en.wikipedia.org/wiki/Uncertainty_principle
 - https://en.wikipedia.org/wiki/Free_will
 - https://en.wikipedia.org/wiki/Mutual_information
 - https://www.coindesk.com/code-is-law-not-quite-yet/
 - https://en.wikipedia.org/wiki/Smart_contract
 - https://steemit.com/tauchain/@karov/tauchain-over-de-latil
 - https://www.etymonline.com/word/data
 - https://steemit.com/tauchain/@karov/tauchain-the-hanson-engine
 - https://steemit.com/tauchain/@karov/tauchain-as-szabo-booster
 - https://steemit.com/tauchain/@karov/clusivity-by-tauchain
 - https://en.wikipedia.org/wiki/Lex_lata
 - http://www.idni.org/
 - http://www.idni.org/blog/tau-and-the-crisis-of-truth.html
 - https://en.wikipedia.org/wiki/Space_law
 - https://www.etymonline.com/word/jurisdiction
 - https://en.wikipedia.org/wiki/Jurisdiction
 - https://steemit.com/blockchain/@karov/geodesic-by-tau
 - https://www.liechtenstein.li/en/
 - https://www.liechtenstein-business.li/en/economic-area/get-to-know/hidden-treasures/liechtenstein-combines-the-best-of-both-worlds/
 - http://europa.eu/
 - https://en.wikipedia.org/wiki/European_Economic_Area
 - https://en.wikipedia.org/wiki/Soft_power
 - https://medium.com/crypto-oracle/why-cryptos-a-growing-threat-to-famga-a-k-a-facebook-apple-microsoft-google-and-amazon-ea237570d3ea
 - https://www.dw.com/en/eu-gives-facebook-twitter-ultimatum-on-consumer-protection-laws/a-45573561
 - https://www.pwc.ch/en/insights/regulation/liechtenstein-publishes-draft-of-the-new-blockchain-act.html
 - https://www.llv.li/files/srk/vnb-blockchain-gesetz.pdf
 - https://steemit.com/bitcoin/@karov/bitcoin-retrodictions
 - https://en.wikipedia.org/wiki/Captain_America
 - https://podcast.bitcoin.com/e349-How-Libertarian-Leader-Mr-Liechtenstein-Got-Lucky
 - http://www.sequence.li/
 - https://www.businessinsider.com/what-its-like-in-zug-switzerlands-crypto-valley-2018-6
 - https://en.wikipedia.org/wiki/Synchronicity
 - https://en.wikipedia.org/wiki/Society_of_Mind
 - https://steemit.com/tauchain/@karov/scaling-is-layering
 - https://steemit.com/tauchain/@karov/tauchain-transcaling
How Tauchain and the Exocortex can give anyone a conscience and make anyone more law abiding. By Dana Edwards. Posted on Steemit. September 2, 2018.
First, "anyone" is not literal. By anyone I mean anyone with a reasonable level of intelligence who is willing to take the advice generated by the network. The network would include human beings and machines. The network would learn, and would be more properly defined as a complex adaptive system. Tauchain would enable the emergence of this network. This post is about the network which can emerge from Tauchain. It is also about how people who intend to be as moral as possible, whilst also complying with the law as much as possible, might leverage the network. This post assumes that the human brain has a finite memory and comprehension capacity. It assumes that every human being can benefit from enhancing these naturally limited capacities in the areas of legal comprehension and risk literacy (under the assumption that few, perhaps none, of us know every law on the books, yet we need to comply with the laws most likely to be aggressively enforced).
The Personal Moral Assistant
The PMA is a concept I've been thinking about for years now: the idea that we can augment our ability to be moral persons. A PMA is a personal moral assistant, and in an ideal world every person born would have one. It would be an interface similar to what we see with Cortana or Siri, where you can ask any question pertaining to whether a particular action is right or wrong. The PMA would solve the problem using the same priorities that you would, and so you would get a definite right or wrong result.
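The "same priorities that you would" idea can be made concrete with a toy weighted-scoring sketch. Every value name and weight below is my own invented placeholder, not anything from Tauchain; a real PMA would learn the user's priorities rather than hard-code them:

```python
# A minimal sketch of the PMA idea, not any actual Tauchain API.
# Value names and weights are invented for illustration.
PRIORITIES = {"honesty": 0.5, "harm_avoidance": 0.3, "fairness": 0.2}

def judge(action_scores, threshold=0.0):
    """Score an action from -1 (violates) to 1 (upholds) per value,
    weight by the user's own priorities, and return a verdict."""
    total = sum(PRIORITIES[value] * score
                for value, score in action_scores.items())
    return "right" if total > threshold else "wrong"

# e.g. telling a hard truth: honest, mildly harm-reducing, neutral on fairness
verdict = judge({"honesty": 1.0, "harm_avoidance": 0.5, "fairness": 0.0})
```

The point of the sketch is only that a definite verdict falls out once the user's priorities are stated explicitly; the hard part, which Tauchain would have to address, is eliciting and formalizing those priorities in the first place.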
A Personal Moral Assistant is just one primary use case. But these personal assistants over Tauchain could also include for instance a Personal Compliance Assistant. This is essentially another bot but instead of dealing with moral problems this bot would handle compliance. If you're trying to accomplish a goal this bot would make sure that you do so following all the known laws as your exocortex currently understands it. This would enable people to avoid legal pitfalls whilst chasing opportunities.
Going from poor to rich in this world requires taking risks. There is no way around risk taking if you want to get ahead. Risk literacy is essential, and very few people who are poor have it. The PMA might be able to tell a person whether a certain choice aligns with their current values. The PCA might tell a person whether a certain choice complies with the laws. What about opportunities? An opportunity web crawler agent could theoretically search across the entire Internet to find opportunities which match your chosen risk profile.
What are we doing today?
Today we often have to make choices by trial and error. If we aren't lucky enough to have mentors or people who can guide us, then the only way to learn is to make the common mistakes. When we deal with moral problems today we often rely on holy scripture interpreted by other human beings who are just as flawed as we are. We simply don't have a bot which could interpret the scripture in a completely logical way. In other words, we don't have a digital representation of the minds of our spiritual guides.
We also have a situation where some of us can afford to comply with every law and take the lowest risk approach while others simply don't have the resources available to pay the expensive legal fees. Some people get better legal advice than other people as well. What if we could get at least some level of legal assistance from our intelligent assistant? What if this intelligent assistant can even ask human beings who have legal knowledge to help?
And finally what if we could figure out which risks are worth taking and which are not worth taking? It's one thing to find opportunities but another to be able to assess them. People get scammed because at the end of the day our emotions influence our ability to do proper assessment of opportunities. I'm human and it even happens to me from time to time. What if we could avoid this by using the capabilities of Tauchain to analyze massive amounts of information for us which our brains could never handle?
Opportunity Crawler Bot
I ask a simple hypothetical question: what if you could have set a bot to search the Internet for opportunities that resemble Bitcoin in 2008? What if this bot would be activated and search for an indefinite period of time on an undetermined yet expanding number of networks? If you define "Bitcoin in 2008" in a way which the bot can make sense of then it could search for anything which meets that criteria. We have this technology now but it's extremely primitive. On Google you can set up alerts for certain things but what if you could go beyond mere alerts and look for code on Github, and certain individuals involved with it, and certain growth patterns?
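As a primitive sketch of such a filter, here is what the matching step might look like. Every field name and threshold below is a hypothetical placeholder; a real agent would crawl GitHub, forums, and so on to fill these values in:

```python
# A toy version of the opportunity crawler's matching step.
# Criteria fields and thresholds are invented for illustration.
def matches_profile(project, profile):
    return (project["has_whitepaper"] == profile["needs_whitepaper"]
            and project["github_commits_90d"] >= profile["min_commits"]
            and project["community_growth"] >= profile["min_growth"])

profile = {"needs_whitepaper": True, "min_commits": 50, "min_growth": 0.1}

candidates = [
    {"name": "project-a", "has_whitepaper": True,
     "github_commits_90d": 120, "community_growth": 0.4},
    {"name": "project-b", "has_whitepaper": False,
     "github_commits_90d": 300, "community_growth": 0.9},
]

hits = [p["name"] for p in candidates if matches_profile(p, profile)]
```

The interesting work is obviously in gathering the signals, not in the filter; but even this trivial version goes beyond a Google alert, because the criteria combine multiple structured signals rather than matching a keyword.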
A way to think about these bots / intelligent assistants
One way to think about these intelligent assistants is as part of your extended mind. These bots essentially help you to think better and communicate better. It's still you, and what they do on your behalf is essentially as if you did it. So the total collection of all of these agents which are under your control represents your complete exocortex. It will take great responsibility and wisdom to use these abilities in a way which is perceived by the world as ethical, moral, legal, etc. It is for these reasons that I want to open a discussion: if such technology and such bots did exist, how would each of you use them, and how would you think about them?
What is Tauchain & Why It Could Be One of The Greatest Inventions of All Time (Part 1: Introduction). By Kevin Wong. Posted on Steemit. August 28, 2018.
In anticipation of Tau's demo some time around the end of this year, I'll be publishing a series of articles on Steem leading up to its release and beyond. If you would like to get to know what some of us think is going to be one of the greatest inventions of all time, I'd recommend you check out http://www.idni.org. It seems like a foundation that we've missed out on building together since the birth of the Internet.
A close resemblance of this project is the Semantic Web although some of us would place Tau as being far more ambitious in scope, oddly in a way that is likely more feasible with its ingenious use of a logic blockchain to power a decentralized social choice platform. I think it's impressive how singular the concept actually is, despite the unavoidable lengthy explanations that come paired with the many first-time features that Tau will provide.
Without further ado, let's explore this world-changing technology that is currently baking in the oven.
What is Tau?
Let's begin by first checking out the opening of IDNI's website at http://idni.org:-
Tau is a decentralized blockchain network intended to solve the bottlenecks inherent in large scale human communication and accelerate productivity in human collaboration using logic based Artificial Intelligence.
Sounds fairly straightforward at first glance, and to me, it really stands out in the cryptosphere. We now have billions of people using the Internet every day, yet we still do not have any effective means of discussing and collaborating without being all over the place. Sure, we may have been pouring a lot of our time and effort into various platforms trying to connect with others, but have things been really any different compared to a time before the Internet?
The speed of information propagation has increased by orders of magnitude, and we can reach anyone on the planet now, but it's still really up to us to be present and be able to process information in our heads before turning them into relevant knowledge for our networks.
Expanding our social bandwidth.
Turns out, we have been experiencing a lot of trouble coming to terms with the chatter of billions of people in cyberspace. The bottlenecks inherent in our human bandwidth remain unsolved even with near-instantaneous communications. From governments to corporations and blockchain communities, we are all still facing the age-old problem of being unable to scale governance beyond the size of a classroom. It's just difficult to get our points across to many different people, let alone making sense of complex long-term discussions and making network-wide decisions collaboratively.
The introduction to The New Tau written by Ohad Asor explains our situation quite accurately:-
Some of the main problems with collaborative decision making have to do with scales and limits that affect flow and processing of information. Those limits are so believed to be inherent in reality such that they're mostly not considered to possibly be overcomed. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?
So how is Tau actually going to solve our communications bottleneck? It will be through a highly bespoke and non-trivial implementation of a logic-based Artificial Intelligence (AI). It's worth noting that AI in this case is more of a marketing buzzword; it is not of the same variety as the commercial implementations of deep machine learning.
The distinction that must be made is that Tau is not the kind of AI that attempts to guess at the world around it, including our opinions and the things we say or do. Instead, we take the step of communicating through Tau, and what we choose to communicate will be as definite as computer programs. It can be thought of as a persistent logic companion that helps us scale our reasoning, logic, and bandwidth.
We can take the time to share what we want to share on the Tau network and most of the logic-based connections and operations will happen in the background over time, even when we're not paying attention in-person. Again, the use of the word AI is a misnomer here because it usually paints the picture of AI agents attempting to mimic human autonomy. That's not what Tau is about. In this case, thinking about Tau as just a logic machine should provide better clarity on what it actually is.
The power of logic.
To expand, here's the second paragraph found in the opening of IDNI's website that explains Tau's paradigm in logic-based communications, http://idni.org:-
Currently, large scale discussions and collaborative efforts carried out directly between people are highly inefficient. To address this problem, we developed a paradigm which we call Human-Machine-Human communication: the core principle is that the users can not only interact with each other but also make their statements clear to their Tau client. Our paradigm enables Tau to deduce areas of consensus among its users in real time, allowing the network to boost communication by acting as an intermediary between humans. It does so by collecting the opinions and preferences its users wish to share and logically constructing opinions into a semantic knowledge base.
Indeed, Tau will offer a semantic social choice platform where we can discuss and store knowledge in a logical universe that helps us organize information, thereby empowering us in highly relevant ways. If you're worried about privacy, know that Tau is first-and-foremost designed as a local client with local processing and storage. The platform itself will be deployed as a decentralized peer-to-peer network, a place where we can connect and share our knowledge-base with anyone we desire.
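As a toy illustration of "deducing areas of consensus" from shared statements (this is my own sketch, not Tau's actual logic engine, and the statement format is invented):

```python
# Toy sketch of consensus deduction. Each user shares a set of
# signed statements (claim, truth-value); consensus is whatever
# everyone affirms, and contradictions are surfaced rather than
# averaged away.
opinions = {
    "alice": {("bigger_blocks", True), ("fee_burn", True)},
    "bob":   {("bigger_blocks", True), ("fee_burn", False)},
    "carol": {("bigger_blocks", True)},
}

# statements every participant agrees on
consensus = set.intersection(*opinions.values())

# claims where some users affirm and others deny
all_claims = set.union(*opinions.values())
contested = {stmt for (stmt, val) in all_claims
             if (stmt, not val) in all_claims}
```

Real statements on Tau would be logical formulas rather than atomic flags, so "agreement" would be a matter of logical entailment, not literal set intersection; but the shape of the computation, finding what everyone's shared knowledge jointly supports, is the same.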
The only price to pay in all of this is that we must speak in Tau-comprehensible languages, which can always be added and modified over time. A sophisticated language defined over Tau may closely resemble a natural language, but it is really best to think of Tau as a machine that only speaks in logic. Fortunately, logical formalism is something that we can learn to deal with.
So it will be up to us to communicate with our local Tau client in a way that it'll understand our worldviews. When the machine understands what we share completely in some logical, mathematically-verifiable sense, it can then connect our dots with the rest of the Tau network, effectively boosting communications beyond the limits of human bandwidth, effectively scaling our points of discussion, consensus, and collaboration up to an infinite number of participants.
Code and consciousness.
Finally, we look at the last paragraph of Tau's introduction at http://idni.org
Able to deduce consensus and understand discussions, Tau can automatically generate and execute code on consensus basis, through a process known as code synthesis. This will greatly accelerate knowledge production and expedite most large scale collaborative efforts we can imagine in today's world.
Since Tau is a logic blockchain that powers a semantic social choice platform, we can leverage it to have both small and large-scale discussions about program specifications, detect points of consensus, and even generate software in the process. Being able to go from discussions to the realization of decentralized applications would mean inclusive code development for the masses. It's also a unique addition to decentralization that no other blockchain projects have even thought about.
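To give a flavour of going "from discussions to the realization of applications", here is a toy exhaustive-search synthesizer. The candidate list and spec format are entirely invented for illustration; Tau's actual code-synthesis mechanism has not been published and is certainly not this:

```python
# Toy illustration only -- not Tau's code-synthesis mechanism.
# Given input/output examples a group has agreed on, search a tiny
# space of candidate programs for one satisfying every example.
CANDIDATES = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
]

def synthesize(examples):
    for name, fn in CANDIDATES:
        if all(fn(i) == o for i, o in examples):
            return name, fn
    return None

# the 'consensus spec': everyone agrees the program should double its input
name, fn = synthesize([(2, 4), (3, 6)])
```

Even this trivial version shows the key inversion: the group agrees on *what* the program must do, and the machine is responsible for finding a program that provably does it.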
Now that we may have come to a better understanding of Tau's emphasis on the use of logic in every part of its being, let's revisit the process description found in The New Tau to get closer to knowing what it really is about:-
We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them, language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism by a careful consideration of these aspects.
In short, Tau is a decentralized peer-to-peer network that takes the shape of a social choice platform, and it can become anything that we want it to be, for as long as it's expressible within the self-defining and decidable logics of FO[PFP] with PSPACE-complexity. This precise specification is required to satisfy the very definition of Tau as seen in the excerpt above. Tau is also intended to be a compiler-compiler.
This is taking application-generality in a completely different direction compared to blockchains that are built specifically with Turing-completeness in mind, like Ethereum. Relevant literature to check out: Finite Model Theory.
Understanding each other.
While it's all highly technical and difficult to grasp in one sitting, perhaps a better way to truly begin to understand Tau is to spend some time studying its main features. Or just wait for the product release. In any case, I will try to explore these topics in the future if my brain can still handle it:-
The more I think about Tau, the more I think that it is (poetically) a logical conclusion to the way the Internet works as a protocol. It even lives and breathes logic. Not just any kind of logic, but specifically, logics that can define their own semantics and are decidable. Tau is intelligently designed to be a truly dynamic and ever-evolving blockchain.
When the Tau community intends to make changes to the network code, rules or protocols, they will simply need to express these opinions and perspectives in a compatible language over the network. The self defining logic of the Tau blockchain network will enable it to detect the consensus among these opinions and automatically amend its own code to reflect this consensus from block to block. Unlike the common method of voting, Tau’s approach will take into account the perspectives of the entire community, where people will be free to vote and propose what to vote for in real time. This unique ability of Tau is the only decentralized solution to create a truly dynamic protocol.
Now you might think: Tau seems like a powerful tool but will it be too difficult to use for most people? There might be some learning curve involved for sure, and it'd be similar to learning a new language in the beginning. Those of us who learn to use it well enough to scale our discussions and collaborative works will likely gain a significant edge over those who are not using the platform. I'd imagine plenty of projects and communities around the world being able to overcome some of their obstacles in development through Tau. Hence, it may be fair to expect that market forces will gravitate towards the platform just like how we're all using the Internet these days.
Until the next post.
I've been thinking about Tau almost every day for many months now, and I will admit that its deeper technicalities are still way out of my league, although I've made sure to word them as broadly and faithfully as I can. If you like what I do, please consider sharing this post and voting on my witness account on Steem. For more info, check out my recent witness announcement post.
As always, thanks for reading!
Images from Pexels
Music tracklist by Magical Mystery Mix
Follow me @kevinwong / @etherpunk
Not to be taken as financial advice.
Always do your own research.
“We are moving into an era where cities will matter more than states and supply chains will be a more important source of power than militaries — whose main purpose will be to protect supply chains rather than borders. Competitive connectivity is the arms race of the 21st century.”
-- Parag Khanna
A network is made of lines and switches, right?
Much has been said about network scaling effects, including attempts by myself [4-12] ... which compels me to introduce the not so frivolous notion of network forces.
These forces are expressed in several laws. I initially thought to write 'forces' and 'laws' in scare quotes here, but I realize they are quite objective and physical emergenta, indeed.
In my ''Geodesic by Tauchain'' article from a couple of months ago I emphasized the Huber-Hettinga Law: how the cost of switching literally defines the 'orographic' topology of a network.
The cheaper the routing - the flatter the network.
Expensive switches = hierarchy, verticality, power, control, obedience, centralization, 'world is fiat', sollen, hence borders instead of bridges, limitations instead of stimuli, exclusivity ...
Cheap switching = geodesic society , 'world is flat', horizontality, p2p, decentralization, inclusivity ...
The more vertically centralized a network is, the more it must deplete information: omit, ignore calls from the deeps, or even actively suppress or silence nodes. Cope with the stream by strangling it. Simply due to lesser capacity, fewer degrees of freedom. Geodesic networks possess higher entropy and are therefore richer. They bolster both higher Scrooge and Spawn factors. In other words:
The flatter the network - the richer it is.
Maybe that is the explanation of why the wealthiest-healthiest societies tend to be those with the biggest economic-political freedom.
Naturally the Huber-Hettinga Law led me to the elementary-Watson conclusion of the power and value of Tau as the ultimate über-switch. So far so good.
Now let's stare at the Lines. Here comes Nick Szabo.
Nick Szabo - a lawyer AND computer scientist - is a legendary figure from the great 'Archaic era of crypto' - the 1990s, when he, together with the other cypherpunk titans like Tim May, Wei Dai, Bob Hettinga etc., poured in staggering detail the very bedrock foundations of what we now enjoy as Crypto in the post-Satoshi era.
It is THEIR vision come true that we all now live in.
Bitcoin was a detonation of precisely that critical mass of fused thoughts of these very smart people, piled up and compressed by the connective network forces of the early internet.
No, I do not mean at all Szabo's most famous thing - the 1994 coining of the term 'smart contracts'. In fact I deeply and strongly reject the very notion of 'smart contracts' as utter nonsense, even as an oxymoron - which is a yuge separate problem, which I suspect I have nailed, and which I'll address in a series of dedicated articles starting in the upcoming weeks...
I mean something much more valuable, what I call the Szabo Law.
When we hear the phrase 'network effects', the first thing that comes to mind is the famous Metcalfe's Law.
''Metcalfe's Law is related to the fact that the number of unique connections in a network of a number of nodes (n) can be expressed mathematically as the triangular number n(n − 1)/2, which is proportional to n^2 asymptotically (that is, an element of O(n^2)).''
In the above order of appearance, these network-force laws quantitatively capture basic properties of a network:
- Huber-Hettinga Law - the cost of switches and routing.
- Metcalfe Law - the number of nodes, i.e. switches defining the number of unique connections or lines.
- Szabo Law - the cost of the lines and connecting.
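Metcalfe's triangular-number count is easy to verify with a few lines of plain arithmetic (nothing Tau-specific here):

```python
# Unique connections among n nodes: the triangular number n(n-1)/2.
def unique_connections(n):
    return n * (n - 1) // 2

# Asymptotically O(n^2): doubling the nodes roughly quadruples the lines.
small = unique_connections(500)    # 124,750 lines
large = unique_connections(1000)   # 499,500 lines
ratio = large / small              # ~4
```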
All these Laws are scaling laws. Before we come back to and continue on Szabo's Law, we have to briefly mention another one:
''So what is "scaling"? In its most elemental form, it simply refers to how systems respond when their sizes change. What happens to cities or companies if their sizes are doubled? What happens to buildings, airplanes, economies, or animals if they are halved? Do cities that are twice as large have approximately twice as many roads and produce double the number of patents? Should the profits of a company twice the size of another company double? Does an animal that is half the mass of another animal require half as much food?''

''... With Dirk Helbing (a physicist, now at ETH Zurich) and his student Christian Kuhnert, and later with Luis Bettencourt (a Los Alamos physicist now an SFI Professor), Jose Lobo (an economist, now at ASU), and Debbie Strumsky (UNC-Charlotte), we discovered that cities, like organisms, do indeed exhibit "universal" power law scaling, but with some crucial differences from biological systems. Infrastructural measures, such as numbers of gas stations and lengths of roads and electrical cables, all scale sublinearly with city population size, manifesting economies of scale with a common exponent around 0.85 (rather than the 0.75 observed in biology). More significantly, however, was the emergence of a new phenomenon not observed in biology, namely, superlinear scaling: socioeconomic quantities involving human interaction, such as wages, patents, AIDS cases, and violent crime all scale with a common exponent around 1.15. Thus, on a per capita basis, human interaction metrics (which encompass innovation and wealth creation) systematically increase with city size while, to the same degree, infrastructural metrics manifest increasing savings. Put slightly differently: with every doubling of city size, whether from 20,000 to 40,000 people or 2M to 4M people, socioeconomic quantities - the good, the bad, and the ugly - increase by approximately 15% per person with a concomitant 15% savings on all city infrastructure-related costs.''
Which probably comes to denote the sheer size of the network in STEM (space, time, energy, mass). I'm not sure, but I have some strong suspicions about the unity of matter, structure and action, which I will expose and share some other time.
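Plugging the quoted exponents into the power-law form Y = Y0 * N^beta shows what a doubling of city size does (a plain arithmetic check of the excerpt above, not new data):

```python
# Power-law scaling Y = Y0 * N**beta: the factor by which Y grows
# when the population N is multiplied by `growth`.
def scale_factor(beta, growth=2.0):
    return growth ** beta

socio = scale_factor(1.15)  # ~2.22: superlinear, more than doubles
infra = scale_factor(0.85)  # ~1.80: sublinear, economies of scale
```

So doubling a city more than doubles the socioeconomic quantities while less than doubling the infrastructure needed, which is exactly the "good, bad, and ugly" asymmetry the quote describes.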
What I call Szabo's Law is revealed in his ''Transportation, divergence, and the industrial revolution'' (Thu, Oct 16, 2014): similarly to Metcalfe's (''double the population, quadruple the economy''), there is a power-law correlation between the cost of connections or links or lines ... and the value of the network, too:
''Metcalfe's Law states that a value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed is an inverse square of the cost per mile of transportation. Combine this with Metcalfe's Law and we reach a dramatic but solid mathematical conclusion: the potential value of a land transportation network is the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith's observations: the division of labor (and thus value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).''
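The inverse-fourth-power claim in the excerpt is straightforward to check numerically (a sketch of the stated relationship, not anything from Szabo's own code):

```python
# Potential network value ~ k / cost**4: the node count economically
# reachable goes as the inverse square of transport cost, and
# Metcalfe's Law squares that again.
def potential_value(cost_per_mile, k=1.0):
    return k / cost_per_mile ** 4

# halving the transport cost multiplies potential value by 16
ratio = potential_value(0.5) / potential_value(1.0)
```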
My encounter with this article of Nick Szabo's was a goosebumps experience for me, because it coincided with a series of lay rants of mine on the old Zennet IRC chat room of Tau that ''computation = communication = transportation''. Somewhere in 2016, as far as I remember. :)
Maybe it was the last drop that shaped my conviction that through my dedicated involvement in both Tau and ET3, I'm actually working on ... one and the same project.
For communication, computation and transportation are all modes of state change. Because information is a verb, not a noun. And software is states of hardware.
''Decentralizing the internet is possible only with decentralized physical infrastructure.'' 
Just like the brain is a network computer of neuron nanocomputers, the emergent composite we colloquially call humanity or mankind or economy or society or world ... is a network computer made of all of us billions of humans.
Brains do thought, economies do wealth.
Integrated circuitry  upon the face of planet Earth as a motherboard . Literally. The Humanity's planet-hardware. Parag Khanna's Connectography explained.
The Earth is definitely not our ultimate chip carrier. Probably there ain't a limit at all to our culture-upon-nature hardware upgrades. The universe is our computronium and we've been here for too short a time and haven't seen far enough. Networking is connectomics. And thus it always also is metabolomics.
Remember my last month's  ''Tauchain the Hanson Engine''?
The series of exponentially shortened growth doubling times looks like it is driven by transportation technological singularities: domestication of the horse, oceanic navigation, the combustion engine ...
In the light of all the net forces summoned above: The planet Earth viewed as a giant computer chip ...
- is itself subject to the relentless network entropic force of Moore's law
The network forces accelerate what that wealth computer does.
Two quick examples:
A.: The $1500 sandwich, as proof that trade + production is at least thousands of times stronger at sandwich-making than production alone.
B.: The example of Eric Beinhocker in his 2006 ''The Origin of Wealth'' about two contemporary tribes: the Amazonian Yanomami - a stone age population to this day - and the Eastcoastian Manhattanites. The former are only about 100 times poorer, but the latter enjoy a billions of times bigger choice of things to have.
Tauchain 'threatens' to affect the parameters of ALL the network forces formulae mentioned herewith in a mind-bogglingly big scale.
Simultaneously, orders of magnitude:
- lower switch cost
- higher node count
- lower connection cost
A wealth hypercane recipe. Perfect value storm. Future ain't what it used to be.
In a recent article of mine I hinted at my strong suspicion that scaling is itself scalable.
''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''
No matter what a terrific grower a system is - as per its own internal algorithmic growth-drive rules - it seems inevitable that its growth gets it into entropic mutualization upon impact with a kind of ... downscaler.
Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically: the faster a thing grows - the sooner comes its encounter with an external and bigger downscaling factor.
This realization, refracted through the prism of our 'reptilian brain' layer and amplified to gargantuan proportions by our inherent social hierarchicity, is the source of the 'Malthusian anxiety' which has led to countless violent deaths over all of human history. Fear is anger, so the emotion that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is the major source of what makes us bad to each other.
There is a plethora of examples of very well mathematically and scientifically grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics, and simultaneously all totally wrong for missing the obvious - the factors of externalities, the properties and opportunities of the medium which is consumed and/or created by this growth, and which transcend the axiomatics. For growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction.
Let's throw in some such problem-solution couples for clarity:
a. the world of the 1890s, big cities sunk knee-deep into beast-of-burden manure, and the super-apocalyptic projections of that VS Tony Seba's 1 pic > 1000 words of the NYC carts-vs-cars situation in 1900-1913 ...
b. the grim visions of the whole of Mankind becoming telephone switchboard blue-collar workers, whose number should have exceeded the total world population by now to achieve the same level of telephonization, or
c. the all-librarians world, where it would take more librarians than the whole of mankind to serve the social memory in the paper & printed ink storage facilities mode ...
d. the Club of Rome as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of shit, or the 'proofs' of the impossibility to connect or educate or feed all - instigating the mass-destruction fear that ''we are running out of everything and will soon all die'', used as justification for mass atrocities, VS Julian Simon's ''The Ultimate Resource'' (1981, 1996). Cf. my accelerando article and see what precisely is the Factory for the succession of better and better Hanson drives over the last few million years - from the Blade and the Fire to the Tau: it is the same thing whose identification turned Julian Simon from a fanatical Malthusian into a rationally convinced Cornucopian ... the human mind.
e. the predator-prey model, whose brutal flaw I guess this pseudo-haiku depicts best:
''hawk eat chic -> less chic, human eat chic -> more chic''
for failing to posit and account for the positive feedback loop of predator-over-prey dynamics ...
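The haiku's contrast can be sketched in a toy difference model (all rates entirely invented; this is an illustration of the argument, not a rigorous ecological model): a wild predator only removes prey, while a farming 'predator' deliberately breeds more prey than it eats.

```python
# Toy contrast: predation without vs. with a positive feedback loop.
# All rates are invented for illustration.
def hawk_year(chicks, predation=0.05):
    # hawks only remove chicks
    return chicks * (1 - predation)

def human_year(chicks, eaten=0.05, bred=0.15):
    # humans eat chicks but deliberately breed more than they eat
    return chicks * (1 - eaten + bred)

wild, farmed = 1000.0, 1000.0
for _ in range(10):
    wild = hawk_year(wild)
    farmed = human_year(farmed)
# after a decade: wild declines, farmed grows
```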
f. The comment of Daryl Oster, founder of the other passion of mine - ET3 - on the so-called 'saturation' of the scalables (exemplified in the field of transportation, which, btw, being communication ... our social structures map onto the mobility systems we have at our disposal ...):
''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use).

Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by, government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years.

The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all of whom fund research centered on 'improving' the status quo. Virtually all universities (for the last 1k+ years) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry.

Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''
g. Darwinian algo sqrt(n) VS higher algos - like Metcalfe's n^2. This is not precise; it is more metaphorical, meant to indicate the direction or scale of scaling rather than rigorous precision, but ... the former, figuratively speaking, takes 100 times more input to put up 10 times more output, and the latter takes 10 times more input to return 100 times more output...
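The contrast in item g can be made concrete with a few lines of arithmetic (an illustration of the metaphor only, not a rigorous model):

```python
# Illustrative only: contrast sublinear sqrt(n) scaling with a
# superlinear Metcalfe-style n^2 scaling, as in the metaphor above.
import math

def sqrt_value(n):      # 'Darwinian': output grows as the square root of input
    return math.sqrt(n)

def metcalfe_value(n):  # Metcalfe: output grows as the square of input
    return n ** 2

# Under sqrt scaling, 100x more input yields only 10x more output:
print(sqrt_value(100) / sqrt_value(1))      # 10.0
# Under n^2 scaling, 10x more input yields 100x more output:
print(metcalfe_value(10) / metcalfe_value(1))  # 100.0
```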
h. Barter vs. money. See the bottom of page 5, above the footnotes, about the latter:
simplifies pricing calculations and negotiations from O(n^2) complexity to O(n) complexity
A demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. (Btw, an extremely strong document, where entire books have been read and internalized behind each tight and contentful sentence!)
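The complexity claim above is just combinatorics: in pure barter, every pair of goods needs its own exchange rate, while money reduces this to one price per good. A minimal sketch:

```python
# Sketch of the O(n^2) -> O(n) claim above: prices to track in a barter
# economy (an exchange rate per pair of goods) versus a money economy
# (one price per good, quoted in the money).
def barter_prices(n):
    # n goods -> n*(n-1)/2 pairwise exchange rates
    return n * (n - 1) // 2

def money_prices(n):
    # n goods -> n prices quoted in one common money
    return n

# With 100 goods, barter needs 4950 rates; money needs just 100 prices.
print(barter_prices(100), money_prices(100))
```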
i. The heat death of the universe VS the realization that the 2nd law - a conservation law for entropy/information - does not allow that; the asymptoticity of the fundamental limits of nature; the fact that max entropy grows faster than (from, due to) the actual entropy growth; that entropy is not disorder; and that at the end of the day it is an unbounded, immortal universe ... because it's all a combinatorial explosion.
j. The Anthropic principle  and the realization that it is extremely hard if not impossible to posit a lifeless universe  ...
k. The Algoverse - my 'psychedelic' vision  of the asymptotic inexorable hierarchy of the Dirac sea  of lower algos which take everything for almost nothing - up towards giving almost everything for almost nothing - Bucky Fuller's runaway Ephemeralization . Algorithms are things. Objects. Structure. Homoousic or consubstantial to their input and output. Things taking things and making things outta the former. Including other algos of course! Stronger ones.
l. The Masa Effect. The master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips, using the same thing that scales supply in such a too-much way.
m. The Pierre de Latil (1950s) and Stanislaw Lem (1960s) hierarchy (copied 1:1 by Tegmark) of degrees of self-creating freedom of Effectors ...
n. Limits of growth: present in any particular moment and in any finitary setting of rules, but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light vs ... photosynthesis.
o. Ray Kurzweil - static vs exponential thinking .
p. Craig Venter's Human Genome Project, which when it commenced in 1990 was ridiculed as something that would be unbearably expensive and would take centuries to finish. And it did: it cost a fortune unbearable by 1990 standards, and it did take centuries - of subjective time, as per the initial projection conditions - being completed in the year 2000.
q. Jeff Bezos vision  of Solar System wide Mankind:
''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''
r. The 'wastefulness' of data centers and crypto mining colocation facilities ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum energy of computation.)
s. The log-scale intuitive measure and smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:
"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950 is about 200 Billion man-years ago."
Oops! Go back to example Q. With a population of 1 trillion humans, the 'singularities' would occur once a year?!
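Taking the quoted figures at face value, the back-of-envelope check behind that remark is straightforward:

```python
# Back-of-envelope check of the remark above, using the quote's figures
# at face value: 'singularities' every 500 to 1000 billion man-years.
population = 1_000_000_000_000            # 1 trillion humans (Bezos's figure)
man_years_per_year = population * 1       # man-years accumulated per calendar year

interval_low  = 500e9                     # 500 billion man-years
interval_high = 1000e9                    # 1000 billion man-years

# Calendar years between 'singularities' at that population:
years_low  = interval_low  / man_years_per_year   # 0.5 years
years_high = interval_high / man_years_per_year   # 1.0 year
print(years_low, years_high)
```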
t. the Tau  !!
I can continue with these examples ... forever [wink] - excuse me if I've bored you - but I think at least that minimum needed to be shown, and it is enough to grok the big picture.
Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.
Size matters. Scaling is the way. But the more general question is how a system handles change! This is fundamental enough to sit at the very core of the definitions of life and intelligence.
Tauchain is all about change handling!
Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do:
Dear friends, please, scroll back to Example D. Yes, the human mind transcaler thing. The Ultimate resource thing.
We are the ultimate resource.
We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves ).
We as the-I are strong thinkers and creators - immensely more road lies ahead than has been traveled, yes, but still we, as the-I, are the momentary apex in the Effectoring business in the Known Universe ... AND simultaneously we as the-We are mediocre to outright dumb.
We are very far from proper scaling together. The Ultimate resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind laser. Dispersed fissibles, but not a concentration of critical masses.
We as the-We - paradoxically - persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not handle scaling well at all.
The individual human mind is the unscaled transcaler.
Tau is the upscaler of that transcaler.
I'll introduce herewith another 'poetic' neologism, which occurred to me to depict the scaling props of a system after the Scrooge factor of ''Tauchain - Tutor ex Machina'' , and it is the:
Spawn  factor
- the capacity and ability of a system to grow through, despite, against, across, from and via changes. Just as 'cuboid' covers all rectangular things - squares, cubes, tesseracts ... - regardless of their dimensionality, the Spawn Factor is meant to be a generalization of all orders of scaling. Zillions of light years from rigor, of course, as I'm at least that far from my Leibnizization. For a lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.
Tau transcends the infinite regress of orders of: scaling of scaling of scaling ... by being self-referential. Or recursive. 
What is the Spawn factor of Tau?
If you'll let me, I'll illustrate this with a poetic paraphrase of the famous piece of Frank Herbert's:
I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.
Hans Moravec  is the patriarch of robotics . The real one, not the Sci-Fi father. Asimov was just the prophet in this scheme of things.
Moravec to Kurzweil is what's Bitcoin to Ethereum and Satoshi to Vitalik.
Sorry for the rough joke. No offence, Ray! Back in the early 2000s I bought your books too.
In my humble opinion - aside from the ''reality intratextualization'' concept - the other wisdom jewel of Moravec's - fruit of a life devoted to robotics - is Moravec's Paradox.
Explained in his own words:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
or in Steven Pinker's words:
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived...
As I noted in a previous related post of mine, a system's value dynamics is all about how it scales. Preferable, of course, are systems which make more good go around rather than less. And, respectively, come around.
Humanity is a network, and its scaling is hampered by our innate attentional resource limitations.
Human social interaction is a skill, and we naturally have only so much of it.
For now, in the good old hierarchic way, we can't deny that we scale satisfactorily well (compared, let's say, to our DNA-blockchain-fork-out first cousins, the chimps) for collaborating efficiently on the successful execution of trivial tasks like empire building or colonization of the Galaxy.
But not all problems we encounter are simple. In fact, most problems are more complex than we are capable of grokking and mastering in the hierarchic collaboration mode, which quickly slams into Shannon's 'brick wall'.
Ohad Asor's Tau is intended to be a humanity upscaler. This project is the first and only one I've discovered so far where this so-obvious (once you know it) problem is even identified, stated and addressed.
This means uplifting the individual humans too, because we are literally AIs serially manufactured by our society (cf. feral children ).
It feels easy for us to attend, to remember, to forget, to think, to talk, to work together - so it is extremely Moravec-hard!
Tau is a unique approach to the Moravec-hardness of these problems, in the realization that we do not need to waste time and resources mimicking nature, copying ourselves and creating high-tech homunculi.
The 'problem' is the solution. Don't 'solve' it - just god damn use it!
It is the people who ask questions, upload statements, express tastes and do all that qualia  crap humans usually do.
The machine distills the semantic essence of all the shared thought flow, treats it as a wishes spec, and automatically converts it into executable code, including amendments to its own code.
As Moravec found out a few decades ago:
The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, suggesting that matching overall human behavior will take about 100 million MIPS of computer power.
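The arithmetic implied by that estimate is worth spelling out (the retina figure below is simply what the two quoted numbers imply, not an independent measurement):

```python
# The arithmetic implied by Moravec's quoted estimate: if the brain is
# ~100,000 times the retina, and matching overall human behavior takes
# ~100 million MIPS, the implied retina estimate is ~1,000 MIPS.
brain_to_retina_ratio = 100_000
brain_mips = 100_000_000          # 100 million MIPS, per the quote

retina_mips = brain_mips / brain_to_retina_ratio
print(retina_mips)                # 1000.0
```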
When these processing brain things are really put together in numbers the result is unprecedented power. An unstoppable force. A glimpse into it by Ohad :
It turns out that under certain assumptions we can reach truly efficiently scaling discussions and information flow, where 10,000 people are actually 100 times more effective than 100 people, in terms of collaborative decision making and collaborative theory formation. But for this we'll need the aid of machines, and we'll also need to help them to help us.
Without the application of dehumanizing individual upgrades, without it being necessary to understand and reengineer the billions of years of evolutionary capital: just harness it and use it. (Scaling itself must be scalable too, ah?)
In my personal, up-to-date, limited understanding, it seems that it is indeed HUMANITY that is to be known as Tau's 'Zennet Supercomputer', and the machines are the ... collab amplifier media, the 'internet' of it. (Ohad, correct me if I'm wrong, please.)
Like laser configurations of minds.
With performance stronger than thought.
NOTE: I have the honor to be in the Tau Team, but all reflections in this post are personally my opinion.
Tau Chain vs. Tezos - which platform will provide a better solution? By Isar Flis. Posted on Steemit. February 10, 2018.
In this article I would like to discuss the self-amending feature of Tau Chain (Tau), which I believe provides a better solution than the one proposed by Tezos.
A short summary about Tau
Tau will be a blockchain based computer network, aimed at supporting collaboration between people. It will be designed like any other social network you know (Facebook, Twitter, etc.); but on Tau, users can interact with each other using machine-comprehensible languages. Specifically, advanced users will be able to define new knowledge-representation languages simply by translating them to Tau's metalanguage (TML). As the languages use logic, they will be understandable by both machines and humans.
Since Tau can “understand” the entire conversation, it can also translate the discussions into various languages and discover where people agree or disagree; then, it may present the content of the conversation in different forms (languages or formats) for each user, based on specific requests.
The ability of Tau to logically understand discussions (as they will be translated into its TML) will assist users in four important ways:
*For further information about Tau, please refer to my previous article, explaining Tau and its four-step roadmap.
“Tau, is a discussion about Tau”
Tau is a social platform that will assist users with writing and amending code based on users' discussions about a computer program. But Tau is a computer program by itself. Therefore, by discussing Tau, users will be able to amend Tau, whenever they (the community) reach an agreement about changing Tau’s protocol.
When Ohad Asor, the founder and developer of Tau Chain, mentioned that "Tau, is a discussion about Tau", he meant that Tau is what the community decides when they discuss Tau. Meaning, when the community faces a decision, such as what Tau's block size should be, they will just need to express their opinions and perspectives, as we do today on social networks. Tau will organize the conversation in an efficient way to promote a solution that represents what the community desires. As such, Tau will be the only dynamic decentralized social network.
Why is Tezos developing only a short-term solution?
You probably remember Tezos as one of the biggest ICOs in history, when they raised $232 million (when BTC price was ~$2,500). Like Tau, Tezos is also a dynamic protocol that can change itself based on users' agreements. Tezos considers voting to be the optimal solution to reach a decision between users.
Voting is a good method to include a large number of people in the decision-making process; however, voters have limited influence, as they can only choose between a few solutions/options presented to them. Who will decide when and why the community will vote? Who will decide what solutions the community can vote for? Tezos’ solution is still centralized and is only viable in the short-run. What will happen if some users do not agree with a specific vote? Does that mean that a Tezos fork is inevitable?
Without considering the perspectives of the entire community, we will not be able to reach a decentralized decision that benefits all users. Tau's ability to scale discussions is the only decentralized solution to create a true dynamic protocol. Tau will enable all users to express their opinions by just discussing or communicating their views. Users will decide when and what to discuss, and Tau will change its protocol based on users' agreements. Thus, Tau will be able to utilize all data in the decision-making process; data that is usually wasted when holding a vote.
To make it more tangible, think about the difference between discussing with your family which movie you’re going to watch and receiving a list of two movies to choose from. The latter might not reflect your taste in movies or how you want to spend your time. This is a low-scale analogy for Tezos’ voting solution. Tezos might provide a solution, but the solution is not optimal. When encountering a large-scale decision, the protocol will be changed based on the vote, but the minority might reject the vote and fork the coin.
Under Tau, the protocol will detect the core consensus among the different perspectives and change accordingly. With the assistance of Tau and its knowledge, users will effectively discuss among themselves how to reach further consensus points. With every consensus point, Tau will change itself accordingly.
*As the community members decide how Tau will be developed, they can suggest the majority rule (or a higher bar) as a decision rule. Tau will automatically detect the different perspectives of the community members and will execute their decision to change Tau’s protocol.
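One toy way to picture "detecting the core consensus" is to treat each user's view as a set of propositions and take the consensus to be what everyone asserts. This is only an illustration; the names and the intersection rule below are my own stand-ins, not Tau's actual mechanism:

```python
# Toy sketch (not Tau's actual mechanism): each user's view is a set of
# propositions; 'core consensus' here is what every user asserts.
from functools import reduce

views = {
    "alice": {"raise block size", "keep fee model", "add voting UI"},
    "bob":   {"raise block size", "keep fee model"},
    "carol": {"raise block size", "keep fee model", "change voting UI"},
}

core_consensus = reduce(set.intersection, views.values())
print(core_consensus)   # {'raise block size', 'keep fee model'}
```

A real system would need far more than set intersection (logical entailment, paraphrase detection, majority rules), but the intersection captures the basic idea of consensus emerging from stated views rather than from a pre-framed ballot.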
Another important aspect of Tau (compared to Tezos) is the fact that Tau will present its users with output about all the network input. All the data/opinions/information that users provide during their discussions will be accumulated in the knowledge archive. Tau will utilize its knowledge to provide its users with better access to qualitative and quantitative information. Over Tau, the proposals (such as suggestions to change the protocol) that users raise can be as wise as the information contained in the entire network.
I will end this article by quoting the last paragraph in my first article:
"I foresee huge potential for this project and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram, Facebook, LinkedIn and Reddit, or read Ohad’s blog for further information."
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
What is Tau-Chain?
The purpose of this article is to demonstrate how Tau-Chain (Tau) can be implemented in practice. I have already presented Tau and its four-step roadmap in my previous article, but I think that further explanation about Tau is required to better understand its applications.
Tau is basically a discussion platform (like any other social network you know) with two significant innovations:
*Just to clarify, knowledge can be facts, lines of code, qualitative and quantitative data, etc.
How can Tau be implemented in practice?
Tau will be a free, open-source platform to advance and execute knowledge. Think about it as a one-stop shop that provides free consulting services, in all areas, to large numbers of people. For example, if you would like to start an enterprise but you lack the relevant business skills, Tau can answer your questions and even perform market research or analysis (if initial data is provided) to evaluate your business opportunity.
In order to better understand how Tau can improve our society, I am providing below a detailed example showing how I see the vision implemented in practice.
Suppose Alice and Olivia are Ph.D. students in computer science who face a problem with their research. They use Tau to discuss the details of their data, findings and hypothesis. Tau will automatically translate this information into its metalanguage, adding Alice and Olivia’s data to the knowledge archive. Tau is basically the third member in the conversation, and can guide Alice and Olivia to advance their research by interpreting the data and suggesting improvements to their findings. If the students would like to implement the research and develop computer software, Tau will assist them with writing the code in the most efficient way. Using Tau, Alice and Olivia can overcome the limits of their knowledge to quickly complete and implement their research.
But how can people profit from sharing their knowledge?
There is another way for Tau to deepen its knowledge and develop better intelligence. Tau can gain knowledge from the Knowledge Marketplace (Agoras), a blockchain based smart contracts platform where individuals are able to generate income by sharing knowledge and information. With every transaction and exchange of knowledge, Tau will be exposed to the data to become more “educated” and accurate, resulting in a better knowledge deduction capability.
I know that smart contract platforms already exist, but they all lack very important capabilities – the ability to auto-verify the data, run quality assurance tests and suggest improvements to eliminate potential disagreements between the parties to a contract. Tau’s artificial intelligence will support the transaction between the two parties, and will make sure that there will be no fraudulent activities, inaccurate information or low-quality services. This will be the only platform where a computer that acts human (without human deficiencies) will supervise and support such transactions.
The following example demonstrates a possible application of Agoras:
Consider Bob, a software developer who has recently signed a smart contract with David to design a new software program. When Bob shares his code in the Knowledge Marketplace (Agoras), Tau verifies the relevancy of the code and may even suggest improvements to advance it, eliminating potential disagreements about quality and fraud. Upon Tau's approval, Bob receives his reward, as agreed in the contract. Tau will use the final code as additional knowledge to strengthen the platform's intelligence.
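The flow in this example (share code, verify, approve, release reward) can be sketched in a few lines. Everything below is hypothetical illustration; none of these function names or the toy verifier come from Tau or Agoras:

```python
# Hypothetical sketch of the verify-then-pay flow described above.
def settle_contract(code, verify, reward):
    """Release the reward only if the verifier approves the delivered code."""
    ok, feedback = verify(code)
    if not ok:
        return {"paid": 0, "feedback": feedback}
    return {"paid": reward, "feedback": feedback}

def toy_verifier(code):
    # Stand-in check: approve code that at least parses as Python.
    try:
        compile(code, "<submission>", "exec")
        return True, []
    except SyntaxError as e:
        return False, [f"syntax error: {e.msg}"]

result = settle_contract("def f(x):\n    return x + 1\n", toy_verifier, reward=100)
print(result["paid"])   # 100
```

The design point is simply that payment is conditional on an automated check standing between the two parties; Tau's envisioned verification is of course far deeper than a parse test.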
As described above, the compensation mechanism will incentivize users to contribute their knowledge to advance ideas of others. Thus, we create a society in which individuals’ knowledge and expertise become public domain and can be better utilized to promote social health, welfare and resources.
I provided only a few examples of how Tau and Agoras can be implemented in practice. My examples were computer-science related, but you should realize that Tau-Chain can advance ideas and produce knowledge for every collaborative human endeavor across all fields, including science, business and government. Think about a situation where you have a problem and need some help - this is where Tau can assist you with solving your problem and even execute the solution if required and applicable.
Just to clarify, Agoras is also the name of the tokens that users will use in the Knowledge Marketplace (the smart contract platform). Agoras token holders will also benefit from developments that will be built as part of Tau's ecosystem, including a Computational Resource Market ("Zennet"), a Distributed Search Engine and a Derivatives Trading Platform.
To end this article, I would like to quote the last paragraph in my previous article, as it is still relevant:
"I foresee huge potential for this project, and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram,Facebook, LinkedIn and Reddit, or read Ohad’s blog for further information."
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
The Power of Tau - Scaling the Creation of Knowledge. By Trafalgar. Posted on Steemit. December 31, 2017.
Ohad Asor, creator of Tau Chain/Agoras, has recently published the long awaited blog post detailing his vision for what very likely is the most ambitious project in the crypto space: Tau.
Tau will accelerate human endeavors by overcoming long ingrained limitations in our collaborative processes; limitations which we rarely even question.
The Problem of Social Governance
Take social governance, for example. As individuals, we have opinions over a wide variety of social issues. Perhaps you feel that the speed limit on certain roads is too high, or that programming should be a compulsory subject at public schools, or that everyone would benefit if cryptocurrencies were officially recognized and endorsed by the state.
However, you have no idea how to get these concerns across to the general public. I mean, you could try writing a letter to your local representative or signing a petition, but ultimately that's unlikely to gain much traction. Meanwhile, the very same issues that seem to have divided the nation over the past decade remain at the forefront of our political debate. Immigration, climate change, abortion, gun control, etc. are all important issues of course, but very little progress has been made considering the amount of time, resources and attention that have been devoted to them.
So the problem with traditional forms of social governance, such as democratic voting, is apparent: on the one hand it has difficulty identifying and addressing the wide range of opinions different people hold, on the other hand, even with respect to the small number of issues that do end up bubbling up to the surface, it isn't particularly efficient at detecting consensus.
The central cause of this problem is that current modes of discussion are not scalable. There are inherent limitations in the way we're able to communicate our views to each other; namely, human ability to comprehend and organize information is the main bottleneck. We cannot possibly follow multiple conversations at once, or recall everyone's propositions once there are more than a handful of people in the mix. This is why most collaborative decision-making bodies in practice are generally quite small in number: the President's cabinet, Supreme Court Justices, boardroom directors of a Fortune 100 company, etc.; you just can't have a productive discussion with 50 people. Our entire civilization is structured around this very limitation: discussions don't scale.
Scaling Collaborative Discussions Under Tau
Imagine if we could overcome this limitation; what would it mean for social governance? By using a self defining, decidable logic, the Tau network is easily able to keep track of every user's propositions and detect consensus automatically. Note that making a proposition is exactly the same as voting for that very same proposition: when you propose 'dogs should always be on a leash in public unless in a park', you're in effect putting in a vote for that proposition. This way, countless issues, regardless of how technical or niche, can be assessed through the network concurrently, and social consensus can be detected on the fly. The Tau network can scale social governance by overcoming one of the greatest limitations in human communication of ideas: delegating the task of logically making sense of everybody's propositions to the computer. A simple use case of this will be the rules of the Tau network itself: through a self defining logic, Tau is able to detect consensus among its users from block to block, altering its own rules to conform to the choices of the user base.
The benefits of scaling discussions are not limited to a more efficient form of social governance. Logic isn't merely about detecting surface-level consensus; the network can easily form further deductions from everyone's propositions. If one states 'all men are mortal' and 'Socrates is a man', one can deduce that 'Socrates is mortal.' But deductions can be very deep and non-trivial. Imagine if we had a group of 1000 mathematicians all inputting their mathematical insight as propositions. Tau can rapidly detect who agrees with whom on what, and deduce every logical consequence of their combined wisdom; in effect arriving at new truths and insights. In other words, Tau greatly accelerates the production of new knowledge. This will, of course, also work if you have physicists, doctors, engineers, computer scientists, indeed experts in every field working together on the platform. By scaling collaborative discussions in a logical network, Tau is able to scale the creation of knowledge.
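The Socrates syllogism is the classic example of forward-chaining inference: keep applying rules to known facts until nothing new follows. A minimal sketch (a toy for the single rule above, nothing like Tau's actual logic engine):

```python
# Minimal forward-chaining sketch of the syllogism in the text:
# facts plus rules yield new conclusions until nothing more follows.
facts = {("man", "socrates")}       # 'Socrates is a man'
# Each rule: if (premise, x) holds, then (conclusion, x) holds.
rules = [("man", "mortal")]         # 'all men are mortal'

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, subject in list(facts):
            if pred == premise and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("mortal", "socrates") in facts)   # True: 'Socrates is mortal'
```

With 1000 mathematicians' propositions in the fact base, the same saturation loop (with a far richer logic) is what would surface the non-trivial consequences of their combined statements.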
When Tau comes into effect, any company, government, and indeed any organization not using this new network will be rendered obsolete. Tau aims to become an indispensable technology.
And this is only the alpha of Tau.
I will talk about the beta in a future post. The beta will revolve around not just the scaling of discussions and consensus, but the automation and execution of code based on the results of those discussions. For more information on code synthesis and more, please read Ohad's blog. Also, do check out my introduction to Tau here if you missed it.
You can invest in Tau through buying Agoras tokens on Bittrex.
I am not affiliated or paid by the project. These represent my own subjective views. Tau/Agoras is the only other crypto project apart from Steem in which I see an extraordinary future, and I am merely sharing that with fellow Steemians here.
Ohad Asor's New Tau Blog
IRC Chat: Where you may ask Ohad himself technical questions
Tau Chinese QQ Group: 203884141
What is the Knowledge Acquisition Bottleneck problem? By Dana Edwards. Posted on Steemit. March 29, 2017.
Now that we know what knowledge representation is, and what knowledge bases are, and how the knowledge base is relied upon in a knowledge based system of artificial intelligence (KR+KB+Inference engine), we can move on to discussing one of the open problems.
The Knowledge Acquisition Bottleneck problem.
Many people already know about the familiar Byzantine generals problem in computer science. We also know how the Nakamoto consensus in Bitcoin provided a novel example of a solution. The Knowledge Acquisition Bottleneck problem is one of the problems plaguing AI, and is what limits, or seems to limit, the strength of artificial intelligence. One of the main problems in artificial intelligence is that knowledge formation typically requires domain experts who can contribute to the knowledge base. The Cyc project attempted to solve the problem of scaling up the knowledge base but suffers from the bottleneck. The bottleneck can be summarized below [taken from Wagner, 2006]:
The paper from which this summary was pulled "Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management" also offers a solution called collaborative conversational knowledge management. This is the same solution which Tauchain will attempt to utilize in a more sophisticated way. Tauchain will allow for collaborative theory formation. In the paper this quote explains a key concept:
We see this concept in how Wikipedia works to manage knowledge. We know Wikipedia is indeed not without flaws but it does manage knowledge. In their conclusion we see this quote:
Tauchain by design will be collaborative and allow for collaborative theory formation. This would mean anyone will be able to contribute to the knowledge base with relative ease. In addition, it will have knowledge management properties built in, and if the knowledge acquisition bottleneck problem can be solved then it will have a huge impact. For one, the problems which prevent knowledge based AI from scaling could be resolved if this bottleneck is removed.
DARPA has attempted to solve the Knowledge Acquisition Bottleneck problem utilizing high performance knowledge bases (HPKBs) and Rapid Knowledge Formation, yet failed. Cyc has attempted to solve the same problem and has failed. The semantic web has yet to take off because this problem stands in the way. Will Tauchain succeed where these other attempts have failed? I think it is a strong possibility, which is why I'm excited about the implications should Tauchain successfully be built.
Lenat, D. B., Prakash, M., & Shepherd, M. (1985). CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI magazine, 6(4), 65.
Wagner, C. (2006). Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management. Information Resources Management Journal, 19(1), 70-83.
Web 1. https://www.quora.com/What-is-knowledge-acquisition-bottleneck
Web 2. http://www.igi-global.com/dictionary/knowledge-acquisition-bottleneck/49991
Web 3: http://www.tauchain.org
Web 4: https://steemit.com/tauchain/@dana-edwards/how-to-become-a-stakeholder-in-agoras-and-indirectly-tauchain
Source: Original post written by Dana Edwards. Published on Steemit: What is the Knowledge Acquisition Bottleneck problem?
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.