If Money = Memory, if Society = a Super Computer, if Computation is in Physical Systems, what is a Decentralized Operating System? By Dana Edwards. Posted on Steemit. October 24, 2018.
These concepts are not often discussed, so let's start the discussion from the beginning. The first concept to think about is pancomputationalism, or put another way, the ubiquitous computers which exist everywhere in our environment. We can, for example, look at physical systems, living and non-living, and see computations taking place all around us. If you look at rocks and trees you can see memory storage. If you look at DNA you can see code, and if you look at viruses you can see microscopic programmers adding new code to DNA. Even the weather, such as a hurricane, is computing.
If you look at nature you see algorithms. You will also see learners (yes, in the same sense as in AI) in nature. The process is basically the same for all learning. Consider that everything which is physical is also digital. Consider that the universe is merely information patterns.
If we look at society we can also think of society as a computer. What does society compute, though? One way people talk about a society is as a complex adaptive system, but this is also how people might talk about the human body. The human body computes with the purpose of maintaining homeostasis, to persist through time and reproduce copies of itself over time. The human brain computes to promote the survival of the human body. Just as viruses pass on code to our DNA, the human brain is infected with mind viruses which are called memes. Memes are pieces of information which can physically alter how the brain works.
The mind isn't limited to the brain. The mind is all the resources the brain can leverage to compute. In other words, a person has a brain to compute with, but when language was invented it allowed a person to compute not just with their own brain but with the environment itself. To draw on a cave wall is to use the cave to enhance the memory of the brain. To use mathematics is to use language to enhance the brain's ability to compute by relying on external storage and symbol manipulation. To use a computer with a programming language is essentially to use mathematics, only instead of writing on the cave wall we are writing in 1s and 0s. The mind exists to augment the brain in a constant feedback loop where the brain relies on the mind to improve itself and adapt. If there were no external reality the brain would have no way to evolve itself and improve.
A society in the strictly human sense of the word is an aggregation of minds - at minimum, all the human minds in that society. As technology improves, mind capacity increases because each human can remember more, can access more computation resources, and can in essence use technology to continuously improve their mind and then leverage the improved mind to improve their brain. The Internet is the pinnacle of this kind of progress, but it's obviously not good enough. While the Internet allows for the creation of a global mind by connecting people, things, and minds, it does nothing to actually improve the feedback loop between the mind and the brain, nor does it really offer what could be offered.
Bitcoin came into the picture, and perhaps we can think of it as a better memory: a decentralized memory in which, essentially, you can have money. The problem is that money is a very narrow application. It is a start, just as learning to write on the cave wall was a start, but it's not ambitious enough in my opinion.
Humans in the current blockchain or crypto community do not have many ways in which human computation can be exchanged. Human computation is just as valuable as non-biological machine computation because there are some kinds of computations which humans can do quite easily which non-biological machines still cannot do as well. Translation, for example, is something non-biological machines have a difficult time with but human beings can do well. This means a market will be able to form where humans can sell their computation to translate things. If we look at Amazon Mechanical Turk we can see many tasks which humans can do which computer AI cannot yet do, such as labeling and classifying things. In order for things to go to the next level we will need markets which allow humans to contribute human computation and/or human knowledge in exchange for crypto tokens.
The concept of a decentralized operating system is interesting. First, if there is such a thing as social computations (such as collaborative filtering, subjective ranking, Waze, etc.), then what about the new paradigm of social dispersed computing?
The question becomes what do we want to do with this computing power? Will we use it to extend life? Will we use it to spread life into the cosmos? Will we use it to become wise? To become moral? To become rational? If we want to focus on these kinds of concerns then we definitely need something more than Bitcoin, Ethereum, or even EOS. While EOS does seem to be pursuing the strategy of a decentralized operating system which seems to be the correct course, it does not get everything right.
One problem is, as I mentioned before, the importance of the feedback loops between minds and brains. The reason I keep returning to the concept of the external or extended mind is the fact that it is the mind which creates the immune system to protect the brain from harmful memes. The brain keeps the body alive. The brain is not really capable of rationality, morality, or logic on its own, and relies on the mind to achieve them. The mind is essentially all the computation resources that the brain can leverage.
EOS has a problem in the sense that it doesn't seem to improve the user. The user can connect, can join, can earn or sell, can participate, but unless the user can become wiser, more rational, more moral, then EOS has limits. EOS does have Everipedia, which is quite interesting, but again there are still problems. What can EOS do to improve people in society and thus improve society, if society is a computer and is in need of being upgraded?
Well, if society is a computer, first what does society compute? What should it compute? I don't even know how to answer those questions. I could suggest that if computation is a commodity along with data, then whichever decentralized operating systems do exist will compete for these commodities. The total brain power of a society is just as important as the amount of connectivity. And the mind of the society is the most important part of a society because it is what can allow the society to become better over time, allow the people in the society to thrive, and allow the life forms in it to continue to evolve and avoid extinction.
A decentralized operating system on a technical level would have a kernel or something similar to it. This is the resource management part. For example Aragon promises to offer a decentralized OS and it too mentions having a kernel. A true decentralized operating system has to go further and requires autonomous agents. Autonomous agents which can act on behalf of their owners are, philosophically speaking, the extended mind. But the resources of a society are still finite and have to be managed, and so a kernel would provide the ability to manage resources.
The total computation ability of a society is likely a massive amount of resources, far more than just connecting a bunch of CPUs together. Every member of the society who can compute could participate in a computation market. Of course, as we are beginning to see now, regulators seem concerned about certain kinds of social computations, such as prediction markets. So it is unknown how truly decentralized operating systems would be handled, but my guess is that if designed right they could be pro-social, be capable of producing augmented morality by leveraging mass computation, and, by leveraging human computation, be able to be compliant. To be compliant is simply to understand the local laws, and these can be programmed into the autonomous agents if people think it is necessary.
What is more important is that if a law is clearly bad, and people have enhanced minds, then it will be very clear why the law is bad. This clarity will help people to dispute and seek to change bad laws through the appropriate channels. If there is more wisdom, due to insights from big data, from data scientists, etc, then there can be proposals for law changes which are much wiser and more intelligent. This is something specifically that people in the Tauchain community have realized (that technology can be used to improve policy making).
A lot is still unknown so these writings do not provide clear answers. Consider this just a stream of consciousness about concepts I am deeply contemplating. This is also a way to interpret different technologies.
Tauchain: The Social Dispersed Computer introduced as a Social Network? By Dana Edwards. Posted on Steemit. October 12, 2018.
What might a Tau Operating System via a Tau Social Dispersed Computer function like?
We know from tauchain.org that the first iteration of Tau is to be a discussion platform not too dissimilar from Facebook. Of course this would simply be the front end or the "face" of what could, behind the scenes, evolve toward a social dispersed computer complete with a dispersed operating system. The resources have to be managed, and a kernel could provide for this in a manner not dissimilar to what we see with EOS. The Agoras or AGRS token specifically represents "resources", as it is the tokenization of resources for whichever applications will run on Tauchain.
TML provides the basis from which to create the necessary languages to produce a dispersed operating system. Zennet even has an algorithm, which Ohad himself worked on, for calculating resource requirements. All minds will be able to contribute computational resources (at least in theory) to Tauchain.
Because of Zennet there may in fact be no limit to the amount of computation resources which we could throw at the super computer. It will of course depend on resource management, which is where a kernel likely comes into play, because any smart apps built to run on Tau will have to ask for resources. Resource management is one of the core functions of a kernel and of an operating system, which is why I think it is likely that Tauchain will have one. I think the Ethereum route shows problems with scaling, as applications also have to compete for resources in a way the network cannot self-manage. Cryptokitties, for example, can lag the whole Ethereum network, and if this is a computer then a nonsense app could disrupt more critical apps.
A prime example of a potential smart app for Tauchain
An example (which may or may not be feasible) is a health and fitness app. The app in theory could allow any user to provide data such as genetic information, blood test results, exercise tracking, blood pressure, blood sugar and anything else. All of this could provide a feedback loop back to the patient on how to improve their health over time based on the knowledge of Tau. As technology gets better the users could add more devices to provide more data for a better feedback loop. As technology evolves, FPGAs could be added to meet the demand for calculations, and storage can be rented as well.
An operating system could give priority to this kind of app by load balancing the resources. How would it know to do this? Tau could learn the moral and legal ramifications, and a consensus could emerge that health-related apps deserve premium access to resources because they can save lives.
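A minimal sketch of how such priority-aware resource management might look, assuming a hypothetical kernel that weights app classes by a consensus-assigned priority (all names and numbers here are illustrative, not part of any actual Tau or EOS design):

```python
# Hypothetical priority-weighted allocator: apps in higher-priority
# classes receive proportionally more of a scarce resource pool.
def allocate(requests, priorities, capacity):
    """requests: {app: units wanted}; priorities: {app: weight}; capacity: total units."""
    total_weight = sum(priorities[app] * requests[app] for app in requests)
    shares = {}
    for app, wanted in requests.items():
        # Weighted fair share, capped at what the app actually asked for
        share = capacity * (priorities[app] * wanted) / total_weight
        shares[app] = min(wanted, share)
    return shares

# A health app (consensus weight 3) competing with a game (weight 1) for 100 units
demo = allocate({"health": 80, "cryptokitties": 80},
                {"health": 3, "cryptokitties": 1},
                capacity=100)
print(demo)  # the health app receives 75 units, the game only 25
```

The point of the sketch is only that a kernel with priority weights can prevent one low-value app from starving the rest, which is exactly what a flat fee market on Ethereum cannot guarantee.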
De Lege Ferenda is a series, like the Tauchain Exegesis is: one train of articles.
This is the introductory 'locomotive' article, where I attempt to nail down the essential basics. This is nontrivial because it requires compression of a very long stream of thoughts and research, spanning literally decades. In that sense some of the overcompressed categorical statements are also cognitive 'letters of credit' or 'promissory notes', comprising a debt of mine to be repaid in future, more detailed, separate explanations. I'm afraid this is the only way for my theses and conclusions to be expressed in a reader-friendly way. Of course, questions and comments, as mutual understanding accelerators, are as always more than welcome.
Three ''angles of attack'', in Roman numerals and capitals in pure Latin (the lingua franca of law :) below:
Maybe I have already tired you with repeating my incantation of:
Law is Between, Code is Within
It is quite multi-dimensional in meanings and multi-disciplinary in consequences but here it comes to denote the unavoidability of Law. Rendered down to the most basic physics we currently know:
This is the way and reason why Law is enforceable and Code is executable, and the major categorical difference between them, which makes the notion of 'code is law' utter nonsense and, it seems, also destroys the very basis of the notion of 'smart contracts'. But this belongs to a bunch of other series of mine to come ...
Even if it were theoretically possible for all effectors to become one, there would still be internal uncertainty fragmentation and thus the unavoidability of enforcement.
Leaving this head-dizzying fundamental cognitive datum and heading up across the higher abstraction epistemic layers, we reach the surface to take a gulp of fresh air to:
Nothing, read my lips, NO-THING in crypto or blockchain has ever been or could possibly be extralegal.
Cuz there ain't a thing in any blockchain aspect which is not ... physical. Hence nothing in it is beyond the scope of Law.
Blockchain is most probably the arrival of the expected Hanson engine, or Szabo booster, or ultimate Clusivity management tool. Which makes it an extremely important domain for proper legal treatment and regulation - both as taxonomy within the existing institutes of Law (lex lata) and as the creation of novel norms to cater to it (lex ferenda).
(As a side note: expectedly, the novel collective mnemonic technologies known under the umbrella term of 'crypto' provide a positive feedback loop to strengthen the Law, too - Tauchain seems to promise the advent of law, at last, as a consistent and decidable set of rules, for the first time ever.)
II. IURIS DICTIO
Law, being inherently about the physical, is also about the spatiotemporal, i.e. about geography / geopolitics. It is always territorial, even when it is cross-border applicable by virtue of international law or internal rules for resolving inter-jurisdictional normative collisions.
The known world (I deliberately do not say: the planet, the Earth, or the globe, because of ... of course - the Outer Space Law!) is tessellated geographically into jurisdictions. Countries or nations. The pattern pixels of the universal human jurisdictional cellularity. But borders do not so much divide as they connect.
The world is an internet of jurisdictions, no matter how primitive the networking protocols and architecture yet are. And because, due to topological deficiencies, this cannot yet be a geodesic network - some jurisdictions are special. And among the special there are some which are even more special than the merely special ones. The specialness stems from what a given jurisdiction gives to its user.
After decades of observation and practice and comparative studies I reached the conclusion that THE jurisdiction is the Principality of Liechtenstein ! 
Mere enumeration of its features and the sheer lack of bugs would occupy a sizeable volume. Liechtenstein is not just an island-periphery money hideout of an old fat imperial metropole - it is a HUB. It is immersed right into the middle of the healthiest, wealthiest community of the EU.
What starts in Liechtenstein does not stay in Liechtenstein but swiftly propagates into the giant space of the EEA. It is a keyhole jurisdiction straight into this most giant jurisdiction of jurisdictions - so strong in soft power and so influential that even the FAMGA seem to reckon with Europe more than with their own home jurisdiction.
Liechtenstein simultaneously has the deepest and most stable roots in the best of history and geography, and is most advanced and ahead in the making of legislation of the highest standard of adequacy.
It does in 2018 what I (and just a few others) predicted years ago would happen. We must herein admit that other jurisdictions do have some timid try-outs of legal codification of the blockchain, but nothing compares with the comprehensive and in-depth approach of the Principality's legislators.
On 28th of August 2018 Liechtenstein published a draft of the new Blockchain Act:
<< On 28 August 2018, the Ministry for General Government Affairs and Finance of Liechtenstein published the consultation report on the new Blockchain Act (Act on Transaction Systems based on Trustworthy Technologies (VT) (Blockchain Act; VT Act; VTG)).
The government has decided to regulate not only the current Blockchain-applications (in particular cryptocurrencies and initial coin offerings (ICOs)), but also to establish a legal basis for the entire scope of application of the token economy according to a long-term approach, which should also meet the needs of future generations. >>
The basic provisions of the Liechtenstein Blockchain Act are so far available only in German - a language I am not at all in command of, and one quite indigestible for the Google Translate AI.
The consultation period ends on 16 November 2018, i.e. less than 2 months left from today.
My modest intention is by this De Lege Ferenda series of articles to provide my comments and opinions to 'whom it may concern' on the upcoming Liechtenstein Blockchain Act.
You already know I'm kinda fond of timelining and retrodictions.  :)
Every result has its cause, often hidden in the ocean of data that the past is, and quite hard to distinguish.
US has its Captain America. Liechtenstein is lucky to have its Mr Liechtenstein.
Andreas Erick Johannes Kohl Martinez of the House of Sequence . Remember that name.
Since the dawn of the blockchain era, I have been under the strong conviction that Liechtenstein is the true Crypto Valley of the globe. So is Andreas, too. Purely by chance it occurred that we both - long before we knew each other - have this astronomically improbable coincidence, or synchronicity, of this and a multitude of other thoughts.
Society of mind .
[*] - photo attributed to: By Michael Gredenberg - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18962
 - https://en.wikipedia.org/wiki/Lex_ferenda
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-intro
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-the-two-towers
 - https://en.wikipedia.org/wiki/Letter_of_credit
 - https://en.wikipedia.org/wiki/Promissory_note
 - https://en.wikipedia.org/wiki/Angle_of_attack
 - https://en.wikipedia.org/wiki/Lingua_franca
 - http://www.behest.io/
 - https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://steemit.com/tauchain/@karov/tauchain-and-the-cost-of-trust
 - https://en.wikipedia.org/wiki/Pauli_exclusion_principle
 - https://en.wikipedia.org/wiki/Fermion
 - https://en.wikipedia.org/wiki/Enforcement
 - https://en.wikipedia.org/wiki/Uncertainty_principle
 - https://en.wikipedia.org/wiki/Free_will
 - https://en.wikipedia.org/wiki/Mutual_information
 - https://www.coindesk.com/code-is-law-not-quite-yet/
 - https://en.wikipedia.org/wiki/Smart_contract
 - https://steemit.com/tauchain/@karov/tauchain-over-de-latil
 - https://www.etymonline.com/word/data
 - https://steemit.com/tauchain/@karov/tauchain-the-hanson-engine
 - https://steemit.com/tauchain/@karov/tauchain-as-szabo-booster
 - https://steemit.com/tauchain/@karov/clusivity-by-tauchain
 - https://en.wikipedia.org/wiki/Lex_lata
 - http://www.idni.org/
 - http://www.idni.org/blog/tau-and-the-crisis-of-truth.html
 - https://en.wikipedia.org/wiki/Space_law
 - https://www.etymonline.com/word/jurisdiction
 - https://en.wikipedia.org/wiki/Jurisdiction
 - https://steemit.com/blockchain/@karov/geodesic-by-tau
 - https://www.liechtenstein.li/en/
 - https://www.liechtenstein-business.li/en/economic-area/get-to-know/hidden-treasures/liechtenstein-combines-the-best-of-both-worlds/
 - http://europa.eu/
 - https://en.wikipedia.org/wiki/European_Economic_Area
 - https://en.wikipedia.org/wiki/Soft_power
 - https://medium.com/crypto-oracle/why-cryptos-a-growing-threat-to-famga-a-k-a-facebook-apple-microsoft-google-and-amazon-ea237570d3ea
 - https://www.dw.com/en/eu-gives-facebook-twitter-ultimatum-on-consumer-protection-laws/a-45573561
 - https://www.pwc.ch/en/insights/regulation/liechtenstein-publishes-draft-of-the-new-blockchain-act.html
 - https://www.llv.li/files/srk/vnb-blockchain-gesetz.pdf
 - https://steemit.com/bitcoin/@karov/bitcoin-retrodictions
 - https://en.wikipedia.org/wiki/Captain_America
 - https://podcast.bitcoin.com/e349-How-Libertarian-Leader-Mr-Liechtenstein-Got-Lucky
 - http://www.sequence.li/
 - https://www.businessinsider.com/what-its-like-in-zug-switzerlands-crypto-valley-2018-6
 - https://en.wikipedia.org/wiki/Synchronicity
 - https://en.wikipedia.org/wiki/Society_of_Mind
 - https://steemit.com/tauchain/@karov/scaling-is-layering
 - https://steemit.com/tauchain/@karov/tauchain-transcaling
Consensus Morality and Tauchain | Consensus Gentium. By Dana Edwards. Posted on Steemit. September 15, 2018.
An ancient criterion of truth, the consensus gentium (Latin for agreement of the people), states "that which is universal among men carries the weight of truth" (Ferm, 64). A number of consensus theories of truth are based on variations of this principle. In some criteria the notion of universal consent is taken strictly, while others qualify the terms of consensus in various ways. There are versions of consensus theory in which the specific population weighing in on a given question, the proportion of the population required for consent, and the period of time needed to declare consensus vary from the classical norm.
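The parameters these consensus theories vary - which population weighs in, what proportion must agree, and for how long - can be made concrete in a small sketch. Every name here is illustrative, not drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class ConsensusCriterion:
    """One configurable variant of a consensus-of-the-people test."""
    required_proportion: float   # 1.0 = strict universal consent, 0.5 = simple majority
    min_population: int          # smallest population for the result to count
    min_duration_days: int       # how long agreement must hold before declaring consensus

    def is_consensus(self, agree: int, total: int, held_for_days: int) -> bool:
        if total < self.min_population or held_for_days < self.min_duration_days:
            return False
        return agree / total >= self.required_proportion

# Strict universal consent versus a relaxed majority-held-for-a-year variant
strict = ConsensusCriterion(1.0, 100, 0)
relaxed = ConsensusCriterion(0.6, 100, 365)
print(strict.is_consensus(99, 100, 0))     # False: one dissenter breaks strict consensus
print(relaxed.is_consensus(70, 100, 400))  # True: 70% agreement held for over a year
```

The classical norm is the strict variant; the modern consensus theories mentioned above simply dial these three parameters away from it.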
In the past I made a controversial statement that the law is amoral. This statement I made is based on a simple understanding of legal positivism. Take note that I am not a legal scholar or legal philosopher. My background is in ethical philosophy and political philosophy. That being said if we look at the ideas behind legal positivism it leads to the conclusion that law and morality have nothing to do with each other. In this post I will try to clarify some of my thoughts on this topic and also address a question I was asked about whether Democracy is moral or immoral. I will also discuss the concept of consensus morality and the implications it could have on Tauchain which by design will be permitted to have law(s). Will the law(s) in Tauchain be moral or immoral? Is it possible to align a moral framework with the creation of all laws in Tauchain? Which moral framework and will it be reached by consensus?
In order to understand much of this post we first have to consider the question: what is consensus morality? To discuss this topic I will divide morality into two parts: private morality and public morality. This also introduces the question of whether public morality is authentic or coerced, as it depends on how it emerges.
Private morality is what you internally think or feel is right or wrong. This could be because you did some sophisticated calculation as a consequentialist, or it could merely be that you feel a certain way about it. In your opinion it is considered wrong. For example you could say: "eating meat is wrong" and this would be your personal opinion. This is an expression of how you feel about eating meat. Now if you say "eating meat is wrong because it promotes animal suffering" this is also an expression of your opinion, but you now have a goal attached, which is to avoid promoting animal suffering. The goal of not promoting animal suffering suggests that you value minimization of animal suffering as a kind of optimization strategy.
If you still follow, private morality can also be based on your religious convictions, where because the Bible says it is wrong, or because you were taught the golden rule, it is in your opinion wrong to engage in behaviors which violate these teachings. The golden rule is an example of a heuristic rule. There are many such rules which people follow, including the example from Kant (the categorical imperative), but it is still just an opinion based on adherence to a heuristic rule. We can also consider the non-aggression principle an example of a heuristic rule (a heuristic rule is a mental shortcut which people take because they believe it leads to good results most of the time).
Public morality on the other hand is a different kind of morality entirely. A private individual has a private morality because that individual is only responsible for themselves in their decisions. A public individual is in a position where other people have a stake in what they are doing. For example, a CEO of a company cannot simply do what they think is right because the shareholders have funds at stake. The CEO has a fiduciary duty which outweighs their personal opinions on what is right and wrong. This fiduciary duty is to the shareholders of the company and is both a legal and ethical obligation. In the case of a public company, the rightness or wrongness of a decision, if the company weighs consequences, is based on data. For example a company might rely on focus groups to determine what a customer might want. A company would have to rely on spiritual advisers and ethical focus groups to determine what the shareholders (and customers) would perceive as right. This is because if the CEO does not do what is in the best interest of the shareholders and customers then the CEO will simply be replaced by another CEO who will.
Public morality is reached by some process which results in a moral consensus. The moral consensus of 2018 is not going to be the same as the moral consensus of 1969. This is to say that moral attitudes change over time. A company which seeks to exist and remain profitable for decades must remain in good moral standing for those decades. The only way a company can remain aligned with current moral trends is through data analysis. In other words, data science is how "right" and "wrong" are determined. For example, public sentiment is tracked, and from that the marketing team knows where the line in the sand is and what line not to cross in their marketing campaign. The phrase "we went too far" is common in business because going too far simply means to push the boundaries of what is acceptable (or unacceptable). This also can become problematic because if a company bets on the moral consensus of the 1800s (slavery is right) then after the Civil War (slavery is wrong) that company has to change its position. In other words the moral consensus is always changing and is in essence producing moral populism.
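The data-analysis approach to tracking a shifting moral consensus can be sketched as a rolling average over sentiment polls. This is purely illustrative, not any real company's method; the class and its parameters are hypothetical:

```python
from collections import deque

class MoralSentimentTracker:
    """Tracks public approval of a practice over a sliding window of polls."""
    def __init__(self, window: int):
        self.samples = deque(maxlen=window)  # only the most recent polls are kept

    def add_poll(self, approval: float):
        self.samples.append(approval)  # approval as a fraction in [0, 1]

    def current_consensus(self) -> float:
        return sum(self.samples) / len(self.samples)

tracker = MoralSentimentTracker(window=3)
for approval in [0.9, 0.8, 0.4, 0.2]:  # attitudes turning against a practice
    tracker.add_poll(approval)
# The oldest poll (0.9) has dropped out of the window, so the consensus
# reflects only the recent, lower, sentiment.
print(round(tracker.current_consensus(), 2))
```

The sliding window is the whole trick: it is what makes the 2018 consensus differ from the 1969 one, because old opinions stop counting.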
Consensus morality on Tauchain
Consensus morality is essentially a publicly recognized framework for right and wrong. Consensus morality on Tauchain, for example, could be arrived at if we simply have the discussions on topics of ethics. Over time our discussions will converge in such a way as to produce a consensus morality. That is, a moral attitude of the day, of the year, etc., as it is merely the currently popular opinion and sentiment on what is right and what is wrong. So consensus morality is in my opinion likely to be a very important concept going forward and is a concept which Tauchain (and blockchains like Steem) may enable.
Consensus morality and potential problems
So the question I was asked is about democracy. The idea a person put forth to me was that democracy is immoral because it is a form of coercion. I do not personally buy into this idea that democracy is inherently immoral or inherently coercive. I will say that democracy implemented in the wrong way can become coercive. This is why the emphasis on privacy may be a requirement. If there is no privacy then all votes could be coerced. If the idea is to have a network which is truly moral then we would require that every moral opinion be expressible. Moral opinions which are unpopular may be censored or discouraged from being expressed in a transparent ecosystem. This means a transparent ecosystem may in fact, under certain circumstances, produce a coerced consensus morality. That is, the votes which are public and attributable to a certain individual may be mere virtue signals rather than honest (authentic) opinions on what is right and wrong.
As a result this transparency may skew the results of any poll about any subject. A private or anonymous poll can capture a result which in theory expresses some true opinion. In addition there is the possibility of futarchy, allowing prediction markets and other mechanisms to discover true sentiment on moral questions. My answer to the question is that whilst democracy is not inherently wrong, it is also not inherently right. Democracy is a tool which, when used in the right circumstances, may be best suited for achieving the ends. If no better tool exists to achieve the ends then democracy may in fact be the choice which leads to the least bad consequences when compared to other potential choices. That being said, the ideal of consequentialism is to over time reduce the wrongness and increase the rightness by measuring the consequences of every choice, such as private ballot voting vs transparent voting.
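The prediction-market mechanism that futarchy relies on can be illustrated with Hanson's logarithmic market scoring rule (LMSR), where the price of an outcome can be read as its implied probability. This is a bare-bones sketch; the function names and the liquidity parameter value are mine, purely for illustration:

```python
import math

def lmsr_price(quantities, i, b=10.0):
    """Implied probability of outcome i under the logarithmic market
    scoring rule, given outstanding share quantities and liquidity b."""
    exps = [math.exp(q / b) for q in quantities]
    return exps[i] / sum(exps)

def lmsr_cost(quantities, b=10.0):
    """LMSR cost function; a trade costs cost(after) - cost(before)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# Two-outcome market: as traders buy shares of outcome 0,
# its implied probability rises above the initial 50%.
before = [0.0, 0.0]
after = [10.0, 0.0]
print(round(lmsr_price(before, 0), 3))  # 0.5 at the start
print(round(lmsr_price(after, 0), 3))   # higher after buying outcome 0
```

The relevant property for discovering "true sentiment" is that traders profit only by moving the price toward what they actually believe, which is why such markets can aggregate honest opinion even when public votes are mere virtue signals.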
Privacy has both its risks and its benefits with regard to consequences. The benefits include coercion resistance. The risks, on the other hand, include an increased ability to bribe and thus coerce. So while in theory a person with privacy can express an authentic opinion (have genuine speech rights), it is also true that anyone could anonymously (privately) be selling their opinion and thus their vote. It is going to be a challenge to determine when privacy is the right tool for the job and when transparency is.
In the positivist view, the "source" of a law is the establishment of that law by some socially recognised legal authority. The "merits" of a law are a separate issue: it may be a "bad law" by some standard, but if it was added to the system by a legitimate authority, it is still a law.
Legal positivism states that law and morality are not one and the same. Just because something is legal does not mean it is moral. Just because something is illegal does not mean it is immoral. From this basis I reached the conclusion that because immoral laws exist (and some laws are moral), the law as a whole is amoral. That is to say, whether a law can be made or unmade does not depend on whether the law produces good consequences or even desirable consequences. We could for example look at the drug laws and the war on drugs to see examples of policies which produce mass incarceration, but was that the intended consequence? It would seem the drug laws would have to be immoral according to consequentialism unless the intended consequence was mass incarceration. If the intended consequence was harm reduction then the current drug laws are ineffective. What do these laws actually achieve? It doesn't really matter, because the law is amoral. To align the law with morality is also problematic because it would only be able to align with public morality, which under consequentialism may also often lead to bad or unintended consequences.
A potential solution is to allow participants in the ecosystem to rate the laws over time. Laws which receive a higher rating or lower rating would provide a feedback loop indicating when a law should be replaced. This is something that we don't seem to have in the current legal system or if we do have it then what is actually done if a lot of people express the opinion that a particular law is immoral or perhaps not moral enough? If every law on Tauchain could be rated, reviewed, discussed continuously, and improved indefinitely, then we may actually get somewhere.
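A minimal sketch of such a rating feedback loop, assuming a hypothetical interface where participants continuously rate each law (nothing here reflects an actual Tauchain design, and the law identifier is made up):

```python
from collections import defaultdict

class LawRatings:
    """Continuous rating of laws; a law trending below a floor is flagged for review."""
    def __init__(self, review_floor: float = 0.4):
        self.ratings = defaultdict(list)   # law id -> list of ratings in [0, 1]
        self.review_floor = review_floor

    def rate(self, law_id: str, rating: float):
        self.ratings[law_id].append(rating)

    def needs_review(self, law_id: str, recent: int = 3) -> bool:
        """Flag a law when the average of its most recent ratings falls below the floor."""
        latest = self.ratings[law_id][-recent:]
        return bool(latest) and sum(latest) / len(latest) < self.review_floor

laws = LawRatings()
for r in [0.8, 0.5, 0.3, 0.2, 0.3]:   # sentiment turning against this law over time
    laws.rate("hypothetical-drug-law", r)
print(laws.needs_review("hypothetical-drug-law"))  # True: recent average ~0.27 is below 0.4
```

Using only the most recent ratings is deliberate: it is what turns a one-off vote into the ongoing feedback loop the paragraph describes, so a law rated well decades ago can still be flagged once opinion shifts.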
In a recent article of mine I hinted at my strong suspicion that scaling is itself scalable.
''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''
No matter how terrific a grower a system is, as per its own internal algorithmic growth rules, its growth seems destined to end in entropic mutualization upon impact with some kind of a ... downscaler.
Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically, the faster a thing grows, the sooner comes its encounter with an external, bigger downscaling factor.
This realization, refracted through the prism of our 'reptilian brain' layer and amplified to gargantuan proportions by our inherent social hierarchicity, is the source of the 'Malthusian anxiety' which has led to countless violent deaths throughout human history. Fear is anger, so the feeling that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is a major source of what makes us bad to each other.
There is a plethora of examples of mathematically and scientifically well-grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics, and simultaneously all totally wrong for missing the obvious: the factors of externalities, the properties and opportunities of the medium which is consumed and/or created by this growth and which transcend the axiomatics. For growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction.
Let's lay out some such problem-solution pairs for clarity:
a. the world of the 1890s, with big cities sunk knee-deep in beast-of-burden manure, and the super-apocalyptic projections of that VS Tony Seba's 1-pic > 1000-words of the NYC carts-vs-cars situation in 1900-1913 ...
b. the grim visions of all of Mankind becoming telephone-switchboard blue-collar workers, whose number should have exceeded the total world population by now to achieve the same level of telephonization, or
c. the all-librarians world, where it would take more librarians than all of mankind to serve the social memory in the paper-and-printed-ink storage mode ...
d. the Club of Rome as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of shit or the 'proofs' of the impossibility to connect or educate or feed all, instigating the mass-destruction fear that ''we run out of everything and will soon all die'', used as justification for mass atrocities, VS Julian Simon's ''Ultimate Resource'' (1981, 1996). Cf. my accelerando article, and see what precisely is the Factory for the succession of better and better Hanson drives over the last few million years, from the Blade and the Fire to the Tau: it is the same thing whose identification turned Julian Simon from a fanatical Malthusian into a rationally convinced Cornucopian ... the human mind.
e. the predator-prey model, whose brutal flaw I guess this pseudo-haiku depicts best:
''hawk eat chick -> less chick, human eat chick -> more chick''
for failing to posit and account for the positive feedback loop of predator-over-prey dynamics ...
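The asymmetry in the pseudo-haiku can be shown with a toy simulation. The coefficients below are made up purely for illustration: in the naive model, predation only shrinks the prey stock ("hawk eat chick -> less chick"), while a predator that breeds its prey adds exactly the positive feedback the naive model omits ("human eat chick -> more chick").

```python
def simulate(prey, predators, farming, steps=50):
    for _ in range(steps):
        eaten = 0.02 * prey * predators                     # predation loss
        bred = 0.02 * prey * predators if farming else 0.0  # husbandry gain
        prey = max(prey + 0.05 * prey + bred - eaten, 0.0)  # 5% natural growth
    return prey

wild = simulate(prey=1000.0, predators=3.0, farming=False)
farmed = simulate(prey=1000.0, predators=3.0, farming=True)
print(round(wild), round(farmed))  # the farmed stock ends far larger
```

Under identical predation pressure, the wild population decays while the farmed one grows, because the "predator" reinvests in its prey.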
f. The comment of Daryl Oster, founder of the other passion of mine, ET3, on the so-called 'saturation' of the scalables (exemplified in the field of transportation, which, btw, being communication ... our social structures map onto the mobility systems we have at our disposal ...):
''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use). Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by, government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years. The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all of whom fund research centered on 'improving' the status quo. Virtually all universities (for the last 1k+ years) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry.
Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''
g. The Darwinian algo's sqrt(n) VS higher algos, like Metcalfe's n^2. This is not precise; it is metaphorical, meant to indicate the direction or scale of scaling rather than rigorous precision, but ... the former, figuratively speaking, takes 100 times more input to put up 10 times more output, and the latter takes 10 times more input to return 100 times more output ...
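The "100 times more for 10 times more" contrast checks out numerically. As the text itself stresses, sqrt(n) and n^2 are metaphors for the two regimes, not rigorous laws; the sketch below just runs the arithmetic:

```python
import math

def darwinian(n):
    # 'Darwinian' value grows like the square root of effort n.
    return math.sqrt(n)

def metcalfe(n):
    # Metcalfe-style network value grows like n squared.
    return n ** 2

# 100x the input buys only 10x the output under sqrt(n) ...
print(darwinian(100 * 1_000_000) / darwinian(1_000_000))  # 10.0
# ... while 10x the input returns 100x the output under n^2.
print(metcalfe(10 * 1000) / metcalfe(1000))               # 100.0
```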
h. Barter vs money. See the bottom of page 5, above the bottom-line notes, about the latter:
simplifies pricing calculations and negotiations from O(n^2) complexity to O(n) complexity
A demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. (Btw, an extremely strong document, where entire books read and internalized stand behind each tight and contentful sentence!)
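The quoted O(n^2) -> O(n) claim takes two lines to illustrate: with n goods, pure barter needs an exchange rate for every unordered pair of goods, while a money economy needs only one quoted price per good.

```python
def barter_prices(n):
    # n-choose-2 pairwise exchange rates must be known/negotiated.
    return n * (n - 1) // 2

def money_prices(n):
    # One price per good, quoted in the money good.
    return n

for n in (10, 100, 1000):
    print(n, barter_prices(n), money_prices(n))
# At 1000 goods: 499500 barter rates vs 1000 money prices.
```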
i. The heat death of the universe VS the realization that the 2nd law, a conservation law for entropy/information, does not allow that; the asymptoticity of the fundamental limits of nature; the fact that max entropy grows faster than (and from, and due to) the actual entropy growth; that entropy is not disorder; and that at the end of the day it is an unbounded, immortal universe ... because it's all a combinatorial explosion.
j. The Anthropic principle and the realization that it is extremely hard, if not impossible, to posit a lifeless universe ...
k. The Algoverse, my 'psychedelic' vision of the asymptotic, inexorable hierarchy of the Dirac sea of lower algos which take everything for almost nothing, up towards giving almost everything for almost nothing: Bucky Fuller's runaway Ephemeralization. Algorithms are things. Objects. Structure. Homoousic, or consubstantial, with their input and output. Things taking things and making things out of the former. Including other algos, of course! Stronger ones.
l. The Masa Effect. The master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips, using the very same thing that scales supply in such a too-much way.
m. The Pierre de Latil 1950s and Stanislaw Lem 1960s (copied 1:1 by Tegmark) hierarchy. Of degrees of self-creating freedom of Effectors ...
n. Limits of growth: present at any particular moment and in any finitary setting of rules, but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light vs ... photosynthesis.
o. Ray Kurzweil - static vs exponential thinking .
p. Craig Venter's Human Genome Project, which when it commenced in 1990 was ridiculed as something that would be unbearably expensive and take centuries to finish. And it did: it cost a fortune unbearable for 1990, and it did take centuries, of subjective time as per the initial projections' conditions, being completed in the year 2000.
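The "centuries of subjective time" compression fits in one loop. The numbers below are assumed round figures for illustration, not the real sequencing data: suppose finishing needs 700 units of work and year-one capacity is 1 unit. At a static rate that is 700 years; if capacity doubles annually, it is about a decade.

```python
def years_to_finish(total_work, first_year_capacity, doubling):
    # Accumulate work year by year, optionally doubling capacity.
    done, capacity, years = 0.0, first_year_capacity, 0
    while done < total_work:
        done += capacity
        years += 1
        if doubling:
            capacity *= 2
    return years

print(years_to_finish(700.0, 1.0, doubling=False))  # 700
print(years_to_finish(700.0, 1.0, doubling=True))   # 10
```

This is the static-vs-exponential contrast of example o in miniature: the linear projection is internally correct and externally absurd.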
q. Jeff Bezos vision  of Solar System wide Mankind:
''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''
r. The 'wastefulness' of data centers and crypto mining colocation facilities ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum calculation energy.)
s. The log-scale intuitive measure and smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:
"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950 is about 200 Billion man-years ago."
Oops! Go back to q. With a population of 1 trillion humans, the 'singularities' would occur once a year?!
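The arithmetic behind that exclamation is a one-liner, assuming the quote's figure of one 'singularity' per 500-1000 billion man-years:

```python
population = 1_000_000_000_000     # example q: Bezos' trillion humans
man_years_per_singularity = 500e9  # lower bound from the quote above

years_between_singularities = man_years_per_singularity / population
print(years_between_singularities)  # 0.5
```

With the quote's upper bound of 1000 billion man-years, the spacing becomes exactly one year; hence "once a year".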
t. the Tau  !!
I can continue with these examples ... forever [wink]. Excuse me if I've bored you, but I think at least that minimum needed to be shown, and it is enough to grok the big picture.
Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.
Size matters. Scaling is the way. But more general still is how a system handles change! This is so fundamental as to be at the very core of the definition of life and intelligence.
Tauchain is all about change handling!
Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do:
Dear friends, please scroll back to example d. Yes, the human-mind-transcaler thing. The Ultimate Resource thing.
We are the ultimate resource.
We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves).
We as the-I are strong thinkers and creators; immensely more road lies ahead than has been traveled, yes, but still we, as the-I, are the momentary apex of the Effectoring business in the known universe ... AND simultaneously, we as the-We are mediocre to outright dumb.
We are very far from proper scaling together. The Ultimate Resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind-laser. Dispersed fissiles, but not a concentration of critical masses.
We as the-We, paradoxically, persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not handle scaling well at all.
The individual human mind is the unscaled transcaler.
Tau is the upscaler of that transcaler.
I'll introduce herewith another 'poetic' neologism, which occurred to me to depict the scaling properties of a system, after the Scrooge factor of ''Tauchain - Tutor ex Machina'', and it is the:
Spawn factor
- the capacity and ability of a system to grow through, despite, against, across, from and via the changes. Just as 'cuboid' covers all rectangular things - squares, cubes, tesseracts ... regardless of their dimensionality - the Spawn Factor is to be a generalization of all orders of scaling. A zillion light years from rigor, of course, as I am at least the same distance from my Leibnizization. For a lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.
Tau transcends the infinite regress of orders of: scaling of scaling of scaling ... by being self-referential. Or recursive. 
What is the Spawn factor of Tau?
If you'll let me, I'll illustrate this with a poetic paraphrase of the famous piece of Frank Herbert's:
I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.
To zoom out is useful. It puts the event networks of our spacetime in perspective. Including onto what the great Jorge Luis Borges called the Orbis Tertius:
''ORBIS TERTIUS. "Tertius" (Latin = third) is an allusion to: World 3: the world of the products of the human mind, defined by Karl Popper.''
Poetically stated, ''retrodiction studies'' enable us to get a glimpse of the "clear, cold lines of eternity".
Back in the 20th century, Prof. Robin Hanson put together this extremely insightful and strong document:
Long-Term Growth As A Sequence of Exponential Modes,
The economy grows [see: Footnote]. Unstoppable.
Hanson's unprecedented contribution was to provide us with a systematic orientation tool for how and why the economy grows.
It accelerates. See:
Mode        Doubling   Date Began    Doubles  Doubles  Transition
Grows       Time (DT)  To Dominate   of DT    of WP    CES Power
----------  ---------  -----------   -------  -------  ----------
Brain size  34M yrs    550M B.C.     ?        "16"     ?
Hunters     224K yrs   2000K B.C.    7.3      8.9      ?
Farmers     909 yrs    4856 B.C.     7.9      7.6      2.4
Industry    6.3 yrs    2020 A.D.     7.2      >9.2     0.094
The model identifies the past economy accelerators as:
- neural networks, evolving into a doubling of brain size every 30-ish megayears (hinting that a human level of intelligence was an inevitability within +/-30 million years of the Now, by virtue of the good old 'coin-toss' Darwinian algorithm alone).
- the human as the top-of-the-food-chain predator since around 2,000,000 B.C. (perhaps the human mastery of the Fire and the Blade is to blame), compressing the doubling time by over two orders of magnitude, down to a quarter of a million years.
- food production and ecosystem manipulation (or rather the collimation of farming, horse domestication and writing as accelerator components), leading to fewer than 40 human generations per economy doubling.
- all we know as division of labor, specialization, systematized Sci-Tech ... industry: the centralized ways for the production and control of knowledge, leading to another hundreds-fold compression, down to a mere ~decade of economy doubling time.
Recommended: digest each Hanson Engine (economy accelerator drive) with Bob Hettinga's 'enzyme':
My observation about networks in general is a rather obvious one when you think about it: our social structures map to our communication structures. As intuitive as it is to understand, this observation provides great insight into where the technology of computer assisted communication will take us in the years ahead.
Connectivity specs as indicator and drive.
Now, when we leave the past and use these models to gaze into the future, the really interesting stuff comes out.
Aside from explaining the overall trajectory of the economy detected by Brad DeLong in his equally monumental paper, the nucleus of meaning in Robin Hanson's paper is:
Typically, the economy is dominated by one particular mode of economic growth, which produces a constant growth rate. While there are often economic processes which grow exponentially at a rate much faster than that of the economy as a whole, such processes almost always slow down as they become limited by the size of the total economy. Very rarely, however, a faster process reforms the economy so fundamentally that overall economic growth rates accelerate to track this new process. The economy might then be thought of as composed of an old sector and a new sector, a new sector which continues to grow at its same speed even when it comes to dominate the economy.
Visualize: a Petri dish and sugar being expanded in size and quantity by the accelerating growth of the bacterial culture in it.
Hanson actually predicted, nearly a quarter of a century ago, ... something that is relentlessly coming:
In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be ... 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate. ... it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.
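Hanson's extrapolation can be reproduced from the table above: each past transition compressed the doubling time (DT) by a factor of 2 to the power of its "Doubles of DT" entry. Applying any of the last three compression factors to industry's 6.3-year DT lands in the roughly 1-3 week range the quote gives (Hanson's more precise internal figures yield 1.3, 2.1, or 2.3 weeks).

```python
# "Doubles of DT" for the hunters, farmers, and industry transitions,
# read off the table above.
doubles_of_dt = [7.3, 7.9, 7.2]
industry_dt_years = 6.3
weeks_per_year = 52.18

# Next doubling time if the coming transition repeats any of the
# last three compression factors.
next_dt_weeks = [industry_dt_years / (2 ** d) * weeks_per_year
                 for d in doubles_of_dt]
print([round(w, 1) for w in next_dt_weeks])  # each value falls in 1-3 weeks
```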
An economy accelerator avalanche is roaring down the slope of time towards us.
A brand new Hanson Engine is about to leave the assembly line.
Tau, is that you?
FOOTNOTE: To wrap the above statements in the flesh of the deep thesaurus of content on which they rest would conservatively consume hundreds of pages, even if only briefed. I promise to come back to these subtopic expansions (referring back to here) with a series of posts in the months to come, tying in the notions of: the economy as a network, the network as a computer, and what exactly it processes and outputs; the economy (like the universe, or life) as an endogenously driven, positive-feedback, self-amplifying, non-equilibrium, entropic, combinatorial-explosion system; wealth as economy-complexity growth in relation to GDP size, and the intimate dollars-joules connection in energy intensity; physical and economic limits of growth; self-reinforcing predator-prey models; knowledge as synonymous with skill, and so forth; economic cycles upon the DeLong curve ... to name a few. Readers' questions and comments will of course help a lot with subtopic prioritization, and will boost understanding (including mine). Thank you in advance!
NOTE: I currently have the pleasure and honor to be part of the Tau Team, but this post contains ONLY my personal views.
The liquid paradigm, feedback loops, the virtuous cycle and Tauchain. By Dana Edwards. Posted on Steemit. December 31, 2017.
What do I mean by the concept of a "liquid platform"? This is merely a re-articulation of the concepts of self amendment and self definition; in other words it is very much like an autopoietic design. Bruce Lee once said to "be like water", because water can adapt to any environment it is placed in by taking the form of its container.
So by liquid paradigm I mean that the core feature of true next generation platform design is going to be focused on maximum adaptability.
Feedback loops and the virtuous cycle
How can we have a platform which promotes continuous self improvement? If a platform has no hard-coded "self", then even the design of the platform is under constant negotiation and creation. This is key because it means Tauchain will be able to adapt more quickly than all competing platforms. Quicker than Tezos, because Tezos merely provides self amendment but lacks the virtuous cycle, the meta language, etc.
The Tau Meta Language allows for self definition at the level of languages. This means even the communication mechanism between humans and machines can be updated continuously. This continuous updating is the key design breakthrough of Tauchain because it means Tauchain will always be state of the art in any area. Think of a platform like Wikipedia where anyone can update any part of it in real time continuously so that every part of it is always the state of the art.
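Self definition at the level of languages can be sketched in miniature. The interpreter below is entirely hypothetical (nothing here is TML or Tauchain code, and the `define` form is my own invention): its rules live in an ordinary dictionary, and one built-in form lets programs rewrite that dictionary, so the language being spoken can be amended while it runs.

```python
def make_interpreter():
    rules = {}  # the language's current definition, open to amendment

    def run(program):
        out = []
        for op, *args in program:
            if op == "define":        # amend the language itself
                name, fn = args
                rules[name] = fn
            else:                     # speak the language as defined now
                out.append(rules[op](*args))
        return out

    return run

run = make_interpreter()
result = run([
    ("define", "greet", lambda who: f"hello, {who}"),
    ("greet", "tau"),
    ("define", "greet", lambda who: f"HELLO, {who.upper()}!"),  # redefine
    ("greet", "tau"),
])
print(result)  # ['hello, tau', 'HELLO, TAU!']
```

The second `greet` call means something different from the first because the language changed in between; that, scaled up enormously, is the continuous-updating idea described above.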
Starting at languages, the feedback loop can be created between humans and intelligent machines. Humans must make decisions on how to design Tau. These design decisions benefit from the virtuous cycle: the feedback loop between humans and machines allows the decision-making ability itself to be upgraded. This could even allow humans to transcend traditional human capabilities by relying on intelligent machines to assist in design, which means better future designs, which means better decision making, which means better future designs, and so on. This is the "virtuous cycle": a feedback loop running from humans to machines to humans to machines. The humans improve the quality of the machines by feeding in knowledge and new algorithms, just enough for the machines to become intelligent enough to help the humans help the machines even more efficiently in the next iteration of Tauchain, over and over again.
Humans and machines will seek more good and less bad for the formal specification of Tau itself. Good and bad designs will be defined collaboratively by the human participants by way of intelligent discussion. As discussion scales, bigger crowds mean more human minds involved, which means improved design, which eventually leads to a better and perhaps wiser Tau, which of course leads to wiser, even more intelligent discussions, which can lead to an improved formal specification, and to a better Tau. So that is a loop. It is also a loop between improving Tau and improving society, improving Tau, improving society.
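The compounding nature of this loop can be made concrete with a deliberately tiny model. The gain coefficient below is an assumption purely for illustration, not a claim about Tauchain: human design quality and machine capability each feed the other's next step, so improvement compounds rather than merely adds.

```python
def virtuous_cycle(iterations, gain=0.2):
    # Start both sides at a baseline of 1.0 and let each boost the other.
    human, machine = 1.0, 1.0
    for _ in range(iterations):
        human += gain * machine   # better tools -> better decisions
        machine += gain * human   # better designs -> better tools
    return human, machine

h5, m5 = virtuous_cycle(5)
h10, m10 = virtuous_cycle(10)
print(round(h5, 2), round(m5, 2), round(h10, 2), round(m10, 2))
# Doubling the iterations more than doubles both quantities.
```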
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.