Tauchain and the mysterious Futamura projections. By Dana Edwards. Posted on Steemit. October 15, 2018.
Futamura Projections and Partial Evaluation
While we know that Futamura projection is a planned and necessary feature of TML, it is unlikely that most of us know what a Futamura projection actually is. In fact, most people do not fully understand what BDDs can do either.
One video which can help for those who wish to study further is:
A distinction must be made between "Boolean Algebra" and "The Algebra of Boole". The Algebra of Boole is what is pertinent to understanding the BDD aspect of TML. Disclosure: I am not a mathematician, so the video above goes into a level of detail on which I am not qualified to claim any expertise. If you choose to take on the herculean task of carrying that cognitive load, please do so at your own risk. If you are really brave you can also check out the work of Boole himself directly.
For all who have suffered through the cognitive workload of that video, the next part of this discussion covers the capabilities and process of Futamura projection.
The relationship below concisely captures exactly what partial evaluation is:
Given a program p, static inputs SI, dynamic inputs DI, and outputs O, running the program directly computes p(SI, DI) = O. A partial evaluator (specializer) mix instead takes p together with only the static inputs and produces a residual program p_SI = mix(p, SI) such that p_SI(DI) = O for every dynamic input DI. Everything that depends only on SI is computed once, ahead of time.
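To make the definitions concrete, here is a minimal hand-rolled sketch in Python (my own toy illustration with invented names; this is not how TML's specializer works). The exponent plays the role of the static input SI, the base is the dynamic input DI, and the loop over the static input is executed away at specialization time:

```python
# Toy partial evaluation: specialize power(base, exp) on a static exponent.

def power(base, exp):
    """The general program p: `base` is the dynamic input, `exp` the static input."""
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_power(exp):
    """A toy 'mix': given only the static input, emit a residual program
    that takes just the dynamic input. The loop over `exp` is executed
    at specialization time and unrolled into straight-line code."""
    body = " * ".join(["base"] * exp) or "1"
    return eval(f"lambda base: {body}")  # e.g. lambda base: base * base * base

cube = specialize_power(3)  # residual program p_SI for exp = 3
assert cube(5) == power(5, 3) == 125
```

The residual program no longer contains a loop at all; that is the sense in which work depending only on SI has been paid for once, up front.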
We can input the description of our translator, which can be either a compiler or an interpreter. What we want to describe is the process by which the defined language translates to another language. Using an interpreter, we can describe the semantics of our programming language.
How do you compile a compiler?
At the most simple and basic level we start with one input and one output. In the abstract, you input your commands into the box and the box produces an output based on those commands. Most very simple software works this way. A compiler basically takes input (source code) and produces output (a program). The source code is the set of acceptable commands from which the compiler produces a program with the appropriate behavior. In essence we can think of the box as nothing more than a translator device which takes one set of symbols and produces another set of symbols as output.
Futamura offers three projections. This is a self-referential process, so what if instead of just one input into the box we now have two? With two inputs we can not only send source code into the box and watch it translate into a program, we can go further and create an "interpreter". Using this second input we can define the behavior of the box by sending a description of how we want the box to behave. In other words, you can now rely on an interpreter, which is distinct from a compiler in that it translates one statement at a time. Compilers, interpreters, and assemblers are all translators, so ultimately symbol manipulation is at the core of all this activity.
To compile a compiler, you take an interpreter as input and get a compiler as output. Wikipedia lists the three projections:
1. Specializing an interpreter for given source code, yielding an executable
2. Specializing the specializer for the interpreter (as applied in 1), yielding a compiler
3. Specializing the specializer for itself (as applied in 2), yielding a tool that can convert any interpreter to an equivalent compiler
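As a rough Python analogy (my own sketch: functools.partial stands in for a real specializer, closing over its argument rather than emitting residual code, but the shape of the three projections is the same):

```python
from functools import partial

# A toy interpreter for a tiny instruction language.
# `program` is the static input; `inp` is the dynamic input.
def interpret(program, inp):
    acc = inp
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

double_plus_one = [("mul", 2), ("add", 1)]

# First projection: specializing the interpreter on a fixed program
# yields an executable for that program.
executable = partial(interpret, double_plus_one)
assert executable(10) == 21

# Second projection: specializing the specializer on the interpreter
# yields a compiler (a function from programs to executables).
compiler = partial(partial, interpret)
assert compiler(double_plus_one)(10) == 21

# Third projection: specializing the specializer on itself yields a
# compiler generator, turning any interpreter into a compiler.
compiler_generator = partial(partial, partial)
assert compiler_generator(interpret)(double_plus_one)(10) == 21
```

A real specializer would emit stand-alone residual code rather than a closure, but the nesting above shows why each projection is just the previous one specialized once more.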
In other words, Ohad will have to rely on TML to compile TML using the third Futamura projection in the list. In essence he will compile TML by using TML itself. This is the most confusing aspect to explain because it is a mode of self-reference in which TML is essentially used to create itself: the specializer is specialized for itself.
In my opinion this is a moment similar to when Satoshi Nakamoto mined the genesis block to prove Bitcoin could be built. If Ohad can achieve the feat of compiling TML using TML, then we will know from this that TML is able to work, and at minimum that Tauchain on the most basic level is feasible. The question of the logic still remains, of course. While in theory we know the logic is supposed to work, it is also an area of theory which very few of us understand well. If it is demonstrated that the logic does in fact work as intended, then we will know for certain that Tauchain is feasible.
Futamura projection is perhaps one of the most difficult parts of TML to explain conceptually, due to its self-referential nature. Excuse me if I made any errors in my attempt to explain it.
Boole, G. (1847). The Mathematical Analysis of Logic.
Demo coming soon? Tau Meta Language in C++ updated on Github. By Kevin Wong. Posted on Steemit. October 1, 2018.
Awesome visual promotion design by @capitanart for my only other favourite blockchain project besides Steem. Tau for the win! It might sound overly dramatic, but life has never been the same since I came across the following statement on 31st December 2017:-
"Consider a process, denoted by X, of people, forming and following another process denoted by Y. Tau is the case where X=Y."
Say hello to our little friend above. It's the formula for intelligent decentralized networks. I'd really like to write about this all day and night, but it's just difficult talking about something without a product to inspect. Regardless, it's still very real in my head because it has been shown to be a technical possibility. Is this the alpha protocol, the E=mc² of blockchain technology?
Good news: there's something to show soon. It looks like the MVP release is on the horizon. New code is just out on https://github.com/IDNI/tau. The author, Ohad Asor, also remarked "the code is written. now i have to fix its bugs."
At only 384 lines of C++ code, what could it possibly demonstrate? If this is indeed the first instance of Tau Meta Language (TML), it would then need to be re-written in TML itself for the next significant stage of development. In the meantime, maybe you'd want to read up on the following if you're interested:-
Honestly, I don't really know what to expect at the moment. All I know is that I've never been this excited before. Alright, time to attend to some life obligations before getting back into writing about Tau's development. As always, thanks for reading!
Note: here's part 1 of my series on Tau, more to come soon -https://steemit.com/blockchain/@kevinwong/what-is-tauchain-and-why-it-could-be-one-of-the-greatest-inventions-of-all-time-part-1
Disclaimer: Not financial advice.
Tauchain Update: Significant code changes in Github and discussion of progress. By Dana Edwards. Posted on Steemit. September 30, 2018.
Just a few hours ago, lead developer and founder of the Tauchain project Ohad Asor released his most significant code update yet. This blog post discusses some of those updates and puts them into context. In order to make sense of the current codebase ("Tauchain Codebase"), I will also discuss a bit about the makeup of the code.
The significant breakthrough - Ohad implements the BDD
First, some might be wondering: what is a BDD? A BDD is a data structure called a binary decision diagram. This data structure is, in my opinion, as significant to Tauchain as the "blockchain" data structure was to Bitcoin. For those who do not have a computer science degree, I will elaborate below on what exactly a data structure is before discussing what a BDD is and why it is so significant.
Brief discussion on what a data structure is
In programming, a data structure is a concept representing a method of organizing data. For example, blockchain is all about how records are stored as blocks. There are other data structures that support decentralized data management and storage, such as the distributed hash table.
A blockchain data structure looks like this for visualization:
By Matthäus Wander [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
A hash table looks like this for a visual:
By Jorge Stolfi [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], from Wikimedia Commons
Really good programmers choose the appropriate data structure to meet the requirements of the project. The BDD was chosen specifically by Ohad because it provides efficiency boosts in a key area necessary for Tauchain to function as intended. Specifically, we know Tauchain requires partial fixed point logic in order to have decidability in PSPACE. We also know Tauchain requires decentralization and efficiency. Efficiency is best understood in terms of the trade-off between time and space: we have unlimited amounts of neither, so we must sacrifice one to get more of the other.
When we look at the codebase, we know that Ohad can optimize either by sacrificing space, in which case the executable will be bigger but the code runs faster, or by sacrificing time, in which case the executable is smaller to save memory but may run slightly slower. This highlights the essential trade-off between time and space when optimizing code, and of course there is more to it, because the algorithms within a codebase make similar trade-offs.
Now what exactly is a BDD (binary decision diagram)?
Now that we understand the basics of efficiency and what a data structure is, we can make more sense of what a BDD is. To understand why the BDD as a data structure is so important to Tauchain, we have to remember that Tauchain is about logic. We can take the most basic example involving Socrates:
A predicate takes an entity or entities in the domain of discourse as input while outputs are either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q. The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and is instantiated as "Plato" in the second sentence. While first-order logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not.
Based on the rules of first-order logic we can have our inputs and receive our outputs. In the basic example above we can see a bit about how logic works. To elaborate further:
Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a, and on the interpretations of the predicates "is a philosopher" and "is a scholar".
A truth table has one column for each input variable (for example, P and Q), and one final column showing all of the possible results of the logical operation that the table represents (for example, P XOR Q). Each row of the truth table contains one possible configuration of the input variables (for instance, P=true Q=false), and the result of the operation for those values. See the examples below for further clarification. Ludwig Wittgenstein is often credited with inventing the truth table in his Tractatus Logico-Philosophicus, though it appeared at least a year earlier in a paper on propositional logic by Emil Leon Post.
When we are dealing with logic we may find that a truth table helps with visualization.
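As a quick illustration (my own sketch, unrelated to the TML codebase), a truth table for any boolean function can be generated mechanically by enumerating every configuration of the input variables:

```python
from itertools import product

def truth_table(fn, names):
    """Print and return the truth table of an n-input boolean function."""
    rows = []
    print(" ".join(names) + " | out")
    for values in product([False, True], repeat=len(names)):
        out = fn(*values)
        rows.append((values, out))
        cells = " ".join("T" if v else "F" for v in values)
        print(f"{cells} | {'T' if out else 'F'}")
    return rows

# P XOR Q, as in the Wikipedia example above.
table = truth_table(lambda p, q: p != q, ("P", "Q"))

# Every row lands on exactly one of two possible values.
assert all(out in (True, False) for _, out in table)
```

With n input variables there are 2^n rows, which is exactly why a plain truth table does not scale and a more compact representation like a BDD becomes attractive.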
Now with this knowledge we have the most basic Socrates example:
This can be represented via a truth table, and the argument is called a syllogism. To solve it we apply a kind of reasoning called deductive reasoning: if "All men are mortal" is true and "Socrates is a man" is also true, then "Socrates is mortal" must be true. If we were to say all men are mortal but Socrates is immortal, then Socrates cannot be a man. So if Socrates is a man he must be mortal, or there is what we call a contradiction. Logic is all about avoiding such contradictions, and binary or boolean logic in particular must always reach a conclusion that is one of two possible values.
If I ask you to play a game which is guaranteed to end in one of exactly two possible outcomes, then we have a good example of a boolean function: 1 or 0, true or false, on or off, a or b.
Some of you may be familiar with the data structure we call a DAG (directed acyclic graph). If you understand that concept, you can visualize a BDD as being very similar to a propositional DAG.
By David Eppstein [CC0], from Wikimedia Commons
We know from DAGs that there is a finite number of vertices and edges. We may also be able to visualize a topological ordering, and if you remember my post on transitive closure you might also remember the visuals on how that can work:
A binary decision diagram can represent a truth table:
By The original uploader was IMeowbot at English Wikipedia. (Transferred from en.wikipedia to Commons.) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons
From these visuals it should now be clear how this is critical to the functioning of Tauchain. The BDD data structure also allows for efficient model checking. To understand why, we have to consider the boolean satisfiability problem.
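To make the idea concrete, here is a minimal reduced BDD sketch in Python (my own toy code, far simpler than the BDD machinery in the TML codebase): nodes are built bottom-up by Shannon expansion, redundant tests are dropped, and identical subgraphs are shared through a hash-consing table.

```python
# A BDD node is either a terminal (True/False) or a tuple
# (variable_index, low_child, high_child).

unique = {}  # hash-consing table: structurally equal subgraphs become one node

def mk(var, low, high):
    if low == high:                      # reduction rule: the test is redundant
        return low
    key = (var, low, high)
    return unique.setdefault(key, key)   # share identical nodes

def build(fn, nvars, env=(), var=0):
    """Build the BDD of an n-input boolean function by Shannon expansion."""
    if var == nvars:
        return fn(*env)
    low = build(fn, nvars, env + (False,), var + 1)
    high = build(fn, nvars, env + (True,), var + 1)
    return mk(var, low, high)

def evaluate(bdd, assignment):
    """Follow one root-to-terminal path: at most one step per variable."""
    while isinstance(bdd, tuple):
        var, low, high = bdd
        bdd = high if assignment[var] else low
    return bdd

xor = build(lambda p, q: p != q, 2)
assert evaluate(xor, [True, False]) is True
assert evaluate(xor, [True, True]) is False
# Sharing: rebuilding the same function returns the very same node object.
assert build(lambda p, q: p != q, 2) is xor
```

The sharing is what makes BDDs compact in practice: once two subformulas reduce to the same diagram, they are stored exactly once, and equivalence of two formulas becomes a pointer comparison.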
This highlights the fact that BDD can be used to create a SAT solver.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms.
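The backtracking-plus-unit-propagation loop that the quote describes fits in a few dozen lines of Python. This is a bare-bones sketch of DPLL of my own (no clause learning or heuristics, and unrelated to the actual TML code); clauses are lists of nonzero integers, where a positive literal means the variable is true and a negative one means it is false:

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                       # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    lits.append(lit)
            if satisfied:
                continue
            if not lits:                 # every literal false: conflict
                return None
            if len(lits) == 1:           # unit clause forces a value
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            else:
                simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                # all clauses satisfied
    var = abs(clauses[0][0])             # branch: try both values, backtrack
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r) is satisfiable:
model = dpll([[1, 2], [-1, 2], [-2, 3]])
assert model is not None and model[2] and model[3]
# p and (not p) is a contradiction:
assert dpll([[1], [-1]]) is None
```

The exponential worst case mentioned in the quote lives in the branching step; unit propagation only prunes the search, it cannot eliminate it.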
Without getting overwhelmed by technical details the key points are below:
To read the code for yourself and track the progress of Tauchain development take a look at Github:
''We live in a world in which no one knows the law.''
Ohad Asor, Sept 11, 2016
I continue herewith sharing my current state-of-grok of the, up to now, four scriptures of the so-called 'newtau'. Sorry for the delay, but it comes mostly from the effort to contain the outburst of words, catalyzed by the very exegetic process of such rich content, into a reader-friendly shorter form.
The subject of vivisection textographically identifies as the first three paragraphs of ''Tau and the Crisis of Truth'', Ohad Asor, Sep 11, 2016.
The four core themes extracted are enumerated below, with a streak of comments of mine, kept modest so as not to sidetrack the thought or spoil the original message:
As a guy who has been immersed in Law for more than a quarter of a century, I can swear with both hands on my heart to the notion of the unknowability of Law.
Since my youth in law school I have asked myself how it is possible to have the 'rule of law' at all, when every legal system ever known has required humans to operate it!?
It seemed that the only requisite, or categorical, difference between mere arbitrary 'rule of man' and the 'rule of law' was that in some isolated cases some ruling men happened to be internally programmed by their morals to produce 'rule of law' appearance effects by 'rule of man' means.
Otherwise, 'rule of law' done via 'rule of man' poses extremely serious threats that law will be used by some to exploit and harm others.
In that line of thought, my conclusion was that the Law is ... yet to come.
What we know as Law is not a good networking-protocol software of mankind as such; rather, we see comparatively rare examples of individually well-programmed ... lawyers.
It will come on the wings of a technological breakthrough, just as flying came with the invention of airplanes, the moonwalk needed the advent of rocketry, and remembering without staying alive needed writing. The Law is an old dream. If we judge by the depth of the abyss of folklore, one of humanity's most ancient dreams, indeed. Needless to repeat myself that this is what sucked me into Tau as relentlessly as black-hole spaghettification :)
The frustration with Law of the great Franz Kafka, referred to by Ohad and expressed in his book The Trial, becomes very understandable for Kafka's epoch, which lacked the comforting hope of a technology we already have, the computer, and the overall progress in the fields of logic, mathematics, and engineering forming a self-reinforcing loop centered around this sci-tech of artificial cognition.
Similarly to nuclear fusion, which is always a few decades away but whose gap closes noticeably nowadays, we are standing on the cliff of a Legal gap.
Mankind's heavy involvement in cognition technologies, especially in the last several decades, has outlined multiple promising directions of further development, which seem to bring us closer to the ability to compensate for the fundamental deficiencies of Law and, in fact, to finally bring it into existence.
It took the entire Ohad Asor, however, to identify the major reasons why the Law is still bottlenecked out of our reach, and to propose viable means to bridge us across that Legal gap... The other side is already in sight.
In the first place, it is the language that is to blame!
The human natural language. Our most important attribute as a species. The maker of mankind. The glue of society. It just emerged; it hasn't been created. It has patterns, vaguely conventional, rather than an intentionally coined set of solid rules. There aren't firm rules for changing its rules, either ... The natural human language is mostly a wilderness of untamed, pristine, naked nature, dotted here and there with very expensive and hard-to-install-and-maintain ''artefacts''. Leave it alone, outside the coercion of state mass media, mass education, and national language institutes, and it falls back into a host of unintelligible dialects. Even when aided by the mnemonic amplifier we call writing.
Ambiguity is characteristic of the natural language, a feature in poetry and politics, but a deadly bug in logic and law.
We'll put aside for now the postulate of the impossibility of a single universal language, to revisit it later when its exegetic turn comes, in another chapter on another scripture. Likewise, not in this chapter will we cover the neurological human bottlenecks which Tau targets to overcome. Let's observe the sequence of the author's thoughts and not fast-forward.
Instead, I'll dare to share with you my own hypothesis about why the natural human languages are the way they are. (I'm smiling while I type this, because I can visualize Ohad's reaction upon reading such frivolous lay narrative. I hope that, being too busy, he actually won't.) To say that human languages are just too complex does not bring us any nearer to a decent explanation. Many logic-based languages are more than a match for the natural ones in terms of expressiveness and complexity. That cannot be the reason.
My suspicion is rather that natural human languages pose such Moravec hardness because they are not exactly languages. Languages are conveyors of meaning. Human languages convey not meaning, but indexes, or addresses, or tags of mind states. The meaning is the mind state. Understanding between humans is a function not only of shared learnt syntax but also of shared lives: of an aggregation of similar mind states to be referred to by matching word keys.
If this is true, it offers another angle for grokking the solution: human users leaning toward the machine by using a human-intelligible Machinish, instead of Tau waiting for the language barrier to be broken and for machines to start speaking and listening to Humanish.
In a nutshell, we still await the Law because Law is not doable in Humanish. Bad software. And the other side of the no-law coin is that humans are no cognitive ASICs. We do cognition only incidentally, and in order to do what other animals do: to survive. Bad hardware.
In order for law to become law, it must become hands-free.
Not humans to read laws, but laws to read laws.
The technology to enable that looks to be within arm's reach.
Ok, so far we have butchered the law and the language. What's left?
The nature and essence of human language brought one of the most harmful and devastating notions ever. Literally, a thought of mass destruction.
The ''crisis of truth''. The wasteland left by the toxic idea-spillover of ''there is no one truth'', or even ''there is no truth at all''. This is not only an abstract, philosophical problem. Billions of people have actually been killed for somebody else's truth.
It is not by accident that the philosophers who immersed themselves in this pool are nicknamed 'Deconstructivists'. Tracing their epistemic genealogy back, we see, by the way, that they are rooted in faith rather than in reasoning, but that is another story.
The general problem of truth, of which the problem of law is just a special case, opens up two important aspects:
Number one: all knowledge is conjectural with respect to truth, and truth is an asymptotic boundary, forever to close in on but never to reach, like the speed of light or absolute zero. Number two: human languages make pretty lousy vehicles to chase the truth with.
If words really serve just to match people's thoughts together, then there are thoughts without words and words without thoughts. Words mismatch thoughts, so how can we expect them to bridge thoughts to things? Entire worlds of nonsensical wording emerge, dangerously disturbing the seamless unity of things and thoughts. Truth displaced.
''But can we at least have some island of truth in which social contracts can be useful and make sense?''
This island of shared truth is made of consensus  bedrock and synchronization  landmass.
Truth and Law self-enforced. From within, instead of by violence from without. And in a self-referential, non-regressive way.
''We therefore remain without any logical basis for the process of rulemaking, not only the crisis of deciding what is legal and what is illegal." 
Peter Suber with his ''The Paradox of Self-Amendment: A Study of Law, Logic, Omnipotence, and Change''  proposed a rulemaking solution which he called Nomic .
''Nomic is a game in which changing the rules is a move.'' 
The merit of Nomic is that it really eliminates the ills of the infinite regress of laws-of-changing-the-laws-of-changing-the-laws, ad infinitum, by use of transmutable self-referential rules. But Nomic suffers from a number of issues. The first, in the spotlight of this chapter, is the fact that we still remain with the ''crisis of truth'', in which there is no one truth; the others, like the scalability of sequencing and voting, we'll revisit in their order of appearance in the discussed texts.
The so-called 'newtau' goes past the inherent limitations of the Nomic system and resolves the 'crisis of truth' problem.
The next few chapters will dive into Decidability and how it applies to provide a solution to the problems described above.
 - https://en.wikipedia.org/wiki/Grok
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-intro
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-the-two-towers
 - http://www.idni.org/blog/tau-and-the-crisis-of-truth.html
 - http://www.behest.io/
 - https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://en.wikipedia.org/wiki/Rule_of_law
 - https://en.wikipedia.org/wiki/Tyrant
 - https://en.wikipedia.org/wiki/Morality
 - https://en.wikipedia.org/wiki/Spaghettification
 - https://en.wikipedia.org/wiki/Franz_Kafka
 - https://en.wikipedia.org/wiki/The_Trial
 - https://www.amazon.com/Merchants-Despair-Environmentalists-Pseudo-Scientists-Antihumanism/dp/159403737X
 - https://en.wikipedia.org/wiki/Language
 - https://en.wikipedia.org/wiki/Official_language
 - https://steemit.com/blockchain/@karov/tau-through-the-moravec-prism
 - https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
 - https://www.etymonline.com/word/manipulation
 - https://en.wikipedia.org/wiki/Deconstruction
 - https://en.wikipedia.org/wiki/Consensus_decision-making
 - https://en.wikipedia.org/wiki/Synchronization
 - http://legacy.earlham.edu/~peters/writing/psa/index.htm
 - https://en.wikipedia.org/wiki/Nomic
 - https://en.wikipedia.org/wiki/Infinite_regress
 - the illustration is a painting, courtesy of the author Georgi Andonov https://www.facebook.com/georgi.andonov.9674?tn-str=*F
Tauchain and the privacy question (benefits of secret contracts and private knowledge). By Dana Edwards. Posted on Steemit. August 21, 2018.
As we can see from the current trend in crypto, there is now a move toward privacy. In my opinion, most people underestimate the utility of these cryptographic advances. In this blog post I will highlight a particular advance, enabled by new cryptographic techniques (and hardware techniques such as trusted execution environments), which can be of massive benefit to the long-term believers in Tauchain.
The problem: Anyone can copy the code Ohad writes if it's open source
So we have a problem with Tauchain: all of the code Ohad is writing for TML is open source and on Github. This allows a competitor to simply steal his best ideas and, in a sense, rob the token holders who actually funded the development of the code. This happens very often: we see a new innovation in the crypto space, and soon after we see a new ICO or a new group come out of nowhere acting as if they originated the technology. In some cases the new group may even be much more centralized, more secretive, and very well funded.
The solution: Secret contracts (private source code and execution)
The trusted execution environment allows for the protection of intellectual property on the hardware level, while sMPC (secure multiparty computation) can achieve similar ends on the software level. The idea is that this provides a solution to idea theft: a community can keep certain critical pieces of code, data, algorithms, or other unique features secret. This creates an entirely new way to monetize knowledge, code, and ideas, which Agoras will be uniquely positioned to leverage.
Guy Zyskind of the Enigma Project provides the definition of what secret contracts are and how they work. The Enigma Project deserves credit for introducing this technology and for identifying a major problem in the cryptospace. Traditionally, on Ethereum and all other current platforms, when you release a DApp your code has to be open source. It is not possible to create a closed-source or private decentralized app. In addition, the app has to be executed in the open, so all data running through it is public.
Strategic implementation of private knowledge and source code can allow Tauchain to maintain a dominant position
In most cases the world benefits when knowledge is shared, and I am in favor, most of the time, of sharing as much knowledge as is safe. The problem with algorithms, source code, and certain kinds of knowledge is that sharing them hands a competitive advantage to people with more financial resources. Such individuals can simply watch Github and copy. They can hire programmers to compete with the Tauchain and Agoras developers, and as long as the code is open there will be no real reason to buy the Agoras token long term.
What if the Tauchain development team and Agoras developers decide to implement private knowledge bases? What if it becomes possible to run code in a trusted execution environment so that other developers around the world cannot see the code or the algorithms? This would allow Tauchain to build Agoras in such a way that no other project is capable of duplicating it. It would lock the value created by the community's brainpower into the Agoras token, making it a true knowledge token which cannot simply be copied with ease by another project.
In fact, this is a strategy that developers making apps with Enigma's secret contracts are looking into as we speak. This competitive advantage of secrecy will change the landscape of the cryptospace. What does this enable for Agoras? Imagine an encrypted Github which developers can contribute to, but only those developers can see the code. Imagine that after the code is written, no one else can see it if it is set to run privately. This would allow developers to code in secret and have the code run on computers without anyone knowing what the code is.
This can open up security vulnerabilities, but Tauchain can defend against them. In particular, it matters what is private and what is public: commercially critical aspects can be private, while security-critical areas can always be kept public. There may even be ways to prove that the code does not behave in a certain way without actually sharing the code (using advanced cryptography). In fact, my favored way of implementing this feature would be to time-lock the release of the source code by a number of months or years.
The idea isn't to keep things closed or secret forever. Privacy is about access control, and about keeping things secret long enough to maintain a competitive advantage. A time delay to unlock the source code, for example, could work. It is even possible to use puzzle-based time-lock encryption, so that the community has to mine to get the source code released early (if there is a serious need or threat). In this way all secret blocks of code would be unlockable, but not for free, which makes it less likely that the community will seek to unlock them unless there is a genuine reason (beyond just stealing ideas).
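As an illustration of how puzzle-based time-lock encryption can work, here is a toy version of the classic Rivest-Shamir-Wagner construction in Python (my own sketch with tiny, insecure parameters; a real deployment would use large primes and calibrate t to the desired delay). The creator, who knows the factorization of n, hides the secret cheaply; anyone else must grind through t sequential squarings:

```python
# Toy RSW time-lock puzzle: the secret is masked with 2^(2^t) mod n.

def make_puzzle(p, q, t, secret):
    """Creator side: knowing p and q, compute the mask via Euler's theorem
    in logarithmic time instead of t squarings."""
    n = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)               # shortcut only the creator can take
    mask = pow(2, e, n)              # equals 2^(2^t) mod n
    return n, t, (secret + mask) % n

def solve_puzzle(n, t, locked):
    """Solver side: without the factorization, perform t sequential squarings.
    The work cannot be parallelized, which is what creates the time delay."""
    x = 2
    for _ in range(t):
        x = pow(x, 2, n)
    return (locked - x) % n

# Tiny demo primes (hopelessly insecure; for illustration only).
n, t, locked = make_puzzle(10007, 10009, t=10000, secret=424242)
assert solve_puzzle(n, t, locked) == 424242
```

Because each squaring depends on the previous one, adding hardware does not shorten the delay, which is exactly the "mine to unlock early, but not for free" property described above.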
What do you think about these ideas? If you agree or disagree, comment below. Strategic IP (intellectual property) is used by major corporations to give themselves a competitive advantage. The crypto community can do the same thing in ways legal mechanisms can't, and in fact can do it more fairly, because often the people or companies awarded IP rights aren't the actual inventors. A knowledge economy is fantastic, but if the knowledge is simply harvested by big corporations monitoring a wide-open network, then it will be hard to bring value to a knowledge token.
UPDATE: Many people ask where to buy Agoras. The problem is it's not widely available on centralized exchanges. The only exchange I know that has it is Bitshares. So if anyone really wants to buy Agoras (AGRS) which is the token of discussion in this post feel free to buy it at:
There are 42 million intermediate tokens in total. The current price is 0.00010700 BTC, which is around 70 cents. This is the cheapest I've seen it in a while, since for a long time it was in the $1.50-$1.30 range. This is a very speculative token at this time, so buy at your own risk; I'm not providing any financial advice. I have of course been a holder of this token for years.
Tauchain 101: Essential Reading On One Of The Most Revolutionary Blockchain Projects Under The Radar... By Rok Sivante. Published on Steemit. August 3, 2018.
Amidst countless blockchain projects hyping themselves up as "the next big thing," there are a few that have been working under the radar that hold the promise - not in word, but in substance - of truly being revolutionary game-changers.
Such ventures have not yet often come into the spotlight. Partly, due to that their founders have focused first on the fundamentals of creating something that speaks for itself versus the all-too-common approach of prioritizing sensationalistic marketing. And partly, because the degree of innovativeness they represent - in tandem with a complexity in scope of the larger visions and implications of their success - does not always lend itself to an easy understanding upfront.
One such project - still very early on in its development, yet holding transformative potentials no less grand than those of Bitcoin and Ethereum as they birthed and evolved the blockchain landscape:
Until recently, with the launch of a new website that has successfully managed to articulate the project's vision much more clearly, understanding what Tauchain is striving to accomplish was a domain only a very few highly intelligent, technically inclined readers dared to tread. And prior to December 2018, there was no code - only an unproven concept spearheaded by a single Israeli developer, Ohad Asor, whom nearly all who've managed to connect with have declared to be one of the most brilliant geniuses they've ever met, possibly ahead of his time.
Just as Bitcoin introduced blockchain as an innovation radically altering the trajectory of our societal, economic, and technological evolution - and Ethereum continued in suit with its upgrades to expand in developing upon the vision with entirely new sets of capabilities for developing a range of decentralized applications and smart contracts - so too, may Tauchain be such a platform whose success proves comparable, the impact of which may bring quantum leaps in the Blockchain Revolution.
How and where to start in describing Tauchain...?
Well, were we to begin with the technical side of things, it'd be likely to lose 98% of the audience. So perhaps, a better starting point might be the bigger picture:
This generalized overview, however, still only barely scratches the surface.
While the intended ends may be that of a generic concept enabling drastically-increased efficiency in global collaboration, the means by which such is to be achieved entails a number of innovative component developments that each hold great significance and implications of their own.
While each may require deeper exploration to better grasp and begin piecing together into the bigger picture, the Tauchain website now offers an overview of key features which account for just some of what differentiates it from other blockchain platforms - and enable new collaborative capabilities not possible with existing technologies:
While it'd be possible to expand upon each in great detail - both in regards to the functionality and implications for their applications - this particular piece of writing is to serve as a basic introduction to some of the best, most-easily-accessible content written on Tauchain to-date.
And as we transition into that content, we shall begin with a quote summarizing the core essence of Tauchain, as approached from but one angle:
This project created by Ohad Asor is really ambitious and aims to create the internet of knowledge.
Some people would label it as an Artificial Intelligence, but according to the creator it is something totally different. To sum up, and so you understand me: Tau-chain is a tool that knows how to interpret any information and deduce any consensus. This tool can be used in any field - judicial, political, academic, social, scientific - and, without limits, by any assembly from 2 people to a million, for example.
~ @capitanart, from "My experience with Tau-chain"
The collection begins with two selections from Steemit's @trafalgar.
If anyone has successfully managed to distill the essence of the Tauchain vision into words that'd serve as a foundational Tauchain 101 intro, it'd have been him in these two excellent pieces:
What Is Tau? - My Only Other Crypto Investment
The Power of Tau - Scaling the Creation of Knowledge
Next come three short articles from @flis, which may not go into any new details beyond the two above, yet offer a slightly different, simplified perspective to reinforce the clarification of Tauchain's key concepts:
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building
How Tau-Chain can be implemented in practice
Tau Chain vs. Tezos - which platform will provide a better solution?
~ design credit: @voronoi
Next come a few selections from @dana-edwards, who has likely been the single individual who has done the most to translate the highly complex technical vision of Ohad Asor into a more approachable form, from which non-academics may begin to better understand Tauchain.
Quite possibly the first to write of developments and share outside of the project’s IRC channel and Bitcoin talk thread, Dana has one of the most comprehensive grasps publicized anywhere on the project, and his writings continue to serve in establishing bridges for more people to discover and deepen their own comprehensions of the innovations Tauchain represents to not only computer science and the blockchain revolution, but cultural & societal evolution as well.
What follows are a collection of his writings related to the project which excellently piece together key ideas and insights, from which the gaps may be filled in to grasp a firmer idea of just how significant these developments could be and what the bigger picture of their success might look like:
What Tauchain can do for us: Collaborative Serious Alternate Reality Games
What Tauchain can do for us: Finding the world's biggest problems
Tauchain: The automated programmer
Artificial morality: Moral agents and Tauchain
What Tauchain can do for us: Effective Altruism + Tauchain
Collaborative Alternate Reality Games + Tauchain = UBAs (Universal Basic Assets)?
Tauchain and Tezos, why adaptability is the key to surviving in a fast changing environment
My commentary on Ohad's latest blog post: "Agoras to TML"
The following three pieces are not introductory-level, and will likely require a background in computer programming to understand. However, for anyone reading who might be interested in diving deeper into the technical side of the project, they are included here:
Tauchain is not easy to understand but here are some concepts to know to track Ohad's progress
For all who are researching Tauchain (TML) to understand how it works, a nice video!
More on partial evaluation - How does partial evaluation work and why is it important?
~ design credit: @crypticalias
One other writer covering Tauchain needing to be mentioned: @karov.
While not the easiest to read and understand, the Steemit account of Georgi Karov is undoubtedly one of the most consistent sources of coverage on the project.
A lawyer by-trade and currently one of the three members of the core team, @karov's insights into the project are reliably detailed, expansive into philosophical territory, and fascinating.
Although none of his articles have been included in this introductory collection, those who may be interested to keep up-to-date with coverage on the project would be well-advised to follow his Steemit blog - and/or read backwards through the last few months of his posts there, as the blog is nearly-entirely Tauchain-related content.
Lastly, though not least:
Coming from one of Steemit's most brilliant early-adopter-minds, @kevinwong, this one is a quick read in itself with some key points worth factoring in to a proper assessment of the project. And - far lengthier than the post itself - the comments thread also contains some gold:
Is Tauchain Agoras in Good Hands?
And to wrap up with another excellent quote from design consultant to the project, @capitanart - who is another to follow for updates:
The goal of Tau is to create a supermind, to solve the limitations inherent in human communication on a large scale.
Able to deduce consensus and understand discussions, Tau can generate and execute code automatically based on consensus, through a process known as code synthesis. This will greatly accelerate the production of knowledge and streamline most of the large-scale collaborative efforts we can imagine in today's world.
~ design credit: @overdye
It's Thursday and I'm back, guys.
It's been a long time, but here I am again :)
This post's theme had been getting ripe in my head for a long time. Something like since 2014.
Recently I got some data to put together the stepping stones for turning my mere suspicion into more of a grounded conclusion.
The problem was that it was also growing in width and depth with time, so here is a momentary snapshot or sketch-map of it, which I intend to elaborate on further.
I'll start by shooting two slogan-missiles which constitute a super-compression of lotsa research, and which will be revisited soon in a separate series of articles.
Trust is Force
''you trust 'em only as much as you can make 'em to...''
Money is Mnemonics
Yes, precisely THIS is the core essence and function of ANY monetary system - even the primordial barter one, with its naturally emerging special tokens to mitigate its intrinsic exponential wall of unscalability - to account for, or remember, human activity. That is, money is always work to prove work. Basically, we need to remember due to the impossibility of simultaneity of transactions.
Which I have already gone over ... and, I beg your pardon - three, not two slogans. The third one is:
Law is Between, Code is Within
I will explain later what I mean and how it ties up with the former two. In a nutshell, it is about enforceability as the essential characteristic of all law. For now I will just hint that the reason why Force (coercion) is deemed to be fundamentally non-decentralizable is the Pauli exclusion principle, which is kinda a ''location conservation law''.
You already know my taste for epistemological 'archaeology'; that's why I think it is better to carry the story on in chronological order.
Back in 2014 I stumbled upon a series of extremely astute and deeply thought-out articles on the cost of several well known monetary systems in comparison with Bitcoin, which had just grown enough to become visible to the unaided eye.
I remember I discovered these great articles by the obviously great Hass McCook in the wake of the MtGox boom-and-bust aftershock, when huge anxiety about the 'wastefulness' of Bitcoin mining was reigning over public sentiment. (It happens every time the price nears the production cost.)
My search, which hit upon those articles, was driven by the quite legitimate question of:
''If crypto is wasteful, then how much does the traditional fiat cost us, god damn it?''
Well, the comparison turned out, as I suspected, in favor of neither the quite recent demetalized, fractalized-centralized, double-entry book-keeping debt mnemonics of the banknote monetary system, nor the millennia-old 'heavy metal' single-entry money, where physical possession of gold/silver denotes your purchasing power...
And it turned out it was not at all just about the costs of mining, refining, casting, ink, printing presses, storage, accounting, counterfeiting countermeasures ... the bill to pay also includes all the social infrastructure and capital devoted to making the system work and keeping it ticking ...
Essentially, everything that is known as ... government. All its buildings, all its salaried humans, all their guns, pens, pensions, courts, judges and bailiffs ... everything.
All of that is needed in order for a common Ledger to be built, maintained, broadcast and kept. The difference between government and governance is obvious - the former is the means to an end, the latter is the end. The former is the machine, the latter is the function.
Here is the place to insert three other quick notions which are in the pipeline for revisiting and furnishing with separate articles:
Firstly, Mnemonics is subject to big evolutionary/developmental forces, just like anything else in the combinatorial explosion which the universe, nature and society are ...
You noticed above the notion of money's emergence kinda coinciding with writing? The Sumerian example.
Writing is a mnemonics amplifier. Just like combustion engines are transportation boosters.
The better the memory and memory-sharing systems we have at our disposal, the better the money we have.
Money is technology .
Secondly, any book-keeping - regardless of whether we write by hand on cave walls or papyri, by blade on a wooden stick, or by the most sophisticated laser-quantum methods on the most sophisticated multi-dimensional crystals - is, yeah, a function of writing. We can go even further and state that illiterate verbal folklore - the only thing we had for millions of years - is a form of verbal writing onto each other's short-term/long-term memories, just like photography and sound recording are.
The important thing to note here is that, in the light of this ''Money is Mnemonics'' spell of mine, accountancy systems do possess a cardinality of entries.
And it seems that the mega-trend is:
''the more entries handled = the better our money is''
The fiat system - monetary and overall - is double-entry based and relies upon imported trust; blockchain is triple-entry, and trust is built in. Blockchain is not 'trustless' but 'autotrophic' with regard to trust.
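The double- vs. triple-entry distinction can be made concrete with a small sketch. This is not Tauchain or Bitcoin code; all names here (`Receipt`, `make_receipt`, `verify`) are invented for illustration of the signed-receipt idea usually credited to Ian Grigg: one cryptographically signed receipt, shared by payer, payee and the common ledger, replaces two private books and the imported trust needed to reconcile them.

```python
from dataclasses import dataclass
import hashlib

# Illustrative sketch only: the names and structure are invented for this
# example, not taken from any real accounting or blockchain library.

@dataclass(frozen=True)
class Receipt:
    """Triple-entry-style record: one signed receipt shared by payer,
    payee, and the common ledger, instead of two private books."""
    payer: str
    payee: str
    amount: int
    signature: str  # stands in for a real cryptographic signature

def sign(payer: str, payee: str, amount: int, secret: str) -> str:
    # Toy "signature": a hash binding the parties and the amount together.
    data = f"{payer}->{payee}:{amount}:{secret}".encode()
    return hashlib.sha256(data).hexdigest()

def make_receipt(payer: str, payee: str, amount: int,
                 secret: str = "payer-key") -> Receipt:
    return Receipt(payer, payee, amount, sign(payer, payee, amount, secret))

def verify(r: Receipt, secret: str = "payer-key") -> bool:
    # Anyone holding the receipt can re-check it independently:
    # trust is "built in" rather than imported from an auditor.
    return r.signature == sign(r.payer, r.payee, r.amount, secret)

ledger = [make_receipt("alice", "bob", 50)]
assert all(verify(r) for r in ledger)
```

Tampering with any field invalidates the signature, which is the sense in which the record is 'autotrophic' with regard to trust: it carries its own verification.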
The third notion turns us back on track with the main theme of this article. It is that of mutual entropy.
The Ledger, no matter which tech it runs on, has as its purpose to define how individual people's activity has to be limited for the sake of collective cooperation and collaboration.
The Ledger - product of the particular kind of Mnemonics in play - literally SHAPES and MAKES the society.
As a kinda Sorites or Holon or Mereonomic ... generator.
NOW, which costs more? Which is the most wasteful of all the known Ledger or Mnemonic or Monetary systems?
Literally a couple of days ago I stumbled upon ''The $29 trillion cost of trust'' from 24 Jul 2018 by Sinclair Davidson, Mikayla Novak and Jason Potts, which made this long-in-the-making article finally come out.
Now I finally have put my eyes on some numbers to juggle with.
The ecumenical or midgardic GDP is estimated at a roughly rounded-up ~$100t p.a.
There is lots of well grounded criticism of the ability of the present day fiat financial system to actually encompass and measure it all - but let's take this conditionally good round figure for the global GDP.
Total wealth is ~a quarter of a $Quadrillion (giving a total average depreciation/consumption rate of over a third per year).
GDP evaluates the dynamic part. The work.
Almost 1/3rd of all work is devoted to accounting for, or proving, the work!
Visualize the fiat system as a primitive, primordial, antediluvian or precursor form of PoW.
Funnily enough, this ~1/3rd global proof-of-work or mnemonic or governance cost strangely coincides with the energy budget of the brain as a fraction of the total energy a human body dissipates to live.
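The back-of-the-envelope figures above can be replayed in a few lines. These are just the article's own round numbers, not data recomputed from the cited paper:

```python
# Figures as quoted in the text above (all rough, in trillions of USD).
cost_of_trust = 29    # the "$29 trillion cost of trust" estimate
global_gdp = 100      # ~$100t gross world product per year
total_wealth = 250    # ~a quarter of a $Quadrillion

# Share of all work devoted to "work to prove work":
trust_share = cost_of_trust / global_gdp
print(f"trust share of GDP: {trust_share:.0%}")      # 29%, i.e. almost 1/3

# Implied average depreciation/consumption rate of total wealth:
turnover = global_gdp / total_wealth
print(f"annual turnover of wealth: {turnover:.0%}")  # 40%, i.e. over a third
```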
The last two pieces of research argumentation to close the topic are:
I'm truly impressed by the depth of these two documents. It is as if each sentence were backed by several book volumes of profound research.
Paul Sztorc convincingly demonstrates that PoW is the most efficient protocol for decentralization or 'trustlessness'. It appears that 'PoW is the cheapest' not only within the blockspace, but also the cheapest everywhere and everywhen.
Mr. Game and Watch calculates that if, in the present day ~$100-trillion-strong global economy, there were nothing but Bitcoin as a form of money, a single BTC would be worth millions of dollars.
''Banknote waste differs from other types of monetary waste in that it is much harder to perceive, by virtue of the complex nature of banknote creation. In contrast, Bitcoin mining directly consumes electricity, and gold mining obviously requires engineers, machinery, armed guards and so forth. At first glance, it seems incredible that impoverished hunter-gatherers would devote some of their precious time to the manufacture of silly beads and shells and other collectibles. And, it seems wasteful indeed, that we humans use our powerful brains primarily to obsess over what other people think of us. All of these activities are wasteful, in a narrow sense, but in a broader sense they maintain the infrastructure required to promote and sustain cooperation. These are social activities – we engage in them because we are not alone.''
Apparently, a monetary system which requires humans in order to function is unscalable - in the pre-Tau era, that is. It is far easier, and unlimited in capacity, to grow our electricity and machinery resources than to replicate humans.
Intuitively, the lower the Cost of Trust, the stronger the society, the bigger and faster-accelerating the growth of the economy, and the higher the affluence and wealth.
If, hypothetically, the Cost of Trust were zero, would the value of the economy be infinite?
The endogenous automation of the production and distribution of trust which the blockchain enables lowers the cost of trust by many orders of magnitude compared with the present hand-driven system. (As an example: 'payment channels' - posited by Satoshi himself - and the Lightning Network promise transaction costs hundreds of thousands of times smaller, all internal to the trustless environment of the blockchain, without relying upon human work to prove work ...)
In the end, what does Tauchain have in common with all that?
Well, lotsa things. I'm light years, if not infinitely far, from any generalization and systematization, but here is an improvised list ... of questions:
Please, you continue ...
 - https://www.thoughtco.com/clay-tokens-mesopotamian-writing-171673
 - http://www.ancientpages.com/2017/07/08/intriguing-sumerian-clay-tokens-ancient-book-keeping-system-used-long-writing-appeared/
 - https://arxiv.org/abs/1703.02572
 - https://steemit.com/tauchain/@karov/scaling-is-layering
 - https://steemit.com/tauchain/@karov/tauchain-transcaling
 - http://www.behest.io/ & https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://en.wikipedia.org/wiki/Pauli_exclusion_principle
 - https://en.wikipedia.org/wiki/Conservation_law
 - https://steemit.com/bitcoin/@karov/bitcoin-retrodictions
 - https://steemit.com/blockchain/@karov/geodesic-by-tau
 - https://www.coindesk.com/microscope-true-costs-gold-production/
 - https://www.coindesk.com/microscope-real-costs-dollar/
 - https://www.coindesk.com/microscope-true-costs-banking/
 - https://www.coindesk.com/microscope-economic-environmental-costs-bitcoin-mining/
 - https://thebitcoin.pub/t/under-the-microscope-conclusions-on-the-costs-of-bitcoin/44457
 - https://en.wikipedia.org/wiki/Mt._Gox
 - https://oracletimes.com/mt-gox-bitcoin-whale-trustee-seized-selling-bitcoin-btc/
 - https://steemit.com/tauchain/@karov/tauchain-the-hanson-engine
 - https://steemit.com/tauchain/@karov/tauchain-as-szabo-booster
 - https://winklevosscapital.com/money-is-broken-but-its-future-is-not/
 - https://en.wikipedia.org/wiki/5D_optical_data_storage
 - https://en.wikipedia.org/wiki/Single-entry_bookkeeping_system
 - https://en.wikipedia.org/wiki/Double-entry_bookkeeping_system
 - https://bitcoinmagazine.com/articles/triple-entry-bookkeeping-bitcoin-1392069656/
 - https://en.wikipedia.org/wiki/Autotroph
 - https://en.wikipedia.org/wiki/Mutual_information
 - https://en.wikipedia.org/wiki/Sorites_paradox
 - https://en.wikipedia.org/wiki/Holon_(philosophy)
 - https://en.wikipedia.org/wiki/Mereology
 - https://medium.com/@cryptoeconomics/the-29-trillion-cost-of-trust-be8ffbd5788d
 - https://en.wikipedia.org/wiki/Ecumene
 - https://en.wikipedia.org/wiki/Midgard
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://en.wikipedia.org/wiki/Proof-of-work_system
 - http://www.pnas.org/content/99/16/10237
 - http://www.truthcoin.info/blog/pow-cheapest/
 - https://www.scribd.com/document/354688866/Bitcoin-A-5-8-Million-Valuation-Crypto-Currency-and-A-New-Era-of-Human-Cooperation
 - http://www.truthcoin.info/blog/blockspace-demand/
 - https://steemit.com/blockchain/@karov/tau-through-the-moravec-prism
 - https://steemit.com/tauchain/@karov/masa-effect-with-tauchain
 - https://steemit.com/tauchain/@karov/tutor-ex-machina
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://bitcoin.org/bitcoin.pdf
 - https://lightning.network/
 - https://steemit.com/tauchain/@karov/tauchain-in-the-algoverse
 - http://www.juliansimon.com/writings/Ultimate_Resource/ & https://orionsarm.com/fm_store/Population.pdf
 - https://en.wikipedia.org/wiki/Cybernetics & https://en.wikipedia.org/wiki/Control_theory
''Tau solves the problems from the Tower of Babel to the Tower of Basel''
- an early 21st century yet undisclosable author
Okay, dearest friends, let's pull our sleeves up and start with it. Vivisection of the Scriptures? Revelation by transfiguration? Pulling the Tau out of the ocean of wisdom onto the dry no-Maths-land? I hope not.
The quote above at first glance sounds pompously biblical, but in fact it denotes the crystal clear, simple, practical and mundane rationale of Tau, which I have already tried to approach from a few angles.
It is about the hierarchic bottleneck of one unscaling Humanity. Take the hint about the leveling of the Towers as a poetic symbol of the elimination of social 'verticality' - the hierarchies as a so-far-necessary evil to compensate for certain innate neurological limitations - and of the reforming of the network we are embedded into, and usually call mankind or society or economy or world, into one as geodesic as possibly possible. For the sake of its own functional programmatic optimization.
Notice that the leveling of the towers is not by demolition, but by uplifting the overall landscape level to and above the tower tops, turning them into deep roots or support pylons of an asymptotically geodesic society.
Apparently, mentioning the Gate of God denotes the unmixing of languages, & mentioning the apex global fiat settlement institution - the excelling of the current fiat procrustics, i.e. the economy aspect.
That is: TML to Agoras. The first and last of the total of six identified aspects or steps of the social choice as addressed by what we call Tau.
''our six steps of language, knowledge, discussion, collaboration, choice, and knowledge economy''
These aspects of course deserve separate zoom-in exegetic chapters, and they'll definitely get them. I promise. And not only they.
Any exegesis of Tau unavoidably must start with scrolling back and tracking down the full history of the development so far. A zoom-out to see the full picture and to identify the dominant features of the landscape relief.
You, I reckon, have already noticed this retrodictive inclination of mine - that in my mind the notion of a ''Timeline of Development'' cannot, by any logic, be just a handful of milestone promises thrown into the future; it is a must to account for the up-to-now trajectory, too! No future without past.
It all started as Zennet, continued as Tau-chain, and 'turned' into the so-called 'newtau'.
Wait! A New Tau?
Excuse me, Ohad, but I personally do not buy that, and I have said it many times. There is no old and new Tau. The situation is much more straightforward and grokkable. Here it is:
Lotsa guts, balls, butt, brains or whatever human offal ... are required for each of us to admit a mistake made in our everyday life. Generally, quite some strength is needed even to look at ourselves in the mirror ...
It takes a whole Ohad, though, to keep all of one's work totally public and transparent - even the full and unedited live record of the infil into an entire branch of mathematics - and then to throw it all away as untauful. We witnessed that reported in real time!
Did this change the ends? No. But it sorted out the means to the end.
Was it a 'mistake'? By no means. It was a duly delivered R&D effort.
Was oldtau looking promising at first glance? Yes, of course it did.
Did it survive Ohad's R&D 'crash-testing'? No, it didn't.
Was the ''juice worth the squeeze''? It was.
Was it a job well done? Absolutely.
The oldtau materials are legacy jewels to me. Like those dinosaur bugs trapped in blobs of amber.
Development is a process, not just the shipping of results. The two are related like cooking and serving.
Studying the zoom-out dev map we observe these few major landmarks:
The Zennet province is all right. Its gently rolling hills gradually merge into the Tau lands proper, with the inevitable realization that a 'world supercomputer' cannot be a Tauless thing. Zennet lives on in Tau:
''... having a decentralized search engine requires Zennet-like capabilities, the ability to fairly rent (and rent-out) computational resources, under acceptable risk in the user's terms (as a function of cost). Our knowledge market will surely require such capabilities, and is therefore one of the three main ingredients of Agoras... hardware rent market...''
We move on through the oldtau wastelands, where the burnt ruins of MLTT lie scattered - a rough location-on-the-map indicator for oldtau is the fall of 2015, with
''Tau as a Generalized Blockchain'' - posted Oct 17, 2015, 6:33 AM [updated Oct 17, 2015, 6:49 AM]
and then we reach the fertile gardens of newtau in the fall of 2017:
''The New Tau'' - posted Dec 31, 2017, 12:27 AM [updated Dec 31, 2017, 12:28 AM]
Hmm. Apparently we crossed a watershed. Which relief feature was it? The ridge of:
''Tau and the Crisis of Truth'' - posted Sep 10, 2016, 8:25 PM [updated Sep 10, 2016, 8:28 PM]
Tau sorts out the Towers. I hope that the synopsis in this short chapter of the Exegesis has helped to sort out Tau dev in time, as a navigational lookup tool.
Software is nothing but states of hardware. There is an intimate, deep connection - not yet codified into a neat compact of logic - between Gödel, Heisenberg and the laws of thermodynamics.
Tau keeps us off these traps.
I do not dare to state that someday we won't have command of infinities and play with them with the ease of
''... a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.''
In fact, quite the opposite: I'd rather take it as an inevitability that someday we will conquer the Cantorian expanses and venture far even beyond. To transcale the transfinite. Like Hilbert said:
''Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können. (From the paradise, that Cantor created for us, no-one can expel us.)''
But it takes ... finitary vehicles of DECIDABILITY to conquer the transfinitary outer spaces. Because, in order to even dare to dream of taming the infinities, we must first harness and get full command of the finite.
Including of ourselves. Tau is ''understanding each other''. Without Tau we are ... others to ourselves.
Imperare sibi maximum imperium est.
''Thinking by Machine: A Study of Cybernetics''
by Pierre de Latil 
Published by Houghton Mifflin Company in 1957 (c.1956), Boston.
A foreword by Isaac Asimov (then only 36 years old)! A recommendation by the legendary mathematician and cyberneticist Norbert Wiener (then 62 years old)! ... A true jewel! The book is described as:
A review of "the last ten years' progress in the development of self-governing machines," describing "the principles that make the most complex automatic machines possible, as well as the fundamentals of their construction."
The nineteen fifties!! The midway point between the first digital computer, made by my half-compatriot John Atanasoff, and the internet. Almost a human generation span between the former event, the book, and the latter. An epoch so deep in the past that even television, air travel, rockets and nukes ... were young then.
The same Kondratieff wave phase, btw, which hints towards the historical rhyming of socially important intellectual interests. (On how K-waves imprint on the humanity growth curve - in a series of other posts to come.)
I must admit here that I've never put my hands and eyes on this book. But it is stamped into my mind and memory by Stanislaw Lem - one of the greatest philosophers of the XXth century, working under the disguise of a Sci-Fi writer for being caught on the wrong side of the Iron Curtain.
''Summa Technologiae'' (1964) is a monumental work of Lem's, where most issues discussed sound more contemporary nowadays than they did more than half a century ago when it was written - and in many things we are still in the deep past ...
... Lem reports and discusses the following from the aforementioned Pierre de Latil's book:
''As a starting point will serve a graphic chart classifying effectors, i.e., systems capable of acting, which Pierre de Latil included in his book Artificial Thinking [P. de Latil: Sztuczne myślenie. Warsaw 1958]. He distinguishes three main classes of effectors. To the first, the deterministic effectors, belong simple (like a hammer) and complex devices (adding machine, classical machines) as well as devices coupled to the environment (but without feedback) - e.g. automatic fire alarm. The second class, organized effectors, includes systems with feedback: machines with built-in determinism of action (automatic regulators, e.g., steam engine), machines with variable goals of action (externally conditioned, e.g., electronic brains) and self-programming machines (system capable of self-organization). To the latter group belong the animals and humans. One more degree of freedom can be found in systems which are capable, in order to achieve their goals, to change themselves (de Latil calls this the freedom of the "who", meaning that, while the organization and material of his body "is given" to man, systems of that higher type can - being restricted only with respect to the choice of the building material - radically reconstruct the organization of their own system: as an example may serve a living species during biological evolution). A hypothetical effector of an even higher degree also possesses the freedom of choice of the building material from which "it creates itself". De Latil suggests for such an effector with highest freedom - the mechanism of self-creation of cosmic matter according to Hoyle's theory. It is easy to see that a far less hypothetical and easily verifiable system of that kind is the technological evolution.
It displays all the features of a system with feedback, programmed "from within", i.e., self-organizing, additionally equipped with freedom with respect to total self-reconstruction (like a living, evolving species) as well as with respect to the choice of the building material (since a technology has at its disposal everything the universe contains).
A longish quote, but every word in it is worth it. When I read this as a kid back in the 1980s ... the next, seventh, logically higher effector class immediately came to my mind: the worldmaker!!
The degrees of freedom of all the previous six, according to the classical taxonomy of de Latil, are confined by the rule-set, the local laws of physics.
They are prisoners of a universe. Like birds incapable of reconfiguring their cage into a roomier and cozier one.
If we regard the laws of nature as code or algorithm, my 7th-level effector will be capable of drafting and implementing itself on newer and stronger algorithmic foundations. (Note the seamlessness between computation and robotics in the Latil/Lem categorization construct - quite logical indeed, having in mind that software is a state of hardware, that matter-form-action are inextricable from each other; but on this in a series of other times and posts ...) Without bonds?
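The effector ladder relayed from de Latil and Lem above can be laid out as a simple ordered taxonomy. A sketch only: the class names are my paraphrase of the quoted passage, and the seventh level is of course the speculative addition proposed here, not part of de Latil's original scheme.

```python
from enum import IntEnum

class Effector(IntEnum):
    """De Latil's effector classes (via Lem), ordered by degrees of freedom.
    Names paraphrase the quoted passage; level 7 is the author's proposed
    extension, not part of de Latil's original taxonomy."""
    DETERMINISTIC = 1        # hammer, adding machine, fire alarm (no feedback)
    ORGANIZED_FIXED = 2      # feedback with built-in goals: steam-engine governor
    ORGANIZED_VARIABLE = 3   # feedback with external goals: "electronic brains"
    SELF_PROGRAMMING = 4     # self-organizing systems: animals, humans
    SELF_RECONSTRUCTING = 5  # rebuilds its own organization: an evolving species
    SELF_CREATING = 6        # also chooses its building material: technological evolution
    WORLDMAKER = 7           # (speculative) can rewrite its own laws of physics

def freer(a: Effector, b: Effector) -> Effector:
    """Return whichever effector class has more degrees of freedom."""
    return max(a, b)

assert freer(Effector.SELF_PROGRAMMING, Effector.ORGANIZED_FIXED) is Effector.SELF_PROGRAMMING
```

Because the classes are totally ordered by degrees of freedom, asking where Tauchain sits on the map reduces to asking which of these freedoms a system actually possesses.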
So, I wonder:
Where, you reckon, is Tauchain placed on de Latil's effectors map?
Ohad Asor the lead developer and founder of Tauchain releases first new blog post in over a year. By Dana Edwards. Posted on Steemit. December 30, 2017.
The new blog post, titled "The New Tau", is available for everyone to read. It speaks on the critical topic of collaborative decision making. This is a topic which I myself have been interested in, and Ohad's solution is different from the usual ones. In my own thinking I was considering a solution based on collaborative filtering, but I realized this would never scale. I then considered a solution based upon using IA (intelligence amplification) by way of personal preference agents; this does scale, but requires that the agents have a lot of data to truly know each user and their preferences. The solution Ohad Asor comes up with attempts to solve many of the same problems, but it scales without seeming to require collaborative filtering or any kind of voting as we traditionally think about it.
Many will recognize the obvious problems with voting from Steem, which also relies on collaborative filtering.
Now let's see what Ohad Asor has to say:
In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But on large communities, like a country, we can only think of everyone having a right to vote to some limited number of proposals. We reach those few proposals using hierarchical (rather decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost empty from the vast information gathered in the process. Yet, we don't even dare to imagine an equal right to propose just like an equal right to vote, for everyone, in a way that can actually work. Indeed how can that work, how can a voter go over equally-weighted one million proposals every day?
This, in my opinion, is very true. In reality we have discussions, and at best we seek to broadcast or share our intentions. Intent casting was actually the basis for how I thought to solve this problem of social choice, but even my best ideas for intent casting would not have been good enough, because again the typical voter would be uninformed. Unless the typical voter can be continuously educated, which in a complex world may be unrealistic, or the network itself can somehow keep the voter up to date, intent casting barely works. It works well for shopping, where a shopper knows what they want, but not so well when a person doesn't actually know what they want and merely knows what they value. Values are the basis for morality and for ethical systems, and this is the area where Ohad's solution really shines.
Tauchain has the potential to scale not only discussions but also morality, because it will have the built-in logic to make sure people can be moral without constant contradiction. The truth is that without this aid, in my opinion, human beings cannot actually be moral in decision making, due to the inability to avoid all sorts of contradictions.
All known methods of discussions so far suffer from very poor scaling. Twice more participants is rarely twice the information gain, and when the group is too big (even few dozens), twice more participants may even reduce the overall gain into half and below, not just to not improve it times two.
This is the conclusion that Ohad and I reached separately, and it still holds true. We require the aid of machines in order to scale collaborative decision making. This, in my opinion, is one of the major philosophical difference makers between the intended design and function of Tauchain and every other crypto platform in development. It will also, in my opinion, be the difference maker for the community Tauchain serves as a technology, because it will enable machines and humans to aid each other for mutual benefit, a symbiosis.
The blog post by Ohad Asor brings forward a very important discussion with many different angles. The angle I focused on with regard to the social choice dilemma is the problem of how we scale morality. In my opinion, if we can scale morality in a decentralized, open source, truly significant manner, then nothing stands in the way of absolute legitimacy, mainstream adoption, and with it a very high yet fairly priced token. The utility value of scaling morality is, in my opinion, higher than just about anything else we can accomplish with crypto tech and AI. If morality is better, the design of future platforms will be greatly improved in terms of how users are treated, and this in itself could, at least in my opinion, help settle the debate about whether AI can remain beneficial over a long period of time. I think that if we can scale morality in a decentralized way, it will make it easier to design and spread beneficial AI. Crypto effective altruism could become a new thing if we can solve the deeper, more philosophical problems.
A less understood feature of Tauchain is what the academic literature calls "program synthesis". This is a feature that, as far as I know, no one else in the crypto community is investigating. Program synthesis, combined with the knowledge-based AI discussed in my previous posts (1, 2), would allow Tauchain to leverage the trend of big data. As the collective knowledge base of Tauchain expands, the capability of this automated programmer will improve. The automatic programmer will reason over an ever-growing collective knowledge base, allowing Tauchain to, in a sense, "program itself". Smart contract developers will gain peace of mind knowing that their smart contracts are formally verified (for correctness), and users will be able to contribute to development by joining in as a group to accurately describe the behavior of the code.
The impact of program synthesis and knowledge-based AI on the smart contract development community.
Let's discuss what this means for smart contract developers. A smart contract developer today has to create a white paper, then a formal specification, then do the programming and formal verification themselves. This is very difficult for humans to do, and as a result very few people are able to write secure smart contracts, yet almost everyone can come up with an interesting idea for one. Program synthesis will allow individuals or groups of people to specify, in a simple yet controlled natural language such as simplified English, what they want (this creates the formal specification), and from this description the automatic programmer will handle the rest. The code will, in a sense, be written automatically by the AI, which reasons over a knowledge base that could include the knowledge necessary to produce the code for that smart contract.
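To make the idea of program synthesis concrete, here is a toy enumerative synthesizer: given a specification in the form of input/output examples, it searches a tiny expression grammar for a program consistent with them. This is only a minimal sketch of the general technique under my own assumptions; the grammar and function names are illustrative and have nothing to do with Tauchain's actual TML-based mechanism.

```python
# Toy enumerative program synthesis: search a small expression grammar
# for a program consistent with an input/output specification.
# The grammar and names here are illustrative assumptions only.

TERMINALS = ["x", "1", "2"]   # leaves of the grammar: the input and constants
OPS = ["+", "*", "-"]         # binary operators

def expressions(depth):
    """Return all expression strings of at most the given depth."""
    if depth == 0:
        return list(TERMINALS)
    smaller = expressions(depth - 1)
    exprs = list(smaller)
    for op in OPS:
        for a in smaller:
            for b in smaller:
                exprs.append(f"({a} {op} {b})")
    return exprs

def synthesize(examples, max_depth=2):
    """Return the first expression agreeing with every (input, output) pair."""
    for depth in range(max_depth + 1):
        for expr in expressions(depth):
            if all(eval(expr, {"x": xi}) == yi for xi, yi in examples):
                return expr
    return None  # nothing in the grammar satisfies the specification

# Specification by examples: f(0)=1, f(1)=3, f(5)=11, i.e. f(x) = 2*x + 1.
print(synthesize([(0, 1), (1, 3), (5, 11)]))
```

Real synthesizers replace the brute-force enumeration with reasoning over a knowledge base and a formal specification, but the shape of the problem, a description in, a verified program out, is the same.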
AI learns to write its own code by stealing from other programs?
In summary, each participant in Tauchain will be able to speak to their "automated programmer" in a language they are comfortable with. They'll describe as accurately as they can the functioning and behavior of the program or smart contract. The automated programmer will then reason over a very large knowledge base and, if it is smart enough, automatically generate the code for the smart contract from the description. The participant will then either be satisfied with what was generated or, if not, update their description and repeat the process until they are satisfied. This is yet another breakthrough feature which Tauchain may be able to offer the crypto community, in addition to potentially solving the knowledge acquisition bottleneck problem.
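The describe, generate, inspect, refine loop above can be sketched as follows. Everything here is hypothetical: a small candidate pool stands in for the automated programmer's knowledge base, and input/output examples stand in for the participant's natural-language description.

```python
# A self-contained sketch of the refinement loop: the participant gives
# a description (here, input/output examples), the "automated programmer"
# returns the first consistent candidate from a hypothetical pool, and the
# participant refines the description until satisfied.

CANDIDATES = {
    "double":    lambda x: 2 * x,
    "square":    lambda x: x * x,
    "increment": lambda x: x + 1,
}

def generate(examples):
    """Return the name of the first candidate consistent with the description."""
    for name, f in CANDIDATES.items():
        if all(f(x) == y for x, y in examples):
            return name
    return None  # the description rules out every candidate

# Round 1: an underspecified description; both "double" and "square" fit,
# so the participant may get a program other than the one they intended.
description = [(2, 4)]
print(generate(description))   # "double"

# Round 2: the participant, unsatisfied, refines the description.
description.append((3, 9))     # now only "square" fits
print(generate(description))   # "square"
```

The key point the sketch illustrates is that the participant never writes code: they only tighten the description until the generated behavior matches their intent.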
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.