If Money = Memory, if Society = a Super Computer, if Computation is in Physical Systems, what is a Decentralized Operating System? By Dana Edwards. Posted on Steemit. October 24, 2018.
These concepts are not often discussed, so let's have the discussion from the beginning. The first concept to think about is pancomputationalism, or, put another way, the idea that computers are ubiquitous and exist everywhere in our environment. We can, for example, look at physical systems, living and non-living, and see computation taking place all around us. If you look at rocks and trees you can see memory storage. If you look at DNA you can see code, and if you look at viruses you can see microscopic programmers adding new code to DNA. Even the weather is computing: a hurricane is a computation.
If you look at nature you see algorithms. You will see learners in nature too (yes, in the same sense as in AI). The process is basically the same for all learning. Consider that everything which is physical is also digital. Consider that the universe is merely information patterns.
If we look at society we can also think of it as a computer. What does society compute, though? One way people talk about a society is as a complex adaptive system, but this is also how people might talk about the human body. The human body computes with the purpose of maintaining homeostasis, persisting through time, and reproducing copies of itself over time. The human brain computes to promote the survival of the human body. Just as viruses pass codes on to our DNA, the human brain is infected with mind viruses which are called memes. Memes are pieces of information which can physically alter how the brain works.
The mind isn't limited to the brain. The mind is all the resources the brain can leverage to compute. In other words, a person has a brain to compute with, but when language was invented it allowed a person to compute not just with their own brain but with the environment itself. To draw on a cave is to use the cave to enhance the memory of the brain. To use mathematics is to use language to enhance the brain's ability to compute by relying on external storage and symbol manipulation. To use a computer with a programming language is essentially to use mathematics, only instead of writing on the cave wall we are writing in 1s and 0s. The mind exists to augment the brain in a constant feedback loop where the brain relies on the mind to improve itself and adapt. If there were no external reality the brain would have no way to evolve and improve itself.
A society, in the strictly human sense of the word, is an aggregation of minds; at minimum, all the human minds in that society. As technology improves, the capacity of the mind increases because each human can remember more and access more computational resources; each can, in essence, use technology to continuously improve their mind and then leverage the improved mind to improve their brain. The Internet is the pinnacle of this kind of progress, but it's obviously not good enough. While the Internet allows for the creation of a global mind by connecting people, things, and minds, it does nothing to actually improve the feedback loop between the mind and the brain, nor does it really offer what could be offered.
Bitcoin entered the picture, and perhaps we can think of it as a better memory: a decentralized memory in which, essentially, you can keep money. The problem is that money is a very narrow application. It is a start, just as learning to write on the cave wall was a start, but it's not ambitious enough in my opinion.
Humans in the current blockchain or crypto community do not have many ways to exchange human computation. Human computation is just as valuable as non-biological machine computation, because there are some kinds of computations which humans can do quite easily which non-biological machines still cannot do as well. Translation, for example, is something non-biological machines have a difficult time with but human beings can do well. This means a market will be able to form where humans can sell their computation to translate things. If we look at Amazon Mechanical Turk we can see many tasks which humans can do which computer AI cannot yet do, such as labeling and classifying content. For things to go to the next level we will need markets which allow humans to contribute human computation and/or human knowledge in exchange for crypto tokens.
The concept of a decentralized operating system is interesting. First, if there is such a thing as social computation (collaborative filtering, subjective ranking, Waze, etc.), then what about the new paradigm of social dispersed computing?
The question becomes: what do we want to do with this computing power? Will we use it to extend life? Will we use it to spread life into the cosmos? Will we use it to become wise? To become moral? To become rational? If we want to focus on these kinds of concerns then we definitely need something more than Bitcoin, Ethereum, or even EOS. While EOS does seem to be pursuing the strategy of a decentralized operating system, which is the correct course in my view, it does not get everything right.
One problem, as I mentioned before, is the importance of the feedback loops between minds and brains. The reason I keep returning to the concept of the external or extended mind is that it is the mind which creates the immune system protecting the brain from harmful memes. The brain keeps the body alive. The brain is not really capable of rationality, morality, or logic on its own, and relies on the mind to achieve them. The mind is essentially all the computational resources that the brain can leverage.
EOS has the problem that it doesn't seem to improve the user. The user can connect, join, earn or sell, and participate, but unless the user can become wiser, more rational, more moral, then EOS has limits. EOS does have Everipedia, which is quite interesting, but again there are still problems. What can EOS do to improve the people in a society, and thus improve the society, if society is a computer in need of being upgraded?
Well, if society is a computer, first, what does society compute? What should it compute? I don't even know how to answer those questions. I could suggest that if computation is a commodity, along with data, then whichever decentralized operating systems come to exist will compete for these commodities. The total brain power of a society is just as important as the amount of connectivity. And the mind of the society is the most important part of a society, because it is what can allow the society to become better over time, allow the people in the society to thrive, and allow its life forms to continue to evolve and avoid extinction.
A decentralized operating system, on a technical level, would have a kernel or something similar to it. This is the resource management part. For example, Aragon promises to offer a decentralized OS and it too mentions having a kernel. A true decentralized operating system has to go further and requires autonomous agents. Autonomous agents which can act on behalf of their owners are, philosophically speaking, the extended mind. But the resources of a society are still finite and have to be managed, so a kernel would provide the ability to manage those resources.
The total computational ability of a society is likely a massive amount of resources, a lot more than you get by just connecting a bunch of CPUs together. Every member of the society who can compute could participate in a computation market. Of course, as we are beginning to see now, regulators seem concerned about certain kinds of social computations, such as prediction markets. So it is unknown how truly decentralized operating systems would be handled, but my guess is that if designed right they could be pro-social, be capable of producing augmented morality by leveraging mass computation, and, by leveraging human computation, be able to stay compliant. To be compliant is simply to understand the local laws, and these can be programmed into the autonomous agents if people think it is necessary.
What is more important is that if a law is clearly bad, and people have enhanced minds, then it will be very clear why the law is bad. This clarity will help people to dispute and seek to change bad laws through the appropriate channels. If there is more wisdom, due to insights from big data, from data scientists, etc, then there can be proposals for law changes which are much wiser and more intelligent. This is something specifically that people in the Tauchain community have realized (that technology can be used to improve policy making).
A lot is still unknown so these writings do not provide clear answers. Consider this just a stream of consciousness about concepts I am deeply contemplating. This is also a way to interpret different technologies.
Truth vs Consensus
Truth can be thought of either as something which we can prove by experiment, or as the result of a consensus. A scientific fact is arrived at by the process of scientific experimentation. A mathematical fact is discovered by finding a proof. Consensus is discovered by analysis of sentiment (or by voting) to determine what the majority currently believes about a subject at a point in time. The truth of the scientists might not match the popular consensus of the time. The mathematical proof might say one thing, but a majority of people might agree to disagree with the math. We have seen this happen in the past, and this blog post is a discussion of that topic. For Tauchain in particular we have the question: what is the truth, and which is more important? Do we care more about truth or more about consensus?
Tauchain offers helpers in the form of reasoners and logic to improve the quality of discussion. These helpers will not necessarily work unless people agree to accept the results generated. In addition, the biases people inherently have could influence what they discuss in the first place, which could create a consensus but not necessarily an improvement.
Consensus as Truth
According to the "truth by consensus" paradigm, truth is produced by consensus gentium. Consensus gentium means agreement of the people. In my previous post I discussed exactly this topic: Consensus Morality and Tauchain | Consensus Gentium. To be specific, we can take consensus gentium to mean: "the truth is what everyone currently believes". In this model of truth we can only get the truth by finding out what everyone believes, but how do we determine what people believe? It is a challenge to find a way to determine what people actually believe in a blockchain context. One method of attempting this is called Futarchy, which attaches an economic reward and an economic cost to having correct or incorrect beliefs. In essence, under Futarchy people must bet on their beliefs rather than just vote. Under Futarchy, prediction markets are used to apply market mechanisms to produce a market consensus truth.
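To make the "bet on your beliefs" idea concrete, here is a minimal sketch of a settlement rule for such a market. The payout formula and names are illustrative assumptions of mine, not Futarchy's actual specification:

```python
# Minimal sketch of "bet on your beliefs": participants stake tokens on a
# yes/no claim; once the outcome is known, the losing stakes are paid out
# to the winners in proportion to their stakes. Illustrative only.

def settle(bets, outcome):
    """bets: list of (name, belief, stake); belief and outcome are booleans."""
    winners = [(name, stake) for name, belief, stake in bets if belief == outcome]
    losing_pot = sum(stake for _, belief, stake in bets if belief != outcome)
    winning_stake = sum(stake for _, stake in winners)
    # Each winner recovers their stake plus a pro-rata share of the losing pot.
    return {name: stake + losing_pot * stake / winning_stake
            for name, stake in winners}

bets = [("alice", True, 100), ("bob", False, 50), ("carol", True, 25)]
print(settle(bets, outcome=True))  # {'alice': 140.0, 'carol': 35.0}
```

Correct beliefs end up rewarded out of the stakes of incorrect ones, which is what gives the resulting "consensus" its economic teeth.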
Consensus gentium in an environment of persecution or coercion can result in widely held "beliefs" which are enforced into existence, such as the belief in geocentrism. Victims of this kind of persecution include Galileo, who was forced to recant his beliefs or face the Inquisition. The ancient Greek philosopher Anaximander proposed that the universe revolved around the earth, and this idea caught on. Once the idea caught on it became gospel truth, and over time it became blasphemous to dispute it. We continue to see this happen even now in the crypto space with, for example, the belief that "code is law" or that "blockchains must be immutable"; these too are beliefs based on a particular set of values which their holders hold dear.
Consensus as a regulative ideal
A descriptive theory is one that tells how things are, while a normative theory tells how things ought to be. Expressed in practical terms, a normative theory, more properly called a policy, tells agents how they ought to act. A policy can be an absolute imperative, telling agents how they ought to act in any case, or it can be a contingent directive, telling agents how they ought to act if they want to achieve a particular goal. A policy is frequently stated in the form of a piece of advice called a heuristic, a maxim, a norm, a rule, a slogan, and so on. Other names for a policy are a recommendation and a regulative principle.
In this case we have a distinction between the way things are and the way things ought to be. Policies can be directed to shape the way things ought to be.
The problem with consensus as truth | argumentum ad populum
If consensus equals truth, then truth can be made by forcing or organizing a consensus, rather than being discovered through experiment or observation, or existing independently of consensus. The principles of mathematics also do not hold under consensus truth, because mathematical propositions build on each other. If the consensus declared that 2+2=5, it would render the practice of mathematics, in which 2+2=4, impossible.
A big problem is that of coercion. Another big problem is that popular opinion can in fact lead to really bad outcomes. If something is true at a point in time merely because a lot of people believe it, then we are basing our decisions merely on what a lot of people believe. This can result in decisions which satisfy what is popular yet are unwise. A lot of people believe a lot of crazy, wrong things, which does not stop them from believing passionately. The question of truth is about what is true even if not very many people believe it. Geocentrism turned out to be false even though a lot of people believed it at some point in time. On the other hand, the laws of physics appear to have held for 13 billion years, including during times when a lot of people didn't believe in them.
The State, or the ruling government, has the special role of taking care of the people; however, what distinguishes the Chinese ruling government from other ruling governments is the respectful attitude of the citizens, who regard the government as part of their family. In fact, the ruling government is "the head of the family, the patriarch." Therefore, the Chinese look to the government for guidance as if they are listening to their father who, according to Chinese tradition, enjoys high reverence from the rest of the family. Furthermore, "still another tradition that supports state control of music is the Chinese expectation of a verbal 'message.'" A "verbal message" is the underlying meaning behind people's words. In order to get to the "verbal message," one needs to read into words and ask oneself what the desired or expected response would be.
Tauchain and the mysterious Futamura projections. By Dana Edwards. Posted on Steemit. October 15, 2018.
Futamura Projections and Partial Evaluation
While we know the Futamura projections are a planned and necessary feature, it is also unlikely that most of us even know what a Futamura projection is. In fact most people do not even fully understand what a BDD can do in particular.
One video which can help for those who wish to study further is:
The distinction must be made between the topic of "Boolean Algebra" and "The Algebra of Boole". The Algebra of Boole is pertinent to understanding the BDD aspect of TML. Disclosure: I am not a mathematician, so the video above goes into a level of detail on which I am not qualified to express any expertise. If you choose to take on the herculean task of carrying the cognitive load, please do so at your own risk. If you are really brave you can check out the work of Boole himself directly as well.
For all who have suffered through the cognitive workload presented in that video, the next part of this discussion covers the capabilities and process of the Futamura projections.
The formula below concisely represents exactly what partial evaluation is:
Given a program p, static inputs SI, dynamic inputs DI, and outputs O such that p(SI, DI) = O, a partial evaluator mix produces a specialized residual program p_SI = mix(p, SI) such that p_SI(DI) = p(SI, DI) = O.
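As a hand-rolled illustration of the formula above (a sketch only: a real partial evaluator transforms program text automatically, and the function names here are mine, not TML's):

```python
# p(SI, DI): the exponent n is the static input, base is the dynamic input.
def power(base, n):
    result = 1
    for _ in range(n):
        result *= base
    return result

# mix(p, SI): returns the residual program p_SI. Here we merely close over
# the static n; a true specializer would emit the unrolled code
# "base * base * base" with the loop evaluated away.
def specialize_power(n):
    def residual(base):
        result = 1
        for _ in range(n):
            result *= base
        return result
    return residual

cube = specialize_power(3)                 # p_SI
assert cube(5) == power(5, 3) == 125       # p_SI(DI) == p(SI, DI) == O
```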
We can input the description of our translator. Our translator can be either a compiler or an interpreter. What we want to describe is the process by which the defined language is translated into another language. Using an interpreter we can describe the semantics of our programming language.
How do you compile a compiler?
At the most simple and basic level we start with one input and one output. In the abstract, you feed your commands into the box and the box produces an output based on those commands. Most very simple software works this way. A compiler basically takes input (source code) and produces output (a program). The source code is the set of acceptable commands from which the compiler produces a program with the appropriate behavior. In essence we can think of the box as nothing more than a translator device which takes one set of symbols and produces as output another set of symbols.
Futamura offers three projections. This is a self-referential process, so what if instead of just one input into the box we now have two? With two inputs we can not only send source code into the box and watch it translate into a program, but go even further and create an "interpreter". Using this second input we can now define the behavior of the box by sending a description of how we want the box to behave. In other words, we can now rely on an interpreter, which is distinct from a compiler in that it translates one statement at a time. Compilers, interpreters, and assemblers are all translators, so ultimately we have symbol manipulation at the core of all this activity.
To compile a compiler you take an interpreter as input and get a compiler as output. Wikipedia provides the three projections:
1. Specializing an interpreter for given source code, yielding an executable
2. Specializing the specializer for the interpreter (as applied in projection 1), yielding a compiler
3. Specializing the specializer for itself (as applied in projection 2), yielding a tool that can convert any interpreter to an equivalent compiler
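The shape of the three projections can be sketched in a few lines, with the caveat that this toy uses partial application as a stand-in for a real partial evaluator (which specializes program text rather than merely fixing an argument); all names here are illustrative:

```python
from functools import partial

# Toy stand-ins: "programs" are Python functions; mix fixes a static argument.
def interpreter(source, inp):
    # A toy "language": source is a Python expression over the name x.
    return eval(source, {"x": inp})

mix = partial

executable = mix(interpreter, "x * x")          # 1st projection: an executable
compiler = mix(mix, interpreter)                # 2nd projection: a compiler
compiler_generator = mix(mix, mix)              # 3rd projection: a compiler-compiler

print(executable(7))                                  # 49
print(compiler("x + 1")(41))                          # 42
print(compiler_generator(interpreter)("x - 2")(44))   # 42
```

Note how the third projection applies the specializer to itself, which is exactly the self-reference discussed below.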
In other words, Ohad will have to rely on TML to compile TML by using Futamura projection 3 in the list above. In essence he will have to compile TML using TML. This is the most confusing aspect to explain because it's a mode of self-reference where TML is essentially used to create itself: the specializer is specialized for itself.
In my opinion this is a moment similar to when Satoshi Nakamoto mined the genesis block to prove Bitcoin could be built. If Ohad can achieve the feat of compiling TML using TML, then we will know from this that TML is able to work. From this we can know, at minimum, that Tauchain on the most basic level is feasible. The question of the logic still remains, of course. While in theory we know the logic is supposed to work, it is also an area of theory which very few of us understand well. If it is demonstrated that this logic does in fact work as intended, then we will know for certain that Tauchain is feasible.
The Futamura projections are perhaps one of the most difficult parts of TML to explain conceptually, due to their self-referential nature. Excuse me if I made any errors in my attempt to explain them.
Boole, G. (1847). The Mathematical Analysis of Logic.
Tauchain Update: Significant code changes in Github and discussion of progress. By Dana Edwards. Posted on Steemit. September 30, 2018.
Just several hours ago, lead developer and founder of the Tauchain project Ohad Asor released his most significant code update yet. This blog post will discuss some of those updates and put them into context. To make sense of the current codebase ("Tauchain Codebase") I will also discuss a bit about the makeup of the code.
The significant breakthrough - Ohad implements the BDD
First, some might be wondering: what is a BDD? BDD stands for binary decision diagram, a data structure. This data structure is, in my opinion, as significant to Tauchain as the "blockchain" data structure was to Bitcoin. For those who do not have a computer science degree, I will elaborate below on what exactly a data structure is before discussing what a BDD is and why it is so significant.
Brief discussion on what a data structure is
In programming, a data structure is a concept which represents a method of organizing data. For example, blockchain is all about how records are stored as blocks. There are other data structures which similarly represent decentralized data management and storage, such as the distributed hash table.
A blockchain data structure looks like this for visualization:
By Matthäus Wander [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
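As a minimal sketch of the organizational idea behind the blockchain data structure (fields simplified for illustration, using Python's standard hashlib):

```python
import hashlib, json

# Each block stores the hash of the previous block, so changing any past
# record breaks every later link in the chain.
def make_block(records, prev_hash):
    body = json.dumps({"records": records, "prev": prev_hash})
    return {"body": body, "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block(["genesis"], prev_hash="0" * 64)
block1 = make_block(["alice pays bob"], prev_hash=genesis["hash"])
block2 = make_block(["bob pays carol"], prev_hash=block1["hash"])
# Tampering with genesis would change its hash and orphan block1 and block2.
```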
A hash table looks like this for a visual:
By Jorge Stolfi [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], from Wikimedia Commons
Really good programmers choose the appropriate data structure to meet the requirements of the project. BDD was chosen specifically by Ohad because it provides efficiency boosts in a key area necessary for Tauchain to function as intended. Specifically, we know Tauchain requires partial fixed point logic in order to have decidability in PSPACE. We also know Tauchain requires decentralization and efficiency. Efficiency is best understood in terms of the trade-off between time and space: we do not have unlimited time or space, so we must sacrifice one in order to get more of the other.
When we look at the codebase, we know that Ohad can optimize the code either by sacrificing space, in which case the executable is bigger but runs faster, or by sacrificing time, in which case the executable is smaller to save memory but might run slightly slower. This highlights the essential trade-off between time and space when optimizing code, and of course there is more to it, because the algorithms within a codebase have to make similar trade-offs.
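The same trade-off shows up at the algorithm level. A tiny sketch: caching answers spends memory on a table of results to buy speed, and dropping the cache does the reverse:

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # spend space on a table of stored results...
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))                  # ...to buy time: linear calls instead of exponential
```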
Now what exactly is a BDD (binary decision diagram)?
Now that we understand the basics about efficiency and what a data structure is, we can make a bit more sense of what a BDD is. To understand why the BDD as a data structure is so important to Tauchain, we have to remember that Tauchain is about logic. We can take the most basic example, Socrates:
A predicate takes an entity or entities in the domain of discourse as input while outputs are either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q. The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and is instantiated as "Plato" in the second sentence. While first-order logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not.
Based on the rules of first-order logic we can provide inputs and receive outputs. In the most basic example above we can see a bit about how logic works. To elaborate further:
Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a, and on the interpretations of the predicates "is a philosopher" and "is a scholar".
A truth table has one column for each input variable (for example, P and Q), and one final column showing all of the possible results of the logical operation that the table represents (for example, P XOR Q). Each row of the truth table contains one possible configuration of the input variables (for instance, P=true Q=false), and the result of the operation for those values. See the examples below for further clarification. Ludwig Wittgenstein is often credited with inventing the truth table in his Tractatus Logico-Philosophicus, though it appeared at least a year earlier in a paper on propositional logic by Emil Leon Post.
When we are dealing with logic we may find that a truth table helps with visualization.
Now with this knowledge we have the most basic Socrates example:
This can be represented via a truth table, and the argument is called a syllogism. To solve it we apply a kind of reasoning called deductive reasoning: if "All men are mortal" is true, and "Socrates is a man" is also true, then "Socrates is mortal" must be true. If we were to say all men are mortal but Socrates is immortal, then Socrates cannot be a man. So if Socrates is a man he must be mortal, or there is what we call a contradiction. Logic is all about avoiding these sorts of contradictions, and binary or boolean logic in particular is about reaching a conclusion which must always be one of two possible values.
If I ask you to play a game which we can guarantee will end in one of exactly two possible outcomes, then we have a good example of a boolean function: 1 or 0, true or false, on or off, a or b.
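We can check the syllogism mechanically: as a propositional formula it is ((man → mortal) ∧ man) → mortal, and a quick sketch of its truth table shows it comes out true on every row:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for man, mortal in product([True, False], repeat=2):
    premises = implies(man, mortal) and man    # "all men are mortal" + "Socrates is a man"
    print(man, mortal, implies(premises, mortal))  # last column is always True
```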
Some of you may be familiar with the data structure we call a DAG (directed acyclic graph). For those of you who understand this concept, you can visualize a BDD as being very similar to a propositional DAG.
By David Eppstein [CC0], from Wikimedia Commons
We know a DAG has a finite set of vertices, edges, etc. We may also be able to visualize topological ordering, and if you remember my post on transitive closure you might also remember the visuals of how that can work:
A binary decision diagram can represent a truth table:
By The original uploader was IMeowbot at English Wikipedia. (Transferred from en.wikipedia to Commons.) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons
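To make the picture concrete, here is a minimal sketch, assuming a toy representation of BDD nodes as (variable, low, high) triples with 0/1 leaves; real BDD packages add a fixed variable ordering and node sharing, which is where the efficiency gains come from:

```python
# Evaluate a BDD: walk from the root, taking the low child on False
# and the high child on True, until a 0/1 leaf is reached.
def bdd_eval(node, assignment):
    while not isinstance(node, int):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# x XOR y: test x first, then y on each branch.
xor = ("x", ("y", 0, 1), ("y", 1, 0))

for x in (False, True):
    for y in (False, True):
        print(x, y, bdd_eval(xor, {"x": x, "y": y}))  # reproduces the XOR truth table
```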
From these visuals it should now be clear how this is critical to the functioning of Tauchain. The BDD data structure also allows for efficient model checking. To understand why, we have to consider the boolean satisfiability problem.
This highlights the fact that a BDD can be used to build a SAT solver.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms.
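For a feel of the backtracking search the quote describes, here is a bare-bones DPLL sketch, using the common convention of clauses as lists of non-zero integers (negative means negated); real solvers add unit propagation, clause learning, and branching heuristics:

```python
def dpll(clauses, assignment=None):
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                    # clause already satisfied: drop it
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                 # empty clause: conflict, backtrack
        simplified.append(rest)
    if not simplified:
        return assignment               # every clause satisfied
    var = abs(simplified[0][0])         # branch on the first unassigned variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None                         # both branches failed: unsatisfiable

# (x1 or x2) and (not x1 or x2): satisfiable, e.g. with x2 = True.
print(dpll([[1, 2], [-1, 2]]))
```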
Without getting overwhelmed by technical details the key points are below:
To read the code for yourself and track the progress of Tauchain development take a look at Github:
''We live in a world in which no one knows the law.''
Ohad Asor, Sept 11, 2016
I continue herewith sharing my current state-of-grok of the up-to-now four scriptures of the aka newtau. Sorry for the delay, but it comes mostly from the effort to contain the outburst of words, catalyzed by the very exegetic process of such rich content, into a reader-friendly shorter form.
The subject of vivisection textographically identifies as the first three paragraphs of ''Tau and the Crisis of Truth'', Ohad Asor, Sep 11, 2016 .
The four core themes extracted are enumerated below, with a streak of comments of mine, kept modest so as not to sidetrack the thought or spoil the original message:
As a guy who has been immersed in Law for more than a quarter of a century, I can swear with both hands on my heart to the notion of the unknowability of Law.
Since my years in law school I have been asking myself how it is possible at all to have 'rule of law' when every legal system ever known has required humans to operate it!?
It seemed that the only requisite or categorical difference between the merely arbitrary 'rule of man' and the 'rule of law' was that in some isolated cases some ruling men happened to be internally programmed by their morals to produce 'rule of law' appearance effects by 'rule of man' means.
Otherwise, 'rule of law' done via 'rule of man' poses extremely serious threats of law being used by some to exploit and harm others.
In that line of thought my conclusion was that the Law is ... yet to come.
What we know as Law is not good networking-protocol software for mankind as such; rather, we see comparatively rare examples of individually well-programmed ... lawyers.
It will come on the wings of a technological breakthrough, just as flying came with the invention of airplanes, the moonwalk needed the advent of rocketry, and remembering without staying alive needed writing. The Law is an old dream. If we judge by the depth of the abyss of folklore, one of humanity's most ancient dreams, indeed. Needless to repeat myself that this is what sucked me into Tau as relentlessly as black-hole spaghettification :)
The frustration with Law of the great Franz Kafka, referred to by Ohad and expressed in his book The Trial, becomes very understandable: Kafka's epoch lacked the comforting hope of a technology which we already have - the computer - and of the overall progress in the fields of logic, mathematics, and engineering, forming a self-reinforcing loop centered around this sci-tech of artificial cognition.
Similar to nuclear fusion, which is always a few decades away even as the fusion gap noticeably closes nowadays, we are standing on the cliff of a Legal gap.
Mankind's heavy involvement in cognition technologies, especially in the last several decades, has outlined multiple promising directions of further development, which seem to bring us closer to the ability to compensate for the fundamental deficiencies of Law and, in fact, to finally bring it into existence.
It took an entire Ohad Asor, however, to identify the major reasons why the Law is still bottlenecked out of our reach, and to propose viable means to bridge us across that Legal gap... The other side is already in sight.
It is, in the first place, the language that is to blame!
The natural human language. Our most important attribute as a species. The mankind-maker. The glue of society. It just emerged; it wasn't created. It has patterns, vaguely conventional, rather than an intentionally coined set of solid rules. There are no firm rules for changing its rules, either ... Natural human language is mostly a wilderness of untamed, pristine, naked nature, dotted here and there with very expensive, hard-to-install-and-maintain ''artefacts''. Leave it alone, out of the coercion of state mass media, mass education, and national language institutes, and it falls back into a host of unintelligible dialects. Even when aided by the mnemonic amplifier we call writing.
Ambiguity is characteristic of natural language: a feature in poetry and politics, but a deadly bug in logic and law.
We'll put aside for now the postulate of the impossibility of a single universal language, to revisit it later when its exegetic turn comes, in another chapter on another scripture. Likewise, in this chapter we won't cover the neurological human bottlenecks which Tau targets to overcome. Let's observe the sequence of the author's thoughts and not fast-forward.
Instead I'll dare to share with you my own hypothesis about why the natural human languages are the way they are. (I'm smiling while I type this, because I can visualize Ohad's reaction upon reading such a frivolous lay narrative. I hope that, being too busy, he actually won't.) To say that human languages are just too complex does not bring us any nearer to a decent explanation. Many logic-based languages are more than a match for the natural human ones in terms of expressiveness and complexity. That can't be the reason.
My suspicion is rather that the natural human languages pose such Moravec hardness because they are not exactly languages. Languages are conveyors of meaning. Human languages convey not meaning, but indexes or addresses or tags of mind states. The meaning is the mind state. Understanding between humans is a function not only of shared learned syntax, but also of shared lives: of the aggregation of similar mind states referred to by matching word keys.
If this is true, it is another angle for grokking the solution of human users leaning towards the machine by use of a human-intelligible Machinish, instead of Tau waiting for the language barrier to be broken and machines starting to speak and listen in Humanish.
In a nutshell, we still await the Law because Law is not doable in Humanish. Bad software. And the other side of the no-law coin is that humans are no cognitive ASICs. We do cognition only meanwhile and in-order-to do what other animals do - survive. Bad hardware.
In order for law to become law it must become hands-free.
Not humans reading laws, but laws reading laws.
The technology to enable that looks to be within arm's length.
Ok, so far we have butchered the law and the language. What's left?
The nature and essence of human language brought one of the most harmful and devastating notions ever. Literally, a thought of mass destruction.
The ''crisis of truth''. The wasteland left by the toxic idea spillover of ''there is no one truth'', or even ''there is no truth'' at all. This is not only an abstract, philosophical problem. Billions of people have actually been killed for somebody else's truth.
It is no accident that the philosophers who immersed themselves in this pool are nicknamed 'Deconstructivists'. Following back their epistemic genealogy, we see, by the way, that they are rooted in faith rather than in reasoning, but that is another story.
The general problem of truth, of which the problem of law is just a special case, opens up two important aspects:
Number one is that all knowledge is conjectural with respect to truth, and that truth is an asymptotic boundary - forever to close in on but never to reach, like the speed of light or absolute zero. Number two is that human languages make pretty lousy vehicles to chase the truth with.
If words really exist just to match people's thoughts together, then there are thoughts without words and words without thoughts. Words mismatch thoughts, so how can we expect them to bridge thoughts to things? Entire worlds of nonsensical wording emerge, dangerously disturbing the seamless unity of things and thoughts. Truth displaced.
''But can we at least have some island of truth in which social contracts can be useful and make sense?''
This island of shared truth is made of consensus  bedrock and synchronization  landmass.
Truth and Law self-enforced. From within, instead of by violence from without. And in a self-referential, non-regressive way.
''We therefore remain without any logical basis for the process of rulemaking, not only the crisis of deciding what is legal and what is illegal." 
Peter Suber, with his ''The Paradox of Self-Amendment: A Study of Law, Logic, Omnipotence, and Change'', proposed a rulemaking solution which he called Nomic.
''Nomic is a game in which changing the rules is a move.'' 
The merit of Nomic is that it really eliminates the ills of the infinite regress of laws-for-changing-the-laws-for-changing-the-laws, ad infinitum, by use of transmutable self-referential rules. But Nomic suffers from a number of issues. The first, in the spotlight of this chapter, is the fact that we still remain with the "crisis of truth" in which there is no one truth; the other ones - like scalability of sequencing and voting - we'll revisit in their order of appearance in the discussed texts.
The aka 'newtau' goes past the inherent limitations of the Nomic system and resolves the 'crisis of truth' problem.
The next few chapters will dive into Decidability and how it applies to provide solution to the problems described above.
 - https://en.wikipedia.org/wiki/Grok
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-intro
 - https://steemit.com/tauchain/@karov/tauchain-exegesis-the-two-towers
 - http://www.idni.org/blog/tau-and-the-crisis-of-truth.html
 - http://www.behest.io/
 - https://steemit.com/blockchain/@karov/behest-for-tauchain
 - https://en.wikipedia.org/wiki/Rule_of_law
 - https://en.wikipedia.org/wiki/Tyrant
 - https://en.wikipedia.org/wiki/Morality
 - https://en.wikipedia.org/wiki/Spaghettification
 - https://en.wikipedia.org/wiki/Franz_Kafka
 - https://en.wikipedia.org/wiki/The_Trial
 - https://www.amazon.com/Merchants-Despair-Environmentalists-Pseudo-Scientists-Antihumanism/dp/159403737X
 - https://en.wikipedia.org/wiki/Language
 - https://en.wikipedia.org/wiki/Official_language
 - https://steemit.com/blockchain/@karov/tau-through-the-moravec-prism
 - https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
 - https://www.etymonline.com/word/manipulation
 - https://en.wikipedia.org/wiki/Deconstruction
 - https://en.wikipedia.org/wiki/Consensus_decision-making
 - https://en.wikipedia.org/wiki/Synchronization
 - http://legacy.earlham.edu/~peters/writing/psa/index.htm
 - https://en.wikipedia.org/wiki/Nomic
 - https://en.wikipedia.org/wiki/Infinite_regress
 - the illustration is a painting courtesy of the author Georgi Andonov https://www.facebook.com/georgi.andonov.9674?tn-str=*F
This topic has been loaded in the barrel since - as I see in my draft records - April 2018. It is my free association on the major topic of the so-called ''tragedy of the commons'', refracted through the prism of things I have had to pass through with Tau in mind. Over the months it has replicated itself into numerous subtopics and threatens to grow into several general theories, so I decided it better to unleash it into the wild and to handle it with your help, and, if necessary, to tame and domesticate it and its progeny by the coming power of Tau.
The problem of the 'tragedy of the commons' is a symptom of the more general theme of ownership.
I think I kinda nailed it. It seems this approach brings serious inference power, i.e. via it most of what we know can be derived. Of course it lacks mathematical/logical rigor, but even at such a haiku level of expression it seems to work.
Yes, there is such a word. In linguistics .
Per se, ''clusivity'' is a modulus of inclusion and/or exclusion.
Absolute value in maths denotes 'distance' from zero regardless of direction, which seems to translate well for depicting the spectrum between 'included' and 'excluded', if we imagine excluded = -1 as the opposite of included = 1, with zero measuring a state of equal clusion. The other, more intuitive and easier-to-grasp way would be the fuzzy logic of zero-to-one fractional values, where zero is no clusivity and one is full clusivity. Let's say we take one of the possible 'directions': 0 = complete exclusion, 1 = complete inclusion, with multi-values in between.
Of course, due to purely physical reasons, 0 and 1 are asymptotic values - ever to approach, never to reach. And of course, due to purely physical, finitist reasons, the clusivity fuzzy spectrum is quantized, not smoothly continuous.
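A toy sketch of this reading, with invented entities and scores, treating clusivity as a fuzzy degree of access in [0, 1]:

```python
# 0.0 = complete exclusion, 1.0 = complete inclusion; quantized in practice.
clusivity = {
    "owner":    1.0,   # full inclusion: all access
    "tenant":   0.7,   # broad but limited access
    "guest":    0.3,   # narrow access
    "stranger": 0.0,   # complete exclusion
}

def may_access(who, required=0.5):
    # Access is granted when the degree of inclusion clears a threshold.
    return clusivity.get(who, 0.0) >= required

print(may_access("tenant"), may_access("guest"))  # True False
```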
Attending to etymology usually pays off, for two reasons:
Thus, we can visualize all languages as a single language, a continuum with mascons of commonality of indexing-meaning pairs. Like a strange form of semantic entanglement - to be inevitably hacked open someday, giving birth to endless valuable technologies...
What does all this have to do with Commons, Ownership, and Tau?
Interestingly, the etymology of 'include' automatically leads to its privatization/publicization functionality.
It is cognate with both:
The private/public ''divide'' as a key/access-driven relation.
Do we ''have the keys''? Or ''are we'' the keys (in non-computerized, 'face-control' types of access)?
NO. For any entity and for every access, the keys are not the entity and are not a property of it.
A Key is OUTPUT by us. Fed as INPUT into other systems, so that they perform.
Society can be imagined as a network of partially-black boxes, where free will is a function of a box's certainty of self-reflection, and trust is a function of the uncertainty in predicting other boxes' behavior...
We do not know, and in most cases cannot know, what's going on inside the inner workings of other people, organizations, or other artifacts, but we know that by inserting a Key we can make them perform a certain expected, predicted action.
The boxes are said to be partially black, the non-black part denoting the zone of predictability - i.e. ''if I input this into that black box I know it will return to me this and that specifically''...
A Key, be it biometrics, a piece of shaped metal, a digital string of bits ... is a reason which causes, an input which brings about, the outcome of access.
An important side note: in key-pair philosophy it is NOT two keys, public and private, but rather a (public) padlock and THE (private) key, so everybody can lock it but only the key owner can unlock/access it.
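The padlock intuition can be seen in textbook RSA with toy numbers (utterly insecure at this size; real systems use a vetted library): anyone holding the public pair (e, n) can lock a message, but only the holder of d can unlock it.

```python
p, q = 61, 53
n = p * q                            # part of the public "padlock"
e = 17                               # public exponent: anyone can snap the padlock shut
d = pow(e, -1, (p - 1) * (q - 1))    # private key, kept secret

message = 42
locked = pow(message, e, n)          # anyone can lock (encrypt)
unlocked = pow(locked, d, n)         # only d unlocks (decrypt)
print(unlocked == message)           # True
```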
You may have noticed one of my oft-repeated slogans:
LAW IS BETWEEN, CODE IS WITHIN
It comes to delineate the map of Trust - i.e. where force is needed (''I trust you only as much as I can make you'') versus the self-enforcing systems of blockchain and who knows what other possible systems.
The whole picture is pretty insightful in both the blockchain and the trust (i.e. force) context when we realize that it is not so much about the de jure but about the purely de facto situation, even when minding the Law. For private-public is a function of the performance and efficiency of the protocol. Including the key-making ones. Including the key-breaking ones.
On The Law and the related trust-as-enforcement relations to code and protocols I'll go into detail some other time (actually lots of times, because it seems the bunch of concepts here has lots of fruitful logical consequences), but the inevitable conclusion seems to be that, in general, it is a Clusivity thing even in the Legal case. For it is a matter of accessing the output of compulsory legal action by inputting a ... key.
The recent EU intellectual property directive is an elementary example of the Fiat approach of external enforcement (as opposed to the cryptographic 'trustless' one). The Fiat way of enforcing ownership rights is also a Clusivity system. The subjects, victims of a property-rights breach, ACCESS the authorities with their ID information, evidence, and procedural codes, and as output they are supposed to receive enforcement actions against the delinquents. The cost of trust this way can be staggering, and it is apparent that such a system may easily get clogged and implosively unscale.
Tau is mostly about the knowledge economy. An economy without ownership ... is very hard, if not impossible, to imagine. Like the Borg, where there is no 'between' anymore and everything is 'within'; but even an all-white-boxes system is prone to failures. Especially when we go past the veil of the ideological cliché definitions and take ''to own'' = ''to access'' in the purely factual, physical sense of the word.
In this sense each and every economy is a Clusivity management system.
Tau promises the ultimate Clusivity management.
 - https://en.wikipedia.org/wiki/Tragedy_of_the_commons
 - http://www.idni.org/
 - https://en.wikipedia.org/wiki/Irony
 - https://en.wikipedia.org/wiki/Ownership
 - https://en.wikipedia.org/wiki/Clusivity
 - https://en.wikipedia.org/wiki/Absolute_value
 - https://www.etymonline.com/word/inclusion
 - https://www.etymonline.com/word/exclusion
 - https://en.wikipedia.org/wiki/Fuzzy_logic
 - https://en.wikipedia.org/wiki/Finitism
 - https://en.wikipedia.org/wiki/Discrete
 - https://en.wiktionary.org/wiki/continuous
 - https://en.wikipedia.org/wiki/World_line
 - https://en.wikipedia.org/wiki/Memory_(disambiguation)
 - https://en.wikipedia.org/wiki/Morphism
 - https://en.wikipedia.org/wiki/Mass_concentration_(astronomy)
 - https://en.wikipedia.org/wiki/Quantum_entanglement
 - https://www.etymonline.com/word/include
 - https://en.wikipedia.org/wiki/Black_box
 - https://security.stackexchange.com/questions/87247/why-is-a-public-key-called-a-key-isnt-it-a-lock
 - https://en.wikipedia.org/wiki/Public-key_cryptography
 - http://www.behest.io/
 - https://steemit.com/tauchain/@karov/tauchain-and-the-cost-of-trust
 - https://www.theguardian.com/technology/2018/jun/20/eu-votes-for-copyright-law-that-would-make-internet-a-tool-for-control
 - https://en.wikipedia.org/wiki/Fiat_money
 - https://en.wikipedia.org/wiki/Delict
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://steemit.com/tauchain/@karov/scaling-is-layering
 - https://steemit.com/tauchain/@karov/tauchain-transcaling
 - https://en.wikipedia.org/wiki/Borg
 - https://en.wikipedia.org/wiki/Cancer
 - The marvelous picture above is quoted from : https://www.deviantart.com/lora-zombie/art/LORA-ZOMBIE-THREADLESS-351467642
''Tau solves the problems from the Tower of Babel to the Tower of Basel''
- an early 21st century yet undisclosable author
Okay, dearest friends, let's roll up our sleeves and start. Vivisection of the Scriptures? Revelation by transfiguration? Pulling the Tau from the ocean of wisdom out onto the dry no-Maths-land? I hope not.
The quote above at first glance sounds pompously biblical, but in fact it denotes the crystal-clear, simple, practical, and mundane rationale of Tau, which I have already tried to approach from a few angles.
It is about the hierarchic bottleneck of one unscaling Humanity. Take the hint about the leveling of the Towers as a poetic symbol of the elimination of social 'verticality' -- the hierarchies that have so far been a necessary evil compensating for certain innate neurological limitations -- and of reforming the network we are embedded into, and usually call mankind or society or economy or world, into one as geodesic as possibly possible. For the sake of its own functional, programmatic optimization.
Notice that the leveling of the towers is not by demolition, but by uplifting the overall landscape to and above the tower tops, turning them into deep roots or support pylons of an asymptotically geodesic society.
Apparently, mentioning the Gate of God (Babel) denotes the unmixing of languages, and mentioning the apex global fiat settlement institution (Basel) the surpassing of the current fiat procrustics, i.e. the economy aspect.
That is: TML to Agoras. The first and last of the six identified aspects or steps of social choice as addressed by what we call Tau.
''our six steps of language, knowledge, discussion, collaboration, choice, and knowledge economy''
These aspects of course deserve separate zoom-in exegetic chapters, and they'll definitely get them. I promise. And not only they.
Any exegesis of Tau unavoidably must start with scrolling back and tracking down the full history of the development so far. A zoom out, to see the full picture and to identify the dominant features of the landscape relief.
You, I reckon, have already noticed this retrodictive inclination of mine: that in my mind the notion of a ''Timeline of Development'' cannot, by any logic, be just a handful of milestone promises thrown into the future; it is a must to account for the trajectory up to now, too! No future without past.
It all started as Zennet, continued as Tau-chains, and 'turned' into the aka 'newtau'.
Wait! A New Tau?
Excuse me, Ohad, but I personally do not buy that, and I have said so many times. There is no old and new Tau. The situation is much more straightforward and grokkable. Here it is:
Lots of guts, balls, butt, brains, or whatever human offal... is required for each of us to admit a mistake made in our everyday life. Generally, quite some strength is needed even to look at ourselves in the mirror...
It takes a whole Ohad, though, to keep all of one's work totally public and transparent, even as a full and unedited live record of the infiltration into an entire branch of mathematics, and then to throw it all away as untauful. We witnessed that reported in real time!
Did this change the ends? No. But it sorted out the means to an end.
Was it a 'mistake'? In no case. It was a duly delivered R&D effort.
Was oldtau looking promising at first glance? Yes, of course it did.
Did it survive Ohad's R&D 'crash-testing'? No, it didn't.
Was the ''juice worth the squeeze''? It was.
Was it a job well done? Absolutely.
The oldtau materials are legacy jewels to me. Like those dinosaur bugs trapped in blobs of amber.
Development is a process, not just the shipping of results. Related like cooking and serving.
Studying the zoom-out dev map we observe these few major landmarks:
The Zennet province is all right. Its gently rolling hills gradually merge into the Tau lands proper, with the inevitable realization that a 'world supercomputer' cannot be a Tauless thing. Zennet lives on in Tau:
''... having a decentralized search engine requires Zennet-like capabilities, the ability to fairly rent (and rent-out) computational resources, under acceptable risk in the user's terms (as a function of cost). Our knowledge market will surely require such capabilities, and is therefore one of the three main ingredients of Agoras... hardware rent market...''
We move on through the oldtau wastelands, where the burnt ruins of MLTT lie scattered - a rough location-on-the-map indicator for oldtau is the fall of 2015, with
''Tau as a Generalized Blockchain'' - posted Oct 17, 2015, 6:33 AM [updated Oct 17, 2015, 6:49 AM]
and then we reach the fertile gardens of newtau  in the fall of 2017:
''The New Tau'' - posted Dec 31, 2017, 12:27 AM [updated Dec 31, 2017, 12:28 AM]
Hmm. Apparently we crossed a watershed. Which relief feature was it? The ridge of:
''Tau and the Crisis of Truth'' - posted Sep 10, 2016, 8:25 PM [updated Sep 10, 2016, 8:28 PM]
Tau sorts out the Towers. I hope the synopsis in this short chapter of the Exegesis helped to sort out Tau dev in time, as a navigation lookup tool.
Software is nothing but states of hardware. There is an intimate, deep connection, not yet codified into a neat compact of logic, between Gödel, Heisenberg, and the laws of thermodynamics.
Tau keeps us off these traps.
I do not dare to state that someday we won't have command of infinities and play with them with the ease of
''... a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.''
In fact, quite the opposite: I'd rather take it as an inevitability that someday we will conquer the Cantor expanses and venture far even beyond that. To transcale the transfinite. As Hilbert said:
''Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können. (From the paradise, that Cantor created for us, no-one can expel us.)''
But it takes ... finitary vehicles of DECIDABILITY to conquer the transfinitary outer spaces. Because, in order to dare to dream of taming the infinities, we must first harness and get full command of the finities.
Including of ourselves. Tau is ''understanding each other''. Without Tau we are ... others to ourselves.
Imperare sibi maximum imperium est. (To rule oneself is the greatest empire.)
Let's build a universe. I realize this blog post is the most 'psychedelic' up to now and for a long time to come, but some 'poetry' never really hurts ...
We have already discussed the worldmaker effectoring.
It is quite an ancient but also exponentially growing business ... in all possible forms of science, faith, and art. This modeling usually serves to play out what's possible and what's impossible. A Gedankenexperiment, yeah, but isn't all thought merely algorithmic, and mere action?
Usually the posited universes are made of variations and combinations of substance/matter, structure/form, and action/process rules. The algorithmic component, though, is always the essential ingredient. Yes, the Laws of Physics are a full-fledged, literal algo, too. I have conjectures that it is impossible to think out, make, or discover (which is one and the same thing) a lifeless universe, and that substance-structure-action are inextricable, but these are separate topics for some other time.
Let's put together our toy universe out of pure algorithm alone. I've never seen such a construct, although the Orbis Tertius is enormous, and I bet this vision has occurred gazillions of times in zillions of minds.
It is like an ocean. The primary coin-toss algo which outputs 0s and 1s makes the water. We don't know (yet) if there are even deeper and more fundamental numerical bases for running algos. Most probably the answer is yes, by analogy with the Dirac Sea - the deeps made of simpler and weaker algos. The most elementary coin-toss thing makes up the ... probabilistics, perhaps the primordial form of logic. The laws of physics (and of machine learning and of the Darwinian evo algo ...) give the rule-set for how to stitch together lots of coin-toss outputs. A hint of inspiration for that: David Deutsch's Constructor theory. The laws of physics act as an entropy limitation on the allowed cumulative output of the elementary algos. For information is a verb, not a noun - isn't it? A very interesting philosophical perspective on the algorithm as a randomness constrictor arises...
So, if the Algoverse ocean water is made of elementary coin-toss molecules, being ''liquid'' is just another phase or aggregate state.
There is a deep duality between probabilistics and logic, just like the zoo of dualities discovered at an accelerating pace by mathematical physics in recent decades. Probability/statistics we now do by logic; the reverse ... well, nobody has cracked it yet. Even Kolmogorov. But I bet we will. Most probably the breakthrough will bear Ohad Asor's name... To find the know-how to do it the other way round: to do logic with probability/statistics. The statistical algorithmic - not the SAT, brute-force, alchemist way as with NN/ML and other known beasts. This will be nothing less than a full merger of maths/logic/philosophy/thought ... and physics. Literally!
Excuse me for the haiku simplification. It is deliberate, out of a realization of my grok constraints. :) Regard it as the sharing of a poetic impression.
Is there a deeper and weaker algo than the digital one - radix-2, deterministic, unitary? Intuition says ''yes, of course!'' Like those radix-1 half-coins of negative and other non-unitary probabilities ... which take two tosses to yield a bit ... and there must be a transfinity of lower ones, also transfinities of higher and sideways ones ... which is almost as counter-intuitive as Dirac's bottomless night of negative energy, but I bet also as useful. (Let's not even touch numeral bases of Pi, i, e ... etc.) Let's stick to strictly binary 'water' for our oceanic toy universe, for the sake of sanity.
The next important notion of the Algoverse oceanic model is algorithmic strength - the weakest algo would be the one which takes an infinity of tosses to yield a full bit. The strongest?
Algorithmic ephemeralization - essentially to do more with less. Or faster - the Speed Prior ... which is just another way to say 'more'.
Some algos are too strong - QM, M-Theory - they return way too many bits per 'toss'. Their VC dimension converges to infinity. Exponential walls in all directions. Not exactly what Freeman Dyson had in mind ... In our ''mockup'' they could be depicted as too hot, changing the phase of the elementary algo 'water'. Like:
But because we are all for the peaceful use of algorithmic energy, we reject those up here, too - together with the non-unitary statistics down there.
Last piece of the picture - the Algoverse ocean is habitable and inhabited!
By higher algos as life forms - stronger, but not so strong as to turn the 'water' into roaring steam or plasma.
Examples: calculi, geometries, algebras ... software. The genetic inter-algo connection should be that calculus came from the heads of Leibniz and Newton and numerous unknown others, but it was the blind watchmaker of evolution which put those heads together ... (I disagree with Dawkins only in that evolution and design are both algorithms: alternatives, not opposites).
Thus, entropically and combinatorially, algos kinda-sorta come from one another - the stronger from the weaker.
The stronger ones are the life forms living in that ocean. Because randomness permeates everything, doesn't it?
Not so far-fetched a metaphor, given the fact that any Effector-ing has a totally algorithmic nature and essence.
How much higher a 'life form' is Tauchain in the Algoverse ocean?
Is it a mere life form or ... life itself, a new organizing principle to reform the whole system?
Guys, after a few articles , , .  - I think I owe you to present a little bit myself and Behest.io , .
I, Karov, am a human, i.e. I'm not a robot (although, as my friend @trafalgar can witness, I once fought all day long with a Google Forms captcha - but I prefer to blame a software glitch for that...).
I happened to discover that 'karov' is the word for 'near' in Hebrew, but this is pure coincidence.
I'm a lawyer. More than two decades of uninterrupted PQE (post-qualification experience), in a couple of European jurisdictions.
Behest.io is a... firm. In the sense of firm (n.), or in the very original sense, as any firm's only way to be - a signature. Not (yet) in the sense of an entity with legal personhood.
As a signature, Behest.io is a tool. My tool, which I continuously develop to deliver compliance answers, upon behest, to various crypto endeavors.
Metaphorically, the Behest.io development target is this: if a law firm is a CPU, Behest.io is to be a crypto legal services ASIC.
Blockchain came on too swiftly, too strongly, and too globally. Like an alien invasion. Legislators and law enforcement cannot keep pace. Law and regulation are far from settled on it.
There is an entire internet of jurisdictions out there. Nobody really knows the Law. One cannot just go out and shop for answers; there is no legal supermarket with neat shelves of turnkey solutions and price tags.
The compliance space is turbulent. Nothing is ready and definite. There is a very high risk of a grey zone turning red hot. A quicksand minefield.
The crypto lawyer's job is not yet an industry; it is inevitably art and craftsmanship. Tailored solutions.
Thus Behest.io is a studio, not a conveyor-belt mass factory.
Our approach to support is: side by side, thinking together, carefully mapping the routes ahead, identifying the correct questions, and precisely crafting specific solutions.
On a tailored, case-by-case basis. In strict confidence. In an always dynamic and adaptive fashion. In real time. From entry to exit. All-the-way navigation, from mere idea to the end.
So far it sounds like just another advert... I know. But let me quickly throw out some preliminary Behest.io points, in an attempt to start sketching the bigger map:
FIRSTLY: Why ''for Tauchain''?
Ever since my law school years, back in the past millennium, I have noticed that the Law in all its dimensions - legislature, legislation, application, enforcement, science, jurisprudence, doctrine... - is somewhat inconsistent and not quite self-sufficient.
I'm now firmly of the position that the place of Law is not with the soft disciplines of history and literature, but among the hard ones of maths, logic, philosophy and physics.
If we compare society's rule set to the protocol code of a human network, the Law up to now is obviously not quite automatic and requires too much 'manual drive' - including in the rules for making rules.
For all this quarter of a century I tried to envision (with my limited tech knowledge) various... systems which could eventually compensate for such flaws: virtualization, procedural generation, gamification... and then Satoshi came. And Ohad Asor appeared.
If we compare our intention and dream of Law with flying: since time immemorial humans have wanted to fly like birds, but it took the Wright Brothers for us to fly... not the way the birds do.
I must herewith admit that two technological projects are closest to my heart: Tau and ET3. They form a kind of... unity, but on that - some other time, in a series of other posts.
In his essay of Sep 10, 2016, 8:25 PM, Ohad Asor very precisely outlined the problem of Law:
''We would therefore be interested in creating a social process in which we express laws in a decidable language only, and collaboratively form amendable social contracts without diving into paradoxes. This is what Tau-Chain is about.''
Exactly! The problem of Law is that it is written in the inherently buggy 'software' of natural human language and runs on the 'hardware' of human brains - hardware faulty for this purpose, having been 'made' to optimize for a completely different category of tasks. Like... survival.
By these means - natural human language and human brains - we can achieve Law no more successfully than we could walk from here to the Moon.
Tau is the most solidly grounded and promising effort to deliver the long-dreamed-of 'rocketry' to take us from here to the Law.
If Law is decidable code, it is specifiable, with all intended consequences predictable and guaranteed. Decidable, consistent... and self-amending. Precisely what the Law is supposed to be. At last. And if it is specifiable in exact terms, action code can be synthesized out of it, feeding legal effectors of all kinds with precise instructions.
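To make 'Law as decidable code' a little more tangible, here is a minimal sketch in plain Python standing in for a Datalog-like decidable rule language - my illustration, not Tau's actual TML; the contract facts and the rule are invented for the example:

```python
# Facts: the current legal 'knowledge base'.
facts = {
    ("signed", "contract1", "alice"),
    ("signed", "contract1", "bob"),
    ("consideration", "contract1"),
}

# Rule: a contract is binding if two distinct parties signed it and
# consideration exists.  Rules only ever ADD facts over a finite
# universe, so evaluation always terminates: that is decidability.
def derive(fs):
    new = set()
    signed = [f for f in fs if f[0] == "signed"]
    for (_, c, x) in signed:
        for (_, c2, y) in signed:
            if c == c2 and x != y and ("consideration", c) in fs:
                new.add(("binding", c))
    return new - fs

# Naive forward chaining to a fixed point.
while (delta := derive(facts)):
    facts |= delta

print(("binding", "contract1") in facts)  # True: derivable in finite time
```

Every consequence is computable in finite time; contrast natural-language law, where whether something follows can be genuinely undecidable.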
Because our societies map onto our communications, a drastic improvement of our interaction rules is equivalent to an immense improvement of the human condition.
The Law as a Tapp (Tau App)? Most definitely. I know of no other attempt to address the issue with such pure reason and demonstrated understanding.
This is the reason behind the ''for Tauchain'' part of this post's title. It can get us there. We can have the Law, at last.
This is in Behest.io's and my own best selfish interest. Which is: a world of unimaginable freedom and wealth for all.
Behest.io in that sense is ''for Tauchain'', with the prospect of Tau becoming ''for Behest'' - the realization of my lifetime Legum project.
Behest.io is not of Tauchain, nor of IDNI. It is an independent project of an independent lawyer, with a strong current focus on Tau and ET3, for the reasons outlined above. In a series of upcoming articles I intend to elaborate on my visions of, and positions on, these in general.
SECONDLY: How exactly is Behest.io supposed to operate before Tau is in our hands to play with?
All by the book, of course! The legal profession is for compliance, but it is also all about compliance per se: lawyers are not just makers and shippers of compliance, they must be compliant themselves. Law is a strictly local and heavily regulated profession. As it should be.
Not only does no lawyer know all law, but there is no such thing as a global or universal license to provide legal services. Regardless of 'professional services provider' structures like the Big Four or other hierarchical collaborations, a lawyer is limited to operating only in the territory that the regulator granting his professional 'badge' allows.
On the other hand, the Internet and Blockchain are inherently global; they penetrate and permeate all jurisdictions as easily as a neutrino passes through a planet.
My plan to deal with this ''license to kill (the problems)'' issue of inter-jurisdictional professional licensing is simple:
Quick assembly of teams with full professional license coverage. Bespoke to the project. Ad hoc. Where and when needed.
The idea is... if Behest.io is a screen and the solutions are images on it, then the backend machinery of professionals and other resources is to be freely reconfigurable, developed, and expanded on demand, all the time, without the client being bothered to grok anything beyond what's on the screen.
This resembles the so-called B2B2X telecom services business model, which is conceptually so new that it does not have a Wikipedia article yet.
So, all professional services colleagues are welcome to join! In whatever forms we together see fit on each particular occasion.
I'm sure some really groundbreaking fusions will come out of this collab direction alone!
More posts on Behest.io biz philosophy to come.
Tauchain is a profound project that has taken years of deep research and development. Some of the smartest people I've known on this platform highly recommended it, which is why it has been making me do a few things I've not done in a while.
So one of the first things I noticed in #idni's IRC channel was a cool-looking username, "naturalog". While I'm pretty sure it just means natural logarithm, could it be natural OG instead? The natural, original gangsta? In casual parlance, of course. Turns out, that's the nickname of Ohad Asor, the founder. What a smooth operator. That username is like wordplay: a mathematician with street cred. Too bad that Steem username is already taken.
The Natural OG
Reading through the logs, I soon realised that I could trust his words. Why? Other than his experience, I think it's because I'm somewhat the same in nature. Not that I'm a genius with great knowledge and expertise like he is, but I do appreciate stuff like language, semantics, logic, and such. They're the kind of subjects which I think help shape clear communication. It shows throughout his replies in the logs.
Many might not know it, but everything I say or type usually takes quite some time because I do try to be careful with words. Sometimes I even spend minutes to decide whether or not to say "could" instead of "would", amongst all of the other nuances in communication. Because, what else do we really have between us other than words? This is why writing is almost sacred to me.
The ability to question oneself and one's choice of words is part of our learning process. Why do we really say what we say, or think what we think? I can't speak for everyone, but I expect introspective, lifelong learners to be more trustworthy when it comes to dealing with complex subjects. Plus, the obvious elements of the project seem to speak more of substance than hype.
So all things considered, the project is unlikely to be a scam. If you search through the ~28 megabytes worth of IRC chatlogs, you will even find three ultra-rare instances of Ohad Asor aka naturalog mentioning "before it was cool". Knowing his history and experience, I think it's safe to conclude that this dude is a certified OG. The natural OG. Total man crush! I might even ask him for some dating tips once he's done with the bulk of the development.
If those points above are not enough street cred to establish an OG status, check out this section of the chat log below:-
10:39 < Liaomiao> you must know a lot about blockchain architecture if you came up with some of the ideas behind graphene
It's just good to know that he might have had some influence on the creation of Graphene, Dan Larimer's creation for Bitshares that subsequently shaped the inner workings of both Steem and EOS. Impressive indeed. It's a good sign for Tauchain / Idni Agoras. In contrast, at the same age when Ohad Asor was already grinding like an OG and writing production-level software, I was still riding rollercoasters all day in Disneyland, high on sweet carbonated drinks.
So it would seem like my investigation into the heart of Tauchain has quickly turned me into a huge admirer and fan of the project. It has never happened to me before to this extent, but I certainly don't mind given the project's scope and the main developer's character. It's at least a much better story than elevating irrational loonies and sensationalists with no appreciation of well-founded knowledge, which unfortunately is all too common in society these days. If anything would make the world a better place, it would be intellectual curiosity, not intellectual dishonesty.
For now, I'm quite happy to have found the natural OG who has been working quietly behind the scenes. So far it seems to me that it could very well be the next big thing other than Steem communities and SMTs. I'll be posting more about the project in time. As always, thanks for reading.
Website - http://www.idni.org
Github - https://github.com/IDNI/tau
Telegram - https://t.me/tauchain
Reddit (with FAQ) - https://www.reddit.com/r/tauchain/
Coinmarketcap entry - https://coinmarketcap.com/currencies/agoras-tokens/
Here's an hour-long interview with Ohad Asor that you might want to check out.
Not to be taken as financial advice.
Ohad Asor the lead developer and founder of Tauchain releases first new blog post in over a year. By Dana Edwards. Posted on Steemit. December 30, 2017.
The new blog post titled "The New Tau" is available for everyone to read. The blog post speaks on the critical topic of collaborative decision making. This is a topic which I myself have been interested in and Ohad's solution is different from the usual solution. In my own thinking I was considering a solution based on collaborative filtering but I realized this would never scale. I then considered a solution based upon using IA (intelligence amplification) by way of personal preference agents and this does scale but requires that the agents have a lot of data to truly know each user and their preferences. The solution Ohad Asor comes up with attempts to solve many of the same problems but his solution scales without seeming to require collaborative filtering or any kind of voting as we traditionally think about it.
Many of the obvious problems with voting will be recognizable from Steem, which also relies on collaborative filtering.
Now let's see what Ohad Asor has to say:
In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But on large communities, like a country, we can only think of everyone having a right to vote to some limited number of proposals. We reach those few proposals using hierarchical (rather decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost empty from the vast information gathered in the process. Yet, we don't even dare to imagine an equal right to propose just like an equal right to vote, for everyone, in a way that can actually work. Indeed how can that work, how can a voter go over equally-weighted one million proposals every day?
This in my opinion is very true. In reality we have discussions, and at best we seek to broadcast or share our intentions. Intent casting was actually the basis of how I thought to solve this problem of social choice, but I would say intent casting, even with my best ideas, would not have been good enough, because again the typical voter would be uninformed. Unless the typical voter can be educated continuously - which in a complex world may be unrealistic - or the network itself can somehow keep the voter up to date, intent casting barely works. It works well for shopping, where a shopper knows what they want, but not so well when a person doesn't actually know what they want and merely knows what they value. Values are the basis for morality, for ethical systems, and this is the area where Ohad's solution really shines.
Tauchain has the potential not only to scale discussions but also to scale morality, because it will have the built-in logic to make sure people can be moral without constant contradiction. The truth is, without this aid, the human being cannot in my opinion actually be moral in decision making, due to the inability to avoid all sorts of contradictions.
All known methods of discussions so far suffer from very poor scaling. Twice more participants is rarely twice the information gain, and when the group is too big (even few dozens), twice more participants may even reduce the overall gain into half and below, not just to not improve it times two.
This is the conclusion that Ohad and I reached separately, and it still holds true. We require the aid of machines in order to scale collaborative decision making. This, in my opinion, is one of the major philosophical difference makers between the intended design and function of Tauchain and every other crypto platform in development. It is also, in my opinion, going to be the difference maker for the community which Tauchain as a technology will serve, because it will enable machines and humans to aid each other for mutual benefit, or symbiosis.
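A back-of-the-envelope way to see the scaling problem (my illustration, not Ohad's analysis): if every participant must track every other participant's position, coordination cost grows quadratically while each new participant adds at best linear information.

```python
# Illustration: pairwise communication channels in a discussion of n
# participants grow as n*(n-1)/2, i.e. quadratically.
def channels(n):
    return n * (n - 1) // 2

for n in (2, 10, 50, 100, 1000):
    print(f"{n:>5} participants -> {channels(n):>7} pairwise channels")
# 1000 participants already need ~500,000 channels to stay coherent,
# which is why unaided discussion stops gaining from extra people.
```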
The blog post by Ohad Asor brings forward a very important discussion which has many different angles to it. The angle I focused on, with regard to the social choice dilemma, is the problem of how we scale morality. In my opinion, if we can scale morality in a decentralized, open source, truly significant manner, then nothing stands in the way of absolute legitimacy, mainstream adoption, and with it a very high yet fairly priced token. The utility value of scaling morality is, in my opinion, higher than just about anything else we can accomplish with crypto tech and AI. If the morality is better, then the design of future platforms will be greatly improved in terms of how the users are treated, and this in itself could, at least in my opinion, help settle the debate about whether AI can remain beneficial over a long period of time. I think if we can scale morality in a decentralized way, it will make it easier to design and spread beneficial AI. Crypto-effective altruism could become a new thing if we can solve the deeper, more philosophical problems.
Personal agents: What are expert systems? Do expert systems benefit from decentralization? By Dana Edwards. Posted on Steemit. March 28, 2017.
In my previous blog post, titled "The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems)", I discussed the first piece of a larger puzzle. Knowledge representation and a shared knowledge base were both explained. The purpose of that blog post was to describe the concepts of knowledge representation and the knowledge base, but also to show why both are valuable for artificial intelligence. This particular article will explain the concept of an expert system, and then I will discuss some possible ideas for what can be built in a decentralized AI context.
The recipe for building an expert system
An expert system has two core components: 1) a knowledge base, and 2) an inference engine (semantic reasoner). An expert system is a computer system which emulates the decision-making capability of a human expert by reasoning about knowledge and applying rules. Implication, for example, is a rule which leads to if...then... (otherwise recognized as "if p then q"). In a computer programming language we would call this set of "if then" statements our [conditionals](https://en.wikipedia.org/wiki/Conditional_%28computer_programming%29). Conditionals are familiar to anyone who knows C, C++, Java, Python, or any typical programming language, and this basic structure comes from logic.
We can recognize that conditionals are a set of rules which can be mapped onto a flow chart.
Expert systems are rule based AI
Just as if-then-else can become a structure of rules, expert systems are entirely rule-based. A minimal sketch of the idea is shown below.
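Here is a minimal sketch of the two components working together - a toy knowledge base and a forward-chaining inference engine. The facts and rules are invented for illustration; real expert systems are far richer:

```python
# A toy knowledge base: facts we currently believe.
knowledge_base = {"has_fur", "gives_milk"}

# Each rule: if all conditions hold, conclude the consequent.
rules = [
    ({"has_fur"}, "mammal"),
    ({"gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]

# Forward chaining: keep firing rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= knowledge_base and conclusion not in knowledge_base:
            knowledge_base.add(conclusion)
            changed = True

print(knowledge_base)  # now contains 'mammal'; 'carnivore' never fires
```

Each pass fires every rule whose conditions already hold, so conclusions accumulate until a fixed point is reached.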
An expert system which has a knowledge base to work with may also rely on a goal tree, chaining backward from a goal to the facts that would support it.
Expert systems are fundamentally weak AI. They cannot be self-aware or conscious, as they are simply mechanical sets of rules applied according to logic over a knowledge base. Expert systems may exhibit intelligent behavior, which is to say they are intelligent tools. This may be enough, however, to achieve the goals, and you can have personal agents which behave intelligently using an expert-system approach.
Trees, trees, and more trees
Now we know how to create an expert system built from a knowledge base and a reasoner. To understand what the future holds for decentralized AI, I must briefly discuss the concept of trees. Trees can possibly be infinite structures. Higher-order model checking, for those familiar with model checking, is a form of model checking that can work over infinite structures, such as the infinite tree, via higher-order recursion schemes. Why is any of this important?
Program verification and program analysis become possible once you consider that any program can be represented as a tree. This is important for security guarantees and for correctness guarantees. If you want to approach the decentralization of AI, you will ultimately have to work with trees, and for that reason I discuss them here.
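To ground the 'program as tree' point, here is a minimal sketch (mine, not from the post) of an arithmetic program as an abstract syntax tree, where both analysis and execution are tree traversals:

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

# The program "1 + (2 + 3)" represented as a tree of nodes.
program = Add(Num(1), Add(Num(2), Num(3)))

def evaluate(node):
    """Walk the tree: analysis and execution are both traversals."""
    if isinstance(node, Num):
        return node.value
    return evaluate(node.left) + evaluate(node.right)

print(evaluate(program))  # 6
```

A verifier walks the same tree the evaluator does, which is what makes tree-shaped representations so useful for correctness guarantees.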
Decentralized knowledge base + distributed contribution via knowledge representation language
In order to build a decentralized AI, it will be important to have a decentralized knowledge base. The main problem is growing the knowledge base large enough that an expert system can become smart. In a decentralized context, anyone in the world could in theory contribute to the collective decentralized knowledge base. Decentralizing the knowledge base would make it more resilient against attack, nuclear apocalypse, or the kinds of scenarios which necessitated the decentralization of the Internet. From a cybersecurity perspective, human knowledge is safest if decentralized.
Sensors are essentially everywhere and big data is essentially here, but the decentralized knowledge base doesn't exist. We have Google, which at best amounts to a centralized knowledge base; Google has AI, but it will at best be centralized. A decentralized AI based on expert systems could function similarly to what has already been described as the semantic web, but with some improvements.
Your own army of personal experts
If everything goes right in a decentralized context then each person will have access to intelligent agents. These agents will be able to reason over a knowledge base and act as an expert system. For very difficult tasks the computation resources could be rented and paid for via a token. Verified computation and model checking can allow for many machines to compute on your behalf but with a minimized security risk as you would have formal verification built in.
What is the conclusion here?
Expert systems can be built in a decentralized context. Decentralized AI is theoretically possible and likely to be built sooner or later. Decentralized AI can be safer than centralized AI depending on the use case, and it can also be much more efficient depending on the circumstances. For example, if computation can be sold on a market and a million PC owners rent out their computation for a token, then you in essence have the cheapest supercomputer in the world. Will it be good at everything? Perhaps not, but for certain kinds of computation it will be the most cost-effective.
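A quick back-of-the-envelope sketch of that claim; every number here is a hypothetical assumption for illustration, not a measurement:

```python
# Hypothetical numbers, for illustration only.
pcs = 1_000_000          # assumed participating PC owners
gflops_per_pc = 50       # assumed usable throughput per desktop
utilization = 0.25       # assumed fraction of time actually rented

aggregate_pflops = pcs * gflops_per_pc * utilization / 1e6
print(f"~{aggregate_pflops:.1f} PFLOPS of loosely coupled compute")
# Loosely coupled nodes suit embarrassingly parallel workloads, which
# matches the caveat above: cheapest, but not best at everything.
```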
Bots will become much more powerful and more capable, with an ability to be experts and make intelligent decisions. This will have both positive and negative consequences, depending on the safeguards and governance capabilities in place. If there are no safeguards at all, then this could be both a new frontier and a source of new dangers. At the same time, if there are some safeguards and ethical governance, then this could provide many new opportunities and possibly boost the economy in new ways. In fact, the ability to have this AI can improve not just reasoning ability but decision-making ability too, and that can allow for moral augmentation along with improved governance.
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: Personal agents: What are expert systems? Do expert systems benefit from decentralization?
Virtualization of contracts with TauChain and Agoras. Video from the Educación Financiera Bitcoin Criptomonedas channel on YouTube. May 15, 2016.
Subject, verb, and predicate. Taking the language of ontologies to unify the languages of:
* Computer Programs.
* Network Protocols.
Ontologies are expressed in the RDF (Resource Description Framework) language family. IDNI proposes a software client that stores an ontology of local rules. Artificial intelligence, ontology, language, human-readable code, decentralized and equitable democracy.
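A minimal sketch of the subject-predicate-object shape the video describes, using plain Python tuples as stand-ins for RDF triples (a real client would use proper URIs and an RDF library; the facts and the rule here are invented):

```python
# Triples: (subject, predicate, object), the atomic RDF statement.
triples = {
    ("alice", "signed", "contract1"),
    ("contract1", "governed_by", "tau_rules"),
}

# A local rule over the ontology: whoever signs a contract governed
# by tau_rules is bound by tau_rules.
for (s, p, o) in list(triples):
    if p == "signed" and (o, "governed_by", "tau_rules") in triples:
        triples.add((s, "bound_by", "tau_rules"))

print(("alice", "bound_by", "tau_rules") in triples)  # True
```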
Fuente / Source: Canal Educación Financiera Bitcoin y Criptomonedas on YouTube.
Logo by CapitanArt
Archivos / Archives
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.
Lecturas sugeridas para entender mejor el ecosistema Tau, Tau Meta Lenguaje, Tau-Chain y Agoras, y colaborar en el desarrollo del proyecto.