What is Tauchain & Why It Could Be One of The Greatest Inventions of All Time (Part 1: Introduction). By Kevin Wong. Posted on Steemit. August 28, 2018.
In anticipation of Tau's demo some time around the end of this year, I'll be publishing a series of articles leading up to its release and beyond on Steem. If you would like to get to know what some of us think is going to be one of the greatest inventions of all time, I'd recommend checking out http://idni.org. It seems like a foundation we've missed out on building together since the birth of the Internet.
The closest resemblance to this project is the Semantic Web, although some of us would place Tau as far more ambitious in scope, yet oddly more feasible thanks to its ingenious use of a logic blockchain to power a decentralized social choice platform. I think it's impressive how singular the concept actually is, despite the unavoidably lengthy explanations that come paired with the many first-time features Tau will provide.
Without further ado, let's explore this world-changing technology that is currently baking in the oven.
What is Tau?
Let's begin by first checking out the opening of IDNI's website at http://idni.org:-
Tau is a decentralized blockchain network intended to solve the bottlenecks inherent in large scale human communication and accelerate productivity in human collaboration using logic based Artificial Intelligence.
Sounds fairly straightforward at first glance, and to me, it really stands out in the cryptosphere. We now have billions of people using the Internet every day, yet we still do not have any effective means of discussing and collaborating without being all over the place. Sure, we may have been pouring a lot of our time and effort into various platforms trying to connect with others, but have things really been any different compared to the time before the Internet?
The speed of information propagation has increased by orders of magnitude, and we can reach anyone on the planet now, but it's still really up to us to be present and to process information in our heads before turning it into relevant knowledge for our networks.
Expanding our social bandwidth.
Turns out, we have been having a lot of trouble coming to terms with the chatter of billions of people in cyberspace. The bottlenecks inherent in our human bandwidth remain unsolved even with near-instantaneous communications. From governments to corporations and blockchain communities, we are all still facing the age-old problem of being unable to scale governance beyond the size of a classroom. It's difficult enough to get our points across to many different people, let alone to make sense of complex long-term discussions and make network-wide decisions collaboratively.
The introduction to The New Tau written by Ohad Asor explains our situation quite accurately:-
Some of the main problems with collaborative decision making have to do with scales and limits that affect the flow and processing of information. Those limits are so widely believed to be inherent in reality that they're mostly not considered possible to overcome. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?
So how is Tau actually going to solve our communications bottleneck? It will be through a highly bespoke and non-trivial implementation of logic-based Artificial Intelligence (AI). It's worth noting that AI in this case is more of a marketing buzzword, and it is not of the same variety as the commercial implementations of deep machine learning.
The distinction that must be made is that Tau is not the kind of AI that attempts to guess what the world around it is, including our opinions and the things we say or do. Instead, we make the step towards communicating through Tau, and what we choose to communicate will be as definite as computer programs. It can be thought of as a persistent logic companion that helps us scale our reasoning, logic, and bandwidth.
We can take the time to share what we want to share on the Tau network, and most of the logic-based connections and operations will happen in the background over time, even when we're not paying attention in person. Again, the use of the word AI is a misnomer here because it usually paints the picture of AI agents attempting to mimic human autonomy. That's not what Tau is about. Thinking about Tau as just a logic machine should provide better clarity on what it actually is.
The power of logic.
To expand, here's the second paragraph found in the opening of IDNI's website that explains Tau's paradigm in logic-based communications, http://idni.org:-
Currently, large scale discussions and collaborative efforts carried out directly between people are highly inefficient. To address this problem, we developed a paradigm which we call Human-Machine-Human communication: the core principle is that the users can not only interact with each other but also make their statements clear to their Tau client. Our paradigm enables Tau to deduce areas of consensus among its users in real time, allowing the network to boost communication by acting as an intermediary between humans. It does so by collecting the opinions and preferences its users wish to share and logically constructing opinions into a semantic knowledge base.
Indeed, Tau will offer a semantic social choice platform where we can discuss and store knowledge in a logical universe that helps us organize information, thereby empowering us in highly relevant ways. If you're worried about privacy, know that Tau is first and foremost designed as a local client with local processing and storage. The platform itself will be deployed as a decentralized peer-to-peer network, a place where we can connect and share our knowledge base with anyone we desire.
The only price to pay is that we must speak in Tau-comprehensible languages, which can always be added to and modified over time. A sophisticated language defined over Tau may closely resemble a natural language, but it is best to think of Tau as a machine that only speaks in logic. Fortunately, logical formalism is something that we can easily deal with.
So it will be up to us to communicate with our local Tau client in a way that lets it understand our worldviews. When the machine understands what we share in some logical, mathematically verifiable sense, it can then connect our dots with the rest of the Tau network, boosting communications beyond the limits of human bandwidth and scaling our points of discussion, consensus, and collaboration to a virtually unlimited number of participants.
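To make the idea less abstract, here is a conceptual sketch, in Python, of what "detecting points of consensus" among logically stated opinions could look like. This is not Tau's actual algorithm (which operates over a far richer logic); the data and functions are invented for illustration, treating each user's worldview as a set of signed propositions:

```python
# Conceptual sketch only: once opinions are machine-comprehensible,
# agreement and disagreement can be computed mechanically.
from typing import Dict, Set

# Each user's worldview: propositions they assert ("~" marks a negation).
opinions: Dict[str, Set[str]] = {
    "alice": {"fees_too_high", "~raise_block_size", "fund_development"},
    "bob":   {"fees_too_high", "raise_block_size", "fund_development"},
    "carol": {"fees_too_high", "fund_development"},
}

def consensus(opinions: Dict[str, Set[str]]) -> Set[str]:
    """Statements every participant asserts: the intersection of worldviews."""
    views = list(opinions.values())
    return set.intersection(*views) if views else set()

def disputed(opinions: Dict[str, Set[str]]) -> Set[str]:
    """Statements asserted by someone and denied (negated) by someone else."""
    all_statements = set().union(*opinions.values())
    return {s for s in all_statements if ("~" + s) in all_statements}

print(consensus(opinions))  # {'fees_too_high', 'fund_development'}
print(disputed(opinions))   # {'raise_block_size'}
```

The point of the sketch is only this: when statements are as definite as computer programs, finding what a million people agree on becomes a computation rather than a moderation problem.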
Code and consciousness.
Finally, we look at the last paragraph of Tau's introduction at http://idni.org:-
Able to deduce consensus and understand discussions, Tau can automatically generate and execute code on a consensus basis, through a process known as code synthesis. This will greatly accelerate knowledge production and expedite most large-scale collaborative efforts we can imagine in today's world.
Since Tau is a logic blockchain that powers a semantic social choice platform, we can leverage it to hold both small and large-scale discussions about program specifications, detect points of consensus, and even generate software in the process. Being able to go from discussions to the realization of decentralized applications would mean inclusive code development for the masses. It's also a unique addition to decentralization that no other blockchain project has even attempted.
Now that we may have come to a better understanding of Tau's emphasis on the use of logic in every part of its being, let's revisit the process description found in The New Tau to get closer to knowing what it really is about:-
We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them: language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism by a careful consideration of these aspects.
In short, Tau is a decentralized peer-to-peer network that takes the shape of a social choice platform, and it can become anything we want it to be, as long as it's expressible within a self-defining, decidable logic such as FO[PFP], which has PSPACE complexity. This precise specification is required to satisfy the very definition of Tau as seen in the excerpt above. Tau is also intended to be a compiler-compiler.
This takes application-generality in a completely different direction from blockchains that are built specifically with Turing-completeness in mind, like Ethereum. Relevant literature to check out: Finite Model Theory.
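For readers who want the finite model theory background: the PSPACE claim corresponds to a classical capturing theorem (due to Vardi, and to Abiteboul and Vianu), stated here as a math note rather than anything Tau-specific:

```latex
% On ordered finite structures, first-order logic extended with a
% partial fixed point operator captures exactly the queries
% computable in polynomial space:
\[
\mathrm{FO}[\mathrm{PFP}] = \mathrm{PSPACE}
\]
% The contrast with Turing-complete platforms: restricting to such a
% logic keeps programs and rule changes checkable, not merely runnable.
```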
Understanding each other.
While it's all highly technical and difficult to grasp in one sitting, perhaps a better way to truly begin to understand Tau is to spend some time studying its main features. Or just wait for the product release. In any case, I will try to explore these topics in the future if my brain can still handle it:-
The more I think about Tau, the more I think that it is (poetically) a logical conclusion to the way the Internet works as a protocol. It even lives and breathes logic. Not just any kind of logic, but specifically, logics that can define their own semantics and are decidable. Tau is intelligently designed to be a truly dynamic and ever-evolving blockchain.
When the Tau community intends to make changes to the network's code, rules or protocols, they will simply need to express these opinions and perspectives in a compatible language over the network. The self-defining logic of the Tau blockchain network will enable it to detect the consensus among these opinions and automatically amend its own code to reflect that consensus from block to block. Unlike the common method of voting, Tau's approach will take into account the perspectives of the entire community, where people will be free to vote and to propose what to vote for in real time. This unique ability makes Tau the only decentralized solution so far designed to create a truly dynamic protocol.
Now you might think: Tau seems like a powerful tool but will it be too difficult to use for most people? There might be some learning curve involved for sure, and it'd be similar to learning a new language in the beginning. Those of us who learn to use it well enough to scale our discussions and collaborative works will likely gain a significant edge over those who are not using the platform. I'd imagine plenty of projects and communities around the world being able to overcome some of their obstacles in development through Tau. Hence, it may be fair to expect that market forces will gravitate towards the platform just like how we're all using the Internet these days.
Until the next post.
I've been thinking about Tau almost every day for the past many months, and I will admit that its deeper technicalities are still way out of my league, although I've made sure to word them broadly to the best of my ability. If you like what I do, please consider sharing this post and voting for my witness account on Steem. For more info, check out my recent witness announcement post.
As always, thanks for reading!
Follow me @kevinwong / @etherpunk
Not to be taken as financial advice.
Always do your own research.
Tauchain 101: Essential Reading On One Of The Most Revolutionary Blockchain Projects Under The Radar... By Rok Sivante. Published on Steemit. August 3, 2018.
Amidst countless blockchain projects hyping themselves up as "the next big thing," there are a few that have been working under the radar that hold the promise - not in word, but in substance - of truly being revolutionary game-changers.
Such ventures have not often come into the spotlight. Partly because their founders have focused first on the fundamentals of creating something that speaks for itself, versus the all-too-common approach of prioritizing sensationalistic marketing. And partly because the degree of innovation they represent, in tandem with the complexity in scope of their larger visions and the implications of their success, does not always lend itself to an easy understanding upfront.
One such project, still very early in its development yet holding transformative potential no less grand than that of Bitcoin and Ethereum as they birthed and evolved the blockchain landscape, is Tauchain.
Until recently, with the launch of a new website that has successfully managed to articulate the project's vision much more clearly, understanding what Tauchain is striving to accomplish was a domain only the very few highly intelligent and technically inclined dared to tread. And prior to December 2017, there was no code, only an unproven concept spearheaded by a single Israeli developer, Ohad Asor, whom nearly all who've managed to connect with him declare to be one of the most brilliant geniuses they've ever met, possibly ahead of his time.
Just as Bitcoin introduced blockchain as an innovation radically altering the trajectory of our societal, economic, and technological evolution, and Ethereum followed suit with upgrades that expanded the vision with entirely new capabilities for developing a range of decentralized applications and smart contracts, so too may Tauchain prove to be a platform of comparable success, whose impact may bring quantum leaps in the Blockchain Revolution.
How and where to start in describing Tauchain...?
Well, were we to begin with the technical side of things, we'd likely lose 98% of the audience. So perhaps a better starting point might be the bigger picture:
This generalized overview, however, still only barely scratches the surface.
While the intended end may be a generic concept enabling drastically increased efficiency in global collaboration, the means by which it is to be achieved entail a number of innovative component developments that each hold great significance and implications of their own.
While each may require deeper exploration to grasp and begin piecing together into the bigger picture, the Tauchain website now offers an overview of key features, which account for just some of what differentiates it from other blockchain platforms and enables new collaborative capabilities not possible with existing technologies:
While it'd be possible to expand upon each in great detail, both as regards their functionality and the implications of their applications, this particular piece of writing is meant to serve as a basic introduction to some of the best, most easily accessible content written on Tauchain to date.
And as we transition into that content, we shall begin with a quote summarizing the core essence of Tauchain, as approached from but one angle:
This project created by Ohad Asor is really ambitious and aims to create the internet of knowledge.
Some people would label it as an Artificial Intelligence, but according to the creator this is something totally different. To sum up, and so you understand me: Tau-chain is a tool that knows how to interpret any information and deduce any consensus. This tool can be used in any field, judicial, political, academic, social, scientific, and by an assembly of any size, from 2 people to a million, for example.
~ @capitanart, from "My experience with Tau-chain"
The collection begins with two selections from Steemit's @trafalgar.
If anyone has successfully managed to distill the essence of the Tauchain vision into words that'd serve as a foundational Tauchain 101 intro, it'd have been him in these two excellent pieces:
What Is Tau? - My Only Other Crypto Investment
The Power of Tau - Scaling the Creation of Knowledge
Next come three short articles from @flis, which may not go into any new details beyond the pieces above, yet offer a slightly different, simplified perspective that reinforces the clarification of Tauchain's key concepts:
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building
How Tau-Chain can be implemented in practice
Tau Chain vs. Tezos - which platform will provide a better solution?
Next come a few selections from @dana-edwards, who has likely been the single individual most responsible for translating the highly complex technical vision of Ohad Asor into a more approachable form, from which non-academics may begin to better understand Tauchain.
Quite possibly the first to write about developments and share them outside of the project's IRC channel and Bitcointalk thread, Dana has one of the most comprehensive grasps of the project publicized anywhere, and his writings continue to serve as bridges for more people to discover and deepen their own comprehension of the innovations Tauchain represents, not only for computer science and the blockchain revolution, but for cultural and societal evolution as well.
What follows are a collection of his writings related to the project which excellently piece together key ideas and insights, from which the gaps may be filled in to grasp a firmer idea of just how significant these developments could be and what the bigger picture of their success might look like:
What Tauchain can do for us: Collaborative Serious Alternate Reality Games
What Tauchain can do for us: Finding the world's biggest problems
Tauchain: The automated programmer
Artificial morality: Moral agents and Tauchain
What Tauchain can do for us: Effective Altruism + Tauchain
Collaborative Alternate Reality Games + Tauchain = UBAs (Universal Basic Assets)?
Tauchain and Tezos, why adaptability is the key to surviving in a fast changing environment
My commentary on Ohad's latest blog post: "Agoras to TML"
The following three pieces are not introductory-level, and may likely require a background in computer programming to understand. However, for anyone reading who might be interested in diving deeper into the technical side of the project, they are included here:
Tauchain is not easy to understand but here are some concepts to know to track Ohad's progress
For all who are researching Tauchain (TML) to understand how it works, a nice video!
More on partial evaluation - How does partial evaluation work and why is it important?
One other writer covering Tauchain needs to be mentioned: @karov.
While not the easiest to read and understand, the Steemit account of Georgi Karov is undoubtedly one of the most consistent sources of coverage on the project.
A lawyer by trade and currently one of the three members of the core team, @karov's insights into the project are reliably detailed, expansive into philosophical territory, and fascinating.
Although none of his articles have been included in this introductory collection, those who may be interested to keep up-to-date with coverage on the project would be well-advised to follow his Steemit blog - and/or read backwards through the last few months of his posts there, as the blog is nearly-entirely Tauchain-related content.
Lastly, though not least:
Coming from one of Steemit's most brilliant early-adopter minds, @kevinwong, this one is a quick read in itself with some key points worth factoring into a proper assessment of the project. And, far lengthier than the post itself, the comments thread also contains some gold:
Is Tauchain Agoras in Good Hands?
And to wrap up with another excellent quote from design consultant to the project, @capitanart, who is another to follow for updates:
The goal of Tau is to create a supermind, to solve the limitations inherent in human communication on a large scale.
Able to deduce consensus and understand discussions, Tau can generate and execute code automatically based on consensus, through a process known as code synthesis. This will greatly accelerate the production of knowledge and streamline most of the large-scale collaborative efforts we can imagine in today's world.
In a recent article of mine I hinted at my strong suspicion that scaling is itself scalable.
''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''
No matter how terrific a grower a system is, as per its own internal algorithmic growth-drive rules, it seems inevitable that its growth gets it into entropic mutualization upon impact with a kind of ... downscaler.
Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically, the faster a thing grows, the sooner comes its encounter with an external and bigger downscaling factor.
This realization, refracted through the prism of our 'reptilian brain' layer and amplified to gargantuan proportions by our inherent social hierarchicity, is the source of the 'Malthusian anxiety' which has led to countless violent deaths over the whole of human history. Fear is anger, so the emotion that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is the major source of what makes us bad to each other.
There is a plethora of examples of very well mathematically and scientifically grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics, and simultaneously all totally wrong for missing the obvious: the factors of externalities, the properties and opportunities of the medium which is consumed and/or created by this growth, and which transcend the axiomatics. For growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction.
Let's throw in some such problem-solution couples for clarity:
a. the big cities of the 1890s, sunk knee-deep into beast-of-burden manure, and the super-apocalyptic projections drawn from that, VS Tony Seba's one-picture-worth-a-thousand-words of the NYC carts-vs-cars situation in 1900-1913 ...
b. the grim visions of the whole of mankind becoming blue-collar telephone switchboard workers, whose number should have exceeded the total world population by now to achieve the present level of telephonization, or
c. the all-librarians world, where it would take more librarians than the whole of mankind to serve the social memory in its paper-and-printed-ink storage mode ...
d. the Club of Rome as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of manure, or the 'proofs' of the impossibility of connecting or educating or feeding everyone, instigating the mass-destruction fear that ''we are running out of everything and will soon all die'', used as justification for mass atrocities, VS Julian Simon's ''The Ultimate Resource'' (1981, 1996). Cf. my accelerando article, and see what precisely is the factory for the succession of better and better Hanson drives over the last few million years, from the blade and the fire to the Tau: it is the same thing whose identification turned Julian Simon from a fanatical Malthusian into a rationally convinced Cornucopian ... the human mind.
e. the predator-prey model, whose brutal flaw I guess this pseudo-haiku depicts best:
''hawk eat chick -> less chick, human eat chick -> more chick''
for failing to posit and account for the positive feedback loop of predator-over-prey dynamics ...
f. The comment of Daryl Oster, founder of the other passion of mine, ET3, on the so-called 'saturation' of scalables (exemplified in the field of transportation, which, btw, being communication ... means our social structures map onto the mobility systems we have at our disposal):
''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use). Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by, government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years. The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all of whom fund research centered on 'improving' the status quo. Virtually all universities (for the last 1k+ years) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry. Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''
g. Darwinian algo sqrt(n) VS higher algos, like Metcalfe's n^2. It is not precise; it is more metaphorical, to indicate the direction or scale of scaling rather than rigorous precision, but ... the former, figuratively speaking, takes 100 times more to put up 10 times more, and the latter takes 10 times more to return 100 times more (see the math note after this list) ...
h. Barter vs money. See the bottom of page 5, above the footnotes, regarding the latter:
simplifies pricing calculations and negotiations from O(n^2) complexity to O(n) complexity
As a demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. The counting behind the quoted claim is also unpacked in the math note after this list. (Btw, an extremely strong document, where entire books read and internalized stand behind each tight and contentful sentence!)
i. The heat death of the universe VS the realization that the 2nd law, the conservation law for entropy/information, does not allow that; the asymptoticity of the fundamental limits of nature; the fact that max entropy grows faster than, from, and due to the actual entropy growth; that entropy is not disorder; and that at the end of the day it is an unbounded, immortal universe ... cause it's all a combinatorial explosion.
j. The Anthropic principle and the realization that it is extremely hard, if not impossible, to posit a lifeless universe ...
k. The Algoverse, my 'psychedelic' vision of the asymptotic, inexorable hierarchy of the Dirac sea of lower algos, which take everything for almost nothing, up towards giving almost everything for almost nothing: Bucky Fuller's runaway Ephemeralization. Algorithms are things. Objects. Structure. Homoousian, or consubstantial, with their input and output. Things taking things and making things out of the former. Including other algos, of course! Stronger ones.
l. The Masa Effect. The master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips, with the very same thing that scales supply in such a too-much way.
m. The Pierre de Latil 1950s and Stanislaw Lem 1960s hierarchy (copied 1:1 by Tegmark) of degrees of self-creating freedom of effectors ...
n. Limits of growth, present in any particular moment and in any finitary setting of rules, but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light vs ... photosynthesis.
o. Ray Kurzweil: static vs exponential thinking.
p. Craig Venter's Human Genome Project, which when it commenced in 1990 was ridiculed as unbearably expensive and as something that would take centuries to finish. And it did: it cost a fortune unbearable by 1990 standards, and it did take centuries of subjective time as per the initial projections' conditions, being completed in the year 2000.
q. Jeff Bezos' vision of a Solar-System-wide mankind:
''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''
r. The 'wastefulness' of data centers and crypto mining colocation facilities ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum energy of computation.)
s. The log-scale intuitive measure and smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:
"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950 is about 200 Billion man-years ago."
Oops! Go back to q. With a population of 1 trillion humans, the 'singularities' would occur about once a year?!
t. the Tau !!
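Since examples g and h lean on explicit complexity figures, here is a short math note (my gloss, added for clarity, not the author's formalism) making both concrete:

```latex
% Example g: sublinear vs. superlinear scaling of value v with size n.
\[
v(n) \propto \sqrt{n}: \quad v(100\,n) = 10\,v(n)
\qquad \text{vs.} \qquad
v(n) \propto n^{2}: \quad v(10\,n) = 100\,v(n)
\]
% Under square-root scaling, a 100x investment returns 10x; under
% Metcalfe-style quadratic scaling, a 10x investment returns 100x.

% Example h: with n goods, pure barter needs an exchange rate for every
% pair of goods, while money needs only one price per good.
\[
\binom{n}{2} = \frac{n(n-1)}{2} \;\;\text{barter exchange rates}
\qquad \text{vs.} \qquad
n \;\;\text{money prices}
\]
% For n = 100 goods: 4950 pairwise rates vs. 100 prices, i.e. the quoted
% O(n^2) to O(n) reduction.
```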
I could continue with these examples ... forever [wink]. Excuse me if I've bored you, but I think at least this minimum needed to be shown, and it is enough to grok the big picture.
Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.
Size matters. Scaling is the way. But more general still is how a system handles change! This is fundamental enough to sit at the very core of the definition of life and intelligence.
Tauchain is all about change handling!
Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do:
Dear friends, please scroll back to example d. Yes, the human-mind-transcaler thing. The Ultimate Resource thing.
We are the ultimate resource.
We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves).
We as the-I are strong thinkers and creators. Immensely more road lies ahead than has been traveled, yes, but as the-I we are the momentary apex of the effectoring business in the known universe ... AND simultaneously, we as the-We are mediocre to outright dumb.
We are very far from proper scaling together. The Ultimate Resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind-laser. Dispersed fissiles, but not a concentration of critical masses.
We as the-We, paradoxically, persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not handle scaling well at all.
The individual human mind is the unscaled transcaler.
Tau is the upscaler of that transcaler.
I'll introduce herewith another 'poetic' neologism which occurred to me, to depict the scaling properties of a system, modeled after the Scrooge factor of ''Tauchain - Tutor ex Machina'', and it is the:
Spawn factor
- the capacity and ability of a system to grow through, despite, against, across, from, and via changes. Just as 'cuboid' is about all rectangular things, squares, cubes, tesseracts ... regardless of their dimensionality, the Spawn factor is to be a generalization of all orders of scaling. A zillion light years from rigor, of course, as I'm at least the same distance from my Leibnizization. For the lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.
Tau transcends the infinite regress of orders of scaling (of scaling of scaling ...) by being self-referential. Or recursive.
What is the Spawn factor of Tau?
If you'll let me, I'll illustrate this with a poetic paraphrase of the famous piece from Frank Herbert:
I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.
“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”
― Robert A. Heinlein 
No, it is not a vow that everybody must be everything. It is a reflection of the fundamental human fungibility. The average human can be taught to take on any human role. The exceptions, true organic geniuses (those who are hard to replace) and morons (those who are incapable of replacing), only confirm this general rule of sheer numbers. This is what makes mankind so scalable.
''Know'' is synonymous with ''can''. Literally. Knowledge = technology. Even etymologically. Knowledge is praxis. Only. There is no such thing as impractical knowledge. If it is not a skill, it is not knowledge. I mentioned once that we're all AIs. Ref.: feral children.
We are not what we eat; we are what we've learnt. You are what you know/can. And you can what you have learnt. Learning is on the taking side. Teaching is on the giving side. Of one and the same process. We do not seem to have a word to denote the modulus of learning/teaching. But it will come.
We are taught by the others, the society. We are the cherry on top of a layer cake of culture upon nature. We learn by ... living. We acquire skills in a plethora of contexts: family, street, school, job, media ... Learning is not a monopoly of man; countless systems are also learners. Maybe one of the basic definitions of life and intelligence is the ability to learn. Giant topic, yeah. We won't graze here into what learning is, but into how we learn.
Due to our neurological bottlenecks we spontaneously form hierarchies. This hinders our scalability by forcing humanity to be more or less a fractal of 5. We are close to a number of breakthroughs which could mitigate these innate limitations of ours in a number of ways. But the general case is not the subject of this article; herein we focus on HOW we are taught. How we acquire knowledge, and how this knowledge of ours gets recognized and utilized by society. And the hierarchic emergent structuring is of course in full force upon us in teaching, as in everything else social.
So comes education, so come the exam, knowledge certification, certified skills application, knowledge creation verification, job fitness testing, CVs and employer recommendations ... etc., etc. With all the bugs, and the so few features, of this 'the map is not the territory' situation.
It is all centralized and hierarchic, exactly as the global fractal of double-entry accounting ledgers which we call the fiat financial system is. In fact, it is so interwoven with fiat finance that it is almost inextricable from it. And just as inefficient and imprecise.
In all these years of talking and thinking about Tauchain, I have noticed, and this suspicion of mine is incrementally turning into sheer conviction, that Tau, the upscaler of humanity, is inevitably also the ultimate teaching machine. If education is the facilitation of learning, Tau is the maximizer of learning. By its very construction, it comes out so.
People talk and listen whenever, and about whatever, they want. Tau has unlimited capacity to listen and attend and remember, and answer, limited only by the hardware capacity allocated. Tau extracts meaning. It purifies the stream, distills it down to the essence. It detects repetitions, contradictions, and all the other conversation bugs so ubiquitous nowadays. It remembers changes in the opinions of the individual user. And points them out. Sounds like the best tool to know oneself. And for the others to know you, if you let them.
Your Tau account or profile is what you know. You say what you say, and you also ask. You state statements and pose questions. Tau pools you together with the others who state the same and, more importantly, who ask the same type of questions. Knowing what you know, and asking about what you don't know but want to know, maps not only your knowledge state but also your knowledge dynamics. It records, and drives, how your knowledge changes. You even have access to what you forget, and can recollect it. True real-time knowledge-state reporting. For the first time in human history.
If consciousness is, aside from the clinical state of being merely awake, the post-factum integration of sensorimotor experience, the accountant of the mind, the speaker of the narrative which is you, then Tau is your consciousness booster. That is: stronger than thought.
The ultimate teaching, the ultimate fair testing or exam, the ultimate real-time comprehensive diploma or certificate, the super-peer-reviewed paper(s) of you as an academic career, the ultimate job interview AND the ultimate ... job of working as yourself, with anything useful you create being instantly scarcifiable and monetizable: that is what your Tau account is! And all the rest of accessible society becomes your own workforce. And you theirs. In the billions. In a move. In real time.
Including control over the pathways along which your skills increase, towards the learning directions most productive for you personally, because it aids you in analyzing your you-Tau history, in applying knowledge-maximizer techniques, and in participating profitably in the creation of newer, better ones. Maximizer of self. And maximizer of society, making it consist of max-selves. Ever improving. A merger of education with work occupation. Work-as-you-live.
The literal Knowledge Economy, as described by @trafalgar in his article from a few months ago. Where search, creation, reflection, certification, recognition, commercialization, accumulation, modification, improvement ... everything of knowledge, is all in one.
And it is not only the lonely job of humans and Tau. I foresee the other machines joining the party. Yes, I mean machines capable of having interests and of asking, and seeking answers to, palatable questions.
This, the amplification of education coming down the technology way, has of course been anticipated by many. A few arbitrary examples:
- A distant rough-sketch hint of the inevitable tuition power of Tau is Neal Stephenson's ''The Diamond Age'', with its depicted ''Young Lady's Illustrated Primer'' as an interactive, networked teaching device.
- Or, if I'm right about the inevitable conquest of the territory of natural languages, a UX like in the film 'Her' (2013).
- Thomas Frey of the futurist DaVinci Institute paid special attention to this in his book ''Epiphany Z'': down the way of micro- and nano-education, an effective merger of the processes of education, diploma issuing, job application, examination, and the actual execution of job obligations. Tom does not know about Tau. But I'll tell him.
With a big smile of irony and self-irony, of course ... these examples. Just picking, from here and there, proofs of the giant anticipation of what's to come. And to be taken with a few big grains of salt. Cause the reality will be immensely more powerful.
Tutor, tuition: my emphasis, via exactly this wording, comes to denote the economic side of learning/teaching. It is about the cost of learning (the association of tuition with fees), about the placement of the acquired skills, about the business organization of those, about the protection of ownership and the security of transactions of knowledge ... Let me introduce here a neologism which reflects the business side of it:
Scrooge Factor
- simply denoting the money-making power of a technology's use by a business. The 'money suction power' of a business entity or organization of any kind, coming from the application of a technology, if you want. Technology as socialized knowledge. Scaled up over multiple humans. Over a society. Of course the Scrooge Factor can pump in different directions. The Scrooge Factor of the traditional hierarchic education, governance and everything else ... is apparently very often negative: hierarchies decapitalize, dissipate, waste. Orders of magnitude more wastefully than any PoW, but on this, some other time.
So, aside from all the niceties of the abstractions of the full supply and value chains of a Knowledge Economy, let's round up some numbers:
- We know that a truly functional semantic search engine alone is worth $10T. Yeah. Tens of trillions. As per the assessments of Davos WEF attendees of, as far as I remember, 2015 or 2016 ...
- Also, Bill Gates stated back in 2004: ''If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.''
- Tom Frey also argued that by 2030 the biggest corporation in the world will be an online school. Given the present-day size and growth rate of, say, Amazon, this 'online school' should be in the range of a good many trillions of market cap if it is to be bigger than the biggest corporations. But we do not need such indirect analogies over analogies to assess the scale. The sheer size of the global education industry is the most eloquent indicator. Note that Tom talks about a 'corporation', i.e. a clumsy and inefficient hierarchic human collective, not about a system which does this orders of magnitude more efficiently and powerfully by being intrinsically P2P, i.e. geodesic. Even the best futurologists can be forgiven for failing to predict Tau. :)
And this mind-boggling hail of trillions does not even account for the Hanson Engine factor.
Tau the Tutor ex Machina is just another unintended useful consequence outta the overall design.
It is nearly impossible to track and contemplate exactly what all these 'side-effects' would be and how they will synergetically boost each other.
With my articles I intend only to touch some lines of the immense phase space of the possibilia, with neither any ambition to think it possible to cover it all, nor for this to represent any form of advice.
The future is incompressible. Compression is comprehension. Comprehensible only by living.
Failure to go the geodesic way of learning will turn these beautiful but chilling words into prophecy:
"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age." H.P.Lovecraft  (1926 ).''
What is Tau-Chain?
The purpose of this article is to demonstrate how Tau-Chain (Tau) can be implemented in practice. I have already presented Tau and its four-step roadmap in my previous article, but I think that further explanation about Tau is required to better understand its applications.
Tau is basically a discussion platform (like any other social network you know) with two significant innovations:
*Just to clarify, knowledge can be facts, lines of code, qualitative and quantitative data, etc.
How can Tau be implemented in practice?
Tau will be a free, open-source platform to advance and execute knowledge. Think about it as a one-stop shop that provides free consulting services, in all areas, to large numbers of people. For example, if you would like to start an enterprise but lack the relevant business skills, Tau can answer your questions and even perform market research or analysis (if initial data is provided) to evaluate your business opportunity.
In order to better understand how Tau can improve our society, I am providing below a detailed example showing how I see the vision implemented in practice.
Suppose Alice and Olivia are Ph.D. students in computer science who face a problem with their research. They use Tau to discuss the details of their data, findings and hypotheses. Tau will automatically translate this information into its metalanguage, adding Alice and Olivia's data to the knowledge archive. Tau is basically the third member of the conversation, and can guide Alice and Olivia to advance their research by interpreting the data and suggesting improvements to their findings. If the students would like to implement the research and develop computer software, Tau will assist them with writing the code in the most efficient way. Using Tau, Alice and Olivia can overcome the limits of their knowledge to quickly complete and implement their research.
But how can people profit from sharing their knowledge?
There is another way for Tau to deepen its knowledge and develop better intelligence. Tau can gain knowledge from the Knowledge Marketplace (Agoras), a blockchain-based smart contract platform where individuals are able to generate income by sharing knowledge and information. With every transaction and exchange of knowledge, Tau will be exposed to the data and become more "educated" and accurate, resulting in a better knowledge-deduction capability.
I know that smart contract platforms already exist, but they all lack very important capabilities – the ability to auto-verify the data, run quality assurance tests and suggest improvements to eliminate potential disagreements between the parties to a contract. Tau’s artificial intelligence will support the transaction between the two parties, and will make sure that there will be no fraudulent activities, inaccurate information or low-quality services. This will be the only platform where a computer that acts human (without human deficiencies) will supervise and support such transactions.
The following example demonstrates a possible application of Agoras:
Consider Bob, a software developer who has recently signed a smart contract with David to design a new software program. When Bob shares his code in the Knowledge Marketplace (Agoras), Tau verifies the relevancy of the code and may even suggest improvements to advance it, eliminating potential disagreements about quality and fraud. Upon Tau's approval, Bob receives his reward, as agreed in the contract. Tau will use the final code as additional knowledge to strengthen the platform's intelligence.
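As a purely speculative sketch (Agoras is not yet released, so every class, check, and step below is invented to mirror the prose, not an actual API), the described flow resembles a verify-then-release escrow:

```python
# Hypothetical illustration of the verify-then-release flow described
# above; none of these names come from any real Agoras interface.
from dataclasses import dataclass, field
from typing import List

@dataclass
class KnowledgeContract:
    developer: str        # e.g. Bob
    client: str           # e.g. David
    reward_tokens: int
    delivered_code: str = ""
    suggestions: List[str] = field(default_factory=list)

def settle(contract: KnowledgeContract, knowledge_base: List[str]) -> bool:
    """Verify the delivery (a stand-in for Tau's logical checks),
    release the reward, and fold the code into shared knowledge."""
    if not contract.delivered_code.strip():
        contract.suggestions.append("Nothing delivered; cannot verify.")
        return False
    print(f"Releasing {contract.reward_tokens} tokens from "
          f"{contract.client} to {contract.developer}")
    knowledge_base.append(contract.delivered_code)  # the platform 'learns'
    return True

kb: List[str] = []
deal = KnowledgeContract("bob", "david", reward_tokens=100,
                         delivered_code="def add(a, b): return a + b")
settle(deal, kb)
```

The sketch only captures the ordering the article describes: verification gates payment, and settled knowledge feeds back into the shared base.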
As described above, the compensation mechanism will incentivize users to contribute their knowledge to advance ideas of others. Thus, we create a society in which individuals’ knowledge and expertise become public domain and can be better utilized to promote social health, welfare and resources.
I have provided only a few examples of how Tau and Agoras can be implemented in practice. My examples were computer-science related, but you should realize that Tau-Chain can advance ideas and produce knowledge for every collaborative human endeavor across all fields, including science, business and government. Think about a situation where you have a problem and need some help: this is where Tau can assist you with solving your problem and even execute the solution if required and applicable.
Just to clarify, Agoras is also the name of the tokens that users will use in the Knowledge Marketplace (the smart contract platform). Agoras token holders will also benefit from developments that will be built as part of Tau's ecosystem, including a Computational Resource Market ("Zennet"), a Distributed Search Engine and a Derivatives Trading Platform.
To end this article, I would like to quote the last paragraph in my previous article, as it is still relevant:
"I foresee huge potential for this project, and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram,Facebook, LinkedIn and Reddit, or read Ohad’s blog for further information."
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
Ohad Asor, the lead developer and founder of Tauchain, releases first new blog post in over a year. By Dana Edwards. Posted on Steemit. December 30, 2017.
The new blog post titled "The New Tau" is available for everyone to read. The blog post speaks on the critical topic of collaborative decision making. This is a topic which I myself have been interested in and Ohad's solution is different from the usual solution. In my own thinking I was considering a solution based on collaborative filtering but I realized this would never scale. I then considered a solution based upon using IA (intelligence amplification) by way of personal preference agents and this does scale but requires that the agents have a lot of data to truly know each user and their preferences. The solution Ohad Asor comes up with attempts to solve many of the same problems but his solution scales without seeming to require collaborative filtering or any kind of voting as we traditionally think about it.
Let me list some of the obvious problems with voting, which many will recognize from Steem, which also relies on collaborative filtering:
Now let's see what Ohad Asor has to say:
In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But on large communities, like a country, we can only think of everyone having a right to vote to some limited number of proposals. We reach those few proposals using hierarchical (rather decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost empty from the vast information gathered in the process. Yet, we don't even dare to imagine an equal right to propose just like an equal right to vote, for everyone, in a way that can actually work. Indeed how can that work, how can a voter go over equally-weighted one million proposals every day?
This, in my opinion, is very true. In reality we have discussions, and at best we seek to broadcast or share our intentions. Intent casting was actually the basis of how I thought to solve this problem of social choice, but I would say intent casting, even with my best ideas, would not have been good enough, because again the typical voter would be uninformed. Without the typical voter being educated continuously (which in a complex world may be unrealistic), or the network itself somehow keeping the voter up to date, intent casting barely works. It works well for shopping, where a shopper knows what they want, but not so well when a person doesn't actually know what they want and merely knows what they value. Values are the basis for morality, for ethical systems, and this is the area where Ohad's solution really shines.
Tauchain has the potential to scale not only discussions but also morality, because it will have the built-in logic to make sure people can be moral without constant contradiction. The truth is, without this aid, the human being cannot actually be moral in decision making, in my opinion, due to the inability to avoid all sorts of contradictions.
All known methods of discussions so far suffer from very poor scaling. Twice more participants is rarely twice the information gain, and when the group is too big (even few dozens), twice more participants may even reduce the overall gain into half and below, not just to not improve it times two.
This is the conclusion that Ohad and I reached separately, and it still holds true. We require the aid of machines in order to scale collaborative decision making. This, in my opinion, is one of the major philosophical difference makers between the intended design and function of Tauchain and every other crypto platform in development. It is also, in my opinion, going to be the difference maker for the community which Tauchain as a technology will serve, because it will enable machines and humans to aid each other for mutual benefit, or symbiosis.
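One way to make the quoted intuition concrete (a standard back-of-envelope gloss, cf. Brooks's law, not something from Ohad's post):

```latex
% Pairwise communication channels among n discussion participants:
\[
C(n) = \binom{n}{2} = \frac{n(n-1)}{2}
\]
% Doubling n roughly quadruples the channels while each participant's
% attention stays fixed, so per-person information gain can fall as the
% group grows: which is why unaided discussion scales so poorly.
```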
The blog post by Ohad Asor brings forward a very important discussion which has many different angles to it. The angle I focused on with regard to the social choice dilemma is the problem of how we scale morality. In my opinion, if we can scale morality in a decentralized, open-source, truly significant manner, then nothing stands in the way of absolute legitimacy, mainstream adoption, and with it a very high yet fairly priced token. The utility value of scaling morality is, in my opinion, higher than just about anything else we can accomplish with crypto tech and AI. If morality is better, then the design of future platforms will be greatly improved in terms of how users are treated, and this in itself could, at least in my opinion, help settle the debate about whether AI can remain beneficial over a long period of time. I think if we can scale morality in a decentralized way, it will make it easier to design and spread beneficial AI. Crypto-effective altruism could become a new thing if we can solve the deeper, more philosophical problems.
Using Controlled English as a Knowledge Representation language. By Dana Edwards. Posted on Steemit. April 4, 2017.
Previously I mentioned "controlled English" when discussing the concept of knowledge representation. This post will go into some detail about what controlled English is. Specifically, I will discuss Kuhn's doctoral dissertation and Attempto Controlled English (ACE).
Computational linguistics is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective.
There are many different controlled natural languages
First I would like to note that controlled English is not the only controlled natural language, and Attempto Controlled English is only one particular controlled English. For example, there is RuleSpeak, a controlled natural language for business rules. Another example is Quelo Controlled English, a controlled English for querying, where you would say statements such as: "I am looking for something, it should be located in a city, the city should produce a new car, the new car should be equipped with a diesel engine". In addition to these examples we also have Google's Voice Actions, where you can speak into your Android phone and say something like: "Create a calendar event: Dinner in San Francisco, Saturday at 7:00PM". All of these are examples of controlled natural languages, and they reveal just how powerful this could be for users and developers.
What is Attempto Controlled English (ACE)?
Attempto Controlled English also known as ACE is a specific controlled natural language. It is likely that at some point in the early stages of development this controlled natural language will be implemented on Tauchain. ACE is like English but relies on following certain rules with a restricted vocabulary.
Rule: subject + verb + complements + adjuncts
All simple ACE sentences have the above structure of subject + verb + complements + adjuncts. An example would be the following sentence:
A customer waits.
To construct sentences without a verb you can rely on:
there is + noun phrase
There is a customer.
And you can add detail with:
A trusted customer inserts two valid cards.
And you can use variables:
How does Attempto Controlled English help with Knowledge Representation?
Specifically, because anyone who speaks English can quickly learn Attempto Controlled English, anyone will be able to contribute to the process of knowledge representation. Contributing to a knowledge base becomes very easy when you can simply describe, in plain English (with restrictions), exactly the knowledge you want to represent. A semantic wiki can be built out of this process rather easily.
How does Attempto Controlled English relate to Tauchain?
Tauchain requires input from users to determine a formal specification. Attempto Controlled English is simple enough that anyone can describe a formal specification with sentences like:
Every customer inserts a card.
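Sentences like this have a precise logical reading. In first-order logic, which is roughly the semantics ACE targets via its discourse representation structures, the sentence above translates approximately to:

$$\forall x\,\big(\mathrm{customer}(x) \rightarrow \exists y\,(\mathrm{card}(y) \wedge \mathrm{insert}(x, y))\big)$$

that is, for every customer there exists some card that the customer inserts.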
In sentences like this we are dealing with types. Human, for example, is a type, divided at a minimum into male and female subtypes.
AceWiki gives an example of what a formal specification could look like in Tauchain. The example is country, where the knowledge in this case is the concept of a country. We describe a country by filling in the wiki collaboratively: we know first of all that every country is an area, and then collaboratively we fill in the list of persons who currently govern a country. Through this method we add to the knowledge base using the knowledge representation language ACE, and in the case of Tauchain we would potentially be adding to a formal specification which is eventually synthesized (program synthesis) by the Tauchain automatic programmer.
To learn more about Attempto Controlled English and AceWiki, watch the video lecture.
Kuhn, T. (2009). Controlled English for knowledge representation (Doctoral dissertation, University of Zurich).
Kuhn, T. (2014). A survey and classification of controlled natural languages. Computational Linguistics, 40(1), 121-170.
Kuhn, T. (2009). How controlled English can improve semantic wikis. arXiv preprint arXiv:0907.1245.
Ranta, A., Enache, R., & Détrez, G. (2010, September). Controlled language for everyday use: the molto phrasebook. In International Workshop on Controlled Natural Language (pp. 115-136). Springer Berlin Heidelberg.
Ross, R. G. (2013). Tabulation of lists in RuleSpeak: using "the following" clause. Business Rules Journal, 14(4), 1-16.
White, C., & Schwitter, R. (2009, December). An update on PENG light. In Proceedings of ALTA (Vol. 7, pp. 80-88).
Web 2: http://attempto.ifi.uzh.ch/site/resources/
Source: Original post written by Dana Edwards. Published on Steemit: Using Controlled English as a Knowledge Representation language. April 4, 2017.
Personal agents: What are expert systems? Do expert systems benefit from decentralization? By Dana Edwards. Posted on Steemit. March 28, 2017.
In my previous blog post titled "The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems)" I discussed the first piece of a larger puzzle. Knowledge representation and a shared knowledge base were both explained. The purpose of that blog post was to describe the concepts of knowledge representation and the knowledge base, but also to show why both are valuable for artificial intelligence. This article will explain the concept of an expert system, and then I will discuss some possible ideas for what can be built in a decentralized AI context.
The recipe for building an expert system
An expert system has two core components: 1) a knowledge base, and 2) an inference engine (semantic reasoner). An expert system is a computer system which emulates the decision making capability of a human expert by reasoning about knowledge and applying rules. Implication, for example, is the rule behind if...then... statements (otherwise recognized as "if p then q"). In a computer programming language we would call this set of "if then" statements our [conditionals](https://en.wikipedia.org/wiki/Conditional_%28computer_programming%29). Conditionals are familiar to anyone who knows C, C++, Java, Python, or any typical programming language, and this basic structure comes from logic.
We can recognize that conditionals are a set of rules which can be mapped onto a flow chart.
Expert systems are rule based AI
Just as if-then-else statements can become a structure of rules, expert systems are entirely rule based.
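To make this concrete, here is a minimal forward-chaining sketch in Python (the facts and rule names are hypothetical examples of my own, not Tauchain code): if-then rules are applied to a set of facts over and over until nothing new can be derived.

```python
# Minimal forward-chaining sketch of a rule-based expert system.
# Facts and rule names here are hypothetical, for illustration only.

facts = {"card_inserted", "card_valid"}

# Each rule: if every premise is an established fact, add the conclusion.
rules = [
    ({"card_inserted", "card_valid"}, "authorize_transaction"),
    ({"authorize_transaction"}, "log_transaction"),
]

def forward_chain(facts, rules):
    """Apply if-then rules repeatedly until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'card_inserted', 'card_valid', 'authorize_transaction', 'log_transaction'}
```

Everything the system "decides" is just the mechanical closure of the rules over the knowledge base, which is why expert systems are described as rule based.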
An expert system which has a knowledge base to work with may rely on a goal tree.
Expert systems are fundamentally weak AI. They cannot be self-aware or conscious, as they are simply mechanical sets of rules applied according to logic over a knowledge base. Expert systems may exhibit intelligent behavior, which is to say they are intelligent tools. This may be enough, however, to achieve many goals, and you can have personal agents which behave intelligently using an expert system approach.
Trees, trees, and more trees
Now we know how to create an expert system from a knowledge base and a reasoner. To understand what the future holds for decentralized AI I must briefly discuss the concept of trees. Trees can possibly be infinite structures. Higher order model checking, for those familiar with model checking, is a form of model checking which can work over infinite structures such as infinite trees via higher order recursion schemes. Why is any of this important?
Program verification and program analysis become possible when you consider that any program can be represented as a tree. This is important for security guarantees and for correctness guarantees. If you want to approach decentralization of AI, you will ultimately have to work with trees, which is why I discuss them here.
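As a small illustration (my own example, not Tauchain code), Python's standard ast module parses source text into an abstract syntax tree, which is exactly the kind of tree structure that program-analysis and verification tools walk over:

```python
# Illustration: any program can be represented as a tree.
# The ast module parses source text into an abstract syntax tree.
import ast

tree = ast.parse("total = price * quantity")
print(ast.dump(tree, indent=2))  # indent= requires Python 3.9+
```

Once a program is a tree, properties of the program become properties of the tree, which is what makes model checking over tree structures relevant to program verification.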
Decentralized knowledge base + distributed contribution via knowledge representation language
In order to build a decentralized AI it will be important to have a decentralized knowledge base. The main problem is growing the knowledge base large enough that an expert system can become smart. In a decentralized context you can, in theory, have anyone in the world contribute to the collective decentralized knowledge base. Decentralization of the knowledge base would make it more resilient in the case of an attack, a nuclear apocalypse, or similar scenarios to those which necessitated the decentralization of the Internet. From a cyber security perspective, human knowledge is safest if decentralized.
Sensors are essentially everywhere and big data is essentially here, but the decentralized knowledge base doesn't exist. Google wants to be a knowledge base, but at best a centralized one, and its AI will likewise be centralized. A decentralized AI based on expert systems could function similarly to what has already been described as the semantic web, but with some improvements.
Your own army of personal experts
If everything goes right in a decentralized context, then each person will have access to intelligent agents. These agents will be able to reason over a knowledge base and act as an expert system. For very difficult tasks, computational resources could be rented and paid for via a token. Verified computation and model checking can allow many machines to compute on your behalf with minimized security risk, as you would have formal verification built in.
What is the conclusion here?
Expert systems can be built in a decentralized context. Decentralized AI is theoretically possible and likely to be built sooner or later. Decentralized AI can be safer than centralized AI depending on the use case, and it can also be much more efficient depending on the circumstances. For example, if computation can be sold on a market and a million PC owners rent out their computation for a token, then you in essence have the cheapest supercomputer in the world. Will it be good at everything? Perhaps not, but for certain kinds of computation it will be the most cost effective.
Bots will become much more powerful and more capable, with an ability to be experts and make intelligent decisions. This will have both positive and negative consequences depending on the safeguards and governance capabilities in place. If there are no safeguards at all then this could be both a new frontier and present new dangers. At the same time, if there are safeguards and ethical governance, then this could provide many new opportunities and possibly boost the economy in new ways. In fact, the ability to have this AI can improve not just reasoning ability but decision making abilities too, and that can allow for moral augmentation along with improved governance.
Source: Original post written by Dana Edwards. Published on Steemit: Personal agents: What are expert systems? Do expert systems benefit from decentralization?
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we must list the five roles of knowledge representation, which reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it is substituting or acting in place of something. So if knowledge representation is a surrogate then it must be representing some original. There is of course the issue that the surrogate must be a completely accurate representation, but a completely accurate representation of an object can only come from the object itself. All other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this into context: if you make a copy of an audio recording, every copy you make is going to contain slightly more artifacts. The same happens when dealing with information sent through a wire, where, if not properly amplified, artifacts eventually accumulate from copying a transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world.
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
In other words, selecting a representation is itself the act of making a set of ontological commitments. An ontological commitment is a framework for how we will view the world, such as viewing the world through logic. If we choose to view the world through logic, through rule-based systems, then all of our knowledge about the world is also within that framework. We choose our representation technology and commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but theories of intelligent reasoning are recognized to be derived from five fields: mathematical logic of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive reasoning approaches; deductive reasoning, according to some, is the basis of logical inference. To explore an example of reasoning we can take the Socrates example:
Statement A: True? Y/N?
"All men are mortal"
Statement B: True? Y/N?
"Socrates is a man"
Statement C: True? Y/N?
"Socrates is a mortal"
If A is true, and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be resolved using symbol manipulation and knowledge representation. The symbol at play in this example would be implication.
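To show how mechanical this is, here is a toy symbol-manipulation version in Python (illustrative only, my own encoding): the implication "all men are mortal" becomes a rule, and modus ponens derives statement C from statement B.

```python
# Toy symbol manipulation for the Socrates syllogism (illustrative only).
# "All men are mortal" is encoded as an implication rule; modus ponens
# then derives statement C from statement B.

rules = {("man", "mortal")}       # every X that is a man is mortal
facts = {("socrates", "man")}     # Socrates is a man

def modus_ponens(facts, rules):
    """Derive new facts by applying each implication rule to each fact."""
    derived = set(facts)
    for entity, category in facts:
        for antecedent, consequent in rules:
            if category == antecedent:
                derived.add((entity, consequent))
    return derived

print(modus_ponens(facts, rules))
# {('socrates', 'man'), ('socrates', 'mortal')}
```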
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency, considering all forms of computation, whether mechanical or natural in the sense of the computation done by a biological entity, then we may think of knowledge representation as a medium for that computational efficiency. Just as we think of money as a medium of exchange, if we think of the human brain as a type of computer which does human computation, then we may think of knowledge representation as the medium in which that computation is carried out.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course, knowledge representation is part of how we communicate with each other and with machines. Human beings use natural language to convey knowledge, and this natural language includes vocabularies of words with agreed upon meanings. These vocabularies may be found in various dictionaries, including Urban Dictionary, and we rely on these dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations of the kind described in the examples above. In simpler terms, a knowledge base can be thought of as representing facts about the world in the form of structured and/or unstructured information which can be utilized by a computer system. An artificial intelligence can utilize a knowledge base to solve problems, and typically this kind of artificial intelligence is called an expert system. In its simplest form, the artificial intelligence will just reason over this knowledge base through an inference engine, and through this it can do the sorts of computations which are of great utility to problem solvers.
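As a sketch of the idea (the facts and the wildcard convention are my own assumptions), a tiny knowledge base can be a list of fact triples, and the simplest possible reasoning step is a query that matches patterns against them:

```python
# A tiny knowledge base of fact triples plus the simplest possible
# query mechanism: pattern matching, where None acts as a wildcard.
# The facts are illustrative assumptions, not a real knowledge base.

knowledge_base = [
    ("wikipedia", "is_a", "encyclopedia"),
    ("dbpedia", "derived_from", "wikipedia"),
]

def query(kb, pattern):
    """Return every triple matching a pattern; None matches anything."""
    return [
        triple for triple in kb
        if all(p is None or p == t for p, t in zip(pattern, triple))
    ]

print(query(knowledge_base, (None, "derived_from", "wikipedia")))
# [('dbpedia', 'derived_from', 'wikipedia')]
```

A real inference engine layers rules on top of such queries, but the core idea is the same: stored representations of facts that a machine can retrieve and reason over.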
When we think of Wikipedia we are thinking about an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia we can quickly see that one of them is the fact that it's centralized. We also have the problem that the knowledge stored on Wikipedia is not stored in a way which machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, in its current form it cannot act effectively as a decentralized knowledge base. DBpedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDoS or similar attack can censor it.
Decentralized knowledge is important for the world, and a decentralized knowledge base is critical for the development of a decentralized AI. If we are speaking about an expert system, then the knowledge base would have to be as large as possible, which means we may need to give human beings an incentive to contribute and share their knowledge with this decentralized knowledge base. We would also have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate form for it to enter the knowledge base and be used by potential AI.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI, then we must have a way for human beings to convey knowledge to machines in a form which both human beings and machines can understand. The use of a knowledge representation language makes it possible for a human being to contribute to a knowledge base, which ultimately allows machines to apply their inference engine capabilities to reason over it. In the case of a decentralized knowledge base, the barrier to entry is low or non-existent: any human being, or perhaps any living being, or even robots, can contribute to this shared resource, while at the same time both humans and machines gain utility from it. An artificial intelligence which functions like an expert system can make use of an extremely large knowledge base to solve complex problems, and a decentralized knowledge base, combined with open and decentralized access to this artificial intelligence, can benefit humanity and life on earth in general if used appropriately.
Discussion of example projects.
One of the well known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau, a special, simple knowledge representation language resembling simplified controlled English is under development. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
Unfortunately, upon reading the Lunyr whitepaper and following their public materials, I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle the concurrency which would probably be necessary for doing AI. I also don't see how Ethereum would be able to do it securely with the current design, although I remain optimistic about Casper. The lack of code on Github and the lack of references to their research do not allow me to completely analyze their approach. Because they are talking about a decentralized knowledge base, their approach will require more than the magic of the market combined with pretty marketing: they will require a knowledge representation language and a true decentralized knowledge base built on IPFS. This knowledge base will have to scale with IPFS, and through this maybe they can achieve something, but without a clear plan of action I have to say that today I'm not confident in their approach or in Ethereum's ability to handle it efficiently.
Source: Original post written by Dana Edwards. Published on Steemit: The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems).
What is the Knowledge Acquisition Bottleneck problem? By Dana Edwards. Posted on Steemit. March 29, 2017.
Now that we know what knowledge representation is, and what knowledge bases are, and how the knowledge base is relied upon in a knowledge based system of artificial intelligence (KR+KB+Inference engine), we can move on to discussing one of the open problems.
The Knowledge Acquisition Bottleneck problem.
Many people already know about the familiar Byzantine generals problem in computer science. We also know how the Nakamoto consensus in Bitcoin provided a novel example of a solution. The Knowledge Acquisition Bottleneck problem is one of the problems plaguing AI and is what limits, or seems to limit, the strength of artificial intelligence. One of the main problems in artificial intelligence is that knowledge formation typically requires domain experts who can contribute to the knowledge base. The Cyc project attempted to solve the problem of scaling up the knowledge base but is suffering from this bottleneck, which is summarized in [Wagner, 2006].
The paper from which this summary was pulled, "Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management", also offers a solution called collaborative conversational knowledge management. This is the same solution which Tauchain will attempt to utilize in a more sophisticated way: Tauchain will allow for collaborative theory formation.
We see this concept in how Wikipedia works to manage knowledge. Wikipedia is indeed not without flaws, but it does manage knowledge.
Tauchain by design will be collaborative and allow for collaborative theory formation. This would mean anyone will be able to contribute to the knowledge base with relative ease. In addition, it will have knowledge management properties built in, and if the knowledge acquisition bottleneck problem can be solved then it will have a huge impact. For one, the problems which prevent knowledge based AI from scaling could be resolved if this bottleneck is removed.
DARPA has attempted to solve the Knowledge Acquisition Bottleneck problem utilizing high performance knowledge bases (HPKBs) and Rapid Knowledge Formation, yet failed. Cyc has attempted to solve the same problem and has failed. The semantic web has yet to take off because this problem stands in the way. Will Tauchain succeed where these other attempts have failed? I think it is a strong possibility, which is why I'm excited about the implications should Tauchain successfully be built.
Lenat, D. B., Prakash, M., & Shepherd, M. (1985). CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI magazine, 6(4), 65.
Wagner, C. (2006). Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management. Information Resources Management Journal, 19(1), 70-83.
Web 1. https://www.quora.com/What-is-knowledge-acquisition-bottleneck
Web 2. http://www.igi-global.com/dictionary/knowledge-acquisition-bottleneck/49991
Web 3: http://www.tauchain.org
Web 4: https://steemit.com/tauchain/@dana-edwards/how-to-become-a-stakeholder-in-agoras-and-indirectly-tauchain
Source: Original post written by Dana Edwards. Published on Steemit: What is the Knowledge Acquisition Bottleneck problem?
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.