If Money = Memory, if Society = a Super Computer, if Computation is in Physical Systems, what is a Decentralized Operating System? By Dana Edwards. Posted on Steemit. October 24, 2018.
These concepts are not often discussed, so let's have the discussion from the beginning. The first concept to think about is pancomputationalism, or, put another way, the ubiquitous computers which exist everywhere in our environment. We can, for example, look at physical systems, living and non-living, and see computations taking place all around us. If you look at rocks and trees you can see memory storage. If you look at DNA you can see code, and if you look at viruses you can see microscopic programmers adding new code to DNA. Even the weather is computing: a hurricane, for instance, is a computation.
If you look at nature you see algorithms. You will also see learners in nature (yes, in the same sense as in AI). The process is basically the same for all learning. Consider that everything which is physical is also digital. Consider that the universe is merely information patterns.
If we look at society we can also think of it as a computer. What does society compute, though? One way people talk about a society is as a complex adaptive system, but this is also how people might talk about the human body. The human body computes with the purpose of maintaining homeostasis: to persist through time and reproduce copies of itself. The human brain computes to promote the survival of the human body. Just as viruses pass code into our DNA, the human brain is infected with mind viruses called memes. Memes are pieces of information which can physically alter how the brain works.
The mind isn't limited to the brain. The mind is all the resources the brain can leverage to compute. In other words, a person has a brain to compute with, but the invention of language allowed a person to compute not just with their own brain but with the environment itself. To draw on a cave wall is to use the cave to enhance the memory of the brain. To use mathematics is to use language to enhance the brain's ability to compute by relying on external storage and symbol manipulation. To use a computer with a programming language is essentially to use mathematics, only instead of writing on the cave wall we are writing in 1s and 0s. The mind exists to augment the brain in a constant feedback loop where the brain relies on the mind to improve itself and adapt. If there were no external reality, the brain would have no way to evolve and improve itself.
A society, in the strictly human sense of the word, is the aggregation of minds: at minimum, all the human minds in that society. As technology improves, mind capacity increases, because each human can remember more, can access more computation resources, and can in essence use technology to continuously improve their mind and then leverage the improved mind to improve their brain. The Internet is the pinnacle of this kind of progress, but it's obviously not good enough. While the Internet allows for the creation of a global mind by connecting people, things, and minds, it does nothing to actually improve the feedback loop between the mind and the brain, nor does it offer all that could be offered.
Bitcoin came into the picture, and perhaps we can think of it as a better memory: a decentralized memory in which, essentially, you can have money. The problem is that money is a very narrow application. It is a start, just as learning to write on the cave wall was a start, but it's not ambitious enough in my opinion.
Humans in the current blockchain and crypto community do not have many ways to exchange human computation. Human computation is just as valuable as non-biological machine computation, because there are some kinds of computations which humans can do quite easily which non-biological machines still cannot do as well. Translation, for example, is something non-biological machines have a difficult time with but human beings can do well. This means a market will be able to form where humans can sell their computation to translate things. If we look at Amazon Mechanical Turk we can see many tasks which humans can do that computer AI cannot yet do, such as labeling and classifying. For things to go to the next level, we will need markets which allow humans to contribute human computation and/or human knowledge in exchange for crypto tokens.
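To make the idea concrete, here is a minimal sketch of such a market: requesters escrow tokens against a task, and humans sell their computation (translation, labeling) for the bounty. All class and field names here are hypothetical illustrations, not the API of any existing project.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """A unit of human computation offered in exchange for tokens."""
    description: str          # e.g. "translate this paragraph", "label this image"
    bounty: int               # tokens escrowed by the requester
    result: Optional[str] = None
    worker: Optional[str] = None

class HumanComputationMarket:
    """Toy order book where humans sell computation for crypto tokens."""
    def __init__(self):
        self.tasks: list = []
        self.balances: dict = {}

    def post(self, requester: str, description: str, bounty: int) -> Task:
        task = Task(description, bounty)
        self.tasks.append(task)
        return task

    def submit(self, task: Task, worker: str, result: str) -> None:
        # record the human's result and pay out the escrowed bounty
        task.worker, task.result = worker, result
        self.balances[worker] = self.balances.get(worker, 0) + task.bounty

market = HumanComputationMarket()
t = market.post("alice", "translate 'bonjour' to English", bounty=5)
market.submit(t, "bob", "hello")
print(market.balances["bob"])  # 5
```

A real market would need escrow enforcement, dispute resolution, and verification of the human's answer; the sketch only shows the exchange itself.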
The concept of a decentralized operating system is interesting. First, if there is such a thing as social computation (collaborative filtering, subjective ranking, Waze, etc.), then what about the new paradigm of socially dispersed computing?
The question becomes: what do we want to do with this computing power? Will we use it to extend life? Will we use it to spread life into the cosmos? Will we use it to become wise? To become moral? To become rational? If we want to focus on these kinds of concerns then we definitely need something more than Bitcoin, Ethereum, or even EOS. While EOS does seem to be pursuing the strategy of a decentralized operating system, which seems to be the correct course, it does not get everything right.
One problem, as I mentioned before, is the importance of the feedback loops between minds and brains. The reason I keep returning to the concept of the external or extended mind is that it is the mind which creates the immune system protecting the brain from harmful memes. The brain keeps the body alive. The brain is not really capable of rationality, morality, or logic on its own, and relies on the mind to achieve these. The mind is essentially all the computation resources that the brain can leverage.
EOS has a problem in the sense that it doesn't seem to improve the user. The user can connect, join, earn, sell, and participate, but unless the user can become wiser, more rational, and more moral, then EOS has limits. EOS does have Everipedia, which is quite interesting, but again there are still problems. What can EOS do to improve people in society and thus improve society, if society is a computer in need of being upgraded?
Well, if society is a computer, first, what does society compute? What should it compute? I don't even know how to answer those questions. I could suggest that if computation is a commodity, along with data, then whatever decentralized operating systems exist will compete for these commodities. The total brain power of a society is just as important as the amount of connectivity. And the mind of the society is the most important part of a society, because it is what can allow the society to become better over time, allow the people in the society to thrive, and allow its life forms to continue to evolve and avoid extinction.
A decentralized operating system, on a technical level, would have a kernel or something similar to it. This is the resource-management part. For example, Aragon promises to offer a decentralized OS and it too mentions having a kernel. A true decentralized operating system has to go further and requires autonomous agents. Autonomous agents which can act on behalf of their owners are, philosophically speaking, the extended mind. But the resources of a society are still finite and have to be managed, so a kernel would provide the ability to manage resources.
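The kernel's role here can be sketched very simply: it tracks a finite pool of compute and grants or refuses allocations to autonomous agents. This is a toy illustration under my own assumptions, not how Aragon's (or any) kernel actually works.

```python
class Kernel:
    """Toy 'decentralized OS' kernel: manages a finite pool of compute units
    and allocates them to autonomous agents acting on behalf of their owners."""
    def __init__(self, total_units: int):
        self.free_units = total_units
        self.allocations: dict = {}

    def request(self, agent_id: str, units: int) -> bool:
        # society's resources are finite: refuse over-allocation
        if units > self.free_units:
            return False
        self.free_units -= units
        self.allocations[agent_id] = self.allocations.get(agent_id, 0) + units
        return True

    def release(self, agent_id: str) -> None:
        # return an agent's units to the shared pool
        self.free_units += self.allocations.pop(agent_id, 0)

kernel = Kernel(total_units=100)
assert kernel.request("translation-agent", 60)
assert not kernel.request("ranking-agent", 50)   # only 40 units left
kernel.release("translation-agent")
assert kernel.request("ranking-agent", 50)       # now it fits
```

A real decentralized kernel would of course have to reach consensus about allocations across many machines; the sketch only shows the bookkeeping that any such kernel must do.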
The total computation ability of a society is likely a massive amount of resources, far more than merely connecting a bunch of CPUs together. Every member of the society who can compute could participate in a computation market. Of course, as we are beginning to see now, regulators seem concerned about certain kinds of social computations such as prediction markets. So it is unknown how truly decentralized operating systems would be handled, but my guess is that if designed right they could be pro-social, be capable of producing augmented morality by leveraging mass computation, and, by leveraging human computation, be able to be compliant. To be compliant is simply to understand the local laws, and these can be programmed into the autonomous agents if people think it is necessary.
What is more important is that if a law is clearly bad, and people have enhanced minds, then it will be very clear why the law is bad. This clarity will help people to dispute and seek to change bad laws through the appropriate channels. If there is more wisdom, due to insights from big data, from data scientists, etc, then there can be proposals for law changes which are much wiser and more intelligent. This is something specifically that people in the Tauchain community have realized (that technology can be used to improve policy making).
A lot is still unknown so these writings do not provide clear answers. Consider this just a stream of consciousness about concepts I am deeply contemplating. This is also a way to interpret different technologies.
How Tauchain and the Exocortex can give anyone a conscience and make anyone more law abiding. By Dana Edwards. Posted on Steemit. September 2, 2018.
First, "anyone" is not literal. By anyone I mean anyone with a reasonable level of intelligence who is willing to take the advice generated by the network. The network would include human beings and machines. The network would learn, and would be more properly defined as a complex adaptive system. Tauchain would enable the emergence of this network. This post is about the network which can emerge from Tauchain. It is also about how people who intend to be as moral as possible, whilst also complying with the law as much as possible, might leverage the network. This post assumes that the human brain has finite memory and comprehension capacity. It assumes that every human being can benefit from enhancing these naturally limited capacities in the areas of legal comprehension and risk literacy (under the assumption that most of us, or perhaps all of us, do not know every law on the books, yet need to comply with the laws most likely to be aggressively enforced).
The Personal Moral Assistant
The PMA is a concept I've been thinking about for years now: the idea that we can augment our ability to be moral persons. A PMA is a personal moral assistant, and in an ideal world every person born would have one. It would be an interface similar to what we see with Cortana or Siri, where you can ask any question pertaining to whether a particular action is right or wrong. The PMA would solve the problem using the same priorities that you would, and so you would get a definite right-or-wrong result.
A Personal Moral Assistant is just one primary use case. These personal assistants over Tauchain could also include, for instance, a Personal Compliance Assistant. This is essentially another bot, but instead of dealing with moral problems this bot would handle compliance. If you're trying to accomplish a goal, this bot would make sure that you do so while following all the known laws as your exocortex currently understands them. This would enable people to avoid legal pitfalls whilst chasing opportunities.
Going from poor to rich in this world requires taking risks. There is no way around risk taking if you want to get ahead. Risk literacy is essential, and very few people who are poor have it. The PMA might be able to tell a person whether a certain choice aligns with their current values. The PCA might tell a person whether a certain choice complies with the laws. What about opportunities? An opportunity web crawler agent could theoretically search across the entire Internet to find opportunities which match your chosen risk profile.
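The division of labor between the two assistants can be sketched in a few lines. Everything here (the function names, the `violates`/`breaks` fields, the rule sets) is an invented toy illustration of the idea, not a design for how Tauchain would actually implement it.

```python
def personal_moral_assistant(action: dict, priorities: list) -> str:
    """Toy PMA: judge an action by the user's OWN ranked priorities.
    The first priority the action conflicts with decides the verdict."""
    for value in priorities:
        if value in action.get("violates", []):
            return f"wrong (conflicts with your priority: {value})"
    return "right (consistent with your stated priorities)"

def personal_compliance_assistant(action: dict, known_laws: set) -> str:
    """Toy PCA: flag any known law the action breaks, as the
    exocortex currently understands the law."""
    broken = set(action.get("breaks", [])) & known_laws
    return f"non-compliant: {sorted(broken)}" if broken else "compliant"

action = {"name": "launch token sale",
          "violates": [],
          "breaks": ["securities-registration"]}
print(personal_moral_assistant(action, ["honesty", "non-harm"]))
print(personal_compliance_assistant(action, {"securities-registration", "kyc"}))
```

The point of separating them is the one made above: the PMA answers against your values, the PCA against the law, and the same action can pass one check while failing the other.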
What are we doing today?
Today we often have to make choices by trial and error. If we aren't lucky enough to have mentors or people who can guide us, then the only way to learn is to make the common mistakes. When we deal with moral problems today we often rely on holy scripture interpreted by other human beings who are just as flawed as we are. We simply don't have a bot which could interpret the scripture in a completely logical way. In other words, we don't have a digital representation of the minds of our spiritual guides.
We also have a situation where some of us can afford to comply with every law and take the lowest risk approach while others simply don't have the resources available to pay the expensive legal fees. Some people get better legal advice than other people as well. What if we could get at least some level of legal assistance from our intelligent assistant? What if this intelligent assistant can even ask human beings who have legal knowledge to help?
And finally what if we could figure out which risks are worth taking and which are not worth taking? It's one thing to find opportunities but another to be able to assess them. People get scammed because at the end of the day our emotions influence our ability to do proper assessment of opportunities. I'm human and it even happens to me from time to time. What if we could avoid this by using the capabilities of Tauchain to analyze massive amounts of information for us which our brains could never handle?
Opportunity Crawler Bot
I ask a simple hypothetical question: what if you could set a bot to search the Internet for opportunities that resemble Bitcoin in 2008? What if this bot could be activated and left to search for an indefinite period of time across an undetermined yet expanding number of networks? If you define "Bitcoin in 2008" in a way the bot can make sense of, then it could search for anything which meets those criteria. We have this technology now, but it's extremely primitive. On Google you can set up alerts for certain things, but what if you could go beyond mere alerts and look for code on GitHub, certain individuals involved with it, and certain growth patterns?
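The hard part of such a bot is not the crawling but defining "Bitcoin in 2008" as machine-checkable criteria. A minimal sketch, with entirely invented fields and thresholds standing in for a real risk profile:

```python
def matches_profile(candidate: dict, profile: dict) -> bool:
    """Toy opportunity filter: does a crawled project meet the user's
    declared 'Bitcoin in 2008'-style criteria? All fields are hypothetical."""
    return (candidate.get("open_source") == profile["open_source"]
            and candidate.get("community_size", 0) <= profile["max_community_size"]
            and candidate.get("monthly_growth", 0.0) >= profile["min_monthly_growth"])

# A profile loosely inspired by early Bitcoin: open code, tiny community,
# fast relative growth. The numbers are illustrative, not calibrated.
profile = {"open_source": True, "max_community_size": 500, "min_monthly_growth": 0.2}

crawled = [
    {"name": "obscure-chain", "open_source": True,  "community_size": 120,
     "monthly_growth": 0.35},
    {"name": "megacorp-coin", "open_source": False, "community_size": 90000,
     "monthly_growth": 0.01},
]
hits = [c["name"] for c in crawled if matches_profile(c, profile)]
print(hits)  # ['obscure-chain']
```

An actual agent would populate `crawled` from live sources (code repositories, forums, growth statistics) rather than a hand-written list; the filter is the part the user's risk profile defines.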
A way to think about these bots / intelligent assistants
One way to think about these intelligent assistants is as part of your extended mind. These bots essentially help you to think better and communicate better. It's still you, and what they do on your behalf is essentially as if you did it. So the total collection of all of these agents under your control represents your complete exocortex. It will take great responsibility and wisdom to use these abilities in a way which is perceived by the world as ethical, moral, legal, etc. It is for these reasons that I want to open a discussion: how would each of you use such technology if it existed, and how do you think about such bots?
Guys, after a few articles I think I owe it to you to present a little bit of myself and Behest.io.
I, Karov, am a human, i.e. I'm not a robot (although, as my friend @trafalgar can witness, I once fought all day long with a Google Forms captcha; but I prefer to blame a software glitch for that ...).
I learned by chance that 'karov' is the word for 'near' in Hebrew, but this is pure coincidence.
I'm a lawyer. More than two decades of uninterrupted PQE (post-qualification experience), in a couple of European jurisdictions.
Behest.io is a ... firm. In the sense of firm (n.), or in the very original sense of any firm's only way to be: a signature. Not (yet) in the sense of a legal-personhood entity.
As a signature, Behest.io is a tool. My tool, which I continuously develop to deliver answers, upon behests, on compliance for various crypto endeavors.
Metaphorically, the Behest.io dev target is: if a law firm is a CPU, Behest.io is to be a crypto-legal-services ASIC.
Blockchain came too swiftly, too strongly and too globally. Like an alien invasion. Legislators and law enforcement cannot keep pace. Law and regulations are far from settled on it.
It is an entire internet of jurisdictions out there. Nobody really knows the Law. One cannot just go out and shop for answers. There is no legal supermarket with neat shelves of turnkey solutions with price tags.
The compliance space is turbulent. Nothing is ready and definite. There is a very high risk of a grey zone turning red hot. A quicksand minefield.
The crypto lawyer's job is not yet an industry; it is inevitably art and craftsmanship. Tailored solutions.
Thus Behest.io is a studio, not a conveyor-belt mass factory.
Our approach in support is: side by side, thinking together, carefully mapping the routes ahead, identifying the correct questions, and precisely crafting specific solutions.
On a tailored, case-by-case basis. In strict confidence. In an always dynamic and adaptive fashion. In real time. From entry to exit: all-the-way navigation, from mere idea to the end.
So far it sounds like just another advert... I know. But let me quickly throw out some Behest.io preconditional points in an attempt to start sketching the bigger map:
FIRSTLY: Why ''of Tauchain''?
Since my law school years back in the past millennium, I have noticed that the Law in all its dimensions (legislature, legislation, application, enforcement, science, jurisprudence, doctrine ...) is somewhat inconsistent and not quite self-sufficient.
I now firmly hold the position that the place of Law is not with the soft sciences of history and literature, but among the hard sciences of maths, logic, philosophy and physics.
If we compare society's set of rules to a human network protocol code, the Law up to now is obviously not quite automatic and requires too much 'hand driving'. Including in the rules for making rules, too.
For all this quarter of a century I tried to envision (with my limited tech knowledge) various ... systems which could eventually compensate for such flaws: virtualization, procedural generation, gamification ... and then Satoshi came. And Ohad Asor appeared.
If we compare our intention and dream of Law with flying: since times immemorial humans wanted to fly like the birds, but it took the Wright Bros for us to fly ... not the way birds do.
I must herewith admit that closest to my heart are two technological projects: Tau and ET3. They form a kind of ... unity, but on that, other times, in a series of other posts.
Ohad Asor, in his Sep 10, 2016, 8:25 PM essay, very precisely outlined the problem of Law:
''We would therefore be interested in creating a social process in which we express laws in a decidable language only, and collaboratively form amendable social contracts without diving into paradoxes. This is what Tau-Chain is about.''
Exactly! The problem of Law is that it is written in inherently buggy natural-human-language 'software' and is run on human-brain 'hardware' which is faulty for this purpose, being 'made' to optimize performance on a completely different category of tasks. Like ... survival.
We can achieve Law by these means, human natural language and human brains, no more successfully than we could walk from here to the Moon.
Tau is the most solidly grounded and promising effort to deliver our long-dreamed 'rocketry' to take us from here to the Law.
If Law is decidable code, it is specifiable, and all intended consequences are predictable and guaranteed. Decidable, consistent ... and self-amending. Precisely what the Law is supposed to be. At last. If it is specifiable in exact terms, action code is synthesizable out of it, feeding the legal effectors of all kinds with precise instructions.
Because our societies map to our communications, a drastic improvement of our interaction rules is equivalent to an immense improvement of the human condition.
The Law as a Tapp (Tau App)? Most definitely. I know of no other attempt to address the issue in such a way, with pure reason and demonstrated understanding.
This is the reason behind the ''for Tauchain'' part of this post's title. It can get us there. We can have the Law, at last.
This is in Behest.io's, and my own, best selfish interest. Which is: a world of unimaginable freedom and wealth for all.
Behest.io in that sense is ''for Tauchain'', in the perspective of Tau becoming ''for Behest'': the realization of my lifetime Legum project.
Behest.io is not of Tauchain, or of IDNI. It is an independent project of an independent lawyer, with a strong current focus on Tau and ET3, for the reasons outlined above. In a series of upcoming articles I intend to elaborate on my visions and positions on these in general.
SECONDLY: How exactly is Behest.io supposed to operate before Tau is in our hands to play with?
All by the books, of course! The legal profession is for compliance, but it is also all about compliance per se. Lawyers are not just makers and shippers of compliance; they must be compliant themselves. Law is a strictly local and heavily regulated profession. As it should be.
Not only does no lawyer know all the law, but there is no such thing as a global or universal license to provide legal services. Regardless of 'professional services provider' Big Four or other hierarchic collab structures, a lawyer is limited to operating only on the territory his professional-'badge'-granting regulator allows.
On the other hand, the Internet and Blockchain are inherently global and penetrate and permeate all jurisdictions as easily as a neutrino passes through a planet.
My plan to deal with this ''license to kill (the problems)'' inter-jurisdictional professional-license issue is simple:
Quick assembly of teams with full professional-license coverage. In a way bespoke to the project. Ad hoc. Where and when needed.
The idea is ... if Behest.io is a screen and the solutions are images on it, then the backend machinery of professionals and other resources is to be freely reconfigurable, developed and expanded on demand all the time, without the client being bothered to grok anything but what's on the screen.
This resembles the so-called B2B2X telecom-services business model, which is conceptually so new that it does not have a Wikipedia article yet.
So, all professional-services colleagues are welcome to join! In whatever forms we together see fit on every particular occasion.
I'm sure some really groundbreaking fusions will come out of this collab direction alone!
More posts on Behest.io biz philosophy to come.
To zoom out is useful. It puts the event networks of our spacetime in perspective. Including what the great Jorge Luis Borges called the Orbis Tertius:
''ORBIS TERTIUS. "Tertius" (Latin = third) is an allusion to: World 3: the world of the products of the human mind, defined by Karl Popper.''
Poetically stated, ''retrodiction studies'' enable us to get a glimpse of the "clear, cold lines of eternity".
Back in the 20th century, Prof. Robin Hanson put together an extremely insightful and strong document: Long-Term Growth As A Sequence of Exponential Modes.
The economy grows. [See: Footnote.] Unstoppably.
Hanson's unprecedented contribution was to provide us with a systematic orientation tool for how and why the economy grows.
It accelerates. See:
| Mode | Doubling Time (DT) | Date Began To Dominate | Doubles of DT | Doubles of WP | CES Transition Power |
| --- | --- | --- | --- | --- | --- |
| Brain size | 34M yrs | 550M B.C. | ? | "16" | ? |
| Hunters | 224K yrs | 2000K B.C. | 7.3 | 8.9 | ? |
| Farmers | 909 yrs | 4856 B.C. | 7.9 | 7.6 | 2.4 |
| Industry | 6.3 yrs | 2020 A.D. | 7.2 | >9.2 | 0.094 |
The model identifies the past economy accelerators as:
- neural networks, evolving toward a doubling of brain size every 30-ish million years (hinting that a human level of intelligence was an inevitability, within ±30 million years of Now, by virtue of the good old 'coin-toss' Darwinian algorithm alone).
- the human as the top-of-the-food-chain predator since around 2,000,000 B.C. (maybe the human mastery of the Fire and the Blade is to blame), compressing the doubling time by over two orders of magnitude, down to a quarter of a million years.
- food production and ecosystem manipulation (or rather the collimation of farming, horse domestication and writing as accelerator components), leading to fewer than 40 human generations per economy doubling.
- all we know as division of labor, specialization, systematized Sci-Tech ... industry: the centralized ways of producing and controlling knowledge, leading to another hundreds-fold compression, down to a mere ~decade of economy doubling time.
Recommended: digest each Hanson Engine (economy accelerator drive) with Bob Hettinga's 'enzyme':
My observation about networks in general is a rather obvious one when you think about it: our social structures map to our communication structures. As intuitive as it is to understand, this observation provides great insight into where the technology of computer assisted communication will take us in the years ahead.
Connectivity specs as indicator and drive.
Now, when we leave the past and use these models to gaze into the future, the really interesting stuff comes out.
Aside from explaining the overall trajectory of the economy (detected by Brad DeLong in his also-monumental paper), the nucleus of meaning in Robin Hanson's paper is:
Typically, the economy is dominated by one particular mode of economic growth, which produces a constant growth rate. While there are often economic processes which grow exponentially at a rate much faster than that of the economy as a whole, such processes almost always slow down as they become limited by the size of the total economy. Very rarely, however, a faster process reforms the economy so fundamentally that overall economic growth rates accelerate to track this new process. The economy might then be thought of as composed of an old sector and a new sector, a new sector which continues to grow at its same speed even when it comes to dominate the economy.
Visualize a Petri dish, with its sugar, being expanded in size and quantity by the accelerating growth of the bacterial culture in it.
Hanson actually predicted, nearly a quarter of a century ago, something that is relentlessly coming:
In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be ... 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate. ... it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.
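Those week-scale figures can be reproduced as back-of-envelope arithmetic from Hanson's table: take the ratio by which the doubling time (DT) shrank at each of the last three mode transitions, and apply each ratio once more to the industrial DT of 6.3 years. This is a sketch of the published numbers only, not a re-derivation of Hanson's CES fit.

```python
# Doubling times of the last four modes, from Hanson's table (in years)
dt = {"brains": 34e6, "hunters": 224e3, "farmers": 909, "industry": 6.3}

# Compression ratio at each of the last three mode transitions
ratios = [dt["brains"] / dt["hunters"],    # ~152x
          dt["hunters"] / dt["farmers"],   # ~246x
          dt["farmers"] / dt["industry"]]  # ~144x

industry_weeks = 6.3 * 365.25 / 7  # industrial doubling time in weeks

# If the next transition compresses DT by one of the same three ratios:
next_dt_weeks = sorted(industry_weeks / r for r in ratios)
print([round(w, 1) for w in next_dt_weeks])  # [1.3, 2.2, 2.3]
```

This lands on essentially the 1.3, 2.1, 2.3 weeks Hanson quotes; the small difference in the middle value comes from rounding in the published table entries.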
An economy accelerator avalanche is roaring down the slope of time towards us.
A brand new Hanson Engine is about to leave the assembly line.
Tau, is that you?
FOOTNOTE: To wrap up the above statements in the flesh of the deep thesaurus of content on which they lie would conservatively consume hundreds of pages, even if only briefed. I promise to come back to these subtopic meaning-expansions (by referring back to here) with a series of posts in the months to come, tying them up with the notions of: the economy as a network; the network as a computer; what exactly it processes and outputs; the economy (like the universe, or life) being an endogenously driven, positive-feedback, self-amplifying, non-equilibrium, entropic, combinatorial-explosion system; wealth as economy-complexity growth in relation to GDP size, and the intimate dollars-joules connection in energy intensity; the physical and economic limits of growth; self-reinforcing predator-prey models; knowledge as synonymous with skill; economic cycles upon the DeLong curve ... to name a few. Readers' questions and comments will of course help a lot with subtopic prioritization, and will boost understanding (mine included). Thank you in advance!
NOTE: I currently have the pleasure and honor to be part of the Tau Team, but this post contains ONLY my personal views.
Hans Moravec is the patriarch of robotics. The real one, not the Sci-Fi father; Asimov was just the prophet in this scheme of things.
Moravec is to Kurzweil what Bitcoin is to Ethereum, and Satoshi to Vitalik.
Sorry for the rough joke. No offence, Ray! Back in the early 2000s I bought your books too.
In my humble opinion, aside from the ''reality intratextualization'' concept, the other wisdom jewel of Moravec's (the fruit of a life devoted to robotics) is Moravec's Paradox.
Explained in his own words:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
or, in Steven Pinker's words:
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived...
As I noted in a previous related post of mine, a system's value dynamics is all about how it scales. Preferable, of course, are systems which make more good go around rather than less. And, respectively, come around.
Humanity is a network, and its scaling is hobbled by our innate attentional-resource limitations.
Human social interaction is a skill, and we naturally have only so much of it.
For now, in the good old hierarchic way, we can't deny that we scale satisfactorily well (compared, let's say, to our DNA-blockchain-fork-out first cousins, the chimps) for collaborating efficiently on the successful execution of trivial tasks like empire building or colonization of the Galaxy.
But not all problems we encounter are simple. In fact, most problems are more complex than we are capable of grokking and mastering in the hierarchic collaboration mode, which quickly slams into Shannon's 'brick wall'.
Ohad Asor's Tau is intended to be a humanity upscaler. This project is the first and only one I've discovered so far where this so-obvious (once you know it) problem is even identified, stated and addressed.
This means uplifting the individual humans too, because we are literally AIs serially manufactured by our society (cf. feral children).
It feels easy for us to attend, to remember, to forget, to think, to talk, to work together - so it is extremely Moravec-hard!
Tau is a unique approach to the Moravec-hardness of these problems, in the realization that we do not need to waste time and resources mimicking nature, copying ourselves, and creating high-tech homunculi.
The 'problem' is the solution. Don't 'solve' it - just god damn use it!
It is the people who ask questions, upload statements, express tastes and do all that qualia crap humans usually do.
The machine distills the semantic essence of all the shared thought flow, treats it as wish specs, and automatically converts it into executable code, including its own code's self-amendment.
As Moravec found out a few decades ago:
The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, suggesting that matching overall human behavior will take about 100 million MIPS of computer power.
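The quote compresses a scaling argument that is worth spelling out. Moravec elsewhere estimated that matching retinal processing takes on the order of 1,000 MIPS; the quote itself only gives the 100,000x volume ratio, so the retina figure below is my assumption about the implied input:

```python
# Moravec's scaling argument as plain arithmetic.
retina_mips = 1_000                 # assumed: Moravec's retina-matching estimate
brain_to_retina_ratio = 100_000     # from the quote: 1,500 cc brain vs the retina
brain_mips = retina_mips * brain_to_retina_ratio
print(f"{brain_mips:,} MIPS")       # 100,000,000 MIPS, i.e. the quoted 100 million
```

The argument is deliberately crude: scale a well-understood piece of neural tissue by relative size, and you get a ballpark for the whole brain.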
When these brain processors are really put together in numbers, the result is unprecedented power. An unstoppable force. A glimpse of it from Ohad:
It turns out that under certain assumptions we can reach truly efficiently scaling discussions and information flow, where 10,000 people are actually 100 times more effective than 100 people, in terms of collaborative decision making and collaborative theory formation. But for this we'll need the aid of machines, and we'll also need to help them to help us.
Without applying dehumanizing individual upgrades, without it being necessary to understand and re-engineer the billions of years of evolutionary capital: just harness it and use it. (Scaling itself must be scalable too, ah?)
In my personal, up-to-date, limited understanding, it seems that it is indeed HUMANITY that is to be known as Tau's 'Zennet Supercomputer', and the machines are the ... collab amplifier medium, the 'internet' of it. (Ohad, correct me if I'm wrong, please.)
Like laser configurations of minds.
With performance stronger than thought.
NOTE: I have the honor to be in the Tau Team, but all reflections in this post are personally my opinion.
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we must list the five roles of knowledge representation, which reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it is substituting for, or acting in place of, something. So if knowledge representation is a surrogate then it must be representing some original. There is of course an issue: for the surrogate to be a completely accurate representation of an object, it would have to be the object itself. All other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this into context, if you make a copy of an audio recording, every copy you make is going to contain slightly more artifacts. The same happens with information sent through a wire: if the signal is not properly amplified, artifacts eventually accumulate from copying a transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world."
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
In other words, by selecting a representation we are unavoidably making a set of ontological commitments. An ontological commitment is a framework for how we will view the world, such as viewing the world through logic. If we choose to view the world through logic and rule-based systems, then all of our knowledge about the world is also expressed within that framework. We choose our representation technology and commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but intelligent reasoning is also recognized to draw on five fields: mathematical logic of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive reasoning approaches. Deductive reasoning, according to some, is the basis of intelligent reasoning. If we want to explore an example of reasoning we can take the Socrates example,
Statement A: True? Y/N?
"All men are mortal"
Statement B: True? Y/N?
"Socrates is a man"
Statement C: True? Y/N?
"Socrates is a mortal"
If A is true, and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be carried out through symbol manipulation over a knowledge representation. The logical connective at play in this example is implication.
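The syllogism above can be sketched as symbol manipulation in a few lines of Python. This is a hypothetical illustration, not code from any project discussed here: facts are (predicate, subject) tuples, and a rule encodes the implication "for all x: man(x) implies mortal(x)".

```python
# Minimal sketch of the Socrates syllogism as symbol manipulation.
facts = {("man", "Socrates")}   # Statement B: "Socrates is a man"
rules = [("man", "mortal")]     # Statement A: "All men are mortal"

def derive(facts, rules):
    """Apply each implication rule to every matching fact (modus ponens)."""
    derived = set(facts)
    for premise, conclusion in rules:
        for predicate, subject in facts:
            if predicate == premise:
                derived.add((conclusion, subject))
    return derived

print(sorted(derive(facts, rules)))
# → [('man', 'Socrates'), ('mortal', 'Socrates')]  (Statement C follows)
```

The machine never needs to know what "man" or "mortal" mean; it only manipulates symbols according to the rule, which is exactly the point of a knowledge representation.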
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency, and think of all forms of computation whether mechanical or natural (in the sense of the sort of computation done by a biological entity), then we may think of knowledge representation as the medium in which that computation is made efficient. Currently we think of money as a medium of exchange; if we think of the human brain as a type of computer which does human computation, then we may likewise think of knowledge representation as the medium in which that computation takes place.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic (e.g., ) suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein (e.g., ) offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course knowledge representation is part of how we communicate with each other or with machines. Human beings use natural language to convey knowledge and this natural language can include the use of vocabularies of words with agreed upon meanings. This vocabulary of words may be found in various dictionaries including the urban dictionary and we rely on these dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations of the kind described in the examples above. In simpler terms, a knowledge base can be thought of as representing facts about the world in the form of structured and/or unstructured information which a computer system can utilize. An artificial intelligence can utilize a knowledge base to solve problems; typically this particular kind of artificial intelligence is called an expert system. In its most simple form, the artificial intelligence will just reason over this knowledge base through an inference engine, and through this it can do the sort of computations which are of great utility to problem solvers.
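As a rough sketch of how an expert system's inference engine reasons over a knowledge base (the facts and rules here are invented purely for illustration), forward chaining repeatedly applies rules until no new facts can be derived:

```python
# Invented example knowledge base: observed facts about some creature.
knowledge_base = {"has_feathers", "lays_eggs"}

# Each rule: (set of required premises, fact to conclude).
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird"}, "has_beak"),
]

def forward_chain(kb, rules):
    """Return the closure of the knowledge base under the given rules."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= kb and conclusion not in kb:
                kb.add(conclusion)
                changed = True
    return kb

print(sorted(forward_chain(knowledge_base, rules)))
# → ['has_beak', 'has_feathers', 'is_bird', 'lays_eggs']
```

Note how the second rule fires only because the first rule added "is_bird"; chaining derived facts into further derivations is what lets a large knowledge base answer questions no single contributor stated explicitly.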
When we think of Wikipedia we are thinking about an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia we can quickly see that one of them is the fact that it's centralized. We also have the problem that the knowledge stored on Wikipedia is not stored in a way which machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, in its current form it cannot act effectively as a decentralized knowledge base. DBpedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDoS or similar attack can censor it.
Decentralized knowledge is important for the world and a decentralized knowledge base is critical for the development of a decentralized AI. If we are speaking about an expert system then the knowledge base would have to be as large as possible which means we may need to give the incentive for human beings to contribute and share their knowledge with this decentralized knowledge base. We also would have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate way for it to enter into the knowledge base to be used by potential AI.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI then we must have a way for human beings to convey knowledge to the machines in a way which both the human beings and the machines can understand. The use of a knowledge representation language makes it possible for a human being to contribute to a knowledge base, and this ultimately allows machines to make use of their inference engine capabilities to reason from this knowledge base. In the case of a decentralized knowledge base, the barrier to entry is low or non-existent: any human being, or perhaps any living being, or even robots, can contribute to this shared resource, while at the same time both humans and machines gain utility from it. An artificial intelligence which functions similarly to an expert system can make use of an extremely large knowledge base to solve complex problems, and a decentralized knowledge base combined with open, decentralized access to this artificial intelligence can benefit humanity, and life on earth in general, if used appropriately.
Discussion of example projects.
One of the well known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau there is a special, simple knowledge representation language under development which resembles simplified controlled English. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
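To illustrate the general idea behind a controlled-English knowledge representation language (this is a purely hypothetical sketch, not Tau's actual language), sentences are restricted to fixed patterns so a machine can parse them unambiguously into structured facts:

```python
import re

# Hypothetical controlled-English pattern: sentences of the form "X is a Y."
PATTERN = re.compile(r"^(\w+) is a (\w+)\.$")

def parse(sentence):
    """Parse 'X is a Y.' into a (subject, 'is_a', class) triple, else None."""
    m = PATTERN.match(sentence)
    if m:
        return (m.group(1), "is_a", m.group(2))
    return None  # sentence is outside the controlled fragment

print(parse("Socrates is a man."))
# → ('Socrates', 'is_a', 'man')
```

A real controlled language would cover far more sentence forms, but the design choice is the same: by trading away the full ambiguity of natural language, contributions become facts a machine can store in a knowledge base and reason over.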
Unfortunately, upon reading the Lunyr whitepaper and following their public materials, I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle the concurrency which would probably be necessary for doing AI. I also don't see how Ethereum would be able to do it securely with the current design, although I remain optimistic about Casper. The lack of code on Github and the lack of references to their research do not allow me to completely analyze their approach. Because they are talking about a decentralized knowledge base, I can see that their approach will require more than the magic of the market combined with pretty marketing. They will require a knowledge representation language, and they will require a true decentralized knowledge base built on IPFS. This decentralized knowledge base will have to scale with IPFS, and through this maybe they can achieve something, but without a clear plan of action I would have to say that today I'm not confident in their approach or in Ethereum's ability to handle it efficiently.
Source: Original post written by Dana Edwards. Published on Steemit: The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems).
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.