If Money = Memory, if Society = a Super Computer, if Computation is in Physical Systems, what is a Decentralized Operating System? By Dana Edwards. Posted on Steemit. October 24, 2018.
These concepts are not often discussed, so let's have the discussion from the beginning. The first concept to think about is pancomputationalism, or put another way, the ubiquitous computers which exist everywhere in our environment. We can, for example, look at physical systems, living and non-living, and see computations taking place all around us. If you look at rocks and trees you can see memory storage. If you look at DNA you can see code, and if you look at viruses you can see microscopic programmers adding new code to DNA. Even the weather, such as a hurricane, is computing.
If you look at nature you see algorithms. You will also see learners (yes, the same as in AI) in nature, and the process is basically the same for all learning. Consider that everything physical is also digital, and that the universe is merely information patterns.
We can also think of society as a computer. What does society compute, though? One way people talk about a society is as a complex adaptive system, but this is also how people might talk about the human body. The human body computes with the purpose of maintaining homeostasis: to persist through time and to reproduce copies of itself. The human brain computes to promote the survival of the human body. Just as viruses pass codes into our DNA, the human brain is infected with mind viruses called memes. Memes are pieces of information which can physically alter how the brain works.
The mind isn't limited to the brain. The mind is all the resources the brain can leverage to compute. In other words, a person has a brain to compute with, but the invention of language allowed a person to compute not just with their own brain but with the environment itself. To draw on a cave wall is to use the cave to enhance the memory of the brain. To use mathematics is to use language to enhance the brain's ability to compute by relying on external storage and symbol manipulation. To use a computer with a programming language is essentially to use mathematics, only instead of writing on the cave wall we are writing in 1s and 0s. The mind exists to augment the brain in a constant feedback loop, where the brain relies on the mind to improve itself and adapt. If there were no external reality, the brain would have no way to evolve and improve itself.
A society, in the strictly human sense of the word, is an aggregation of minds: at minimum, all the human minds in that society. As technology improves, mind capacity increases, because each human can remember more, can access more computational resources, and can in essence use technology to continuously improve their mind and then leverage the improved mind to improve their brain. The Internet is the pinnacle of this kind of progress, but it's obviously not good enough. While the Internet allows for the creation of a global mind by connecting people, things, and minds, it does nothing to actually improve the feedback loop between the mind and the brain, nor does it offer what could be offered.
Bitcoin came into the picture, and perhaps we can think of it as a better memory: a decentralized memory in which you can have money. The problem is that money is a very narrow application. It is a start, just as learning to write on the cave wall was a start, but it's not ambitious enough in my opinion.
Humans in the current blockchain or crypto community do not have many ways to exchange human computation. Human computation is just as valuable as non-biological machine computation, because there are some kinds of computations which humans can do quite easily that non-biological machines still cannot do as well. Translation, for example, is something non-biological machines have a difficult time with but human beings do well. This means a market will be able to form where humans can sell their computation to translate things. If we look at Amazon Mechanical Turk we can see many tasks which humans can do that computer AI cannot yet do, such as labeling and classifying data. For things to go to the next level, we will need markets which allow humans to contribute human computation and/or human knowledge in exchange for crypto tokens.
The concept of a decentralized operating system is interesting. First, if there is such a thing as social computation (collaborative filtering, subjective ranking, Waze, etc.), then what about the new paradigm of social dispersed computing?
The question becomes what do we want to do with this computing power? Will we use it to extend life? Will we use it to spread life into the cosmos? Will we use it to become wise? To become moral? To become rational? If we want to focus on these kinds of concerns then we definitely need something more than Bitcoin, Ethereum, or even EOS. While EOS does seem to be pursuing the strategy of a decentralized operating system which seems to be the correct course, it does not get everything right.
One problem, as I mentioned before, is the importance of the feedback loops between minds and brains. The reason I keep returning to the concept of the external or extended mind is the fact that it is the mind which creates the immune system to protect the brain from harmful memes. The brain keeps the body alive. The brain is not really capable of rationality, morality, or logic on its own, and relies on the mind to achieve these. The mind is essentially all the computational resources the brain can leverage.
EOS has a problem in the sense that it doesn't seem to improve the user. The user can connect, join, earn or sell, and participate, but unless the user can become wiser, more rational, and more moral, EOS has limits. EOS does have Everipedia, which is quite interesting, but again there are still problems. What can EOS do to improve people in society, and thus improve society, if society is a computer in need of being upgraded?
Well, if society is a computer, first what does society compute? What should it compute? I don't even know how to answer those questions. I could suggest that if computation is a commodity, along with data, then whichever decentralized operating systems exist will compete for these commodities. The total brain power of a society is just as important as its amount of connectivity. And the mind of the society is the most important part of a society, because it is what can allow the society to become better over time, allow the people in the society to thrive, and allow its life forms to continue to evolve and avoid extinction.
A decentralized operating system on a technical level would have a kernel or something similar to it. This is the resource management part. For example, Aragon promises to offer a decentralized OS, and it too mentions having a kernel. A true decentralized operating system has to go further and requires autonomous agents. Autonomous agents which can act on behalf of their owners are, philosophically speaking, the extended mind. But the resources of a society are still finite and have to be managed, so a kernel would provide the ability to manage them.
The total computational ability of a society is likely a massive amount of resources, a lot more than just a bunch of CPUs connected together. Every member of the society who can compute could participate in a computation market. Of course, as we are beginning to see now, regulators seem concerned about certain kinds of social computations such as prediction markets. So it is unknown how truly decentralized operating systems would be handled, but my guess is that if designed right they could be pro-social, capable of producing augmented morality by leveraging mass computation, and, by leveraging human computation, able to be compliant. To be compliant is simply to understand the local laws, and these can be programmed into the autonomous agents if people think it is necessary.
What is more important is that if a law is clearly bad, and people have enhanced minds, then it will be very clear why the law is bad. This clarity will help people dispute and seek to change bad laws through the appropriate channels. If there is more wisdom, due to insights from big data, from data scientists, and so on, then there can be proposals for law changes which are much wiser and more intelligent. This is something people in the Tauchain community have specifically realized: that technology can be used to improve policy making.
A lot is still unknown so these writings do not provide clear answers. Consider this just a stream of consciousness about concepts I am deeply contemplating. This is also a way to interpret different technologies.
The importance of modeling opinion dynamics in Tauchain. By Dana Edwards. Posted on Steemit. October 9, 2018.
The videos I recommend anyone watch to understand the importance of this are listed below:
Opinion dynamics modeling in society (part 1)
How do governments determine policy priorities?
The Hidden Trump Model - Opinion Dynamics w/ Social Desirability Bias - H. Zontine & S. Davies
Tauchain is unique because it can aggregate opinions into consensus and toward synthesis
To understand what Tauchain is trying to do, we have to understand that in the beta network of Tauchain, consensus = synthesis. Synthesis in this case is program synthesis; in other words, the product of consensus is the software. Consensus emerges from discussion. During this discussion, opinions will be broadcast in such a way that agreements will be reached. These agreements will form the basis of the specification from which program synthesis can produce the software.
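As a loose, hypothetical sketch (the real Tau operates over logical formulas and a far richer process, not plain strings), the idea that agreements form a specification can be illustrated as the intersection of the requirements each participant states:

```python
# Toy sketch: consensus = the statements every participant agrees on.
# Participant names and requirement strings are made up for illustration.
def points_of_consensus(opinions):
    """opinions maps each participant to the set of statements they hold;
    the 'specification' is the set of statements shared by everyone."""
    views = list(opinions.values())
    return set.intersection(*views) if views else set()

opinions = {
    "alice": {"block time = 10s", "max supply = 42M", "use PoW"},
    "bob":   {"block time = 10s", "max supply = 42M", "use PoS"},
    "carol": {"block time = 10s", "max supply = 21M", "use PoS"},
}
spec = points_of_consensus(opinions)
print(sorted(spec))  # ['block time = 10s']
```

Only the point every participant shares survives into the "specification"; in Tau, the surviving statements would be formulas precise enough to drive program synthesis.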
The problem Tauchain will face is the same problem any preference aggregation network will face: just because people have preferences and try to express them does not mean those preferences will be effectively expressed. In my other post I identified a specific problem, summed up in the question of whether you can effectively aggregate preferences when false preferences are being expressed. This problem has been called preference falsification, and in general it seems to make the case for why privacy is necessary.
Tauchain promises to scale discussion, which is great, but the problem is that some discussions cannot be had at all. Some discussions are so controversial that people cannot even attempt to start them. For these discussions, only privacy would allow the discussion to take place. Of course, this doesn't mean all discussions would be equally productive even if privacy were allowed.
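A minimal, hypothetical sketch of why preference falsification breaks aggregation: if enough members of the true majority publicly report the perceived-safe opinion instead of their real one, the aggregate flips. All names and numbers below are invented for illustration.

```python
# Toy model of preference falsification: falsifiers report the
# perceived-safe opinion instead of their true preference.
def aggregate(true_prefs, perceived_safe, falsifiers):
    """Return the majority of *reported* opinions, where each index in
    falsifiers reports perceived_safe rather than their true preference."""
    reported = [perceived_safe if i in falsifiers else p
                for i, p in enumerate(true_prefs)]
    return max(set(reported), key=reported.count)

true_prefs = ["A"] * 6 + ["B"] * 4              # "A" is the true majority
print(aggregate(true_prefs, "B", set()))        # honest reporting -> "A"
print(aggregate(true_prefs, "B", {0, 1, 2}))    # three hide their view -> "B"
```

With privacy the honest case is recoverable; without it, the falsified aggregate is indistinguishable from a genuine "B" majority.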
What is so important about modeling opinion dynamics?
Opinions have to be formed. How are opinions formed? If an agent must make a decision to be pro or con on some specific issue, can we model this process? The utility of this is explored in the video "The mathematics of influence". In other words, it might be possible to use Tau not just to scale discussion but to discuss how to better discuss. To improve opinion formation, or at least to understand how opinions are formed in the network, could be of utility. The more participants in the discussion and the bigger the network, the more important the mathematical models could become.
How do we deal with problems such as bias? This could include racism, sexism, and any other kind of cognitive bias that can influence opinion formation. Ultimately, if we do not understand how to model or think about these things mathematically, it's going to be much harder to examine in depth what is going on. For people who are mathematically inclined and who understand the danger of bias in AI, this may be of interest.
The voter model is specifically interesting. It examines how opinions on whom to vote for form. Under this model, a node is picked at random from the network and adopts the opinion of one of its randomly chosen neighbors. Which opinion wins out? That of the high-degree nodes (hubs), which have the highest probability of being connected to. This could mean a lot for an election or for opinion shaping. To me this resembles the thought-leader paradigm, where the most connected thought leader expresses their opinion in the group, and because a lot of people are connected to them in some direct or indirect way, their opinion holds a lot more weight. If those thought leaders are zealots (they will not change their mind no matter what new evidence they receive), then these individuals have even more influence on the outcome and on opinion formation.
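The voter model described above is easy to simulate. Below is a minimal sketch on a hypothetical star network (one hub, ten leaves) where the hub is a zealot; as the model predicts, the highly connected zealot's opinion takes over.

```python
import random

def voter_model(neighbors, opinions, steps, zealots=frozenset(), seed=0):
    """Voter model: each step, a random non-zealot node adopts the
    opinion of one of its randomly chosen neighbors."""
    rng = random.Random(seed)
    opinions = dict(opinions)
    updatable = [n for n in neighbors if n not in zealots]
    for _ in range(steps):
        node = rng.choice(updatable)
        opinions[node] = opinions[rng.choice(neighbors[node])]
    return opinions

# Star network: a hub connected to ten leaves.
hub, leaves = "hub", ["leaf%d" % i for i in range(10)]
neighbors = {hub: leaves, **{leaf: [hub] for leaf in leaves}}
start = {hub: "A", **{leaf: "B" for leaf in leaves}}

# The hub is a zealot holding "A"; each leaf's only neighbor is the hub,
# so after enough steps every node ends up holding "A".
final = voter_model(neighbors, start, steps=2000, zealots={hub})
print(sum(opinion == "A" for opinion in final.values()))  # 11
```

On richer network topologies the same function lets you measure how often hub opinions win against leaf opinions, which is the quantitative version of the thought-leader observation above.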
What is Tauchain & Why It Could Be One of The Greatest Inventions of All Time (Part 1: Introduction). By Kevin Wong. Posted on Steemit. August 28, 2018.
In anticipation of Tau's demo some time around the end of this year, I'll be publishing a series of articles on Steem leading up to its release and beyond. If you would like to get to know what some of us think is going to be one of the greatest inventions of all time, I'd recommend you check out http://idni.org. It seems like a foundation that we've missed out on building together since the birth of the Internet.
The closest resemblance to this project is the Semantic Web, although some of us would place Tau as far more ambitious in scope, oddly in a way that is likely more feasible given its ingenious use of a logic blockchain to power a decentralized social choice platform. I think it's impressive how singular the concept actually is, despite the unavoidable lengthy explanations that come paired with the many first-time features that Tau will provide.
Without further ado, let's explore this world-changing technology that is currently baking in the oven.
What is Tau?
Let's begin by first checking out the opening of IDNI's website at http://idni.org:-
Tau is a decentralized blockchain network intended to solve the bottlenecks inherent in large scale human communication and accelerate productivity in human collaboration using logic based Artificial Intelligence.
Sounds fairly straightforward at first glance, and to me, it really stands out in the cryptosphere. We now have billions of people using the Internet every day, yet we still do not have any effective means of discussing and collaborating without being all over the place. Sure, we may have been pouring a lot of our time and effort into various platforms trying to connect with others, but have things really been any different compared to the time before the Internet?
The speed of information propagation has increased by orders of magnitude, and we can reach anyone on the planet now, but it's still really up to us to be present and to process information in our heads before turning it into relevant knowledge for our networks.
Expanding our social bandwidth.
As it turns out, we have been experiencing a lot of trouble coming to terms with the chatter of billions of people in cyberspace. The bottlenecks inherent in our human bandwidth remain unsolved even with near-instantaneous communications. From governments to corporations and blockchain communities, we are all still facing the age-old problem of being unable to scale governance beyond the size of a classroom. It's just difficult to get our points across to many different people, let alone make sense of complex long-term discussions and make network-wide decisions collaboratively.
The introduction to The New Tau written by Ohad Asor explains our situation quite accurately:-
Some of the main problems with collaborative decision making have to do with scales and limits that affect flow and processing of information. Those limits are so believed to be inherent in reality such that they're mostly not considered to possibly be overcomed. For example, we naturally consider the case in which everyone has a right to vote, but what about the case in which everyone has an equal right to propose what to vote over?
So how is Tau actually going to solve our communications bottleneck? It will be through a highly bespoke and non-trivial implementation of logic-based Artificial Intelligence (AI). It's worth noting that AI in this case is more of a marketing buzzword, and it is actually not of the same variety as commercial implementations of deep machine learning.
The distinction that must be made is that Tau is not the kind of AI that attempts to guess what the world around it is like, including our opinions and the things we say or do. Instead, we must make the step towards communicating through Tau, and what we choose to communicate will be as definite as computer programs. It can be thought of as a persistent logic companion that helps us improve and scale our reasoning, logic, and bandwidth.
We can take the time to share what we want to share on the Tau network and most of the logic-based connections and operations will happen in the background over time, even when we're not paying attention in-person. Again, the use of the word AI is a misnomer here because it usually paints the picture of AI agents attempting to mimic human autonomy. That's not what Tau is about. In this case, thinking about Tau as just a logic machine should provide better clarity on what it actually is.
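To make "just a logic machine" slightly more concrete, here is a generic forward-chaining sketch: facts plus if-then rules, applied until nothing new can be derived. This is a textbook illustration of rule-based deduction, not Tau's actual inference engine or logic, and the fact and rule names are invented for the example.

```python
def forward_chain(facts, rules):
    """Derive new facts from (premises, conclusion) rules until a
    fixpoint is reached, then return the full set of known facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Hypothetical facts and a rule about two users agreeing on a statement.
facts = {"alice_agrees", "bob_agrees"}
rules = [({"alice_agrees", "bob_agrees"}, "consensus_detected")]
print("consensus_detected" in forward_chain(facts, rules))  # True
```

The point of the sketch is the workflow: statements go in once, and the machine keeps deriving consequences in the background without anyone "paying attention in person".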
The power of logic.
To expand, here's the second paragraph found in the opening of IDNI's website that explains Tau's paradigm in logic-based communications, http://idni.org:-
Currently, large scale discussions and collaborative efforts carried out directly between people are highly inefficient. To address this problem, we developed a paradigm which we call Human-Machine-Human communication: the core principle is that the users can not only interact with each other but also make their statements clear to their Tau client. Our paradigm enables Tau to deduce areas of consensus among its users in real time, allowing the network to boost communication by acting as an intermediary between humans. It does so by collecting the opinions and preferences its users wish to share and logically constructing opinions into a semantic knowledge base.
Indeed, Tau will offer a semantic social choice platform where we can discuss and store knowledge in a logical universe that helps us organize information, thereby empowering us in highly relevant ways. If you're worried about privacy, know that Tau is first-and-foremost designed as a local client with local processing and storage. The platform itself will be deployed as a decentralized peer-to-peer network, a place where we can connect and share our knowledge-base with anyone we desire.
The only price to pay in all of this is that we must speak in Tau-comprehensible languages, which can always be added and modified over time. A sophisticated language defined over Tau may closely resemble natural languages, but it is really best to think of Tau as a machine-comprehensible system that only speaks in logic. Fortunately, logical formalism is something that we can deal with.
So it will be up to us to communicate with our local Tau client in a way that it'll understand our worldviews. When the machine understands what we share completely in some logical, mathematically-verifiable sense, it can then connect our dots with the rest of the Tau network, effectively boosting communications beyond the limits of human bandwidth, effectively scaling our points of discussion, consensus, and collaboration up to an infinite number of participants.
Code and consciousness.
Finally, we look at the last paragraph of Tau's introduction at http://idni.org
Able to deduce consensus and understand discussions, Tau can automatically generate and execute code on consensus basis, through a process known as code synthesis. This will greatly accelerate knowledge production and expedite most large scale collaborative efforts we can imagine in today's world.
Since Tau is a logic blockchain that powers a semantic social choice platform, we can leverage it to have both small and large-scale discussions about program specifications, detect points of consensus, and even generate software in the process. Being able to go from discussions to the realization of decentralized applications would mean inclusive code development for the masses. It's also a unique addition to decentralization that no other blockchain projects have even thought about.
Now that we may have come to a better understanding of Tau's emphasis on the use of logic in every part of its being, let's revisit the process description found in The New Tau to get closer to knowing what it really is about:-
We are interested in a process in which a small or very large group of people repeatedly reach and follow agreements. We refer to such processes as Social Choice. We identify five aspects arising from them, language, knowledge, discussion, collaboration, and choice about choice. We propose a social choice mechanism by a careful consideration of these aspects.
In short, Tau is a decentralized peer-to-peer network that takes the shape of a social choice platform, and it can become anything we want it to be, as long as it's expressible within the self-defining and decidable logic of FO[PFP], with PSPACE complexity. This precise specification is required to satisfy the very definition of Tau as seen in the excerpt above. Tau is also intended to be a compiler-compiler.
This takes application-generality in a completely different direction compared to blockchains that are built specifically with Turing-completeness in mind, like Ethereum. Relevant literature to check out: Finite Model Theory.
Understanding each other.
While it's all highly technical and difficult to grasp in one sitting, perhaps a better way to truly begin to understand Tau is to spend some time studying its main features, or to just wait for the product release. In any case, I will try to explore these topics in future posts if my brain can still handle it.
The more I think about Tau, the more I think that it is (poetically) a logical conclusion to the way the Internet works as a protocol. It even lives and breathes logic. Not just any kind of logic, but specifically, logics that can define their own semantics and are decidable. Tau is intelligently designed to be a truly dynamic and ever-evolving blockchain.
When the Tau community intends to make changes to the network code, rules, or protocols, they will simply need to express these opinions and perspectives in a compatible language over the network. The self-defining logic of the Tau blockchain network will enable it to detect the consensus among these opinions and automatically amend its own code to reflect this consensus from block to block. Unlike the common method of voting, Tau's approach will take into account the perspectives of the entire community, where people will be free to vote and to propose what to vote for in real time. This unique ability makes Tau the only decentralized solution for creating a truly dynamic protocol.
Now you might think: Tau seems like a powerful tool but will it be too difficult to use for most people? There might be some learning curve involved for sure, and it'd be similar to learning a new language in the beginning. Those of us who learn to use it well enough to scale our discussions and collaborative works will likely gain a significant edge over those who are not using the platform. I'd imagine plenty of projects and communities around the world being able to overcome some of their obstacles in development through Tau. Hence, it may be fair to expect that market forces will gravitate towards the platform just like how we're all using the Internet these days.
Until the next post.
I've been thinking about Tau almost every day for many months now, and I will admit that its deeper technicalities are still way out of my league, although I've made sure to word them as broadly as I can. If you like what I do, please consider sharing this post and voting on my witness account on Steem. For more info, check out my recent witness announcement post.
As always, thanks for reading!
Follow me @kevinwong / @etherpunk
Not to be taken as financial advice.
Always do your own research.
Ohad Asor the lead developer and founder of Tauchain releases first new blog post in over a year. By Dana Edwards. Posted on Steemit. December 30, 2017.
The new blog post, titled "The New Tau", is available for everyone to read. It addresses the critical topic of collaborative decision making. This is a topic which I myself have been interested in, and Ohad's solution is different from the usual ones. In my own thinking I considered a solution based on collaborative filtering, but I realized this would never scale. I then considered a solution based on IA (intelligence amplification) by way of personal preference agents; this does scale, but requires that the agents have a lot of data to truly know each user and their preferences. The solution Ohad Asor comes up with attempts to solve many of the same problems, but his solution scales without seeming to require collaborative filtering or any kind of voting as we traditionally think of it.
There are obvious problems with voting which many will recognize from Steem, which also relies on collaborative filtering.
Now let's see what Ohad Asor has to say:
In small groups and everyday life we usually don't vote but express our opinions, sometimes discuss them, and the agreement or disagreement or opinions map arises from the situation. But on large communities, like a country, we can only think of everyone having a right to vote to some limited number of proposals. We reach those few proposals using hierarchical (rather decentralized) processes, in the good case, in which everyone has some right to propose but the opinions flow through certain pipes and reach the voting stage almost empty from the vast information gathered in the process. Yet, we don't even dare to imagine an equal right to propose just like an equal right to vote, for everyone, in a way that can actually work. Indeed how can that work, how can a voter go over equally-weighted one million proposals every day?
This in my opinion is very true. In reality we have discussions, and at best we seek to broadcast or share our intentions. Intent casting was actually the basis of how I thought to solve this problem of social choice, but I would say intent casting, even with my best ideas, would not have been good enough, because again the typical voter would be uninformed. Unless the typical voter can be continuously educated (which in a complex world may be unrealistic), or the network itself can somehow keep the voter up to date, intent casting barely works. It works well for shopping, where a shopper knows what they want, but not so well when a person doesn't actually know what they want and merely knows what they value. Values are the basis for morality and for ethical systems, and this is the area where Ohad's solution really shines.
Tauchain has the potential not only to scale discussions but also to scale morality, because it will have the built-in logic to make sure people can be moral without constant contradiction. The truth is, without this aid, the human being cannot actually be moral in decision making, in my opinion, due to the inability to avoid all sorts of contradictions.
All known methods of discussions so far suffer from very poor scaling. Twice more participants is rarely twice the information gain, and when the group is too big (even few dozens), twice more participants may even reduce the overall gain into half and below, not just to not improve it times two.
This is the conclusion that Ohad and I reached separately, but it still holds true: we require the aid of machines in order to scale collaborative decision making. This in my opinion is one of the major difference makers, philosophically speaking, between the intended design and function of Tauchain and every other crypto platform in development. It is also going to be the difference maker for the community which Tauchain as a technology will serve, because it will enable machines and humans to aid each other for mutual benefit, or symbiosis.
The blog post by Ohad Asor brings forward a very important discussion which has many different angles to it. The angle I focused on with regard to the social choice dilemma is the problem of how we scale morality. In my opinion, if we can scale morality in a decentralized, open source, truly significant manner, then nothing stands in the way of absolute legitimacy, mainstream adoption, and with it a very high yet fairly priced token. The utility value of scaling morality is in my opinion higher than just about anything else we can accomplish with crypto tech and AI. If the morality is better, then the design of future platforms will be greatly improved in terms of how users are treated, and this in itself could, at least in my opinion, help settle the debate about whether AI can remain beneficial over a long period of time. I think if we can scale morality in a decentralized way, it will make it easier to design and spread beneficial AI. Crypto-effective altruism could become a new thing if we can solve the deeper, more philosophical problems.
Using Controlled English as a Knowledge Representation language. By Dana Edwards. Posted on Steemit. April 4, 2017.
Previously I mentioned "controlled English" when discussing the concept of knowledge representation. This post will go into some detail about what controlled English is. Specifically, I will discuss Kuhn's doctoral dissertation and Attempto Controlled English (ACE).
Computational linguistics is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective.
There are many different controlled natural languages
First I would like to note that controlled English is not the only controlled natural language, and Attempto Controlled English is only one particular controlled English. For example, there is RuleSpeak, a controlled natural language for business rules. Another example is Quelo Controlled English, a controlled English for querying, where you would make statements such as: "I am looking for something, it should be located in a city, the city should produce a new car, the new car should be equipped with a diesel engine". In addition to these examples we also have Google Voice Actions, where you can speak into your Android phone and say something like: "Create a calendar event: Dinner in San Francisco, Saturday at 7:00PM". All of these are examples of controlled natural languages, and they reveal just how powerful this could be for users and developers.
What is Attempto Controlled English (ACE)?
Attempto Controlled English, also known as ACE, is a specific controlled natural language. It is likely that at some point in the early stages of development this controlled natural language will be implemented on Tauchain. ACE is like English but relies on following certain rules with a restricted vocabulary.
Rule: subject + verb + complements + adjuncts
All simple ACE sentences have the above structure of subject + verb + complements + adjuncts. An example is the sentence below:
A customer waits.
To construct sentences without a verb you can rely on:
there is + noun phrase
There is a customer.
And you can add detail with:
A trusted customer inserts two valid cards.
And you can use variables (capital letters such as X) to refer back to nouns introduced earlier.
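The simple sentence patterns above (subject + verb + complements, and "there is" + noun phrase) can be sketched as a toy recognizer. This is only an illustration with a made-up vocabulary; the real ACE parser implements a far larger grammar, and variables are not handled here.

```python
# Toy recognizer for the simple ACE-like patterns shown above.
# The vocabulary is a hypothetical stand-in, not ACE's real lexicon.
NOUNS = {"customer", "card", "cards"}
VERBS = {"waits", "inserts"}
ADJECTIVES = {"trusted", "valid"}
DETERMINERS = {"a", "an", "the", "every", "two"}

def is_noun_phrase(words):
    """determiner + optional adjectives + noun"""
    return (len(words) >= 2 and words[0] in DETERMINERS
            and all(w in ADJECTIVES for w in words[1:-1])
            and words[-1] in NOUNS)

def is_simple_sentence(sentence):
    words = sentence.rstrip(".").lower().split()
    if words[:2] == ["there", "is"]:                 # "there is" + noun phrase
        return is_noun_phrase(words[2:])
    for i, w in enumerate(words):                    # subject + verb [+ object]
        if w in VERBS:
            obj = words[i + 1:]
            return is_noun_phrase(words[:i]) and (not obj or is_noun_phrase(obj))
    return False

print(is_simple_sentence("A customer waits."))                            # True
print(is_simple_sentence("There is a customer."))                         # True
print(is_simple_sentence("A trusted customer inserts two valid cards."))  # True
print(is_simple_sentence("Waits a customer."))                            # False
```

The restricted vocabulary is the key design point: because every word and pattern is known in advance, parsing is unambiguous, which is exactly what makes controlled languages machine-comprehensible.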
How does Attempto Controlled English help with Knowledge Representation?
Specifically, because anyone who speaks English can quickly learn Attempto Controlled English, anyone will be able to contribute to the process of knowledge representation. Contributing to a knowledge base becomes very easy when you can simply describe, in plain English (with restrictions), exactly the knowledge you want to represent. A semantic wiki can be built out of this process rather easily.
How does Attempto Controlled English relate to Tauchain?
Tauchain requires input from the users to determine a formal specification. Attempto Controlled English is simple enough that anyone can describe a formal specification. For example, sentences like:
Every customer inserts a card.
As you can see above, we are dealing with types. Human is a type, and the type human is divided at a minimum into male and female subtypes.
AceWiki gives an example of what a formal specification could look like in Tauchain. The example is country, where the knowledge in this case is the concept of a country. We describe a country by filling in the wiki collaboratively: we know first of all that every country is an area, and then collaboratively we fill in the list of current persons who govern a country. Through this method we add to the knowledge base using the knowledge representation language ACE, and in the case of Tauchain we would potentially be adding to a formal specification which is eventually synthesized (program synthesis) by the Tauchain automatic programmer.
To learn more about the Attempto Controlled English Wiki, watch the video lecture.
Kuhn, T. (2009). Controlled English for knowledge representation (Doctoral dissertation, University of Zurich).
Kuhn, T. (2014). A survey and classification of controlled natural languages. Computational Linguistics, 40(1), 121-170.
Kuhn, T. (2009). How controlled English can improve semantic wikis. arXiv preprint arXiv:0907.1245.
Ranta, A., Enache, R., & Détrez, G. (2010, September). Controlled language for everyday use: the molto phrasebook. In International Workshop on Controlled Natural Language (pp. 115-136). Springer Berlin Heidelberg.
Ross, R. G. (2013). Tabulation of lists in RuleSpeak—using "the following" clause. Business Rules Journal, 14(4), 1-16.
White, C., & Schwitter, R. (2009, December). An update on PENG light. In Proceedings of ALTA (Vol. 7, pp. 80-88).
Web 2: http://attempto.ifi.uzh.ch/site/resources/
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: Using Controlled English as a Knowledge Representation language. April 4, 2017.
Personal agents: What are expert systems? Do expert systems benefit from decentralization. By Dana Edwards. Posted on Steemit. March 28, 2017.
In my previous blog post titled "The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems)" I discussed the first piece of a larger puzzle. Knowledge representation and a shared knowledge base were both explained. The purpose of that blog post was to describe the concepts of knowledge representation and the knowledge base, but also to show why both are valuable for artificial intelligence. This particular article will explain the concept of an expert system, and then I will discuss some possible ideas for what can be built in a decentralized AI context.
The recipe for building an expert system
An expert system has two core components: 1) a knowledge base, and 2) an inference engine (semantic reasoner). An expert system is a computer system which emulates the decision-making capability of a human expert by reasoning about knowledge and applying rules. Implication, for example, is the rule behind if...then... statements (otherwise recognized as "if p then q"). In a computer programming language we would call this set of "if then" statements our [conditionals](https://en.wikipedia.org/wiki/Conditional_%28computer_programming%29). Conditionals are familiar to anyone who knows C, C++, Java, Python, or any typical programming language, and this basic structure comes from logic.
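To make the "if p then q" structure concrete, here is a minimal conditional written as a Python function. The domain (trusted customers and valid cards) is purely illustrative, chosen to echo the ACE examples earlier in this document.

```python
# "If p then q" as an ordinary programming conditional.
# The customer/card domain here is a hypothetical illustration.
def card_rule(customer_is_trusted: bool, card_is_valid: bool) -> str:
    if customer_is_trusted and card_is_valid:   # if p then q
        return "accept"
    elif card_is_valid:                          # else if: another rule
        return "review"
    else:                                        # default rule
        return "reject"

print(card_rule(True, True))    # accept
print(card_rule(False, True))   # review
print(card_rule(False, False))  # reject
```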
We can recognize that conditionals are a set of rules which can be mapped onto a flow chart.
Expert systems are rule based AI
Just as if-then-else statements can become a structure of rules, expert systems are entirely rule-based.
An expert system which has a knowledge base to work with may rely on a goal tree.
Expert systems are fundamentally weak AI. They cannot be self-aware or conscious, as they are simply mechanical sets of rules applied according to logic over a knowledge base. Expert systems may exhibit intelligent behavior, which is to say they are intelligent tools. This may be enough, however, to achieve many goals, and you can have personal agents which behave intelligently using an expert system approach.
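The two core components named above (knowledge base plus inference engine) can be sketched in a few lines. This is a minimal forward-chaining engine, not any particular product's implementation; the facts and rules are hypothetical placeholders.

```python
# A minimal expert system: a knowledge base of facts and if-then rules,
# plus an inference engine that applies rules until no new facts appear.
facts = {"customer_is_trusted", "card_is_valid"}  # illustrative facts

rules = [
    # (conditions, conclusion) -- "if all conditions hold then conclude"
    ({"customer_is_trusted", "card_is_valid"}, "transaction_approved"),
    ({"transaction_approved"}, "receipt_printed"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are satisfied (forward chaining)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['card_is_valid', 'customer_is_trusted', 'receipt_printed', 'transaction_approved']
```

Real inference engines add conflict resolution, backward chaining over goal trees, and far better indexing, but the loop above is the essential mechanism.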
Trees, trees, and more trees
Now we know how to create an expert system built from a knowledge base and a reasoner. To understand what the future holds for decentralized AI I must briefly discuss the concept of trees. Trees can possibly be infinite structures. Higher-order model checking, for those familiar with model checking, is a form of model checking which can work over infinite structures such as infinite trees via higher-order recursion schemes. Why is any of this important?
Program verification and program analysis work because any program can be represented as a tree. This is important for security guarantees and for correctness guarantees. If you would like to approach decentralization of AI, then you ultimately will have to work with trees, and for that reason I discuss it.
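The claim that any program can be represented as a tree is easy to demonstrate: Python exposes its own abstract syntax tree through the standard `ast` module.

```python
import ast

# Parse a tiny program into its abstract syntax tree.
tree = ast.parse("x = 1 + 2")

# Walk the tree and list the node types, root first.
names = [type(node).__name__ for node in ast.walk(tree)]
print(names)
# e.g. ['Module', 'Assign', 'Name', 'BinOp', 'Store', 'Constant', 'Add', 'Constant']
```

Program analysis and verification tools operate over structures like this one; the exact node names vary slightly between Python versions.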
Decentralized knowledge base + distributed contribution via knowledge representation language
In order to build a decentralized AI it will be important to have a decentralized knowledge base. The main problem is growing the knowledge base large enough that an expert system can become smart. In a decentralized context you can, in theory, have anyone in the world contribute to the collective decentralized knowledge base. Decentralization of the knowledge base would make it more resilient in the case of an attack, a nuclear apocalypse, or similar scenarios like those which necessitated the decentralization of the Internet. From a cybersecurity perspective, human knowledge is safest if decentralized.
Sensors are essentially everywhere and big data is essentially here, but the decentralized knowledge base doesn't exist. Google wants to build a knowledge base, but it will at best be centralized, as will its AI. A decentralized AI based on expert systems can function similarly to what has already been described as the semantic web, but with some improvements.
Your own army of personal experts
If everything goes right in a decentralized context then each person will have access to intelligent agents. These agents will be able to reason over a knowledge base and act as an expert system. For very difficult tasks the computation resources could be rented and paid for via a token. Verified computation and model checking can allow for many machines to compute on your behalf but with a minimized security risk as you would have formal verification built in.
What is the conclusion here?
Expert systems can be built in a decentralized context. Decentralized AI is theoretically possible and likely to be built sooner or later. Decentralized AI can be safer than centralized AI depending on the use case, and it can also be much more efficient depending on the circumstances. For example, if computation can be sold on a market and a million PC owners rent out their computation for a token, then you in essence have the cheapest supercomputer in the world. Will it be good at everything? Perhaps not, but for certain kinds of computation it will be the most cost effective.
Bots will become much more powerful and more capable, with an ability to be experts and make intelligent decisions. This will have both positive and negative consequences depending on the safeguards and governance capabilities in place. If there are no safeguards at all, then this could be both a new frontier and present new dangers. At the same time, if there are some safeguards and ethical governance, then this could provide many new opportunities and possibly boost the economy in new ways. In fact, the ability to have this AI can improve not just reasoning ability but decision-making abilities too, and that can allow for moral augmentation along with improved governance.
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: Personal agents: What are expert systems? Do expert systems benefit from decentralization.
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we must list the five roles of knowledge representation, which reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it is substituting for, or acting in place of, something. So if knowledge representation is a surrogate then it must be representing some original. There is of course an issue: a completely accurate representation of an object can only come from the object itself. All other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this into context, if you make a copy of an audio recording, every copy you make is going to contain slightly more artifacts. The same happens with information sent through a wire, where if not properly amplified there will eventually be artifacts introduced by copying a transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world.
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
In this case, selecting a representation means making a set of ontological commitments. An ontological commitment is a framework for how we will view the world, such as viewing the world through logic. If we choose to view the world through logic, through rule-based systems, then all of our knowledge about the world is also within that framework. We choose our representation technology and commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but intelligent reasoning is also recognized to be derived from five fields: mathematical logic, of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive reasoning approaches. Deductive reasoning, according to some, is the basis of logical inference. If we want to explore an example of reasoning we can take the Socrates example:
Statement A: True? Y/N?
"All men are mortal"
Statement B: True? Y/N?
"Socrates is a man"
Statement C: True? Y/N?
"Socrates is a mortal"
If A is true, and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be resolved using symbol manipulation and knowledge representation. The symbol at play in this example would be implication.
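The symbol manipulation just described can be carried out mechanically. The sketch below encodes statement A as a universal rule and statement B as a fact, then derives statement C by instantiating the rule; the tuple encoding is an illustrative choice, not a standard knowledge representation format.

```python
# The Socrates syllogism as simple symbol manipulation.
rules = {"Man": "Mortal"}          # A: all men are mortal  (Man(x) -> Mortal(x))
facts = {("Man", "Socrates")}      # B: Socrates is a man

derived = set(facts)
for predicate, subject in facts:
    if predicate in rules:
        # C follows by implication: instantiate the rule for this individual.
        derived.add((rules[predicate], subject))

print(("Mortal", "Socrates") in derived)  # True
```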
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency, and think of all forms of computation, whether mechanical or natural in the sense of the sort of computation done by a biological entity, then we may think of knowledge representation as a medium for that computational efficiency. Just as we think of money as a medium of exchange, and just as we may think of the human brain as a type of computer which does human computation, we may think of knowledge representation as the medium in which that computation is carried out.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course knowledge representation is part of how we communicate with each other or with machines. Human beings use natural language to convey knowledge, and this natural language can include the use of vocabularies of words with agreed-upon meanings. This vocabulary of words may be found in various dictionaries, including Urban Dictionary, and we rely on these dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations, which are described in the examples above. In simpler terms, this knowledge base could be thought of as representing the facts about the world in the form of structured and/or unstructured information which can be utilized by a computer system. An artificial intelligence can utilize a knowledge base to solve problems, and typically this particular kind of artificial intelligence is called an expert system. The artificial intelligence in the most simple form will just reason over this knowledge base through an inference engine, and through this it can do the sort of computations which are of great utility to problem solvers.
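One of the simplest structured forms a knowledge base can take is a collection of subject-predicate-object triples, the same shape used by semantic web systems. The sketch below is illustrative only; the facts echo the country example from the ACE article above, and the query function is a hypothetical minimal interface.

```python
# A knowledge base as subject-predicate-object triples, with a tiny
# pattern-matching query function. Facts are illustrative.
kb = [
    ("Switzerland", "is_a", "country"),
    ("Zurich", "located_in", "Switzerland"),
    ("country", "is_a", "area"),
]

def query(kb, subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return [
        (s, p, o) for (s, p, o) in kb
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(kb, predicate="is_a"))
# [('Switzerland', 'is_a', 'country'), ('country', 'is_a', 'area')]
```

An inference engine layered over a store like this could, for instance, chain the two `is_a` facts to conclude that Switzerland is an area.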
When we think of Wikipedia we are thinking about an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia we can quickly see that one of them is the fact that it's centralized. We also have the problem that the knowledge stored on Wikipedia is not stored in a way which machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, it is not in its current form able to act effectively as a decentralized knowledge base. DBpedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDoS or similar attack can censor it.
Decentralized knowledge is important for the world and a decentralized knowledge base is critical for the development of a decentralized AI. If we are speaking about an expert system then the knowledge base would have to be as large as possible which means we may need to give the incentive for human beings to contribute and share their knowledge with this decentralized knowledge base. We also would have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate way for it to enter into the knowledge base to be used by potential AI.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI, then we must have a way for human beings to convey knowledge to machines in a way which both human beings and machines can understand. The use of a knowledge representation language makes it possible for a human being to contribute to a knowledge base, and this ultimately allows machines to apply their inference engine capabilities to reason from this knowledge base. With a decentralized knowledge base the barrier to entry is low or non-existent: any human being, or perhaps any living being or even robots, can contribute to this shared resource, while both humans and machines gain utility from it. An artificial intelligence which functions like an expert system can make use of an extremely large knowledge base to solve complex problems, and a decentralized knowledge base, combined with open and decentralized access to this artificial intelligence, can benefit humanity and life on earth in general if used appropriately.
Discussion of example projects.
One of the well-known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau there is a special simple knowledge representation language under development which resembles simplified controlled English. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
Unfortunately, upon reading the Lunyr whitepaper and following their public materials, I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle the concurrency which would probably be necessary for doing AI. I also don't see how Ethereum would be able to do it securely with the current design, although I remain optimistic about Casper. The lack of code on GitHub and the lack of references to their research do not allow me to completely analyze their approach. Because they are talking about a decentralized knowledge base, their approach will require more than the magic of the market combined with pretty marketing. They will require a knowledge representation language, and they will require a true decentralized knowledge base built on IPFS. This decentralized knowledge base will have to scale with IPFS, and through this maybe they can achieve something, but without a clear plan of action I would have to say that today I'm not confident in their approach or in Ethereum's ability to handle it efficiently.
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems).
A less understood feature of Tauchain is the feature called "program synthesis" in academic literature. This is a feature that, as far as I know, no one else in the crypto community is investigating. This feature of program synthesis, when combined with the knowledge-based AI discussed in my previous posts (1, 2), would allow Tauchain to leverage the trend of big data. As the collective knowledge base of Tauchain expands, the capability of this automated programmer will improve. The automatic programmer will reason over an increasing collective knowledge base to allow Tauchain to in a sense "program itself". Smart contract developers will gain peace of mind knowing that their smart contracts are formally verified (for correctness), and users will be able to contribute to the development by joining in as a group to accurately describe the behavior of the code.
The impact of program synthesis and knowledge based AI on the smart contract development community.
Let's discuss what this means for smart contract developers. A smart contract developer today has to deal with creating a white paper, then a formal specification, then doing the programming and formal verification themselves. This is very difficult for humans to do, and as a result very few people are able to write secure smart contracts, yet almost everyone can come up with some interesting idea for a smart contract. Program synthesis will allow individuals or a group of people to specify in a simple yet controlled natural language, such as simplified English, what they want (this creates the formal specification), and from this description the automatic programmer will handle the rest. The code will in a sense be written automatically by the AI by reasoning over a knowledge base which could include the knowledge necessary to produce the code for that smart contract.
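A classic toy form of program synthesis is search over candidate programs against a specification given as input/output examples. The sketch below is a deliberately tiny illustration of that idea, not Tauchain's actual mechanism, which would reason over a knowledge base rather than enumerate a hand-written candidate list.

```python
# Toy program synthesis: the "specification" is a set of input/output
# examples, and the synthesizer searches a small space of candidate
# expressions for one satisfying every example.
spec = [((2, 3), 5), ((1, 1), 2), ((0, 7), 7)]  # behaves like addition

candidates = {
    "a + b": lambda a, b: a + b,
    "a * b": lambda a, b: a * b,
    "a - b": lambda a, b: a - b,
}

def synthesize(spec, candidates):
    """Return the name of the first candidate consistent with the whole spec."""
    for name, program in candidates.items():
        if all(program(*args) == expected for args, expected in spec):
            return name
    return None

print(synthesize(spec, candidates))  # a + b
```

Real synthesis systems search vastly larger program spaces and use logical specifications rather than examples, but the feedback loop (describe, generate, check, refine) is the same one described for Tauchain below.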
AI learns to write its own code by stealing from other programs?
In summary, each participant in Tauchain will be able to speak to their "automated programmer" in a language they are comfortable with. They'll describe as accurately as they can the functioning and behavior of the program or smart contract. The automated programmer will then reason over a very large knowledge base, and if it is smart enough, it will automatically generate the code for the smart contract from their description. The participant will then either be satisfied with what was generated, or update their description and trigger the process again until they are satisfied. This is yet another breakthrough feature which Tauchain may be able to offer to the crypto community, in addition to potentially solving the knowledge acquisition bottleneck problem.
What is the Knowledge Acquisition Bottleneck problem? By Dana Edwards. Posted on Steemit. March 29, 2017.
Now that we know what knowledge representation is, and what knowledge bases are, and how the knowledge base is relied upon in a knowledge based system of artificial intelligence (KR+KB+Inference engine), we can move on to discussing one of the open problems.
The Knowledge Acquisition Bottleneck problem.
Many people already know about the familiar Byzantine generals problem in computer science. We also know how the Nakamoto consensus in Bitcoin provided a novel example of a solution. The Knowledge Acquisition Bottleneck problem is one of the problems plaguing AI, and it is what limits, or seems to limit, the strength of artificial intelligence. One of the main problems in artificial intelligence is that knowledge formation typically requires domain experts who can contribute to the knowledge base. The Cyc project attempted to solve the problem of scaling up the knowledge base but is suffering from the bottleneck. The bottleneck is summarized in [Wagner, 2006].
The paper from which this summary was pulled, "Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management", also offers a solution called collaborative conversational knowledge management. This is the same solution which Tauchain will attempt to utilize in a more sophisticated way: Tauchain will allow for collaborative theory formation.
We see this concept in how Wikipedia works to manage knowledge. We know Wikipedia is indeed not without flaws, but it does manage knowledge, and the paper's conclusion reinforces this point.
Tauchain by design will be collaborative and allow for collaborative theory formation. This would mean anyone will be able to contribute to the knowledge base with relative ease. In addition, it will have knowledge management properties built in, and if the knowledge acquisition bottleneck problem can be solved then it will have a huge impact. For one, the problems which prevent knowledge based AI from scaling could be resolved if this bottleneck is removed.
DARPA has attempted to solve the Knowledge Acquisition Bottleneck problem utilizing high performance knowledge bases (HPKBs) and Rapid Knowledge Formation, yet failed. Cyc has attempted to solve the same problem and has failed. The semantic web has yet to take off because this problem stands in the way. Will Tauchain succeed where these other attempts have failed? I think it is a strong possibility, which is why I'm excited about the implications should Tauchain successfully be built.
Lenat, D. B., Prakash, M., & Shepherd, M. (1985). CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI magazine, 6(4), 65.
Wagner, C. (2006). Breaking the Knowledge Acquisition Bottleneck Through Conversational Knowledge Management. Information Resources Management Journal, 19(1), 70-83.
Web 1. https://www.quora.com/What-is-knowledge-acquisition-bottleneck
Web 2. http://www.igi-global.com/dictionary/knowledge-acquisition-bottleneck/49991
Web 3: http://www.tauchain.org
Web 4: https://steemit.com/tauchain/@dana-edwards/how-to-become-a-stakeholder-in-agoras-and-indirectly-tauchain
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: What is the Knowledge Acquisition Bottleneck problem?
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.