Personal agents: What are expert systems? Do expert systems benefit from decentralization? By Dana Edwards. Posted on Steemit. March 28, 2017.
In my previous blog post, "The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems)", I discussed the first piece of a larger puzzle: knowledge representation and a shared knowledge base were both explained. The purpose of that post was to describe the concepts of knowledge representation and the knowledge base, but also to show why both are valuable for artificial intelligence. This article explains the concept of an expert system and then discusses some possible ideas for what can be built in a decentralized AI context.
The recipe for building an expert system
An expert system has two core components: 1) a knowledge base, and 2) an inference engine (semantic reasoner). An expert system is a computer system which emulates the decision-making capability of a human expert by reasoning about knowledge and applying rules. Implication, for example, is the rule behind if...then... (otherwise written as "if p then q"). In a computer programming language we would call this set of "if then" statements our [conditionals](https://en.wikipedia.org/wiki/Conditional_%28computer_programming%29). Conditionals are familiar to anyone who knows C, C++, Java, Python, or any typical programming language, and this basic structure comes from logic.
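As a minimal sketch, if-then conditionals can encode rules directly in Python; the weather facts, thresholds, and advice strings below are hypothetical, purely for illustration:

```python
# A rule of the form "if p then q", written as ordinary conditionals.
# The conditions and recommendations are made-up examples.
def recommend(temperature_c, raining):
    """Apply simple if-then rules to produce a recommendation."""
    if raining:                      # if p ...
        return "take an umbrella"    # ... then q
    elif temperature_c > 30:
        return "wear light clothing"
    else:
        return "no special advice"

print(recommend(20, True))   # -> take an umbrella
print(recommend(35, False))  # -> wear light clothing
```

Each branch is one "if p then q" rule; an expert system differs mainly in keeping such rules as data in a knowledge base rather than hard-coding them.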
We can recognize that conditionals are a set of rules which can be mapped onto a flow chart.
Expert systems are rule based AI
Just as if-then-else can become a structure of rules, expert systems are entirely rule-based.
An expert system which has a knowledge base to work with may rely on a goal tree.
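A goal tree can be sketched as backward chaining: to prove a goal, the system recursively proves the subgoals its rules name. The rules and facts below are hypothetical examples of my own, not a standard rule format:

```python
# Backward chaining over a tiny rule base: each rule maps a goal to the
# subgoals that establish it. Rules and facts are illustrative only.
RULES = {
    "can_fly": ["is_bird", "is_healthy"],  # can_fly if is_bird and is_healthy
    "is_bird": ["has_feathers"],           # is_bird if has_feathers
}
FACTS = {"has_feathers", "is_healthy"}

def prove(goal):
    """Prove a goal by recursively proving its subgoals (the goal tree)."""
    if goal in FACTS:
        return True
    subgoals = RULES.get(goal)
    if subgoals is None:
        return False
    return all(prove(g) for g in subgoals)

print(prove("can_fly"))  # -> True
```

The recursion traces out exactly the goal tree: "can_fly" branches into "is_bird" and "is_healthy", and "is_bird" branches into "has_feathers".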
Expert systems are fundamentally weak AI. They cannot be self-aware or conscious, as they are simply mechanical sets of rules applied according to logic over a knowledge base. Expert systems may nonetheless exhibit intelligent behavior, which is to say they are intelligent tools. This may be enough to achieve many goals, and you can have personal agents which behave intelligently using an expert system approach.
Trees, trees, and more trees
Now we know how to create an expert system built from a knowledge base and a reasoner. To understand what the future holds for decentralized AI I must briefly discuss the concept of trees. Trees can be infinite structures. Higher-order model checking, for those familiar with model checking, is a form of model checking which can work over infinite structures such as infinite trees via higher-order recursion schemes. Why is any of this important?
Program verification and program analysis become tractable once you recognize that any program can be represented as a tree. This matters for security guarantees and for correctness guarantees. If you would like to approach decentralization of AI then you will ultimately have to work with trees, and for that reason I discuss them here.
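Python's standard `ast` module makes the program-as-tree point concrete: parsing even a one-line program yields a tree that analysis tools can walk node by node.

```python
import ast

# Parse a tiny program into its abstract syntax tree.
source = "x = 1 + 2"
tree = ast.parse(source)

# The top-level node is a Module whose body holds an Assign statement.
print(type(tree).__name__)           # -> Module
print(type(tree.body[0]).__name__)   # -> Assign

# Walking the tree visits every node; this is the basis of program analysis.
node_names = [type(n).__name__ for n in ast.walk(tree)]
print("BinOp" in node_names)         # -> True (the 1 + 2 expression)
```

Static analyzers, verifiers, and compilers all operate on trees like this one rather than on raw program text.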
Decentralized knowledge base + distributed contribution via knowledge representation language
In order to build a decentralized AI it will be important to have a decentralized knowledge base. The main problem is growing the knowledge base large enough that an expert system can become smart. In a decentralized context, in theory anyone in the world can contribute to the collective decentralized knowledge base. Decentralization of the knowledge base would make it more resilient against an attack, a nuclear apocalypse, or similar scenarios like those which necessitated decentralization of the Internet. From a cybersecurity perspective, human knowledge is safest if decentralized.
Sensors are essentially everywhere and big data is essentially here, but the decentralized knowledge base doesn't exist. We have Google, which at best aims to be a centralized knowledge base. Google has AI, but it will at best be centralized. A decentralized AI based on expert systems can function similarly to what has already been described as the semantic web, but with some improvements.
Your own army of personal experts
If everything goes right in a decentralized context then each person will have access to intelligent agents. These agents will be able to reason over a knowledge base and act as an expert system. For very difficult tasks, computation resources could be rented and paid for via a token. Verified computation and model checking can allow many machines to compute on your behalf with minimized security risk, since you would have formal verification built in.
What is the conclusion here?
Expert systems can be built in a decentralized context. Decentralized AI is theoretically possible and likely to be built sooner or later. Decentralized AI can be safer than centralized AI depending on the use case, and it can also be much more efficient depending on the circumstances. For example, if computation can be sold on a market, then in theory if a million PC owners rent out their computation for a token you in essence have the cheapest supercomputer in the world. Will it be good at everything? Perhaps not, but for certain kinds of computation it will be the most cost effective.
Bots will become much more powerful and more capable, with an ability to be experts and make intelligent decisions. This will have both positive and negative consequences depending on the safeguards and governance capabilities in place. If there are no safeguards at all then this could be both a new frontier and present new dangers. At the same time, if there are safeguards and ethical governance then this could provide many new opportunities and possibly boost the economy in new ways. In fact, the ability to have this AI can improve not just reasoning ability but decision-making abilities too, and that can allow for moral augmentation along with improved governance.
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: Personal agents: What are expert systems? Do expert systems benefit from decentralization?
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we can list the five roles of knowledge representation, which reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it is substituting or acting in place of something. So if knowledge representation is a surrogate then it must be representing some original. There is of course the issue that the surrogate should be a completely accurate representation, but a completely accurate representation of an object can only come from the object itself. All other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this in context: if you make a copy of an audio recording, every copy you make is going to contain slightly more artifacts. Something similar happens with information sent through a wire, where if not properly amplified there will eventually be artifacts introduced by copying a transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world."
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
In other words, because selecting a representation means making a set of ontological commitments, our commitments determine our representation. An ontological commitment is a framework for how we will view the world, such as viewing the world through logic. If we choose to view the world through logic, through rule-based systems, then all of our knowledge about the world is also within that framework. We choose our representation technology and commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but intelligent reasoning is also recognized to draw on five fields: mathematical logic, of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive reasoning approaches. Deductive reasoning, according to some, is the basis of intelligent reasoning. If we want to explore an example of reasoning we can take the Socrates example:
Statement A: True? Y/N?
"All men are mortal"
Statement B: True? Y/N?
"Socrates is a man"
Statement C: True? Y/N?
"Socrates is a mortal"
If A is true, and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be resolved using symbol manipulation and knowledge representation. The symbol at play in this example would be implication.
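A minimal sketch of this syllogism as symbol manipulation, using a naive forward-chaining loop; the fact and rule encodings are my own illustrative assumptions, not a standard format:

```python
# Forward chaining on the Socrates example. Facts are (predicate, subject)
# pairs; the single rule encodes the implication "if X is a man, X is mortal".
facts = {("man", "socrates")}
rules = [(("man",), "mortal")]  # if (man, X) then (mortal, X)

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "socrates") in forward_chain(facts, rules))  # -> True
```

The engine never "understands" Socrates; it mechanically applies the implication symbol to reach statement C from statements A and B.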
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency, across all forms of computation, whether mechanical or natural in the sense of the sort of computation done by a biological entity, then we may think of knowledge representation as a medium for that computational efficiency. We think of money as a medium of exchange; if we think of the human brain as a type of computer which does human computation, then we may think of knowledge representation as the medium in which that computation is made efficient.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course knowledge representation is part of how we communicate with each other or with machines. Human beings use natural language to convey knowledge, and this natural language can include the use of vocabularies of words with agreed-upon meanings. This vocabulary of words may be found in various dictionaries, including Urban Dictionary, and we rely on these dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations as described in the examples above. In simpler terms, this knowledge base could be thought of as representing facts about the world in the form of structured and/or unstructured information which can be utilized by a computer system. An artificial intelligence can utilize a knowledge base to solve problems; typically this kind of artificial intelligence is called an expert system. In its simplest form the artificial intelligence will reason over this knowledge base through an inference engine, and through this it can do the sort of computations which are of great utility to problem solvers.
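As a minimal sketch, a knowledge base can be modeled as subject-predicate-object triples with a pattern query on top; all facts and names below are hypothetical:

```python
# A tiny knowledge base of (subject, predicate, object) triples,
# queried with None as a wildcard. All facts are made-up examples.
KB = [
    ("socrates", "is_a", "man"),
    ("man", "subclass_of", "mortal"),
    ("plato", "is_a", "man"),
]

def query(kb, subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern (None matches anything)."""
    return [
        (s, p, o) for (s, p, o) in kb
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    ]

# Who is a man?
print(query(KB, predicate="is_a", obj="man"))
# -> [('socrates', 'is_a', 'man'), ('plato', 'is_a', 'man')]
```

An inference engine is then just code that queries such triples and derives new ones; systems like DBpedia store knowledge in essentially this triple form at a much larger scale.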
When we think of Wikipedia we are thinking about an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia we can quickly see that one of them is the fact that it is centralized. We also have the problem that the knowledge stored on Wikipedia is not stored in a way which machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, it is not in its current form able to act effectively as a decentralized knowledge base. DBpedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDoS or similar attack can censor it.
Decentralized knowledge is important for the world and a decentralized knowledge base is critical for the development of a decentralized AI. If we are speaking about an expert system then the knowledge base would have to be as large as possible which means we may need to give the incentive for human beings to contribute and share their knowledge with this decentralized knowledge base. We also would have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate way for it to enter into the knowledge base to be used by potential AI.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI then we must have a way for human beings to convey knowledge to machines in a way which both can understand. The use of a knowledge representation language makes it possible for a human being to contribute to a knowledge base, and this ultimately allows machines to apply their inference engine capabilities to reason from that knowledge base. In a decentralized knowledge base the barrier to entry is low or non-existent, and any human being, or perhaps any living being or even robots, can contribute to this shared resource, while at the same time both humans and machines gain utility from it. An artificial intelligence which functions like an expert system can make use of an extremely large knowledge base to solve complex problems, and a decentralized knowledge base combined with open and decentralized access to this artificial intelligence can benefit humanity and life on earth in general if used appropriately.
Discussion of example projects.
One of the well-known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau there is a special simple knowledge representation language under development which resembles simplified controlled English. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
Unfortunately upon reading the Lunyr whitepaper and following their public materials I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle concurrency which probably would be necessary for doing AI. I also don't see how Ethereum would be able to do it securely with the current design although I remain optimistic about Casper. The lack of code on Github, the lack of references to their research, does not allow me to completely analyze their approach. I can see based on the fact that they are talking about a decentralized knowledge base that their approach will require more than the magic of the market combined with pretty marketing. They will require a knowledge representation language, they will require a true decentralized knowledge base built into IPFS. This true decentralized knowledge base will have to scale with IPFS and through this maybe they can achieve something but without a clear plan of action I would have to say that today I'm not confident in their approach or in Ethereum's ability to handle doing it efficiently.
Fuente / Source: Original post written by Dana Edwards. Published on Steemit: The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems).
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.