The Paradigm of Social Dispersed Computing and the Utility of Agoras. By Dana Edwards. Posted on Steemit. October 12, 2018.
Social Dispersed Computing
What is social dispersed computing? It is an edge-oriented computing paradigm which goes beyond cloud and fog computing. To understand social dispersed computing we first have to discuss dispersed computing and how it differs from the previous paradigm of cloud and fog computing. The current trend toward decentralized networks, which we first saw with peer to peer technologies such as Napster, Limewire, BitTorrent, and later with Bitcoin, has brought us an opportunity to conceive of new paradigms. The original model most people are familiar with is the client server model, which was very much limited in that the server was always vulnerable to DDoS attack. The client server model has never been and could likely never be censorship resistant.
In the client server model the server could simply shut down, as was the case with Bitconnect, or it could be raided. The server could also be shut down by hackers who simply flood the site with requests. From the problems the client server model presented we discovered the utility of the peer to peer model. The peer to peer model was all about censorship resistance and promoted a network with no single point of failure (single point of attack) which could result in the shutdown of access points to the information. Among the first applications of these peer to peer networks were file sharing networks and anonymity networks such as Freenet and Tor. This eventually evolved into Bitcoin, which ultimately led to the development of Steem.
In dispersed computing a concept is introduced called "Networked Computation Points". An NCP can execute a function in support of user applications. To elaborate further, consider the following.
Consider that every component in a network is a node. Now consider that every component node is an NCP in that it can execute some function to support some user application. If we think, for example, of a blockchain, then mining fits into this category because a miner is both a node in the network and can execute a function in support of Bitcoin transactions. Why is any of this important? Parallelism is something we gain from dispersed computing, and note that it is distinct from concurrent computing. When we rely on parallelism we reap performance benefits when executing code by breaking it up into many small tasks which can be performed across many CPUs.
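As a minimal illustration of that point (plain Python, not anything from EOS or Bitcoin; the function names here are invented for the sketch), the code below splits one job into many small independent tasks and runs them across worker processes:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    """CPU-bound work on one small task: here, summing squares."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4, chunk_size=1000):
    # Break the job into many small independent tasks...
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # ...and execute them across multiple CPUs, like NCPs in a network.
    with Pool(workers) as pool:
        partials = pool.map(process_chunk, chunks)
    return sum(partials)

if __name__ == "__main__":
    data = list(range(10_000))
    assert parallel_sum_of_squares(data) == sum(x * x for x in data)
```

The essential property is that each chunk is independent, so adding more workers (or, in a dispersed network, more NCPs) scales throughput without changing the result.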
EOS attempts to leverage parallelism specifically to enable its performance boost. The benefit is speed and flexibility. Think also of the hardware side, with FPGAs, which can perform tasks similar to a microprocessor. Unlike ASICs, FPGAs would provide generalized, flexible parallel computing. Consider that, just as with mining, a company could add more and more FPGAs to scale an application as needed.
To understand Social Dispersed Computing we have to note that there are other users at any given time. The other users in the network participate to provide resources to the network for the benefit of other users while using the network themselves. So in Steem, for example, as you add content you are adding value to Steem in a direct way, but also in a dynamic way. The resources on Steem can also adapt dynamically to demand, provided that the incentive mechanism (Resource Credits) works as intended.
EOS as an example of a DOSC (Dispersed Operating System Computer)
Because EOS seems to be the first to approach this holistically, I will give credit to the EOS network for pioneering dispersed computing in the crypto space. All resources are representable by tokenization in a dispersed computing network. EOS and even Steem have this. Steem has it in the form of "Resource Credits", which represent the available resources on the Steem network. If more resources are needed, then theoretically the Resource Credits could act as an incentive to provide those resources to the Steem network. This provides a permanent price floor for Steem, represented as the amount of Steem which would have to be purchased in order to have enough resources to run Steem (if I have the correct theoretical understanding). This would put Steem on a trajectory toward dispersed computing.
Operating systems typically sit between the hardware and software as a sort of abstraction layer. This traditionally has been valuable because programmers don't have to speak directly to the hardware and hardware designers don't have to communicate their designs directly to programmers. In essence, the operating system in the traditional model is centralized and made by a company such as Microsoft or Apple. This centralized operating system typically runs on a device or set of devices and provides some standard services such as email, a web browser, and maybe even a Bitcoin wallet.
Typically the most valuable or highest-utility software people consider on a computer is the operating system. On our smartphones this is Android and on PCs it may be Windows or Linux. This is of course turned on its head under the new paradigm of dispersed computing and the new conceptual model of the "decentralized" operating system. EOS is the first to attempt a decentralized operating system using current blockchain technology, but upcoming technology easily eclipses what EOS can do. Tauchain is a technology which, if successful, will leave EOS in the stone age in terms of what it will be able to do. EOS, while ambitious, has also had its problems with regard to its voting mechanisms and the ease with which collusion can take place.
To better understand how decentralized operating systems emerge, consider OSKit, TML, and the exokernel:
If we look at OSKit we see that it provides the tools necessary for operating system development. If we look at Tauchain we realize that it is strategically the most important tool for the development of a decentralized operating system, provided in the form of TML (a partial evaluator). If we think of the primary tool necessary to develop from, we initially have to start with a compiler. A compiler generator is closer to what TML allows with its partial evaluator. More specifically, it is the Futamura projections which can provide the ability to generate compilers.
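To make the Futamura idea concrete, here is a minimal hand-rolled sketch in Python. It is not TML; the toy stack language and the `interpret`/`specialize` names are invented for illustration. Specializing an interpreter to a fixed program (the first Futamura projection) yields a "compiled" version of that program:

```python
# A toy interpreter for a tiny arithmetic language.
def interpret(program, x):
    acc = x
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

# First Futamura projection (sketch): fixing the program argument of the
# interpreter and emitting residual code yields a compiled program.
# A real partial evaluator derives this automatically; here we hand-write
# the specializer by generating Python source.
def specialize(program):
    body = ["def compiled(x):", "    acc = x"]
    for op, arg in program:
        body.append(f"    acc {'+' if op == 'add' else '*'}= {arg}")
    body.append("    return acc")
    namespace = {}
    exec("\n".join(body), namespace)
    return namespace["compiled"]

prog = [("add", 2), ("mul", 3)]          # computes (x + 2) * 3
compiled = specialize(prog)
assert compiled(4) == interpret(prog, 4) == 18
```

The second and third projections (specializing the specializer) are what turn a partial evaluator into a compiler generator, which is the capability attributed to TML above.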
If we look at the next most important part of an operating system it is typically the kernel. Let's have a look at what an exokernel is:
Operating systems generally present hardware resources to applications through high-level abstractions such as (virtual) file systems. The idea behind exokernels is to force as few abstractions as possible on application developers, enabling them to make as many decisions as possible about hardware abstractions. Exokernels are tiny, since functionality is limited to ensuring protection and multiplexing of resources, which is considerably simpler than conventional microkernels' implementation of message passing and monolithic kernels' implementation of high-level abstractions.
[Figure: exokernel diagram. By Thorben Bochenek, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons]
From this at minimum we can see that an exokernel is a more efficient and direct way for programmers to communicate with hardware. To be more specific, "programs" communicate with hardware directly by way of an exokernel. We know the most basic function of a kernel in an operating system is the management of resources. We know in a decentralized context that tokenization allows for incentives for management of resources. When we combine them we get kernel+tokenization to produce an elementary foundation of an operating system. In a distributed context we could apply a decentralized operating system in such a way that the network could be treated as a unified computer.
Abstraction is still important by the way. In an operating system we know the object oriented way of abstraction. Typically the programmer works with the concept of objects. In an "Application Operating Environment" an "Application Object" can be another useful abstraction. Abstraction can of course be taken further but that is for another blog post.
The Utility of Agoras
Agoras+TML is interesting. Agoras is the resource management component of what may evolve into the Tau Operating System. This Tau Operating System or TOS is something which would be vastly superior to EOS or anything else out there because of the unique abilities of Agoras. The main abilities have been announced on the website such as the knowledge exchange (knowledge market) where humans and machines alike can contribute knowledge to the network in exchange for the token reward. We also know that Agoras will have a more direct resource contribution incentive property in the form of the AGRS token so as to facilitate the sale or trade of storage, bandwidth or computation resources.
The possible (likely?) emergence of the Tau Operating System
In order for Tauchain to evolve into a Dispersed Operating System Computer it will need an equivalent to a kernel: some means of allowing whoever is responsible for the Tauchain network to control and manage the resources of that network. If, for example, the users decide, then by way of discussion there would be a formal specification or model of a future iteration of the Tauchain network. This, according to current documents, is what would produce the requirements for the Beta version of the network to apply program synthesis. Program synthesis could in essence result in a kernel, and from there the components of a Tau Operating System could be synthesized in the same way. Just remember that all that I write is purely speculative, as we have no way to predict with certainty the direction the community will take during the alpha.
For some time the Tau community has been following the latest developments in this project: progress from Ohad, a brand refresh, and the delisting from Bittrex. Over the past month a great deal of work has gone into renewing the project's old image with a new website, where we will find a summary of what Tau is.
It is no easy task to summarize something that does not yet exist; it is an exercise in optimism, imagination, and even faith. In case you did not know, nothing like it is running today.
I am going to analyze what we find on the new Tau website.
What is Tau?
The first thing we encounter is an illustration depicting Tau between two worlds (the natural world and the evolved world), with a human figure on the left and society on the right. Ohad proposes a Human-Machine-Human paradigm: understanding among ourselves with the help of machines in order to scale conversations.
Scrolling down, we find a summary accompanied by a kind of root system symbolizing the neural connections this knowledge network could have. In this brief explanation we see a future of scalable consensus that takes knowledge to another level.
Next we find a summary of each area:
-The Internet of Languages
-Global Knowledge Base
-The Agoras Smart Knowledge Economy
-Accelerating and Automating Collaboration
-Real-Time Social Choice
Thanks to these new explanations for each area, we can understand Tau's potential. TML (the heart of everything) will define all languages, so that anyone can express themselves in whatever way they know best and TML will take care of converting it into another language.
Logical predicates? That sounds like the class you skipped in school! Further on we can see what they mean by this in the animated Tau simulation, where users type queries in the form ?tema01.
For the first time in human history, adding more people to a discussion increases its productivity, instead of reducing it as normally happens.
When Tau is at 100% of its capabilities, working groups (of hundreds or thousands) will be able to reach consensus in real time and ask for that consensus to be rendered in any TML language. This part is extremely important, because we are not talking only about human languages; we are talking about translations into machine languages such as programming, into the languages of legal contracts, and into artistic languages such as music or images.
The key to avoiding stagnation is self-amendment: users will describe, in natural language, how they want the rules to be. Tau will follow your conversations in order to understand, with everyone's consensus, when its rules should be changed.
Tau Simulation
We find a simulation video showing a future Tau interface in action, a blend of social network and discussion app. An excellent decision by the team, since it helps explain what Tau really is. For the first time we can "touch" it. In the video we see a user browse several debate proposals. On choosing one, the user enters a kind of working meeting where an important matter is being discussed. Users give their opinions, support those of others, and use tools such as auto-comments and summaries. Something unheard of and fascinating!
Roadmap
A roadmap appears on this page, which will surely be a great surprise to the project's followers.
All the phases of the project appear with a description for each one: TML, Alpha, Beta, Tau, Agoras. But, surprise! We have dates and imminent commitments. For example, according to the roadmap, within the third fortnight we will have a TML demo to try out, friends!
The hope is to complete TML and the Tau Alpha at the beginning of 2019, and to finish Agoras by early 2020.
Agoras Page
I repeat that it is not at all easy to imagine something that does not exist, and the same goes for Agoras.
In my humble opinion, Agoras will be a tool that puts every player on the same level. It aims to create a fair economy for every user.
It is more likely that the biggest players will need to rent millions of computers and will be willing to pay for the unique knowledge held by the ordinary user, while smaller participants can simply rent out their idle computing power. We can therefore expect money to flow in the right direction: from the rich to the poor.
A search engine will be the foundation, and you will be able to make queries and provide the query service at the same time. It will be difficult for an ordinary user to make more queries than they serve, so you will earn more than you spend. The big powers, on the other hand, will consume more queries than they generate; in this way the powerful sustain the poor.
In addition, a presale has already taken place, and the remainder of the 42 million total coins is expected to be sold in a future sale.
After reading the Agoras page we can get an idea of what the illustration in the header represents: an economy based on human knowledge that expands freely, with the possibility of germination and evolution.
Team and Blog
Two more sections give us access to the team and the official blog.
The blog holds Ohad's latest posts and serves as a manual for anyone who wants to learn more about this ambitious project. Fortunately it is available in English, Chinese, and Spanish.
Our goal is to build a better society where knowledge can be created more efficiently and distributed better to solve complex problems. With the help of machines, we can solve the problem of scaling knowledge and opinions. No more forgotten or discarded knowledge, and no more restrictions on the discovery and transfer of knowledge.
This team has good intentions and a high professional profile; it is made up of all kinds of people from different nationalities.
I know of no project similar to Tau. I would never have imagined being able to see a connection to the paradigm of uniting all human knowledge in a single core, one capable of growing and publicly accessible. We are not looking at an Artificial Intelligence; we are looking at the next evolution of knowledge.
Original source: post written by CapitanArt and published on Busy on July 9, 2018.
The vision of Tau-Chain, a blockchain based self-amending platform designed to scale human collaboration and knowledge building. By Isar Flis. Posted on Steemit. January 8, 2018.
The Crypto-Currency Market
With the fluctuation in the price of Bitcoin, there are more voices claiming that the crypto-currency market is a bubble, warning investors about the risks of investing and possibly losing their funds. One of the claims is that virtual coins have no real value. However, by carefully studying this market, the potential investor will discover that some projects include technology, innovation, true vision and a strong community, thus creating real value like that of other successful startup companies.
Today, it is difficult to predict which coin will secure a place among the top currencies on Coinmarketcap. There are a large number of projects and buzz-words, used in fancy websites and white-papers, which make it challenging to extract the relevant information and make educated investments. In addition, there are projects that work "under-the-radar" and are very technical to comprehend, discouraging potential investors.
I would like to discuss one of these technical projects that works under-the-radar, without a fancy website or extensive marketing campaign but with brilliant innovation and fast-growing community. The name of the project is Tau-Chain (Agoras tokens on Coinmarketcap), developed by Ohad Asor.
Tau is a collaboratively self-amending program designed to scale human collaboration and knowledge building. To further clarify the explanation, think about a platform that can develop any computer program the user desires, based solely on discussions with his or her team about the program’s specification and development. The use of such a platform can change not only the crypto-ecosystem, but all branches of science.
Tau’s vision has a long way to go. However, Ohad has developed a detailed roadmap to achieve his vision. Tau will be developed in four stages, as follows:
Tau Meta Language (TML): TML is the base language that will enable all users to interact with each other, no matter what computer language they speak. Think about it as the technology behind Google Translate, but for computer languages, or as Ohad calls it: “the Internet of Languages”.
Alpha: Alpha is a social platform that promotes discussions between infinite numbers of users. Today, an effective conversation cannot be held when too many people take part in the decision-making process (that is why democracy was created). However, Alpha will be able to scale these discussions and detect logical points of consensus between users, thus enabling better knowledge sharing and construction.
Beta: Beta will advance Alpha to enable the development of computer programs, based on user discussions in the platform. To make this more tangible, think of Wix.com where anyone can easily develop a website, even without the technical expertise. With Beta, the code for any computer program will be developed based on specific instructions that the user provides.
Tau: Tau is where blockchain is introduced, thereby creating a decentralized platform (the Tau-Chain), compared to the centralized Beta. Tau will be self-amending and will be able to deduce knowledge based on the information submitted by its users. In its final stage, Tau will amplify the creation of knowledge for its users, advancing current human-knowledge, research and development in different disciplines, such as physics, mathematics and computer science.
The reasoning behind designing the roadmap in four stages is that each stage can support the advancement of the next one. This year we expect the development of the first two stages, TML and Alpha, to be completed. Using Alpha’s discussion platform, an infinite number of developers can join the project to build Beta, expediting its go-to-market date. After Beta is developed, it will only be a matter of time until Tau is completed as all technical challenges will be resolved using Beta.
The legal entity behind this operation is called "IDNI" (Intelligent Decentralized Networks Initiatives), which is composed of Tau's development team and support units.
So, what is Agoras?
While Tau creates a true knowledge society, Agoras is about creating true monetary knowledge by powering the ecosystem built via Tau. Agoras will be used to execute the applications of Tau: Zennet (a computational resource market), a derivatives trading platform, and further developments to be built as part of Tau's ecosystem.
There are 42 million Agoras in total. Most of the tokens were sold by Ohad during 2017. The sold tokens, named IDNI Agoras, represent the future Agoras coins that holders will receive upon the completion of Tau (the fourth stage), where the blockchain is introduced.
The current price of one IDNI Agoras is around $2 (traded on Bittrex), and it has shown steady growth throughout the development of the project. The initial code that was released as a proof of concept strengthened investors' confidence in Tau, compared to competing projects.
I foresee huge potential for this project, and urge you to read and learn about this project and its relevant applications. If you find this vision interesting, I recommend that you follow the project on Telegram, Facebook and Reddit, or read Ohad’s blog for further information.
Disclaimer: I have invested in Agoras. Please do your own research before investing in Agoras and/or any other coin or project. Please do not consider this article to constitute financial advice.
The Power of Tau - Scaling the Creation of Knowledge. By Trafalgar. Posted on Steemit. December 31, 2017.
Ohad Asor, creator of Tau Chain/Agoras, has recently published the long awaited blog post detailing his vision for what very likely is the most ambitious project in the crypto space: Tau.
Tau will accelerate human endeavors by overcoming long ingrained limitations in our collaborative processes; limitations which we rarely even question.
The Problem of Social Governance
Take social governance, for example. As individuals, we have opinions over a wide variety of social issues. Perhaps you feel that the speed limit on certain roads is too high, or that programming should be a compulsory subject at public schools, or that everyone would benefit if cryptocurrencies were officially recognized and endorsed by the state.
However, you have no idea how to get these concerns across to the general public. I mean, you could try writing a letter to your local representative or signing a petition, but ultimately that's unlikely to gain much traction. Meanwhile, the very same issues that seem to have divided the nation over the past decade remain at the forefront of our political debate. Immigration, climate change, abortion, gun control, etc. are all important issues of course, but very little progress has been made considering the amount of time, resources and attention that have been devoted to them.
So the problem with traditional forms of social governance, such as democratic voting, is apparent: on the one hand it has difficulty identifying and addressing the wide range of opinions different people hold, on the other hand, even with respect to the small number of issues that do end up bubbling up to the surface, it isn't particularly efficient at detecting consensus.
The central cause of this problem is that current modes of discussion are not scalable. There are inherent limitations in the way we're able to communicate our views to each other; namely, the human ability to comprehend and organize information is the main bottleneck. We cannot possibly follow multiple conversations at once, or recall everyone's propositions once there are more than a handful of people in the mix. This is why most collaborative decision-making bodies in practice are generally quite small: the President's cabinet, the Supreme Court Justices, the board of directors of a Fortune 100 company, etc.; you just can't have a productive discussion with 50 people. Our entire civilization is structured around this very limitation: discussions don't scale.
Scaling Collaborative Discussions Under Tau
Imagine if we could overcome this limitation; what would it mean for social governance? By using a self-defining, decidable logic, the Tau network can easily keep track of every user's propositions and detect consensus automatically. Note that making a proposition is exactly the same as voting for that very proposition: when you propose 'dogs should always be on a leash in public unless in a park' you are in effect casting a vote for that proposition. This way, countless issues, regardless of how technical or niche, can be assessed through the network concurrently, and social consensus can be detected on the fly. The Tau network can scale social governance by overcoming one of the greatest limitations in the human communication of ideas: it delegates the task of logically making sense of everybody's propositions to the computer. A simple use case of this will be the rules of the Tau network itself: through a self-defining logic, Tau is able to detect consensus among its users from block to block, altering its own rules to conform to the choices of the user base.
The benefits of scaling discussions are not limited to a more efficient form of social governance. Logic isn't merely about detecting surface-level consensus; the network can easily form further deductions from everyone's propositions. If one states 'all men are mortal' and 'Socrates is a man', one can deduce that 'Socrates is mortal.' But deductions can be very deep and non-trivial. Imagine if we had a group of 1000 mathematicians all inputting their mathematical insight as propositions. Tau can rapidly detect who agrees with whom on what, and deduce every logical consequence of their combined wisdom, in effect arriving at new truths and insights. In other words, Tau greatly accelerates the production of new knowledge. This will, of course, also work if you have physicists, doctors, engineers, computer scientists, indeed experts in every field working together on the platform. By scaling collaborative discussions in a logical network, Tau is able to scale the creation of knowledge.
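The Socrates example can be sketched as naive forward chaining over unary predicates. This is a toy illustration of mechanical deduction in general, not of Tau's actual logic (which the post describes only at a high level); all names here are invented:

```python
def forward_chain(facts, rules):
    """Naive forward chaining over unary predicates.

    facts: set of (predicate, individual) pairs, e.g. ("man", "socrates")
    rules: list of (premise, conclusion) predicate pairs, read as
           "for all X: premise(X) implies conclusion(X)".
    Applies rules repeatedly until no new facts appear (a fixed point).
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, individual in list(facts):
                if pred == premise and (conclusion, individual) not in facts:
                    facts.add((conclusion, individual))
                    changed = True
    return facts

facts = {("man", "socrates")}
rules = [("man", "mortal")]          # all men are mortal
derived = forward_chain(facts, rules)
assert ("mortal", "socrates") in derived
```

With many users' propositions as facts and rules, the same fixed-point loop mechanically surfaces every consequence of their combined statements, which is the kind of deduction the paragraph above describes.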
When Tau comes into effect, any company, government, and indeed any organization not using this new network will be rendered obsolete. Tau aims to become an indispensable technology.
And this is only the alpha of Tau.
I will talk about the beta in a future post. The beta will revolve around not just the scaling of discussions and consensus, but the automation and execution of code based on the results of those discussions. For more information on code synthesis and more, please read Ohad's blog. Also, do check out my introduction to Tau here if you missed it.
You can invest in Tau through buying Agoras tokens on Bittrex.
I am not affiliated or paid by the project. These represent my own subjective views. Tau/Agoras is the only other crypto project apart from Steem in which I see an extraordinary future, and I am merely sharing that with fellow Steemians here.
Ohad Asor's New Tau Blog
IRC Chat: Where you may ask Ohad himself technical questions
Tau Chinese QQ Group: 203884141
The liquid paradigm, feedback loops, the virtuous cycle and Tauchain. By Dana Edwards. Posted on Steemit. December 31, 2017.
What do I mean by the concept of a "liquid platform"? This is merely a re-articulation of the concept of self-amendment and self-definition. In other words, it is very much like an autopoietic design. Bruce Lee once said to "be like water", because water can adapt to any environment it is placed in by taking the form of its container.
So by liquid paradigm I mean that the core feature of true next generation platform design is going to be focused on maximum adaptability.
Feedback loops and the virtuous cycle
How can we have a platform which promotes continuous self-improvement? If you have a platform with no hard-coded "self", then even the design of the platform is under constant negotiation and creation. This is key because it means Tauchain will be able to adapt more quickly than all competing platforms; more quickly than Tezos, because Tezos merely provides self-amendment but lacks the virtuous cycle, the meta language, and so on.
The Tau Meta Language allows for self definition at the level of languages. This means even the communication mechanism between humans and machines can be updated continuously. This continuous updating is the key design breakthrough of Tauchain because it means Tauchain will always be state of the art in any area. Think of a platform like Wikipedia where anyone can update any part of it in real time continuously so that every part of it is always the state of the art.
Starting at languages, a feedback loop can be created between humans and intelligent machines. Humans must make decisions on how to design Tau. These design decisions benefit from the virtuous cycle: the feedback loop between humans and machines allows the decision-making ability itself to be upgraded. This could even allow humans to transcend traditional human capabilities by relying on intelligent machines to assist in design, which means better future designs, which means better decision making, which means better future designs again. This is the "virtuous cycle", created by a feedback loop running from humans to machines to humans and back, over and over. Humans improve the quality of the machines by feeding in knowledge and new algorithms, just enough for the machines to become intelligent enough to help the humans help the machines even more efficiently in the next iteration of Tauchain.
Humans and machines will seek more good and less bad for the formal specification of Tau itself. Good and bad designs will be defined collaboratively by the human participants by way of intelligent discussion. As discussion scales, bigger crowds mean more human minds involved, which means improved design, which leads eventually to a better and perhaps wiser Tau, which would lead to wiser, even more intelligent discussions, which can lead to an improved formal specification, and to a better Tau. So that is a loop. It is also a loop between improving Tau, improving society, improving Tau, improving society.
Something Revolutionary In the Crypto Space.
The overwhelming majority of new crypto projects out there fall into 3 main categories:
Now the trillion dollar question is this: is just having a currency or shoving a Turing Complete programming language into the blockchain to allow for smart contracts truly the best use of this decentralized innovation? Ohad Asor, creator and lead developer of Tau, does not think so.
What Is Tau?
Before I start I have to make a confession: I don't truly understand Tau. But I feel that I don't understand it slightly less than people who don't know about it at all, so I'll have a go at explaining it.
Tau is a platform that is designed to scale human collaboration and knowledge building.
Almost every significant piece of technology to date (that isn't about accelerating physical labor) has been primarily focused on the disseminating information or data. The wheel, roads, telephones, the internet are all indispensable achievements that have served to aid getting information from A to B.
But the real value isn't in the data itself, it's from the organization of the information within that data into useful knowledge. While the mere distribution of information is an important step to scaling human progress, it's also only part of the picture. The next step has typically been up to us, the human actors, to use our little brains to distill that information manually until we produce knowledge;
Unless you count Netflix's 'AI' recommending 'The Human Centipede' after your toddler has just watched 'A Bug's Life' as successful knowledge discovery made by a machine, Tau is the first piece of serious technology that aims to automate not only the collection of information, but also the production of knowledge. Tau is about the industrialization of knowledge creation: taking some of the burden of logical reasoning from us humans and giving it to the machine.
What Can Tau Do?
Ohad has spent years researching and developing Tau. The design is centered around creating a self-defining, decidable logic expressible in PSPACE (which is mathematically shown to be the most expressive any self-defining and decidable language can be), which will act as a metalanguage for all programming languages defined under Tau.
A trivial example of what this can directly lead to is secure smart contracts. Smart contracts operating under Tau can never give rise to something like the DAO hack: a decidable programming language means one can anticipate the entire spectrum of possible consequences of the code before running it, allowing us to avoid anything unintended. But reliable and secure smart contracts are only a tiny fraction of what the platform can truly offer.
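Tau's actual mechanism is far more general, but here is a toy illustration of why decidability matters for contracts. If a contract language is restricted so that its state space is finite, every possible consequence can be enumerated and checked before deployment. A minimal sketch in plain Python, with an entirely made-up deposit/withdraw contract:

```python
from collections import deque

# Toy contract modeled as a finite-state machine: a state is (phase, balance).
# Because the state space is finite, we can enumerate every reachable state
# *before* running the contract and verify a safety property up front.

def transitions(state):
    phase, balance = state
    if phase == "open":
        yield ("open", balance + 1)    # deposit one unit
        yield ("closed", balance)      # owner closes the contract
    elif phase == "closed" and balance > 0:
        yield ("closed", balance - 1)  # withdraw one unit

def reachable_states(start, max_balance=3):
    """Breadth-first enumeration of every state reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in transitions(queue.popleft()):
            if nxt[1] <= max_balance and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable_states(("open", 0))
# Safety check, performed before any execution: the balance can never
# go negative in any reachable state.
assert all(balance >= 0 for _, balance in states)
```

The point of the sketch is only that exhaustive pre-execution analysis is possible when the language is restricted enough to be decidable; an undecidable (Turing-complete) contract language admits no such general check.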
The power of Tau's design will allow it to boast some truly wondrous features including:
Ohad has yet to fully explain how this will be achieved, but by far the most difficult part is creating the initial decidable, self-defining logic system that serves as a metalanguage. Many had their doubts, but yesterday Ohad announced that the first and most difficult step towards this end has been achieved. The code he has written is a working version of the Tau Meta Language, which correctly computed a transitive closure over a graph. This is a proof of concept of the great things to come!
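Transitive closure is the canonical example for Datalog-style logic languages, and TML's own syntax is nothing like Python; the snippet below is only a sketch of what "computing a transitive closure" means, namely iterating the rule edge(x,z) :- edge(x,y), edge(y,z) to a least fixpoint:

```python
def transitive_closure(edges):
    """Repeatedly apply the rule edge(x,z) :- edge(x,y), edge(y,z)
    until no new facts are derived (a least-fixpoint computation)."""
    closure = set(edges)
    while True:
        new = {(x, z)
               for (x, y1) in closure
               for (y2, z) in closure if y1 == y2}
        if new <= closure:          # fixpoint reached: nothing new derived
            return closure
        closure |= new

# A chain 1 -> 2 -> 3 -> 4 additionally yields (1,3), (2,4), and (1,4).
print(transitive_closure({(1, 2), (2, 3), (3, 4)}))
```

The computation is guaranteed to terminate because the set of derivable facts over a finite set of nodes is finite and grows monotonically, which is exactly the kind of guarantee a decidable language can give.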
Now that the initial code is released, Ohad is working on a set of explanations about Tau which will outline its features, and how it'll be able to achieve them, in more detail. Tau is notoriously difficult to explain, but it's definitely worth the effort to understand it. I'll keep you updated when his explanations are released.
Who Is The Lead Developer Ohad Asor?
Ohad Asor is a programmer, computer scientist, mathematician and logician from Israel. He attended university at the age of 13 and has extensive experience (30+ years) in programming and mathematics.
Most people know me as the clown on here who just writes jokes along the lines of taking his mom to the prom after his cousin rejected him or some shit, but I sat my university entrance exams at 16 and scored in the top 0.5% of Australian Tertiary Admissions Rank and took a prestigious course at a well known university. I only bring this up to show that I've had no shortage of dealings with what ordinarily would be considered to be extremely intelligent people, but Ohad is on a completely different level.
Ohad Asor is, quite frankly, the most intelligent and knowledgeable person with whom I've ever interacted. There are many geniuses and child prodigies out there, but Ohad has spent virtually every minute of his waking moments studying up until this point in his life, and he likely has an IQ of over 5 standard deviations above the mean to begin with. I have spoken to him and followed his project over the past 8 months, and my assessment and admiration of his abilities has only increased over this time period.
Here is a short video of him explaining the old design of Tau and some of its features. The information is dated as the new design is far superior, but these features remain.
English is Ohad's second language - His native language is C.
How Do I Invest In Tau?
Tau itself has no tokens but Ohad is also building Agoras, the first automated marketplace over the Tau collaborative platform. Agoras tokens are currently traded on Bittrex. It has one of the fairest distributions in the cryptosphere and Ohad is only reserving 3% of the tokens for himself. None of that 20% for the founders, 10% for the developers, 20% for the foundation, 15% for the founders' penis enlargement fund bullshit.
Agoras has made considerable gains over the last few weeks but its total market cap is still under 100 million at the time of writing, which, to me, represents an incredible opportunity for something potentially revolutionary. If we woke up tomorrow without Bitcoin, things would more or less continue as they do, but if we woke up tomorrow without electricity, the world would be an entirely different place. Tau aims to be the latter: a truly indispensable piece of technology, a status that no crypto project has yet reached.
This article isn't to be taken as investment advice any more than it is to be construed as advice on how to get out of the friend zone without resorting to chloroform. I'm not affiliated nor paid by the Tau team in any way. I have not made a single crypto recommendation in my 8 months of being here until now. I just wanted to share something that I think has immense potential to be truly revolutionary, and it also happens to be the only other crypto investment I hold other than Steem.
Feel free to ask some questions after and I'll try my best to answer them.
Special thanks to @dana-edwards and the Steemit platform for allowing me to discover this project
Tau QQ Group Number: 203884141
IRC for technical questions only, Ohad will generally reply within a day
The value of Knowledge Representation and the Decentralized Knowledge Base for Artificial Intelligence (expert systems). By Dana Edwards. Posted on Steemit. March 27, 2017.
This article contains an explanation of two core concepts for creating decentralized artificial intelligence and also discusses some projects which are attempting to bring these concepts into practical reality. The first of these concepts is called knowledge representation. The second of these concepts is called a knowledge base. Human beings contribute to a knowledge base using a knowledge representation language. Reasoning over this knowledge base is possible and artificial intelligence utilizing this knowledge base is also possible.
Knowledge representation defined by its roles.
To define knowledge representation we must list the five roles of knowledge representation, which reveal what it does.
1. Knowledge representation is a surrogate
2. Knowledge representation is a set of ontological commitments
3. Knowledge representation is a fragmentary theory of intelligent reasoning
4. Knowledge representation is a medium for efficient computation
5. Knowledge representation is a medium of human expression
Part 1: Knowledge Representation is a Surrogate
By surrogate we mean it substitutes for, or acts in place of, something. So if knowledge representation is a surrogate then it must be representing some original. There is of course the issue that a completely accurate representation of an object can only come from the object itself; all other representations are inaccurate, as they inevitably contain simplifying assumptions and possibly artifacts. To put this into context: if you make a copy of an audio recording, every copy is going to contain slightly more artifacts than the last. Something similar happens with information sent through a wire, where, if the signal is not properly amplified, artifacts eventually arise from copying the transmission.
"Two important consequences follow from the inevitability of imperfect surrogates. One consequence is that in describing the natural world, we must inevitably lie, by omission at least. At a minimum we must omit some of the effectively limitless complexity of the natural world; our descriptions may in addition introduce artifacts not present in the world."
Part 2: Knowledge Representation is a Set of Ontological Commitments.
"If, as we have argued, all representations are imperfect approximations to reality, each approximation attending to some things and ignoring others, then in selecting any representation we are in the very same act unavoidably making a set of decisions about how and what to see in the world. That is, selecting a representation means making a set of ontological commitments. (2) The commitments are in effect a strong pair of glasses that determine what we can see, bringing some part of the world into sharp focus, at the expense of blurring other parts."
An ontological commitment is a framework for how we will view the world, such as viewing it through logic. Selecting a representation therefore means making a set of ontological commitments: if we choose to view the world through logic and rule-based systems, then all of our knowledge about the world sits within that framework. We choose our representation technology and thereby commit to a particular view of the world.
Part 3: Knowledge Representation is a Fragmentary Theory of Intelligent Reasoning.
Mathematical logic seems to provide a basis for some of intelligent reasoning, but intelligent reasoning is also recognized to draw on five fields: mathematical logic of course, but also psychology, biology, statistics, and economics. If we go with mathematical logic then we have deductive and inductive approaches to reasoning; deductive reasoning, according to some, is the basis of intelligent reasoning. To explore an example of reasoning we can take the classic Socrates syllogism:
Statement A (true? Y/N): "All men are mortal"
Statement B (true? Y/N): "Socrates is a man"
Statement C (true? Y/N): "Socrates is mortal"
If A is true and B is also true, then C must be true. This is an example of basic logical reasoning which can easily be resolved using symbol manipulation and knowledge representation; the symbol at play in this example is implication.
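A minimal forward-chaining sketch of this resolution (the representation and the engine are made up for illustration, not any particular system): facts are stored as symbols, and a rule fires whenever all of its premises are present, adding its conclusion.

```python
# Facts are opaque symbols; a rule is (set_of_premises, conclusion).
facts = {"All men are mortal", "Socrates is a man"}
rules = [({"All men are mortal", "Socrates is a man"}, "Socrates is mortal")]

# Forward chaining: keep firing rules whose premises all hold
# until no new conclusions appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

assert "Socrates is mortal" in facts   # statement C has been derived
```

Note that the machine derives C purely by symbol manipulation; it has no idea who Socrates is, which is precisely what makes knowledge representation the interesting part of the problem.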
Part 4: Knowledge Representation is a Medium for Efficient Computation.
If we think of computational efficiency across all forms of computation, whether mechanical or natural in the sense of computation done by a biological entity, then we may think of knowledge representation as the medium in which that computation is carried out efficiently. Just as we think of money as a medium of exchange, if we think of the human brain as a type of computer which does human computation, then knowledge representation is its medium of computation.
"While the issue of efficient use of representations has been addressed by representation designers, in the larger sense the field appears to have been historically ambivalent in its reaction. Early recognition of the notion of heuristic adequacy demonstrates that early on researchers appreciated the significance of the computational properties of a representation, but the tone of much subsequent work in logic suggested that epistemology (knowledge content) alone mattered, and defined computational efficiency out of the agenda. Epistemology does of course matter, and it may be useful to study it without the potentially distracting concerns about speed. But eventually we must compute with our representations, hence efficiency must be part of the agenda. The pendulum later swung sharply over, to what we might call the computational imperative view. Some work in this vein offered representation languages whose design was strongly driven by the desire to provide not only efficiency, but guaranteed efficiency. The result appears to be a language of significant speed but restricted expressive power."
While I will admit the above paragraph may be a bit cryptic, it shows that there is a view that better representation of knowledge leads to computational efficiency.
Part 5: Knowledge Representation is a Medium of Human Expression.
Of course knowledge representation is part of how we communicate with each other and with machines. Human beings use natural language to convey knowledge, and natural language relies on vocabularies of words with agreed-upon meanings. These vocabularies may be found in various dictionaries, including the Urban Dictionary, and we rely on such dictionaries as a sort of knowledge base.
What is a decentralized Knowledge Base?
To understand what a decentralized knowledge base is we must first describe what a knowledge base is. A knowledge base stores knowledge representations of the kind described in the examples above. In simpler terms, it can be thought of as representing facts about the world in the form of structured and/or unstructured information which a computer system can utilize. An artificial intelligence can use a knowledge base to solve problems; this kind of artificial intelligence is typically called an expert system. In its simplest form the artificial intelligence reasons over the knowledge base through an inference engine, and through this it can perform the sorts of computations which are of great utility to problem solvers.
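As a minimal sketch of the knowledge-base-plus-inference-engine idea (the predicates and rule here are made up for illustration, not any real expert system's schema), facts can be stored as (subject, predicate, object) triples and the inference engine as a rule applied until a fixpoint:

```python
# A tiny knowledge base of (subject, predicate, object) triples.
kb = {
    ("Socrates", "is_a", "man"),
    ("man", "subclass_of", "mortal"),
}

def infer(kb):
    """Inference rule: if X is_a C and C subclass_of D, then X is_a D.
    Applied repeatedly until no new triples are derived."""
    kb = set(kb)
    while True:
        derived = {(x, "is_a", d)
                   for (x, p, c) in kb if p == "is_a"
                   for (c2, q, d) in kb if q == "subclass_of" and c2 == c}
        if derived <= kb:       # fixpoint: nothing new was inferred
            return kb
        kb |= derived

assert ("Socrates", "is_a", "mortal") in infer(kb)
```

A real expert system adds many rule types, a query interface, and provenance tracking, but the shape is the same: humans contribute triples in a knowledge representation language, and the machine derives what follows from them.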
When we think of Wikipedia we are thinking of an encyclopedia which the whole world can contribute to. When we think about the problems with Wikipedia, one quickly becomes apparent: it is centralized. Another is that the knowledge stored on Wikipedia is not stored in a form machines can make use of, which means that even if Wikipedia is useful for humans looking up facts, in its current form it cannot act effectively as a decentralized knowledge base. DBPedia is an attempt to bring Wikipedia into a form which machines can make use of, but it is still centralized, which means a DDOS or similar attack can censor it.
Decentralized knowledge is important for the world, and a decentralized knowledge base is critical for the development of a decentralized AI. For an expert system the knowledge base would have to be as large as possible, which means we may need to give human beings an incentive to contribute and share their knowledge with the decentralized knowledge base. We would also have to provide a knowledge representation language so that human beings can share their knowledge in the appropriate form for it to enter the knowledge base and be used by potential AIs.
Knowledge representation is a necessary component for the vast majority of attempts at a truly decentralized AI. If we are going to deal with any AI then we must have a way for human beings to convey knowledge to machines in a form both can understand. A knowledge representation language makes it possible for a human being to contribute to a knowledge base, which in turn allows machines to reason from that knowledge base with their inference engines. With a decentralized knowledge base the barrier to entry is low or non-existent: any human being, or perhaps any living being, or even robots, can contribute to this shared resource, while both humans and machines gain utility from it. An artificial intelligence which functions like an expert system can use an extremely large knowledge base to solve complex problems, and a decentralized knowledge base, combined with open and decentralized access to that artificial intelligence, can benefit humanity and life on earth in general if used appropriately.
Discussion of example projects.
One of the well known attempts to do something like this is Tauchain, which will have both a knowledge representation system and a decentralized knowledge base. In the case of Tau there is a special simple knowledge representation language under development which resembles simplified controlled English. This knowledge representation language will allow anyone to contribute to the collective knowledge base. Tauchain will eventually have a decentralized knowledge base over the course of its evolution from the first alpha.
Unfortunately, upon reading the Lunyr whitepaper and following their public materials, I fail to see how they will pull off what they are promising. I do not think the current Ethereum can handle the concurrency that would probably be necessary for doing AI, and I also don't see how Ethereum could do it securely with the current design, although I remain optimistic about Casper. The lack of code on Github and the lack of references to their research do not allow me to completely analyze their approach. Because they are talking about a decentralized knowledge base, their approach will require more than the magic of the market combined with pretty marketing: they will require a knowledge representation language and a true decentralized knowledge base built on IPFS. That knowledge base will have to scale with IPFS, and through this maybe they can achieve something, but without a clear plan of action I have to say that today I'm not confident in their approach or in Ethereum's ability to handle it efficiently.
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.