This topic has been loaded in the barrel since April 2018, as I see in my draft records. It is my free association on the major topic known as the ''tragedy of the commons'', refracted through the prism of the things I had to pass through with Tau in mind. Over the months it replicated itself into numerous subtopics and threatens to grow into several general theories, so I decided I'd better release it into the wild and handle it with your help - and, if necessary, tame and domesticate it and its progeny by the coming power of Tau.
The problem of the ''tragedy of the commons'' as a symptom of the more general theme of ownership.
I think I've kinda nailed it. This approach seems to bring serious inferential power, i.e. via it most of what we know can be derived. Of course it lacks mathematical / logical rigor, but even at such a haiku level of expression it seems to work.
Yes, there is such a word - in linguistics.
Per se, ''clusivity'' is the modulus of inclusion and/or exclusion.
Absolute value in maths denotes 'distance' from zero regardless of direction, which translates well to depicting the spectrum between 'included' and 'excluded': imagine excluded = -1 as the opposite of included = 1, with zero measuring the state of equal clusion. The other, more intuitive and easier-to-grasp way would be the fuzzy-logic one of fractional values from zero to one, where zero is no clusivity and one is full clusivity. Let's say we take one of the possible 'directions': 0 = complete exclusion, 1 = complete inclusion, multi-values in between.
Of course, for purely physical reasons 0 and 1 are asymptotic values - ever to approach, never to reach. And for the same purely physical, finitist reasons the clusivity fuzzy spectrum is quantized, not smoothly continuous.
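This fuzzy, quantized picture is easy to sketch in a few lines of Python - a toy model of my own, purely to make the idea concrete; the grid size of 256 levels is an arbitrary assumption:

```python
# Toy model of a quantized clusivity spectrum (my own illustration,
# nothing formal): degrees of access live strictly between
# 0 (complete exclusion) and 1 (complete inclusion), on a finite grid.

def clusivity(raw: float, levels: int = 256) -> float:
    """Snap a raw degree of access onto `levels` discrete steps,
    keeping the endpoints 0 and 1 asymptotic (never reached)."""
    step = 1.0 / (levels + 1)
    # clamp into the open interval (0, 1): the endpoints are unreachable
    clamped = min(max(raw, step), 1.0 - step)
    # quantize to the nearest step: the spectrum is not smooth
    return round(clamped / step) * step

print(clusivity(0.0))   # as excluded as it gets, but never exactly 0
print(clusivity(1.0))   # as included as it gets, but never exactly 1
print(clusivity(0.5))   # equal clusion sits mid-spectrum
```

The two physical constraints from above - asymptotic endpoints and quantization - are exactly the clamp and the rounding step.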
Attending to etymology usually pays off, for two reasons:
Thus, we can visualize all languages as a single language: a continuum with mascons of commonality of indexing-meaning pairs. Like a strange form of semantic entanglement - to be inevitably hacked open someday, and to give birth to endless valuable technologies...
What does all of this have in common with Commons, Ownership and Tau?
Interestingly, the etymology of 'include'  automatically leads to its privatization-publicization functionality.
It is cognate with both:
The private/public ''divide'' as a key/access-driven relation.
Do we ''have the keys''? Or ''are we'' the keys (in non-computerized, 'face-control' types of access)?
NO. For any entity and for every access, the keys are not the entity, nor are they its property.
The key is OUTPUT by us and fed as INPUT into other systems, so that they perform.
Society can be imagined as a network of partially-black boxes, where free will is a function of a box's certainty of self-reflection, and trust is a function of the uncertainty in predicting other boxes' behavior...
We do not know, and in most cases cannot know, what is going on inside the inner workings of other people or organizations or other artifacts, but we know that by inserting a Key we can make them perform a certain expected, predicted action.
The boxes are said to be partially black, with the non-black part denoting the zone of predictability - i.e. ''if I input this into that black box, I know it will return this and that specifically''...
A key - be it biometrics, a piece of shaped metal, a digital string of bits - is a reason which causes, an input which brings the outcome of access to...
An important side note: in key-pair philosophy it is NOT two keys - public & private - but rather a (public) padlock and THE (private) key, so everybody can lock it but only the key owner can unlock it / access it.
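The padlock-and-key picture can be made concrete with textbook RSA on toy numbers - my own illustration, not production crypto: real systems need large keys and padding, and the primes and exponent here are just the classic classroom example:

```python
# Toy sketch of the padlock/key picture with textbook RSA on tiny
# numbers (illustration only). The public key is the padlock: anyone
# can snap it shut (encrypt); only the private key opens it (decrypt).

p, q = 61, 53                       # tiny primes, classroom example
n = p * q                           # 3233, the shared modulus
e = 17                              # public exponent: the padlock
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: THE key (Python 3.8+)

message = 1234                      # any number < n
locked = pow(message, e, n)         # anyone can lock with the padlock
unlocked = pow(locked, d, n)        # only the key holder can unlock

print(locked)     # gibberish without d
print(unlocked)   # 1234 again
```

So ''public key'' is a misleading name: it is the thing everybody can close, while access stays with whoever holds the one private key.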
You may have noticed one of my many-times-repeated slogans:
LAW IS BETWEEN, CODE IS WITHIN
It comes to delineate the map of Trust - i.e. where force is needed (''I trust you only as much as I can make you'') versus the self-enforcing systems of blockchain and god knows what other possible systems.
The whole picture is pretty insightful in both the blockchain and the trust (i.e. force) context, once we realize that it is not so much about the de jure but about the purely de facto situation - even when minding the Law. For private/public is a function of the performance and efficiency of the protocol. Including the key-making ones. Including the key-breaking ones.
On the Law and the related trust-as-enforcement relations to code and protocols I'll go into detail some other time (actually lots of times, because the bunch of concepts here seems to have lots of fruitful logical consequences), but the inevitable conclusion seems to be that it is in general a Clusivity thing even in the Legal case. For it is a matter of accessing the output of compulsory legal action by inputting a ... key.
The recent EU intellectual-property directive is a textbook example of the Fiat approach of external enforcement (as opposed to the cryptographic 'trustless' one). The Fiat way of enforcing ownership rights is also a Clusivity system. The victims of a property-rights breach ACCESS the authorities with their ID information, evidence and procedural codes, and as output they are supposed to receive enforcement actions against the delinquents. The cost of trust this way can be staggering, and it is apparent that such a system may easily get clogged and implosively unscale.
Tau is mostly about the knowledge economy. An economy without ownership is very hard, if not impossible, to imagine. Like the Borg, where there is no 'between' anymore but everything is 'within' - yet even an all-white-boxes system is prone to failures. Especially when we go past the veil of the ideological cliche definitions and take ''to own'' = ''to access'' in the purely factual, physical sense of the word.
In this sense each and every economy is a Clusivity management system.
Tau promises the ultimate Clusivity management.
 - https://en.wikipedia.org/wiki/Tragedy_of_the_commons
 - http://www.idni.org/
 - https://en.wikipedia.org/wiki/Irony
 - https://en.wikipedia.org/wiki/Ownership
 - https://en.wikipedia.org/wiki/Clusivity
 - https://en.wikipedia.org/wiki/Absolute_value
 - https://www.etymonline.com/word/inclusion
 - https://www.etymonline.com/word/exclusion
 - https://en.wikipedia.org/wiki/Fuzzy_logic
 - https://en.wikipedia.org/wiki/Finitism
 - https://en.wikipedia.org/wiki/Discrete
 - https://en.wiktionary.org/wiki/continuous
 - https://en.wikipedia.org/wiki/World_line
 - https://en.wikipedia.org/wiki/Memory_(disambiguation)
 - https://en.wikipedia.org/wiki/Morphism
 - https://en.wikipedia.org/wiki/Mass_concentration_(astronomy)
 - https://en.wikipedia.org/wiki/Quantum_entanglement
 - https://www.etymonline.com/word/include
 - https://en.wikipedia.org/wiki/Black_box
 - https://security.stackexchange.com/questions/87247/why-is-a-public-key-called-a-key-isnt-it-a-lock
 - https://en.wikipedia.org/wiki/Public-key_cryptography
 - http://www.behest.io/
 - https://steemit.com/tauchain/@karov/tauchain-and-the-cost-of-trust
 - https://www.theguardian.com/technology/2018/jun/20/eu-votes-for-copyright-law-that-would-make-internet-a-tool-for-control
 - https://en.wikipedia.org/wiki/Fiat_money
 - https://en.wikipedia.org/wiki/Delict
 - https://steemit.com/tauchain/@karov/tauchain-trumps-procrustics
 - https://steemit.com/tauchain/@karov/scaling-is-layering
 - https://steemit.com/tauchain/@karov/tauchain-transcaling
 - https://en.wikipedia.org/wiki/Borg
 - https://en.wikipedia.org/wiki/Cancer
 - The marvelous picture above is quoted from: https://www.deviantart.com/lora-zombie/art/LORA-ZOMBIE-THREADLESS-351467642
“We are moving into an era where cities will matter more than states and supply chains will be a more important source of power than militaries — whose main purpose will be to protect supply chains rather than borders. Competitive connectivity is the arms race of the 21st century.”
-- Parag Khanna
A network is made of lines and switches, right?
Much has been said about network scaling effects, including attempts by myself [4-12] ... which compels me to introduce the not-so-frivolous notion of network forces.
These forces are expressed in several laws. I initially thought to put 'forces' and 'laws' in scare quotes here, but I realize they are quite objective and physical emergenta indeed.
In my ''Geodesic by Tauchain'' article from a couple of months ago I emphasized the Huber-Hettinga Law: how the cost of switching literally defines the 'orographic' topology of a network.
The cheaper the routing - the flatter the network.
Expensive switches = hierarchy, verticality, power, control, obedience, centralization, 'world is fiat', sollen; hence borders instead of bridges, limitations instead of stimuli, exclusivity ...
Cheap switching = geodesic society, 'world is flat', horizontality, p2p, decentralization, inclusivity ...
The more vertical-by-centralization a network is, the more it must deplete information - omit and ignore calls from the deeps, or even actively suppress or silence nodes. To cope with the stream by strangling it. Simply due to lesser capacity, fewer degrees of freedom. Geodesic networks possess higher entropy and are therefore richer. They bolster both higher Scrooge and Spawn factors. In other words:
The flatter the network - the richer  it is.
Maybe that explains why the wealthiest-healthiest societies tend to be those with the greatest economic-political freedom.
Naturally, the Huber-Hettinga Law led me to the elementary-Watson conclusion of the power and value of Tau as the ultimate über-switch. So far so good.
Now let's stare at the Lines. Here comes Nick Szabo.
Nick Szabo - a lawyer AND a computer scientist - is a legendary figure from the great 'Archaic era of crypto', the 1990s, when he, together with the other cypherpunk titans like Tim May, Wei Dai, Bob Hettinga etc., poured in staggering detail the very bedrock foundations of what we now enjoy as Crypto in the post-Satoshi era.
It is THEIR vision come true that we all now live in.
Bitcoin was the detonation of precisely that critical mass of fused thoughts, of precisely these very smart people, piled up and compressed by the connective network forces of the early internet.
No, I do not mean Szabo's most famous thing at all - the 1994 coining of the term 'smart contracts'. In fact I deeply and strongly reject the very notion of 'smart contracts' as utter nonsense, even as an oxymoron - which is a yuge separate problem, one I suspect I have nailed and will address in a series of dedicated articles starting in the upcoming weeks...
I mean something much more valuable, what I call the Szabo Law.
When we hear the phrase 'network effects', the first thing that comes to mind is the famous Metcalfe's law.
''Metcalfe's Law is related to the fact that the number of unique connections in a network of a number of nodes (n) can be expressed mathematically as the triangular number n(n − 1)/2, which is proportional to n² asymptotically (that is, an element of O(n²)).''
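A quick numerical sanity check of that quadratic growth - a trivial sketch of mine, not from the quoted source:

```python
# Metcalfe's count of unique links among n nodes: the triangular
# number n(n-1)/2, which grows ~ n**2.

def unique_connections(n: int) -> int:
    """Number of unique pairwise connections among n nodes."""
    return n * (n - 1) // 2

# Each doubling of the node count roughly quadruples the link count:
for n in (10, 20, 40):
    print(n, unique_connections(n))
```

Running it shows 45, 190, 780 links: ''double the population, quadruple the economy'', as the article puts it below.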
In the above order of appearance, these network-force laws respectively quantify the basic properties of a network:
- Huber-Hettinga Law - the cost of switches and routing.
- Metcalfe Law - the number of nodes, i.e. switches, which defines the number of unique connections or lines.
- Szabo Law - the cost of the lines and connecting.
All these Laws are scaling laws. Before we come back to and continue on Szabo's Law, we have to briefly mention another one:
''So what is “scaling”? In its most elemental form, it simply refers to how systems respond when their sizes change. What happens to cities or companies if their sizes are doubled? What happens to buildings, airplanes, economies, or animals if they are halved? Do cities that are twice as large have approximately twice as many roads and produce double the number of patents? Should the profits of a company twice the size of another company double? Does an animal that is half the mass of another animal require half as much food?'' ... With Dirk Helbing (a physicist, now at ETH Zurich) and his student Christian Kuhnert, and later with Luis Bettencourt (a Los Alamos physicist now an SFI Professor), Jose Lobo (an economist, now at ASU), and Debbie Strumsky (UNC-Charlotte), we discovered that cities, like organisms, do indeed exhibit “universal” power law scaling, but with some crucial differences from biological systems. Infrastructural measures, such as numbers of gas stations and lengths of roads and electrical cables, all scale sublinearly with city population size, manifesting economies of scale with a common exponent around 0.85 (rather than the 0.75 observed in biology). More significantly, however, was the emergence of a new phenomenon not observed in biology, namely, superlinear scaling: socioeconomic quantities involving human interaction, such as wages, patents, AIDS cases, and violent crime all scale with a common exponent around 1.15. Thus, on a per capita basis, human interaction metrics (which encompass innovation and wealth creation) systematically increase with city size while, to the same degree, infrastructural metrics manifest increasing savings. Put slightly differently: with every doubling of city size, whether from 20,000 to 40,000 people or 2M to 4M people, socioeconomic quantities – the good, the bad, and the ugly – increase by approximately 15% per person with a concomitant 15% savings on all city infrastructure-related costs.
This probably comes to denote the sheer size of the network in STEM (space, time, energy, mass) terms. I'm not sure, but I have some strong suspicions about the unity of matter, structure and action, which I will expose and share some other time.
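The 0.85 / 1.15 exponents in the quote are easy to play with numerically - my own toy check, using only the exponents reported there:

```python
# Power-law scaling: Y ~ N**exponent, so doubling the city size N
# multiplies Y by 2**exponent. Per person, the effect of doubling is
# 2**(exponent - 1).

per_capita_socio = 2 ** (1.15 - 1)   # superlinear bonus per person
per_capita_infra = 2 ** (0.85 - 1)   # infrastructure saving per person

print(round(per_capita_socio, 3))   # ~1.11: in the ballpark of the quoted gain
print(round(per_capita_infra, 3))   # ~0.90: in the ballpark of the quoted saving
```

The exact per-doubling figure works out closer to 10-11% than 15% (2 to the power of 0.15 is about 1.11), but the qualitative point of the quote - superlinear socioeconomics, sublinear infrastructure - holds either way.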
What I call Szabo's Law he reveals in his ''Transportation, divergence, and the industrial revolution'' (Oct 16, 2014): similarly to Metcalfe's (''double the population, quadruple the economy''), there is a power-law correlation between the cost of connections or links or lines ... and the value of the network, too:
''Metcalfe's Law states that a value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed is an inverse square of the cost per mile of transportation. Combine this with Metcalfe's Law and we reach a dramatic but solid mathematical conclusion: the potential value of a land transportation network is the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith's observations: the division of labor (and thus value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).''
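Szabo's back-of-the-envelope chain of reasoning can be replayed in a few lines (my own sketch of his argument, with an arbitrary constant):

```python
# Szabo's composition: reachable nodes ~ cost**-2 (area within a
# transport budget), Metcalfe value ~ nodes**2, hence
# network value ~ cost**-4.

def network_value(transport_cost: float, k: float = 1.0) -> float:
    """Potential value of a trade network as the inverse fourth
    power of transportation cost (k is an arbitrary scale)."""
    nodes = k / transport_cost ** 2   # economically reachable nodes
    return nodes ** 2                 # Metcalfe: value ~ nodes**2

# Halving the cost of transport multiplies potential value by 16:
print(network_value(1.0))   # 1.0
print(network_value(0.5))   # 16.0
```

As Szabo himself warns, the exponent 4.0 is an upper bound due to redundancies, but the nonlinearity is the point.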
My encounter with this article of Szabo's was a goosebumps experience for me, because it coincided with a series of lay rants of mine in the old Zennet IRC chat room of Tau that ''computation = communication = transportation''. Somewhere in 2016, as far as I remember. :)
Maybe it was the last drop that shaped my conviction that, through my dedicated involvement in both Tau and ET3, I'm actually working on ... one and the same project.
For communication, computation and transportation are modes of state change. Because information is a verb, not a noun. And software is states of hardware.
''Decentralizing the internet is possible only with decentralized physical infrastructure.'' 
Just as the brain is a network computer of neuron nanocomputers, the emergent composite we colloquially call humanity or mankind or economy or society or world ... is a network computer made of all us billions of humans.
Brains do thought, economies do wealth.
Integrated circuitry upon the face of planet Earth as a motherboard. Literally. Humanity's planet-hardware. Parag Khanna's Connectography explained.
The Earth is definitely not our ultimate chip carrier. There is probably no limit at all to our culture-upon-nature hardware upgrades. The universe is our computronium, and we've been here too short a time and haven't seen far enough. Networking is connectomics. And thus it always is metabolomics, too.
Remember last month's ''Tauchain the Hanson Engine''?
The series of exponentially shortening growth-doubling times looks driven by transportation technological singularities: domestication of the horse, oceanic navigation, the combustion engine ...
In the light of all the net forces summoned above: The planet Earth viewed as a giant computer chip ...
- is itself subject to the relentless network entropic force of Moore's law
The network forces accelerate what that wealth computer does.
Two quick examples:
A: The $1500 sandwich, as proof that trade+production is at least thousands of times stronger at sandwich-making than production alone.
B: Eric Beinhocker's example in his 2006 ''The Origin of Wealth'' about two contemporary tribes: the Amazonian Yanomami, a stone-age population to this day, and the East Coast Manhattanites. The former are only about 100 times poorer, but the latter enjoy a billions-of-times-bigger choice of things to have.
Tauchain 'threatens' to affect the parameters of ALL the network-force formulae mentioned herewith, on a mind-bogglingly big scale.
Simultaneously, by orders of magnitude:
- lower switch cost
- higher node count
- lower connection cost
A wealth hypercane recipe. A perfect value storm. The future ain't what it used to be.
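Just to feel the multiplicative punch of moving all three dials at once - a purely illustrative toy with made-up factors; only the multiplicative structure comes from the laws above, not the numbers:

```python
# Hypothetical, made-up improvement factors - the point is how the
# laws compose, not the particular values.
switch_cost_drop = 10   # Huber-Hettinga: cheaper routing, flatter net
node_growth = 10        # Metcalfe: value ~ nodes**2
link_cost_drop = 10     # Szabo: value ~ connection_cost**-4

# Metcalfe and Szabo alone already multiply potential value a millionfold:
value_multiplier = node_growth ** 2 * link_cost_drop ** 4
print(value_multiplier)
```

One order of magnitude on each dial compounds into six orders of magnitude of potential value - the 'perfect value storm' in arithmetic form.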
Size matters. Some people object that it does not matter but rather has meaning. But meaning always matters, so it is the same.
The bigger the problems one solves, the bigger the gains. Big problems require big solutions. We live in a big universe, and our very survival means dealing with bigger and bigger problems, which require bigger and bigger solutions to cope.
Nevertheless, building big is hard, so we naturally prefer to create small things which can grow - small in the sense of being both understandable and affordable to build. The best fit is small solutions, cheap and easy to make, which scale out or unfold or unleash into big means to address big problems. Scaling is everything.
Scaling. Scalable! Scalability!!
The root word 'scale' possesses marvelous riches of meaning in the English language, with lots of poetics inside:
- snake-skin epidermals: wisdom, memory, protection, rejuvenation, regeneration, eternity... hen to pan (ἓν τὸ πᾶν), "the all is one"
- warrior armour: security, defense, power, strength.
- weighing scales: a device to measure mass; unit, measure, account.
All very Blockchainy wording, without any shadow of doubt.
The scalability issue can be grokked via the following anecdote:
A bunch of workers on a construction site, and a huge log. The onsite manager commands a few of them to lift and move it. They try and object: ''Too heavy!''. The manager adds more and more workers, until they shout back again: ''Too short!''.
A few real examples - the first two bad, the last three excellent:
[a] I won't name this 'crypto'; I'll just say it is named after a mythical element of the universe, according to prescientific gnostic imaginations. Its core value proposition is to shovel meaningful computation into a thread of computation whose very value proposition is to be as random, meaningless and unidirectional (hard to do, easy to prove) as possibly possible - the blockchain. The theoretically most expensive form of computation. Visualize: cars and airplanes made of gold and diamonds, burning the most expensive perfumes. Or mass production of electricity by raising trillions of cats and hiring trillions of people to pet them, with a grid of pure gold wires to discharge and collect the electrostatics. Had they chosen the original Satoshi blockchain for their 'experiments' - where the futility of such an attempt would become instantly clear and die out outright due to the unbearable cost - it would of course have been the fairer way to do it, and would have spared Mankind dozens of billions of dollars; but logically they preferred a 'controlled' blockchain of their own, in the sense that the guys with a vested interest in it have the power to hand-drive, stop, restart and vivisect it. The only use of this 'blockchain supercomputer' is ... tokenomics by Layering. Why was it at all necessary for a blockchain advertised as good enough to do all general computation to be made so hairy and bushy with layered tokens??
[b] Another trio of chaps - I won't mention names again - were really in awe of Satoshi's creation; so much that they not just liked it but wanted it, and decided to have it. For themselves. All of it. And they rebelled and forked out and provided a 'scaling' errrmm ... uhhh ... solution. By increasing the block size. Something Satoshi meditated on, extensively discussed with his disciples, and not accidentally decided to put the brakes on. Very recently the crypto news headlines said that the blocksize-increase solution providers are eyeing ... Layering. Which they had furiously advocated the blocksize increase makes unnecessary. Because it is the solution, isn't it? Or maybe it just was. And is not anymore? Well, I'd say that all the so-called 'alts' - providing a rejuvenated clone of Bitcoin, tweaked here and there for a momentary ease of difficulty and transaction fees - suffer from one and the same problem: traveling back in time does not tell you the future.
[c] Let's jump half a century back in time. It's the 1960s. The very making of the internet. Computers are already here, and have scaled up in numbers enough for their networking to become a problem/juice worth the solution/squeeze. The birth of TCP/IP, and the report of its very makers. Of the solution for network scaling. Enjoy the ancient wisdom:
Initially, the TCP managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. The Transmission Control Program was split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.
The layering made the Internet as we know it. By the simple trick of just one node needing to permit another. Unstoppable inclusivity!
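Postel's principle of layering can be sketched in a few lines - a toy encapsulation model of my own, not real TCP/IP; each layer touches only its own envelope and knows nothing about the layers above it:

```python
# Toy protocol stack: each layer wraps/unwraps only its own header.

def wrap(layer_name: str, payload: str) -> str:
    """Encapsulate a payload inside this layer's envelope."""
    return f"[{layer_name}|{payload}]"

def unwrap(layer_name: str, packet: str) -> str:
    """Strip exactly this layer's envelope, nothing more."""
    prefix = f"[{layer_name}|"
    assert packet.startswith(prefix) and packet.endswith("]")
    return packet[len(prefix):-1]

# Sending: application data descends the stack...
msg = "hello"
packet = wrap("IP", wrap("TCP", wrap("HTTP", msg)))
print(packet)    # [IP|[TCP|[HTTP|hello]]]

# Receiving: each layer removes only its own envelope, in order.
received = unwrap("HTTP", unwrap("TCP", unwrap("IP", packet)))
print(received)  # hello
```

Swapping any one layer's implementation leaves the others untouched - the 'separation of concerns' that made a monolithic design unnecessary.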
[d] The Mastercoin / Omni Layer:
«A common analogy that is used to describe the relation of the Omni Layer to bitcoin is that of HTTP to TCP/IP: HTTP, like the Omni Layer, is the application layer to the more fundamental transport and internet layer of TCP/IP, like bitcoin».
[e] The Lightning network (LN) :
The Lightning Network is a "second layer" payment protocol that operates on top of a blockchain (most commonly Bitcoin).
Satoshi spoke of 'payment channels' in his masterpiece, foreseeing the way to scale.
An estimate of the power of LN layering:
''The bitcoin devs accept that eventually larger block sizes will be needed. The current transaction rate isn't going to cut it if people all over the world actually start using bitcoin daily. They estimate that eventually, if everyone in the world uses bitcoin and makes 2 transactions a day, but uses the lightning network, a 133mb blocksize will be needed. Without the lightning network, something like a 200gb (GIGABYTE) size PER BLOCK would be needed to accommodate that much usage.''
Layering upscales it by orders of magnitude of higher efficiency.
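The rough ratio behind that quote - my own arithmetic on its two numbers:

```python
# Block sizes per the quoted estimate (everyone on Earth making
# 2 transactions a day):
with_ln = 133           # MB per block, with the Lightning Network
without_ln = 200_000    # MB per block (200 GB), without it

# Layering buys roughly three orders of magnitude:
print(round(without_ln / with_ln))
```

About a 1500-fold reduction in block size for the same usage - that is what 'orders of magnitude' means here, in concrete megabytes.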
If Bitcoin is the 'first layer' and Omni and Lightning are the 'second layer', I see which one is the 'Zeroth Layer', and I also foresee the inevitable merger or 'Amalgamation' of all second layers over all blockchains, so that the user will be able to transact everything into anything with anybody, without knowing or caring which chain is in use ... I have special nicknames for these and will come back to these topics in a series of future posts.
Enough of examples I reckon.
Postel's sacred Principle of Layering comes from the implementation-levels paradigm, or Abstraction layering:
''separations of concerns to facilitate interoperability and platform independence''
In other words: delegate the task to the layer of the system which does the particular job best. We can generalize this into The Scaling Commandment. Only one is enough:
''Thou shalt not jam it all into a single layer!''
The Layer Cake architecture is literally ubiquitous across the Universe: biology, semantics, informatics ...
It seems that it is, if not the only way, at least THE way to scale.
Maybe someday we, the Humanity upscaled by Tauchain, will discover more powerful ways to Scale than Layering, but it is all we have for now.
Scaling is a problem. Scaling must be scalable, too.
Metascale from here to Eternity.
Artificial morality: Moral agents and Tauchain. By Dana Edwards. Post translated into Spanish by Tokuyama and published on Steemit, 28 June 2017.
In a previous article we discussed the value of intelligence amplification, using the amplification of morality as an example. Now I will go into some detail about the concept of 'artificial morality' as it pertains to autonomous agents. At a philosophical level there are different ways of looking at autonomous agents; I will list a few.
For this discussion let's take the third point, thinking of autonomous agents as an extension of your digital personhood. Specifically, by digital personhood I am referring to how the will is quantified and digitized. Note: I will not elaborate in this article on what the will is, or on whether 'free will' exists, as that is an entirely separate philosophical debate; I will simply use this definition as a way to think about morality, responsibility and autonomous agents.
What is moral agency?
Considering the quote above, is there any reason to believe that an autonomous agent cannot come to understand human morality? I would predict that autonomous agents will not only understand human morality, but will understand it better than most humans do. Human beings do not have a very clear understanding of human morality, due to the limitations of the human brain and the complexity of the world and of other people. Autonomous agents are the way to manage this complexity, to the benefit of human beings.
In Tauchain there will be a world-scale knowledge base (KB), somewhat similar to Wikipedia. If structured correctly, this KB will be able to accept contributions from both humans and AI agents. Knowledge of morality at the common-sense level will be possible, but how far can we take the knowledge-base + inference approach? Deontological morality can easily be used in this context, because deontic logic can be input into Tau, allowing automated reasoning over the knowledge base in accordance with deontological morality. But can autonomous agents be responsible for their actions?
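To give a toy flavor of the KB + inference idea - my own sketch in plain Python, with invented example facts and rules; Tau's actual logics are far richer than this naive forward chaining:

```python
# Toy knowledge base: facts are triples; each rule derives a
# conclusion when all its premises are already in the KB.

facts = {("agent", "holds", "private_data")}

rules = [
    # deontic-style rules, invented for illustration
    ({("agent", "holds", "private_data")},
     ("agent", "obliged", "protect_private_data")),
    ({("agent", "obliged", "protect_private_data")},
     ("agent", "forbidden", "sell_private_data")),
]

# Naive forward chaining to a fixpoint: keep firing rules until
# nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("agent", "forbidden", "sell_private_data") in facts)  # True
```

The point of the KB + inference approach is exactly this: obligations and prohibitions follow mechanically from the stated facts and rules, so they can be checked automatically.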
What is artificial morality?
Artificial moral agents, such as autonomous agents that have been empowered to make decisions, can make moral decisions. Not only can artificial moral agents make moral decisions on a par with human decision-makers; they can surpass humans' decision-making abilities.
At some point in the future, autonomous agents may have a better understanding of current social norms than any particular human being. This knowledge of social norms and common morality would help an autonomous agent navigate a social landscape, a legal landscape and more. Game theory and cooperative game theory highlight how rational players would proceed under certain conditions of limited information. Autonomous agents can be rational actors while also having a certain level of moral understanding and, more importantly, a capacity to process far more information than any individual person. This would mean that an autonomous agent could have a complete understanding of the laws, and would be able to reduce the risk of legal consequences better than any human, who has to work with a limited understanding of the laws.
The Moral Turing Test
How do we measure the performance of artificial moral agents? The Moral Turing Test may be the answer. It is not enough simply to create moral agents that we believe or hope will act morally in difficult situations; instead it may be necessary to test them, and to accept only those artificial moral agents that can pass the test. Simulations and other approaches can help as well, but because not all events can be predicted in advance, there must be a way to continuously improve the design of the winning moral agents; for this reason it will be important to let human beings measure and review the conduct of moral agents, as a means of promoting a kind of artificial evolution.
Autonomous agents can improve the world, but in my opinion the emphasis must be on making sure these autonomous agents are held to a high standard. This would include, at the very least, that they have some form of ethics, and to achieve that it may be necessary to run artificial-morality experiments. Autonomous agents on Tauchain can be moral agents, and they necessarily have to be. If done correctly, humans will be able to rein in these autonomous agents if they get out of control, to keep them within morality, and they can even be designed to evolve continuously, becoming ever more moral by learning our morality, as individuals and as a group.
Post translated by tokuyama. Original post in English by Dana Edwards, with the author's permission for the Spanish translation: Artificial morality: Moral agents and Tauchain
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.