Consensus Morality and Tauchain | Consensus Gentium. By Dana Edwards. Posted on Steemit. September 15, 2018.
An ancient criterion of truth, the consensus gentium (Latin for agreement of the people), states "that which is universal among men carries the weight of truth" (Ferm, 64). A number of consensus theories of truth are based on variations of this principle. In some criteria the notion of universal consent is taken strictly, while others qualify the terms of consensus in various ways. There are versions of consensus theory in which the specific population weighing in on a given question, the proportion of the population required for consent, and the period of time needed to declare consensus vary from the classical norm.
In the past I made a controversial statement that the law is amoral. That statement is based on a simple understanding of legal positivism. Take note that I am not a legal scholar or legal philosopher; my background is in ethical philosophy and political philosophy. That being said, the ideas behind legal positivism lead to the conclusion that law and morality have nothing to do with each other. In this post I will try to clarify some of my thoughts on this topic and also address a question I was asked about whether democracy is moral or immoral. I will also discuss the concept of consensus morality and the implications it could have for Tauchain, which by design will be permitted to have law(s). Will the law(s) in Tauchain be moral or immoral? Is it possible to align a moral framework with the creation of all laws in Tauchain? Which moral framework, and will it be reached by consensus?
In order to understand much of this post we first have to consider the question: what is consensus morality? To discuss this topic I will divide morality into private morality and public morality. This also introduces the question of whether public morality is authentic or coerced, as that depends on how it emerges.
Private morality is what you internally think or feel is right or wrong. This could be because you did some sophisticated calculation as a consequentialist, or merely because you feel a certain way about it. In your opinion it is wrong. For example you could say: "eating meat is wrong", and this would be your personal opinion, an expression of how you feel about eating meat. Now if you say "eating meat is wrong because it promotes animal suffering", this is also an expression of your opinion, but you now have a goal attached, which is to avoid promoting animal suffering. That goal suggests you value the minimization of animal suffering as a kind of optimization strategy.
If you still follow, private morality can also be based on your religious convictions: because the Bible says it is wrong, or because you were taught the golden rule, it is in your opinion wrong to behave in ways which violate these teachings. The golden rule is an example of a heuristic rule. There are many such rules which people follow, including Kant's categorical imperative, but following one is still just an opinion based on adherence to a heuristic rule. We can also consider the non-aggression principle an example of a heuristic rule (a heuristic rule is a mental shortcut which people take because they believe it leads to good results most of the time).
Public morality, on the other hand, is a different kind of morality entirely. A private individual has a private morality because that individual is responsible only for themselves in their decisions. A public individual is in a position where other people have a stake in what they are doing. For example, the CEO of a company cannot simply do what they think is right, because the shareholders have funds at stake. The CEO has a fiduciary duty which outweighs their personal opinions on what is right and wrong. This fiduciary duty is to the shareholders of the company and is both a legal and an ethical obligation. In the case of a public company, the rightness or wrongness of a decision, if the company weighs consequences, is based on data. For example, a company might rely on focus groups to determine what a customer might want, or on spiritual advisers and ethical focus groups to determine what the shareholders (and customers) would perceive as right. This is because if the CEO does not do what is in the best interest of the shareholders and customers, then that CEO will simply be replaced by another CEO who will.
Public morality is reached by some process which results in a moral consensus. The moral consensus of 2018 is not going to be the same as the moral consensus of 1969; moral attitudes change over time. A company which seeks to exist and remain profitable for decades must remain in good moral standing for those decades, and the only way it can stay aligned with current moral trends is through data analysis. In other words, data science is how "right" and "wrong" are determined. For example, public sentiment is tracked, and from that the marketing team knows where the line in the sand is and which line not to cross in their marketing campaign. The phrase "we went too far" is common in business, because going too far simply means pushing the boundaries of what is acceptable (or unacceptable). This can also become problematic: a company which bet on the moral consensus of the early 1800s (slavery is acceptable) would have found after the Civil War (slavery is wrong) that it had to change its position. In other words, the moral consensus is always changing and is in essence producing moral populism.
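The idea that moral consensus is a moving aggregate of tracked sentiment can be sketched in a few lines. The following is a minimal illustration of my own (the function, the sample data, and the window size are all hypothetical, nothing from the post): the "consensus" at a given year is simply the majority opinion among expressions inside a recent time window, so the same question yields different answers in 1969 and in 2018.

```python
from collections import Counter

def moral_consensus(opinions, year, window=10):
    """Toy sketch: consensus as the majority opinion among
    expressions recorded within the last `window` years."""
    recent = [op for (y, op) in opinions if year - window < y <= year]
    if not recent:
        return None  # no sentiment expressed, no consensus
    return Counter(recent).most_common(1)[0][0]

# Hypothetical opinion stream about some question "X".
opinions = [
    (1965, "X is acceptable"), (1966, "X is acceptable"), (1968, "X is wrong"),
    (2014, "X is wrong"), (2016, "X is wrong"), (2018, "X is acceptable"),
]

print(moral_consensus(opinions, 1969))  # X is acceptable
print(moral_consensus(opinions, 2018))  # X is wrong
```

The same data produces opposite verdicts depending on when you ask, which is exactly the "moral populism" problem: the output tracks current sentiment, not any fixed standard.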
Consensus morality on Tauchain
Consensus morality is essentially a publicly recognized framework for right and wrong. Consensus morality on Tauchain, for example, could be arrived at if we simply have discussions on topics of ethics. Over time our discussions will converge so as to produce a consensus morality: a moral attitude of the day, of the year, etc., as it is merely the currently popular opinion and sentiment on what is right and what is wrong. So consensus morality is, in my opinion, likely to be a very important concept going forward, and it is a concept which Tauchain (and blockchains like Steem) may enable.
Consensus morality and potential problems
So the question I was asked is about democracy. The idea put to me was that democracy is immoral because it is a form of coercion. I do not personally buy the idea that democracy is inherently immoral or inherently coercive, though I will say that democracy implemented in the wrong way can become coercive. This is why an emphasis on privacy may be a requirement: if there is no privacy then all votes could be coerced. If the idea is to have a network which is truly moral, then we would require that every moral opinion be expressible. Unpopular moral opinions are censored or discouraged from being expressed in a transparent ecosystem, which means a transparent ecosystem may, under certain circumstances, produce a coerced consensus morality. That is, votes which are public and attributable to particular individuals may be mere virtue signals rather than honest (authentic) opinions on what is right and wrong.
As a result, this transparency may skew the results of any poll on any subject. A private or anonymous poll can capture a result which in theory expresses a true opinion. In addition, there is the possibility of futarchy, allowing prediction markets and other mechanisms to discover true sentiment on moral questions. My answer to the question is that while democracy is not inherently wrong, it is also not inherently right. Democracy is a tool which, used in the right circumstances, may be best suited to achieving the ends. If no better tool exists to achieve those ends, then democracy may in fact be the choice which leads to the least bad consequences compared to other potential choices. That being said, the ideal of consequentialism is to reduce wrongness and increase rightness over time by measuring the consequences of every choice, such as private-ballot voting vs transparent voting.
Privacy has both its risks and its benefits with regard to consequences. The benefits include coercion resistance. The risks, on the other hand, include an increased ability to bribe and thus coerce. So while in theory a person with privacy can express an authentic opinion (have genuine speech rights), it is also true that anyone could be anonymously (privately) selling their opinion and thus their vote. It is going to be a challenge to determine when privacy is the right tool for the job and when transparency is.
In the positivist view, the "source" of a law is the establishment of that law by some socially recognised legal authority. The "merits" of a law are a separate issue: it may be a "bad law" by some standard, but if it was added to the system by a legitimate authority, it is still a law.
Legal positivism states that law and morality are not one and the same. Just because something is legal does not mean it is moral, and just because something is illegal does not mean it is immoral. From this basis I reached the conclusion that because immoral laws exist (alongside moral ones), the law as a whole is amoral. That is to say, whether a law can be made or unmade does not depend on whether the law produces good consequences, or even desirable consequences. We could, for example, look at the drug laws and the war on drugs to see policies which produce mass incarceration; but was that the intended consequence? It would seem the drug laws would have to be immoral according to consequentialism, unless the intended consequence was mass incarceration. If the intended consequence was harm reduction, then the current drug laws are ineffective. What do these laws actually achieve? It doesn't really matter, because the law is amoral. Aligning the law with morality is also problematic, because it could only align with public morality, which under consequentialism may itself often lead to bad or unintended consequences.
A potential solution is to allow participants in the ecosystem to rate the laws over time. Rising or falling ratings would provide a feedback loop indicating when a law should be replaced. This is something we don't seem to have in the current legal system; or if we do, what is actually done when a lot of people express the opinion that a particular law is immoral, or perhaps not moral enough? If every law on Tauchain could be rated, reviewed, discussed continuously, and improved indefinitely, then we may actually get somewhere.
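The rating feedback loop described above can be sketched in a few lines of code. This is a hypothetical illustration only (the `LawRegistry` class, the thresholds, and the law names are my inventions, not anything from Tauchain's design): laws accumulate ratings over time, and a law whose recent average falls below a threshold is flagged for review and possible replacement.

```python
from collections import defaultdict
from statistics import mean

class LawRegistry:
    """Hypothetical sketch of a continuously rated law registry."""

    def __init__(self, review_threshold=2.5, window=100):
        self.ratings = defaultdict(list)        # law_id -> ratings (1-5)
        self.review_threshold = review_threshold
        self.window = window                    # only recent ratings count

    def rate(self, law_id, score):
        if not 1 <= score <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings[law_id].append(score)

    def needs_review(self, law_id):
        recent = self.ratings[law_id][-self.window:]
        # An unrated law is not flagged; sentiment must actually be expressed.
        return bool(recent) and mean(recent) < self.review_threshold

registry = LawRegistry()
for score in (1, 2, 1, 2):                      # sustained low ratings
    registry.rate("drug-law-1971", score)
registry.rate("free-speech-act", 5)

print(registry.needs_review("drug-law-1971"))   # True
print(registry.needs_review("free-speech-act")) # False
```

The sliding window matters: it makes the signal track *current* sentiment rather than all-time sentiment, which is exactly the property a consensus morality would need as attitudes drift from one decade to the next.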
In a recent article of mine I hinted at my strong suspicion that scaling is itself scalable.
''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''
No matter how terrific a grower a system is, as per its own internal algorithmic growth-drive rules, it seems inevitable that its growth gets it into entropic mutualization upon impact with a kind of ... downscaler.
Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically, the faster a thing grows, the sooner its encounter with an external and bigger downscaling factor comes.
This realization, refracted through the prism of our 'reptilian brain' layer and amplified to gargantuan proportions by our inherent social hierarchicity, is the source of the 'Malthusian anxiety' which has led to countless violent deaths over all of human history. Fear is anger, so the emotion that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is the major source of what makes us bad to each other.
There is a plethora of examples of mathematically and scientifically well-grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics, and simultaneously all totally wrong for missing the obvious: the factors of externalities, the properties and opportunities of the medium which is consumed and/or created by this growth, and which transcend the axiomatics. For growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction.
Let's throw out some such problem-solution couples for clarity:
a. the world of 1890s big cities, sunk knee-deep in beast-of-burden manure, and the super-apocalyptic projections of that, VS Tony Seba's 1-pic-worth-1000-words photos of the NYC carts-vs-cars situation in 1900-1913 ...
b. the grim visions of the whole of Mankind becoming telephone-switchboard blue-collar workers, the number of whom should have exceeded the total world population by now to achieve today's level of telephonization, or
c. the all-librarians world, where it takes more librarians than the whole of mankind to serve the social memory in the paper-and-printed-ink storage-facility mode ...
d. the Club of Rome as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of shit, or the 'proofs' of the impossibility of connecting or educating or feeding everyone - instigating the mass-destruction fear that ''we are running out of everything and will soon all die'', used as justification for mass atrocities, VS Julian Simon's ''The Ultimate Resource'' (1981, 1996). Cf. my accelerando article, and see what precisely is the Factory for the succession of better and better Hanson drives over the last few million years - from the Blade and the Fire to the Tau - it is the same thing whose identification turned Julian Simon from a fanatical Malthusian into a rationally convinced Cornucopian ... the human mind.
e. the predator-prey model, whose brutal flaw this pseudo-haiku, I guess, depicts best:
''hawk eat chick -> less chick, human eat chick -> more chick''
for failing to posit, and failing to account for, the positive feedback loop of predator-over-prey dynamics ...
f. the comment of Daryl Oster, founder of the other passion of mine, ET3, on the so-called 'saturation' of scalables (exemplified in the field of transportation which, btw, being communication, means our social structures map onto the mobility systems we have at our disposal):
''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use). Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years. The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all who fund research centered on 'improving' the status quo. Virtually all universities (for the last 1k years+) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry.
Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''
g. the Darwinian algo sqrt(n) VS higher algos, like Metcalfe's n^2. This is not precise; it is more metaphorical, meant to indicate the direction or scale of scaling rather than rigorous precision, but ... the former, figuratively speaking, takes 100 times more to put up 10 times more, and the latter takes 10 times more to return 100 times more ...
h. barter VS money. See the bottom of page 5, above the bottom-line notes, about the latter:
simplifies pricing calculations and negotiations from O(n^2) complexity to O(n) complexity
As a demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. (Btw, an extremely strong document, where there are entire books read and internalized behind each tight and contentful sentence!)
i. the heat death of the universe VS the realization that the 2nd law - the conservation law for entropy/information - does not allow that; the asymptoticity of the fundamental limits of nature; the fact that max entropy grows faster than (from, due to) the actual entropy growth; that entropy is not disorder; and that at the end of the day it is an unbounded, immortal universe ... because it's all a combinatorial explosion.
j. the Anthropic principle and the realization that it is extremely hard, if not impossible, to posit a lifeless universe ...
k. the Algoverse - my 'psychedelic' vision of the asymptotic, inexorable hierarchy of the Dirac sea of lower algos which take everything for almost nothing - up towards giving almost everything for almost nothing - Bucky Fuller's runaway Ephemeralization. Algorithms are things. Objects. Structure. Homoousic, or consubstantial with their input and output. Things taking things and making things outta the former. Including other algos of course! Stronger ones.
l. the Masa Effect. The master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips, with the same thing which scales supply in such a too-much way.
m. the Pierre de Latil 1950s and Stanislaw Lem 1960s (copied 1:1 by Tegmark) hierarchy. Of degrees of self-creating freedom of Effectors ...
n. limits of growth - present in any particular moment and in any finitary setting of rules, but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light vs ... photosynthesis.
o. Ray Kurzweil - static vs exponential thinking.
p. Craig Venter's Human Genome Project which, when commenced in 1990, was ridiculed as something that would be unbearably expensive and take centuries to finish. And it did: it cost a fortune unbearable for 1990, and it did take centuries of subjective time, as per the initial projections' conditions, being completed in the year 2000.
q. Jeff Bezos' vision of a Solar-System-wide Mankind:
''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''
r. the 'wastefulness' of data centers and crypto-mining colocation facilities ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum calculation energy.)
s. the log-scale intuitive measure and smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:
"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950s is about 200 billion man-years ago."
Oops! Go back to example q. With a population of 1 trillion humans, the 'singularities' would occur once a year?!
t. the Tau !!
I could continue with these examples ... forever [wink] - excuse me if I've bored you - but I think at least that minimum needed to be shown, and it is enough to grok the big picture.
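Several of the numeric claims in the list above can be made concrete. Below is a small Python sketch of my own (an illustration, not anything from the original post) checking three of them: the sqrt(n)-vs-n^2 contrast of example g, the O(n^2)-to-O(n) pricing reduction of example h, and the man-years arithmetic behind the quote in example s, applied to example q's trillion-human population.

```python
import math

# Example g: "Darwinian" sqrt(n) scaling vs Metcalfe's n^2 scaling.
def darwinian_value(n):
    return math.sqrt(n)          # 100x more input -> only 10x more value

def metcalfe_value(n):
    return n ** 2                # 10x more input -> 100x more value

assert math.isclose(darwinian_value(100_000) / darwinian_value(1_000), 10)
assert metcalfe_value(10_000) / metcalfe_value(1_000) == 100

# Example h: barter needs an exchange rate for every PAIR of goods;
# money needs only one price per good, against the common medium.
def barter_prices(n):
    return n * (n - 1) // 2      # O(n^2) pairwise exchange rates

def money_prices(n):
    return n                     # O(n) prices

assert barter_prices(1_000) == 499_500
assert money_prices(1_000) == 1_000

# Example s: a 'singularity' every 500-1000 billion man-years (per the quote),
# at a population of one trillion humans (per the Bezos quote in example q).
population = 1e12                 # humans
man_years_per_year = population   # each person contributes one man-year per year
assert man_years_per_year / 1000e9 == 1.0   # ~1 singularity per year, slow end
assert man_years_per_year / 500e9 == 2.0    # ~2 per year, fast end
```

The last stanza confirms the "once a year" aside: a trillion people accumulate a trillion man-years annually, which is exactly the quoted 500-1000 billion man-year interval between singularities.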
Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.
Size matters. Scaling is the way. But the more general question is how a system handles change! This is fundamental enough to sit at the very core of the definition of life and intelligence.
Tauchain is all about change handling!
Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do:
Dear friends, please scroll back to example d. Yes, the human-mind-transcaler thing. The Ultimate Resource thing.
We are the ultimate resource.
We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves).
We as the-I are strong thinkers and creators; immensely more road lies ahead than has been traveled, yes, but still we, as the-I, are the momentary apex of the Effectoring business in the Known universe ... AND simultaneously, we as the-We are mediocre to outright dumb.
We are very far from proper scaling together. The Ultimate resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind laser. Dispersed fissibles, but not a concentration of critical masses.
We as the-We, paradoxically, persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not handle scaling well at all.
The individual human mind is the unscaled transcaler.
Tau is the upscaler of that transcaler.
I'll introduce herewith another 'poetic' neologism which occurred to me to depict the scaling props of a system, after the Scrooge factor of ''Tauchain - Tutor ex Machina'', and it is the:
Spawn factor
- the capacity and ability of a system to grow through, despite, against, across, from and via changes. Just as 'cuboid' covers all rectangular things (squares, cubes, tesseracts ...) regardless of their dimensionality, the Spawn factor is to be a generalization of all orders of scaling. A zillion light-years from rigor, of course, as I'm at least that same distance from my own Leibnizization. For a lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.
Tau transcends the infinite regress of orders of scaling of scaling of scaling ... by being self-referential. Or recursive.
What is the Spawn factor of Tau?
If you'll let me, I'll illustrate this with a poetic paraphrase of the famous piece by Frank Herbert:
I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.