Consensus Morality and Tauchain | Consensus Gentium. By Dana Edwards. Posted on Steemit. September 15, 2018.
An ancient criterion of truth, the consensus gentium (Latin for agreement of the people), states "that which is universal among men carries the weight of truth" (Ferm, 64). A number of consensus theories of truth are based on variations of this principle. In some criteria the notion of universal consent is taken strictly, while others qualify the terms of consensus in various ways. There are versions of consensus theory in which the specific population weighing in on a given question, the proportion of the population required for consent, and the period of time needed to declare consensus vary from the classical norm.
In the past I made a controversial statement that the law is amoral. That statement is based on a simple understanding of legal positivism. Take note that I am not a legal scholar or legal philosopher; my background is in ethical philosophy and political philosophy. That being said, if we look at the ideas behind legal positivism, they lead to the conclusion that law and morality have nothing to do with each other. In this post I will try to clarify some of my thoughts on this topic and also address a question I was asked about whether democracy is moral or immoral. I will also discuss the concept of consensus morality and the implications it could have for Tauchain, which by design will be permitted to have law(s). Will the law(s) in Tauchain be moral or immoral? Is it possible to align a moral framework with the creation of all laws in Tauchain? Which moral framework, and will it be reached by consensus?
In order to understand this post we first have to consider the question: what is consensus morality? To discuss this topic I will divide morality into two kinds: private morality and public morality. This also introduces the question of whether public morality is authentic or coerced, as that depends on how it emerges.
Private morality is what you internally think or feel is right or wrong. This could be because you did some sophisticated calculation as a consequentialist, or it could merely be that you feel a certain way about it. For example you could say "eating meat is wrong", and this would be your personal opinion, an expression of how you feel about eating meat. Now if you say "eating meat is wrong because it promotes animal suffering", this is also an expression of your opinion, but you now have a goal attached: to avoid promoting animal suffering. That goal suggests that you value the minimization of animal suffering as a kind of optimization strategy.
If you are still following, private morality can also be based on your religious convictions: because the Bible says something is wrong, or because you were taught the golden rule, it is in your opinion wrong to behave in ways which violate those teachings. The golden rule is an example of a heuristic rule, a mental shortcut which people take because they believe it leads to good results most of the time. There are many such rules which people follow, including Kant's categorical imperative, but adherence to a heuristic rule still only yields an opinion. We can also consider the non-aggression principle an example of a heuristic rule.
Public morality on the other hand is a different kind of morality entirely. A private individual has a private morality because that individual is responsible only for themselves in their decisions. A public individual is in a position where other people have a stake in what they are doing. For example, the CEO of a company cannot simply do what they think is right, because the shareholders have funds at stake. The CEO has a fiduciary duty which outweighs their personal opinions on what is right and wrong. This fiduciary duty is to the shareholders of the company and is both a legal and an ethical obligation. In the case of a public company, if the company weighs consequences, the rightness or wrongness of a decision is based on data. For example, a company might rely on focus groups to determine what a customer might want, or on spiritual advisers and ethical focus groups to determine what the shareholders (and customers) would perceive as right. This is because if the CEO does not do what is in the best interest of the shareholders and customers, then the CEO will simply be replaced by another CEO who will.
Public morality is reached by some process which results in a moral consensus. The moral consensus of 2018 is not going to be the same as the moral consensus of 1969; moral attitudes change over time. A company which seeks to exist and remain profitable for decades must remain in good moral standing for those decades. The only way a company can stay aligned with current moral trends is through data analysis. In other words, data science is how "right" and "wrong" are determined. For example, public sentiment is tracked, and from that the marketing team knows where the line in the sand is and which line not to cross in their marketing campaign. The phrase "we went too far" is common in business because going too far simply means pushing the boundaries of what is acceptable. This can also become problematic: a company that bet on the moral consensus of the 1800s (slavery is right) would find after the Civil War that the consensus had reversed (slavery is wrong) and would have to change its position. In other words, the moral consensus is always changing and is in essence producing moral populism.
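To make the sentiment-tracking idea concrete, here is a minimal sketch of how a marketing team might watch for "we went too far". The class name, window size, scoring scale, and threshold are all invented for illustration; real sentiment analysis pipelines are far more involved.

```python
from collections import deque

class SentimentTracker:
    """Rolling average of public sentiment on a topic (hypothetical scores in [-1, 1])."""

    def __init__(self, window=5):
        self.scores = deque(maxlen=window)  # keep only the most recent scores

    def record(self, score):
        self.scores.append(score)

    def average(self):
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

    def crossed_the_line(self, threshold=-0.5):
        # Sustained negative sentiment is the signal that "we went too far"
        return self.average() < threshold

tracker = SentimentTracker(window=3)
for score in [-0.2, -0.6, -0.9]:
    tracker.record(score)
print(tracker.crossed_the_line())  # True: the rolling average has drifted below the line
```

The point of the rolling window is that a single outraged reaction does not trip the flag; only a sustained shift in consensus does, which mirrors how moral consensus itself moves gradually.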
Consensus morality on Tauchain
Consensus morality is essentially a publicly recognized framework for right and wrong. Consensus morality on Tauchain, for example, could be arrived at if we simply have discussions on topics of ethics. Over time our discussions will converge in such a way as to produce a consensus morality. That is, a moral attitude of the day, of the year, and so on, since it is merely the currently popular opinion and sentiment on what is right and what is wrong. So consensus morality is in my opinion likely to be a very important concept going forward, and it is a concept which Tauchain (and blockchains like Steem) may enable.
Consensus morality and potential problems
So the question I was asked is about democracy. The idea put forth to me was that democracy is immoral because it is a form of coercion. I do not personally buy into the idea that democracy is inherently immoral or inherently coercive, though I will say that democracy implemented in the wrong way can become coercive. This is why the emphasis on privacy may be a requirement: if there is no privacy then all votes could be coerced. If the idea is to have a network which is truly moral then we would require that every moral opinion be expressible. Moral opinions which are unpopular may be censored or discouraged from being expressed in a transparent ecosystem. This means a transparent ecosystem may, under certain circumstances, produce a coerced consensus morality; that is, votes which are public and attributable to certain individuals may be mere virtue signals rather than honest (authentic) opinions on what is right and wrong.
As a result this transparency may skew the results of any poll on any subject. A private or anonymous poll can capture a result which in theory expresses a true opinion. In addition there is the possibility of futarchy, which would allow prediction markets and other mechanisms to discover true sentiment on moral questions. My answer to the question is that whilst democracy is not inherently wrong, it is also not inherently right. Democracy is a tool which, used in the right circumstances, may be best suited for achieving the ends. If no better tool exists to achieve the ends then democracy may in fact be the choice which leads to the least bad consequences compared to other potential choices. That being said, the ideal of consequentialism is to reduce wrongness and increase rightness over time by measuring the consequences of every choice, such as private ballot voting vs transparent voting.
Privacy has both its risks and its benefits with regard to consequences. The benefits include coercion resistance. The risks on the other hand include an increased ability to bribe and thus coerce. So while in theory a person with privacy can express an authentic opinion (have genuine speech rights), it is also true that anyone could anonymously (privately) be selling their opinion and thus their vote. It is going to be a challenge to determine when privacy is the right tool for the job and when transparency is.
In the positivist view, the "source" of a law is the establishment of that law by some socially recognised legal authority. The "merits" of a law are a separate issue: it may be a "bad law" by some standard, but if it was added to the system by a legitimate authority, it is still a law.
Legal positivism states that law and morality are not one and the same. Just because something is legal does not mean it is moral, and just because something is illegal does not mean it is immoral. From this basis I reached the conclusion that because immoral laws exist alongside moral ones, the law as a whole is amoral. That is to say, whether a law can be made or unmade does not depend on whether the law produces good or even desirable consequences. We could for example look at the drug laws and the war on drugs for examples of policies which produce mass incarceration, but was that the intended consequence? The drug laws would seem to be immoral according to consequentialism unless the intended consequence was mass incarceration; if the intended consequence was harm reduction, then the current drug laws are ineffective. What do these laws actually achieve? It doesn't really matter, because the law is amoral. To align the law with morality is also problematic because it could only align with public morality, which under consequentialism may also often lead to bad or unintended consequences.
A potential solution is to allow participants in the ecosystem to rate the laws over time. Ratings rising or falling would provide a feedback loop indicating when a law should be replaced. This is something we don't seem to have in the current legal system, or if we do have it, what is actually done when many people express the opinion that a particular law is immoral or not moral enough? If every law on Tauchain could be rated, reviewed, discussed continuously, and improved indefinitely, then we may actually get somewhere.
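The rating feedback loop could be sketched as follows. The class, the 1–5 rating scale, the threshold, and the law identifiers are all hypothetical; a real on-chain mechanism would need sybil resistance and time-weighting, which this sketch ignores.

```python
from statistics import mean

class LawRegistry:
    """Hypothetical sketch: participants rate laws; low averages flag a law for review."""

    def __init__(self):
        self.ratings = {}  # law_id -> list of ratings (1-5)

    def rate(self, law_id, rating):
        self.ratings.setdefault(law_id, []).append(rating)

    def flagged_for_review(self, threshold=2.5):
        # The feedback loop: laws whose average rating falls below the
        # threshold are candidates for replacement
        return [law for law, rs in self.ratings.items() if mean(rs) < threshold]

registry = LawRegistry()
registry.rate("drug-law-1971", 2)
registry.rate("drug-law-1971", 1)
registry.rate("privacy-law", 4)
print(registry.flagged_for_review())  # ['drug-law-1971']
```

The interesting design question is what happens after a law is flagged: automatic repeal would be moral populism in its purest form, whereas flagging followed by structured discussion is closer to the continuous review described above.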
How Tauchain and the Exocortex can give anyone a conscience and make anyone more law abiding. By Dana Edwards. Posted on Steemit. September 2, 2018.
First, "anyone" is not literal. By anyone I mean anyone with a reasonable level of intelligence who is willing to take the advice generated by the network. The network would include human beings and machines; it would learn, and would be more properly defined as a complex adaptive system. Tauchain would enable the emergence of this network. This post is about the network which can emerge from Tauchain, and about how people who intend to be as moral as possible whilst also complying with the law as much as possible might leverage it. This post assumes that the human brain has finite memory and comprehension capacity, and that every human being can benefit from enhancing these naturally limited capacities in the areas of legal comprehension and risk literacy (under the assumption that most of us, perhaps all of us, do not know every law on the books but need to comply with the laws most likely to be aggressively enforced).
The Personal Moral Assistant
The PMA is a concept I've been thinking about for years now: the idea that we can augment our ability to be moral persons. A PMA is a personal moral assistant, and in an ideal world every person born would have one. It would be an interface similar to Cortana or Siri where you can ask any question pertaining to whether a particular action is right or wrong. The PMA would solve the problem using the same priorities you would, so you would get a definite right-or-wrong result.
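A PMA that "uses the same priorities you would" could be sketched as a weighted scoring function over the user's own declared values. Everything here is invented for illustration, including the value names, the weights, and the idea that an action's effects can be reduced to numbers in [-1, 1]; a real PMA would need far richer moral reasoning.

```python
# Hypothetical sketch of a Personal Moral Assistant verdict function.
def pma_verdict(action_effects, user_values):
    """action_effects: value -> estimated impact in [-1, 1].
    user_values: value -> how much this user weighs that value."""
    score = sum(user_values.get(value, 0) * impact
                for value, impact in action_effects.items())
    return "right" if score >= 0 else "wrong"

# The same action gets different verdicts for users with different priorities,
# which is the point: the PMA reflects *your* private morality, not a public one.
my_values = {"animal_welfare": 0.9, "convenience": 0.2}
eating_meat = {"animal_welfare": -1.0, "convenience": 0.5}
print(pma_verdict(eating_meat, my_values))  # 'wrong' under these weights
```

Note that this is exactly private morality as defined earlier: the verdict is relative to one person's weights, so two honest PMAs can disagree without either being broken.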
A Personal Moral Assistant is just one primary use case. These personal assistants over Tauchain could also include, for instance, a Personal Compliance Assistant. This is essentially another bot, but instead of dealing with moral problems this bot would handle compliance. If you're trying to accomplish a goal, this bot would make sure that you do so following all the known laws as your exocortex currently understands them. This would enable people to avoid legal pitfalls whilst chasing opportunities.
In order to go from poor to rich in this world requires taking risks. There is no way around risk taking if you want to get ahead. Risk literacy is essential and very few people who are poor have risk literacy. The PMA might be able to tell a person whether a certain choice aligns with their current values. The PCA might tell a person whether a certain choice complies with the laws. What about opportunities? An opportunity web crawler agent could theoretically search across the entire Internet to find opportunities which match your chosen risk profile.
What are we doing today?
Today we often have to make choices by trial and error. If we aren't lucky enough to have mentors or people who can guide us, then the only way to learn is to make the common mistakes. When we deal with moral problems today we often rely on holy scripture interpreted by other human beings who are just as flawed as we are. We simply don't have a bot which could interpret the scripture in a completely logical way. In other words, we don't have a digital representation of the minds of our spiritual guides.
We also have a situation where some of us can afford to comply with every law and take the lowest risk approach while others simply don't have the resources available to pay the expensive legal fees. Some people get better legal advice than other people as well. What if we could get at least some level of legal assistance from our intelligent assistant? What if this intelligent assistant can even ask human beings who have legal knowledge to help?
And finally what if we could figure out which risks are worth taking and which are not worth taking? It's one thing to find opportunities but another to be able to assess them. People get scammed because at the end of the day our emotions influence our ability to do proper assessment of opportunities. I'm human and it even happens to me from time to time. What if we could avoid this by using the capabilities of Tauchain to analyze massive amounts of information for us which our brains could never handle?
Opportunity Crawler Bot
I ask a simple hypothetical question: what if you could have set a bot to search the Internet for opportunities that resemble Bitcoin in 2008? What if this bot were activated and searched for an indefinite period of time across an undetermined yet expanding number of networks? If you define "Bitcoin in 2008" in a way the bot can make sense of, then it could search for anything which meets those criteria. We have this technology now but it's extremely primitive. On Google you can set up alerts for certain things, but what if you could go beyond mere alerts and look for code on Github, the individuals involved with it, and certain growth patterns?
A way to think about these bots / intelligent assistants
One way to think about these intelligent assistants is as part of your extended mind. These bots essentially help you to think better and communicate better. It's still you, and what they do on your behalf is essentially as if you did it. So the total collection of all of the agents under your control represents your complete exocortex. It will take great responsibility and wisdom to use these abilities in a way which is perceived by the world as ethical, moral, and legal. It is for these reasons that I want to open a discussion: how would each of you use such technology or such bots if they existed, and how would you think about them?
Logo by CapitanArt
Suggested readings to better understand the Tau ecosystem, Tau Meta Language, Tau-Chain and Agoras, and collaborate in the development of the project.