The Moral Algorithm

By R.N. Carmona

There are two ways in which morality can be viewed as an algorithm. One is individualistic, which will be discussed briefly; the other is pluralistic. Before moving forward, it will be useful to define what an algorithm is: a set of rules that defines a series of operations such that each rule is definite and effective and such that the series ends in a finite span of time.1 The individualistic view requires some knowledge of the philosophy of mind, in particular of the Computational Theory of Mind (CTM).
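
By way of illustration, here is a minimal sketch in Python (my own example, not drawn from Stone) of a procedure that satisfies this definition: each rule is definite, each step is effective, and the series of operations ends in a finite span of time.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a definite, effective procedure that
    halts after finitely many steps for any positive integers."""
    while b != 0:        # each pass applies a single, unambiguous rule
        a, b = b, a % b  # the remainder strictly shrinks, so the loop must end
    return a

print(gcd(48, 18))  # -> 6
```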

Hilary Putnam was the first to propose CTM, the view that likens the mind to a computer.2 Since its inception, CTM has been developed further; a notable contribution, for example, is Giulio Tononi’s Integrated Information Theory of Consciousness.3 If one assumes that CTM is correct, then the mind is computational. If the mind is computational, a number of algorithms might exist within the mind, and the moral algorithm would be among them. An interesting feature of morality is that the moral agent rarely deliberates before acting morally. The algorithm develops along with an individual’s theory of mind, and as it develops, it learns to output correct solutions with increasing accuracy. This is because the algorithm starts at an initial state in which its first input is received, which roughly corresponds to parents teaching children right from wrong and instilling their cultural values in them. Harold Stone stated that “for people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, without the need for thought.”4 An individualistic moral algorithm would therefore be one built for automated reasoning, which roughly aligns with how humans reason about morality. Far from the careful exercise of deduction or mathematical abduction, moral behavior does appear automated; it appears intuitive if not impulsive. Whether the mind aligns with CTM is an open question, and even assuming it does, whether morality is an algorithm in the mind is another open question. It is therefore better to approach the idea of a moral algorithm from a pluralistic angle.
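
Before doing so, the individualistic picture just sketched can be caricatured in code. The following toy (the situations and function names are my own illustrative assumptions, not anything CTM specifies) treats moral judgment as a lookup trained by corrective feedback and then applied without deliberation, in Stone’s “robot-like” sense.

```python
# Toy caricature of an individualistic moral algorithm: judgments are
# learned from corrective feedback (the "initial input" of upbringing)
# and then retrieved without deliberation.
learned_judgments = {}  # situation -> "permissible" / "impermissible"

def teach(situation: str, verdict: str) -> None:
    """Corrective feedback, e.g. parents instilling right and wrong."""
    learned_judgments[situation] = verdict

def judge(situation: str) -> str:
    """Automated response: no deliberation, just retrieval."""
    return learned_judgments.get(situation, "undetermined")

teach("taking a sibling's toy", "impermissible")
print(judge("taking a sibling's toy"))  # -> impermissible
print(judge("sharing a meal"))          # -> undetermined (no input yet)
```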

Algorithms, for one, are given instructions, an initial input. If this applies to an individual, it works just as well for a group. Without intending to endorse normative relativism,5 it is interesting that cultures differ from one another in their moral values. Though they differ, a moral algorithm, assuming it is given sufficient distribution (D), will eventually sift out moral values that aren’t conducive to the good of the individual or the group. With that said, if the moral algorithm is viewed as an instance of crowdsourcing, as pluralistic, then it will be self-improving. A good example of a self-improving algorithm is the one behind Google’s search engine.6 An advantage of crowdsourcing is that it rules out the idiosyncrasies of certain individuals and groups.7 Marcus, a character in Rebecca Goldstein’s Plato at the Googleplex: Why Philosophy Won’t Go Away, states the following:

There’s some ideal algorithm for working it out, for assigning weights to different opinions. Maybe we should give more weight to people who have lived lives that they find gratifying and that others find admirable. And, of course, for this to work the crowd has to be huge; it has to contain all these disparate vantage points, everybody who’s starting from their own chained-up position in the cave [Plato’s cave analogy8]. It has to contain, in principle, everybody. I mean, if you’re including just men, or just landowners, or just people above a certain IQ, then the results aren’t going to be robust.9

The crowd this algorithm can draw from consists of over seven billion individuals and thousands of groups: cultural, religious, ethnic, and so on. In theory, the algorithm has significant D stemming from billions of individual agents and thousands of groups. Furthermore, it won’t face the issue of the unknown, since the contents of morality are generally understood; even a run-of-the-mill psychopath understands right from wrong, though he chooses not to adhere to moral norms. Given that it has substantial D, its running time has already been optimized. The next step is its machine-learning nature, which is pivotal to self-improvement.10 The algorithm can also use extraneous information to improve performance; it can, for instance, use information gathered from a group like the Nazis, a perfect example of unacceptable behavior, to improve performance. Unlike Goldstein’s EASE (Ethical Answers Search Engine), which, like the individualistic moral algorithm, is built for automated reasoning, the pluralistic moral algorithm would be built for data processing. Like Google’s search engine, it will use data to self-improve.
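
To make the crowdsourcing picture concrete, here is a minimal sketch, with hypothetical agents and weights of my own devising rather than anything Goldstein specifies, of how weighted aggregation across many agents could sift out idiosyncratic verdicts while the weights self-improve with each round.

```python
from collections import defaultdict

# Each agent reports a verdict on a practice; weights start equal and are
# adjusted round by round, so idiosyncratic or discredited sources
# gradually lose influence over the aggregate.
weights = defaultdict(lambda: 1.0)

def aggregate(verdicts: dict[str, bool]) -> bool:
    """Weighted majority verdict across agents (True = acceptable)."""
    score = sum(weights[a] * (1 if v else -1) for a, v in verdicts.items())
    return score > 0

def update_weights(verdicts: dict[str, bool], outcome: bool) -> None:
    """Self-improvement: agents who agreed with the aggregate gain weight,
    agents who disagreed lose weight."""
    for agent, verdict in verdicts.items():
        weights[agent] *= 1.1 if verdict == outcome else 0.9

round1 = {"agent_a": True, "agent_b": True, "agent_c": False}
outcome = aggregate(round1)
update_weights(round1, outcome)
print(outcome, dict(weights))
```

The weighting rule here is one arbitrary choice among many; the point is only that a crowd-fed procedure can revise its own parameters, which is what “self-improving” amounts to in this context.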

The notion of a pluralistic moral algorithm, and consequently an individualistic moral algorithm, can be related to procedural realism. Procedural realism states that “there are answers to moral questions because there are correct procedures for arriving at them.”11 Korsgaard adds that because people are rational agents, they have an ideal person they want to become and thus guide their actions accordingly. What is most important on her view is that moral agents self-legislate.12 Self-legislation aligns well with the notion of both an individualistic and a pluralistic moral algorithm. It also aligns with Kant’s autonomy formulation of the categorical imperative, which states that one should act in such a way that one’s will can regard itself at the same time as making universal laws through its maxims.13 Arguably, something much simpler than Kant’s formulation can be at play when speaking of autonomy and self-legislation. However, Kant’s formulation of the Kingdom of Ends takes us from the individualistic to the pluralistic, because it states that one should act as if one were, through one’s maxims, a law-making member of a kingdom of ends.14 Morality, as a self-correcting algorithm, will, as Goldstein’s character suggested, cancel out the peculiar views some individuals hold. Thus, an agent can’t will an immoral law, let alone an immoral universal law. Self-governance, like knowledge, would be subsumed by crowdsourcing, thus becoming the self-government of the people rather than of just this or that individual. This is Kant’s Kingdom of Ends.

Ultimately, though morality can be considered an individualistic algorithm, it is best to view it as a pluralistic algorithm. In other words, it isn’t agent-specific but rather species-specific. Compelling arguments can be made in defense of an individualistic moral algorithm, especially in light of CTM. However, even if CTM isn’t the case, given how people have crowdsourced knowledge and given that humanity can be viewed as something akin to a computer network that allows for the sharing of data among individuals, a pluralistic moral algorithm could be the case even if an individualistic one is not. That is to say that a pluralistic moral algorithm doesn’t require an individualistic algorithm in order to emerge. A pluralistic moral algorithm can easily explain moral universals; furthermore, it can explain the common discomfort one feels when exposed to moral values that differ drastically from one’s own. In other words, approval and disapproval can be explained through the lens of a pluralistic moral algorithm. It need not follow from this that there is a pluralistic moral algorithm that processes moral data, so to speak. Nevertheless, morality does appear to have an inherent feature of self-improvement, which could arise from agent-specific autonomy, individual self-legislation, and the self-legislation of the general population. This idea can also transfer to law, which likewise features self-improvement (e.g., constitutional amendments).

Works Cited

1 Stone, Harold S. Introduction to Computer Organization and Data Structures. New York: McGraw-Hill, 1972. Cf. in particular the first chapter, “Algorithms, Turing Machines, and Programs.”

2 “The Computational Theory of Mind.” Stanford Encyclopedia of Philosophy. 1 Jul 2003

3 Tononi, Giulio. “Integrated Information Theory of Consciousness: An Updated Account.” Archives Italiennes de Biologie 150: 290-326, 2012

4 Ibid. [1]

5 Pecorino, Philip. “Chapter 8 Ethics: Normative Ethical Relativism.” Queensborough Community College. 2000

6 Goldstein, Rebecca. Plato at the Googleplex: Why Philosophy Won’t Go Away, p. 105. New York: Pantheon Books, 2014. Print.

7 Ibid. [6] (p.102)

8 Cohen, Marc. “The Allegory of the Cave.” University of Washington. 2006

9 Ibid. [6]

10 Ailon, Nir, et al. “Self-Improving Algorithms.” SIAM Journal on Computing (SICOMP) 40(2): 350-375, 2011

11 Korsgaard, Christine M., and Onora O’Neill. The Sources of Normativity, pp. 36-37. Cambridge: Cambridge University Press, 1996. Print.

12 Ibid. [11]

13 Pecorino, Philip. “Chapter 8 Ethics: The Categorical Imperative.” Queensborough Community College. 2000

14 Ibid. [13]
