
The Moral Algorithm

By R.N. Carmona

There are two ways in which morality can be viewed as an algorithm. One is individualistic, which will be discussed briefly; the other is pluralistic. Before moving forward, it will be useful to define what an algorithm is: a set of rules that defines a series of operations such that each rule is definite and effective and such that the series ends in a finite span of time.1 From the individualistic view, some knowledge of the philosophy of mind is necessary–in particular, of the Computational Theory of Mind (CTM).
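To make the definition concrete, here is a minimal sketch, not drawn from the source, of a procedure that satisfies Stone's criteria: Euclid's algorithm for the greatest common divisor. Every rule is definite and effective, and the series of operations ends in a finite number of steps for any pair of positive integers.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a textbook example of a set of rules that is
    definite, effective, and guaranteed to halt in finite time."""
    while b != 0:           # rule: repeat while a remainder is left
        a, b = b, a % b     # rule: replace (a, b) with (b, a mod b)
    return a                # the series of operations ends; output the answer

print(gcd(48, 18))  # -> 6
```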

Hilary Putnam was the first to propose CTM–the view that likens the mind to a computer.2 Since its inception, CTM has been developed further. A notable contribution, for example, is Giulio Tononi's Integrated Information Theory of Consciousness.3 If one assumes that CTM is correct, then the mind is computational. If the mind is computational, there might exist a number of algorithms within the mind, and the moral algorithm would be among them. An interesting feature of morality is that the moral agent doesn't think about moral action. The algorithm develops along with an individual's theory of mind, and as it develops, it learns to produce the correct solutions with increasing accuracy. This is because the algorithm starts off at an initial state in which its first input is received. This roughly corresponds to parents teaching children right from wrong and instilling their cultural values in them. Harold Stone stated that "for people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, without the need for thought."4 Therefore, an individualistic moral algorithm would be one built for automated reasoning, which roughly aligns with how humans reason where morality is concerned. Far from the careful exercise of deduction or mathematical abduction, moral behavior does appear automated. It appears intuitive if not impulsive. Whether or not the mind aligns with CTM is an open question. Even assuming it does, whether or not morality is an algorithm in the mind is another open question. Therefore, it is better to approach the idea of a moral algorithm from a pluralistic angle.

Algorithms, for one, are given instructions–an initial input. If this applies to an individual, then it works just as well for a group. Without intending to endorse normative relativism5, it is interesting that cultures differ from one another in their moral values. Though they differ, a moral algorithm, assuming it is given sufficient distribution (D), will eventually sift out moral values that aren't conducive to the good of the individual or the group. With that said, if the moral algorithm is viewed as an instance of crowdsourcing, as pluralistic, then it will be self-improving. A good example of a self-improving algorithm is the one belonging to Google's search engine.6 An advantage of crowdsourcing is that it rules out the idiosyncrasies of certain individuals and groups.7 Marcus, a character in Rebecca Goldstein's Plato at the Googleplex: Why Philosophy Won't Go Away, states the following:

There’s some ideal algorithm for working it out, for assigning weights to different opinions. Maybe we should give more weight to people who have lived lives that they find gratifying and that others find admirable. And, of course, for this to work the crowd has to be huge; it has to contain all these disparate vantage points, everybody who’s starting from their own chained-up position in the cave [Plato’s cave analogy8]. It has to contain, in principle, everybody. I mean, if you’re including just men, or just landowners, or just people above a certain IQ, then the results aren’t going to be robust.9

The crowd this algorithm can draw from consists of over seven billion individuals and thousands of groups–cultural, religious, ethnic, etc. In theory, the algorithm has significant D stemming from billions of individual agents and thousands of groups. Furthermore, it won't face the issue of the unknown, since the contents of morality are generally understood. That is to say that even a run-of-the-mill psychopath understands right from wrong, though he chooses not to adhere to moral norms. Given that it has substantial D, its running time has already been optimized. The next feature is its machine-learning nature, which is pivotal to self-improvement.10 Also, the algorithm can use extraneous information to improve performance. Thus, the moral algorithm can use information gathered from a group like the Nazis, a paradigm of unacceptable behavior, to improve performance. Unlike Goldstein's EASE (Ethical Answers Search Engine), which, like the individualistic moral algorithm, is built for automated reasoning, the pluralistic moral algorithm would be one built for data processing. Like Google's search engine, it will use data to self-improve.
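To make the talk of weighting opinions and self-improvement concrete, here is a minimal, hypothetical sketch in Python. It is not Goldstein's EASE and not a claim about how minds or cultures actually compute; the class name, the verdict scale, and the reweighting rule are all illustrative assumptions. It only shows how a crowd's verdicts could be combined by weight and how those weights could be revised as data accumulates, so that idiosyncratic outliers count for less over time.

```python
from collections import defaultdict

class MoralCrowdsourcer:
    """Toy, hypothetical aggregator: contributors submit verdicts on an action,
    verdicts are combined by weight, and weights are updated as feedback
    (data) accumulates -- a crude stand-in for a self-improving algorithm."""

    def __init__(self):
        # every contributor starts with equal weight (the "initial input")
        self.weights = defaultdict(lambda: 1.0)

    def aggregate(self, verdicts):
        """verdicts maps contributor -> score in [-1, 1]:
        -1 means 'clearly wrong', 1 means 'clearly right'."""
        total = sum(self.weights[c] * v for c, v in verdicts.items())
        norm = sum(self.weights[c] for c in verdicts)
        return total / norm if norm else 0.0

    def update(self, verdicts, outcome):
        """Self-improvement step: contributors whose verdicts matched the
        observed outcome gain weight; idiosyncratic outliers lose it."""
        for contributor, verdict in verdicts.items():
            agreement = 1.0 - abs(verdict - outcome) / 2.0   # 1 = full agreement
            self.weights[contributor] *= 0.5 + agreement     # multiplicative reweighting

# Hypothetical usage: three contributors judge an action, feedback arrives,
# and the aggregator reweights them for the next round.
crowd = MoralCrowdsourcer()
verdicts = {"group_a": 0.9, "group_b": 0.8, "group_c": -1.0}
print(round(crowd.aggregate(verdicts), 2))   # equally weighted estimate: 0.23
crowd.update(verdicts, outcome=1.0)          # observed consequences favor the action
print(round(crowd.aggregate(verdicts), 2))   # group_c's outlier verdict now counts less: 0.57
```

On this toy picture, a group like the one mentioned above simply enters the data as a contributor whose verdicts are repeatedly contradicted by observed outcomes, and whose weight therefore shrinks.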

The notion of a pluralistic moral algorithm, and consequently of an individualistic moral algorithm, can be related to procedural realism. Procedural realism states that "there are answers to moral questions because there are correct procedures for arriving at them."11 Korsgaard adds that because people are rational agents, they have an ideal person they want to become, and they thus guide their actions accordingly. What's most important on her view is that moral agents self-legislate.12 Self-legislation aligns perfectly with the notion of both an individualistic and a pluralistic moral algorithm. It also aligns perfectly with Kant's autonomy formulation of his categorical imperative, which states that one should act in such a way that one's will can regard itself at the same time as making universal laws through its maxims.13 Arguably, something much simpler than Kant's formulation can be at play when speaking of autonomy and self-legislation. However, Kant's formulation of the Kingdom of Ends takes us from individualistic to pluralistic because the formulation states that one should act as if one were, through one's maxims, a law-making member of a kingdom of ends.14 Morality, as a self-correcting algorithm, will, as Goldstein stated, cancel out the peculiar views some individuals hold. Thus, an agent can't will an immoral law–let alone an immoral universal law. Self-governance, like knowledge, would be subsumed by crowdsourcing–thus becoming the self-government of the people rather than just this or that individual. This is Kant's Kingdom of Ends.

Ultimately, though morality can be considered an individualistic algorithm, it is best viewed as a pluralistic algorithm. In other words, it isn't agent-specific but rather species-specific. Compelling arguments can be made defending an individualistic moral algorithm, especially in light of CTM. However, even if CTM isn't the case, given how people have crowdsourced knowledge and given that humanity can be viewed as something akin to a computer network that allows for the sharing of data among individuals, a pluralistic moral algorithm could be the case even if an individualistic moral algorithm is not. That is to say that a pluralistic moral algorithm doesn't require an individualistic algorithm in order to emerge. A pluralistic moral algorithm can easily explain moral universals; furthermore, it can explain the common discomfort one feels when exposed to moral values that differ drastically from one's own. In other words, disapproval and approval can be explained through the lens of a pluralistic moral algorithm. From that, it need not follow that there is a pluralistic moral algorithm that processes moral data, so to speak. Nevertheless, morality does appear to have an inherent feature of self-improvement, which could arise from agent-specific autonomy, individual self-legislation, and the self-legislation of the general population. This idea can also transfer to law, which likewise features self-improvement (e.g., constitutional amendments).

Works Cited

1 Stone, Harold S. Introduction to Computer Organization and Data Structures. New York: McGraw-Hill, 1972. Cf. in particular the first chapter, "Algorithms, Turing Machines, and Programs."

2 "The Computational Theory of Mind." Stanford Encyclopedia of Philosophy. 1 Jul 2003.

3 Tononi, Giulio. "Integrated Information Theory of Consciousness: An Updated Account." Archives Italiennes de Biologie, 150: 290-326, 2012.

4 Ibid. [1]

5 Pecorino, Philip. "Chapter 8 Ethics: Normative Ethical Relativism." Queensborough Community College. 2000.

6 Goldstein, Rebecca. Plato at the Googleplex: Why Philosophy won’t Go Away, p.105. New York: Pantheon Books, 2014. Print.

7 Ibid. [6] (p.102)

8 Cohen, Marc. “The Allegory of the Cave.” University of Washington. 2006

9 Ibid. [6]

10 Ailon, Nir, et al. "Self-Improving Algorithms." SIAM Journal on Computing (SICOMP), 40(2), pp. 350-375. 2011.

11 Korsgaard, Christine M., and Onora O'Neill. The Sources of Normativity, pp. 36-37. Cambridge: Cambridge University Press, 1996. Print.

12 Ibid. [11]

13 Pecorino, Philip. "Chapter 8 Ethics: The Categorical Imperative." Queensborough Community College. 2000.

14 Ibid. [13]


The Problem of Evil: A Refutation of Plantinga’s Theodicy

By R.N. Carmona

Alvin Plantinga, a renowned Reformed philosopher and theologian, likely has more than the two theodicies discussed here. These two, however, are a common route for theists to take. The first defense is no doubt familiar to the reader: the Free Will defense. The second is also familiar, but less relied upon; for our purposes, it will be called the Ignorance defense.

Plantinga's Free Will defense fails for two reasons, but prior to demonstrating this, a fair treatment of his defense must be granted. So we will first look at what his defense is. His defense relies on two assumptions. He also has a set of possible worlds, one of which we'll consider. His first assumption is as follows:

(MSR1) God’s creation of persons with morally significant free will is something of tremendous value. God could not eliminate much of the evil and suffering in this world without thereby eliminating the greater good of having created persons with free will with whom he could have relationships and who are able to love one another and do good deeds.1

MSR1, on the surface, makes sense. It's plausible that this is the reason the Judeo-Christian god allows evil. MSR1, however, is based on a problematic version of free will, namely Libertarian free will. Libertarianism can be defined as the "view that seeks to protect the reality of human free will by supposing that a free choice is not causally determined but not random either."2 As commentary, Blackburn states that "[w]hat is needed is the conception of a rational, responsible intervention in the ongoing course of events". He adds that "[i]n some developments a special category of agent-causation is posited, but its relationship with the neurophysiological working of the brain and body, or indeed any moderately naturalistic view of ourselves, tends to be very uneasy, and it is frequently derided as the desire to protect the fantasy of an agency situated outside the realm of nature altogether."3 This statement implies Cartesian dualism, which is too tangential for our purposes. Whether or not Cartesian dualism helps the case for Libertarian free will, or whether or not it is necessary to make sense of such free will, shouldn't occupy us here.

Libertarian free will is itself questionable. Michael Tooley of the University of Colorado writes:

One problem with an appeal to libertarian free will is that no satisfactory account of the concept of libertarian free will is yet available. Thus, while the requirement that, in order to be free in the libertarian sense, an action not have any cause that lies outside the agent is unproblematic, this is obviously not a sufficient condition, since this condition would be satisfied if the behavior in question was caused by random events within the agent. So one needs to add that the agent is, in some sense, the cause of the action. But how is the causation in question to be understood? Present accounts of the metaphysics of causation typically treat causes as states of affairs. If, however, one adopts such an approach, then it seems that all that one has when an action is freely done, in the libertarian sense, is that there is some uncaused mental state of the agent that causally gives rise to the relevant behavior, and why freedom, thus understood, should be thought valuable, is far from clear.4

He adds that the Libertarian can make a switch from event-causation to agent-causation, but there's no cogent account of agent-causation either. This harkens back to Blackburn's sentiments.

Plantinga discusses four possible worlds, the third of which is the most important; we will refer to it as W1. It looks as follows:

(a) God creates persons with morally significant free will

(b) God does not causally determine people in every situation to choose what is right and to avoid what is wrong and

(c) There is evil and suffering in W1.5

If god exists, this is precisely the kind of world we seem to live in. Plantinga’s defense is that god couldn’t eliminate evil without infringing upon our choices and by extension, what good might come of them. Plantinga, in this vein, states:

A world containing creatures who are sometimes significantly free (and freely perform more good than evil actions) is more valuable, all else being equal, than a world containing no free creatures at all. Now God can create free creatures, but he cannot cause or determine them to do only what is right. For if he does so, then they are not significantly free after all; they do not do what is right freely. To create creatures capable of moral good, therefore, he must create creatures capable of moral evil; and he cannot leave these creatures free to perform evil and at the same time prevent them from doing so…. The fact that these free creatures sometimes go wrong, however, counts neither against God’s omnipotence nor against his goodness; for he could have forestalled the occurrence of moral evil only by excising the possibility of moral good. (Plantinga 1974, pp. 166-167)6

The claim that a world where humans have Libertarian free will is more valuable than one without it is dubious. Plantinga can't purport to know what such a world would look like. Furthermore, if we are to take predestination seriously, verses like Psalm 139:16 have to be squared with Plantinga's account of free will. The context of that verse seems to imply we don't have free will. There is, if that verse and another which will be discussed shortly are to be believed, a celestial determinism, if you will. Consider, for example, Exodus 9:12. There is no sense in which Pharaoh was free to listen. His heart was hardened by god; god, in other words, violates stipulation (b) in W1.

So it appears, on the theist's view, that we live in a world that resembles W1 but differs in a significant way: god sometimes causally determines our moral decisions. Given the problems with Libertarian free will and the predestination just discussed, Plantinga's Free Will defense is inadequate.

Another reason it fails is that it focuses on human-driven evil and not natural evil. To cover this base, Plantinga deploys MSR2, which states that "God allowed natural evil to enter the world as part of Adam and Eve's punishment for their sin in the Garden of Eden."7 This is textually, historically, and even scientifically dubious, and it is also too tangential for our purposes. Suffice it to say that here Plantinga presupposes Christian theology to defend Christianity. MSR2 is, at best, unsubstantiated and, at worst, false. The burden of proof is then on Plantinga to demonstrate that Genesis 3 is a factual, historical account. It isn't enough to believe that it happened or to assert that it best explains human nature. These predilections are rooted in the very theology Plantinga is attempting to defend; they simply beg the question.

We will now turn to Plantinga's Ignorance defense. He himself doesn't call it that; the name is used here because the defense relies on our ignorance to work. In other words, the defense states that since our wisdom is incomparable to god's, we can't know why he allows evil. Moreover, since it's reasonable that he has some reason, no doubt unknown to us, for allowing evil, we can't reasonably blame god for the evil in the world. Let us turn to some of Plantinga's explications. Kai Nielsen states:

Plantinga grants that, as far as we can see, there are many cases of evil that are apparently pointless. Indeed there are many cases of such evils where we have no idea at all what reason God (if there is such a person) could have for permitting such evils. But, Plantinga remarks, from granting these things it does not follow that “an omnipotent and omniscient God, if he existed, would not have a reason for permitting them” (Plantinga 1993, 400). From the fact that we can see no reason at all for God to permit evils, we cannot legitimately infer that God has no reason to allow such evils. It is not just, Plantinga continues, “obvious or apparent that God could have reason for permitting them. The most we can sensibly say is that we can’t think of any good reason why he would permit them” (Plantinga 1993, 400).8

This, in a nutshell, is the Ignorance defense. We are, in other words, ignorant of god's will, and our wisdom pales in comparison to his. Nielsen, however, has the makings of a perfect counter. All that's needed is to see his counter from the point of view of one of god's attributes. Nielsen states that "it looks more like, if he exists and is all powerful and all knowing, that then he more likely to be evil." He adds that "we see that all the same he might possibly be, as Jewish, Christian, and Islamic religions say he is, perfectly good. But we cannot see that he is. The Mosaic God looks, to understate it, petty, unjust, and cruel to us."9 This counter is made perfect if we see it from the point of view of god's omniscience. God would know that we would be unable to see that he is good in light of natural evil. This evil is, in fact, gratuitous. God would have seen, in his omniscience, that the quantity of natural evil in the world would be enough to drive so many to doubt. This is apart from contradictory revelations, the limited range of Christianity, i.e., its limited capacity to appeal to people of other cultures, and the negative evidence against the existence of the Judeo-Christian god. We are then asked "to stick with a belief in what we see to be some kind of possibility, namely that God is, after all, appearances to the contrary notwithstanding, perfectly good."10 Like Nielsen, however, I see this as an obstinate appeal to the very faith that needs to be substantiated. Furthermore, I see this as an implied superiority of faith over reason. Like Galileo, who no doubt said this with a different sentiment, I "do not feel obliged to believe that same God who endowed us with sense, reason, and intellect had intended for us to forgo their use." There are other reasons showing that reason is superior to faith, especially since the former is the agreed-upon approach in all aspects of life except religion. Nielsen discusses this at length, but that's not exactly germane to this discussion.

Though we've called it the Ignorance defense, Plantinga does argue that we can be privy to god's reasons for allowing evil (Plantinga 1993, 400-401). This, unfortunately, relies on revelation and is thus dubious. No amount of revelation can make one privy to all instances of evil in the world, both human-driven and natural. God, for example, isn't keen on revealing to believers why a forest fire leads to the suffering and deaths of the animals in that ecosystem. This, in fact, seems to be of little concern given putative revelations in the Abrahamic faiths. God, given, for instance, the Book of Job, seems intent on justifying the existence of and need for human-driven evil. Plantinga employs the Book of Job in his defense. This, like the previous defense, is problematic. Given history and textual criticism, the Book of Job is mired in problems. We would, again, have to lean on an obstinate faith to consider it a good supplement to any theodicy or to see it as a theodicy all its own.

The Problem of Evil, especially when adding the element of gratuitous evil, remains an outstanding problem for theism. There is no cogent theodicy or defense against it, Plantinga notwithstanding. The Free Will and Ignorance defenses fail for a number of reasons, the most prominent of which is the groundless presuppositions underlying the arguments. This is to say nothing of the Leibnizian best possible world and defenses in that vein. Theodicies warrant fuller treatment, and such treatment has indeed been given. What we have, unfortunately, is one party who refuses to read what the opposition has to say. This is why some plainly and, no doubt, hyperbolically assert that solutions have been offered for centuries. These purported solutions have also been scrutinized, as has been briefly sketched out here. The Problem of Evil can be likened to a hemophiliac's wound: theodicies notwithstanding, theists haven't stopped the bleeding.

Works Cited

1 Beebe, James R. “Logical Problem of Evil”. Internet Encyclopedia of Philosophy. ND. Web. 3 Jan 2015.

2 Blackburn, Simon. The Oxford Dictionary of Philosophy. Oxford: Oxford UP, 1994. 208-209. Print.

3 Ibid. [2]

4 Tooley, Michael. “The Problem of Evil”. Stanford Encyclopedia of Philosophy. 2012. Web. 3 Jan 2015.

5 Ibid. [1]

6 Plantinga, Alvin as quoted in Ibid. [1]

7 Ibid. [1]

8 Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 303-304. Print.

9 Ibid. [8], p.308

10 Ibid. [9]