Category: metaphysics

Rescuing Logic From the Abuse of Bayes’ Theorem: Validity, Soundness, and Probability

By R.N. Carmona

In recent years, there has been a surge in the use of Bayes’ Theorem with the intention of bolstering this or that argument. This has resulted in an abject misuse or abuse of Bayes’ Theorem as a tool. It has also resulted in an incapacity to filter out bias in the context of some debates, e.g. theism and naturalism. Participants in these debates, on all sides, betray a tendency to inflate their prior probabilities in accordance with their unmerited epistemic certainty in either a presupposition or key premise of one of their arguments. The prophylactic, to my mind, is found in a retreat to the basics of logic and reasoning.

An Overview on Validity

Validity, for instance, is more involved than some people realize. It is not enough for an argument to appear to have logical form. An analysis of whether it, in fact, has logical form is a task that is seldom undertaken. When people think of validity, something like the following comes to mind: “A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid” (NA. Validity and Soundness. Internet Encyclopedia of Philosophy. ND. Web.).

Kelley, however, gives us rules to go by:

  1. In a valid syllogism, the middle term must be distributed in at least one of the premises
  2. If either of the terms in the conclusion is distributed, it must be distributed in the premise in which it occurs
  3. No valid syllogism can have two negative premises
  4. If either premise of a valid syllogism is negative, the conclusion must be negative; and if the conclusion is negative, one premise must be negative
  5. If the conclusion is particular, one premise must be particular (Kelley, D. The Art of Reasoning. WW Norton & Co. 2013. Print. 243-249)

With respect to the first rule, any argument that does not adhere to it commits the fallacy of undistributed middle. Logically, if we take Modus Ponens to be a substitute for a hypothetical syllogism, then undistributed middle is akin to affirming the consequent. Consider the following invalid form:

All P are Q.

All R are Q.

∴ All R are P.

When affirming the consequent, one reasons from P ⊃ Q and Q to P, illegitimately treating P ⊃ Q as though it were Q ⊃ P. It is not surprising that these two fallacies are so closely related, because both are illegitimate transformations of valid argument forms: the middle term, like the consequent, is never distributed, so it cannot license the inference. Consider the well-known Kalam Cosmological Argument. No one on either side questions the validity of the argument because validity, for many of us, is met when the conclusion follows from the premises. However, one can ask whether the argument adheres to Kelley’s rules. Analyzed closely enough, the argument arguably violates Kelley’s fourth rule. The difficulty is that seeing this requires transposing the argument from the fifth rule to the fourth; since the argument does not violate the fifth, it appears valid. Restated under the fourth rule, however, the problem becomes obvious: the universe is a particular in both Craig’s conclusion and in the second premise of his argument. Let’s consider the KCA restated under the fourth rule:

There are no things that are uncaused.

There is no universe that is uncaused.

∴ All universes have a cause.

Restating it this way appears controversial only because the argument seems to presuppose that there is more than one universe. Two negatives must have properties in common. Put another way, since there are many of all things, the universe cannot be the only thing of its kind, if we even agree that the universe is like ordinary entities at all. Craig, perhaps unintentionally, attempts to get a universal from a particular, as his argument restated under the fourth rule shows. Given this, we come to the startling conclusion that Craig’s KCA is invalid. Analyses of this kind are extremely rare in debates because most participants do not know or have forgotten the rules of validity. No amount of complexity hides a violation of basic principles. The advent of analytic philosophy with Russell and Moore led to increasingly complex arguments, and for the most part, validity is respected. As shown here, however, this is not always the case, so a cursory analysis should always be done at the start.
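The fallacy of undistributed middle can also be checked mechanically. Below is a minimal set-theoretic sketch (the sets are hypothetical stand-ins for the terms): modeling “All X are Y” as the subset relation X ⊆ Y, the form “All P are Q; All R are Q; therefore All R are P” admits true premises with a false conclusion.

```python
# Set-theoretic countermodel for the undistributed-middle form:
#   All P are Q; All R are Q; therefore All R are P.
# "All X are Y" is modeled as the subset relation X <= Y.

P = {1}        # e.g., ducks
Q = {1, 2}     # e.g., birds
R = {2}        # e.g., sparrows

assert P <= Q  # All P are Q: true
assert R <= Q  # All R are Q: true
print(R <= P)  # All R are P? False -- true premises, false conclusion
```

Any assignment in which R and P are disjoint subsets of Q serves as a countermodel, which is exactly what a middle term that is never distributed permits.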

Validity is necessary but not sufficient for an argument to prove effective and persuasive. This is why arguments themselves cannot substitute for or amount to evidence. Soundness is determined by taking a full account of the evidence with respect to the argument. The soundness of an argument is established given that the pertinent evidence supports it; otherwise, the argument is unsound. Let us turn to some simple examples to start.

An Overview of Soundness

“A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound” (NA. Validity and Soundness. Internet Encyclopedia of Philosophy. ND. Web.).

All ducks are birds.

Larry is a duck.

∴ Larry is a bird.

This argument is stated under Kelley’s fifth rule and is no doubt valid. Now, whether or not the argument is sound will have us looking for external verification. We might say that, a priori, we know that there are no ducks that are not birds. By definition, a duck is a kind of bird. All well and good. There is still the question of whether there is a duck named Larry. This is also setting aside the legitimacy of a priori knowledge because, to my mind, normal cognitive function is necessary to apprehend human languages and to comprehend the litany of predicates that follow from these languages. We know that ducks are birds a posteriori, but on this point I digress. Consider, instead, the following argument.

All ducks are mammals.

Larry is a duck.

∴ Larry is a mammal.

This argument, like the previous one, is valid and in accordance with Kelley’s fifth rule. However, it is unsound. This harks back to the notion that ducks belonging to the domain of birds is not a piece of a priori knowledge. Despite knowing that all ducks are birds, the differences between birds and mammals are not at all obvious. That is perhaps the underlying issue: a question of how identity is arrived at, in particular the failure of the essentialist program to capture what a thing is. The differentialist program would have us identify a thing by pinning down what it is not. It follows that we know ducks are birds because, anatomically and genetically, ducks do not have the signatures of mammals or of any other class, for that matter. A deeper knowledge of taxonomy is required to firmly establish that ducks are, in fact, birds.

An exploration of soundness is much more challenging when analyzing metaphysically laden premises. Consider, for example, the second premise of the KCA: “The universe began to exist.” What exactly does it mean for anything to begin to exist? This question has posed more problems than solutions in the literature; for our purposes, it is not necessary to summarize that debate here. We can say of a Vizio 50-inch plasma screen television that it began to exist in some warehouse; in other words, there is a given point in time at which a functioning television was manufactured and sold to someone. The start of a living organism’s life is also relatively easy to identify. However, mapping these intuitions onto the universe gets us nowhere because, as I alluded to earlier, the universe is unlike ordinary entities. This is why the KCA has not been able to escape the charge of the fallacy of composition. All ordinary entities we know of, from chairs to cars to elephants to human beings, exist within the universe. They are, as it were, the parts that comprise the universe. It does not follow from the fact that all ordinary things begin to exist that the universe must have begun to exist.

This is a perfect segue into probability. Again, since Bayes’ Theorem is admittedly complex and not something that is easily handled even by skilled analytic philosophers, a return to the basics is in order. I will assume that the rule of distribution applies to basic arguments; this will turn out to be fairer to all arguments because treating premises as distinct events greatly reduces the chances of a given argument being true. I will demonstrate how this filters out bias in our arguments and imposes on us the need to strictly analyze arguments.

Using Basic Probability to Assess Arguments

Let us state the KCA plainly:

Everything that begins to exist has a cause for its existence.

The universe began to exist.

∴ The universe has a cause for its existence.

As aforementioned, the first premise of the KCA is metaphysically laden. It is, at best, indeterminable because it is an inductive premise; all it takes is one uncaused entity within the universe to throw the entire argument into the fire. To be fair, we can only assign a probability of .5 to this premise being true. We can then use distribution to get the probability of the argument being sound: since we have a .5 probability of the first premise being true, and given that we accept that the argument is not in violation of Kelley’s rules, we can distribute this probability across the other premise and arrive at the conclusion that the argument has a 50% chance of being true.

This is preferable to treating each premise as an isolated event; I am being charitable to all arguers by assuming they have properly distributed their middles. Despite this, a slightly different convention might have to be adopted to assess the initial probability of an argument with multiple premises. An argument with six individual premises would have a 1.56% chance of being true, i.e., .5^6 ≈ .0156. This convention would be adopted because we want a probability between 0 and 100%. If we used the same convention used for simpler arguments with fewer premises, then an argument with six premises would have a 300% chance of being true (.5 distributed across six premises, i.e., .5 × 6). An arguer could then arbitrarily increase the number of premises in his argument to boost the probability of his argument being true. Intuitively, an argument with multiple premises has a greater chance of being false; the second convention, at least, shows this, while the first clearly does not. The jury is still out on whether the second convention is fair enough to more complex arguments. There is still the option of following standard practice and isolating an individual premise to see if it holds up to scrutiny. Probabilities do not need to be used uniformly; they should be used to make clear our collective epistemic uncertainty about something, i.e., to filter out dogma.
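The contrast between the two conventions can be made concrete in a few lines of Python; this is a sketch assuming the neutral .5 starting value per premise described above.

```python
# Two ways of scoring an argument's initial probability, assuming
# each of six premises starts at the neutral value 0.5.

premises = 6
p = 0.5

multiplicative = p ** premises  # premises treated as independent events
additive = p * premises         # naive distribution across premises

print(f"{multiplicative:.4%}")  # 1.5625% -- stays within [0%, 100%]
print(f"{additive:.0%}")        # 300% -- not a coherent probability
```

Only the multiplicative convention yields a genuine probability, and it shrinks as premises are added, matching the intuition that longer arguments have more ways to fail.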

Let us recall my negation strategy and offer the anti-Kalam:

Everything that begins to exist has a cause for its existence.

The universe did not begin to exist.

∴ The universe does not have a cause.

Despite my naturalistic/atheistic leanings, the probability of my argument is also .5 because Craig and I share premise 1. The distribution of that probability into the next premise does not change because my second premise is the negation of his second premise. In one simple demonstration, it should become obvious why using basic probabilities is preferable to using Bayes’ Theorem. No matter one’s motivations or biases, one cannot grossly overstate one’s priors or assign a probability much higher than .5 to metaphysically laden premises that are not easily established. We cannot even begin to apply the notion of a priori knowledge to the first premise of the KCA. We can take Larry being a bird as obvious, but we cannot take as obvious that the universe, like all things within it, began to exist and therefore has a cause.

Now, a final question remains: how exactly does the probability of an argument being sound increase? Probability increases in accordance with the evidence. For the KCA to prove sound, a full exploration of the evidence from cosmology is needed. A proponent of the KCA cannot dismiss four-dimensional black holes, white holes, a cyclic universe, eternal inflation, or any theory not in keeping with his predilections. If he does, his argument becomes one based on presupposition and is therefore circular. A full account of the evidence available in cosmology actually cuts sharply against the arteries of the KCA and therefore greatly reduces the probability of its being sound. Conversely, it increases the probability of an argument like the Anti-Kalam being true. The use of basic probability is parsimonious in that the percentage decrease of the Kalam being sound mirrors the percentage increase of the Anti-Kalam being sound. In other words, the percentage decrease of any argument proving sound mirrors the percentage increase of its alternative(s) proving true. So if a full account of the cosmological evidence lowers the probability of the Kalam being sound by 60%, the Anti-Kalam’s probability of being true increases by 60%: the Kalam would now have a 20% chance of being true while its opposite would now have an 80% chance of being true.
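The mirrored update can be sketched as follows; the 60% figure is the hypothetical evidence-driven change from the example above, applied as a relative change to each argument’s neutral starting point.

```python
# Mirrored update: a relative decrease in one argument's probability
# is matched by an equal relative increase in its negation's,
# so the two probabilities continue to sum to 1.

kalam = anti_kalam = 0.5             # neutral starting point for each
relative_change = 0.60               # hypothetical weight of the evidence

kalam *= (1 - relative_change)       # 0.5 -> 0.2
anti_kalam *= (1 + relative_change)  # 0.5 -> 0.8

print(kalam, anti_kalam)             # the pair still sums to 1
```

The design choice here is that an argument and its negation exhaust the possibilities, so whatever probability the evidence takes from one must land on the other.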

Then, if a Bayesian theorist is not yet satisfied, he can keep all priors neutral and plug in probabilities that were fairly assessed to compare a target argument to its alternatives. Even more to the point regarding fairness, rather than making a favored argument the target of analysis, the Bayesian theorist can make an opponent’s argument the target of analysis. It would follow that the opponent’s favored argument has a low probability of being true, given a more basic analysis that filters out bias and a systematic heuristic like the one I have offered. The heuristic is free of human emotion or, more accurately, of devotion to any given dogma. It also further underscores the significance of taking evidence seriously. This lends much credence to the conclusion that arguments themselves are not evidence. If that were the case, logically valid but unsound arguments would be admissible as evidence. How would we be able to determine whether one argument or another is true if the arguments themselves served as evidence? We would essentially regard arguments as self-evident or tautologous. They would be presuppositionalist in nature and viciously circular. All beliefs would be equal. This, thankfully, is not the case.

Ultimately, my interest here has been a brief exploration into a fairer way to assess competing arguments. All of this stems from a deep disappointment in the abuse of Bayes’ Theorem; everyone is inflating their priors, and no progress will be made if that continues to be permitted. A more detailed overview of Bayes’ Theorem is not necessary for these purposes and would likely scare away even some readers versed in analytic philosophy and more advanced logic. My interest, as always, is in communicating philosophy to the uninitiated in a way that is approachable and intelligible. At any rate, a return to the basics is in order. Arguments should continue to be assessed; validity and soundness must be met. Where soundness proves difficult to come by, a fair initial probability must be applied to all arguments. Then all pertinent evidence must be accounted for, and the consequences the evidence presents for a given argument must be absorbed and accepted. Where amending an argument is possible, it should be restructured, to the best of the arguer’s ability, in a way that demonstrates recognition of what the evidence entails. This may sound like a lot to ask, but the pursuit of truth is an arduous journey, not an easy endeavor by any stretch. Anyone who takes the pursuit seriously will go to great lengths to increase the epistemic certainty of his views. All else is folly.

Philosophy of Religion Series: Why Reincarnation is Incompatible With Naturalism

By R.N. Carmona

It is standard procedure in philosophical practice to anticipate rebuttals. In my last post, I explored reincarnation within a naturalistic framework. Strangely enough, my argument leans much too heavily on the nature aspect of who we are. Yes, if consciousness existed in a vacuum, and if the subject were sufficient to account for consciousness, then my argument would be quite forceful. A possible rebuttal takes the nurture angle, arguing that the subject is not sufficient to account for consciousness because the object is just as important, if not more so, when attempting to explain consciousness.

If this is the case, then identity is not so elusive after all and Buddhists and people who deny identity are wrong. How then do any of us answer the question: who are you? If you reply with “Sam,” then you have given us nothing but your name. As it turns out, your name is quite common. If your name gave us a full account of who you are, then anyone named Sam is also you. Clearly, we both disagree with this conclusion so your name is not enough. Exercises like this have been done ad nauseam, so I will spare you the runaround and just give you my answer: I am a particular experiencer. As are you.

Now, that requires some explaining, and this is where the object comes into focus. What makes me unique and distinct from you is that I have had an innumerable set of experiences that, taken together, you have not had. Granted, it is very possible that we share at least one experience, even if we live a world apart. There are more than a billion people in China whom I have never met or interacted with, and I can say with all the confidence in the world that they share experiences with me: being born, coming down with a cold, sweating, shivering, scrolling on a social media app, feeling a certain emotion like anger or sadness, and so on. I can also say that not one of them shares every single experience unique to me. If they are Chinese nationals, then they probably did not grow up in the Bronx. They do not identify as Puerto Rican or American. They do not check the Latino box when filling out a job application. These experiences, however, are overly simplistic.

Experiences are characterized by a given duration of time and an array of qualitative factors that produce in consciousness any number of qualia, going well beyond simply apprehending the color red or the smell of chocolate. It is also quite possible to have an experience too often and become numb to what makes it unique. One experience a lot of us have in common is going to a movie theater. There are certain sights, sounds, textures, and smells unique to the experience, but few of us can recall the buttery aroma of popcorn somehow mixing with the dull smell of a carpet that has been sullied and cleaned one too many times. There is the scent of the leather seats (if you can count yourself fortunate enough to have the new reclining seats, that is) and of other people. There is the texture of the seats and of one’s footwear against the carpet, in addition to one’s eyes having to adjust to a very dim setting. So if any of my readers have ever pet a tiger in Thailand or jumped out of a plane to skydive, then they have a unique experience that, if described in full detail, does not align with anything I have ever experienced.

The thing is that I, as a particular experiencer, have had a plethora of experiences in my life that are different from yours. The combination of these experiences is a huge part of what makes me me. The combination of your experiences plays a pivotal role in what makes you you. So it is not enough to say that mental states are inherently finite. While it may be the case that experiences are also finite, I need only convince you that the exact combination of my experiences and my distinctive mental states will never be replicated again and that therefore, naturalistic reincarnation is extremely improbable and dare I say, impossible. The sheer improbability of something, however, does not make a thing impossible, so it is not enough to draw this conclusion and move on.

The improbability has to be crushingly discouraging to persuade you that it is simply more likely that naturalistic reincarnation is not the case. To show this, I am going to begin with a generous initial probability that someone else in the future will have any one of my experiences. I will list as many experiences as come to mind:

  • Staying awake for 36 hours straight
  • Sleeping for 18 hours straight without waking up
  • Fasting for three days straight in a church in the Highbridge section of the Bronx
  • Eating dole whip at Disneyland about seven meters from the Indiana Jones attraction
  • Riding the Nitro at Six Flags in Jackson, NJ in the front row
  • Petting farm animals at Kira’s World Exotics Mini Zoo in Hatillo, Puerto Rico
  • Hearing the singing of coquis in Puerto Rico
  • Getting jumped on a school bus in the E. 161st Tunnel in the Bronx
  • Vomiting after too many drinks on the 6 train near Whitlock Avenue

I can think of more experiences that are unique to me, but even when assigning a generous initial probability to any one of these events recurring, the likelihood of all of them recurring is extremely low. What’s more, I have neglected a lot of variables. What was the weather like? What direction was the wind blowing, and at how many miles per hour? How old was I when all of these things happened? I only specified the event and its location because it is already very unlikely that you also vomited on the 6 train near Whitlock Avenue because you had one too many drinks. I did not mention that it was Cinco de Mayo in 2016 and after 8pm. The more specific I get, the less likely it is that you will share this experience. If I were to include the people who were on the train and some of their reactions, which were surprisingly few given the number of people on the train, the likelihood decreases even more.

In any case, if I were to assign an initial probability of 40% to each experience, we get the following: .40 × .40 × .40 × .40 × .40 × .40 × .40 × .40 × .40 = .000262144, or about .026%. I listed just nine events in my life in scant detail, and the probability of you experiencing all nine events, even starting with a very generous initial probability of 40% for each event, is very low. Now imagine if I were to be as detailed as possible about as many events as I can remember in my life; I am certain that this number would begin to approach at least one thousand. I can, for example, talk about my earliest memory: waking up in my crib, an infant, hungry and so delirious (probably having a hypnagogic hallucination) that I saw a bottle floating just out of my reach, swinging as a pendulum does, left and then right, left and then right; I reached for it and my hand went through it and I started wailing. My dad then gave me an actual bottle. Or I can talk about being seven or so years old and seeing a black and white striped insect fly into my room. It landed on my black toy chest and started to crawl as arthropods do. If I did, in fact, see a flying centipede or millipede that day, I saw a yet to be discovered species, I might add; I have scoured the internet for this insect and have yet to come across anything like it. This was before smartphones, so I could not snap a picture before it fluttered its wings and flew right back out. Hypothetically though, if I did count a thousand experiences, each with an initial probability of 40%, we get .40^1,000, which has roughly four hundred zeros after the decimal point before you arrive at any non-zero digits. You would get a percentage that is virtually zero.
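A few lines of Python confirm the arithmetic; the 40% per-event figure is the generous assumption from the text, and the events are treated as independent.

```python
import math

# Probability of replicating all nine listed experiences,
# with a generous 0.4 assigned to each one independently.
nine_events = 0.40 ** 9
print(nine_events)  # ~0.000262144, i.e., about 0.026%

# For a thousand experiences, 0.4 ** 1000 underflows to 0.0 in
# floating point, so count the leading zeros with logarithms instead.
leading_zeros = math.floor(-1000 * math.log10(0.40))
print(leading_zeros)  # zeros after the decimal point before a non-zero digit
```

Since log10(0.4) ≈ −0.398, .40^1,000 ≈ 10^−398, which is why the result carries roughly four hundred leading zeros rather than being merely small.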

Given how improbable it is that the combination of experiences one has had will be replicated to a tee in a person who, more or less, has the same exact mental life that one does, i.e., is a one-to-one match to oneself with respect to nature, it is extremely unlikely that another you or me will be born no matter how long the universe goes on. The universe can continue to exist for quadrillions of years, and I do not think it is very probable that someone will have the combination of our respective experiences. I am a unique experiencer because of the combination of experiences I have had, in addition to the admittedly finite mental states that occur in me. While those mental states very well do occur in other people, the probability that another person and I have had the same exact set of experiences is extremely low, and that is what makes us different. It is said that experiences mold us into who we are. Given my argument here, that is likely the case. If you are convinced that the unique set of experiences you have had in your lifetime makes you you, then I think you cannot be convinced of naturalistic reincarnation. What adds more force to this argument is that I have confined it to experiences I can remember, despite the fact that experiences I currently do not recall also factor into the person that I am. There are so many unconscious joys and traumas that explain a great deal about us. This starts to venture into psychology, which for our current purposes is unnecessary.

Ultimately, reincarnation is incompatible with naturalism, not because it is too mystical, but because even if we were to imagine a version of reincarnation consistent with naturalism, i.e., steel-man the notion of reincarnation, one’s full set of experiences is very unlikely to recur in the life of another person. Even twins, though sharing a lot of the same childhood experiences, end up having different experiences that, in turn, ensure that they are different from one another. As I have shown, it is extremely improbable, despite a generous initial probability, for another person to have just nine of the experiences I have had, let alone a thousand or the actual and innumerable experiences of my life. Moreover, the longer one lives, the less likely it becomes that someone else will have one’s experiences. It is more improbable still that someone in the far future will have the same exact set of experiences and the same mental life as a centenarian in Japan, e.g., 118-year-old Tanaka Kane, who lives in Fukuoka City. Already, you are at a disadvantage, since it is impossible for you to be born in 1903, the year she was born, and to the same parents. It is virtually guaranteed that your set of experiences will differ from hers. Therein lies identity: you are a particular experiencer with a unique set of experiences. As Dave Chappelle said when remembering his late friend Daphne Dorman: “I am someone having a human experience.” At bottom, this is who we all are, but as with most philosophical topics, the devil is in the details, specifically within the details of our distinct sets of experiences.

The Negation Strategy

By R.N. Carmona

Every deductive argument can be negated. I consider this an uncontroversial statement. The problem is, there are people who proceed as though deductive arguments speak to an a priori truth. The Freedom Tower is taller than the Empire State Building; the Empire State Building is taller than the Chrysler Building; therefore, the Freedom Tower is taller than the Chrysler Building. This is an example of an a priori truth because given that one understands the concepts of taller and shorter, the conclusion follows uncontroversially from the premises. This is one way in which the soundness of an argument can be assessed.

Of relevance is how one would proceed if one is unsure of the argument. Thankfully, we no longer live in a world in which one would have to go out of one’s way to measure the heights of the three buildings. A simple Google search will suffice. The Freedom Tower is ~546m. The Empire State Building is ~443m. The Chrysler Building is ~318m. Granted, this is knowledge by way of testimony. I do not intend to connote religious testimony. What I intend to say is that one’s knowledge is grounded on knowledge directly acquired by someone else. In other words, at least one other person actually measured the heights of these buildings, and these are the measurements they got.
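The reported measurements can be plugged directly into the syllogism; a quick sketch, using the rounded heights above, shows the premises and the conclusion all checking out.

```python
# Checking the taller-than syllogism against the reported measurements.
heights_m = {
    "Freedom Tower": 546,
    "Empire State Building": 443,
    "Chrysler Building": 318,
}

p1 = heights_m["Freedom Tower"] > heights_m["Empire State Building"]
p2 = heights_m["Empire State Building"] > heights_m["Chrysler Building"]
conclusion = heights_m["Freedom Tower"] > heights_m["Chrysler Building"]

# Transitivity of "taller than": true premises guarantee the conclusion.
print(p1 and p2 and conclusion)  # True
```

The point is that the argument’s validity rests on the transitivity of “taller than,” while its soundness rests on measurements someone actually took.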

Most of our knowledge claims rest on testimony. Not everyone has performed an experimental proof to show that the acceleration due to gravity is 9.8m/s^2. Either one learned it from a professor, read it in a physics textbook, or learned it while watching a science program. Or one believes the word of someone one trusts, be it a friend or a grade school teacher. This does not change the fact that, if one cared to, one could exchange knowledge by way of testimony for directly acquired knowledge by performing an experimental proof. This is something I have done, so I do not believe on the basis of mere testimony that Newton’s law holds. I can say that it holds because I tested it for myself.

To whet the appetite, let us consider a well-known deductive argument and let us ignore, for the moment, whether it is sound:

P1 All men are mortal.

P2 Socrates is a man.

C Therefore, Socrates is mortal.

If someone were completely disinterested in checking whether this argument, which is merely a finite set of propositions, coheres with the world or reality, I would employ my negation strategy: the negation of an argument someone assumes to be sound without epistemic warrant or justification. The strategy forces them to explore whether their argument or its negation is sound. Inevitably, the individual will have to abandon their bizarre commitment to a sort of propositional idealism, namely, the view that propositions can only be assessed logically, that they do not contextually involve any real-world entities, and that they make no claims about the world. In other words, they will abandon the notion that “All men are mortal” is a mere proposition lacking context, one not intended to make a claim about states of affairs objectively accessible to everyone, including the person who disagrees with them. With that in mind, I would offer the following:

P1 All men are immortal.

P2 Socrates is a man.

C Therefore, Socrates is immortal.

This is extremely controversial for reasons we are all familiar with. That is because everyone accepts that the original argument is sound. When speaking of ‘men’, setting aside the historical tendency to dissolve the distinction between men and women, what is meant is “all human persons from everywhere and at all times.” Socrates, as we know, was an ancient Greek philosopher who reportedly died in 399 BCE. Like all people before him, and presumably all people after him, he proved to be mortal. No human person has proven to be immortal and therefore, the original argument holds.

Of course, matters are not so straightforward. Christian apologists offer no arguments that are uncontroversially true like the original argument above. Therefore, the negation strategy will prove extremely effective in disabusing them of propositional idealism and making them empirically assess whether their arguments are sound. What follows are examples of arguments for God that have been discussed ad nauseam. Clearly, theists are not interested in conceding. They are not interested in admitting that even one of their arguments does not work. Sure, you will find theists committed to Thomism, for instance, and as such, they will reject Craig’s Kalam Cosmological Argument (KCA) because it does not fit into their Aristotelian paradigm, not because it is unsound; they prefer Aquinas’ approach to cosmological arguments. More common is the kind of theist who ignores the incongruity between one argument and another; since they are all arguments for God, each counts as evidence for his existence, and it really does not matter that Craig’s KCA is not Aristotelian. I happen to think that it is, despite Craig’s denial, but I digress.

Negating Popular Arguments For God’s Existence

Let us explore whether Craig’s Moral Argument falls victim to the negation strategy. Craig’s Moral Argument is as follows:

P1 If God does not exist, objective moral values do not exist.

P2 Objective moral values do exist.

C Therefore, God exists. (Craig, William L. “Moral Argument (Part 1)”. Reasonable Faith. 15 Oct 2007. Web.)

With all arguments, a decision must be made. First, an assessment of the argument form is in order. Is it a modus ponens (MP) or a modus tollens (MT)? Perhaps it is neither and is instead a categorical or disjunctive syllogism. In any case, one has to decide which premise(s) will be negated or whether, given the argument form, one will have to change the form to state the opposite. You can see this with the original example. I could very well have negated P2 and stated “Socrates is not a man.” Socrates is an immortal jellyfish that I tagged in the Mediterranean. Or he is an eternal being that I met while tripping on DMT. For purposes of the argument, however, since he is not a man, the question of whether he is mortal is, at the very least, open. We would have to ask what Socrates is. If Socrates is my pet hamster, then yes, Socrates is mortal despite not being a man. It follows that the negation has to be placed where it proves most effective. Some thought has to go into it.

Likewise, a choice has to be made when confronting Craig’s Moral Argument, which is a modus tollens. For the uninitiated, the form is simply [((p → q) ∧ ¬q) → ¬p] (Potter, A. (2020). The rhetorical structure of Modus Tollens: An exploration in logic-mining. Proceedings of the Society for Computation in Linguistics, 3, 170-179.). Another way of putting it is that one is denying the consequent. That is precisely what Craig does. “Objective moral values do not exist” is the consequent q. Craig is saying ¬q, or “Objective moral values do exist.” Therefore, one route we can take is keeping the argument form and negating P1, which in turn negates P2.
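The validity of these forms can be checked mechanically by exhausting the truth table. The following Python sketch (illustrative, with names of my own choosing) confirms that modus tollens is valid while its look-alike fallacy, affirming the consequent, is not:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Modus tollens: ((p -> q) and not-q) entails not-p.
# Valid iff the conclusion holds under every assignment that makes the premises true.
modus_tollens_valid = all(
    implies(implies(p, q) and (not q), not p)
    for p, q in product([True, False], repeat=2)
)

# Affirming the consequent: ((p -> q) and q) does NOT entail p.
affirming_consequent_valid = all(
    implies(implies(p, q) and q, p)
    for p, q in product([True, False], repeat=2)
)

print(modus_tollens_valid)         # True
print(affirming_consequent_valid)  # False (fails when p is false and q is true)
```

The same brute-force check works for any two-variable propositional form: a form is valid exactly when no row of the truth table makes the premises true and the conclusion false.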

MT Negated Moral Argument

P1 If God exists, objective moral values and duties exist.

P2 Objective moral values do not exist.

C Therefore, God does not exist.

The key is to come up with a negation that is either sound or, at the very least, free of any controversy. Straight away, I do not like P2. Moral realists would also deny this negation because, to their minds, P2 is not true. The controversy with P2 is not so much whether it is true or false, but that it falls on the horns of the objectivism-relativism and moral realism/anti-realism debates in ethics. The argument may accomplish something with respect to countering Craig’s Moral Argument, but we are in no better place because of it. This is when we should explore changing the argument’s form in order to get a better negation.

MP Negated Moral Argument

P1 If God does not exist, objective moral values and duties exist.

P2 God does not exist.

C Therefore, objective moral values and duties exist.

This is a valid modus ponens. I have changed the form of Craig’s Moral Argument and now have what I think is a better negation of it. Atheists can find satisfaction in P2; it is the epistemic proposition atheists are committed to. The conclusion also alleviates any concerns moral realists might have had with the MT Negated Moral Argument. For my own purposes, I think this argument works better. That, however, is beside the point. The point is that this forces theists either to justify the premises of Craig’s Moral Argument, i.e., prove that the argument is sound, or to assert, on the basis of mere faith, that Craig’s argument is true. Either way, one will have succeeded in forcing the theist to abandon their propositional idealism, in getting them to test the argument against the world as ontologically construed, or in getting them to confess that they are indulging in circular reasoning and confirmation bias, i.e., that they are being irrational and illogical. All of these count as victories. We can explore whether other arguments for God fall on this sword.

We can turn our attention to Craig’s Kalam Cosmological Argument (KCA):

P1 Everything that begins to exist has a cause.

P2 The universe began to exist.

C Therefore, the universe has a cause. (Reichenbach, Bruce. “Cosmological Argument”. Stanford Encyclopedia of Philosophy. 2021. Web.)

Again, negation can take place in one of two places: P1 or P2. Negating P1, however, does not make sense. Negating P2, as in the case of his Moral Argument, changes the argument form, though here the change is more subtle and arguable. So we get the following:

MT Negated KCA

P1 Everything that begins to exist has a cause.

P2 The universe did not begin to exist.

C Therefore, the universe does not have a cause.

Technically, Craig’s KCA is a categorical syllogism. Such syllogisms employ a universal (∀) or existential (∃) quantifier; the former is introduced by saying all, the latter by saying some. Consider, “all philosophers are thinkers; all philosophers are logicians; therefore, all thinkers are logicians.” Conversely, one could say “no mallards are insects; some birds are mallards; therefore, some birds are not insects.” What Craig is stating is that all things that begin to exist have a cause, so if the universe is a thing that began to exist, then it has a cause. Alternatively, his argument is an implicit modus ponens: “if the universe began to exist, then it has a cause; the universe began to exist; therefore, the universe has a cause.” In any case, the negation works because if the universe did not begin to exist, then the universe is not among the things that have a cause.
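Over a finite domain, the two quantifiers can be made concrete with sets: all amounts to set inclusion and some to set overlap. A minimal Python sketch of the mallard syllogism above (the particular sets are mine, chosen for illustration):

```python
# "All A are B"  <=> A is a subset of B (universal quantifier over a finite domain).
# "Some A are B" <=> A and B share at least one member (existential quantifier).

def all_are(a, b):
    return a <= b

def some_are(a, b):
    return len(a & b) > 0

mallards = {"mallard"}
insects = {"ant", "beetle", "moth"}
birds = {"mallard", "sparrow", "crow"}

# "No mallards are insects; some birds are mallards; therefore, some birds are not insects."
no_mallards_are_insects = not some_are(mallards, insects)
some_birds_are_mallards = some_are(birds, mallards)
some_birds_are_not_insects = len(birds - insects) > 0  # at least one bird outside insects

print(no_mallards_are_insects, some_birds_are_mallards, some_birds_are_not_insects)
```

All three come out true in this domain, which is what the syllogism's validity guarantees whenever the premises hold. Craig's categorical reading of the KCA works the same way: the universe has a cause only if it is a member of the set of things that begin to exist.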

Whether the universe is finite or eternal has been debated for millennia and, in a sense, despite the changing context, the debate rages on. If the universe is part of an eternal multiverse, it is just one universe in a vast sea of universes within a multiverse that has no temporal beginning. Despite this, the MT Negated KCA demonstrates how absurd the KCA is. The singularity was already there ‘before’ the Big Bang. The Big Bang started the cosmic clock, but the universe itself did not begin to exist. This is more plausible. Consider that everything that begins to exist does so when the flow of time is already in motion, i.e., when the arrow of time points in a given direction due to entropic increase, reducible to the decreasing temperature throughout the universe. Nothing that has ever come into existence has done so simultaneously with time itself, because any causal relationship speaks to a change, and change requires the passage of time; but at T=0, no time has passed, and therefore, no change could have taken place. This creates an asymmetry: things begin to exist within time, but nothing can be said to begin to exist at T=0. The MT Negated KCA puts cosmology in the right context. The universe did not come into existence at T=0. T=0 simply represents the first measure of time; matter and energy did not emerge at that point.

For a more complicated treatment, Malpass and Morriston argue that “one cannot traverse an actual infinite in finite steps” (Malpass, Alex & Morriston, Wes (2020). Endless and Infinite. Philosophical Quarterly 70 (281): 830-849.). In other words, from a mathematical point of view, T=0 is the x-axis, and the events after T=0 form an asymptote along it. Traced backwards, the events go further and further back, ever closer to T=0, but never actually touch it. For a visual representation, see below:
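The asymptote metaphor can be made precise with a simple worked example (the particular sequence is mine, chosen for simplicity). Index the regress of moments going back toward the beginning and let each be half as close to zero as the last:

```latex
t_n = \frac{1}{2^n}, \qquad \lim_{n \to \infty} t_n = 0, \qquad t_n > 0 \ \text{for all } n.
```

The sequence approaches T=0 arbitrarily closely, yet no term of it ever equals T=0: the limit is not itself a member of the series of events, which is the sense in which the events “never actually touch” the axis.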

[Figure: a curve asymptotically approaching the x-axis. Credit: Free Math Help]

The implication here is that time began to exist, but the universe did not begin to exist. A recent paper implies that this is most likely the case (Quantum Experiment Shows How Time ‘Emerges’ from Entanglement. The Physics arXiv Blog. 23 Oct 2013. Web.). The very hot, very dense singularity before the emergence of time at T=0 would have been subject to quantum mechanics rather than the macroscopic physics that came later, e.g., General Relativity. As such, the conditions were such that entanglement could have resulted in the emergence of time in our universe, but not the emergence of the universe. All of the matter and energy were already present before the clock started to tick. Analogously, if the universe is akin to a growing runner, then the toddler is at the starting line before the gun goes off. The sound of the gun starts the clock. The runner starts running sometime after she hears the sound. As she runs, she goes through all the stages of childhood, puberty, adolescence, and adulthood, and finally dies. Crucially, her running and her growth do not begin until after the gun goes off. Likewise, no changes take place at T=0; all changes take place after T=0. While there is this notion of entanglement resulting in a change occurring before the clock even started ticking, quantum mechanics suggests that quantum changes do not require time and, in fact, may result in the emergence of time. Therefore, it is plausible that though time began to exist at the Big Bang, the universe did not begin to exist, thus making the MT Negated KCA sound. The KCA is, therefore, false.

Finally, so that the Thomists do not feel left out, we can explore whether the negation strategy can be applied to Aquinas’ Five Ways. For our purposes, the Second Way is closely related to the KCA and would be defeated by the same considerations. Of course, we would have to negate the Second Way so that it is vulnerable to the considerations that cast doubt on the KCA. The Second Way can be stated as follows:

We perceive a series of efficient causes of things in the world.

Nothing exists prior to itself.

Therefore nothing [in the world of things we perceive] is the efficient cause of itself.

If a previous efficient cause does not exist, neither does the thing that results (the effect).

Therefore if the first thing in a series does not exist, nothing in the series exists.

If the series of efficient causes extends ad infinitum into the past, then there would be no things existing now.

That is plainly false (i.e., there are things existing now that came about through efficient causes).

Therefore efficient causes do not extend ad infinitum into the past.

Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God. (Gracyk, Theodore. “Argument Analysis of the Five Ways”. Minnesota State University Moorhead. 2016. Web.)

This argument is considerably longer than the KCA, but there are still areas where the argument can be negated. I think P1 is uncontroversial and so, I do not mind starting from there:

Negated Second Way

We perceive a series of efficient causes of things in the world.

Nothing exists prior to itself.

Therefore nothing [in the world of things we perceive] is the efficient cause of itself.

If a previous efficient cause does not exist, neither does the thing that results (the effect).

Therefore if the earlier thing in a series does not exist, nothing in the series exists.

If the series of efficient causes extends ad infinitum into the past, then there would be things existing now.

That is plainly true (i.e., efficient causes, per Malpass and Morriston, extend infinitely into the past or, the number of past efficient causes is a potential infinity).

Therefore efficient causes do extend ad infinitum into the past.

Therefore it is not necessary to admit a first efficient cause, to which everyone gives the name of God.

Either the theist will continue to assert that the Second Way is sound, epistemic warrant and justification be damned, or they will abandon their dubious propositional idealism and run a soundness test. Checking whether the Second Way or the Negated Second Way is sound would inevitably bring them into contact with empirical evidence supporting one argument or the other. As I have shown with the KCA, it appears that considerations of time, from a philosophical and quantum mechanical perspective, greatly lower the probability of the KCA being sound. This carries over neatly to Aquinas’ Second Way, and as such, one has far less epistemic justification for believing that the KCA or Aquinas’ Second Way is sound. The greater justification is found in the negated versions of these arguments.

Ultimately, one either succeeds at making the theist play the game according to the right rules or at getting them to admit that their beliefs are not properly epistemic at all; instead, they believe by way of blind faith, all of their redundant arguments are exercises in circular reasoning, and any pretense of engaging the evidence is an exercise in confirmation bias. Arguments for God are a perfect example of directionally motivated reasoning (see Galef, Julia. The Scout Mindset: Why Some People See Things Clearly and Others Don’t. New York: Portfolio, 2021. 63-66. Print). I much prefer accuracy motivated reasoning. We are all guilty of motivated reasoning, but directionally motivated reasoning is indicative of irrationality and usually speaks to the fact that one holds beliefs that do not square with the facts. Deductive arguments are only useful insofar as their premises can be supported by evidence, which makes it easier to show that an argument is sound. This is why we can reason that if Socrates is a man, more specifically, the ancient Greek philosopher we all know, then Socrates was indeed mortal, and that is why he died in 399 BCE. Likewise, this is why we cannot reason that objective morality can only be the case if the Judeo-Christian god exists, that if the universe began to exist, God is the cause, or that if the series of efficient causes cannot regress infinitely and must terminate somewhere, it can only terminate at a necessary first cause, which some call God. These arguments can be negated, and the negations show that they are either absurd or that the reasoning in them is deficient, resting on the laurels of directionally motivated reasoning borne of a bias for one’s religious faith rather than on the bedrock of carefully reasoned, meticulously demonstrated, accuracy motivated reasoning that does not ignore or omit pertinent facts.

The arguments for God, no matter how old or new, simple or complex, do not work because not only do they rely on directionally motivated and patently biased reasoning, but because when testing for soundness, being sure not to exclude any pertinent evidence, the arguments turn out to be unsound. In the main, they all contain controversial premises that do not work unless one already believes in God. So there is a sense in which these arguments exist to give believers a false sense of security or more pointedly, a false sense of certainty. Unlike my opponents, I am perfectly content with being wrong, with changing my mind, but the fact remains, theism is simply not the sort of belief that I give much credence to. Along with the Vagueness Strategy, the Negation Strategy is something that should be in every atheist’s toolbox.

Skeptical Theism and New Arguments For Atheism

R.N. Carmona

Skeptical Theism is overtly present in Plantinga’s Ignorance Defense. It must be noted here that he does not call it that. The moniker makes sense because the defense relies on human ignorance in order to work. In other words, it states that since human wisdom is incomparable to God’s, we cannot know why he allows evil. Moreover, since it is reasonable that he has some reason, unbeknownst to us, for allowing evil, we cannot reasonably blame God for the evil in the world. Of Plantinga’s explications, Kai Nielsen says the following:

Plantinga grants that, as far as we can see, there are many cases of evil that are apparently pointless. Indeed there are many cases of such evils where we have no idea at all what reason God (if there is such a person) could have for permitting such evils. But, Plantinga remarks, from granting these things it does not follow that “an omnipotent and omniscient God, if he existed, would not have a reason for permitting them” (Plantinga 1993, 400). From the fact that we can see no reason at all for God to permit evils, we cannot legitimately infer that God has no reason to allow such evils. It is not just, Plantinga continues, “obvious or apparent that God could have reason for permitting them. The most we can sensibly say is that we can’t think of any good reason why he would permit them” (Plantinga 1993, 400)

Plantinga, Alvin. God, Freedom and Evil. New York: Harper & Row, 1974. 58. Print.

This, in a nutshell, is the Ignorance Defense. Humans are, in other words, ignorant of God’s will, and our wisdom pales in comparison to his. Nielsen’s contention, however, has the makings of a perfect defeater. All that is needed is to see his objection from the point of view of one of God’s attributes. Nielsen states that “it looks more like, if he exists and is all powerful and all knowing, that then he more likely to be evil” and adds that “we see that all the same he might possibly be, as Jewish, Christian, and Islamic religions say he is, perfectly good. But we cannot see that he is. The Mosaic God looks, to understate it, petty, unjust, and cruel to us” (Ibid.). This defeater is perfected if we see it from the point of view of God’s omniscience. God would know that we would be incapable of seeing that he is good in light of natural evil. This evil is, in fact, gratuitous. God would have seen, by way of his omniscience, that the quantity of natural evil in the world would be enough to drive so many to doubt. This is in addition to contradictory revelations, the limited range and capacity of Christianity, i.e., its incapacity to appeal to people of other cultures, e.g., Jews, Muslims, Hindus, and indigenous people across every populated continent, and the negative evidence against the existence of the Judeo-Christian god.

We are then asked “to stick with a belief in what we see to be some kind of possibility, namely that God is, after all, appearances to the contrary notwithstanding, perfectly good” (Ibid.). This is an obstinate appeal to the very faith that needs to be substantiated. Furthermore, it appears to imply the superiority of faith over reason. Like Galileo, who no doubt said this with a different sentiment, I “do not feel obliged to believe that same God who endowed us with sense, reason, and intellect had intended for us to forgo their use” (Galilei, Galileo, “Letter to the Grand Duchess Christina of Tuscany, 1615” (2013). Instructional Resources. 97.).

For a clearer explication of skeptical theism, McBrayer offers:

Skeptical theism is the view that God exists but that we should be skeptical of our ability to discern God’s reasons for acting or refraining from acting in any particular instance.  In particular, says the skeptical theist, we should not grant that our inability to think of a good reason for doing or allowing something is indicative of whether or not God might have a good reason for doing or allowing something.  If there is a God, he knows much more than we do about the relevant facts, and thus it would not be surprising at all if he has reasons for doing or allowing something that we cannot fathom.

McBrayer, Justin P. “Skeptical Theism”. Internet Encyclopedia of Philosophy. Web.

This matches Plantinga’s Ignorance Defense one-to-one. There is, therefore, no need to belabor the point. My concern is twofold: the failure of skeptical theism should be made clear and, since this appeal to human ignorance is an obstinate roadblock borne of a reluctance to accept an atheistic conclusion, it is crucial to develop arguments that make use of its faulty intuition as well as arguments that leave no room for a skeptical theistic response. In other words, if the intuition can be turned on its head, in a perfect example of how to employ the double standard and outsider tests (see Galef, Julia. The Scout Mindset: Why Some People See Things Clearly and Others Don’t. New York: Portfolio, 2021. 63-66. Print), perhaps the theist can be brought to see the vast shortcomings of skeptical theism. This is what I want to do, because I am at a loss when it comes to understanding why anyone would think such a response works when confronting The Evidential Problem of Evil and Schellenberg’s Divine Hiddenness Argument. For our purposes, I will set aside stating The Evidential Problem of Evil explicitly, as I think it is unnecessary review for the initiated. Schellenberg’s Divine Hiddenness Argument, on the other hand, is not as familiar, even to the thoroughly initiated. Thankfully, Veronika Weidner has stated the argument explicitly and accurately:

(1) Necessarily, if God exists, then God is a personal perfect being.

(2) Necessarily, if God is a personal perfect being, then God always loves all human beings perfectly.

(3) Necessarily, if God always loves all human beings perfectly, then God is always open to be in a personal relationship with all those human beings capable of such a relationship with God.

(4) Necessarily, if God is always open to be in a personal relationship with all those human beings capable of such a relationship with God, then God does or omits nothing which would prevent all those human beings to relate to God personally who are capable of a personal relationship with God and also not resistant to a personal relationship with God.

(5) Necessarily, a human being capable of a personal relationship with God who is not resistant to a personal relationship with God is only able to relate to God personally if she believes that God exists.

(6) Necessarily, if God does or omits nothing which would prevent all those human beings to relate to God personally who are capable of a personal relationship with God and also not resistant to a personal relationship with God, then it is not the case that there is a human being capable of a personal relationship with God who is not resistant to a personal relationship with God and yet not able to relate to God personally because she does not believe that God exists.

(7) There is at least one human being capable of a personal relationship with God who is not resistant to a personal relationship with God and yet not able to relate to God personally because she does not believe that God exists.

(8) Therefore, God does not exist. (see Schellenberg, 2015b: 24–25)

Weidner, Veronika. Divine Hiddenness. Cambridge University Press, 2021. Web.
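Setting the modal operators aside and folding premise (5) into the content of (7), the propositional skeleton of (1)–(8) is a chain of conditionals whose validity can be checked mechanically. A brief Python sketch, with letters of my own choosing (G: God exists; P: God is a personal perfect being; L: loves all perfectly; O: open to relationship; N: does or omits nothing preventing relationship; H: there is a capable, non-resistant nonbeliever):

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Premises: G -> P, P -> L, L -> O, O -> N, N -> not-H, and H.
# The argument is valid iff not-G holds in every assignment satisfying all premises.
satisfying = [
    (G, P, L, O, N, H)
    for G, P, L, O, N, H in product([True, False], repeat=6)
    if implies(G, P) and implies(P, L) and implies(L, O)
    and implies(O, N) and implies(N, not H) and H
]

entails_not_G = all(not G for (G, P, L, O, N, H) in satisfying)
print(len(satisfying) > 0, entails_not_G)  # True True
```

The premises are jointly consistent (exactly one assignment satisfies them) and every satisfying assignment makes G false, i.e., the skeleton is a valid chain of modus tollens steps. Whether the premises are true is, of course, the substantive question.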

Prior to discussing Schellenberg’s argument in some detail, it is crucial to understand why skeptical theism fails:

A) Even if we grant that unforeseen goods balance the scales, as it were, i.e., justify the 1,800 cancer-related deaths of children per year in the United States, there is no way for finite human minds to causally connect these deaths with the goods whenever they arrive; the most we can do is callously reason that their deaths are akin to necessary sacrifices that enable us to eventually find a cure—which is related to minor problem (a) below and, more importantly, is not something we should ever give God credit for; developing cures is a slow, painstaking process that does not involve anything like putative revelation or God whispering the secrets to a much needed vaccine in a doctor’s ear. There is also the issue that the goods may arrive well after our lifetimes, which segues into the next problem.

B) On exclusivism, many of today’s atheists are eternally lost because evil and hiddenness were just too persuasive and the goods never came due within our lifetimes. On universalism, this is all arbitrary. This can be conjoined to Street’s recent response to skeptical theism: we are free to indulge moral aporia because no matter what we believe or not, we will ultimately be saved (see Street, S. (2014). If everything happens for a reason, then we don’t know what reasons are: Why the price of theism is normative skepticism. In M. Bergmann & P. Kain (Eds.), Challenges to moral and religious belief: Disagreement and evolution (pp. 172–192). Oxford: Oxford University Press.). So talk of evil and hiddenness and unknown goods to account for them ends up being null.

I have a sneaking suspicion that theists feel the gnaw of these defeaters. Atheists certainly do. This then becomes an exercise of being overly charitable to a kind of argument that can never prove successful. If skeptical theism fails, it is a thread that should be cut and discarded. There are a couple of minor problems that are important as well:

a) It is utilitarian in its analysis. The evil and hiddenness we experience are lesser in magnitude when compared to the goods that await us, be it in heaven or by way of some earthly recompense. The greater good overtones are palpable. I cannot see how a being who is appealed to as the objective and perfect moral standard can subscribe to utilitarianism given its shortcomings.

b) It begs the question because it really is no different from someone saying “just wait and see!” Many people on all sides died waiting and seeing and, per Schellenberg, honestly sought divinity their entire lives and came up empty. If God had a better track record of making good on past atrocities, then we would be able to reason inductively in this manner, as many of us do with science. The thing is, it looks like the bills for the Holocaust and slavery are overdue, and all of us, living roughly 80 and 450 years after these atrocities, respectively, cannot even begin to causally connect potential goods that God has deployed with the intention of paying this debt. Perhaps it is too much to expect God to pay that debt because those were human crimes; but I can also think of disasters, diseases, pandemics, mass extinctions, and other natural evils that are overdue, and again, I am not sure what goods are intended to repay the extinctions of all of our hominid cousins, for example.

c) The whole accounting that is done really puts a lack of value on human life that turns out to be nihilistic and even fatalistic. The Black Plague wiped out millions. Are we really to believe any good repaid that debt? Are we supposed to buy that the life of a child, whose loss emotionally crippled her mother, is worth so little that we can just make do with the fact that some future kid was saved from danger in her place? That does nothing at all to alleviate the suffering the child and her mother experienced, so that is another issue, one of currency: what is the value of this coin God is paying his debts with and how exactly does it exchange with the coin in the sometimes distant past?

Now to turn my attention to an argument that subsumes the observations of The Evidential Problem of Evil and The Divine Hiddenness Argument. This argument is novel, forceful, and to my mind, defeats the idea of not just perfect being, omni-god theism, but theism overall. Weidner already observes the following: “After all, the hiddenness argument, if successful, helps us see the deficiencies of personal perfect being theism” (Weidner, ibid.). My next argument should help one see the deficiencies of theism in general.

Infinity Entails Supererogative Capacity

The next stop is to grapple with the conclusion of my Argument From Assailability: “if we find in any being, a characteristic that is assailable, then we have no reason to call it a god.” How is a non-perfect theistic being different from an alien, one might ask. Crucially, if, per the hiddenness and evil arguments, God does not seem open to being in a relationship with all human beings and does not intervene when great atrocities happen, then we have located an assailable characteristic. How does an omnipotent or, at least, an incredibly powerful being succumb to the bystander effect? Even if God is not all-powerful and could not snap the Nazis out of existence, if he is at least powerful enough to assume a disguise and poison Hitler and his top advisers, why not step in and prevent the Holocaust?

The reason Aquinas and others maximized God, ascribing to him infinite capacities in all respects, is that theists already saw the crippling limitations of a god with finite abilities. The question would immediately follow: what motivation is there to worship a being that is not perfect? Infinity entails supererogative capacity. God would be able to give an infinite amount of love, kinship, succor, power, knowledge, and presence and still retain an infinite amount of each. So why does he seem to blithely refuse to commit to this? Perfect and infinite personal being theism is defeated by the combination of Schellenberg’s Divine Hiddenness Argument, The Evidential Problem of Evil, and my argument from God’s apparent lack of supererogatory agency. What is left is non-perfect being theism.

That, however, falls on the horns of my Argument From Assailability, and so, Theism is defeated in all its iterations. This is to say nothing of the fact that even a finite deity would be far more capable of supererogatory acts than we are. In any case, the intuition of my supererogative argument can be turned on its head. We can deduce something about God’s power given this lack. God must be much weaker than a hypothetical infinite being, given that he remains a bystander, utterly apathetic to even the worst atrocities known to man, even ones we played no part in causing. This is an assailable characteristic. We therefore have no obligation whatsoever to worship a being that is apparently weaker than ourselves. As Tracie Harris famously said: “If I could stop a person from raping a child I would. That is the difference between me and your God” (Bennett-Smith, Meredith. “‘Atheist Experience’ TV Host Shocked By Caller’s Statement About Child Rape (Video)”. Huffington Post. 9 Jan 2013. Web). Ultimately, if God appears to be this much weaker than human beings, who can potentially lose their lives when intervening on behalf of another person, it is far more probable that God does not exist.

Notes on Necessity

The standard contingency argument looks something like the following:

  1. There exists a series of events
  2. The series of events exists as caused and not as uncaused 
  3. This series cannot extend infinitely into the past
  4. Therefore, there must exist the necessary being that is the cause of all contingent being (credit: Queens Community College)

The intuition of skeptical theism, as I made clear at the outset, can be used to cast doubt on contingency arguments across the board. Aside from the fact that there is a chasm between a necessary cause, e.g., something like the Big Bang, and a necessary being, we can assert that we should be skeptical of our ability to discern what a necessary being is. How do we know that it is one necessary being as opposed to many? If a Triune god solves the one-many problem, then why not the Divine Septad of Zoroastrianism? Since we cannot know what the realm of necessity is like, we should refrain from making these kinds of arguments.

Contingency arguments only accomplish one thing: they point to the existence of metaphysical necessities, quite possibly something like brute facts, that can be explained by something more concrete like physical necessity. In my overview of Rasmussen’s recent contingency argument, I go over what this looks like and it is more plausible than crossing the infinite chasm between a necessary cause and a necessary being on blind faith alone. In any case, since we cannot know what necessity is really like and since we cannot visit the realm of necessity, it is best we accept our ignorance on this matter. The intuition of skeptical theism undermines what many theists consider one of the stronger lines of argumentation in favor of theism.

The Argument From Phenomenal Distance

This novel argument, not to be confused with Mander’s “Does God know what it is like to be me?” (see Mander, W.J. (2002), Does God Know What It is Like to be Me?. The Heythrop Journal, 43: 430-443. https://doi.org/10.1111/1468-2265.00203), completely evades skeptical theism. It is an argument from analogy in which an observation about human behavior is mapped onto God. The argument has been alluded to before but, as far as I know, has not been formally stated or given a name.

RC*: The condition of the argument is as follows: there is a difference between the phenomenal experience of human beings and that of earthworms (if it is even appropriate to think that worms have phenomenal experience). Even if, as some philosophers hold, earthworms lack phenomenal consciousness, we certainly have it and, as such, there is a distance between our experience and theirs.

RC1 Human beings have phenomenal distance from earthworms and are therefore indifferent to them, e.g., we walk through a parking lot on a rainy day and probably trample dozens of them underfoot with no second thought.

RC2 An infinite god or even a vastly powerful deity would have an infinite or incalculable phenomenal distance from humans.

RCC1 Therefore, we should expect God to be indifferent to us.

This argument avoids the nauseating intuition of skeptical theism as it cannot appeal to any ignorance we have. One thing we are not ignorant of, as evil and hiddenness make clear, is that either God does not exist or, if any gods exist, they are astoundingly indifferent to us. Camus, in The Plague, observes through the character of Tarrou that if God is not going to provide succor in times of great atrocity, it is up to us to take the helm and do something about our plights. Conjoined to his scathing criticisms of the religious propensity to prefer the abstract over the concrete is Camus’ clearsighted focus on God’s absence or indifference, for even as Jacques Othon dies in agony and Father Paneloux shouts out “My God, spare this child!,” the child dies writhing in pain and wailing across the halls of the auxiliary hospital (Camus, Albert. The Plague. New York: Vintage International Books. 217. Web.). This argument is yet another powerful blow against personal being theism, because a friend is there in times of need, and a father who loves his children all the more so. No appeal to our ignorance, as my defeaters make clear, can be marshaled to salvage the notion of the existence of a personal being who loves us and has our well-being and prosperity in mind. The absence or, more tentatively, the indifference of God should disabuse one of the belief in a personal being who loves us infinitely.

In the end, I think the lines of argumentation I have pursued here are by no means exhaustive, in that a lot more can be said about evil, suffering, hiddenness, God’s lack of supererogative agency, and an indifference stemming from the incalculable, if not infinite, phenomenal distance he has from us. I defer to Rieux: “No Father. I’ve a very different idea of love. And until my dying day I shall refuse to love a scheme of things in which children are put to torture” (Camus, Ibid., 218).

A Reply to Strawson: Physicalism Does Not Entail Panpsychism

By R.N. Carmona

Why a blog post and not a proper response in a philosophy journal? For one, my very first journal submission is still in the review process, close to two months later. Secondly, blogging allows me to be pedantic, to be human, that is, to express frustration, to show anger, to be candid; in other words, blogging allows me to be myself. Probably of highest priority is the fact that I do not want my first publication in the philosophy of mind to be a response. I want to eventually outline my own theory of consciousness, which is strongly hinted at here, and I prefer for that to be my first contribution to the philosophy of mind. I do not find panpsychism convincing and I think there is another theory of consciousness, similar to panpsychism in some ways, that is much more cogent. I have outlined some qualms I have with panpsychism before; for people new to the blog, you can read them here. In any case, I will be responding to a number of points in Strawson’s Realistic Monism: Why Physicalism Entails Panpsychism. Here I will outline refutations that should leave panpsychism unresponsive once and for all, as it is not a productive theory of consciousness, i.e., it does no explanatory work and does not illuminate further research; it gives us no real direction to go in.

Strawson states: “You’re certainly not a realistic physicalist, you’re not a real physicalist, if you deny the existence of the phenomenon whose existence is more certain than the existence of anything else: experience, ‘consciousness’, conscious experience, ‘phenomenology’, experiential ‘what-it’s-likeness’, feeling, sensation, explicit conscious thought as we have it and know it at almost every waking moment” (3). Strawson not only sounds like an absolutist, but he has, no doubt intentionally, boxed out real physicalists like the Churchlands and Daniel Dennett. For my purposes, I deny none of these things. I am not an eliminativist, though in the past I have called myself such when I lacked a better term for my own point of view. Now, I believe I have located a better term and so, I call myself a recontextualist. I do not deny qualia. What I strongly deny is what panpsychists think qualia entail: usually a version of nonphysicalist panpsychism or even a covert substance dualism in which mental phenomena are ethereal. In light of this, I suggest that qualia are physically reducible in obvious ways already known to us and in currently non-obvious ways yet to be discovered or understood; we simply have to do the work of demonstrating how what-it’s-likeness is physically reducible. I do not think Strawson dodges recontextualism and this will become clearer as we move on.

He explains: “It follows that real physicalism can have nothing to do with physicSalism, the view — the faith — that the nature or essence of all concrete reality can in principle be fully captured in the terms of physics. Real physicalism cannot have anything to do with physicSalism unless it is supposed — obviously falsely — that the terms of physics can fully capture the nature or essence of experience” (4). I think the word physicSalism is clunky and so, I will exchange it for the word physicsism, which I think ties nicely to its predecessor scientism. There is not a chasm between someone who thinks science is the only way of knowing and someone who thinks physics is capable of explaining everything. Strawson makes the mistake of thinking physics stands alone among the hard sciences, as if it is the ground level of scientific explanation. I think chemistry joins physics in that department and as such, real physicalists can be physicsists if they are also chemistrists, the idea that a great number of physical phenomena are reducible to chemistry. If monism, that there is only one substance, and physicalism, that this one substance is physical in nature, are true, then it is incumbent on Strawson to address the notion that science cannot apprehend certain physical phenomena. Strawson, therefore, is guilty of the same dualistic tendencies he accuses Dennett of (5), and he seems to bite the bullet on this in offering his “‘experiential-and-non-experiential ?-ism’” (7). Per his view, there are actual physical phenomena explainable by science, especially ground level hard sciences like physics and chemistry. On the other hand, there are quasi-physical phenomena in which Strawson feigns physicalism while also betraying the fact that he means nothing more than nonphysicalism. This has to be qualified.

So, let us grant that Strawson would qualify the sense of sight as uncontroversially physical. Now, he claims that the what-it’s-likeness of seeing red is also physical and yet, science has no account for this per his claims; not only does science have no current account, but it can never have a viable account because, in his own words, “experiential phenomena ‘just are’ physical, so that there is a lot more to neurons than physics and neurophysiology record (or can record)” (7). I am a real physicalist and I strongly disagree with this statement. For starters, I think his statement is informed by a conflation some people tend to make: if something is explainable by science, it lacks existential meaning and so, anything that is explained by science enables nihilism. In other words, if we can explain the origin of morality without recourse to God, morality is suddenly meaningless in the so-called ultimate sense and just is relativistic or subjectivistic. This is wrong-headed. Explaining the what-it’s-likeness of red would not change the fact that red is my favorite color; nor would it change my experience of seeing a blood red trench coat hanging in a clothing store, as if begging me to purchase it. In a naturalistic world, meaning is decided by us anyway and so, nihilism does not follow from the fact that science explains something. Love is not any less riveting, captivating, and enrapturing if science somehow explained every detail about falling in love, loving one’s children, loving the species one belongs to, and loving species entirely different from oneself.

This aversion to science eventually explaining qualia reeks of nonphysicalism and to my mind, just is nonphysicalism being labeled as physicalism, which is really just a nominal label that so far fails to cohere with what is normally meant by physicalism. The notion that physics, chemistry, genetics, and neurophysiology can never record those aspects of neurons that account for qualia is incompatible with physicalism. If science can apprehend physical substance, and qualia are a physical substance as Strawson claims, then science can apprehend qualia. To say otherwise is for Strawson to admit that what he means by physical in the case of qualia is actually not physical at all. It is covert dualism and nonphysicalism. I have no qualms with scientists fully understanding why red is my favorite color. This does not then dampen my experience or make it meaningless.

Likewise, I know that sexual attraction reduces to mostly superficial, properly aesthetic, considerations and pheromones that induce a reaction in my brain, which then translates to a host of bodily reactions, e.g., feeling flush and then blushing, feeling either nervous, excited, or some combination of both, feeling a knot in my stomach. This does not make my attraction meaningless or, at least, less meaningful because, in truth, while I understand the science of attraction, it does not register when I am in the middle of experiencing attraction. These considerations factor even less when I have fallen in love. I do not think, “well damn, scientists have me pegged and I am only feeling all of these sensations because of serotonin and dopamine releases in my brain; love is ultimately meaningless.” What gives vibrance to experience is the experiencer.

Experience is akin to aesthetics, hence why we find some experiences pleasurable while there are others we can describe with an array of negative words and connotations. Science can also explain why a lot of people hate waiting for a long period of time, why just as many people hate the feeling of being out of breath, and why pretty much anyone hates going to work after a night of inadequate sleep. Science explaining these experiences does not change the interpretation of the experiencer; science does suggest why we have very common associations between most experiences, from pleasurable to painful to everything in between, and that speaks to us being one species. So, experience can be explained by science and science can even predict the interpretation of this or that experiencer, but science does not dampen phenomenal experience. Panpsychists confuse the fact that we have phenomenal experience with the fact that we interpret phenomenal experience. Physicalism is not opposed to science fully explaining either of these and in fact, it has done much in the way of explaining both. Strawson tries to avoid this and yet claims: “If everything that concretely exists is intrinsically experience-involving, well, that is what the physical turns out to be; it is what energy (another name for physical stuff) turns out to be. This view does not stand out as particularly strange against the background of present-day science, and is in no way incompatible with it” (8). Well, if indeed it does not stand out as particularly strange against the background of present-day science, then all concrete things can be explained by science. This entailment seems uncontroversial and obvious for anyone identifying as a physicalist.

Strawson stipulates that “real physicalists … cannot deny that when you put physical stuff together in the way in which it is put together in brains like ours, it constitutes — is — experience like ours; all by itself. All by itself: there is on their own physicalist view nothing else, nothing non-physical, involved” (12). This is patently false as it alludes to mind-brain identity theory. It is not just atoms coming together in brains like ours. Human consciousness is compound reductive. In other words, human consciousness is not reducible to just one physical, macro aspect about our biological structure. That is to say that it is not reducible to just our hands or just our feet or just our brains. Strawson’s conflation of physicalism, as usually construed, and mind-brain identity theory leaves out crucial elements of experience, namely our central and peripheral nervous systems; the parts of the brain because anyone versed in the pertinent science knows that when it comes to the brain, the parts are more integral to consciousness than the whole; sense apparatus like our eyes, noses, pain receptors, and so on; and finally, external objects that provide the mind with data to process, interpret, make sense of, and so on.

From the perspective of differential ontology, and given that I have been thoroughly disabused of flippant idealism and solipsism, I know that my thoughts are not organically generated as if in a vacuum, within my brain. My thoughts are invariably and intimately connected to whatever I am occupied with, at the present time with Strawson’s various claims about what physicalism entails. If he had never written his thoughts, then I would not be countering his claims with my own. Perhaps I would be thinking about lunch at 12:26 pm ET, but alas, I am not. The point being that when I do start to think about having lunch, my thoughts about what to eat will be influenced by hunger pangs that amount to a feedback loop between my brain and my gut, again demonstrating the importance of organs other than just the brain in accounting for my experience, and pretty much any human being’s experience, of hunger. That feeling would take priority over my desire to respond to Strawson. Deciding what to eat amounts to constraints, like what food I have in my pantry and refrigerator and a desire not to spend money on takeout. So, I can only end up eating what is already available to me; in this case, only unexpected factors can change this course. Perhaps a neighbor or a relative has decided to bring me baked lasagna and since I currently do not know that they have these plans, that option does not feature in what I am thinking of having for lunch. In any case, what has become clear is that phenomenal consciousness reduces, in part, to a litany of physical objects, some of which are not even in your vicinity. What is also clear is that the brain alone does not account for phenomenal consciousness.

Strawson and other panpsychists are looking in one of the right places, to be sure, but understanding phenomenal consciousness is like understanding a crime scene, and as such, we have to be aware of various factors that make the crime cohere, from blood spatter patterns to the murder weapon to point of entry (and whether or not it was forced entry) all the way up to possible motive. If we stop short at the murder weapon, then we can conclude the person was stabbed, but we cannot make any conclusions as to how many times, in what areas of the body, by whom, and for what reason. Phenomenal consciousness, uncontroversially, is exactly like that! Strawson and panpsychists sit out on the porch of the brain and do not venture into a mansion with many rooms, light switches, outlets, and such. Neurons, synapses, neurogenesis, neurodegeneration, memory formation, recollection, confabulation, and so on are critically important in accounting for certain experiences. We cannot say the what-it’s-likeness of déjà vu is due to the fact that particles are conscious. That tells us nothing, does not help us elucidate this experience, and ultimately, lacks explanatory power. It is simply a vacuous claim. Real physicalists can enter the many-roomed mansion and start to explain why this experience feels a certain way, and why some of us interpret it the way we do; for instance, there is a delay between seeing and being aware that we have seen, and so, in those small intervals of time, we can fool ourselves into thinking we have already seen what we just realized we saw. In other words, your brain “knows” what you see before you realize you have seen it. Generally, however, scientists think that déjà vu is tied to memory, so if we are sitting on the porch, trying to explain what it’s like to have this experience, we are in the wrong part of the house. We have to venture into the hippocampus, for instance (see Hamzelou, Jessica. “Mystery of déjà vu explained – it’s how we check our memories”. New Scientist. 16 Aug 2016. Web.).

I will feel free to skip the entire section on emergentism because, while I find this account intriguing, it is implausible and has what I think are obvious commitments. Strawson defines it as follows:

Experiential phenomena are emergent phenomena. Consciousness properties, experience properties, are emergent properties of wholly and utterly non-conscious, non-experiential phenomena. Physical stuff in itself, in its basic nature, is indeed a wholly non-conscious, non-experiential phenomenon. Nevertheless when parts of it combine in certain ways, experiential phenomena ‘emerge’. Ultimates in themselves are wholly non-conscious, non-experiential phenomena. Nevertheless, when they combine in certain ways, experiential phenomena ‘emerge’. (12)

If this is the case, then emergentism is committed to idealism and to solipsism, “sometimes expressed as the view that “I am the only mind which exists,” or “My mental states are the only mental states”” (Thornton, Stephen. “Solipsism and the Problem of Other Minds”. Internet Encyclopedia of Philosophy. Web.). The obvious drawback here is that there is no way to pin down where these properties emerge from. The source will vary from one first person point of view to the next or, to put it facetiously, from one person under the illusion that they have first person perspective to another person under the same illusion. I will claim that all that exists is my mind while someone else can lay claim to their own mind existing. I will then claim that all else emerges from my mental states while the next person makes the same claim. Then the question becomes, when we are both shopping for clothes, why do we both see a blood red trench coat for sale and why is it that my mental state of wanting to buy it does not emerge from his mental state of barely noticing the coat? How can these same properties group together to become the same object from two people under the illusion that their respective mental states are the only mental states? Emergentism, with respect to consciousness, does not evade these problematic commitments.

To understand the next couple of sections in his paper, in which Strawson’s claims go off the rails and get even wilder, the following have to be kept in mind:

  • The non-experiential thesis: “[NE] physical stuff is, in itself, in its fundamental nature, something wholly and utterly non-experiential” (11)
  • Real Physicalism: “[RP] experience is a real concrete phenomenon and every real concrete phenomenon is physical” (12)
  • P-phenomena: “the phenomena of liquidity reduce without remainder to shape-size-mass-charge-etc. phenomena” (13)
  • “The central idea of neutral monism is that there is a fundamental, correct way of conceiving things — let us say that it involves conceiving of them in terms of ‘Z’ properties — given which all concrete phenomena, experiential and non-experiential, are on a par in all being equally Z phenomena” (23)

Setting aside Strawson’s side-stepping of chemistry, which easily shows how liquid water can “emerge” from two hydrogen atoms and one oxygen atom, the reason we cannot have Z phenomena is because the question of how consciousness can come from non-consciousness is itself reducible to a scientific question that has yet to be fully answered: how did life arise from non-life? Consciousness, as we know, is found in living things, so per the combination problem, what criteria need to be met for a consciousness like ours to take shape? Is it size, mass, shape, charge? Buildings and mountains are far more massive than us and by extension, are larger and have more particles generating what should amount to greater charges; and yet, mountains and buildings do not appear to be conscious at all. This is a critical clue because clearly, the haphazard combination of particles when a mountain forms or when a building is erected does not accomplish giving rise to consciousness like ours. Briefly, the combination problem can be formulated as follows:

Take a hundred of them [feelings], shuffle them and pack them as close together as you can (whatever that may mean); still each remains the same feeling it always was, shut in its own skin, windowless, ignorant of what the other feelings are and mean. There would be a hundred-and-first feeling there, if, when a group or series of such feelings were set up, a consciousness belonging to the group as such should emerge. And this 101st feeling would be a totally new fact; the 100 feelings might, by a curious physical law, be a signal for its creation, when they came together; but they would have no substantial identity with it, not it with them, and one could never deduce the one from the others, nor (in any intelligible sense) say that they evolved it.

Goff, Philip, William Seager, and Sean Allen-Hermanson. “Panpsychism”. The Stanford Encyclopedia of Philosophy. Summer 2020. Web.

To do away with Strawson’s assertions concerning consciousness coming from experiential ultimates, I summon help from an unexpected source. Though the neo-Aristotelian uses this thought experiment for different purposes, it is enough to show that basic organization does not consciousness make. Jaworski, no doubt inadvertently, presents a version of the combination problem that cuts deeply into Strawson’s thesis. He explains:

Suppose we put Godehard in a strong bag — a very strong bag since we want to ensure that nothing leaks out when we squash him with several tons of force. Before the squashing, the contents of the bag include one human being; after, they include none. In addition, before the squashing the contents of the bag can think, feel, and act, but after the squashing they can’t. What explains these differences in the contents of the bag pre-squashing and post-squashing? The physical materials (whether particles or stuffs) remain the same — none of them leaked out. Intuitively, we want to say that what changed was the way those materials were structured or organized. 

Jaworski, William. Structure and the Metaphysics of Mind: How Hylomorphism Solves the Mind-Body Problem. Oxford: Oxford University Press, 2016. 9. Print.

Intuitively, I do not say that what changed is just the organization or structure of these materials. That dodges Jaworski’s neo-Aristotelian commitments. I also add that the spark of consciousness is what changed. There is, in this case, irreparable damage to the claustrum, thus making consciousness impossible to turn back on, so to speak (see Koch, Christoph. “Neuronal “Superhub” Might Generate Consciousness”. Scientific American. 1 Nov 2014. Web.). Furthermore, there is irreparable damage to other pivotal organs that make it possible for us to make any claim to being alive. The liver, heart, stomach, etc. have all lost their function. The matter is still there, but the electric fields that make us conscious are permanently off. This is why I am conscious and an inanimate object, equal in size and weight to me, perhaps a boulder, is not conscious. Non-experiential things can be used to design other non-experiential things or can naturally form into other non-experiential things given that organic compounds and electric fields are entirely absent. The question of how consciousness arises from non-consciousness just is the question of how life arises from non-life. Just because we currently do not have a fuller, more detailed picture does not mean we have license to offer theories like panpsychism, which possess nothing in the way of explanatory power. The panpsychist and neo-Aristotelian think they are headed in some definite direction, but they are both quickly approaching dead ends.

Electric fields theory (EFT) of consciousness, indeed similar to panpsychism, at least prima facie, is where panpsychists should place their chips. Tam Hunt elaborates:

Nature seems to have figured out that electric fields, similar to the role they play in human-created machines, can power a wide array of processes essential to life. Perhaps even consciousness itself. A veritable army of neuroscientists and electrophysiologists around the world are developing steadily deeper insights into the degree that electric and magnetic fields—“brainwaves” or “neural oscillations”—seem to reveal key aspects of consciousness. The prevailing view for some time now has been that the brain’s bioelectric fields, which are electrical and magnetic fields produced at various physical scales, are an interesting side effect—or epiphenomenon—of the brains’ activity, but not necessarily relevant to the functioning of consciousness itself.

A number of thinkers are suggesting now, instead, that these fields may in fact be the main game in town when it comes to explaining consciousness. In a 2013 paper, philosopher Mostyn Jones reviewed various field theories of consciousness, still a minority school of thought in the field but growing. If that approach is right, it is likely that the body’s bioelectric fields are also, more generally, associated in some manner with some kind of consciousness at various levels. Levin provided some support for this notion when I asked him about the potential for consciousness, in at least some rudimentary form, in the body’s electric fields.

Hunt, Tam. “The Link Between Bioelectricity and Consciousness”. Nautilus. 10 Mar 2021. Web.

While I am committed to monism, the idea that only physical substance exists, and am therefore committed to physicalism, I am not committed to the idea that particles are the kinds of ultimates that attend to consciousness. Cells are the ultimates that attend to conscious beings like ourselves. This is the reason why the boulder lacks consciousness despite weighing as much as I do. Intuitively, the boulder should have roughly the same number of particles in its structure as I do, but of utmost priority here is determining what I possess that the boulder does not. I am, as it were, activated by electric fields: receptive to my environment, able to respond and adapt to it. The boulder, on the other hand, cannot do this. One might want to ask why, when the boulder finds itself gradually eroding under a small waterfall, it does not simply relocate itself. If, like me, it has a rudimentary spark of consciousness, why does it resign itself to a slow death, i.e., erosion? Bioelectric fields account for why I will move out of the way of an oncoming vehicle while the boulder remains idle under the waterfall, slowly eroding as time goes on.

This is probably the most damning response to Strawson: various domains of science are needed to understand consciousness. If EFT is accurate, and I see no reason to think it inaccurate, cell biology is just as crucial as physics, chemistry, and neurophysiology. This makes for a comprehensive understanding of consciousness, formed by the convergence of all our domains of knowledge. Philosophy of mind no doubt has a role to play, but not when it ventures far and wide from what science suggests. There is already a fundamental distinction between non-life and life, between inanimate objects and people. It follows, therefore, that if consciousness inheres in living things, then it cannot be attributed to non-living things. This smacks of common sense and yet, philosophers are willing to occupy the fringes with popular theories like panpsychism. Some pretty anecdotes have come from this idea. Some say we are the universe experiencing itself, but if the universe already had all the makings of phenomenal consciousness, why does it need overrated chimpanzees who are preoccupied with reality TV, social media, violence against one another, and all manner of mundane and superficial nonsense to understand itself? If any composition at all, even absent bioelectric fields, is enough to account for consciousness, why not just configure itself into a Boltzmann brain that has no potential to develop the silly prejudices and biases humans tend to have?

My account starts to make clear how I can be committed to NE and RP and lead us in the right direction as it concerns Z phenomena. P phenomena are well-accounted for as they pertain to inorganic compounds. Of course, it begs the question to say that we have not quite nailed P phenomena as they concern organic chemistry. To reiterate, our ignorance with respect to how inorganic compounds become organic compounds that are essential to life does not give us license to make as many wild assumptions as we please. Any hypothesis, even within philosophy, especially if it encroaches on the territory of science, has to demonstrate awareness of scientific literature or at least, incorporate science wherever it is germane to one’s theory. Claiming that particles are actually experiential entities that account for why we are conscious merely pushes the problem back. Panpsychists have moved the goalposts, and if they were correct, we would still be tasked with comprehending the consciousness of things utterly unlike ourselves. Thankfully, we do not have to do that and we can focus our energy on understanding why there is a what-it’s-likeness to our experiences. Again, there are important clues: for instance, people who were born blind cannot see while dreaming:

When a blind man is asked if he dreams the answer is immediate: ‘Yes!’ But if we ask him if he sees anything in the dream, the answer is always doubtful because he does not know what it is to see. Even if there were images and colours in his brain during the dream, how could he recognize them? There is, therefore, no direct way, through the dream reports, to evaluate the presence of visual activation in the dream of congenitally blind subjects.

Carr, Michelle. “Do Blind People See in Their Dreams?”. Psychology Today. 29 Dec 2017. Web.

If experiential particles give rise to sight, then why do particles seem entirely dependent on eyes? Why do they not simply configure themselves in another way in order to circumvent the blindness of the eyes? It is quite telling that in the absence of a sense, the corresponding phenomenal aspect of experiences associated with that sense is also absent. My compound reductive account predicts this; this is unsurprising on my theory of consciousness whereas on Strawson’s account, and any panpsychist’s account, there is no way to account for this. Severe retinopathy is usually what causes people to be born blind. There are millions of light-sensitive cells within the retina, along with other nerve cells that receive and process the information that is sent to your brain, by way of the optic nerve. On EFT, therefore, blindness is damage within the electric fields that result in sight. The cure for blindness is to restore the electric fields within these cells so that communication between nerve cells is possible. That would then restore any corresponding phenomenal experiences. The mere organization of particles clearly does not accomplish this. EFT seems to have far more explanatory power than panpsychism does and if we took pains to assess just our five ordinary senses, we would be able to see that like blindness, anosmia, ageusia, deafness, and things like neuropathy, hypoesthesia, and CIP (congenital insensitivity to pain) are all reducible to nerve cell damage in the nose, mouth, ears, and extremities respectively. In simple terms, bioelectric pathways are damaged and thus, turn off communication to the brain, and in turn, cut off the corresponding qualia. This is essentially what I mean by recontextualizing qualia and Strawson clearly does not dodge that bullet.

Ultimately, I think EFT should supplant panpsychism in popular circles. I can agree with the notion of conscious cells because they are among the smallest structures atoms have assembled into within living things. I disagree with the idea of conscious particles because when they organize into air fryers, thermostats, buildings, mountains, and sand dunes, despite having comparable mass, size, shape, and charge to living things, none of these objects appear to be conscious; in other words, none of these objects appear to be aware, awake, attentive, and most importantly, alive. I can knock on a fish tank and the fish with the blankest stare in the tank can respond to a stress signal and move away from that area in the tank. I can throw a rock repeatedly into a harder material and it will continue to remain idle; put another way, I can take a geologist’s hammer to sediment over and over again, whether for a dig or in a fit of sustained rage, and the sediment will remain idle, allowing me to crack and break it as much as I please. Conscious beings, on the other hand, have a bias toward survival and retention of their structure. To use as humane an example as possible, if you were to do something that caused an insect pain, perhaps sending minor electrical charges into one of its legs, its automatic response will be to try to escape the situation. The insect, like you, wants to survive and ensure that it is not crushed or, in this case, burnt to a crisp. The same cannot be said of the myriad, non-experiential macro objects around us day in and day out. Strawson and panpsychists, aside from co-opting terms like physicalism when they really do not mean physicalism, would do well to renounce panpsychism and accept a better theory of ultimates: electric fields theory of consciousness. Then they can come to my pluralist physicalist account that allows for compound reductionism. To my mind, this is the only real way to study consciousness.

Rebuking Rasmussen’s Geometric Argument

By R.N. Carmona

My purpose here is twofold. First and foremost, I want to clarify Rasmussen’s argument because, though I can understand how word of mouth can lead to what is essentially a straw man of his argument, especially in light of the fact that his argument requires one to pay for an online article or for his book Is God the Best Explanation of Things?, which he coauthored with Felipe Leon, it is simply good practice to present an argument fairly. Secondly, I want to be stern about the fact that philosophy of religion cannot continue to rake these dead coals. Rasmussen’s argument is just another in a long, winding, and quite frankly, tired history of contingency arguments. In any case, the following is the straw man I want my readers and anyone else who finds this post to stop citing. This is decidedly not Rasmussen’s argument:

[Image: the straw-man graphic, the so-called “Argument From Arbitrary Limits”]

Rasmussen has no argument called The Argument From Arbitrary Limits. Arbitrary limits actually feature in Leon’s chapter, in which he expresses skepticism of Rasmussen’s Geometric Argument (Rasmussen, Joshua and Leon, Felipe. Is God the Best Explanation of Things? Switzerland: Palgrave Macmillan. 53-68. Print.). Also, Rasmussen has a Theistic conception of God (omnipresent, wholly good, etc.) that is analogous to what Plantinga means by maximal greatness, but Rasmussen does not refer to God using that term. Perhaps there is confusion with his use of the term “maximal conceivable.” Granted, given Rasmussen’s beliefs, he implies God with what he calls a maximal foundation, “a foundation complete with respect to its fundamental (basic, uncaused) features” (Ibid., 140), but he makes it clear throughout the book that he is open to such a foundation not being synonymous with God. In any case, his maximal conceivable is not a being possessing maximal greatness; at least, not exactly, since it appears he means something more elementary given his descriptions of basic and uncaused, as these clearly do not refer to omnipresence, perfect goodness, and so on. There may also be some confusion with his later argument, which he calls “The Maximal Mind Argument” (Ibid., 112-113), which fails because it relies heavily on nonphysicalism, a series of negative theories in philosophy of mind that do not come close to offering alternative explanations for an array of phenomena thoroughly explained by physicalism (see here). In any case, Rasmussen has no argument resembling the graphic above. His arguments rest on a number of dubious assumptions, the nexus of which is his Geometric Argument:

JR1 Geometry is a geometric state.

JR2 Every geometric state is dependent.

JR3 Therefore, Geometry is dependent.

JR4 Geometry cannot depend on any state featuring only things that have a geometry.

JR5 Geometry cannot depend on any state featuring only non-concrete (non-causal) things.

JRC Therefore, Geometry depends on a state featuring at least one geometry-less concrete thing (3-5) (Ibid., 42).

Like Leon, I take issue with JR2. Leon does not really elaborate on why JR2 is questionable, saying only that “the most basic entities with geometry (if such there be) have their geometrics of factual or metaphysical necessity” and that therefore, “it’s not true that every geometric state is dependent” (Ibid., 67). He is correct, of course, but elaboration could have helped here because this is a potential defeater. Factual and metaphysical necessity inhere in physical necessity. The universe is such that the fact that every triangle containing a 90-degree angle is a right triangle is reducible to physical constraints within our universe. This fact of geometry is unlike Rasmussen’s examples, namely chair and iPhone shapes. He states: “The instantiation of [a chair’s shape] depends upon prior conditions. Chair shapes never instantiate on their own, without any prior conditions. Instead, chair-instantiations depend on something” (Ibid., 41). This overt Platonism is questionable in and of itself, but Leon’s statement is forceful in this case: the shape of the chair is not dependent because it has its shape of factual or metaphysical necessity, which stems from physical necessity. Chairs, first and foremost, are shaped the way they are because of our shape when we sit down; furthermore, chairs take the shapes they do because of physical constraints like human weight, gravity, friction against a floor, etc. For a chair not to collapse under the force of gravity and the weight of an individual, it has to be engineered in some way to withstand these forces acting on it; the chair’s shape is what it is because of physical necessity, and this explains its metaphysical necessity. There is, therefore, no form of a chair in some ethereal realm; an idea like this is thoroughly retrograde and not worth considering.

In any case, the real issue is that chair and iPhone shapes are not the sort of shapes that occur naturally in the universe. The shapes that do, namely spheres, ellipses, triangles, and so on, also emerge from physical necessity. It is simply the case that a suspender on a bridge forms the hypotenuse of a right triangle. Like a chair, bridge suspenders take this shape because of physical necessity. The same applies to the ubiquity of spherical and elliptical shapes in the universe. To further disabuse anyone of Platonic ideas, globular shapes are also quite ubiquitous in the universe and are more prominent the closer we get to the Big Bang. There are shapes inherent in our universe that cannot be neatly called geometrical, and even still, these shapes are physically and, therefore, metaphysically necessitated. If JR2 is false, then the argument falls apart. On another front, this addresses Rasmussen’s assertion that God explains why there is less chaos in our universe. Setting aside that the qualification of this statement is entirely relative, the relative order we see in the universe is entirely probabilistic, especially given that entropy guarantees a trend toward disorder as the universe grows older and colder.

I share Leon’s general concern about “any argument that moves from facts about apparent contingent particularity and an explicability principle to conclusions about the nature of fundamental reality” (Ibid., 67) or, as I have been known to put it: one cannot draw ontological conclusions on the basis of logical considerations. Theistic philosophers of religion and, unfortunately, philosophers in general have a terrible habit of leaping from conceivability to possibility and then, all the way to actuality. Leon elaborates:

Indeed, the worry above seems to generalize to just about any account of ultimate reality. So, for example, won’t explicability arguments saddle Christian theism with the same concern, viz. why the deep structure of God’s nature should necessitate exactly three persons in the Godhead? In general, won’t explicability arguments equally support a required explanation for why a particular God exists rather than others, or rather than, say, an infinite hierarchy of gods? The heart of the criticism is that it seems any theory must stop somewhere and say that the fundamental character is either brute or necessary, and that if it’s necessary, the explanation of why it’s necessary (despite appearing contingent) is beyond our ability to grasp (Ibid., 67-68).

Of course, Leon is correct in his assessment. Why not Ahura Mazda, his hypostatic union to Spenta Mainyu, and his extension via the Amesha Spentas? If, for instance, the one-many problem requires the notion of a One that is also many, what exactly rules out Ahura Mazda? One starts to see how the prevailing version of Theism in philosophy of religion is just a sad force of habit. This is why it is necessary to move on from these arguments. Contingency arguments are notoriously outmoded because Mackie, Le Poidevin, and others have already provided general defeaters that can apply to any particular contingency argument. Also, how many contingency arguments do we need exactly? In other words, how many different ways can one continue to assert that all contingent things require at least one necessary explanation? Wildman guides us here:

Traditional natural theology investigates entailment relations from experienced reality to, say, a preferred metaphysics of ultimacy. But most arguments of this direct-entailment sort have fallen out of favor, mostly because they are undermined by the awareness of alternative metaphysical schemes that fit the empirical facts just as well as the preferred metaphysical scheme. By contrast with this direct-entailment approach, natural theology ought to compare numerous compelling accounts of ultimacy in as many different respects as are relevant. In this comparison-based way, we assemble the raw material for inference-to-the-best-explanation arguments on behalf of particular theories of ultimacy, and we make completely clear the criteria for preferring one view of ultimacy to another.

Wildman, Wesley J. Religious Philosophy as Multidisciplinary Comparative Inquiry: Envisioning a Future For The Philosophy of Religion. State University of New York Press. Albany, NY. 2010. 162. Print.

Setting aside that Rasmussen does not make clear why he prefers a Christian view of ultimacy as opposed to a Zoroastrian one or another that may be proposed, I think Wildman is being quite generous when saying that “alternative metaphysical schemes fit the empirical facts just as well as the preferred metaphysical scheme” because the fact of the matter is that some alternatives fit the empirical facts better than metaphysical schemes like the ones Christian Theists resort to. Rasmussen’s preferred metaphysical scheme of a maximal foundation, which, properly stated, is a disembodied, nonphysical mind who is omnipresent, wholly good, and so on, rests on dubious assumptions that have not been made to cohere with the empirical facts. Nonphysicalism, as I have shown in the past, does not even attempt to explain brain-related phenomena. Physicalist theories have trounced the opposition in that department, and it is not even close. What is more, Christian Theists are especially notorious for not comparing their account to other accounts, and that is because they are not doing philosophy, but rather apologetics. This is precisely why philosophy of religion must move on from Christian Theism. We can think of an intellectual corollary to forgiveness. In light of Christian Theism’s abject failure to prove God, how many more chances are we required to give this view? Philosophy of religion is, then, like an abused lover continuing to be moved by scraps of affection made to cover up heaps of trauma. The field should be past the point of forgiveness and giving Christian Theism yet another go to get things right; it has had literal centuries to get its story straight and present compelling arguments, and yet here we are, retreading ground that has been walked over again and again and again.

To reinforce my point, I am going to quote Mackie and Le Poidevin’s refutations of contingency arguments like Rasmussen’s. It should then become clear that we have to bury these kinds of arguments for good. Let those who are attached to these arguments mourn their loss, but I will attend no such wake. What remains of the body is an ancient skeleton, long dead. It is high time to give it a rest. Le Poidevin put one nail in the coffin of contingency arguments; anyone offering new contingency arguments has simply failed to do their homework. It is typical of Christian Theists to indulge confirmation bias and avoid what their opponents have to say. The problem with that is that the case against contingency arguments has already been made; obstinacy does not change that fact. Le Poidevin clearly shows why necessary facts do not explain contingent ones:

Necessary facts, then, cannot explain contingent ones, and causal explanation, of any phenomenon, must link contingent facts. That is, both cause and effect must be contingent. Why is this? Because causes make a difference to their environment: they result in something that would not have happened if the cause had not been present. To say, for example, that the presence of a catalyst in a certain set of circumstances speeded up a reaction is to say that, had the catalyst not been present in those circumstances, the reaction would have proceeded at a slower rate. In general, if A caused B, then, if A had not occurred in the circumstances, B would not have occurred either. (A variant of this principle is that, if A caused B, then if A had not occurred in the circumstances, the probability of B’s occurrence would have been appreciably less than it was. It does not matter for our argument whether we accept the origin principle or this variant.) To make sense of this statement, ‘If A had not occurred in the circumstances, B would not have occurred’, we have to countenance the possibility of A’s not occurring and the possibility of B’s not occurring. If these are genuine possibilities, then both A and B are contingent. So one of the reasons why necessary facts cannot causally explain anything is that we cannot make sense of their not being the case, whereas causal explanations requires us to make sense of causally explanatory facts not being the case. Causal explanation involves the explanation of one contingent fact by appeal to another contingent fact.

Le Poidevin, Robin. Arguing for Atheism: An Introduction to the Philosophy of Religion. London: Routledge, 1996. 40-41. Print.

This is a way of substantiating that an effect inheres in its cause, or the principle of like effects from like causes. This has been precisely my criticism of the idea that a nonphysical cause created the physical universe. There is no theory of causation that permits interaction between an ethereal entity’s dispositions and those of physical things. It is essentially a paraphrase of Elizabeth of Bohemia’s rebuttal to Cartesian dualism: how does mental substance interact with physical substance? This is why mind-body dualism remains in a state of incoherence, but I digress. Mackie puts yet another nail in the coffin:

The principle of sufficient reason, then, is more far-reaching than the principle that every occurrence has a preceding sufficient cause: the latter, but not the former, would be satisfied by a series of things or events running back infinitely in time, each determined by earlier ones, but with no further explanation of the series as a whole. Such a series would give us only what Leibniz called ‘physical’ or ‘hypothetical’ necessity, whereas the demand for a sufficient reason for the whole body of contingent things and events and laws calls for something with ‘absolute’ or ‘metaphysical’ necessity. But even the weaker, deterministic, principle is not an a priori truth, and indeed it may not be a truth at all; much less can this be claimed for the principle of sufficient reason. Perhaps it just expresses an arbitrary demand; it may be intellectually satisfying to believe there is, objectively, an explanation for everything together, even if we can only guess at what the explanation might be. But we have no right to assume that the universe will comply with our intellectual preferences. Alternatively, the supposed principle may be an unwarranted extension of the determinist one, which, in so far as it is supported, is supported only empirically, and can at most be accepted provisionally, not as an a priori truth. The form of the cosmological argument which relies on the principle of sufficient reason therefore fails completely as a demonstrative proof.

Mackie, J. L. The Miracle of Theism: Arguments for and against the Existence of God. Oxford: Clarendon, 1982. 86-87. Print.

Every contingency argument fails because it relies on the principle of sufficient reason and because necessity does not cohere with contingency as it concerns the purported causal relation. Mackie, like Le Poidevin, also questions why God is a satisfactory termination of the regress. Why not something else? (Ibid., 92). Contingency arguments amount to vicious special pleading and an outright refusal to entertain viable alternatives, even in cases where the alternatives are nonphysical and compatible with religious sentiments. In any case, it would appear that the principle of sufficient reason is not on stable ground. Neither is the notion that a necessary being is the ultimate explanation of the universe. Contingency arguments have been defeated, and there really is no way to repeat these arguments in a way that does not fall on the horns of Le Poidevin and Mackie’s defeaters. Only the obdurate need to believe that God is the foundational explanation of the universe explains the redundancy of Christian Theists within the philosophy of religion. That is setting aside that apologetics is not philosophy, along with other complaints I have had. The Geometric Argument, despite using different language, just is a contingency argument. If the dead horse could speak, it would tell them all to lay down their batons once and for all, but alas.

Ultimately, contingency arguments are yet another example of how repetitive Christianized philosophy of religion has become. There is a sense in which Leon, Le Poidevin, and Mackie are paraphrasing one another because, and here is a bit of irony, like arguments result in like rebuttals. They cannot help but sound as though they each decided, or even conspired, to write on the same topic for a final paper. They are, after all, addressing the same argument no matter how many attempts have been made to word it differently. It is a vicious cycle, a large wheel that cannot be allowed to keep on turning. It must be stopped in its tracks if progress in the philosophy of religion is to gain any real traction.

Philosophy of Religion Series: A Brief Exploration of Ātman in Hinduism and Anattā in Buddhism

By R.N. Carmona

In the beginning this world was only brahman, and it knew only itself (ātman), thinking: ‘I am brahman.’ As a result, it became the Whole. Among the gods, likewise, whosoever realized this, only they became the Whole. It was the same also among the seers and among humans. Upon seeing this very point, the seer Vāmadeva proclaimed: ‘I was Manu, and I was the sun.’ This is true even now. If man knows ‘I am brahman‘ in this way, he becomes this whole world. Not even the gods are able to prevent it, for he becomes their very self (ātman). So when a man venerates another deity, thinking, ‘He is one, and I am another’, he does not understand.

Olivelle, Patrick. Upaniṣads: A New Translation. New York: Oxford World’s Classics, 1996. 15. Print.

This passage from the Bṛhadāraṇyaka Upaniṣad coincides with the earliest ideas of ātman (the self). The Upaniṣads, unlike the Vedas, explore ātman in greater detail. In the “Ṛgveda (c.1200 B.C.E.), the earliest textual source from ancient India, ātman had already a wide range of lexical meanings, including ‘breath’, ‘spirit’, and ‘body’” (Black, Brian. “Upanishads”. Internet Encyclopedia of Philosophy. Web.). Interestingly, the Upaniṣads, taken together, do not yield the same interpretation of the self, so there is a sense in which the concept of ātman anticipated a view in modern philosophy of mind. We will circle back to that later. Of importance now is laying out a brief overview of ātman in Hinduism. Then, we will turn to the Buddhist interpretation of the idea, anattā, which has interesting parallels to modern views of mind.

The Vedic idea of ātman never fell out of fashion, as is made apparent in Uddālaka’s teachings. His idea of ātman is pretty much identical: it is the life force within all living things, the very essence creating a bridge between the parts and the whole. This is in keeping with Advaita Vedānta, in which the “experiencing self (jīva) and the transcendental self of the Universe (ātman) are in reality identical (both are Brahman), though the individual self seems different as space within a container seems different from space as such” (Menon, Sangeetha. “Vedanta, Advaita”. Internet Encyclopedia of Philosophy. Web.). Yājñavalkya offers a different interpretation, equating the self with consciousness rather than a life force. He “characterizes the self as that which has mastery over the otherwise distinct psycho-physical capacities. He goes on to explain that we know the existence of the self through actions of the self, through what the self does, not through our senses—that the self, as consciousness, cannot be an object of consciousness” (Black, Ibid.). Despite differences from Uddālaka’s interpretation, Yājñavalkya still adheres to Advaita Vedānta. The Advaita school of Vedānta yields a concept of God that accords with panentheism.

Prajāpati also equates ātman with consciousness, but crucially, he also conflates it with the material body. Prajāpati, therefore, presents a strain of another school of Vedānta, namely Dvaita, which is dualistic. In a sense, it is a dualism of mind and body, or consciousness and the material, but more importantly, it is a dualism of jīva and Brahman, i.e., humankind and God. Given Prajāpati’s distinction, we see the beginnings of monotheism or henotheism, and of the much later bhakti tradition in Hinduism, in which a devotee of a given god is to unite their soul to this god by way of love and devotion. Though there are other interpretations of ātman and Brahman in Hinduism, Advaita and Dvaita suffice for our purposes.

In Buddhism, there is no ātman. We are, therefore, introduced to the concept of anattā, or non-self. There is no static, immutable, essential soul or consciousness. This is crucial for Buddhist teachings regarding suffering (dukkha) and detachment because if one does not have the idea of an essential self, one is less likely to pity oneself over others, to regard one’s own suffering as having higher priority than that of other beings. Coseru elaborates:

The centrality of the not-self doctrine in Buddhist thought is explained on the basis of its pragmatic role in guiding the adept on the path to enlightenment. Furthermore, the not-self doctrine provides a justification for treating endurance, independence, and self-subsistence as neither desirable nor attainable, but rather as what they are: mistaken notions resulting from the habitual tendency to construct an identity from a stream of physical and subjective phenomena. 

Coseru, Christian. “Mind in Indian Buddhist Philosophy”. Stanford Encyclopedia of Philosophy. 2012. Web.

As Coseru also points out, there is a sense in which the Buddhist idea of anattā anticipated Hume who thought that there was no self to apprehend within our perceptions. Along with anattā, Buddhists present “a fluid account of experience as an ever-changing stream of psycho-physical events. This dynamic model of human existence comprises the five classes of phenomena the Buddha referred to as the “aggregates of grasping” (upādāna-skandha), on account of our tendency to grasp after and identify with them” (Ibid.). This is opposed to our idea of a fixed self or consciousness experiencing life in a Cartesian theater.

When considering the Hindu idea of ātman and the Buddhist response of anattā, we can start to see how we could have avoided all of Descartes’ mistakes in the philosophy of mind had we been better versed in Eastern religions or other religions aside from Christianity. Christianity, akin to Dvaita, creates a dualism between God and man. There is never a sense, per Christian theology, in which man and God are identical or one. There is no sense in which man’s consciousness and God’s are identical either. Descartes took this a step further, dualizing the physical body and the mental soul. Hindus adhering to Dvaita Vedānta had already committed this error, and the Buddhist idea of anattā, aside from reducing consciousness to the physical domain, suggested that there is no self to speak of and, more importantly, that there is no phenomenal consciousness to capture. It is an illusion.

Interestingly, the non-duality of Advaita Vedānta (monism) can be seen as paraphrasing anattā in that ideas of the self are illusory, a part of the Brahman dream (maya). This leads to the idea of mokṣa, the notion that we can free ourselves from the cycle of death and rebirth. For Hindus adhering to Advaita Vedānta, mokṣa is attained when one accepts the self as being one with Brahman. For Buddhists, Nirvana is the emptying of ideas of self and ultimately realizing that there is no self; this is how one frees oneself from the cycle of death and rebirth. Under both interpretations, there is a sense in which there is no self. On the one hand, any self that is at variance with Brahman is illusory, a product of the maya, while on the other, there is simply no self, and any erroneous ideas we get about the self proceed from the ego. The ego is the engine through which false narratives of the self are created.

Further exploration of the self and the ego delves too far into the philosophy of mind, but brief comments are in order. The Churchlands and Dennett adhere to anattā if ātman is defined as phenomenal consciousness. Ramsey states:

Dennett challenges not just our conception of pain, but all of our different notions of qualitative states. His argument focuses on the apparently essential features of qualia, including their inherent subjectivity and their private nature. Dennett discusses several cases—both actual and imaginary—to expose ways in which these ordinary intuitions about qualia pull apart. In so doing, Dennett suggests our qualia concepts are fundamentally confused and fail to correspond with the actual inner workings of our cognitive system.

Ramsey, William. “Eliminative Materialism”. Stanford Encyclopedia of Philosophy. 2019. Web.

It can be argued, therefore, that if the history of philosophy of religion had been different, then the history of philosophy of mind would have proceeded differently. In other words, the missteps philosophers have taken throughout the history of philosophy of mind likely would not have happened. Of course, we would be dealing with a different set of mistakes, but some of the mistakes we did make would not persist to this day were it not for the obstinacy of apologists who do not want to relinquish the idea of Cartesian dualism. A thorough understanding of ātman and anattā would have at least disabused us of the idea of a theater of consciousness or a fixed self, and of related ideas like qualia, which, as Dennett points out, are problematic. See my recent “Nonphysicalism in The Philosophy of Mind and Its Shortcomings” for a discussion of why the ideas of qualia and phenomenal consciousness are untenable.

On the philosophy of religion front, the concepts of ātman and anattā are fertile ground for discussions within the cosmotheological and ontotheological traditions (see Wildman, Wesley J. Religious Philosophy as Multidisciplinary Comparative Inquiry: Envisioning a Future For The Philosophy of Religion. State University of New York Press. Albany, NY. 2010. 248-261. Print. for an overview of these traditions in the philosophy of religion). As stated earlier, we now move away from mono- or henotheistic frameworks and consider, for example, whether panentheism best explains features of the universe, on the one hand, and features of being, on the other. For one, consider the idea that we are star stuff. We are composed of the same matter and energy that pervades the rest of the universe. In that sense, then, we are not distinct, and all things in the universe trace back to the Big Bang singularity. Perhaps our ideas of essentialist distinction are illusory, a dream-like story we continue to tell ourselves. In light of this, there is either no self or the self reduces to the universe. Given the recent resurgence of panpsychism, some have argued that the universe is very much like a supermassive brain (see Ratner, Paul. “The universe works like a huge human brain, discover scientists”. Big Think. 19 Nov 2020. Web.). In any case, a closer look at Hinduism and Buddhism will take us in non-monotheistic directions that may prove fruitful in ongoing discussions in the philosophy of religion and of mind.

Ultimately, we begin to see why it is of the utmost importance to break up the Christian monopoly in philosophy of religion, so to speak. We can see how the winding history of ātman and anattā anticipates certain strains in the philosophy of mind while also providing new, fertile ground in the philosophy of religion. In Advaita Vedānta, there is just one self, the Brahman; every other idea of self is illusory. This has some staggering implications for ongoing discussions about identity as well. In Buddhism, given anattā, we see that the “I Am that I Am” uttered by Yahweh is ultimately an error of the ego, overinflated and now extended into the idea of God. Furthermore, this supports the idea that the jealous, vindictive, tribalist gods so often prone to favoritism, unironically, toward the people who happen to worship them, are created in our image. Anattā suggests that gods like Yahweh, Allah, and those pertaining to the various mono- and henotheisms around the world are extensions of the ego imposing false ideas of the self. Most philosophers of religion, concerned not only with the nature but also with the identity of God, seldom wrestle with the idea that perhaps there is no universal ātman, i.e., there is no God. This has some resonating implications all its own. The purpose here has been to move the philosophy of religion in yet another fruitful direction; while I could begin to exhaust the possibilities, it is important for me not to create a self-induced echo chamber, especially given that my interest is to encourage philosophers of religion to travel down these newly paved roads. Anattā has far-reaching implications for free will, ethics, identity, existentialism, and other areas of philosophy as well. In any case, it should be clear why Christianity’s iron grip on the philosophy of religion needs to be loosened.

Nonphysicalism in The Philosophy of Mind and Its Shortcomings

By R.N. Carmona

Philosophy of mind begins and ends with the mind-body problem. Philosophy, more generally, begins and ends with problems, so philosophy can be considered a sort of Russian doll in which a major problem implies any number of minor problems. Philosophy, therefore, makes progress insofar as there are solutions for these problems. The enterprise, however, is hyper-specific and thus, what appears to be a solution for the major aspect of an issue is often not considered a solution for the minor aspects connected to it. For example, Howard Robinson identifies the following implicit issue within the mind-body problem: “The problem of consciousness: what is consciousness? How is it related to the brain and the body?” (Robinson, Howard, “Dualism”, The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), Edward N. Zalta (ed.)). The hard problem of consciousness is entailed as well: what is phenomenal consciousness? How do qualia relate to the brain and/or body? Though we understand awakeness, awareness, and a lot about how consciousness traces to the brain, e.g., Francis Crick and Christof Koch’s idea that the claustrum is the “conductor of consciousness” (see Stiefel, Klaus M. “Is the key to consciousness in the claustrum?”. The Conversation. 26 May 2014. Web.), proponents of the hard problem are not convinced that physicalism has solved the mind-body problem. This is where we find ourselves in the philosophy of mind.

The mind-body problem appears to find a solution in the severe brain trauma experienced by Phineas Gage in 1848. An explosion sent a tamping iron through his left cheekbone at a high speed; the iron exited at the top of his head and was found several rods (a rod is roughly 5 meters) behind him. His brain injury was such that it resulted in drastic changes in his behavior. John Martin Harlow, the physician who attended to Gage, published a report in the Bulletin of the Massachusetts Medical Society in which he discussed Gage’s behavioral changes: 

His contractors, who regarded him as the most efficient and capable foreman in their employ previous to his injury, considered the change in his mind so marked that they could not give him his place again. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned in turn for others appearing more feasible. In this regard, his mind was radically changed, so decidedly that his friends and acquaintances said he was “no longer Gage.”

Costandi, Mo. “Phineas Gage and the effect of an iron bar through the head on personality”. The Guardian. 8 Nov 2010. Web.

Gage’s case lends strong support to the notion that what we call the mind is intimately connected to the brain in some important way. Even though thinkers like Leucippus, Hobbes, La Mettrie, d’Holbach, Carnap, Reichenbach, and Schlick thought that the brain generates thought similarly to how the liver secretes bile, there was no theory in the philosophy of mind that equated the brain and mind until U.T. Place in 1956 (Smart, J. J. C., “The Mind/Brain Identity Theory”, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.)). Mind/Brain Identity Theory can, therefore, be regarded as the earliest version of physicalism in the philosophy of mind.

In philosophy, more generally, I take major issue with negative positions. Briefly, a positive philosophical theory is the result of the evidence in question. Take, for instance, bare data about brain impairments and corresponding behavioral changes, from Phineas Gage to the 40-year-old man who exhibited sexually deviant behavior caused by a cancerous tumor in the right lobe of his orbitofrontal cortex (Choi, Charles. “Brain tumour causes uncontrollable paedophilia”. New Scientist. 21 Oct 2002. Web.). Mind/Brain Identity Theory, given bare data of this sort, would point to changes in the mind corresponding to some impairment in the brain. There are other points of data for a theory to account for, but the insights of neuroscience, cognitive science, and psychology are crucial and must be accounted for under any positive theory in the philosophy of mind. Negative theories, on the other hand, often bypass an alternative interpretation of the relevant data and hinge on the fact that a given positive theory, or a group of them falling under some category, e.g., physicalism, fails to account for or has overlooked other points of data. The challenge to physicalism is that it has failed, hitherto, to account for qualia.

Qualia (singular quale), what it is like to see red or taste pizza or touch silk, are supposed to lead us to the inescapable conclusion that since qualia are nonphysical properties, mental properties are also nonphysical. Nonphysicalist theories in the philosophy of mind are the result of physicalism’s inability, up to this point, to incorporate phenomenal experiences into its paradigm. Later, I will show why nonphysicalists have been hasty to dismiss physicalist theories. For present purposes, some historical background on the hard problem of consciousness is in order. While credit is due to Chalmers for naming the problem, he does not deserve recognition for being the first person to identify it.

A Short History of The Hard Problem of Consciousness

David Chalmers is often associated with the hard problem of consciousness, but I think the credit rightfully belongs to Wilfrid Sellars. The basic thrust of the problem was spelled out in a manner equivalent to stating it explicitly. The fact that Sellars did not call the problem what we now call it, ‘the hard problem of consciousness’, does not take away from the fact that he did much more work in attempting to unify two conflicting images, which he dubbed the manifest and the scientific.

At first glance, this might be a reframing of Kant’s phenomena and noumena, but it is useful to note that Sellars’ manifest and scientific images would both be categorized as phenomena. On Kant, the scientific image would not qualify as noumena. Some modern day philosophers, taking after Donald Hoffman, a professor at the University of California Irvine, have it that we have evolved in such a way that we are pretty much shielded from apprehending ultimate reality, i.e., the Kantian noumena (Frank, Adam. “What If Evolution Bred Reality Out Of Us?” NPR. 6 Sep 2016. Web.). We evolved to perceive and thus, to solely apprehend the phenomena. 

With that in mind, Sellars’ scientific and manifest images correspond to the Kantian phenomena. Yet there appears to be an irreconcilable contradiction between them. On the manifest image, a Rubik’s cube has a distinct three-dimensional shape and six colors – usually yellow, orange, red, green, blue, and white. Assuming we are trichromats that do not have red-green color blindness, we all apprehend this object more or less equally. On the scientific image, however, the cube does not have a distinct shape; nor does it have colors. The cube is comprised of particles and empty space, and though the colors are fully explainable by the science of chromatics – namely as the result of wavelengths in the electromagnetic spectrum – particles in and of themselves do not have a color. Aside from that, the Rubik’s cube seems to have these colors because we have three types of cone photoreceptor cells in each retina, allowing us to see these colors. The colors, to put it another way, are not inherent to the object.

Sellars was interested in the project of saving appearances or, in other words, unifying reality as it seems given human perception with reality as explained through science. This is the hard problem of consciousness made explicit: neuroscience cannot explain phenomenal consciousness. This is Sellars’ exact dilemma. The contradictory images are best viewed in human consciousness. Neurologists and neuroscientists can explain to us why we see and what brain regions are involved when we see or even when we imagine seeing, but they cannot tell us why we see how and what we see. In other words, science can readily explain why we see the colors we see, but it cannot tell us how neurons and brain regions give rise to qualia; there is something it is like to see a Rubik’s cube and, given the hard problem, the scientific image cannot be invoked to explain the manifest image.

The Challenge From Nonphysicalists

Nonphysicalists, like any negative theorists, are essentially telling us to forget about the explanatory success of physicalism and focus, instead, on its seeming failure. In other words, nonphysicalists have no alternative explanation that works as well or better than physicalist explanations of non-phenomenal consciousness like awakeness and awareness, for example, but since they are purporting to offer a metaphysical explanation for phenomenal consciousness, we should, therefore, abandon physicalist modes of explanation. I think that, first and foremost, the onus is on any negative theorist in philosophy to account for all of the same data in a more cogent manner than positive theorists before reaching into areas not illuminated by them. Otherwise, the inductive bias that people tend to have for a working paradigm remains justified. Put another way, if the positive theory has successfully accounted for all of these points of data, we have no reason to believe it cannot account for more troublesome points of data like qualia, given that enough time is granted. There is also a glaring problem with the inclination toward a metaphysical account after physical accounts have done most of the heavy lifting. It appears to beg the question for a nonphysical bias usually tracing back to religious predilections.

Think of synesthesia. For people who have synesthesia, hearing color, tasting sounds, and seeing numbers and letters as colored is a common experience. As with most sensory disorders, there is a neurophysical correlate to synesthesia (Barry, Susan R. “The Brain of a Synesthete”. Psychology Today. 26 Jul 2012. Web.). Sometimes the onset of the disorder is preceded by brain trauma. Jason Padgett, who was assaulted outside a karaoke bar, suffered a severe concussion. He claims to see geometric shapes and angles all around him. These are unusual senses for the majority of us, and there is obviously something it is like to experience the world in the way he does, i.e., qualia associated with these quirky senses. There is, however, something to be said about the fact that a brain injury preceded the emergence of these peculiar senses. While I am wary of inferring causation from correlation, correlation is a powerful indicator and, considering that Padgett’s case is not unique, the correlation might be suggestive of causation (see St. Michael’s Hospital. “Second known case of patient developing synesthesia after brain injury.” ScienceDaily, 30 July 2013.).

Panpsychists and Aristotelian hylomorphists say nothing about the misattribution of qualia. They want potential detractors of physicalism to believe that qualia are invariably uniform and predictable. In other words, the examples are invariably what it is like to taste pizza or what it is like to see red, but they never make mention of an increasing number of cases in which we can ask what it is like to taste Bach’s “Lacrimosa” or what it is like to hear burgundy. Recently, Julie McDowall’s synesthesia went viral because she can tell people what their names taste like. Interestingly, in some cases, she told people what their names looked like, e.g., Naomi looks like colorful lego pieces. Panpsychism and modern-day hylomorphism, aside from having no way of accounting for awareness, awakeness, and other aspects of consciousness already explained under physicalism, have overlooked synesthetic qualia because they are essentially live counterfactuals. We do not have to imagine another world in which people taste sounds and hear colors; these peculiarities happen all around us, and so, if nonphysicalists want to conclude that qualia and physicalism are incongruous, then synesthetic qualia and nonphysicalism are irreconcilable. To see this, it will be necessary to take a closer look at these two negative theories.

An Examination of Panpsychism and Hylomorphism

Setting aside the more mystic treatments of panpsychism, the non-reductive physicalist version of it promoted by Strawson and Chalmers is fallacious and though that is not enough to show where it has gone wrong, it makes for a false start. John Heil states:

The idea would not be that electrons and quarks have minds, but that qualities of conscious experiences are included among the qualities possessed by fundamental things. On the version of panpsychism endorsed by Strawson, electrons would not have mini-souls, but merely contain experiential ingredients of consciousness, a kind of primitive awareness. Only fully developed sentient creatures would undergo recognizably conscious experiences. You can make a triangle or a sphere by organizing non-triangular or non-spherical elements in the right way, but the elements must have some spatial characteristics. Similarly, you can make a conscious mind by arranging elements that are not themselves minds, but the elements must have some conscious characteristics.

Heil, John. Philosophy of Mind: A Contemporary Introduction. Third ed. New York: Routledge, 2006. 172. Print.

The notion that the constituents of consciousness must themselves be conscious, as some proponents of panpsychism put it, or, as Strawson puts it, that they must have rudimentary experiential ingredients, is a fallacy of division. Despite this, it is not difficult to see the appeal of panpsychism. If you spread out the parts of a thermostat or a microwave, and you know anything about the components that go into them, you will understand how each part contributes to the whole. The bimetallic strip in a thermostat, for instance, is comprised of two different metals placed back to back; because one metal has a higher coefficient of linear expansion, it expands more when the temperature increases, bending the strip in one direction and thereby opening or closing the circuit. Unfortunately, it is just not possible to spread out the parts of the brain and nervous system, right down to the microscopic level, in order to confirm that particles have experiential ingredients. Moreover, one can pick apart a computer, a microwave, or a thermostat down to its barest parts, understand the function of each part, and put these appliances back together in working order. A panpsychist can then assert that brains are similar to household appliances, but they would have considerable difficulty showing how the combination of w electrons and x protons, if configured to make a neocortex and an amygdala, will result in y functions and z behaviors. The panpsychist’s domain of analysis is the same macro-level that physicalists operate in, so they have no way of substantiating their assertions.
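The thermostat example can be made concrete with a toy model. The expansion coefficients below are ballpark textbook values for brass and steel, while the strip geometry and trip threshold are invented purely for illustration:

```python
# Toy model of a bimetallic-strip thermostat: two bonded metals with
# different coefficients of linear expansion bend as temperature rises,
# and the deflection opens or closes a circuit at a set threshold.

ALPHA_BRASS = 19e-6  # 1/degC, linear expansion coefficient (approximate)
ALPHA_STEEL = 12e-6  # 1/degC

def deflection(temp_c, ref_temp_c=20.0, length_m=0.05, thickness_m=0.001):
    """Approximate tip deflection of the bonded strip in meters.

    Uses a simplified small-deflection relation: curvature grows with
    the difference in expansion rates times the temperature change.
    """
    dT = temp_c - ref_temp_c
    curvature = (ALPHA_BRASS - ALPHA_STEEL) * dT / thickness_m
    return curvature * length_m ** 2 / 2

def circuit_closed(temp_c, trip_deflection_m=5e-5):
    """The contact closes once the strip bends past the trip point."""
    return deflection(temp_c) >= trip_deflection_m

print(circuit_closed(20.0))  # False: no bending at the reference temperature
print(circuit_closed(60.0))  # True: the strip has bent past the trip point
```

The point of the sketch is the one made in the paragraph above: nothing over and above the components’ dispositions (differential expansion, geometry, the contact’s position) is needed to predict when the circuit closes.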

We will circle back around to panpsychism shortly, but a brief overview of hylomorphism is necessary because both of these negative theories in the philosophy of mind rely too heavily on the same considerations. A go-to example used by hylomorphists is in order:

Suppose we put Godehard in a strong bag — a very strong bag since we want to ensure that nothing leaks out when we squash him with several tons of force. Before the squashing, the contents of the bag include one human being; after, they include none. In addition, before the squashing the contents of the bag can think, feel, and act, but after the squashing they can’t. What explains these differences in the contents of the bag pre-squashing and post-squashing? The physical materials (whether particles or stuffs) remain the same — none of them leaked out. Intuitively, we want to say that what changed was the way those materials were structured or organized.

Jaworski, William. Structure and the Metaphysics of Mind: How Hylomorphism Solves the Mind-Body Problem. Oxford: Oxford University Press, 2016. 9. Print.

Setting aside Bernard Williams’ astute observation, namely that this ‘polite materialism’ is incongruous with the Neo-Aristotelian’s confessed dissatisfaction with materialism, this thought experiment misses the mark. Briefly, in keeping with Aristotle, hylomorphists state that the mind is not just an accumulation of materials, but instead, a structure or a composite organized in a certain way that then gives rise to powers that have causal capacities. On the surface, it is a non-reductive physicalist account, but once Aristotelian causation is properly accounted for, along with Aristotle’s treatment of substance and forms, one starts to see how hylomorphism, like panpsychism, is appropriated by nonphysicalists, especially ones who have religious biases. In any case, we do not need to imagine a graphic case like a crushed human being in a bag, a thought experiment that begs the question for hylomorphism. Hylomorphists should ask what structural difference there is between a living person at 3:49 pm and the same person pronounced dead at 3:53 pm. Setting aside theories of time, the living human, named Henry, is structurally organized as the hylomorphist asserts; dead Henry is organized in the same way, no pulverization necessary. Hylomorphism, unlike versions of physicalism, cannot explain what changed over the course of four minutes.

This is precisely the problem with negative theories in the philosophy of mind. They take for granted what physicalism has already explained, offering no alternative explanations, and then proceed to make claims extending from questions physicalists have, thus far, not been able to address. A panpsychist or hylomorphist is not saying anything about the brain gradually moving toward complete inactivity, the death of neurons, and ultimately, brain death resulting in the loss of function of every one of Henry’s organs. The physicalists did all the heavy lifting, offering positive theories that account for far more data, while nonphysicalists hope to high heaven that there is a viable alternative to Cartesian dualism. The nonphysicalists, by contrast, want to get by with nothing in the way of a cogent explanation, content with identifying what is, more than likely, a temporary weakness of a physicalist framework that has had far more explanatory success. In other words, we do not need to crush Henry’s structure to account for any difference between a living Henry and a dead one. Nor do we have plausible ways of deconstructing Henry, down to scores of subatomic particles, in order to understand his internal workings and how these experiential ingredients come together as a fully functioning human consciousness. It appears that the panpsychist making these assertions intends to make demands that are impossible to meet, to essentially move the goalposts out of a begrudging recognition of the fact that physicalists have much in the way of a working explanation for how all of the parts of the brain communicate, how neurons and synapses account for connectivity, and how these constituents come together to produce consciousness, including qualia.
We do not need negative theories of mind unless these negative theorists do the hard work that will put them in a position to offer alternative explanations that are consistent with their nonphysicalist framework; it is not enough to stand on the shoulders of a giant they believe to be wrong.

The Sensuous Zombie

Now, imagine a person indistinguishable from a human being. Imagine then that this person is blind, deaf, and mute. Furthermore, imagine that this person cannot taste, smell, or feel anything. Imagine that this person is devoid of all senses, even hunger pangs, a full bladder, and bowel movements. On my reductionist account, sensations feature in the information received from the physical world. Sights, sounds, colors, textures, and so on inform our awareness, which in turn informs our consciousness. Information mediates awareness and consciousness. This is in agreement with David Chalmers’ view. Where we differ is that I conclude that without our senses, we would not have phenomenal consciousness, especially since the qualia of sight are simultaneous with whatever we are seeing.

My p-zombie shows that my reductionist account succeeds, since accounting for the p-zombie’s self-knowledge and qualia is impossible. Whatever account one might render is all but ineffable. Can this p-zombie proceed as Descartes did and eventually say “I think therefore I am”? If s/he knows of no people and no other objects, how can this person prove him/herself to exist? On my differential ontological view, we know who we are, in part, because of differentiation with other people and objects, i.e., “I am because we are, and since we are, therefore I am” (see Herrera, Jack. “Because You Are, I Am”. Philosophy Talk. 12 May 2017. Web.); there are no essential properties about us. 

The pivotal difference between my p-zombie and Chalmers’ is that it is probable that someone can be born this way. Of course, I would not wish the combination of these disabilities on anyone, but paralysis, blindness, deafness, anosmia, etc. have all occurred separately. Though the probability of all of these conditions being present in one individual is extremely low, it suffices to say that someone born without a sense of taste, or with a distorted sense of taste, will either have no associated qualia or corresponding qualia that differ from normal experiences, e.g., chocolate tastes like spinach. Therefore, this leaves us with a powerful suggestion we cannot ignore: qualia, whether normal or synesthetic, and the lack thereof are contingent upon sense apparatus and communication, via the nervous system, to our eyes, noses, mouths, etc. There are, for example, widespread reports from people who have been infected with COVID-19 in which their sense of smell is degraded or goes away entirely. Ordinary flu strains and common colds can have these effects as well, but scientists looking into why this happens have shown that COVID-19 disrupts the normal functions of sustentacular cells, which “maintain the delicate balance of salt ions in the mucus that neurons depend on to send signals to the brain” (Sutherland, Stephani. “Mysteries of COVID Smell Loss Finally Yield Some Answers”. Scientific American. 18 Nov 2020. Web.). How is it possible, then, that the disruption of the function of sustentacular cells turns off smell-related qualia entirely? While one may remember what it is like to smell a rose, there is no longer corresponding qualia when one takes in a huge whiff of a bouquet of roses. Anosmia prevents one from having this experience.
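The “extremely low” joint probability gestured at above can be made concrete with a back-of-the-envelope calculation. The prevalence figures below are hypothetical placeholders chosen only to illustrate the multiplication, and the independence assumption is itself an idealization (real conditions can share causes):

```python
# Rough joint-probability sketch: if congenital blindness, deafness,
# anosmia, and ageusia occurred independently, the chance of one person
# having all four would be the product of the individual prevalences.
# These rates are made-up placeholders, not epidemiological data.

prevalence = {
    "blindness": 1e-3,
    "deafness": 1e-3,
    "anosmia": 1e-4,
    "ageusia": 1e-4,
}

joint = 1.0
for condition, p in prevalence.items():
    joint *= p

print(f"Joint probability under independence: {joint:.1e}")  # 1.0e-14
```

Even with generous placeholder rates, the product is vanishingly small, which is why the combined p-zombie case, while conceivable and even nomologically possible, is never observed.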

The likelihood that anosmia can turn off qualia is much higher given physicalism than on any nonphysicalist alternatives, especially in light of the fact that nonphysicalists have no alternative explanations for why certain qualia are normally associated with this or that sense apparatus, be it the eyes or the nose. Moreover, the nonphysicalist has no explanation as to why brain trauma leads to synesthetic qualia and often omits such cases to suit his arguments against physicalism. What is more damning for the nonphysicalist enterprise is that though they go on ad nauseam about what it is like to be a bat, they have no account for why some blind people develop echolocation to navigate their surroundings. In other words, in cases where normal qualia are inaccessible due to impairment or lack of use of, for instance, the eyes, new senses must result in new qualia. This is readily predicted under physicalism, but not at all under nonphysicalism.

The Future of Consciousness

Ultimately, I think that as the issue currently stands, we are at the mercy of our scientific tools. To my mind, the best way forward is comparative study of consciousness. However, I do not think our current scientific tools are fit for the task, e.g., to monitor the brain activity of Thomas’ flying squirrel as it calculates trajectories while navigating the lush forests in Indonesia and Malaysia. What is it like to be an octopus, to taste through one’s arms (see Lambert, Jonathan. “How octopuses ‘taste’ things by touching”. Science News. 29 Oct 2020. Web.)? One may think this is not possible for a human, but what if it is? What if neuroscientists had a way to map the sense of taste onto the skin of our arms? Would this not result in qualia that correspond to this strange new sense? I will let David Eagleman have the last word:

If it sounds crazy that you would ever be able to understand all these signals through your skin, remember that all the auditory system is doing is taking signals and turning them into electrical signals in your brain. It doesn’t matter how you get those data streams there. In the future, other data streams could be streamed into the vest, meaning that people could walk around unconsciously perceiving the weather report. Snakes see in the infrared range and honey bees see in the ultraviolet range. There’s no reason why we can’t start building devices to see that and feed it directly into our brains.

Erickson, Megan. “Welcome to Your Future Brain: Inside David Eagleman’s Neuro Lab”. Big Think. 17 May 2012. Web.

Why Dispositions Make More Sense Than Powers

By R.N. Carmona

What follows are some scattered thoughts after reading an excellent paper by Marius Backmann. I think he succeeds in showing how the Neo-Aristotelian notion of powers is incongruous with pretty much any theory of time of note. My issue with powers is more basic: what in the world are Neo-Aristotelians even saying when they invoke this idea, and why does it seem that no one has raised the concern that powers are an elementary paraphrase of dispositions? With respect to this concern, Neo-Aristotelians do not even attempt to make sense of our experience with matter and energy. They seem to go on the assumption that something just has to underlie the physical world, whereas I take it as extraneous to include metaphysical postulates where entirely physical ones make do. Dispositions are precisely the sort of physical postulates that adequately explain what we perceive as cause-effect relationships. What I will argue is that a more thorough analysis of dispositions is all that is needed to understand why a given cause a produced some given effect b.

My idea that powers are an elementary paraphrase is entailed in Alexander Bird’s analysis of what powers are, as summarized by Marius Backmann:

According to Bird, powers, or potencies, as he calls them alternatively, are a subclass of dispositions. Bird holds that not all dispositions need to be powers, since there could be dispositions that are not characterised by an essence, apart from self-identity. Powers, on the other hand, Bird (2013) holds to be properties with a dispositional essence. On this view, a power is a property that furnishes its bearer with the same dispositional character in every metaphysically possible world where the property is instantiated. If the disposition to repel negatively charged objects if there are some in the vicinity is a power in that sense, then every object that has that property does the same in every metaphysically possible world, i.e. repel negatively charged objects if there are some in the vicinity.

Marius Backmann (2019) No time for powers, Inquiry, 62:9-10, 979-1007, DOI: 10.1080/0020174X.2018.1470569

Upon closer analysis of Bird’s definition, a power just is a disposition. The issue is that Bird and the Neo-Aristotelians who complain that he has not gone far enough have isolated what they take to be a power from the properties of an electron, which is a good example of a particle that repels negatively charged objects given that some are in its vicinity. Talk of possible worlds makes no sense unless one can prove mathematically that an electron-like particle with a different mass would also repel other negatively charged particles. However, though it can easily be shown that a slightly more massive electron-like particle will repel other particles of negative charge, its electrical charge will be slightly higher than an electron’s because, according to Robert Millikan’s calculation, there seems to be a relationship between the mass of a particle and its charge. The most elementary charge is e ≈ 1.602 x 10^-19 coulombs. The charge of a quark is measured in multiples of e/3, implying a smaller charge, which is expected given that they are sub-particles. So what is of interest is why the configuration of even an elementary particle yields predictable “behaviors.”
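The charge arithmetic mentioned above can be checked directly; the e/3 quantization of quark charges and the quark content of the proton (uud) and neutron (udd) are standard textbook physics, not anything added to the author's argument:

```python
# Quark charges come in multiples of e/3; combining them recovers the
# familiar hadron charges, e.g. a proton (uud) carries exactly +e.

E = 1.602176634e-19  # elementary charge in coulombs (exact by SI definition)

UP = 2 * E / 3    # up quark:   +2e/3
DOWN = -E / 3     # down quark: -e/3

proton = 2 * UP + DOWN    # uud -> +e
neutron = UP + 2 * DOWN   # udd -> 0

print(round(proton / E, 12))   # 1.0
print(round(neutron / E, 12))  # 0.0 (to within float rounding)
```

Whether this quantization is better described in terms of dispositions or powers is, of course, the philosophical question at issue; the arithmetic itself is uncontroversial.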

To see this, let us dig into an example Backmann uses: “My power to bake a cake would not bring a cake that did not exist simpliciter before into existence, but only make a cake that eternally exists simpliciter present. Every activity reduces to a change in what is present” (Ibid.). The Neo-Aristotelian is off track in saying that we have a power to bake a cake and that the oven has a power to yield this desired outcome, powers that do not trace back to its parts, or, as Cartwright states of general nomological machines: “We explicate how the machine arrangement dictates what can happen – it has emergent powers which are not to be found in its components” (Cartwright, Nancy & Pemberton, John (2013). Aristotelian powers: without them, what would modern science do? In John Greco & Ruth Groff (eds.), Powers and Capacities in Philosophy: the New Aristotelianism. London, U.K.: Routledge. pp. 93-112.). Of the nomological machines in nature, Cartwright appears to bypass the role of evolution. Of such machines invented by humans, she ignores the fact that we often wrongly predict what a given invention will do. Evolution proceeds via probabilities and so, from our ad hoc point of view, it looks very much like trial and error. Humans have the advantage of being much more deliberate about what they are selecting for; our testing and re-testing of inventions, and deciding when they are safe and suitable to hit the market, is markedly similar to evolutionary selection.

That being said, the components of a machine do account for its function. It is only due to our understanding of other machines that we understand what should go into building a new one in order for it to accomplish a new task. Powers are not necessary, because then we should be asking: why did we not start off with machines that have superior powers? In other words, why start with percolators if we could have just skipped straight to Keurig or Nespresso machines, or whatever more advanced models that might be invented? Talk of powers seems to insinuate that objects, whether complex or simple, are predetermined to behave the way they do, even in the absence of trial runs, modifications, or outright upgrades. This analysis sets aside the cake itself. It does not matter what an oven or air fryer is supposed to do; if the ingredients are wrong, either because I neglected to use baking powder or did not use enough flour, the cake may not rise. The ingredients that go into baked goods play a “causal” role as well.

Dispositions, on the other hand, readily explain why one invention counts as an upgrade over a previous iteration. Take, for instance, Apple’s A14 Bionic chip. At bottom, this chip accounts for “a 5 nanometer manufacturing process” and CPU and GPU improvements over the iPhone 11 (Truly, Alan. “A14 Bionic: Apple’s iPhone 12 Chip Benefits & Improvements Explained”. Screenrant. 14 Oct 2020. Web). Or, more accurately, key differences in the way this chip was made account for the improvement over its predecessors. Perhaps more crucial is that critics of dispositions have mostly tended to isolate dispositions, as though a glass cup’s fragility exists in a vacuum. Did the cup free fall at 9.8 m/s^2? Did it fall on a mattress or on a floor? What kind of floor? Or was the cup thrown at some velocity because Sharon was angry with her boyfriend Albert? What did she throw the cup at: a wall, the floor, Albert’s head, or did it land in a half-full hamper with Sharon and Albert’s dirty clothes?

Answering these questions solves the masking and mimicker problems. The masking problem can be framed as follows:

Another kind of counterexample to SCA, due to Johnston (1992) and Bird (1998), involves a fragile glass that is carefully protected by packing material. It is claimed that the glass is disposed to break when struck but, if struck, it wouldn’t break thanks to the work of the packing material. There is an important difference between this example and Martin’s: the packing material would prevent the breaking of the glass not by removing its disposition to break when struck but by blocking the process that would otherwise lead from striking to breaking.

Choi, Sungho and Michael Fara, “Dispositions”, The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.).

I would not grant that the packing material prevents the glass from breaking merely by blocking the process that would otherwise unfold. The packing material has its own properties and dispositions, which we have discovered through trial and error, making this material good at protecting glass. Packing paper was more common, but now we have bubble wrap and heavy duty degradable stretch wrap, also capable of protecting glass, china, porcelain, and other fragile items. The dispositions of these protective materials readily explain why their encompassing of fragile objects protects them from incidental strikes or drops. If I were, however, to throw a wrapped coffee mug as hard as I can toward a brick wall, the mug is likely to break. This entails that variables are important in this thing we call cause and effect.

A perfect example is simple collisions of the sort you learn about in an elementary physics course. If a truck and trailer speeding down a highway in one direction at ~145 km/h and a sedan traveling in the opposite direction at a cruising speed of ~89 km/h collide, we can readily predict the outcome and that this particular collision is inelastic. The speeding truck would likely barrel through the sedan, and the sedan would be pushed in the direction the truck was traveling. The vehicles’ respective speeds and masses are extremely important in understanding what goes on here. There is no sense in which we can say that trucks just have a power to mow things down, because a collision between the truck in our original example and a truck and trailer driving at roughly the same speed in the opposite direction results in an entirely different outcome: a perfectly inelastic collision in which both trucks come to an immediate halt once the effects of the impact are fully realized.
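The point about speeds and masses can be made precise with conservation of momentum. Below is a minimal sketch in Python of the perfectly inelastic case; the vehicle masses (~9,000 kg for the loaded truck, ~1,500 kg for the sedan) are illustrative assumptions for the example, not figures from the text:

```python
def inelastic_final_velocity(m1, v1, m2, v2):
    """Shared final velocity after a perfectly inelastic collision,
    from conservation of momentum: v = (m1*v1 + m2*v2) / (m1 + m2)."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Truck (assumed ~9,000 kg) at +145 km/h vs sedan (assumed ~1,500 kg) at -89 km/h:
v_truck_sedan = inelastic_final_velocity(9000, 145, 1500, -89)
# The sign is positive: the wreckage keeps moving in the truck's direction.

# Two identical trucks head-on at equal speeds:
v_truck_truck = inelastic_final_velocity(9000, 145, 9000, -145)
# Total momentum is zero, so both come to an immediate halt: v = 0.
```

The arithmetic alone captures why the first collision pushes the sedan along and the second stops both trucks dead: nothing over and above masses and velocities, i.e., dispositions and circumstances, is doing the work.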

Neo-Aristotelian analyses of powers give us nothing that is in keeping with physics. What these explanations demand is something they imagine happening behind the veil of what science has already explained. There are just dispositions, and what is needed is a more critical analysis of what is entailed across each instance of cause and effect. Power ontologies beg the question, in any case, because they require dispositions to make sense of powers. That is because powers are just a cursory analysis of cause-effect relationships, a way of paraphrasing that is overly simplistic and, ultimately, not analytical enough. Power ontologies, along with talk of dynamism, which properly belongs to Nietzsche and not Aristotle, severely undermine the Neo-Aristotelian project. Nietzsche’s diagnosis of causation makes this clear:

Cause and effect: such a duality probably never exists; in truth we are confronted by a continuum out of which we isolate a couple of pieces, just as we perceive motion only as isolated points and then infer it without ever actually seeing it. The suddenness with which many effects stand out misleads us; actually, it is sudden only for us. In this moment of suddenness there is an infinite number of processes that elude us. An intellect that could see cause and effect as a continuum and a flux and not, as we do, in terms of an arbitrary division and dismemberment, would repudiate the concept of cause and effect and deny all conditionality.

Nietzsche, Friedrich W, and Walter Kaufmann. The Gay Science: With a Prelude in Rhymes and an Appendix of Songs. New York: Vintage Books, 1974. 173. Print.

Nietzsche describes a continuum and a flux, in other words, a dynamism thoroughly unlike what can be attributed to Aristotle’s theory of causation. So the fact that Neo-Aristotelians even speak of a dynamism feels like a sort of plagiarism, since they are associating the idea of a dynamism with a thinker who said nothing to that effect. Nietzsche is critical of Aristotle’s causal-teleological marriage and can be seen as explicitly accusing Aristotle, and also Hume, of arbitrarily splicing a dynamic continuum in an ad hoc manner that does not find justification in metaphysical ideas. If Nietzsche had been properly exposed to modern science, he would probably agree that this splicing does not find justification in physical ideas either. The hard sciences confirm a continuum, revealing complex processes from which predictable results follow. There is just no sense in which we can apply any theory of causation to a chemical reaction. What figure in these reactions are the properties and dispositions of the elements involved, and how those elements are constituted explains why we get one reaction or another. Any talk of dynamisms is properly Nietzschean in spirit and, as should be clear from his words, there is no invocation of powers.

Suffice it to say that a deeper analysis of dispositions also explains away the mimicker problem. Styrofoam plates simply do not break the way glass plates do, and their underlying composition explains why that is. Ultimately, Neo-Aristotelians are not in a good position to get to the bottom of what we call cause and effect. Aside from the difficulties Backmann sheds light on, the notion of powers is incoherent and lacking in explanatory power, especially at levels requiring deeper analysis. Predictably, I can see Neo-Aristotelians invoking an infinite regress of sorts. In other words, is it simply the composition of the glass interacting with the composition of a hardwood floor that results in the glass shattering, or is there more to the story? To that I would respond that events like these happen within a causally closed space-time system. It is then that we will be asked: who or what decided that a glass cup should break on impact when landing on a hardwood floor? Well, who or what decided that a compound fracture of the tibia is expected given a strong enough blow from an equally dense or denser object? The Neo-Aristotelian will keep pushing the question back, demanding deeper levels of analysis, effectively moving the goalposts. What will remain is that there is no intelligence that decided on these things, i.e., there is no teleological explanation involved in these cases, because then they would have to account for undesired ends like broken bones.

In the end, I think that the deepest level of analysis will involve a stochastic process in which degrees of probability encompass possible outcomes. Not every blow leads to a broken tibia. Dropping a glass cup on just any surface is not enough to crack or shatter it. There are cases in which angular momentum imparted by a human foot changes a falling glass cup’s trajectory just enough to ensure that it does not break upon hitting the ground. I have met people quite adept at breaking these kinds of falls with a simple extension of their foot. As such, probabilities will change given the circumstances on a case-by-case basis. This element of chance at the deepest level of analysis coheres perfectly with the universe we find ourselves in, because even the fact that we are beings made of matter, as opposed to beings made of anti-matter, is due to chance. Apparently, God has always rolled dice. On this, I will let Lawrence Krauss have the last word:
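The idea that outcome probabilities shift with the circumstances can be sketched as a toy simulation. Every coefficient below (the break-probability formula, the deflection factor) is an illustrative assumption, not a measured quantity; the point is only that the same cup yields different break rates under different conditions:

```python
import random

def breaks(drop_height_m, surface_hardness, deflected, rng):
    """Toy break-probability model for a falling glass cup.
    Assumed behavior: risk grows with drop height and surface hardness,
    and a foot-deflection (per the example in the text) cuts it sharply.
    The coefficients are made up for illustration."""
    p = min(1.0, 0.15 * drop_height_m * surface_hardness)
    if deflected:
        p *= 0.2  # assumed effect of a well-timed foot
    return rng.random() < p

rng = random.Random(42)  # fixed seed for reproducibility
trials = 10_000
rate_hardwood = sum(breaks(1.0, 0.9, False, rng) for _ in range(trials)) / trials
rate_deflected = sum(breaks(1.0, 0.9, True, rng) for _ in range(trials)) / trials
# Same cup, same floor; only the circumstances differ, and so do the break rates.
```

Nothing here requires a power over and above the cup's dispositions and the circumstances of the fall; the stochastic model simply assigns different probabilities to different configurations of the same variables.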

Because antiparticles otherwise have the same properties as particles, a world made of antimatter would behave the same way as a world of matter, with antilovers sitting in anticars making love under an anti-Moon.  It is merely an accident of our circumstances, due, we think, to rather more profound factors…that we live in a universe that is made up of matter and not antimatter or one with equal amounts of both.

Krauss, Lawrence. A Universe From Nothing: Why There Is Something Rather Than Nothing. 1st ed. New York, NY: Free Press, 2012. 61. Print.

A Refutation of Bruce Gordon’s “Argument From The Incompleteness of Nature”

By R.N. Carmona

Before setting out to formulate Gordon’s “Argument From The Incompleteness of Nature,” a general note is in order. After years of dealing with the more common arguments for God, e.g., the Kalam Cosmological, Moral, Fine-Tuning, Teleological, and Ontological arguments, I began to notice that such arguments collapse when the complexity of the facts is analyzed. For instance, P1 of the Moral Argument states that “If God does not exist, objective values and duties do not exist.” This has proved to be the most controversial premise of the argument, but analyses of what is meant by objective, values, and duties lead us in directions where we can apprehend morality along these lines without God being necessarily involved. What I’m noticing now about more complex Theistic arguments is that they collapse when the simplicity of the facts is put on the table, i.e., when simple considerations are taken into account. This also applies to Gordon’s argument. To see what I mean, it will be necessary, first and foremost, to frame Gordon’s argument.

G1 “Quantum mechanics reveals a genuine ontological indeterminacy and incompleteness present in nature” (Gordon, Bruce L.. The Necessity of Sufficiency: The Argument From The Incompleteness of Nature. Two Dozen (or so) Arguments for God: The Plantinga Project, Edited by Walls, Jerry L. & Dougherty Trent. Oxford: Oxford University Press, 2018. 420. Print.)

G2 “Since all physical cause-and-effect relations are local, however, the completeness of quantum theory implies the causal-ontological incompleteness of physical reality: the universe is shot through with mathematically predictable non-local correlations that, on pain of experimental contradiction, have no physical explanation” (Gordon, 421)

G3 “Quantum theory raises fundamental questions about the coherence of material identity, individuality, and causality that pose a prima facie problem for naturalistic metaphysics” (Gordon, 423)

G4 (By way of inference) it is probable that all naturalistic interpretations of quantum mechanics contain conceptual shortcomings (Gordon, 423-429)

GC1 Therefore, “a theistic variant of the Copenhagen interpretation brings metaphysical completion to quantum theory so as to resolve the fundamental puzzle” (Gordon, 423)

GC2 Therefore, “God’s existence and continuous activity is the best explanation for the reality, persistence, and coherence of natural phenomena, and the account of divine action best meeting this explanatory demand is a form of occasionalist idealism” (Gordon, 436)

Gordon also condenses his argument as follows:

Now, in quantum physics we are confronted with a situation in which material causality falls irremediably short of explanatory demand, for there is no collection of physical variables jointly sufficient to the explanation of irreducibly probabilistic quantum outcomes. On pain of postulations to the contrary refuted by experimental violations of Bell inequalities, an ontological gap exists in the causal structure of physical reality that no collection of material causes can be offered to fill. So if a prior commitment to metaphysical naturalism constrains us, no non-naturalistic (transcendent) explanation is available to bridge this gap, and we must embrace the conclusion that innumerable physical events transpire without a sufficient cause, that is, for no explanatorily sufficient reason. In short, Copenhagen orthodoxy, framed in a purely physical context, entails a denial of the principle of sufficient reason (PSR) understood as the general maxim that every contingent event has an explanation. (425)

Right away, one can see how G1 through G3 hold insofar as scientific ignorance remains the case. But first, it will be useful to take note of what motivates Gordon to think that there is any truth to these premises. His primary motivations are informed by what he thinks is the inability of physicists to solve the measurement problem and that, at least from what he interprets is a fault of naturalism, quantum interpretations violate the Principle of Sufficient Reason (PSR) and/or are metaphysically implausible. If Gordon can draw his conclusions by way of induction, by ruling out particular interpretations yet to be offered on the basis of the shortcomings of six more general interpretations, then a naturalist has more warrant to rule out Theism by way of induction, by highlighting the many failures of Theism to square with scientific facts and its many more failures to offer sound philosophical arguments. God was once a local deity, intimately involved in matters far more mundane than quanta. It was widely believed that God created the Earth, not via the gradual work of physical laws, but as intimately as a potter forms his vase. Christians of the past even set out to prove God’s involvement in the world. Donald Prothero gives us a prime example:

Other geologists and paleontologists followed Cuvier’s lead and tried to describe each layer with its distinctive fossils as evidence of yet another Creation and Flood event not mentioned in the Bible. In 1842, Alcide d’Orbigny began describing the Jurassic fossils from the southwestern French Alps and soon recognized 10 different stages, each of which he interpreted as a separate non-Biblical creation and flood. As the work continued, it became more and more complicated until 27 separate creations and floods were recognized, which distorted the Biblical account out of shape. By this time, European geologists finally began to admit that the sequence of fossils was too long and complex to fit it with Genesis at all. They abandoned the attempt to reconcile it with the Bible. Once again, however, these were devout men who did not doubt the Bible and were certainly not interested in shuffling the sequence of fossils to prove Darwinian evolution (an idea still not published at this point). They simply did not see how the Bible could explain the rock record as it was then understood.

Prothero, Donald. Evolution: What the Fossils Say and Why it Matters. New York: Columbia University Press, 2007. 56-57. Print.

Going over the litany of examples throughout history is not necessary because Theism’s lack of explanatory success informs the behavior of today’s Theists. Therefore, it suffices to point out how far Theists have retreated. They have gone from asserting that God is intimately involved in every aspect of reality, in addition to positing that the Bible renders an infallible account of many historical events, including a global flood, to relegating God to the outskirts of human knowledge where the refulgence of science remains unfelt: hidden somewhere before the Big Bang, active solely in quantum phenomena that evade the experiences of even the most devout believers, and serving as grounds for some explanation of human consciousness that allows for the continuance of consciousness after death, i.e., a philosophy of mind that entails the existence of the soul, e.g., Cartesian dualism, Aristotelian hylomorphism, panpsychism. Gordon’s argument is a prime example of this retreat to the far reaches of scientific ignorance, hoping with all his might that he will find God at the fringes of reality. If naturalism has pushed Theism this far, then it is safe to say that Theism is teetering on the edge, that any argument Theists put forth now is highly likely to fail, and that it is only a matter of time before Theism plunges into the abyss.

Before exposing glaring issues with Gordon’s conclusion, I will go over issues with his analysis of the many worlds interpretation (MWI) and the Ghirardi-Rimini-Weber spontaneous collapse interpretation (GRWI). Then I will provide an overview of two interpretations that circumvent the measurement problem and one of its entailments, the observer effect. Prior to that, there are already issues with his analysis of the PSR that sound suspiciously like Plantinga’s EAAN or worse, Lewis’ Argument Against Naturalism. Gordon states:

Suppose, among all of the events that happen in the universe, there are countless many that happen without cause or reason. If this were true, we would have no principled way of telling which events were caused and which were not, for events that appeared to have a cause might, in fact, lack one. Our current perceptual states, for example, might have no explanation, in which case they would bear no reliable connection to the way the world is. So if the PSR were false, we could never have any confidence in our cognitive states. (425)

It is important to note that scientists are only concerned with causes inasmuch as they have explanatory power. If a cause does no explanatory work, then it does not help them get a better understanding of a given phenomenon. Think of Nancy Cartwright’s $1,000 bill descending in St. Stephen’s Square. Scientists simply do not care to arrive at a model that accurately predicts where the bill will land or, more precisely, its exact movements through the air prior to landing. This particular example, which involves any number of difficult-to-quantify variables, e.g., bird droppings hitting the bill on the way down, dust particles slightly changing the bill’s trajectory, wind speeds, does not help scientists better understand drift, free fall, etc. Physicists already have general principles that help them understand how, for instance, a basketball succumbs to the Magnus effect. A disposition of the ball, in particular its shape, makes it susceptible to this effect, whereas the dispositions of the bill guarantee that it will drift wildly during the entirety of its descent to the ground.

Any event appearing to be caused does not immediately invite scientific scrutiny. Only events that do explanatory work, or are suspected of having some explanatory power over a given effect, specifically in relation to a theory or model, are worth examining. In any case, it does not follow from the possibility that the PSR is false that our perceptual states have no explanation or cause. Therefore, the claim that we can have no confidence in our perceptual states is a complete non sequitur. Neuroscientists, cognitive scientists, and psychologists have done plenty of work to show that our perceptual states do have explanations, regardless of whether the PSR is true or not. Thus, if the PSR turns out not to be the case, our perceptual states are not among the events lacking a cause or an explanation.

A general note of relevance is in order. Gordon’s citations are mostly decades old, which any peer reviewer in philosophy would immediately be suspicious of. Of the Many Worlds Interpretation, Gordon states: “So which way of building the universal wavefunction is to be preferred? This difficulty, known as the “preferred basis problem,” reveals that the branching process itself is completely arbitrary from a mathematical standpoint and therefore, from the abstract point of view presupposed by the MWI, not reflective of any physical reality” (427). Setting aside the non sequitur, “not reflective of any physical reality,” his primary authority informing this statement, namely David Wallace in 2003, no longer considers the preferred basis to be an issue. Gordon would know that if he had read Wallace’s 2010 paper “Quantum Mechanics on Spacetime I: Spacetime State Realism,” in which he states:

We might sum up the objection thus: wave-function realism requires a metaphysically preferred basis… This objection is probably most significant for Everettians, who generally regard it as a virtue of their preferred interpretation that it requires no additional formalism, and so are unlikely to look kindly on a requirement in the metaphysics for additional formalism. Advocates of dynamical-collapse and hidden-variable theories are already committed to adding additional formalism, and in fact run into problems in QFT for rather similar reasons: there is no longer a natural choice of basis to use in defining the collapse mechanism or the hidden variables. We are not ourselves sanguine about the prospects of overcoming this problem; but if it were to be overcome, the solution might well also suggest a metaphysically preferred basis to use in formulating a QFT version of wave-function realism.

Wallace, David, and Christopher G. Timpson. “Quantum Mechanics on Spacetime I: Spacetime State Realism.” The British Journal for the Philosophy of Science, vol. 61, no. 4, 2010, pp. 697–727. https://arxiv.org/pdf/0907.5294.pdf. Accessed 1 Feb. 2021.

Lev Vaidman, Professor at the School of Physics and Astronomy in Tel Aviv, corroborates this: “due to the extensive research on decoherence, the problem of preferred basis is not considered as a serious objection anymore” (Vaidman, Lev, “Many-Worlds Interpretation of Quantum Mechanics”, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), Fall 2018, https://plato.stanford.edu/archives/fall2018/entries/qm-manyworlds/).

Gordon raises a second difficulty for the MWI: “The second difficulty lies in its treatment of quantum probabilities” (Ibid.). Worse than using outdated sources is Gordon’s misrepresentation of a source that actually disagrees with his statement. Simon Saunders, in “Chance in the Everett interpretation,” actually states: “To conclude: there is no good reason to think EQM is really a theory of over-lapping worlds. If questions of overlap of branches are to be settled by appeal to the underlying mathematics, in terms of vector space structure, then there is at least one natural mereology in terms of which worlds that differ in some feature, since orthogonal, are non-overlapping” (Saunders, Simon (2010). Chance in the Everett interpretation. In Simon Saunders, Jonathan Barrett, Adrian Kent & David Wallace (eds.), Many Worlds?: Everett, Quantum Theory & Reality. Oxford University Press.). Saunders attempts to “solve the problem without introducing additional structure into the theory” (Vaidman, Ibid.) and yet Gordon tells his reader to “see Saunders et al. 2010 for extensive polemics regarding it” (Ibid.). This is an egregious level of malpractice that can only be explained by his desperation to prove his belief in God.

Turning now to his analysis of GRWI, the prospects for his argument do not improve. Gordon states of GRWI: “The problem is that it cannot be rendered compatible with relativity theory or extended to the treatment of quantum fields in this form” (Ibid.); “the theory remains radically non-local and has the additional drawback of eliminating the possibility of particle interactions and thus any physics of interest” (Ibid.); and “there are no versions of the theory in which the collapse is complete, with the consequence that all “material” objects have low-density copies at multiple locations, the presence and effect of which linger forever in the GRWI wavefunction” (Ibid.). The first and third concerns are not an issue for GRWI. The first issue simply restates the more general difficulty physicists have had with reconciling quantum mechanics and general relativity; this would then be an issue for the entire enterprise of quantum mechanics, so we would essentially be tossing the bath water, baby and all! The third issue is an appeal to ignorance. That there is currently no version of GRWI offering a collapse that is complete does not mean that scientists ought to give up on the search for a version containing a complete collapse. This leaves the second concern, which is addressed in Tejinder Singh’s 2018 paper “Space and Time as a Consequence of GRW Quantum Jumps,” where he deploys GRWI to solve the measurement problem. Singh states:

This classical metric is in turn produced by classical bodies, according to the laws of general relativity. And classical bodies are themselves the result of GRW localisation. Thus it is not reasonable to assume space to exist prior to the GRW quantum jumps. Rather, it seems quite intuitive that space results from GRW collapses taking place all over the universe. Space is that which is between collapsed objects. No collapse, no space. This also helps us understand why the GRW jumps take place in space: it is because space in the first place is created because of these jumps.

Singh, Tejinder. “Space and time as a consequence of GRW quantum jumps.” Zeitschrift für Naturforschung A 73 (2018) 923. https://arxiv.org/pdf/1806.01297.pdf. Accessed 1 Feb. 2021.

Singh considers Hilbert space as more fundamental than classical space, so these GRW jumps occurring in Hilbert space give rise to the classical fabric of space we are accustomed to. He posits that the wave function is contingent on the configuration space where the particle moves through time, to potentially infinite degrees of freedom. This then results in a complete collapse of the wave function. Gordon’s hasty conclusion no longer holds if Singh has succeeded in offering a version of GRWI containing a complete collapse of the wave function.

This is setting aside the fact that Gordon overlooked what many consider an updated or even upgraded version of the MWI, namely the Many Interacting Worlds Interpretation (MIWI). The MIWI differs from the MWI in that all quantum phenomena are the result of an inter-universal repulsive force acting on worlds in close proximity to one another, thus explaining any dissimilarity between them. Michael Hall et al. conclude that the MIWI can reproduce quantum interference phenomena, in addition to offering advantages with respect to computational modeling. They note that on the de Broglie–Bohm interpretation, the wave function, denoted by Ψ, even at very large values, allows computer modeling to focus on high-density regions in configuration space, specifically regions where calculation errors have to be corrected to analyze convergence given norms of angular momentum (see Hall, Michael J. W., Deckert, Dirk-André, and Wiseman, Howard M. Quantum Phenomena Modeled by Interactions between Many Classical Worlds. Physical Review X, 2014; 4 (4) DOI: 10.1103/PhysRevX.4.041013).

There is also the Lindgren-Liukkonen Interpretation (LLI), championed by two quantum physicists who take Ockham’s Razor seriously. Given this, their quantum interpretation is a statistical interpretation that solves the observer effect. In other words, there is no logical reason, to their minds, why the results of a measurement should depend on an observer. They dispense with the notion of a conscious observer changing the result of measurements. The LLI shows that any epistemological and ontological issues that stem from the uncertainty principle are solved given that the uncertainty principle is a fixed property of stochastic mechanics (see Lindgren, Jussi and Liukkonen, Jukka. The Heisenberg Uncertainty Principle as an Endogenous Equilibrium Property of Stochastic Optimal Control Systems in Quantum Mechanics. Symmetry, 2020; 12 (9): 1533 DOI: 10.3390/sym12091533).
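For reference, the relation at issue is the standard Heisenberg bound; on Lindgren and Liukkonen's reading it emerges as an equilibrium property of the underlying stochastic dynamics rather than as an artifact of observation:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where Δx and Δp are the standard deviations of position and momentum, and ℏ is the reduced Planck constant. Nothing in the inequality itself mentions an observer, which is what makes a purely statistical reading of it possible.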

Gordon not only failed to rule out enough interpretations of quantum mechanics to make his conclusion more likely, but he also failed to rule out the best defenses of at least two of the interpretations he is skeptical about. The larger issue for Gordon is that even if he managed to rule out, say, twenty interpretations in quantum mechanics, his conclusion simply does not follow, and if it did, there are simple considerations that render it untenable. Recall: “God’s existence and continuous activity is the best explanation for the reality, persistence, and coherence of natural phenomena, and the account of divine action best meeting this explanatory demand is a form of occasionalist idealism” (Gordon, 436). It follows from this that God’s existence and continuous activity is the best explanation for the reality, persistence, and coherence of viruses, diseases, natural disasters, and pretty much any undesired consequence a Theist can imagine. Clearly, Gordon does not want to admit these natural phenomena into his conclusion, choosing instead to special plead for any cases he thinks suit his argument. In other words, one of his own concerns fits better on his own foot: suppose, among all of the events that happen in the universe, there are countless many that happen without God’s continuous activity, e.g., pretty much all the bad stuff. If this were true, we would have no principled way of telling which events were caused by his activity and which were not, for events that appeared to have been caused by God, in fact, were not. It is far more probable, therefore, that God has no hand in any event in the natural world, not even granting a retreat into the quantum realm.

Ultimately, if a Theist wants to continue to assert that God has a hand in the unification of quantum and classical phenomena, they need to take a different route than Gordon has. Gordon severely undermines his own project by using outdated sources, being completely unaware that one of the authors of one of his primary sources changed his mind and actually proved the opposite of what seemed to lend a hand to Gordon’s argument, and overlooking a number of interpretations that may provide a stable and complete collapse of the wave function, thus solving quantum paradoxes like the measurement problem and the related observer effect. More damning to such arguments is that if a personal, loving deity saw fit to retreat to the far reaches of metaphysical reality, then he can have no desire to be known or detected even by people who are hopelessly devoted and attached to him. Quanta lie so far outside the everyday experience of human beings that the idea that God is asking us to pursue him into the microcosms of the quanta is, quite frankly, nonsensical. It makes more sense that retreats like Gordon’s, into profoundly metaphysical territory, have everything to do with Theism’s failure to square with science, in addition to its failure to offer philosophical arguments or proofs that are sound or, at the very least, cogent and without controversy. This is precisely the prognosis for Theism, and the relentless advances of science, with philosophy closely in tow, do not look poised to provide any remedy. Gordon’s argument, while complex, completely collapses in the face of simple considerations, which is a happy irony given his claims about the quantum wave function.