By R.N. Carmona
In recent years, there has been a surge in the use of Bayes’ Theorem with the intention of bolstering this or that argument. This has resulted in an abject misuse or abuse of Bayes’ Theorem as a tool. It has also resulted in an incapacity to filter out bias in the context of some debates, e.g. theism and naturalism. Participants in these debates, on all sides, betray a tendency to inflate their prior probabilities in accordance with their unmerited epistemic certainty in either a presupposition or key premise of one of their arguments. The prophylactic, to my mind, is found in a retreat to the basics of logic and reasoning.
An Overview on Validity
Validity, for instance, is more involved than some people realize. It is not enough for an argument to appear to have logical form. An analysis of whether it, in fact, has logical form is a task that is seldom undertaken. When people think of validity, something like the following comes to mind: “A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid” (NA. Validity and Soundness. Internet Encyclopedia of Philosophy. ND. Web.).
Kelley, however, gives us rules to go by:
- In a valid syllogism, the middle term must be distributed in at least one of the premises
- If either of the terms in the conclusion is distributed, it must be distributed in the premise in which it occurs
- No valid syllogism can have two negative premises
- If either premise of a valid syllogism is negative, the conclusion must be negative; and if the conclusion is negative, one premise must be negative
- If the conclusion is particular, one premise must be particular (Kelley, D. The Art of Reasoning. W. W. Norton & Co. 2013. Print. 243-249)
With respect to the first rule, any argument that does not adhere to it commits the fallacy of undistributed middle. Logically, if we take the valid categorical syllogism to be the analogue of Modus Ponens, then the undistributed middle is akin to affirming the consequent. Consider the following invalid form:
All P are Q.
All R are Q.
∴ All R are P.
When affirming the consequent, one reasons from P ⊃ Q and Q to P, as though the conditional also ran in the other direction (Q ⊃ P). It is not surprising that these two fallacies are so closely related, because both are illegitimate transformations of valid argument forms. We want to say that since all P are Q and all R are Q, all R must be P, in much the same way we want to infer P from P ⊃ Q and Q. Consider the well-known Kalam Cosmological Argument. No one on either side questions the validity of the argument because validity, for many of us, is met when the conclusion follows from the premises. However, one can ask whether the argument adheres to Kelley’s rules. If one were to analyze the argument closely enough, it is very arguable that the argument violates Kelley’s fourth rule. The issue is that it takes transposing from the fifth rule to the fourth, because the argument does not violate the fifth and therefore appears valid. However, when restated under the fourth rule, the problem becomes obvious. In other words, the universe is a particular in both Craig’s conclusion and in the second premise of his argument. Let’s consider the KCA restated under the fourth rule:
There are no things that are uncaused.
There is no universe that is uncaused.
∴ All universes have a cause.
Restating it this way appears controversial only because the argument seems to presuppose that there is more than one universe. The two negative premises must have properties in common. Put another way, since there are many of every other kind of thing, the universe cannot be the only thing of its kind, if we even agree that the universe is like ordinary entities at all. Craig, perhaps unintentionally, attempts to get a universal from a particular, as his argument restated under the fourth rule shows. Given this, we come to the startling conclusion that Craig’s KCA is invalid. Analyses of this kind are extremely rare in debates because most participants do not know, or have forgotten, the rules of validity. No amount of complexity hides a violation of basic principles. The advent of analytic philosophy with Russell and Moore led to increasingly complex arguments, and for the most part validity is respected. As shown here, that is not always the case, so a cursory analysis should always be done at the start.
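Kelley’s rules lend themselves to mechanical checking. Below is a minimal sketch in Python; the toy representation and function names are my own, not Kelley’s. It tests the first rule against the classic undistributed-middle form (all P are Q; all R are Q; therefore all R are P) and the third and fourth rules against the restated KCA.

```python
# Toy categorical proposition: (quantity, quality, subject, predicate),
# where quantity is 'all' or 'some' and quality is 'affirm' or 'neg'.
# A universal proposition distributes its subject; a negative proposition
# distributes its predicate.

def distributed(prop, term):
    quantity, quality, subj, pred = prop
    if term == subj:
        return quantity == 'all'
    if term == pred:
        return quality == 'neg'
    return False

def violates_rule1(major, minor, conclusion):
    """Undistributed middle: the term absent from the conclusion must be
    distributed in at least one premise."""
    terms = {major[2], major[3], minor[2], minor[3]}
    middle = (terms - {conclusion[2], conclusion[3]}).pop()
    return not (distributed(major, middle) or distributed(minor, middle))

def violates_rule3(major, minor):
    """No valid syllogism has two negative premises."""
    return major[1] == 'neg' and minor[1] == 'neg'

def violates_rule4(major, minor, conclusion):
    """A negative premise demands a negative conclusion, and vice versa."""
    negative_premise = major[1] == 'neg' or minor[1] == 'neg'
    return negative_premise != (conclusion[1] == 'neg')

# "All P are Q; all R are Q; therefore all R are P": the middle term Q is
# the predicate of two affirmative premises, so it is never distributed.
print(violates_rule1(('all', 'affirm', 'P', 'Q'),
                     ('all', 'affirm', 'R', 'Q'),
                     ('all', 'affirm', 'R', 'P')))      # True

# The restated KCA: two negative premises, affirmative conclusion.
kca_major = ('all', 'neg', 'things', 'uncaused')
kca_minor = ('all', 'neg', 'universe', 'uncaused')
kca_concl = ('all', 'affirm', 'universes', 'cause')
print(violates_rule3(kca_major, kca_minor))             # True
print(violates_rule4(kca_major, kca_minor, kca_concl))  # True
```

Barbara (all P are Q; all R are P; therefore all R are Q) passes the first rule under this checker, since the middle term is the subject of a universal premise. This is only a sketch of the distribution bookkeeping, not a full validity decider.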
Validity is necessary but not sufficient for an argument to prove effective and persuasive. This is why arguments themselves cannot substitute for or amount to evidence. Soundness is determined by taking a full account of the evidence with respect to the argument. The soundness of an argument is established given that the pertinent evidence supports it; otherwise, the argument is unsound. Let us turn to some simple examples to start.
An Overview of Soundness
“A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound” (Ibid.).
All ducks are birds.
Larry is a duck.
∴ Larry is a bird.
This argument is stated under Kelley’s fifth rule and is no doubt valid. Now, whether or not the argument is sound will have us looking for external verification. We might say that, a priori, we know that there are no ducks that are not birds. By definition, a duck is a kind of bird. All well and good. There is still the question of whether there is a duck named Larry. This is also setting aside the legitimacy of a priori knowledge because, to my mind, normal cognitive function is necessary to apprehend human languages and to comprehend the litany of predicates that follow from these languages. We know that ducks are birds a posteriori, but on this point I digress. Consider, instead, the following argument.
All ducks are mammals.
Larry is a duck.
∴ Larry is a mammal.
This argument, like the previous one, is valid and in accordance with Kelley’s fifth rule. However, it is unsound. This harkens back to the notion that ducks belonging to the domain of birds is not a piece of a priori knowledge. Despite knowing that all ducks are birds, the differences between birds and mammals are not at all obvious. That is perhaps the underlying issue: a question of how identity is arrived at, in particular the failure of the essentialist program to capture what a thing is. The differentialist program would have us identify a thing by pinning down what it is not. It follows that we know ducks are birds because, anatomically and genetically, ducks do not have the signatures of mammals or any other class for that matter. A deeper knowledge of taxonomy is required to firmly establish that ducks are, in fact, birds.
An exploration of soundness is much more challenging when analyzing metaphysically laden premises. Consider, for example, the second premise of the KCA: “The universe began to exist.” What exactly does it mean for anything to begin to exist? This question has posed more problems than solutions in the literature; for our purposes, it is not necessary to summarize that here. We can say of a Vizio 50-inch plasma screen television that it began to exist in some warehouse; in other words, there is a given point in time when a functioning television was manufactured and sold to someone. The start of a living organism’s life is also relatively easy to identify. However, mapping these intuitions onto the universe gets us nowhere because, as I alluded to earlier, the universe is unlike ordinary entities. This is why the KCA has not been able to escape the charge of the fallacy of composition. All ordinary entities we know of, from chairs to cars to elephants to human beings, exist within the universe. They are, as it were, the parts that comprise the universe. It does not follow from the fact that all ordinary things begin to exist that the universe must have begun to exist.
This is a perfect segue into probability. Again, since Bayes’ Theorem is admittedly complex and not something that is easily handled even by skilled analytic philosophers, a return to the basics is in order. I will assume that the rule of distribution applies to basic arguments; this will turn out to be fairer to all arguments because treating premises as distinct events greatly reduces the chances of a given argument being true. I will demonstrate how this filters out bias in our arguments and imposes on us the need to strictly analyze arguments.
Using Basic Probability to Assess Arguments
Let us state the KCA plainly:
Everything that begins to exist has a cause for its existence.
The universe began to exist.
∴ The universe has a cause for its existence.
As aforementioned, the first premise of the KCA is metaphysically laden. It is, at best, indeterminable because it is an inductive premise; all it takes is one uncaused entity within the universe to throw the entire argument into the fire. To be fair, we can only assign a probability of .5 to this premise being true. We can then use distribution to get the probability of the argument being sound: since we have a .5 probability of the first premise being true, and given that we accept that the argument is not in violation of Kelley’s rules, we can distribute this probability across the other premise and arrive at the conclusion that the argument has a 50% chance of being true.
This is preferable to treating each premise as an isolated event; I am being charitable to all arguers by assuming they have properly distributed their middles. Despite this, a slightly different convention might have to be adopted to assess the initial probability of an argument with multiple premises. An argument with six individual premises has a 1.56% chance of being true, i.e., .5^6. This convention would be adopted because we want a probability between 0 and 100. If we use the same convention used for simpler arguments with fewer premises, then an argument with six premises would have a 300% chance of being true. An arguer could then arbitrarily increase the number of premises in his argument to boost the probability of his argument being true. Intuitively, an argument with multiple premises has a greater chance of being false; the second convention, at least, shows this while the first clearly does not. The jury is still out on whether the second convention is fair enough to more complex arguments. There is still the option of following standard practice and isolating an individual premise to see if it holds up to scrutiny. Probabilities do not need to be used uniformly; they should be used to make clear our collective epistemic uncertainty about something, i.e., to filter out dogma.
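The arithmetic behind the two conventions can be made explicit. A quick sketch (the variable names are mine), assuming six premises each assigned the neutral .5:

```python
n_premises = 6

# Second convention: treat each premise as an independent event and
# multiply; the result stays between 0 and 1, as a probability must.
multiplicative = 0.5 ** n_premises   # 0.015625, i.e., 1.56%

# First convention, naively extended: distributing 50% per premise
# additively yields a "probability" above 100%, which is incoherent.
additive = 0.5 * n_premises          # 3.0, i.e., "300%"

print(f"{multiplicative:.2%} vs {additive:.0%}")  # 1.56% vs 300%
```

Note also that the multiplicative convention captures the intuition in the text: every added premise can only lower, never raise, an argument’s initial probability.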
Let us recall my negation strategy and offer the anti-Kalam:
Everything that begins to exist has a cause for its existence.
The universe did not begin to exist.
∴ The universe does not have a cause.
Despite my naturalistic/atheistic leanings, the probability of my argument is also .5 because Craig and I share premise 1. The distribution of that probability into the next premise does not change because my second premise is a negation of his second premise. In one simple demonstration, it should become obvious why using basic probabilities is preferable over the use of Bayes’ Theorem. No matter one’s motivations or biases, one cannot grossly overstate priors or assign a probability much higher than .5 for metaphysically laden premises that are not easily established. We cannot even begin to apply the notion of a priori knowledge to the first premise of the KCA. We can take Larry being a bird as obvious, but we cannot take as obvious that the universe, like all things within it, began to exist and therefore, has a cause.
Now, a final question remains: how exactly does the probability of an argument being sound increase? Probability increases in accordance with the evidence. For the KCA to prove sound, a full exploration of evidence from cosmology is needed. A proponent of the KCA cannot dismiss four-dimensional black holes, white holes, a cyclic universe, eternal inflation, and any theory not in keeping with his predilections. That being the case, his argument becomes one based on presupposition and is, therefore, circular. A full account of the evidence available in cosmology actually cuts sharply against the arteries of the KCA and therefore greatly reduces the probability of it being sound. Conversely, it increases the probability of an argument like the Anti-Kalam being true. The use of basic probability is so parsimonious that the percentage decrease in the Kalam being sound mirrors the percentage increase in the Anti-Kalam being sound; the percentage decrease of any argument proving sound mirrors the percentage increase of its alternative(s) proving true. So if a full account of cosmological evidence lowers the probability of the Kalam being sound by 60%, the Anti-Kalam’s probability of being true increases by 60%: the Kalam would now have a 20% probability of being true while its opposite would now have an 80% chance of being true.
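The mirroring described here is simple arithmetic. A sketch using the figures from the example in the text (variable names are mine):

```python
p_kalam = 0.5           # neutral starting probability for the Kalam
p_anti = 1 - p_kalam    # its negation starts at the complement, also .5

# Suppose the cosmological evidence lowers the Kalam's probability of
# being sound by 60% in relative terms.
p_kalam *= (1 - 0.60)   # 0.5 -> 0.2
p_anti = 1 - p_kalam    # the Anti-Kalam absorbs the difference: 0.8

print(p_kalam, p_anti)
```

The complement rule is what enforces the parsimony claimed above: because the two arguments exhaust the options, whatever probability one loses, the other gains.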
Then, if a Bayesian theorist is not yet satisfied, he can keep all priors neutral and plug in probabilities that were fairly assessed to compare a target argument to its alternatives. Even more to the point regarding fairness, rather than making a favored argument the target of analysis, the Bayesian theorist can make an opponent’s argument the target of analysis. It would follow that their opponent’s favored argument has a low probability of being true, given a more basic analysis that filters out bias and a systematic heuristic like the one I have offered. It is free of human emotion or more accurately, devotion to any given dogma. It also further qualifies the significance of taking evidence seriously. This also lends much credence to the conclusion that arguments themselves are not evidence. If that were the case, logically valid and unsound arguments would be admissible as evidence. How would we be able to determine whether one argument or another is true if the arguments themselves serve as evidence? We would essentially regard arguments as self-evident or tautologous. They would be presuppositionalist in nature and viciously circular. All beliefs would be equal. This, thankfully, is not the case.
Ultimately, my interest here has been a brief exploration into a fairer way to assess competing arguments. All of this stems from a deep disappointment in the abuse of Bayes’ Theorem; everyone is inflating their priors and no progress will be made if that continues to be permitted. A more detailed overview of Bayes’ Theorem is not necessary for such purposes and would likely scare away even some readers versed in analytic philosophy and more advanced logic. My interest, as always, is in communicating philosophy to the uninitiated in a way that is approachable and intelligible. At any rate, a return to the basics should be in order. Arguments should continue to be assessed; validity and soundness must be met. Where soundness proves difficult to come by, a fair initial probability must be applied to all arguments. Then, all pertinent evidence must be accounted for and the consequences the evidence presents for a given argument must be absorbed and accepted. Where amending of the argument is possible, the argument should be restructured, to the best of the arguer’s ability, in a way that demonstrates recognition of what the evidence entails. This may sound like a lot to ask, but the pursuit of truth is an arduous journey, not an easy endeavor by any stretch. Anyone who takes the pursuit seriously would go to great lengths to increase the epistemic certainty of his views. All else is folly.
By R.N. Carmona
Every deductive argument can be negated. I consider this an uncontroversial statement. The problem is, there are people who proceed as though deductive arguments speak to an a priori truth. The Freedom Tower is taller than the Empire State Building; the Empire State Building is taller than the Chrysler Building; therefore, the Freedom Tower is taller than the Chrysler Building. This is an example of an a priori truth because given that one understands the concepts of taller and shorter, the conclusion follows uncontroversially from the premises. This is one way in which the soundness of an argument can be assessed.
Of relevance is how one would proceed if one is unsure of the argument. Thankfully, we no longer live in a world where one would have to go out of their way to measure the heights of the three buildings. A simple Google search will suffice. The Freedom Tower is ~546m. The Empire State Building is ~443m. The Chrysler Building is ~318m. Granted, this is knowledge by way of testimony. I do not intend to connote religious testimony. What I intend to say is that one’s knowledge is grounded on knowledge directly acquired by someone else. In other words, at least one other person actually measured the heights of these buildings, and these are the measurements they got.
Most of our knowledge claims rest on testimony. Not everyone has performed an experimental proof to show that the acceleration due to gravity is 9.8 m/s^2. Either one learned it from a professor, read it in a physics textbook, or learned it from a science program. Or they believe the word of someone they trust, be it a friend or a grade school teacher. This does not change the fact that, if one cared to, one could exchange knowledge by way of testimony for directly acquired knowledge by performing an experimental proof. This is something I have done, so I do not believe on the basis of mere testimony that Newton’s law holds. I can say that it holds because I tested it for myself.
To whet the appetite, let us consider a well-known deductive argument and let us ignore, for the moment, whether it is sound:
P1 All men are mortal.
P2 Socrates is a man.
C Therefore, Socrates is mortal.
If someone were completely disinterested in checking whether this argument, which is merely a finite set of propositions, coheres with the world or reality, I would employ my negation strategy: the negation of an argument someone assumes to be sound without epistemic warrant or justification. The strategy forces them into exploring whether their argument or its negation is sound. Inevitably, the individual will have to abandon their bizarre commitment to a sort of propositional idealism (namely that propositions can only be logically assessed and do not contain any real world entities contextually or are not claims about the world). In other words, they will abandon the notion that “All men are mortal” is a mere proposition lacking context that is not intended to make a claim about states of affairs objectively accessible to everyone, including the person who disagrees with them. With that in mind, I would offer the following:
P1 All men are immortal.
P2 Socrates is a man.
C Therefore, Socrates is immortal.
This is extremely controversial for reasons we are all familiar with. That is because everyone accepts that the original argument is sound. When speaking of ‘men’, setting aside the historical tendency to dissolve the distinction between men and women, what is meant is “all human persons from everywhere and at all times.” Socrates, as we know, was an ancient Greek philosopher who reportedly died in 399 BCE. Like all people before him, and presumably all people after him, he proved to be mortal. No human person has proven to be immortal and therefore, the original argument holds.
Of course, matters are not so straightforward. Christian apologists offer no arguments that are uncontroversially true like the original argument above. Therefore, the negation strategy will prove extremely effective in disabusing them of propositional idealism and in making them empirically assess whether their arguments are sound. What follows are examples of arguments for God that have been discussed ad nauseam. Clearly, theists are not interested in conceding. They are not interested in admitting that even one of their arguments does not work. Sure, you will find theists committed to Thomism, for instance, and as such, they will reject Craig’s Kalam Cosmological Argument (KCA) because it does not fit into their Aristotelian paradigm and not because it is unsound; they prefer Aquinas’ approach to cosmological arguments. More common is the kind of theist who ignores the incongruity between one argument and another; since they are arguments for God, each counts as evidence for his existence, and it really does not matter that Craig’s KCA is not Aristotelian. I happen to think that it is, despite Craig’s denial, but I digress.
Negating Popular Arguments For God’s Existence
Let us explore whether Craig’s Moral Argument falls victim to the negation strategy. Craig’s Moral Argument is as follows:
P1 If God does not exist, objective moral values do not exist.
P2 Objective moral values do exist.
C Therefore, God exists. (Craig, William L. “Moral Argument (Part 1)”. Reasonable Faith. 15 Oct 2007. Web.)
With all arguments, a decision must be made. First, an assessment of the argument form is in order. Is it a modus ponens (MP) or a modus tollens (MT)? Perhaps it is neither and is instead a categorical or disjunctive syllogism. In any case, one has to decide which premise(s) will be negated or whether, by virtue of the argument form, one will have to change the form to state the opposite. You can see this with the original example. I could very well have negated P2 and stated “Socrates is not a man.” Socrates is an immortal jellyfish that I tagged in the Mediterranean. Or he is an eternal being that I met while tripping out on DMT. For purposes of the argument, however, since he is not a man, the question of whether or not he is mortal is, at the very least, open. We would have to ask what Socrates is. Now, if Socrates is my pet hamster, then yes, Socrates is mortal despite not being a man. It follows that the negation has to be placed where it proves most effective. Some thought has to go into it.
Likewise, the choice has to be made when confronting Craig’s Moral Argument. Craig’s Moral Argument is a modus tollens. For the uninitiated, it simply states: ((p → q) ∧ ¬q) → ¬p (Potter, A. (2020). The rhetorical structure of Modus Tollens: An exploration in logic-mining. Proceedings of the Society for Computation in Linguistics, 3, 170-179.). Another way of putting it is that one is denying the consequent. That is precisely what Craig does. “Objective moral values do not exist” is the consequent q. Craig is saying ¬q, or “Objective moral values do exist.” Therefore, one route one can take is keeping the argument form and negating P1, which in turn negates P2.
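That modus tollens is a tautology can be verified exhaustively. A small sketch that checks ((p → q) ∧ ¬q) → ¬p against all four truth assignments:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is false only when a is true and b false."""
    return (not a) or b

# Modus tollens is a tautology: true under every assignment of p and q.
modus_tollens = all(
    implies(implies(p, q) and not q, not p)
    for p, q in product([True, False], repeat=2)
)
print(modus_tollens)  # True
```

The same four-row check run on an invalid form (say, affirming the consequent) would return False, which is what makes exhaustive truth-table testing a useful sanity check for propositional schemata.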
MT Negated Moral Argument
P1 If God exists, objective moral values and duties exist.
P2 Objective moral values do not exist.
C Therefore, God does not exist.
The key is to come up with a negation that is either sound or, at the very least, free of any controversy. Straight away, I do not like P2. Moral realists would also deny this negation because, to their minds, P2 is not true. The controversy with P2 is not so much whether it is true or false, but that it falls on the horns of the objectivism-relativism and moral realism/anti-realism debates in ethics. The argument may accomplish something with respect to countering Craig’s Moral Argument, but we are in no better place because of it. This is when we should explore changing the argument’s form in order to get a better negation.
MP Negated Moral Argument
P1 If God does not exist, objective moral values and duties exist.
P2 God does not exist.
C Therefore, objective moral values and duties exist.
This is a valid modus ponens. I have changed the argument form of Craig’s Moral Argument and I now have what I think is a better negation of his argument. Atheists can find satisfaction in P2; it is the epistemic proposition atheists are committed to. The conclusion also alleviates any concerns moral realists might have had with the MT Negated Moral Argument. For my own purposes, I think this argument works better. That, however, is beside the point. The point is that this forces theists either to justify the premises of Craig’s Moral Argument, i.e., prove that the argument is sound, or to assert, on the basis of mere faith, that Craig’s argument is true. In the first case, one will have succeeded in forcing the theist to abandon their propositional idealism and test the argument against the world as ontologically construed; in the second, in getting them to confess that they are indulging in circular reasoning and confirmation bias, i.e., that they are being irrational and illogical. Both count as victories. We can explore whether other arguments for God fall on this sword.
We can turn our attention to Craig’s Kalam Cosmological Argument (KCA):
P1 Everything that begins to exist has a cause.
P2 The universe began to exist.
C Therefore, the universe has a cause. (Reichenbach, Bruce. “Cosmological Argument”. Stanford Encyclopedia of Philosophy. 2021. Web.)
Again, negation can take place in two places: P1 or P2. Negating P1, however, does not make sense. Negating P2, as in the case of his Moral Argument, changes the argument form; this is arguable and more subtle. So we get the following:
MT Negated KCA
P1 Everything that begins to exist has a cause.
P2 The universe did not begin to exist.
C Therefore, the universe does not have a cause.
Technically, Craig’s KCA is a categorical syllogism. Such syllogisms employ a universal (∀) or existential quantifier (∃); the former is introduced by saying all, the latter by saying some. Consider, “all philosophers are thinkers; all logicians are philosophers; therefore, all logicians are thinkers.” Conversely, one could say “no mallards are insects; some birds are mallards; therefore, some birds are not insects.” What Craig is stating is that all things that begin to exist have a cause, so if the universe is a thing that began to exist, then it has a cause. Alternatively, his argument is an implicit modus ponens: “if the universe began to exist, then it has a cause; the universe began to exist; therefore, the universe has a cause.” In any case, the negation works because if the universe did not begin to exist, then the universe is not part of the group of all things that have a cause.
Whether the universe is finite or eternal has been debated for millennia and in a sense, despite changing context, the debate rages on. If the universe is part of an eternal multiverse, it is just one universe in a vast sea of universes within a multiverse that has no temporal beginning. Despite this, the MT Negated KCA demonstrates how absurd the KCA is. The singularity was already there ‘before’ the Big Bang. The Big Bang started the cosmic clock, but the universe itself did not begin to exist. This is more plausible. Consider that everything that begins to exist does so when the flow of time is already in motion, i.e. when the arrow of time pointed in a given direction due to entropic increase reducible to the decreasing temperature throughout the universe. Nothing that has ever come into existence has done so simultaneously with time itself because any causal relationship speaks to a change and change requires the passage of time, but at T=0, no time has passed, and therefore, no change could have taken place. This leads to an asymmetry. We thus cannot speak of anything beginning to exist at T=0. The MT Negated KCA puts cosmology in the right context. The universe did not come into existence at T=0. T=0 simply represents the first measure of time; matter and energy did not emerge at that point.
For a more complicated treatment, Malpass and Morriston argue that “one cannot traverse an actual infinite in finite steps” (Malpass, Alex & Morriston, Wes (2020). Endless and Infinite. Philosophical Quarterly 70 (281):830-849.). In other words, from a mathematical point of view, T=0 is the x-axis. All of the events after T=0 are an asymptote along the x-axis. The events go further and further back, ever closer to T=0 but never actually touch it. For a visual representation, see below:
[Figure: a curve asymptotically approaching the x-axis without ever touching it. Credit: Free Math Help]
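The asymptotic picture can also be illustrated numerically. A toy sketch (the particular sequence is my own choice, purely illustrative): take events at times t_n = 1/2^n; they march ever closer to T=0 without ever reaching it.

```python
# Events halving their distance to T=0 forever: an asymptote in miniature.
events = [1 / 2 ** n for n in range(64)]

closest = min(events)
print(closest > 0)  # True: every event occurs strictly after T=0
```

However many terms we generate, the minimum stays strictly positive, which is the numerical analogue of the claim that no event in the series coincides with T=0 itself.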
The implication here is that time began to exist, but the universe did not begin to exist. A recent paper implies that this is most likely the case (Quantum Experiment Shows How Time ‘Emerges’ from Entanglement. The Physics arXiv Blog. 23 Oct 2013. Web.). The very hot, very dense singularity before the emergence of time at T=0 would have been governed by quantum mechanics rather than the macroscopic physics, e.g., General Relativity, that came to dominate later. As such, the conditions were such that entanglement could have resulted in the emergence of time in our universe, but not the emergence of the universe. All of the matter and energy were already present before the clock started to tick. By analogy, if the universe is akin to a growing runner, then the toddler is at the starting line before the gun goes off. The sound of the gun starts the clock. The runner starts running sometime after she hears the sound. As she runs, she goes through all the stages of childhood, puberty, adolescence, and adulthood, and finally dies. Crucially, her running and her growth do not begin until after the gun goes off. Likewise, no changes take place at T=0; all changes take place after T=0. While entanglement suggests a change occurring before the clock even started ticking, quantum mechanics demonstrates that quantum changes do not require time and, in fact, may result in the emergence of time. Therefore, it is plausible that though time began to exist at the Big Bang, the universe did not begin to exist, thus making the MT Negated KCA sound. The KCA is, therefore, false.
Finally, so that the Thomists do not feel left out, we can explore whether the negation strategy can be applied to Aquinas’ Five Ways. For our purposes, the Second Way is closely related to the KCA and would be defeated by the same considerations. Of course, we would have to negate the Second Way so that it is vulnerable to the considerations that cast doubt on the KCA. The Second Way can be stated as follows:
We perceive a series of efficient causes of things in the world.
Nothing exists prior to itself.
Therefore nothing [in the world of things we perceive] is the efficient cause of itself.
If a previous efficient cause does not exist, neither does the thing that results (the effect).
Therefore if the first thing in a series does not exist, nothing in the series exists.
If the series of efficient causes extends ad infinitum into the past, then there would be no things existing now.
That is plainly false (i.e., there are things existing now that came about through efficient causes).
Therefore efficient causes do not extend ad infinitum into the past.
Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God. (Gracyk, Theodore. “Argument Analysis of the Five Ways”. Minnesota State University Moorhead. 2016. Web.)
This argument is considerably longer than the KCA, but there are still areas where the argument can be negated. I think P1 is uncontroversial and so, I do not mind starting from there:
Negated Second Way
We perceive a series of efficient causes of things in the world.
Nothing exists prior to itself.
Therefore nothing [in the world of things we perceive] is the efficient cause of itself.
If a previous efficient cause does not exist, neither does the thing that results (the effect).
Therefore if the earlier thing in a series does not exist, nothing in the series exists.
If the series of efficient causes extends ad infinitum into the past, then there would be things existing now.
That is plainly true (i.e., efficient causes, per Malpass and Morriston, extend infinitely into the past or, the number of past efficient causes is a potential infinity).
Therefore efficient causes do extend ad infinitum into the past.
Therefore it is not necessary to admit a first efficient cause, to which everyone gives the name of God.
Either the theist will continue to assert that the Second Way is sound, epistemic warrant and justification be damned, or they will abandon their dubious propositional idealism and run a soundness test. Checking whether the Second Way or the Negated Second Way is sound would inevitably bring them into contact with empirical evidence supporting one argument or the other. As I have shown with the KCA, it appears that considerations of time, from a philosophical and quantum mechanical perspective, greatly lower the probability of the KCA being sound. These considerations carry over neatly to Aquinas' Second Way, and as such, one has far less epistemic justification for believing that the KCA or Aquinas' Second Way is sound. The greater justification is found in the negated versions of these arguments.
Ultimately, one either succeeds at making the theist play the game according to the right rules or at getting them to admit that their beliefs are not properly epistemic at all; instead, they believe by way of blind faith, all of their redundant arguments are exercises in circular reasoning, and any pretense of engaging the evidence is an exercise in confirmation bias. Arguments for God are a perfect example of directionally motivated reasoning (see Galef, Julia. The Scout Mindset: Why Some People See Things Clearly and Others Don't. New York: Portfolio, 2021. 63-66. Print). I much prefer accuracy motivated reasoning. We are all guilty of motivated reasoning, but directionally motivated reasoning is indicative of irrationality and usually speaks to the fact that one holds beliefs that do not square with the facts. Deductive arguments are only useful insofar as their premises can be supported by evidence, which makes it easier to show that an argument is sound. This is why we can reason that if Socrates is a man, more specifically, the ancient Greek philosopher we all know, then Socrates was indeed mortal, and that is why he died in 399 BCE. Likewise, this is why we cannot reason that objective morality can only be the case if the Judeo-Christian god exists, that if the universe began to exist, God is the cause, or that if the series of efficient causes cannot regress infinitely and must terminate somewhere, it can only terminate at a necessary first cause, which some call God. These arguments can be negated, and the negations will show that they are either absurd or that their reasoning is deficient, resting on directionally motivated reasoning born of a bias for one's religious faith rather than on the bedrock of carefully reasoned, meticulously demonstrated, accuracy motivated reasoning that does not ignore or omit pertinent facts.
The arguments for God, no matter how old or new, simple or complex, do not work, not only because they rely on directionally motivated and patently biased reasoning, but also because, when one tests for soundness without excluding any pertinent evidence, the arguments turn out to be unsound. In the main, they all contain controversial premises that do not work unless one already believes in God. So there is a sense in which these arguments exist to give believers a false sense of security or, more pointedly, a false sense of certainty. Unlike my opponents, I am perfectly content with being wrong and with changing my mind, but the fact remains that theism is simply not the sort of belief I give much credence to. Along with the Vagueness Strategy, the Negation Strategy is something that should be in every atheist's toolbox.
By R.N. Carmona
Why a blog post and not a proper response in a philosophy journal? For one, my very first journal submission is still in the review process, close to two months later. Secondly, blogging allows me to be pedantic, to be human, that is, to express frustration, to show anger, to be candid; in other words, blogging allows me to be myself. Of highest priority, probably, is the fact that I do not want my first publication in the philosophy of mind to be a response. I want to eventually outline my own theory of consciousness, which is strongly hinted at here, and I prefer for that to be my first contribution to the philosophy of mind. I do not find panpsychism convincing, and I think there is another theory of consciousness, similar to panpsychism in some ways, that is much more cogent. I have outlined some qualms I have with panpsychism before; for people new to the blog, you can read here. In any case, I will be responding to a number of points in Strawson's Realistic Monism: Why Physicalism Entails Panpsychism. Here I will outline refutations that should put panpsychism to rest once and for all, as it is not a productive theory of consciousness, i.e., it does no explanatory work and does not illuminate further research; it gives us no real direction to go in.
Strawson states: “You’re certainly not a realistic physicalist, you’re not a real physicalist, if you deny the existence of the phenomenon whose existence is more certain than the existence of anything else: experience, ‘consciousness’, conscious experience, ‘phenomenology’, experiential ‘what-it’s-likeness’, feeling, sensation, explicit conscious thought as we have it and know it at almost every waking moment” (3). Strawson not only sounds like an absolutist, but he has, no doubt intentionally, boxed out real physicalists like the Churchlands and Daniel Dennett. For my purposes, I deny none of these things. I am not an eliminativist, though in the past I called myself one when I lacked a better term for my point of view. Now, I believe I have located a better term and so, I call myself a recontextualist. I do not deny qualia. What I strongly deny is what panpsychists think qualia entail: usually a version of nonphysicalist panpsychism or even covert substance dualism wherein mental phenomena are ethereal. In light of this, I suggest that qualia are physically reducible in obvious ways already known to us and in currently non-obvious ways yet to be discovered or understood; we simply have to do the work of demonstrating how what-it’s-likeness is physically reducible. I do not think Strawson dodges recontextualism, and this will become clearer as we move on.
He explains: “It follows that real physicalism can have nothing to do with physicSalism, the view — the faith — that the nature or essence of all concrete reality can in principle be fully captured in the terms of physics. Real physicalism cannot have anything to do with physicSalism unless it is supposed — obviously falsely — that the terms of physics can fully capture the nature or essence of experience” (4). I think the word physicSalism is clunky, so I will exchange it for the word physicsism, which ties nicely to its predecessor, scientism. There is not a chasm between someone who thinks science is the only way of knowing and someone who thinks physics is capable of explaining everything. Strawson makes the mistake of thinking physics stands alone among the hard sciences, as if it is the ground level of scientific explanation. I think chemistry joins physics in that department and as such, real physicalists can be physicsists if they are also chemistrists, chemistrism being the idea that a great number of physical phenomena are reducible to chemistry. If monism, the view that there is only one substance, and physicalism, the view that this one substance is physical in nature, are true, then it is incumbent on Strawson to address the notion that science cannot apprehend certain physical phenomena. Strawson, therefore, is guilty of the same dualistic tendencies he accuses Dennett of (5), and he seems to bite the bullet on this in offering his “‘experiential-and-non-experiential ?-ism’” (7). Per his view, there are actual physical phenomena explainable by science, especially ground-level hard sciences like physics and chemistry. On the other hand, there are quasi-physical phenomena wherein Strawson feigns physicalism while betraying the fact that he means nothing more than nonphysicalism. This has to be qualified.
So, let us grant that Strawson would qualify the sense of sight as uncontroversially physical. Now, he claims that the what-it’s-likeness of seeing red is also physical and yet, science has no account for this per his claims; not only does science have no current account, but it can never have a viable account because, in his own words, “experiential phenomena ‘just are’ physical, so that there is a lot more to neurons than physics and neurophysiology record (or can record)” (7). I am a real physicalist and I strongly disagree with this statement. For starters, I think his statement is informed by a conflation some people tend to make: if something is explainable by science, it lacks existential meaning and so, anything that is explained by science enables nihilism. In other words, if we can explain the origin of morality without recourse to God, morality is suddenly meaningless in the so-called ultimate sense and just is relativistic or subjectivistic. This is wrong-headed. Explaining the what-it’s-likeness of red would not change the fact that red is my favorite color; nor would it change my experience of seeing a blood red trench coat hanging in a clothing store, as if begging me to purchase it. In a naturalistic world, meaning is decided by us anyway and so, nihilism does not follow from the fact that science explains something. Love is not any less riveting, captivating, and enrapturing if science somehow explained every detail about falling in love, loving one’s children, loving the species one belongs to, and loving species entirely different from oneself.
This aversion to science eventually explaining qualia reeks of nonphysicalism and, to my mind, just is nonphysicalism labeled as physicalism, a nominal label that so far fails to cohere with what is normally meant by physicalism. The notion that physics, chemistry, genetics, and neurophysiology can never record those aspects of neurons that account for qualia is incompatible with physicalism. If science can apprehend physical substance, and qualia are a physical substance as Strawson claims, then science can apprehend qualia. To say otherwise is for Strawson to admit that what he means by physical in the case of qualia is actually not physical at all. It is covert dualism and nonphysicalism. I have no qualms with scientists fully understanding why red is my favorite color. This does not dampen my experience or make it meaningless.
Likewise, I know that sexual attraction reduces to mostly superficial, properly aesthetic, considerations and pheromones that induce a reaction in my brain, which then translates into a host of bodily reactions, e.g., feeling flush and then blushing, feeling either nervous, excited, or some combination of both, feeling a knot in my stomach. This does not make my attraction meaningless or even less meaningful because, in truth, while I understand the science of attraction, it does not register when I am in the middle of experiencing attraction. These considerations factor even less when I have fallen in love. I do not think, “well damn, scientists have me pegged and I am only feeling all of these sensations because of serotonin and dopamine releases in my brain; love is ultimately meaningless.” What gives vibrance to experience is the experiencer.
Experience is akin to aesthetics, hence why we find some experiences pleasurable while there are others we can describe with an array of negative words and connotations. Science can also explain why a lot of people hate waiting for a long period of time, why just as many people hate the feeling of being out of breath, and why pretty much anyone hates going to work after a night of inadequate sleep. Science explaining these experiences does not change the interpretation of the experiencer; science does suggest why we have very common associations between most experiences, from pleasurable to painful to everything in between, and that speaks to us being one species. So, experience can be explained by science and science can even predict the interpretation of this or that experiencer, but science does not dampen phenomenal experience. Panpsychists confuse the fact that we have phenomenal experience with the fact that we interpret phenomenal experience. Physicalism is not opposed to science fully explaining either of these and in fact, it has done much in the way of explaining both. Strawson tries to avoid this and yet claims: “If everything that concretely exists is intrinsically experience-involving, well, that is what the physical turns out to be; it is what energy (another name for physical stuff) turns out to be. This view does not stand out as particularly strange against the background of present-day science, and is in no way incompatible with it” (8). Well, if indeed it does not stand out as particularly strange against the background of present-day science, then all concrete things can be explained by science. This entailment seems uncontroversial and obvious for anyone identifying as a physicalist.
Strawson stipulates that “real physicalists … cannot deny that when you put physical stuff together in the way in which it is put together in brains like ours, it constitutes — is — experience like ours; all by itself. All by itself: there is on their own physicalist view nothing else, nothing non-physical, involved” (12). This is patently false as it alludes to mind-brain identity theory. It is not just atoms coming together in brains like ours. Human consciousness is compound reductive. In other words, human consciousness is not reducible to just one physical, macro aspect about our biological structure. That is to say that it is not reducible to just our hands or just our feet or just our brains. Strawson’s conflation of physicalism, as usually construed, and mind-brain identity theory leaves out crucial elements of experience, namely our central and peripheral nervous systems; the parts of the brain because anyone versed in the pertinent science knows that when it comes to the brain, the parts are more integral to consciousness than the whole; sense apparatus like our eyes, noses, pain receptors, and so on; and finally, external objects that provide the mind with data to process, interpret, make sense of, and so on.
From the perspective of differential ontology, and given that I have been thoroughly disabused of flippant idealism and solipsism, I know that my thoughts are not organically generated within my brain, as if in a vacuum. My thoughts are invariably and intimately connected to whatever I am occupied with; at present, that is Strawson’s various claims about what physicalism entails. If he had never written his thoughts, then I would not be countering his claims with my own. Perhaps I would be thinking about lunch at 12:26 pm ET, but alas, I am not. The point is that when I do start to think about having lunch, my thoughts about what to eat will be influenced by hunger pangs that amount to a feedback loop between my brain and my gut, again demonstrating the importance of organs other than the brain in accounting for my experience, and pretty much any human being’s experience, of hunger. That feeling would take priority over my desire to respond to Strawson. Deciding what to eat amounts to constraints, like what food I have in my pantry and refrigerator and a desire not to spend money on takeout. So, I can only end up eating what is already available to me; in this case, only unexpected factors can change this course. Perhaps a neighbor or a relative has decided on bringing me baked lasagna, and since I currently do not know that they have these plans, that option does not feature in what I am thinking of having for lunch. In any case, what has become clear is that phenomenal consciousness reduces, in part, to a litany of physical objects, some of which are not even in your vicinity. What is also clear is that the brain alone does not account for phenomenal consciousness.
Strawson and other panpsychists are looking in one of the right places, to be sure, but understanding phenomenal consciousness is like understanding a crime scene, and as such, we have to be aware of various factors that make the crime cohere, from blood spatter patterns to the murder weapon to point of entry (and whether or not it was forced entry) all the way up to possible motive. If we stop short at the murder weapon, then we can conclude the person was stabbed, but we cannot make any conclusions as to how many times, in what areas of the body, by whom, and for what reason. Phenomenal consciousness, uncontroversially, is exactly like that! Strawson and panpsychists sit out on the porch of the brain and do not venture into a mansion with many rooms, light switches, outlets, and such. Neurons, synapses, neurogenesis, neurodegeneration, memory formation, recollection, confabulation, and so on are critically important in accounting for certain experiences. We cannot say the what-it’s-likeness of déjà vu is due to the fact that particles are conscious. That tells us nothing, does not help us elucidate this experience, and ultimately, lacks explanatory power. It is simply a vacuous claim. Real physicalists can enter the many-roomed mansion and start to explain why this experience feels a certain way, and why some of us interpret it the way we do; for instance, there is a delay between seeing and being aware that we have seen, and so, in those small intervals of time, we can fool ourselves into thinking we have already seen what we just realized we saw. In other words, your brain “knows” what you see before you realize you have seen it. Generally, however, scientists think that déjà vu is tied to memory, so if we are sitting on the porch, trying to explain what it’s like to have this experience, we are in the wrong part of the house. We have to venture into the hippocampus, for instance (see Hamzelou, Jessica. “Mystery of déjà vu explained – it’s how we check our memories”. New Scientist. 16 Aug 2016. Web.).
I will feel free to skip the entire section on emergentism because, while I find this account intriguing, it is implausible and has what I think are obvious commitments. Strawson defines it as follows:
Experiential phenomena are emergent phenomena. Consciousness properties, experience properties, are emergent properties of wholly and utterly non-conscious, non-experiential phenomena. Physical stuff in itself, in its basic nature, is indeed a wholly non-conscious, non-experiential phenomenon. Nevertheless when parts of it combine in certain ways, experiential phenomena ‘emerge’. Ultimates in themselves are wholly non-conscious, non-experiential phenomena. Nevertheless, when they combine in certain ways, experiential phenomena ‘emerge’. (12)
If this is the case, then emergentism is committed to idealism and to solipsism, “sometimes expressed as the view that ‘I am the only mind which exists,’ or ‘My mental states are the only mental states’” (Thornton, Stephen. “Solipsism and the Problem of Other Minds”. Internet Encyclopedia of Philosophy. Web.). The obvious drawback here is that there is no way to pin down where these properties emerge from. The source will vary from one first person point of view to the next or, to put it facetiously, from one person under the illusion that they have a first person perspective to another person under the same illusion. I will claim that all that exists is my mind while someone else can lay claim to their own mind existing. I will then claim that all else emerges from my mental states while the next person makes the same claim. Then the question becomes, when we are both shopping for clothes, why do we both see a blood red trench coat for sale and why is it that my mental state of wanting to buy it does not emerge from his mental state of barely noticing the coat? How can the same properties group together to become the same object for two people, each under the illusion that their respective mental states are the only mental states? Emergentism, with respect to consciousness, does not evade these problematic commitments.
To understand the next couple of sections in his paper, wherein Strawson’s claims go off the rails and get even wilder, the following have to be kept in mind:
- The non-experiential thesis: “[NE] physical stuff is, in itself, in its fundamental nature, something wholly and utterly non-experiential” (11)
- Real Physicalism: “[RP] experience is a real concrete phenomenon and every real concrete phenomenon is physical” (12)
- P-phenomena: “the phenomena of liquidity reduce without remainder to shape-size-mass-charge-etc. phenomena” (13)
- “The central idea of neutral monism is that there is a fundamental, correct way of conceiving things — let us say that it involves conceiving of them in terms of ‘Z’ properties — given which all concrete phenomena, experiential and non-experiential, are on a par in all being equally Z phenomena” (23)
Setting aside Strawson’s side-stepping of chemistry, which easily shows how liquid water can “emerge” from two hydrogen atoms and one oxygen atom, the reason we cannot have Z phenomena is because the question of how consciousness can come from non-consciousness is itself reducible to a scientific question that has yet to be fully answered: how did life arise from non-life? Consciousness, as we know, is found in living things, so per the combination problem, what criteria need to be met for a consciousness like ours to take shape? Is it size, mass, shape, charge? Buildings and mountains are far more massive than us and by extension, are larger and have more particles generating what should amount to greater charges; and yet, mountains and buildings do not appear to be conscious at all. This is a critical clue because clearly, the haphazard combination of particles when a mountain forms or when a building is erected does not accomplish giving rise to consciousness like ours. Briefly, the combination problem can be formulated as follows:
Take a hundred of them [feelings], shuffle them and pack them as close together as you can (whatever that may mean); still each remains the same feeling it always was, shut in its own skin, windowless, ignorant of what the other feelings are and mean. There would be a hundred-and-first feeling there, if, when a group or series of such feelings were set up, a consciousness belonging to the group as such should emerge. And this 101st feeling would be a totally new fact; the 100 feelings might, by a curious physical law, be a signal for its creation, when they came together; but they would have no substantial identity with it, nor it with them, and one could never deduce the one from the others, nor (in any intelligible sense) say that they evolved it. (Goff, Philip, William Seager, and Sean Allen-Hermanson. “Panpsychism”. The Stanford Encyclopedia of Philosophy. Summer 2020. Web.)
To do away with Strawson’s assertions concerning consciousness coming from experiential ultimates, I summon help from an unexpected source. Though the neo-Aristotelian uses this thought experiment for different purposes, it is enough to show that basic organization does not consciousness make. Jaworski, no doubt inadvertently, presents a version of the combination problem that cuts deeply into Strawson’s thesis. He explains:
Suppose we put Godehard in a strong bag — a very strong bag since we want to ensure that nothing leaks out when we squash him with several tons of force. Before the squashing, the contents of the bag include one human being; after, they include none. In addition, before the squashing the contents of the bag can think, feel, and act, but after the squashing they can’t. What explains these differences in the contents of the bag pre-squashing and post-squashing? The physical materials (whether particles or stuffs) remain the same — none of them leaked out. Intuitively, we want to say that what changed was the way those materials were structured or organized. (Jaworski, William. Structure and the Metaphysics of Mind: How Hylomorphism Solves the Mind-Body Problem. Oxford: Oxford University Press, 2016. 9. Print.)
Intuitively, I do not say that what changed is just the organization or structure of these materials. That dodges Jaworski’s neo-Aristotelian commitments. I also add that the spark of consciousness is what changed. There is, in this case, irreparable damage to the claustrum, thus making consciousness impossible to turn back on, so to speak (see Koch, Christoph. “Neuronal “Superhub” Might Generate Consciousness”. Scientific American. 1 Nov 2014. Web.). Furthermore, there is irreparable damage to other pivotal organs that make it possible for us to make any claim to being alive. The liver, heart, stomach, etc. have all lost their function. The matter is still there, but the electric fields that make us conscious are permanently off. This is why I am conscious and an inanimate object, equal in size and weight to me, perhaps a boulder, is not conscious. Non-experiential things can be used to design other non-experiential things or can naturally form into other non-experiential things given that organic compounds and electric fields are entirely absent. The question of how consciousness arises from non-consciousness just is the question of how life arises from non-life. Just because we currently do not have a fuller, more detailed picture does not mean we have license to offer theories like panpsychism, which possess nothing in the way of explanatory power. The panpsychist and neo-Aristotelian think they are headed in some definite direction, but they are both quickly approaching dead ends.
Electric fields theory (EFT) of consciousness, indeed similar to panpsychism, at least prima facie, is where panpsychists should place their chips. Tam Hunt elaborates:
Nature seems to have figured out that electric fields, similar to the role they play in human-created machines, can power a wide array of processes essential to life. Perhaps even consciousness itself. A veritable army of neuroscientists and electrophysiologists around the world are developing steadily deeper insights into the degree that electric and magnetic fields—“brainwaves” or “neural oscillations”—seem to reveal key aspects of consciousness. The prevailing view for some time now has been that the brain’s bioelectric fields, which are electrical and magnetic fields produced at various physical scales, are an interesting side effect—or epiphenomenon—of the brains’ activity, but not necessarily relevant to the functioning of consciousness itself.
A number of thinkers are suggesting now, instead, that these fields may in fact be the main game in town when it comes to explaining consciousness. In a 2013 paper, philosopher Mostyn Jones reviewed various field theories of consciousness, still a minority school of thought in the field but growing. If that approach is right, it is likely that the body’s bioelectric fields are also, more generally, associated in some manner with some kind of consciousness at various levels. Levin provided some support for this notion when I asked him about the potential for consciousness, in at least some rudimentary form, in the body’s electric fields. (Hunt, Tam. “The Link Between Bioelectricity and Consciousness”. Nautilus. 10 Mar 2021. Web.)
While I am committed to monism, the idea that only physical substance exists, and am therefore committed to physicalism, I am not committed to the idea that particles are the kinds of ultimates that attend to consciousness. Cells are the ultimates that attend to conscious beings like ourselves. This is the reason why the boulder lacks consciousness despite weighing as much as I do. Intuitively, the boulder should have roughly the same number of particles in its structure as I do, but of utmost priority here is determining what I possess that the boulder does not. I am, as it were, activated by electric fields, receptive to my environment, able to respond and adapt to it. The boulder, on the other hand, cannot do this. One might want to ask why, when the boulder finds itself gradually eroding under a small waterfall, it does not simply relocate itself. If, like me, it has a rudimentary spark of consciousness, why does it resign itself to a slow death, i.e., erosion? Bioelectric fields account for why I will move out of the way of an oncoming vehicle while the boulder remains idle under the waterfall, slowly eroding as time goes on.
This is probably the most damning response to Strawson: various domains of science are needed to understand consciousness. If EFT is accurate, and I see no reason for it to be inaccurate, cell biology is just as crucial as physics, chemistry, and neurophysiology. This makes for a comprehensive understanding of consciousness, formed by the convergence of all our domains of knowledge. Philosophy of mind no doubt has a role to play, but not when it ventures far and wide from what science suggests. There is already a fundamental distinction between non-life and life, between inanimate objects and people. It follows, therefore, that if consciousness inheres in living things, then it cannot be attributed to non-living things. This smacks of common sense and yet, philosophers are willing to occupy the fringes with popular theories like panpsychism. Some pretty anecdotes have come from this idea. Some say we are the universe experiencing itself, but if the universe already had all the makings of phenomenal consciousness, why does it need overrated chimpanzees who are preoccupied with reality tv, social media, violence against one another, and all manner of mundane and superficial nonsense to understand itself? If any composition at all, even absent bioelectric fields, is enough to account for consciousness, why not just configure itself into a Boltzmann brain that has no potential to develop the silly prejudices and biases humans tend to have?
My account starts to make clear how I can be committed to NE and RP and lead us in the right direction as it concerns Z phenomena. P phenomena are well accounted for as they pertain to inorganic compounds. Of course, it begs the question to say that we have not quite nailed P phenomena as they concern organic chemistry. To reiterate, our ignorance with respect to how inorganic compounds become organic compounds that are essential to life does not give us license to make as many wild assumptions as we please. Any hypothesis, even within philosophy, especially if it encroaches on the territory of science, has to demonstrate awareness of the scientific literature or, at least, incorporate science wherever it is germane to one’s theory. Claiming that particles are actually experiential entities that account for why we are conscious simply pushes the problem back. Panpsychists have moved the goalposts, and if they were correct, we would still be tasked with comprehending the consciousness of things utterly unlike ourselves. Thankfully, we do not have to do that and we can focus our energy on understanding why there is a what-it’s-likeness to our experiences. Again, there are important clues: for instance, people who were born blind cannot see while dreaming:
When a blind man is asked if he dreams the answer is immediate: ‘Yes!’ But if we ask him if he sees anything in the dream, the answer is always doubtful because he does not know what it is to see. Even if there were images and colours in his brain during the dream, how could he recognize them? There is, therefore, no direct way, through the dream reports, to evaluate the presence of visual activation in the dream of congenitally blind subjects. (Carr, Michelle. “Do Blind People See in Their Dreams?”. Psychology Today. 29 Dec 2017. Web.)
If experiential particles give rise to sight, then why do particles seem entirely dependent on eyes? Why do they not simply configure themselves in another way in order to circumvent the blindness of the eyes? It is quite telling that in the absence of a sense, the corresponding phenomenal aspect of experiences associated with that sense is also absent. My compound reductive account predicts this; it is unsurprising on my theory of consciousness, whereas on Strawson’s account, and any panpsychist’s account, there is no way to account for it. Severe retinopathy is usually what causes people to be born blind. There are millions of light-sensitive cells within the retina, along with other nerve cells that receive and process the information that is sent to your brain by way of the optic nerve. On EFT, therefore, blindness is damage within the electric fields that result in sight. The cure for blindness is to restore the electric fields within these cells so that communication between nerve cells is possible. That would then restore any corresponding phenomenal experiences. The mere organization of particles clearly does not accomplish this. EFT seems to have far more explanatory power than panpsychism does, and if we took pains to assess just our five ordinary senses, we would see that like blindness, anosmia, ageusia, deafness, and things like neuropathy, hypoesthesia, and CIP (congenital insensitivity to pain) are all reducible to nerve cell damage in the nose, mouth, ears, and extremities respectively. In simple terms, bioelectric pathways are damaged and thus cut off communication to the brain, which in turn cuts off the corresponding qualia. This is essentially what I mean by recontextualizing qualia, and Strawson clearly does not dodge that bullet.
Ultimately, I think EFT should supplant panpsychism in popular circles. I can agree with the notion of conscious cells because they are among the smallest structures atoms have assembled into within living things. I disagree with the idea of conscious particles because when they organize into air fryers, thermostats, buildings, mountains, and sand dunes, despite having comparable mass, size, shape, and charge to living things, none of these objects appear to be conscious; in other words, none of these objects appear to be aware, awake, attentive, and most importantly, alive. I can knock on a fish tank and the fish with the blankest stare in the tank can respond to a stress signal and move away from that area of the tank. I can throw a rock repeatedly against a harder material and it will remain idle; put another way, I can take a geologist’s hammer to sediment over and over again, whether for a dig or in a fit of sustained rage, and the sediment will remain idle, allowing me to crack and break it as much as I please. Conscious beings, on the other hand, have a bias toward survival and retention of their structure. To use as humane an example as possible, if you were to do something that caused an insect pain, perhaps sending minor electrical charges into one of its legs, its automatic response is to try to escape the situation. The insect, like you, wants to survive and ensure that it is not crushed or, in this case, burnt to a crisp. The same cannot be said of the myriad non-experiential macro objects around us day in and day out. Strawson and panpsychists, aside from co-opting terms like physicalism when they really do not mean physicalism, would do well to renounce panpsychism and accept a better theory of ultimates: the electric field theory of consciousness. Then they can come to my pluralist physicalist account that allows for compound reductionism. To my mind, this is the only real way to study consciousness.
Lately, I have been thinking a lot about time, both on a personal level and on a philosophical one. Setting aside all of my thoughts on the kind of freedoms and privileges I would need to truly own my time, the phenomenon of time occupies my thoughts in ways ranging from the mundane to the complex. When I was younger and far less patient, I would grow incensed, having convinced myself that I picked the wrong bus out of the two that showed up at roughly the same time because the other bus eventually drove past mine. As I got older and my philosophical tendencies started to take root ever deeper, I started to notice that whenever the other bus drove by the bus I was on, it would never get too far ahead. In fact, it would arrive at my stop less than a minute before mine did.
So came my idea of virtual simultaneity. From there, I would imagine a parallel me, a sort of ghost, who in another universe got on the other bus, and I would imagine how far ahead of me he would be on his walk toward my building. I would then speed up so that I would catch up to my parallel on my way home, as a means to feel better about not having been on the bus that arrived earlier. This is how I think about time, in a very intimate sense. Nothing stops me in my tracks like an article about time. My mind could be furthest from time and from philosophy more generally, reading fan theories about a show I am into or looking up the latest sports scores, and when scrolling, an article about time shows up and I am immediately in that headspace. This was the case two days ago when I read Musser’s article “A Defense of the Reality of Time.”
Time is one of the most difficult phenomena to apprehend in the universe. It is as elusive as it is seductive. It is safe to say that no one has won over this lover’s heart. No one has seduced her enough to understand all of her mysteries and secrets. It is because of this that I am often discouraged from writing anything about time, despite growing confidence in my philosophical capacities. But then Musser said something that got the gears of my mind turning.
“Well, what if time had two dimensions?” As a purely algebraic question, I can say that. But if you ask me what could it mean, physically, for time to have two dimensions, I haven’t the vaguest idea. Is it consistent with the nature of time that it be a two-dimensional thing? Because if you think that what time does is order events, then that order is a linear order, and you’re talking about a fundamentally one-dimensional kind of organization. (Musser, George. “A Defense of the Reality of Time”. Quanta Magazine. 16 May 2017. Web.)
What if the critical error we continue to make is thinking of time as one-dimensional? What if time has more than one dimension, just like space does? A cursory look at Superstring Theory (ST) begins to sound a lot like my footrace with my parallel self. Parallel or possible worlds emerge from the fifth and sixth dimensions of ST. If it were possible to see the fifth dimension, we would be able to take note of similarities and differences between the world we occupy and a parallel world. Perhaps this is what our imagination does. If a runner gets a cramp in her calf a few meters from the finish line, she may envision herself winning the race in a world in which she did not get injured. Perhaps we do, in fact, see the fifth dimension, but I digress.
In the sixth dimension, one would be able to see the universe as it was at the beginning, along with an array of possibilities. One would, for instance, see universes in which the rate of entropy is lower or higher than in ours. If Boltzmann was correct, and time emerges from basic probabilities, then we would see universes in which a second spans not 1,000,000,000 nanoseconds but instead 1,000,000,001.515 nanoseconds, implying that time moves slightly slower in that universe than it does here. This is to say nothing of the arrow of time, which could move in the opposite direction (see Cartwright, John. “We may have spotted a parallel universe going backwards in time”. New Scientist. 8 Apr 2020. Web. and De Chant, Tim. “Big Bang May Have Created a Mirror Universe Where Time Runs Backwards”. Nova. 8 Dec 2014. Web.).
In the seventh dimension, one might find oneself in the mirror universe on the other side of the Big Bang, in which the arrow of time travels backwards. The eighth dimension would give one access to exotic universes, with initial conditions far removed from our own. In the ninth, one would possess the power to compare and contrast every possible world, from worlds very much like our own to worlds decidedly unlike ours. Finally, the tenth dimension exhausts every possibility.
The fact of the matter is that we simply do not fully comprehend our own universe, and so we can be gravely mistaken about time. If one considers ST, and not necessarily whether the theory is true, but rather what the theory can teach us, one quickly notices that ST’s dimensions have consequences for time that are arguably more important than their ramifications for space. If ST is the case, or if something close to it is true, time is ultimately nonlinear and therefore not one-dimensional. So let us walk through what two-dimensional time might look like. First, imagine that the descendants of Homo sapiens in the far-off future have invented time machines. If they can travel to and fro in time, time is still one-dimensional. In two dimensions of space, one can move horizontally and vertically. So then the question arises: what would it mean for time to move vertically?
Prior talks about a quasi change or “what is common to the flow of a literal river on the one hand…and the flow of time on the other” (Oaklander, Nathan. Adrian Bardon ed. “A-, B- and R-Theories of Time: A Debate”. The Future of the Philosophy of Time. New York: Routledge, 2012. 23. Print.). Rivers, in our world, do not rise, but they do fall. Setting aside evaporation, which does speak to water traveling upward, for time to flow vertically downward, we can imagine a powerful waterfall. Though this is completely new territory and may look like a new way of thinking about time, anyone who has ever thought of the past as something long dead and inaccessible is implying that the flow of time travels vertically downward, forever making the past impossible to re-experience directly. For instance, Emery, et al. write: “According to presentism, only present objects exist. More precisely, presentism is the view that, necessarily, it is always true that only present objects exist. Even more precisely, no objects exist in time without being present” (Emery, Nina, et al. “Time”. Stanford Encyclopedia of Philosophy. 2020. Web.). Presentists, therefore, believe that objects in the past are in a state of oblivion, but if Temporal Parts Theory (TPT) is correct, then objects and events in the past are annihilated and their corresponding intervals of time are also annihilated. A brief review of TPT is in order.
The Temporal Parts Theory of Identity (hereon TPT) is derived from the notion of time being, in some sense or to some degree, like space. One can think of, for instance, a linear timeline depicting the years all 46 U.S. Presidents held office. Or one can think of the x-y axis used in physics. Time is represented by the x-axis whilst space is represented by the y. Or one can think of a space-time diagram containing two axes representing space and another to represent time. These sorts of considerations have led some philosophers and scientists to ask whether time is a dimension. According to some accounts, time is the fourth dimension. Time, however, is not always analogous to space. D.H. Mellor discussed these disanalogies at length. He, for instance, concluded that there is no spatial analogue for our feeling of the passing of time. We cannot, in other words, attribute the passing of time to spatial changes (Mellor, D. H. Real Time II. London: Routledge, 1998. 95-96. Print.).
With respect to parts, however, time and space are analogous. Theodore Sider explains:
Temporal parts theory is the claim that time is like space in one particular respect, namely, with respect to parts. First think about parts in space. A spatially extended object such as a person has spatial parts: her head, arms, etc. Likewise, according to temporal parts theory, a temporally extended object has temporal parts. Following the analogy, since spatial parts are smaller than the whole object in spatial dimensions, temporal parts are smaller than the whole object in the temporal dimension. They are shorter-lived. (Sider, Theodore. “Temporal Parts”. 2008. Web.)
Consider, as an example, the b-moments in Friedrich Nietzsche’s life. Friedrich Nietzsche’s birth on October 15, 1844 is one b-moment and his death on August 25, 1900 is another. The dates of both represent distinct b-times. On TPT, he is spread out from October 15, 1844 to August 25, 1900. If we were to depict him in a space-time diagram, his parts on our diagram would depict his temporal parts. If we were capable of watching Nietzsche in his infancy, we would be observing a temporal part, then another that resembles it, and then another. If one were to watch infant Nietzsche long enough, his later temporal parts would be slightly bigger than the previous ones. That is to say that Nietzsche is no longer an infant; he is now, for instance, a toddler. So on our space-time diagram, Nietzsche grows the further we move away from his birth. It is also worth noting that temporal parts have spatial parts and vice versa. Nietzsche’s hand, like Nietzsche himself, persisted within the interval of time his life occupies. The parts he was composed of will also be represented on our space-time diagram.
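Sider’s spatial analogy lends itself to a toy model. The following Python sketch is purely illustrative, my own construction rather than anything from Sider or the TPT literature; the stage boundaries for Nietzsche are arbitrary, and only the birth and death dates come from the example above. It captures the one claim the analogy turns on: every temporal part is shorter-lived than the whole.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TemporalPart:
    """A stage of a persisting object, occupying a sub-interval [start, end)."""
    label: str
    start: date
    end: date

    def duration_days(self) -> int:
        return (self.end - self.start).days

@dataclass
class Perdurant:
    """A temporally extended object: the sum of its ordered, non-overlapping parts."""
    name: str
    parts: list

    def lifespan_days(self) -> int:
        return (self.parts[-1].end - self.parts[0].start).days

    def part_at(self, t: date):
        # The temporal part located at b-time t, if any.
        for p in self.parts:
            if p.start <= t < p.end:
                return p
        return None

# Birth and death dates are from the text; the stage cut-offs are invented.
nietzsche = Perdurant("Friedrich Nietzsche", [
    TemporalPart("infant", date(1844, 10, 15), date(1846, 10, 15)),
    TemporalPart("child",  date(1846, 10, 15), date(1857, 10, 15)),
    TemporalPart("adult",  date(1857, 10, 15), date(1900, 8, 25)),
])

# Each temporal part is shorter-lived than the whole object, as Sider notes.
assert all(p.duration_days() < nietzsche.lifespan_days() for p in nietzsche.parts)
```

Querying the model at a given b-time returns the stage located there — e.g., `nietzsche.part_at(date(1845, 1, 1))` picks out the infant stage, while any date after August 25, 1900 returns nothing, mirroring the presentist’s oblivion.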
Nietzsche, therefore, has spatial and temporal parts, so if presentism were correct, the interval of time represented by the ~56 years Nietzsche lived would enter oblivion. The flow of time, as it pertains to Nietzsche’s life, ends and yet continues. Perhaps a better analogy (though in actuality, this is time moving in three dimensions) is that presentists seem to imagine that time is like a river, at least on the surface, but within it are whirlpools in which intervals of time meet their end. With difficulty, therefore, one can imagine time in more than one dimension even if one is not convinced of presentism. Consider instead the growing block theory (GBT). On GBT, the present and the past are real, and the future becomes real when the present edge meets it. Similarly, on the moving spotlight theory (MST), only objects within the spotlight are considered present, though objects on the peripheries still exist (Emery, Ibid.). What might two-dimensional time look like under GBT or MST?
For simplicity’s sake, I will consider GBT. Since presentists deny the spatio-temporal parts of the past and thus, bury the past, we can imagine that for presentists, time moves to the right on the x-axis (that is, along the first quadrant of the axis) and also downward on the y-axis (along the fourth quadrant). Growing block theorists, on the other hand, since they believe in the future emerging at the present edge, see time moving in the same direction as presentists on the x-axis, but moving along the y-axis in the opposite direction, upward in the first quadrant. In other words, if the death of temporal parts is a downward movement, the birth of temporal parts is an upward movement.
To pursue a brief tangent, not all theories of time think of time as a line. Eternalists see time in one dimension, but their view of time is more in keeping with a circle. Emery, et al. state:
One version of non-presentism is eternalism, which says that objects from both the past and the future exist. According to eternalism, non-present objects like Socrates and future Martian outposts exist now, even though they are not currently present. We may not be able to see them at the moment, on this view, and they may not be in the same space-time vicinity that we find ourselves in right now, but they should nevertheless be on the list of all existing things. (Ibid.)
Another eternalist conception that has been entertained can be called recurrentism. Nietzsche probably explains it best:
What if some day or night a demon were to steal after you into your loneliest loneliness and say to you: “This life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything unutterably small or great in your life will have to return to you, all in the same succession and sequence—even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, speck of dust!”
Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: “You are a god and never have I heard anything more divine.” If this thought gained possession of you, it would change you as you are or perhaps crush you. The question in each and every thing, “Do you desire this once more and innumerable times more?” would lie upon your actions as the greatest weight. Or how well disposed would you have to become to yourself and to life to crave nothing more fervently than this ultimate eternal confirmation and seal? (Nietzsche, Friedrich W, and Walter Kaufmann. The Gay Science: With a Prelude in Rhymes and an Appendix of Songs. New York: Vintage Books, 1974. 373. Print.)
On the cosmological interpretation of eternal return, eternalism simply is the idea that time is like a circle. There is a sense in which recurrentism is already implied, and though eternalists are usually not committed to the idea of time literally repeating itself, if Socrates and Martian outposts exist, then time and causation are a loop. This implies that some future event will jumpstart the Big Bang and every event in the universe will play out in identical ways all over again. This is precisely the sort of thinking entailed in the presentist’s response to eternalism. Heather Dyke explains:
According to this argument, any theory that assigns ontological privilege to the present moment while also recognizing the existence of non-present times faces an insurmountable problem: it is unable to account for our knowledge that we are located in the present. If anything is certain, surely our knowledge that we are present is! But if past times exist as well as the present time, what is to say we are not located in one of those past times, mistakenly believing ourselves to be present? We might insist that our experience of presentness is so compelling that it must be veridical. But what about Queen Elizabeth I’s experience of presentness? That’s pretty compelling too, yet she is in the past, so her experience misleads her. Perhaps our experience misleads us too. (Dyke, Heather. “Presentism and Eternalism”. Time, Metaphysics of. Routledge Encyclopedia of Philosophy. Taylor and Francis, 2011. doi:10.4324/9780415249126-N123-2.)
We tend to give the present an ontological status greater than that of the past and the future. It appears that the only escape we have is the idea of spatio-temporal oblivion. Queen Elizabeth I is not somewhere in the past under the mistaken impression that she, and not we, is present. Time, therefore, probably does not have one dimension. Forward implies backward and up implies down, so any linear or circular view of time runs into problems of recurrence and all sorts of time-related paradoxes like the well-known Grandfather Paradox. Without straying too far into quantum mechanics, all of this is already implied in Bell’s Theorem: “Most models of nature are reversible in time; we can run the basic equations backwards in time as easily as forwards in time. This implies that theories with causality forwards in time must also have causality backwards in time; this was ignored by Bell” (’t Hooft, Gerard. “Time, the Arrow of Time, and Quantum Mechanics”. Frontiers. 29 Aug 2018. Web.).
Especially for people high on the idea of the universe being conscious, or that there is a god(s), the universe would therefore offer us clues to help us better understand time. Think of the notion of space and time changing places within a black hole. Going back to the paradigm, namely the four-dimensional concept of space-time, time would then have three dimensions and space would have one. As Pössel explains:
Only outside the cylinder does the intersection with a plane at constant height (“at constant time” as seen from the outside) correspond to a snapshot. Inside the cylinder, time and space have switched places. Inside, the intersection image doesn’t show a snapshot – it shows something much more weird: a caleidoscopic combination of many different times. After all, inside, time is not the axial, but the radial coordinate, and all the different distances from the “center” which you see in the sketch correspond to different moments in time. Instead of the spatial structure of the black hole, the sketch shows a strange mix of space and time! (Pössel, Markus. “Changing places – space and time inside a black hole”. Einstein Online. 2010. Web.)
I would highly recommend getting a handle on Pössel’s illustrations if you would like to better understand how time and space trade places within a black hole. For our purposes, the fact that this happens in a black hole might offer a clue. Perhaps the lesson to be learned from the inner workings of black holes is that time has more than one dimension. Maybe the relativity of time inherent in General and Special Relativity has to do with the fact that time is superimposed or even supervenes on space in a different way relative to one’s direction in space. In other words, time has a horizontal behavior, so to speak. It has a slightly different vertical behavior. Further still, it has another behavior along curved or edged spaces within the dimension of depth. On Earth, we commonly move as though we lived in two dimensions, so moving in a third dimension is difficult to conceptualize. We would have to travel in space to get a better idea of what the effects of gravity, over long distances, look like along a curved surface. Or, we can simply think of the difference between Mario on the Super Nintendo versus Mario on the Nintendo Switch. When Mario performs any movements in the depth dimension, adding a z-axis to our x- and y-axes, he is now covering diagonal domains analogous to the width of a cube, for instance.
To exhaust a well-known example, time moves faster at higher altitudes, the further one is from the Earth’s center. This might be suggestive of time behaving differently along the vertical axis. This also readily explains the relative experiences of someone falling into a black hole and another person seeing this happen. Recall the following:
1. The light coming from the person gets redshifted; they’ll start to take on a redder hue and then, eventually, will require infrared, microwave, and then radio “vision” to see.
2. The speed at which they appear to fall in will get asymptotically slow; they will appear to fall in towards the event horizon at a slower and slower speed, never quite reaching it.
3. The amount of light coming from them gets less and less. In addition to getting redder, they also will appear dimmer, even if they emit their own source of light!
4. The person falling in notices no difference in how time passes or how light appears to them. They would continue to fall in to the black hole and cross the event horizon as though nothing happened. (“Falling Into a Black Hole Sucks!”. ScienceBlogs. 20 Nov 2009. Web.)
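Both the altitude effect and the frozen-at-the-horizon appearance in point 2 follow from one textbook relation, the Schwarzschild time-dilation factor. I state it here only for reference; it is a standard result of General Relativity, not something derived in this essay:

```latex
\frac{d\tau}{dt} \;=\; \sqrt{1 - \frac{2GM}{r c^{2}}}
```

Here \(\tau\) is the proper time of a clock at radial coordinate \(r\) from a central mass \(M\), and \(t\) is the time of a distant observer. As \(r\) grows (higher altitude), the factor approaches 1 and clocks tick faster relative to those below; as \(r\) approaches the Schwarzschild radius \(r_s = 2GM/c^{2}\), the factor approaches 0, which is why the distant observer never sees the faller cross the event horizon even though the faller notices nothing unusual.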
If the observer never quite sees the faller fall into the black hole while the faller notices no difference in the passage of time and ultimately crosses the event horizon, time here could be taking on a different behavior along the z-axis, in the dimension of depth along a curved plane. The 3:1 relation of space-time can be simplified with a 3:3 relation, which is virtually the same as 1:1 or, colloquially, one-to-one. To my mind, if simplicity is the aim, then this is our next recourse. ST was born out of an attempt to condense the three dimensions of space into one dimension. That unfortunately did not work, at least not as intended. We should therefore attempt to think of time in three dimensions and see how this might help the project of unifying physics.
Ultimately, our theories of time, along with physics, may be impeded by the idea of one-dimensional time. Maybe it is time for us to begin thinking about the behavior of time in two or three dimensions. ST already shows that there are peculiarities about time across multiple dimensions, implying further that the ten dimensions of ST are not dominated by space. In other words, it is not that nine out of the ten dimensions belong to space and still one to time. There could be a disproportion, like a space-time ratio of 6:4, but if ST entails a few dimensions of time, we should begin conceptualizing how time behaves in a multi-dimensional setting. This might help us to finally unravel all of her mysteries and secrets. Until then, I leave you with a mind-bending interpretation of what happens inside of a black hole from Christopher Nolan’s “Interstellar.”
By R.N. Carmona
Philosophy of mind begins and ends with the mind-body problem. Philosophy, more generally, begins and ends with problems, so philosophy can be considered a sort of Russian doll in which a major problem implies any number of minor problems. Philosophy, therefore, makes progress insofar as there are solutions for these problems. The enterprise, however, is hyper-specific and thus, what appears to be a solution for the major aspect of an issue is often not considered a solution for the minor aspects connected to it. For example, Howard Robinson identifies the following implicit issue within the mind-body problem: “The problem of consciousness: what is consciousness? How is it related to the brain and the body?” (Robinson, Howard. “Dualism”. The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), Edward N. Zalta (ed.)). The hard problem of consciousness is entailed as well: what is phenomenal consciousness? How do qualia relate to the brain and/or body? Though we understand awakeness, awareness, and a lot about how consciousness traces to the brain, e.g., Francis Crick and Christof Koch’s idea that the claustrum is the “conductor of consciousness” (see Stiefel, Klaus M. “Is the key to consciousness in the claustrum?”. The Conversation. 26 May 2014. Web.), proponents of the hard problem are not convinced that physicalism has solved the mind-body problem. This is where we find ourselves in the philosophy of mind.
The mind-body problem appears to find a solution in the severe brain trauma experienced by Phineas Gage in 1848. An explosion sent a tamping iron through his left cheekbone at high speed; the iron exited at the top of his head and was found several rods (a rod is the equivalent of ~5 meters) behind him. His brain injury was such that it resulted in drastic changes in his behavior. John Martin Harlow, the physician who attended to Gage, published a report in the Bulletin of the Massachusetts Medical Society in which he discussed Gage’s behavioral changes:
His contractors, who regarded him as the most efficient and capable foreman in their employ previous to his injury, considered the change in his mind so marked that they could not give him his place again. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint of advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned in turn for others appearing more feasible. In this regard, his mind was radically changed, so decidedly that his friends and acquaintances said he was “no longer Gage.” (Costandi, Mo. “Phineas Gage and the effect of an iron bar through the head on personality”. The Guardian. 8 Nov 2010. Web.)
Gage’s case lends strong support to the notion that what we call the mind is intimately connected to the brain in some important way. Even though thinkers like Leucippus, Hobbes, La Mettrie, d’Holbach, Carnap, Reichenbach, and Schlick thought that the brain generates thought similarly to how the liver secretes bile, there was no theory in the philosophy of mind that equated the brain and mind until U.T. Place in 1956 (Smart, J. J. C. “The Mind/Brain Identity Theory”. The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.)). Mind/Brain Identity Theory can, therefore, be regarded as the earliest version of physicalism in the philosophy of mind.
In philosophy, more generally, I take major issue with negative positions. Briefly, a positive philosophical theory is the result of the evidence in question. Take, for instance, bare data about brain impairments and corresponding behavioral changes, from Phineas Gage to the 40-year-old man who exhibited sexually deviant behavior caused by a cancerous tumor in the right lobe of his orbitofrontal cortex (Choi, Charles. “Brain tumour causes uncontrollable paedophilia”. New Scientist. 21 Oct 2002. Web.). Mind/Brain Identity Theory, given bare data of this sort, would point to changes in the mind corresponding to some impairment in the brain. There are other points of data for a theory to account for, but the insights of neuroscience, cognitive science, and psychology are crucial and must be accounted for under any positive theory in the philosophy of mind. Negative theories, on the other hand, often bypass an alternative interpretation of the relevant data and hinge on the fact that a given positive theory, or a group of them falling under some category, e.g., physicalism, fails to account for or has overlooked other points of data. The challenge to physicalism is that it has failed, hitherto, to account for qualia.
Qualia (singular quale), what it is like to see red or taste pizza or touch silk, are supposed to lead us to the inescapable conclusion that since qualia are nonphysical properties, mental properties are also nonphysical. Nonphysicalist theories in the philosophy of mind are the result of physicalism’s incapacity, up to this point, to incorporate phenomenal experiences into its paradigm. Later, I will show why nonphysicalists have been hasty to dismiss physicalist theories. For present purposes, some historical background on the hard problem of consciousness is in order. While credit is due to Chalmers for naming the problem, he does not deserve recognition for being the first person to identify it.
A Short History of The Hard Problem of Consciousness
David Chalmers is often associated with the hard problem of consciousness, but I think the credit rightfully belongs to Wilfrid Sellars. Sellars spelled out the basic thrust of the problem in a manner tantamount to stating it explicitly. The fact that he did not call the problem what we now call it, ‘the hard problem of consciousness’, does not take away from the fact that he did far more work in attempting to unify two conflicting images, which he dubbed the manifest and scientific images.
At first glance, this might look like a reframing of Kant’s phenomena and noumena, but it is useful to note that Sellars’ manifest and scientific images would both be categorized as phenomena. On Kant’s view, the scientific image would not qualify as noumena. Some modern-day philosophers, taking after Donald Hoffman, a professor at the University of California, Irvine, hold that we have evolved in such a way that we are pretty much shielded from apprehending ultimate reality, i.e., the Kantian noumena (Frank, Adam. “What If Evolution Bred Reality Out Of Us?” NPR. 6 Sep 2016. Web.). We evolved to perceive and thus to apprehend solely the phenomena.
With that in mind, Sellars’ scientific and manifest images correspond to the Kantian phenomena. Yet there appears to be an irreconcilable contradiction between them. On the manifest image, a Rubik’s cube has a distinct three-dimensional shape and six colors – usually yellow, orange, red, green, blue, and white. Assuming we are trichromats who do not have red-green color blindness, we all apprehend this object more or less equally. On the scientific image, however, the cube does not have a distinct shape; nor does it have colors. The cube is comprised of particles and empty space, and though the colors are fully explainable by the science of chromatics – namely as the result of wavelengths in the electromagnetic spectrum – particles in and of themselves do not have a color. Aside from that, the Rubik’s cube seems to have these colors because we have three types of photoreceptor cells in each retina allowing us to see them. The colors, to put it another way, are not inherent to the object.
Sellars was interested in the project of saving the appearances or, in other words, unifying reality as it seems given human perception with reality as explained through science. This is the hard problem of consciousness made explicit: neuroscience cannot explain phenomenal consciousness. This was Sellars’ exact dilemma. The contradictory images are best viewed in human consciousness. Neurologists and neuroscientists can explain to us why we see and what brain regions are involved when we see, or even when we imagine seeing, but they cannot tell us why we see how and what we see. In other words, science can readily explain why we see the colors we see, but it cannot tell us how neurons and brain regions give rise to qualia; there is something it is like to see a Rubik’s cube and, given the hard problem, the scientific image cannot be invoked to explain the manifest image.
The Challenge From Nonphysicalists
Nonphysicalists, like any negative theorists, are essentially telling us to forget about the explanatory success of physicalism and focus instead on its seeming failure. In other words, nonphysicalists have no alternative explanation that works as well as or better than physicalist explanations of non-phenomenal consciousness – awakeness and awareness, for example – but because they purport to offer a metaphysical explanation for phenomenal consciousness, they conclude that we should abandon physicalist modes of explanation. I think that, first and foremost, the onus is on any negative theorist in philosophy to account for all of the same data in a more cogent manner than positive theorists before reaching into areas not illuminated by them. Otherwise, the inductive bias that people tend to have for a working paradigm remains justified. Put another way, if the positive theory has successfully accounted for all of these points of data, we have no reason to believe it cannot account for more troublesome points of data like qualia, given enough time. There is also a glaring problem with the inclination toward a metaphysical account after physical accounts have done most of the heavy lifting: it appears to beg the question for a nonphysical bias, one usually tracing back to religious predilections.
Think of synesthesia. For people who have synesthesia, hearing color, tasting sounds, and seeing numbers and letters as colored is a common experience. As with most sensory disorders, there is a neurophysical correlate to synesthesia (Barry, Susan R. “The Brain of a Synesthete”. Psychology Today. 26 Jul 2012. Web.). Sometimes the onset of the disorder is preceded by brain trauma. Jason Padgett, who was assaulted outside a karaoke bar, suffered a severe concussion. He claims to see geometric shapes and angles all around him. These are unusual senses for the majority of us, and there would obviously be something it is like to experience the world the way he does, i.e., qualia associated with these quirky senses. There is, however, something to be said about the fact that a brain injury preceded the emergence of these peculiar senses. While I am wary of inferring causation from correlation, correlation is a powerful indicator, and considering that Padgett’s case is not unique, the correlation might be suggestive of causation (see St. Michael’s Hospital. “Second known case of patient developing synesthesia after brain injury.” ScienceDaily. 30 Jul 2013. Web.).
Panpsychists and Aristotelian hylomorphists say nothing about the misattribution of qualia. They want potential detractors of physicalism to believe that qualia are invariably uniform and predictable. In other words, the examples are invariably what it is like to taste pizza or what it is like to see red, but they never make mention of an increasing number of cases in which we can ask what it is like to taste Bach’s “Lacrimosa” or what it is like to hear burgundy. Recently, Julie McDowall’s synesthesia went viral because she can tell people what their names taste like. Interestingly, in some cases, she told people what their names looked like, e.g., Naomi looks like colorful Lego pieces. Panpsychism and modern-day hylomorphism, aside from having no way of accounting for awareness, awakeness, and other aspects of consciousness already explained under physicalism, have overlooked synesthetic qualia because they are essentially live counterfactuals. We do not have to imagine another world in which people taste sounds and hear colors; these peculiarities happen all around us, and so, if nonphysicalists want to conclude that qualia and physicalism are incongruous, then synesthetic qualia and nonphysicalism are irreconcilable. To see this, it will be necessary to take a closer look at these two negative theories.
An Examination of Panpsychism and Hylomorphism
Setting aside the more mystic treatments of panpsychism, the non-reductive physicalist version of it promoted by Strawson and Chalmers is fallacious, and though that is not enough to show where it has gone wrong, it makes for a false start. John Heil states:
The idea would not be that electrons and quarks have minds, but that qualities of conscious experiences are included among the qualities possessed by fundamental things. On the version of panpsychism endorsed by Strawson, electrons would not have mini-souls, but merely contain experiential ingredients of consciousness, a kind of primitive awareness. Only fully developed sentient creatures would undergo recognizably conscious experiences. You can make a triangle or a sphere by organizing non-triangular or non-spherical elements in the right way, but the elements must have some spatial characteristics. Similarly, you can make a conscious mind by arranging elements that are not themselves minds, but the elements must have some conscious characteristics. (Heil, John. Philosophy of Mind: A Contemporary Introduction. Third ed. New York: Routledge, 2006. 172. Print.)
The notion that the constituents of consciousness must themselves be conscious, as some proponents of panpsychism put it, or, as Strawson puts it, that they must have rudimentary experiential ingredients, is a fallacy of division. Despite this, it is not difficult to see the appeal of panpsychism. If you spread out the parts of a thermostat, and you know anything about its components, you will understand how its bimetallic strip serves an important function. The strip is made of two different metals placed back to back; one metal has a higher coefficient of linear expansion and therefore expands more when the temperature increases, bending the strip in one direction, toward either opening or closing the circuit. Unfortunately, it is just not possible to spread out the parts of the brain and nervous system, right down to the microscopic level, in order to confirm that particles have experiential ingredients. Moreover, one can pick apart a computer, a microwave, or a thermostat down to its barest parts, understand the function of each part, and put these appliances back together in working order. A panpsychist can then assert that brains are similar to household appliances, but they would have considerable difficulty showing how the combination of w amount of electrons and x amount of protons, if configured to make a neocortex and an amygdala, will result in y functions and z behaviors. The panpsychist’s domain of analysis is on the same macro-level that physicalists operate in, and so they have no way of substantiating their assertions.
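The bimetallic-strip mechanism just described can be put in a few lines of code. This is a toy sketch, not engineering code: the expansion coefficients are approximate textbook values for brass and steel, and the bending rule is the qualitative one given above (the strip curves toward the metal that expands less).

```python
# Toy model of a thermostat's bimetallic strip: dL = alpha * L * dT.
# Coefficients are approximate textbook values (1/degree C).
ALPHA = {"brass": 19e-6, "steel": 12e-6}

def expansion(metal: str, length_mm: float, delta_t: float) -> float:
    """Change in length (mm) of a strip of the given metal."""
    return ALPHA[metal] * length_mm * delta_t

def bend_direction(metal_a: str, metal_b: str, length_mm: float, delta_t: float) -> str:
    """The strip curves toward the side that expanded less."""
    da = expansion(metal_a, length_mm, delta_t)
    db = expansion(metal_b, length_mm, delta_t)
    if da == db:
        return "straight"
    return f"toward {metal_a if da < db else metal_b}"

# Heating a 50 mm brass/steel strip by 30 degrees C: brass expands more,
# so the strip bends toward the steel side, opening or closing the circuit.
print(bend_direction("brass", "steel", 50.0, 30.0))  # -> "toward steel"
```

The point of the sketch is the one made above: the strip's "power" to switch a circuit is exhausted by the dispositions of its components.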
We will circle back around to panpsychism shortly, but a brief overview of hylomorphism is necessary because both of these negative theories in the philosophy of mind rely too heavily on the same considerations. A go-to example used by hylomorphists is in order:
Suppose we put Godehard in a strong bag — a very strong bag since we want to ensure that nothing leaks out when we squash him with several tons of force. Before the squashing, the contents of the bag include one human being; after, they include none. In addition, before the squashing the contents of the bag can think, feel, and act, but after the squashing they can’t. What explains these differences in the contents of the bag pre-squashing and post-squashing? The physical materials (whether particles or stuffs) remain the same — none of them leaked out. Intuitively, we want to say that what changed was the way those materials were structured or organized. (Jaworski, William. Structure and the Metaphysics of Mind: How Hylomorphism Solves the Mind-Body Problem. Oxford: Oxford University Press, 2016. 9. Print.)
Setting aside Bernard Williams’ astute observation, namely that this ‘polite materialism’ is incongruous with the Neo-Aristotelian’s confessed dissatisfaction with materialism, this thought experiment misses the mark. Briefly, in keeping with Aristotle, hylomorphists state that the mind is not just an accumulation of materials but, instead, a structure or a composite organized in a certain way that then gives rise to powers that have causal capacities. On the surface, it is a non-reductive physicalist account, but once Aristotelian causation is properly accounted for, along with Aristotle’s treatment of substance and forms, one starts to see how hylomorphism, like panpsychism, is appropriated by nonphysicalists, especially ones who have religious biases. In any case, we do not need to imagine a graphic case like a crushed human being in a bag, a thought experiment that begs the question for hylomorphism. Hylomorphists should instead ask what structural difference there is between a living person at 3:49 pm and the same person pronounced dead at 3:53 pm. Setting aside theories of time, the living human, named Henry, is structurally organized as the hylomorphist asserts; dead Henry is organized in the same way, no pulverization necessary. Hylomorphism, unlike versions of physicalism, cannot explain what changed over the course of four minutes.
This is precisely the problem with negative theories in the philosophy of mind. They take for granted what physicalism has already explained, offering no alternative explanations, and then proceed to make claims extending from questions physicalists have, thus far, not been able to address. A panpsychist or hylomorphist is not saying anything about the brain gradually moving toward complete inactivity, the death of neurons, and ultimately, brain death resulting in the loss of function of every one of Henry’s organs. The physicalists did all the heavy lifting, offering positive theories that account for far more data than the nonphysicalists, who hope to high heaven that there is a viable alternative to Cartesian dualism. The nonphysicalists, by contrast, want to get by with nothing in the way of a cogent explanation, content with identifying what is, more than likely, a temporary weakness of a physicalist framework that has had far more explanatory success. In other words, we do not need to crush Henry’s structure to account for any difference between a living Henry and a dead one. Nor do we have plausible ways of deconstructing Henry, down to scores of subatomic particles, in order to understand his internal workings and how these experiential ingredients come together as a fully functioning human consciousness. It appears that the panpsychist making these assertions intends to make demands that are impossible to meet, to essentially move the goalposts out of a begrudging recognition of the fact that physicalists have much in the way of a working explanation for how all of the parts of the brain communicate, how neurons and synapses account for connectivity, and how these constituents come together to produce consciousness, including qualia.
We do not need negative theories of mind unless these negative theorists do the hard work that will put them in a position to offer alternative explanations that are consistent with their nonphysicalist framework; it is not enough to stand on the shoulders of a giant they believe to be wrong.
The Sensuous Zombie
Now, imagine a person indistinguishable from a human being. Imagine then that this person is blind, deaf, and mute. Furthermore, imagine that this person cannot taste, smell, or feel anything. Imagine that this person is devoid of all senses, even hunger pangs, the sensation of a full bladder, and bowel movements. On my reductionist account, sensations feature in the information received from the physical world. Sights, sounds, colors, textures, and so on inform our awareness, which in turn informs our consciousness. Information mediates awareness and consciousness. This is in agreement with David Chalmers’ view. Where we differ is that I conclude that without our senses, we would not have phenomenal consciousness, especially since the qualia of sight are simultaneous with whatever we are seeing.
My p-zombie shows that my reductionist account succeeds, since accounting for the p-zombie’s self-knowledge and qualia is impossible. Whatever account one might render is all but ineffable. Can this p-zombie proceed as Descartes did and eventually say “I think therefore I am”? If s/he knows of no people and no other objects, how can this person prove him/herself to exist? On my differential ontological view, we know who we are, in part, because of differentiation from other people and objects, i.e., “I am because we are, and since we are, therefore I am” (see Herrera, Jack. “Because You Are, I Am”. Philosophy Talk. 12 May 2017. Web.); there are no essential properties about us.
The pivotal difference between my p-zombie and Chalmers’ is that it is possible for someone to be born this way. Of course, I would not wish the combination of these disabilities on anyone, but paralysis, blindness, deafness, anosmia, etc. have all occurred separately. Though the probability of all of these conditions being present in one individual is extremely low, it suffices to say that someone born without a sense of taste, or with a distorted one, will either have no associated qualia or corresponding qualia that differ from normal experiences, e.g., chocolate tastes like spinach. Therefore, this leaves us with a powerful suggestion we cannot ignore: qualia, whether normal or synesthetic, and the lack thereof are contingent upon sense apparatus and communication, via the nervous system, to our eyes, noses, mouths, etc. There are, for example, widespread reports from people infected with COVID-19 in which their sense of smell is degraded or goes away entirely. Ordinary flu strains and common colds can have these effects as well, but scientists looking into why this happens have shown that COVID-19 disrupts the normal functions of sustentacular cells, which “maintain the delicate balance of salt ions in the mucus that neurons depend on to send signals to the brain” (Sutherland, Stephani. “Mysteries of COVID Smell Loss Finally Yield Some Answers”. Scientific American. 18 Nov 2020. Web.). How is it, then, that the disruption of the function of sustentacular cells turns off smell-related qualia entirely? While one may remember what it is like to smell a rose, there are no longer corresponding qualia when one takes in a huge whiff of a bouquet of roses. Anosmia prevents one from having this experience.
The likelihood that anosmia can turn off qualia is much higher given physicalism than on any nonphysicalist alternative, especially in light of the fact that nonphysicalists have no alternative explanations for why certain qualia are normally associated with this or that sense apparatus, be it the eyes or the nose. Moreover, the nonphysicalist has no explanation as to why brain trauma leads to synesthetic qualia and often omits such cases to suit his arguments against physicalism. What is more damning for the nonphysicalist enterprise is that though they go on ad nauseam about what it is like to be a bat, they have no account of why some blind people develop echolocation to navigate their surroundings. In other words, in cases where normal qualia are inaccessible due to impairment or disuse of, for instance, the eyes, new senses must result in new qualia. This is readily predicted under physicalism, but not at all under nonphysicalism.
The Future of Consciousness
Ultimately, I think that as the issue currently stands, we are at the mercy of our scientific tools. To my mind, the best way forward is comparative study of consciousness. However, I do not think our current scientific tools are fit for the task, e.g., to monitor the brain activity of Thomas’ flying squirrel as it calculates trajectories while navigating the lush forests in Indonesia and Malaysia. What is it like to be an octopus, to taste through one’s arms (see Lambert, Jonathan. “How octopuses ‘taste’ things by touching”. Science News. 29 Oct 2020. Web.)? One may think this is not possible for a human, but what if it is? What if neuroscientists had a way to map the sense of taste onto the skin of our arms? Would this not result in qualia that correspond to this strange new sense? I will let David Eagleman have the last word:
If it sounds crazy that you would ever be able to understand all these signals through your skin, remember that all the auditory system is doing is taking signals and turning them into electrical signals in your brain. It doesn’t matter how you get those data streams there. In the future, other data streams could be streamed into the vest, meaning that people could walk around unconsciously perceiving the weather report. Snakes see in the infrared range and honey bees see in the ultraviolet range. There’s no reason why we can’t start building devices to see that and feed it directly into our brains. (Erickson, Megan. “Welcome to Your Future Brain: Inside David Eagleman’s Neuro Lab”. Big Think. 17 May 2012. Web.)
By R.N. Carmona
Consider what follows as some scattered thoughts after reading an excellent paper by Marius Backmann. I think he succeeds in showing that the Neo-Aristotelian notion of powers is incongruous with pretty much any theory of time of note. My issue with powers is more basic: what in the world are Neo-Aristotelians even saying when they invoke this idea, and why does no one seem to have raised the concern that powers are an elementary paraphrase of dispositions? With respect to this concern, Neo-Aristotelians do not even attempt to make sense of our experience with matter and energy. They seem to go on the assumption that something just has to underlie the physical world, whereas I take it as extraneous to include metaphysical postulates where entirely physical ones make do. Dispositions are precisely the sort of physical postulates that adequately explain what we perceive as cause-effect relationships. What I will argue is that a more thorough analysis of dispositions is all that is needed to understand why a given a caused some given effect b.
My idea that powers are an elementary paraphrase is entailed in Alexander Bird’s analysis of what powers are, which Backmann summarizes:
According to Bird, powers, or potencies, as he calls them alternatively, are a subclass of dispositions. Bird holds that not all dispositions need to be powers, since there could be dispositions that are not characterised by an essence, apart from self-identity. Powers, on the other hand, Bird (2013) holds to be properties with a dispositional essence. On this view, a power is a property that furnishes its bearer with the same dispositional character in every metaphysically possible world where the property is instantiated. If the disposition to repel negatively charged objects if there are some in the vicinity is a power in that sense, then every object that has that property does the same in every metaphysically possible world, i.e. repel negatively charged objects if there are some in the vicinity. (Backmann, Marius (2019). No time for powers. Inquiry, 62:9-10, 979-1007. DOI: 10.1080/0020174X.2018.1470569)
Upon closer analysis of Bird’s definition, a power just is a disposition. The issue is that Bird and the Neo-Aristotelians who complain that he has not gone far enough have isolated what they take to be a power from the properties of an electron, which is a good example of a particle that repels negatively charged objects given that some are in its vicinity. Talk of possible worlds makes no sense unless one can prove mathematically that an electron-like particle with a different mass would also repel other negatively charged particles. However, though it can easily be shown that a slightly more massive electron-like particle would repel other particles of negative charge, its electrical charge would be slightly higher than an electron’s because, according to Robert Millikan’s calculation, there seems to be a relationship between the mass of a particle and its charge. The elementary charge is e ≈ 1.602 x 10^-19 coulombs. The charge of a quark is measured in multiples of e/3, implying a smaller charge, which is expected given that they are sub-particles. So what is of interest is why the configuration of even an elementary particle yields predictable “behaviors.”
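The charge-quantization arithmetic mentioned above is easy to make concrete. A minimal sketch, using the exact SI value of e and the standard quark charges expressed in units of e/3:

```python
from fractions import Fraction

# Elementary charge in coulombs (exact SI value since the 2019 redefinition).
E = 1.602176634e-19

# Quark charges in units of e; each is a multiple of e/3, as the passage notes.
QUARK_CHARGE = {
    "up": Fraction(2, 3), "down": Fraction(-1, 3),
    "charm": Fraction(2, 3), "strange": Fraction(-1, 3),
}

def charge_coulombs(*quarks: str) -> float:
    """Total charge of a combination of quarks, in coulombs."""
    total = sum(QUARK_CHARGE[q] for q in quarks)  # exact rational arithmetic
    return float(total) * E

# A proton (up, up, down) carries exactly +e; a neutron (up, down, down) is neutral.
print(charge_coulombs("up", "up", "down"))    # -> 1.602176634e-19
print(charge_coulombs("up", "down", "down"))  # -> 0.0
```

The predictability on display here is the point at issue: given a configuration of constituents, the resulting "behavior" (net charge, and hence repulsion or attraction) follows without remainder.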
To see this, let us dig into an example Backmann uses: “My power to bake a cake would not bring a cake that did not exist simpliciter before into existence, but only make a cake that eternally exists simpliciter present. Every activity reduces to a change in what is present” (Ibid.). The Neo-Aristotelian is off track in saying that we have a power to bake a cake, and that the oven has a power to yield this desired outcome, where these powers do not trace back to the oven’s parts, or, as Cartwright states of general nomological machines: “We explicate how the machine arrangement dictates what can happen – it has emergent powers which are not to be found in its components” (Cartwright, Nancy & Pemberton, John (2013). Aristotelian powers: without them, what would modern science do? In John Greco & Ruth Groff (eds.), Powers and Capacities in Philosophy: the New Aristotelianism. London, U.K.: Routledge. pp. 93-112.). Of the nomological machines in nature, Cartwright appears to bypass the role of evolution. Of such machines invented by humans, she ignores the fact that we often wrongly predict what a given invention will do. Evolution proceeds via probabilities and so, from our ad hoc point of view, it looks very much like trial and error. Humans have the advantage of being much more deliberate about what we are selecting for, and therefore our testing and re-testing of inventions, and deciding when they are safe and suitable to hit the market, is markedly similar to evolutionary selection.
That being said, the components of a machine do account for its function. It is only due to our understanding of other machines that we understand what should go into building a new one in order for it to accomplish a new task or tasks. Powers are not necessary, for otherwise we should be asking: why did we not start off with machines that have superior powers? In other words, why start with percolators if we could have skipped straight to Keurig or Nespresso machines, or whatever more advanced models might yet be invented? Talk of powers seems to insinuate that objects, whether complex or simple, are predetermined to behave the way they do, even in the absence of trial runs, modifications, or outright upgrades. This analysis also sets aside the cake. It does not matter what an oven or air fryer is supposed to do; if the ingredients are wrong, either because I neglected to use baking powder or did not use enough flour, the cake may not rise. The ingredients that go into baked goods play a “causal” role as well.
Dispositions, on the other hand, readily explain why one invention counts as an upgrade over a previous iteration. Take, for instance, Apple’s A14 Bionic chip. At bottom, this chip accounts for “a 5 nanometer manufacturing process” and CPU and GPU improvements over the iPhone 11 (Truly, Alan. “A14 Bionic: Apple’s iPhone 12 Chip Benefits & Improvements Explained”. Screenrant. 14 Oct 2020. Web.). Or, more accurately, key differences in the way this chip was made account for the improvement over its predecessors. Perhaps more crucial is that critics of dispositions have mostly tended to isolate dispositions, as though a glass cup’s fragility exists in a vacuum. Did the cup free-fall, accelerating at 9.8 m/s^2? Did it fall on a mattress or on a floor? What kind of floor? Or was the cup thrown at some velocity because Sharon was angry with her boyfriend Albert? What did she throw the cup at: a wall, the floor, Albert’s head, or did it land in a half-full hamper with Sharon and Albert’s dirty clothes?
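The questions just posed can be gathered into a deliberately toy model. All thresholds and surface values below are invented for illustration; the only point is that whether the disposition "fragile" manifests depends on mass, speed, surface, and wrapping, not on fragility considered in a vacuum:

```python
# Toy model: a glass cup's breaking depends on circumstances, not
# on an isolated disposition. All numbers here are invented.
G = 9.8  # m/s^2, free-fall acceleration

# Invented "hardness" factors for the surfaces mentioned in the text.
SURFACE_HARDNESS = {"hardwood": 1.0, "concrete": 1.2, "mattress": 0.1, "hamper": 0.05}

def impact_energy(mass_kg: float, drop_height_m: float = 0.0,
                  thrown_speed_ms: float = 0.0) -> float:
    """Kinetic energy at impact (J), from a drop, a throw, or both."""
    return 0.5 * mass_kg * thrown_speed_ms ** 2 + mass_kg * G * drop_height_m

def breaks(mass_kg: float, surface: str, wrapped: bool = False, **kwargs) -> bool:
    energy = impact_energy(mass_kg, **kwargs)
    threshold = 1.0  # J, invented breaking threshold for a glass cup
    if wrapped:
        # Packing material raises the energy needed; it does not remove fragility.
        threshold *= 5
    return energy * SURFACE_HARDNESS[surface] > threshold

# The same cup, different circumstances:
print(breaks(0.3, "hardwood", drop_height_m=1.0))                    # -> True
print(breaks(0.3, "hamper", thrown_speed_ms=8.0))                    # -> False
print(breaks(0.3, "concrete", wrapped=True, thrown_speed_ms=15.0))   # -> True
```

Note the last case: even wrapped, a mug hurled hard enough at something hard still breaks, which is exactly the observation made below about the packing-material example.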
Answering these questions solves the masking and mimicker problems. The masking problem can be framed as follows:
Another kind of counterexample to SCA, due to Johnston (1992) and Bird (1998), involves a fragile glass that is carefully protected by packing material. It is claimed that the glass is disposed to break when struck but, if struck, it wouldn’t break thanks to the work of the packing material. There is an important difference between this example and Martin’s: the packing material would prevent the breaking of the glass not by removing its disposition to break when struck but by blocking the process that would otherwise lead from striking to breaking. (Choi, Sungho and Michael Fara, “Dispositions”, The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.).)
I would not say that the packing material prevents the glass from breaking by blocking a process that would otherwise unfold if the glass were exposed. The packing material has its own properties and dispositions, which we have discovered through trial and error, making this material good at protecting glass. Packing paper was once more common, but now we have bubble wrap and heavy-duty degradable stretch wrap, also capable of protecting glass, china, porcelain, and other fragile items. The dispositions of these protective materials readily explain why wrapping fragile objects in them protects those objects from incidental strikes or drops. If I were, however, to throw a wrapped coffee mug as hard as I can toward a brick wall, the mug is likely to break. This entails that variables are important in this thing we call cause and effect.
A perfect example is the simple collisions you learn about in an elementary physics course. If a truck and haul speeding down a highway in one direction at ~145 km/h and a sedan traveling in the opposite direction at a cruising speed of ~89 km/h collide, we can readily predict the outcome, namely that this particular collision is inelastic. The speeding truck would likely barrel through the sedan, and the sedan would be pushed in the direction the truck was traveling. The vehicles’ respective speeds and masses are extremely important in understanding what goes on here. There is no sense in which we can say that trucks just have a power to mow things down, because a collision between the truck in our original example and a truck and haul driving at roughly the same speed in the opposite direction results in an entirely different outcome: a perfectly inelastic head-on collision in which both trucks come to an immediate halt after the effects of the impact are fully realized.
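Both collisions can be checked with conservation of momentum. A minimal sketch, assuming illustrative masses for the truck and the sedan (the speeds are the ones given above):

```python
# Momentum conservation for collisions where the bodies lock together:
# v_final = (m1*v1 + m2*v2) / (m1 + m2).
KMH = 1 / 3.6  # km/h -> m/s conversion factor

def final_velocity(m1: float, v1: float, m2: float, v2: float) -> float:
    """Shared final velocity (m/s) when two bodies stick after impact."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

truck, sedan = 20000.0, 1500.0  # kg, assumed masses for illustration

# Truck at ~145 km/h meets a sedan at ~89 km/h head-on: the wreck keeps
# moving in the truck's direction (positive sign), as the text predicts.
v = final_velocity(truck, 145 * KMH, sedan, -89 * KMH)
print(v > 0)  # -> True

# Two identical trucks at the same speed head-on: net momentum is zero,
# so the wreck comes to an immediate halt.
print(final_velocity(truck, 145 * KMH, truck, -145 * KMH))  # -> 0.0
```

The asymmetry between the two cases comes entirely from the masses and velocities, which is the point being made: no further "power" is needed over and above the quantities that feature in the calculation.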
Neo-Aristotelian analyses of powers give us nothing that is in keeping with physics. What these explanations demand is something they imagine happening behind the veil of what science has already explained. There are just dispositions, and what is needed is a more critical analysis of what is entailed across each instance of cause and effect. Power ontologies beg the question, in any case, because they require dispositions to make sense of powers. That is because powers are just a cursory analysis of cause-effect relationships, a way of paraphrasing that is overly simplistic and, ultimately, not analytical enough. Power ontologies, along with talk of dynamism, which properly belongs to Nietzsche and not Aristotle, severely undermine the Neo-Aristotelian project. Nietzsche’s diagnosis of causation makes this clear:
Cause and effect: such a duality probably never exists; in truth we are confronted by a continuum out of which we isolate a couple of pieces, just as we perceive motion only as isolated points and then infer it without ever actually seeing it. The suddenness with which many effects stand out misleads us; actually, it is sudden only for us. In this moment of suddenness there is an infinite number of processes that elude us. An intellect that could see cause and effect as a continuum and a flux and not, as we do, in terms of an arbitrary division and dismemberment, would repudiate the concept of cause and effect and deny all conditionality. (Nietzsche, Friedrich W., and Walter Kaufmann. The Gay Science: With a Prelude in Rhymes and an Appendix of Songs. New York: Vintage Books, 1974. 173. Print.)
Nietzsche describes a continuum and a flux – in other words, a dynamism thoroughly unlike anything that can be attributed to Aristotle’s theory of causation. So the fact that Neo-Aristotelians even speak of a dynamism feels like a sort of plagiarism, since they are associating the idea with a thinker who said nothing to that effect. Nietzsche is critical of Aristotle’s causal-teleological marriage and can be seen as explicitly accusing Aristotle, and also Hume, of arbitrarily splicing a dynamic continuum in an ad hoc manner that finds no justification in metaphysical ideas. Had Nietzsche been properly exposed to modern science, he would probably agree that this splicing finds no justification in physical ideas either. The hard sciences confirm a continuum, revealing complex processes from which predictable results follow. There is just no sense in which we can apply any theory of causation to a chemical reaction. What feature in these reactions are the properties and dispositions of the elements involved; how those elements are constituted explains why we get one reaction or another. Any talk of dynamisms is properly Nietzschean in spirit and, as should be clear from his words, there is no invocation of powers.
Suffice it to say that a deeper analysis of dispositions also explains away the mimicker problem. Styrofoam plates simply do not break the way glass plates do, and their underlying composition explains why that is. Ultimately, Neo-Aristotelians are not in a good position to get to the bottom of what we call cause and effect. Aside from the difficulties Backmann sheds light on, the notion of powers is incoherent and lacking in explanatory power, especially at levels requiring deeper analysis. Predictably, I can see Neo-Aristotelians invoking an infinite regress of sorts. In other words, is it simply the composition of the glass interacting with the composition of a hardwood floor that results in the glass shattering, or is there more to the story? To that I would respond that events like these happen within a causally closed space-time system. We will then be asked: who or what decided that a glass cup should break on impact when landing on a hardwood floor? Well, who or what decided that a compound fracture of the tibia is expected given a strong enough blow from an equally dense or denser object? The Neo-Aristotelian will keep passing the buck, demanding deeper levels of analysis, effectively moving the goalposts. What will remain is that there is no intelligence that decided on these things, i.e., there is no teleological explanation involved in these cases, because then they would have to account for undesired ends like broken bones.
In the end, I think that the deepest level of analysis will involve a stochastic process in which degrees of probability encompass possible outcomes. Not every blow leads to a broken tibia. Dropping a glass cup on just any surface is not enough to crack or shatter it. There are cases in which the angular momentum imparted by a human foot changes a falling glass cup’s trajectory just enough to ensure that it does not break upon hitting the ground; I have met people quite adept at breaking these kinds of falls with a simple extension of their foot. As such, probabilities will change given the circumstances on a case-by-case basis. This element of chance at the deepest level of analysis coheres perfectly with the universe we find ourselves in, because even the fact that we are beings made of matter, as opposed to beings made of antimatter, is due to chance. Apparently, God has always rolled dice. On this, I will let Lawrence Krauss have the last word:
Because antiparticles otherwise have the same properties as particles, a world made of antimatter would behave the same way as a world of matter, with antilovers sitting in anticars making love under an anti-Moon. It is merely an accident of our circumstances, due, we think, to rather more profound factors…that we live in a universe that is made up of matter and not antimatter or one with equal amounts of both. (Krauss, Lawrence. A Universe From Nothing: Why There Is Something Rather Than Nothing. 1st ed. New York, NY: Free Press, 2012. 61. Print.)
By R.N. Carmona
I have submitted a paper to Philosophical Studies addressing Dustin Crummett and Philip Swenson’s paper. Admittedly, this is my first attempt at publishing in a philosophy journal. I took a swing with no guidance, no co-author, and no funding. There is, of course, a chance it gets rejected, but I am hoping for the best. In any case, I think my paper provides heuristics for anyone looking to refute Evolutionary Moral Debunking Arguments like Crummett and Swenson’s. Let us turn to how I dissect their argument.
They claim that their Evolutionary Moral Debunking Argument Against Naturalism (EMDAAN) stems from Street’s and Korman and Locke’s EMDAs. The latter EMDAs target moral realism while Crummett and Swenson’s targets naturalism. The issue with theirs is that they grossly overlook the fact that neither Street nor Korman and Locke argue that naturalism is threatened by EMDAs. Street argues that her practical standpoint characterization of constructivism sidesteps any issues her EMDA might have presented for her naturalism. Korman and Locke target the minimalist response and, in a separate paper not cited by Crummett, relativism. They do not target naturalism either.
At first glance, I compared Crummett and Swenson’s argument to Lewis’ long-defeated Argument Against Atheism. They state: “The problem for the naturalist here is that, if naturalism is true, it seems that the faculties responsible for our intuitions were formed through purely natural processes that didn’t aim at producing true beliefs” (Crummett & Swenson, 37). One can easily see how they paraphrase Lewis who says:
Supposing there was no intelligence behind the universe, no creative mind. In that case, nobody designed my brain for the purpose of thinking. It is merely that when the atoms inside my skull happen, for physical or chemical reasons, to arrange themselves in a certain way, this gives me, as a by-product, the sensation I call thought. But, if so, how can I trust my own thinking to be true? It’s like upsetting a milk jug and hoping that the way it splashes itself will give you a map of London. But if I can’t trust my own thinking, of course I can’t trust the arguments leading to Atheism, and therefore have no reason to be an Atheist, or anything else. Unless I believe in God, I cannot believe in thought: so I can never use thought to disbelieve in God. (Marsden, George M. C.S. Lewis’s Mere Christianity: A Biography. Princeton University Press, 2016. 89. Print.)
This is a known predecessor of Plantinga’s Evolutionary Argument Against Naturalism (EAAN). Therefore, the first angle I take in the paper is to show how Crummett and Swenson did not understand Street’s paper. Perhaps it is the sheer length of her excellent paper (over 50 pages) or perhaps they were so intent on addressing New Atheists that they overlooked her more robust approach to showing how anti-realism fares against EMDAs. I think her paper makes a lot more sense when read in conjunction with her overview of constructivism (see here). Bearing that in mind, I attempt to divorce Crummett and Swenson’s EMDAAN from Street’s EMDA against moral realism. Korman and Locke’s project is markedly different, but their work does not help Crummett and Swenson’s argument either.
With the EAAN now in focus, I show how Crummett and Swenson’s EMDAAN just is an iteration of the EAAN. The EAAN applies to general truths. Put simply, Plantinga argues that if we take seriously the low probability of our cognitive faculties being reliable given that evolution and naturalism are true, since those faculties formed from accidental evolutionary pressures, then we have a defeater for all of our beliefs, most notably among them, naturalism. Crummett and Swenson make the exact same argument, the difference being that they apply it to specific beliefs, namely moral beliefs. Given that moral beliefs are a sub-category within the domain of all beliefs, their EMDAAN is an iteration of the EAAN. Here is an example I did not pursue in my paper; call it the Evolutionary Scientific Debunking Argument.
RC1 P(Sm/E&S) is low (The probability that our faculties generate basic scientific beliefs, given that evolution and science are true, is low.)
RC2 If one accepts that P(Sm/E&S) is low, then one possesses a defeater for the notion that our faculties generate basic scientific beliefs.
RCC Therefore, one possesses a defeater for one’s belief in science.
Perhaps I would be called upon to specify a philosophical view of science, be it realism or something else, but the basic gist is the same as Crummett and Swenson’s EMDA. I am, like them, targeting a specific area of our beliefs, namely our beliefs resulting from science. My argument is still in the vein of Plantinga’s EAAN and is a mere subsidiary of it.
After I establish the genealogy of Crummett and Swenson’s argument, I turn the EAAN on its head and offer an Evolutionary Argument Against Theism. If Plantinga’s argument holds sway and the Theist believes that evolution is true, he is in no better epistemic shape than the naturalist. Therefore, Plantinga’s conditionalization problem, which offers that P(R/N&E) is high iff there exists a belief B that conditionalizes on N&E, is an issue for Theists as well. In other words, perhaps the probability that our cognitive faculties are reliable, given that evolution and naturalism are true, increases iff there is an added clause in the conjunction. Put another way, the probability that our cognitive faculties are reliable, granting evolution and naturalism and a successful philosophy of mind, is high. This successful philosophy of mind will have to show precisely how a brain that resulted from naturalistic evolutionary processes can generate the sort of consciousness capable of acquiring true beliefs. The Theist who says P(R/T&E) is high is begging the question because merely asserting that “God ensured that there would be some degree of alignment between our intuitions and moral truth” (Crummett & Swenson, 44) does not help the Theist avoid the conditionalization problem.
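The conditionalization problem can be put schematically. The notation below is my own gloss on the probabilities discussed above, with B standing for whatever added clause (e.g., a successful philosophy of mind) conditionalizes on the conjunction:

```latex
% Plantinga's claim: the reliability of our faculties (R), given
% naturalism (N) and evolution (E), is low
P(R/N \& E) \text{ is low}

% The conditionalization move: the probability is high only if some
% further belief B is added to the conjunction
P(R/N \& E \& B) \text{ is high, for some suitable } B

% The parallel problem for the Theist (T): merely asserting divine
% guidance does not supply an analogous clause B'
P(R/T \& E \& B') \text{ is high only given some successful } B'
```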
With that established, and I cannot give too much away here because this is the novelty in my paper, I argue that the only recourse the Theist has, especially given that they have no intention of disavowing Theism, is to abandon their belief in evolution. They would have to opt, instead, for a belief in creationism or a close variant like intelligent design. In either case, they would then be left asserting a Creationary Moral Confirming Argument in Favor of Theism. I explore the litany of issues that arises if the Theist abandons evolution and claims that God’s act of creating us makes moral realism the case. Again, the Theist ends up between a rock and a hard place. Theism simply has far less explanatory power because, unlike naturalism, it does not account for our propensity to make evaluative errors and our inclination toward moral deviancy. If God did, in fact, ensure that our moral intuitions align with transcendent moral truths, why do we commit errors when making moral decisions and why do we behave immorally? Naturalism can explain both of these problems, especially given the role of reason under the moral anti-realist paradigm. Evaluative errors are, therefore, necessary to improve our evaluative judgments; reason is the engine by which we identify these errors and improve our moral outlook. The Theist would be back at square one, perhaps deploying the patently mythical idea of a Fall to account for the fact that humans are far from embodying the moral perfection God is said to have.
With Crummett and Swenson’s argument now thoroughly in Plantinga’s territory, I explore whether the anti-realist can solve the conditionalization problem. I suggest that evolution accounts for moral rudiments and then introduce the notion that cultural evolution accounts for reliable moral beliefs. Cooperation and altruism feature heavily in why I draw this conclusion. So P(Rm/E&MAR) (the probability that our faculties generate evaluative truths, if evolution and moral anti-realism are true) is high given that cooperation and/or altruism conditionalize on our belief that evolution and moral anti-realism are the case. We are left with P[(Rm/E&MAR) & (C v A)] or P[(Rm/E&MAR) & (C&A)]. In other words, if evolution and moral anti-realism are true, and cooperation and/or altruism conditionalize on our beliefs that evolution and moral anti-realism are the case, the probability that our faculties generate evaluative truths, i.e., reliable moral beliefs, is high.
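One way to render the bracketed expressions above in ordinary conditional-probability notation (my reading of the intended claim, not a formula from the paper) is:

```latex
% Rm: our faculties generate reliable moral beliefs
% E: evolution; MAR: moral anti-realism
% C: cooperation; A: altruism
P(R_m/E \& MAR \& (C \lor A)) \text{ is high}
% or, on the stronger conjunctive reading:
P(R_m/E \& MAR \& (C \& A)) \text{ is high}
```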
Ultimately, like Moon, I think my paper will provide fertile ground for further discussion on the conditionalization problem. The jury is still out on whether the naturalist’s belief that evolution and naturalism are true even requires a clause to conditionalize on that belief. In any case, much can be learned about EMDAs against naturalism from the vast literature discussing Plantinga’s EAAN. I think that my arguments go a long way in dispensing with EMDAs in the philosophy of religion that target naturalism. When one considers that the Theist cannot account for moral truths without unsubstantiated assertions about God, it is easy to see how they are on less secure ground than the naturalist. If the Theist is a Christian or a Muslim, then they ought to be reminded that their scriptures communicate things about their gods that are not befitting of moral perfection. If the choice is between naturalism and the belief that a god who made parents eat their children is, despite all evidence to the contrary, morally perfect, I will take my chances with naturalism!
By R.N. Carmona
Before starting my discussion of the first chapter of Neo-Aristotelian Perspectives On Contemporary Science, some prefatory remarks are in order. In the past, I might have committed to reading an entire book for purposes of writing a chapter-by-chapter review. With other projects in my periphery, I cannot commit to writing an exhaustive review of this book; that remains undecided for now. What I will say is that a sample size might be enough to confirm my suspicions that the Neo-Aristotelian system is rife with problems or, even worse, is a failed system of metaphysics. I am skeptical of the system because it appears to have been recruited to bolster patently religious arguments, in particular those of modern Thomists looking to usher in yet another age of apologetics disguised as philosophy. I maintain that apologetics still needs to be thoroughly demarcated from philosophy of religion; moreover, philosophy of religion should be more than one iteration after another of predominantly Christian literature. With respect to apologetics, I am in agreement with Kai Nielsen, who stated:
It is a waste of time to rehearse arguments about the proofs or evidences for God or immortality. There are no grounds — or at least no such grounds — for belief in God or belief that God exists and/or that we are immortal. Hume and Kant (perhaps with a little rational reconstruction from philosophers like J.L. Mackie and Wallace Matson) pretty much settled that. Such matters have been thoroughly thrashed out and there is no point of raking over the dead coals. Philosophers who return to them are being thoroughly retrograde. (Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 399-400. Print.)
The issue is that sometimes one’s hand is forced because the number of people qualified to rake dead coals is far smaller than the number of people rehashing these arguments. Furthermore, the history of Christianity, aside from exposing a violent tendency to impose the Gospel by force, also exposes a tendency to prey on individuals who are not equipped to address philosophical and theological arguments. Recently, this was made egregiously obvious by Catholic writer Pat Flynn:
So what we as religious advocates must be ready for is to offer the rational, logical basis—the metaphysical realism, and the reality of God—that so many of these frustrated, young people are searching for who are patently fed up with the absurd direction the secular world seems to be going. They’re looking for solid ground. And we’ve got it. (Flynn, Pat. “A Hole in The Intellectual Dark Web”. World On Fire Blog. 26 Jun 2019. Web.)
Unfortunately, against all sound advice and blood pressure readings, people like myself must rake dead coals or risk allowing Christians to masquerade as the apex predators in this intellectual jungle. I therefore have to say to the Pat Flynns of the world: no, you don’t got it. More importantly, let young people lead their lives free of the draconian prohibitions so often imposed on people by religions like yours. If you care to offer the rational, logical basis for your beliefs, then perhaps you should not be approaching young people who likely have not had adequate exposure to the scholarship necessary to understand apologetics. This is not to speak highly of the apologist, who typically distorts facts and evidence to fit his predilections; identifying his distortions and omissions of evidence, and thus refuting his arguments, requires sufficient knowledge of various fields of inquiry. If rational, logical discourse were his aim, then he would approach people capable of handling his arguments and contentions. It thus becomes abundantly clear that the aim is to target people who are more susceptible to his schemes by virtue of lacking exposure to the pertinent scholarship and who may already be gullible due to an existing sympathy for religious belief, like Flynn himself, a self-proclaimed re-converted Catholic.
Lanao and Teh’s Anti-Fundamentalist Argument and Problems Within The Neo-Aristotelian System
With these prefatory remarks out of the way, I can now turn to Xavi Lanao and Nicholas J. Teh’s “Dodging The Fundamentalist Threat.” Though I can admire how divorced Lanao and Teh’s argument is from whatever theological views they might subscribe to, it should be obvious to anyone, especially the Christian Thomist, that their argument is at variance with Theism. Lanao and Teh write: “The success of science (especially fundamental physics) at providing a unifying explanation for phenomena in disparate domains is good evidence for fundamentalism” (16). They then add: “The goal of this essay is to recommend a particular set of resources to Neo-Aristotelians for resisting Fundamentalist Unification and thus for resisting fundamentalism” (Ibid.). In defining Christian Theism, Timothy Chappell, citing Paul Veyne, offers the following:
“The originality of Christianity lies… in the gigantic nature of its god, the creator of both heaven and earth: it is a gigantism that is alien to the pagan gods and is inherited from the god of the Bible. This biblical god was so huge that, despite his anthropomorphism (humankind was created in his image), it was possible for him to become a metaphysical god: even while retaining his human, passionate and protective character, the gigantic scale of the Judaic god allowed him eventually to take on the role of the founder and creator of the cosmic order.” (Chappell, Timothy. “Theism, History and Experience”. Philosophy Now. 2013. Web.)
Thomists appear more interested in proving that Neo-Aristotelianism is a sound approach to metaphysics and the philosophy of science than in ensuring that the system is not at odds with Theism. The notion that God is the founder and creator of the cosmic order is uncontroversial among Christians and Theists more generally. Inherent in this notion is that God maintains the cosmic order and created a universe that bears his fingerprints. As such, physical laws are capable of unification because the universe exhibits God’s perfection; the universe is, therefore, at least at its start, perfectly symmetric, already containing within it intelligible forces, including finely tuned parameters that result in human beings, creatures made in God’s image. In the main, then, Christians who accept Lanao and Teh’s anti-fundamentalism have, inadvertently or deliberately, done away with a standard Theistic view.
So already one finds that Neo-Aristotelianism, at least from the perspective of the Theist, is not systematic, in that the would-be system is internally inconsistent. Specifically, when a system imposes cognitive dissonance of this sort, it is usually a good indication that some assumption within the system needs to be radically amended or entirely abandoned. In any case, there are of course specifics that need to be addressed because I am not entirely sure Lanao and Teh fully understand Nancy Cartwright’s argument. I think Cartwright is saying quite a bit more and that her reasoning is mostly correct, even if her conclusion is off the mark.
While I strongly disagree with the Theistic belief that God essentially created a perfect universe, I do maintain that Big Bang cosmology imposes on us the early symmetry of the universe via the unification of the four fundamental forces. Cartwright is therefore correct in her observation that science gives us a dappled portrait, a patchwork stemming from domains operating very much independently of one another; as Lanao and Teh observe: “point particle mechanics and fluid dynamics are physical theories that apply to relatively disjoint sets of classical phenomena” (18). The problem is that I do not think Lanao and Teh understand why this is the case, or at least, they do not make clear that they know why we are left with this dappled picture. I will therefore attempt to argue in favor of Fundamentalism without begging the question, although I am committed to a position that, I think, more accurately describes Cartwright’s own: Non-Fundamentalism. It may be that the gradual freezing of the universe, over the course of about 14 billion years, leaves us entirely incapable of reconstructing its early symmetry. I will elaborate on this later, but this makes for a different claim altogether, one that I take Cartwright to be making: Fundamentalists are not necessarily wrong to think that fundamental unification (FU) is possible, but given the state of our present universe, it cannot be obtained. Cartwright provides us with a roadmap of what it would take to arrive at FU, thereby satisfying Fundamentalism, but the blanks need to be filled in so that we can get from the shattered glass that is our current universe to the perfectly symmetric mirror it once was.
Lanao and Teh claim that Fundamentalism usually results from the following reasoning:
We also have good reason to believe that everything in the physical world is made up of these same basic kinds of particles. So, from the fact that everything is made up of the same basic particles and that we have reliable knowledge of the behavior of these particles under some experimental conditions, it is plausible to infer that the mathematical laws governing these basic kinds of particles within the restricted experimental settings also govern the particles everywhere else, thereby governing everything everywhere. (Ibid.)
They go on to explain that Sklar holds that biology and chemistry do not characterize things as they really are. This is what they mean when they say Fundamentalists typically beg the question: they take Fundamentalism as a given. However, given Lanao and Teh’s construction of Cartwright’s argument, they can also be accused of fallacious reasoning, namely arguing from ignorance. They formulate Cartwright’s Anti-Fundamentalist Argument as follows:
(F1) Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.
(AF2) Classical mechanics has a limited set of principled models, so it only applies to a limited number of sub-domains.
(AF3) The limited sub-domains of AF2 do not exhaust the entire classical domain.
(AF4) From (F1), (AF2), and (AF3), the domain of classical mechanics is not universal, but dappled. (25-26)
On AF2, how can we expect classical mechanics to acquire more principled models than it presently has? How do we know that, given enough time, scientists working on classical mechanics will not come up with a sufficient number of principled models to satisfy even the anti-fundamentalist? That results in quite the conundrum for the anti-fundamentalist. Can the anti-fundamentalist tell the fundamentalist how many principled models would exhaust an entire domain? This is to ask whether anyone can know how many principled models are necessary to contradict AF3. On any reasonable account, science has not had sufficient time to come up with enough principled models in all of its domains and thus, this argument cannot be used to bolster the case for anti-fundamentalism.
While Lanao and Teh are dismissive of Cartwright’s particularism, it is what motivates the correct degree of tentativeness she exhibits. Lanao and Teh, eager to disprove fundamentalism, are not as tentative; but given the very limited amount of time scientists have had to build principled models, we cannot expect them to have come up with enough models to exhaust the classical or any other scientific domain. Cartwright’s tentativeness is best exemplified in the following:
And what kinds of interpretative models do we have? In answering this, I urge, we must adopt the scientific attitude: we must look to see what kinds of models our theories have and how they function, particularly how they function when our theories are most successful and we have most reason to believe in them. In this book I look at a number of cases which are exemplary of what I see when I study this question. It is primarily on the basis of studies like these that I conclude that even our best theories are severely limited in their scope. (Cartwright, Nancy. The Dappled World: A Study of The Boundaries of Science. Cambridge: Cambridge University Press, 1999. 9. Print.)
The fact that our best theories are limited in their scope reduces to the fact that our fragmented, present universe is too complex to generalize via one law per domain or one law that encompasses all domains. For purposes of adequately capturing what I am attempting to say, it is worth revisiting what Cartwright says about a $1,000 bill falling in St. Stephen’s Square:
Mechanics provides no model for this situation. We have only a partial model, which describes the 1000 dollar bill as an unsupported object in the vicinity of the earth, and thereby introduces the force exerted on it due to gravity. Is that the total force? The fundamentalist will say no: there is in principle (in God’s completed theory?) a model in mechanics for the action of the wind, albeit probably a very complicated one that we may never succeed in constructing. This belief is essential for the fundamentalist. If there is no model for the 1000 dollar bill in mechanics, then what happens to the note is not determined by its laws. Some falling objects, indeed a very great number, will be outside the domain of mechanics, or only partially affected by it. But what justifies this fundamentalist belief? The successes of mechanics in situations that it can model accurately do not support it, no matter how precise or surprising they are. They show only that the theory is true in its domain, not that its domain is universal. The alternative to fundamentalism that I want to propose supposes just that: mechanics is true, literally true we may grant, for all those motions whose causes can be adequately represented by the familiar models that get assigned force functions in mechanics. For these motions, mechanics is a powerful and precise tool for prediction. But for other motions, it is a tool of limited serviceability. (Cartwright, Nancy. “Fundamentalism vs. the Patchwork of Laws.” Proceedings of the Aristotelian Society, vol. 94, 1994, pp. 279-292. JSTOR, http://www.jstor.org/stable/4545199.)
Notice how even Cartwright alludes to the Theistic notion of FU being attributable to a supremely intelligent creator whom people call God. In any case, what she is saying here does not entail that only the opposite of Fundamentalism can be the case. Even philosophers slip into thinking in binaries, but we are not limited to Fundamentalism or Anti-Fundamentalism; Lanao and Teh admit as much. There can be a number of Non-Fundamentalist positions that prove more convincing. In the early universe, the medium of water, and therefore motions in water, was not available. Because of this, there was no real way to derive physical laws within that medium. Moreover, complex organisms like jellyfish did not exist then either, and so the dynamics of their movements were not known and could not feature in any data concerning organisms moving about in water. This is where I think Cartwright, and Lanao and Teh taking her lead, go astray.
Cartwright, for example, strangely calls for a scientific law of wind. She states: “When we have a good-fitting molecular model for the wind, and we have in our theory (either by composition from old principles or by the admission of new principles) systematic rules that assign force functions to the models, and the force functions assigned predict exactly the right motions, then we will have good scientific reason to maintain that the wind operates via a force” (Ibid.). Wind, unlike inertia or gravity, is an inter-body phenomenon: it arises because heat from the Sun is distributed unevenly across the Earth’s surface. Warmer air from the equator rises and moves toward the poles while cooler air tends toward the equator. Wind moves from areas of high pressure to areas of low pressure, and the boundary between these areas is called a front. This is why we cannot have a law of wind: aside from the complex systems on Earth, such a law would have to apply to the alien systems on gas giants like Jupiter and Saturn. This point is best exemplified by the fact that scientists cannot even begin to comprehend why Neptune’s Dark Spot did a complete about-face. A law of wind would have to apply universally, not just on Earth, and would thus have to explain the behavior of wind on other planets. That is an impossible ask because the composition of other planets and their stars would make for different conditions that are best analyzed in complex models, accounting for as much data as possible, rather than a law attempting to generalize what wind should do assuming simple conditions.
Despite Cartwright’s lofty demand, her actual argument does not preclude Fundamentalism, contrary to what Lanao and Teh might have thought. Cartwright introduces a view that I think is in keeping with the present universe: “Metaphysical nomological pluralism is the doctrine that nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws” (Ibid.). I think it is entirely possible to get from metaphysical nomological pluralism (MNP) to FU if one fills in the blanks by way of symmetry breaking. Prior to seeing how symmetry breaking bridges the gap between MNP and FU, it is necessary to outline an argument from Cartwright’s MNP to FU:
F1 Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.
MNP1 Nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws.
MNP2 It is possible that the initial properties in the universe allow these laws to be true together.
MNP3 From F1, MNP1, and MNP2, the emergence of different systems of laws from the initial properties in the universe implies that FU is probable.
Lanao and Teh agree that F1 is a shared premise between Fundamentalists and Anti-Fundamentalists. As a Non-Fundamentalist, I see it as straightforwardly obvious as well. With respect to our present laws, I think that FU may be out of our reach. As has been famously repeated, humans did not evolve to do quantum mechanics, let alone piece together a shattered mirror. This is why I am a Non-Fundamentalist as opposed to an Anti-Fundamentalist; the subtle distinction is that I am neither opposed to FU being the case nor do I think it is false, but rather that it is extremely difficult to come by. Michio Kaku describes the universe as follows: “Think of the way a beautiful mirror shatters into a thousand pieces. The original mirror possessed great symmetry. You can rotate a mirror at any angle and it still reflects light in the same way. But after it is shattered, the original symmetry is broken. Determining precisely how the symmetry is broken determines how the mirror shatters” (Kaku, Michio. Parallel Worlds: A Journey Through Creation, Higher Dimensions, and The Future of The Cosmos. New York: Doubleday, 2005. 97. Print.).
If Kaku’s thinking is correct, then there is no way to postulate that God had St. Peter arrange the initial properties of the universe so that all of God’s desired laws are true simultaneously without realizing that FU is not only probable but true, however unobtainable it may be. The shards would have to pertain to the mirror. Kaku explains that Grand Unified Theory (GUT) symmetry breaks down to SU(3) x SU(2) x U(1), which yields the 19 free parameters required to describe our present universe. There are other ways for the mirror to have broken, other ways to break GUT symmetry. This implies that other universes would have residual symmetry different from that of our universe and, therefore, entirely different systems of laws. These universes, at minimum, would have different values for these free parameters, like a weaker nuclear force that would prevent star formation and make the emergence of life impossible. In other scenarios, the symmetry group can yield an entirely different Standard Model in which protons quickly decay into anti-electrons, which would also prevent life as we know it (Ibid., 100).
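Kaku’s example of symmetry breaking can be written compactly. Here SU(5) serves as the classic candidate GUT parent group purely for illustration; other parent groups, such as SO(10), break differently and would yield different residual symmetries:

```latex
% One possible breaking: a GUT group down to the Standard Model gauge group
SU(5) \longrightarrow SU(3) \times SU(2) \times U(1)
% The residual symmetry on the right fixes the structure of our present
% physics, with 19 free parameters left to be measured rather than derived.
```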
Modern scientists are then tasked with working backwards. The alternative is to undertake the gargantuan task, as Cartwright puts it, of deriving the initial properties, which would no doubt be tantamount to a Theory of Everything from which all of the systems of laws extend, i.e., hypothesizing that initial conditions q, r, and s yield the different systems of laws we know. This honors the concretism Lanao and Teh call for in scientific models while also giving abstractionism its due. As Paul Davies offered, the laws of physics may be frozen accidents. In other words, the effective laws of physics, which is to say the laws of physics we observe, might differ from the fundamental laws of physics, which would be, so to speak, the original state of the laws of physics. In a chaotic early universe, physical constants may not have existed. Hawking also spoke of physical laws that tell us how the universe will evolve if we know its state at some point in time. He added that God could have chosen an “initial configuration” or fundamental laws for reasons we cannot comprehend. He asks, however, “if he had started it off in such an incomprehensible way, why did he choose to let it evolve according to laws that we could understand?” (Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988. 127. Print.) He then goes on to discuss possible reasons for this, e.g., chaotic boundary conditions and anthropic principles.
Implicit in Hawking’s reasoning is that we can figure out what physical laws resulted in our universe in its present state. The obvious drawback is that the observable universe is ~13.8 billion years old and 93 billion lightyears in diameter. The universe may be much larger, making the task of deriving this initial configuration monumentally difficult. This would require a greater deal of abstraction than Lanao and Teh, and apparently Neo-Aristotelians, desire, but it is the only way to discover how past iterations of physical laws or earlier systems of laws led to our present laws of physics. The issue with modern science is that it does not often concern itself with states in the distant past, and so a lot of equations and models deal in the present, and even the future, but not enough of them confront the past. Cosmologists, for purposes of understanding star formation, the formation of solar systems, and the formation of large galaxies, have to use computer models to test their theories against the past, since there is no way to observe the distant past directly. In this way, I think technology will prove useful in arriving at earlier conditions until we arrive at the mirror before it shattered. The following model, detailing how an early collision explains the shape of our galaxy, is a fine example of what computer models can do to help illuminate the distant past:
Further Issues With The Neo-Aristotelian System
A recent rebuttal to Alexander Pruss’ Grim Reaper Paradox can be generalized to refute Aristotelianism overall. The blogger over at Boxing Pythagoras states:
Though Alexander Pruss discusses this Grim Reaper Paradox in a few of his other blog posts, I have not seen him discuss any other assumptions which might underly the problem. He seems to have focused upon these as being the prime constituents. However, it occurs to me that the problem includes another assumption, which is a bit more subtle. The Grim Reaper Paradox, as formulated, seems to presume the Tensed Theory of Time. I have discussed, elsewhere, the reasons that I believe the Tensed Theory of Time does not hold, so I’ll simply focus here on how Tenseless Time resolves the Grim Reaper Paradox.
To see the difference between old and new tenseless theories, it is necessary to first contrast an old tenseless theory against a tensed theory which holds that the properties of pastness, presentness, and futurity of events are ascribed by tensed sentences. The debate regarding which theory is true centered around whether tensed sentences could be translated by tenseless sentences that instead ascribe relations of earlier than, later than, or simultaneous with. For example, “the sun will soon rise” seems to entail the sun’s rising in the future, as an event that will become present, whereas “the sun is rising now” seems to entail the event being present, and “the sun has risen” the event having receded into the past. If these sentences are true, the first sentence ascribes futurity whilst the second ascribes presentness and the last ascribes pastness. Even if true, however, that is not evidence to suggest that events have such properties. Tensed sentences may have tenseless counterparts having the same meaning.
This is where Quine’s notion of de-tensing natural language comes in. Rather than saying “the sun is rising” as uttered on some date, we would instead say that “the sun is rising” on that date. The present tense in the first sentence does not ascribe presentness to the sun’s rising, but instead refers to the date the sentence is spoken. In like manner, if “the sun has risen” as uttered on some date is translated into “the sun has risen” on a given date, then the former sentence does not ascribe pastness to the sun’s rising but only refers to the sun’s rising as having occurred earlier than the date when the sentence is spoken. If these translations are true, temporal becoming is unreal and reality is composed of the relations of earlier than, later than, and simultaneous with. Time then consists of these relations rather than the properties of pastness, presentness, and futurity (Oaklander, Nathan. Adrian Bardon ed. “A-, B- and R-Theories of Time: A Debate”. The Future of the Philosophy of Time. New York: Routledge, 2012. 23. Print.).
The writer at Boxing Pythagoras continues:
On Tensed Time, the future is not yet actual, and actions in the present are what give shape and form to the reality of the future. As such, the actions of each individual future Grim Reaper, in our paradox, can be contingent upon the actions of the Reapers which precede them. However, this is not the case on Tenseless Time. If we look at the problem from the notion of Tenseless Time, then it is not possible that a future Reaper’s action is only potential and contingent upon Fred’s state at the moment of activation. Whatever action is performed by any individual Reaper is already actual and cannot be altered by the previous moments of time. At 8:00 am, before any Reapers activate, Fred’s state at any given time between 8:00 am and 9:00 am is set. It is not dependent upon some potential, but not yet actual, future action as no such thing can exist.
I think this rebuttal threatens the entire Aristotelian enterprise. Aristotelians will have to deny time while maintaining that changes happen in order to escape the fact that de-tensed theories of time, which are more than likely the correct way of thinking about time, impose a principle: any change at a later point in time is not dependent on a previous state. That is ignoring that God, being timeless, could not have created the universe at some time prior to T = 0, the first instance of time on the universal clock. This is to say nothing of backward causation, which is entirely plausible given quantum mechanics. Causation calls for a deeper analysis, which neo-Humeans pursue despite not being entirely correct. The notion of dispositions is crucial. It is overly simplistic to say that the hot oil caused the burns on my hand or that the knife caused the cut on my hand. The deeper analysis in each case is that the boiling point of cooking oil, almost two times that of water, has something to do with why the burn feels distinct from a knife cutting into my hand. Likewise, the dispositions of the blade have a different effect on the skin than oil does. Causal relationships, so construed, are simplistic and, as Nietzsche suggested, do not account for the continuum within the universe and the flux that permeates it. Especially in light of quantum mechanics, we are admittedly ignorant about most of the intricacies within so-called causal relationships. Neo-Humeans are right to think that dispositions are important. This will disabuse us of appealing to teleology in the following manner:
‘The function of X is Z’ [e.g., the function of oxygen in the blood is… the function of the human heart is… etc.] means
(a) X is there because it does Z,
(b) Z is a consequence (or result) of X’s being there. (Wright, Larry. “Function”. Philosophical Review 82(2) (April 1973): 139–68, see 161.)
It is more accurate to say that a disposition of X is instantiated in Z rather than that X exists for purposes of Z because, in real-world examples, a given X can give rise to A, B, C, and so on. This is to say that one so-called cause can have different effects. A knife can slice, puncture, saw, etc. Hot oil can burn human skin, melt ice but not mix with it, combust when near other mediums or when left to increase to temperatures beyond its boiling point, etc. One would have to ask why cooking oil does not combust when a cube of ice is thrown into the pan; what about canola oil, to take a more specific example, causes it to auto-ignite at 435 degrees Fahrenheit, and why does nothing similar happen when water is heated beyond its boiling point?
As it turns out then, Neo-Aristotelians are not as committed to concretism as Lanao and Teh would hope. They are striving for generalizations despite refusing to investigate the details of how models are employed in normal science, as was made obvious by Lanao and Teh’s dismissal of Cartwright’s particularism and further, in their argument against Fundamentalism, which does not flow neatly from Cartwright’s argument. For science to arrive at anything concrete, abstraction needs to be allowed, specifically in cases venturing further and further into the past. Furthermore, a more detailed analysis of changes needs to be incorporated into our data. Briefly, when thinking of the $1,000 bill descending into St. Stephen’s Square, we must ask whether there is precipitation and, if so, how much; whether bird droppings may have altered the bill’s trajectory on the way down; what effect smog or dust particles have on its trajectory; and, as Cartwright asked, what about wind gusts? What is concrete is consistent with the logical atomist’s view that propositions speak precisely to simple particulars or to many of them bearing some relation to one another.
Ultimately, I think that Lanao and Teh fail to establish a Neo-Aristotelian approach to principled scientific models. They also fail to show that FU, and therefore Fundamentalism, is false. What is also clear is that they did not adequately engage Cartwright’s argument, which is thoroughly Non-Fundamentalist, even if that conclusion escaped her. This is why I hold that Cartwright’s conclusions are off the mark: she is demanding that generalized laws be derived from extremely complex conditions. It is not incumbent on dappled laws within a given domain of science to be unified in order for FU to ultimately be the case. It could be that, due to symmetry breaking, one domain appears distinct from another and, because of our failure, at least until now, to realize how the two cohere, unifying principles between the two domains currently elude us. Lanao and Teh’s argument against FU therefore appeals to the ignorance of science, not unlike apologetic arguments of much lesser quality. The ignorance of today’s science does not suggest that current problems will continue to confront us while their solutions perpetually elude us. What is needed is time. Like Lanao and Teh, I agree that Cartwright has a lot of great ideas concerning principled scientific models, but I hold that her ideas lend support to FU. A unified metaphysical account of reality would likely end up in a more dappled state than modern science finds itself in and, despite Lanao and Teh’s attempts, a hypothetical account of that sort would rely too heavily on science to be considered purely metaphysical. My hope is that my argument, one that employs symmetry breaking to bolster the probability of FU being the case, is more provocative, if not persuasive.
By R.N. Carmona
What follows is Alexander Pruss’ Argument For An Omniscient Being. While he does not exactly give his argument a ringing endorsement, admitting that he is skeptical of the first two premises, there are other problems that elude him and any theist who believes that omniscience is possible. Pruss formulates the argument as follows:
1. The analytic/synthetic distinction between truths is the same as the a priori / a posteriori distinction.
2. The analytic/synthetic distinction between truths makes sense.
3. If 1 and 2, then every truth is knowable.
4. So, every truth is knowable. (1–3)
5. If every truth is knowable, then every truth is known.
6. So, every truth is known. (4–5)
7. If every truth is known, there is an omniscient being.
8. So, there is an omniscient being. (6–7) (Pruss, Alexander. “An odd argument for an omniscient being”. Alexander Pruss Blog. 2 Nov 2020. Web.)
In my new book “The Definitive Case Against Christianity: Life After The Death Of God,” I state the following:
God’s belief in propositions has to change in accordance with migrating facts. While it is true that the Sun is currently one astronomical unit away, that will not always be the case. At every moment when the Sun begins to expand during its Red Giant phase, the distance between the Earth and the Sun will gradually decrease until the Sun ultimately ends all life on our planet, if not disintegrating it entirely. At each moment, it will be incumbent on God to update his knowledge by changing his prior beliefs concerning the distance between these two bodies. It is prerequisite for facts to be fixed in order for God to be immutable. Since facts are not fixed, his beliefs and corresponding propositions about any given state of affairs have to change — otherwise he fails to be omniscient. (193)
A Christian might assert that there is a simple solution to the issue I have raised: God is also omnipresent. The issue with this objection is that God’s perspectives would be in direct contradiction with one another and so, from the perspective of other sentient beings, he would regard two logically contradictory propositions as true. From our perspective, he would believe in a truth and a lie, namely that from Earth, there is a supernova two million lightyears away, but in Andromeda, there is no longer a supernova to speak of. In other words, since the light from this event took two million years to reach humans on Earth, humans are just now learning of this supernova in Andromeda whereas an intelligent species on a planet relatively near to the event in Andromeda would report no supernova at that location. Perhaps it happened long before they emerged or before they were advanced enough to observe, record, and describe such an event. The fact remains that their present does not feature this supernova event while ours does.
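The bookkeeping behind this example is simple light-travel arithmetic: light covers one lightyear per year, so an observation made today reports an event that occurred as many years ago as the source is lightyears away. A minimal sketch, using the essay's two-million-lightyear figure and a hypothetical observation year:

```python
# Light-travel-time arithmetic behind the supernova example.
# The distance matches the essay's figure; the observation year is a
# hypothetical stand-in, not an astronomical record.

LIGHTYEARS_TO_EVENT = 2_000_000  # distance to the supernova, in lightyears

def year_event_occurred(year_observed: int, distance_ly: int) -> int:
    """Light travels one lightyear per year, so an event observed now
    happened distance_ly years earlier in Earth's frame."""
    return year_observed - distance_ly

# If Earth astronomers record the supernova in 2024 CE, the blast itself
# happened roughly two million years earlier, long before any observer
# near the event would still call it "present":
print(year_event_occurred(2024, LIGHTYEARS_TO_EVENT))  # -1997976
```

The point of the essay survives the arithmetic: the supernova is "present" for Earth's observers while being ancient history for observers near the event.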
Another fun example from theoretical physics involves watching someone falling into a black hole. The following is a summary of the relativistic experiences the observer and the faller would have:
1. The light coming from the person gets redshifted; they’ll start to take on a redder hue and then, eventually, will require infrared, microwave, and then radio “vision” to see.
2. The speed at which they appear to fall in will get asymptotically slow; they will appear to fall in towards the event horizon at a slower and slower speed, never quite reaching it.
3. The amount of light coming from them gets less and less. In addition to getting redder, they also will appear dimmer, even if they emit their own source of light!
4. The person falling in notices no difference in how time passes or how light appears to them. They would continue to fall in to the black hole and cross the event horizon as though nothing happened. (“Falling Into a Black Hole Sucks!”. ScienceBlogs. 20 Nov 2009. Web.)
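The four observations in the summary follow from the Schwarzschild time-dilation factor of general relativity. A brief sketch of the standard textbook relations (my addition, not drawn from the cited post):

```latex
% Proper time of a clock hovering at radius r outside a Schwarzschild
% black hole, relative to coordinate time t of a distant observer:
d\tau = dt \sqrt{1 - \frac{r_s}{r}}, \qquad r_s = \frac{2GM}{c^2}

% Gravitational redshift of light emitted at r and received far away:
1 + z = \left(1 - \frac{r_s}{r}\right)^{-1/2}
```

As the faller approaches the horizon, $r \to r_s$, the redshift $z$ and the apparent slowing diverge for the distant observer (items 1–3), while the faller's own proper time $\tau$ remains finite through the horizon (item 4).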
God’s omnipresence, therefore, fails to solve the issue because in order for him to have all possible perspectives, he would have to hold contradictory propositions on pretty much any and all events in our universe. He would have our perspective in the Milky Way as well as the point of view of the Andromeda galaxy’s civilization. He would also have the perspectives of the observer and the faller in our black hole example. The glaring issue is that he would have these perspectives at the same time and in the same respect, thus resulting in contradictions. Perhaps one can still find a way to try and circumvent these issues.
Given the idea that a day is as a thousand years and vice versa for God (2 Peter 3:8), if he, for sake of argument, experiences time-laden events in God-days (equivalent to one human millennium) or even all at once, God would make entirely different claims from the ones we believe to be knowable. In other words, while we are discussing here and now, before and after, duration, and such, God would state something like the following: “all of the people, places, events, etc. that existed from the first century through the tenth century CE existed simultaneously.” To us, such a claim is unlike any proposition we believe to be knowable; it is, indeed, nonsensical. God, therefore, being a timeless being, cannot know anything about time-laden truths. It would be incumbent on him not to be timeless, but then he is immediately confronted with the relativity of experience in the physical universe.
More importantly, premise 5 is debatable despite Fitch’s Knowability Paradox. Pruss states: “The argument for 5 is the famous knowability paradox: If p is an unknown truth, then that p is an unknown truth is a truth that cannot be known (for if someone know that p is an unknown truth, then they would thereby know that p is a truth, and then it wouldn’t be an unknown truth, and no one can’t know what isn’t so)” (Ibid.). The tendency, however, to leap from the possibility of knowing every truth to someone knowing every truth is dubious. It is similar to the leap rooted in Anselm: conceivability implies possibility. Worse still is that Pruss leaps from possibility to actuality. One should not draw ontological conclusions on the basis of logical considerations.
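Fitch's derivation, which premise 5 leans on, can be sketched in standard epistemic-modal notation ($K$ for "it is known that", $\Diamond$ for possibility); this is the textbook reconstruction, not Pruss's own wording:

```latex
\begin{align*}
&\text{Knowability principle:} && \forall p\,(p \rightarrow \Diamond K p)\\
&\text{Suppose some truth is unknown:} && q \wedge \neg K q\\
&\text{Then, by knowability:} && \Diamond K(q \wedge \neg K q)\\
&\text{But } K(q \wedge \neg K q) \text{ entails } K q \wedge K \neg K q,
  \text{ hence } K q \wedge \neg K q && \text{(contradiction)}\\
&\text{So } \neg\Diamond K(q \wedge \neg K q)\text{: if every truth is knowable, no truth is unknown.}
\end{align*}
```

The contested step, as noted above, is reading "every truth is knowable" so strongly that the slide from possibility to actuality goes through.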
Pruss would appreciate an example from mathematics, namely that mathematicians work with infinity in their equations and even think of it as a real, tangible object in the universe. Unfortunately, there does not seem to be a physical correlate to infinity. Pradeep Mutalik, writing for Quanta Magazine, explains:
While “most physicists and mathematicians have become so enamored with infinity that they rarely question it,” Tegmark writes, infinity is just “an extremely convenient approximation for which we haven’t discovered convenient alternatives.” Tegmark believes that we need to discover the infinity-free equations describing the true laws of physics. (Mutalik, Pradeep. “The Infinity Puzzle”. Quanta Magazine. 16 Jun 2016. Web.)
With this in mind, one can see that though mathematicians logically consider and defend the concept of infinity, one should proceed with caution before stipulating that reality features anything like this concept. It follows, then, that just because all truths are potentially knowable, it is not the case that there already exists a being that knows all things. Aside from the problem of the relativity of truth, stemming from the relativity of space-time, especially as one approaches the speed of light, there is this unjustified assumption that possibility implies actuality. In the main, possibility does not necessarily entail probability, and the latter must be established before concluding that something exists. Given these brief objections, one should maintain that there is no omniscient being.
Ultimately, a lot more can be said. All humans can really say about knowledge is what they experience with respect to acquiring it. As such, we would be wise to recall that we acquire knowledge first by way of awareness and conscious focus on what it is we are inquiring about. A truly omniscient being, which would be difficult to distinguish from a being who knows all things except how to play billiards or count to infinity (the conclusion of my Argument From Vagueness; see The Definitive Case Against Christianity, 194), would first and foremost have to be perfectly aware and focused for all of eternity. If this being loses focus at any point, myriad truths would have changed, progressing toward inevitable obsolescence, and new truths, not all related to the old truths, would have emerged. This being would, therefore, have lost its claim to omniscience. This is setting aside that humans can apprehend truths intuitively, without having dedicated concentrated inquiry to a matter. Other sentient beings could have this capacity as well. In any case, the likelihood that an omniscient being exists is practically zero.
By R.N. Carmona
“A little philosophy inclineth man’s mind to atheism; but depth in philosophy bringeth men’s minds about to religion.” — Francis Bacon
Even when philosophy was considered the handmaiden of theology, this statement was patently false. One need only consider the methods of philosophy to disabuse oneself of the notion that depth in philosophy convinces one that religious claims are true or more accurately, that Christian claims are true. Philosophy, first and foremost, is a pre-Christian enterprise. It may not appear that way because works that were not palatable to Christian sentiments were destroyed. Carlo Rovelli outlines this succinctly:
I often think that the loss of the works of Democritus in their entirety is the greatest intellectual tragedy to ensue from the collapse of the old classical civilization…We have been left with all of Aristotle, by way of which Western thought reconstructed itself, and nothing of Democritus. Perhaps if all the works of Democritus had survived, and nothing of Aristotle’s, the intellectual history of our civilization would have been better … But centuries dominated by monotheism have not permitted the survival of Democritus’s naturalism. The closure of the ancient schools such as those of Athens and Alexandria, and the destruction of all the texts not in accordance with Christian ideas was vast and systematic, at the time of the brutal antipagan repression following from the edicts of Emperor Theodosius, which in 390–391 declared that Christianity was to be the only and obligatory religion of the empire. Plato and Aristotle, pagans who believed in the immortality of the soul or in the existence of a Prime Mover, could be tolerated by a triumphant Christianity. Not Democritus. (Rovelli, Carlo. Reality is Not What it Seems: The Journey to Quantum Gravity. New York: Riverhead, 2018. 32-33. Print.)
The suppression of philosophical ideas that do not blend well with Christianity persists today, albeit in a different form. Since Christianity does not have the sociopolitical power it once had, these tendencies are confined to Christian publications, institutions, and, of course, churches. What one encounters in all of these areas is the Christian propensity to overstate philosophical schools or theories that appear to support Christian claims, coupled with a disingenuous presentation or outright censorship of competing thought. This is how one gets online Christians, with little to no college experience, professing an immovable foundationalism, usually stemming from washed-up philosophers turned theologians or apologists, like Paul Moser and Alvin Plantinga. Given the suppression of the competition, Christians like this often do not realize how thoroughly retrograde their assertions are. Moser’s opponents recognized, over three decades ago, that foundationalism was out of fashion and that Moser’s iteration was flawed (e.g., Laurence Bonjour, Kevin Possin, Mark Timmons). That is more true today than it was then. Yet this sort of Christian will arrogantly claim that Moser’s arguments against the competition were ironclad, all while ignoring that in order for foundationalism to remain out of fashion, his arguments must have been defeated by other philosophers.
Wesley Wildman, Associate Professor of Philosophy, Theology, and Ethics at Boston University, states:
The kind of epistemic foundationalism that has prevailed in most modern Western philosophy has now mostly collapsed. Its artless insistence on certainty in the foundations of knowledge proved unsuitable even for mathematics and natural sciences, and it was a particularly inapt standard for big-question philosophy. (Wildman, Wesley J. Religious Philosophy as Multidisciplinary Comparative Inquiry: Envisioning a Future For The Philosophy of Religion. Albany, NY: State University of New York Press, 2010. 11. Print.)
He adds that, since Peirce, Dewey, and later Quine, a thorough rejection of foundationalism has been followed by a fallibilist epistemological framework. This is how philosophy proceeds in the modern day. The history of philosophy should be enough to disabuse such Christians of their tendency to think philosophy was and still is beholden to theology. Unfortunately, it does not suffice. In that same vein, the history of philosophy ought to remind them of the origin of science in the works of natural philosophers like Boyle, Galileo, Harvey, Kepler, and Newton. Perhaps that would make them more capable of taking C.S. Peirce’s timeless advice:
What I would recommend is that every person who wishes to form an opinion concerning fundamental problems, should first of all make a complete survey of human knowledge, should take note of all the valuable ideas in each branch of science, should observe in just what respect each has been successful and where it has failed, in order that in the light of the thorough acquaintance so attained of the available materials for a philosophical theory and of the nature and strength of each, he may proceed to the study of what the problem of philosophy consists in, and of the proper way of solving it. (Peirce, Charles Sanders. “The Architecture of Theories” (1891). The Essential Peirce: Selected Philosophical Writings (1867–1893). Bloomington, IN: Indiana University Press, 1992. 292. Print.)
There is a sense in which philosophy underpins every other discipline and as such, breadth in philosophy is not only a knowledge of philosophy of language, mind, religion, science, and time, in addition to epistemology, ethics, metaphysics, ontology, and so on, but also a knowledge of areas of inquiry that make use of philosophical methodology, particularly logic, reasoning, and the clarification of important distinctions. Furthermore, if the aim of an enterprise is to locate the truth of a matter, then this should be of interest to anyone who purports to have a real enthusiasm for philosophy. To ignore the conclusions of other disciplines is to behave in a patently unphilosophical manner, but I digress.
Wildman is perhaps one of the more important thinkers in the philosophy of religion because he is at the forefront of defining how thinkers in his field will proceed. He recognizes already, unlike Christian theologians and philosophers, that philosophy of religion, as well as philosophy in general, have been divorced from Christianity and apologetics. Philosophers of religion are moving away from Christianized treatments of the issues they discuss, as well as the search for a personal, anthropomorphic deity. In defining the prominent theological traditions, Wildman makes it abundantly clear that the central arguments that got these traditions started, as Kant showed, do not prove the existence of God. This applies most especially to any variant of the Cosmological Argument.
He says, for instance, of the ontotheological tradition that if the entire tradition were based on the ontological argument, “most philosophers would probably consign it to the dustbin of history — and not without reason, despite the ballooning contemporary literature on the subject” (Ibid., 251). He goes on to say that the tradition is far broader and thrives separately from the ontological argument, specifically in that it does not rely on the argument’s anthropomorphic thinking. Similarly, of the cosmotheological tradition, Wildman expresses frustration resulting from the stubborn refusal to field nontheistic arguments. He adds: “Many religious philosophers nowadays recognize that the cosmotheological approach does not produce results that are immediately applicable to the religious beliefs of living theistic religions” (Ibid., 258). This makes the tradition useful for nontheistic approaches as well. Wildman makes similar observations as they pertain to the physicotheological, psychotheological, axiotheological, and other traditions he discusses.
Philosophers of religion are taking Nielsen’s advice, indeed their only recourse at this point, after the obstinate insistence of Christians on repeating these arguments as though they have yet to encounter any defeaters. This is precisely my gripe with Christians on social media. They have been so taken by Bacon’s statement that they think there is truth to it and, moreover, that depth and breadth are equivalent. Further still, depth in a particular author who convinces you is not depth in philosophy. Even if a Christian can name ten philosophers who align with their views, they are doing nothing but indulging their confirmation bias; since roughly 30 percent of the world population is Christian, it is no surprise that a good percentage of scholars harbor Christian sentiments or are sympathetic to Christianity. With respect to the arguments underlying the traditions Wildman discusses, Nielsen states:
It is a waste of time to rehearse arguments about the proofs or evidences for God or immortality. There are no grounds—or at least no such grounds—for belief in God or belief that God exists and/or that we are immortal. Hume and Kant (perhaps with a little rational reconstruction from philosophers like J.L. Mackie and Wallace Matson) pretty much settled that. Such matters have been thoroughly thrashed out and there is no point of raking over the dead coals. Philosophers who return to them are being thoroughly retrograde. (Nielsen, Kai. Naturalism and Religion. Amherst, NY: Prometheus, 2001. 399. Print.)
Depth and breadth in philosophy would incline a man’s mind to a wide range of possibilities. Atheism and theism are not the only options on the table. Depth and breadth would also disabuse any would-be philosopher of the tendency to think in binaries. When philosophers speak of distinctions, there are not always two choices on offer. Sometimes what muddles the pursuit of proper distinctions is the fact that there are several options to consider. This is often lost on Christians, especially the less initiated who spend their time engaging in sophistry on social media. If that were not the case, they would realize, without much in the way of effort, that the notion of a personal being is utterly at odds with a deity who sparks the universe to then supervise its evolution via physical laws and eventually, evolutionary drivers that finally and ultimately result in his desired creative end: humankind. The notion is incongruous with the virtually instantaneous creation described in Genesis. That is setting aside that the fine-tuning of the parameters just right for life is owed to the gradual freezing and entropy of the universe. There is no indication that the parameters were decided from the start and as such, the idea of a silent creator of this sort is indistinguishable from the conclusion that the universe simply does not require a creator.
Ultimately, breadth and depth in philosophy disabuse one’s mind of binaries. Belief in a personal god or the lack thereof are not the only options on offer. The fact that I identify as an atheist is a consequence of what the evidence seems to dictate. Moreover, the ad hoc inclusion of an agent, especially as it pertains to causation, always struck me as suspicious. That is, until I realized that causation and teleology have long diverged. Therefore, if a thing or an event or an entire universe can be explained without recourse to an agent of any kind, it is unnecessary to attach such an agent to a self-contained and consistent explanation. It is precisely because I realize that the inclusion of a god in any mode of explanation is simply inadvisable that I identify as an atheist. However, since my mind is free of binaries, confirmation bias, and other cognitive shortcomings that hinder anyone from finding the truth of a matter, I am, as I have always been, open to being wrong. Like Wildman, though, I realize that if there is a hand behind the curtain, it is unlike anything that most modern religions describe and, furthermore, it may be vastly inappropriate to tarnish this being with the moniker of god. Perhaps the concept is so inextricably tied to the Abrahamic monotheisms that there is no real way to isolate it. This would imply that philosophers of religion are in pursuit of a sufficiently advanced alien race that may be simulating a universe for purposes of understanding philosophical big questions like volition in higher sentient beings, consciousness, universals, mathematics, and so on. Or perhaps they are perpetually in pursuit of themselves, the proverbial cat spinning after its own tail.