By R.N. Carmona
Every deductive argument can be negated. I consider this an uncontroversial statement. The problem is, there are people who proceed as though deductive arguments speak to an a priori truth. The Freedom Tower is taller than the Empire State Building; the Empire State Building is taller than the Chrysler Building; therefore, the Freedom Tower is taller than the Chrysler Building. This is an example of an a priori truth because given that one understands the concepts of taller and shorter, the conclusion follows uncontroversially from the premises. This is one way in which the soundness of an argument can be assessed.
Of relevance is how one would proceed if one is unsure of the argument. Thankfully, we no longer live in a world where one would have to go out of their way to measure the heights of the three buildings. A simple Google search will suffice. The Freedom Tower is ~541m. The Empire State Building is ~443m. The Chrysler Building is ~318m. Granted, this is knowledge by way of testimony. I do not intend to connote religious testimony. What I intend to say is that one’s knowledge is grounded in knowledge directly acquired by someone else. In other words, at least one other person actually measured the heights of these buildings and these are the measurements they got.
Most of our knowledge claims rest on testimony. Not everyone has performed an experimental proof to show that the acceleration due to gravity is 9.8 m/s^2. Either one learned it from a professor, read it in a physics textbook, or learned it while watching a science program. Or they believe the word of someone they trust, be it a friend or a grade school teacher. This does not change the fact that, if one cared to, one could exchange knowledge by way of testimony for directly acquired knowledge by performing an experimental proof. This is something I have done, so I do not believe on the basis of mere testimony that Newton’s law holds. I can say that it holds because I tested it for myself.
To whet the appetite, let us consider a well-known deductive argument and let us ignore, for the moment, whether it is sound:
P1 All men are mortal.
P2 Socrates is a man.
C Therefore, Socrates is mortal.
If someone were completely disinterested in checking whether this argument, which is merely a finite set of propositions, coheres with the world or reality, I would employ my negation strategy: the negation of an argument someone assumes to be sound without epistemic warrant or justification. The strategy forces them into exploring whether their argument or its negation is sound. Inevitably, the individual will have to abandon their bizarre commitment to a sort of propositional idealism, namely, the view that propositions can only be assessed logically, that they contain no real-world entities in any context, and that they make no claims about the world. In other words, they will abandon the notion that “All men are mortal” is a mere proposition lacking context that is not intended to make a claim about states of affairs objectively accessible to everyone, including the person who disagrees with them. With that in mind, I would offer the following:
P1 All men are immortal.
P2 Socrates is a man.
C Therefore, Socrates is immortal.
This is extremely controversial for reasons we are all familiar with. That is because everyone accepts that the original argument is sound. When speaking of ‘men’, setting aside the historical tendency to dissolve the distinction between men and women, what is meant is “all human persons from everywhere and at all times.” Socrates, as we know, was an ancient Greek philosopher who reportedly died in 399 BCE. Like all people before him, and presumably all people after him, he proved to be mortal. No human person has proven to be immortal and therefore, the original argument holds.
Of course, matters are not so straightforward. Christian apologists offer no arguments that are uncontroversially true like the original argument above. Therefore, the negation strategy will prove extremely effective to disabuse them of propositional idealism and to make them empirically assess whether their arguments are sound. What follows are examples of arguments for God that have been discussed ad nauseam. Clearly, theists are not interested in conceding. They are not interested in admitting that even one of their arguments does not work. Sure, you will find theists committed to Thomism, for instance, and as such, they will reject Craig’s Kalam Cosmological Argument (KCA) because it does not fit into their Aristotelian paradigm and not because it is unsound. They prefer Aquinas’ approach to cosmological arguments. What is more common is the kind of theist who ignores the incongruity between one argument and another; since they are arguments for God, each counts as evidence for his existence, and it really does not matter that Craig’s KCA is not Aristotelian. I happen to think that it is, despite Craig’s denial, but I digress.
Negating Popular Arguments For God’s Existence
Let us explore whether Craig’s Moral Argument falls victim to the negation strategy. Craig’s Moral Argument is as follows:
P1 If God does not exist, objective moral values do not exist.
P2 Objective moral values do exist.
C Therefore, God exists. (Craig, William L. “Moral Argument (Part 1)”. Reasonable Faith. 15 Oct 2007. Web.)
With all arguments, a decision must be made. First, an assessment of the argument form is in order. Is it a modus ponens (MP) or a modus tollens (MT)? Perhaps it is neither and is instead, a categorical or disjunctive syllogism. In any case, one has to decide which premise(s) is going to be negated or whether by virtue of the argument form, one will have to change the argument form to state the opposite. You can see this with the original example. I could have very well negated P2 and stated “Socrates is not a man.” Socrates is an immortal jellyfish that I tagged in the Mediterranean. Or he is an eternal being that I met while tripping out on DMT. For purposes of the argument, however, since he is not a man, at the very least, the question of whether or not he is mortal is open. We would have to ask what Socrates is. Now, if Socrates is my pet hamster, then yes, Socrates is mortal despite not being a man. It follows that the choice of negation has to be in a place that proves most effective. Some thought has to go into it.
Likewise, the choice has to be made when confronting Craig’s Moral Argument. Craig’s Moral Argument is a modus tollens. For the uninitiated, it simply states: [((p → q) ∧ ¬q) → ¬p] (Potter, A. (2020). The rhetorical structure of Modus Tollens: An exploration in logic-mining. Proceedings of the Society for Computation in Linguistics, 3, 170-179.). Another way of putting it is that one is denying the consequent. That is precisely what Craig does. “Objective moral values do not exist” is the consequent q. Craig is saying ¬q or “Objective moral values do exist.” Therefore, one route one can take is keeping the argument form and negating P1, which in turn negates P2.
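The validity of this form can be checked mechanically with a truth table. Here is a minimal sketch in Python (my illustration, not Potter’s): it verifies that modus tollens is a tautology and, for contrast, that the superficially similar form of denying the antecedent is not.

```python
from itertools import product

def implies(a, b):
    # Material conditional: p -> q is false only when p is true and q is false
    return (not a) or b

# Modus tollens: ((p -> q) and not q) -> not p
# The form is valid iff it comes out true under every assignment of p and q.
modus_tollens_valid = all(
    implies(implies(p, q) and (not q), not p)
    for p, q in product([True, False], repeat=2)
)

# For contrast, denying the antecedent: ((p -> q) and not p) -> not q
denying_antecedent_valid = all(
    implies(implies(p, q) and (not p), not q)
    for p, q in product([True, False], repeat=2)
)

print(modus_tollens_valid)       # True: a tautology
print(denying_antecedent_valid)  # False: fails when p is false and q is true
```

This is why the choice of where to negate matters: some rearrangements of a valid form remain valid, while others quietly slide into an invalid form.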
MT Negated Moral Argument
P1 If God exists, objective moral values and duties exist.
P2 Objective moral values do not exist.
C Therefore, God does not exist.
The key is to come up with a negation that is either sound or, at the very least, free of any controversy. Straight away, I do not like P2. Moral realists would also deny this negation because, to their minds, P2 is not true. The controversy with P2 is not so much whether it is true or false, but that it falls on the horns of the objectivism-relativism and moral realism/anti-realism debates in ethics. The argument may accomplish something with respect to countering Craig’s Moral Argument, but we are in no better place because of it. This is when we should explore changing the argument’s form in order to get a better negation.
MP Negated Moral Argument
P1 If God does not exist, objective moral values and duties exist.
P2 God does not exist.
C Therefore, objective moral values and duties exist.
This is a valid modus ponens. I have changed the argument form of Craig’s Moral Argument and I now have what I think to be a better negation of his argument. Atheists can find satisfaction in P2; it is the epistemic proposition atheists are committed to. The conclusion also alleviates any concerns moral realists might have had with the MT Negated Moral Argument. For my own purposes, I think this argument works better. That, however, is beside the point. The point is that this forces theists to either justify the premises of Craig’s Moral Argument, i.e., prove that the argument is sound, or assert, on the basis of mere faith, that Craig’s argument is true. In either case, one will have succeeded in forcing the theist to abandon their propositional idealism, in getting them to test the argument against the world as ontologically construed, or in getting them to confess that they are indulging in circular reasoning and confirmation bias, i.e., getting them to confess that they are irrational and illogical. All of these count as victories. We can explore whether other arguments for God fall on this sword.
We can turn our attention to Craig’s Kalam Cosmological Argument (KCA):
P1 Everything that begins to exist has a cause.
P2 The universe began to exist.
C Therefore, the universe has a cause. (Reichenbach, Bruce. “Cosmological Argument”. Stanford Encyclopedia of Philosophy. 2021. Web.)
Again, negation can take place in two places: P1 or P2. Negating P1, however, does not make sense. Negating P2, as in the case of his Moral Argument, changes the argument form; this is arguable and more subtle. So we get the following:
MT Negated KCA
P1 Everything that begins to exist has a cause.
P2 The universe did not begin to exist.
C Therefore, the universe does not have a cause.
Technically, Craig’s KCA is a categorical syllogism. Such syllogisms feature a universal (∀) or existential (∃) quantifier; the former is introduced by saying “all,” the latter by saying “some.” Consider, “all philosophers are thinkers; all logicians are philosophers; therefore, all logicians are thinkers.” Or consider, “no mallards are insects; some birds are mallards; therefore, some birds are not insects.” What Craig is stating is that all things that begin to exist have a cause, so if the universe is a thing that began to exist, then it has a cause. Alternatively, his argument is an implicit modus ponens: “if the universe began to exist, then it has a cause; the universe began to exist; therefore, the universe has a cause.” In any case, the negation works because if the universe did not begin to exist, then the universe is not part of the group of all things that have a cause.
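The quantified reading can be illustrated with a toy domain in Python (the entities and their properties here are hypothetical, chosen only to make the point): a universal conditional like “everything that begins to exist has a cause” is vacuously true of anything that does not begin to exist, so it is silent on whether such a thing has a cause.

```python
# Toy domain for the quantified reading of the KCA's P1.
# P1 is read as: for all x, begins_to_exist(x) -> has_cause(x).
# The entities and property assignments below are illustrative only.
domain = {
    "chair":    {"begins": True,  "caused": True},
    "star":     {"begins": True,  "caused": True},
    "universe": {"begins": False, "caused": False},  # the negated P2's hypothesis
}

def implies(a, b):
    # Material conditional: a -> b
    return (not a) or b

# P1 still holds on this domain even though the universe is uncaused,
# because the conditional is vacuously true of non-beginning things.
p1_holds = all(implies(x["begins"], x["caused"]) for x in domain.values())
print(p1_holds)  # True
```

In other words, granting P1 costs the defender of the negated KCA nothing: a universe that never began to exist simply falls outside the scope of the quantifier.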
Whether the universe is finite or eternal has been debated for millennia and, in a sense, despite the changing context, the debate rages on. If the universe is part of an eternal multiverse, it is just one universe in a vast sea of universes within a multiverse that has no temporal beginning. Despite this, the MT Negated KCA demonstrates how absurd the KCA is. The singularity was already there ‘before’ the Big Bang. The Big Bang started the cosmic clock, but the universe itself did not begin to exist. This is the more plausible picture. Consider that everything that begins to exist does so when the flow of time is already in motion, i.e., when the arrow of time points in a given direction due to entropic increase, which is reducible to the decreasing temperature throughout the universe. Nothing that has ever come into existence has done so simultaneously with time itself, because any causal relationship speaks to a change, and change requires the passage of time; but at T=0, no time has passed, and therefore, no change could have taken place. This leads to an asymmetry between T=0 and every later moment: we cannot speak of anything beginning to exist at T=0. The MT Negated KCA puts cosmology in the right context. The universe did not come into existence at T=0. T=0 simply represents the first measure of time; matter and energy did not emerge at that point.
For a more complicated treatment, Malpass and Morriston argue that “one cannot traverse an actual infinite in finite steps” (Malpass, Alex & Morriston, Wes (2020). Endless and Infinite. Philosophical Quarterly 70 (281):830-849.). In other words, from a mathematical point of view, T=0 behaves like an asymptote: trace the series of events further and further back and they get ever closer to T=0 but never actually touch it. For a visual representation, see below:
[Figure: a curve approaching its asymptote ever more closely without touching it. Credit: Free Math Help]
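The asymptote picture can be made concrete with a toy sequence (my illustration, not Malpass and Morriston’s). Suppose the event times, indexed going back toward the Big Bang, are:

```latex
% Hypothetical event times approaching, but never reaching, T = 0:
t_n = \frac{1}{n}, \qquad n = 1, 2, 3, \dots
% Every event occurs strictly after T = 0:
t_n > 0 \quad \text{for all } n,
% yet T = 0 is the limit of the regress:
\lim_{n \to \infty} t_n = 0.
```

On this toy model, every event occurs strictly after T=0 and has earlier events before it, yet no event occurs at T=0 itself; T=0 is a limit the regress approaches but never reaches.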
The implication here is that time began to exist, but the universe did not begin to exist. A recent paper implies that this is most likely the case (Quantum Experiment Shows How Time ‘Emerges’ from Entanglement. The Physics arXiv Blog. 23 Oct 2013. Web.). The very hot, very dense singularity before the emergence of time at T=0 would have been subject to quantum mechanics rather than the macroscopic physics that came to dominate later, e.g., General Relativity. As such, the conditions were such that entanglement could have resulted in the emergence of time in our universe, but not the emergence of the universe. All of the matter and energy were already present before the clock started to tick. Analogously, if the universe is akin to a growing runner, then the toddler is at the starting line before the gun goes off. The sound of the gun starts the clock. The runner starts running sometime after she hears the sound. As she runs, she goes through all the stages of childhood, puberty, adolescence, and adulthood, and finally dies. Crucially, her running and her growth do not begin until after the gun goes off. Likewise, no changes take place at T=0; all changes take place after T=0. Though one might think of entanglement as a change occurring before the clock even started ticking, quantum mechanics demonstrates that quantum changes do not require time and, in fact, may result in the emergence of time. Therefore, it is plausible that though time began to exist at the Big Bang, the universe did not begin to exist, thus making the MT Negated KCA sound. The KCA is therefore false.
Finally, so that the Thomists do not feel left out, we can explore whether the negation strategy can be applied to Aquinas’ Five Ways. For our purposes, the Second Way is closely related to the KCA and would be defeated by the same considerations. Of course, we would have to negate the Second Way so that it is vulnerable to the considerations that cast doubt on the KCA. The Second Way can be stated as follows:
We perceive a series of efficient causes of things in the world.
Nothing exists prior to itself.
Therefore nothing [in the world of things we perceive] is the efficient cause of itself.
If a previous efficient cause does not exist, neither does the thing that results (the effect).
Therefore if the first thing in a series does not exist, nothing in the series exists.
If the series of efficient causes extends ad infinitum into the past, then there would be no things existing now.
That is plainly false (i.e., there are things existing now that came about through efficient causes).
Therefore efficient causes do not extend ad infinitum into the past.
Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God. (Gracyk, Theodore. “Argument Analysis of the Five Ways”. Minnesota State University Moorhead. 2016. Web.)
This argument is considerably longer than the KCA, but there are still areas where the argument can be negated. I think P1 is uncontroversial and so, I do not mind starting from there:
Negated Second Way
We perceive a series of efficient causes of things in the world.
Nothing exists prior to itself.
Therefore nothing [in the world of things we perceive] is the efficient cause of itself.
If a previous efficient cause does not exist, neither does the thing that results (the effect).
Therefore if the earlier thing in a series does not exist, nothing in the series exists.
If the series of efficient causes extends ad infinitum into the past, then there would be things existing now.
That is plainly true (i.e., efficient causes, per Malpass and Morriston, extend infinitely into the past, or the number of past efficient causes is a potential infinity).
Therefore efficient causes do extend ad infinitum into the past.
Therefore it is not necessary to admit a first efficient cause, to which everyone gives the name of God.
Either the theist will continue to assert that the Second Way is sound, epistemic warrant and justification be damned, or they will abandon their dubious propositional idealism and run a soundness test. Checking whether the Second Way or the Negated Second Way is sound would inevitably bring them into contact with empirical evidence supporting one argument or the other. As I have shown with the KCA, it appears that considerations of time, from a philosophical and quantum mechanical perspective, greatly lower the probability of the KCA being sound. This carries over neatly to Aquinas’ Second Way, and as such, one has far less epistemic justification for believing that the KCA or Aquinas’ Second Way is sound. The greater justification is found in the negated versions of these arguments.
Ultimately, one either succeeds at making the theist play the game according to the right rules or at getting them to admit their beliefs are not properly epistemic at all; instead, they believe by way of blind faith, all of their redundant arguments are exercises in circular reasoning, and any pretense of engaging the evidence is an exercise in confirmation bias. Arguments for God are a perfect example of directionally motivated reasoning (see Galef, Julia. The Scout Mindset: Why Some People See Things Clearly and Others Don’t. New York: Portfolio, 2021. 63-66. Print). I much prefer accuracy motivated reasoning. We are all guilty of motivated reasoning, but directionally motivated reasoning is indicative of irrationality and usually speaks to the fact that one holds beliefs that do not square with the facts. Deductive arguments are only useful insofar as their premises can be supported by evidence, which makes it easier to show that an argument is sound. This is why we can reason that if Socrates is a man, more specifically, the ancient Greek philosopher we all know, then Socrates was indeed mortal, and that is why he died in 399 BCE. Likewise, this is why we cannot reason that objective morality can only be the case if the Judeo-Christian god exists, that if the universe began to exist, God is the cause, or that if the series of efficient causes cannot regress infinitely and must terminate somewhere, it can only terminate at a necessary first cause, which some call God. These arguments can be negated, and the negations show that they are either absurd or that their reasoning is deficient, resting on the laurels of directionally motivated reasoning born of a bias for one’s religious faith rather than on the bedrock of carefully reasoned, meticulously demonstrated, accuracy motivated reasoning that does not ignore or omit pertinent facts.
The arguments for God, no matter how old or new, simple or complex, do not work, not only because they rely on directionally motivated and patently biased reasoning, but because, when tested for soundness without excluding any pertinent evidence, they turn out to be unsound. In the main, they all contain controversial premises that do not work unless one already believes in God. So there is a sense in which these arguments exist to give believers a false sense of security or, more pointedly, a false sense of certainty. Unlike my opponents, I am perfectly content with being wrong and with changing my mind, but the fact remains: theism is simply not the sort of belief to which I give much credence. Along with the Vagueness Strategy, the Negation Strategy is something that should be in every atheist’s toolbox.
By R.N. Carmona
My purpose here is twofold. First and foremost, I want to clarify Rasmussen’s argument because, though I can understand how word of mouth can lead to what is essentially a straw man of his argument, especially given that his argument requires one to pay for an online article or for his book Is God the Best Explanation of Things?, which he coauthored with Felipe Leon, it is simply good practice to present an argument fairly. Secondly, I want to be stern about the fact that philosophy of religion cannot continue to rake these dead coals. Rasmussen’s argument is just another in a long, winding, and quite frankly, tired history of contingency arguments. In any case, the following is the straw man I want my readers and anyone else who finds this post to stop citing. This is decidedly not Rasmussen’s argument:

[Image: a graphic titled “The Argument From Arbitrary Limits,” misattributed to Rasmussen]
Rasmussen has no argument called The Argument From Arbitrary Limits. Arbitrary limits actually feature in Leon’s chapter, in which he expresses skepticism of Rasmussen’s Geometric Argument (Rasmussen, Joshua and Leon, Felipe. Is God the Best Explanation of Things? Switzerland: Palgrave Macmillan. 53-68. Print.). Also, Rasmussen has a Theistic conception of God (omnipresent, wholly good, etc.) that is analogous to what Plantinga means by maximal greatness, but Rasmussen does not refer to God using that term. Perhaps there is confusion with his use of the term maximal conceivable. Given Rasmussen’s beliefs, he implies God with what he calls a maximal foundation, “a foundation complete with respect to its fundamental (basic, uncaused) features” (Ibid., 140), though he makes it clear throughout the book that he is open to such a foundation that is not synonymous with God. In any case, his maximal conceivable is not a being possessing maximal greatness; at least, not exactly, since it appears he means something more elementary given his descriptions of basic and uncaused, as these clearly do not refer to omnipresence, perfect goodness, and so on. There may also be some confusion with his later argument, which he calls “The Maximal Mind Argument” (Ibid., 112-113), which fails because it relies heavily on nonphysicalism, a series of negative theories in philosophy of mind that do not come close to offering alternative explanations for an array of phenomena thoroughly explained by physicalism (see here). In any case, Rasmussen has no argument resembling the graphic above. His arguments rest on a number of dubious assumptions, the nexus of which is his Geometric Argument:
JR1 Geometry is a geometric state.
JR2 Every geometric state is dependent.
JR3 Therefore, Geometry is dependent.
JR4 Geometry cannot depend on any state featuring only things that have a geometry.
JR5 Geometry cannot depend on any state featuring only non-concrete (non-causal) things.
JRC Therefore, Geometry depends on a state featuring at least one geometry-less concrete thing (3-5) (Ibid., 42).
Like Leon, I take issue with JR2. Leon does not really elaborate on why JR2 is questionable, saying only that “the most basic entities with geometry (if such there be) have their geometries of factual or metaphysical necessity” and that therefore, “it’s not true that every geometric state is dependent” (Ibid., 67). He is correct, of course, but elaboration could have helped here because this is a potential defeater. Factual and metaphysical necessity are grounded in physical necessity. The universe is such that the fact that every triangle containing a 90-degree angle is a right triangle is reducible to physical constraints within our universe. This fact of geometry is unlike Rasmussen’s examples, namely chair and iPhone shapes. He states: “The instantiation of [a chair’s shape] depends upon prior conditions. Chair shapes never instantiate on their own, without any prior conditions. Instead, chair-instantiations depend on something” (Ibid., 41). This overt Platonism is questionable in and of itself, but Leon’s statement is forceful in this case: the shape of the chair is not dependent because it has its shape of factual or metaphysical necessity, and that necessity stems from physical necessity. Chairs, first and foremost, are shaped the way they are because of our shape when we sit down; furthermore, chairs take the shapes they do because of physical constraints like human weight, gravity, friction against a floor, etc. For a chair not to collapse under the force of gravity and the weight of an individual, it has to be engineered in some way to withstand these forces acting on it; the chair’s shape is what it is because of physical necessity, and this explains its metaphysical necessity. There is, therefore, no form of a chair in some ethereal realm; an idea like this is thoroughly retrograde and not worth considering.
In any case, the real issue is that chair and iPhone shapes are not the sort of shapes that occur naturally in the universe. The shapes that do, namely spheres, ellipses, triangles, and so on, also emerge from physical necessity. It is simply the case that a suspender on a bridge forms the hypotenuse of a right triangle. Like a chair, bridge suspenders take this shape because of physical necessity. The same applies to the ubiquity of spherical and elliptical shapes in the universe. To further disabuse anyone of Platonic ideas, globular shapes are also quite ubiquitous in the universe and are more prominent the closer we get to the Big Bang. There are shapes inherent in our universe that cannot be neatly called geometrical, and even these shapes are physically, and therefore metaphysically, necessitated. If JR2 is false, then the argument falls apart. On another front, this addresses Rasmussen’s assertion that God explains why there is less chaos in our universe. Setting aside that the qualification of this statement is entirely relative, the relative order we see in the universe is entirely probabilistic, especially given that entropy guarantees a trend toward disorder as the universe grows older and colder.
I share Leon’s general concern about “any argument that moves from facts about apparent contingent particularity and an explicability principle to conclusions about the nature of fundamental reality” (Ibid., 67) or, as I have been known to put it: one cannot draw ontological conclusions on the basis of logical considerations. Theistic philosophers of religion and, unfortunately, philosophers in general have a terrible habit of leaping from conceivability to possibility and then all the way to actuality. Leon elaborates:
Indeed, the worry above seems to generalize to just about any account of ultimate reality. So, for example, won’t explicability arguments saddle Christian theism with the same concern, viz. why the deep structure of God’s nature should necessitate exactly three persons in the Godhead? In general, won’t explicability arguments equally support a required explanation for why a particular God exists rather than others, or rather than, say, an infinite hierarchy of gods? The heart of the criticism is that it seems any theory must stop somewhere and say that the fundamental character is either brute or necessary, and that if it’s necessary, the explanation of why it’s necessary (despite appearing contingent) is beyond our ability to grasp (Ibid., 67-68).
Of course, Leon is correct in his assessment. Why not Ahura Mazda, his hypostatic union to Spenta Mainyu, and his extension via the Amesha Spentas? If, for instance, the one-many problem requires the notion of a One that is also many, what exactly rules out Ahura Mazda? One starts to see how the prevailing version of Theism in philosophy of religion is just a sad force of habit. This is why it is necessary to move on from these arguments. Contingency arguments are notoriously outmoded because Mackie, Le Poidevin, and others have already provided general defeaters that can apply to any particular contingency argument. Also, how many contingency arguments do we need exactly? In other words, how many different ways can one continue to assert that all contingent things require at least one necessary explanation? Wildman guides us here:
Traditional natural theology investigates entailment relations from experienced reality to, say, a preferred metaphysics of ultimacy. But most arguments of this direct-entailment sort have fallen out of favor, mostly because they are undermined by the awareness of alternative metaphysical schemes that fit the empirical facts just as well as the preferred metaphysical scheme. By contrast with this direct-entailment approach, natural theology ought to compare numerous compelling accounts of ultimacy in as many different respects as are relevant. In this comparison-based way, we assemble the raw material for inference-to-the-best-explanation arguments on behalf of particular theories of ultimacy, and we make completely clear the criteria for preferring one view of ultimacy to another. (Wildman, Wesley J. Religious Philosophy as Multidisciplinary Comparative Inquiry: Envisioning a Future For The Philosophy of Religion. Albany, NY: State University of New York Press, 2010. 162. Print.)
Setting aside that Rasmussen does not make clear why he prefers a Christian view of ultimacy as opposed to a Zoroastrian one or another that may be proposed, I think Wildman is being quite generous when saying that “alternative metaphysical schemes fit the empirical facts just as well as the preferred metaphysical scheme,” because the fact of the matter is that some alternatives fit the empirical facts better than the metaphysical schemes Christian Theists resort to. Rasmussen’s preferred metaphysical scheme of a maximal foundation, which, properly stated, is a disembodied, nonphysical mind who is omnipresent, wholly good, and so on, rests on dubious assumptions that have not been made to cohere with the empirical facts. Nonphysicalism, as I have shown in the past, does not even attempt to explain brain-related phenomena. Physicalist theories have trounced the opposition in that department and it is not even close.

What is more, Christian Theists are especially notorious for not comparing their account to other accounts, and that is because they are not doing philosophy, but rather apologetics. This is precisely why philosophy of religion must move on from Christian Theism. We can think of an intellectual corollary to forgiveness. In light of Christian Theism’s abject failure to prove God, how many more chances are we required to give this view? Philosophy of religion is, then, like an abused lover continuing to be moved by scraps of affection made to cover up heaps of trauma. The field should be past the point of forgiveness and of giving Christian Theism yet another go to get things right; it has had literal centuries to get its story straight and present compelling arguments, and yet here we are, retreading ground that has been walked over again and again and again.
To reinforce my point, I am going to quote Mackie and Le Poidevin’s refutations of contingency arguments like Rasmussen’s. It should then become clear that we have to bury these kinds of arguments for good. Let them who are attached to these arguments mourn their loss, but I will attend no such wake. What remains of the body is an ancient skeleton, long dead. It is high time to give it a rest. Le Poidevin put one nail in the coffin of contingency arguments. Anyone offering new contingency arguments has simply failed to do their homework. It is typical of Christian Theists to indulge confirmation bias and avoid what their opponents have to say. The problem with that is that the case against contingency arguments has been made. Obstinacy does not change the fact. Le Poidevin clearly shows why necessary facts do not explain contingent ones:
Necessary facts, then, cannot explain contingent ones, and causal explanation, of any phenomenon, must link contingent facts. That is, both cause and effect must be contingent. Why is this? Because causes make a difference to their environment: they result in something that would not have happened if the cause had not been present. To say, for example, that the presence of a catalyst in a certain set of circumstances speeded up a reaction is to say that, had the catalyst not been present in those circumstances, the reaction would have proceeded at a slower rate. In general, if A caused B, then, if A had not occurred in the circumstances, B would not have occurred either. (A variant of this principle is that, if A caused B, then if A had not occurred in the circumstances, the probability of B’s occurrence would have been appreciably less than it was. It does not matter for our argument whether we accept the original principle or this variant.) To make sense of this statement, ‘If A had not occurred in the circumstances, B would not have occurred’, we have to countenance the possibility of A’s not occurring and the possibility of B’s not occurring. If these are genuine possibilities, then both A and B are contingent. So one of the reasons why necessary facts cannot causally explain anything is that we cannot make sense of their not being the case, whereas causal explanation requires us to make sense of causally explanatory facts not being the case. Causal explanation involves the explanation of one contingent fact by appeal to another contingent fact. (Le Poidevin, Robin. Arguing for Atheism: An Introduction to the Philosophy of Religion. London: Routledge, 1996. 40-41. Print.)
This is a way of substantiating that an effect is inhered in a cause or the principle, like effects from like causes. This has been precisely my criticism of the idea that a nonphysical cause created the physical universe. There is no theory of causation that permits the interaction of an ethereal entity’s dispositions and that of physical things. It is essentially a paraphrase of Elizabeth of Bohemia’s rebuttal to Cartesian dualism: how does mental substance interact with physical substance? This is why mind-body dualism remains in a state of incoherence, but I digress. Mackie puts yet another nail in the coffin:
The principle of sufficient reason, then, is more far-reaching than the principle that every occurrence has a preceding sufficient cause: the latter, but not the former, would be satisfied by a series of things or events running back infinitely in time, each determined by earlier ones, but with no further explanation of the series as a whole. Such a series would give us only what Leibniz called ‘physical’ or ‘hypothetical’ necessity, whereas the demand for a sufficient reason for the whole body of contingent things and events and laws calls for something with ‘absolute’ or ‘metaphysical’ necessity. But even the weaker, deterministic, principle is not an a priori truth, and indeed it may not be a truth at all; much less can this be claimed for the principle of sufficient reason. Perhaps it just expresses an arbitrary demand; it may be intellectually satisfying to believe there is, objectively, an explanation for everything together, even if we can only guess at what the explanation might be. But we have no right to assume that the universe will comply with our intellectual preferences. Alternatively, the supposed principle may be an unwarranted extension of the determinist one, which, in so far as it is supported, is supported only empirically, and can at most be accepted provisionally, not as an a priori truth. The form of the cosmological argument which relies on the principle of sufficient reason therefore fails completely as a demonstrative proof.
Mackie, J. L. The Miracle of Theism: Arguments for and against the Existence of God. Oxford: Clarendon, 1982. 86-87. Print.
Every contingency argument fails because it relies on the principle of sufficient reason and because necessity does not cohere with contingency as it concerns a so-called causal relation. Mackie, like Le Poidevin, also questions why God is a satisfactory termination of the regress. Why not something else? (Ibid., 92). Contingency arguments amount to vicious special pleading and an outright refusal to entertain viable alternatives, even in cases where the alternatives are nonphysical and compatible with religious sentiments. In any case, it would appear that the principle of sufficient reason is not on stable ground. Neither is the notion that a necessary being is the ultimate explanation of the universe. Contingency arguments have been defeated and there really is no way to restate these arguments in a way that does not fall on the horns of Le Poidevin and Mackie’s defeaters. Only the obdurate need to believe that God is the foundational explanation of the universe explains the redundancy of Christian Theists within the philosophy of religion. That is setting aside that apologetics is not philosophy and other complaints I have had. The Geometric Argument, despite using different language, just is a contingency argument. If the dead horse could speak, it would tell them all to lay down their batons once and for all, but alas.
Ultimately, contingency arguments are yet another example of how repetitive Christianized philosophy of religion has become. There is a sense in which Leon, Le Poidevin, and Mackie are paraphrasing one another because, and here is a bit of irony, like arguments result in like rebuttals. They cannot help but sound as though they each decided or even conspired to write on the same topic for a final paper. They are, after all, addressing the same argument no matter how many attempts have been made to word it differently. It is a vicious cycle, a large wheel that must not keep on turning. It must be stopped in its tracks if progress in the philosophy of religion is to get any real traction.
By R.N. Carmona
What follows are some scattered thoughts after reading an excellent paper by Marius Backmann. I think he succeeds in showing how the Neo-Aristotelian notion of powers is incongruous with pretty much any theory of time of note. My issue with powers is more basic: what in the world are Neo-Aristotelians even saying when they invoke this idea, and why does it seem that no one has raised the concern that powers are an elementary paraphrase of dispositions? With respect to this concern, Neo-Aristotelians do not even attempt to make sense of our experience with matter and energy. They seem to go on the assumption that something just has to underlie the physical world, whereas I take it as extraneous to include metaphysical postulates where entirely physical ones make do. Dispositions are precisely the sort of physical postulates that adequately explain what we perceive as cause-effect relationships. What I will argue is that a more thorough analysis of dispositions is all that is needed to understand why a given cause a produced some given effect b.
My idea that powers are an elementary paraphrase is entailed in Alexander Bird’s analysis of what powers are. Backmann summarizes Bird’s view as follows:
According to Bird, powers, or potencies, as he calls them alternatively, are a subclass of dispositions. Bird holds that not all dispositions need to be powers, since there could be dispositions that are not characterised by an essence, apart from self-identity. Powers, on the other hand, Bird (2013) holds to be properties with a dispositional essence. On this view, a power is a property that furnishes its bearer with the same dispositional character in every metaphysically possible world where the property is instantiated. If the disposition to repel negatively charged objects if there are some in the vicinity is a power in that sense, then every object that has that property does the same in every metaphysically possible world, i.e. repel negatively charged objects if there are some in the vicinity.
Marius Backmann (2019) No time for powers, Inquiry, 62:9-10, 979-1007, DOI: 10.1080/0020174X.2018.1470569
Upon closer analysis of Bird’s definition, a power just is a disposition. The issue is that Bird and the Neo-Aristotelians who complain that he has not gone far enough have isolated what they take to be a power from the properties of an electron, which is a good example of a particle that repels negatively charged objects given that some are in its vicinity. Talk of possible worlds makes no sense unless one can prove mathematically that an electron-like particle with a different mass would also repulse other negatively charged particles. However, though it can easily be shown that a slightly more massive electron-like particle will repulse other particles of negative charge, its electrical charge will be slightly higher than an electron’s because, according to Robert Millikan’s calculation, there seems to be a relationship between the mass of a particle and its charge. The most elementary charge is e = ~1.602 x 10^-19 coulombs. The charge of a quark is measured in multiples of e/3, implying a smaller charge, which is expected given that they are sub-particles. So what is of interest is why the configuration of even an elementary particle yields predictable “behaviors.”
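The predictability at issue can be sketched with Coulomb’s law: the repulsive force between two negative charges depends only on the charges and their separation, with mass nowhere in the formula. A minimal sketch, using the standard values of the constants; the 1 nm separation is an arbitrary assumption of mine:

```python
# Coulomb's law: F = k * |q1*q2| / r^2. The force depends on charge and
# separation alone; a more massive electron-like particle with the same
# charge would repel its neighbors identically.

K = 8.9875e9   # Coulomb constant, N·m²/C²
E = 1.602e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force (N) between charges q1 and q2
    (in coulombs) separated by r meters."""
    return K * abs(q1 * q2) / r ** 2

# Two electrons 1 nm apart (an arbitrary illustrative distance).
f_electrons = coulomb_force(-E, -E, 1e-9)

# A pair of down quarks, each carrying charge -e/3, repel nine times more
# weakly at the same separation.
f_quarks = coulomb_force(-E / 3, -E / 3, 1e-9)

print(f_electrons)             # ≈ 2.3e-10 N
print(f_electrons / f_quarks)  # ≈ 9.0
```

The point of the sketch is that the “behavior” falls out of the particle’s configuration, charge in this case, rather than out of any further power standing behind it.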
To see this, let us dig into an example Backmann uses: “My power to bake a cake would not bring a cake that did not exist simpliciter before into existence, but only make a cake that eternally exists simpliciter present. Every activity reduces to a change in what is present” (Ibid.). The Neo-Aristotelian is off track in saying that we have a power to bake a cake and that the oven has a power to yield this desired outcome, powers that do not trace back to their parts, or, as Cartwright states of general nomological machines: “We explicate how the machine arrangement dictates what can happen – it has emergent powers which are not to be found in its components” (Cartwright, Nancy & Pemberton, John (2013). Aristotelian powers: without them, what would modern science do? In John Greco & Ruth Groff (eds.), Powers and Capacities in Philosophy: the New Aristotelianism. London, U.K.: Routledge. pp. 93-112.). Of the nomological machines in nature, Cartwright appears to bypass the role of evolution. Of such machines invented by humans, she ignores the fact that we often wrongly predict what a given invention will do. Evolution proceeds via probabilities and so, from our ad hoc point of view, it looks very much like trial and error. Humans have the advantage of being much more deliberate about what they are selecting for and therefore, our testing and re-testing of inventions and deciding when they are safe and suitable to hit the market is markedly similar to evolutionary selection.
That being said, the components of a machine do account for its function. It is only due to our understanding of other machines that we understand what should go into building a new one in order for it to accomplish a new task or set of tasks. Powers are not necessary, because then we should be asking: why did we not start off with machines that have superior powers? In other words, why start with percolators if we could have just skipped straight to Keurig or Nespresso machines or whatever more advanced models that might be invented? Talk of powers seems to insinuate that objects, whether complex or simple, are predetermined to behave the way they do, even in the absence of trial runs, modifications, or outright upgrades. This analysis sets aside the cake. It does not matter what an oven or air fryer is supposed to do. If the ingredients are wrong, either because I neglected to use baking powder or did not use enough flour, the cake may not rise. The ingredients that go into baked goods play a “causal” role as well.
Dispositions, on the other hand, readily explain why one invention counts as an upgrade over a previous iteration. Take, for instance, Apple’s A14 Bionic chip. At bottom, this chip accounts for “a 5 nanometer manufacturing process” and CPU and GPU improvements over the iPhone 11 (Truly, Alan. “A14 Bionic: Apple’s iPhone 12 Chip Benefits & Improvements Explained”. Screenrant. 14 Oct 2020. Web). Or, more accurately, key differences in the way this chip was made account for the improvement over its predecessors. Perhaps more crucial is that critics of dispositions have mostly tended to isolate dispositions, as though a glass cup’s fragility exists in a vacuum. Did the cup free fall at 9.8m/s^2? Did it fall on a mattress or on a floor? What kind of floor? Or was the cup thrown at some velocity because Sharon was angry with her boyfriend Albert? What did she throw the cup at: a wall, the floor, Albert’s head, or did it land in a half-full hamper with Sharon and Albert’s dirty clothes?
Answering these questions solves the masking and mimicker problems. The masking problem can be framed as follows:
Another kind of counterexample to SCA, due to Johnston (1992) and Bird (1998), involves a fragile glass that is carefully protected by packing material. It is claimed that the glass is disposed to break when struck but, if struck, it wouldn’t break thanks to the work of the packing material. There is an important difference between this example and Martin’s: the packing material would prevent the breaking of the glass not by removing its disposition to break when struck but by blocking the process that would otherwise lead from striking to breaking.
Choi, Sungho and Michael Fara, “Dispositions”, The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.).
I would not grant that the packing material prevents the glass from breaking merely by blocking the process that would otherwise unfold if the glass were exposed. The packing material has its own properties and dispositions, which we have discovered through trial and error, making this material good at protecting glass. Packing paper was more common, but now we have bubble wrap and heavy duty degradable stretch wrap, also capable of protecting glass, china, porcelain, and other fragile items. The dispositions of these protective materials readily explain why their encompassing of fragile objects protects them from incidental striking or drops. If I were, however, to throw a wrapped coffee mug as hard as I can toward a brick wall, the mug is likely to break. This entails that variables are important in this thing we call cause and effect.
A perfect example is simple collisions of the sort one learns about in an elementary physics course. If a tractor-trailer speeding down a highway in one direction at ~145 km/h and a sedan traveling in the opposite direction at a cruising speed of ~89 km/h collide, we can readily predict the outcome and that this particular collision is inelastic. The speeding truck would likely barrel through the sedan and the sedan will be pushed in the direction the truck was traveling in. The vehicles’ respective speeds and masses are extremely important in understanding what goes on here. There is no sense in which we can say that trucks just have a power to mow things down, because a collision between the truck in our original example and a tractor-trailer driving at roughly the same speed in the opposite direction results in an entirely different outcome, a perfectly inelastic collision in which both trucks come to an immediate halt after the effects of the impact are fully realized.
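The role of speeds and masses here can be made concrete with conservation of momentum. In the sketch below, the vehicle masses (30,000 kg for the truck, 1,500 kg for the sedan) are illustrative assumptions of mine, not figures from the text:

```python
# Conservation of momentum for head-on collisions in which the vehicles
# lock together after impact (perfectly inelastic collisions).
# Vehicle masses are illustrative assumptions.

KMH = 1 / 3.6  # conversion factor: km/h to m/s

def final_velocity(m1, v1, m2, v2):
    """Common final velocity (m/s) of two bodies that collide and stick:
    (m1*v1 + m2*v2) / (m1 + m2), from momentum conservation."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

TRUCK, SEDAN = 30_000, 1_500  # kg (assumed)

# Truck at +145 km/h meets sedan at -89 km/h: the truck's momentum
# dominates, so the wreckage keeps moving in the truck's direction.
v_mixed = final_velocity(TRUCK, 145 * KMH, SEDAN, -89 * KMH)

# Two identical trucks at equal and opposite speeds: total momentum is
# zero, so the wreckage halts.
v_trucks = final_velocity(TRUCK, 145 * KMH, TRUCK, -145 * KMH)

print(v_mixed)   # ≈ 37 m/s, in the truck's direction
print(v_trucks)  # 0.0
```

Either outcome falls out of the masses and velocities alone; there is no residual explanatory work left for a “power” to do.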
Neo-Aristotelian analyses of powers give us nothing that is in keeping with physics. What these explanations demand is something they imagine happening behind the veil of what science has already explained. There are just dispositions, and what is needed is a more critical analysis of what is entailed across each instance of cause and effect. Power ontologies beg the question, in any case, because they require dispositions to make sense of powers. That is because powers are just a cursory analysis of cause-effect relationships, a way of paraphrasing that is overly simplistic and, ultimately, not analytical enough. Power ontologies, along with talk of dynamism, which properly belongs to Nietzsche, not Aristotle, severely undermine the Neo-Aristotelian project. Nietzsche’s diagnosis of causation makes this clear:
Cause and effect: such a duality probably never exists; in truth we are confronted by a continuum out of which we isolate a couple of pieces, just as we perceive motion only as isolated points and then infer it without ever actually seeing it. The suddenness with which many effects stand out misleads us; actually, it is sudden only for us. In this moment of suddenness there is an infinite number of processes that elude us. An intellect that could see cause and effect as a continuum and a flux and not, as we do, in terms of an arbitrary division and dismemberment, would repudiate the concept of cause and effect and deny all conditionality.
Nietzsche, Friedrich W, and Walter Kaufmann. The Gay Science: With a Prelude in Rhymes and an Appendix of Songs. New York: Vintage Books, 1974. 173. Print.
Nietzsche describes a continuum and a flux, in other words, a dynamism thoroughly unlike what can be attributed to Aristotle’s theory of causation. So the fact that Neo-Aristotelians even speak of a dynamism feels like a sort of plagiarism, since they are associating the idea of a dynamism with a thinker who said nothing to that effect. Nietzsche is critical of Aristotle’s causal-teleological marriage and can be seen as explicitly accusing Aristotle, and also Hume, of arbitrarily splicing a dynamic continuum in an ad hoc manner that does not find justification in metaphysical ideas. If Nietzsche had been properly exposed to modern science, he would probably agree that this splicing does not find justification in physical ideas either. The hard sciences confirm a continuum, revealing complex processes from which predictable results follow. There is just no sense in which we can apply any theory of causation to a chemical reaction. What figure in these reactions are the properties and dispositions of the elements involved, and how those elements are constituted explains why we get one reaction or another. Any talk of dynamisms is properly Nietzschean in spirit and, as should be clear from his words, there is no invocation of powers.
Suffice it to say that a deeper analysis of dispositions also explains away the mimicker problem. Styrofoam plates simply do not break in the way glass plates do, and their underlying composition explains why that is. Ultimately, Neo-Aristotelians are not in a good position to get to the bottom of what we call cause and effect. Aside from the difficulties Backmann sheds light on, the notion of powers is incoherent and lacking in explanatory power, especially at levels requiring deeper analysis. Predictably, I can see Neo-Aristotelians invoking an infinite regress of sorts. In other words, is it simply the composition of the glass interacting with the composition of a hardwood floor that results in the glass shattering, or is there more to the story? To that I would respond that events like these happen within a causally closed space-time system. It is then that we will be asked: who or what decided that a glass cup should break on impact when landing on a hardwood floor? Well, who or what decided that a compound fracture of the tibia is expected given that it receives a strong enough blow from an equally dense or denser object? The Neo-Aristotelian will keep pushing the question back, demanding deeper levels of analysis, effectively moving the goalposts. What will remain is that there is no intelligence that decided on these things, i.e., there is no teleological explanation involved in these cases, because then they would have to account for undesired ends like broken bones.
In the end, I think that the deepest level of analysis will involve a stochastic process in which degrees of probability encompass possible outcomes. Not every blow leads to a broken tibia. Dropping a glass cup on just any surface is not enough to crack or shatter it. There are cases in which angular momentum imparted by a human foot can change a falling glass cup’s trajectory just enough to ensure that it does not break upon hitting the ground. I have met people quite adept at breaking these kinds of falls with a simple extension of their foot. As such, probabilities will change given the circumstances on a case by case basis. This element of chance at the deepest level of analysis coheres perfectly with the universe we find ourselves in because even the fact that we are beings made of matter, as opposed to beings made of anti-matter, is due to chance. Apparently, God has always rolled dice. On this, I will let Lawrence Krauss have the last word:
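The stochastic picture can be illustrated with a toy simulation in which each drop is a chance event whose probability of breakage shifts with the circumstances. Every probability below is invented purely for illustration:

```python
import random

# Toy model: whether a dropped glass breaks is a Bernoulli trial whose
# success probability depends on the circumstances of the fall.
# All probabilities here are made up for illustration only.
BREAK_PROB = {
    "hardwood floor": 0.85,
    "mattress": 0.02,
    "hardwood floor, fall broken by a foot": 0.30,
}

def drop_glass(surface, rng):
    """One trial: True if the glass breaks under these circumstances."""
    return rng.random() < BREAK_PROB[surface]

rng = random.Random(42)  # fixed seed for reproducibility
trials = 10_000
for surface, p in BREAK_PROB.items():
    rate = sum(drop_glass(surface, rng) for _ in range(trials)) / trials
    print(f"{surface}: observed break rate ≈ {rate:.3f} (assumed {p})")
```

The observed rates hover around the assumed probabilities, which is the whole point: the outcome of any single drop is chancy, while the distribution of outcomes is fixed by the circumstances.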
Because antiparticles otherwise have the same properties as particles, a world made of antimatter would behave the same way as a world of matter, with antilovers sitting in anticars making love under an anti-Moon. It is merely an accident of our circumstances, due, we think, to rather more profound factors…that we live in a universe that is made up of matter and not antimatter or one with equal amounts of both.
Krauss, Lawrence. A Universe From Nothing: Why There Is Something Rather Than Nothing. 1st ed. New York, NY: Free Press, 2012. 61. Print.
By R.N. Carmona
Before setting out to formulate Gordon’s “Argument From The Incompleteness of Nature,” a general note is in order. After years of dealing with the more common arguments for God, e.g., the Kalam Cosmological, Moral, Fine-Tuning, Teleological, and Ontological arguments, I began to notice that such arguments collapse when the complexity of the facts is analyzed. For instance, P1 of the Moral Argument states that “If God does not exist, objective values and duties do not exist.” This has proved to be the most controversial premise of the argument, but analyses of what is meant by objective, values, and duties lead us in directions where we can apprehend morality along these lines without God being necessarily involved. What I’m noticing now about more complex Theistic arguments is that they collapse when the simplicity of the facts is put on the table, i.e., when simple considerations are taken into account. This also applies to Gordon’s argument. To see what I mean, it will be necessary, first and foremost, to frame Gordon’s argument.
G1 “Quantum mechanics reveals a genuine ontological indeterminacy and incompleteness present in nature” (Gordon, Bruce L.. The Necessity of Sufficiency: The Argument From The Incompleteness of Nature. Two Dozen (or so) Arguments for God: The Plantinga Project, Edited by Walls, Jerry L. & Dougherty Trent. Oxford: Oxford University Press, 2018. 420. Print.)
G2 “Since all physical cause-and-effect relations are local, however, the completeness of quantum theory implies the causal-ontological incompleteness of physical reality: the universe is shot through with mathematically predictable non-local correlations that, on pain of experimental contradiction, have no physical explanation” (Gordon, 421)
G3 “Quantum theory raises fundamental questions about the coherence of material identity, individuality, and causality that pose a prima facie problem for naturalistic metaphysics” (Gordon, 423)
G4 (By way of inference) it is probable that all naturalistic interpretations of quantum mechanics contain conceptual shortcomings (Gordon, 423-429)
GC1 Therefore, “a theistic variant of the Copenhagen interpretation brings metaphysical completion to quantum theory so as to resolve the fundamental puzzle” (Gordon, 423)
GC2 Therefore, “God’s existence and continuous activity is the best explanation for the reality, persistence, and coherence of natural phenomena, and the account of divine action best meeting this explanatory demand is a form of occasionalist idealism” (Gordon, 436)
Gordon also condenses his argument as follows:
Now, in quantum physics we are confronted with a situation in which material causality falls irremediably short of explanatory demand, for there is no collection of physical variables jointly sufficient to the explanation of irreducibly probabilistic quantum outcomes. On pain of postulations to the contrary refuted by experimental violations of Bell inequalities, an ontological gap exists in the causal structure of physical reality that no collection of material causes can be offered to fill. So if a prior commitment to metaphysical naturalism constrains us, no non-naturalistic (transcendent) explanation is available to bridge this gap, and we must embrace the conclusion that innumerable physical events transpire without a sufficient cause, that is, for no explanatorily sufficient reason. In short, Copenhagen orthodoxy, framed in a purely physical context, entails a denial of the principle of sufficient reason (PSR) understood as the general maxim that every contingent event has an explanation. (425)
Right away, one can see how G1 through G3 hold insofar as scientific ignorance remains the case. But first, it will be useful to take note of what motivates Gordon to think that there is any truth to these premises. His primary motivations are informed by what he thinks is the inability of physicists to solve the measurement problem and that, at least from what he interprets is a fault of naturalism, quantum interpretations violate the Principle of Sufficient Reason (PSR) and/or are metaphysically implausible. If Gordon can draw his conclusions by way of induction, by ruling out particular interpretations yet to be offered on the basis of the shortcomings of six more general interpretations, then a naturalist has more warrant to rule out Theism by way of induction, by highlighting the many failures of Theism to square with scientific facts and its many more failures to offer sound philosophical arguments. God was once a local deity, intimately involved in matters far more mundane than quanta. It was widely believed that God created the Earth, not via the gradual work of physical laws, but as intimately as a potter forms his vase. Christians of the past even set out to prove God’s involvement in the world. Donald Prothero gives us a prime example:
Other geologists and paleontologists followed Cuvier’s lead and tried to describe each layer with its distinctive fossils as evidence of yet another Creation and Flood event not mentioned in the Bible. In 1842, Alcide d’Orbigny began describing the Jurassic fossils from the southwestern French Alps and soon recognized 10 different stages, each of which he interpreted as a separate non-Biblical creation and flood. As the work continued, it became more and more complicated until 27 separate creations and floods were recognized, which distorted the Biblical account out of shape. By this time, European geologists finally began to admit that the sequence of fossils was too long and complex to fit it with Genesis at all. They abandoned the attempt to reconcile it with the Bible. Once again, however, these were devout men who did not doubt the Bible and were certainly not interested in shuffling the sequence of fossils to prove Darwinian evolution (an idea still not published at this point). They simply did not see how the Bible could explain the rock record as it was then understood.
Prothero, Donald. Evolution: What the Fossils Say and Why it Matters. New York: Columbia University Press, 2007. 56-57. Print.
Going over the litany of examples throughout history is not necessary because Theism’s lack of explanatory success informs the behavior of today’s Theists. Therefore, it suffices to point out that Theists have gone from asserting that God is intimately involved in every aspect of reality, in addition to positing that the Bible renders an infallible account of many historical events, including a global flood, to relegating God to the outskirts of human knowledge where the refulgence of science remains unfelt: as hidden somewhere before the Big Bang, as active solely in quantum phenomena that evade the experiences of even the most devout believers, and as grounds for some explanation of human consciousness that allows for the continuance of consciousness after death, i.e., a philosophy of mind that entails the existence of the soul, e.g., Cartesian dualism, Aristotelian hylomorphism, panpsychism. Gordon’s argument is a prime example of this retreat to the far reaches of scientific ignorance, hoping with all his might that he will find God at the fringes of reality. If naturalism has pushed Theism this far, then it is safe to say that Theism is teetering on the edge, that any arguments Theists put forth now are highly likely to fail, and that it is only a matter of time before Theism plunges into the abyss.
Before exposing glaring issues with Gordon’s conclusion, I will go over issues with his analysis of the many worlds interpretation (MWI) and the Ghirardi-Rimini-Weber spontaneous collapse interpretation (GRWI). Then I will provide an overview of two interpretations that circumvent the measurement problem and one of its entailments, the observer effect. Prior to that, there are already issues with his analysis of the PSR that sound suspiciously like Plantinga’s EAAN or, worse, Lewis’ Argument Against Naturalism. Gordon states:
Suppose, among all of the events that happen in the universe, there are countless many that happen without cause or reason. If this were true, we would have no principled way of telling which events were caused and which were not, for events that appeared to have a cause might, in fact, lack one. Our current perceptual states, for example, might have no explanation, in which case they would bear no reliable connection to the way the world is. So if the PSR were false, we could never have any confidence in our cognitive states. (425)
It is important to note that scientists are only concerned about causes inasmuch as they have explanatory power. If a cause does no explanatory work, then it does not help them to get a better understanding of a given phenomenon. Think of Nancy Cartwright’s $1,000 bill descending in St. Stephen’s Square. Scientists simply do not care to arrive at a model that accurately predicts where the bill will land or, more precisely, one that captures its exact movements through the air prior to landing. This particular example, which involves any number of difficult-to-quantify variables, e.g., bird droppings hitting the bill on the way down, dust particles slightly changing the bill’s trajectory, wind speeds, does not help scientists better understand drift, free fall, etc. Physicists already have general principles that help them understand how, for instance, a basketball succumbs to the Magnus effect. A disposition of the ball, in particular its shape, makes it susceptible to this effect, whereas the dispositions of the bill guarantee that it will drift wildly during the entirety of its descent to the ground.
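The contrast between the ball and the bill can even be put in rough numbers with the standard terminal-velocity formula, v_t = sqrt(2mg / (ρ·Cd·A)). The masses, drag coefficients, and areas below are ballpark assumptions of mine, not measured values:

```python
import math

G = 9.8      # m/s^2, acceleration of gravity
RHO = 1.225  # kg/m^3, air density near sea level

def terminal_velocity(mass, drag_coeff, area):
    """Steady fall speed (m/s) at which drag balances weight:
    v_t = sqrt(2*m*g / (rho * Cd * A))."""
    return math.sqrt(2 * mass * G / (RHO * drag_coeff * area))

# A basketball: compact shape, well-characterized drag, so its descent is
# predictable. (Mass and radius are rough assumptions.)
v_ball = terminal_velocity(mass=0.62, drag_coeff=0.47, area=math.pi * 0.12**2)

# A bill falling flat: tiny mass spread over a large area with high drag,
# so it slows to a crawl almost immediately, and every gust, tumble, or
# dust particle dominates its path. (Values are rough assumptions.)
v_bill = terminal_velocity(mass=0.001, drag_coeff=1.3, area=0.0103)

print(v_ball)  # ≈ 22 m/s
print(v_bill)  # ≈ 1.1 m/s
```

The order-of-magnitude gap is the whole story: the ball’s motion is dominated by well-understood principles, while the bill’s is dominated by the unquantifiable incidentals, which is precisely why no one bothers to model it.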
Any event appearing to be caused does not immediately invite scientific scrutiny. Only events that do explanatory work or are suspected of having some explanatory power over a given effect, specifically in relation to a theory or model, are worth examining. In any case, it does not follow from the possibility that the PSR is false that our perceptual states have no explanation or cause. Therefore, the claim that we can have no confidence in our perceptual states is a complete non sequitur. Neuroscientists, cognitive scientists, and psychologists have done plenty of work to show that our perceptual states do have explanations, regardless of whether the PSR is true or not. Thus, if the PSR turns out not to be the case, our perceptual states are not among events lacking a cause or an explanation.
A general note of relevance is in order. Gordon’s citations are mostly decades old, which any peer reviewer in philosophy would immediately be suspicious of. Of the Many Worlds Interpretation, Gordon states: “So which way of building the universal wavefunction is to be preferred? This difficulty, known as the “preferred basis problem,” reveals that the branching process itself is completely arbitrary from a mathematical standpoint and therefore, from the abstract point of view presupposed by the MWI, not reflective of any physical reality” (427). Setting aside the non sequitur, “not reflective of any physical reality,” his primary authority informing this statement, namely David Wallace in 2003, no longer considers preferred basis to be an issue. Gordon would know that if he had read Wallace’s 2010 paper “Quantum Mechanics on Spacetime I: Spacetime State Realism,” wherein he states:
We might sum up the objection thus: wave-function realism requires a metaphysically preferred basis… This objection is probably most significant for Everettians, who generally regard it as a virtue of their preferred interpretation that it requires no additional formalism, and so are unlikely to look kindly on a requirement in the metaphysics for additional formalism. Advocates of dynamical-collapse and hidden-variable theories are already committed to adding additional formalism, and in fact run into problems in QFT for rather similar reasons: there is no longer a natural choice of basis to use in defining the collapse mechanism or the hidden variables. We are not ourselves sanguine about the prospects of overcoming this problem; but if it were to be overcome, the solution might well also suggest a metaphysically preferred basis to use in formulating a QFT version of wave-function realism.
Wallace, David, and Christopher G. Timpson. “Quantum Mechanics on Spacetime I: Spacetime State Realism.” The British Journal for the Philosophy of Science, vol. 61, no. 4, 2010, pp. 697–727. https://arxiv.org/pdf/0907.5294.pdf. Accessed 1 Feb. 2021.
Lev Vaidman, Professor at the School of Physics and Astronomy in Tel Aviv, corroborates this: “due to the extensive research on decoherence, the problem of preferred basis is not considered as a serious objection anymore” (Vaidman, Lev, “Many-Worlds Interpretation of Quantum Mechanics”, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), Fall 2018, https://plato.stanford.edu/archives/fall2018/entries/qm-manyworlds/).
Gordon raises a second difficulty for the MWI: “The second difficulty lies in its treatment of quantum probabilities” (Ibid.). Worse than using outdated sources is Gordon’s misrepresentation of a source that actually disagrees with his statement. Simon Saunders, in “Chance in the Everett interpretation,” actually states: “To conclude: there is no good reason to think EQM is really a theory of overlapping worlds. If questions of overlap of branches are to be settled by appeal to the underlying mathematics, in terms of vector space structure, then there is at least one natural mereology in terms of which worlds that differ in some feature, since orthogonal, are non-overlapping” (Saunders, Simon (2010). Chance in the Everett interpretation. In Simon Saunders, Jonathan Barrett, Adrian Kent & David Wallace (eds.), Many Worlds?: Everett, Quantum Theory & Reality. Oxford University Press.). Saunders attempts to “solve the problem without introducing additional structure into the theory” (Vaidman, Ibid.) and yet Gordon tells his reader to “see Saunders et al. 2010 for extensive polemics regarding it” (Ibid.). This is an egregious level of malpractice that can only be explained by his desperation to prove his belief in God.
Turning now to his analysis of GRWI, the prospects for his argument do not improve. Gordon states of GRWI: “The problem is that it cannot be rendered compatible with relativity theory or extended to the treatment of quantum fields in this form” (Ibid.); “the theory remains radically non-local and has the additional drawback of eliminating the possibility of particle interactions and thus any physics of interest” (Ibid.); and “there are no versions of the theory in which the collapse is complete, with the consequence that all “material” objects have low-density copies at multiple locations, the presence and effect of which linger forever in the GRWI wavefunction” (Ibid.). The first and third concerns are not an issue for GRWI. The first issue simply restates the more general difficulty physicists have had with reconciling quantum mechanics and general relativity; this would then be an issue for the entire enterprise of quantum mechanics, so we would essentially be tossing the bath water, baby and all! The third issue is an appeal to ignorance. That there is currently no version of GRWI offering a collapse that is complete does not mean that scientists ought to give up on the search for a version containing a complete collapse. This leaves the second concern, which is addressed in Tejinder Singh’s 2018 paper “Space and Time as a Consequence of GRW Quantum Jumps,” where he deploys GRWI to solve the measurement problem. Singh states:
This classical metric is in turn produced by classical bodies, according to the laws of general relativity. And classical bodies are themselves the result of GRW localisation. Thus it is not reasonable to assume space to exist prior to the GRW quantum jumps. Rather, it seems quite intuitive that space results from GRW collapses taking place all over the universe. Space is that which is between collapsed objects. No collapse, no space. This also helps us understand why the GRW jumps take place in space: it is because space in the first place is created because of these jumps. (Singh, Tejinder. “Space and time as a consequence of GRW quantum jumps.” Zeitschrift für Naturforschung A 73 (2018): 923. https://arxiv.org/pdf/1806.01297.pdf. Accessed 1 Feb. 2021.)
Singh considers Hilbert space as more fundamental than classical space, so these GRW jumps occurring in Hilbert space give rise to the classical fabric of space we are accustomed to. He posits that the wave function is contingent on the configuration space where the particle moves through time, to potentially infinite degrees of freedom. This then results in a complete collapse of the wave function. Gordon’s hasty conclusion no longer holds if Singh has succeeded in offering a version of GRWI containing a complete collapse of the wave function.
This is setting aside the fact that Gordon overlooked what many consider an updated or even upgraded version of MWI, namely the Many Interacting Worlds Interpretation (MIWI). The MIWI differs from the MWI in that all quantum phenomena are the result of an inter-universal repulsive force acting on worlds in close proximity to one another, thus explaining any dissimilarity between them. Michael Hall et al. conclude that the MIWI can reproduce quantum interference phenomena, in addition to offering advantages with respect to computational modeling. They note that on the de Broglie–Bohm Interpretation, the wave function, denoted by Ψ, even at very large values, allows computer modeling to focus on high-density regions in configuration space, specifically regions where calculation errors have to be corrected to analyze convergence given norms of angular momentum (see Hall, Michael J. W., Deckert, Dirk-André, and Wiseman, Howard M. Quantum Phenomena Modeled by Interactions between Many Classical Worlds. Physical Review X, 2014; 4 (4). DOI: 10.1103/PhysRevX.4.041013).
There is also the Lindgren-Liukkonen Interpretation (LLI), championed by two quantum physicists who take Ockham’s Razor seriously. Given this, their quantum interpretation is a statistical interpretation that solves the observer effect. In other words, there is no logical reason, to their minds, why the results of a measurement should depend on an observer. They dispense with the notion of a conscious observer changing the result of measurements. The LLI shows that any epistemological and ontological issues that stem from the uncertainty principle are solved given that the uncertainty principle is a fixed property of stochastic mechanics (see Lindgren, Jussi and Liukkonen, Jukka. The Heisenberg Uncertainty Principle as an Endogenous Equilibrium Property of Stochastic Optimal Control Systems in Quantum Mechanics. Symmetry, 2020; 12 (9): 1533. DOI: 10.3390/sym12091533).
Gordon not only failed to rule out enough interpretations of quantum mechanics to make his conclusion more likely, but he failed to rule out the best defenses of, at least, two of the interpretations he is skeptical about. The larger issue for Gordon is that even if he managed to rule out, say, twenty interpretations in quantum mechanics, his conclusion simply does not follow, and if it did, there are simple considerations that render it untenable. Recall: “God’s existence and continuous activity is the best explanation for the reality, persistence, and coherence of natural phenomena, and the account of divine action best meeting this explanatory demand is a form of occasionalist idealism” (Gordon, 436). It follows from this that God’s existence and continuous activity is the best explanation for the reality, persistence, and coherence of viruses, diseases, natural disasters, and pretty much any undesired consequence a Theist can imagine. Clearly, Gordon does not want to admit these natural phenomena into his conclusion, choosing instead to special plead for any cases he thinks suit his argument. In other words, one of his concerns applies better to his own position: Suppose, among all of the events that happen in the universe, there are countless many that happen without God’s continuous activity, e.g., pretty much all the bad stuff. If this were true, we would have no principled way of telling which events were caused by his activity and which were not, for events that appeared to have been caused by God, in fact, were not. It is far more probable, therefore, that God has no hand in any event in the natural world, not even granting a retreat into the quantum realm.
Ultimately, if a Theist wants to continue to assert that God has a hand in the unification of quantum and classical phenomena, they need to take a different route than Gordon has. Gordon severely undermines his own project by using outdated sources, being completely unaware of the fact that one of the authors of one of his primary sources changed their mind and actually proved the opposite of what seemed to lend a hand to Gordon’s argument, and overlooking a number of interpretations that may provide a stable and complete collapse of the wave function, thus solving quantum paradoxes, like the measurement problem and related observer effect. More damning to such arguments is that if a personal, loving deity saw fit to retreat to the far reaches of metaphysical reality, then he can have no desire to be known or detected even by people who are hopelessly devoted and attached to him. Quanta lie so far outside of the everyday experience of human beings that the idea that God is asking us to pursue him into the microcosms of the quanta is, quite frankly, nonsensical. It makes more sense that retreats like Gordon’s, into profoundly metaphysical territory, have everything to do with Theism’s failure to square with science and to offer philosophical arguments or proofs that are sound or, at the very least, cogent and without controversy. This is precisely the prognosis for Theism, and the relentless advances of science and philosophy, closely in tow, do not look poised to provide any remedy. Gordon’s argument, while complex, completely collapses in the face of simple considerations, which is a happy irony given his claims about the quantum wave function.
By R.N. Carmona
I have submitted a paper to Philosophical Studies addressing Dustin Crummett and Philip Swenson’s paper. Admittedly, this is my first attempt at publishing in a philosophy journal. I took a swing with no guidance, no co-author, and no funding. There is of course a chance it gets rejected, but I am hoping for the best. In any case, I think my paper provides heuristics for anyone looking to refute Evolutionary Moral Debunking Arguments like Crummett and Swenson’s. Let us turn to how I dissect their argument.
They claim that their Evolutionary Moral Debunking Argument Against Naturalism (EMDAAN) stems from Street’s and Korman and Locke’s EMDAs. Those EMDAs target moral realism, while Crummett and Swenson’s targets naturalism. The issue with theirs is that they grossly overlook the fact that neither Street nor Korman and Locke argue that naturalism is threatened by EMDAs. Street argues that her practical standpoint characterization of constructivism sidesteps any issues her EMDA might have presented for her naturalism. Korman and Locke target the minimalist response and, in a separate paper not cited by Crummett and Swenson, relativism. They do not target naturalism either.
At first glance, I compared Crummett and Swenson’s argument to Lewis’ long-defeated Argument Against Atheism. They state: “The problem for the naturalist here is that, if naturalism is true, it seems that the faculties responsible for our intuitions were formed through purely natural processes that didn’t aim at producing true beliefs” (Crummett & Swenson, 37). One can easily see how they paraphrase Lewis who says:
Supposing there was no intelligence behind the universe, no creative mind. In that case, nobody designed my brain for the purpose of thinking. It is merely that when the atoms inside my skull happen, for physical or chemical reasons, to arrange themselves in a certain way, this gives me, as a by-product, the sensation I call thought. But, if so, how can I trust my own thinking to be true? It’s like upsetting a milk jug and hoping that the way it splashes itself will give you a map of London. But if I can’t trust my own thinking, of course I can’t trust the arguments leading to Atheism, and therefore have no reason to be an Atheist, or anything else. Unless I believe in God, I cannot believe in thought: so I can never use thought to disbelieve in God. (Marsden, George M. C.S. Lewis’s Mere Christianity: A Biography. Princeton University Press, 2016. 89. Print.)
This is a known predecessor of Plantinga’s Evolutionary Argument Against Naturalism (EAAN). Therefore, the first angle I take in the paper is to show how Crummett and Swenson did not understand Street’s paper. Perhaps it is the sheer length of her excellent paper (over 50 pages) or perhaps they were so intent on addressing New Atheists that they overlooked her more robust approach to showing how anti-realism fares against EMDAs. I think her paper makes a lot more sense when read in conjunction with her overview of constructivism (see here). Bearing that in mind, I attempt to divorce Crummett and Swenson’s EMDAAN from Street’s EMDA against moral realism. Korman and Locke’s project is markedly different, but their work does not help Crummett and Swenson’s argument either.
With the EAAN now in focus, I show how Crummett and Swenson’s EMDAAN just is an iteration of the EAAN. The EAAN applies to general truths. Put simply, Plantinga argues that if we take seriously the low probability of evolution and naturalism being true despite the fact that our cognitive faculties formed from accidental evolutionary pressures, then we have a defeater for all of our beliefs, most notably among them, naturalism. Crummett and Swenson make exactly the same argument, the difference being that they apply it to specific beliefs, moral beliefs. Given that moral beliefs are a sub-category within the domain of all beliefs, their EMDAAN is an iteration of the EAAN. Here is an example I did not pursue in my paper, call it the Evolutionary Scientific Debunking Argument.
RC1 P(Sm/E&S) is low (The probability that our faculties generate basic scientific beliefs, given that evolution and science are true, is low.)
RC2 If one accepts that P(Sm/E&S) is low, then one possesses a defeater for the notion that our faculties generate basic scientific beliefs.
RCC Therefore, one possesses a defeater for one’s belief in science.
Perhaps I would be called upon to specify a philosophical view of science, be it realism or something else, but the basic gist is the same as Crummett and Swenson’s EMDA. I am, like them, targeting a specific area of our beliefs, namely our beliefs resulting from science. My argument is still in the vein of Plantinga’s EAAN and is a mere subsidiary of it.
After I establish the genealogy of Crummett and Swenson’s argument, I turn the EAAN on its head and offer an Evolutionary Argument Against Theism. If Plantinga’s argument holds sway and the Theist believes that evolution is true, he is in no better epistemic shape than the naturalist. Therefore, Plantinga’s conditionalization problem, which offers that P(R/N&E) is high iff there exists a belief B that conditionalizes on N&E, is an issue for Theists as well. In other words, perhaps the probability that our cognitive faculties are reliable given that evolution and naturalism are true increases iff there is an added clause in the conjunction. Put another way, the probability that our cognitive faculties are reliable, granting evolution, naturalism, and a successful philosophy of mind, is high. This successful philosophy of mind will have to show precisely how a brain that resulted from naturalistic evolutionary processes can generate the sort of consciousness capable of acquiring true beliefs. The Theist who says P(R/T&E) is high is begging the question because merely asserting that “God ensured that there would be some degree of alignment between our intuitions and moral truth” (Crummett & Swenson, 44) does not help the Theist avoid the conditionalization problem.
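The conditionalization problem just described can be put schematically; this is my own gloss, not Plantinga’s exact formalism, where R stands for the reliability of our cognitive faculties, N for naturalism, E for evolution, T for Theism, and B for the added clause (e.g., a successful philosophy of mind):

```latex
% Plantinga's claim, on the reading above:
P(R \mid N \wedge E) \text{ is low.}
% The conditionalization problem: the probability is high iff some
% belief B conditionalizes on N and E:
P(R \mid N \wedge E \wedge B) \text{ is high.}
% The Theist faces the parallel demand; merely asserting that
P(R \mid T \wedge E) \text{ is high, without supplying a } B, \text{ begs the question.}
```

On this rendering, the Theist’s appeal to divine alignment of our faculties names the desired outcome rather than supplying the belief B that raises the probability.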
With that established, and I cannot give too much away here because this is the novelty in my paper, I argue that the only recourse the Theist has, especially given that they have no intention of disavowing Theism, is to abandon their belief in evolution. They would have to opt, instead, for a belief in creationism or a close variant like intelligent design. In either case, they would then be left asserting that a Creationary Moral Confirming Argument in Favor of Theism is the case. I explore the litany of issues that arises if the Theist abandons evolution and claims that God’s act of creating us makes moral realism the case. Again, the Theist ends up between a rock and a hard place. Theism simply has far less explanatory power because, unlike naturalism, it does not account for our propensity to make evaluative errors and our inclination toward moral deviancy. If God did, in fact, ensure that our moral intuitions align with transcendent moral truths, why do we commit errors when making moral decisions and why do we behave immorally? Naturalism can explain both of these problems, especially given the role of reason under the moral anti-realist paradigm. Evaluative errors are, therefore, necessary to improve our evaluative judgments; reason is the engine by which we identify these errors and improve our moral outlook. The Theist would be back at square one, perhaps deploying the patently mythical idea of a Fall to account for the fact that humans are far from embodying the moral perfection God is said to have.
With Crummett and Swenson’s argument now thoroughly in Plantinga’s territory, I explore whether the anti-realist can solve the conditionalization problem. I suggest that evolution accounts for moral rudiments and then introduce the notion that cultural evolution accounts for reliable moral beliefs. Cooperation and altruism feature heavily in why I draw this conclusion. So P(Rm/E&MAR) (if evolution and moral anti-realism are true, the probability that our faculties generate evaluative truths) is high given that cooperation and/or altruism conditionalize on our belief that evolution and moral anti-realism are the case. We are left with P[(Rm/E&MAR) & (C v A)] or P[(Rm/E&MAR) & (C&A)]. In other words, if evolution and moral anti-realism are true, and cooperation and/or altruism conditionalize on our beliefs that evolution and moral anti-realism are the case, the probability that our faculties generate evaluative truths/reliable moral beliefs is high.
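Reading the slash as a standard conditional probability, the two formulations above might be transcribed as follows; this is my own rendering of the notation, with C for cooperation and A for altruism:

```latex
P\big(R_m \mid E \wedge MAR \wedge (C \vee A)\big) \text{ is high,}
\quad \text{or} \quad
P\big(R_m \mid E \wedge MAR \wedge (C \wedge A)\big) \text{ is high.}
```

The choice between the disjunctive and conjunctive clause tracks whether cooperation or altruism alone suffices to conditionalize on E and MAR, or whether both are required.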
Ultimately, like Moon, I think my paper will provide fertile ground for further discussion on the conditionalization problem. The jury is still out on whether the naturalist’s belief that evolution and naturalism are true even requires a clause to conditionalize on that belief. In any case, much can be learned about EMDAs against naturalism from the vast literature discussing Plantinga’s EAAN. I think that my arguments go a long way in dispensing with EMDAs in the philosophy of religion that target naturalism. When one considers that the Theist cannot account for moral truths without unsubstantiated assertions about God, it is easy to see how they are on less secure ground than the naturalist. If the Theist is a Christian or a Muslim, then they ought to be reminded that their scriptures communicate things about their gods that are not befitting of moral perfection. If the choice is between naturalism and the belief that a god who made parents eat their children is, despite all evidence to the contrary, morally perfect, I will take my chances with naturalism!
By R.N. Carmona
Weaver’s argument, although robust, commits what I think is a cardinal sin in philosophy: “An objection from logical considerations against atheism is one which attempts to show that some deliverance of logic is at odds with atheism or something strictly implied by atheism” (Weaver, C.G. (2019). Logical Objections to Atheism. In A Companion to Atheism and Philosophy, G. Oppy (Ed.). https://doi.org/10.1002/9781119119302.ch30). One should not get in the habit of drawing ontological conclusions on the basis of logical considerations and though Weaver makes a good attempt to justify his conclusion, there are too many areas in his composite argument that are vulnerable to attack. There are parts of his composite argument that are clearly stated in his own words, but other parts have to be sifted out from his discussions, specifically on logical monism and classical logical consequence (CLC). Also, the conclusion that atheism is false has to be gathered from his discussion following his claim that ontological naturalism is false.
A general note, prior to proceeding, is in order. Weaver’s paper is quite technical and not at all easy for the untrained eye to read, let alone understand, so I will endeavor to avoid technicality wherever possible; I will only permit pursuing one technical element because I disagree with Weaver’s treatment of supervenience, how he conveniently begs the question regarding reductionist materialism (if only to ensure that his argument is not met with immediate difficulty), and the conclusion he believes follows. More importantly, I think that the domestication of philosophy within the ivory towers of academia was a critical misstep that needs to be rectified. While analytic philosophy has its use, its abuse makes philosophy the slave of academic elites and therefore, keeps it well out of the reach of ordinary people. Philosophy, therefore, if it is to be understood by laypeople, needs to be communicated in ordinary, relatable language. Since my interest is to, first and foremost, communicate philosophy in an approachable way, I tend to avoid technicalities as much as possible. With that said, it is not at all necessary to quibble with Weaver’s logical proofs of validity (especially because validity matters much less than soundness) or with Williamson’s notion that contingentist statements can be mapped onto necessitist ones and vice versa, though Williamson maintains that “The asymmetry favours necessitism. Every distinction contingentists can draw has a working equivalent in neutral terms, but the extra commitments of necessitism allow one to draw genuine distinctions which have no working equivalents in neutral terms. If one wants to draw those distinctions, one may have to be a necessitist” (Williamson, T. “Necessitism, Contingentism, and Plural Quantification.” Mind 119 (2010): 657-748. 86. Web.).
Williamson and Weaver, following his cue, are both guilty of ignoring logical atomism, so ultimately, it does not matter if the validity of logical statements suggests that necessitism about mere propositions is probably true because ultimately, we are not talking about mere propositions but rather Sachverhalte, “conglomerations of objects combined with a definite structure” (Klement, Kevin, “Russell’s Logical Atomism”, The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.)). This is perhaps Weaver’s motivation for dismissing Carnap, who was anti-metaphysical. It can be argued, therefore, that reinstating metaphysics or overstating its importance is necessary for any argument against naturalism and/or atheism or conversely, for Theism, to get any traction. The fact remains, however, that propositions comprising a sound logical argument are dependent on real world experiences via the senses. The proposition “there is a cat” may speak to the fact that either i) one believes they have seen a cat in whatever space they find themselves in, ii) one knows and can confirm that there is a cat in their vicinity, or iii) there is presently a cat within one’s field of vision. While I grant that propositions can speak to entirely imaginary or, at least, hypothetical entities, all propositions rely on entities we have identified in our common tongue. Therefore, statements like “there is a cat” will always rely on content not necessarily entailed within a given proposition. There is still a question as to the context of such propositions and the preciseness of what one is trying to say.
Weaver’s Composite Argument Against Naturalism and Atheism, and Its Problems
With these preliminary concerns in our rearview, I can now turn to Weaver’s composite argument and provide a few avenues for the atheist to refute his argument.
W1 Since situationspf do not exist (“I will therefore be entitled to reject…the existence of situationsPF” (Weaver, 6).), situationsC exist.
W2 Given situationsC , classical logical consequence (CLC) is the case.
W3 From W2, necessitism is true.
W4 “If necessitism is true, then ontological naturalism is false.”
W5 “Necessitism is true.”
W6 “Therefore, ontological naturalism is false” (Weaver, 15).
W7 From W6, “Necessitism is true and modal properties are indispensable to our best physical theories.”
W8 If W7, “then there is a new phenomenon of coordination (NPC).”
W9 “Necessarily, (if there is an NPC, it has an explanation).”
W10 “Necessarily, [if possibly both (atheism is true and there is an NPC), then it is not possible that the NPC has an explanation]”.
C “Therefore, atheism is false” (Weaver, 18).
Setting aside that Weaver assumes that suitably precisified situations (situationspf) cannot exist and the problems he would face if just one instance of such a situation does exist, there is a way to show that even on the assumption that just classically precisified situations (situationsC) exist, it doesn’t follow that CLC holds. Weaver seems to think that CLC follows from a schema concerning mere validity: “A deductive argument is valid, just in case, there is no situation in which the premises are true and the conclusion false” (Weaver, 4). I think it is straightforwardly obvious that a typical non sequitur already violates this schema. Consider the following:
P1 If it is cloudy outside, there is a chance of precipitation.
P2 It is cloudy outside.
C Therefore, the Yankee game will be postponed.
The first two premises are true from my current perspective. In New York City, at this present hour, it is partly cloudy outside and there is, thus, a chance of precipitation. However, the conclusion is false because the New York Yankees are not even in Spring training and it is out of the norm for them to have a regular season home game in late January. The above argument can only prove true given at least one extra premise, along with the facts that it is spring rather than winter and that the MLB regular season is underway. This goes a long way in showing that propositions are usually missing crucial content and are true given specified context. Perhaps, then, Weaver should provide a different schema to ground CLC.
Weaver, unfortunately, does not give an adequate account of what he means by situationspf and what such situations would look like. It is enough to reiterate that the existence of even one such situation takes him back to square one. This is aside from the fact that a rejection of pluralism entails a rejection of arguments operating outside of classical logic, e.g., Plantinga’s Modal Ontological Argument, which rests on the axioms of S5 modal logic. A thorough rejection of free logical systems would limit Theists to the domain of classical logic, which will prove unforgiving since nothing like God seems operative in the real world.
Weaver’s dependence on situationsC and CLC proves problematic and is one place for an atheist to focus on. Another avenue for an atheist to take is W4 and W5. Is the notion that ontological naturalism is false conditional on necessitism being true? I do not think Weaver established that this premise is true. Furthermore, aside from exploring whether these clauses have a conditional relationship, one can simply ask whether necessitism is true. The jury is still out on whether necessitism or contingentism is the case, and there may yet be a synthesis or a handful of alternative positions that challenge both. Given the current state of the debate, I am uncommitted to either position, but I am suspicious of anyone siding with one for the sake of attempting to disprove a position they already assume is false, which, in Weaver’s case, are naturalism and atheism.
In plain language, the perspective of necessitists falls flat or appears to be saying something nonsensical. Williamson outlines where disagreement lies:
For instance, a contingentist typically holds that it is contingent that there is the Thames: there could have been no such river, and in those circumstances there would have been no Thames. By contrast, a necessitist typically holds that it is necessary that there is the Thames: there could have been no such river, but in those circumstances there would still have been the Thames, a non-river located nowhere that could have been a river located in England. Thus the contingentist will insist that necessarily if there is the Thames it is a river, while the necessitist allows at most that necessarily if the Thames is located somewhere it is a river. (Williamson, T. “Necessitism, Contingentism, and Plural Quantification.” Mind 119 (2010): 657-748. 9. Web.)
Contingentists deny the necessity of the Thames, whether river or not. These identity discussions extend further when one considers people. Manuel Pérez Otero explores this and tries to synthesize these two opposing points of view (see Otero, Manuel Pérez. “Contingentism about Individuals and Higher-Order Necessitism.” Theoria: An International Journal for Theory, History and Foundations of Science, vol. 28, no. 3(78), 2013, pp. 393–406. JSTOR, http://www.jstor.org/stable/23926328. Accessed 25 Jan. 2021.). Though Otero’s synthesis is tangential for our purposes, it shows that this binary Weaver thinks exists is one of his own making, essentially a false dichotomy. Given the issues necessitism presents for ordinary language, and the likelihood of one of its alternatives being true, it follows that necessitism is probably false. An exhaustive defense of a position I am not committed to is not at all required to show where Weaver has gone wrong.
This takes us to Weaver’s treatment of supervenience and his New Phenomenon of Coordination (NPC), which states:
Why is it that modal properties and notions enter the verisimilitudinous fundamental dynamical laws of our best and most empirically successful physical theories given that modal properties do not weakly supervene upon the physical or material? (or) How is it that the material world came to be ordered in such a way that it evolves in a manner that is best captured by modally laden physical theorizing or dynamical laws given that modal properties do not even weakly supervene upon the material and non-modal? (Weaver, 17)
If necessitism is probably false, then ontological naturalism still has a chance of being true. This is despite the fact that Weaver failed to show that the falsity of ontological naturalism is conditional on necessitism being true. A stronger route for him to have taken would have been to argue that ontological naturalism is false iff necessitism is true, because even if it turns out that necessitism is true, ontological naturalism can also be true. Weaver has not established that they are mutually exclusive. Therefore, an atheist can feel no pressure at all when confronted with NPC. This is setting aside that Weaver appears to be undisturbed by the incongruity of our scientific and manifest images. One would think a reconciliation is required before proclaiming that the material world is organized via modally laden physical theories and dynamic laws that supervene, whether strongly or weakly, on the material world.
The primary issue with Weaver’s assessment is the assumption that all atheists must be committed to reductionist materialism or physicalism to be a consistent ontological naturalist. There are alternative naturalisms that easily circumvent Weaver’s NPC because such a naturalist would not be committed to any version of supervenience. As an example, this naturalist can hold, to put it as simply as possible, that scientific theories and models are merely representations. Therefore, the modality of scientific theories need not supervene on the material world at all. Given a representationalist account of scientific theories, perhaps something like a reverse supervenience is the case.
□∀x∀y[∀F(Fx ≡ Fy) → □∀R(Rx ≡ Ry)]
Necessarily, for any entity x and for any entity y, [if for any material property F, (x has F, just in case, y has F), then necessarily, for any representational property R, (x has R, just in case, y has R)].
Scientific theories and models are, in other words, more akin to impressionist paintings than a group of modally laden propositions. This is a more commonsense view in that a scientific model is a portrait of the real world. While there is a feedback between the model and the material world, in that theories have to be tested against reality, theories and models are not conceived in a vacuum. Real world observations impose the postulates of a theory or render a portrait that we call a model. Ptolemy misconstrued planetary orbits and attributed their motions to invisible spheres rather than the ellipses we are familiar with. He was not far off the mark, especially given that there is an intangible involved, namely gravity, but his impression was inexact. This is what a representationalist account of scientific theories would look like and whether something like reverse supervenience is necessary does no real harm to the account.
The last route atheists can take is in Weaver’s conflation of atheism and naturalism. Though I am sympathetic to the conflation, like Nielsen, who stated, “Naturalism, where consistent, is an atheism” (Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 30. Print.), the same need not apply vice versa. In other words, the following statement need not be the case: “atheism, where consistent, is a naturalism.” While I am also partial to that statement, even going as far as defending it in Philosophical Atheism: Counter Apologetics and Arguments For Atheism, that gods do not exist does not entail that no immaterial beings can exist. It could be the case that no iteration of god exists, but that ghosts do. Weaver’s conflation seems to rest on the assumption that naturalism is the antithesis of supernaturalism. Naturalism is also opposed to paranormal phenomena, so there can be defeaters of naturalism that are not also defeaters of atheism. In other words, a definitive proof of the paranormal does not debase the thesis that gods do not exist. A definitive proof of one’s great grandma roaming the estate does not imply that God or any other god undeniably exists. Nielsen’s statement implies only that a disproof of atheism is also a disproof of naturalism, but this does not work in the other direction.
Ultimately, the composite argument above, one that I think is true to Weaver’s overall argument, fails to disprove ontological naturalism and atheism. There is far too much controversy in a number of places throughout his argument to regard it as convincing. The argument needs to be critically amended or entirely abandoned because in its present form, it does not meet its end. My rebuttal provides fertile ground for further exploration with respect to necessitism, contingentism, and any possible syntheses or alternatives, in addition to what is required to contradict naturalism and atheism. God, whether the idea Theist philosophers defend or a more common concept tied to a particular religion, is still resolutely resigned to silence, hiddenness, and outright indifference. Therefore, Theists have their own onus that must go beyond even a successful argument against naturalism and/or atheism.
By R.N. Carmona
Before starting my discussion of the first chapter of Neo-Aristotelian Perspectives On Contemporary Science, some prefatory remarks are in order. In the past, I might have committed to reading an entire book for purposes of writing a chapter by chapter review. With other projects in my periphery, I cannot commit to writing an exhaustive review of this book. That remains undecided for now. What I will say is that a sample size might be enough to confirm my suspicions that the Neo-Aristotelian system is rife with problems or even worse, is a failed system of metaphysics. I am skeptical of the system because it appears to have been recruited to bolster patently religious arguments, in particular those of modern Thomists looking to usher in yet another age of apologetics disguised as philosophy. I maintain that apologetics still needs to be thoroughly demarcated from philosophy of religion; moreover, philosophy of religion should be more than one iteration after another of predominantly Christian literature. With respect to apologetics, I am in agreement with Kai Nielsen who stated:
It is a waste of time to rehearse arguments about the proofs or evidences for God or immortality. There are no grounds — or at least no such grounds — for belief in God or belief that God exists and/or that we are immortal. Hume and Kant (perhaps with a little rational reconstruction from philosophers like J.L. Mackie and Wallace Matson) pretty much settled that. Such matters have been thoroughly thrashed out and there is no point of raking over the dead coals. Philosophers who return to them are being thoroughly retrograde. (Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 399-400. Print.)
The issue is that sometimes one’s hand is forced because the number of people qualified to rake dead coals is far fewer than the people rehashing these arguments. Furthermore, the history of Christianity, aside from exposing a violent tendency to impose the Gospel by force, also exposes a tendency to prey on individuals who are not qualified to address philosophical and theological arguments. Recently, this was made egregiously obvious by Catholic writer Pat Flynn:
So what we as religious advocates must be ready for is to offer the rational, logical basis—the metaphysical realism, and the reality of God—that so many of these frustrated, young people are searching for who are patently fed up with the absurd direction the secular world seems to be going. They’re looking for solid ground. And we’ve got it. (Flynn, Pat. “A Hole in The Intellectual Dark Web”. World On Fire Blog. 26 Jun 2019. Web.)
Unfortunately, against all sound advice and blood pressure readings, people like myself must rake dead coals or risk allowing Christians to masquerade as the apex predators in this intellectual jungle. I therefore have to say to the Pat Flynns of the world, no you don’t got it. More importantly, let young people lead their lives free of the draconian prohibitions so often imposed on people by religions like yours. If you care to offer the rational, logical basis for your beliefs, then perhaps you should not be approaching young people who likely have not had an adequate exposure to the scholarship necessary to understand apologetics. This is not to speak highly of the apologist, who typically distorts facts and evidence to fit his predilections, making it necessary to acquire sufficient knowledge of various fields of inquiry so that one is more capable of identifying distortions or omission of evidence and thus, refuting his arguments. If rational, logical discourse were his aim, then he would approach people capable of handling his arguments and contentions. That is when it becomes abundantly clear that the aim is to target people who are more susceptible to his schemes by virtue of lacking exposure to the pertinent scholarship and who may already be gullible due to existing sympathy for religious belief, like Flynn himself, a self-proclaimed re-converted Catholic.
Lanao and Teh’s Anti-Fundamentalist Argument and Problems Within The Neo-Aristotelian System
With these prefatory remarks out of the way, I can now turn to Xavi Lanao and Nicholas J. Teh’s “Dodging The Fundamentalist Threat.” Though I can admire how divorced Lanao and Teh’s argument is from whatever theological views they might subscribe to, it should be obvious to anyone, especially the Christian Thomist, that their argument is at variance with Theism. Lanao and Teh write: “The success of science (especially fundamental physics) at providing a unifying explanation for phenomena in disparate domains is good evidence for fundamentalism” (16). They then add: “The goal of this essay is to recommend a particular set of resources to Neo-Aristotelians for resisting Fundamentalist Unification and thus for resisting fundamentalism” (Ibid.). In defining Christian Theism, Timothy Chappell, citing Paul Veyne, offers the following:
“The originality of Christianity lies… in the gigantic nature of its god, the creator of both heaven and earth: it is a gigantism that is alien to the pagan gods and is inherited from the god of the Bible. This biblical god was so huge that, despite his anthropomorphism (humankind was created in his image), it was possible for him to become a metaphysical god: even while retaining his human, passionate and protective character, the gigantic scale of the Judaic god allowed him eventually to take on the role of the founder and creator of the cosmic order.” (Chappell, Timothy. “Theism, History and Experience”. Philosophy Now. 2013. Web.)
Thomists appear more interested in proving that Neo-Aristotelianism is a sound approach to metaphysics and the philosophy of science than in ensuring that the system is not at odds with Theism. The notion that God is the founder and creator of the cosmic order is uncontroversial among Christians and Theists more generally. Inherent in this notion is that God maintains the cosmic order and created a universe that bears his fingerprints; as such, physical laws are capable of unification because the universe exhibits God’s perfection. The universe is therefore, at least at its start, perfectly symmetric, already containing within it intelligible forces, including finely tuned parameters that result in human beings, creatures made in God’s image. Therefore, in the main, Christians who accept Lanao and Teh’s anti-fundamentalism have, inadvertently or deliberately, done away with a standard Theistic view.
So already one finds that Neo-Aristotelianism, at least from the perspective of the Theist, is not systematic in that the would-be system is internally inconsistent. Specifically, when a system imposes cognitive dissonance of this sort, it is usually a good indication that some assumption within the system needs to be radically amended or entirely abandoned. In any case, there are of course specifics that need to be addressed because I am not entirely sure Lanao and Teh fully understand Nancy Cartwright’s argument. I think Cartwright is saying quite a bit more and that her reasoning is mostly correct, even if her conclusion is off the mark.
While I strongly disagree with the Theistic belief that God essentially created a perfect universe, I do maintain that Big Bang cosmology imposes on us the early symmetry of the universe via the unification of the four fundamental forces. Cartwright is therefore correct in her observation that science gives us a dappled portrait, a patchwork stemming from domains operating very much independently of one another; as Lanao and Teh observe: “point particle mechanics and fluid dynamics are physical theories that apply to relatively disjoint sets of classical phenomena” (18). The problem is that I do not think Lanao and Teh understand why this is the case, or at least, they do not make clear that they know why we are left with this dappled picture. I will therefore attempt to argue in favor of Fundamentalism without begging the question although, like Cartwright, I am committed to a position that more accurately describes hers: Non-Fundamentalism. It may be that the gradual freezing of the universe, over the course of about 14 billion years, leaves us entirely incapable of reconstructing the early symmetry of the universe; I will elaborate on this later, but this makes for a different claim altogether, one that I take Cartwright to be making, namely that Fundamentalists are not necessarily wrong to think that fundamental unification (FU) is possible, but given the state of our present universe, it cannot be obtained. Cartwright provides us with a roadmap of what it would take to arrive at FU, thereby satisfying Fundamentalism, but the blanks need to be filled so that we get from the shattered glass that is our current universe to the perfectly symmetric mirror it once was.
Lanao and Teh claim that Fundamentalism usually results from the following reasoning:
We also have good reason to believe that everything in the physical world is made up of these same basic kinds of particles. So, from the fact that everything is made up of the same basic particles and that we have reliable knowledge of the behavior of these particles under some experimental conditions, it is plausible to infer that the mathematical laws governing these basic kinds of particles within the restricted experimental settings also govern the particles everywhere else, thereby governing everything everywhere. (Ibid.)
They go on to explain that Sklar holds that biology and chemistry do not characterize things as they really are. This is what they mean when they say Fundamentalists typically beg the question, in that they take Fundamentalism as a given. However, given Lanao and Teh’s construction of Cartwright’s argument, they can also be accused of fallacious reasoning, namely arguing from ignorance. They formulate Cartwright’s Anti-Fundamentalist Argument as follows:
(F1) Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.
(AF2) Classical mechanics has a limited set of principled models, so it only applies to a limited number of sub-domains.
(AF3) The limited sub-domains of AF2 do not exhaust the entire classical domain.
(AF4) From (F1), (AF2), and (AF3), the domain of classical mechanics is not universal, but dappled. (25-26)
On AF2, how can we expect classical mechanics to acquire more principled models than it presently has? How do we know that, if given enough time, scientists working on classical mechanics will not have come up with a sufficient number of principled models to satisfy even the anti-fundamentalist? That results in quite the conundrum for the anti-fundamentalist. Can the anti-fundamentalist provide the fundamentalist with a satisfactory number of principled models that exhaust an entire domain? This is to ask whether anyone can know how many principled models are necessary to contradict AF3. On any reasonable account, science has not had sufficient time to come up with enough principled models in all of its domains and thus, this argument cannot be used to bolster the case for anti-fundamentalism.
While Lanao and Teh are dismissive of Cartwright’s particularism, it is necessary for the correct degree of tentativeness she exhibits. Lanao and Teh, eager to disprove fundamentalism, are not as tentative, but given the very limited amount of time scientists have had to build principled models, we cannot expect for them to have come up with enough models to exhaust the classical or any other scientific domain. Cartwright’s tentativeness is best exemplified in the following:
And what kinds of interpretative models do we have? In answering this, I urge, we must adopt the scientific attitude: we must look to see what kinds of models our theories have and how they function, particularly how they function when our theories are most successful and we have most reason to believe in them. In this book I look at a number of cases which are exemplary of what I see when I study this question. It is primarily on the basis of studies like these that I conclude that even our best theories are severely limited in their scope. (Cartwright, Nancy. The Dappled World: A Study of The Boundaries of Science. Cambridge: Cambridge University Press, 1999. 9. Print.)
The fact that our best theories are limited in their scope reduces to the fact that our fragmented, present universe is too complex to generalize via one law per domain or one law that encompasses all domains. For purposes of adequately capturing what I am attempting to say, it is worth revisiting what Cartwright says about a $1,000 bill falling in St. Stephen’s Square:
Mechanics provides no model for this situation. We have only a partial model, which describes the 1000 dollar bill as an unsupported object in the vicinity of the earth, and thereby introduces the force exerted on it due to gravity. Is that the total force? The fundamentalist will say no: there is in principle (in God’s completed theory?) a model in mechanics for the action of the wind, albeit probably a very complicated one that we may never succeed in constructing. This belief is essential for the fundamentalist. If there is no model for the 1000 dollar bill in mechanics, then what happens to the note is not determined by its laws. Some falling objects, indeed a very great number, will be outside the domain of mechanics, or only partially affected by it. But what justifies this fundamentalist belief? The successes of mechanics in situations that it can model accurately do not support it, no matter how precise or surprising they are. They show only that the theory is true in its domain, not that its domain is universal. The alternative to fundamentalism that I want to propose supposes just that: mechanics is true, literally true we may grant, for all those motions whose causes can be adequately represented by the familiar models that get assigned force functions in mechanics. For these motions, mechanics is a powerful and precise tool for prediction. But for other motions, it is a tool of limited serviceability. (Cartwright, Nancy. “Fundamentalism vs. the Patchwork of Laws.” Proceedings of the Aristotelian Society, vol. 94, 1994, pp. 279–292. JSTOR, http://www.jstor.org/stable/4545199.)
Notice how even Cartwright alludes to the Theistic notion of FU being attributable to a supremely intelligent creator who people call God. In any case, what she is saying here does not speak to the notion that only the opposite of Fundamentalism can be the case. Even philosophers slip into thinking in binaries, but we are not limited to Fundamentalism or Anti-Fundamentalism; Lanao and Teh admit that much. There can be a number of Non-Fundamentalist positions that prove more convincing. In the early universe, the medium of water, and therefore, motions in water, were not available. Because of this, there was no real way to derive physical laws within that medium. Moreover, complex organisms like jellyfish did not exist then either and so, the dynamics of their movements were not known and could not feature in any data concerning organisms moving about in water. This is where I think Cartwright, and Lanao and Teh taking her lead, go astray.
Cartwright, for example, strangely calls for a scientific law of wind. She states: “When we have a good-fitting molecular model for the wind, and we have in our theory (either by composition from old principles or by the admission of new principles) systematic rules that assign force functions to the models, and the force functions assigned predict exactly the right motions, then we will have good scientific reason to maintain that the wind operates via a force” (Ibid.). Wind, unlike inertia or gravity, is an inter-body phenomenon: the heat from the Sun is distributed unevenly across the Earth’s surface. Warmer air near the equator rises and moves toward the poles while cooler air sinks and flows back toward the equator. Wind moves from areas of high pressure to areas of low pressure, and the boundary between these areas is called a front. This is why we cannot have a law of wind: aside from the complex systems on Earth, this law would have to apply to the alien systems on gas giants like Jupiter and Saturn. This point is best exemplified by the fact that scientists cannot even begin to comprehend why Neptune’s Dark Spot did a complete about-face. A law of wind would have to apply universally, not just on Earth, and would thus have to explain the behavior of wind on other planets. That is an impossible ask because the composition of other planets and their stars would make for different conditions that are best analyzed in complex models, accounting for as much data as possible, rather than a law attempting to generalize what wind should do assuming simple conditions.
Despite Cartwright’s lofty demand, her actual argument does not preclude Fundamentalism, whatever Lanao and Teh might have thought. Cartwright introduces a view that I think is in keeping with the present universe: “Metaphysical nomological pluralism is the doctrine that nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws” (Ibid.). I think it is entirely possible to get from metaphysical nomological pluralism (MNP) to FU if one fills in the blanks by way of symmetry breaking. Prior to seeing how symmetry breaking bridges the gap between MNP and FU, it is necessary to outline an argument from Cartwright’s MNP to FU:
(F1) Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.
(MNP1) Nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws.
(MNP2) It is possible that the initial properties in the universe allow these laws to be true together.
(MNP3) From (F1), (MNP1), and (MNP2), the emergence of different systems of laws from the initial properties in the universe implies that FU is probable.
Lanao and Teh agree that F1 is a shared premise between Fundamentalists and Anti-Fundamentalists. As a Non-Fundamentalist, I see it as straightforwardly obvious as well. With respect to our present laws, I think that FU may be out of our reach. As has been famously repeated, humans did not evolve to do quantum mechanics, let alone piece together a shattered mirror. This is why I am a Non- as opposed to an Anti-Fundamentalist; the subtle distinction is that I am neither opposed to FU being the case nor do I think it is false, but rather that it is extremely difficult to come by. Michio Kaku describes the universe as follows: “Think of the way a beautiful mirror shatters into a thousand pieces. The original mirror possessed great symmetry. You can rotate a mirror at any angle and it still reflects light in the same way. But after it is shattered, the original symmetry is broken. Determining precisely how the symmetry is broken determines how the mirror shatters” (Kaku, Michio. Parallel Worlds: A Journey Through Creation, Higher Dimensions, and The Future of The Cosmos. New York: Doubleday, 2005. 97. Print.).
If Kaku’s thinking is correct, then there is no way to postulate that God had St. Peter arrange the initial properties of the universe so that all of God’s desired laws are true simultaneously without realizing that FU is not only probable but true, however unobtainable it may be. The shards would have to pertain to the mirror. Kaku explains that Grand Unified Theory (GUT) Symmetry breaks down to SU(3) x SU(2) x U(1), which yields 19 free parameters required to describe our present universe. There are other ways for the mirror to have broken, to break down GUT Symmetry. This implies that other universes would have residual symmetry different from that of our universe and therefore, would have entirely different systems of laws. These universes, at minimum, would have different values for these free parameters, like a weaker nuclear force that would prevent star formation and make the emergence of life impossible. In other scenarios, the symmetry group can have an entirely different Standard Model in where protons quickly decay into anti-electrons, which would also prevent life as we know it (Ibid., 100).
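Kaku’s figure of 19 free parameters can be made concrete with a toy tally. The grouping below is one common bookkeeping convention for the Standard Model after GUT symmetry breaks to SU(3) x SU(2) x U(1); the category labels are my own, and only the counts, not the values, are the point:

```python
# A toy tally of the Standard Model's free parameters under one common
# bookkeeping convention. The values themselves are left out; the point
# is that 19 unexplained numbers remain once the symmetry has broken.
standard_model_free_parameters = {
    "fermion masses (6 quarks, 3 charged leptons)": 9,
    "CKM quark-mixing angles": 3,
    "CKM CP-violating phase": 1,
    "gauge couplings (strong, weak, hypercharge)": 3,
    "Higgs sector (mass, vacuum expectation value)": 2,
    "QCD theta angle": 1,
}

total = sum(standard_model_free_parameters.values())
print(total)  # 19
```

A universe whose mirror shattered differently would, on this picture, fill in a different dictionary: different counts, different values, or an entirely different list of keys.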
Modern scientists are then tasked with working backwards. The alternative to that is to undertake the gargantuan task, as Cartwright puts it, of deriving the initial properties, which would no doubt be tantamount to a Theory of Everything from which all of the systems of laws extend, i.e., hypothesizing that initial conditions q, r, and s yield the different systems of laws we know. This honors the concretism Lanao and Teh call for in scientific models while also giving abstractionism its due. As Paul Davies offered, the laws of physics may be frozen accidents. In other words, the effective laws of physics, which is to say the laws of physics we observe, might differ from the fundamental laws of physics, which would be, so to speak, the original state of the laws of physics. In a chaotic early universe, physical constants may not have existed. Hawking also spoke of physical laws that tell us how the universe will evolve if we know its state at some point in time. He added that God could have chosen an “initial configuration” or fundamental laws for reasons we cannot comprehend. He asks, however, “if he had started it off in such an incomprehensible way, why did he choose to let it evolve according to laws that we could understand?” (Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988. 127. Print.) He then goes on to discuss possible reasons for this, e.g., chaotic boundary conditions and anthropic principles.
Implicit in Hawking’s reasoning is that we can figure out what physical laws will result in our universe in its present state. The obvious drawback is that the observable universe is ~13.8 billion years old and ~93 billion light-years in diameter. The universe may be much larger, making the task of deriving this initial configuration monumentally difficult. This would require a greater deal of abstraction than Lanao and Teh, and apparently Neo-Aristotelians, desire, but it is the only way to discover how past iterations of physical laws or earlier systems of laws led to our present laws of physics. The issue with modern science is that it does not often concern itself with states in the distant past and so, a lot of equations and models deal in the present, and even the future, but not enough of them confront the past. Cosmologists, for purposes of understanding star formation, the formation of solar systems, and the formation of large galaxies, have to use computer models to test their theories against the past, since there is no way to observe the distant past directly. In this way, I think technology will prove useful in arriving at earlier conditions until we arrive at the mirror before it shattered. A model detailing how an early collision explains the shape of our galaxy is a fine example of what computer simulations can do to help illuminate the distant past.
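The computer-model approach can be sketched in miniature. What follows is a toy two-body gravitational integration, assuming arbitrary units (G = 1) and invented initial conditions; real cosmological simulations scale this basic idea up by many orders of magnitude:

```python
import numpy as np

def step(pos, vel, mass, dt, soft=0.1):
    """Advance one kick-drift step under softened Newtonian gravity (G = 1)."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                d = np.sqrt(np.dot(r, r) + soft**2)
                acc[i] += mass[j] * r / d**3
    vel = vel + acc * dt  # kick: update velocities from accelerations
    pos = pos + vel * dt  # drift: update positions from new velocities
    return pos, vel

# Two equal masses on a near-circular orbit about their common center.
pos = np.array([[1.0, 0.0], [-1.0, 0.0]])
vel = np.array([[0.0, 0.35], [0.0, -0.35]])
mass = np.array([0.5, 0.5])

for _ in range(100):
    pos, vel = step(pos, vel, mass, dt=0.01)
```

Running the same loop with a negative dt approximately rewinds the toy system, which is the simplest analogue of testing a hypothesized model against a past state.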
Further Issues With The Neo-Aristotelian System
A recent rebuttal to Alexander Pruss’ Grim Reaper Paradox can be generalized to refute Aristotelianism overall. The blogger over at Boxing Pythagoras states:
Though Alexander Pruss discusses this Grim Reaper Paradox in a few of his other blog posts, I have not seen him discuss any other assumptions which might underly the problem. He seems to have focused upon these as being the prime constituents. However, it occurs to me that the problem includes another assumption, which is a bit more subtle. The Grim Reaper Paradox, as formulated, seems to presume the Tensed Theory of Time. I have discussed, elsewhere, the reasons that I believe the Tensed Theory of Time does not hold, so I’ll simply focus here on how Tenseless Time resolves the Grim Reaper Paradox.
To see the difference between old and new tenseless theories, it is necessary to first contrast an old tenseless theory with a tensed theory holding that the properties of pastness, presentness, and futurity of events are ascribed by tensed sentences. The debate regarding which theory is true centered around whether tensed sentences could be translated by tenseless sentences that instead ascribe relations of earlier than, later than, or simultaneous with. For example, “the sun will soon rise” seems to entail the sun’s rising in the future, as an event that will become present, whereas “the sun is rising now” seems to entail the event’s being present and “the sun has risen” the event’s having receded into the past. If these sentences are true, the first sentence ascribes futurity whilst the second ascribes presentness and the last ascribes pastness. Even if true, however, that is not evidence to suggest that events have such properties. Tensed sentences may have tenseless counterparts having the same meaning.
This is where Quine’s notion of de-tensing natural language comes in. Rather than saying “the sun is rising” as uttered on some date, we would instead say that “the sun is rising” on that date. The present tense in the first sentence does not ascribe presentness to the sun’s rising, but instead refers to the date the sentence is spoken. In like manner, if “the sun has risen” as uttered on some date is translated into “the sun has risen” on a given date, then the former sentence does not ascribe pastness to the sun’s rising but only refers to the sun’s rising as having occurred earlier than the date when the sentence is spoken. If these translations are true, temporal becoming is unreal and reality is comprised of the relations earlier than, later than, and simultaneous with. Time then consists of these relations rather than the properties of pastness, presentness, and futurity (Oaklander, Nathan. Adrian Bardon ed. “A-, B- and R-Theories of Time: A Debate”. The Future of the Philosophy of Time. New York: Routledge, 2012. 23. Print.).
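Quine’s translation scheme is mechanical enough to sketch in code. The following toy function, with invented sentences and dates, maps a tensed utterance to a tenseless, date-indexed counterpart plus its B-relation to the utterance date:

```python
from datetime import date

# A toy illustration of Quine's de-tensing move: a tensed utterance is
# replaced by a tenseless, date-indexed sentence together with a
# B-relation (earlier than / later than / simultaneous with) that holds
# between the event date and the utterance date. The sentences and dates
# here are invented for illustration.

def detense(event, event_date, utterance_date):
    if event_date < utterance_date:
        relation = "earlier than"
    elif event_date > utterance_date:
        relation = "later than"
    else:
        relation = "simultaneous with"
    return f"{event} on {event_date.isoformat()} ({relation} the utterance date)"

print(detense("the sun rises", date(2020, 6, 1), date(2020, 6, 2)))
# the sun rises on 2020-06-01 (earlier than the utterance date)
```

Note that nothing in the output ascribes pastness to the event; the tense of the original utterance has been traded entirely for a date and a relation.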
The writer at Boxing Pythagoras continues:
On Tensed Time, the future is not yet actual, and actions in the present are what give shape and form to the reality of the future. As such, the actions of each individual future Grim Reaper, in our paradox, can be contingent upon the actions of the Reapers which precede them. However, this is not the case on Tenseless Time. If we look at the problem from the notion of Tenseless Time, then it is not possible that a future Reaper’s action is only potential and contingent upon Fred’s state at the moment of activation. Whatever action is performed by any individual Reaper is already actual and cannot be altered by the previous moments of time. At 8:00 am, before any Reapers activate, Fred’s state at any given time between 8:00 am and 9:00 am is set. It is not dependent upon some potential, but not yet actual, future action as no such thing can exist.
I think this rebuttal threatens the entire Aristotelian enterprise. Aristotelians will have to deny time while maintaining that changes happen in order to escape the fact that de-tensed theories of time, which are more than likely the correct way of thinking about time, impose a principle: any change at a later point in time is not dependent on a previous state. That is ignoring that God, being timeless, could not have created the universe at some time prior to T = 0, the first instance of time on the universal clock. This is to say nothing of backward causation, which is entirely plausible given quantum mechanics. Causation calls for a deeper analysis, which neo-Humeans pursue despite not being entirely correct. The notion of dispositions is crucial. It is overly simplistic to say the hot oil caused the burns on my hand or the knife caused the cut on my hand. The deeper analysis in each case is that the boiling point of cooking oil, almost two times that of water, has something to do with why the burn feels distinct from a knife cutting into my hand. Likewise, the dispositions of the blade have a different effect on the skin than oil does. Causal relationships, so described, are simplistic and, as Nietzsche suggested, do not account for the continuum within the universe and the flux that permeates it. Especially in light of quantum mechanics, we are admittedly ignorant about most of the intricacies within so-called causal relationships. Neo-Humeans are right to think that dispositions are important. This will disabuse us of appealing to teleology in the following manner:
‘The function of X is Z’ [e.g., the function of oxygen in the blood is… the function of the human heart is… etc.] means
(a) X is there because it does Z,
(b) Z is a consequence (or result) of X’s being there. (Larry Wright, ‘Function’, Philosophical Review 82(2) (April 1973): 139–68, see 161.)
It is more accurate to say that a disposition of X is instantiated in Z rather than that X exists for purposes of Z because in real world examples, a given X can give rise to A, B, C, and so on. This is to say that one so-called cause can have different effects. A knife can slice, puncture, saw, etc. Hot oil can burn human skin, melt ice but not mix with it, combust when near other mediums or when left to increase to temperatures beyond its boiling point, etc. One would have to ask why cooking oil does not combust when a cube of ice is thrown into the pan; what about the canola oil, for a more specific example, causes it to auto-ignite at 435 degrees Fahrenheit and why does this not happen when water is heated beyond its boiling point?
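The dispositional point can be put in schematic form. In the toy model below, the 435-degree auto-ignition figure comes from the text, while the other thresholds and labels are illustrative inventions; the aim is only to show one object whose several dispositions manifest differently under different stimulus conditions:

```python
# A toy model of the dispositional analysis: one object carries several
# dispositions, and which manifestation is instantiated depends on the
# stimulus conditions, so "X caused Z" underdescribes the relation.
# Canola oil's ~435 F auto-ignition point is taken from the text; the
# 140 F burn threshold and the labels are illustrative inventions.

def canola_oil_manifestation(temp_f, contacting="air"):
    """Return which disposition of heated canola oil is instantiated."""
    if contacting == "ice":
        # Oil and water do not mix; the ice melts and the water spatters.
        return "spatters"
    if temp_f >= 435:
        return "auto-ignites"
    if temp_f >= 140:
        return "burns skin"
    return "inert"
```

One and the same X yields different Z's depending on context, which is why saying that a disposition of X is instantiated in Z is more accurate than saying X exists for purposes of Z.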
As it turns out then, Neo-Aristotelians are not as committed to concretism as Lanao and Teh would hope. They are striving for generalizations despite refusing to investigate the details of how models are employed in normal science, as was made obvious by Lanao and Teh’s dismissal of Cartwright’s particularism and further, in their argument against Fundamentalism, which does not flow neatly from Cartwright’s argument. For science to arrive at anything concrete, abstraction needs to be allowed, specifically in cases venturing further and further into the past. Furthermore, a more detailed analysis of changes needs to be incorporated into our data. Briefly, when thinking of the $1,000 bill descending into St. Stephen’s Square, it is a simple fact that we must ask whether there is precipitation or not and, if so, how much; whether bird droppings may have altered its trajectory on the way down; what effect smog or dust particles have on the bill’s trajectory; and, as Cartwright asked, what about wind gusts? What is concrete is consistent with the logical atomist’s view that propositions speak precisely to simple particulars or many of them bearing some relation to one another.
Ultimately, I think that Lanao and Teh fail to establish a Neo-Aristotelian approach to principled scientific models. They also fail to show that FU and therefore, Fundamentalism is false. What is also clear is that they did not adequately engage Cartwright’s argument, which is thoroughly Non-Fundamentalist, even if that conclusion escaped her. This is why I hold that Cartwright’s conclusions are off the mark: she is demanding that generalized laws be derived from extremely complex conditions. It is not incumbent on dappled laws within a given domain of science to be unified in order for FU to ultimately be the case. It could be that due to symmetry breaking, one domain appears distinct from another and because of our failure, at least until now, to realize how the two cohere, unifying principles between the two domains currently elude us. Lanao and Teh’s argument against FU therefore appeals to the ignorance of science not unlike apologetic arguments of much lesser quality. The ignorance of today’s science does not suggest that current problems will continue to confront us while their solutions perpetually elude us. What is needed is time. Like Lanao and Teh, I agree that Cartwright has a lot of great ideas concerning principled scientific models, but that her ideas lend support to FU. A unified metaphysical account of reality would likely end up in a more dappled state than modern science finds itself in and despite Lanao and Teh’s attempts, a hypothetical account of that sort would rely too heavily on science to be considered purely metaphysical. My hope is that my argument, one that employs symmetry breaking to bolster the probability of FU being the case, is more provocative, if not more persuasive.
By R.N. Carmona
What follows is Alexander Pruss’ Argument For An Omniscient Being. While he does not exactly give his argument a ringing endorsement, admitting that he is skeptical of the first two premises, there are other problems that elude him and any theist who believes that omniscience is possible. Pruss formulates the argument as follows:
1. The analytic/synthetic distinction between truths is the same as the a priori / a posteriori distinction.
2. The analytic/synthetic distinction between truths makes sense.
3. If 1 and 2, then every truth is knowable.
4. So, every truth is knowable. (1–3)
5. If every truth is knowable, then every truth is known.
6. So, every truth is known. (4–5)
7. If every truth is known, there is an omniscient being.
8. So, there is an omniscient being. (6–7)
Pruss, Alexander. “An odd argument for an omniscient being”. Alexander Pruss Blog. 2 Nov 2020. Web.
In my new book “The Definitive Case Against Christianity: Life After The Death Of God,” I state the following:
God’s belief in propositions has to change in accordance with migrating facts. While it is true that the Sun is currently one astronomical unit away, that will not always be the case. At every moment when the Sun begins to expand during its Red Giant phase, the distance between the Earth and the Sun will gradually decrease until the Sun ultimately ends all life on our planet, if not disintegrating it entirely. At each moment, it will be incumbent on God to update his knowledge by changing his prior beliefs concerning the distance between these two bodies. It is prerequisite for facts to be fixed in order for God to be immutable. Since facts are not fixed, his beliefs and corresponding propositions about any given state of affairs have to change — otherwise he fails to be omniscient. (193)
A Christian might assert that there is a simple solution to the issue I have raised: God is also omnipresent. The issue with this objection is that God’s perspectives would be in direct contradiction with one another and so, from the perspective of other sentient beings, he would regard two logically contradictory propositions as true. From our perspective, he would believe in a truth and a lie, namely that from Earth, there is a supernova two million lightyears away, but in Andromeda, there is no longer a supernova to speak of. In other words, since the light from this event took two million years to reach humans on Earth, humans are just now learning of this supernova in Andromeda whereas an intelligent species on a planet relatively near to the event in Andromeda would report no supernova at that location. Perhaps it happened long before they emerged or before they were advanced enough to observe, record, and describe such an event. The fact remains that their present does not feature this supernova event while ours does.
Another fun example from theoretical physics involves watching someone falling into a black hole. The following is a summary of the relativistic experiences the observer and the faller would have:
1. The light coming from the person gets redshifted; they’ll start to take on a redder hue and then, eventually, will require infrared, microwave, and then radio “vision” to see.
2. The speed at which they appear to fall in will get asymptotically slow; they will appear to fall in towards the event horizon at a slower and slower speed, never quite reaching it.
3. The amount of light coming from them gets less and less. In addition to getting redder, they also will appear dimmer, even if they emit their own source of light!
4. The person falling in notices no difference in how time passes or how light appears to them. They would continue to fall into the black hole and cross the event horizon as though nothing happened.
“Falling Into a Black Hole Sucks!”. ScienceBlogs. 20 Nov 2009. Web.
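The asymptotic slowing and redshift described in points 1 through 3 follow from gravitational time dilation outside a non-rotating (Schwarzschild) black hole. The following is a minimal numerical sketch of that effect; the radii and units here are illustrative assumptions, not figures from the cited article:

```python
import math

def time_dilation_factor(r: float, r_s: float) -> float:
    """Ratio of the faller's proper time to the distant observer's time,
    outside a Schwarzschild black hole: sqrt(1 - r_s / r).
    Approaches 1 far away and 0 at the event horizon."""
    if r <= r_s:
        raise ValueError("valid only outside the event horizon (r > r_s)")
    return math.sqrt(1.0 - r_s / r)

r_s = 1.0  # Schwarzschild (event-horizon) radius, in arbitrary units
for r in (10.0, 2.0, 1.1, 1.001):
    f = time_dilation_factor(r, r_s)
    # The observed redshift grows as 1/f; as r -> r_s, f -> 0, so to the
    # distant observer the faller appears ever slower, dimmer, and redder.
    print(f"r = {r:>6}: dilation factor = {f:.4f}, redshift 1+z = {1 / f:.1f}")
```

Since the factor never reaches zero at any finite radius outside the horizon, the observer never sees the faller cross it, which is exactly the asymmetry between the two perspectives that the argument exploits.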
God’s omnipresence, therefore, fails to solve the issue because in order for him to have all possible perspectives, he would have to hold contradictory propositions on pretty much any and all events in our universe. He would have our perspective in the Milky Way as well as the point of view of the Andromeda galaxy’s civilization. He would also have the perspectives of the observer and the faller in our black hole example. The glaring issue is that he would have these perspectives at the same time and in the same respect, thus resulting in contradictions. Perhaps one can still find a way to try and circumvent these issues.
Given the idea that a day is as a thousand years and vice versa for God (2 Peter 3:8), if he, for sake of argument, experiences time-laden events in God-days (equivalent to one human millennium) or even all at once, God would make entirely different claims from the ones we believe to be knowable. In other words, while we are discussing here and now, before and after, duration, and the like, God would state something like the following: “all of the people, places, events, etc. that existed from the first century through the tenth century CE existed simultaneously.” For us, this is unlike any proposition we believe is knowable; indeed, it is nonsensical. God, therefore, being a timeless being, cannot know anything about time-laden truths. It would be incumbent on him not to be timeless, but then he is immediately confronted with the relativity of experience in the physical universe.
More importantly, 5 is debatable despite Fitch’s Knowability Paradox. Pruss states: “The argument for 5 is the famous knowability paradox: If p is an unknown truth, then that p is an unknown truth is a truth that cannot be known (for if someone know that p is an unknown truth, then they would thereby know that p is a truth, and then it wouldn’t be an unknown truth, and no one can’t know what isn’t so)” (Ibid.). The tendency, however, to leap from the possibility of knowing every truth to someone knowing every truth is dubious. It is similar to the leap rooted in Anselm: conceivability implies possibility. Worse still is that Pruss leaps from possibility to actuality. One should not draw ontological conclusions on the basis of logical considerations.
Pruss would appreciate an example from mathematics, namely that mathematicians work with infinity in their equations and even think of it as a real, tangible object in the universe. Unfortunately, there does not seem to be a physical correlate to infinity. Pradeep Mutalik, writing for Quanta Magazine, explains:
While “most physicists and mathematicians have become so enamored with infinity that they rarely question it,” Tegmark writes, infinity is just “an extremely convenient approximation for which we haven’t discovered convenient alternatives.” Tegmark believes that we need to discover the infinity-free equations describing the true laws of physics.
Mutalik, Pradeep. “The Infinity Puzzle”. Quanta Magazine. 16 Jun 2016. Web.
With this in mind, one can see that though mathematicians logically consider and defend the concept of infinity, one should proceed with caution in terms of stipulating that reality features anything like this concept. It follows, then, that just because all truths are potentially knowable, there does not already exist a being that knows all things. Aside from the problem resulting from the relativity of truth, stemming from the relativity of space-time especially as one approaches the speed of light, there is this unjustified assumption that possibility implies actuality. In the main, possibility does not necessarily entail probability, the latter of which must be established before concluding that something exists. Given these brief objections, one should maintain that there is no omniscient being.
Ultimately, a lot more can be said. All humans can really say about knowledge is what they experience with respect to acquiring it. As such, we would be wise to recall that we acquire knowledge first by way of awareness and conscious focus on what it is we are inquiring about. A truly omniscient being, which would be difficult to distinguish from a being who knows all things except how to play billiards or count to infinity (the conclusion of my Argument From Vagueness; see The Definitive Case Against Christianity, 194), would first and foremost have to be perfectly aware and focused for all of eternity. If this being loses focus at any point, myriad truths would have changed, progressing toward inevitable obsolescence, and new truths, not all related to the old truths, would have emerged. This being would, therefore, have lost its claim to omniscience. This is setting aside that humans can apprehend truths intuitively, without having dedicated concentrated inquiry to a matter. Other sentient beings could have this capacity as well. In any case, the likelihood that an omniscient being exists is practically zero.
By R.N. Carmona
The problem, as commonly framed, is that the truth of P1 is substantiated by a P2, which is then substantiated by a P3. The thought is that this goes on forever. The Infinite Regress problem resulted in foundationalism, which was motivated by the pursuit of certainty. Ross Cameron frames the problem as follows:
An infinite regress is a series of appropriately related elements with a first member but no last member, where each element leads to or generates the next in some sense. An infinite regress argument is an argument that makes appeal to an infinite regress. Usually such arguments take the form of objections to a theory, with the fact that the theory implies an infinite regress being taken to be objectionable.
Cameron, Ross. “Infinite Regress Arguments”. Stanford Encyclopedia of Philosophy. 2018. Web.
The Infinite Regress Problem is, therefore, not much of a problem unless a given interlocutor decides that it is. Such an interlocutor usually makes that decision due to prejudice, an unabashed bias for their own conclusion or perspective; in other cases, the individual disagrees with an alternative explanation so much that they go out of their way to express skepticism toward this explanation to an extent that they never applied to their own. In other words, someone who is skeptical of Correspondence Theory will go as far as questioning reality, e.g., Descartes’ Evil Demon, or questioning the very existence of the person they are debating, e.g., “how do you know you’re not a brain in a vat?” This is all while ignoring that if such an evil demon is distorting reality on a whim, they too are subject to its deception, and that if the person they are debating is a brain in a vat, it is far likelier that they themselves are in the same predicament.
The issue with any Infinite Regress argument is that the radical skeptic has glossed over basics in philosophy. For the skeptic’s argument to work, the onus is on him to find a premise containing necessary and sufficient conditions in relation to the premise he is skeptical of. Put another way, if I say that Correspondence Theory says nothing other than the fact that the proposition “it is snowing” holds true if, in fact, it is snowing, the interlocutor is tasked with finding a premise on which the truth of the proposition “it is snowing” rests. The fact that it is snowing is a distinct reality from my proposition, especially because I can make that claim, for whatever reason, even when it is not the case that it is snowing. I could either be off my rocker or lying, but any proposition can be proposed even when what informs the proposition is not the case. Andrew Brennan puts it this way:
The standard theory makes use of the fact that in classical logic, the truth-function “p ⊃ q” (“If p, q”) is false only when p is true and q is false. The relation between “p” and “q” in this case is often referred to as material implication. On this account of “if p, q”, if the conditional “p ⊃ q” is true, and p holds, then q also holds; likewise if q fails to be true, then p must also fail of truth (if the conditional as a whole is to be true). The standard theory thus claims that when the conditional “p ⊃ q” is true the truth of the consequent, “q”, is necessary for the truth of the antecedent, “p”, and the truth of the antecedent is in turn sufficient for the truth of the consequent.
Brennan, Andrew. “Necessary and Sufficient Conditions”. Stanford Encyclopedia of Philosophy. 2017. Web.
If Brennan is correct, then an Infinite Regress is not, in fact, an issue no matter how much a disingenuous interlocutor says it is. An Infinite Regress is nothing more than a rebranded Slippery Slope, the termination of which is decided by a premise that either contains a viable truth-maker or corresponds to reality in an uncontroversial way. Furthermore, it would be a premise that has no conditional relationship to some other premise. This premise q would not require a premise r on which the necessity of its truth is grounded. It is simply one proposition that is established by some external reality or lines of evidence that make its truth likelier than not. This is what is meant by propositions like “evolution is true.” This conclusion is supported by lines of scientific evidence strongly suggesting that the proposition is probable. Given the advent of fallibilism, what epistemologists look for are propositions that are highly probably true. They are no longer in the business of certainty. So while any true proposition has a small, usually negligible, chance of being false, one could achieve a high degree of certainty in exactly those propositions that are highly likely to be true.
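Brennan’s account of material implication can be checked mechanically. The following is a small sketch (in Python, which is my choice, not the source’s) that enumerates the truth table for “p ⊃ q” and verifies the necessary/sufficient reading he describes:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: 'p ⊃ q' is false only when p is true and q is false."""
    return (not p) or q

# Enumerate all four truth-value assignments.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5} p ⊃ q = {implies(p, q)}")

# Whenever the conditional holds, the truth of p suffices for q,
# and the falsity of q rules out p (the necessary-condition reading).
assert all(q for p, q in product([True, False], repeat=2)
           if implies(p, q) and p)
assert all(not p for p, q in product([True, False], repeat=2)
           if implies(p, q) and not q)
```

This is exactly the burden described above: the skeptic must exhibit a premise r standing in this kind of conditional relation to q, not merely assert that one exists.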
Recall that to terminate a Slippery Slope, it is necessary to show that a proposed consequence will not end up being the case if a given action is taken. Opponents of same-sex unions would often say things like, “what’s next!? people marrying their dogs!?” It was easily shown that their concerns were a non sequitur and thus, in similar fashion, one could do away with an Infinite Regress argument by establishing that the interlocutor has failed to find a premise r on which the truth of q rests. The onus is heavy because he is tasked with finding a premise that is necessary and sufficient in relation to the truth of q. If he cannot do so, he has admitted that the regress terminates at q and accepts justification, however begrudgingly, for why this is the case.
In general, the issue at the heart of any Infinite Regress argument is the fact that people, especially non-philosophers, tend to be disingenuous. They will concoct some ridiculous standard for any point of view that disagrees with theirs while failing to scrutinize their own views in accordance with that standard. There is no Infinite Regress. In the end, what remains is disagreement, to some degree of strength, with the justification(s) underlying certain beliefs. If, for example, someone claims that they know we are all brains in vats because a being outside of our reality told them this, then it is within my rights to inquire about this being. Moreover, it is within my rights to question this person’s sanity or at the very least, their sobriety. If this revelation was received while this person was drunk or high on a hallucinogen, then it is far likelier that their account is false. The same applies if this person has been diagnosed with a mental illness that makes hallucinations a frequent occurrence for him.
Ultimately, the nature of dialogue, especially on social media, has revealed the basest human fault: the propensity to be disingenuous. Everyone who has a bias distorts facts, omits evidence to the contrary, employs radical skepticism, and sets up an Infinite Regress problem as the standard for the opposition to reach. With respect to the latter, it is a standard that their own views have not met, despite the disingenuous interlocutor’s assertions. The Infinite Regress Problem is not a problem, but rather an argument offered by someone bent on remaining obstinately unconvinced by a position or conclusion that rubs them the wrong way. These arguments are no different from Slippery Slope arguments and terminate at the point where you locate a proposition that is not contingent on another. This issue no longer concerns epistemologists and should be of no concern to any student of philosophy.
My purpose here is to respond to a post published by Steven Dunn over at Philosophic Augustine. I met Steven several years ago in conversations on Tumblr. Over the years, he has maintained a resolute interest in philosophy, which is something I greatly admire about him. Few things remain constant over several years, so the fact that he has retained his passion for philosophy is impressive. He’s also grown a lot with respect to his knowledge and that’s to be applauded.
Prior to reproducing our discussion hitherto, I want to be clear about what I’ll be looking to accomplish in this response: a) address his latest response that features Aristotelian personalism and metaphysics b) circle back around to Bill Vallicella’s argument. I think it’s important not to lose sight of the argument, especially given that that’s the reason I commented on Steven’s Facebook post to begin with. I will make it clear that even if one granted the undeniable personhood of a fetus, it still would not follow that an abortion is equivalent to murder. With that said, here is the discussion as it stands; my reply to Steven’s latest response will follow.
Steven Dunn: Philosopher Bill Vallicella over at his blog Maverick Philosopher considers a brief but important argument:
(a) Abortion is murder.
(b) Abortion ought to be illegal.
The question: Can one consistently hold (a) and not (b)? Suppose an added proposition:
(a) Abortion is murder.
(b) Abortion ought to be illegal.
(c) Murder is illegal.
I posted this argument on my personal Facebook page, which wrought the response of one of my old friends from the Tumblr blogosphere, R. N. Carmona. Carmona is a philosophic tour de force, someone with whom I have been conversing and debating since I was 17 in 2013 (now I’m 24).
There were a lot of heated exchanges between Carmona and me. After learning of his upcoming book, Ending the Abortion Debate, I knew that this issue was something he was well-versed in and felt passionate about. The following exchange doesn’t do full justice to Carmona’s overall position, but the highlights, I’m sure, are as he would see fit. Enjoy!
R. N. Carmona: (a) is unsound, hence making the whole argument unsound. Aborting an embryo or non-viable fetus simply is NOT murder. Most abortions happen before week 16, with a majority of them happening before week 9. At no point in those times does a fetus resemble an infant and more importantly, the hallmarks of a person aren’t present yet. That happens at around week 22, hence the hard cutoff in most states at 20 weeks. Specifically, EEG waves register in the neocortex at around week 22 and the neocortex isn’t fully developed until around week 36. I’d argue that it’s murder after week 22.
The only time I make exceptions after that many weeks is if there’s a threat on the mother’s life, but if the choice is between the quality of life of a mom and her family and a nine-week-old fetus, it’s an easy choice. Keeping abortion legal prior to week 20 reduces maternal mortality, which, if you’re pro-life, you should care about. Moreover, restrictive policies increase infant and maternal mortality. We’ve had lots of tries at the conservative Christian way: Northern Ireland, the Muslim World, the Philippines, etc. Restrictive policies do not work.
This is precisely what my next book is about. Want to end abortion? Get behind the issue. Address poverty, lack of education, lack of access to contraception, domestic violence, etc. That’s the only way to slow the rate of abortion. Restricting it won’t work and those conservative states that have passed heartbeat bills are about to find out the hard way.
Steven Dunn: There is actually a large extent to which I agree with you. I’ve read a lot of your writing on this issue and I appreciate that you’ve pointed out the dangerous restrictive policies that currently exist. There is also an importance, as you say, in addressing poverty, lack of education, etc.
However, my initial problem began when you claimed that aborting an embryo or non-viable fetus is not murder. Even though non-viable fetuses have no chance of survival, that still does not warrant moral permissibility to end its life. I don’t see where the line of moral difference changes with an embryo, fetus, or fully grown human infant. Is it a spatial difference? Is it a temporal difference? Does the week, day, or trimester matter when ending the life of a *potential human?
Of course, we could have a metaphysically more significant conversation than the kinds of questions I’m asking you. I just think that these questions are a good starting point. Also, what is the “hallmarks of a person”?
R. N. Carmona: The hallmark of a person is quite simply, the consciousness attributed to human beings and higher order animals like dolphins and the great apes: neuroplasticity, memory acquisition, language capacity, etc. Even simpler than that, the capacity to apprehend taste, texture, sound, and so on. Even those who are mentally disabled, assuming they aren’t blind or deaf, can have these experiences. The blind and deaf, though lacking an important sense, still have propensities for memory acquisition, language, and so on.
And that’s the difference: spatio-temporal. Of space, because the potential person now occupies a uterus, taking nutrients from the would-be mother; of time because the potential person is currently not in the world, i.e. is not a citizen of a given country; is not protected by laws.
Potential simply is not enough. The fetus is potentially stillborn or potentially going to die of SIDS or will potentially be an ectopic pregnancy or will potentially be born to become a serial killer that will make Ted Bundy blush. You can’t speak of potential as though it’s solely and predictably positive; potential can be very negative. In fact, this child can be the reason the mother dies and leaves behind a husband to raise several kids, including the newborn, on his own. Potential simply isn’t enough to obligate a woman to continue a pregnancy she’d rather terminate.
So yes, the week matters because so long as a fetus isn’t viable, abortion should be permissible. The moral difference changes once the fetus is viable. Potentiality simply isn’t a good argument. Viability is a much stronger argument.
Furthermore, the moral difference changes when purposeful modification comes into play. Sure, an infant doesn’t have that capacity: it doesn’t, for example, set goals for itself. However, the parents, once they are told that the fetus is developing well, start to purposely modify on the fetus’ behalf: they start thinking of a name, buying clothing, setting up its room, putting money in its college fund, etc.
No parent, even if they’re a Christian conservative, begins to purposely modify at conception or even in the early weeks. That is simply not enough to go on, and it tells you that, behaviorally speaking, most parents write off potential. Potential isn’t enough for anyone to go on, and that’s why most people need something concrete before they begin to purposely modify on their baby’s behalf.
So yes, there’s a simple line to draw between a non-viable fetus and a viable one. I can speak of organismality as well, namely comparing and contrasting between organisms to come to a good conclusion regarding what a non-viable fetus most closely resembles, and it’s clear that they don’t resemble a newborn. There are marked differences, but I digress.
Steven Dunn: I would clarify that you are *technically correct in saying that potentiality is not enough. A potential X of course is not an X. The fact that I am a potential speaker of the French language doesn’t mean I can speak the French language. However, potentialities are still nonetheless grounded in being: they are realities, not merely possibilities.
They are actual human beings with various potentials.
Though they are not realized among differing spatial and temporal locations/positions, I don’t think you’ve provided a meaningful account of persons. Human persons, as I see it, are instances of personalized being; persons possess phenomenological qualities that make them eligible for relationships – or interpersonal love. I think our definitions of persons should capture something specific (and simple), rather than be a construct of various qualifiers (neuroplasticity, apprehension of the senses, etc).
One biological example worth mentioning that I think you’ll appreciate is the cognitive capacity of bonobos – which is one of my favorite areas of primatology to examine their analogous behavior with humans. They’re sympathetic, they can experience pain, they are highly intelligent, they can have an extensive non-verbal vocabulary, etc.
Despite these striking qualities, they do not fit under the definition of persons I’ve provided above.
What does it mean to be a real person? A couple things: (i) what W. Norris Clark has called the “participation structure of the universe”; rational-intelligibility that allows for human persons and the universe to find meaningful relations/predictions; (ii) existence as a dynamic act of presence and (iii) action as a [self] manifestation of inner-being.
I think your definition of persons is merely conditional; it’s dynamic but not exhaustive. Human beings – that is, if a personalized being possesses such potentialities – are intrinsically valuable; there can be no moral difference among this being’s spatial or temporal location.
In summary. . . I think you are raising issues that aren’t typically addressed by conservatives. It’s important that we better handle areas of women’s reproductive healthcare, which can be dealt with through better and intentional education, personal conviction, etc. However, I think we need to agree that the moral question is not somehow addressed because we’ve raised current social or political problems surrounding abortion. There are consequences and symptoms that need to be taken care of, by all means. However, if structurally we are dealing with the intentional ending of a human life then we need to talk about it.
R. N. Carmona: I disagree there. What I hear is Aristotelian language here and I reject his metaphysics all the way through. Potentialities are not realities grounded in being. I think even Aristotle makes a distinction between potentiality and actuality, and as I recall, he doesn’t conflate them in this manner.
Persons do, however, possess phenomenological qualities, like phenomenal consciousness, but that isn’t what makes them eligible for relationships and interpersonal love. What’s needed there is simple empathy and bonobos and chimps, in general, are capable of that; that’s one reason why some advanced nations recognize them as persons. So rather than a construct of qualifiers, it’s more a recognition of qualifiers taken together to get a basic definition of person.
The base anthropocentrism of theists doesn’t allow them to accept that other higher order animals are persons, and that’s what you’re doing here. Dolphins call each other by name and remember individuals for decades. Elephants can also remember individuals after years of not seeing them. So while there’s certainly a distinction between a human person and a dolphin person, there is overlap that qualifies them both and that overlap is found in the sciences. The issue here is that your definition relies less on science and more on an implied belief in the soul or on metaphysics.
I agree with (I) as it’s pretty much purposeful modification paraphrased. (II) relies heavily on Aristotelian metaphysics and I reject it outright. (III) stems from (II), but alludes to libertarian action of will, which I don’t think anyone has. There’s no [self] without the [other] and the other is much more crucial in action, especially willful action. I think human persons can change course, but only after realizing enough of the deterministic conditions underlying their actions, thus empowering them to experience determinants that may lead to an overall change of course.
Think of the proverbial alcoholic; he doesn’t willfully change his bad habit, but what he does is “change” a given number of determinants so that his actions may change and out of a recognition that if he doesn’t make these changes, he is pretty much enabled: a) stops associating with friends who drink regularly b) doesn’t go to bars when invited c) goes to rehab. And so on and so on and so on.
Action itself isn’t in a vacuum, but dynamic and intertwined with the flux of all there is. So human potential takes course in a deterministic manner and any human looking to have any semblance of control over that will reposition herself with respect to determinants. A fetus is incapable of this sort of purposeful modification, which I think is the most actualized of all.
So my definition, while not alluding to souls or anything religious, is also sufficient because it recognizes the role of the other in the shaping of the self. The embryo and non-viable fetus do not interact with the other in the manner in which persons do and it is simply potentiality and not actuality, to use your language. What is commonly aborted is (what can be) rather than (what is), so that spatio-temporal difference lays the groundwork for a moral difference and the moral difference lies in purposeful modification which I think your (I) paraphrases.
Any metaphysic that doesn’t account for the other, even something as arbitrary as a chair, isn’t complete. Hegel understood this and is, I think, the forefather of modern metaphysics, beginning with the phenomenologists who soon followed him. Husserl relied a lot on Aristotle too and I’m not alone in seeing that he was mistaken for doing so, but again, I digress.
Steven Dunn: I would respond to your outlook on Aristotelian personalism as not fully appreciating what the system has to offer. In our personal conversations on abortion I have mentioned to you that yes, personalism and the potentiality principle has largely been carried and grounded in Catholic moral theology. I understand, therefore, we both don’t share views in the inspiration of Christian theology.
Hence, we need to find anthropological and philosophical commonalities in which we can meaningfully proceed in a discourse about human persons and what exactly is developing in the womb. The best system that does this, in my view, is Aristotle’s conception of being from his Metaphysics and the further extension from Thomas Aquinas’ doctrine of persons (with some modifications).
Aquinas argued that persons are that which is “perfected in all of nature.” This essentially means that persons are not merely special “modes” of being amongst others, but that personhood is being when it is allowed to be at its fullest. In other words, persons are not restricted by sub-intelligent matter. There are a number of reasons why persons – humans – possess the special status that they do, and while I think there are theological reasons for this, I believe I can still demonstrate it apart from any inspired or “revealed” source.
First, Aquinas’ notion of person was conscious of the distinction between person and nature, because providing a consistent account of personhood meant that a consideration of God as Triune (one divine nature amongst three persons) and Christ as the God-man (divine person possessing two natures) needed to be contextually consistent.
Nevertheless, some leading Catholic philosophers (Wojtyla, Ratzinger, Clarke, etc.) have charged that Aquinas falls short of a comprehensive philosophical definition of person because the medievals relied primarily on the Boethian definition of person: “an individual substance of a rational nature.” Hence, I would argue for the further inclusion of the concept of “relation,” which is fundamental to our understanding of what it means “to be.”
Aquinas moved away from what has been called the “self-diffusiveness” of the Good as seen by the Neoplatonists (the collaboration of the Good with all “substances”) and instead moved to Supreme Being, where Existence (esse) – as I said before – becomes the root of all perfection. Supreme Being is the subsistent Act of Existence, where the self-diffusive Good now becomes self-communicative Love.
Hence we have three primary qualities for the relationability of persons: (1) being is self-communicative (showing that persons are intelligible [ratio] by their actuality); (2) being is self-manifesting (persons are immediately relatable to other beings); and (3) being is intrinsically active (persons are not merely present but actively present).
Suppose, for example, a being that existed in reality but didn’t or couldn’t manifest itself to other beings, or didn’t or couldn’t act in any way. If this kind of being lacked such properties, then other beings couldn’t have knowledge of its existence; it would be almost as if it had no being at all. Now imagine if all beings existed in this sort of way; the universe itself wouldn’t be connected in the unified sense in which philosophers and scientists typically speak of the rationality of the universe.
Now combine this with the potentiality principle. According to Current Anthropology (2013), potentiality is a principle not so foreign to anthropologists: potentiality “denotes a hidden force determined to manifest itself – something that with or without intervention has its future built into it.”1
Let me be clear that potentiality is the only relevant metaphysical principle worth considering for abortion. My position, to be clear, is not:
- X is a creature of a certain sort.
- Creatures of this sort have right R.
- Therefore, X has right R.
Premise 2, of course, begs the question in favor of what I want to prove. In summary, my position is that the full and perfect realization of being is always inherent in its nature. All living things, including mindless plants, dolphins, and gorillas, have a proper end or “good” which is naturally directed within their nature – even from a formless or potential state. Nothing can exist without potentials, and potentials cannot be realized apart from something actual.
Now the metaphysical picture I’ve provided is not mere conjecture but is what historically has served as the foundation of Western intelligentsia for over 1,500 years, until the advent of the modern model by Descartes and Newton. And I would argue that that move away from the model of classical metaphysics has been one of the greatest errors and blunders of Western thought. It was an illegitimate move because it’s not as if new physical discoveries were made, hence “outdating” the Aristotelian system.
Descartes, among other French intellectuals of his time, was responsible for the shift away from potential/actual, the four causes, and Aristotle’s metaphysic of being. Bad move.
R.N. Carmona: While it may not be mere conjecture, and while it arguably served as the foundation of Western intelligentsia for over 1,500 years, one would have to gloss over important bits of history to make that argument. One of the more significant bits of history I have in mind is the Christian and Muslim censorship and destruction of texts that were not in agreement with monotheism, especially works that were of a more naturalistic flavor. Carlo Rovelli puts it succinctly:
I often think that the loss of the works of Democritus in their entirety is the greatest intellectual tragedy to ensue from the collapse of the old classical civilization…We have been left with all of Aristotle, by way of which Western thought reconstructed itself, and nothing of Democritus. Perhaps if all the works of Democritus had survived, and nothing of Aristotle’s, the intellectual history of our civilization would have been better … But centuries dominated by monotheism have not permitted the survival of Democritus’s naturalism. The closure of the ancient schools such as those of Athens and Alexandria, and the destruction of all the texts not in accordance with Christian ideas was vast and systematic, at the time of the brutal antipagan repression following from the edicts of Emperor Theodosius, which in 390-391 declared that Christianity was to be the only and obligatory religion of the empire. Plato and Aristotle, pagans who believed in the immortality of the soul or in the existence of a Prime Mover, could be tolerated by a triumphant Christianity. Not Democritus.2
It’s not a mere coincidence that Aristotelian metaphysics stood in fashion for so long, nor was it established that the Aristotelian system was better than other systems. In that same time period, theists, especially Christians, held a virtual monopoly on ideas and as such, metaphysical frameworks with more naturalistic bents were destroyed or censored. Due to this, there was a reluctance on the part of skeptics and naturalists to offer a naturalistic metaphysical system. They were rightfully afraid of The Inquisition. So, Aristotelian metaphysics didn’t dominate the landscape because it was the best framework, but rather, because the game was rigged in its favor.
Copernicus and Galileo, for instance, dealt with the consequences of challenging theistic thought. Copernicus’ De revolutionibus “was forbidden by the Congregation of the Index ‘until corrected’, and in 1620 these corrections were indicated. Nine sentences, by which the heliocentric system was represented as certain, had to be either omitted or changed.”3 Galileo’s house arrest is a well-known historical fact, and there’s no need to rake over old coals here. More to the point, “Bruno [was]…much more of a philosopher than a scientist. He felt that a physicist’s field of study was the tangible universe, so he challenged any line of thought that utilized nonphysical elements and avoided what he considered the juvenile exercise of calculation. To him, computational astronomy missed the true significance of the sky.”4 Bruno held to eight purportedly heretical theses, and they served as the reason for his execution. Among these theses were patently naturalistic positions: the universe is spatially infinite, there are other planets very similar to ours, there were humans before Adam and Eve, the Earth moves in accordance with Copernican theory.5 In all but one of these positions, Bruno has been proven correct. So the notion that Aristotle’s system is best overall or, at the very least, best with respect to defining personhood, is already disputable because, as has been demonstrated, competing – especially naturalistic – frameworks were discouraged. Despite this, before I set out on my own exploration with regards to what best explains potentiality, I will challenge Aristotle’s personalism.
Even if I were to grant that Aristotle offered much in the way of explaining personhood, there’s still the question of how any of these criteria apply to embryos and early fetuses. Take, for instance, (1) being is self-communicative (showing that persons are intelligible [ratio] by their actuality). To my mind, embryos and early fetuses are neither self-communicative nor intelligible, and that’s because they have yet to develop the organ that makes this possible, namely the brain. Now, while I recognize that Aristotle offers an interesting conundrum worth considering, i.e., as you put it, “Nothing can exist without potentials, and potentials cannot be realized apart from something actual,” I don’t see that a fetus’ obvious intelligibility and self-communication follow. As we will see shortly, there’s a better explanation for the notion that nothing can exist without potentials and that no potentials can be realized apart from something actual.
In like manner, let’s consider (2) being is self-manifesting (persons are immediately relatable to other beings). On the assumption that Aristotle was correct, I don’t see how embryos and early fetuses are self-manifesting and immediately relatable to other beings. If anything, an embryo is only relatable to its parents and siblings, assuming its parents already had children. Since it has not emerged independently into the world, it is not relatable to other people, the ecosystem in its locale, or the wider biosphere. The fact that it isn’t independently in the world makes it so that it isn’t relatable to any other beings. Furthermore, its relation to its parents and siblings is best explained and anticipated by genetics, which we will get to shortly.
Let’s also consider (3) being is intrinsically active (persons are not merely present but actively present). Again, embryos and early fetuses are not present, let alone actively present, for the same reasons they aren’t self-manifesting. The fact that the fetus is not independently in the world, once again, proves problematic for the third criterion. I can grant that it is actively present once born, for its parent(s) are now self-modifying on its behalf. It is interacting with other persons in a very obvious manner and relies on them for its physical, emotional, and psychological growth, growth that is crucial to its potential to eventually become a person who has a theory of mind, a sense of self, an ego, memories, desires, goals, and so on. Within the womb, that simply is not the case early in any pregnancy. The woman’s voice and music can help with brain development starting at around 29-33 weeks. So, harkening back to what I said earlier, interpersonal interaction is only possible when the brain is sufficiently developed, which strongly favors the thesis that the brain, rather than a soul, is integral to a human person.
Now to the matter of what better explains the predictable potential of a human fetus, after which the following conclusions should be immediately clear: either 1) we do not require a metaphysical explanation for potentiality and personhood, or 2) given genetics and evolution, what’s necessary is a metaphysical framework that is congruous with and readily predicts scientific facts; and, in either case, 3) the puzzle of how nothing can exist without potentials, and how potentials are not realized without actuals, has been solved.
One of the primary reasons I reject Christian theism as a worldview is that it gives human beings an undue “special status” while ignoring human evolution. Human potential, or more specifically Homo sapiens potential, wouldn’t exist without an ancestor’s (probably Homo antecessor’s) potential. Furthermore, Homo sapiens potential would not have been realized without the actuality of ancestors, and likewise, without the divergence of ancestors, great apes would not have progressed as they have. As you well know, we share about 98.5% of our DNA with chimpanzees and some 96% with gorillas, two facts that establish a common ancestry. So this “special status” is actually an example of special pleading because it’s not at all clear why chimps and gorillas do not qualify for such status; also, Neanderthals, given what we currently know about them, are in many respects like Homo sapiens (a fact that made their interbreeding possible) and as such would qualify for such status without question. Yet on Christian theism, no human ancestor qualifies for this status, an attitude I find very suspicious.
Human evolution, like evolution in toto, has an underlying genetic component that explains variations in populations over time. That same genetic component continues among all populations of species and thus better explains “the perfect realization of being…inherent in the nature of mindless plants, dolphins, and gorillas.” Furthermore, potentiality does not denote a “hidden force determined to manifest itself,” but rather a statistically predictable pattern present in the genome of an organism; it isn’t hidden at all, but in plain sight. The pattern is so predictable that one can readily explain how and why that which is formless becomes something with form.
So prior to circling back to Vallicella’s argument, I will offer a brief overview showing how the actuality of parents results in the potentiality and probable actuality of a child. It is also important to note that given genetics, there are a number of factors that determine morphological sex, eye color, hair color, skin tone, and so on. So let’s imagine that in universe A, Jack and Jill have a baby girl named Janice and that in universe B, they have a boy named Jake. Let’s consider the important differences in each child, differences that explain why Janice exists in A and Jake in B.
In universe A, Jack and Jill are both 24 years old when they agree to have a child. Jack and Jill are wealthy and have spent the last four years of their relationship traveling. Neither of them is stressed, and they have no trouble being happy and grateful for all that they have. Considering that high stress increases the probability of having a boy, it is no surprise that Jill gives birth to Janice nine months later. Yet that still does not explain why Janice has brown eyes (though both her parents have blue eyes), her mother’s hair color, and her father’s hitchhiker thumb. Moreover, had another sperm fertilized the egg, Janice very likely would have been born with completely different features. In addition to the low stress levels, there’s also the fact that Jack has five sisters and no brothers, further increasing the probability of having a girl. Still, none of this explains why Janice has brown rather than blue eyes, blonde rather than brown hair, and a hitchhiker thumb rather than a straight thumb.
Allelic combination is important in explaining her phenotypic features. Should both parents pass on recessive genes, Janice is born with a hitchhiker thumb. Alternatively, if there’s a combination of dominant and recessive genes, she has a chance of either a hitchhiker thumb or a straight thumb. If there’s a combination of dominant genes, then she will predictably have a straight thumb (see reference). Eye color works similarly, albeit in a more complicated way.
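The allelic logic just described amounts to a Punnett square, and it can be sketched in a few lines of code. To be clear, treating the hitchhiker thumb as a single-gene recessive trait is the textbook simplification this passage relies on; real thumb morphology is likely polygenic, and the allele symbols below (T/t) are illustrative labels, not established gene names.

```python
from itertools import product
from collections import Counter

def cross(parent1: str, parent2: str) -> dict:
    """Enumerate offspring genotypes from two parents' allele pairs.

    'T' stands in for the dominant straight-thumb allele and 't' for
    the recessive hitchhiker allele (illustrative labels only).
    """
    counts = Counter()
    for a, b in product(parent1, parent2):
        # Sort so that 'tT' and 'Tt' count as the same genotype.
        counts["".join(sorted(a + b))] += 1
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

# Two carrier (Tt) parents: a 1-in-4 chance of the recessive tt
# genotype, i.e., the case in which Janice has a hitchhiker thumb.
print(cross("Tt", "Tt"))  # {'TT': 0.25, 'Tt': 0.5, 'tt': 0.25}
```

Here “should both parents pass on recessive genes” corresponds to the 0.25 probability of `tt`, while a dominant/recessive mix (`Tt`) yields a straight thumb under simple dominance.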
For instance, the assumption is that since Janice’s parents have blue eyes, she will also have blue eyes. Two genes are integral to determining eye pigmentation: OCA2 and HERC2. An active HERC2 activates OCA2, which determines pigment; given this, we know what explains Janice’s brown eyes. Her counterpart in universe B, Jake, has either a broken HERC2 or a broken OCA2 and therefore has blue eyes (see reference). He also has brown hair and a straight thumb. He has a straight thumb because instead of a recessive and a dominant gene (what we find in Janice’s genome), we find two dominant genes in Jake’s genome. Their disparate hair colors are explained in this manner as well. It is also likely that in universe B, Jill gave birth to Jake because of the high levels of stress she experienced during pregnancy. Jack and Jill decided to conceive when they were both 31. During the pregnancy, they moved from Middletown, NY to New York City because they both wanted more job opportunities. The crowded commutes, her career, and the noise pollution, among other things, stressed Jill out to no end, and this increased the probability of having a boy, hence (probably) the birth of Jake.
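The HERC2/OCA2 interaction described here is a case of epistasis: one gene acts as a switch for another. The two-universe contrast can be sketched as follows, with the caveat that this is a deliberately coarse model of the claim in the text only; actual eye color involves additional genes and continuous variation.

```python
def eye_color(herc2_functional: bool, oca2_functional: bool) -> str:
    """Coarse two-gene sketch: HERC2 acts as a switch that activates
    OCA2, and OCA2 produces the pigment. If either gene is broken,
    no pigment is produced and the eyes default to blue."""
    if herc2_functional and oca2_functional:
        return "brown"  # switch on, pigment gene expressed
    return "blue"       # broken switch or broken pigment gene

# Janice (universe A): both genes functional, hence brown eyes.
print(eye_color(True, True))   # brown
# Jake (universe B): a broken HERC2 (or OCA2) leaves him blue-eyed.
print(eye_color(False, True))  # blue
```

The point the example makes concrete is that two blue-eyed parents can carry a functional pigment pathway that only some of their children inherit intact.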
Form, likewise, follows suit. Drosophila have been important to research in evolutionary biology and genetics. In observing curious mutations in these flies, geneticists discovered homeotic genes, which determine the body pattern of all organisms. The research gets very technical and makes for quite the tangent, but homeotic, or Hox, genes themselves come from a Hox-like ancestor, which explains the similarities Hox genes share from organism to organism (see reference). The graphic makes the extrapolations of their research clear.
What is also clear is that the following turns out to be false: “All living things, including mindless plants, dolphins and gorillas have a proper end or “good” which is naturally directed within their nature – even from a formless or potential state.” It’s not so much that there’s a natural tendency even from a formless, potential state, but rather that there’s an evolutionary and genetic history that informs how a comparatively formless embryo develops into a human being. There are also traits that are arbitrary: there’s no purpose behind someone having brown rather than blue eyes or a hitchhiker thumb rather than a straight thumb. Some traits are inconsequential with respect to who a given person is.
Aristotelian metaphysics is considered outmoded by contemporary philosophers and scientists because it is incongruous with various scientific paradigms. Setting cosmology and physics aside, as I think I’ve shown, Aristotle’s concept of a person is incongruous with evolution and genetics, and the system did nothing in the way of anticipating the advent of evolutionary biology and genetics. His system speaks of personhood in a patently non-naturalistic or even supernatural manner, whereas genetics and evolution show that personhood is linked to certain types of purely physical organisms. What is required is either no metaphysical framework at all (à la the logical positivists) or a framework that coincides with modern philosophical and scientific paradigms. Aristotle’s system doesn’t accomplish that, and far from a “bad move,” Copernicus, Galileo, Newton, and everyone who eventually stood on their shoulders had every justification to move away from the Aristotelian system. Although interesting, the fact that potentials can’t exist apart from actuals is explained by genetics for that which is animate and memetics for that which is inanimate. In fact, Aristotle was closer to correct with regard to universals and particulars, an explanation that can be applied to inanimate objects rather than living entities. Yet despite the potential for a human embryo to become an actual human person, a roughly predictable, naturalistic set of occurrences takes place before every human birth. The process is also a fragile one: an injury to the would-be mother can end the pregnancy, and genetic anomalies or implantation anywhere other than the uterus can make it so that this potentiality never results in an actuality. Aristotle’s system also doesn’t explain the fragility of this process, not just in humans but in other organisms as well.
It should therefore be clear that the aforementioned conclusions have been firmly established and that, if metaphysics remains a concern for philosophers, we have to do better than what Aristotle and his disciples left us.
Now to circle all the way back to Vallicella’s argument. Even if one were to grant the undeniable personhood of a fetus, whether through the medium of Aristotelian metaphysics or another metaphysical framework altogether, there’s still the issue that the intentional killing of this person doesn’t constitute murder. The pivotal error pro-choicers make is that they tend to define abortion while ignoring what it’s being equated to. They should also consider the legal definition of murder, since Vallicella is alluding to the legal rather than the moral definition.
The killing of an embryo or fetus is done with intentions and motives altogether different from those underlying homicide, and as such, from a legal standpoint, it can’t be approached as murder or even a lesser offense like manslaughter. No degree of murder is applicable to abortion; the intention and motive are not the same either, so even from a legal perspective, abortion is not murder. It would constitute an intentional killing of a different sort, even of a benevolent sort. Therefore, a woman who has an abortion can’t be tried and convicted as a murderer, nor can the doctor who performed the abortion. Consider first degree murder: premeditation is already an issue for Vallicella’s argument, as the prosecution wouldn’t be able to argue that the mother had a malicious intent to kill this person. As for second degree murder, though it lacks the premeditation criterion, it implies a reckless disregard for human life. The prosecution can’t accuse a woman of that either.
What’s more, if you were right that a fetus is a person regardless of its viability, then restrictive policies would be the only choice we’d have. As Vallicella’s argument implies, murder is treated in an extremely restrictive manner; even self-defense has to be established with no room for doubt. So if abortion were murder, it would be dealt with in like manner. Setting metaphysics and ethics aside, then, from a practical point of view we should be wary of equating abortion with murder, because we have dealt with the latter in a restrictive manner and we should know better, especially given the deadly consequences of such policies. So even for solely practical reasons, we should shy away from such an equivalence, even if it could be proven that abortion is murder. The issue here is that no pro-lifer has qualified that statement in any manner that doesn’t make for a bare assertion. Abortion is simply not murder, and to think of women who have abortions as murderers is to misunderstand this issue altogether. What we should be addressing are the common motivations for seeking an abortion: poverty, domestic violence, lack of employment opportunity, and so on. I could go on, but I’ve probably overstayed my welcome as it is, so for the time being, I will leave this here.
The featured image to this article is Rembrandt’s Two Old Men Disputing (1628), taken from https://www.ngv.vic.gov.au/explore/collection/work/4291/
1. Taussig, Karen-Sue, Klaus Hoeyer, and Stefan Helmreich. “The Anthropology of Potentiality in Biomedicine: An Introduction to Supplement.” Current Anthropology, vol. 54, 2013.
2. Rovelli, Carlo, et al. Reality Is Not What It Seems: The Journey to Quantum Gravity. Riverhead Books, 2018, pp. 32-33.
3. “Nicolaus Copernicus.” The Catholic Encyclopedia, vol. 4. New York: Robert Appleton Company, 1908. 20 Jun. 2019 <http://www.newadvent.org/cathen/04352b.htm>.
4. “Bruno and Galileo in Rome.” Honors Program in Rome, University of Washington, 2003. <https://depts.washington.edu/hrome/Authors/pev42/BrunoandGalileoinRome/pub_zbarticle_view_printable.html>
5. Ibid.