# The Negation Strategy

By R.N. Carmona

Every deductive argument can be negated. I consider this an uncontroversial statement. The problem is, there are people who proceed as though deductive arguments speak to an a priori truth. The Freedom Tower is taller than the Empire State Building; the Empire State Building is taller than the Chrysler Building; therefore, the Freedom Tower is taller than the Chrysler Building. This is an example of an a priori truth because given that one understands the concepts of taller and shorter, the conclusion follows uncontroversially from the premises. This is one way in which the soundness of an argument can be assessed.

Of relevance is how one would proceed if one is unsure of the argument. Thankfully, we no longer live in a world where one would have to go out of their way to measure the heights of the three buildings. A simple Google search will suffice. The Freedom Tower is ~546m. The Empire State Building is ~443m. The Chrysler Building is ~318m. Granted, this is knowledge by way of testimony. I do not intend to connote religious testimony. What I intend to say is that one’s knowledge is grounded in knowledge directly acquired by someone else. In other words, at least one other person actually measured the heights of these buildings and these are the measurements they got.
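The transitivity doing the work here can be made explicit in a few lines of Python. This is only an illustrative check, not part of the original argument; the heights are the approximate figures cited above:

```python
# Approximate heights in meters, as cited above (knowledge by testimony).
heights = {
    "Freedom Tower": 546,
    "Empire State Building": 443,
    "Chrysler Building": 318,
}

p1 = heights["Freedom Tower"] > heights["Empire State Building"]      # premise 1
p2 = heights["Empire State Building"] > heights["Chrysler Building"]  # premise 2
conclusion = heights["Freedom Tower"] > heights["Chrysler Building"]

# If both premises are true, the transitivity of ">" guarantees the conclusion.
assert p1 and p2 and conclusion
```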

Most of our knowledge claims rest on testimony. Not everyone has performed an experimental proof to show that the acceleration of gravity is 9.8 m/s^2. Either one learned it from a professor, read it in a physics textbook, or learned it when watching a science program. Or, they believe the word of someone they trust, be it a friend or a grade school teacher. This does not change the fact that if one cared to, one could exchange knowledge by way of testimony for directly acquired knowledge by performing an experimental proof. This is something I have done, so I do not believe on the basis of mere testimony that Newton’s law holds. I can say that it holds because I tested it for myself.
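The experimental proof amounts to simple kinematics: drop an object from a known height h, time its fall t, and recover g from h = ½gt². A minimal sketch in Python, with hypothetical measurements standing in for real ones:

```python
import math

def estimate_g(height_m: float, fall_time_s: float) -> float:
    """Recover the acceleration of gravity from h = (1/2) * g * t^2."""
    return 2.0 * height_m / fall_time_s ** 2

# Hypothetical measurement: a 4.9 m drop timed at 1.0 s.
g = estimate_g(4.9, 1.0)
assert math.isclose(g, 9.8)
```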

To whet the appetite, let us consider a well-known deductive argument and let us ignore, for the moment, whether it is sound:

P1 All men are mortal.

P2 Socrates is a man.

C Therefore, Socrates is mortal.

If someone were completely disinterested in checking whether this argument, which is merely a finite set of propositions, coheres with the world or reality, I would employ my negation strategy: negating an argument someone assumes to be sound without epistemic warrant or justification. The strategy forces them to explore whether their argument or its negation is sound. Inevitably, the individual will have to abandon their bizarre commitment to a sort of propositional idealism, namely the view that propositions can only be logically assessed, that they do not refer to any real-world entities in context, and that they make no claims about the world. In other words, they will abandon the notion that “All men are mortal” is a mere proposition, lacking context, that is not intended to make a claim about states of affairs objectively accessible to everyone, including the person who disagrees with them. With that in mind, I would offer the following:

P1 All men are immortal.

P2 Socrates is a man.

C Therefore, Socrates is immortal.

This is extremely controversial for reasons we are all familiar with. That is because everyone accepts that the original argument is sound. When speaking of ‘men’, setting aside the historical tendency to dissolve the distinction between men and women, what is meant is “all human persons from everywhere and at all times.” Socrates, as we know, was an ancient Greek philosopher who reportedly died in 399 BCE. Like all people before him, and presumably all people after him, he proved to be mortal. No human person has proven to be immortal and therefore, the original argument holds.

Of course, matters are not so straightforward. Christian apologists offer no arguments that are uncontroversially true like the original argument above. Therefore, the negation strategy will prove extremely effective in disabusing them of propositional idealism and making them empirically assess whether their arguments are sound. What follows are examples of arguments for God that have been discussed ad nauseam. Clearly, theists are not interested in conceding. They are not interested in admitting that even one of their arguments does not work. Sure, there are theists committed to Thomism, for instance, who will reject Craig’s Kalam Cosmological Argument (KCA) because it does not fit into their Aristotelian paradigm and not because it is unsound; they prefer Aquinas’ approach to cosmological arguments. More common is the kind of theist who ignores the incongruity between one argument and another; since they are all arguments for God, each counts as evidence for his existence, and it really does not matter that Craig’s KCA is not Aristotelian. I happen to think that it is, despite Craig’s denial, but I digress.

## Negating Popular Arguments For God’s Existence

Let us explore whether Craig’s Moral Argument falls victim to the negation strategy. Craig’s Moral Argument is as follows:

P1 If God does not exist, objective moral values do not exist.

P2 Objective moral values do exist.

C Therefore, God exists. (Craig, William L. “Moral Argument (Part 1)”. Reasonable Faith. 15 Oct 2007. Web.)

With all arguments, a decision must be made. First, an assessment of the argument form is in order. Is it a modus ponens (MP) or a modus tollens (MT)? Perhaps it is neither and is, instead, a categorical or disjunctive syllogism. In any case, one has to decide which premise(s) will be negated or whether, by virtue of the argument form, one will have to change the form to state the opposite. You can see this with the original example. I could have very well negated P2 and stated “Socrates is not a man.” Socrates is an immortal jellyfish that I tagged in the Mediterranean. Or he is an eternal being that I met while tripping out on DMT. For purposes of the argument, however, since he is not a man, at the very least, the question of whether or not he is mortal is open. We would have to ask what Socrates is. Now, if Socrates is my pet hamster, then yes, Socrates is mortal despite not being a man. It follows that the negation has to be placed where it proves most effective. Some thought has to go into it.

Likewise, a choice has to be made when confronting Craig’s Moral Argument, which is a modus tollens. For the uninitiated, the form is simply [((p → q) ∧ ~q) → ~p] (Potter, A. (2020). The rhetorical structure of Modus Tollens: An exploration in logic-mining. Proceedings of the Society for Computation in Linguistics, 3, 170-179.). Another way of putting it is that one is denying the consequent. That is precisely what Craig does. “Objective moral values do not exist” is the consequent q. Craig is asserting ~q, or “Objective moral values do exist.” Therefore, one route one can take is keeping the argument form and negating P1, which in turn negates P2.
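That the modus tollens schema is valid can be checked mechanically by enumerating every truth-value assignment. A short illustrative sketch in Python:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Modus tollens as a schema: ((p -> q) and not q) -> not p.
# If this comes out true under every assignment, it is a tautology,
# i.e. valid regardless of what p and q actually say about the world.
def modus_tollens(p: bool, q: bool) -> bool:
    return implies(implies(p, q) and (not q), not p)

assert all(modus_tollens(p, q) for p, q in product([True, False], repeat=2))
```

Validity of the schema, of course, says nothing about whether Craig’s particular premises are true; that is the soundness question the negation strategy targets.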

MT Negated Moral Argument

P1 If God exists, objective moral values and duties exist.

P2 Objective moral values do not exist.

C Therefore, God does not exist.

The key is to come up with a negation that is either sound or, at the very least, free of any controversy. Straight away, I do not like P2. Moral realists would deny this negation because, to their minds, P2 is not true. The controversy with P2 is not so much whether it is true or false, but that it falls on the horns of the objectivism-relativism and moral realism/anti-realism debates in ethics. The argument may accomplish something with respect to countering Craig’s Moral Argument, but we are in no better place because of it. This is when we should explore changing the argument’s form in order to get a better negation.

MP Negated Moral Argument

P1 If God does not exist, objective moral values and duties exist.

P2 God does not exist.

C Therefore, objective moral values and duties exist.

This is a valid modus ponens. I have changed the argument form of Craig’s Moral Argument and now have what I think is a better negation of his argument. Atheists can find satisfaction in P2, as it is the epistemic proposition atheists are committed to. The conclusion also alleviates any concerns moral realists might have had with the MT Negated Moral Argument. For my own purposes, I think this argument works better. That, however, is beside the point. The point is that this forces theists either to justify the premises of Craig’s Moral Argument, i.e., prove that the argument is sound, or to assert, on the basis of mere faith, that Craig’s argument is true. In either case, one will have succeeded: in forcing the theist to abandon their propositional idealism and test the argument against the world as ontologically construed, or in getting them to confess that they are indulging in circular reasoning and confirmation bias, i.e., to confess that they are irrational and illogical. Both outcomes count as victories. We can explore whether other arguments for God fall on this sword.

We can turn our attention to Craig’s Kalam Cosmological Argument (KCA):

P1 Everything that begins to exist has a cause.

P2 The universe began to exist.

C Therefore, the universe has a cause. (Reichenbach, Bruce. “Cosmological Argument”. Stanford Encyclopedia of Philosophy. 2021. Web.)

Again, negation can take place in two places: P1 or P2. Negating P1, however, does not make sense. Negating P2, as with the Moral Argument, changes the argument form, though here the change is more arguable and subtle. So we get the following:

MT Negated KCA

P1 Everything that begins to exist has a cause.

P2 The universe did not begin to exist.

C Therefore, the universe does not have a cause.

Technically, Craig’s KCA is a categorical syllogism. Such syllogisms feature a universal (∀) or existential (∃) quantifier; the former is introduced by saying “all,” the latter by saying “some.” Consider, “all philosophers are thinkers; all logicians are philosophers; therefore, all logicians are thinkers.” Or, one could say “no mallards are insects; some birds are mallards; therefore, some birds are not insects.” What Craig is stating is that all things that begin to exist have a cause, so if the universe is a thing that began to exist, then it has a cause. Alternatively, his argument is an implicit modus ponens: “if the universe began to exist, then it has a cause; the universe began to exist; therefore, the universe has a cause.” In any case, the negation works because if the universe did not begin to exist, then the universe is not part of the group of all things that have a cause.
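The implicit modus ponens reading can be checked the same way as modus tollens, by brute-force enumeration. The sketch below is illustrative only; note that when the antecedent is false (the universe did not begin to exist), the conditional premise is satisfied vacuously and no longer delivers the conclusion:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Modus ponens as a schema: ((p -> q) and p) -> q, with
# p = "the universe began to exist" and q = "the universe has a cause".
def modus_ponens(p: bool, q: bool) -> bool:
    return implies(implies(p, q) and p, q)

assert all(modus_ponens(p, q) for p, q in product([True, False], repeat=2))

# With a false antecedent, p -> q holds whatever q is: the premise alone
# no longer settles whether the universe has a cause.
assert implies(False, True) and implies(False, False)
```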

Whether the universe is finite or eternal has been debated for millennia and in a sense, despite changing context, the debate rages on. If the universe is part of an eternal multiverse, it is just one universe in a vast sea of universes within a multiverse that has no temporal beginning. Despite this, the MT Negated KCA demonstrates how absurd the KCA is. The singularity was already there ‘before’ the Big Bang. The Big Bang started the cosmic clock, but the universe itself did not begin to exist. This is more plausible. Consider that everything that begins to exist does so when the flow of time is already in motion, i.e. when the arrow of time pointed in a given direction due to entropic increase reducible to the decreasing temperature throughout the universe. Nothing that has ever come into existence has done so simultaneously with time itself because any causal relationship speaks to a change and change requires the passage of time, but at T=0, no time has passed, and therefore, no change could have taken place. This leads to an asymmetry. We thus cannot speak of anything beginning to exist at T=0. The MT Negated KCA puts cosmology in the right context. The universe did not come into existence at T=0. T=0 simply represents the first measure of time; matter and energy did not emerge at that point.

For a more complicated treatment, Malpass and Morriston argue that “one cannot traverse an actual infinite in finite steps” (Malpass, Alex & Morriston, Wes (2020). Endless and Infinite. Philosophical Quarterly 70 (281):830-849.). In other words, from a mathematical point of view, T=0 plays the role of the x-axis, and the series of events after T=0 behaves like an asymptote along that axis: the events go further and further back, ever closer to T=0, but never actually touch it. For a visual representation, see below:

[Figure: a curve asymptotically approaching the x-axis without touching it. Credit: Free Math Help]
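The asymptote picture can also be sketched numerically. The sequence below is an illustrative stand-in, not Malpass and Morriston’s own construction: its terms get arbitrarily close to T=0 while no term ever sits at T=0 itself.

```python
# Illustrative sequence t_n = 1/n: approaches 0 without any term reaching it.
terms = [1 / n for n in range(1, 10_001)]

assert all(t > 0 for t in terms)  # no event is located at T = 0 itself
assert terms[-1] < 0.001          # yet the terms come arbitrarily close to it
```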

The implication here is that time began to exist, but the universe did not begin to exist. A recent paper implies that this is most likely the case (Quantum Experiment Shows How Time ‘Emerges’ from Entanglement. The Physics arXiv Blog. 23 Oct 2013. Web.). The very hot, very dense singularity before the emergence of time at T=0 would have been subject to quantum mechanics rather than the macroscopic physics that came later, e.g., General Relativity. As such, the conditions were such that entanglement could have resulted in the emergence of time in our universe, but not the emergence of the universe. All of the matter and energy were already present before the clock started to tick. By analogy, if the universe is akin to a growing runner, then the toddler is at the starting line before the gun goes off. The sound of the gun starts the clock. The runner starts running sometime after she hears the sound. As she runs, she goes through all the stages of childhood, puberty, adolescence, and adulthood, and finally dies. Crucially, her running and her growth do not begin until after the gun goes off. Likewise, no changes take place at T=0; all changes take place after T=0. Though entanglement suggests a change occurring before the clock even started ticking, quantum mechanics demonstrates that quantum changes do not require time and, in fact, may result in the emergence of time. Therefore, it is plausible that though time began to exist at the Big Bang, the universe did not begin to exist, thus making the MT Negated KCA sound. The KCA is, therefore, false.

Finally, so that the Thomists do not feel left out, we can explore whether the negation strategy can be applied to Aquinas’ Five Ways. For our purposes, the Second Way is closely related to the KCA and would be defeated by the same considerations. Of course, we would have to negate the Second Way so that it is vulnerable to the considerations that cast doubt on the KCA. The Second Way can be stated as follows:

P1 We perceive a series of efficient causes of things in the world.

P2 Nothing exists prior to itself.

P3 Therefore nothing [in the world of things we perceive] is the efficient cause of itself.

P4 If a previous efficient cause does not exist, neither does the thing that results (the effect).

P5 Therefore if the first thing in a series does not exist, nothing in the series exists.

P6 If the series of efficient causes extends ad infinitum into the past, then there would be no things existing now.

P7 That is plainly false (i.e., there are things existing now that came about through efficient causes).

P8 Therefore efficient causes do not extend ad infinitum into the past.

C Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God. (Gracyk, Theodore. “Argument Analysis of the Five Ways”. Minnesota State University Moorhead. 2016. Web.)

This argument is considerably longer than the KCA, but there are still areas where the argument can be negated. I think P1 is uncontroversial and so, I do not mind starting from there:

Negated Second Way

P1 We perceive a series of efficient causes of things in the world.

P2 Nothing exists prior to itself.

P3 Therefore nothing [in the world of things we perceive] is the efficient cause of itself.

P4 If a previous efficient cause does not exist, neither does the thing that results (the effect).

P5 Therefore if the earlier thing in a series does not exist, nothing in the series exists.

P6 If the series of efficient causes extends ad infinitum into the past, then there would be things existing now.

P7 That is plainly true (i.e., efficient causes, per Malpass and Morriston, extend infinitely into the past or, the number of past efficient causes is a potential infinity).

P8 Therefore efficient causes do extend ad infinitum into the past.

C Therefore it is not necessary to admit a first efficient cause, to which everyone gives the name of God.

Either the theist will continue to assert that the Second Way is sound, epistemic warrant and justification be damned, or they will abandon their dubious propositional idealism and run a soundness test. Checking whether the Second Way or the Negated Second Way is sound would inevitably bring them into contact with empirical evidence supporting one argument or the other. As I have shown with the KCA, it appears that considerations of time, from both a philosophical and a quantum mechanical perspective, greatly lower the probability of the KCA being sound. This carries over neatly to Aquinas’ Second Way and, as such, one has far less epistemic justification for believing that the KCA or Aquinas’ Second Way is sound. The greater justification is found in the negated versions of these arguments.

Ultimately, one either succeeds at making the theist play the game according to the right rules or at getting them to admit that their beliefs are not properly epistemic at all; instead, they believe by way of blind faith, all of their redundant arguments are exercises in circular reasoning, and any pretense of engaging the evidence is an exercise in confirmation bias. Arguments for God are a perfect example of directionally motivated reasoning (see Galef, Julia. The Scout Mindset: Why Some People See Things Clearly and Others Don’t. New York: Portfolio, 2021. 63-66. Print). I much prefer accuracy motivated reasoning. We are all guilty of motivated reasoning, but directionally motivated reasoning is indicative of irrationality and usually speaks to the fact that one holds beliefs that do not square with the facts. Deductive arguments are only useful insofar as their premises can be supported by evidence, which makes it easier to show that an argument is sound. This is why we can reason that if Socrates is a man, more specifically the ancient Greek philosopher we all know, then Socrates was indeed mortal, and that is why he died in 399 BCE. Likewise, this is why we cannot reason that objective morality can only be the case if the Judeo-Christian god exists, that if the universe began to exist, God is the cause, or that if the series of efficient causes cannot regress infinitely and must terminate somewhere, it can only terminate at a necessary first cause, which some call God. These arguments can be negated, and the negations will show that they are either absurd or that the reasoning in them is deficient, resting on directionally motivated reasoning born of a bias for one’s religious faith rather than on the bedrock of carefully reasoned, meticulously demonstrated, accuracy motivated reasoning that does not ignore or omit pertinent facts.

The arguments for God, no matter how old or new, simple or complex, do not work, not only because they rely on directionally motivated and patently biased reasoning, but also because, when tested for soundness without excluding any pertinent evidence, they turn out to be unsound. In the main, they all contain controversial premises that do not work unless one already believes in God. So there is a sense in which these arguments exist to give believers a false sense of security or, more pointedly, a false sense of certainty. Unlike my opponents, I am perfectly content with being wrong, with changing my mind, but the fact remains that theism is simply not the sort of belief I give much credence to. Along with the Vagueness Strategy, the Negation Strategy is something that should be in every atheist’s toolbox.

# Rebuking Rasmussen’s Geometric Argument

By R.N. Carmona

My purpose here is twofold. First and foremost, I want to clarify Rasmussen’s argument: though I can understand how word of mouth can lead to what is essentially a straw man of his argument, especially given that reading it requires paying for an online article or for his book Is God the Best Explanation of Things?, coauthored with Felipe Leon, it is simply good practice to present an argument fairly. Secondly, I want to be stern about the fact that philosophy of religion cannot continue to rake these dead coals. Rasmussen’s argument is just another in a long, winding, and quite frankly, tired history of contingency arguments. In any case, the straw man I want my readers and anyone else who finds this post to stop citing is the so-called “Argument From Arbitrary Limits,” which is decidedly not Rasmussen’s argument.

Rasmussen has no argument called “The Argument From Arbitrary Limits.” Arbitrary limits actually feature in Leon’s chapter, where he expresses skepticism of Rasmussen’s Geometric Argument (Rasmussen, Joshua and Leon, Felipe. Is God the Best Explanation of Things? Switzerland: Palgrave Macmillan. 53-68. Print.). Also, Rasmussen has a Theistic conception of God (omnipresent, wholly good, etc.) that is analogous to what Plantinga means by maximal greatness, but Rasmussen does not refer to God using that term. Perhaps there is confusion with his use of the term “maximal conceivable.” Given Rasmussen’s beliefs, he implies God with what he calls a maximal foundation, “a foundation complete with respect to its fundamental (basic, uncaused) features” (Ibid., 140), though he makes it clear throughout the book that he is open to such a foundation not being synonymous with God. In any case, his maximal conceivable is not a being possessing maximal greatness; at least, not exactly, since it appears he means something more elementary given his descriptions of basic and uncaused, as these clearly do not refer to omnipresence, perfect goodness, and so on. There may also be some confusion with his later argument, which he calls “The Maximal Mind Argument” (Ibid., 112-113), which fails because it relies heavily on nonphysicalism, a series of negative theories in philosophy of mind that do not come close to offering alternative explanations for an array of phenomena thoroughly explained by physicalism (see here). In any case, Rasmussen has no argument resembling that straw man. His arguments rest on a number of dubious assumptions, the nexus of which is his Geometric Argument:

JR1 Geometry is a geometric state.

JR2 Every geometric state is dependent.

JR3 Therefore, Geometry is dependent.

JR4 Geometry cannot depend on any state featuring only things that have a geometry.

JR5 Geometry cannot depend on any state featuring only non-concrete (non-causal) things.

JRC Therefore, Geometry depends on a state featuring at least one geometry-less concrete thing (3-5) (Ibid., 42).

Like Leon, I take issue with JR2. Leon does not really elaborate on why JR2 is questionable, saying only that “the most basic entities with geometry (if such there be) have their geometrics of factual or metaphysical necessity” and that therefore, “it’s not true that every geometric state is dependent” (Ibid., 67). He is correct, of course, but elaboration could have helped here because this is a potential defeater. Factual and metaphysical necessity inhere in physical necessity. The universe is such that the fact that every triangle containing a 90-degree angle is a right triangle is reducible to physical constraints within our universe. This fact of geometry is unlike Rasmussen’s examples, namely chair and iPhone shapes. He states: “The instantiation of [a chair’s shape] depends upon prior conditions. Chair shapes never instantiate on their own, without any prior conditions. Instead, chair-instantiations depend on something” (Ibid., 41). This overt Platonism is questionable in and of itself, but Leon’s statement is forceful in this case: the shape of the chair is not dependent because it has its shape of factual or metaphysical necessity, which stems from physical necessity. Chairs, first and foremost, are shaped the way they are because of our shape when we sit down; furthermore, chairs take the shapes they do because of physical constraints like human weight, gravity, friction against a floor, etc. For a chair not to collapse under the force of gravity and the weight of an individual, it has to be engineered in some way to withstand these forces acting on it; the chair’s shape is so because of physical necessity, and this explains its metaphysical necessity. There is, therefore, no form of a chair in some ethereal realm; an idea like this is thoroughly retrograde and not worth considering.

In any case, the real issue is that chair and iPhone shapes are not the sort of shapes that occur naturally in the universe. Those shapes, namely spheres, ellipses, triangles, and so on, also emerge from physical necessity. It is simply the case that a suspender on a bridge forms the hypotenuse of a right triangle. Like a chair, bridge suspenders take this shape because of physical necessity. The same applies to the ubiquity of spherical and elliptical shapes in the universe. To further disabuse anyone of Platonic ideas, globular shapes are also quite ubiquitous in the universe and are more prominent the closer we get to the Big Bang. There are shapes inherent in our universe that cannot be neatly called geometrical and, even still, these shapes are physically and, therefore, metaphysically necessitated. If JR2 is false, then the argument falls apart. On another front, this addresses Rasmussen’s assertion that God explains why there is less chaos in our universe. Setting aside that the qualification of this statement is entirely relative, the relative order we see in the universe is entirely probabilistic, especially given that entropy guarantees a trend toward disorder as the universe grows older and colder.

I share Leon’s general concern about “any argument that moves from facts about apparent contingent particularity and an explicability principle to conclusions about the nature of fundamental reality” (Ibid., 67) or, as I have been known to put it: one cannot draw ontological conclusions on the basis of logical considerations. Theistic philosophers of religion, and unfortunately philosophers in general, have a terrible habit of leaping from conceivability to possibility and then all the way to actuality. Leon elaborates:

Indeed, the worry above seems to generalize to just about any account of ultimate reality. So, for example, won’t explicability arguments saddle Christian theism with the same concern, viz. why the deep structure of God’s nature should necessitate exactly three persons in the Godhead? In general, won’t explicability arguments equally support a required explanation for why a particular God exists rather than others, or rather than, say, an infinite hierarchy of gods? The heart of the criticism is that it seems any theory must stop somewhere and say that the fundamental character is either brute or necessary, and that if it’s necessary, the explanation of why it’s necessary (despite appearing contingent) is beyond our ability to grasp (Ibid., 67-68).

Of course, Leon is correct in his assessment. Why not Ahura Mazda, his hypostatic union to Spenta Mainyu, and his extension via the Amesha Spentas? If, for instance, the one-many problem requires the notion of a One that is also many, what exactly rules out Ahura Mazda? One starts to see how the prevailing version of Theism in philosophy of religion is just a sad force of habit. This is why it is necessary to move on from these arguments. Contingency arguments are notoriously outmoded because Mackie, Le Poidevin, and others have already provided general defeaters that can apply to any particular contingency argument. Also, how many contingency arguments do we need exactly? In other words, how many different ways can one continue to assert that all contingent things require at least one necessary explanation? Wildman guides us here:

Traditional natural theology investigates entailment relations from experienced reality to, say, a preferred metaphysics of ultimacy. But most arguments of this direct-entailment sort have fallen out of favor, mostly because they are undermined by the awareness of alternative metaphysical schemes that fit the empirical facts just as well as the preferred metaphysical scheme. By contrast with this direct-entailment approach, natural theology ought to compare numerous compelling accounts of ultimacy in as many different respects as are relevant. In this comparison-based way, we assemble the raw material for inference-to-the-best-explanation arguments on behalf of particular theories of ultimacy, and we make completely clear the criteria for preferring one view of ultimacy to another.

Wildman, Wesley J. Religious Philosophy as Multidisciplinary Comparative Inquiry: Envisioning a Future For The Philosophy of Religion. State University of New York Press. Albany, NY. 2010. 162. Print.

Setting aside that Rasmussen does not make clear why he prefers a Christian view of ultimacy as opposed to a Zoroastrian one or another one that may be proposed, I think Wildman is being quite generous when saying that “alternative metaphysical schemes fit the empirical facts just as well as the preferred metaphysical scheme” because the fact of the matter is that some alternatives fit the empirical facts better than metaphysical schemes like the ones Christian Theists resort to. Rasmussen’s preferred metaphysical scheme of a maximal foundation, which properly stated, is a disembodied, nonphysical mind who is omnipresent, wholly good, and so on rests on dubious assumptions that have not been made to cohere with the empirical facts. Nonphysicalism, as I have shown in the past, does not even attempt to explain brain-related phenomena. Physicalist theories have trounced the opposition in that department and it is not even close. What is more is that Christian Theists are especially notorious for not comparing their account to other accounts and that is because they are not doing philosophy, but rather apologetics. This is precisely why philosophy of religion must move on from Christian Theism. We can think of an intellectual corollary to forgiveness. In light of Christian Theism’s abject failure to prove God, how many more chances are we required to give this view? Philosophy of religion is, then, like an abused lover continuing to be moved by scraps of affection made to cover up heaps of trauma. The field should be past the point of forgiveness and giving Christian Theism yet another go to get things right; it has had literal centuries to get its story straight and present compelling arguments and yet here we are retreading ground that has been walked over again and again and again.

To reinforce my point, I am going to quote Mackie and Le Poidevin’s refutations of contingency arguments like Rasmussen’s. It should then become clear that we have to bury these kinds of arguments for good. Let them who are attached to these arguments mourn their loss, but I will attend no such wake. What remains of the body is an ancient skeleton, long dead. It is high time to give it a rest. Le Poidevin put one nail in the coffin of contingency arguments. Anyone offering new contingency arguments has simply failed to do their homework. It is typical of Christian Theists to indulge confirmation bias and avoid what their opponents have to say. The problem with that is that the case against contingency arguments has been made. Obstinacy does not change the fact. Le Poidevin clearly shows why necessary facts do not explain contingent ones:

Necessary facts, then, cannot explain contingent ones, and causal explanation, of any phenomenon, must link contingent facts. That is, both cause and effect must be contingent. Why is this? Because causes make a difference to their environment: they result in something that would not have happened if the cause had not been present. To say, for example, that the presence of a catalyst in a certain set of circumstances speeded up a reaction is to say that, had the catalyst not been present in those circumstances, the reaction would have proceeded at a slower rate. In general, if A caused B, then, if A had not occurred in the circumstances, B would not have occurred either. (A variant of this principle is that, if A caused B, then if A had not occurred in the circumstances, the probability of B’s occurrence would have been appreciably less than it was. It does not matter for our argument whether we accept the origin principle or this variant.) To make sense of this statement, ‘If A had not occurred in the circumstances, B would not have occurred’, we have to countenance the possibility of A’s not occurring and the possibility of B’s not occurring. If these are genuine possibilities, then both A and B are contingent. So one of the reasons why necessary facts cannot causally explain anything is that we cannot make sense of their not being the case, whereas causal explanations requires us to make sense of causally explanatory facts not being the case. Causal explanation involves the explanation of one contingent fact by appeal to another contingent fact.

Le Poidevin, Robin. Arguing for Atheism: An Introduction to the Philosophy of Religion. London: Routledge, 1996. 40-41. Print.
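Le Poidevin’s counterfactual principle, and the contingency it forces on both cause and effect, can be put in standard notation (my rendering, not his):

```latex
% If A caused B, then had A not occurred (in the circumstances),
% B would not have occurred:
(A \text{ causes } B) \;\Rightarrow\; (\neg A \mathrel{\Box\!\!\rightarrow} \neg B)
% Evaluating the counterfactual requires that both failures be possible:
\Diamond\neg A \quad\text{and}\quad \Diamond\neg B
% Equivalently, neither A nor B is necessary; both are contingent:
\neg\Box A \quad\text{and}\quad \neg\Box B
```

The second line is the crux: if A were a necessary fact, ¬A would not be a genuine possibility, and the counterfactual in the first line could not even be evaluated.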

This is a way of substantiating that an effect inheres in its cause, or the principle, like effects from like causes. This has been precisely my criticism of the idea that a nonphysical cause created the physical universe. There is no theory of causation that permits interaction between an ethereal entity’s dispositions and those of physical things. It is essentially a paraphrase of Elizabeth of Bohemia’s rebuttal to Cartesian dualism: how does mental substance interact with physical substance? This is why mind-body dualism remains in a state of incoherence, but I digress. Mackie puts yet another nail in the coffin:

The principle of sufficient reason, then, is more far-reaching than the principle that every occurrence has a preceding sufficient cause: the latter, but not the former, would be satisfied by a series of things or events running back infinitely in time, each determined by earlier ones, but with no further explanation of the series as a whole. Such a series would give us only what Leibniz called ‘physical’ or ‘hypothetical’ necessity, whereas the demand for a sufficient reason for the whole body of contingent things and events and laws calls for something with ‘absolute’ or ‘metaphysical’ necessity. But even the weaker, deterministic, principle is not an a priori truth, and indeed it may not be a truth at all; much less can this be claimed for the principle of sufficient reason. Perhaps it just expresses an arbitrary demand; it may be intellectually satisfying to believe there is, objectively, an explanation for everything together, even if we can only guess at what the explanation might be. But we have no right to assume that the universe will comply with our intellectual preferences. Alternatively, the supposed principle may be an unwarranted extension of the determinist one, which, in so far as it is supported, is supported only empirically, and can at most be accepted provisionally, not as an a priori truth. The form of the cosmological argument which relies on the principle of sufficient reason therefore fails completely as a demonstrative proof.

Mackie, J. L. The Miracle of Theism: Arguments for and against the Existence of God. Oxford: Clarendon, 1982. 86-87. Print.

Every contingency argument fails because it relies on the principle of sufficient reason and because necessity does not cohere with contingency where a so-called causal relation is concerned. Mackie, like Le Poidevin, also questions why God is a satisfactory termination of the regress. Why not something else? (Ibid., 92). Contingency arguments amount to vicious special pleading and an outright refusal to entertain viable alternatives, even in cases where the alternatives are nonphysical and compatible with religious sentiments. In any case, it would appear that the principle of sufficient reason is not on stable ground. Neither is the notion that a necessary being is the ultimate explanation of the universe. Contingency arguments have been defeated, and there is no way to repeat them that does not run headlong into Le Poidevin’s and Mackie’s defeaters. Only the obdurate need to believe that God is the foundational explanation of the universe explains the redundancy of Christian Theists within the philosophy of religion. That is setting aside that apologetics is not philosophy, among other complaints I have made. The Geometric Argument, despite using different language, just is a contingency argument. If the dead horse could speak, it would tell them all to lay down their batons once and for all, but alas.

Ultimately, contingency arguments are yet another example of how repetitive Christianized philosophy of religion has become. There is a sense in which Leon, Le Poidevin, and Mackie are paraphrasing one another because, and here is a bit of irony, like arguments result in like rebuttals. They cannot help but sound as though they each decided, or even conspired, to write on the same topic for a final paper. They are, after all, addressing the same argument no matter how many attempts have been made to word it differently. It is a vicious cycle, a large wheel that keeps on turning; it must be stopped in its tracks if progress in the philosophy of religion is to get any real traction.

# A Refutation of Weaver’s “An Objection to Naturalism and Atheism from Logic”

By R.N. Carmona

Weaver’s argument, although robust, commits what I think is a cardinal sin in philosophy: drawing ontological conclusions from logical considerations. As he puts it: “An objection from logical considerations against atheism is one which attempts to show that some deliverance of logic is at odds with atheism or something strictly implied by atheism” (Weaver, C.G. (2019). “Logical Objections to Atheism.” In A Companion to Atheism and Philosophy, G. Oppy (Ed.). https://doi.org/10.1002/9781119119302.ch30). One should not get in the habit of drawing ontological conclusions on the basis of logical considerations, and though Weaver makes a good attempt to justify his conclusion, there are too many areas in his composite argument that are vulnerable to attack. Parts of his composite argument are clearly stated in his own words, but other parts have to be sifted out from his discussions, specifically on logical monism and classical logical consequence (CLC). Also, the conclusion that atheism is false has to be gathered from his discussion following his claim that ontological naturalism is false.

A general note, prior to proceeding, is in order. Weaver’s paper is quite technical and not at all easy for the untrained eye to read, let alone understand, so I will endeavor to avoid technicality wherever possible. I will permit myself only one technical pursuit, because I disagree with Weaver’s treatment of supervenience, with how he conveniently begs the question regarding reductionist materialism (if only to ensure that his argument is not met with immediate difficulty), and with the conclusion he believes follows. More importantly, I think that the domestication of philosophy within the ivory towers of academia was a critical misstep that needs to be rectified. While analytic philosophy has its use, its abuse makes philosophy the slave of academic elites and therefore keeps it well out of the reach of ordinary people. Philosophy, if it is to be understood by laypeople, needs to be communicated in ordinary, relatable language. Since my interest is, first and foremost, to communicate philosophy in an approachable way, I tend to avoid technicalities as much as possible. With that said, it is not at all necessary to quibble with Weaver’s logical proofs of validity (especially because validity matters much less than soundness), or with Williamson’s notion that contingentist statements can be mapped onto necessitist ones and vice versa, or with his claim that “The asymmetry favours necessitism. Every distinction contingentists can draw has a working equivalent in neutral terms, but the extra commitments of necessitism allow one to draw genuine distinctions which have no working equivalents in neutral terms. If one wants to draw those distinctions, one may have to be a necessitist” (Williamson, T. “Necessitism, Contingentism, and Plural Quantification.” Mind 119 (2010): 657-748. 86. Web.).

Williamson, and Weaver following his cue, are both guilty of ignoring logical atomism. Ultimately, it does not matter whether the validity of logical statements suggests that necessitism about mere propositions is probably true, because we are not talking about mere propositions but rather Sachverhalte, “conglomerations of objects combined with a definite structure” (Klement, Kevin. “Russell’s Logical Atomism.” The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.)). This is perhaps Weaver’s motivation for dismissing Carnap, who was anti-metaphysical. It can be argued, therefore, that reinstating metaphysics, or overstating its importance, is necessary for any argument against naturalism and/or atheism, or conversely for Theism, to get any traction. The fact remains, however, that the propositions comprising a sound logical argument are dependent on real world experiences via the senses. The proposition “there is a cat” may speak to the fact that either (i) one believes they have seen a cat in whatever space they find themselves in, (ii) one knows and can confirm that there is a cat in their vicinity, or (iii) there is presently a cat within one’s field of vision. While I grant that propositions can speak to entirely imaginary or, at least, hypothetical entities, all propositions rely on entities we have identified in our common tongue. Therefore, statements like “there is a cat” will always rely on content not necessarily entailed within a given proposition. There remains a question as to the context of such propositions and the preciseness of what one is trying to say.

Weaver’s Composite Argument Against Naturalism and Atheism, and Its Problems

With these preliminary concerns in our rearview, I can now turn to Weaver’s composite argument and provide a few avenues for the atheist to refute his argument.

W1 Since situationspf do not exist (“I will therefore be entitled to reject…the existence of situationsPF” (Weaver, 6).), situationsC exist.

W2 Given situationsC , classical logical consequence (CLC) is the case.

W3 From W2, necessitism is true.

W4 “If necessitism is true, then ontological naturalism is false.”

W5 “Necessitism is true.”

W6 “Therefore, ontological naturalism is false” (Weaver, 15).

W7 From W6, “Necessitism is true and modal properties are indispensable to our best physical theories.”

W8 If W7, “then there is a new phenomenon of coordination (NPC).”

W9 “Necessarily, (if there is an NPC, it has an explanation).”

W10 “Necessarily, [if possibly both (atheism is true and there is an NPC), then it is not possible that the NPC has an explanation]”

C “Therefore, atheism is false” (Weaver, 18).

Setting aside that Weaver assumes that suitably precisified situations (situationspf) cannot exist, and the problems he would face if just one instance of such a situation does exist, there is a way to show that even on the assumption that only classically precisified situations (situationsC) exist, it does not follow that CLC holds. Weaver seems to think that CLC follows from a schema concerning mere validity: “A deductive argument is valid, just in case, there is no situation in which the premises are true and the conclusion false” (Weaver, 4). I think it is straightforwardly obvious that a typical non sequitur already violates this schema. Consider the following:

P1 If it is cloudy outside, there is a chance of precipitation.

P2 It is cloudy outside.

C Therefore, the Yankee game will be postponed.

The first two premises are true from my present perspective: in New York City, at this hour, it is partly cloudy outside and there is thus a chance of precipitation. The conclusion, however, is false, because the New York Yankees are not even in Spring training and it is out of the norm for them to have a regular season home game in late January. The argument could only go through given at least one extra premise, together with the facts that it is spring rather than winter and that the MLB regular season is underway. This goes a long way in showing that propositions are usually missing crucial content and are true only given a specified context. Perhaps, then, Weaver should provide a different schema to ground CLC.
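The validity schema can also be checked mechanically for this argument. The sketch below is my own illustration, not Weaver’s formalism: the atoms `c`, `p`, and `g` are stand-ins for the premises and conclusion above, and every classical truth-value assignment plays the role of a “situation.” The search finds an assignment on which the premises are true and the conclusion false, so the argument is invalid by the schema:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Valid per the schema: no assignment makes every premise true
    while making the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False, v  # counterexample situation found
    return True, None

# Atoms: c = "it is cloudy", p = "there is a chance of precipitation",
#        g = "the Yankee game will be postponed"
premises = [lambda v: (not v["c"]) or v["p"],  # P1: if cloudy, chance of precipitation
            lambda v: v["c"]]                  # P2: it is cloudy
conclusion = lambda v: v["g"]                  # C: the game will be postponed

is_valid, counterexample = valid(premises, conclusion, ["c", "p", "g"])
print(is_valid)        # False
print(counterexample)  # {'c': True, 'p': True, 'g': False}
```

The counterexample is exactly the situation described above: cloudy, a chance of precipitation, and no postponed game.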

Weaver, unfortunately, does not give an adequate account of what he means by situationspf and what such situations would look like. It is enough to reiterate that the existence of even one such situation takes him back to square one. This is aside from the fact that a rejection of pluralism entails a rejection of arguments operating outside of classical logic, e.g., Plantinga’s Modal Ontological Argument, which rests on the axioms of S5 modal logic. A thorough rejection of free logical systems would limit Theists to the domain of classical logic, which will prove unforgiving, since nothing like God seems operative in the real world.
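For context on the S5 point, the relevant modal principles can be stated explicitly. The characteristic S5 axiom, and the principle Plantinga’s argument relies on (derivable in S5 but unavailable in classical logic alone), are:

```latex
% Axiom (5), characteristic of S5:
\Diamond p \rightarrow \Box\Diamond p
% Derivable in S5; the step Plantinga's Modal Ontological Argument needs
% (letting G = "a maximally great being exists"):
\Diamond\Box G \rightarrow \Box G
```

Without S5, the move from “possibly, a maximally great being necessarily exists” to “a maximally great being necessarily exists” is blocked, which is why a blanket rejection of non-classical systems is costly for the Theist.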

Weaver’s dependence on situationsC and CLC proves problematic and is one place for an atheist to focus. Another avenue for an atheist to take is W4 and W5. Is the falsity of ontological naturalism conditional on necessitism being true? I do not think Weaver established that this premise is true. Furthermore, aside from exploring whether these clauses have a conditional relationship, one can simply ask whether necessitism is true. The jury is still out on whether necessitism or contingentism is the case, and there may yet be a synthesis or a handful of alternative positions that challenge both. Given the current state of the debate, I am uncommitted to either position, but I am suspicious of anyone siding with one for the sake of attempting to disprove a position they already assume is false, which, in Weaver’s case, are naturalism and atheism.

In plain language, the perspective of necessitists falls flat or appears to be saying something nonsensical. Williamson outlines where disagreement lies:

For instance, a contingentist typically holds that it is contingent that there is the Thames: there could have been no such river, and in those circumstances there would have been no Thames. By contrast, a necessitist typically holds that it is necessary that there is the Thames: there could have been no such river, but in those circumstances there would still have been the Thames, a non-river located nowhere that could have been a river located in England. Thus the contingentist will insist that necessarily if there is the Thames it is a river, while the necessitist allows at most that necessarily if the Thames is located somewhere it is a river.

Williamson, T. “Necessitism, Contingentism, and Plural Quantification.” Mind 119 (2010): 657-748. 9. Web.

Contingentists deny the necessity of the Thames, whether river or not. These identity discussions extend further when one considers people. Manuel Pérez Otero explores this and tries to synthesize the two opposing points of view (see Otero, Manuel Pérez. “Contingentism about Individuals and Higher-Order Necessitism.” Theoria: An International Journal for Theory, History and Foundations of Science, vol. 28, no. 3(78), 2013, pp. 393–406. JSTOR, http://www.jstor.org/stable/23926328. Accessed 25 Jan. 2021.). Though Otero’s synthesis is tangential for our purposes, it shows that the binary Weaver thinks exists is one of his own making, essentially a false dichotomy. Given the issues necessitism presents for ordinary language, and the likelihood of one of its alternatives being true, it follows that necessitism is probably false. An exhaustive defense of a position I am not committed to is not at all required to show where Weaver has gone wrong.

This takes us to Weaver’s treatment of supervenience and his New Phenomenon of Coordination (NPC), which states:

Why is it that modal properties and notions enter the verisimilitudinous fundamental dynamical laws of our best and most empirically successful physical theories given that modal properties do not weakly supervene upon the physical or material? (or) How is it that the material world came to be ordered in such a way that it evolves in a manner that is best captured by modally laden physical theorizing or dynamical laws given that modal properties do not even weakly supervene upon the material and non-modal? (Weaver, 17)

If necessitism is probably false, then ontological naturalism still has a chance of being true. This is despite the fact that Weaver failed to show that the falsity of ontological naturalism is conditional on necessitism being true. A stronger route for him to have taken is to argue that ontological naturalism is false iff necessitism is true, because even if it turns out that necessitism is true, ontological naturalism can also be true; Weaver has not established that they are mutually exclusive. Therefore, an atheist can feel no pressure at all when confronted with the NPC. This is setting aside that Weaver appears to be undisturbed by the incongruity of our scientific and manifest images. One would think a reconciliation is required before proclaiming that the material world is organized via modally laden physical theories and dynamical laws that supervene, whether strongly or weakly, on the material world.

The primary issue with Weaver’s assessment is the assumption that all atheists must be committed to reductionist materialism or physicalism in order to be consistent ontological naturalists. There are alternative naturalisms that easily circumvent Weaver’s NPC, because such a naturalist would not be committed to any version of supervenience. As an example, this naturalist can hold, to put it as simply as possible, that scientific theories and models are merely representations. Therefore, the modality of scientific theories need not supervene on the material world at all. Given a representationalist account of scientific theories, perhaps something like a reverse supervenience is the case.

□∀x∀y[∀F(Fx ≡ Fy) → □∀R(Rx ≡ Ry)]

Necessarily, for any entity x and for any entity y, [if (for any material property F, x has F just in case y has F), then necessarily (for any representational property R, x has R just in case y has R)].

Scientific theories and models are, in other words, more akin to impressionist paintings than a group of modally laden propositions. This is a more commonsense view in that a scientific model is a portrait of the real world. While there is a feedback between the model and the material world, in that theories have to be tested against reality, theories and models are not conceived in a vacuum. Real world observations impose the postulates of a theory or render a portrait that we call a model. Ptolemy misconstrued planetary orbits and attributed their motions to invisible spheres rather than the ellipses we are familiar with. He was not far off the mark, especially given that there is an intangible involved, namely gravity, but his impression was inexact. This is what a representationalist account of scientific theories would look like and whether something like reverse supervenience is necessary does no real harm to the account.
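To make the supervenience talk concrete, here is a toy, finite-domain sketch of what a supervenience claim asserts and how it can fail; the domain, property names, and checking function are my own illustrative inventions, not Weaver’s formalism:

```python
from itertools import combinations

def supervenes(domain, base_props, dependent_props):
    """Dependent (R) properties supervene on base (F) properties iff any
    two entities indiscernible with respect to every F are also
    indiscernible with respect to every R."""
    for x, y in combinations(domain, 2):
        agree_base = all(F(x) == F(y) for F in base_props)
        agree_dep = all(R(x) == R(y) for R in dependent_props)
        if agree_base and not agree_dep:
            return False  # materially alike, yet represented differently
    return True

# Toy domain: two material configurations and how a model represents them.
entities = ["a", "b"]
mass_over_1kg = lambda e: e in {"a", "b"}  # a material property both share
modeled_as_point = lambda e: e == "a"      # the model represents them differently

print(supervenes(entities, [mass_over_1kg], [modeled_as_point]))  # False
```

On this toy model, the representational property fails to supervene on the material one: the two entities are materially indiscernible yet represented differently, which is the kind of independence a representationalist naturalist can live with.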

The last route atheists can take concerns Weaver’s conflation of atheism and naturalism. Though I am sympathetic to the conflation, like Nielsen, who stated, “Naturalism, where consistent, is an atheism” (Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 30. Print.), the converse need not hold. In other words, the following statement need not be the case: “atheism, where consistent, is a naturalism.” While I am also partial to that statement, even going as far as defending it in Philosophical Atheism: Counter Apologetics and Arguments For Atheism, that gods do not exist does not entail that no immaterial beings can exist. It could be the case that no iteration of god exists, but that ghosts do. Weaver’s conflation seems to rest on the assumption that naturalism is the antithesis of supernaturalism. Naturalism, however, is also opposed to paranormal phenomena, so there can be defeaters of naturalism that are not also defeaters of atheism. In other words, a definitive proof of the paranormal does not debase the thesis that gods do not exist; a definitive proof of one’s great grandma roaming the estate does not imply that God or any other god undeniably exists. Nielsen’s statement implies only that a disproof of atheism is also a disproof of naturalism, and this does not work in the other direction.

Ultimately, the composite argument above, which I think is true to Weaver’s overall argument, fails to disprove ontological naturalism and atheism. There is far too much controversy in a number of places throughout the argument to regard it as convincing. It needs to be critically amended or entirely abandoned because, in its present form, it does not meet its end. My rebuttal provides fertile ground for further exploration with respect to necessitism, contingentism, and any possible syntheses or alternatives, in addition to what is required to contradict naturalism and atheism. God, whether the idea Theist philosophers defend or a more common concept tied to a particular religion, is still resolutely resigned to silence, hiddenness, and outright indifference. Therefore, Theists have their own onus that must go beyond even a successful argument against naturalism and/or atheism.

# Problems With “Neo-Aristotelian Perspectives On Contemporary Science: Dodging the Fundamentalist Threat”

By R.N. Carmona

Before starting my discussion of the first chapter of Neo-Aristotelian Perspectives On Contemporary Science, some prefatory remarks are in order. In the past, I might have committed to reading an entire book for purposes of writing a chapter-by-chapter review. With other projects in my periphery, I cannot commit to writing an exhaustive review of this book; that remains undecided for now. What I will say is that a sample size might be enough to confirm my suspicions that the Neo-Aristotelian system is rife with problems or, even worse, is a failed system of metaphysics. I am skeptical of the system because it appears to have been recruited to bolster patently religious arguments, in particular those of modern Thomists looking to usher in yet another age of apologetics disguised as philosophy. I maintain that apologetics still needs to be thoroughly demarcated from philosophy of religion; moreover, philosophy of religion should be more than one iteration after another of predominantly Christian literature. With respect to apologetics, I am in agreement with Kai Nielsen, who stated:

It is a waste of time to rehearse arguments about the proofs or evidences for God or immortality. There are no grounds — or at least no such grounds — for belief in God or belief that God exists and/or that we are immortal. Hume and Kant (perhaps with a little rational reconstruction from philosophers like J.L. Mackie and Wallace Matson) pretty much settled that. Such matters have been thoroughly thrashed out and there is no point of raking over the dead coals. Philosophers who return to them are being thoroughly retrograde.

Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 399-400. Print.

The issue is that sometimes one’s hand is forced because the number of people qualified to rake dead coals is far fewer than the people rehashing these arguments. Furthermore, the history of Christianity, aside from exposing a violent tendency to impose the Gospel by force, also exposes a tendency to prey on individuals who are not qualified to address philosophical and theological arguments. Recently, this was made egregiously obvious by Catholic writer Pat Flynn:

So what we as religious advocates must be ready for is to offer the rational, logical basis—the metaphysical realism, and the reality of God—that so many of these frustrated, young people are searching for who are patently fed up with the absurd direction the secular world seems to be going. They’re looking for solid ground. And we’ve got it.

Flynn, Pat. “A Hole in The Intellectual Dark Web”. World On Fire Blog. 26 Jun 2019. Web.

Unfortunately, against all sound advice and blood pressure readings, people like myself must rake dead coals or risk allowing Christians to masquerade as the apex predators in this intellectual jungle. I therefore have to say to the Pat Flynns of the world: no, you don’t got it. More importantly, let young people lead their lives free of the draconian prohibitions so often imposed on people by religions like yours. If you care to offer the rational, logical basis for your beliefs, then perhaps you should not be approaching young people who likely have not had adequate exposure to the scholarship necessary to understand apologetics. This is not to speak highly of the apologist, who typically distorts facts and evidence to fit his predilections, making it necessary to acquire sufficient knowledge of various fields of inquiry so that one is more capable of identifying distortions or omissions of evidence and thus refuting his arguments. If rational, logical discourse were his aim, then he would approach people capable of handling his arguments and contentions. That is when it becomes abundantly clear that the aim is to target people who are more susceptible to his schemes by virtue of lacking exposure to the pertinent scholarship and who may already be gullible due to existing sympathy for religious belief, like Flynn himself, a self-proclaimed re-converted Catholic.

Lanao and Teh’s Anti-Fundamentalist Argument and Problems Within The Neo-Aristotelian System

With these prefatory remarks out of the way, I can now turn to Xavi Lanao and Nicholas J. Teh’s “Dodging The Fundamentalist Threat.” Though I can admire how divorced Lanao and Teh’s argument is from whatever theological views they might subscribe to, it should be obvious to anyone, especially the Christian Thomist, that their argument is at variance with Theism. Lanao and Teh write: “The success of science (especially fundamental physics) at providing a unifying explanation for phenomena in disparate domains is good evidence for fundamentalism” (16). They then add: “The goal of this essay is to recommend a particular set of resources to Neo-Aristotelians for resisting Fundamentalist Unification and thus for resisting fundamentalism” (Ibid.). In defining Christian Theism, Timothy Chappell, citing Paul Veyne, offers the following:

“The originality of Christianity lies… in the gigantic nature of its god, the creator of both heaven and earth: it is a gigantism that is alien to the pagan gods and is inherited from the god of the Bible. This biblical god was so huge that, despite his anthropomorphism (humankind was created in his image), it was possible for him to become a metaphysical god: even while retaining his human, passionate and protective character, the gigantic scale of the Judaic god allowed him eventually to take on the role of the founder and creator of the cosmic order.”

Chappell, Timothy. “Theism, History and Experience”. Philosophy Now. 2013. Web.

Thomists appear more interested in proving that Neo-Aristotelianism is a sound approach to metaphysics and the philosophy of science than in ensuring that the system is not at odds with Theism. The notion that God is the founder and creator of the cosmic order is uncontroversial among Christians and Theists more generally. Inherent in this notion is that God maintains the cosmic order and created a universe that bears his fingerprints; as such, physical laws are capable of unification because the universe exhibits God’s perfection. The universe is therefore, at least at its start, perfectly symmetric, already containing within it intelligible forces, including finely tuned parameters that result in human beings, creatures made in God’s image. Therefore, in the main, Christians who accept Lanao and Teh’s anti-fundamentalism have, inadvertently or deliberately, done away with a standard Theistic view.

So already one finds that Neo-Aristotelianism, at least from the perspective of the Theist, is not systematic, in that the would-be system is internally inconsistent. Specifically, when a system imposes cognitive dissonance of this sort, it is usually a good indication that some assumption within the system needs to be radically amended or entirely abandoned. In any case, there are of course specifics that need to be addressed, because I am not entirely sure Lanao and Teh fully understand Nancy Cartwright’s argument. I think Cartwright is saying quite a bit more, and that her reasoning is mostly correct even if her conclusion is off the mark.

While I strongly disagree with the Theistic belief that God essentially created a perfect universe, I do maintain that Big Bang cosmology imposes on us the early symmetry of the universe via the unification of the four fundamental forces. Cartwright is therefore correct in her observation that science gives us a dappled portrait, a patchwork stemming from domains operating very much independently of one another; as Lanao and Teh observe: “point particle mechanics and fluid dynamics are physical theories that apply to relatively disjoint sets of classical phenomena” (18). The problem is that I do not think Lanao and Teh understand why this is the case, or at least, they do not make clear that they know why we are left with this dappled picture. I will therefore attempt to argue in favor of Fundamentalism without begging the question, although, like Cartwright, I am committed to a position that more accurately describes hers: Non-Fundamentalism. It may be that the gradual freezing of the universe, over the course of about 14 billion years, leaves us entirely incapable of reconstructing its early symmetry. I will elaborate on this later, but it makes for a different claim altogether, and one that I take Cartwright to be making, namely that Fundamentalists are not necessarily wrong to think that fundamental unification (FU) is possible, but that, given the state of our present universe, it cannot be obtained. Cartwright provides us with a roadmap of what it would take to arrive at FU, thereby satisfying Fundamentalism, but the blanks need to be filled so that we can get from the shattered glass that is our current universe to the perfectly symmetric mirror it once was.

Lanao and Teh claim that Fundamentalism usually results from the following reasoning:

We also have good reason to believe that everything in the physical world is made up of these same basic kinds of particles. So, from the fact that everything is made up of the same basic particles and that we have reliable knowledge of the behavior of these particles under some experimental conditions, it is plausible to infer that the mathematical laws governing these basic kinds of particles within the restricted experimental settings also govern the particles everywhere else, thereby governing everything everywhere. (Ibid.)

They go on to explain that Sklar holds that biology and chemistry do not characterize things as they really are. This is what they mean when they say Fundamentalists typically beg the question, in that they take Fundamentalism as a given. However, given Lanao and Teh’s construction of Cartwright’s argument, they can also be accused of fallacious reasoning, namely arguing from ignorance. They formulate Cartwright’s Anti-Fundamentalist Argument as follows:

(F1) Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.

(AF2) Classical mechanics has a limited set of principled models, so it only applies to a limited number of sub-domains.

(AF3) The limited sub-domains of AF2 do not exhaust the entire classical domain.

(AF4) From (F1), (AF2), and (AF3), the domain of classical mechanics is not universal, but dappled. (25-26)

On AF2, how can we expect classical mechanics to acquire more principled models than it presently has? How do we know that, if given enough time, scientists working on classical mechanics will not have come up with a sufficient number of principled models to satisfy even the anti-fundamentalist? That results in quite the conundrum for the anti-fundamentalist. Can the anti-fundamentalist provide the fundamentalist with a satisfactory number of principled models that exhaust an entire domain? This is to ask whether anyone can know how many principled models are necessary to contradict AF3. On any reasonable account, science has not had sufficient time to come up with enough principled models in all of its domains and thus, this argument cannot be used to bolster the case for anti-fundamentalism.

While Lanao and Teh are dismissive of Cartwright’s particularism, it is necessary for the correct degree of tentativeness she exhibits. Lanao and Teh, eager to disprove fundamentalism, are not as tentative, but given the very limited amount of time scientists have had to build principled models, we cannot expect for them to have come up with enough models to exhaust the classical or any other scientific domain. Cartwright’s tentativeness is best exemplified in the following:

And what kinds of interpretative models do we have? In answering this, I urge, we must adopt the scientific attitude: we must look to see what kinds of models our theories have and how they function, particularly how they function when our theories are most successful and we have most reason to believe in them. In this book I look at a number of cases which are exemplary of what I see when I study this question. It is primarily on the basis of studies like these that I conclude that even our best theories are severely limited in their scope.

Cartwright, Nancy. The Dappled World: A Study of The Boundaries of Science. Cambridge: Cambridge University Press, 1999. 9. Print.

The fact that our best theories are limited in their scope reduces to the fact that our fragmented, present universe is too complex to generalize via one law per domain or one law that encompasses all domains. For purposes of adequately capturing what I am attempting to say, it is worth revisiting what Cartwright says about a \$1,000 bill falling in St. Stephen’s Square:

Mechanics provides no model for this situation. We have only a partial model, which describes the 1000 dollar bill as an unsupported object in the vicinity of the earth, and thereby introduces the force exerted on it due to gravity. Is that the total force? The fundamentalist will say no: there is in principle (in God’s completed theory?) a model in mechanics for the action of the wind, albeit probably a very complicated one that we may never succeed in constructing. This belief is essential for the fundamentalist. If there is no model for the 1000 dollar bill in mechanics, then what happens to the note is not determined by its laws. Some falling objects, indeed a very great number, will be outside the domain of mechanics, or only partially affected by it. But what justifies this fundamentalist belief? The successes of mechanics in situations that it can model accurately do not support it, no matter how precise or surprising they are. They show only that the theory is true in its domain, not that its domain is universal. The alternative to fundamentalism that I want to propose supposes just that: mechanics is true, literally true we may grant, for all those motions whose causes can be adequately represented by the familiar models that get assigned force functions in mechanics. For these motions, mechanics is a powerful and precise tool for prediction. But for other motions, it is a tool of limited serviceability.

Cartwright, Nancy. “Fundamentalism vs. the Patchwork of Laws.” Proceedings of the Aristotelian Society, vol. 94, 1994, pp. 279–292. JSTOR, http://www.jstor.org/stable/4545199.

Notice how even Cartwright alludes to the Theistic notion of FU being attributable to a supremely intelligent creator whom people call God. In any case, what she says here does not entail that only the opposite of Fundamentalism can be the case. Even philosophers slip into thinking in binaries, but we are not limited to Fundamentalism or Anti-Fundamentalism; Lanao and Teh admit as much. There can be a number of Non-Fundamentalist positions that prove more convincing. In the early universe, the medium of water, and therefore motions in water, did not exist. Because of this, there was no way to derive physical laws within that medium. Moreover, complex organisms like jellyfish did not exist then either, and so the dynamics of their movements could not feature in any data concerning organisms moving about in water. This is where I think Cartwright, and Lanao and Teh taking her lead, go astray.

Cartwright, for example, strangely calls for a scientific law of wind. She states: “When we have a good-fitting molecular model for the wind, and we have in our theory (either by composition from old principles or by the admission of new principles) systematic rules that assign force functions to the models, and the force functions assigned predict exactly the right motions, then we will have good scientific reason to maintain that the wind operates via a force” (Ibid). Wind, unlike inertia or gravity, is an inter-body phenomenon: heat from the Sun is distributed unevenly across the Earth’s surface. Warmer air at the equator rises and moves toward the poles while cooler air sinks and moves toward the equator. Wind moves from areas of high pressure to areas of low pressure, and the boundary between these areas is called a front. This is why we cannot have a law of wind: aside from the complex systems on Earth, such a law would have to apply to the alien systems on gas giants like Jupiter and Saturn. The point is best exemplified by the fact that scientists cannot even begin to comprehend why Neptune’s Dark Spot did a complete about-face. A law of wind would have to apply universally, not just on Earth, and would thus have to explain the behavior of wind on other planets. That is an impossible ask because the composition of other planets and their stars would make for different conditions, conditions best analyzed with complex models accounting for as much data as possible rather than a law attempting to generalize what wind should do under simple conditions.

Despite Cartwright’s lofty demand, her actual argument does not preclude Fundamentalism, whatever Lanao and Teh may have thought. Cartwright introduces a view that I think is in keeping with the present universe: “Metaphysical nomological pluralism is the doctrine that nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws” (Ibid.). I think it is entirely possible to get from metaphysical nomological pluralism (MNP) to FU if one fills in the blanks by way of symmetry breaking. Before seeing how symmetry breaking bridges the gap between MNP and FU, it is necessary to outline an argument from Cartwright’s MNP to FU:

F1 Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.

MNP1 Nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws.

MNP2 It is possible that the initial properties in the universe allow these laws to be true together.

MNP3 From F1, MNP1, and MNP2, the emergence of different systems of laws from the initial properties of the universe implies that FU is probable.

Lanao and Teh agree that F1 is a premise shared by Fundamentalists and Anti-Fundamentalists. As a Non-Fundamentalist, I see it as straightforwardly obvious as well. With respect to our present laws, I think that FU may be out of our reach. As has been famously repeated, humans did not evolve to do quantum mechanics, let alone to piece together a shattered mirror. This is why I am a Non-Fundamentalist as opposed to an Anti-Fundamentalist; the subtle distinction is that I neither oppose FU being the case nor think it is false, but rather hold that it is extremely difficult to come by. Michio Kaku describes the universe as follows: “Think of the way a beautiful mirror shatters into a thousand pieces. The original mirror possessed great symmetry. You can rotate a mirror at any angle and it still reflects light in the same way. But after it is shattered, the original symmetry is broken. Determining precisely how the symmetry is broken determines how the mirror shatters” (Kaku, Michio. Parallel Worlds: A Journey Through Creation, Higher Dimensions, and The Future of The Cosmos. New York: Doubleday, 2005. 97. Print.).

If Kaku’s thinking is correct, then there is no way to postulate that God had St. Peter arrange the initial properties of the universe so that all of God’s desired laws are true simultaneously without realizing that FU is not only probable but true, however unobtainable it may be. The shards would have to pertain to the mirror. Kaku explains that Grand Unified Theory (GUT) symmetry breaks down to SU(3) x SU(2) x U(1), which yields the 19 free parameters required to describe our present universe. There are other ways for the mirror to have broken, other ways for GUT symmetry to break down. This implies that other universes would have residual symmetry different from that of our universe and would therefore have entirely different systems of laws. These universes, at minimum, would have different values for these free parameters, like a weaker nuclear force that would prevent star formation and make the emergence of life impossible. In other scenarios, the symmetry group could yield an entirely different Standard Model in which protons quickly decay into anti-electrons, which would also prevent life as we know it (Ibid., 100).
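Kaku’s figure of 19 can be made concrete. On the conventional bookkeeping of the Standard Model’s free parameters (the standard counting, not Kaku’s own breakdown, and ignoring neutrino masses), they tally as follows:

```latex
\underbrace{6}_{\text{quark masses}}
+ \underbrace{3}_{\text{charged lepton masses}}
+ \underbrace{3+1}_{\text{CKM angles, CP phase}}
+ \underbrace{3}_{\text{gauge couplings}}
+ \underbrace{2}_{\text{Higgs mass, vacuum expectation value}}
+ \underbrace{1}_{\theta_{\mathrm{QCD}}}
= 19
```

A universe with a different residual symmetry would, at minimum, alter or replace entries in this list, which is precisely the sense in which its system of laws would differ from ours.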

Modern scientists are then tasked with working backwards. The alternative is to undertake the gargantuan task, as Cartwright puts it, of deriving the initial properties, which would no doubt be tantamount to a Theory of Everything from which all of the systems of laws extend, i.e., hypothesizing that initial conditions q, r, and s yield the different systems of laws we know. This honors the concretism Lanao and Teh call for in scientific models while also giving abstractionism its due. As Paul Davies offered, the laws of physics may be frozen accidents. In other words, the effective laws of physics, which is to say the laws of physics we observe, might differ from the fundamental laws of physics, which would be, so to speak, the original state of the laws of physics. In a chaotic early universe, physical constants may not have existed. Hawking also spoke of physical laws that tell us how the universe will evolve if we know its state at some point in time. He added that God could have chosen an “initial configuration” or fundamental laws for reasons we cannot comprehend. He asks, however, “if he had started it off in such an incomprehensible way, why did he choose to let it evolve according to laws that we could understand?” (Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988. 127. Print.). He then goes on to discuss possible reasons for this, e.g., chaotic boundary conditions and anthropic principles.

Implicit in Hawking’s reasoning is that we can figure out what physical laws will result in our universe in its present state. The obvious drawback is that the observable universe is ~13.8 billion years old and ~93 billion light-years in diameter. The universe may be much larger still, making the task of deriving this initial configuration monumentally difficult. This would require a greater degree of abstraction than Lanao and Teh, and apparently Neo-Aristotelians, desire, but it is the only way to discover how past iterations of physical laws, or earlier systems of laws, led to our present laws of physics. The issue with modern science is that it does not often concern itself with states in the distant past; many equations and models deal in the present, and even the future, but not enough of them confront the past. Cosmologists seeking to understand star formation, the formation of solar systems, and the formation of large galaxies have to use computer models to test their theories against the past, since there is no way to observe the distant past directly. In this way, I think technology will prove useful in arriving at earlier and earlier conditions until we arrive at the mirror before it shattered. The following model, detailing how an early collision explains the shape of our galaxy, is a fine example of what computer models can do to help illuminate the distant past:

## Further Issues With The Neo-Aristotelian System

A recent rebuttal to Alexander Pruss’ Grim Reaper Paradox can be generalized to refute Aristotelianism overall. The blogger over at Boxing Pythagoras states:

Though Alexander Pruss discusses this Grim Reaper Paradox in a few of his other blog posts, I have not seen him discuss any other assumptions which might underly the problem. He seems to have focused upon these as being the prime constituents. However, it occurs to me that the problem includes another assumption, which is a bit more subtle. The Grim Reaper Paradox, as formulated, seems to presume the Tensed Theory of Time. I have discussed, elsewhere, the reasons that I believe the Tensed Theory of Time does not hold, so I’ll simply focus here on how Tenseless Time resolves the Grim Reaper Paradox.

To see the difference between old and new tenseless theories, it is necessary first to contrast an old tenseless theory with a tensed theory holding that the properties of pastness, presentness, and futurity are ascribed to events by tensed sentences. The debate regarding which theory is true centered on whether tensed sentences could be translated by tenseless sentences that instead ascribe the relations of earlier than, later than, or simultaneous with. For example, “the sun will soon rise” seems to entail the sun’s rising in the future, as an event that will become present, whereas “the sun is rising now” seems to entail the event’s being present and “the sun has risen” its having receded into the past. If these sentences are true, the first ascribes futurity whilst the second ascribes presentness and the last ascribes pastness. Even if they are true, however, that is not evidence that events have such properties. Tensed sentences may have tenseless counterparts with the same meaning.

This is where Quine’s notion of de-tensing natural language comes in. Rather than saying “the sun is rising” as uttered on some date, we would instead say that “the sun is rising” on that date. The present tense in the first sentence does not ascribe presentness to the sun’s rising, but instead refers to the date the sentence is spoken. In like manner, if “the sun has risen” as uttered on some date is translated into “the sun has risen” on a given date, then the former sentence does not ascribe pastness to the sun’s rising but only refers to the sun’s rising as having occurred earlier than the date on which the sentence is spoken. If these translations are true, temporal becoming is unreal and reality is comprised of the relations earlier than, later than, and simultaneous with. Time then consists of these relations rather than the properties of pastness, presentness, and futurity (Oaklander, Nathan. Adrian Bardon ed. “A-, B- and R-Theories of Time: A Debate”. The Future of the Philosophy of Time. New York: Routledge, 2012. 23. Print.).
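Quine’s de-tensing maneuver can be put schematically (the notation is mine, offered only as an illustration): an utterance made at a time t is translated into a tenseless predication indexed to t, with pastness reduced to the earlier-than relation:

```latex
\text{``the sun is rising,'' uttered at } t
  \;\Longrightarrow\; \mathrm{Rising}(\mathrm{sun},\, t) \\[4pt]
\text{``the sun has risen,'' uttered at } t
  \;\Longrightarrow\; \exists t' \,\big( t' < t \;\wedge\; \mathrm{Rising}(\mathrm{sun},\, t') \big)
```

Here < is the tenseless earlier-than relation; neither translation ascribes a property of presentness or pastness to the event itself.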

The writer at Boxing Pythagoras continues:

On Tensed Time, the future is not yet actual, and actions in the present are what give shape and form to the reality of the future. As such, the actions of each individual future Grim Reaper, in our paradox, can be contingent upon the actions of the Reapers which precede them. However, this is not the case on Tenseless Time. If we look at the problem from the notion of Tenseless Time, then it is not possible that a future Reaper’s action is only potential and contingent upon Fred’s state at the moment of activation. Whatever action is performed by any individual Reaper is already actual and cannot be altered by the previous moments of time. At 8:00 am, before any Reapers activate, Fred’s state at any given time between 8:00 am and 9:00 am is set. It is not dependent upon some potential, but not yet actual, future action as no such thing can exist.

I think this rebuttal threatens the entire Aristotelian enterprise. Aristotelians will have to deny time while maintaining that changes happen in order to escape the fact that de-tensed theories of time, which are more than likely the correct way of thinking about time, impose a principle: any change at a later point in time is not dependent on a previous state. That is ignoring that God, being timeless, could not have created the universe at some time prior to T = 0, the first instance of time on the universal clock. This is to say nothing of backward causation, which is entirely plausible given quantum mechanics. Causation calls for a deeper analysis, which neo-Humeans pursue despite not being entirely correct. The notion of dispositions is crucial. It is overly simplistic to say that the hot oil caused the burns on my hand or that the knife caused the cut on my hand. The deeper analysis in each case is that the boiling point of cooking oil, almost two times that of water, has something to do with why the burn feels distinct from a knife cutting into my hand. Likewise, the dispositions of the blade have a different effect on the skin than oil does. Causal relationships are simplistic and, as Nietzsche suggested, do not account for the continuum within the universe and the flux that permeates it. Especially in light of quantum mechanics, we are admittedly ignorant about most of the intricacies within so-called causal relationships. Neo-Humeans are right to think that dispositions are important. This will disabuse us of appealing to teleology in the following manner:

‘The function of X is Z’ [e.g., the function of oxygen in the blood is… the function of the human heart is… etc.] means

(a) X is there because it does Z,
(b) Z is a consequence (or result) of X’s being there.

Larry Wright, ‘Function’, Philosophical Review 82(2) (April 1973):139–68, see 161.

It is more accurate to say that a disposition of X is instantiated in Z rather than that X exists for purposes of Z because, in real-world examples, a given X can give rise to A, B, C, and so on. This is to say that one so-called cause can have different effects. A knife can slice, puncture, saw, etc. Hot oil can burn human skin, melt ice without mixing with it, combust when near other media or when heated to temperatures beyond its boiling point, etc. One would have to ask why cooking oil does not combust when a cube of ice is thrown into the pan: what about canola oil, for a more specific example, causes it to auto-ignite at 435 degrees Fahrenheit, and why does this not happen when water is heated beyond its boiling point?

As it turns out, then, Neo-Aristotelians are not as committed to concretism as Lanao and Teh would hope. They strive for generalizations while refusing to investigate the details of how models are employed in normal science, as was made obvious by Lanao and Teh’s dismissal of Cartwright’s particularism and, further, by their argument against Fundamentalism, which does not flow neatly from Cartwright’s argument. For science to arrive at anything concrete, abstraction needs to be allowed, specifically in cases venturing further and further into the past. Furthermore, a more detailed analysis of changes needs to be incorporated into our data. Briefly, when thinking of the \$1,000 bill descending into St. Stephen’s Square, we must ask whether there is precipitation and, if so, how much; whether bird droppings may have altered its trajectory on the way down; what effect smog or dust particles have on the bill’s trajectory; and, as Cartwright asked, what about wind gusts? What is concrete is consistent with the logical atomist’s view that propositions speak precisely to simple particulars, or to many of them bearing some relation to one another.

Ultimately, I think that Lanao and Teh fail to establish a Neo-Aristotelian approach to principled scientific models. They also fail to show that FU, and therefore Fundamentalism, is false. What is also clear is that they did not adequately engage Cartwright’s argument, which is thoroughly Non-Fundamentalist, even if that conclusion escaped her. I hold that Cartwright’s conclusions are off the mark because she demands that generalized laws be derived from extremely complex conditions. It is not incumbent on dappled laws within a given domain of science to be unified in order for FU to ultimately be the case. It could be that, due to symmetry breaking, one domain appears distinct from another and, because of our failure, at least until now, to realize how the two cohere, unifying principles between the two domains currently elude us. Lanao and Teh’s argument against FU therefore appeals to the ignorance of science, not unlike apologetic arguments of much lesser quality. The ignorance of today’s science does not suggest that current problems will continue to confront us while their solutions perpetually elude us. What is needed is time. Like Lanao and Teh, I agree that Cartwright has a lot of great ideas concerning principled scientific models, but I hold that her ideas lend support to FU. A unified metaphysical account of reality would likely end up in a more dappled state than modern science finds itself in, and despite Lanao and Teh’s attempts, a hypothetical account of that sort would rely too heavily on science to be considered purely metaphysical. My hope is that my argument, which employs symmetry breaking to bolster the probability of FU being the case, is more provocative, if not more persuasive.

# The Argument From the Impossibility of Singular Consciousness

By R.N. Carmona

The following argument is based on an obvious truth and also on a theistic assumption. The obvious truth comes from John Mbiti, who in his African Religions and Philosophy (1975) said: “I am because we are, and since we are, therefore I am.” This isn’t the Cartesian view many people operate from: “I think, therefore I am.” Consciousness, in other words, isn’t born in and doesn’t exist in a vacuum. It isn’t, as it were, a location on a map that can be identified in isolation from other locations; it is like a location that’s identified only in its relation to other locations. I know where I find myself only because I know where all other minds in my vicinity are. Even deeper than that is the unsettling fact that my entire personality isn’t a melody, but rather a cacophony; I am who I am because the people in my life are who they are, and they are who they are because of the influence of others and the circumstances they’ve faced, and so on and so forth. As Birhane explains:

We need others in order to evaluate our own existence and construct a coherent self-image. Think of that luminous moment when a poet captures something you’d felt but had never articulated; or when you’d struggled to summarise your thoughts, but they crystallised in conversation with a friend. Bakhtin believed that it was only through an encounter with another person that you could come to appreciate your own unique perspective and see yourself as a whole entity. By ‘looking through the screen of the other’s soul,’ he wrote, ‘I vivify my exterior.’ Selfhood and knowledge are evolving and dynamic; the self is never finished – it is an open book.

Most people, given the Cartesian view, look at the self through the lens of what Dennett calls the Cartesian theater. There is, to our minds, a continuity between the self when we are children and the self now as adults. We point to attributes, even if only loosely related: our temperament, our competitive nature, the fact that we’re friendly or not, and so on. Few of us consider the circumstances and the people who played a role in molding these seeming consistencies. Where many of us see a straight continuous line, others see points on a graph, and yet, even if there’s virtual consistency in one’s competitive edge, for instance, there are milieus to consider, from the school(s) one attended, to one’s upbringing, to the media one was exposed to. The self is indeed an open and ever-changing book. The Cartesian theater, like the Cartesian self, is a convenient illusion; there is no self without other selves.

The Cartesian view is problematic on its own. “I think, therefore I am” was Descartes’ conclusion, but one can imagine saying to Descartes: “okay, but what do you think about? What is the content of your thoughts?” So even on the Cartesian view, Mbiti’s truth is found. It is, in fact, a tacit admission contained in Descartes’ view because in order to think one must be thinking about something or someone. Some thoughts are elaborate and involve representations of places one is familiar with whether it be one’s living room or local grocery store. Even the content of Descartes’ thoughts acknowledged other people and things, so Descartes didn’t conclude “I think [full stop], therefore I am.” In truth, it was more like “I think [about x things and y people represented in z places], therefore I am.” He identified himself only through other selves.

The theistic assumption is the idea that the mind of god(s) is like ours. On Judaism and Christianity, we were fashioned in his image. This doesn’t apply so much to our physical bodies, but more so to our minds because on the theistic assumption, the mind proceeds from an immaterial, spiritual source rather than from a physical source like our brains or the combination of our brains and nervous systems.

On the assumption that god’s mind is like ours, and given the truth expressed by Mbiti, it is impossible for a singular consciousness to have existed on its own in eternity past. In other words, before god created angels, humans, and animals, there was some point in eternity past at which he was the only mind that existed. Yet if his mind is like ours, then there was never a point at which he existed on his own. The only recourse for the monotheist is therefore polytheism, because the implication is that at least one other mind must have existed along with god’s in eternity past.

Muslims and Jews, if Mbiti’s truth is accepted, will have no choice but to concede. Some Christians, on the other hand, will think they find recourse in the idea of the Trinity. Some might try to qualify the notion that the minds of the Father, Son, and Holy Spirit are distinct from one another. The obvious issue with that idea is that it would undermine the unity their god is said to have. In fact, that has been at the core of much philosophical dispute since the Muslim golden age. As Tuggy explains:

Muslim philosopher Abu Yusef al-Kindi (ca. 800–70) understood the doctrine to assert that there are three divine persons, three individuals, each composed of the divine essence together with its own distinctive characteristic. But whatever is composed is caused, and whatever is caused is not eternal. So the doctrine, he holds, absurdly claims that each of the persons isn’t eternal, and since they’re all divine, each is eternal.

Whether or not these contentions hold is still a matter of dispute and is not our present focus. The Trinity on its own wouldn’t be sufficient because it would require a milieu to exist within. Given this, there would be other things that also existed in eternity past. Plato’s Forms might be those sorts of things because god’s mind, being like ours, would require a number of things to experience and to assist with maintaining god’s self, per se. Mbiti’s truth applies to cognitive and psychological aspects of humans and even of other animals, especially mammals. It also applies, more broadly, to consciousness, and as such, the so-called Problem of Other Minds is only a problem if one assumes the Cartesian view; other minds and other things are the reasons a self forms and can come to identify itself as distinct. Cognitive and psychological aspects of us don’t exist in a vacuum, but neither does consciousness. The same, on the assumption that god’s mind is like ours, applies to god’s mind.

Ultimately, a singular consciousness could not have existed in eternity past absent other consciousnesses and things. Unless one continues to obstinately assume that Descartes’ “I think, therefore I am” is true over and above Mbiti’s “I am because we are, and since we are, therefore I am,” there’s no recourse outside of polytheism. Either there were two or more gods that existed in eternity past or there are no gods. What should be clear from what’s been outlined here is that a singular consciousness that once existed in a vacuum at some point in eternity past, i.e., the monotheistic conception of god, is impossible.

# An Excerpt From My New Book

It is useful to note that even if Plantinga or any Christian rejects the contra-argument, the first premise can be challenged. Rather than quibble with what is meant by maximal excellence, an atheist can accept the definition as it stands. The atheist can, however, question whether this is the possible world W in which a being of maximal excellence exists and explore the consequences if it turns out that this isn’t that possible world. In other words, if this isn’t that specific possible world, then the argument speaks of a possible world that is inaccessible to the believer, and the believer is therefore in no better position to convince the non-believer. Put another way, if a being of maximal excellence doesn’t exist in this possible world, then it possibly exists in another world that cannot be accessed by any of the inhabitants of this world. There is therefore no utility or pragmatic value in belief. The argument would only speak of a logical possibility that is ontologically impossible in this world.

The atheist can take it a step further. What Christian theists purport to know about god stems from the Bible. The Bible, in other words, gives us information about god, his character, and his history as it relates to this world. Assuming this is possible world W, does he represent a being having maximal excellence? Is he, for instance, identical to a being who is wholly good? Any honest consideration of parts of the Bible would lead one to conclude that god is not identical to a being who is wholly good; god, in other words, isn’t wholly good. So obvious is his evil that Marcion of Sinope diverged from proto-Orthodox Christians in concluding that the Jewish God in the Old Testament is an evil deity and is in no way the father of Jesus. Yet if he’s evil, then he isn’t wholly good and if he isn’t wholly good, he fails to have maximal excellence.

Moreover, and much more damning to Plantinga’s argument, is that a being of maximal greatness has maximal excellence in all worlds. Therefore, if this being does not have maximal excellence in one of those worlds or more specifically, in this world, then it does not possess maximal greatness. Far from victorious, Plantinga’s argument would taste irreparable defeat and this, in more ways than one.

# On Challenging the Laws of Logic

R.N. Carmona

In the past, I’ve argued that the laws of logic can be challenged or even violated. A response to my post on procedural realism and the Moral Argument mentioned that the laws of excluded middle and non-contradiction have been challenged by analytic philosophers. I found it curious that there was no mention of a challenge to the law of identity, since I think it’s the most easily challenged.

In order to challenge the law of identity, one need only challenge its underlying assumption, namely essentialist ontology. “The essentialist tradition, in contrast to the tradition of differential ontology, attempts to locate the identity of any given thing in some essential properties or self-contained identities” (see here). According to modern physics, as it now stands, all objects are atoms in flux and empty space. Where then is the atomic glue that holds a table or chair together and how does one differentiate between two chairs that look precisely alike without presupposing the essentialist tradition?

The essentialist tradition begs the question where identity is concerned, since there’s no way to prove that any one object has essential properties. Interestingly, the reason for presupposing the essentialist tradition might have everything to do with personal identity. People are animate objects, but objects nonetheless. Without essentialism, we can no longer assume that we have a distinct identity. Physically, we too are atoms in flux and empty space, and thus what we’re left with are second-order grounds for personal identity. In other words, we can avoid talk of atoms and empty space and instead look to DNA, neurons, brain anatomy, and so on. In this way we retain our uniqueness without first-order grounds.

That aside, if we instead argue from the basis of differential ontology, the law of identity is no longer as unassailable as it appeared. As stated, we would rely on second order grounds. “Differential ontology…understands the identity of any given thing as constituted on the basis of the ever-changing nexus of relations in which it is found, and thus, identity is a secondary determination, while difference, or the constitutive relations that make up identities, is primary.” We would therefore ignore notions of a stable identity and instead look to differences between objects.

Given this, the law of identity (A = A) will be replaced with the law of distinction, i.e., something like A ≠ B or C or D and so on. Since A is not B, C, or D, we identify A because it is contrasted with objects in relation to it. We are no longer assuming that there are essential properties that make A, A. This is, after all, what we say of ourselves. We do not say I am me because I have essential characteristics. Instead we contrast ourselves with others; we factor in physical appearance, ethnicity, gender, personality, and so on. We then add other factors like level of income and education, personal tastes, and so on. Clearly none of these characteristics are essential.
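The contrast can be put in standard first-order notation. The formalization below is my own gloss, not one fixed by the differential-ontology literature; the second formula is simply a discernibility principle read as a rule for identifying things by their differences.

```latex
% Law of identity: everything is self-identical.
\forall x \, (x = x)

% A gloss on the proposed law of distinction: a thing is picked out not by
% essential properties, but by some respect in which it differs from each
% of the other things it stands in relation to.
\forall x \, \forall y \, \big( x \neq y \rightarrow \exists F \, ( Fx \wedge \neg Fy ) \big)
```

On this reading, what does the identifying work is the difference-making predicate F (appearance, personality, income, and so on), not an essence.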

Ultimately, the law of identity is not unassailable and can be challenged by uprooting its essentialist assumption. One way of doing so is by positing differential ontology. One can, however, also do so by appealing to the limits of human consciousness. In other words, another traditional philosophical assumption (contra-pragmatism) is that there’s a deeper reality that goes beyond our everyday experience; perhaps quantum mechanics hints at this. Given this, we cannot draw ontological conclusions from our faculties alone. The four chairs and dining room table in my living room look distinct because my faculties see them as such. In reality, however, there’s nothing but atomic flux and empty space. This is in no way an attempt to undermine the usefulness of our faculties, but if there’s a deeper layer to reality that we cannot capture, then there’s no way we can argue for essential properties. Furthermore, we wouldn’t be able to argue from difference either. We would, in other words, have to assume the accuracy of our faculties in order to argue for a law of identity.

# Interpretations of Nietzsche’s Doctrine of Eternal Return

By R.N. Carmona

In approaching the doctrine of eternal return, one will find that there are three ways to interpret it. There is the cosmological interpretation (CI). There are also the ethical (EI) and existential interpretations (EXI). After an extended discussion of these interpretations, I will demonstrate that EXI is the most plausible, especially when considering Nietzsche’s philosophy as fully as possible. In other words, if one can agree that it is possible to, at the very least, attempt to consider Nietzsche’s philosophy as a whole, then one can also agree that of these interpretations, EXI is consistent with Nietzsche’s philosophy or perhaps more forcefully, EXI is what allows for there to be any talk of a consistent Nietzschean philosophy. To my mind, the more forceful point is tenable and I will endeavor to demonstrate it. To accomplish this, it is necessary to show that CI and EI are not consistent with Nietzsche’s philosophy; moreover, it must be shown that neither of these interpretations can make his philosophy consistent.

Prior to discussing the interpretations, it will be useful to consider aphorism 341 of Nietzsche’s The Gay Science:

What if some day or night a demon were to steal after you into your loneliest loneliness and say to you: “This life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything unutterably small or great in your life will have to return to you, all in the same succession and sequence—even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, speck of dust!”

Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: “You are a god and never have I heard anything more divine.” If this thought gained possession of you, it would change you as you are or perhaps crush you. The question in each and every thing, “Do you desire this once more and innumerable times more?” would lie upon your actions as the greatest weight. Or how well disposed would you have to become to yourself and to life to crave nothing more fervently than this ultimate eternal confirmation and seal? (Nietzsche, 273)

Given this aphorism, it would seem that every interpretation is prima facie plausible. It would also seem that CI and EI are more tenable, since they’re made explicit in the passage. Prior to seeing that more clearly, it is imperative to explain what is entailed by CI and EI. It is time now to flesh out the three interpretations of eternal return.

The cosmological interpretation (CI) tells us that there is a finite set of ways in which matter can organize itself. It also states that determinism is true and that the universe is eternal. Given these premises, all events eternally recur and matter will repeatedly organize itself in a finite number of ways. On the latter, the implication is that every person that has ever existed will exist again and since determinism is true, they will live precisely the same life they lived the previous time. This is perfectly in keeping with aphorism 341, since the demon states that you will return once the eternal hourglass is turned upside down over and over again; even the spider and the moonlight between the trees will recur in precisely the same succession.

The ethical interpretation (EI) sidesteps the metaphysical commitments of the premises of CI and seems to prescribe to us an ethical principle with an unusual Kantian flavor. This is part of the reason it’s untenable once one considers Nietzsche’s philosophy as a whole. Nietzsche disagrees with Kant in a number of places, as will be shown momentarily, so it would be curious if he employed the doctrine of eternal return to prescribe an ethical principle that sounds like a paraphrase of Kant’s categorical imperative: “will only those actions which you wish to recur for all of eternity.” This reading appears plausible given that the doctrine bears upon one’s actions as the greatest weight.

The existential interpretation (EXI) also circumvents the metaphysical commitments of CI and, rather than prescribe an ethical principle as EI appears to do, it implores the great individual to live the sort of life they would approve of living an infinite number of times over. It is, as Ronald Dworkin offered, an adverbial rather than an adjectival life, a life comprised of the total performance rather than what remains when the performance is subtracted. As the demon states at the end of the passage: “Or how well disposed would you have to become to yourself and to life to crave nothing more fervently than this ultimate confirmation and seal?” The life of the great individual, the Übermensch, Nietzsche believes, will crave an ultimate confirmation and seal, the ultimate acknowledgement of the life s/he led. In order to see why EXI succeeds where CI and EI fail, it is imperative to capture Nietzsche’s philosophy as fully as possible.

CI cannot counter the fact that Nietzsche doesn’t apply determinism to great people. When speaking of the equivalence of greatness and a lack of compassion, Nietzsche states that this experience is “a parable for the whole effect of great human beings on others and on their age; precisely with what is best in them, with what only they can do, they destroy many who are weak, unsure, still in the process of becoming” (Nietzsche, 101). The key is in the phrase “with what only they can do,” which would seem to attribute free will solely to great people. When coupled with Nietzsche’s analysis of herd instincts (see pp.174-175) along with the herd’s attribution of free will to bad conscience, then it would seem that Nietzsche is arguing that only great people can act out of their own volition.

To further establish the notion that only great people can exhibit free will, we can consider Nietzsche’s concept of self-creation. Nietzsche speaks of giving style to one’s character. He also implies that the great individual has ultimate self-knowledge, to such an extent that s/he is fully cognizant of their strengths and weaknesses (Nietzsche, 232). This knowledge enables them to build a unified character, one that affirms the good and the bad that exists within them. This self-creation, giving style to oneself, is not possible without free will, without the capacity to tear the head off the snake — a snake that can be seen as the determinism inherent in the herd instinct.

In addition to this, Nietzsche strongly disagreed with a mechanical view of the world. He states that “an essentially mechanical world would be an essentially meaningless world” (Nietzsche, 335). He refers to such thinking as a degradation of existence and asks us to consider whether music can be reduced to calculations and formulas. He calls the scientific view of the world “stupid,” and yet a scientific view implies a deterministic one, “‘a world of truth’ that can be mastered completely and forever with the aid of our square little reason” (Ibid.). Though there are other instances in which Nietzsche appears to undertake the metaphysical commitments of CI—in particular in his discussions of history and the herd instincts inherent in morality—a full consideration of his overall philosophy disabuses one of the error of thinking he’s confined himself to such commitments. Given this, CI is untenable, and a fuller exploration of Nietzsche’s views of individuals will only further establish this.

Though EI circumvents the metaphysical commitments of CI, EI is an untenable interpretation as well. As mentioned above, Nietzsche disagreed with Kant explicitly. In his Critique of Practical Reason, Kant states that his categorical imperative “determines quite precisely what is to be done to solve a problem and does not let him miss.” Given Nietzsche’s opposition to a mechanical view of the world, one can speculate that he would be staunchly opposed to Kant’s claim. More explicitly, however, Nietzsche inverts the very values Kantian ethics rests upon. In aphorism 4, for example, he says that “the evil instincts are expedient, species-preserving, and indispensable to as high a degree as the good ones; their function is merely different” (Nietzsche, 79). In aphorism 5, Nietzsche seems to indict Kant’s categorical imperative as a quasi-religious alternative. He speaks of “talk of ‘duties,’ and actually always of duties that are supposed to be unconditional” (Nietzsche, 80). He adds that “they would lack the justification for their great pathos” in the absence of such talk and that they therefore “reach for moral philosophies that preach some categorical imperative” or “ingest a good piece of religion” (Ibid.). Given this, his lengthier disagreements with Kant in Beyond Good and Evil, and Kant’s mechanistic view of his own categorical imperative, EI must be wrong, since it suggests that Nietzsche is proposing an ethical principle based on unconditional duty, one that would justify our great pathos. This would no doubt run counter to Nietzsche’s overall project of revaluing values that had till his time been based upon herd instincts.

Given what’s been surveyed above, it is clear that CI and EI do not allow Nietzsche’s philosophy to be consistent. In fact, both interpretations lead to glaring inconsistencies. Though it may be argued that Nietzsche was not a hard determinist, and thus that a modification of the premise “determinism is true” is in order, there is still no way of demonstrating that he committed himself to the other premises. Given his discussion of causality (see pp. 172-173), for example, it can be argued that he believes in the sort of infinities that would cancel out the notion of a finite number of states in which matter can organize itself. EI leads to still other inconsistencies, as we’ve seen. Perhaps the most damning point to be made is that Nietzsche’s thesis involved an inversion of Christian values and an admonition for us to see evil as vital to the preservation of our species. Far from allowing his philosophy to remain consistent, EI would make it obviously inconsistent.

EXI, to the contrary, succeeds at unifying the threads of Nietzschean philosophy. His view of individuals, especially great individuals, his revaluation of values, and his belief in a dynamic rather than mechanistic world are all encompassed in EXI. His doctrine of eternal return is therefore telling us to live the kind of life we would approve of living over and over again for all of eternity, for permitting this revelation to possess our thoughts, and thereby to bear upon our actions, is the equivalent of living a great life once. This not only encompasses Nietzsche’s ideas of self-creation and greatness, but it also anticipates the Übermensch, the overman, the human ideal who prevails against the herd instinct and fully succeeds at creating new values both for himself and for others. The doctrine of eternal return connects his later projects, and perhaps this is why he assigned such great importance to this idea.

Of the possible challenges EXI faces, I will deal with two. One challenge I’ll call the nihilistic challenge (NC) and the other I’ll call the inconsistency challenge (IC). On NC, one can argue that Nietzsche simply didn’t care about the life you choose to live. He did suggest that the doctrine of eternal return may crush you, and perhaps this will be the common reaction to the demon’s revelation. On this basis, we would surrender our commitment to life and give up notions of meaning and purpose; we would behave as though nothing matters. On IC, one can argue that EXI leads to an inconsistency in thought. In other words, self-creation and the Übermensch are null concepts when considering that Nietzsche is prescribing to them an existential principle. Both challenges fail for the following reasons.

NC fails because it gives more weight to the suggestion that the doctrine of eternal return will crush people than to other suggestions, in particular the suggestion of desiring an eternally recurring life and receiving it as an ultimate confirmation. The latter suggestion encompasses EXI, but even if it didn’t, the suggestion that the doctrine would crush people to the point of nihilism ignores the human penchant for talk of meaning and purpose, and the ensuing search for them. As we saw earlier, even Nietzsche was not immune to talk of meaning; in fact, meaning is arguably the primary reason he was opposed to a mechanistic view of the world. IC, on the other hand, fails to present an adequate challenge because the proponent of IC would have to assume that Nietzsche didn’t think we can influence one another. To the contrary, he speaks of the sort of intoxication that leads to the breaking of limbs along false paths (see p. 101). To his mind, great human beings offer a drink that is too potent for those who are weak. This implies that the drink isn’t too strong for those who are ready to receive it. Nietzsche is borrowing Christian imagery and saying something akin to milk is for babes and meat is for the strong (1 Corinthians 3:2); therefore, the great can influence the great. If this holds, then there’s nothing inconsistent about EXI.

In light of what has been briefly surveyed here, EXI is not only more tenable than CI and EI, but it also succeeds where they fail with regard to either being consistent with Nietzsche’s philosophy in toto or allowing for any talk of a consistent Nietzschean philosophy. EXI is harmed by neither NC nor IC. More importantly, it is the connective thread of Nietzsche’s works, starting with The Gay Science and ending with On the Genealogy of Morals. Perhaps a more elaborate discussion is needed, in particular one that employs Nietzsche’s insights in the works mentioned above in addition to insights found in Thus Spoke Zarathustra and Beyond Good and Evil. At any rate, if Nietzsche’s philosophy is to retain its consistency, it is necessary for EXI to remain tenable across these four works. What’s been established here is that it is tenable within the purview of The Gay Science.

# A Brief Introduction to The Philosophy of Time

By R.N. Carmona

For purposes of my forthcoming argument, consider what follows a primer for it. §I will be brief, which isn’t to say it won’t be exhaustive. In it, I will render a summary of competing theories of time and discuss matters germane to my argument. I will summarize the A-, B-, and R-theories of time. I will place emphasis on a version of the B-theory that will feature throughout my argument, namely Mellor’s token-reflexive theory. I will do so by reviewing a pivotal discussion in the philosophy of time between Quentin Smith and L. Nathan Oaklander. This discussion will make clear Oaklander’s possible motivations for embracing a B/R-theory of time, a version of which I will defend in §II. I am, however, more concerned with our perception of time and its passing, i.e., the phenomenology of time. Given that, the temporal parts theory of identity will feature in my argument, but with an important difference. I will briefly discuss this theory below. In §II, I will then address possible objections to my argument and briefly sketch its implications if true. I will thus briefly discuss mathematical nominalism and a matter of concern in the philosophy of science.

If you feel sufficiently comfortable with the jargon of this discussion, feel free to skip §I altogether. Since I want laymen and, in particular, people new to this discussion to understand my argument, it is necessary to survey the discussion thus far. Mellor and others have expressed dismay when confronted by the existing literature in the philosophy of time, so any survey has to have a limited scope, since it isn’t practical to attempt a survey of all of the existing literature. I will therefore forgo discussing J.J.C. Smart’s date theory of time and Michelle Beer’s co-reporting thesis. I will limit my introduction to the parts forming the whole, namely my argument.

## I

A-Theory of Time

On the A-theory, there are actual properties of being five days past, becoming present, being present, and being in the near or far future. It follows that my birth becomes more and more past with each passing day. My inevitable death becomes nearer and nearer to the present. What is present will eventually become past and then recede further into the past. This is known as the a-series. Contrary to McTaggart, the A-theorist denies that the a-series is contradictory. Moreover, they believe that the a-series cannot be reduced to b-relations: notions of earlier, later, and simultaneous. B-relations are comprised of b-moments that have different b-times, such that a b-moment with b-time₁ occurs earlier than a b-moment with b-time₂. So if we take, for instance, Friedrich Nietzsche’s death on August 25, 1900, we now have a b-moment. If we then take his birth on October 15, 1844, we have another b-moment. The dates of both represent distinct b-times. The b-relation is twofold, either in his birth being earlier than his death or his death being later than his birth. A-theorists hold that b-times come into existence, i.e., when Nietzsche’s death was future, it had no b-time.

On the A-theory, what’s irreducible and essential is the a-time. Events actually do move from future to past once arriving at the present. Or, events have b-times, but the present moves, which gives these b-times real a-times. A-times have what I’ll call an a-relation, which is to say how much earlier or later an a-moment is than the present. If Jane James played volleyball yesterday, is painting right now, and is going to swim tomorrow, then a-moment₁ is her having played volleyball yesterday, which occurred earlier than her painting right now, and a-moment₂ is her going to swim tomorrow, which will occur later than her painting at present. Mellor described this as an a-scale, reserving ‘a-series’ for the sequence of events located at those times. He adds that there is more to the a-scale than the sequence of a-times. There is a measure reflecting the speed at which one a-time succeeds another.1 Next Monday, for example, succeeds last Monday by seven days, and the following Monday will succeed that one by precisely the same number of days. These measures also reflect how long such events are present. Therefore Nietzsche, who lived for ~56 years, occupies 56 years in the past of the a-scale. On the a-scale, entities cannot be located at any single moment, since they occupy an interval on the scale.

R-Theory of Time

On the R-theory, temporal relations are unanalyzable, which means that they cannot be reduced to properties of their terms, nor to the terms of temporal relations such as pastness, presentness, and futurity. Relations are hence the only category of temporal entities. There are no moments or points in time as with both the A- and B-theories. There is no absolute becoming, i.e., the coming into or going out of existence of events. Monadic A-properties are not ascribed to these events. On the R-theory, time is relational and what’s entailed are durations, which, similar to the B-theory, consist in dyadic terms. Instead of dyadic terms like earlier than and later than, however, these relations are lasts as long as, lasts longer than, or lasts shorter than.

In explaining Russell’s analysis of the order in relational facts, Oaklander remarks:

These relations hold between relata and facts. Since all series have a direction, Russell differs from McTaggart in both his account of the transitory aspect of time and its direction. For McTaggart what gives time a direction and its transitory character are changing A-characteristics. A Russellian will ground the transitory or dynamic aspect of time in the relation “is earlier than,” and the direction of a whole series is aggregated from the order relations for all the relational facts contained in it.2

There is, in other words, a difference between a and b as related by R and b and a as related by R. If it’s an asymmetrical relation, then only one of these facts holds. If there are such facts as that a R b, i.e., a has an R-relation to b, then such facts can be reduced to a fact about either a or b. This neither makes them innately complex nor entails that they have some properties to distinguish them from c or d. This is what a Russellian means when saying that relations are external. To supplement, an example is necessary. An R-relation is simply lasts as long as, lasts longer than, or lasts shorter than. So let a be Mary’s wedding and b be Martha’s funeral. If our a and b have some relation R, and the relation is asymmetrical, which is to say that they didn’t have the same duration, then the fact of their relation is reducible to a fact about either event. Therefore, if Mary’s wedding lasts longer than Martha’s funeral, then the fact of their R-relation is simply reducible to the fact that Mary’s wedding lasted longer. The same would hold if the opposite were true. An R-relation is irreducible iff the relation is symmetric.
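The formal profiles of the duration relations just named can be sketched as follows; the symbols S and L are my own shorthand, not Russell’s or Oaklander’s notation.

```latex
% 'lasts as long as' (S) is symmetric: if a lasts as long as b,
% then b lasts as long as a.
\forall a \, \forall b \, (aSb \rightarrow bSa)

% 'lasts longer than' (L) is asymmetric: if a lasts longer than b,
% then b does not last longer than a.
\forall a \, \forall b \, (aLb \rightarrow \neg \, bLa)
```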

Unlike on the B-theory, the direction of time does not depend on causation or entropy, which are complex relations that are perhaps derivative of the simpler R-relations. As far as what we perceive, we don’t always perceive causes and their effects directly, and moreover, we don’t perceive entropic relations, i.e., x has less entropy than y. If we are to perceive such relations, we do so only in an extended sense, which is to say that we only do so via the apparatus used in the sciences. It is not an innate perception like the passage of time. On the Russellian theory, the direction of time is contingent on such simpler R-relations.

B-Theory of Time

On the B-theory of time, there are only b-times, which are described in terms of an a-scale. No events are in themselves past, present, or future and though the a-scale seems indispensable, the utility of the a-scale and our perception of the past and the future aren’t reflective of reality. Also, time doesn’t really flow. According to Prior, the passage of time is simply a metaphor.3 If time actually flows, does this flow occur in time or does it take time to occur? If it is the case that time passes, is there not what Prior dubbed a ‘super-time’ in which it does so? If indeed it does flow, then it flows at some rate, but a rate is a movement through time, so how can rate apply to time itself? So if time doesn’t flow at some rate, how can it be said that it flows at all?

The answer to these questions lies in the manner in which we speak about time. What causes the confusion is the way we employ tense-adverbs. This remains a central point of disagreement between A- and B-theorists. A-theorists take tense seriously whilst B-theorists offer tenseless theories of time, which isn’t to say they don’t take tense seriously, but rather that, regardless of our experience, time is not required to be tensed. Since I’ll be defending a B/R-theory of time, I will not belabor this summary. This entire argument will, to some degree, cover the history of the B-theory and also the place it occupies in today’s discussion. Since questions of tense feature heavily in today’s discussion, it is time now to discuss that at length.

Time and Tense

To simplify matters, proponents of tensed theories of time will from here on be called ‘tensers’ and proponents of tenseless theories of time will be called ‘detensers’. One of the earliest approaches to a tensed theory of time is credited to A.N. Prior, who makes use of first-order logic in laying out his theory. W.V.O. Quine also made use of first-order logic, but assumed, like others before him, that physicists accept the tenseless theory of time. Tensers sometimes argue that this assumption is at the center of any tenseless theory, though in the modern day, physicists following Minkowski have been supplanted by physicists following Einstein, i.e., tenseless theory is assumed to be true because of special relativity. Both Prior’s early tensed theory and Quine’s assumption are problematic. Since I am offering a B/R-theory of time, I will not spend time on the challenges facing Prior’s tensed theory. I will, however, briefly discuss the difficulties Quine’s assumption faces so that it’s clear why it is necessary to pass from the old to the new tenseless theories of time, which depend on the New Theory of Reference in the philosophy of language. Before doing so, it is necessary to review Prior’s theory and Quine’s assumption, because that will serve to make the new tenseless theory easier to comprehend. To this we now turn.

Prior, following Augustine, stated that the past is the past present and the future is the future present. In applying and reapplying tense-adverbs, individual facts, i.e., facts about a thing, are preserved. We will return to the distinction between individual and general facts in my discussion on change in §II, but the distinction for our current purposes will make clear that the following sentences are about real things. So, to return to one of our previous examples, rather than saying I was born x years ago, we could say: It was the case two months ago that (it was the case only x years ago that (I am being born)). In the same vein, we also have the tendency to obscure what our sentences are actually about, as when we talk about one’s falling off the wagon or one’s homecoming. For instance:

(1) It is now four years since it was the case that I am falling off the wagon.

This could be paraphrased with:

(2) My falling off the wagon has receded four years into the past.

The suggestion here is that this event, dubbed one’s falling off the wagon, has gone through the motion of becoming further and further past. It has gone through this motion since it ended. (2), however, is just a paraphrase of (1), and (2) is not about this event of one’s falling off the wagon, but rather a complex way of speaking about changes in the individual who fell off the wagon. What look like changes in events, then, are actually changes in things.
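Prior’s nested constructions have a standard rendering in his tense logic, where P is read ‘it was the case that’ and a metric subscript fixes how long ago. The particular formulas below are my sketches of sentence (1) and of the earlier birth example, not Prior’s own notation.

```latex
% P = 'it was the case that'; P_{n} = 'it was the case n units ago that'.
% Sentence (1):
P_{4\,\mathrm{yrs}}(\text{I am falling off the wagon})

% The earlier birth example, with the nesting made explicit:
P_{2\,\mathrm{mos}}\big(P_{x\,\mathrm{yrs}}(\text{I am being born})\big)
```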

Prior had to deal with a possible objection, however. Borrowing McTaggart’s example of Queen Anne’s death, it appeared as though Prior’s sentence structure couldn’t apply to that example. If sentences like the aforementioned are facts about individual things, then how can such a sentence be about Queen Anne if she’s dead? The statement, rather than expressing an individual fact, is instead expressing a general fact. Even when, for example, we are mistaken in thinking that someone stole one of our possessions, whether someone stole it or not, the general fact can be expressed by stating: I think that (for some specific y (y stole my lunch)). On such a formulation, Queen Anne’s death isn’t a fact about her, but rather a general fact. Her death becoming more past is not a change in her, but it does express what Prior called a ‘quasi-change’ and “what is common to the flow of a literal river on the one hand…and the flow of time on the other.”4

Quine, on the other hand, in attempting to preserve the tenseless extensional symbolism of sentential and predicate logic, defended his view by appealing to the fact that physicists accept the tenseless theory of time. Concerning natural language, Quine thought that it could be paraphrased by a tenseless language that denotes dates for tenses and substitutes singular phrases. For instance, “James was running,” if uttered at noon on September 15, 2003, becomes “James runs before noon on September 15, 2003.” As mentioned above, Quine’s attempt to de-tense natural language has been met with a number of challenges and thus runs into difficulties. His view rests not only on a misapprehension of physics but also on questionable assumptions concerning the philosophy of language.5 Old tenseless theories of time sometimes relied on Quine’s assumption, but more importantly, they relied on the notion of de-tensing natural language.

To see the difference between old and new tenseless theories, it is necessary to contrast an old tenseless theory against a tensed theory which holds that the properties of pastness, presentness, and futurity of events are ascribed by tensed sentences. The debate regarding which theory is true centered around whether tensed sentences could be translated by tenseless sentences that instead ascribe relations of earlier than, later than, or simultaneous with. For example, “the sun will soon rise” seems to entail the sun’s rising in the future, as an event that will become present, whereas “the sun is rising now” seems to entail the event being present, and “the sun has risen” seems to entail the event having receded into the past. If these sentences are true, the first sentence ascribes futurity whilst the second ascribes presentness and the last ascribes pastness. Even if true, however, that isn’t evidence to suggest that events have such properties. Tensed sentences may have tenseless counterparts having the same meaning.

This is where Quine’s notion of de-tensing natural language comes in. Rather than saying “the sun is rising” as uttered on some date, we would instead say that “the sun is rising” on that date. The present tense in the first sentence doesn’t ascribe presentness to the sun’s rising, but instead refers to the date the sentence is spoken. In like manner, if “the sun has risen” as uttered on some date is translated into “the sun has risen” on a given date, then the former sentence does not ascribe pastness to the sun’s rising but only refers to the sun’s rising as having occurred earlier than the date when the sentence is spoken. If these translations are true, temporal becoming is unreal and reality is comprised of the relations earlier than, later than, and simultaneous with. Time then consists of these relations rather than the properties of pastness, presentness, and futurity.6

Due to advancements in the philosophy of language, however, old tenseless theories have been abandoned. Prior to discussing new tenseless theories, it is necessary to see why the old theories fell out of style. The New Theory of Reference, first developed by Ruth Barcan Marcus in “Modalities and Intensional Languages” (1961) and further developed by Donnellan, Kaplan, Putnam, and others, was the reason tensed and tenseless sentences were reconsidered. David Kaplan, in applying the theory to indexicals like ‘now,’ argued that the rule of use of the indexical ‘now’ is that it strictly concerns the time at which it is spoken and does not ascribe a property of presentness. Thus, the ‘now’ in “the sun is now rising” strictly refers to the date on which the sentence is said. Furthermore, the sentence has no tenseless translation. Kaplan held that translations must meet two requirements: identical meaning and identical semantic content. Meaning, in this sense, speaks of a sentence’s rule of use, so in “the sun is now rising,” the rule of use concerns the date on which it is spoken. If, however, one tacks on “at 7 a.m. on October 4, 2001,” then the sentence instead concerns 7 a.m. on October 4, 2001. Therefore, the former cannot translate into the latter and vice versa because the two sentences have different rules of use. It follows that the tokens of either sentence do not translate into the tokens of the other. I will reserve the type-token distinction for my summary of D.H. Mellor’s tenseless theory. The central idea at the core of the new tenseless theory is “that tensed sentences (as uttered on some occasion) are untranslatable by tenseless sentences, but that it is nonetheless the case that tensed sentences ascribe no temporal determinations not ascribed by tenseless sentences.”7

New Tenseless Theories of Time

As far as new tenseless theories are concerned, there are two progenitor theories: Mellor’s token-reflexive theory and Smart’s date theory. Both have been challenged frequently by Quentin Smith and defended just as frequently by L. Nathan Oaklander. Since Mellor’s token-reflexive theory is agreed to be the most developed and serves as the common ancestor of modern tenseless theories, I will trace Smith and Oaklander’s extensive exchanges in order to provide a more thorough understanding of the new tenseless theory. I will then make explicit which theory I am defending and expounding on. I will make clear that Mellor’s intuitions would have been best served had he placed more emphasis on our experience of time, which is precisely the crux of the issue. These concerns will compose the content of the next section, but for now, let us explore an important exchange between Smith and Oaklander through the lens of Mellor’s token-reflexive theory.

The force of Smith’s challenges and Oaklander’s defenses will neither be understood nor felt unless Mellor’s theory is adequately summarized. A pivotal distinction serving as the core of Mellor’s theory is the type-token distinction. The type-token distinction can present difficulties when applied, but the distinction itself isn’t hard to understand. Take, for example, the word “institution.” If asked how many letters are in that word, one may give two answers: eleven or six. The first answer is arrived at if one counts every occurrence of a letter, whilst the second is arrived at if one counts each distinct letter once, i.e., if one forgoes counting a letter that is repeated. That is to say that one notes that there are only the letters i, n, s, t, u, and o. In the former reply, one is answering on the basis of tokens whereas in the latter, one is answering on the basis of types. Tokens refer to particular things and therefore, when we talk of particular facts, we are in turn talking about tokens. Types, on the other hand, refer to abstract objects, e.g., the letter B or the number 2. Mellor employs the distinction in order to find B-truthmakers for A-propositions. To see why this is necessary, I will sketch out the problem Mellor faces.
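The counting illustration above can be made concrete in a few lines of code. The following sketch is merely an informal aid of my own, not part of Mellor’s apparatus: counting every occurrence of a letter counts tokens, whilst counting each distinct letter counts types.

```python
word = "institution"

# Letter tokens: each occurrence counts separately.
tokens = list(word)

# Letter types: each distinct letter counts once.
types = set(word)

print(len(tokens))  # 11 tokens
print(sorted(types))  # the 6 types: i, n, o, s, t, u
```

The eleven tokens are particulars; the six types are the abstract letters they instantiate.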

A-propositions are multifarious, but for simplicity, we will use the example Mellor uses: Jim races tomorrow.8 So if today is June 1, its B-truthmaker is Jim racing on a day occurring after June 1. B-truthmakers, unlike A-truthmakers, are limited to B-facts. The difficulty Mellor faced is that the B-fact “Jim races on June 2” is, like all B-facts on his view, always a fact. How, then, can B-facts make propositions true at some times and false at others? This problem arises because B-theorists acknowledge that A-propositions are sometimes true. B-facts, however, are unlike A-facts in that they don’t come and go, so one B-fact can’t make “Jim races tomorrow” true at some times and false at others. It follows that multiple B-facts are needed; in fact, as many B-facts as there are times at which an A-proposition can have a varying truth value. It therefore takes a new B-fact to make our statement true or false on every day of Jim’s life.
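The problem can be restated with a toy model. In the sketch below, which is my own illustration with an invented calendar year, a single unchanging B-fact (Jim racing on June 2) makes the A-proposition “Jim races tomorrow” true only relative to utterances on June 1; any other day of utterance requires a different B-fact if the proposition is to be true then.

```python
from datetime import date, timedelta

# A tenseless B-fact: Jim races on this fixed date. The fact does not
# come and go; it holds no matter when it is evaluated.
RACE_DAY = date(2001, 6, 2)  # hypothetical year, for illustration only

def jim_races_tomorrow(utterance_day: date) -> bool:
    """Truth value of the A-proposition 'Jim races tomorrow' as
    uttered on utterance_day, given the B-fact RACE_DAY."""
    return RACE_DAY == utterance_day + timedelta(days=1)

print(jim_races_tomorrow(date(2001, 6, 1)))  # True: June 2 is tomorrow
print(jim_races_tomorrow(date(2001, 6, 2)))  # False: the race is today
```

One unchanging fact thus yields a proposition whose truth value varies with the day of utterance, which is why Mellor needs a distinct B-fact for each day on which the proposition is true.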

The exchange between Smith and Oaklander centers on whether Mellor’s theory is self-contradictory and whether it can be reduced to the old tenseless theory of time. More generally, it centers on whether the new tenseless theory was to be abandoned or radically revised. The disagreement between them was not resolved, but Smith did find value in what he considered an alternative theory offered by Oaklander. Despite accusing one another of misunderstanding the new tenseless theory, it is clear that they both adequately understood it. What is more manifest are their opposing frameworks and convictions. Ultimately, the exchange may have played a role, whether large or small, in Oaklander espousing a B/R-theory of time, which isn’t to say he abandoned the tenseless theory entirely, but that he eventually came to realize that, as it stood, it was incomplete. To summarize their exchange, I will start with Smith’s criticisms of Mellor’s theory.

Smith vs. Oaklander

Smith held that Mellor’s theory wasn’t the final word in this discussion. He also held the theory to be self-contradictory. Smith explains:

Mellor inconsistently holds all five of these positions: (1) tensed sentences have different truth conditions from tenseless sentences and thus are untranslatable by them; (2) tensed sentences have tenseless truth conditions, namely, tenseless facts; (3) these tenseless facts are the only facts needed to make tensed sentences true; (4) tensed sentences state the facts that are their truth conditions; and (5) tensed sentences state the same facts that are stated by the tenseless sentences that state the former sentences’ truth conditions.9

Smith held that (1) and (5) were incompatible. Since he wanted to develop an internal critique of Mellor’s theory, he assumed “fact” as defined and used by Mellor. Smith thought that Mellor’s definition committed him to three theses that implied the principle of the identity of truth conditions (hereon PITC), which Smith formulates as follows:

If two tokens of the same sentence or two tokens of different sentences state the same fact, F1, they have the same truth conditions; that is, are true iff F1 and every fact implied by F1 exist.10

The theses are as follows: (a) facts only correspond to true sentence tokens or, in Mellor’s words, “we are only concerned with sentences expressing judgments, that is, stating what people take to be facts”;11 (b) conditions that are both necessary and sufficient to make sentences true are facts, i.e., truth conditions; (c) in keeping with Mellor’s “Jim races tomorrow” example, a token that states some fact is true iff the fact and its implications exist; one fact can imply another iff the one cannot exist unless the other does. For instance, it cannot be the fact that Jim races tomorrow unless there also exists the fact that Jim doesn’t go to the ER because he feels ill. In other words, Jim has to be physically able to race if he is to race tomorrow.

Smith held that Mellor’s theory contradicts PITC and its own assumptions; more specifically, that the combination of (1) and (5) contradicts PITC. To make this contradiction clear, Smith focused on Mellor’s primary example of tensed and tenseless sentences:

Let R be any token of “Cambridge is here” and S be any token of “It is now 1980.” Then R is true if and only if it occurs in Cambridge, and S is true if and only if it occurs in 1980. If a sentence giving another’s truth conditions means what it does, R should mean the same as “R occurs in Cambridge” and S should mean the same as “S occurs in 1980.” But these sentences have different truth conditions. In particular, if true at all, they are true everywhere and at all times. If R does occur in Cambridge, that is a fact all over the world, and if S occurs in 1980, that is a fact at all times. You need not be in Cambridge in 1980 to meet true tokens of “R occurs in Cambridge” and “S occurs in 1980.” But you do need to be in Cambridge in 1980 to meet the true tokens, R and S; for only there and then can R and S themselves be true.12

Smith then notes four items about the sentences “It is now 1980” and “S occurs in 1980.” He considers these items to be derivatives of Mellor’s five items mentioned above. The items are as follows: (1) A token of “S occurs in 1980” states only the fact that S occurs in 1980; (2) This fact is the truth condition of any token S of “It is now 1980”; (3) this tenseless fact is the sole fact stated by S, which is a token of a tensed sentence expressing belief in token-reflexive truth conditions; (4) Any token of “S occurs in 1980” has a different truth condition from any token of “It is now 1980” because any token of the latter is true only if it occurs in 1980 and the former is true at all times it is tokened. According to Smith, the combination of (1), (2), and (3) contradicts (4) because, according to PITC, two tokens of different sentences which state the same fact have the same truth conditions.

There is, however, a way out if one were to reject (4). Given two tokens S and U, if they state the same fact, they are both made true by that same fact. According to Smith, Mellor fails to see the truth-conditional resemblance S and U share. In other words, S’s occurring in 1980 is necessary and sufficient for S to be true, but it is neither necessary nor sufficient for U to be true. The difference lies not in their truth conditions but in what the facts are about. The fact about token S is a fact about it and not about U and thus restricts S to 1980. This does not constitute a further fact about S’s truth conditions. This resolution, according to Smith, reduces Mellor’s new tenseless theory to the old tenseless theory. On this, Oaklander takes Smith to be both mistaken and not fully understanding of Mellor’s theory.

This misunderstanding is due to the fact that Mellor is not always clear, which might provide a basis for attributing internal inconsistency to his theory. Oaklander argues that a clearer distinction between sentence types and tokens is required to circumvent the contradiction Smith seemingly uncovers. To recap, tokens refer to particular things and therefore, when we talk of particular facts, we are in turn talking about tokens. Types, on the other hand, refer to abstract objects, e.g., the letter B or the number 2. If S, “It is now 1980,” is understood as a type, then it doesn’t have truth conditions. So, by extension, tensed sentence types have no truth-value. If, on the other hand, we consider “It is now 1980” and “S occurs in 1980” as tokens, then their truth conditions are identical. According to Oaklander, Smith “interprets Mellor to be saying that ‘any token of “S occurs in 1980” has different truth conditions from any token S of “It is now 1980,” because S is true iff it occurs in 1980 and “S occurs in 1980,” if true at all, is true “at all times” it is tokened.’”13 Though it is true that tenseless sentence types are either true or false at all times they are tokened, the tokens themselves are not tokened at different times. Mellor argued that tensed tokens have unqualified truth-values. For example, a statement or inscription of ‘E is past’ is false given that it occurs before E. Therefore, tensed sentence tokens do not have different truth conditions from the tenseless ones. According to Oaklander, Smith could only arrive at his conclusion by confusing tokens with types.

Oaklander further argues that the token-reflexive theory can avoid Smith’s objections if it is modified. It is not inconsistent to hold that tensed and tenseless sentence types have tokens with different truth conditions whilst holding that tensed and tenseless sentence tokens have identical truth conditions. Oaklander does not consider this a reduction of Mellor’s new tenseless theory to the old one because, for Mellor, having identical truth conditions is a necessary but not a sufficient condition for the translation of tensed sentences. As a consequence, even if tensed and tenseless sentence tokens have identical truth conditions, it doesn’t follow that tensed sentence tokens can be translated by tenseless sentence tokens. Moreover, contra Smith, this isn’t Mellor’s only reason for rejecting translatability. He, for instance, makes the following claim, which is in keeping with the aforementioned New Theory of Reference: two sentences have the same meaning iff they have the same use. Recall that, according to Kaplan, meaning refers to a sentence’s rule of use. According to Mellor, since tensed sentence tokens have meanings that differ from those of tenseless sentence tokens, the former cannot be translated by the latter.

Oaklander does acknowledge at least the appearance of a difficulty raised by Smith. Specifically, when Smith speaks of one sentence logically entailing another, the inference should be justifiable by truth conditions. One should be able to demonstrate that what makes the first true also makes the other true. If this cannot be done, the truth conditions for one’s sentences are incorrect or one is mistaken about such entailments. Oaklander concedes that Mellor does not consider this objection, but suggests that Mellor has a way to circumvent Smith’s contentions. Mellor makes use of Kaplan’s demonstratives and indexicals and argues that this helps to account for the logical equivalence of “It is now 1980” and “1980 is present.”14 The meaning of “It is now 1980” and “1980 is present” concerns a rule from the time at which the tokens are uttered to their tenseless truth conditions. The truth conditions of their tokens will vary since their contexts of utterance vary; notwithstanding, in each case their truth conditions are tenseless. Therefore, any token of “It is now 1980” is true with respect to the time at which the token is spoken iff it is spoken in 1980. The same applies to “1980 is present.” So if the truth conditions of both are identical, there is no longer a difficulty in getting them to be logically equivalent.
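The token-reflexive idea can be modeled in a few lines. In the sketch below, an informal illustration of my own in which the class and field names are invented, a token’s truth depends only on the tenseless fact of when it is produced; that is why tokens of “It is now 1980” and of “1980 is present” uttered in the same year agree in truth value.

```python
from dataclasses import dataclass

@dataclass
class Token:
    sentence: str      # the sentence type this token instantiates
    year_uttered: int  # the (tenseless) context of utterance

def is_true(token: Token) -> bool:
    """Token-reflexive truth condition: a token of either tensed
    sentence is true iff it is produced in 1980."""
    return token.year_uttered == 1980

s = Token("It is now 1980", 1980)
v = Token("1980 is present", 1980)
u = Token("It is now 1980", 1991)

print(is_true(s), is_true(v), is_true(u))  # True True False
```

Note that nothing in the truth condition ascribes presentness; only the year of utterance, a tenseless fact about the token, does any work.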

Smith responds by alleging that Oaklander misunderstands Mellor’s theory and that the type-token distinction cannot be employed to show that Mellor’s theory is consistent. According to Smith, Oaklander’s misunderstanding lies in the fact that Mellor defines differences in the meaning and use of sentence tokens in terms of differences in their truth conditions. If logically contingent sentences, that is to say, sentences that are neither tautologous nor unsatisfiable, have identical truth conditions, then they also have identical meaning and use. Smith maintains that the contradiction is present in Mellor’s theory and that it can only be circumvented by altering the theory. He instead chooses to consider whether Oaklander’s revisions have any merit.

Smith accuses Oaklander of merely reproducing, in a different guise, the problem Smith pointed out. He argues that Oaklander equivocates upon “it” in asserting the logical sameness of “It is now 1980” and “1980 is present.” If one were to replace “it” by identifying the relevant tokens, the seeming sameness of their truth conditions disappears. He presents the truth conditions as follows:

Any token S of (1) is true with respect to the context of S’s utterance if and only if the year of S’s context of utterance is 1980.

Any token V of (2) is true with respect to the context of V’s utterance if and only if the year of V’s context of utterance is 1980.15

Recall that Oaklander states that sentence types do not have truth conditions; only tokens do. Smith then considers two tokens: S of “It is now 1980” and V of “1980 is present.” S and V are true with respect to their utterances iff the year of their context of utterance is 1980. This, according to Smith, brings us back to the problem he raised because their truth conditions lie in the facts that they each occur in 1980. These tenseless facts do not entail each other because it is possible for S to occur in 1980 whilst V does not occur in 1980, or at all. As a consequence, these facts aren’t enough to explain their logical equivalence. Smith asserts that their logical equivalence is explained by the tensed fact, “1980 is present,” because it belongs to both of their truth conditions. Given this, we should see that the tensed theory is more in keeping with facts about language and time than Mellor’s token-reflexive theory.

Oaklander responds by arguing that Smith’s objection is valid only if one presupposes a conception of analysis that is rejected by proponents of the new theory, though it was accepted by proponents of the old theory. Oaklander states:

To begin to see what is involved in this last point note that the early defenders of the tenseless view believed that a complete description or analysis of time could be symbolically represented in a non-indexical tenseless language. To give a complete description or analysis involves constructing a single language that performs two functions. First, in its “logical” function, this perspicuous or ideal language (IL) is a symbolic device for representing or transcribing the logic of sentences contained in ordinary language.16

IL has both a logical and an ontological function. The former is the manner in which IL represents the correct logical form that all sentences and entailments in a natural language can assume, e.g., well-formed formulas (wffs) in propositional and predicate logic. The latter represents the facts and the kinds of entities that exist. On the old tenseless theory, one could draw ontological conclusions from logical considerations. For instance, the logic of temporal discourse could be translated into tenseless language, and therefore time, ontologically speaking, is comprised of immutable temporal relations between terms that do not have A-properties.

Oaklander concedes that, given such assumptions, Smith’s objections to the old token-reflexive theory are strong. This, however, is not an argument against the new token-reflexive theory. In rejecting translatability as a means of determining the nature of time, those who espouse the new theory are rejecting the very basis of Smith’s argument. Detensers accept the A-theorist’s claim that tense is indispensable to temporal discourse, but deny that tense is indispensable to capturing the nature of time. The former is to say that tenseless sentences cannot replace tensed sentences without loss of meaning. In accepting this, any proponent of the new theory abandons an IL capable of adequately accounting for both logical and ontological considerations. The ontological function is hence kept distinct from the logical function. Logical connections, therefore, are not representative of ontological links between facts in reality. There need not be a connection between the facts of “It is now 1980” and “1980 is present” that provides a basis for their truth conditions. Contra Smith, tensed facts need not be employed in order to account for their logical equivalence. According to the new theory, two sentences with different meanings can correspond to the same fact or to different facts. “It is now 1980” and “1980 is present” can either correspond to the same fact or not. Because Smith fails to see this, his objections are not applicable to the new token-reflexive theory.

Though the discussion didn’t end there, Oaklander’s response adequately defends Mellor’s token-reflexive theory or, at the very least, offers a viable alternative. The distinction he draws regarding an IL, and the isolation of the ontological from the logical, allow detensers to reject the notion that logical equivalence provides a basis for the truth conditions of “It is now 1980” and “1980 is present.” Furthermore, they can reject the notion that tensed facts are necessary to explain logical equivalence. This is what allows Mellor to say that the sentence tokens of “Cambridge is here” and “It is now 1980” have different truth conditions. If the ontological is held as distinct from the logical, these sentences can be made true by the same fact or not. Their logical equivalence does not imply that some fact corresponds to it.

Temporal Parts Theory of Identity

Since the Temporal Parts Theory of Identity will feature in my forthcoming argument, I will outline it here. I will then note a difference between the theory as it stands and my version of it, which may constitute a reduction of the theory. This may seem tangential, but even if so, it is a relevant detour: when accounting for our experience of time and its passing, a theory of identity is necessary in any theory of time, and specifically one that is consistent with the theory of time in question. Given the B/R-theory I will offer, I will argue that the temporal parts theory is best suited to meet the criterion of consistency. One will find that in speaking of temporal parts, the detensers’ acceptance of the indispensability of tense with regard to temporal discourse will be made apparent. In other words, I will be speaking of earlier temporal parts differing from later ones.

The Temporal Parts Theory of Identity (hereon TPT) is derived from the notion that time is, in some sense or to some degree, like space. One can think of, for instance, a linear timeline depicting the years all 44 U.S. Presidents held office. Or one can think of the x-y axes used in physics: time is represented by the x-axis whilst space is represented by the y-axis. Or one can think of a space-time diagram containing two axes representing space and a third representing time. These sorts of considerations have led some philosophers and scientists to ask whether time is a dimension. According to some accounts, time is the fourth dimension. Time, however, is not always analogous to space. D.H. Mellor discussed these disanalogies at length.17 He, for instance, concluded that there’s no spatial analogue for our feeling of the passing of time. We can’t, in other words, attribute the passing of time to spatial changes.

With respect to parts, however, time and space are analogous. Theodore Sider explains:

Temporal parts theory is the claim that time is like space in one particular respect, namely, with respect to parts. First think about parts in space. A spatially extended object such as a person has spatial parts: her head, arms, etc. Likewise, according to temporal parts theory, a temporally extended object has temporal parts. Following the analogy, since spatial parts are smaller than the whole object in spatial dimensions, temporal parts are smaller than the whole object in the temporal dimension. They are shorter-lived.18

Recall, as an example, the b-moments in Friedrich Nietzsche’s life. Friedrich Nietzsche’s birth on October 15, 1844 is one b-moment and his death on August 25, 1900 is another. The dates of both represent distinct b-times. On TPT, he is spread out from October 15, 1844 to August 25, 1900. If we were to depict him in a space-time diagram, his parts on our diagram would depict his temporal parts. If we were capable of watching Nietzsche in his infancy, we would be observing a temporal part, then another that resembles it, and then another. If one were to watch infant Nietzsche long enough, his later temporal parts would be slightly bigger than the previous ones. That is to say that Nietzsche is no longer an infant; he is now, for instance, a toddler. So on our space-time diagram, Nietzsche grows the further we move away from his birth. It is also worth noting that temporal parts have spatial parts and vice versa. Nietzsche’s hand, like Nietzsche himself, persisted within the interval of time his life occupies. The parts he was comprised of would also be represented on our space-time diagram.

TPT is consistent with the B-theory and the B/R-theory that I will offer. For example, with regard to distant objects, time and space are alike. M31, though very distant, is just as real as any object in close proximity on Earth. Temporally distant objects, likewise, are real. This is the view known as eternalism, which differs from presentism in denying the thesis that only objects in the present exist. Also, with regard to here and now, time and space are analogous. If, for instance, I’m talking to my friend in China, she may say that it is sunny ‘here’ whilst I may say it is snowing ‘here.’ There’s no disagreement between us. ‘Here’ is, in other words, relative to the person. The word ‘now’ works in like manner. If I were speaking to Fred, a Homo erectus, via a time-traversing telephone, he could express that it is currently 1.8 million years before the advent of our calendars whilst I express that it is 2015. He and I do not disagree. ‘Now,’ like ‘here,’ is relative to the person. This is the gist of the B-theory of time.

Regardless of whether TPT is consistent with my B/R-theory of time, there’s still the question of whether TPT is true. Presenting a philosophical case in defense of the theory and reducing it to a simpler theory will be tasks undertaken in §2. The reduction of the theory is necessary because though detensers do not deny the indispensability of tense when concerning temporal discourse, TPT is, to my mind, mistaken because it bridges the chasm between the logical and ontological aspects previously discussed. In other words, TPT goes from ordinary tensed discourse to a dubious ontological commitment, namely that temporal parts are objects that exist and we can, in some sense, comprehend what they are like.

In the next section, which contains my argument, I will attempt to answer the question of whether anything can have the characteristic of being in time. I will endeavor to show that nothing can have this characteristic. My approach will differ from McTaggart’s because a philosophical theory stands in relation to logical, conceptual, perceptual, or actual phenomena in our world. Doing away with the A-series is therefore necessary but not sufficient to show that nothing exists in time, i.e., that time is unreal. If our experience of time is not evidence for real time, one must still account for this experience: ponder its origin, what it might hinge on, and whether it is possible for time to exist in isolation from these things. I will endeavor to show that time does not exist in isolation from them, and I will argue that some of the literature has hitherto agreed with this assessment. What’s left is to bring together relevant threads of this discussion to draw a conclusion that, at the very least, offers a plausible solution to our conundrum. These threads will consist of valid parts of Mellor’s new token-reflexive theory, the R-theory as defended by Oaklander, and a union of B- and R-theories that will bear some resemblance to, but differ in key aspects from, Oaklander’s theory. I will also defend TPT, defend my reason for reducing it to a simpler theory of identity, and then relate this reduction to my B/R-theory of time. Then, I will consider objections to my theory, discuss its implications, especially as they relate to the philosophy of science, and introduce mathematical nominalism to circumvent the main problem arising from my theory.

Works Cited

1 Mellor, D. H. Real Time II. London: Routledge, 1998. 8. Print.

2 Oaklander, L. Nathan. “A-, B- and R-Theories of Time: A Debate”. In Adrian Bardon, ed., The Future of the Philosophy of Time. New York: Routledge, 2012. 23. Print.

3 Prior, A. N. Papers on Time and Tense. Oxford: Clarendon P., 1968. Print.

4 Ibid. [2]

5 Oaklander, L. Nathan, and Quentin Smith, eds. The New Theory of Time. New Haven: Yale UP, 1994. 10. Print.

6 Ibid. [4]

7 Ibid. [4], pp. 18-19.

8 Ibid. [1], p. 27.

9 Smith, Quentin (1987). “Problems with The New Tenseless Theory of Time”. Philosophical Studies 52 (3): 371-392.

10 Ibid. [9]

11 Mellor, D. H. Real Time. Cambridge [Cambridgeshire]: Cambridge UP, 1981. 28. Print.

12 Ibid. [11], p. 24.

13 Oaklander, L. N. (1991). “A Defence of the New Tenseless Theory of Time”. The Philosophical Quarterly 41: 26–38.

14 D. Kaplan (1978). “On the Logic of Demonstratives”. Journal of Philosophical Logic 8: 81-98.

15 Ibid. [5], p. 73.

16 Oaklander, L. Nathan (1990). “The New Tenseless Theory of Time: A Reply to Smith”. Philosophical Studies 58 (3): 287 – 292.

17 Ibid. [1], pp. 95-96.

18 Sider, Theodore (2008). “Temporal Parts”. Web. <http://tedsider.org/papers/temporal_parts.pdf>