Category: cosmology

Argumentative Strategies Series: The Analog Strategy

Let us consider a well-worn argument:

P1 Everything that begins to exist has a cause.

P2 The universe began to exist.

C Therefore, the universe has a cause. (Reichenbach, Bruce. “Cosmological Argument”. Stanford Encyclopedia of Philosophy. 2021. Web.)

When nonbelievers first encounter this argument, it is likely they find it suspicious even if they cannot immediately identify why. My first written refutation of the argument zeroed in on the fact that it appears to commit the fallacy of composition. As an inexperienced and naive thinker, I took this to be a real eureka moment. As it turns out, this is a common retort.

Russell replies that the move from the contingency of the components of the universe to the contingency of the universe commits the Fallacy of Composition, which mistakenly concludes that since the parts have a certain property, the whole likewise has that property. Hence, whereas we legitimately can ask for the cause of particular things, to require a cause of the universe based on the contingency of its parts is mistaken.

Reichenbach, Bruce. “Cosmological Argument”. The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.). Web.

My interest here is not whether this rebuttal works against the Kalam. We can all agree that proponents of the argument will reject it and certainly will not regard it as a defeater. To my mind, it does work, and although it is not the best rebuttal or the strongest defeater for the argument, it is worth understanding why it works. Unfortunately, one would still have to deal with the obstinate apologist who refuses to hear one's case. In cases like this, the analog strategy works perfectly.

Put simply, the strategy has one devise an argument so similar to the original that it would be difficult to deny the analog without also denying the original. This is admittedly tricky, but if successful, it can save a lot of time otherwise spent trying to convince someone who refuses to budge and doubles down. Discussions of the Kalam give us a perfect segue: another common refutation is the possibility of a multiverse. Whether one goes with M-theory or Everett's Many Worlds Interpretation of quantum mechanics, other universes are a consequence of some theories in cosmology and quantum mechanics. The multiverse does not, as some apologists erroneously claim, result from a refusal to admit that God created the universe. Carroll alluded to this refutation in his debate with Craig.

So without further ado, here is a Kalam analog that theists will almost certainly reject:

P1 For every particular thing that exists, there is always more than one of that particular thing.

P2 The universe exists.

C Therefore, there is more than one universe.

A standard refutation of this argument will likely point out that I have committed the fallacy of composition. People will likely say that the universe is unlike everyday objects. They might find it patently absurd that I made a logical inference from many cats, ants, and planets to many universes. Another standard refutation of the Kalam can be mapped onto the analog: what is meant by “particular thing that exists”? Although I find the argument prima facie intriguing, I would concede: my argument does not succeed at logically proving the existence of other universes. Whether a dissenter points out the fallacy or quibbles with the semantics of the argument, these refutations have force. This is when the sleight of hand occurs and you hold up a mirror.

The theist will have to strongly consider whether the analog is a charitable reflection of the argument they think is true. In this particular case, if they are suspicious of my inference from many entities to many universes, then they are obligated to be equally suspicious of the inference from every entity in the universe being an effect to the universe being an effect. There is only one other option: they will have to show that the analog is dissimilar to the original in some critical way. If they choose that route, I do not see a way forward for them. If, after employing this strategy, they still double down, then you know you are not dealing with someone who is reasoning properly. They likely have an emotional attachment to the argument or accept it because of motivated reasoning. Whatever the case, any further discussion with such an individual is fruitless.

Ultimately, the analog strategy is an effective and powerful tool to have in one's reasoning kit. As shown here, when it is used properly, proponents of flawed or outright unsound arguments have to reckon with the efficacy of certain refutations. Where they might have waved these refutations away, they now have to consider the possibility that the argument they thought was true is actually false. The strategies I have offered so far make one thing abundantly clear, something most philosophers are perhaps reluctant to admit and something I am not ashamed to admit: persuasion involves manipulation and deception. These terms tend to have bad connotations, but reasoning, like playing a game of chess, must involve setting traps for your opponent. For philosophy to be more truth-obtaining than it currently is, these strategies have to be learned and used effectively, so that we can begin to eliminate erroneous arguments and schools of thought.

Rescuing Logic From the Abuse of Bayes’ Theorem: Validity, Soundness, and Probability

By R.N. Carmona

In recent years, there has been a surge in the use of Bayes' Theorem with the intention of bolstering this or that argument. This has resulted in an abject misuse of Bayes' Theorem as a tool. It has also resulted in an incapacity to filter out bias in the context of some debates, e.g., theism versus naturalism. Participants in these debates, on all sides, betray a tendency to inflate their prior probabilities in accordance with their unmerited epistemic certainty in either a presupposition or a key premise of one of their arguments. The prophylactic, to my mind, is found in a retreat to the basics of logic and reasoning.

An Overview on Validity

Validity, for instance, is more involved than some people realize. It is not enough for an argument to appear to have logical form. An analysis of whether it, in fact, has logical form is a task that is seldom undertaken. When people think of validity, something like the following comes to mind: “A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid” (NA. Validity and Soundness. Internet Encyclopedia of Philosophy. ND. Web.).

Kelley, however, gives us rules to go by:

  1. In a valid syllogism, the middle term must be distributed in at least one of the premises
  2. If either of the terms in the conclusion is distributed, it must be distributed in the premise in which it occurs
  3. No valid syllogism can have two negative premises
  4. If either premise of a valid syllogism is negative, the conclusion must be negative; and if the conclusion is negative, one premise must be negative
  5. If the conclusion is particular, one premise must be particular (Kelley, D. The Art of Reasoning. WW Norton & Co. 2013. Print. 243-249)

With respect to the first rule, any argument that does not adhere to it commits the fallacy of undistributed middle. Logically, if we treat the categorical syllogism as the counterpart of Modus Ponens, a hypothetical syllogism, then undistributed middle is akin to affirming the consequent. Consider the following invalid form:

All P are Q.

All R are Q.

∴ All R are P.

When affirming the consequent, one illicitly treats P ⊃ Q as though it were Q ⊃ P. It is not surprising that these two fallacies are so closely related because both are illegitimate transformations of valid argument forms. We want to say that since all P are Q and all R are P, all R are Q, in much the same way we want to infer Q from P ⊃ Q and P. Consider the well-known Kalam Cosmological Argument. No one on either side questions the validity of the argument because validity, for many of us, is met when the conclusion follows from the premises. However, one can ask whether the argument adheres to Kelley's rules. If one analyzes the argument closely enough, it is very arguable that it violates Kelley's fourth rule. The issue is that it takes transposing from the fifth rule to the fourth because the argument does not violate the fifth and therefore appears valid. However, when the argument is restated under the fourth rule, the problem becomes obvious: the universe is a particular in both Craig's conclusion and in the second premise of his argument. Let us consider the KCA restated under the fourth rule:

There are no things that are uncaused.

There is no universe that is uncaused.

∴ All universes have a cause.

Restating it this way appears controversial only because the argument seems to presuppose that there is more than one universe. Notice also that two negative premises cannot yield an affirmative conclusion. Put another way, since there are many of all things, the universe cannot be the only thing of its kind, if we even agree that the universe is like ordinary entities at all. Craig, perhaps unintentionally, attempts to get a universal from a particular, as his argument restated under the fourth rule shows. Given this, we come to the startling conclusion that Craig's KCA is invalid. Analyses of this kind are extremely rare in debates because most participants do not know or have forgotten the rules of validity. No amount of complexity hides a violation of basic principles. The advent of analytic philosophy with Russell and Moore led to increasing complexity in arguments, and for the most part, validity is respected. As shown here, this is not always the case, so a cursory analysis should always be done at the start.
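
To make the invalidity concrete, here is a minimal sketch (my illustration, not part of the original analysis) that searches small set models for a countermodel to the undistributed-middle form: an assignment to P, Q, and R on which both premises are true and the conclusion is false. One countermodel suffices to prove the form invalid.

```python
# A minimal sketch: brute-force search for a countermodel to the form
# All P are Q; All R are Q; therefore All R are P.

from itertools import combinations, product

domain = {0, 1, 2}

def subsets(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for P, Q, R in product(subsets(domain), repeat=3):
    if not (P and Q and R):
        continue  # keep the terms non-empty, honoring existential import
    premises_true = P <= Q and R <= Q   # All P are Q; All R are Q
    conclusion_true = R <= P            # All R are P
    if premises_true and not conclusion_true:
        print(f"Countermodel: P={P}, Q={Q}, R={R}")
        break
```

The same search run on a valid form like Barbara (All P are Q; All R are P; ∴ All R are Q) comes up empty, which is exactly the difference between the valid form and its illegitimate transformation.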

Validity is necessary but not sufficient for an argument to prove effective and persuasive. This is why arguments themselves cannot substitute for or amount to evidence. Soundness is determined by taking a full account of the evidence with respect to the argument. The soundness of an argument is established given that the pertinent evidence supports it; otherwise, the argument is unsound. Let us turn to some simple examples to start.

An Overview of Soundness

“A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound” (Ibid.).

All ducks are birds.

Larry is a duck.

∴ Larry is a bird.

This argument is stated under Kelley's fifth rule and is no doubt valid. Now, whether the argument is sound will have us looking for external verification. We might say that, a priori, we know that there are no ducks that are not birds. By definition, a duck is a kind of bird. All well and good. There is still the question of whether there is a duck named Larry. This is also setting aside the legitimacy of a priori knowledge because, to my mind, normal cognitive function is necessary to apprehend human languages and to comprehend the litany of predicates that follow from these languages. We know that ducks are birds a posteriori, but on this point I digress. Consider, instead, the following argument.

All ducks are mammals.

Larry is a duck.

∴ Larry is a mammal.

This argument, like the previous one, is valid and in accordance with Kelley's fifth rule. However, it is unsound. This harkens back to the notion that ducks belonging to the domain of birds is not a piece of a priori knowledge. Despite knowing that all ducks are birds, the differences between birds and mammals are not at all obvious. That is perhaps the underlying issue, a question of how identity is arrived at, in particular the failure of the essentialist program to capture what a thing is. The differentialist program would have us identify a thing by pinning down what it is not. It follows that we know ducks are birds because, anatomically and genetically, ducks do not have the signatures of mammals or any other class for that matter. A deeper knowledge of taxonomy is required to firmly establish that ducks are, in fact, birds.

An exploration of soundness is much more challenging when analyzing metaphysically laden premises. Consider, for example, the second premise of the KCA: “The universe began to exist.” What exactly does it mean for anything to begin to exist? This question has posed more problems than solutions in the literature; for our purposes, it is not necessary to summarize them here. We can say of a Vizio 50-inch plasma screen television that it began to exist in some warehouse; in other words, there is a given point in time at which a functioning television was manufactured and sold to someone. The start of a living organism's life is also relatively easy to identify. However, mapping these intuitions onto the universe gets us nowhere because, as I alluded to earlier, the universe is unlike ordinary entities. This is why the KCA has not been able to escape the charge of the fallacy of composition. All ordinary entities we know of, from chairs to cars to elephants to human beings, exist within the universe. They are, as it were, the parts that comprise the universe. It does not follow that, because it is probable that all ordinary things begin to exist, the universe must have begun to exist.

This is a perfect segue into probability. Again, since Bayes’ Theorem is admittedly complex and not something that is easily handled even by skilled analytic philosophers, a return to the basics is in order. I will assume that the rule of distribution applies to basic arguments; this will turn out to be fairer to all arguments because treating premises as distinct events greatly reduces the chances of a given argument being true. I will demonstrate how this filters out bias in our arguments and imposes on us the need to strictly analyze arguments.

Using Basic Probability to Assess Arguments

Let us state the KCA plainly:

Everything that begins to exist has a cause for its existence.

The universe began to exist.

∴ The universe has a cause for its existence.

As aforementioned, the first premise of the KCA is metaphysically laden. It is, at best, indeterminable because it is an inductive premise; all it takes is one uncaused entity within the universe to throw the entire argument into the fire. To be fair, we can only assign a probability of .5 to this premise being true. We can then use distribution to get the probability of the argument being sound: since we have a .5 probability of the first premise being true, and given that we accept that the argument is not in violation of Kelley's rules, we can distribute this probability across the other premise and arrive at the conclusion that the argument has a 50% chance of being sound.

This is preferable to treating each premise as an isolated event; I am being charitable to all arguers by assuming they have properly distributed their middles. Despite this, a slightly different convention might have to be adopted to assess the initial probability of an argument with multiple premises. An argument with six individual premises has a 1.56% chance of being true, i.e., .5^6. This convention would be adopted because we want a probability between 0 and 100 percent. If we used the same convention used for simpler arguments with fewer premises, then an argument with six premises would have a 300% chance of being true. An arguer could then arbitrarily increase the number of premises in his argument to boost the probability of his argument being true. Intuitively, an argument with multiple premises has a greater chance of being false; the second convention, at least, shows this while the first clearly does not. The jury is still out on whether the second convention is fair enough to more complex arguments. There is still the option of following standard practice and isolating an individual premise to see if it holds up to scrutiny. Probabilities do not need to be used uniformly; they should be used to make clear our collective epistemic uncertainty about something, i.e., to filter out dogma.
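
The arithmetic behind the two conventions can be made explicit. Here is a minimal sketch; the labels "summed" and "distributed" are my own, chosen only for illustration:

```python
# A minimal sketch of the two conventions discussed above.

def summed(n_premises, p=0.5):
    # The convention that breaks down: six premises yield a "300%" chance.
    return n_premises * p

def distributed(n_premises, p=0.5):
    # Treating each premise as a distinct event: .5^6 is about 1.56%.
    return p ** n_premises

for n in (1, 2, 6):
    print(f"{n} premise(s): summed {summed(n):.0%}, distributed {distributed(n):.2%}")
# 6 premise(s): summed 300%, distributed 1.56%
```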

Let us recall my negation strategy and offer the anti-Kalam:

Everything that begins to exist has a cause for its existence.

The universe did not begin to exist.

∴ The universe does not have a cause.

Despite my naturalistic/atheistic leanings, the probability of my argument being sound is also .5 because Craig and I share premise 1. The distribution of that probability into the next premise does not change because my second premise is a negation of his second premise. In one simple demonstration, it should become obvious why using basic probabilities is preferable to using Bayes' Theorem. No matter one's motivations or biases, one cannot grossly overstate priors or assign a probability much higher than .5 to metaphysically laden premises that are not easily established. We cannot even begin to apply the notion of a priori knowledge to the first premise of the KCA. We can take Larry being a bird as obvious, but we cannot take as obvious that the universe, like all things within it, began to exist and therefore has a cause.

Now, a final question remains: how exactly does the probability of an argument being sound increase? Probability increases in accordance with the evidence. For the KCA to prove sound, a full exploration of evidence from cosmology is needed. A proponent of the KCA cannot dismiss four-dimensional black holes, white holes, a cyclic universe, eternal inflation, and any theory not in keeping with his predilections. That being the case, his argument becomes one based on presupposition and is, therefore, circular. A full account of the evidence available in cosmology actually cuts sharply against the arteries of the KCA and therefore greatly reduces the probability of its being sound. Conversely, it increases the probability of an argument like the Anti-Kalam being true. The use of basic probability is so parsimonious that the percentage decrease in the Kalam's probability of being sound mirrors the percentage increase in the Anti-Kalam's. In other words, the percentage decrease of any argument proving sound mirrors the percentage increase of its alternative(s) proving true. So if a full account of cosmological evidence lowers the probability of the Kalam being sound by 60% relative to its initial .5, the Anti-Kalam's probability of being true increases by 60%. That is, the Kalam would now have a 20% probability of being true while its opposite would now have an 80% chance of being true.
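
Read as relative changes against the initial .5, the mirroring works out as follows; this is a minimal sketch of the arithmetic, not a general proof:

```python
# A minimal sketch of the mirroring just described, with the 60% read as a
# change relative to the initial .5 probability.

p_initial = 0.5
relative_change = 0.60

p_kalam = p_initial * (1 - relative_change)  # 50% minus 30 points = 20%
p_anti = p_initial * (1 + relative_change)   # 50% plus 30 points = 80%

assert abs((p_kalam + p_anti) - 1.0) < 1e-9  # the two remain complementary
print(f"Kalam: {p_kalam:.0%}, Anti-Kalam: {p_anti:.0%}")
```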

Then, if a Bayesian theorist is not yet satisfied, he can keep all priors neutral and plug in probabilities that were fairly assessed to compare a target argument to its alternatives. Even more to the point regarding fairness, rather than making a favored argument the target of analysis, the Bayesian theorist can make an opponent's argument the target of analysis. It would follow that the opponent's favored argument has a low probability of being true, given a more basic analysis that filters out bias and a systematic heuristic like the one I have offered. Such an analysis is free of human emotion or, more accurately, devotion to any given dogma. It also further qualifies the significance of taking evidence seriously. This also lends much credence to the conclusion that arguments themselves are not evidence. If that were the case, logically valid but unsound arguments would be admissible as evidence. How would we be able to determine whether one argument or another is true if the arguments themselves served as evidence? We would essentially regard arguments as self-evident or tautologous. They would be presuppositionalist in nature and viciously circular. All beliefs would be equal. This, thankfully, is not the case.
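
To see what keeping priors neutral looks like in practice, here is a minimal sketch; the likelihood numbers are placeholders of my own, not assessments of any actual argument:

```python
# A minimal sketch of a neutral-prior Bayesian comparison. Only the neutral
# .5 prior is principled; the likelihoods are hypothetical placeholders.

def posterior(prior, likelihood, alt_likelihood):
    # Bayes' Theorem for two exhaustive hypotheses H and ~H:
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = likelihood * prior
    return numerator / (numerator + alt_likelihood * (1 - prior))

neutral_prior = 0.5                 # neither hypothesis is favored at the outset
p_evidence_given_target = 0.2       # hypothetical fit with the evidence
p_evidence_given_alternative = 0.8  # hypothetical fit for the alternative

print(f"{posterior(neutral_prior, p_evidence_given_target, p_evidence_given_alternative):.0%}")
# -> 20%; with neutral priors, the assessed likelihoods alone drive the result
```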

Ultimately, my interest here has been a brief exploration into a fairer way to assess competing arguments. All of this stems from a deep disappointment in the abuse of Bayes' Theorem; everyone is inflating their priors, and no progress will be made if that continues to be permitted. A more detailed overview of Bayes' Theorem is not necessary for such purposes and would likely scare away even some readers versed in analytic philosophy and more advanced logic. My interest, as always, is in communicating philosophy to the uninitiated in a way that is approachable and intelligible. At any rate, a return to the basics is in order. Arguments should continue to be assessed; validity and soundness must be met. Where soundness proves difficult to come by, a fair initial probability must be applied to all arguments. Then, all pertinent evidence must be accounted for, and the consequences the evidence presents for a given argument must be absorbed and accepted. Where amending the argument is possible, it should be restructured, to the best of the arguer's ability, in a way that demonstrates recognition of what the evidence entails. This may sound like a lot to ask, but the pursuit of truth is an arduous journey, not an easy endeavor by any stretch. Anyone who takes the pursuit seriously will go to great lengths to increase the epistemic certainty of his views. All else is folly.

Argumentative Strategies Series: The Negation Strategy

By R.N. Carmona

Every deductive argument can be negated. I consider this an uncontroversial statement. The problem is, there are people who proceed as though deductive arguments speak to an a priori truth. The Freedom Tower is taller than the Empire State Building; the Empire State Building is taller than the Chrysler Building; therefore, the Freedom Tower is taller than the Chrysler Building. This is an example of an a priori truth because given that one understands the concepts of taller and shorter, the conclusion follows uncontroversially from the premises. This is one way in which the soundness of an argument can be assessed.

Of relevance is how one would proceed if one is unsure of the argument. Thankfully, we no longer live in a world in which one would have to go out of one's way to measure the heights of the three buildings. A simple Google search will suffice. The Freedom Tower is ~541m. The Empire State Building is ~443m. The Chrysler Building is ~318m. Granted, this is knowledge by way of testimony. I do not intend to connote religious testimony. What I intend to say is that one's knowledge is grounded on knowledge directly acquired by someone else. In other words, at least one other person actually measured the heights of these buildings, and these are the measurements they got.

Most of our knowledge claims rest on testimony. Not everyone has performed an experimental proof to show that the acceleration due to gravity is 9.8 m/s^2. Either one learned it from a professor, read it in a physics textbook, or learned it when watching a science program. Or one believes the word of someone one trusts, be it a friend or a grade school teacher. This does not change the fact that, if one cared to, one could exchange knowledge by way of testimony for directly acquired knowledge by performing an experimental proof. This is something I have done, so I do not believe on the basis of mere testimony that Newton's law holds. I can say that it holds because I tested it for myself.
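
For instance, one can estimate the acceleration due to gravity from drop height and fall time via h = (1/2)gt^2. The measurements in this sketch are hypothetical, offered only to illustrate the procedure:

```python
# A minimal sketch of trading testimony for direct measurement: estimating g
# from drop height and fall time via h = (1/2) g t^2. The measurements below
# are hypothetical, for illustration only.

drops = [(2.0, 0.64), (2.0, 0.63), (2.0, 0.65)]  # (height in m, fall time in s)

estimates = [2 * h / t ** 2 for h, t in drops]
g = sum(estimates) / len(estimates)
print(f"g is approximately {g:.1f} m/s^2")  # close to the textbook 9.8 m/s^2
```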

To whet the appetite, let us consider a well-known deductive argument and let us ignore, for the moment, whether it is sound:

P1 All men are mortal.

P2 Socrates is a man.

C Therefore, Socrates is mortal.

If someone were completely disinterested in checking whether this argument, which is merely a finite set of propositions, coheres with the world or reality, I would employ my negation strategy: the negation of an argument someone assumes to be sound without epistemic warrant or justification. The strategy forces them to explore whether their argument or its negation is sound. Inevitably, the individual will have to abandon their bizarre commitment to a sort of propositional idealism, namely the view that propositions can only be logically assessed, do not contain any real-world entities contextually, and are not claims about the world. In other words, they will abandon the notion that “All men are mortal” is a mere proposition lacking context, one not intended to make a claim about states of affairs objectively accessible to everyone, including the person who disagrees with them. With that in mind, I would offer the following:

P1 All men are immortal.

P2 Socrates is a man.

C Therefore, Socrates is immortal.

This is extremely controversial for reasons we are all familiar with. That is because everyone accepts that the original argument is sound. When speaking of ‘men’, setting aside the historical tendency to dissolve the distinction between men and women, what is meant is “all human persons from everywhere and at all times.” Socrates, as we know, was an ancient Greek philosopher who reportedly died in 399 BCE. Like all people before him, and presumably all people after him, he proved to be mortal. No human person has proven to be immortal and therefore, the original argument holds.

Of course, matters are not so straightforward. Christian apologists offer no arguments that are uncontroversially true like the original argument above. Therefore, the negation strategy will prove extremely effective in disabusing them of propositional idealism and making them empirically assess whether their arguments are sound. What follows are examples of arguments for God that have been discussed ad nauseam. Clearly, theists are not interested in conceding. They are not interested in admitting that even one of their arguments does not work. Sure, you will find theists committed to Thomism, for instance, and as such, they will reject Craig's Kalam Cosmological Argument (KCA) because it does not fit into their Aristotelian paradigm and not because it is unsound; they prefer Aquinas' approach to cosmological arguments. More common is the kind of theist who ignores the incongruity between one argument and another; since they are arguments for God, they count as evidence for his existence, and it really does not matter that Craig's KCA is not Aristotelian. I happen to think that it is, despite Craig's denial, but I digress.

Negating Popular Arguments For God’s Existence

Let us explore whether Craig’s Moral Argument falls victim to the negation strategy. Craig’s Moral Argument is as follows:

P1 If God does not exist, objective moral values do not exist.

P2 Objective moral values do exist.

C Therefore, God exists. (Craig, William L. “Moral Argument (Part 1)”. Reasonable Faith. 15 Oct 2007. Web.)

With all arguments, a decision must be made. First, an assessment of the argument form is in order. Is it a modus ponens (MP) or a modus tollens (MT)? Perhaps it is neither and is, instead, a categorical or disjunctive syllogism. In any case, one has to decide which premise(s) will be negated or whether, by virtue of the argument form, one will have to change the argument form to state the opposite. You can see this with the original example. I could have very well negated P2 and stated “Socrates is not a man.” Socrates is an immortal jellyfish that I tagged in the Mediterranean. Or he is an eternal being that I met while tripping out on DMT. For purposes of the argument, however, since he is not a man, at the very least the question of whether he is mortal is open. We would have to ask what Socrates is. Now, if Socrates is my pet hamster, then yes, Socrates is mortal despite not being a man. It follows that the choice of negation has to be made where it proves most effective. Some thought has to go into it.

Likewise, the choice has to be made when confronting Craig's Moral Argument. Craig's Moral Argument is a modus tollens. For the uninitiated, it simply states: [((p → q) ∧ ¬q) → ¬p] (Potter, A. (2020). The rhetorical structure of Modus Tollens: An exploration in logic-mining. Proceedings of the Society for Computation in Linguistics, 3, 170-179.). Another way of putting it is that one is denying the consequent. That is precisely what Craig does. “Objective moral values do not exist” is the consequent q. Craig is saying ¬q or “Objective moral values do exist.” Therefore, one route one can take is keeping the argument form and negating P1, which in turn negates P2.
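
For readers who want to see the validity of the form verified rather than asserted, here is a minimal sketch (my illustration) that checks every truth-value assignment:

```python
# A minimal sketch verifying modus tollens by brute force: in every row where
# the premises (p -> q) and ~q are both true, the conclusion ~p is true too.

from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([True, False], repeat=2):
    if implies(p, q) and not q:   # both premises true in this row
        assert not p              # so the conclusion ~p must hold
print("No assignment makes the premises true and the conclusion false.")
```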

MT Negated Moral Argument

P1 If God exists, objective moral values and duties exist.

P2 Objective moral values do not exist.

C Therefore, God does not exist.

The key is to come up with a negation that is either sound or, at the very least, free of any controversy. Straight away, I do not like P2. Moral realists would also deny this negation because, to their minds, P2 is not true. The controversy with P2 is not so much whether it is true or false, but that it falls on the horns of the objectivism-relativism and moral realism/anti-realism debates in ethics. The argument may accomplish something with respect to countering Craig’s Moral Argument, but we are in no better place because of it. This is when we should explore changing the argument’s form in order to get a better negation.

MP Negated Moral Argument

P1 If God does not exist, objective moral values and duties exist.

P2 God does not exist.

C Therefore, objective moral values and duties exist.

This is a valid modus ponens. I have changed the argument form of Craig's Moral Argument, and I now have what I think to be a better negation of his argument. Atheists can find satisfaction in P2; it is the epistemic proposition atheists are committed to. The conclusion also alleviates any concerns moral realists might have had with the MT Negated Moral Argument. For my own purposes, I think this argument works better. That, however, is beside the point. The point is that this forces theists either to justify the premises of Craig's Moral Argument, i.e., prove that the argument is sound, or to assert, on the basis of mere faith, that Craig's argument is true. In either case, one will have succeeded either in forcing the theist to abandon their propositional idealism and test the argument against the world as ontologically construed, or in getting them to confess that they are indulging in circular reasoning and confirmation bias, i.e., that they are irrational and illogical. Both of these count as victories. We can explore whether other arguments for God fall on this sword.

We can turn our attention to Craig’s Kalam Cosmological Argument (KCA):

P1 Everything that begins to exist has a cause.

P2 The universe began to exist.

C Therefore, the universe has a cause. (Reichenbach, Bruce. “Cosmological Argument”. Stanford Encyclopedia of Philosophy. 2021. Web.)

Again, negation can take place in two places: P1 or P2. Negating P1, however, does not make sense. Negating P2, as in the case of his Moral Argument, changes the argument form; this is arguable and more subtle. So we get the following:

MT Negated KCA

P1 Everything has a cause iff it begins to exist. {(∀x)[(C)x ↔ (E)x]}

P2 The universe did not begin to exist.

C Therefore, the universe does not have a cause.

Technically, Craig's KCA is a categorical syllogism. Such syllogisms feature a universal (∀) or existential (∃) quantifier; the former is introduced by saying “all,” the latter by saying “some.” Consider, “all philosophers are thinkers; all logicians are philosophers; therefore, all logicians are thinkers.” Conversely, one could say “no mallards are insects; some birds are mallards; therefore, some birds are not insects.” What Craig is stating is that all things that begin to exist have a cause, so if the universe is a thing that began to exist, then it has a cause. Alternatively, his argument can be considered an implicit modus ponens: “if the universe began to exist, then it has a cause; the universe began to exist; therefore, the universe has a cause.” I consider this a much weaker formulation, and since neither Craig nor anyone else ever comes across things that did not begin to exist, interpreting his first premise as a biconditional makes more sense. Hence, the negation works because if the universe did not begin to exist, then the universe is not part of the group of all things that have a cause. This is why my original formulation began with Craig's P1, because I consider it a biconditional. Notice that even if p and q are false, Craig's P1 is still true; one way to make a biconditional true is for p and q both to be false. All that is said by the Negated or anti-Kalam is that the universe is not within the domain of things that begin to exist. Therefore, the universe does not have a cause.
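
The truth-table behavior of the biconditional bears this out; here is a minimal sketch enumerating the four assignments:

```python
# A minimal sketch of the biconditional's truth table: p <-> q is true when p
# and q match, including the row where both are false.

from itertools import product

for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5}  p <-> q: {p == q}")
# The (False, False) row comes out True: P1 survives even if the universe
# neither began to exist nor has a cause, which is what the negation exploits.
```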

Whether the universe is finite or eternal has been debated for millennia and in a sense, despite changing context, the debate rages on. If the universe is part of an eternal multiverse, it is just one universe in a vast sea of universes within a multiverse that has no temporal beginning. Despite this, the MT Negated KCA demonstrates how absurd the KCA is. The singularity was already there ‘before’ the Big Bang. The Big Bang started the cosmic clock, but the universe itself did not begin to exist. This is more plausible. Consider that everything that begins to exist does so when the flow of time is already in motion, i.e. when the arrow of time pointed in a given direction due to entropic increase reducible to the decreasing temperature throughout the universe. Nothing that has ever come into existence has done so simultaneously with time itself because any causal relationship speaks to a change and change requires the passage of time, but at T=0, no time has passed, and therefore, no change could have taken place. This leads to an asymmetry. We thus cannot speak of anything beginning to exist at T=0. The MT Negated KCA puts cosmology in the right context. The universe did not come into existence at T=0. T=0 simply represents the first measure of time; matter and energy did not emerge at that point.

For a more complicated treatment, Malpass and Morriston argue that “one cannot traverse an actual infinite in finite steps” (Malpass, Alex & Morriston, Wes (2020). Endless and Infinite. Philosophical Quarterly 70 (281): 830-849.). In other words, from a mathematical point of view, T=0 is the x-axis. All of the events after T=0 form an asymptote along the x-axis: the events go further and further back, ever closer to T=0, but never actually touch it. For a visual representation, see below:

[Figure: a curve asymptotically approaching the x-axis, illustrating events nearing T=0 without reaching it. Credit: Free Math Help]
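
As a toy model of this asymptotic picture (my illustration, not Malpass and Morriston's own formalism), place an event at each time

\[
  t_n = \frac{1}{2^{n}}, \qquad n = 0, 1, 2, \ldots
\]

Every t_n is strictly greater than zero, yet t_n approaches 0 as n grows without bound: infinitely many events crowd ever closer to T=0, while no event occurs at T=0 itself.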

The implication here is that time began to exist, but the universe did not begin to exist. A recent paper implies that this is most likely the case (Quantum Experiment Shows How Time ‘Emerges’ from Entanglement. The Physics arXiv Blog. 23 Oct 2013. Web.). The very hot, very dense singularity before the emergence of time at T=0 would have been subject to quantum mechanics rather than the macroscopic physics that came later, e.g., General Relativity. As such, the conditions were such that entanglement could have resulted in the emergence of time in our universe, but not in the emergence of the universe. All of the matter and energy were already present before the clock started to tick. By analogy, if the universe is akin to a growing runner, then the toddler is at the starting line before the gun goes off. The sound of the gun starts the clock. The runner starts running sometime after she hears the sound. As she runs, she goes through all the stages of childhood, puberty, adolescence, and adulthood, and finally dies. Crucially, the act of her running and her growth do not begin until after the gun goes off. Likewise, no changes take place at T=0; all changes take place after T=0. While entanglement suggests a change occurring before the clock even started ticking, quantum mechanics demonstrates that quantum changes do not require time and, in fact, may result in the emergence of time. Therefore, it is plausible that though time began to exist at the Big Bang, the universe did not begin to exist, thus making the MT Negated KCA sound. The KCA is, therefore, false.

Finally, so that the Thomists do not feel left out, we can explore whether the negation strategy can be applied to Aquinas’ Five Ways. For our purposes, the Second Way is closely related to the KCA and would be defeated by the same considerations. Of course, we would have to negate the Second Way so that it is vulnerable to the considerations that cast doubt on the KCA. The Second Way can be stated as follows:

We perceive a series of efficient causes of things in the world.

Nothing exists prior to itself.

Therefore nothing [in the world of things we perceive] is the efficient cause of itself.

If a previous efficient cause does not exist, neither does the thing that results (the effect).

Therefore if the first thing in a series does not exist, nothing in the series exists.

If the series of efficient causes extends ad infinitum into the past, then there would be no things existing now.

That is plainly false (i.e., there are things existing now that came about through efficient causes).

Therefore efficient causes do not extend ad infinitum into the past.

Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God. (Gracyk, Theodore. “Argument Analysis of the Five Ways”. Minnesota State University Moorhead. 2016. Web.)

This argument is considerably longer than the KCA, but there are still areas where the argument can be negated. I think P1 is uncontroversial and so, I do not mind starting from there:

Negated Second Way

We perceive a series of efficient causes of things in the world.

Nothing exists prior to itself.

Therefore nothing [in the world of things we perceive] is the efficient cause of itself.

If a previous efficient cause does not exist, neither does the thing that results (the effect).

Therefore if the earlier thing in a series does not exist, nothing in the series exists.

If the series of efficient causes extends ad infinitum into the past, then there would be things existing now.

That is plainly true (i.e., efficient causes, per Malpass and Morriston, extend infinitely into the past, or the number of past efficient causes is a potential infinity).

Therefore efficient causes do extend ad infinitum into the past.

Therefore it is not necessary to admit a first efficient cause, to which everyone gives the name of God.

Either the theist will continue to assert that the Second Way is sound, epistemic warrant and justification be damned, or they will abandon their dubious propositional idealism and run a soundness test. Checking whether the Second Way or the Negated Second Way is sound would inevitably bring them into contact with empirical evidence supporting one argument or the other. As I have shown with the KCA, considerations of time, from a philosophical and quantum mechanical perspective, greatly lower the probability of the KCA being sound. This carries over neatly to Aquinas' Second Way, and as such, one has far less epistemic justification for believing that the KCA or Aquinas' Second Way is sound. The greater justification is found in the negated versions of these arguments.

Ultimately, one either succeeds at making the theist play the game according to the right rules or at getting them to admit that their beliefs are not properly epistemic at all; instead, they believe by way of blind faith, all of their redundant arguments are exercises in circular reasoning, and any pretense of engaging the evidence is an exercise in confirmation bias. Arguments for God are a perfect example of directionally motivated reasoning (see Galef, Julia. The Scout Mindset: Why Some People See Things Clearly and Others Don't. New York: Portfolio, 2021. 63-66. Print). I much prefer accuracy motivated reasoning. We are all guilty of motivated reasoning, but directionally motivated reasoning is indicative of irrationality and usually speaks to the fact that one holds beliefs that do not square with the facts. Deductive arguments are only useful insofar as premises can be supported by evidence, which therefore makes it easier to show that an argument is sound. This is why we can reason that if Socrates is a man, more specifically the ancient Greek philosopher we all know, then Socrates was indeed mortal, and that is why he died in 399 BCE. Likewise, this is why we cannot reason that objective morality can only be the case if the Judeo-Christian god exists, that if the universe began to exist, God is the cause, and that if the series of efficient causes cannot regress infinitely and must terminate somewhere, it can only terminate at a necessary first cause, which some call God. These arguments can be negated, and the negations will show that they are either absurd or that the reasoning in the arguments is deficient and rests on the laurels of directionally motivated reasoning, due to a bias for one's religious faith rather than the bedrock of carefully reasoned, meticulously demonstrated, accuracy motivated reasoning that does not ignore or omit pertinent facts.

The arguments for God, no matter how old or new, simple or complex, do not work, not only because they rely on directionally motivated and patently biased reasoning, but because, when testing for soundness and being sure not to exclude any pertinent evidence, the arguments turn out to be unsound. In the main, they all contain controversial premises that do not work unless one already believes in God. So there is a sense in which these arguments exist to give believers a false sense of security or, more pointedly, a false sense of certainty. Unlike my opponents, I am perfectly content with being wrong, with changing my mind, but the fact remains: theism is simply not the sort of belief to which I give much credence. Along with the Vagueness Strategy, the Negation Strategy is something that should be in every atheist's toolbox.

Rebuking Rasmussen’s Geometric Argument

By R.N. Carmona

My purpose here is twofold. First and foremost, I want to clarify Rasmussen's argument because, though I can understand how word of mouth can lead to what is essentially a straw man of his argument, especially in light of the fact that his argument requires one to pay for an online article or his book Is God the Best Explanation of Things?, which he coauthored with Felipe Leon, it is simply good practice to present an argument fairly. Secondly, I want to be stern about the fact that philosophy of religion cannot continue to rake these dead coals. Rasmussen's argument is just another in a long, winding, and quite frankly tired history of contingency arguments. In any case, the following is the straw man I want my readers and anyone else who finds this post to stop citing. This is decidedly not Rasmussen's argument:

[Image: a graphic presenting the so-called “Argument From Arbitrary Limits,” misattributed to Rasmussen]

Rasmussen has no argument called The Argument From Arbitrary Limits. Arbitrary limits actually feature in Leon's chapter, where he expresses skepticism of Rasmussen's Geometric Argument (Rasmussen, Joshua and Leon, Felipe. Is God the Best Explanation of Things? Switzerland: Palgrave Macmillan. 53-68. Print.). Also, Rasmussen has a Theistic conception of God (omnipresent, wholly good, etc.) that is analogous to what Plantinga means by maximal greatness, but Rasmussen does not refer to God using that term. Perhaps there is confusion with his use of the term “maximal conceivable.” Granted, given Rasmussen's beliefs, he implies God with what he calls a maximal foundation, “a foundation complete with respect to its fundamental (basic, uncaused) features” (Ibid., 140). He makes it clear throughout the book that he is open to such a foundation that is not synonymous with God. In any case, his maximal conceivable is not a being possessing maximal greatness; at least, not exactly, since it appears he means something more elementary given his descriptions of basic and uncaused, as these clearly do not refer to omnipresence, perfect goodness, and so on. There may also be some confusion with his later argument, which he calls “The Maximal Mind Argument” (Ibid., 112-113), which fails because it relies heavily on nonphysicalism, a series of negative theories in philosophy of mind that do not come close to offering alternative explanations for an array of phenomena thoroughly explained by physicalism (see here). In any case, Rasmussen has no argument resembling the graphic above. His arguments rest on a number of dubious assumptions, the nexus of which is his Geometric Argument:

JR1 Geometry is a geometric state.

JR2 Every geometric state is dependent.

JR3 Therefore, Geometry is dependent.

JR4 Geometry cannot depend on any state featuring only things that have a geometry.

JR5 Geometry cannot depend on any state featuring only non-concrete (non-causal) things.

JRC Therefore, Geometry depends on a state featuring at least one geometry-less concrete thing (3-5) (Ibid., 42).

Like Leon, I take issue with JR2. Leon does not really elaborate on why JR2 is questionable, saying only that “the most basic entities with geometry (if such there be) have their geometrics of factual or metaphysical necessity” and that therefore “it's not true that every geometric state is dependent” (Ibid., 67). He is correct, of course, but elaboration could have helped here because this is a potential defeater. Factual and metaphysical necessity are inhered in physical necessity. The universe is such that the fact that every triangle containing a 90-degree angle is a right triangle is reducible to physical constraints within our universe. This fact of geometry is unlike Rasmussen's examples, namely chair and iPhone shapes. He states: “The instantiation of [a chair's shape] depends upon prior conditions. Chair shapes never instantiate on their own, without any prior conditions. Instead, chair-instantiations depend on something” (Ibid., 41). This overt Platonism is questionable in and of itself, but Leon's statement is forceful in this case: the shape of the chair is not dependent because it has its shape of factual or metaphysical necessity, which stems from physical necessity. Chairs, first and foremost, are shaped the way they are because of our shape when we sit down; furthermore, chairs take the shapes they do because of physical constraints like human weight, gravity, friction against a floor, etc. For a chair not to collapse under the force of gravity and the weight of an individual, it has to be engineered in some way to withstand these forces acting on it; the chair's shape is so because of physical necessity, and this explains its metaphysical necessity. There is, therefore, no form of a chair in some ethereal realm; an idea like this is thoroughly retrograde and not worth considering.

In any case, the real issue is that chair and iPhone shapes are not the sort of shapes that occur naturally in the universe. The shapes that do, namely spheres, ellipses, triangles, and so on, also emerge from physical necessity. It is simply the case that a suspender on a bridge forms the hypotenuse of a right triangle. Like a chair, bridge suspenders take this shape because of physical necessity. The same applies to the ubiquity of spherical and elliptical shapes in the universe. To further disabuse anyone of Platonic ideas, globular shapes are also quite ubiquitous in the universe and are more prominent the closer we get to the Big Bang. There are shapes inherent in our universe that cannot be neatly called geometrical, and even still, these shapes are physically and, therefore, metaphysically necessitated. If JR2 is false, then the argument falls apart. On another front, this addresses Rasmussen's assertion that God explains why there is less chaos in our universe. Setting aside that the qualification of this statement is entirely relative, the relative order we see in the universe is entirely probabilistic, especially given that entropy guarantees a trend toward disorder as the universe grows older and colder.

I share Leon's general concern about “any argument that moves from facts about apparent contingent particularity and an explicability principle to conclusions about the nature of fundamental reality” (Ibid., 67) or, as I have been known to put it: one cannot draw ontological conclusions on the basis of logical considerations. Theistic philosophers of religion and, unfortunately, philosophers in general, have a terrible habit of leaping from conceivability to possibility and then all the way to actuality. Leon elaborates:

Indeed, the worry above seems to generalize to just about any account of ultimate reality. So, for example, won’t explicability arguments saddle Christian theism with the same concern, viz. why the deep structure of God’s nature should necessitate exactly three persons in the Godhead? In general, won’t explicability arguments equally support a required explanation for why a particular God exists rather than others, or rather than, say, an infinite hierarchy of gods? The heart of the criticism is that it seems any theory must stop somewhere and say that the fundamental character is either brute or necessary, and that if it’s necessary, the explanation of why it’s necessary (despite appearing contingent) is beyond our ability to grasp (Ibid., 67-68).

Of course, Leon is correct in his assessment. Why not Ahura Mazda, his hypostatic union with Spenta Mainyu, and his extension via the Amesha Spentas? If, for instance, the one-many problem requires the notion of a One that is also many, what exactly rules out Ahura Mazda? One starts to see how the prevailing version of Theism in philosophy of religion is just a sad force of habit. This is why it is necessary to move on from these arguments. Contingency arguments are notoriously outmoded because Mackie, Le Poidevin, and others have already provided general defeaters that apply to any particular contingency argument. Also, how many contingency arguments do we need exactly? In other words, how many different ways can one continue to assert that all contingent things require at least one necessary explanation? Wildman guides us here:

Traditional natural theology investigates entailment relations from experienced reality to, say, a preferred metaphysics of ultimacy. But most arguments of this direct-entailment sort have fallen out of favor, mostly because they are undermined by the awareness of alternative metaphysical schemes that fit the empirical facts just as well as the preferred metaphysical scheme. By contrast with this direct-entailment approach, natural theology ought to compare numerous compelling accounts of ultimacy in as many different respects as are relevant. In this comparison-based way, we assemble the raw material for inference-to-the-best-explanation arguments on behalf of particular theories of ultimacy, and we make completely clear the criteria for preferring one view of ultimacy to another.

Wildman, Wesley J. Religious Philosophy as Multidisciplinary Comparative Inquiry: Envisioning a Future For The Philosophy of Religion. State University of New York Press. Albany, NY. 2010. 162. Print.

Setting aside that Rasmussen does not make clear why he prefers a Christian view of ultimacy as opposed to a Zoroastrian one or any other that may be proposed, I think Wildman is being quite generous when saying that “alternative metaphysical schemes fit the empirical facts just as well as the preferred metaphysical scheme” because the fact of the matter is that some alternatives fit the empirical facts better than the metaphysical schemes Christian Theists resort to. Rasmussen's preferred metaphysical scheme of a maximal foundation, which, properly stated, is a disembodied, nonphysical mind who is omnipresent, wholly good, and so on, rests on dubious assumptions that have not been made to cohere with the empirical facts. Nonphysicalism, as I have shown in the past, does not even attempt to explain brain-related phenomena. Physicalist theories have trounced the opposition in that department, and it is not even close. What is more, Christian Theists are especially notorious for not comparing their account to other accounts, and that is because they are not doing philosophy but rather apologetics. This is precisely why philosophy of religion must move on from Christian Theism. We can think of an intellectual corollary to forgiveness. In light of Christian Theism's abject failure to prove God, how many more chances are we required to give this view? Philosophy of religion is, then, like an abused lover continuing to be moved by scraps of affection made to cover up heaps of trauma. The field should be past the point of forgiveness, past giving Christian Theism yet another go to get things right; it has had literal centuries to get its story straight and present compelling arguments, and yet here we are, retreading ground that has been walked over again and again and again.

To reinforce my point, I am going to quote Mackie and Le Poidevin’s refutations of contingency arguments like Rasmussen’s. It should then become clear that we have to bury these kinds of arguments for good. Let them who are attached to these arguments mourn their loss, but I will attend no such wake. What remains of the body is an ancient skeleton, long dead. It is high time to give it a rest. Le Poidevin put one nail in the coffin of contingency arguments. Anyone offering new contingency arguments has simply failed to do their homework. It is typical of Christian Theists to indulge confirmation bias and avoid what their opponents have to say. The problem with that is that the case against contingency arguments has been made. Obstinacy does not change the fact. Le Poidevin clearly shows why necessary facts do not explain contingent ones:

Necessary facts, then, cannot explain contingent ones, and causal explanation, of any phenomenon, must link contingent facts. That is, both cause and effect must be contingent. Why is this? Because causes make a difference to their environment: they result in something that would not have happened if the cause had not been present. To say, for example, that the presence of a catalyst in a certain set of circumstances speeded up a reaction is to say that, had the catalyst not been present in those circumstances, the reaction would have proceeded at a slower rate. In general, if A caused B, then, if A had not occurred in the circumstances, B would not have occurred either. (A variant of this principle is that, if A caused B, then if A had not occurred in the circumstances, the probability of B’s occurrence would have been appreciably less than it was. It does not matter for our argument whether we accept the origin principle or this variant.) To make sense of this statement, ‘If A had not occurred in the circumstances, B would not have occurred’, we have to countenance the possibility of A’s not occurring and the possibility of B’s not occurring. If these are genuine possibilities, then both A and B are contingent. So one of the reasons why necessary facts cannot causally explain anything is that we cannot make sense of their not being the case, whereas causal explanations requires us to make sense of causally explanatory facts not being the case. Causal explanation involves the explanation of one contingent fact by appeal to another contingent fact.

Le Poidevin, Robin. Arguing for Atheism: An Introduction to the Philosophy of Religion. London: Routledge, 1996. 40-41. Print.

This is a way of substantiating that an effect is inhered in a cause, or the principle that like effects come from like causes. This has been precisely my criticism of the idea that a nonphysical cause created the physical universe. There is no theory of causation that permits the interaction of an ethereal entity's dispositions with those of physical things. It is essentially a paraphrase of Elizabeth of Bohemia's rebuttal to Cartesian dualism: how does mental substance interact with physical substance? This is why mind-body dualism remains in a state of incoherence, but I digress. Mackie puts yet another nail in the coffin:

The principle of sufficient reason, then, is more far-reaching than the principle that every occurrence has a preceding sufficient cause: the latter, but not the former, would be satisfied by a series of things or events running back infinitely in time, each determined by earlier ones, but with no further explanation of the series as a whole. Such a series would give us only what Leibniz called ‘physical’ or ‘hypothetical’ necessity, whereas the demand for a sufficient reason for the whole body of contingent things and events and laws calls for something with ‘absolute’ or ‘metaphysical’ necessity. But even the weaker, deterministic, principle is not an a priori truth, and indeed it may not be a truth at all; much less can this be claimed for the principle of sufficient reason. Perhaps it just expresses an arbitrary demand; it may be intellectually satisfying to believe there is, objectively, an explanation for everything together, even if we can only guess at what the explanation might be. But we have no right to assume that the universe will comply with our intellectual preferences. Alternatively, the supposed principle may be an unwarranted extension of the determinist one, which, in so far as it is supported, is supported only empirically, and can at most be accepted provisionally, not as an a priori truth. The form of the cosmological argument which relies on the principle of sufficient reason therefore fails completely as a demonstrative proof.

Mackie, J. L. The Miracle of Theism: Arguments for and against the Existence of God. Oxford: Clarendon, 1982. 86-87. Print.

Every contingency argument fails because it relies on the principle of sufficient reason and because necessity does not cohere with contingency as it concerns a so-called causal relation. Mackie, like Le Poidevin, also questions why God is a satisfactory termination of the regress. Why not something else? (Ibid., 92). Contingency arguments amount to vicious special pleading and an outright refusal to entertain viable alternatives, even in cases where the alternatives are nonphysical and compatible with religious sentiments. In any case, it would appear that the principle of sufficient reason is not on stable ground. Neither is the notion that a necessary being is the ultimate explanation of the universe. Contingency arguments have been defeated, and there really is no way to restate these arguments in a way that does not run headlong into Le Poidevin’s and Mackie’s defeaters. Only the obdurate need to believe that God is the foundational explanation of the universe explains the redundancy of Christian Theists within the philosophy of religion. That is setting aside that apologetics is not philosophy and other complaints I have had. The Geometric Argument, despite using different language, just is a contingency argument. If the dead horse could speak, it would tell them all to lay down their batons once and for all, but alas.

Ultimately, contingency arguments are yet another example of how repetitive Christianized philosophy of religion has become. There is a sense in which Leon, Le Poidevin, and Mackie are paraphrasing one another because, and here is a bit of irony, like arguments result in like rebuttals. They cannot help but sound as though they each decided, or even conspired, to write on the same topic for a final paper. They are, after all, addressing the same argument no matter how many attempts have been made to word it differently. It is a vicious cycle, a large wheel that cannot be allowed to keep on turning. It must be stopped in its tracks if progress in the philosophy of religion is to get any real traction.

Why Dispositions Make More Sense Than Powers

By R.N. Carmona

Consider what follows to be some scattered thoughts after reading an excellent paper by Marius Backmann. I think he succeeds in showing how the Neo-Aristotelian notion of powers is incongruous with pretty much any theory of time of note. My issue with powers is more basic: what in the world are Neo-Aristotelians even saying when they invoke this idea, and why does it seem that no one has raised the concern that powers are an elementary paraphrase of dispositions? With respect to this concern, Neo-Aristotelians do not even attempt to make sense of our experience with matter and energy. They seem to go on the assumption that something just has to underlie the physical world, whereas I take it as extraneous to include metaphysical postulates where entirely physical ones make do. Dispositions are precisely the sort of physical postulates that adequately explain what we perceive as cause-effect relationships. What I will argue is that a more thorough analysis of dispositions is all that is needed to understand why a given cause a produced some given effect b.

My idea that powers are an elementary paraphrase is entailed in Alexander Bird’s analysis of what powers are. Backmann summarizes that analysis as follows:

According to Bird, powers, or potencies, as he calls them alternatively, are a subclass of dispositions. Bird holds that not all dispositions need to be powers, since there could be dispositions that are not characterised by an essence, apart from self-identity. Powers, on the other hand, Bird (2013) holds to be properties with a dispositional essence. On this view, a power is a property that furnishes its bearer with the same dispositional character in every metaphysically possible world where the property is instantiated. If the disposition to repel negatively charged objects if there are some in the vicinity is a power in that sense, then every object that has that property does the same in every metaphysically possible world, i.e. repel negatively charged objects if there are some in the vicinity.

Marius Backmann (2019) No time for powers, Inquiry, 62:9-10, 979-1007, DOI: 10.1080/0020174X.2018.1470569

Upon closer analysis of Bird’s definition, a power just is a disposition. The issue is that Bird and the Neo-Aristotelians who complain that he has not gone far enough have isolated what they take to be a power from the properties of an electron, which is a good example of a particle that repels negatively charged objects given that some are in its vicinity. Talk of possible worlds makes no sense unless one can prove mathematically that an electron-like particle with a different mass would also repulse other negatively charged particles. However, though it can easily be shown that a slightly more massive electron-like particle will repulse other particles of negative charge, its electrical charge will be slightly higher than an electron’s because, according to Robert Millikan’s calculation, there seems to be a relationship between the mass of a particle and its charge. The elementary charge is e = ~1.602 x 10^-19 coulombs. The charge of a quark is measured in multiples of e/3, implying a smaller charge, which is expected given that quarks are constituents of larger particles. So what is of interest is why the configuration of even an elementary particle yields predictable “behaviors.”
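To keep the arithmetic straight, here is a quick sketch of the charge relationships just mentioned; the value of e is the standard CODATA figure, and the quark assignments are the conventional ones:

```python
# A small arithmetic sketch of the charge relationships above; it only
# illustrates the "multiples of e/3" point, nothing more.
E = 1.602176634e-19  # elementary charge in coulombs (CODATA value)

quark_charges = {
    "up": 2 / 3,     # up-type quarks carry +2e/3
    "down": -1 / 3,  # down-type quarks carry -e/3
}

for name, fraction in quark_charges.items():
    print(f"{name} quark: {fraction * E:+.3e} C ({fraction:+.2f} e)")

# A proton (uud) recovers exactly +1e; an electron carries -1e.
proton = 2 * quark_charges["up"] + quark_charges["down"]
print(f"proton: {proton * E:+.3e} C")  # +1.602e-19 C
```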

To see this, let us dig into an example Backmann uses: “My power to bake a cake would not bring a cake that did not exist simpliciter before into existence, but only make a cake that eternally exists simpliciter present. Every activity reduces to a change in what is present” (Ibid.). The Neo-Aristotelian is off track to say that we have a power to bake a cake and that the oven has a power to yield this desired outcome, powers that do not trace back to its parts, or, as Cartwright states of general nomological machines: “We explicate how the machine arrangement dictates what can happen – it has emergent powers which are not to be found in its components” (Cartwright, Nancy & Pemberton, John (2013). Aristotelian powers: without them, what would modern science do? In John Greco & Ruth Groff (eds.), Powers and Capacities in Philosophy: the New Aristotelianism. London, U.K.: Routledge. pp. 93-112.). Of the nomological machines in nature, Cartwright appears to bypass the role of evolution. Of such machines invented by humans, she ignores the fact that we often wrongly predict what a given invention will do. Evolution proceeds via probabilities and so, from our post hoc point of view, it looks very much like trial and error. Humans have the advantage of being much more deliberate about what we are selecting for and therefore, our testing and re-testing of inventions and deciding when they are safe and suitable to hit the market is markedly similar to evolutionary selection.

That being said, the components of a machine do account for its function. It is only due to our understanding of other machines that we understand what should go into building a new one in order for it to accomplish a new task or tasks. Powers are not necessary because then we should be asking: why did we not start off with machines that have superior powers? In other words, why start with percolators if we could have just skipped straight to Keurig or Nespresso machines or whatever more advanced models might be invented? Talk of powers seems to insinuate that objects, whether complex or simple, are predetermined to behave the way they do, even in the absence of trial runs, modifications, or outright upgrades. This analysis sets aside the cake itself. It does not matter what an oven or air fryer is supposed to do. If the ingredients are wrong, either because I neglected to use baking powder or did not use enough flour, the cake may not rise. The ingredients that go into baked goods play a “causal” role as well.

Dispositions, on the other hand, readily explain why one invention counts as an upgrade over a previous iteration. Take, for instance, Apple’s A14 Bionic chip. At bottom, this chip accounts for “a 5 nanometer manufacturing process” and CPU and GPU improvements over the iPhone 11 (Truly, Alan. “A14 Bionic: Apple’s iPhone 12 Chip Benefits & Improvements Explained”. Screenrant. 14 Oct 2020. Web). Or more accurately, key differences in the way this chip was made account for the improvement over its predecessors. Perhaps more crucial is that critics of dispositions have mostly tended to isolate dispositions, as though a glass cup’s fragility exists in a vacuum. Did the cup free fall, accelerating at 9.8 m/s^2? Did it fall on a mattress or on a floor? What kind of floor? Or was the cup thrown at some velocity because Sharon was angry with her boyfriend Albert? What did she throw the cup at: a wall, the floor, Albert’s head, or did it land in a half-full hamper with Sharon and Albert’s dirty clothes?

Answering these questions solves the masking and mimicker problems. The masking problem can be framed as follows:

Another kind of counterexample to SCA, due to Johnston (1992) and Bird (1998), involves a fragile glass that is carefully protected by packing material. It is claimed that the glass is disposed to break when struck but, if struck, it wouldn’t break thanks to the work of the packing material. There is an important difference between this example and Martin’s: the packing material would prevent the breaking of the glass not by removing its disposition to break when struck but by blocking the process that would otherwise lead from striking to breaking.

Choi, Sungho and Michael Fara, “Dispositions”The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.).

I would not put it that way, namely that the packing material prevents the glass from breaking by blocking the process that would result if it were exposed. The packing material has its own properties and dispositions, discovered through trial and error, that make it good at protecting glass. Packing paper was more common, but now we have bubble wrap and heavy duty degradable stretch wrap, also capable of protecting glass, china, porcelain, and other fragile items. The dispositions of these protective materials readily explain why wrapping fragile objects in them protects those objects from incidental strikes or drops. If I were, however, to throw a wrapped coffee mug as hard as I can toward a brick wall, the mug is likely to break. This entails that variables are important in this thing we call cause and effect.

A perfect example is simple collisions of the sort you learn about in an elementary physics course. If a truck and trailer speeding down a highway in one direction at ~145 km/h and a sedan traveling in the opposite direction at a cruising speed of ~89 km/h collide, we can readily predict the outcome and that this particular collision is inelastic. The speeding truck would likely barrel through the sedan, and the sedan will be pushed in the direction the truck was traveling. The vehicles’ respective speeds and masses are extremely important in understanding what goes on here. There is no sense in which we can say that trucks just have a power to mow things down, because a collision between the truck in our original example and a truck and trailer driving at roughly the same speed in the opposite direction results in an entirely different outcome: a perfectly inelastic collision in which both trucks come to an immediate halt after the effects of the impact are fully realized.
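For the record, both outcomes fall out of conservation of momentum alone. Here is a minimal sketch; the vehicle masses are illustrative guesses on my part, not figures from the example:

```python
# Perfectly inelastic collision: the wreckage locks together, so the
# final velocity is total momentum over total mass.
def inelastic_final_velocity(m1, v1, m2, v2):
    """Conservation of momentum: (m1*v1 + m2*v2) / (m1 + m2)."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

truck_m, truck_v = 20_000.0, 145 / 3.6   # kg, m/s (145 km/h)
sedan_m, sedan_v = 1_500.0, -89 / 3.6    # opposite direction (89 km/h)

v_f = inelastic_final_velocity(truck_m, truck_v, sedan_m, sedan_v)
print(f"wreckage velocity: {v_f:.1f} m/s")  # positive: the truck's direction

# Two identical trucks head-on at equal speeds: net momentum is zero,
# so the wreckage halts, exactly as described above.
print(inelastic_final_velocity(truck_m, truck_v, truck_m, -truck_v))  # 0.0
```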

Neo-Aristotelian analyses of powers give us nothing that is in keeping with physics. What these explanations demand is something they imagine happening behind the veil of what science has already explained. There are just dispositions, and what is needed is a more critical analysis of what is entailed across each instance of cause and effect. Power ontologies beg the question, in any case, because they require dispositions to make sense of powers. That is because powers are just a cursory analysis of cause-effect relationships, a way of paraphrasing that is overly simplistic and, ultimately, not analytical enough. Power ontologies, along with talk of dynamism, which properly belongs to Nietzsche not Aristotle, severely undermine the Neo-Aristotelian project. Nietzsche’s diagnosis of causation makes this clear:

Cause and effect: such a duality probably never exists; in truth we are confronted by a continuum out of which we isolate a couple of pieces, just as we perceive motion only as isolated points and then infer it without ever actually seeing it. The suddenness with which many effects stand out misleads us; actually, it is sudden only for us. In this moment of suddenness there is an infinite number of processes that elude us. An intellect that could see cause and effect as a continuum and a flux and not, as we do, in terms of an arbitrary division and dismemberment, would repudiate the concept of cause and effect and deny all conditionality.

Nietzsche, Friedrich W, and Walter Kaufmann. The Gay Science: With a Prelude in Rhymes and an Appendix of Songs. New York: Vintage Books, 1974. 173. Print.

Nietzsche describes a continuum and a flux, in other words, a dynamism thoroughly unlike what can be attributed to Aristotle’s theory of causation. So the fact that Neo-Aristotelians even speak of a dynamism feels like a sort of plagiarism, since they are associating the idea of a dynamism with a thinker who said nothing to that effect. Nietzsche is critical of Aristotle’s causal-teleological marriage and can be seen as explicitly accusing Aristotle, and also Hume, of arbitrarily splicing a dynamic continuum in an ad hoc manner that does not find justification in metaphysical ideas. If Nietzsche had been properly exposed to modern science, he would probably agree that this splicing does not find justification in physical ideas either. The hard sciences confirm a continuum, preferring complex processes from which predictable results follow. There is just no sense in which we can apply any theory of causation to a chemical reaction. What features in these reactions are the properties and dispositions of the elements involved; how those elements are constituted explains why we get one reaction or another. Any talk of dynamisms is properly Nietzschean in spirit and, as should be clear in his words, there is no invocation of powers.

Suffice it to say that a deeper analysis of dispositions also explains away the mimicker problem. Styrofoam plates simply do not break in the way glass plates do, and their underlying composition explains why that is. Ultimately, Neo-Aristotelians are not in a good position to get to the bottom of what we call cause and effect. Aside from the difficulties Backmann sheds light on, the notion of powers is incoherent and lacking in explanatory power, especially at levels requiring deeper analysis. Predictably, I can see Neo-Aristotelians invoking an infinite regress of sorts. In other words, is it simply the composition of the glass interacting with the composition of a hardwood floor that results in the glass shattering, or is there more to the story? To that I would respond that events like these happen within a causally closed space-time system. It is then that we will be asked: who or what decided that a glass cup should break on impact when landing on a hardwood floor? Well, who or what decided that a compound fracture of the tibia is expected given that it receives a strong enough blow from an equally dense or denser object? The Neo-Aristotelian will keep pushing the question back, demanding deeper levels of analysis, effectively moving the goalposts. What will remain is that there is no intelligence that decided on these things, i.e., there is no teleological explanation involved in these cases, because then they would have to account for undesired ends like broken bones.

In the end, I think that the deepest level of analysis will involve a stochastic process in which degrees of probability encompass possible outcomes. Not every blow leads to a broken tibia. Dropping a glass cup on just any surface is not enough to crack or shatter it. There are cases in which angular momentum imparted by a human foot can change a falling glass cup’s trajectory just enough to ensure that it does not break upon hitting the ground. I have met people quite adept at breaking these kinds of falls with a simple extension of their foot. As such, probabilities will change given the circumstances on a case by case basis. This element of chance at the deepest level of analysis coheres perfectly with the universe we find ourselves in because even the fact that we are beings made of matter, as opposed to beings made of anti-matter, is due to chance. Apparently, God has always rolled dice. On this, I will let Lawrence Krauss have the last word:

Because antiparticles otherwise have the same properties as particles, a world made of antimatter would behave the same way as a world of matter, with antilovers sitting in anticars making love under an anti-Moon.  It is merely an accident of our circumstances, due, we think, to rather more profound factors…that we live in a universe that is made up of matter and not antimatter or one with equal amounts of both.

Krauss, Lawrence. A Universe From Nothing: Why There Is Something Rather Than Nothing. 1st ed. New York, NY: Free Press, 2012. 61. Print.

Problems With “Neo-Aristotelian Perspectives On Contemporary Science: Dodging the Fundamentalist Threat”

By R.N. Carmona

Before starting my discussion of the first chapter of Neo-Aristotelian Perspectives On Contemporary Science, some prefatory remarks are in order. In the past, I might have committed to reading an entire book for purposes of writing a chapter by chapter review. With other projects in my periphery, I cannot commit to writing an exhaustive review of this book. That remains undecided for now. What I will say is that a sample might be enough to confirm my suspicions that the Neo-Aristotelian system is rife with problems or, even worse, is a failed system of metaphysics. I am skeptical of the system because it appears to have been recruited to bolster patently religious arguments, in particular those of modern Thomists looking to usher in yet another age of apologetics disguised as philosophy. I maintain that apologetics still needs to be thoroughly demarcated from philosophy of religion; moreover, philosophy of religion should be more than one iteration after another of predominantly Christian literature. With respect to apologetics, I am in agreement with Kai Nielsen, who stated:

It is a waste of time to rehearse arguments about the proofs or evidences for God or immortality. There are no grounds — or at least no such grounds — for belief in God or belief that God exists and/or that we are immortal. Hume and Kant (perhaps with a little rational reconstruction from philosophers like J.L. Mackie and Wallace Matson) pretty much settled that. Such matters have been thoroughly thrashed out and there is no point of raking over the dead coals. Philosophers who return to them are being thoroughly retrograde.

Nielsen, Kai. Naturalism and Religion. Amherst, N.Y.: Prometheus, 2001. 399-400. Print.

The issue is that sometimes one’s hand is forced because the number of people qualified to rake dead coals is far smaller than the number of people rehashing these arguments. Furthermore, the history of Christianity, aside from exposing a violent tendency to impose the Gospel by force, also exposes a tendency to prey on individuals who are not qualified to address philosophical and theological arguments. Recently, this was made egregiously obvious by Catholic writer Pat Flynn:

So what we as religious advocates must be ready for is to offer the rational, logical basis—the metaphysical realism, and the reality of God—that so many of these frustrated, young people are searching for who are patently fed up with the absurd direction the secular world seems to be going. They’re looking for solid ground. And we’ve got it.

Flynn, Pat. “A Hole in The Intellectual Dark Web”. World On Fire Blog. 26 Jun 2019. Web.

Unfortunately, against all sound advice and blood pressure readings, people like myself must rake dead coals or risk allowing Christians to masquerade as the apex predators in this intellectual jungle. I therefore have to say to the Pat Flynns of the world, no you don’t got it. More importantly, let young people lead their lives free of the draconian prohibitions so often imposed on people by religions like yours. If you care to offer the rational, logical basis for your beliefs, then perhaps you should not be approaching young people who likely have not had an adequate exposure to the scholarship necessary to understand apologetics. This is not to speak highly of the apologist, who typically distorts facts and evidence to fit his predilections, making it necessary to acquire sufficient knowledge of various fields of inquiry so that one is more capable of identifying distortions or omission of evidence and thus, refuting his arguments. If rational, logical discourse were his aim, then he would approach people capable of handling his arguments and contentions. That is when it becomes abundantly clear that the aim is to target people who are more susceptible to his schemes by virtue of lacking exposure to the pertinent scholarship and who may already be gullible due to existing sympathy for religious belief, like Flynn himself, a self-proclaimed re-converted Catholic.

Lanao and Teh’s Anti-Fundamentalist Argument and Problems Within The Neo-Aristotelian System

With these prefatory remarks out of the way, I can now turn to Xavi Lanao and Nicholas J. Teh’s “Dodging The Fundamentalist Threat.” Though I can admire how divorced Lanao and Teh’s argument is from whatever theological views they might subscribe to, it should be obvious to anyone, especially the Christian Thomist, that their argument is at variance with Theism. Lanao and Teh write: “The success of science (especially fundamental physics) at providing a unifying explanation for phenomena in disparate domains is good evidence for fundamentalism” (16). They then add: “The goal of this essay is to recommend a particular set of resources to Neo- Aristotelians for resisting Fundamentalist Unification and thus for resisting fundamentalism” (Ibid.). In defining Christian Theism, Timothy Chappell, citing Paul Veyne, offers the following:

“The originality of Christianity lies… in the gigantic nature of its god, the creator of both heaven and earth: it is a gigantism that is alien to the pagan gods and is inherited from the god of the Bible. This biblical god was so huge that, despite his anthropomorphism (humankind was created in his image), it was possible for him to become a metaphysical god: even while retaining his human, passionate and protective character, the gigantic scale of the Judaic god allowed him eventually to take on the role of the founder and creator of the cosmic order.” 

Chappell, Timothy. “Theism, History and Experience”. Philosophy Now. 2013. Web.

Thomists appear more interested in proving that Neo-Aristotelianism is a sound approach to metaphysics and the philosophy of science than in ensuring that the system is not at odds with Theism. The notion that God is the founder and creator of the cosmic order is uncontroversial among Christians and Theists more generally. Inherent in this notion is that God maintains the cosmic order and created a universe that bears his fingerprints; as such, physical laws are capable of unification because the universe exhibits God’s perfection. The universe is therefore, at least at its start, perfectly symmetric, already containing within it intelligible forces, including finely tuned parameters that result in human beings, creatures made in God’s image. Therefore, in the main, Christians who accept Lanao and Teh’s anti-fundamentalism have, inadvertently or deliberately, done away with a standard Theistic view.

So already one finds that Neo-Aristotelianism, at least from the perspective of the Theist, is not systematic in that the would-be system is internally inconsistent. Specifically, when a system imposes cognitive dissonance of this sort, it is usually a good indication that some assumption within the system needs to be radically amended or entirely abandoned. In any case, there are of course specifics that need to be addressed because I am not entirely sure Lanao and Teh fully understand Nancy Cartwright’s argument. I think Cartwright is saying quite a bit more and that her reasoning is mostly correct, even if her conclusion is off the mark.

While I strongly disagree with the Theistic belief that God essentially created a perfect universe, I do maintain that Big Bang cosmology imposes on us the early symmetry of the universe via the unification of the four fundamental forces. Cartwright is therefore correct in her observation that science gives us a dappled portrait, a patchwork stemming from domains operating very much independently of one another; as Lanao and Teh observe: “point particle mechanics and fluid dynamics are physical theories that apply to relatively disjoint sets of classical phenomena” (18). The problem is that I do not think Lanao and Teh understand why this is the case, or at least, they do not make clear that they know why we are left with this dappled picture. I will therefore attempt to argue in favor of Fundamentalism without begging the question, although, like Cartwright, I am committed to a position that more accurately describes hers: Non-Fundamentalism. It may be that the gradual freezing of the universe, over the course of about 14 billion years, leaves us entirely incapable of reconstructing the early symmetry of the universe; I will elaborate on this later, but this makes for a different claim altogether, and one that I take Cartwright to be making, namely that Fundamentalists are not necessarily wrong to think that fundamental unification (FU) is possible, but given the state of our present universe, it cannot be obtained. Cartwright provides us with a roadmap of what it would take to arrive at FU, thereby satisfying Fundamentalism, but the blanks need to be filled in, so that we get from the shattered glass that is our current universe to the perfectly symmetric mirror it once was.

Lanao and Teh claim that Fundamentalism usually results from the following reasoning:

We also have good reason to believe that everything in the physical world is made up of these same basic kinds of particles. So, from the fact that everything is made up of the same basic particles and that we have reliable knowledge of the behavior of these particles under some experimental conditions, it is plausible to infer that the mathematical laws governing these basic kinds of particles within the restricted experimental settings also govern the particles everywhere else, thereby governing everything everywhere. (Ibid.)

They go on to explain that Sklar holds that biology and chemistry do not characterize things as they really are. This is what they mean when they say Fundamentalists typically beg the question, in that they take Fundamentalism as a given. However, given Lanao and Teh’s construction of Cartwright’s argument, they can also be accused of fallacious reasoning, namely arguing from ignorance. They formulate Cartwright’s Anti-Fundamentalist Argument as follows:

(F1) Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.

(AF2) Classical mechanics has a limited set of principled models, so it only applies to a limited number of sub-domains.

(AF3) The limited sub-domains of AF2 do not exhaust the entire classical domain.

(AF4) From (F1), (AF2), and (AF3), the domain of classical mechanics is not universal, but dappled. (25-26)

On AF2, how can we expect classical mechanics to acquire more principled models than it presently has? How do we know that, if given enough time, scientists working on classical mechanics will not have come up with a sufficient number of principled models to satisfy even the anti-fundamentalist? That results in quite the conundrum for the anti-fundamentalist. Can the anti-fundamentalist provide the fundamentalist with a satisfactory number of principled models that exhaust an entire domain? This is to ask whether anyone can know how many principled models are necessary to contradict AF3. On any reasonable account, science has not had sufficient time to come up with enough principled models in all of its domains and thus, this argument cannot be used to bolster the case for anti-fundamentalism.

While Lanao and Teh are dismissive of Cartwright’s particularism, that particularism underwrites the correct degree of tentativeness she exhibits. Lanao and Teh, eager to disprove fundamentalism, are not as tentative, but given the very limited amount of time scientists have had to build principled models, we cannot expect them to have come up with enough models to exhaust the classical or any other scientific domain. Cartwright’s tentativeness is best exemplified in the following:

And what kinds of interpretative models do we have? In answering this, I urge, we must adopt the scientific attitude: we must look to see what kinds of models our theories have and how they function, particularly how they function when our theories are most successful and we have most reason to believe in them. In this book I look at a number of cases which are exemplary of what I see when I study this question. It is primarily on the basis of studies like these that I conclude that even our best theories are severely limited in their scope.

Cartwright, Nancy. The Dappled World: A Study of The Boundaries of Science. Cambridge: Cambridge University Press, 1999. 9. Print.

The fact that our best theories are limited in their scope reduces to the fact that our fragmented, present universe is too complex to generalize via one law per domain or one law that encompasses all domains. For purposes of adequately capturing what I am attempting to say, it is worth revisiting what Cartwright says about a $1,000 bill falling in St. Stephen’s Square:

Mechanics provides no model for this situation. We have only a partial model, which describes the 1000 dollar bill as an unsupported object in the vicinity of the earth, and thereby introduces the force exerted on it due to gravity. Is that the total force? The fundamentalist will say no: there is in principle (in God’s completed theory?) a model in mechanics for the action of the wind, albeit probably a very complicated one that we may never succeed in constructing. This belief is essential for the fundamentalist. If there is no model for the 1000 dollar bill in mechanics, then what happens to the note is not determined by its laws. Some falling objects, indeed a very great number, will be outside the domain of mechanics, or only partially affected by it. But what justifies this fundamentalist belief? The successes of mechanics in situations that it can model accurately do not support it, no matter how precise or surprising they are. They show only that the theory is true in its domain, not that its domain is universal. The alternative to fundamentalism that I want to propose supposes just that: mechanics is true, literally true we may grant, for all those motions whose causes can be adequately represented by the familiar models that get assigned force functions in mechanics. For these motions, mechanics is a powerful and precise tool for prediction. But for other motions, it is a tool of limited serviceability.

Cartwright, Nancy. “Fundamentalism vs. the Patchwork of Laws.” Proceedings of the Aristotelian Society, vol. 94, 1994, pp. 279–292. JSTOR, http://www.jstor.org/stable/4545199.

Notice how even Cartwright alludes to the Theistic notion of FU being attributable to a supremely intelligent creator who people call God. In any case, what she is saying here does not speak to the notion that only the opposite of Fundamentalism can be the case. Even philosophers slip into thinking in binaries, but we are not limited to Fundamentalism or Anti-Fundamentalism; Lanao and Teh admit that much. There can be a number of Non-Fundamentalist positions that prove more convincing. In the early universe, the medium of water, and therefore, motions in water, were not available. Because of this, there was no real way to derive physical laws within that medium. Moreover, complex organisms like jellyfish did not exist then either and so, the dynamics of their movements were not known and could not feature in any data concerning organisms moving about in water. This is where I think Cartwright, and Lanao and Teh taking her lead, go astray.

Cartwright, for example, strangely calls for a scientific law of wind. She states: “When we have a good-fitting molecular model for the wind, and we have in our theory (either by composition from old principles or by the admission of new principles) systematic rules that assign force functions to the models, and the force functions assigned predict exactly the right motions, then we will have good scientific reason to maintain that the wind operates via a force” (Ibid). Wind, unlike inertia or gravity, is an inter-body phenomenon: heat from the Sun is distributed unevenly across the Earth’s surface. Warmer air near the equator rises and moves toward the poles while cooler air sinks and flows back toward the equator. Wind moves from areas of high pressure to areas of low pressure, and the boundary between these areas is called a front. This is why we cannot have a law of wind: aside from the complex systems on Earth, this law would have to apply to the alien systems on gas giants like Jupiter and Saturn. This point is best exemplified by the fact that scientists cannot even begin to comprehend why Neptune’s Dark Spot did a complete about-face. A law of wind would have to apply universally, not just on Earth, and would thus have to explain the behavior of wind on other planets. That is an impossible ask because the composition of other planets and their stars would make for different conditions that are best analyzed in complex models, accounting for as much data as possible, rather than a law attempting to generalize what wind should do assuming simple conditions.
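A toy calculation brings out the condition-dependence; the pressure-gradient relation below is standard meteorology, but the numbers are rough values I am assuming for an ordinary mid-latitude front:

```python
# Pressure-gradient acceleration: a = -(1/rho) * (dp/dx).
# rho, dp, and dx below are rough illustrative values, not measurements.
rho = 1.2          # near-surface air density on Earth, kg/m^3
dp = -400.0        # pressure drop across a front, Pa (about 4 hPa)
dx = 100_000.0     # distance over which the drop occurs, m (100 km)

a = -(1 / rho) * (dp / dx)
print(f"{a:.5f} m/s^2")  # ~0.00333 m/s^2: small, but it acts continuously

# On Jupiter or Neptune, rho, dp, and dx differ wildly, so the same
# relation yields very different winds; there is dynamics here, but no
# free-standing "law of wind."
```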

Despite Cartwright’s lofty demand, her actual argument does not preclude Fundamentalism, whatever Lanao and Teh might have thought. Cartwright introduces a view that I think is in keeping with the present universe: “Metaphysical nomological pluralism is the doctrine that nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws” (Ibid.). I think it is entirely possible to get from metaphysical nomological pluralism (MNP) to FU if one fills in the blanks by way of symmetry breaking. Prior to seeing how symmetry breaking bridges the gap between MNP and FU, it is necessary to outline an argument from Cartwright’s MNP to FU:

F1 Theories only apply to a domain insofar as there is a principled way of generating a set of models that are jointly able to describe all the phenomena in that domain.

MNP1 Nature is governed in different domains by different systems of laws not necessarily related to each other in any systematic or uniform way: by a patchwork of laws.

MNP2 It is possible that the initial properties in the universe allow these laws to be true together.

MNP3 From F1, MNP1, and MNP2, the emergence of different systems of laws from the initial properties in the universe implies that FU is probable.

Lanao and Teh agree that F1 is a shared premise between Fundamentalists and Anti-Fundamentalists. As a Non-Fundamentalist, I see it as straightforwardly obvious as well. With respect to our present laws, I think that FU may be out of our reach. As has been famously repeated, humans did not evolve to do quantum mechanics, let alone piece together a shattered mirror. This is why I’m a Non– as opposed to Anti-Fundamentalist; the subtle distinction is that I am neither opposed to FU being the case nor do I think it is false, but rather that it is extremely difficult to come by. Michio Kaku describes the universe as follows: “Think of the way a beautiful mirror shatters into a thousand pieces. The original mirror possessed great symmetry. You can rotate a mirror at any angle and it still reflects light in the same way. But after it is shattered, the original symmetry is broken. Determining precisely how the symmetry is broken determines how the mirror shatters” (Kaku, Michio. Parallel Worlds: A Journey Through Creation, Higher Dimensions, and The Future of The Cosmos. New York: Doubleday, 2005. 97. Print.).

If Kaku’s thinking is correct, then there is no way to postulate that God had St. Peter arrange the initial properties of the universe so that all of God’s desired laws are true simultaneously without realizing that FU is not only probable but true, however unobtainable it may be. The shards would have to belong to the mirror. Kaku explains that Grand Unified Theory (GUT) symmetry breaks down to SU(3) x SU(2) x U(1), which yields the 19 free parameters required to describe our present universe. There are other ways for the mirror to have broken, other ways to break GUT symmetry. This implies that other universes would have residual symmetry different from that of our universe and, therefore, would have entirely different systems of laws. These universes, at minimum, would have different values for these free parameters, like a weaker nuclear force that would prevent star formation and make the emergence of life impossible. In other scenarios, the symmetry group can yield an entirely different Standard Model in which protons quickly decay into anti-electrons, which would also prevent life as we know it (Ibid., 100).
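For readers who want the chain in symbols, the breaking Kaku describes can be sketched as follows; the generic group G_GUT stands in for whatever unified group one favors (Kaku’s discussion does not hinge on a particular choice), and the second arrow is the familiar electroweak breaking:

```latex
% Requires amsmath. A schematic, not a derivation: each arrow marks a
% loss of symmetry, and each breaking pattern fixes different residual
% laws and parameter values.
\[
  G_{\mathrm{GUT}}
  \;\xrightarrow{\ \text{GUT breaking}\ }\;
  SU(3)_C \times SU(2)_L \times U(1)_Y
  \;\xrightarrow{\ \text{electroweak breaking}\ }\;
  SU(3)_C \times U(1)_{\mathrm{em}}
\]
```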

Modern scientists are then tasked with working backwards. The alternative to that is to undertake the gargantuan task, as Cartwright puts it, of deriving the initial properties, which would no doubt be tantamount to a Theory of Everything from which all of the systems of laws extend, i.e., hypothesizing that initial conditions q, r, and s yield the different systems of laws we know. This honors the concretism Lanao and Teh call for in scientific models while also giving abstractionism its due. As Paul Davies offered, the laws of physics may be frozen accidents. In other words, the effective laws of physics, which is to say the laws of physics we observe, might differ from the fundamental laws of physics, which would be, so to speak, the original state of the laws of physics. In a chaotic early universe, physical constants may not have existed. Hawking also spoke of physical laws that tell us how the universe will evolve if we know its state at some point in time. He added that God could have chosen an “initial configuration” or fundamental laws for reasons we cannot comprehend. He asks, however, “if he had started it off in such an incomprehensible way, why did he choose to let it evolve according to laws that we could understand?” (Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988. 127. Print.) He then goes on to discuss possible reasons for this, e.g., chaotic boundary conditions and anthropic principles.

Implicit in Hawking’s reasoning is that we can figure out what physical laws will result in our universe in its present state. The obvious drawback is that the observable universe is ~13.8 billion years old and 93 billion light-years in diameter. The universe may be much larger, making the task of deriving this initial configuration monumentally difficult. This would require a greater deal of abstraction than Lanao and Teh, and apparently Neo-Aristotelians, desire, but it is the only way to discover how past iterations of physical laws or earlier systems of laws led to our present laws of physics. The issue with modern science is that it does not often concern itself with states in the distant past and so, a lot of equations and models deal in the present, and even the future, but not enough of them confront the past. Cosmologists, for purposes of understanding star formation, the formation of solar systems, and the formation of large galaxies, have to use computer models to test their theories against the past, since there is no way to observe the distant past directly. In this way, I think technology will prove useful in arriving at earlier conditions until we arrive at the mirror before it shattered. The following model, detailing how an early collision explains the shape of our galaxy, is a fine example of what computer models can do to help illuminate the distant past:

[Figure: computer model of an early galactic collision shaping our galaxy. Credit: Quanta Magazine]
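For a flavor of what testing a theory against the past looks like in code, here is a minimal sketch of the kind of integrator such simulations rest on; everything in it (units, masses, the two-body setup) is an illustrative assumption of mine, not the galactic model above:

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary simulation units

def accelerations(pos, mass):
    """Newtonian gravitational acceleration on each body from all others."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog: time-reversible, so it can retrodict."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Two equal masses on a circular mutual orbit (v = 0.5 at separation 2).
pos = np.array([[1.0, 0.0], [-1.0, 0.0]])
vel = np.array([[0.0, 0.5], [0.0, -0.5]])
mass = np.array([1.0, 1.0])

pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=1000)
# Negating velocities and integrating again "runs the movie backwards,"
# recovering the earlier state to within numerical error.
pos, vel = leapfrog(pos, -vel, mass, dt=0.01, steps=1000)
print(np.round(pos, 3))  # ~ [[ 1.  0.] [-1.  0.]]
```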

Further Issues With The Neo-Aristotelian System

A recent rebuttal to Alexander Pruss’ Grim Reaper Paradox can be generalized to refute Aristotelianism overall. The blogger over at Boxing Pythagoras states:

Though Alexander Pruss discusses this Grim Reaper Paradox in a few of his other blog posts, I have not seen him discuss any other assumptions which might underlie the problem. He seems to have focused upon these as being the prime constituents. However, it occurs to me that the problem includes another assumption, which is a bit more subtle. The Grim Reaper Paradox, as formulated, seems to presume the Tensed Theory of Time. I have discussed, elsewhere, the reasons that I believe the Tensed Theory of Time does not hold, so I’ll simply focus here on how Tenseless Time resolves the Grim Reaper Paradox.

To see the difference between old and new tenseless theories, it is necessary to first contrast an old tenseless theory against a tensed theory which holds that the properties of pastness, presentness, and futurity are ascribed to events by tensed sentences. The debate regarding which theory is true centered around whether tensed sentences could be translated by tenseless sentences that instead ascribe the relations of earlier than, later than, or simultaneous with. For example, “the sun will soon rise” seems to entail the sun’s rising in the future, as an event that will become present, whereas “the sun is rising now” seems to entail the event being present and “the sun has risen” seems to entail its having receded into the past. If these sentences are true, the first sentence ascribes futurity whilst the second ascribes presentness and the last ascribes pastness. Even if true, however, that is not evidence to suggest that events have such properties. Tensed sentences may have tenseless counterparts having the same meaning.

This is where Quine’s notion of de-tensing natural language comes in. Rather than saying “the sun is rising” as uttered on some date, we would instead say that “the sun is rising” on that date. The present tense in the first sentence does not ascribe presentness to the sun’s rising, but instead refers to the date the sentence is spoken. In like manner, if “the sun has risen” as uttered on some date is translated into “the sun has risen” on a given date, then the former sentence does not ascribe pastness to the sun’s rising but only refers to the sun’s rising as having occurred earlier than the date when the sentence is spoken. If these translations are true, temporal becoming is unreal and reality is comprised of the relations earlier than, later than, and simultaneous with. Time then consists of these relations rather than the properties of pastness, presentness, and futurity (Oaklander, Nathan. Adrian Bardon ed. “A-, B- and R-Theories of Time: A Debate”. The Future of the Philosophy of Time. New York: Routledge, 2012. 23. Print.).
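As a toy illustration of the translation scheme (my own, not Quine’s or Oaklander’s formalism), consider:

```python
from datetime import date

def detense(tensed: str, uttered_on: date) -> str:
    """Strip the tensed indexical and index the sentence to its utterance
    date, so it ascribes a date rather than the property of presentness."""
    core = tensed.replace(" now", "")
    return f"{core} on {uttered_on.isoformat()}"

print(detense("the sun is rising now", date(1874, 6, 1)))
# -> "the sun is rising on 1874-06-01": tenselessly true or false,
# no matter when the sentence is evaluated.
```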

The writer at Boxing Pythagoras continues:

On Tensed Time, the future is not yet actual, and actions in the present are what give shape and form to the reality of the future. As such, the actions of each individual future Grim Reaper, in our paradox, can be contingent upon the actions of the Reapers which precede them. However, this is not the case on Tenseless Time. If we look at the problem from the notion of Tenseless Time, then it is not possible that a future Reaper’s action is only potential and contingent upon Fred’s state at the moment of activation. Whatever action is performed by any individual Reaper is already actual and cannot be altered by the previous moments of time. At 8:00 am, before any Reapers activate, Fred’s state at any given time between 8:00 am and 9:00 am is set. It is not dependent upon some potential, but not yet actual, future action as no such thing can exist.

I think this rebuttal threatens the entire Aristotelian enterprise. Aristotelians will have to deny time while maintaining that changes happen in order to escape the fact that de-tensed theories of time, which are more than likely the correct way of thinking about time, impose a principle: any change at a later point in time is not dependent on a previous state. That is ignoring that God, being timeless, could not have created the universe at some time prior to T = 0, the first instance of time on the universal clock. This is to say nothing of backward causation, which is entirely plausible given quantum mechanics. Causation calls for a deeper analysis, which neo-Humeans pursue despite not being entirely correct. The notion of dispositions is crucial. It is overly simplistic to say the hot oil caused the burns on my hand or the knife caused the cut on my hand. The deeper analysis in each case is that the boiling point of cooking oil, almost two times that of water, has something to do with why the burn feels distinct from a knife cutting into my hand. Likewise, the dispositions of the blade have a different effect on the skin than oil does. Causal relationships are simplistic and, as Nietzsche suggested, do not account for the continuum within the universe and the flux that permeates it. Especially in light of quantum mechanics, we are admittedly ignorant about most of the intricacies within so-called causal relationships. Neo-Humeans are right to think that dispositions are important. This will disabuse us of appealing to teleology in the following manner:

‘The function of X is Z’ [e.g., the function of oxygen in the blood is… the function of the human heart is… etc.] means

(a) X is there because it does Z,
(b) Z is a consequence (or result) of X’s being there.

Larry Wright, ‘Function’, Philosophical Review 82(2) (April 1973):139–68, see 161.

It is more accurate to say that a disposition of X is instantiated in Z rather than that X exists for purposes of Z because in real world examples, a given X can give rise to A, B, C, and so on. This is to say that one so-called cause can have different effects. A knife can slice, puncture, saw, etc. Hot oil can burn human skin, melt ice but not mix with it, combust when near other mediums or when left to increase to temperatures beyond its boiling point, etc. One would have to ask why cooking oil does not combust when a cube of ice is thrown into the pan; what about the canola oil, for a more specific example, causes it to auto-ignite at 435 degrees Fahrenheit and why does this not happen when water is heated beyond its boiling point?

As it turns out, then, Neo-Aristotelians are not as committed to concretism as Lanao and Teh would hope. They are striving for generalizations despite refusing to investigate the details of how models are employed in normal science, as was made obvious by Lanao and Teh’s dismissal of Cartwright’s particularism and, further, by their argument against Fundamentalism, which does not flow neatly from Cartwright’s argument. For science to arrive at anything concrete, abstraction needs to be allowed, specifically in cases venturing further and further into the past. Furthermore, a more detailed analysis of changes needs to be incorporated into our data. Briefly, when thinking of the $1,000 bill descending into St. Stephen’s Square, it is a simple fact that we must ask whether there is precipitation or not and, if so, how much; we are also required to ask whether bird droppings may have altered its trajectory on the way down; what effect do smog or dust particles have on the $1,000 bill’s trajectory; and, as Cartwright asked, what about wind gusts? What is concrete is consistent with the logical atomist’s view that propositions speak precisely to simple particulars or many of them bearing some relation to one another.

Ultimately, I think that Lanao and Teh fail to establish a Neo-Aristotelian approach to principled scientific models. They also fail to show that FU, and therefore Fundamentalism, is false. What is also clear is that they did not adequately engage Cartwright’s argument, which is thoroughly Non-Fundamentalist, even if that conclusion escaped her. This is why I hold that Cartwright’s conclusions are off the mark: she is demanding that generalized laws be derived from extremely complex conditions. It is not incumbent on dappled laws within a given domain of science to be unified in order for FU to ultimately be the case. It could be that due to symmetry breaking, one domain appears distinct from another and, because of our failure, at least until now, to realize how the two cohere, unifying principles between the two domains currently elude us. Lanao and Teh’s argument against FU therefore appeals to the ignorance of science, not unlike apologetic arguments of much lesser quality. The ignorance of today’s science does not suggest that current problems will continue to confront us while their solutions perpetually elude us. What is needed is time. Like Lanao and Teh, I agree that Cartwright has a lot of great ideas concerning principled scientific models, but I maintain that her ideas lend support to FU. A unified metaphysical account of reality would likely end up in a more dappled state than modern science finds itself in, and despite Lanao and Teh’s attempts, a hypothetical account of that sort would rely too heavily on science to be considered purely metaphysical. My hope is that my argument, one that employs symmetry breaking to bolster the probability of FU being the case, is more provocative, if not persuasive.

Print is Now Live on Amazon.com!

The book is now available for purchase here! Here is the Table of Contents to whet the appetite:

Introduction

Chapter 1: Philosophical Approaches to Atheism

Chapter 2: Refuting the Kalam Cosmological Argument

Chapter 3: The Moral Argument Refuted

Chapter 4: Refuting Plantinga’s Victorious Ontological Argument

Chapter 5: On Qualia and A Refutation of the Argument from Consciousness

Chapter 6: Refuting the Fine-Tuning Argument

Chapter 7: The Failures of Aquinas’ Five Ways

Chapter 8: Transcendental Arguments and Presuppositionalism Refuted

Chapter 9: The Argument from Assailability

Chapter 10: The Arguments from History and The Multiplicity of Religions

Chapter 11: The Argument from Cosmology

Chapter 12: On the Leibnizian Cosmological Argument

Conclusion

I hope you guys enjoy!

The Kalam Cosmological Argument and The Problem of Induction

By R.N. Carmona

Though the Kalam Cosmological Argument (KCA) is a deductive argument, it imports inductive reasoning. Since there is a distinction between argument and reasoning, inductive reasoning can feature in at least some deductive arguments.1 To see where this occurs, it is necessary to restate P1 of the KCA: Whatever begins to exist has a cause.2

Before demonstrating how P1 imports inductive reasoning, a distinction must be made between the common application of the Problem of Induction and what it actually entails. The common application of the problem focuses on the notion that, given the uniformity of nature, the future will be like the past. For Hume, the uniformity of nature is used to justify induction. It is, however, important to note that Hume never explicitly mentioned induction. The Problem of Induction is derived from Hume’s discussion on causation.3 This notion is at the center of the common application of the Problem of Induction. It does, however, have broader application.

The problem should focus on beliefs concerning matters of fact that extend beyond experience and observation. These beliefs could be about past, unobserved present, and future events, objects, or phenomena.4 This is key to seeing how the first premise of the KCA imports inductive reasoning.

Disregarding discussions on the nature of time, i.e. whether time corresponds with A-theory or B-theory, the past and the future, intuitively speaking, have something in common. In science, predictions are often made about the future and the past; the latter is increasingly being referred to as a retrodiction.5 The reason predictions are necessary about the past and the future is because they are out of the reach of our experience and observational apparatus. Since we cannot directly observe or experience the past and the future, inductive inferences are necessary to draw conclusions about both.

When one argues that, whatever begins to exist has a cause, one is employing inductive reasoning. During one’s lifetime, one has noticed that every effect is preceded by a cause. However, in order to argue that this has always been the case, per Hume, one has to assume the uniformity of nature. It follows that one is making an inductive inference about the past.

From an epistemic standpoint, the Problem of Induction is also the problem of transfer of belief in the evidence. Though it is true that people believe things that lack evidence, conclusions are often supported by evidence. So the problem occurs in the bond between a conclusion and the evidence said to support it–specifically in the transfer from belief in the evidence to belief in a given conclusion. This transfer of belief is what Hume sought to address. He didn’t want to call into question the belief itself, but rather, the grounds of said belief.6

Much of this shouldn’t be news for the Christian or theist. When addressing proponents of scientism or when attempting to downplay the superiority of scientific over religious reasoning, they are both fond of invoking the Problem of Induction.7 Unfortunately, they fail to realize that it’s imported into an argument they’re also quite fond of. If it’s an actual problem, then the implications of the problem apply to the KCA. In other words, if the Problem of Induction cites a lack of justification for beliefs about the past that are out of the range of experience and observation, the Christian and theist aren’t justified in saying that whatever begins to exist has a cause. Since P1 lacks justification, there’s no way to establish its truth. Given P2’s connection to P1, the truth of the premises taken together cannot be established. It follows that we cannot establish the soundness of the argument as a whole.

In the past, I’ve been pointedly critical of the KCA. I focused, for instance, on the concept of causation.8 I also located a common fallacy within the argument.9 One would be hard pressed to establish the soundness of the argument even if P1 didn’t feature inductive reasoning, which the Christian and theist admit has problematic implications. With that said, this isn’t so much a rebuttal of the argument, but rather, another spotlight on the double think of theists, Christians in particular.

It seems as though Christians don’t care to correct the inconsistency within their arguments. They can’t invoke the Problem of Induction whenever it suits them only to discard it whenever it doesn’t. If one is intellectually honest, consistency is required.  If they admit that there’s a Problem of Induction and that it is still without sufficient reply, as apologists have argued, then the implications of the problem are also imported into P1 of the KCA. I find that this conclusion is inescapable.

Works Cited

1 Will, Frederick L. “Is There a Problem of Induction?” The Journal of Philosophy, Vol. 39, No. 19, pp. 505-513. 1942. Print.

2 Craig, William L. “In Defense of the Kalam Cosmological Argument”Reasonable Faith.

3 Burns, Samuel R. The Problem of Deduction: Hume’s Problem Expanded. University of Michigan, Ann Arbor. 2009. Print.

4 Fritz Jr., Charles A. “What is Induction?” The Journal of Philosophy, Vol. 57, No. 4, pp. 126-138. 1960. Print.

5 Barrett, Martin, Sober, Elliott. Is Entropy Relevant to the Asymmetry Between Prediction and Retrodiction? (1992); Steinitz, Yuval. Prediction versus Retrodiction in Mill (1994); Begun, David R. Human Evolution: Retrodictions and Predictions (2005); Christopher J. Ellison, John R. Mahoney, James P. Crutchfield. Prediction, Retrodiction, and The Amount of Information Stored in the Present (2009).

6 Oliver, Donald W. “A Re-examination of the Problem of Induction”. The Journal of Philosophy, Vol. 49, No. 25, pp. 769-780. 1952. Print.

7 See “Does scientific knowledge presuppose God? A reply to Carroll, Coyne, Dawkins and Loftus” by Vincent Torley

8 See “Dispositions, Causality, and the Kalam Cosmological Argument”

9 See “A Simple Rebuttal: Kalam Cosmological Argument”

On The Leibnizian Cosmological Argument

By R.N. Carmona

A Christian on the Humans of New York Instagram page brought up the Leibnizian Cosmological Argument (LCA) and named a few of its Christian defenders. As I’m fond of pointing out, in naming only Christian defenders of the argument, it is likely he hasn’t considered actual objections; he’s read a degraded form of the objections through the lens of these Christian authors. This is why I call apologetics pseudo-philosophy.

In actual philosophical discourse, the participants in a given discussion are as charitable to one another as possible, and they try very hard to ensure that nothing is lost in translation. They constantly correct themselves if they misinterpret what their opponent has said, or they attempt to show that their interpretation better captures what their opponent is trying to say. Apologists don’t do this. Apologists straw-man an objection, sometimes in ways that seem sophisticated, in order to make the argument or counterargument easier to address. This is precisely why I advised him to deal with J.L. Mackie, for example: to read him directly and not through the lens of one of his favored authors. He would then find that even Christians reject the LCA.

A quick example of what I mean: in responding to my Argument From Cosmology, my opponent says that the opposite of my conclusion is as follows: the fact that an x can’t be shown to exist in relation to y doesn’t mean that x doesn’t exist*; in other words, that god can’t be shown to exist in relation to the Earth doesn’t mean god doesn’t exist. Yet on Judaism and Christianity, god created the Earth. Modern science tells us that planets have no creator; they form naturally over an extended period of time, and we have real-time data of planet formation in other systems. The history of our planet doesn’t resemble anything described in the Bible. By extension, I argue that god doesn’t exist in relation to the universe, since he didn’t create it. Modern cosmology tells us as much.

His assertion is not enough to refute my argument. In fact, all he’s concluding is the opposite of what he misunderstood as my conclusion. My actual conclusion is this: if it is necessary that x exist in relation to y, then x’s failure to exist in relation to y entails that x does not exist. If god did not create the Earth or the universe, he doesn’t exist in relation to either and, by extension, doesn’t exist; on Christianity, however, it is necessary that he does. This is precisely what his favored LCA says. Through the Principle of Sufficient Reason (PSR), for every being or state of affairs that exists, there is a sufficient reason why it exists. The argument then adds that for every being or state of affairs that exists, there is a necessary reason why it exists. Therefore, god is the necessary and sufficient reason for why these states of affairs exist.
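My conclusion can be regimented in the same fashion (a sketch; the labels and symbols are mine, with R(x,y) abbreviating “x exists in relation to y” and E(x) abbreviating “x exists”):

P1 If it is necessary that x exist in relation to y, then if x does not exist in relation to y, x does not exist. (N(x,y) -> (¬R(x,y) -> ¬E(x)))

P2 On Christianity, it is necessary that god exist in relation to the Earth and the universe. (N(g,y))

P3 God does not exist in relation to the Earth or the universe, since he created neither. (¬R(g,y))

C Therefore, god does not exist. (∴ ¬E(g))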

My Argument From Cosmology best captures a decisive criticism of the LCA without directly engaging it. J.L. Mackie argued that the PSR need not be assumed by reason. Reason only asks for antecedent causes and conditions that explain each being or state of affairs; these are taken as facts until they themselves are explained by something prior to them. Reason requires nothing beyond this.

Thus, going back to my conclusion: since it is necessary, on Christianity, that god exist in relation to the Earth and the universe, his failure to exist in relation to either entails that he does not exist. In order for Christianity to hold true, god would have had to create the Earth and the universe. If we have reason to doubt that he created the Earth, and modern science establishes this conclusively, then we have much reason to doubt that he created the universe. If the Earth can be explained by antecedent causes and conditions, then the universe can as well. My argument offers a number of plausible explanations fielded in modern cosmology while arguing that there cannot be the type of causation Christians would require, i.e., the type of causation that allows an immaterial agent to create material objects. At best, such causation is unknowable and it is probable that there will be no hard evidence for it; hence the Christian must retreat to agnosticism. At worst, such causation is impossible and there cannot be any evidence for it; thus, the Christian must retreat to atheism.

Ultimately, my argument addresses the LCA by implication, i.e., my argument implies a defeater of the LCA. Therefore, I do not have to address it directly. The same can be said of the KCA. Rebuttals to both arguments are implicit in my argument. That fact alone should give apologists pause.

In any case, my point has been made: apologetics is pseudo-philosophy. Apologists prop it up by being uncharitable and, in most cases, purposely misinterpreting a claim or argument made by atheists. Apologists are also modern sophists: it matters not how things might be or probably are; what matters is what they think is the case, what they say is the case, or what they deem possible, as though possibility implied probability. Apologists are also quite fond of straw men, which they use to make their arguments seem superior to those of their opponents. Unlike them, I have defined the PSR correctly and I’ve summarized the LCA charitably. However, I’ve also shown that my Argument From Cosmology considers the LCA, albeit indirectly. My argument implies a decisive blow to the LCA. It is therefore necessary to deal with my argument directly; it isn’t enough to choose a favored argument and deem it superior on the basis of a misinterpretation of my conclusion.

*His assessment is correct assuming that x isn’t necessary in relation to y. The fact that I (x) cannot be explained in relation to an invention (y) doesn’t mean I do not exist. Even the actual inventor of the invention (y) doesn’t have this sort of connection to it; another agent could have been the inventor. On Christianity, however, there are no other gods and hence no other creators. Thus, if god did not create the Earth, there is reason to doubt he created anything else in the universe and, therefore, the universe itself. As stated, for Christianity to be true, it need only be shown that god is necessary in this possible world; the LCA, as originally formulated, wants Anselm’s deity, and therefore, on the LCA, god is necessary in all possible worlds. My argument casts much doubt on this, especially since god is supposed to be the necessary being antecedent to all contingent beings. If this connection fails on a minor front, e.g., god didn’t create all baseballs, that’s fine, for even then he would underlie the reason behind the baseballs’ proximate reason, namely human beings. If it fails on a major front, as I’ve shown in the case of the Earth, and all planets for that matter, a glaring problem arises for Christians: even if they posit a being that willed the laws of physics, what they have is a deity far removed from the deity of the Bible. That would make for a separate discussion altogether.

Arguments For Atheism: The Argument From Cosmology

By R.N. Carmona

Though it is the strongest of the arguments for atheism, this argument is the most philosophically involved. That is to say that, due to its implications, the argument ventures into philosophical territory, much of which is the subject of continued disagreement and a lack of consensus. On the surface, the argument is strong. However, its tacit assumption has to be qualified so that the strength of the argument is augmented. In doing so, some problems will arise. Though these problems aren’t damning to the argument, and though they do not lend credence to antithetical arguments, they must be addressed. In other words, at the very least, solutions must be suggested.

It is time now to turn to the argument. Groundwork will then be laid out to qualify its assumption. Problems will then be presented and addressed. Then it will be suggested that the uncertainty of the solutions to these problems lends no support to antithetical arguments because such arguments carry their own onus. The argument is as follows:

P1 If there is a naturalistic explanation for the origin of the universe, a creative agent does not exist. (P -> Q)

P2 There is a naturalistic explanation for the origin of the universe. (P)

C   Therefore, a creative agent does not exist. (∴ Q)

Prior to qualifying the assumption of the argument, the conclusion will be qualified. The implication here is that if there is a naturalistic explanation for the origin of the universe, a creative agent is not necessary. However, as is argued below, if a creative agent is not necessary, then it follows that a creative agent doesn’t exist.

To see how the conclusion follows, it is perhaps best to shrink the scope. That is, rather than focus on the universe as a whole, the focus should turn to a smaller aspect. Suppose that the argument instead holds that if there is a naturalistic explanation for the formation of planets, a creative agent doesn’t exist with respect to planets. Antithetical arguments would no doubt focus on the formation of Earth and would thus engage in special pleading or question-begging. In other words, such arguments would have no choice but to accept that there is a naturalistic explanation for planet formation, but insist that this explanation is somehow inapplicable to the Earth.

However, if a creative agent isn’t necessary for the formation of the Earth, then there’s no respect in which it can be said to exist in relation to the Earth. Now, in broadening the scope once again, the same argument applies to the universe. If a creative agent isn’t necessary for the origin of the universe, then there’s no respect in which it can be said to exist in relation to the universe. If it isn’t necessary for the creation of the universe, then it doesn’t exist within the universe or transcendentally in respect to the universe. The latter is to say that if it played no role in the origin of the universe, even the assumption that it exists outside of the universe doesn’t lend support to its existence. With respect to this universe, it simply doesn’t exist.
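Making the qualification explicit, the full argument runs as follows (a sketch in the argument’s own notation, with N standing for “a creative agent is necessary for the origin of the universe”):

P1 If there is a naturalistic explanation for the origin of the universe, a creative agent is not necessary. (P -> ¬N)

P2 If a creative agent is not necessary, a creative agent does not exist. (¬N -> Q)

P3 There is a naturalistic explanation for the origin of the universe. (P)

C Therefore, a creative agent does not exist. (∴ Q)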

The assumption of the argument is where much of this discussion will focus. The discussion will center around the question of what constitutes a naturalistic explanation, and thus around the question of what is meant by natural. Keith Augustine offers three definitions. The one he seems to accept is problematic, given that supervenience lends credence to non-natural or even supernatural explanations of the mind. Though non-reductive physicalism states that mental states are contingent on physical states, mental states and physical states aren’t identical. This implies that mental states are non-physical.1

Along these lines, a religionist who is, for example, a Cartesian dualist can argue that the soul is supervenient on the brain. She can argue that mental states are non-physical precisely because mental states are a property of the soul. For such reasons, non-reductive physicalism is to be rejected by one looking to provide a case for strict naturalism. A case for strict naturalism would entail reductive physicalism, or reductionism, which agrees with the thesis of physicalism but adds that complex phenomena can be reduced to physical processes.2 This is to say, for example, that morality reduces to the mind of the moral agent. Reductionism could therefore be seen as the view that a given phenomenon is reducible to another phenomenon; alternatively, in the philosophy of science, reductionism is the view that one theory is reducible to another (e.g., thermodynamics is reducible to statistical mechanics). This thesis runs into at least two problems.

The first of these issues is qualia, “the felt or phenomenal qualities associated with experiences.”3 This is often referred to as the what-it’s-like-ness of an experience. For example, a sharp pain in the foot, the smell of wet dog fur, and the taste of chocolate each have a subjective quality that varies from one person to the next. These can only be accessed via introspection and are thus a marquee example of the so-called privacy of consciousness. Qualia, however, aren’t as pervasive a problem as the second. The second problem will therefore receive much warranted attention.

The second problem for reductionism is the purported existence of abstracta. Abstracta are abstract objects like propositions, letters, and numbers. Of these, arguably the most seriously debated are numbers. The debate between mathematical realists and non-realists should occupy one who is attempting a clear case for strict naturalism. If numbers exist, then at least one non-natural object exists; furthermore, this non-natural object isn’t reducible to anything physical. The existence of numbers would therefore refute reductionism.

Of the four criteria Otávio Bueno offers, two have been at the center of the debate: indispensability and explanation versus description. The indispensability criterion states that mathematics must be more than a useful part of an explanation; it must be indispensable to that explanation.4 Mathematical realists don’t doubt that mathematics meets this criterion. The explanation versus description criterion states that mathematics, aside from describing a given phenomenon, must explain the phenomenon.5 On these two grounds, the nominalist has the most to say.

Mathematics, for example, doesn’t explain Kirkwood gaps. It merely provides a description of the relevant interactions between the gravitational tugs of Jupiter, the Sun, and asteroids in the asteroid belt. Briefly, Kirkwood gaps are regions within the asteroid belt that contain few asteroids; the distribution of asteroids in the belt is therefore non-uniform. The gaps sit at mean-motion resonances with Jupiter (e.g., 3:1, 5:2, and 2:1 ratios of orbital periods). There have been attempts to explain this non-uniformity mathematically (see Vrbik 2014).6 However, as Bueno argues, proper interpretation is required before a mathematical description is relevant to the explanation of the phenomenon.

This failure to explain Kirkwood gaps doesn’t harm the realist case, however. Realists have offered other phenomena that mathematics might explain: the hexagonal cells of beehives; the lifespan of cicadas, which is either 13 or 17 years, both prime numbers; the bridges of Königsberg; the Plateau soap film. Each phenomenon is explained semantically. Despite this, Mark Steiner suggested that when one “remove[s] the physics, we remain with a mathematical explanation—of a mathematical truth!”7

However, Baker [forthcoming] has argued that the proposal is false. And it is false for a very simple reason: there are mathematical explanations of empirical facts where the mathematics involved has no (known) mathematical explanation. Baker has argued convincingly, I think, that the example of the bees is a case in point—i.e., that the proof of the Honeycomb Theorem given by Hales [2001] does not explain the theorem. Therefore, whatever is doing the explanatory work, it isn’t the proof of the theorem.8

Steiner’s proposal has been replaced with program explanations, which make use of dispositions. John Heil provides an example:

Consider the dispositional property of being fragile. This is a property an object—this delicate vase, for instance—might have in virtue of having a particular molecular structure. Having this structure is held to be a lower-level non-dispositional property that grounds the high-level dispositional property of being fragile; the vase is fragile, the vase possesses the dispositional property of being fragile, in virtue of possessing some non-dispositional structural property.9

A disposition is “not causally efficacious” and “ensures the instantiation of a causally efficacious property or entity that is an actual cause of the explanandum.”10 This is precisely what program explanations make use of.

A fragile glass is struck and breaks. Why did it break? First answer: because of its Fragility. Second answer: because of the particular molecular structure of the glass. The property of fragility was efficacious in producing the breaking only if the molecular structural property was efficacious: hence 3(i) [there is a distinct property G such that F is efficacious in the production of e only if G is efficacious in its production]. But the fragility did not help to produce the molecular structure in the way in which the structure, if it was efficacious, helped to produce the breaking. There was no time-lag between the exercise of the efficacy, if it was efficacious, by the disposition and the exercise of the efficacy, if it was efficacious, by the structure. Hence 3(ii) [the F-instance does not help to produce the G-instance in the sense in which the G-instance, if G is efficacious, helps to produce e; they are not sequential causal factors]. Nor did the fragility combine with the structure, in the manner of a coordinate factor, to help in the same sense to produce e. Full information about the structure, the trigger and the relevant laws would enable one to predict e; fragility would not need to be taken into account as a coordinate factor. Hence 3(iii) [the F-instance does not combine with the G-instance, directly or via further effects, to help in the same sense to produce e (nor of course, vice versa): they are not coordinate causal factors].11

Though this is a remarkable case, it doesn’t accomplish the type of explanation the realist is looking for. There are, however, other program explanations, such as the explanation of the radiation emitted by a piece of uranium over a period of time (see Jackson and Pettit, Ibid.). This kind of program explanation is both indispensable and cannot be superseded by a process explanation, i.e., “a detailed account of the actual causes that led to the event to be explained.”12 So given program explanations, it looks as though the mathematical realist has won. Not so fast.

One must ask whether program explanations bear any relation to mathematics. In other words, one must question whether program explanations are mathematical and whether they meet the indispensability criterion. There are two options for the nominalist: invoke Kim’s exclusion principle, which “hinders the acceptance of two causal explanations for a single effect unless an acceptable relation exists between the two purported causes,” or “view mathematics as playing a broadly representational role in scientific explanations.”[13][14] On the exclusion principle, the fragility example can be revisited:

The problem with this example is that there seems to be some avenue to a conceptual reduction of the two properties, “anyone who had access to the [molecular] account would have all the significant information at his disposal which is offered by the fragility explanation.” This, I think, again gives support to the heterogeneous nature of Jackson and Pettit’s examples; if a conceptual reduction of some kind is possible on a particular occasion then explanatory exclusion will effectively remove the program explanation in the same way that metaphysical reduction will remove the higher-level cause.15

If conceptual reduction of the efficacious and programming non-efficacious properties is possible, there is no longer room to assume that the non-efficacious property plays any role in the explanation. More pointedly, Juha Saatsi states the following: “For it seems that mathematical properties cannot ensure the instantiation of causally efficacious properties in any realist view of mathematics without some unduly ad hoc metaphysical connection being postulated between the concrete world and mathematical abstracta.”16 Given these epistemic and metaphysical difficulties, one looking to defend mathematical realism faces serious challenges. At best, the question of whether numbers exist in a Platonic sense remains open. The fact that the floor is still open to this question shouldn’t hinder naturalism; such an uncertain proposition, namely the existence of numbers, shouldn’t pose problems for it. Though qualia were given less attention, the same conclusion applies. Therefore, reductionism is still a plausible thesis for one attempting a case for strict naturalism. It follows that the natural is that which is physical or reducible to that which is physical.

Even if applied mathematics presented an issue for naturalism, pure mathematics says nothing about the world. It is concerned with objects, relations, and structures; these, however, are abstract. They’re not spatio-temporal and aren’t causally active.17 Even assuming that the existence of numbers posed a problem for naturalism, this wouldn’t lend any support to antithetical arguments. One, for example, wouldn’t be able to argue that the existence of numbers implies the existence of a god. For the same reasons one cannot assert that there’s a moral arbiter behind morality, one cannot assert that there’s a programmer behind program explanations or, more simply, that there’s a being who designed a mathematical universe, that is, a being who designed the universe so that it adheres to laws fully explainable in terms of mathematics. Such a proposition would require justification.

There’s also the fact that, in assuming that numbers exist, this existence could be entirely mind-dependent. This is to say that wherever there’s a being that is sufficiently intelligent, mathematical representations will not only emerge but might be required. Given, for example, the inferior fitness of H. neanderthalensis, one is justified in regarding them as less intelligent than H. sapiens. Yet there’s evidence to suggest that Neanderthals crafted and used tools; there’s also evidence to suggest that they produced art.[18][19] In the case of cave art, rudimentary mathematical thinking is required. For instance, a cave artist didn’t depict herself and aspects of nature (e.g., buffalos; trees) at actual size. She instead scaled down actual sizes while still representing herself and her environment in an accurate manner. That is to say that, though she didn’t depict buffalos at their actual sizes, she still depicted them as larger than herself. This scaling down is possible evidence of rudimentary mathematical thinking. Therefore, if an intellectually inferior being is capable of thinking in this manner, it is reasonable to expect that an intellectually superior being is also capable of such thinking. When presented with given circumstances (e.g., predators that hunt in packs), the capacity to identify multiple threats becomes advantageous to survival. These, in turn, are the first fruits of mathematical thought. Mathematics, then, could have a real contingent existence rather than, as Platonists and mathematical realists argue, a real necessary existence. Thus, despite the strength of nominalism, a watered-down realism could also dispense with the problem the existence of numbers would pose for reductionism and, therefore, naturalism.

With a definition of the natural now established, and given the care taken in addressing possible problems, it is time to turn to examples of naturalistic explanations for the origin of the universe. Before doing so, however, it is necessary to point out that naturalism is the prevailing view in science. Sean Carroll states:

[I]f a so-called supernatural phenomenon has strictly no effect on anything we can observe about the world, then indeed it is not subject to scientific investigation. It’s also completely irrelevant, of course, so who cares? If it does have an effect, then of course science can investigate it, within the above scheme. Why not? Science does not presume the world is natural; most scientists have concluded that the world is natural because that’s the best explanation for what we observe. If you are ever confused about what “science” has to say about something, just ask yourself what actual scientists would do. If real scientists were faced with a purportedly supernatural phenomenon, they wouldn’t just shrug their shoulders because it wasn’t part of their definition of science. They would investigate it and try to come up with the best possible explanation.20

If a supernatural explanation presented itself, however, one should remember “that most naturalists would agree that naturalism at least entails that nature is a closed system containing only natural causes and their effects.”21 This is precisely what cosmological models present: a causally closed universe and explanations showing that the universe is self-contained. Even if one wrongly assumes, as William Lane Craig does, that the Borde-Guth-Vilenkin theorem yields evidence for an absolute beginning of the universe, the theorem only says that the ability to explain the universe classically gives out; it is therefore not useful for arguing for a beginning.22 Even if we assumed, however, that the theorem implies an absolute beginning of the universe, this wouldn’t imply that a supernatural explanation is the only resort.

With this in mind, naturalistic explanations can be presented. The consensus theory, i.e. the Big Bang, will be discussed. The multiverse, which might be the mark of a paradigm shift, will also be discussed. Then an exotic possibility will be explored, namely that the universe is the product of a four-dimensional black hole.

Without surveying the history of the Big Bang, an outline of its properties can be presented: singularity, inflation, baryogenesis, cooling, structure formation, accelerated cosmic expansion. Stephen Hawking describes the singularity as follows:

At this time, the Big Bang, all the matter in the universe, would have been on top of itself. The density would have been infinite. It would have been what is called, a singularity. At a singularity, all the laws of physics would have broken down. This means that the state of the universe, after the Big Bang, will not depend on anything that may have happened before, because the deterministic laws that govern the universe will break down in the Big Bang. The universe will evolve from the Big Bang, completely independently of what it was like before. Even the amount of matter in the universe, can be different to what it was before the Big Bang, as the Law of Conservation of Matter, will break down at the Big Bang.23

Hawking later adds that “the Big Bang is a beginning that is required by the dynamical laws that govern the universe. It is therefore intrinsic to the universe, and is not imposed on it from outside.”24 Baryogenesis is the period in the early universe that resulted in the prevalence of matter over antimatter. It is useful to note here that this prevalence makes no difference given the assumption that the universe was created.

Because antiparticles otherwise have the same properties as particles, a world made of antimatter would behave the same way as a world of matter, with antilovers sitting in anticars making love under an anti-Moon. It is merely an accident of our circumstances, due, we think, to rather more profound factors…that we live in a universe that is made up of matter and not antimatter or one with equal amounts of both.25

Given this, the prevalence of matter over antimatter is arbitrary on the assumption that the universe was created; indeed, it serves as evidence against that notion, since it is an example of chance in the universe. After the universe began to cool, stars, galaxies, and planets began to form. The expansion first observed by Edwin Hubble is, as was discovered in 1998, accelerating. What was accelerating the expansion of the universe was, at the time, unknown. Today, it is held that dark energy is responsible for the accelerating expansion, and this, Sean Carroll states, is because dark energy is persistent and doesn’t dilute as the universe expands. It is, he explains, a feature of space itself and therefore constant throughout space and time.26
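To make “persistent” precise: in standard Friedmann cosmology, the energy density of matter dilutes as the universe expands, while that of dark energy, modeled as a cosmological constant, does not. Where a is the scale factor measuring the size of the universe:

ρ_matter ∝ a^(-3), ρ_radiation ∝ a^(-4), ρ_dark energy ∝ a^0 (constant)

Once the universe grows large enough, the constant term inevitably dominates, which is why the expansion accelerates.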

There have been recent suggestions that the universe is a dynamical fluid, or, at the very least, a medium that appears to have more than three phases; in fact, there may be as many as 10^500 phases, and maybe even an infinite number. So aside from curving and expanding, space could be doing something like freezing or evaporating.27

Inflation was intentionally set aside because it has received a lot of recent attention. On March 17, 2014, John Kovac, of the Harvard-Smithsonian Center for Astrophysics, announced the detection of B-mode polarization in the Cosmic Microwave Background (CMB), attributed to primordial gravitational waves. Initially, the BICEP2 collaboration had ruled out the possibility that cosmic dust in the Milky Way accounted for the polarization pattern.

On June 19, however, the BICEP2 collaboration published a paper acknowledging that cosmic dust in the Milky Way could account for more of the B-mode polarization signal than previously thought. If the signals originate from primordial gravitational waves, this would be a smoking gun for inflation: inflation theory proposes a short burst of exponential expansion in the early universe, and this rapid expansion would produce gravitational waves.
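For context, the strength of such a signal is conventionally quoted as the tensor-to-scalar ratio r, the amplitude of primordial gravitational waves relative to that of ordinary density perturbations; BICEP2 initially reported r ≈ 0.2, a value that did not survive a proper accounting of foreground dust.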

It wasn’t long before the results came under fire and were eventually proven wrong. Charles Choi writes:

The controversy hinges on their handling of the dust emission, which relied on a preliminary map based on about 15 months of data from the European Space Agency’s Planck spacecraft. Falkowski noted the BICEP2 group might have misinterpreted the Planck data, thinking that it only contained emissions from the Milky Way when it also included unpolarized emissions from other galaxies. If the BICEP2 team did not account for this fact, they might have underestimated how polarized the foreground from the Milky Way actually was. This could mean the inflation signal the group thought it saw might only be a spurious result from Milky Way emissions.28

Though the BICEP2 results were eventually shown to be wrong, primordial gravitational waves would still be a smoking gun for inflation, and a recent suggestion could prove promising. A team consisting of Nora Elisa Chisari, a fifth-year graduate student in the Department of Astrophysical Sciences at Princeton; Cora Dvorkin, of the Institute for Advanced Study’s School of Natural Sciences; and Fabian Schmidt, of the Max Planck Institute for Astrophysics, proposes that weak lensing surveys may be able to detect the cross-correlation between B-mode polarization in the CMB and cosmic shear. On their suggestion, cosmic shear may be the best way to confirm that the B-mode polarization in the CMB results from primordial gravitational waves.

If inflation is empirically established, it would serve as indirect evidence for an inflationary multiverse. An inflationary universe, according to Tegmark, results in a Level I and a Level II multiverse. Inflation would eventually end in parts of a rapidly expanding region, forming U-shaped regions. Each of these regions constitutes a Level I multiverse, while the amalgam of these universes constitutes a Level II multiverse.29 Another way to imagine this type of multiverse is to picture an enormous block of Swiss cheese. As Brian Greene explains:

[T]he cheesy parts [are] regions where the inflaton field’s value is high and the holes [are] regions where it’s low. That is, the holes are regions, like ours, that have transitioned out of the superfast expansion and, in the process, converted the inflaton field’s energy into a bath of particles, which over time may coalesce into galaxies, stars, and planets. In this language, we’ve found that the cosmic cheese requires more and more holes because quantum processes knock the inflaton’s value downward at a random assortment of locations. At the same time, the cheesy parts stretch ever larger because they’re subject to inflationary expansion driven by the high inflaton field value they harbor. Taken together, the two processes yield an ever-expanding block of cosmic cheese riddled with an ever-growing number of holes. In the more standard language of cosmology, each hole is called a bubble universe (or a pocket universe). Each is an opening tucked within the superfast stretching cosmic expanse.30

Briefly, an inflaton field is a field corresponding to a given inflationary model. Just as the Higgs field corresponds to the Higgs boson and the electromagnetic field corresponds to electromagnetism, an inflaton field corresponds to inflation. Some have written off multiverses as too hypothetical. However, Einstein’s general relativity suggested both an expanding universe and black holes long before there was evidence for either. Likewise, quantum mechanics (e.g., Everett’s Many Worlds Interpretation), string theory, and other equations suggest a multiverse. Some consider the multiverse the best explanation for the so-called fine-tuning problem in physics. When the mathematics of such equations suggests an observable aspect of a given theory or a yet-to-be-observed phenomenon, it often isn’t long before the phenomenon is found to be actual.

The Everettian interpretation isn’t the only interpretation of quantum mechanics that results in a multiverse. Howard Wiseman, a theoretical quantum physicist at Griffith University, along with his colleagues, suggested the “many interacting worlds” approach. On this interpretation, each world is governed by Newtonian physics. However, given the interaction of these worlds, phenomena associated with quantum mechanics arise. To test this approach, Wiseman suggested that the collision of two worlds could lead to the acceleration of one and the recoil of the other, resulting in quantum tunneling. Wiseman and his team work through other examples, including the interaction of 41 classical worlds reproducing the type of phenomena observed in the double-slit experiment.31

It was suggested above that the multiverse could mark a paradigm shift. This could be the case because the multiverse is able to explanatorily absorb Big Bang cosmology. In other words, inflation, though a property of the Big Bang theory, isn’t well understood without introducing the inflationary multiverse. For Big Bang cosmology to remain the paradigm, inflation would have to be explained without recourse to the inflationary multiverse. Aside from explaining inflation, the multiverse, via string theory, can explain the behavior of dark energy. As mentioned above, string theorists have proposed as many as 10^500 phases of space.

As surveyed above, the Big Bang and the multiverse are self-contained, naturalistic explanations for the origin of the universe. There are, however, exotic explanations. One of the more recent suggestions, offered by Niayesh Afshordi and his team, is that the Big Bang was the result of a star that collapsed in a higher dimension: a four-dimensional star that collapsed into a black hole. Interestingly enough, one of the properties of black holes is the singularity, and the Big Bang began as a singularity. “When Afshordi’s team modelled the death of a 4D star, they found that the ejected material would form a 3D brane surrounding that 3D event horizon, and slowly expand. The authors postulate that the 3D Universe we live in might be just such a brane—and that we detect the brane’s growth as cosmic expansion.”32 This, Afshordi argues, is what led astronomers to extrapolate back to the early universe and reason that it must have begun in a Big Bang. According to Afshordi, the Big Bang is a mirage.

This brief survey offers the consensus explanation: the Big Bang; a good candidate to shift the current paradigm: the multiverse; and an exotic explanation: the universe as the result of a four-dimensional star collapsing into a black hole. The survey is by no means exhaustive. There are, for example, many inflationary theories (e.g., hybrid inflation; inflation as related to loop quantum gravity). There are also a number of theorems (e.g., the quantum eternity theorem). The feature they all share, however, is that they represent a self-contained universe, i.e., a causally closed universe.

A religionist may object and say that his god chose to work via naturalistic processes. William Provine offers a perfect reply to this notion:

A widespread theological view now exists saying that God started off the world, props it up and works through laws of nature, very subtly, so subtly that its action is undetectable. But that kind of God is effectively no different to my mind than atheism.33

This sort of suggestion is also in violation of Ockham’s razor or the principle of parsimony, which states that one shouldn’t multiply entities beyond necessity. Augustine quotes J.J.C. Smart:

“Ockham’s Razor does not imply that we should accept simpler theories at all costs. The Razor is a method for deciding between two theories that equally account for the agreed-upon facts” (Smart 1984, p. 124).

Augustine continues:

Given this basic heuristic, if we have no evidence for likely candidates for a supernatural event, we should adopt the simplest explanation for this fact–that only natural causes are operative within the natural world. If every caused event we have encountered can be explained in terms of natural causes, there is no reason to invoke supernatural causes that do no explanatory work for any particular events.34

Given this, such a god would be an added, unnecessary appendage. Attaching a god to naturalistic explanations doesn’t change the nature of the explanations, and it doesn’t support the case for the supernatural. Therefore, given the naturalistic explanations surveyed above, creative agents are unnecessary; attaching them to such explanations doesn’t make them necessary, since they contribute nothing to the explanatory work. As argued earlier, if such gods are unnecessary, then they are also nonexistent with respect to the object purportedly created.

Ultimately, much is said about agnostic atheism, an epistemic position which disavows belief in gods but doesn’t claim to know whether or not they exist. Given the Argument From Cosmology, however, the fact that creative agents are unnecessary implies their nonexistence. We can know, with a high degree of certainty, that the universe is causally closed and, therefore, self-contained; no outside influence can causally interact with or within it. It is therefore possible to know that gods do not exist. From a much broader perspective, then, this argument lends strong support to gnostic atheism: an epistemic position which not only disavows belief but claims knowledge, warrant, and justification. This implication makes for a much broader thesis that is perhaps worth exploring. Perhaps another time.

Works Cited 

1 Augustine, Keith. “A Defense of Naturalism”. Infidels. 2001. Web. 5 Dec 2014.

2 “Reductionism”. 25 Nov 1999. Web. 5 Dec 2014.

3 Blackburn, Simon. The Oxford Dictionary of Philosophy. Oxford: Oxford UP, 1994. 301. Print.

4 Bueno, O. [2012a]: “An Easy Road to Nominalism”, Mind 121, pp. 967-982. Web. 5 Dec 2014. Available on Web.

5 Ibid. [4]

6 Vrbik, Jan. “Mathematical Exploration of Kirkwood Gaps”, Mathematical Journal 14. 2014. Web. 5 Dec 2014. Available on Web.

7 Lyon, Aidan [Sept 2012]. “Mathematical Explanations of Empirical Facts, And Mathematical Realism”. Australasian Journal of Philosophy, Vol. 90, No. 3, pp. 559–578. Web. 6 Dec 2014.

8 Ibid. [7]

9 Heil, John. Philosophy of Mind: A Contemporary Introduction 3rd Ed, p. 211. London: Routledge, 2013. Print.

10 Ibid. [7]

11 Jackson, Frank and Pettit, Philip [Mar 1990]. “Program Explanation: A General Perspective”. Analysis, Vol. 50, No. 2, pp. 107-117. Web. 6 Dec 2014. Available on Web.

12 Ibid. [7]

13 Cooper, Wilson [Jun 2008]. “Causal Relevance and Heterogeneity of Program Explanations in the Face of Explanatory Exclusion”. Kritike Vol. 2, No. 1, pp. 95-109. Web. 6 Dec 2014. Available on Web.

14 Saatsi, Juha. “Mathematics and Program Explanations”. Australasian Journal of Philosophy Vol. 90, No. 3, pp. 579–584. Web. 6 Dec 2014.

15 Ibid. [13]

16 Ibid. [14]

17 Ibid. [4]

18 Tarlach, Gemma. “In Europe, Neanderthals Beat Homo Sapiens to Specialized Tools”. Discover Magazine. 12 Aug 2013. Web. 6 Dec 2014.

19 Than, Ker. “World’s Oldest Cave Art Found—Made by Neanderthals?”. National Geographic. 14 Jun 2012. Web. 6 Dec 2014.

20 Carroll, Sean. “What is Science?”. Preposterous Universe. 3 Jul 2013. Web. 6 Dec 2014.

21 Ibid. [1]

22 William Lane Craig and Sean Carroll, “God and Cosmology” (33:33). YouTube. YouTube, LLC. 3 Mar 2014. Web. 6 Dec 2014.

23 Hawking, Stephen. “The Beginning of Time”. N.d. Web. 6 Dec 2014.

24 Ibid. [23]

25 Krauss, Lawrence. A Universe From Nothing: Why There Is Something Rather Than Nothing. 1st ed. New York, NY: Free Press, 2012. 61. Print.

26 Carroll, Sean. “Why Does Dark Energy Make the Universe Accelerate?”. Preposterous Universe. 16 Nov 2013. Web. 6 Dec 2014.

27 Tegmark, Max. Our Mathematical Universe: My Quest For the Ultimate Nature of Reality, p. 135. New York: Alfred A. Knopf, 2014. Print.

28 Choi, Charles. “Will the Bicep2 Results Hold Up?”. The Nature of Reality. PBS. 27 May 2014. Web. 6 Dec 2014.

29 Ibid. [27], pp. 133-134.

30 Greene, Brian. The Hidden Reality: Parallel Universes and The Deep Laws of the Cosmos, pp. 56-58. New York: Alfred A. Knopf, 2011. Print.

31 Hall, M. J. W., Deckert, D. A., & Wiseman, H. M. [2014]. “Quantum Phenomena Modeled by Interactions between Many Classical Worlds”. Phys. Rev. X 4. Web. 6 Dec 2014.

32 Merali, Zeeya. “Did a hyper-black hole spawn the Universe?”. Nature. 13 Sep 2013. Web. 6 Dec 2014.

33 William Provine as quoted in Strobel, Lee. The Case For A Creator. Grand Rapids, Mich.: Zondervan, 2004. 26. Print.

34 Ibid. [1]