By R.N. Carmona
In recent years, there has been a surge in the use of Bayes’ Theorem with the intention of bolstering this or that argument. This has resulted in widespread misuse and abuse of Bayes’ Theorem as a tool. It has also resulted in an incapacity to filter out bias in the context of some debates, e.g., the debate between theism and naturalism. Participants in these debates, on all sides, betray a tendency to inflate their prior probabilities in accordance with their unmerited epistemic certainty in a presupposition or key premise of one of their arguments. The prophylactic, to my mind, is found in a retreat to the basics of logic and reasoning.
An Overview on Validity
Validity, for instance, is more involved than some people realize. It is not enough for an argument to appear to have logical form. An analysis of whether it, in fact, has logical form is a task that is seldom undertaken. When people think of validity, something like the following comes to mind: “A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid” (“Validity and Soundness,” Internet Encyclopedia of Philosophy, n.d. Web.).
Kelley, however, gives us rules to go by:
- In a valid syllogism, the middle term must be distributed in at least one of the premises
- If either of the terms in the conclusion is distributed, it must be distributed in the premise in which it occurs
- No valid syllogism can have two negative premises
- If either premise of a valid syllogism is negative, the conclusion must be negative; and if the conclusion is negative, one premise must be negative
- If the conclusion is particular, one premise must be particular (Kelley, D.. The Art of Reasoning. WW Norton & Co. 2013. Print. 243-249)
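Because these rules are mechanical, they can be applied systematically. The following Python sketch is my own illustrative encoding of the five rules (the function names and data representation are mine, not Kelley’s); it assumes the standard A/E/I/O classification of categorical statements, on which universals distribute their subject and negatives distribute their predicate:

```python
# Illustrative checker for Kelley's five rules of the categorical syllogism.
# Statement forms: 'A' = All S are P, 'E' = No S are P,
#                  'I' = Some S are P, 'O' = Some S are not P.
# A statement is a (form, subject, predicate) triple.

def term_distributed(stmt, term):
    """Universals distribute their subject; negatives distribute their predicate."""
    form, subj, pred = stmt
    if term == subj:
        return form in ("A", "E")
    if term == pred:
        return form in ("E", "O")
    return False

def check_syllogism(major, minor, conclusion):
    """Return a list of violated rules; an empty list means the form passes."""
    premises = (major, minor)
    negative = lambda s: s[0] in ("E", "O")
    particular = lambda s: s[0] in ("I", "O")
    violations = []

    # The middle term occurs in both premises but not in the conclusion.
    middle = (({major[1], major[2]} & {minor[1], minor[2]})
              - {conclusion[1], conclusion[2]}).pop()

    # Rule 1: the middle term must be distributed in at least one premise.
    if not any(term_distributed(p, middle) for p in premises):
        violations.append("Rule 1: undistributed middle")

    # Rule 2: a term distributed in the conclusion must be
    # distributed in the premise in which it occurs.
    for term in (conclusion[1], conclusion[2]):
        if term_distributed(conclusion, term):
            home = next(p for p in premises if term in (p[1], p[2]))
            if not term_distributed(home, term):
                violations.append("Rule 2: illicit process of " + term)

    # Rule 3: no valid syllogism has two negative premises.
    if all(negative(p) for p in premises):
        violations.append("Rule 3: two negative premises")

    # Rule 4: a negative premise requires a negative conclusion, and vice versa.
    if any(negative(p) for p in premises) != negative(conclusion):
        violations.append("Rule 4: negativity mismatch")

    # Rule 5: a particular conclusion requires a particular premise.
    if particular(conclusion) and not any(particular(p) for p in premises):
        violations.append("Rule 5: particular conclusion, universal premises")

    return violations

# Barbara ('All P are Q; All R are P; therefore all R are Q') passes:
print(check_syllogism(("A", "P", "Q"), ("A", "R", "P"), ("A", "R", "Q")))  # []
# The undistributed-middle form ('All P are Q; All R are Q;
# therefore all R are P') fails Rule 1:
print(check_syllogism(("A", "P", "Q"), ("A", "R", "Q"), ("A", "R", "P")))
```

The point of the sketch is simply that validity under these rules is checkable in principle; nothing in the analysis that follows depends on the code.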
With respect to the first rule, any argument that does not adhere to it commits the fallacy of undistributed middle. Logically, if we take Modus Ponens to be a substitute for a hypothetical syllogism, then undistributed middle is akin to affirming the consequent. Consider the following invalid form:
All P are Q.
All R are Q.
∴ All R are P.
When one affirms the consequent, one treats P ⊃ Q as though it licensed its converse, Q ⊃ P. It is not surprising that these two fallacies are so closely related, because both are illegitimate transformations of valid argument forms. We want to say that since all P are Q and all R are Q, all R must be P, in much the same way we want to infer P from P ⊃ Q and Q. Consider the well-known Kalam Cosmological Argument (KCA). No one on either side questions the validity of the argument, because validity, for many of us, is met whenever the conclusion follows from the premises. However, one can ask whether the argument adheres to Kelley’s rules. Analyzed closely enough, it is very arguable that the argument violates Kelley’s fourth rule. The difficulty is that seeing this requires transposing the argument from the fifth rule to the fourth: the argument does not violate the fifth rule and therefore appears valid, but when restated under the fourth rule, the problem becomes obvious. The universe is a particular in both Craig’s conclusion and in the second premise of his argument. Let us consider the KCA restated under the fourth rule:
There are no things that are uncaused.
There is no universe that is uncaused.
∴ All universes have a cause.
Restating it this way appears controversial only because the argument seems to presuppose that there is more than one universe. Two negative premises must have terms in common. Put another way, since there are many of all other things, the universe cannot be the only thing of its kind, if we even agree that the universe is like ordinary entities at all. Craig, perhaps unintentionally, attempts to get a universal from a particular, as his argument restated under the fourth rule shows. Given this, we come to the startling conclusion that Craig’s KCA is invalid. Analyses of this kind are extremely rare in debates because most participants do not know or have forgotten the rules of validity. No amount of complexity hides a violation of basic principles. The advent of analytic philosophy with Russell and Moore led to increasing complexity in arguments, and for the most part validity is respected. As shown here, this is not always the case, so a cursory analysis should always be done at the start.
Validity is necessary but not sufficient for an argument to prove effective and persuasive. This is why arguments themselves cannot substitute for or amount to evidence. Soundness is determined by taking a full account of the evidence with respect to the argument. The soundness of an argument is established given that the pertinent evidence supports it; otherwise, the argument is unsound. Let us turn to some simple examples to start.
An Overview of Soundness
“A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound” (Ibid.).
All ducks are birds.
Larry is a duck.
∴ Larry is a bird.
This argument is stated under Kelley’s fifth rule and is no doubt valid. Now, whether or not the argument is sound will have us looking for external verification. We might say that, a priori, we know that there are no ducks that are not birds. By definition, a duck is a kind of bird. All well and good. There is still the question of whether there is a duck named Larry. This is also setting aside the legitimacy of a priori knowledge because, to my mind, normal cognitive function is necessary to apprehend human languages and to comprehend the litany of predicates that follow from these languages. We know that ducks are birds a posteriori, but on this point I digress. Consider, instead, the following argument.
All ducks are mammals.
Larry is a duck.
∴ Larry is a mammal.
This argument, like the previous one, is valid and in accordance with Kelley’s fifth rule. However, it is unsound. This harkens back to the notion that ducks belonging among birds is not a piece of a priori knowledge. Despite knowing that all ducks are birds, the differences between birds and mammals are not at all obvious. That is perhaps the underlying issue: a question of how identity is arrived at, in particular the failure of the essentialist program to capture what a thing is. The differentialist program would have us identify a thing by pinning down what it is not. It follows that we know ducks are birds because, anatomically and genetically, ducks do not have the signatures of mammals or of any other taxon, for that matter. A deeper knowledge of taxonomy is required to firmly establish that ducks are, in fact, birds.
An exploration of soundness is much more challenging when analyzing metaphysically laden premises. Consider, for example, the second premise of the KCA: “The universe began to exist.” What exactly does it mean for anything to begin to exist? This question has posed more problems than solutions in the literature; for our purposes, it is not necessary to summarize them here. We can say of a Vizio 50-inch plasma screen television that it began to exist in some warehouse; in other words, there is a given point in time at which a functioning television was manufactured and sold to someone. The start of a living organism’s life is also relatively easy to identify. However, mapping these intuitions onto the universe gets us nowhere because, as I alluded to earlier, the universe is unlike ordinary entities. This is why the KCA has not been able to escape the charge of committing the fallacy of composition. All ordinary entities we know of, from chairs to cars to elephants to human beings, exist within the universe. They are, as it were, the parts that comprise the universe. It does not follow from the fact that ordinary things begin to exist that the universe, too, must have begun to exist.
This is a perfect segue into probability. Again, since Bayes’ Theorem is admittedly complex and not something that is easily handled even by skilled analytic philosophers, a return to the basics is in order. I will assume that the rule of distribution applies to basic arguments; this will turn out to be fairer to all arguments because treating premises as distinct events greatly reduces the chances of a given argument being true. I will demonstrate how this filters out bias in our arguments and imposes on us the need to strictly analyze arguments.
Using Basic Probability to Assess Arguments
Let us state the KCA plainly:
Everything that begins to exist has a cause for its existence.
The universe began to exist.
∴ The universe has a cause for its existence.
As aforementioned, the first premise of the KCA is metaphysically laden. At best, it is indeterminable, because it is an inductive premise; it would take just one uncaused entity within the universe to throw the entire argument into the fire. To be fair, we can assign a probability of no more than .5 to this premise being true. We can then use distribution to get the probability of the argument being sound: since the first premise has a .5 probability of being true, and given that the argument does not violate Kelley’s rules, we can distribute this probability across the other premise and conclude that the argument has a 50% chance of being true.
This is preferable to treating each premise as an isolated event; I am being charitable to all arguers by assuming they have properly distributed their middles. Despite this, a slightly different convention might have to be adopted to assess the initial probability of an argument with multiple premises. An argument with six individual premises would have a 1.56% chance of being true, i.e. .5^6. This convention would be adopted because we want a probability between 0 and 100. If we used the same convention applied to simpler arguments with fewer premises, then an argument with six premises would have a 300% chance of being true. An arguer could then arbitrarily increase the number of premises in his argument to boost the probability of his argument being true. Intuitively, an argument with multiple premises has a greater chance of being false; the second convention, at least, shows this, while the first clearly does not. The jury is still out on whether the second convention is fair to more complex arguments. There is still the option of following standard practice and isolating an individual premise to see if it holds up to scrutiny. Probabilities do not need to be used uniformly; they should be used to make clear our collective epistemic uncertainty about something, i.e., to filter out dogma.
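The arithmetic can be made concrete. Below is a minimal sketch (the function names are mine) contrasting the multiplicative convention, which treats each premise as an independent event at probability .5, with a naive summing of .5 per premise, the reading on which a six-premise argument would come out at “300%”:

```python
# Two ways of combining per-premise probabilities of .5.

def multiplied(n_premises, p=0.5):
    """Treat premises as independent events: multiply their probabilities."""
    return p ** n_premises

def summed(n_premises, p=0.5):
    """Naively add p per premise; the result can exceed 1, which is why it fails."""
    return p * n_premises

print(multiplied(6))  # 0.015625, i.e. about a 1.56% chance of being true
print(summed(6))      # 3.0, i.e. a nonsensical "300%"
```

Only the multiplicative convention keeps the result between 0 and 1, which is the minimal requirement for reading it as a probability at all.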
Let us recall my negation strategy and offer the anti-Kalam:
Everything that begins to exist has a cause for its existence.
The universe did not begin to exist.
∴ The universe does not have a cause.
Despite my naturalistic/atheistic leanings, the probability of my argument is also .5 because Craig and I share premise 1. The distribution of that probability into the next premise does not change because my second premise is a negation of his second premise. In one simple demonstration, it should become obvious why using basic probabilities is preferable to the use of Bayes’ Theorem. No matter one’s motivations or biases, one cannot grossly overstate one’s priors or assign a probability much higher than .5 to metaphysically laden premises that are not easily established. We cannot even begin to apply the notion of a priori knowledge to the first premise of the KCA. We can take Larry being a bird as obvious, but we cannot take as obvious that the universe, like all things within it, began to exist and therefore has a cause.
Now, a final question remains: how exactly does the probability of an argument being sound increase? Probability increases in accordance with the evidence. For the KCA to prove sound, a full exploration of evidence from cosmology is needed. A proponent of the KCA cannot dismiss four-dimensional black holes, white holes, a cyclic universe, eternal inflation, and any theory not in keeping with his predilections. That being the case, his argument becomes one based on presupposition and is, therefore, circular. A full account of the evidence available in cosmology actually cuts sharply against the arteries of the KCA and therefore greatly reduces the probability of it being sound. Conversely, it increases the probability of an argument like the Anti-Kalam being true. The use of basic probability is so parsimonious that the percentage decrease of the Kalam being sound mirrors the percentage increase of the Anti-Kalam being sound. In other words, the percentage decrease of any argument proving sound mirrors the percentage increase of its alternative(s) proving true. So if a full account of cosmological evidence lowers the probability of the Kalam being sound by 60%, understood as a relative decrease from .5, the Anti-Kalam’s probability of being true increases by 60%. In other words, the Kalam would now have a 20% probability of being true while its opposite would now have an 80% chance of being true.
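The mirrored update can be sketched as follows. The function name is mine, and reading “60%” as a relative rather than absolute change is an assumption, chosen because it is the reading that reproduces the 20%/80% figures:

```python
# Mirrored update: a relative decrease applied to one argument is matched
# by the same relative increase applied to its alternative.

def mirrored_update(p, relative_change):
    decreased = p * (1 - relative_change)  # e.g. the Kalam
    increased = p * (1 + relative_change)  # e.g. the Anti-Kalam
    return decreased, increased

kalam, anti_kalam = mirrored_update(0.5, 0.60)
print(kalam, anti_kalam)  # 0.2 0.8
```

Note that because both arguments start at the neutral .5, the two updated probabilities still sum to 1, which is what makes the mirroring coherent.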
Then, if a Bayesian theorist is not yet satisfied, he can keep all priors neutral and plug in probabilities that were fairly assessed in order to compare a target argument to its alternatives. Even more to the point regarding fairness, rather than making a favored argument the target of analysis, the Bayesian theorist can make an opponent’s argument the target of analysis. It would follow that the opponent’s favored argument has a low probability of being true, given a more basic analysis that filters out bias and a systematic heuristic like the one I have offered. Such an analysis is free of human emotion or, more accurately, of devotion to any given dogma. It also further qualifies the significance of taking evidence seriously, and it lends much credence to the conclusion that arguments themselves are not evidence. If that were the case, logically valid but unsound arguments would be admissible as evidence. How would we be able to determine whether one argument or another is true if the arguments themselves served as evidence? We would essentially regard arguments as self-evident or tautologous. They would be presuppositionalist in nature and viciously circular. All beliefs would be equal. This, thankfully, is not the case.
Ultimately, my interest here has been a brief exploration into a fairer way to assess competing arguments. All of this stems from a deep disappointment in the abuse of Bayes’ Theorem; everyone is inflating their priors, and no progress will be made if that continues to be permitted. A more detailed overview of Bayes’ Theorem is not necessary for such purposes and would likely scare away even some readers versed in analytic philosophy and more advanced logic. My interest, as always, is in communicating philosophy to the uninitiated in a way that is approachable and intelligible. At any rate, a return to the basics is in order. Arguments should continue to be assessed; validity and soundness must be met. Where soundness proves difficult to come by, a fair initial probability must be applied to all arguments. Then, all pertinent evidence must be accounted for, and the consequences the evidence presents for a given argument must be absorbed and accepted. Where amendment is possible, the argument should be restructured, to the best of the arguer’s ability, in a way that demonstrates recognition of what the evidence entails. This may sound like a lot to ask, but the pursuit of truth is an arduous journey, not an easy endeavor by any stretch. Anyone who takes the pursuit seriously will go to great lengths to increase the epistemic certainty of his views. All else is folly.
By R.N. Carmona
A Christian on the Humans of New York Instagram page brought up the Leibnizian Cosmological Argument (LCA) and named a few of its Christian defenders. As I’m fond of pointing out, in naming only Christian defenders of the argument, it is likely he hasn’t considered the actual objections; he has read a degraded form of the objections through the lens of these Christian authors. This is why I call apologetics pseudo-philosophy.
In actual philosophical discourse, the participants in a given discussion are as charitable to one another as possible and they try very hard to ensure that nothing is lost in translation. They constantly correct themselves if they misinterpret what their opponent has said or they attempt to show that their interpretation better captures what their opponent is trying to say. Apologists don’t do this. Apologists straw man an objection, sometimes in ways that seem sophisticated, in order to make the argument or counter-argument easier to address. This is precisely why I advised him to deal with J.L. Mackie, for example–to read him directly and not through the lens of one of his favored authors. Then he would find that even Christians reject the Leibnizian Cosmological Argument (LCA).
A quick example of what I mean: in responding to my Argument From Cosmology, my opponent says that the opposite of my conclusion is as follows: the fact that an x can’t be shown to exist in relation to y doesn’t mean that x doesn’t exist*; in other words, that god can’t be shown to exist in relation to the Earth doesn’t mean god doesn’t exist. Yet on Judaism and Christianity, god created the Earth. Modern science tells us that planets have no creator; they form naturally over an extended period of time. We have real time data of planet formation in other systems. The history of our planet doesn’t resemble anything mentioned in the Bible. By extension, I argue that god doesn’t exist in relation to the universe, since he didn’t create it. Modern cosmology tells us as much.
His assertion is not enough to refute my argument. In fact, all he’s concluding is the opposite of what he misunderstood as my conclusion. My actual conclusion is this: x does not exist in relation to y iff it is necessary that x exist in relation to y. If god did not create the Earth or the universe, he doesn’t exist in relation to either, and by extension, doesn’t exist; however, on Christianity, it is necessary that he does. This is precisely what his favored LCA says. Through the Principle of Sufficient Reason (PSR), for every being or state of affairs that exists, there is a sufficient reason why it exists. It then adds that for every being or state of affairs that exists, there is a necessary reason why it exists. Therefore, god is the necessary and sufficient reason for why these states of affairs exist.
My Argument From Cosmology best captures a decisive criticism of the LCA without directly engaging it. J.L. Mackie stated that PSR need not be assumed by reason. Reason only asks for antecedent causes and conditions that explain each being or state of affairs. These are considered facts until they themselves are explained by something prior to them. On reason, nothing is prerequisite beyond this.
Thus, going back to my conclusion, x does not exist in relation to y iff it is necessary that x exist in relation to y. In order for Christianity to hold true, god would have had to create the Earth and the universe. If we have reason to doubt that he created the Earth, and modern science establishes this conclusively, then we have much reason to doubt that he created the universe. If the Earth can be explained by antecedent causes and conditions, then the universe can as well. My argument offers a number of plausible explanations fielded in modern cosmology all whilst arguing that there cannot be the type of causation Christians would require, i.e., the type of causation that allows for an immaterial agent to create material objects. At best, such causation is unknowable and it is probable that there will be no hard evidence for it; hence the Christian must retreat to agnosticism. At worst, such causation is impossible and there cannot be any evidence for it; thus, the Christian must retreat to atheism.
Ultimately, my argument addresses the LCA by implication, i.e., my argument implies a defeater of the LCA. Therefore, I do not have to address it directly. The same can be said of the KCA. Rebuttals to both arguments are implicit in my argument. That fact alone should give apologists pause.
In any case, my point has been made: apologetics is pseudo-philosophy. Apologists prop it up by being uncharitable and purposely (in most cases) misinterpreting a claim or argument made by atheists. Apologists are also modern sophists: it matters not how things might be or probably are; what matters is what they think is the case, what they say is the case, or what they deem possible–as though possibility implies probability. Apologists are also quite fond of straw men, which they use to make their arguments seem superior to those of their opponents. Unlike them, I have defined PSR correctly and I’ve summarized the LCA charitably. However, I’ve also shown that my Argument From Cosmology considers the LCA, albeit indirectly. My argument implies a decisive blow to the LCA. It is therefore necessary to deal with my argument directly; it isn’t enough to choose a favored argument and deem it superior on the basis of a misinterpretation of my conclusion.
*His assessment is correct assuming that x isn’t necessary with relation to y. The fact that I (x) cannot be explained with relation to an invention (y) doesn’t mean I do not exist. Even the actual inventor of invention (y) doesn’t have this sort of connection to the invention in question. Another agent could have been the inventor. On Christianity, however, there are no other gods and hence no other creators. Thus, if god did not create the Earth, there is reason to doubt he created anything else in the universe and therefore, the universe itself. As stated, for Christianity to be true, it need only be shown that god is necessary in this possible world; the LCA, as originally formulated, wants Anselm’s deity and therefore, on the LCA, god is necessary in all possible worlds. My argument casts much doubt on this, especially since god is the necessary being antecedent to all contingent beings. If this connection fails on a minor front, e.g., god didn’t create all baseballs, that’s fine, for even then he would underlie the reason for the reason of the baseballs, namely human beings. If it fails on a major front, as I’ve shown in the case of the Earth and all planets for that matter, a glaring problem arises for Christians, for even if they posit a being that willed the laws of physics, what they have is a deity far removed from the deity in the Bible. That would make for a separate discussion altogether.
By R.N. Carmona
Though the strongest of the arguments for atheism, this argument is the most philosophically involved. That is to say that, due to its implications, the argument ventures into philosophical territory, much of which is the subject of continued disagreement and a lack of consensus. On the surface, the argument is strong. However, its tacit assumption has to be qualified so that the strength of the argument is augmented. In doing so, some problems will arise. Though these problems aren’t damning to the argument, and though they do not lend credence to antithetical arguments, they must be addressed. In other words, at the very least, solutions must be suggested.
It is time now to turn to the argument. Groundwork will then be laid out to qualify its assumption. Problems will then be presented and addressed. Then it will be suggested that the uncertainty of the solutions to these problems lends no support to antithetical arguments because such arguments carry their own onus. The argument is as follows:
P1 If there is a naturalistic explanation for the origin of the universe, a creative agent does not exist. (P ⊃ Q)
P2 There is a naturalistic explanation for the origin of the universe. (P)
C Therefore, a creative agent does not exist. (∴ Q)
Prior to qualifying the assumption of the argument, the conclusion will be qualified. The implication here is that if there is a naturalistic explanation for the origin of the universe, a creative agent is not necessary. However, as is argued below, if a creative agent is not necessary, then it follows that a creative agent doesn’t exist.
To see how the conclusion follows, it is perhaps best to shrink the scope. This is to say that rather than focus on the universe as a whole, the focus should turn to a smaller aspect. Suppose that the argument instead argues that if there is a naturalistic explanation for the formation of planets, a creative agent doesn’t exist with respect to planets. Antithetical arguments would no doubt focus on the formation of Earth and thus, would engage in special pleading or question begging. In other words, such arguments would have no choice but to accept that there is a naturalistic explanation for planet formation, but that this explanation is somehow inapplicable to the Earth.
However, if a creative agent isn’t necessary for the formation of the Earth, then there’s no respect in which it can be said to exist in relation to the Earth. Now, in broadening the scope once again, the same argument applies to the universe. If a creative agent isn’t necessary for the origin of the universe, then there’s no respect in which it can be said to exist in relation to the universe. If it isn’t necessary for the creation of the universe, then it doesn’t exist within the universe or transcendentally in respect to the universe. The latter is to say that if it played no role in the origin of the universe, even the assumption that it exists outside of the universe doesn’t lend support to its existence. With respect to this universe, it simply doesn’t exist.
The assumption of the argument is where much of this discussion will focus. The discussion will center on the question of what constitutes a naturalistic explanation, and thus on the question of what is meant by natural. Keith Augustine offers three definitions. The one he seems to accept is problematic, given that supervenience lends credence to non-natural or even supernatural explanations of the mind. Though non-reductive physicalism states that mental states are contingent on physical states, mental states and physical states aren’t identical. This implies that mental states are non-physical.1
Along these lines, a religionist who is, for example, a Cartesian dualist can argue that the soul is supervenient on the brain. She can argue that mental states are non-physical precisely because mental states are a property of the soul. For such reasons, non-reductive physicalism is to be rejected by one looking to provide a case for strict naturalism. A case for strict naturalism would entail reductive physicalism, or reductionism, which agrees with the thesis of physicalism but adds that complex phenomena can be reduced to physical processes.2 This is to say, for example, that morality reduces to the mind of the moral agent. Reductionism could therefore be seen as the view that a given phenomenon is reducible to another phenomenon; alternatively, in the philosophy of science, reductionism is the view that one theory is reducible to another (e.g. thermodynamics is reducible to statistical mechanics). This thesis runs into at least two problems.
The first of these issues is qualia–“the felt or phenomenal qualities associated with experiences.”3 This is often referred to as the what-it’s-like-ness of an experience. For example, a sharp pain in the foot, the smell of wet dog fur, and the taste of chocolate each have a subjective quality that varies from one person to the next. These can only be accessed via introspection and are thus a marquee example of the so-called privacy of consciousness. Qualia, however, aren’t as pervasive a problem as the second issue, which will therefore receive much warranted attention.
The second problem for reductionism is the purported existence of abstracta. Abstracta are abstract objects like propositions, letters, and numbers. Of these, arguably the most seriously debated are numbers. The debate between mathematical realists and non-realists should occupy one who is attempting a clear case for strict naturalism. If numbers exist, then at least one non-natural object exists; furthermore, this non-natural object isn’t reducible to anything physical. The existence of numbers would therefore refute reductionism.
Of the four criteria Otávio Bueno offers, two have been at the center of the debate: indispensability and explanation versus description. The indispensability criterion states that mathematics must be more than a useful part of an explanation; it must be indispensable to that explanation.4 Mathematical realists don’t doubt that mathematics meets this criterion. The explanation versus description criterion states that mathematics, aside from describing a given phenomenon, must explain the phenomenon.5 On these two grounds, the nominalist has the most to say.
Mathematics, for example, doesn’t explain Kirkwood gaps. It merely provides a description of the relevant interactions between the gravitational tugs of Jupiter, the Sun, and asteroids in the asteroid belt. Briefly, Kirkwood gaps are regions within the asteroid belt that contain few asteroids; the distribution of asteroids in the belt is therefore non-uniform. There have been attempts to explain this non-uniformity mathematically (see Vrbik 2014).6 However, as argued by Bueno, proper interpretation is required before a mathematical description is relevant to the explanation of the phenomenon.
This failure to explain Kirkwood gaps doesn’t harm the realist case, however. Realists have offered other phenomena that mathematics might explain: the hexagonal cells of beehives; the lifespan of cicadas, which is either 13 or 17 years–both of which are prime numbers; the bridges of Königsberg; the plateau soap film. Each phenomenon is explained semantically. Despite this, Mark Steiner suggested that when one “remove[s] the physics, we remain with a mathematical explanation—of a mathematical truth!”7
However, Baker [forthcoming] has argued that the proposal is false. And it is false for a very simple reason: there are mathematical explanations of empirical facts where the mathematics involved has no (known) mathematical explanation. Baker has argued convincingly, I think, that the example of the bees is a case in point—i.e., that the proof of the Honeycomb Theorem given by Hales does not explain the theorem. Therefore, whatever is doing the explanatory work, it isn’t the proof of the theorem.8
Steiner’s proposal has since been replaced with program explanations, which make use of dispositions. John Heil provides an example:
Consider the dispositional property of being fragile. This is a property an object—this delicate vase, for instance—might have in virtue of having a particular molecular structure. Having this structure is held to be a lower-level non-dispositional property that grounds the high-level dispositional property of being fragile; the vase is fragile, the vase possesses the dispositional property of being fragile, in virtue of possessing some non-dispositional structural property.9
A disposition is “not causally efficacious” and “ensures the instantiation of a causally efficacious property or entity that is an actual cause of the explanandum.”10 This is precisely what program explanations make use of.
A fragile glass is struck and breaks. Why did it break? First answer: because of its Fragility. Second answer: because of the particular molecular structure of the glass. The property of fragility was efficacious in producing the breaking only if the molecular structural property was efficacious: hence 3(i) [there is a distinct property G such that F is efficacious in the production of e only if G is efficacious in its production]. But the fragility did not help to produce the molecular structure in the way in which the structure, if it was efficacious, helped to produce the breaking. There was no time-lag between the exercise of the efficacy, if it was efficacious, by the disposition and the exercise of the efficacy, if it was efficacious, by the structure. Hence 3(ii) [the F-instance does not help to produce the G-instance in the sense in which the G-instance, if G is efficacious, helps to produce e; they are not sequential causal factors]. Nor did the fragility combine with the structure, in the manner of a coordinate factor, to help in the same sense to produce e. Full information about the structure, the trigger and the relevant laws would enable one to predict e; fragility would not need to be taken into account as a coordinate factor. Hence 3(iii) [the F-instance does not combine with the G-instance, directly or via further effects, to help in the same sense to produce e (nor of course, vice versa): they are not coordinate causal factors].11
Though this is a remarkable case, it doesn’t accomplish the type of explanation the realist is looking for. There are, however, other program explanations, such as the explanation of the radiation emitted by a piece of uranium over a period of time (see Jackson and Pettit, Ibid.). This kind of program explanation is indispensable and cannot be superseded by a process explanation–i.e. “a detailed account of the actual causes that led to the event to be explained.”12 So given program explanations, it looks as though the mathematical realist has won. Not so fast.
One must ask whether program explanations bear any relation to mathematics. In other words, one must question whether program explanations are mathematical and whether they meet the indispensability criterion. The nominalist has two options: invoke Kim’s exclusion principle, which “hinders the acceptance of two causal explanations for a single effect unless an acceptable relation exists between the two purported causes,” or “view mathematics as playing a broadly representational role in scientific explanations.” On the exclusion principle, the fragility example can be revisited:
The problem with this example is that there seems to be some avenue to a conceptual reduction of the two properties, “anyone who had access to the [molecular] account would have all the significant information at his disposal which is offered by the fragility explanation.” This, I think, again gives support to the heterogeneous nature of Jackson and Pettit’s examples; if a conceptual reduction of some kind is possible on a particular occasion then explanatory exclusion will effectively remove the program explanation in the same way that metaphysical reduction will remove the higher-level cause.15
If conceptual reduction of the efficacious and the programming non-efficacious properties is possible, there is no longer room to assume that the non-efficacious property plays any role in the explanation. More pointedly, Juha Saatsi states the following: “For it seems that mathematical properties cannot ensure the instantiation of causally efficacious properties in any realist view of mathematics without some unduly ad hoc metaphysical connection being postulated between the concrete world and mathematical abstracta.”16 These epistemic and metaphysical difficulties present serious challenges to anyone looking to defend mathematical realism. At best, the question of whether numbers exist in a Platonic sense remains open, and so uncertain a proposition–namely the existence of numbers–shouldn’t pose problems for naturalism. Though qualia were given less attention, the same conclusion applies. Reductionism, therefore, remains a plausible thesis for one attempting a case for strict naturalism. It follows that the natural is that which is physical or reducible to that which is physical.
Even if applied mathematics presented an issue for naturalism, pure mathematics says nothing about the world. It is concerned with objects, relations, and structures. These, however, are abstract. They’re not spatio-temporal and aren’t causally active.17 Even assuming that the existence of numbers posed a problem for naturalism, this wouldn’t lend any support to antithetical arguments. One, for example, wouldn’t be able to argue that the existence of numbers implies the existence of a god. For the same reasons one cannot assert that there’s a moral arbiter for morality, one cannot assert that there’s a programmer for program explanations or, more simply, that there’s a being who designed a mathematical universe–that is, a being who designed the universe so that it adheres to laws fully explainable in terms of mathematics. Such a proposition would require justification.
There’s also the fact that, in assuming that numbers exist, this existence could be entirely mind-dependent. This is to say that wherever there’s a being that is sufficiently intelligent, mathematical representations will not only emerge but might be required. Given, for example, the inferior fitness of H. neanderthalensis, one is justified in regarding them as less intelligent than H. sapiens. Yet there’s evidence to suggest that Neanderthals crafted and used tools;18 there’s also evidence to suggest that they produced art.19 In the case of cave art, rudimentary mathematical thinking is required. For instance, a cave artist didn’t depict herself and aspects of nature (e.g. buffalos; trees) at actual size. She, instead, scaled down actual sizes, but still represented herself and her environment in an accurate manner. That is to say that, though she didn’t depict buffalos at their actual sizes, she still depicted them as larger than herself. This scaling down is possible evidence for rudimentary mathematical thinking. Therefore, if an intellectually inferior being is capable of thinking in this manner, it is reasonable to expect that an intellectually superior being is also capable of such thinking. When presented with given circumstances (e.g. predators that hunt in packs), the capacity to identify multiple threats becomes advantageous to survival. This capacity, in turn, yields the first fruits of mathematical thought. Mathematics, then, could have a real contingent existence rather than, as the Platonists/mathematical realists argue, a real necessary existence. Thus, despite the strength of nominalism, a watered-down realism could also dispense with the problem the existence of numbers would pose for reductionism and, therefore, naturalism.
With a definition of natural now established, and given the care taken in addressing possible problems, it is time to turn to examples of naturalistic explanations for the origin of the universe. Prior to doing this, however, it is necessary to point out that naturalism is the prevailing view in science. Sean Carroll states:
[I]f a so-called supernatural phenomenon has strictly no effect on anything we can observe about the world, then indeed it is not subject to scientific investigation. It’s also completely irrelevant, of course, so who cares? If it does have an effect, then of course science can investigate it, within the above scheme. Why not? Science does not presume the world is natural; most scientists have concluded that the world is natural because that’s the best explanation for what we observe. If you are ever confused about what “science” has to say about something, just ask yourself what actual scientists would do. If real scientists were faced with a purportedly supernatural phenomenon, they wouldn’t just shrug their shoulders because it wasn’t part of their definition of science. They would investigate it and try to come up with the best possible explanation.20
If a supernatural explanation presented itself, however, one should remember “that most naturalists would agree that naturalism at least entails that nature is a closed system containing only natural causes and their effects.”21 This is precisely what cosmological models present: a causally closed universe and explanations showing that the universe is self-contained. Even if one assumes, as William Lane Craig wrongly does, that the Borde-Guth-Vilenkin theorem yields evidence for an absolute beginning of the universe, the theorem says only that our ability to describe the universe classically gives out; it is therefore of no use in arguing for a beginning.22 Even if we assumed, however, that the theorem implies an absolute beginning of the universe, this wouldn’t imply that a supernatural explanation is the only resort.
With this in mind, naturalistic explanations can be presented. The consensus theory, i.e. the Big Bang, will be discussed. The multiverse, which might be the mark of a paradigm shift, will also be discussed. Then an exotic possibility will be explored, namely that the universe is the product of a four-dimensional black hole.
Without surveying the history of the Big Bang, an outline of its properties can be presented: singularity, inflation, baryogenesis, cooling, structure formation, accelerated cosmic expansion. Stephen Hawking describes the singularity as follows:
At this time, the Big Bang, all the matter in the universe, would have been on top of itself. The density would have been infinite. It would have been what is called, a singularity. At a singularity, all the laws of physics would have broken down. This means that the state of the universe, after the Big Bang, will not depend on anything that may have happened before, because the deterministic laws that govern the universe will break down in the Big Bang. The universe will evolve from the Big Bang, completely independently of what it was like before. Even the amount of matter in the universe, can be different to what it was before the Big Bang, as the Law of Conservation of Matter, will break down at the Big Bang.23
Hawking later adds that “the Big Bang is a beginning that is required by the dynamical laws that govern the universe. It is therefore intrinsic to the universe, and is not imposed on it from outside.”24 Baryogenesis is the period in the early universe that resulted in the prevalence of matter over antimatter. It is useful to note here that this doesn’t make a difference given the assumption that the universe was created.
Because antiparticles otherwise have the same properties as particles, a world made of antimatter would behave the same way as a world of matter, with antilovers sitting in anticars making love under an anti-Moon. It is merely an accident of our circumstances, due, we think, to rather more profound factors…that we live in a universe that is made up of matter and not antimatter or one with equal amounts of both.25
Given this, the prevalence of matter over antimatter is arbitrary on the assumption that the universe was created. It in fact counts against that notion, since it is an example of chance in the universe. After the universe began to cool, stars, galaxies, and planets began to form. The expansion of the universe, first observed by Edwin Hubble, was later found to be accelerating. What was accelerating the expansion was, at the time, unknown. Today, dark energy is held responsible and this, Sean Carroll explains, is because dark energy is persistent: it doesn’t dilute as the universe expands. It is a feature of space itself and therefore constant throughout space and time.26
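Carroll’s point about persistence can be sketched in the standard notation of cosmology. This is a minimal sketch using the usual FLRW conventions, not equations drawn from the sources cited here:

```latex
% Energy densities as functions of the scale factor a(t):
% matter dilutes with volume, radiation dilutes faster (it also redshifts),
% and dark energy (a cosmological constant) does not dilute at all.
\rho_{\text{matter}} \propto a^{-3}, \qquad
\rho_{\text{radiation}} \propto a^{-4}, \qquad
\rho_{\Lambda} \propto a^{0} = \text{const.}

% Acceleration equation: with pressure p = w\rho and w = -1 for dark energy,
% \rho + 3p = -2\rho < 0, so \ddot{a} > 0 once dark energy dominates.
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p)
```

Because every other component dilutes away while \(\rho_{\Lambda}\) stays fixed, dark energy inevitably comes to dominate, and the acceleration equation then forces the expansion to speed up.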
There have been recent suggestions that the universe is a dynamical fluid. Or, at the very least, the universe is a medium that appears to have more than three phases; in fact, as many as 10^500 and perhaps even infinitely many. So aside from curving and expanding, space could be doing something like freezing or evaporating.27
Inflation was intentionally set aside because it has received a lot of recent attention. On March 17th of this year, John Kovac, with the Harvard-Smithsonian Center for Astrophysics, announced the detection of gravitational waves. Initially, the BICEP2 collaboration had ruled out the possibility that cosmic dust in the Milky Way accounted for the polarization pattern in the Cosmic Microwave Background (CMB).
The BICEP2 collaboration, on June 19th, published a paper acknowledging that cosmic dust in the Milky Way could account for more of the B-mode polarization signal than previously thought. If the signals originate from primordial gravitational waves, they would be a smoking gun for inflation: inflation theory proposes a short burst of exponential expansion in the early universe, and this rapid expansion would produce gravitational waves.
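The “short burst of exponential expansion” can be made quantitative. The sketch below uses the conventional e-fold count; the figure of roughly 60 e-folds is the textbook estimate, not a number taken from the sources cited here:

```latex
% During inflation the Hubble rate H is approximately constant, so the
% scale factor grows exponentially:
a(t) \approx a_i \, e^{H (t - t_i)}

% The amount of expansion is measured in e-folds N. Solving the horizon
% and flatness problems conventionally requires N of roughly 60:
N = \ln \frac{a_{\text{end}}}{a_i} = H \, \Delta t, \qquad
N \approx 60 \;\Rightarrow\; \frac{a_{\text{end}}}{a_i} \approx e^{60} \approx 10^{26}
```

An expansion by a factor of about \(10^{26}\) in a tiny fraction of a second is what would source the primordial gravitational waves BICEP2 sought.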
It wasn’t long before the results came under fire and were eventually proven wrong. Charles Choi writes:
The controversy hinges on their handling of the dust emission, which relied on a preliminary map based on about 15 months of data from the European Space Agency’s Planck spacecraft. Falkowski noted the BICEP2 group might have misinterpreted the Planck data, thinking that it only contained emissions from the Milky Way when it also included unpolarized emissions from other galaxies. If the BICEP2 team did not account for this fact, they might have underestimated how polarized the foreground from the Milky Way actually was. This could mean the inflation signal the group thought it saw might only be a spurious result from Milky Way emissions.28
Though the BICEP2 results were eventually shown to be wrong, a recent suggestion could prove promising. A team consisting of Nora Elisa Chisari, a fifth-year graduate student with the Department of Astrophysical Sciences at Princeton, Cora Dvorkin with the Institute for Advanced Study at the School of Natural Sciences, and Fabian Schmidt with the Max Planck Institute for Astrophysics is now proposing that weak lensing surveys may be able to detect the cross-correlation between B-mode polarization in the CMB and cosmic shear. The team suggests that cosmic shear may be the best way to confirm primordial gravitational waves, which would leave their imprint as B-mode polarization in the CMB.
If inflation is empirically established, it would serve as indirect evidence for an inflationary multiverse. An inflationary universe, according to Tegmark, results in a Level I and Level II multiverse. Inflation would eventually end in parts of a rapidly expanding region, forming U-shaped regions. Each of these regions constitutes a Level I multiverse, while the amalgam of these universes constitutes a Level II multiverse.29 Another way to imagine this type of multiverse is to picture an enormous block of Swiss cheese. As Brian Greene explains:
[T]he cheesy parts [are] regions where the inflaton field’s value is high and the holes [are] regions where it’s low. That is, the holes are regions, like ours, that have transitioned out of the superfast expansion and, in the process, converted the inflaton field’s energy into a bath of particles, which over time may coalesce into galaxies, stars, and planets. In this language, we’ve found that the cosmic cheese requires more and more holes because quantum processes knock the inflaton’s value downward at a random assortment of locations. At the same time, the cheesy parts stretch ever larger because they’re subject to inflationary expansion driven by the high inflaton field value they harbor. Taken together, the two processes yield an ever-expanding block of cosmic cheese riddled with an ever-growing number of holes. In the more standard language of cosmology, each hole is called a bubble universe (or a pocket universe). Each is an opening tucked within the superfast stretching cosmic expanse.30
Briefly, an inflaton field is the field corresponding to a given inflationary model. Just as the Higgs field corresponds to the Higgs boson and the electromagnetic field to electromagnetism, an inflaton field corresponds to inflation. Some have written off multiverses as too hypothetical. However, Einstein’s general relativity suggested both an expanding universe and black holes long before there was evidence for either. Likewise, quantum mechanics (e.g. Everett’s Many Worlds Interpretation), string theory, and other equations suggest a multiverse. Some consider the multiverse the best explanation for the so-called fine-tuning problem in physics. When the mathematics of such equations suggests an observable aspect of a given theory or a yet-to-be-observed phenomenon, it often isn’t long before the phenomenon is found to be actual.
The Everettian interpretation isn’t the only interpretation of quantum mechanics that results in a multiverse. Howard Wiseman, a theoretical quantum physicist at Griffith University, along with his team of colleagues, suggested the “many interacting worlds” approach. On this interpretation, each world is governed by Newtonian physics. However, given the interaction of these worlds, phenomena that are associated with quantum mechanics will arise. To test this approach, Wiseman suggested that the collision of two worlds could lead to the acceleration of one and the recoil of another; this would result in quantum tunneling. Wiseman and his team go through other examples, including the interaction of 41 classical worlds resulting in the type of phenomena observed in the double-slit experiment.31
It was suggested that the multiverse could mark a paradigm shift. This could be the case because the multiverse is able to explanatorily absorb Big Bang cosmology. In other words, inflation, for example, though a property of the Big Bang theory, isn’t well understood without introducing the inflationary multiverse. For Big Bang cosmology to remain the paradigm, inflation would have to be explained without recourse to the inflationary multiverse. The multiverse, aside from explaining inflation, can, via string theory, explain the behavior of dark energy. As mentioned above, as many as 10^500 phase changes of space have been proposed by string theorists.
As surveyed above, the Big Bang and the multiverse are self-contained, naturalistic explanations for the origin of the universe. There are, however, exotic explanations. One of the more recent suggestions, offered by Niayesh Afshordi and his team, is that the Big Bang was the result of a star that collapsed in a higher dimension. This four-dimensional star collapsed into a black hole. Interestingly enough, one of the properties of black holes is the singularity. The Big Bang, coincidentally, began as a singularity. “When Afshordi’s team modelled the death of a 4D star, they found that the ejected material would form a 3D brane surrounding that 3D event horizon, and slowly expand. The authors postulate that the 3D Universe we live in might be just such a brane—and that we detect the brane’s growth as cosmic expansion.”32 This growth, Afshordi argues, is what leads astronomers to extrapolate back to the early universe and reason that it must have begun in a Big Bang. According to Afshordi, the Big Bang is a mirage.
This brief survey offers the consensus explanation: the Big Bang; a good candidate to shift the current paradigm: the multiverse; and an exotic explanation: the universe resulted from the collapse of a four-dimensional star into a black hole. The survey is by no means exhaustive. There are, for example, many inflationary theories (e.g. hybrid inflation; inflation as it relates to loop quantum gravity). There are also a number of theorems (e.g. the quantum eternity theorem). The feature they all share, however, is that they represent a self-contained universe–i.e. a causally closed universe.
A religionist may object and say that his god chose to work via naturalistic processes. William Provine offers a perfect reply to this notion:
A widespread theological view now exists saying that God started off the world, props it up and works through laws of nature, very subtly, so subtly that its action is undetectable. But that kind of God is effectively no different to my mind than atheism.33
This sort of suggestion is also in violation of Ockham’s razor or the principle of parsimony, which states that one shouldn’t multiply entities beyond necessity. Augustine quotes J.J.C. Smart:
“Ockham’s Razor does not imply that we should accept simpler theories at all costs. The Razor is a method for deciding between two theories that equally account for the agreed-upon facts” (Smart 1984, p. 124).
Given this basic heuristic, if we have no evidence for likely candidates for a supernatural event, we should adopt the simplest explanation for this fact–that only natural causes are operative within the natural world. If every caused event we have encountered can be explained in terms of natural causes, there is no reason to invoke supernatural causes that do no explanatory work for any particular events.34
Given this, such a god would be an added, unnecessary appendage. Attaching a god to naturalistic explanations doesn’t change the nature of the explanations. It doesn’t support the case for the supernatural. Therefore, given the naturalistic explanations surveyed above, creative agents are unnecessary. Attaching them to such explanations doesn’t make them necessary since they don’t lend support to the explanatory work. As argued earlier, if such gods are unnecessary, then they are also nonexistent with respect to the object purportedly created.
Ultimately, much has been said about agnostic atheism. Agnostic atheism is an epistemic position which disavows belief in gods but doesn’t claim to know whether or not they exist. Given The Argument From Cosmology, however, the fact that creative agents are unnecessary implies their nonexistence. We can know, with a high degree of certainty, that the universe is causally closed and therefore self-contained. No outside influence can causally interact with or within the universe. It is therefore possible to know that gods do not exist. From a much broader perspective, then, this argument lends strong support to gnostic atheism: an epistemic position which not only disavows belief, but claims knowledge, warrant, and justification. This implication makes for a much broader thesis that is perhaps worth exploring. Perhaps another time.
1 Augustine, Keith. “A Defense of Naturalism”. Infidels. 2001. Web. 5 Dec 2014.
2 “Reductionism”. 25 Nov 1999. Web. 5 Dec 2014.
3 Blackburn, Simon. The Oxford Dictionary of Philosophy. Oxford: Oxford UP, 1994. 301. Print.
4 Bueno, O. [2012a]: “An Easy Road to Nominalism”, Mind 121, pp. 967-982. Web. 5 Dec 2014.
5 Ibid. 
6 Vrbik, Jan. “Mathematical Exploration of Kirkwood Gaps”, Mathematical Journal 14. 2014. Web. 5 Dec 2014.
7 Lyon, Aidan [Sept 2012]. “Mathematical Explanations of Empirical Facts, And Mathematical Realism”. Australasian Journal of Philosophy, Vol. 90, No. 3, pp. 559–578. Web. 6 Dec 2014.
8 Ibid. 
9 Heil, John. Philosophy of Mind: A Contemporary Introduction 3rd Ed, p. 211. London: Routledge, 2013. Print.
10 Ibid. 
11 Jackson, Frank and Pettit, Phillip [Mar 1990]. “Program Explanation: A General Perspective”. Analysis, Vol. 50, No. 2, pp. 107-117. Web. 6 Dec 2014.
12 Ibid. 
13 Cooper, Wilson [Jun 2008]. “Causal Relevance and Heterogeneity of Program Explanations in the Face of Explanatory Exclusion”. Kritike Vol. 2, No. 1, pp. 95-109. Web. 6 Dec 2014.
14 Saatsi, Juha. “Mathematics and Program Explanations”. Australasian Journal of Philosophy Vol. 90, No. 3, pp. 579–584. Web. 6 Dec 2014.
15 Ibid. 
16 Ibid. 
17 Ibid. 
18 Tarlach, Gemma. “In Europe, Neanderthals Beat Homo Sapiens to Specialized Tools”. Discovery Magazine. 12 Aug 2013. Web. 6 Dec 2014.
19 Than, Ker. “World’s Oldest Cave Art Found—Made by Neanderthals?”. National Geographic. 14 Jun 2012. Web. 6 Dec 2014.
20 Carroll, Sean. “What is Science?”. Preposterous Universe. 3 Jul 2013. Web. 6 Dec 2014.
21 Ibid. 
22 William Lane Craig and Sean Carroll, “God and Cosmology (33:33)”. YouTube. YouTube, LLC. 3 Mar 2014. Web. 6 Dec 2014.
23 Hawking, Stephen. “The Beginning of Time”. ND. Web. 6 Dec 2014.
24 Ibid. 
25 Krauss, Lawrence. A Universe From Nothing: Why There Is Something Rather Than Nothing. 1st ed. New York, NY: Free Press, 2012. 61. Print.
26 Carroll, Sean. “Why Does Dark Energy Make the Universe Accelerate?”. Preposterous Universe. 16 Nov 2013. Web. 6 Dec 2014.
27 Tegmark, Max. Our Mathematical Universe: My Quest For the Ultimate Nature of Reality, p. 135. New York: Alfred A. Knopf, 2014. Print.
28 Choi, Charles. “Will the Bicep2 Results Hold Up?”. The Nature of Reality. PBS. 27 May 2014. Web. 6 Dec 2014.
29 Tegmark, Ibid., pp. 133-134.
30 Greene, B.. The Hidden Reality: Parallel Universes and The Deep Laws of the Cosmos, p.56-58. New York: Alfred A. Knopf, 2011. Print.
31 Hall, M. J. W., Deckert, D. A. & Wiseman, H. M. “Quantum Phenomena Modeled by Interactions between Many Classical Worlds”, Phys. Rev. X 4. Web. 6 Dec 2014.
32 Merali, Zeeya. “Did a hyper-black hole spawn the Universe?”. Nature. 13 Sep 2013. Web. 6 Dec 2014.
33 William Provine as quoted in Strobel, Lee. The Case For A Creator. Grand Rapids, Mich.: Zondervan, 2004. 26. Print.
34 Ibid.