One of the many problems we face in arguing about gods is the danger of running into a false dilemma. I am specifically concerned with arguments for theism, which I think contain many false dilemmas, but arguments for atheism should also be wary of the problem.
What we will see in many apologetic arguments for the traditional monotheistic God is a pattern of presented dilemmas. Here are two examples:
- In the Kalam Cosmological Argument, we have immaterial minds versus abstract objects as the potential cause of the universe.
- In the Fine Tuning Argument, and in teleological arguments generally, we have design versus chance.
These arguments set up two opposing options and claim these are the only options. Then they eliminate one option and conclude that the other, by default, must be correct. To understand why I find this objectionable, we should first understand what makes a true dilemma; then we can compare that standard to the arguments for God.
I think the best demonstration of a true dilemma can be found in mathematics. Consider the graph of a parabola: the curve with the formula y = x^2.
What you’ll notice from the parabola is that, for every positive y, there are exactly two solutions for x. If y = 1, x can equal either 1 or -1, and so on. Graphically, as you move up the y-axis, there is a corresponding x plotted on each side of the axis. This is a perfect dilemma: we know with certainty there are two, and only two, options.
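This two-solution structure can be checked directly. A minimal sketch (the function name is mine, chosen for illustration): for any positive y on the parabola, solving y = x^2 yields exactly the pair ±√y and nothing else.

```python
import math

def solutions_for_y(y):
    """Return the set of real x satisfying x**2 == y."""
    if y < 0:
        return set()          # no real solutions below the vertex
    if y == 0:
        return {0.0}          # the single vertex point
    root = math.sqrt(y)
    return {root, -root}      # a true dilemma: exactly two options

# For every positive y there are exactly two candidates for x.
print(sorted(solutions_for_y(1)))   # [-1.0, 1.0]
print(sorted(solutions_for_y(4)))   # [-2.0, 2.0]
```

Note the edge cases: at the vertex (y = 0) there is only one option, and below it there are none, which is why the "two and only two" claim holds for positive y.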
It’s very easy to get certainty in this two-dimensional graph world. It’s simple and you know all the rules. In the actual world, we are unfortunately faced with a great deal of epistemic uncertainty. Consider a case of how this uncertainty can creep into an apparent dilemma.
Imagine an experienced farmer driving through a rural county who notices a big red barn off in the distance. Drawing from his experience raising animals, the farmer thinks, ‘I bet that barn is used for either horses or cattle.’ The farmer does not know that a movie is being shot on location in this county and that what he actually sees is a barn façade with no animals.
We can see that even a commonplace belief is subject to epistemic uncertainty. Many will admit that the farmer is quite justified in his assertion, but he has still produced a false dilemma. The good news is that we can recognize that the movie scenario is extreme and highly unlikely, so most of the time these dilemmas will be pretty good. In other words, even though they don’t cover 100% of the possibilities, they cover a substantial portion of them. Movie sets probably occur in 0–1% of all rural counties, so the other alternatives shouldn’t bother us without some contrary piece of data (and assuming the farmer has reliable knowledge of barn uses).
It might help to visualize this. You can move away from certainty, which is a dilemma that accounts for 100% of the probabilistic sample space. In everyday scenarios with the only alternatives being incredibly unlikely, it’s not a big deal. You may still have 99% of the relevant options covered. But the further you move into areas of uncertainty and ambiguity, the less you can say that you have covered all or most of the real alternatives.
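One way to make the coverage idea concrete is to total up how much of the probabilistic sample space a proposed dilemma accounts for. All of the probabilities below are assumptions invented for the sketch, not measurements:

```python
# Illustrative coverage of a two-option dilemma in three settings.
# Every probability here is a hypothetical, chosen only to show the pattern.
scenarios = {
    "parabola (y > 0)":     {"option_a": 0.5, "option_b": 0.5},    # covers everything
    "barn (horses/cattle)": {"option_a": 0.6, "option_b": 0.39},   # tiny remainder
    "origin of universe":   {"option_a": 0.1, "option_b": 0.1},    # vast unknown remainder
}

for name, opts in scenarios.items():
    covered = sum(opts.values())
    print(f"{name}: dilemma covers {covered:.0%}, leaving {1 - covered:.0%} unaccounted")
```

The further right you go in this table, the less the two named options exhaust the space, which is exactly the slide from a perfect dilemma toward a false one.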
If we really worked out the sample spaces for such cases, we would see that moving along this spectrum means moving further into uncertainty. The formula for a parabola offers no alternatives; a guess about which shirt someone will wear offers a few (a sweater, etc.); and a question about angels…well, we just have no idea how to determine that. The first is a perfect dilemma, the second a decent dilemma, and the third a terrible dilemma. In the third case, we simply have no justification for favoring one option or for ruling out others.
So, that is the long way of saying this: The further you delve into areas of uncertainty and ambiguity, the less likely it is that you can produce a real dilemma.
Why does this matter?
Good question! Let’s think back to the two arguments for theism I mentioned. On the way to asserting the dilemma for the Kalam argument, we have to trudge through such muddy epistemic waters as whether there can be infinite time and/or space, the role of quantum mechanics in explaining our current universe, the correct interpretation of the inflationary model, whether the universe as a whole requires a cause, whether all of space and time began at a Big Bang singularity, and much more. We basically need very specific information about what took place during, and perhaps before, the Big Bang. Right now, we know very little about this, and these may never be settled issues. We can infer a great deal about what happened after the Big Bang, but not before or during it (whatever it was). Craig’s dilemma, that the universe was caused either by an immaterial mind or by an abstract object like a number, requires a stance on such matters. But with the uncertainty surrounding these questions, and the additional questions that branch off of them, we cannot adequately define our sample space. It probably isn’t as bad as the angels example, but it’s at that end of the spectrum. Whatever quotes from select physicists Craig throws out during his debates, always remember he’s in Plato’s cave trying to decipher the shadows on the wall.
How about the Fine Tuning Argument? According to this argument, the values of various physical constants occurred either by design or by chance. I question whether we really know the sample space for each constant. What could they have been other than what they are? If you are playing poker, for example, you know the next card will be limited to the 13 ranks in the deck. Even though there are infinitely many numbers, your sample space in the game is restricted: you’ll never be dealt a 127 of clubs. Christian philosophers Tim and Lydia McGrew have also pointed out that if the possible ranges are large enough, it wouldn’t matter how much the constants could vary; a coarsely tuned universe would be just as improbable, which makes the appeal to fine tuning somewhat meaningless.
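The poker point is that the rules of a game restrict the sample space, no matter how many numbers exist in the abstract. A quick sketch of that restriction:

```python
import itertools

# A standard deck restricts ranks to 13 values and suits to 4.
ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = set(itertools.product(ranks, suits))

print(len(deck))                  # 52 cards in the sample space
print(("127", "clubs") in deck)   # False: a 127 of clubs is impossible
```

Knowing the rules is what licenses the probability estimates; for the constants of nature, we have no comparable rulebook telling us what values were even possible.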
What could restrict the sample space? There could be underlying factors that make the “chance” option far less improbable than advertised. In other words, the dilemma posed by this argument isn’t really design versus chance; it is design versus some particular value assigned to the chance option. It’s a false dilemma because there are plenty of reasons to think the small numbers given, like 1 in 10^50, are not really the only options to consider. I’ve written before on how we can misjudge chance if we don’t recognize underlying factors, and the same thing might be happening here. It seems quite plausible that some law-like feature of the universe accounts for the existing range of constants.
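The effect of an underlying constraint on the “chance” estimate can be sketched with a toy model. Every range and value below is a hypothetical illustration, not physics; the point is only how the arithmetic changes once a constraint shrinks the sample space:

```python
# Toy model: probability that a constant lands in a "life-permitting" band.
# All numbers are hypothetical, chosen only to illustrate the arithmetic.

life_band = (0.999, 1.001)            # assumed life-permitting interval
band_width = life_band[1] - life_band[0]

# Naive chance estimate: uniform over an enormous assumed range.
naive_range = 1e50
p_naive = band_width / naive_range

# With a hypothetical law-like constraint pinning the constant near 1,
# the effective sample space shrinks to (0.9, 1.1).
constrained_range = 1.1 - 0.9
p_constrained = band_width / constrained_range

print(f"naive:       {p_naive:.1e}")        # astronomically small
print(f"constrained: {p_constrained:.1e}")  # no longer negligible
```

The headline improbability is entirely a function of the assumed sample space; change the space and the “chance” option is a different hypothesis altogether.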
Be wary of the deductive arguments used by apologists that produce these and similar dilemmas. Go back to the beginning. How did you get to the dilemma? What assumptions are being made without warrant? I’m willing to bet many of the apologists’ arguments, which are being presented as if they are just obvious rational deductions, are actually sweeping quite a bit under the rug.