Refuting a refutation of God.

In my recent article "Just semantics?" I offered this "refutation" of the God of classical theism:

"Sometimes the requirements imposed by people on a term just don't make sense. For example, many people's conception of God includes the following essential characteristics:

  • Absolute goodness
  • Omnipotence (being all-powerful)
Under a straightforward interpretation of these characteristics, such a conception of God is incoherent. For suppose there are two possible choices of action: one good, one evil. Is God able to perform the evil one? Under a straightforward interpretation of "absolute goodness" the answer is no, but under a common understanding of "omnipotence" the answer is yes.

That doesn't of course mean that I've just disproved the Christian God; a sophisticated theologian can come up with a theory of these two characteristics that would remove the incoherence. But the point is that many people don't have such a theory, and operate with an incoherent notion of God."

As you can see, I wasn't actually claiming this was a serious refutation of the Christian God; that would be quite naive of me. I simply wanted to illustrate that a certain (unsophisticated) definition of a concept can be defective by virtue of being incoherent. But that got me thinking: since I don't truly believe that was an actual refutation, how would I myself critique it? In other words, how would I refute my so-called refutation?


I asked that question on the Askphilosophy subreddit and got a couple of interesting answers, but I wanted to take a stab at it myself. So here is how I would answer if I had to play "God's advocate".

It is commonly recognized that being all-powerful doesn't include the ability to do things that are logically impossible. An all-powerful being isn't supposed to be able to make square circles or married bachelors. Can God create a rock he cannot lift (while remaining his all-powerful self)? No, that would be as logically impossible as a married bachelor, so it's not a problem for a theist.

Similarly, it is logically impossible for God, if he is all-good, to do an evil thing. So his inability to do evil is just like his inability to make a married bachelor - it doesn't cut against his omnipotence.

But not so fast, says the proponent of my "refutation". Then doesn't that mean God is severely limited in what he can do in any given situation? Arguably, given a million options for what to do, the only one he is actually capable of doing is whichever one is "the most" morally good. His all-goodness renders him powerless to do anything worse than that. So what kind of omnipotence is that? He is more like a robot, capable of doing exactly one thing in any given situation (unless two or more options are exactly equally good).

How can God's advocate respond to this challenge? Here's a plausible answer: the challenge rests on a confused conception of what "having an ability" is. To bring this down to earth, so to speak, consider a complete egoist: he cares only about himself and thus evaluates all options based on how much each would benefit him, completely disregarding the well-being of others.

Now, does he have the ability to swim, or stand on one leg, or lift a small rock? Of course he does, even though in any given situation it is logically impossible for him to do anything other than whichever option he thinks would benefit him the most. In other words, yes, he can swim, but he will only do so if, in that specific situation, swimming "wins" his internal evaluation of the options. On the other hand, he doesn't have the power to turn water into wine or part the Red Sea, even if those things would benefit him the most.

So what does this example show us? It shows that an ability to do X, commonly understood, means you can do X if X "wins" against the other options for you. Applying this to God, his omnipotence, that is, his ability to do any non-contradictory thing X, similarly means that he can do X in any situation where X "wins" against all other options, which for God means any situation where X is the most morally good option. And presto, the conflict between all-goodness and omnipotence evaporates.
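
To make this conception of ability a bit more concrete, here is a toy sketch in Python (purely illustrative; the names Agent, can, and choose are invented for this example, not anyone's official analysis). The idea: having an ability just means an action is in your repertoire, even though in any actual situation you only ever do whichever available option scores highest by your own evaluation.

    # Toy model of "ability to do X = you would do X if X won your internal evaluation".
    # All names here are made up for illustration.

    class Agent:
        def __init__(self, repertoire, evaluate):
            self.repertoire = set(repertoire)  # actions the agent is able to perform
            self.evaluate = evaluate           # how the agent scores each option

        def can(self, action):
            # Having the ability just means the action is in the repertoire.
            return action in self.repertoire

        def choose(self, options):
            # In any actual situation the agent does exactly one thing:
            # whichever available option scores highest by its own lights.
            available = [a for a in options if self.can(a)]
            return max(available, key=self.evaluate) if available else None

    # The complete egoist: scores options only by benefit to himself.
    benefit_to_self = {"swim": 3, "stand on one leg": 1, "lift a small rock": 2}
    egoist = Agent(repertoire=benefit_to_self.keys(),
                   evaluate=lambda a: benefit_to_self.get(a, 0))

    print(egoist.can("swim"))                           # True: he has the ability
    print(egoist.choose(["swim", "stand on one leg"]))  # "swim": it wins this evaluation
    print(egoist.can("part the Red Sea"))               # False: not in his repertoire

On this picture God's case is structurally the same: his repertoire is every logically possible action and his evaluation is moral goodness, so he can do any of them, even though in any given situation he will only ever do the best one.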

Let me know in the comments what you think - did I successfully refute myself?

3 Comments

  1. Hey Dmitriy,

    I’m going to co-opt this comment section for a little bit to just expand on my last comment in the Searle discussion thread (if it’s alright with you). I sent you a bunch of emails clarifying what I meant by behavior and thinking; to be clear, I envision them as mutually exclusive.

    Also, to sum up what I mean by a logical gap: I am just saying there is a logical gap between, on the one hand, rejecting premise 1b and accepting that “forming the brain patterns for thinking/saying it's conscious happens perfectly fine without actual consciousness (in the simulation),” and, on the other hand, the conclusion that epiphenomenalism is true.

    This is pretty obvious I think. Just because my simulation has a cause A, which replicates my behavior, it doesn’t follow that I also have that cause A. I could have another cause (consciousness).

    This is clearer when we imagine once more my ball analogy. Simply because we use gravity to simulate the mysterious “force x” that causes my ball to move, it doesn't follow that the causes were equivalent. One cause was gravity, the other force x.

    It is only by assuming that they are equivalent that you can then arrive at the conclusion that the common cause is not consciousness (since the simulation lacks it).

    In other words, you need some way to go from this: “forming the brain patterns for thinking/saying it's conscious happens perfectly fine without actual consciousness (in the simulation)”

    to this: “forming the brain patterns for thinking we are conscious happens perfectly fine without actual consciousness in our brains.”

    Replies
    1. In other words, I think the problem is that you are implicitly assuming that, because the simulation is replicating me, I must therefore have the same causes as it does. But this doesn't follow, as we can see from the ball analogy.

  2. “The fact that it's specifically gravity (a field made up of gravitons, as opposed to something else) plays no causal role in creating the parabola. It is in fact what the rock and the simulation share in common that's responsible”

    I’m not sure I ever understood this point, to be honest. Perhaps you can clarify here before you give a full in-depth response to my comments.

    I agree that the “additional stuff” peculiar to gravity doesn’t play a causal role. But how does that demonstrate that gravity = simulated gravity?

    Just because you have demonstrated that A and B have a shared thing C in common, and that C was the thing which caused D in both instances, it doesn't therefore follow that A = B.

    So how does it follow that it is not specifically the gravitons which caused the parabola, on account of there being additional possible causes of parabolas?

    There might be multiple potential ways to break my window (e.g. throwing a rock, heating it up). What all these causes have in common is that they involve the application of force, but it doesn't therefore follow that it wasn't the specific action of throwing a rock that caused the window to break.

    That’s because the word “action” encompasses both the common feature (force) and the stuff peculiar to the rock itself. Since the category of “rock breaking window” encompasses the force, it follows that we can legitimately say that the rock caused the window to break.

    Similarly, both consciousness and simulated consciousness share in common that they are computational. It was this computational function which caused the behavior in both cases, whether in the WBE or in real Alex.

    But it doesn't therefore follow that simulated consciousness = real consciousness, precisely because each term also captures peculiar features that are unique to it.

    The whole point of this debate is that they are saying that the peculiar features of real consciousness (i.e. phenomenal properties) are not shared by simulated consciousness, even if they do both share in common a computational nature.

    And this is true even if it was this shared feature which caused (in both instances) the behavior in question. Does this make sense?

