Many prominent physicists believe that fine-tuning implies the existence of a vast number of different universes, while many theologians believe it implies God. Others say that it implies neither and that everyone else is just being silly. Two opposing views were recently published, and a vigorous and fascinating discussion ensued.

**Note:** this article is pretty long and goes carefully step by step. But here is a shorter version that doesn't discuss the background and leaves out some important issues. Its big advantage is that it gets to the one main point quickly and easily, using a nice simple analogy by Steven Novella. You might want to check it out first to get your feet wet, or if you are only interested in the anthropic reasoning aspect.

First things first: what is fine-tuning? I will give you a few awesome resources below, but very briefly: physicists have discovered that the constants of nature (the speed of light, the mass of the electron, etc.) seem to be balanced on a knife's edge. If they were even just a tiny bit different, the universe would be a barren wasteland incapable of producing any kind of life. This fact, called *fine-tuning*, cries out for explanation. The three main contenders for an explanation have been recognized to be extraordinary luck, multiple universes, and a cosmic designer. Physicists generally prefer the multiple-universes option. This seems to make sense: just as, if there are trillions of different planets, a few are bound to be habitable, the same could be true of universes. So is fine-tuning evidence of a multiverse? That's where the controversy lies.

In a recent Scientific American article, philosopher Philip Goff argued that physicists commit an elementary statistical fallacy, known as the inverse gambler's fallacy, when they take fine-tuning to be evidence of a multiverse. Neurologist Steven Novella, a prominent voice in the scientific skepticism movement (devoted to scientific literacy and critical thinking), disagrees; in his article on the Neurologica blog he argues that Goff himself commits a mistake known as the lottery fallacy. Who is right?

Our plan: what we will show and how.

It turns out that both are right, and both are wrong. Specifically, the following two statements are true at the same time:

**1.** The inference from fine-tuning to the multiverse is indeed fallacious.

**2.** Nevertheless, the multiverse hypothesis does help physics in explaining fine-tuning.

How can both of those statements be true? They seem contradictory. Goff defends the first statement and doesn't believe the second, while Novella is the opposite: he believes the second statement but not the first. And yet, as I will attempt to explain, both statements are true.

**Here is our three-step plan of action.**

**Step 1.** We will explain Goff's claim and reformulate it in more precise language.

**Step 2.** We will then use what I call *the epistemic ensemble technique* to convert it into a simple counting problem, and show that fine-tuning is not evidence of a multiverse.

**Step 3.** Finally, we will understand why, despite this, the multiverse hypothesis helps physics explain fine-tuning.

**Background.**

I of course haven't told you what the above-mentioned two fallacies are. That's because both Goff's and Novella's articles do a great job of explaining them, and we actually won't need them in any of the steps. I also gave you only a brief introduction to fine-tuning. I will say a little more later, but if you want to see some really good explanations, I have two videos for you. In the one above you will see what many scientists have to say on the matter, and hear a philosophical argument for the design hypothesis. In the video below, Leonard Susskind, one of the world's top theoretical physicists and an awesome teacher and science communicator, does a great job of explaining fine-tuning in a clear and fun way and elaborates on the multiverse explanation.

I recommend that you watch them, and of course read the two opposing articles that inspired me to write mine. Here, again, is Steven Novella's article, which links to Philip Goff's article (and also summarizes it, in case you really want to keep your reading list to a minimum), explains his objections to it, and has many people weighing in in the comments section, including Goff himself.

Technically, my article is pretty self-contained, but everything will make more sense if you stop reading it now and check out the discussion that inspired it. Go ahead, don't worry, I'll wait...

...Welcome back! We are ready to embark on our three-step plan of showing that fine-tuning is not evidence for the multiverse, and yet the multiverse helps explain fine-tuning.

Step 1. Explaining Goff's claim and making it precise

So why is the inference from fine-tuning to the multiverse wrong? We can actually prove this using simple arithmetic, but first we need to make this claim more precise. I will simplify things by omitting details irrelevant to the core logic. With that in mind, here is the more precise formulation:

**Claim**. Consider two competing hypotheses:

- Normal universe, i.e. our universe is the only one, and the constants of nature were not chosen specifically to make complex structures necessary for life possible (we'll say "random constants" as a shorthand).
- Normal multiverse, i.e. the constants are random and are selected independently in each separate universe.

Then the claim is:

*fine-tuning is not evidence of the second hypothesis relative to the first.* This means the following. Suppose that before learning about fine-tuning the available evidence led us to assign certain probabilities to these two hypotheses. Then, after learning that only a tiny range of constants would lead to a life-supporting universe, we will need to update these probabilities; however, the claim is that their ratio will stay exactly the same!

**In other words.**

Before proving this claim, let me explain it a bit more informally. What the claim is saying is that if, for example, before learning about fine-tuning we would assign 7 to 1 odds in favor of a single universe versus the multiverse, then after learning about fine-tuning we should continue to assign 7 to 1 odds. Importantly, these two probabilities don't have to add up to 100%, because those two hypotheses are not the only ones available. There are other possibilities, for example that the constants are not random, that they were chosen with life in mind.
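In the odds (ratio) form of Bayes' theorem, the claim can be stated compactly (this formalization is mine; the article itself stays informal). Writing $FT$ for the evidence "only a tiny range of constants is life-permitting":

$$
\frac{P(\text{multi} \mid FT)}{P(\text{single} \mid FT)} = \underbrace{\frac{P(FT \mid \text{multi})}{P(FT \mid \text{single})}}_{\text{claimed}\,=\,1} \times \frac{P(\text{multi})}{P(\text{single})}
$$

The claim is precisely that the likelihood ratio in the middle equals 1: how narrow the life-permitting range is, is a fact about physics whose probability is the same under either hypothesis, so the prior odds carry over to the posterior unchanged.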

But right now we are only concerned with the two hypotheses above, specifically with how much more likely one is than the other, aka the ratio of their probabilities. That's what the Scientific American article is referring to when it says that the inference from fine-tuning to the multiverse is a mistake. Translated into the more precise language of probabilities, it is saying that learning about fine-tuning should not increase the probability we assign to the multiverse *relative* to the single-universe hypothesis. The ratio of the two probabilities must remain the same, even if the probabilities themselves change.

**Learning about fine-tuning.**

We should also clarify exactly what's meant by "learning about fine-tuning". As we learned more and more about the physics of fundamental particles and cosmology in the second half of the 20th century, we were able to understand better and better what the universe would look like if the values of the fundamental constants were different. Specifically, we gradually learned that for almost any values of these constants the universe would not be life-supporting. So this is what we mean by "learning about fine-tuning": the drastic decrease in our estimate of the chance that a randomly selected set of constants would result in a life-supporting universe. Of course this reassessment was a gradual process, but we can simplify things without damaging the core logic by assuming it happened quickly: let's say, for example, that before 1980 we thought that 1 in 1000 randomly selected universes would be life-supporting, and in 1980 we revised our estimate and now think that the chance is 1 in $10^{229}$. The specific numbers are reasonable ($10^{229}$ is Lee Smolin's estimate given in the article), but they are not particularly important; what's important is that our estimate that a random universe will be life-supporting decreased enormously.

Step 2. Recasting the question into a simple counting problem.

So the question is: once we learned about fine-tuning (in 1980 in my simplified picture), how should we revise the probabilities of the two hypotheses, a single universe with random constants and a multiverse with random constants in each? If, for example, before 1980 we thought that a single universe is 7 times more likely than a multiverse, what should we think now? I want to prove that we should think the exact same thing: that it is still 7 times more likely. Note that the two probabilities *can* change, as long as their ratio stays the same. We will soon see that they in fact both decrease (by the same factor), which means that some third hypothesis must increase in probability.

**Bayesian calculus anyone?**

The argument given in the Scientific American article is not very clear, I believe, so I'm not at all surprised that many people, including Dr. Novella, found it puzzling or unconvincing. Now that we have made the claim more precise, we don't have to rely on qualitative arguments anymore; we can prove the claim mathematically, using the calculus of Bayesian probabilities.

But I won't subject you to that: a rigorous mathematical proof would be pretty boring, and probably only clear to people who are very comfortable with Bayesian probabilities. Instead, I will illustrate the proof with a story, designed to be very close in structure to the situation with single and multiple universes, but easier to imagine. It is also similar to the story used in the article, but I believe it is less ambiguous, closer to the actual situation, and better illustrates the logic.

**Story time.**

The story begins, just as it does in the original article, with you waking up in a room with no memories. There is a monkey, a typewriter, and some other equipment in the room. Days and weeks go by (you are provided with food, etc.) and eventually, through some sort of investigation, you find out your origin story. You were actually grown from a stem cell in an incubator by the Joker. He is a mad scientist and is able to grow adult people with normal intelligence but no memories from just a single cell. Before you woke up for the first time you were put in this room with the monkey and the typewriter, and were subjected to a life-and-death lottery: the monkey typed out random letters on the typewriter, and unless the letters matched some specific predetermined sequence perfectly (for example, the monkey had to type exactly "Hi!"), the room was going to be filled with poisonous gas and you would die well before waking up. Are you starting to see the parallel with the life-permitting universe scenarios?

Now, your investigation didn't reveal exactly what the monkey was supposed to type out, or how unlikely that would be by random chance, but your best estimate based on the available evidence was that the chance was 1 in 1000. Further, your investigation revealed that the Joker entertained three possible scenarios for his weird experiment:

**1.** Normal single-room scenario. The Joker did this with only one human stem cell.

**2.** Normal multi-room scenario. The Joker collected 1000 stem cells, and performed the experiment with all of them, putting them all in separate rooms, each room equipped with its own monkey and typewriter.

**3.** Rigged scenario. Here, unlike the other two scenarios, the typewriter is not normal: it's rigged so that no matter what the monkey types, the text that comes out of the typewriter is always whatever it needs to be for you to survive. The number of rooms here is not important right now, but let's say for simplicity it's again just a single room.

Suppose further you found out how the Joker decided which scenario to implement. He wanted the first scenario to be far more likely than the second, and the second far more likely than the third. So he took 7 million balls and marked them "Normal single room", took 1000 more balls and marked them "Normal multi-room", and took one more ball and marked it "Rigged single room". He then put all of these balls in a big box, mixed them up thoroughly, randomly chose one ball, and implemented whatever scenario was written on it. Let's summarize the story in a table:

|  | Single | Multi | Rigged |
| --- | --- | --- | --- |
| Balls | 7000000 | 1000 | 1 |
| Embryos | 1 | 1000 | 1 |
| Survive | 0.1% | 0.1% | 100% |

Of course the balls are just a convenient way to represent a probabilistic choice. The exact numbers I am using here don't have any particular significance; we could have used variables instead, but you will see in a minute why I chose these numbers for this example. So now the question is:

**Question 1.** With all of that information, what level of confidence should you assign to each of these three scenarios?

We will answer that question in a second, but first, just so you understand where we are going with this, here is the second question:

**Question 2.** Now suppose you "learned about fine-tuning", which in our story means this: you found out more information about what the monkey was actually supposed to type for you to survive, and you discovered that the chance of survival (if the typewriter is normal, not rigged) is actually less than 1 in 1000: it's 1 in 2000. What probabilities should you assign to the three scenarios now? Here is the updated table, which now covers the before and after "learning about fine-tuning" versions of the question:

|  | Single | Multi | Rigged |
| --- | --- | --- | --- |
| Balls | 7000000 | 1000 | 1 |
| Embryos | 1 | 1000 | 1 |
| Survive₁ | 0.1% | 0.1% | 100% |
| Survive₂ | 0.05% | 0.05% | 100% |

**Parallel with fine-tuning.**

I hope it's clear now how the story parallels the fine-tuning situation. Each room is analogous to a universe, the correct text coming out of the typewriter is analogous to the constants of nature resulting in a life-permitting universe, and the rigged scenario corresponds to something like an intelligent designer ensuring that the constants are life-permitting.

The balls are just a convenient way to represent how likely or unlikely we think each of the three scenarios is *prior* to considering the fine-tuning and anthropic information we are about to consider. In the case of the universe vs. multiverse question, without any specific evidence the standard scientific and philosophical position would be to say that we only know of our own universe and would need some really good evidence to accept that there are other universes. In other words, initially, *prior* to evaluating some specific piece of evidence, we should "bet" in favor of the single-universe hypothesis. This is why I chose the ball distribution heavily in favor of the single-room vs. multi-room scenario (7000000 vs. 1000), to reflect this reasonable initial bias. I chose an even lower number of balls (only 1) for the rigged scenario, to reflect an even stronger scientific bias against the design hypothesis. In probabilistic analysis these initial credences we assign to hypotheses prior to evaluating some specific evidence are called *prior probabilities*, or simply *priors*.

The notable departure in our story is that in our case "learning about fine-tuning" lowered the survival probability only by a factor of two, whereas in the actual fine-tuning situation it entailed lowering the probability by many orders of magnitude (in our fake-history version, from 1 in 1000 before 1980 to 1 in $10^{229}$ after 1980). I made that change to make sure our heads don't blow up from dealing with incomprehensibly huge numbers, but of course the logic is unaffected by which numbers we use.

**The answers.**

First I will tell you the answers to these two questions, and then show how I got them. For question 1, before we "learned about fine-tuning", the odds we should assign to the three hypotheses are 7000:1000:1, meaning we should think that the first hypothesis (normal single room) is 7 times more likely than the second (normal multi-room), which in turn is 1000 times more likely than the third (rigged scenario). The odds we should assign after learning about fine-tuning are 3500:500:1, meaning the relative odds for the first two hypotheses got reduced by half, but crucially (and this is what the claim we are proving is all about), *they got reduced by the same factor, so their ratio stayed the same* (it was 7:1 and it stayed 7:1).

**The epistemic ensemble to the rescue.**

How did I arrive at those answers? You might be especially puzzled by the fact that the number of balls for the first scenario is 7000 times greater than for the second scenario, yet the probability you should assign to it is only 7 times greater. This is where all the subtleties of anthropic reasoning come out of the woodwork.

The easiest way to handle such questions is by using what I call the *epistemic ensemble* trick. The trick entails replacing all probabilities with ensemble frequencies. What does that mean? In our situation, which scenario will occur depends on a randomly selected ball, out of a total of 7,001,001 balls. So let's imagine instead that this exact experiment was performed not once but 7,001,001 times, once for each ball. We can pretend there are 7,001,001 parallel worlds, or planets, each with its own copy of the whole setup. This imaginary collection of parallel experiments is what I called the *epistemic ensemble*.

You might ask whether introducing all these parallel worlds is a legitimate move. Yes: having many identical independent experiments instead of one doesn't change probabilities, it just makes them more "visible to the naked eye". For example, if you flip one coin you of course expect the probability of heads to be 50%, but you won't "see" this probability unless you perform many identical versions of this experiment. And if you do, you will then expect roughly 50% of them to result in heads.
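The coin-flip analogy is easy to check numerically. Here is a minimal sketch (my own illustration, not from the article): we replace one coin flip by an ensemble of 100,000 identical flips, and the 50% probability becomes directly visible as a frequency.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
n = 100_000

# An "epistemic ensemble" for a single coin flip: many identical,
# independent copies of the same experiment.
heads = sum(random.random() < 0.5 for _ in range(n))
freq = heads / n

print(f"frequency of heads: {freq:.3f}")  # close to 0.5
```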

**Easy as 1-2-3.**

You might then ask: ok, maybe it's legitimate but what's the point? The point is that the epistemic ensemble simplifies things tremendously. Now, instead of figuring out which formula from the Bayesian probability calculus we are supposed to use, we can just do simple counting. Remembering that each ball now corresponds to a different parallel world, let's update our table with the total number of embryos for each scenario:

|  | Single | Multi | Rigged |
| --- | --- | --- | --- |
| Balls/Worlds | 7000000 | 1000 | 1 |
| Embryos | 1 | 1000 | 1 |
| Tot. embryos | 7000000 | 1000000 | 1 |
| Survive₁ | 0.1% | 0.1% | 100% |
| Survive₂ | 0.05% | 0.05% | 100% |

Now all that's left is to count how many of these embryos will survive the whole "monkey business" to become a conscious adult. Let's focus on question 1, i.e. *before* "learning about fine-tuning". There are 7 million worlds where the Joker decided to go with the normal single-room scenario. Out of those 7 million embryos, we estimate that only 7000 survive (since the chance of survival is 1 in 1000, or 0.1%). There are 1000 worlds where the Joker decided to go with the normal multi-room scenario. Each world contains 1000 rooms, so we have 1 million multi-room embryos, of which only 1000 survive. Finally, there is only one world where the Joker decided to go with the rigged scenario. In that scenario the embryo always survives. In total, we have 8001 embryos surviving to become conscious adults, split 7000:1000:1 among the three scenarios.

And now here is the key to anthropic reasoning: *you have no idea which one of those you are, you could be any one of them, and you should assign equal degrees of confidence to being any one of them.* This is where the answer 7000:1000:1 comes from. If this step seems obvious to you and you have no idea why I am making a big deal of it, you'd be surprised at how many people, including professional philosophers and physicists, have trouble with it. Those who don't accept it have to pay a heavy intellectual price in the form of bizarre conclusions and paradoxes. Among them is, for example, the doomsday argument, the preposterous conclusion that we can predict the end of the human race from basically no data at all.

The answer to the second question, i.e. for *after* you "learn about fine-tuning", is obtained in exactly the same way, except that only half as many embryos survive in the first two scenarios, resulting in a total of 3500+500+1 = 4001 surviving adults. This gives the answer 3500:500:1, with the ratio of probabilities staying at 7:1 for the single vs. multi-room scenario. Here is a summary of the final results:

|  | Single | Multi | Rigged |
| --- | --- | --- | --- |
| Balls/Worlds | 7000000 | 1000 | 1 |
| Tot. embryos | 7000000 | 1000000 | 1 |
| Adults₁ | 7000 | 1000 | 1 |
| Adults₂ | 3500 | 500 | 1 |
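The counting above can be reproduced mechanically. Below is a minimal sketch (the names and layout are my own, not from the article) that multiplies worlds × embryos-per-world × survival chance for each scenario, exactly as in the tables:

```python
from fractions import Fraction

# One entry per scenario: (number of ensemble worlds, embryos per world).
SCENARIOS = {
    "single": (7_000_000, 1),
    "multi":  (1_000, 1_000),
    "rigged": (1, 1),
}

def surviving_adults(p_survive):
    """Count surviving adults per scenario across the whole ensemble.
    The rigged typewriter guarantees survival regardless of p_survive."""
    counts = {}
    for name, (worlds, embryos) in SCENARIOS.items():
        p = Fraction(1) if name == "rigged" else p_survive
        counts[name] = int(worlds * embryos * p)
    return counts

before = surviving_adults(Fraction(1, 1000))   # question 1
after  = surviving_adults(Fraction(1, 2000))   # question 2

print(before)  # {'single': 7000, 'multi': 1000, 'rigged': 1}
print(after)   # {'single': 3500, 'multi': 500, 'rigged': 1}
```

The single:multi ratio is 7:1 in both runs, while the rigged scenario's share of surviving adults rises from 1/8001 to 1/4001.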

**Finishing the proof.**

The crucial point here is that learning about fine-tuning decreased the relative probability of the first and second scenarios by exactly the same factor: in our example, by half. Of course in the real fine-tuning situation it decreases by a much, much bigger factor. By contrast, the third scenario is not affected by our learning about fine-tuning, since that new information only concerns scenarios in which the constants of nature are selected at random, or, in our Joker story, scenarios where the typewriter is not rigged.

Just to make sure we don't get confused: the *absolute* probability of the third scenario of course does change. In our example, initially it was 1 in 8001, and after learning about fine-tuning it became 1 in 4001. And the absolute probabilities of the first two scenarios became smaller.

We have now successfully proven the first statement, the one defended in the Scientific American article: learning about fine-tuning does not change the relative probability of the multiverse versus a single universe with random constants of nature (7:1 in our Joker story). In other words, *fine-tuning is not evidence of a multiverse.*

Step 3. Explaining why nevertheless the multiverse can help physics explain fine-tuning

Now that we know that the inference from fine-tuning to the multiverse is fallacious, the question is: how can the second statement be true? In other words, how can the multiverse hypothesis nevertheless help physics explain fine-tuning?

**Realistic numbers.**

In order to answer that, we need to use more realistic numbers in place of the ones I selected for my Joker example. The two crucial differences are: in reality the number of rooms (universes) in the multi-room scenario is not 1000 but some enormous number, equal to $10^{500}$ according to one estimate; and the probability of survival (the fraction of universes that are life-supporting) does not merely halve, as in our story, but drops to roughly 1 in $10^{229}$ according to one estimate.

At this point I anticipate that your head is starting to hurt from all these numbers and powers of 10 floating around. So let me, instead of plunging directly into the math, simply describe in words what the situation will look like with these more realistic numbers. There are two changes:

1. In the single room scenario the probability of producing a conscious person (life-permitting universe) has now decreased enormously. In order for an embryo to survive, the monkey now has to type several predetermined sentences without a single mistake, instead of maybe just a couple of letters for our old numbers (when the survival odds were 1 in 2000).

2. In the multi-room scenario the probability of any particular embryo surviving has decreased just as much, but now we have an enormous number of rooms. So what happens to the total number of surviving embryos? The decreased probability of survival causes that number to go down, but on the other hand the increased number of rooms causes it to go up. So whether it goes up or down depends on which enormous number is more enormous. With the estimates that I gave above, which are by no means set in stone, the enormous number of rooms wins, and the number of surviving embryos goes up by a lot.

**Realistic odds.**

So remember how in our example the odds for the three scenarios after learning about fine-tuning ended up being 3500:500:1? With the more realistic numbers substituted in, the ratio becomes something like:

$$
\frac{1}{\text{enormous}_1} : \frac{\text{enormous}_2}{\text{enormous}_1} : 1
$$

To be more precise, if we let $N_{rooms}$ be the number of rooms in the multi-room scenario (which was 1000 before) and $P_{survive}$ be the chance of survival, then our table will look like this:

|  | Single | Multi | Rigged |
| --- | --- | --- | --- |
| Balls/Worlds | 7000000 | 1000 | 1 |
| Tot. embryos | 7000000 | $1000\,N_{rooms}$ | 1 |
| Adults | $7000000\,P_{survive}$ | $1000\,N_{rooms}P_{survive}$ | 1 |
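With exact rational arithmetic we can plug in the frighteningly large illustrative estimates quoted above ($10^{500}$ universes, a 1 in $10^{229}$ survival chance) without any overflow. This is my own sketch, and, as the text stresses, the two estimates are by no means set in stone:

```python
from fractions import Fraction

N_ROOMS = 10**500                  # universes per multiverse (one estimate)
P_SURVIVE = Fraction(1, 10**229)   # chance a random universe is life-permitting

adults_single = 7_000_000 * P_SURVIVE        # single-universe scenario
adults_multi = 1_000 * N_ROOMS * P_SURVIVE   # multiverse scenario
adults_rigged = 1                            # rigged scenario always survives

print(adults_single < adults_rigged)  # True: single loses badly to rigged
print(adults_multi > adults_rigged)   # True: 10^500 rooms overwhelm the tiny odds
print(adults_multi == 10**274)        # True: 1000 * 10^500 / 10^229
```

With these particular numbers the enormous number of rooms wins, which is exactly the sense in which the multiverse "competes well" below.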

**Why the multiverse can help explain fine-tuning.**

So now we see what's happened: compared to the rigged scenario (the third one), the single-room scenario (the first one) has become extremely unlikely; however, the multi-room scenario (the second one) competes quite well if $N_{rooms}P_{survive}$ is big, i.e. if the enormous number of rooms is more enormous than the enormously small probability of survival. And that's the reason the multiverse hypothesis is helpful for physics in explaining the fact of fine-tuning: under certain assumptions the single-universe scenario starts to look very weak in comparison to the rigged scenario (which sits somewhat uneasily with physics, since it leads in the direction of intelligent design), but the multiverse hypothesis is able to compete very well.

As you might suspect, that's not the end of the story: there is another serious challenge for the multiverse hypothesis, the Boltzmann brain problem. But that's a topic for another time.

Summary.

Let's quickly remind ourselves of everything that's happened. We have shown that Goff is right when he says that the inference from fine-tuning to the multiverse is fallacious. How did we do that? First, we reformulated his claim in a more precise language amenable to mathematical analysis, and then we used the epistemic ensemble technique to convert the problem into a simple counting problem. We have also shown why, despite this, the multiverse hypothesis helps the physicist who, in the face of life's seemingly improbable existence, still wants to build a model of our physical reality that is not mysteriously "rigged" in favor of life. We showed the following two statements to be true:

**1.** The inference from fine-tuning to the multiverse is fallacious: however more or less likely we thought the multiverse was compared to a single universe before we learned about fine-tuning, we should still think the same after.

**2.** Nevertheless, the multiverse hypothesis does help physics in explaining fine-tuning: fine-tuning significantly lowers the probability of a normal single universe relative to the rigged scenario but, depending on the values of some enormous numbers, the multiverse can still be more likely than either of the other two scenarios.

All of this can be very compactly summarized by the final results for our Joker example in our last table:

|  | Single | Multi | Rigged |
| --- | --- | --- | --- |
| Adults/Odds | $7000000\,P_{survive}$ | $1000\,N_{rooms}P_{survive}$ | 1 |

We can see that if we learn that $P_{survive}$ is much smaller than we previously thought, the first two numbers become much smaller. However, *their ratio stays the same*. And that's the compact version of the first statement. All it says is that the relative "attractiveness" of the multi vs. single scenario stays the same after we learn about fine-tuning.

Secondly, we can see that if the number of rooms (universes) in the multi scenario is sufficiently enormous, *the second number can be very big.* And that's the compact version of the second statement. All it says is that even if the survival probability is so ridiculously low that the single room/universe loses to the rigged scenario, the multi option can still win out and be the best explanation.

**What are Goff's and Novella's mistakes?**

I don't think that either of them commits some elementary statistical fallacy. It seems they both make more subtle mistakes navigating the treacherous waters of anthropic reasoning. Specifically, they don't fully recognize the effect of $N_{rooms}$.

Philip Goff recognizes that the relative attractiveness of multi vs. single is unaffected by fine-tuning (statement 1), but it seems he doesn't appreciate the explanatory power of the multi scenario given a sufficiently enormous $N_{rooms}$. Steven Novella, on the other hand, recognizes the explanatory power $N_{rooms}$ can bring (statement 2), but only for the case *after* learning about fine-tuning. He doesn't see that the attractiveness of the multi vs. single scenario due to a huge $N_{rooms}$ is exactly the same *before* learning about fine-tuning.

It also seems to me that neither of them quite appreciates the difference between (a) fine-tuning being evidence of the multiverse and (b) the multiverse being a good explanation of fine-tuning. Perhaps this is what prevents them from recognizing that statements 1 and 2 can be true at the same time.

Epilogue. Warning: side-effects may include increased puzzlement.

After reading this, I wouldn't be surprised at all if you still had many questions, or perhaps you've thought of some objections to my analysis. Please share them in the comments and I will try to respond. For now, let me just list some possible questions that can arise:

- We never seemed to use the assumption that in the multiverse scenario the constants are selected independently in each universe instead of being the same in all universes, so what role did it play?
- If after learning about fine-tuning we don't assign a nearly 100% probability to the rigged scenario, then doesn't it imply that before that we would have had to be assigning a ridiculously low probability to it?
- What exactly is the analog of knowing the probabilities for the Joker to choose each of the three scenarios?
- Doesn't the math go out the window if the universe or multiverse is infinite?
- Doesn't this analysis imply that we should think that the multiverse is enormously more likely than a single universe, and that we should have thought that even before we learned about fine-tuning?
- Can't another hypothesis be not that the constants are "rigged" to be life-permitting by some cosmic intelligence, but that they actually had to have the values they do because of some deeper underlying theory?

## 33 Comments

This is very cool. I have read parts of the post and should comment on what's going on so far.

I went to the University of Berkeley where, after being on the dean's list in the math department for a few quarters in a row, I came to learn I am not a mathematician. The subject is a language unto itself, and as good as I was at it I could never do the definitions and proofs with the fluency required.

I wanted to program a computer to help me win games-- they wanted me to prove everything I said starting at 1+1 =2 and definitions. :-)

I am not a mathematician, but unless you want me to start with definitions we should be just fine. :-)

I have come up with a way to discuss this that I hope will be good. I spent 1000's of hours with mathematicians and I will do my best to give you a glimpse of what might be going on inside one's head. I have to warn you this isn't for the faint of heart and they have no diplomacy. Warning: shocking material ahead. :-)

(I'm not going to defend any of the statements the imagined mathematician makes- I'm just trying to recreate what they might be).

The post:

"This fact, called fine-tuning, cries out for explanation."

Reply -- The explanation for something happening at random is that it happened at random. The fact that this universe is fine-tuned doesn't need explaining because it couldn't be any other way. If it weren't that way we wouldn't be talking about it.

The only thing about fine tuning that could possibly ever need to be explained would be if this one weren't. Now that would take some explaining.

Of course this universe is fine tuned for life and that happened at random.

"The three main contenders for an explanation have been recognized to be extraordinary luck, multiple universes, and a cosmic designer."

Reply -- How about "It couldn't be any other way"?

"Physicists generally prefer the multiple universes option. This seems to make sense: just like if there are trillions of different planets a few are bound to be habitable, the same could be true of universes. So is fine-tuning evidence of a multiverse?"

Reply -- I hate saying derogatory things about physicists, so I'd question that statement before saying "Those must be the ones who didn't do well in their probability classes" (and feel bad about having said it).

The explanation of how and why this universe came out the way it did is that it happened at random. Why not just explain that?

OK, I'll stop being that guy: I think you get the idea. The math guy might not see things the same way as other people. I used to hang with a guy who had the goal of making a 'Hilbert space' in his mind's eye so he could understand things. All I want for Christmas is a visualizable Hilbert space.

That's a mathematician!

When I read to step 3, it sounds like a wonderful plan and I'm interested. The mathematician in me is wondering, 'why not do a Bayes -- take a few seconds and be certain?'

I'll discuss the sections in more detail in a bit.

Thank you for your comment, can I clarify: in one place you suggest the explanation "it happened at random", in another place you suggest "it couldn't have been any other way". Do you intend for these to mean the same thing? To me the first phrase sounds like the first of the three main explanations I listed, the "extraordinary luck" option. The second phrase sounds like it means something different though, something that is indeed different from any of the three main explanations I listed. This fourth option is often referred to as "physical (or metaphysical) necessity".

Two points in response. First, physicists mostly find this a very implausible explanation. Briefly, that's because changing the values of fundamental constants in the equations produces perfectly coherent model universes; nothing breaks down. Take a look at the second video I included.

Second, that option is subsumed in my analysis into the "rigged scenario", so the logic goes through with no adjustments needed.

Hope this makes sense, and will be happy to hear your other thoughts about Step 3 and Summary.

'It happened at random' means that's how it happened. It's a brutally unsatisfactory explanation, described by 'uncaused', 'without reason', 'unpredictable', 'without further explanation'. Math people tend to be so rational that when they see that's the best they can do, they accept it and move on. I'm not a mathematician. :-) I have learned to accept it as an explanation though. Can't do better.

'It couldn't be any other way' means that if it weren't fine-tuned we wouldn't be talking about it.

You might say the fine-tuning is a necessary condition of our being able to discuss it.

If we are discussing it, then the universe is fine-tuned. Having a conversation in a universe fine-tuned for life is what one would expect; that's not so surprising, since it (our being able to converse) couldn't have happened otherwise.

Now, having a conversation in a universe that wasn't fine-tuned, so that we couldn't possibly be having it? That's what would take explaining. :-)

It's a funny, counter-intuitive thing, this probability frame of mind.

At this point the math guy has done the Bayes. He started writing formulas as soon as he started reading -- it's a reflex -- and so he knows P(B|A) = P(B), and that means these are independent variables, and trying to figure out how they are related doesn't make sense.

He may or may not know why they are independent. It might come to him right away, it might come to him after a bit, it might never come to him.

What he knows is that if he did his math correctly, the answer he got is as sure as 1+1=2, because he proved everything he did long ago. He doesn't have to know why 1+1=2. It does. He doesn't have to know why the variables are the way they are, but they are. He knows the ramifications of that fact because that's pretty much the basics of the subject.

It's a strange thing: with properly done math you can get results everyone will agree are equal in truth value to 1+1=2, but nobody has any idea why or how.

If he were going to continue beyond this point, he might be looking for an explanation of why they are independent variables, if it hadn't come to him yet (at first I wrote 'figured it out', but 'comes to him' is probably a better way to say it). It's highly unlikely he would be looking for an answer; he has that already.

I think if you understand that aspect of the person's view of things it will make it easier to understand why he might see what he sees.

Are we good?

I wanted to give a bit more:

At this point the mathematician knows the explanation for the fine-tuning (it happened at random) and that the concepts being discussed are not related. He may have noted early on that 'how many' and 'what kind' are two distinctly different questions.

He would look and see the answer was wrong at the bottom and wonder why anyone was discussing the contents of a story about monkeys that gives a wrong answer.

He might check and find the author of the monkey story has staked a claim that he knows this math problem better than mathematicians with PhDs do. The author has made a big stink about this.

He might note the author did not know the fundamental relationship of 'independence'.

'Hubris' is a word that may come to mind.

https://en.wikipedia.org/wiki/Hubris

You follow?

I forgot. The math man might go to the author's website and find a person touting himself as some sort of pro-science and skepticism expert. He might look in the comment section and see that a number of math experts have attempted to alert the author to his error. He would see how those people have been treated.

I don't know what he might think after that.

If you need a more detailed analysis of the third section: it's wrong, we can know that with certainty, but I'm not sure what the error(s) are. I could try an autopsy if you want.

If I understand you correctly, you are not advocating for a fourth alternative explanation. "It happened at random" is a description that fits my first two scenarios, i.e. the single universe/room and the multi-verse/room. So I think my analysis goes through, unless you see a mistake in some specific step.

Hello, I attempted to make a previous comment relating to Goff and Steven's positions, but it appears not to have gone through. The comment was about Goff's position in the Scientific American article, which is actually based on a paper he published (found here: https://www.philipgoffphilosophy.com/uploads/1/4/4/4/14443634/is_the_fine-tuning_evidence_for_a_multiverse_.pdf).

If you read it, you will see that Goff isn't arguing against the position that evidence of fine tuning increases the relative probability between a multiverse and a single-universe cosmology. Rather, he is critiquing a line of argument that attempts to demonstrate that fine tuning alone should increase the absolute probability of the multiverse being true. Notice that a distinction is typically made between two variants of the multiverse argument, known as the evidentiary and the statistical. Here Steven and Goff are addressing the latter, which concerns itself with reasoning to the multiverse on the basis of fine tuning alone (as opposed to the evidentiary variant, which incorporates things like reasons for believing in inflation).

Nevertheless this blog post, somewhat extraneous to the argument, still interests me. Firstly, I wonder why you have privileged the single universe hypothesis over the multiverse (in the first row). You say this is on account of the fact that we know we have a universe, but have no reason to assume the multiverse (outside of mechanisms like inflation). But I think this is confused; our knowledge that there exists a universe is not analogous to the knowledge that there is ONLY a single universe. The latter is exclusionary; the former is not.

Thus, we have no reason to privilege the single universe cosmology over the multiverse. In other words, we have no universe generating mechanism at hand which could tell us how many universes we could reasonably expect to have. In the absence of this, we must remain agnostic or adopt something like the principle of indifference (50/50 probabilities).

Thanks for a thought provoking comment. I had actually seen Goff's comment on Steve's blog and read his academic article before writing this blog post. I have several disagreements with it, but they are not that relevant to the point of my post.

Why? Because my goal was not to defend Goff's argument, but to defend (a strengthened version of) his claim with my own argument, which I immodestly think is much better than Goff's.

But your main point is that my claim from step 1 is completely different from Goff's claim. I don't think so, let's compare:

I showed: learning about fine-tuning (updating credences to factor in fine-tuning) decreases the credences for the first two scenarios (single & multi) by the same factor.

From which, if we use the obvious statement that the third (rigged) scenario is unaffected, or not affected nearly as much, by learning about fine-tuning (i.e. the total expected number of observers / surviving embryos doesn't decrease nearly as much in the rigged scenario), it follows that the above mentioned factor is bigger than 1. Therefore the credence for the multiverse does not increase upon factoring in fine-tuning, which is identical to:

Main statement: fine-tuning is not evidence for the multiverse,

which is Goff's claim.

Do you agree?
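To make the comparison concrete, here is a small numerical sketch of the update I described. The priors and the factor f are illustrative placeholders only, not values anyone here is committed to:

```python
from fractions import Fraction

# Illustrative priors for the three scenarios (placeholder numbers).
priors = {"single": Fraction(45, 100),
          "multi":  Fraction(45, 100),
          "rigged": Fraction(10, 100)}

# f = chance that *this* universe comes out life-permitting by luck.
f = Fraction(1, 10**6)
likelihoods = {"single": f,            # credence decreased by the factor f
               "multi":  f,            # decreased by the *same* factor f
               "rigged": Fraction(1)}  # essentially unaffected

unnormalized = {k: priors[k] * likelihoods[k] for k in priors}
total = sum(unnormalized.values())
posteriors = {k: v / total for k, v in unnormalized.items()}

# The single:multi ratio is unchanged, multi's credence drops,
# and the rigged scenario soaks up almost all the probability.
assert posteriors["multi"] / posteriors["single"] == priors["multi"] / priors["single"]
assert posteriors["multi"] < priors["multi"]
assert posteriors["rigged"] > priors["rigged"]
```

Whatever illustrative numbers are plugged in, the structure is the same: since single and multi shrink by the same factor while rigged does not, the credence for the multiverse cannot go up on learning about fine-tuning.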

Finally, to address your last point, the chosen number of balls is completely immaterial to the derivation; we could have just used variables representing the priors. Still, I disagree with this:

"But I think this is confused; our knowledge that there exists a universe is not analogous to the knowledge that there is ONLY a single universe."

because I never made the assumption that it's analogous. The assumption behind assigning a lower prior to other universes is the same assumption that makes us demand a very high degree of evidence before assigning a high posterior credence to the existence of some new object or phenomenon like gravitational waves, a new planet or a new particle. If we need a high degree of evidence to have a high posterior then the prior must be low. But again, all this is irrelevant to proving my claim from step 1.

Hey Dmitriy,

Thank you in turn for an interesting reply. I agree that there are two separate claims here; one relating to fine tuning increasing our credence in the multiverse, and the other to the reason for assigning the multiverse a lower initial probability than a single universe cosmology. I simply assumed that you were adopting both claims. If the second claim is not so important to you, then feel free to forget it. But forgive me for continuing to address it as I find the topic interesting.

Firstly, I think small differences in magnitude of probability are in fact important here. That is because the main line of argumentation that Goff is attacking (fine tuning increases our credence in the multiverse, on the grounds that the multiverse if true increases the probability the fine tuning is the case) is an extraordinarily weak claim, with no guarantee that the probabilities shift by much of anything.

I quote White from his paper ("Fine-Tuning and Multiple Universes"), which Goff's argument is about:

“One possible reason for concern is that if P(M|K) is extremely low, then P(M| E & K) might not be much higher, even if P(E|M & K) is much higher than P(E|~M& K). This need not concern us however, for the question at hand is whether E provides any support for M” (White, p.262)

Since, forgive me, you yourself admit that the relative probability between the multiverse and third option shifts by some small value, this cannot be the refutation of the argument that Goff wants it to be. Goff wants to show that any increase in absolute probability, no matter how small, doesn’t follow; as such I think his claim is more expansive than you are allowing (it's not just about the relative probabilities between the few options you listed). And I think he does show that if you grant his reasoning.

Now to address the second claim; while I agree it doesn’t have much bearing on the first question (whether fine tuning makes the multiverse more probable); it does have great influence on the question (do we have good reason to believe in the multiverse picture?). As well as whether physicists should adopt it as a good explanatory paradigm.

I agree completely that any time we introduce extraneous entities as demanded by some new theory, we would be remiss not to assign it very low priors in the absence of evidence/reason for it. But this is simply to the disfavor of the multiverse theory; I don’t quite see how it raises the relative probability of the single universe cosmology hypothesis.

I am distinguishing here between ontological and theoretical complexity, and assuming that your main line of reasoning as to the assignment of low priors is focused on the latter. Please correct me if I am wrong. But generally, philosophers differentiate between agnostic and atheistic applications of Occam’s razor (see here: https://plato.stanford.edu/entries/simplicity/#ProJusSim) and scroll to the bottom of section 5.

The idea is that we are in our rights to give low prior probability to the multiverse hypothesis, but not necessarily to give higher probability to the hypothesis that there is no multiverse (distinguishing the active vs passive claim). So hypothesis A combined with the multiverse is more theoretically extravagant than just A, and so we should assign lower prior probability to (A + multiverse) as compared to just A. But that is not the same as giving higher prior probability to (A + ~multiverse) over (A + multiverse).

Since a single universe cosmology entails ~multiverse, it follows from this interpretation of Occam's razor that accepting (correctly) that the multiverse is too theoretically extravagant doesn't have to increase the relative probability that the single universe theory is correct.

Of course this only follows if the single universe cosmology is exclusionary to the multiverse. If the hypothesis were (there is at least a single universe); then we are well in our rights to give that a higher probability relative to the multiverse theory.

Thank you for your time.

As a follow up: The more I think on this, the less certain I am that my distinction in the first claim really matters. Initially, I assumed that the reasoning behind the 'this universe' objection as laid out by White is independent from your line of reasoning between the relative probabilities of fine tuning and the multiverse. But now that I reflect on this, I think yours is actually a really elegant way of demonstrating that we are in fact reasoning from the basis that our universe must be fine tuned (and not just any random universe). Instead of adopting a line of reasoning similar to TER, it seems you've directly demonstrated the invalidity of the some-universe line of argumentation.

While technically the possibility remains that the multiverse could increase in absolute probability relative to an unknown option x (given knowledge of fine tuning), the only way I can see this happening is if knowledge of fine tuning were to affect the probability of life formation by a greater degree in option x than in the multiverse variant. So if, for instance, knowledge of fine tuning multiplied the number of adults in the multiverse by .5 and by .3 in option x.

And I just don't see any theoretical mechanism where that can be the case. So I'm gonna take back my arguments for the first claim while keeping the second claim intact.

Best,

Alexandru

I think we are on the same page about the first claim. I 100% understand and agree with what you say in the last two paragraphs.

About the second claim, I am confused by your statement that if we assign a low prior probability to the multiverse, that doesn't mean we have to assign a high probability to the negation of the multiverse. Don't these probabilities have to add up to 1? I must not be understanding you correctly.

Oops, I should be more careful in what I write and edit my comments more thoroughly. I didn't mean to write that it would be reasonable to give the multiverse a low prior probability, or its negation a high prior probability. I meant to be supporting my earlier contention about the principle of indifference. So my remark about the agnostic vs. atheistic Occam's razor was that although we are justified, through the application of Occam's razor, in rejecting a hypothesis (A + multiverse) in favour of A, on the grounds that the multiverse is theoretically extravagant, it doesn't have to follow from this that we must give the multiverse a low prior probability. And since the single universe cosmology entails ~multiverse, giving it a higher prior probability would effectively be doing this.

My point was that in the absence of any evidence/reason in support of the multiverse or against it, why not use the principle of indifference? Granted, we are delving now into theories of meta-probability, as well as philosophical positions like evidentialism. I must admit, I have no firm opinion on this topic. I was just curious what instinct spurred you to reject the principle of indifference.

Could you elaborate more on the point about the high posterior evidence threshold necessitating a very low prior probability? Why can't it simply be the case that the high posterior evidence threshold modifies, not the degree/magnitude of prior probability, but the rate of change of our probability? So that we increase our probability by a lesser amount on the basis of a new piece of evidence, given a higher posterior evidence threshold (but that our baseline prior probability is still 50%).

Thanks.

To add to this:

The principle of insufficient reason (PIR) has many problems relating to the way we sample probabilities. This principle is also incompatible with the frequentist interpretation, but that's not so relevant to our discussion since we are talking about epistemic probabilities and not real-world probabilities. The real problem with this principle comes down to how we choose to partition our probabilities.

But I don't think that's so much a critique of the principle itself, as a lesson in the proper application of it. Given these problems, the wise thing to do is remain agnostic. We simply don't have enough information (speaking of fine tuning alone; not the evidentiary argument) for or against the multiverse to make an informed decision as to probabilistic outcomes. At best, we could use the PIR to reason heavily in the favour of the multiverse relative to a single universe cosmology. However, I don't think that tells us much other than the fact that if it were possible for a multiverse to exist and the constants to be different, then the multiverse is a very plausible option.

But we don't have a universe generating mechanism (in the statistical argument), and therefore no reason to think that the above is in fact physically possible. You seem to think that this necessitates that we just grant a very low prior to the multiverse; my question is why? How could we even hope to do that in the absence of any evidence for/against? It would be simply impossible to know what magnitude of prior probability to grant (and of course this makes all the difference in the world to your second statement).

Hence, we just can't adopt the multiverse argument; fine tuning alone doesn't make the multiverse a plausible theory for physicists to adopt.

So bottom line:

1) In the absence of evidence for/against the multiverse, we would have to use PIR.

2) PIR is a very unreliable technique, producing inconsistent results according to your sampling technique.

C) We shouldn't reason to any probabilistic outcome for the multiverse until we know a lot more.

Alex,

About the idea that in the absence of evidence either for or against the multiverse we should assign 50-50: that kind of principle of indifference is really bad, I think. Why? Because by the same logic we would assign 50-50 to some random non-interventionist Creator, but also to a different, incompatible Creator, and to a third one. Do you agree that that would be incoherent?

The correct principle should be to assign a low a priori credence to any entity (roughly speaking) in the absence of evidence for or against.
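To spell out the incoherence, here is a toy calculation; the three creator hypotheses are of course just hypothetical labels:

```python
# Three mutually exclusive creator hypotheses (hypothetical labels),
# each given 50% by a naive principle of indifference.
credences = {"creator_A": 0.5, "creator_B": 0.5, "creator_C": 0.5}

# Mutually exclusive hypotheses must have probabilities summing to at most 1,
# so this assignment violates the probability axioms.
assert sum(credences.values()) > 1
```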

Hey Dmitriy,

I agree. I meant the same thing when I wrote how the principle, interpreted in different ways, can lead to contradictory results. But I think the approach of assigning low priors is just as problematic, if not potentially worse. Why? Because in the absence of any evidence for/against the multiverse, the magnitude of probability is arbitrary and can be assigned any value. What is to stop me from asserting that a reasonable prior probability is some almost infinitesimal value (say, 0 followed by a decimal point, a trillion trillion trillion zeroes, and then a one)?

Such a number would make even the string inflationary landscape of 10^500 universes extremely improbable in comparison with the single universe cosmology. Since there are literally an infinite number of possible probability assignments that would qualify as "low", it becomes impossible to answer the second statement (whether the multiverse makes a good explanatory paradigm).
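To put rough numbers on this (all of them made up for illustration), one can work in log10 so the tiny prior doesn't underflow:

```python
import math

# Stand-in for "0., a trillion trillion trillion zeroes, then a 1":
# a prior whose log10 is about -10**36. Compare against a single-universe
# prior of roughly 1/2 and a 10**500-fold boost from the landscape.
log10_prior_multi = -(10**36)
log10_boost = 500                      # 10**500 universes (illustrative)
log10_prior_single = math.log10(0.5)

# Posterior odds (multi : single) in log10: prior odds plus the boost.
log10_posterior_odds = (log10_prior_multi + log10_boost) - log10_prior_single
assert log10_posterior_odds < 0  # the multiverse still comes out vastly behind
```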

The PIR approach at least gives us a single answer per individual hypothesis. Hence my writing that we have to adopt it; not because it is a good principle, but because we have no choice if we want to answer the statement. As opposed to an infinite number of arbitrary answers that the low prior approach carries.

Can the low prior probability approach be saved? Yes it can, if we adopt the eternal inflation theory (infinite universe landscape). In that case, the multiverse option always comes out more favorable than the single universe cosmology, no matter what value we assign to the prior. But I think we need to take a step back here and realize that this is very much a kind of Pascalian-wager argument.

Something has gone wrong in our reasoning here. For example, by postulating a God who creates infinite universes, and by using the same logic of assigning a non-zero low prior to this god (in the absence of evidence) we can see that our god hypothesis competes just as favorably with the single universe cosmology (I think I pointed this out in an email I sent you).

I don't deny that the principle of indifference has problems; in fact, as I mentioned in my last post, the existence of these problems points in the direction of agnosticism. The right thing to do here is to suspend judgement until we have more evidence/reason in favor of the multiverse. At best, if we really want to engage in probabilistic reasoning, we should try to come up with the best multiverse hypothesis we can think of and then use the PIR to reason (ignoring the contradictory results with alternative, and weaker, hypotheses). What do you think?

About your last: I don't think waiting for more evidence helps, because new evidence is not supposed to modify our priors.

You are right that we are left with the problem of arbitrariness in selecting priors. I think the standard view is that this is OK; reasonable people can have different priors, but there are some limits: the priors can't be too ad hoc (for example, it's unreasonable to have a prior saying that people with a mole on their left elbow are much more likely to be trustworthy), and can't be contradictory (principle of indifference). Given enough evidence, the posteriors will converge.

Finally, "For example, by postulating a God who creates infinite universes, and by using the same logic of assigning a non-zero low prior to this god (in the absence of evidence) we can see that our god hypothesis competes just as favorably with the single universe cosmology (I think I pointed this out in an email I sent you)."

This is the hardest issue; I don't think anyone has a great way to deal with infinities in probabilistic reasoning. This issue is not specific to fine-tuning. I think this is a case where reasonable people can have different priors. One reasonable view could be to say: I expect such a God to create a "smaller" infinity of people than the multiverse would create, plus there's a huge penalty to the prior for postulating such a radically different type of entity.

Also, I didn't get an email, did you use the contact form?

Hey,

I got your post on the other blog. Also, don't worry about the email; that was related to my first comment on this thread (which I also sent via contact form). I think I simply wasn't signed in properly and it didn't go through.

About the waiting for more evidence part. My point wasn't that waiting for more evidence would allow us to increase our prior, but that it allows us to engage in probabilistic reasoning towards a sensible posterior. In the absence of such evidence/reasons, to ask whether the multiverse makes a good explanation of fine tuning is just to ask what our prior probability should be put at. Since as you showed, fine tuning doesn't increase the relative probabilities.

Hence, we need such evidence to engage in relevant probability analysis in the first place; without it we are just debating arbitrary assignments of where a non-evidential low prior should be placed. So the problem of arbitrariness wasn't that we are prevented from engaging in such evidential reasoning, for you are right that the low priors eventually end up converging given enough evidence/reason. It's that this arbitrariness in conjunction with the lack of evidence prevents us from doing so.

As for the infinity issue, I agree that this is a broader issue. Although the solution of smaller infinities just makes the multiverse more favorable relative to the God hypothesis; the God hypothesis is still better than the single universe cosmology (no matter the penalty on the prior; as long as it is non-zero). In light of this problem; the right thing to do (it appears to me) is, I stress again, to retreat to agnosticism.

So I'm not criticizing the "assigning a low prior" approach in general; I'm arguing it's not an applicable answer to a question which principally revolves around the assignment of low priors. Under this scenario, analogous to analyzing the multiverse hypothesis in the absence of any evidence/reason, we don't have any convergent posterior taking place.

Therefore, the answer of the posterior probability is totally dependent on one's prior probability, which as you admitted is ultimately arbitrary. Therefore, the problem of arbitrariness isn't actually solved.

I believe we had established that the 'multiverse hypothesis' and 'fine tuning' are logically independent.

If you have any problem understanding that, you can check here:

https://en.wikipedia.org/wiki/Independence_(mathematical_logic)

or

https://en.wikipedia.org/wiki/Independence_(probability_theory)

for tests to confirm they are logically independent.

Once that has been established, any attempt to find how they are dependent would either find 'no dependence' or the analysis is in error.

It appears the analysis above concludes two things that ARE logically independent ARE NOT logically independent. I believe that is clearly an error.

Please check to see the relationship between the 'multiverse' and 'fine tuning' fit the definitions for independence.

Then the errors in analysis should be fairly clear to you.

Philip,

We are actually adopting the position that fine tuning is independent of the multiverse; Dmitriy showed this quite well with his probability calculations before and after fine tuning. As you can see from his post above, fine tuning made no difference. So I think we are in agreement as to the result.

I think the problem people might have had with your reasoning was related to your past methodology. I am glad to see from your other comments that you have now realized that we can show that there is independence by inputting the correct prior probabilities for fine tuning (e.g. 1 in a quadrillion), instead of just setting the probability equal to 1. If people, including myself, previously thought that you were in error, it was in relation to this methodology and not necessarily your answer.

Additionally, one must give an argument as to the reason why the variables are independent. That is because people are in disagreement as to whether we should reason on the basis of this universe being fine tuned or any universe being fine tuned. Hence, it is wrong to make certain claims, as you have made in the past on Novella's blog, that the multiverse theorists were doing "incorrect math". This is simply an argument over definition. It is important to realize that if the multiverse theorists are right and we should in fact reason on the basis that any universe can be fine tuned; then the two variables (multiverse and fine tuning) would be dependent on each other.

Therefore, I hope you can see that continuing to simply reiterate that the variables are independent (even though I agree with you) is not helpful. One must show WHY they are independent (i.e. why we should accept the 'this universe' definition).

Best,

Alex

"So I think we are in agreement as to the result."

Philip actually holds the view that the answer for the odds doesn't depend on the number of universes/rooms. Do you think Goff holds the same position, or do you think he agrees that the odds for "multi" are boosted by N_rooms?

Last words I wanted to add to my posting below: if we accept that we get paradoxes with infinite probabilistic calculations, like those regarding eternal inflation, then we should not engage in such reasoning. I offered below a way out of the paradoxes by adopting an approach which categorizes legitimate vs. illegitimate probabilistic reasoning on the basis of whether the assumed mechanism is likely to be physically possible or not.

That said, one might still maintain that Goff is wrong regarding non-infinite multiverse hypotheses. But this needn't be so as long as Goff thinks the prior for the multiverse is sufficiently low that a huge number of universes in the multiverse (or N_rooms in multi) doesn't compensate enough to make the multiverse more favorable than not.

One might still retort that we can increase the number of universes in our hypothesis, but the idea is that this necessarily comes at a cost to prior probability. Because of ontological extravagance, the more universes one postulates (without evidence) the more the prior must fall. Goff could say that we should decrease our prior probability by at least the necessary amount that makes the posterior calculations no higher (I agree with this). This would be similar to the solution in the unicorns example.

And basically the idea here is that as the number of postulated ontological entities (i.e. universes) approaches infinity, the limit of our prior decreases to 0. Hence, if the postulated entities are infinite, this is basically analogous to saying that we can't reason at all (the prior is 0). Of course this is only applicable to instances involving reasoning without evidence.
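The limiting idea can be sketched like this; the 1/N penalty schedule is just an assumed illustration, not anything established above:

```python
from fractions import Fraction

# Assumed penalty schedule: the prior for an N-universe hypothesis
# falls at least as fast as 1/N (illustrative only).
def prior(n):
    return Fraction(1, 100) / n

# The likelihood boost from having N universes is roughly proportional to N.
def boost(n):
    return n

# The boost is exactly cancelled, so the product never grows with N,
# while the prior itself tends to 0 as N -> infinity.
for n in (1, 10**6, 10**500):
    assert prior(n) * boost(n) == Fraction(1, 100)
```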

By agreement I meant agreement about the independence of the two variables (fine-tuning and the existence of the multiverse). I thought Philip was simply talking about that, and not about whether different multiverse models have different probabilistic outcomes.

As for Goff, I responded to your post back on Novella's blog, where I pointed out that I thought your interpretation of Goff was perhaps uncharitable. It's important to note that there is a difference between saying that the number of universes doesn't affect the posterior of M (or that N_rooms doesn't modify the posterior of multi), and saying that your scenario is inapplicable.

I think Goff is actually saying something like the latter; that we shouldn't in fact engage in such probabilistic reasoning (your ensemble technique). In my other post on Novella's blog, I offered some speculation as to what Goff might say here. I will quote a bit of what I wrote:

"In other words, this boils down to how we should interpret epistemic possibility. Is it legitimate to reason to the probability of any logically possible outcome, or should we take into account the likelihood of physical possibility when thinking about rational epistemic possibility? The advantage of the latter approach is that it avoids absurdities like the infinite-universe god we discussed."

The bottom line is that there is, I think, a difference of belief regarding the proper interpretation of the relevant probabilistic reasoning. There is a distinction between saying an outcome is very unlikely (18 heads in a row) and saying it is very likely not possible (e.g. God, the multiverse). Goff apparently adopts this distinction, and it appears that you do not. Your approach would have us treat the two as equivalent, so that our thinking that something is likely not possible (e.g. God) should just modify our probabilistic prior, similar to how we regard unlikely events. In other words, all uncertainties become probabilistically equivalent.

An alternative approach, that I think Goff implicitly endorses, is not to treat the two (low possibility vs. possible but unlikely) as probabilistically similar. This alternative (my latter approach) would have us abstain from probabilistic reasoning regarding things we have no good grounds to believe could possibly exist.

The advantage of this approach should be readily apparent. There are an infinite number of potential hypotheses a Turing machine could come up with. Simply take any random hypothesis and keep adding conjuncts to it (e.g. there exist 2 unicorns, 3 unicorns, etc.) to see this. As such, if we assign a non-zero prior to each of these hypotheses, then the probability that at least one hypothesis in the infinite set of unicorn hypotheses is correct approaches/equals 1. Notice that the hypotheses themselves do not concern infinities (each unicorn hypothesis just postulates a finite number of unicorns). So this approach doesn't just break down in scenarios involving infinities.

However, we are in fact dealing with infinities (Goff brings infinity into play by arguing against eternal inflation), and the problem here only worsens. According to the "all uncertainties are equivalent" approach, we would have to concede (as I previously stated) that our god hypothesis is more likely than the single universe cosmology.

Worse, we get contradictory results: the hypothesis that a God created an infinite multiverse to send us to heaven later on is just as likely as the hypothesis that the Devil created an infinite landscape to send us to hell. In both scenarios the posterior approaches 1.

I don't think anyone would accept that; therefore there must be something wrong with the "equating all uncertainties" approach. Since you admit the multiverse is likely not physically possible (in the absence of evidence), it follows, if this approach is correct, that we should discard your ensemble scenario.

Now that I think about it, the unicorn-hypothesis counter doesn't hold, because one can maintain that each added conjunct decreases the prior fast enough that the summed priors converge. So just as 1.111111... converges to 10/9 rather than growing without bound, the sum over all the infinite unicorn hypotheses doesn't have to approach a posterior of 1 either. But still, the "all uncertainties are equal" approach doesn't really seem to be compatible with hypotheses about infinities.
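This convergence point is easy to check numerically. A minimal Python sketch, with purely illustrative priors and decay rates (none of these numbers come from the discussion itself):

```python
# The analogy: partial sums of 1 + 0.1 + 0.01 + ... converge to 10/9,
# never to infinity.
partial = sum(10**-n for n in range(50))
print(partial)  # ~1.1111... = 10/9

# Likewise, if each extra-unicorn conjunct multiplies the prior by a
# factor < 1, the summed prior over ALL unicorn hypotheses stays bounded
# far below 1, instead of forcing "some unicorns likely exist".
unicorn_priors = [1e-4 * 0.1**n for n in range(50)]  # illustrative decay
print(sum(unicorn_priors))  # ~0.000111..., nowhere near 1
```

The only requirement is that the priors form a convergent series; the particular starting value and ratio are arbitrary.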

Alex,

I tried to post a comment but the site gave me an error and it got lost. I'll try to reconstruct it quickly.

1. About Goff's position. When reading his article (the academic one), I didn't get the impression that he was saying those things. I took his "this universe" argument to be very similar to what Philip is arguing.

If you think I missed something crucial in his article about how to treat hypotheses that are not physically possible etc. let me know, and I will take another look.

2. I don’t think we get into contradictions when having hypotheses with infinite observers. For example, suppose the only hypotheses on the table are your God hypothesis, your devil hypothesis, and a third hypothesis with some finite number of observers, for example 42. When we deal with infinities, we often need to regularize. The regularization scheme is somewhat subjective, but no more so than choosing priors.

For example, if we believe that the first two of those three hypotheses are on an equal footing, both in terms of their prior plausibility and in terms of how many observers they have, then it seems we should choose their priors to be equal, say 1/3 for all three hypotheses, and regularize the first two by assuming that the number of observers is not infinite but some number N, which we then let approach infinity. In the limit, the posterior probabilities are 0.5, 0.5, and 0. And presto, no contradictions.
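That limit can be verified numerically. A minimal Python sketch of the regularization, assuming (as an illustration) that each hypothesis's posterior weight is its prior times its number of observers:

```python
from fractions import Fraction

def posteriors(priors, observer_counts):
    """Posterior for each hypothesis, weighting prior by observer count.
    The regularization step: infinities are replaced by a finite N."""
    weights = [p * n for p, n in zip(priors, observer_counts)]
    total = sum(weights)
    return [w / total for w in weights]

priors = [Fraction(1, 3)] * 3  # equal priors: God, Devil, finite-42

# Let the 'infinite' observer counts be a finite N and grow it:
for N in (10**2, 10**6, 10**12):
    print(N, [float(p) for p in posteriors(priors, [N, N, 42])])
# The posteriors approach 1/2, 1/2, 0 as N grows -- no contradictions.
```

The weighting rule here is just one possible choice of regularization scheme; as noted above, that choice is somewhat subjective, like the choice of priors.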

3. Your proposal to exclude from probabilistic considerations hypotheses that are likely not physically possible.

First, it looks like we had a miscommunication about the physical possibility of the multiverse. I definitely don't admit that it is likely physically impossible, quite the opposite. I think most cosmologists would agree not only that some version of an infinite multiverse or universe is physically possible, but they would put their money on it. For example, measurements indicate that the spatial curvature of our universe is close to zero. If it is zero or negative, then the most straightforward interpretation is that the universe is spatially infinite.

And second, I'm not sure how to even make sense of the notion of something being likely physically impossible, except in epistemically probabilistic terms. We find out about the laws of physics, which determine what is and is not physically possible, by inferences from observations, which are probabilistic.

So if, for example, we think that a right-handed neutrino is likely physically impossible, but would give it, say, a 0.1% chance of being physically possible, then how can we exclude it from our analysis? It seems we must include it, because otherwise, even if future measurements in some advanced collider indicated that right-handed neutrinos exist, we would be prohibited from ever inferring their existence.

Dmitriy,

Sorry about your reply getting lost; that can be frustrating. There are a great many points here; I will try to summarize and address them all.

1. Whether Goff adopts the low-possibility/low-probability distinction:

I think you are right, and I have probably misread Goff. I thought he made a comment at some point about our needing to reject the reasoning that we could have been born in any universe (as if, as White put it, we were like a spirit floating from universe to universe waiting to be born), on the basis that we had no good reason to think this possible. That was a misinterpretation, so you are right in this regard.

2. Defending the idea that the number of universes in the multiverse doesn't increase the odds for the multiverse:

There were, in fact, two possible approaches I mentioned that would allow Goff to plausibly maintain this. One of them is the low-possibility/probability distinction I mentioned, but the other has to do with proportioning our prior in relation to ontological assumption. I briefly brought up the latter point but didn't go into too much detail.

The idea is that it is possible to rationally maintain that a higher number of universes per multiverse doesn't correspondingly increase our relative posterior, as long as we lower our prior by the exact (or greater) amount necessary to make this so, and we justify lowering our prior on the basis that extra universes are ontologically extravagant.

Further, such a decrease in prior probability has to be logarithmic/exponential. The reason is that even if each multiverse hypothesis were individually unlikely, we could still reason that it is likely that some single multiverse hypothesis in the set of all multiverse hypotheses is true. If, for example, multiverse hypothesis A is that there are 10^70 universes, B is that there are 10^71, etc...

Thus, if the decrease in prior probability going from A to B is logarithmic/exponential, then the probability that some hypothesis out of the infinite set is correct approaches a limit that is less than 50%. This shouldn't be controversial, because it is (I argue) the exact same approach we use to avoid ridiculous scenarios like unicorns existing. Earlier, I wrote how one could equally give an infinite number of hypotheses regarding unicorns existing (A: 1 unicorn exists, B: 2 unicorns exist, etc...).

Unless the decrease in prior probability moving from hypothesis to hypothesis is logarithmic/exponential, we run into the trouble of having to say that some unicorns likely exist. The point is that this second interpretation allows the possibility that Goff is right. It's not that he himself has to state such a belief, but its being possibly true means that it is uncharitable to write that Goff has to be wrong in asserting that the number of universes doesn't factor into the calculations.
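A quick numerical illustration of this in Python, with an arbitrary starting prior and decay factor (the specific numbers are illustrative, not Goff's): if each step up in universe count multiplies the prior by a constant factor r < 1, the total prior that some multiverse hypothesis is true converges to p0/(1 - r), which can be kept below 50%.

```python
# Hypothetical priors for M_k = "there are 10**(70 + k) universes":
# each extra order of magnitude of universes halves the prior.
def total_multiverse_prior(p0, r, terms):
    """Summed prior over mutually exclusive multiverse hypotheses."""
    return sum(p0 * r**k for k in range(terms))

p0, r = 0.1, 0.5  # illustrative values only
for terms in (10, 100, 10_000):
    print(terms, total_multiverse_prior(p0, r, terms))
# The sum converges to p0 / (1 - r) = 0.2, safely below 50%,
# rather than growing toward 1 as more hypotheses are added.
```

With a constant (non-decaying) prior per hypothesis, the same sum would instead grow without bound, which is exactly the trouble described above.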

3. About infinity:

Notice that carrying on the reasoning from the last section would mean that the prior for postulating eternal inflation is 0. As the number of ontological entities reaches infinity, the limit of the prior approaches 0. So this method would save Goff regarding eternal inflation as well.

About the contradictions, you are right that there don't have to be contradictions. But it's still problematic, because we have to say that God or the Devil is far preferable to a single-universe cosmology. In any case, we don't need to adopt agnosticism like I previously argued; the alternative approach of decreasing our priors still works.

4. The possibility/probability approach:

Sorry that I misconstrued you regarding the multiverse likely not being physically possible. I don't want to get too much into this since it's not the main argument, but about the neutrinos I would say (as you wrote) that we determine possibility by inference from observation. So, if we found evidence of right-handed neutrinos, then at that point we should revise our previous assumptions about right-handed neutrinos being physically impossible.

Also, thanks for correcting me about Goff and the infinity-contradiction issues. I wanted to add, succinctly (hopefully), that I generally prefer the low-possibility/probability dichotomy approach to the "lower the prior in correspondence to ontological assumption" approach. Firstly, because it doesn't seem to me that ontology should really matter; what matters to me is the number of assumptions a theory has to make in deviation from standard/commonly accepted physics.

Secondly, because problems with infinity still crop up. For instance, the limit of our prior for eternal inflation or the god of the infinite universes approaches 0 in the absence of evidence. But if we include any evidence, anything at all, even the unlikeliest/weakest piece of evidence (such as overhearing my mother say that she prefers eternal inflation or God) would modify the posterior heavily in favour of the multiverse/God.

That seems an unacceptable outcome to me; I don't think it's reasonable to say that a God who creates infinite universes is way better than a single-universe cosmology (even in the absence of fine-tuning!) just because I overheard my mother say something. That becomes even more ludicrous when we realize that fine-tuning plays no role here.

But that's just my personal interpretation, not necessarily vital to my defense of Goff. I will say that I don't think your objections were fatal. Like I said, we do in fact change our reasoning about physical possibility on the basis of observations. So, any time we have new evidence for something, we can easily modify our possibility prior to make its possibility more likely than not. The whole point of this was just to solve issues regarding infinity. We should adopt this two-tier process, as opposed to plunging straight into such probabilistic calculations about infinity, precisely to avoid such conundrums.

Hey Alex,

Participating in the various subdiscussions on Neurologica got me a bit distracted from my blog, even though the discussion we've been having here seems quite a bit more productive. There are two main issues left, I think: your points 3 and 4, where I have disagreements I wanted to raise. I noticed, though, that this page has become slow to load because of so much text, and that gave me an idea:

Dealing with infinities, and whether to exclude likely physically impossible hypotheses, are both fascinating and important topics, deserving their own blog posts. So I am wondering if you want to express your positions in the form of a guest post or a dual-author post, maybe in much the same format we have used here in the comments, with a couple of rounds of back and forth. You can obviously copy some of the explanations you already wrote here, and we can try to make the whole thing not too long and reasonably digestible.

So let me know if you like this idea; it might add some nice variety to have some articles where the author is someone other than Dmitriy :)

Hey Dmitriy,

That sounds wonderful! I do enjoy the topic quite a bit, and I'm not unreasonably busy at the moment in my life. You'll have to give me a day or two to reflect on the matter and coalesce my arguments. If it's agreeable with you, I can send you an email via the contact form with my argument. I'll try to keep it reasonably succinct, but I will include a quick summary to bring a general audience up to speed (I assume the reader will have read your post for background, of course).

I'll concentrate on items 3 and 4, and address any potential counters I can think of. You can also email me back (I'll include an address) if you think there are obvious problems or wish to refine something. Or we can start a more formal back and forth if the argument actually has merit. Feel free to do with it as you please, whether you wish to include it in a post of yours in full or just quote it partially, etc...

Best,

Alex

Sounds great. Maybe it's better to tackle 3 and 4 in separate articles, to keep each article as digestible as possible; but if they can't be separated in your argument, that's ok too.
