Bostrom's Incubator and the epistemic ensemble.

A lot of tricky problems involving probabilities, such as the famous Monty Hall problem that has led astray even professional statisticians, can be solved quickly and painlessly with the ensemble method (see how). The question, however, is: is the ensemble method always sound, or does it work in some situations and fail in others? Here I want to do two things. First, I want to present a simple thought experiment that might, in some people's eyes, call into question the ensemble method. Second, I would like to explain a couple of routes for justifying the method.
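
To make this concrete, here is a minimal sketch (my own illustration in Python, not code from the linked post) of the ensemble method applied to Monty Hall: instead of reasoning about a single game, we imagine a huge ensemble of independent games and simply count how often switching wins. All names and numbers are illustrative.

    import random

    def monty_hall_trial():
        # One game: random prize, random initial pick, the host opens a
        # door that is neither the pick nor the prize, and we report
        # whether switching would have won.
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        opened = random.choice([d for d in doors if d != pick and d != prize])
        switched = next(d for d in doors if d != pick and d != opened)
        return switched == prize

    G = 100_000  # the size of the ensemble ("gazillions" of games)
    wins = sum(monty_hall_trial() for _ in range(G))
    print(wins / G)  # ~ 0.667, i.e. the famous 2/3 for switching

In a large ensemble, about two-thirds of the games are won by switching, which is exactly the credence a player in a single game should assign.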

The Incubator. 

This neat foundational thought experiment in anthropic reasoning is due to philosopher Nick Bostrom. I found a great summary of it along with an excellent discussion of surrounding issues:

Stage (a): In an otherwise empty world, a machine called “the incubator” kicks into action. It starts by tossing a fair coin. If the coin falls tails then it creates one room and a man with a black beard inside it. If the coin falls heads then it creates two rooms, one with a black-bearded man and one with a white-bearded man. As the rooms are completely dark, nobody knows his beard color. Everybody who’s been created is informed about all of the above. You find yourself in one of the rooms.

Question: What should be your credence that the coin fell tails?

Stage (b): A little later, the lights are switched on, and you discover that you have a black beard.

Question: What should your credence in Tails be now?


I will show you the simple calculation a little later, but for now I will simply state how the ensemble method answers those questions. For stage (a), it says the credence for Tails should be 1/3. For stage (b), the credence for Tails should be 1/2. While for most "regular" problems, such as the Monty Hall problem, there is little disagreement about the right answer, a problem like the Incubator, which involves different numbers of observers for different possibilities, elicits substantial disagreement in the academic literature.

The answer the ensemble argument gives also follows from other approaches (such as the one based on the Self-Indication Assumption). That doesn't mean it's correct, of course; other approaches (such as the one based on the Self-Sampling Assumption) give different answers. Briefly, in case you are not sure how different answers are possible, let me give you a quick bumper-sticker version of the two main positions on stage (a):

Bumper sticker for 1/2. Before the light turns on, the only relevant information you seem to have is that each scenario was chosen by a coin flip. Your experience (of being a person in a dark room) is perfectly consistent with either scenario, so you have no reason to deviate from your knowledge that a coin flip is 50-50.

Bumper sticker for 1/3. Whoever "you" are, you should think, absent other information, that you were twice as likely to be created if two people are created than if only one is. So you should believe it is more likely that the coin fell heads than tails: twice as likely, in fact.
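
In Bayesian terms (just a gloss on this bumper sticker; weighting each hypothesis by the number of observers it creates is what the Self-Indication Assumption does), the stage (a) posterior would be:

    P(Tails | you exist) = (1/2 × 1) / (1/2 × 1 + 1/2 × 2) = 1/3
    P(Heads | you exist) = (1/2 × 2) / (1/2 × 1 + 1/2 × 2) = 2/3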

The Ensemble Premise.

Let me now give you the principle that serves as the foundation for the ensemble method and then offer some justifications for it. I suspect that some people, upon hearing the principle, will think it's completely obvious and feel that I am making a big deal out of nothing. But I know that others won't feel that way at all. The justifications are mostly for this second group. First, a quick and dirty version of the Ensemble Premise:

EP. We can assume that the experiment has been performed many times.

Let me clarify what this means. Let's use the Incubator as an example. The story says that the experiment (flipping the coin and creating one or two people) is performed "in an otherwise empty world". This might seem a little ambiguous: is it necessarily saying that in this thought experiment we assume nothing else exists?  Or maybe that there is some kind of isolated world where that experiment is performed but other, "parallel" worlds can exist too as long as they don't interact with the Incubator world? If you think "who cares? That doesn't affect the answer" then I agree and I think you will have no trouble agreeing with one of the justifications for EP I will give in a minute.

What EP is saying, then, is that we can, if it helps with the calculations, add to the problem the stipulation that there are many, many other parallel worlds where the same experiment is independently performed (and this, like the other conditions of the setup, would be known to the created people). So there would now be G (gazillions) of such incubators and, if G is made very big, to a high degree of precision the coin would land heads in half of them and tails in the other half. In total there would be:

  • 0.5G black-bearded people in the Tails worlds 
  • 0.5G black-bearded people in the Heads worlds
  • 0.5G white-bearded people in the Heads worlds.
For stage (a), when the room is dark, all a person would know is that he is one of the 1.5G people, of whom 0.5G, one-third, are in the Tails worlds. Therefore he would assign a chance of 1/3 to being one of the Tails people. For stage (b), once he learns he has a black beard, he knows he is one of the 1.0G black-bearded people, of whom 0.5G, one-half, are in the Tails worlds, so his credence in Tails becomes 1/2.
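
If you would rather watch the counting happen than do it by hand, here is a minimal simulation of the ensemble just described (my own sketch; the names and the value of G are illustrative):

    import random

    G = 100_000  # number of independent incubator worlds in the ensemble
    people = []  # (coin, beard) for every person created across all worlds
    for _ in range(G):
        if random.random() < 0.5:
            people.append(("tails", "black"))   # one room, one black beard
        else:
            people.append(("heads", "black"))   # two rooms: a black beard...
            people.append(("heads", "white"))   # ...and a white beard

    # Stage (a): in the dark you could be any of the ~1.5G people.
    p_tails_dark = sum(c == "tails" for c, b in people) / len(people)

    # Stage (b): a black beard restricts you to the ~1.0G black-bearded
    # people, half of whom are in Tails worlds.
    black = [c for c, b in people if b == "black"]
    p_tails_black = sum(c == "tails" for c in black) / len(black)

    print(p_tails_dark)   # ~ 1/3
    print(p_tails_black)  # ~ 1/2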

What if there's no coin?

I hope the above illustrates how EP works and how it lets us calculate probabilities. But there's one important consideration that the Incubator doesn't illustrate. Basically, in real life there is sometimes no coin. What am I talking about? Many situations in real life are actually more analogous to a different version of the Incubator: suppose everything is as before, except each created person is not told the precise probabilities for creating a one-person world vs a two-person world. Suppose instead he just has some vague information about who or what was in charge of that decision. What should he do then if he still needs to make his best guess about being in a one- or two-person world?

In that case, the only thing to do is to use his best judgment about whatever vague information is available and translate it into an educated estimate of the probabilities of the "creator" choosing each world. These probabilities are called the priors. For example, suppose that, as far as the person knows, the "creator" faced no significant obstacles to creating either world, and the person is not aware of any other considerations that would make the "creator" choose one world over the other. Then it seems sensible not to give any preference to either scenario and to guess a prior probability of 50-50 for the "creator" to make either one. In that case, he can treat the situation as if a coin was flipped.

If, on the other hand, the person has some reason to believe the "creator" was more likely to make one world over the other, then again he has to use his best judgment to translate those considerations into an estimate of prior probabilities. There is obviously no straightforward algorithm for this; it just comes down to the person's best understanding of what's going on, a situation we all encounter many times a day. Suppose, for example, the person assigns a 70%-30% prior probability for the "creator" to make one person vs two. In that case, he can treat the situation as if the "creator" generated a random number between 1 and 100, and created one person if it turned out to be in the range 1-70, otherwise two people.

Only after this preliminary step is complete can he apply EP and imagine that this whole experiment has been performed many times. For example, if the priors are 70%-30%, out of G total worlds 0.7G would be one-person worlds, and 0.3G would be two-person worlds.
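
Here is how the two steps might look together in code (a sketch under the assumptions of this example; the 70-30 split and all names are illustrative):

    import random

    G = 100_000
    people = []  # record, for each created person, the size of his world
    for _ in range(G):
        # Preliminary step: the "creator's" choice is replaced by a
        # random number generator matching the 70%-30% priors.
        roll = random.randint(1, 100)
        n_created = 1 if roll <= 70 else 2  # 1-70: one person; else two
        people.extend([n_created] * n_created)

    # EP proper: ~0.7G one-person worlds and ~0.3G two-person worlds,
    # hence ~0.7G + 0.6G = 1.3G people in the ensemble.
    p_one = people.count(1) / len(people)
    print(p_one)  # ~ 0.7/1.3 ~ 0.538

Counting this way, a person in a dark room would assign a credence of about 0.7/1.3 ≈ 0.54, not 0.7, to being in a one-person world: the same observer-counting effect as in the original Incubator.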

Precise statement of EP.

We are now ready to formulate EP more precisely.

  1. Preliminary step: assign priors to epistemic uncertainties and replace them with decisions based on a random number generator in a way that, to your best judgment, wouldn't change the answer.
  2. EP proper: the answer wouldn't change if it's assumed that the situation in question has been repeated independently many times.
This collection of independent experiments, which I call the epistemic ensemble, will then contain many versions of the agent trying to assign probabilities to various hypotheses. Just like in the Incubator example above, the desired probabilities are obtained by using the fact that, as far as such an agent can tell, they are equally likely to be any of these versions.

Justifications of EP.

Note that the first part, the preliminary step, doesn't change the answer by construction. Only the second part, adding many repetitions of the situation, could conceivably change the answer, and one could rightfully ask for an explanation why it doesn't. I think for some people this would seem so obvious that they would be puzzled by anybody making a big deal out of it. But, as the Incubator example shows, EP implies a certain answer to a class of questions for which there is currently no consensus among philosophers. That alone creates the need to provide justifications for EP. 

Note however that we shouldn't expect to be able to prove EP in the same sense as proving some mathematical theorem. There are very few such proofs in philosophy. Actually, even mathematical proofs rely on fundamental assumptions (axioms) that are themselves granted without proof, so the best we should expect to do in justifying EP (or some competing principle) is to derive it from some other assumptions that would themselves be granted without proof. The reason one would then accept EP is that one accepts those other assumptions.

This opens one legitimate avenue for accepting EP: take it as one of those basic assumptions that you accept without reducing it to further assumptions, until and unless there is a defeater for it. While that would be a perfectly kosher position if you find EP itself sufficiently plausible, let's look at a couple of arguments in support of EP.

Justification based on a lack of causal influence.

We can put it in this compact form:

  1. Rational credences about properties of a given situation can only be affected by information about things that can causally affect those properties.
  2. The stipulation about many independent repetitions of the situation is, by construction, a stipulation about parts of reality causally disconnected from one another.
  3. Therefore, adding such a stipulation should not affect an agent's credences about properties of his situation.
The last point follows from the first two and is equivalent to EP proper. Given that the second point is true by definition, the argument stands or falls with the first point. But the first point seems to simply express an essential feature of what it means for credences to be rational.

Justification based on the concept of credences.

What exactly is meant by a credence of, let's say, 70% that a certain situation S has property A? There are different accounts of the definition of credence, but one reasonable answer might be: this just means that if the situation S were to repeat, independently, many times, we would expect that 70% of the time it will have property A. If that's a definition that you accept, then you accept EP by definition.
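
In symbols (just restating the verbal definition above; this is the familiar limiting-frequency formula):

    P(S has A) = limit, as N → ∞, of
                 [number of the N independent repetitions of S in which A holds] / N

So a credence of 70% just means that this long-run fraction converges to 0.7.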


93 Comments

  1. Hi Dmitriy,

    Since I don't accept EP, at least as you use it, I'll focus on the two arguments you offer in support of it.

    > Rational credences about properties of a given situation can only be affected by information about things that can causally affect those properties.

    I kind of agree with this but I think it's being misapplied here. Even if I accept it, I don't think it leads to step 3 if understood correctly.

    What is the situation of someone trying to decide the coin flip in the incubator? Your point would go through if this person were localised to some definite world and trying to reason about that world. The existence of other worlds would be irrelevant to such a person.

    But clearly that is not the situation. I would say that since this person doesn't know what possible world he is in, the situation is of an observer trying to self-locate given an abstract model of the space of possibilities in which he could find himself. The situation of an observer told there is only one experiment is therefore not the situation of an observer told there are many experiments. Each situation yields a different model of the problem. To me, one-or-many just is a property of the situation.

To say that since they are causally disconnected they are irrelevant is like saying that since the tails-incubator world is causally disconnected from the heads-incubator world, someone in a heads-world can disregard the existence of the tails-world and just assume himself to be in a heads-world (and vice versa), which of course only makes sense if that person *knows* what world he is in. Since we don't know what world we are in, all worlds are relevant and can't be disregarded. This also means that the number of worlds could be relevant.

    And I say it is relevant. We agreed on an earlier thread that an external observer sampling randomly from people inside incubators would reason differently if he were allowed to sample from many incubators than from one. He is sampling from whatever population he is allowed to sample from, then predicting from the result whether the coin was heads or tails for that incubator, without knowing the colour of the beard.

If he may only sample from one incubator, then his credences should be 1:1 for heads/tails. Either there is one person or there are two people in the incubator, and he learns nothing either way when he samples, so his credences are unaffected. If there are G incubators, then half will have one person and half will have two people, such that 2/3 of people are in heads-worlds and 1/3 in tails-worlds, so if he samples at random then he will have credences 2:1 in favour of heads.

    Since the model of the situation changes credences for the external observer, I don't see why it shouldn't also change credences for the internal observer under self-location uncertainty. In both cases, it seems to be irrelevant to me that all these worlds are causally isolated from each other. In both cases, one-or-many is a property of the model the agent is reasoning from and so a property of his situation.

  2. Hi Dmitriy,

    > this just means that if the situation S were to repeat, independently, many times we expect that 70% of the time it will have property A.

    I agree with this, but I don't think this entails the EP as you use it, because when you use the EP you are repeating some parts of the situation and not others.

    If the situation is someone in an incubator trying to predict his coin toss, then you need to repeat not only the incubators and the coin tosses but also the predictions. You think of it as having one agent paired with G incubators, but you should have G agents paired with G incubators, each only making predictions about his incubator.

  3. Hi DM,

    your objection, and correct me if I'm wrong, is against step 2 of the first justification for EP, namely:

    """The stipulation about many independent repetitions of the situation is, by construction, a stipulation about parts of reality causally disconnected from one another."""

    But I am having some trouble understanding the objection. Its core seems to be this:

    """But clearly that is not the situation. I would say that since this person doesn't know what possible world he is in, the situation is of an observer trying to self-locate given an abstract model of the space of possibilities in which he could find himself. The situation of an observer told there is only one experiment is therefore not the situation of an observer told there are many experiments. Each situation yields a different model of the problem. To me, one-or-many just is a property of the situation."""

    I will assume that you are using the standard meaning of the term "possible world" in modal logic, in which case the possible worlds are split into two categories: ones with him being in a Tails scenario and ones with him being in a Heads scenario. Then yes, you can describe the problem as him trying to self-locate, in other words to find out which category the actual world (where he, along with everything else, "lives") belongs to. Then what you say next is true:

    """The situation of an observer told there is only one experiment is therefore not the situation of an observer told there are many experiments. Each situation yields a different model of the problem."""

    Yes, the space of options for which possible world is the actual world is of course different in those two situations. But how does that contradict step 2 (or 1)? Neither of those steps asserts that those situations are the same. Step 2 only asserts that the difference between them is of a certain type, the type step 1 talks about.

    Replies
    1. Your second objection is against my second justification for EP. You write:
      """
      I agree with this, but I don't think this entails the EP as you use it, because when you use the EP you are repeating some parts of the situation and not others.

      If the situation is someone in an incubator trying to predict his coin toss, then you need to repeat not only the incubators and the coin tosses but also the predictions. You think of it as having one agent paired with G incubators, but you should have G agents paired with G incubators, each only making predictions about his incubator."""

      I don't see what makes you say I think of it as having only one agent. I explicitly calculate that for stage (a) there are 1.5G agents in the same position.

    2. Hi Dmitriy,


      (By the way, when I say "model of various possibilities" in the following, I'm trying to communicate that the model need not be simple -- e.g. possibilities can be nested inside possibilities and can be weighted differently.)

      > your objection, and correct me if I'm wrong, is against step 2 of the first justification

I'm not seeing anything wrong with step 2 of the first justification, except perhaps that it's missing the point. I agree that these are causally disconnected. I just don't think that causal disconnection is important here.

      > Step 2 only asserts that the difference between them is of a certain type, the type step 1 talks about.

      I more or less agree with these steps as far as they go. Step 3 too. I just don't think they're relevant to this problem.

      If we fix "situation" as referring to a particular world, then the existence of other worlds is irrelevant. But if "situation" refers to self-locating uncertainty with a model of possibilities, then the structure of that model is relevant. The existence or non-existence of those worlds still has no causal influence on your situation, but your beliefs about their existence or non-existence do. If I am told there is one incubator I will reason one way. If I am told there are many, I will reason another.

This may seem like sophistry to you. If someone said "It doesn't matter if we live in a simulation, since it makes no difference to what I can experience it's all real to me." and someone else said "But my beliefs about being in a simulation do make a difference to what I can experience because my beliefs about it affect me" then I would think they were missing the point. But in this case the problem we're trying to figure out is intimately bound up with what we believe about what worlds there are.

      So my claim is that one-or-many *is* a property of his situation, since his situation involves self-locating uncertainty in some model of various possibilities. As such it's misleading to conclude that "adding such a stipulation should not affect an agent's credences about properties of his situation."

    3. Hi Dmitriy,

      > I don't see what makes you say I think of it as having only one agent. I explicitly calculate that for stage (a) there are 1.5G agents in the same position.

      Those other agents are just parts of the model (call them "peers"). You're not actually treating them as agents.

      A singular example would have one agent trying to model his credences and one incubator and either 1 or 2 "peers". The agent trying to model credences is the one we're taking the perspective of. The other peer, if he exists at all, is just part of the model. Yes, he has his own perspective too, but when we're working on the problem we're just trying to solve it for a particular agent.

      Where it gets tricky is that the agent is embedded in the problem, unlike other questions we might answer with an ensemble (e.g. the Monty Hall problem). So if you want to scale it up and make an ensemble out of it, I say you need to scale everything. So now you have two agents, two incubators, and 2 (p=0.25), 3 (p=0.5) or 4 (p=0.25) peers. Or you have G agents, G incubators and 1.5G peers.

      The point is that if you scale it up, you should still have each agent only considering a single incubator. It is incorrect to scale it up by having one agent considering G incubators. The way you do the ensemble analysis is then to simulate each of the G agents individually and see how it turns out. This is of course what I did in my simulation.

  4. Hi DM,

Regarding what I mean by "situation" when I say "adding such a stipulation should not affect an agent's credences about properties of his situation": I am defining a situation in that statement to refer only to the properties of the agent's incubator (or, more generally, to the properties of the instance of the experiment in which the agent is embedded), regardless of how many other independent experiments / situations exist. The agent trying to decide whether the coin fell heads or tails in the Incubator is, by this definition, interested in a specific property of his situation/experiment.

    Given this definition of "situation" you do disagree with step 3, which, if I understand correctly, is ultimately because you disagree with step 1, not 2.

    Your objection against step 1 is expressed here I think:

    """The existence or non-existence of those worlds still has no causal influence on your situation, but your beliefs about their existence or non-existence do. If I am told there is one incubator I will reason one way. If I am told there are many, I will reason another."""

    That, however, is not so much of an objection, as simply a statement that you deny step 1. This doesn't mean you are wrong, but it's not an argument against step 1, it's just a statement of denial.

    What you say right before that passage is, if I got it right, that what the agent knows about the space of possibilities is relevant for his credences. Of course, but that general statement doesn't contradict step 1. It would only contradict it if you meant that any two different models of such a space result in different credences, but I am sure that's not what you meant (since that would be straightforwardly false).

    So I don't quite see the objection. Of course you don't need one to deny step 1, you can just deny it period. As I mentioned in the article, at some point arguments bottom out and we just have to pick which fundamental assumptions we accept.

    Replies
    1. Hi Dmitriy,

      > Given this definition of "situation" you do disagree with step 3, which, if I understand correctly, is ultimately because you disagree with step 1, not 2.

      I don't know, maybe, but I would say that given this definition of situation the problem is underspecified and there is no defined answer. You're asking "How long is a piece of string?". To make sense of the question, you need to give details like how many incubators there are, what are our credences regarding the coin flip etc. Once you give details like that, you have expanded the situation to encompass what I was talking about.

      > That, however, is not so much of an objection, as simply a statement that you deny step 1.

      I don't think anything I said thus far conflicts with step 1. My previous point was that the existence or non-existence of other worlds is simply part of the situation and that you were making a mistake to construe it as something that might (causally or otherwise) influence the situation.

      But considering it again, I think step 1 as currently formulated actually is straightforwardly false, depending on what you mean by situation and causal influence. "There is no greatest prime number" might be construed as a situation that I have confidence in for reasons that have nothing to do with causal influence. So, you might have logical or deductive reasons to adjust credences in some situation that have nothing to do with causal influence. The existence of other experiments might fit this bill.

      I can imagine you can do away with these concerns by defining your terms more precisely, in which case I would fall back on saying that the existence or non existence of other worlds must be considered part of the situation.

  5. About your follow up on the second justification:

    "Those other agents are just parts of the model (call them "peers"). You're not actually treating them as agents."

That's not correct. You are, for the purposes of this part of the discussion, assuming that we have already followed EP and are now solving the problem with the stipulation of many experiments. You are saying that I am not treating other people in the same epistemic situation as agents, but I am; that's the whole point of the ensemble method. We have many people (1.5G for stage (a)), each of whom is trying to figure out his credences, and each of whom has the same knowledge you do; therefore you don't know which one of them you are. In other words, they are all agents like you: you are one out of 1.5G curious people with the same knowledge. An agent is just a synonym for a person trying to figure out credences, whose knowledge is indistinguishable from yours.

    Replies
    1. Hi Dmitriy,

      > You are saying that I am not treating other people in the same epistemic situation as agents, but I am, that's the whole point of the ensemble method.

      You think this is an important point and I'm missing something, it seems. I feel this is a semantic problem and you're missing my point because of terminology or how I'm expressing myself.

      Fine. They're all agents. So instead of "agent/peers" I should maybe use some other term like "self/agents"

      The point I'm trying to make is that for each agent, there is only one self, though there could be more than one agent who that self could be. In the original formulation, the ratio between the number of selves and the number of incubators is 1:1, and the ratio between the number of selves and the number of agents is either 1:1 or 1:2. Each agent has exactly one incubator it could be in.

      In any problem you want to turn into an ensemble, you have to hold the ratios of the entities in the problem the same. This is fine for the Monty Hall problem, where the self isn't really a part of the problem. In the incubation problem, where the self is a part of the problem, you would somehow have to multiply the selves too. This is not a simple case of multiplying the number of agents. You have to keep it so that each self has one incubator it could be in. So you can't analyse it by supposing that any agent (i.e. you) could be in any of G incubators.

      If you can't multiply the number of selves by just multiplying the number of agents, it may not be immediately obvious whether or how we can use an ensemble analysis at all under these restrictions. But we can, in the way I did in my simulation. We repeat everything G times, by conducting G complete trials. This means we simulate G (note, not 1.5G) agents being selected from G incubators after G coin flips and the ensemble is our collected results.

  6. I start to lose you here:

    """The point I'm trying to make is that for each agent, there is only one self, though there could be more than one agent who that self could be. In the original formulation, the ratio between the number of selves and the number of incubators is 1:1, and the ratio between the number of selves and the number of agents is either 1:1 or 1:2. Each agent has exactly one incubator it could be in."""

    Can you explain how and why a "self" is different from an agent? What is your definition of a self? Can an agent tell if he is or isn't a self?

    Replies
    1. If there are 10000 agents, and I am one of them, then I am just one of them. The self is the agent that is me. My problem is to sample only 1 of the 10000. I am not sampling 10000 times.

      In the original problem, I am in one incubator. If I suppose instead that there are G incubators, then I have changed the problem unless I suppose that somehow there are also G mes, each with an incubator. This seems like nonsense until you realise that I can just simulate the singular problem G times. The G mes in the ensemble are represented by G self-selection operations (random samplings).

  7. Hi DM,

    Ok, so "self" is just another word for "I", but that in turn is just an indexical label each agent applies to himself. Then

    """In any problem you want to turn into an ensemble, you have to hold the ratios of the entities in the problem the same. This is fine for the Monty Hall problem, where the self isn't really a part of the problem. In the incubation problem, where the self is a part of the problem, you would somehow have to multiply the selves too. """

    I am not sure why you think that's true, or why "I" would even count as a separate type of entity distinct from just an agent, since it's just an indexical label, or in what sense you could have the number of "I"s different from the number of agents. In what sense and why would some agents be "I"s and some wouldn't?

    It sounds like what you are ultimately saying, for example here:

    """In the original problem, I am in one incubator. If I suppose instead that there are G incubators, then I have changed the problem unless I suppose that somehow there are also G mes, each with an incubator."""

is that you are simply denying the idea that adding many independent versions of the experiment doesn't affect the answer. But that's just saying you deny EP; I'm not seeing an actual objection here. Note that saying "I have changed the problem" doesn't serve as an objection to EP, because EP isn't denying that. Instead, it's saying that by changing the problem in this specific way the answer doesn't change.

    Replies
    1. Hi Dmitriy,

      > why "I" would even count as a separate type of entity distinct from just an agent

      It isn't. It's an indexical label. You can regard it as the perspective from which the problem is posed.

      > or in what sense you could have the number of "I"s different from the number of agents

      Not sure how you can be confused about this. In self-location problems there can be any number of observers, but there's typically only one "I". The problem is which observer is the "I".

      > is that you are simply denying the idea that adding many independent versions of the experiment doesn't affect the answer.

No I'm not. But when "I" am part of the problem, if I want to multiply the other entities in the problem to build an ensemble, I also have to multiply myself, this first person perspective from which the problem is posed. If in the original problem I have one self-location problem to solve, I now have G self-location problems to solve.

From each perspective, there is still only one incubator. No perspective can reason from the point of view of G incubators unless it's a 3rd person perspective that is outside the problem. You can and should reason from such a 3rd person perspective, but that means running G simulations from the point of view of a 1st person perspective within the incubator. This means you need to sample G times, not once.

      So I'm not denying EP per se, I'm saying you're misapplying it, by repeating some parts of the situation but not all of it, and so answering a different question than the original. I probably agree that if you apply EP correctly by repeating all parts of a situation, then you should get the same answer.

  8. Hi DM,

    """Not sure how you can be confused about this. In self-location problems there can be any number of observers, but there's typically only one "I". The problem is which observer is the "I"."""

Sure, that's what I would normally think: each agent applies "I" only to himself, so from his perspective only one entity has that label. But where I start to get confused is when you have multiple "I"s whose number is different from the number of agents; I don't know in what sense you could have that.

    """So I'm not denying EP per se, I'm saying you're misapplying it, by repeating some parts of the situation but not all of it, """

    You are saying I multiply the agents but not the "I"s. In addition to what I said above, if you agree that "I" is just an indexical label then it doesn't designate any additional entities. So if I multiply all the agents, all coins, rooms etc then there are no entities that are left unmultiplied.

    Replies
    1. Hi Dmitriy,

      I agree that it is confusing to say you need to multiply the "I"s. It doesn't at first make sense. If you can't make sense of it then you had better not make an ensemble at all.

But I say you can make sense of it if you just run the trial many times. You imagine being born many times in the conditions of the original experiment. If you do so you will find yourself in a heads-world 50% of the time and a tails-world 50% of the time.

      Another way to say it is just to say that in the original problem you have one self-location (random sampling) problem to solve and when you multiply it by G you must have G self-location problems to solve, each on a single incubator, so you must sample G times.

      Either way, the issue is you are repeating some parts of the problem but not others.

      Contrast with China/Chile where if you also repeat the sampling you get the same end result.

  9. Hi DM,

    """ It doesn't at first make sense. If you can't make sense of it then you had better not make an ensemble at all."""

    That is a crucial statement but I don't see how you establish that. I think the main thrust of your objection is that while you agree (now) that repeating the situation doesn't change the answer (even though the space of possibilities changes) you are saying that I don't actually repeat the situation:

    """Either way, the issue is you are repeating some parts of the problem but not others."""

    As I mentioned I disagree because "I" doesn't designate any additional entities. It might be helpful to mention that in the ensemble each agent is what Bostrom calls an observer-moment, i.e. agents are localized in time. Why? Because the point is to include all epistemic states like your current state in the counting of agents, since you can't distinguish between them. And such states are by their very nature localized in time.

    """Another way to say it is just to say that in the original problem you have one self-location (random sampling) problem to solve and when you multiply it by G you must have G self-location problems to solve, each on a single incubator, so you must sample G times."""

    This is just to deny EP (as opposed to saying I'm misapplying it as you do elsewhere). EP says a certain single problem has the same answer as a certain other single problem. So to say that I must have G problems is again not an objection but just a denial.

    Replies
    1. Hi Dmitriy,

      > you are saying that I don't actually repeat the situation:

      Yes. I think you are repeating some of the situation and not all of it. Like taking the Monty Hall problem and adding lots more doors but not adding lots more prizes.

      > As I mentioned I disagree because "I" doesn't designate any additional entities

      Well, not additional, but it picks out a particular entity as of special interest, a bit like the door in the Monty Hall problem that has the prize behind it. Having lots more agents but not lots more "I"s is like adding lots more doors but not lots more prizes. Since I am embedded in the puzzle, if I want to multiply all the entities I've got to multiply myself too, so that the ratio between me and the incubators stays constant at 1:1. This does not mean you can't build an ensemble, it just means the structure of the ensemble needs to be different than you conceive of it. It needs to be an ensemble of complete trials each including random sampling, not just an ensemble of incubators with random sampling at the end.

      I don't see the relevance of your comment about observer moments. I don't disagree with it in any case.

      > So to say that I must have G problems is again not an objection but just a denial.

      I think this is another semantic/terminology issue. The G problems are just sub-problems. There's still one single overall problem, which is what ratio to expect in the outcomes of the sub-problems. So I still think you're misapplying EP by not repeating everything you need to repeat and ending up with the wrong ensemble. On my way of thinking of the problem, I can still construct an ensemble, so it's not that I'm rejecting EP. I'm just constructing an ensemble differently -- an ensemble of complete trials rather than an ensemble of observers.

  10. Hi DM,

    I don’t understand the analogy you draw between “I”s and prizes:

    “ Having lots more agents but not lots more "I"s is like adding lots more doors but not lots more prizes.”

Remember, my response to your objection was that "I" does not designate any additional entities; it's just a label that every agent applies to himself, and I think you agreed with that. By contrast, a prize is obviously an additional entity. So I’m not seeing how this addresses my response.

    EP stipulates that you need to repeat the whole situation, meaning all entities involved in the situation. I am not sure whether you:

1. Think that’s illegitimate: something else besides all the entities must be multiplied. In that case you would just be denying EP.
    OR
    2. Think it’s legitimate but believe that I am not actually multiplying all entities. In that case you presumably disagree with the idea that the label “I” doesn’t designate any additional entities, beyond just all agents (which are of course multiplied).

    Can you clarify which route you are taking?

    Replies
    1. Hi Dmitriy,

      I'm not sure what you mean by "additional entity". We agree it's one of the agents, so it's not additional.

      But it is a special agent from the point of view of the agent for whom we are trying to solve the problem.

      > Can you clarify which route you are taking?

      Route 2.

      > I am not actually multiplying all entities

      I wonder if there's a problem of what we count as an entity.

      I just think you need to multiply everything about the situation. So if we had one sampling we need G samplings, for example. I don't know if "sampling" itself counts as an entity.

      Suppose you had to judge the odds of picking one black ball from an urn with 99 white balls and one black ball. Obviously the correct answer is 1/100. Suppose we want to multiply the situation by G. We could have a number of approaches of various degrees of correctness.

      1) 1 sampling, 1 urn, 1 black ball, 99G white balls
      2) 1 sampling, 1 urn, G black balls, 99G white balls
      3) 1 sampling, G urns, G black balls, 99G white balls
      4) G samplings, G urns, G black balls, 99G white balls

      1) is clearly wrong.
      2) & 3) are equivalent because there's no reason to consider each urn individually in this problem. You'll get the right answer either way. Things might be different if different urns had different balls inside them, e.g. 50% having all white balls. In either case they don't help much with the logic, as you end up dividing G/100G instead of 1/100 -- the G isn't really of any use.
4) is multiplying everything and will also get you the right answer. Instead of dividing G/100G, you can find an answer by simulating each trial and then looking at the results. This is how ensembles help solve counter-intuitive problems like Monty Hall.

      So you don't necessarily have to multiply everything all the time in simple problems where the number of some entities don't matter. But I say you do in the incubator problem. If you don't understand what I mean by multiplying the selves or the "I"s, then focus instead on multiplying the samplings.

  11. Hi DM,

    maybe using the word sampling instead of "I" will make me understand your objection better. I still suspect you are taking route 1, denying EP. Perhaps I can clarify a little better what exactly EP says can be assumed to happen G times. If we, for simplicity, grant physicalism (so no supernatural stuff, just physics) then by "situation" I mean basically a certain arrangement of "stuff" in some location in space and time, which in physics is called a system. An entity in that case is just a subsystem, so that multiplying the system just means having a bunch of such physical systems in causally disconnected regions of space and time.

In your case of the urn, the components/subsystems/entities of the system include: urn, balls, agent about to perform sampling. We imagine G such scenarios and ask: what fraction of the agents, upon performing the sampling, will get a black ball? Why do we ask that? Because that's the probability each of these agents should assign himself for picking a black ball, since he is equally likely to be any one of those G agents. Note this step is self-locating uncertainty; you call this "sampling" but let's call it SLU-sampling to distinguish it from the other, explicit sampling in the problem, sampling from the urn.

So we have G explicit samplings: each agent, according to the description of the situation, performs the physical operation of picking out a ball.

How many SLU samplings are there? All agents are, by construction, in the same epistemic situation as you; you can't tell which of them you are. So if you are using SLU sampling, all of them are. The number of SLU samplings is therefore equal to the number of agents, which is also G here.

But you seem to think it must be G not just in this problem but in any problem, such as the Incubator. And I don't see how you justify that.

    1. EP says to multiply the whole system
    2. Before multiplication every agent, who you could be, is trying to figure out his credences.
3. After multiplication the expected number of all components of the system, including the expected number of agents, gets multiplied by G. That's what EP says to do, and it promises that in this version the credences each agent should assign are the same as in the problem without multiplication.
    4. In the new version, after multiplication, it's easy for each agent to assign credences: just use SLU sampling.
    5. Since each agent is doing this, the number of such samplings, if we care about it, is just equal to the total number of agents in the new version, which is G times the expected number of agents in the old version.

So I'm not sure where there is room in this story to have, for example, 1.5G agents but only G SLU samplings, as you say should be true in the Incubator; or which part of the story you are denying.

    Replies
    1. Hi Dmitriy,

      I hope you don't take me to be appealing to anything supernatural if I insist that the logical structure of a situation must be repeated in order for the situation to be repeated.

      > But you seem to think it's G not just in this problem, it must be G in any problem, such as the Incubator. And I don't see how you justify that.

      Because if I'm in the incubator I'm only concerned with my point of view, my own SLU. In the original problem, where there is only one incubator but there may be one or two agents, each of which may have an SLU problem to solve, I am only concerned with my own SLU problem. So the number of samplings I need to consider is 1, not 2 or 1.5.

      If we repeat everything G times, the number of samplings we care about also must go up by a factor of G. I said that I need to repeat even myself. This is what I meant. In G experiments, there are G copies of me. The other agents are not copies of me. They are copies of the other agent who may or may not be in my incubator.

      Again, this makes sense if we consider the possibility of running the same trial multiple times, and being born in the incubator multiple times.

      > 2. Before multiplication every agent, who you could be, is trying to figure out his credences.

      True, but I'm only concerned with my credences. Once I figure out how I should assign my credences, the same would naturally go for any other agent who may exist.

Re-reading my last comment, I feel you're still not going to get it. You're going to say something like: if I am just one of the agents and I don't know which, and not some special other kind of entity, then you don't understand how I can say that there's a special agent in each incubator.

      Perhaps I could explain it like this. If you don't think of one of the agents in each incubator as a proxy for yourself, then you're not really correcting for the fact that you can only find one instance of yourself in an incubator. You can't be born twice in the same trial, so you only get one sampling per trial.

  12. Hi DM,

    Can you clarify what you mean by a copy of yourself as opposed to an agent? Is there any physical difference?

    Just to make sure, let me state what is meant by an agent: it’s anybody who you cannot distinguish yourself from. For example, in the first stage of the incubator, you don’t know what color beard you have, so you cannot distinguish yourself from a person with either beard color.

    Replies
    1. Hi Dmitriy,

      It's not "as opposed to an agent". It's still an agent. The idea has nothing to do with physical differences. It's an indexical/perspectival idea. I've said "copy of myself" but as these G experiments are just hypothetical thinking tools, it's not a physical copy. "Proxy of myself" may be a better idea. What I'm talking about is supposed to represent my perspective in the model.

      As I am one person, I can have only one perspective at a time. There may be many agents who might correspond to this perspective, and under SLU I may not know which one I am. But the fact remains that I can be only one of them. This is where the one of the "ones" comes from when I say we must maintain the ratio of 1:1 between "me" and "incubator" (the other of course is the number of incubators).

      So you can have G incubators, but only if you also have G "I"s. Again, this is intelligible if we interpret it as running many trials, such that the coin is flipped G times and you are born/reincarnated G times. While this might not make sense in the concrete (you would first need to explain what it means for the same as opposed to a different person to be reincarnated), it does make sense in the abstract, where we can interpret each trial as a randomised simulation of how the world might actually be as far as we know.

  13. Hi,

I think I understand how you think it's supposed to work; this is basically how your simulation functions. But I can't quite tell if you understand how EP is different from this.

EP claims that the answer for the case where it is stipulated that there is only one such experiment is the same as the answer for the case where it is stipulated that there are G such experiments. In both of those two cases you are only one agent; in the first case there are 1 or 2 agents, in the second case there are approximately 1.5G agents.

    You can disagree with the claim that EP is making, but do you feel you understand now what the claim is?

    Replies
    1. Hi Dmitriy,

      You understand how I would do it, and I understand how you would do it.

      But what's interesting is that I don't think I am disagreeing with the claim that EP is making, even according to your own definition of EP. Instead, I disagree with how you are applying EP in this case. I think you are not following your own definition correctly.

> EP claims that the answer for the case where it is stipulated that there is only one such experiment is the same as the answer for the case where it is stipulated that there are G such experiments.

      Yes, that is what EP claims, iff you interpret yourself somehow as not one but G agents.

      > In both of those two cases you are only one agent

      I say that this is not what EP claims. I say that this is a mistake in your application of EP.

      To explain: I can accept EP ("the answer wouldn't change if it's assumed that the situation in question has been repeated independently many times") while rejecting your interpretation of it in this case, because I don't think you are producing independent copies of the situation in question. The copies are not independent because you have the same perspective under SLU across all the copies at once, whereas in the situation in question there was a ratio of 1:1 between "my perspective" and incubators. In other words, you have changed the logic of the situation in ways other than just repeating it G times, and that's not what EP is about.

      This isn't an issue for non-anthropic questions because the perspective asking the question isn't embedded in the situation the way it is here. But here you are having your cake and eating it too, by both insisting that (1) it matters that this is from a first person perspective embedded in the experiment (a fish's perspective) and (2) reasoning as if it's a 3rd person perspective standing outside the incubators and looking at the whole ensemble.

Now, of course, you have defined EP, and it would be arrogant in the extreme for me to insist you have EP wrong. So if you want to clarify and redefine EP such that my interpretation is wrong and yours is right, then of course you can and should do that, but then I would reject EP on the grounds that you're changing the structure of the problem in ways that may affect the answer.

  14. Hi Dmitriy,

    Assuming you agree with me that there's no sense in interpreting the EP as suggesting that we can make many copies of part of a situation while keeping only one copy of another part, can you see that it's at least debatable whether the sampling or first-person perspective should be considered part of the situation that needs to be copied?

    Or is this just crazy talk from your point of view?

    Replies
    1. Hi DM,

in philosophy, as you know, most things are debatable, and this is one of them. I don't think it's crazy talk; I just partially disagree and partially don't understand. It seems there are two issues: whether I am misapplying my own definition of EP, and whether it's true that to keep the answer the same we must have G selves.

      1. Definition of EP.

I currently have it as "the answer wouldn't change if it's assumed that the situation in question has been repeated independently many times." I certainly didn't mean to be misinterpreting my own definition, so I think I need to change the definition to make it clearer that

      """EP claims that the answer for the case where is stipulated that there is only one such experiment is the same as the answer for the case where it stipulated that there are G such experiments.

      In both of those two cases you are only one agent"""

      I thought that my definition and the surrounding explanation make that clear but it seems that's not true. How would I reformulate the definition to make that clear? What do you think of this:

Preamble. Suppose you are trying to form credences about a feature of some state of affairs you are in (e.g. were one or two bearded men created by my incubator?). Let's call such a state of affairs one instance of an experiment, with the total number G of similar independent experiments in all of reality unknown to you (e.g. maybe, for all you know, in some other galaxies or parallel universes there are other such incubators).
EP. G doesn't matter. Specifically, the answer is the same if you knew that G = 1 as if you knew it was any other specific number.

Do you feel that makes it clearer? If not, how would you phrase the claim? It might be easier for you because I still don't fully understand why the original definition led to a misunderstanding.

      2. G selves.

      If the new definition is more in line, you feel, with the claim I want EP to be making then you presumably now disagree with the new formulation of EP. Then how would your objection interact with it? Is it something like: you deny it unless it's also stipulated that each of the G experiments has exactly one physical copy of you?

      If

    2. Hi Dmitriy,

      In answering your comment I realised I was superficially contradicting myself, in that I both want to say that you are and are not in a particular experiment. But in each case, I'm talking about different considerations.

      Consideration 1: You are not in any particular experiment insofar as you are under SLU so the entire ensemble is relevant to your credences and you don't know which experiments are causally disconnected from you.
      Consideration 2: You are in a particular experiment insofar as you should reason as if you are in one particular experiment drawn from the pool of possible experiments, and not one observer drawn from the pool of possible observers across all possible experiments.

      With this in mind...

I think the revised explanation is clearer, and I would reject EP on this explanation. But you might want to state that you are under SLU such that you have no idea which of the G similar experiments you are in, that is, you can't assume that you are in a particular one (consideration 1). I think stating this is helpful in seeing why it is wrong to characterise it as "this experiment" and "other, causally disconnected experiments which could have nothing to do with my experience". Once you don't know which experiment is yours, then the structure of the ensemble as a whole may give you reason to adjust your credences, even if experiments are causally disconnected from each other.

      > Then how would your objection interact with it?

      In addition to the above, I would say that you're only guaranteed to get the same results if you multiply all aspects of a situation uniformly, and if you don't multiply your own perspective somehow then you are distorting the problem. The problem where you know you are in a specific experiment is not the same as the problem where you could be in any of G experiments.

      You can see this clearly from the fisherman's perspective. If we return to that problem, then you will readily agree that the fisherman's credence in the case where the pond has been stocked once is completely different from the fisherman's credence where the pond has been stocked G times. When the pond has been stocked once, then all that matters is the coin flip and the relative population sizes of fish in the two ways of stocking don't enter into it. When the pond has been stocked many times, such that we can assume that the population has been stocked approximately half one way and half another way, then the population sizes do enter into it.

      Again, we can analyse the fisherman's case in two ways, depending on where we draw the borders around the experiment. If we draw the borders of the experiment around the lake and the stocking, and leave the fisherman outside of it, then "the total number G of similar independent experiments in reality unknown to you" might just be G stockings before the fisherman comes to visit. If the fisherman assumes G stockings he will get a different answer than if he assumes 1 stocking, so EP is falsified.

      If instead we draw the borders of the experiment so as to include the fisherman, then even if he assumes G experiments then he personally still only has one lake and one stocking to worry about, and he will correctly reason that all that matters is the coin toss. He can prove this with an ensemble if he likes, but this ensemble must have G copies of his particular perspective, G selves, G samplings. Any ensemble with only one sampling will get the wrong answer.

      So from the fisherman's perspective, if he counts himself as part of the experiment he gets the right answer, and if he doesn't then he doesn't. I think the same is true from the fish's perspective. The fish can assume there are G experiments, but the fish should still reason as if it personally only has one particular experiment to worry about, even while it is under SLU about which experiment this might be.

      Delete
  15. Hi DM, I think we made progress by finding a definition of EP that you feel better reflects the claim I intend for it to be making. You of course disagree with that claim. Now let's look at your objections.

    """Once you don't know which experiment is yours, then the structure of the ensemble as a whole may give you reason to adjust your credences, even if experiments are causally disconnected from each other."""

    I definitely agree that you don't know which experiment is yours, which I understand to be the main thrust of the paragraphs preceding the quote. But that doesn't establish the statement in the quote. The quote itself seems to just be saying that EP is not true, but is not providing a reason. Consideration 1 does the same as far as I understand it. So if this is meant as an objection to EP or its justifications, as opposed to just a denial of EP (not that there's anything wrong with simply denying an assumption), then I can't discern what the objection is.

    About the fisherman example from the other thread, the first way to analyze it doesn't falsify EP, because the agent is not causally disconnected from G-1 instances as would be needed for EP.

    The second way is perfectly consistent with EP, as we discussed in the other thread. The ensemble method, as you probably remember, gets the right answer for the fisherman.

    ReplyDelete
    Replies
    1. Hi Dmitriy,

      Consideration 1 is just a suggestion that the causal disconnectedness argument is a red herring as we don't know which experiments we are causally connected to. That in itself doesn't refute your application of EP, but it does I think undercut some of the motivation for it. It should be fairly obvious that the structure of the model we have for where we might be should influence our credences, even if we believe ourselves to be causally disconnected from most of it. So it's not unreasonable to suppose that facts about causally disconnected universes should affect my credences about my situation, because I don't know which universes I'm causally disconnected from. That's not enough to say the number of instances does matter; it's just to undercut the intuition that it definitely shouldn't.

      About the fisherman example, you analysed it correctly before according to how I interpret EP. Whether your analysis is consistent with the new clarified form of EP is open to interpretation. Referring back to your "Preamble:.../EP:..." definition, you don't talk about causal disconnectedness. According to the text of this definition, it seems the fisherman could interpret an instance of the experiment as a stocking of the lake, and so take EP to be saying it doesn't matter how many times the lake was stocked, which we agree is wrong. So you need to further clarify the definition to rule out such an interpretation.

      I guess you would want to do so by introducing the concept of causal disconnectedness. Perhaps you should amend "with the total number G of similar independent experiments in all of reality unknown to you" to "with the total number G of similar independent experiments causally disconnected from you unknown to you".

      Note that as soon as you assume there are similar experiments which are causally disconnected from you, then you must assume that there are other fishermen just like you to take your place in those experiments, and you must be under self-location uncertainty as to which of these you are. So you're reasoning with multiple samplings, multiple selves as I'm suggesting you should (and as you do when you apply EP to the fisherman's perspective).

      Our disagreement then is whether the fisherman's perspective and the fish's perspective are all that different. It seems we both agree that the fisherman should not be considered to be outside the experiment. He's embedded in it just as the fish is. The only difference between the fish and the fisherman seems to be that the fisherman has one potential "self" per experiment and the fish has many.

      For the fisherman, we can sample once per experiment and then look at the proportions of the results of those samplings to derive our credences. For the fish, I have criticised you for not multiplying the fish's perspective in the same way, and just doing one sampling across the whole ensemble. But I realise now that you would get the same answer if you sampled for the fish many times per experiment and then looked at the proportions of the results of those samplings. So perhaps the issue is really just that I think the fish should sample once per experiment and you think it should sample as many times as there are fish in each experiment. In which case what I need to defend is the former. Are you with me so far?

      Delete
  16. Hey Guys,

    Sorry for seemingly dropping off the face of the earth for a few weeks; I'm back now.

    @DM:

    I think it might be helpful to take a step back here and realize that much of your argumentation has been focused on what you feel is the 'proper' way to extrapolate the ensemble methodology to handle a plurality of cases (large numbers of independent experiments taking place). As you yourself appeared to acknowledge, this is not in fact an attack on the justification for EP, but rather a purported criticism of Dmitriy's methodology. However, even this isn't so much a criticism of Dmitriy's approach as an alternative way of doing things.

    In an earlier discussion in the other thread you mentioned how your simulation takes into account the number of experiments; where in the alien galaxy case an observer would have a 50/50 chance of being born in either M or S provided there was only a single galaxy, but these odds would begin shifting more and more to approach the M:S odds ratio as the number of galaxies increased. Since EP demands that the odds ratio must be fixed regardless of the number of galaxies, there are two simple ways to secure this.
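
    (A minimal sketch of that behaviour, for concreteness -- assuming the 42:1 M:S observer counts we have been using and a single uniform draw of one observer per simulated world; the function name and numbers are just placeholders:)

        import random

        def chance_born_in_M(galaxies, trials=20_000):
            # Each galaxy is M (42 observers) or S (1 observer) on a fair coin flip;
            # then one observer is drawn uniformly from the whole world.
            hits = 0
            for _ in range(trials):
                m = sum(random.random() < 0.5 for _ in range(galaxies))
                m_obs, s_obs = 42 * m, galaxies - m
                if random.randrange(m_obs + s_obs) < m_obs:
                    hits += 1
            return hits / trials

        print(chance_born_in_M(1))      # ~0.50 for a single galaxy
        print(chance_born_in_M(1000))   # ~42/43, about 0.977, for many galaxies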

    Dmitriy's approach would have us treat singular cases involving only one galaxy as bearing the same M:S odds ratio we see in the ensemble, involving millions of (if not infinite) M's and S's. Your interpretation would result in us treating all cases as though they were equivalent to the singular scenario in your simulation (so that basically the odds are always 50/50 no matter the number of galaxies). That's what you effectively get when you multiply the number of agents so that the proportion of agents to observers remains constant.

    But simply laying out an alternative vision that would retain the soundness of EP does not by itself make for a criticism of Dmitriy's approach. Further, this alternative methodology comes with many obvious problems. As I wrote above, if we adopted the same approach for similar cases (like the aliens galaxy case) we would have to believe that the odds ratio is always 50/50, even if we knew that the aliens created 100,000 galaxies. This is in fact contrary to the results of your own simulation. Is this something you really wish to maintain?

    Secondly, increasing the number of agents (or selves) in constant proportion to the number of observers has the effect of violating Occam's razor. It means that we need to introduce the additional stipulation that our agents are special, in the sense that an agent has the property that they can only be a certain kind of observer (i.e. a local observer), and this holds true for all agents. The simplest approach, however, would be to treat all agents as generic, capable of being born in any possible galaxy. But the agent who is 'paired' with every galaxy or with every incubator (as in your example) is not a generic one, and we would need some additional grounds for why we should believe that our agents are 'special' in the above sense.

    Unless you have radically changed your views in the time period of my departure, I get the sense that you too would find such an alternative approach of reasoning based on EP to entail unacceptable consequences. So this, I feel, was a somewhat unhelpful piece of reasoning which in the end brought us no closer to the real argument for why EP lacks standing.

    ReplyDelete
    Replies
    1. Hey Alex,

      Glad you are back! If I understand DM's position, he would say that the problem where we know there are 100,000 galaxies is actually different from the problem with 1 galaxy multiplied G = 100,000 times.

      In the former, we all agree the answer is 42:1; in the latter DM would say the answer is 1:1. In the former, there's only one "us"; in the latter he wants there to be G "us".

      Of course, I still don't entirely understand what exactly is meant by having G selves. DM explains it by imagining we were born G times. I still don't get it though, because I don't know what it means for someone to be born many times, or how that would be different from G similar people being born.

      And of course, regardless of whether that can be made sense of, that doesn't have much to do with the claim of EP, which is that it's the former version (with one self but G galaxies) that has the same answer regardless of whether G = 1 or 100,000. As I was saying before, besides simply denying EP, I'm still unable to discern an argument for why we should think our credences should depend on stipulations about G, since those amount to stipulations about parts of the cosmos that have no physical influence on our patch.

      Delete
    2. Hey Dmitriy,

      I do actually think that DM's ultimate position is against EP. I think he was saying something like the following:

      1) If EP were sound then cases involving a large number of G's (e.g. 100,000) must yield identical credences to a singular case (1 G).
      2) The correct credence for the aliens galaxy case is 50/50 in a singular case.
      3) Reconciling 1 and 2 would have us introduce extra agents (selves) to keep the proportion of agents to observers constant (so that the ultimate odds ratio remains fixed at 50/50) as the number of G's increased.

      I don't think DM literally means to make this argument (as I said; I think he's against EP). I think he just meant to show that, even if we accepted EP, we would have to reason as in the above. Of course that's not true because we don't accept premise 2; so I agree that all of this fails to prove much of anything.

      Moving on, I interpreted DM as arguing against the plural galaxies case, for if we knew there was only one galaxy it wouldn't otherwise make much sense to say that we need to extrapolate the ensemble by constructing a multiple galaxies scenario. In that case he could just argue that inferring additional entities in the ensemble is unsound (precisely because we are supposed to quantify over actual, not possible, galaxies).

      It's only if we wished to maintain EP AND knew that there existed multiple galaxies that we would run into problems. I saw the point about stipulating additional agents as being about that. However, yours is the more charitable interpretation; so let's assume that DM meant to say "this is how you should construct your ensemble for singular cases if you wanted to, but you shouldn't because it is illegitimate to infer additional entities in singular cases". In that case I was wrong to say that DM's methodology would have us adopt a 50/50 odds ratio for cases involving large G's (in the aliens galaxy scenario), but this still reduces back to the above argument.

      Finally, about the argument itself. Criticisms of premise 2 aside, I'm not sure if it's actually valid. That is, even if we accepted the condition that we must introduce agents in constant correspondence to the number of observers, where every agent is 'tied' to every observer/incubator, we should still reason as though there are M:S odds. The additional requirement of DM's just adds a superfluous element of complexity; instead of asking (what type of observer are we?) we would instead ask (what type of agent are we?). But notice it's still true that there are 42 to 1 agents that live (or have the potential to live) in an M galaxy versus an S galaxy.

      So while it's true that we don't know what type of agent/self we could be, that's no different than saying that we don't know what type of observer we could be. At the end of the day, the introduction of additional agents (or selves) in correspondence with the number of G doesn't change anything so long as we retain the same selection procedure (assuming that we could be any agent or observer). Thus, we nevertheless require an argument for why we should adopt a different sort of selection procedure, which was needed anyway. Hence, this is just a roundabout way of beating around the bush.

      Delete
    3. Hi Alex,

      Glad to see you back again!

      > I think it might be helpful to take a step back here and realize that much of your argumentation has been focused on what you feel is the 'proper' way to extrapolate the ensemble methodology to handle a plurality of cases

      No step back required. This is explicitly what I am arguing.

      > Your interpretation would result in us treating all cases as though they were equivalent to the singular scenario in your simulation

      No, that's not correct. I'm going to stop on this point -- the rest of your comments are seemingly irrelevant because they are influenced by this misunderstanding of my position.

      My interpretation would have us treat singular cases as singular cases and G cases as G cases.

      In singular cases, the odds are 50/50. In G cases the odds are M:S, not 50/50. In between, with low numbers of experiments, we have to examine the probabilities for each combination of coin flips across the experiments: with two experiments we have 4 possible worlds (MM, MS, SM, SS) to consider, which you can condense to (1*MM, 2*MS, 1*SS); with three experiments, 8 possible worlds (MMM, MMS, MSM, MSS, SMM, SMS, SSM, SSS), which you can condense to (1*MMM, 3*MMS, 3*MSS, 1*SSS); and so on. In the limit as the number of experiments tends to infinity, we can dispense with considering each possible world, because we know that the ratio of actual observers will approach M:S with probability approaching 1.
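
      Here is a minimal sketch of that bookkeeping (assuming one observer draw per possible world, with 42 and 1 as placeholder M and S observer counts):

          from math import comb

          def credence_M(n, m=42, s=1):
              # Exact credence of being in an M galaxy with n >= 1 experiments:
              # weight each class of possible worlds by its binomial probability,
              # then draw one observer per world.
              total = 0.0
              for k in range(n + 1):                 # k = number of M outcomes
                  world_prob = comb(n, k) / 2**n
                  m_obs, s_obs = k * m, (n - k) * s
                  total += world_prob * m_obs / (m_obs + s_obs)
              return total

          print(credence_M(1))     # 0.5
          print(credence_M(2))     # ~0.74, from (1*MM, 2*MS, 1*SS)
          print(credence_M(100))   # ~42/43, about 0.977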

      I don't think the disagreement is really about which methodology is the one correct methodology for assessing ensembles, because Dmitriy and I both agree (and I assume you do too) about which methodology to use for the case of the fisherman -- we just disagree about how to apply it for the case of the fish. From my perspective, the methodology I would use for the fisherman and the fish is exactly the same, even yielding the same result. Dmitriy also feels he is being consistent, but gets a different result with a different calculation because he thinks the two cases are different. So really this is not so much a question about which methodology is correct as much as a disagreement about whether the perspectives of the fisherman and the fish in answering the question are different or essentially the same.

      Delete
    4. Hey DM,

      "No, that's not correct. I'm going to stop on this point -- the rest of your comments are seemingly irrelevant because they are influenced by this misunderstanding of my position.

      My interpretation would have us treat singular cases as singular cases and G cases as G cases."

      Yes I know; as I previously stated:

      "I do actually think that DM's ultimate position is against EP.... I don't think DM literally means to make this argument (as I said; I think he's against EP)"

      So I realize that you are against EP (which is the same as saying that the odds for singular and plural cases should be different). However, I was addressing the point of yours that we should handle EP (for plural cases) by scaling everything up, including the number of agents/selves. This would have the inadvertent side effect I spoke of; where the odds for singular and plural cases would in fact be the same, contrary to your position. This is supposing that you really did intend for this scaling up process to be used to handle plural cases. Though I understand from your latest comments that you felt you were struggling to express this specific portion of your case, and so don't wish to hold so strongly to it.

      Delete
    5. Hi Alex,

      > This would have the inadvertent side effect I spoke of; where the odds for singular and plural cases would in fact be the same, contrary to your position.

      There is no such side effect.

      We need to be clear about what we are scaling up. Are we scaling up our hypothetical number G which only reflects an ensemble we're using to represent our epistemic situation, or are we scaling up the number of experiments we believe actually exist? If we're scaling up G in a hypothetical model, then we should expect to get the same answer as in the original case, because the whole point of the ensemble is to represent and help us think about the original case. And we do get the same answer -- so I agree with EP insofar as the size of G makes no difference if we duplicate the experiment correctly. I only disagree with you and Dmitriy about how to duplicate the experiment correctly.

      But if we're scaling up the number of experiments we believe to be actual, then we don't get the same answer for singular and plural cases, because if these experiments are actual we could be in any one of them, and it doesn't make sense to duplicate the samplings. If we believe there is one experiment we could actually be in, then we have one sampling and one experiment to sample from. But if we believe there are 100 actual experiments, then we still have one sampling but 100 experiments to sample from. These are now different problems with different answers.

      Delete
    6. Hey DM,

      Okay I see, when I wrote:

      "<"I think it might be helpful to take a step back here and realize that much of your argumentation has been focused on what you feel is the 'proper' way to extrapolate the ensemble methodology to handle a plurality of cases">

      No step back required. This is explicitly what I am arguing"

      I meant by 'a plurality of cases' to be referring to actual cases. It seems that you simply intended a G plural ensemble to be used for a singular case. If so my question is, why? Why introduce the additional complexity of the agents/selves? Note that there are still 42 M agents for every 1 S agent, so even though every agent is tied to every local environment/incubator/galaxy, it's still true that if we reason that we could have been any agent, then we should accept a 42 to 1 M:S odds ratio.

      You would still need to argue that we should adopt a different selection factor (in other words, to argue that we shouldn't assume we could be any agent with equal probability). Thus this whole point of scaling up the ensemble in such a way is, in my opinion, superfluous complexity, since this was the very thing that you needed to argue from the beginning.

      Moving on, I am going to quote myself again:

      "I saw the point about stipulating additional agents as being about that (meaning, actual plural cases). However, yours is the more charitable interpretation; so let's assume that DM meant to say "this is how you should construct your ensemble for singular cases if you wanted to, but you shouldn't because it is illegitimate to infer additional entities in singular cases". In that case I was wrong to say that DM's methodology would have us adopt a 50/50 odds ratio for cases involving large G's (in the aliens galaxy scenario), but this still reduces back to the above argument"

      I did make that retraction earlier (sorry if that wasn't clear). I interpreted your latest point as actually being against that retraction, and therefore agreeing with my original interpretation that you meant to be talking about a plurality of actual cases; otherwise I didn't understand why you appeared to be criticizing my retraction. My apologies for that misunderstanding.

      In any case, the relevant point here is that you are correct to say that your reasoning doesn't have to assume the same results for both plural and singular cases. But my additional point still stands in so far as such a piece of reasoning is really just an alternative laying out of how you think we should do things, it's not really a criticism of Dmitriy's approach I would say. Do you disagree?

      Delete
  17. Hi DM,

    You made an objection against justification 1 for EP:
    " the causal disconnectedness argument is a red herring as we don't know which experiments we are causally connected to."

    I am not sure I understand what you mean by knowing which experiment you are connected to. If each incubator had an id number, would knowing it constitute knowing which experiment you are connected to? Without a clear understanding it's hard to assess the objection.

    But in any case, it seems to me that you definitely know that you are connected to only one experiment (by construction) and disconnected from the other G - 1. If those other ones don't causally affect any physical features you are forming credences about, it seems irrational for your credences to be affected.

    """I guess you would want to do so by introducing the concept of causal disconnectedness. Perhaps you should amend "with the total number G of similar independent experiments in all of reality unknown to you" to "with the total number G of similar independent experiments causally disconnected from you unknown to you"."""

    The word independent was supposed to imply that no two experiments affect each other, meaning they are causally disconnected. Perhaps I can try to make that clearer.

    """But I realise now that you would get the same answer if you sampled for the fish many times per experiment and then looked at the proportions of the results of those samplings. So perhaps the issue is really just that I think the fish should sample once per experiment and you think it should sample as many times as there are fish in each experiment. In which case what I need to defend is the former. Are you with me so far?"""

    Yes, well put, this is a nice way to frame our difference.


    ReplyDelete
    Replies
    1. Hi Dmitriy,

      > I am not sure I understand what you mean by knowing which experiment you are connected to.

      I think it's only correct to rule out some scenarios as irrelevant to our credences because of causal disconnectedness if you know that you are in fact causally disconnected from them. But in our ensemble, while we know we are causally disconnected from G-1 experiments, we don't know which ones these are. Any one of them could be our experiment, so all are relevant to our credences, and I think even the number of them is relevant to our credences.

      > If each incubator had an id number, would knowing it constitute knowing which experiment you are connected to?

      It depends on whether all experiments in the ensemble have the same id number or not! If they do, then obviously it's no help. If they don't, then the ensemble doesn't represent your epistemic state any more. You are supposed to be under SLU regarding the ensemble, but if you know the id number of the experiment you are in then there's no point in imagining an ensemble where only one experiment has this id number.

      > If those other ones don't causally affect any physical features you are forming credences about, it seems irrational for your credences to be affected.

      The ensemble represents your understanding of the probabilities of your current situation. A change in that ensemble can affect your credences not because of any causal effects, but because it represents a different mental model of your circumstances.

      What you are saying to me strikes me as like saying "The possibility of a galaxy with 42 seeded civilisations is irrelevant to me because such a galaxy is causally disconnected from me". This is confused, we both agree, because the statement is only true if you are in fact in a galaxy with only 1 seeded civilisation, which you don't know. As long as you don't know whether you're in an S galaxy or an M galaxy, then the possible existence of both S and M galaxies is relevant to your credences, even though you are only causally connected to one of them.

      > The word independent was supposed to imply that no two experiments affect each other

      I think so, because otherwise, you can draw a ring around each fish-stocking, excluding the fisherman, and regard each ring as an independent experiment, because they really do have nothing to do with each other. If you do so, you're counting the fisherman as a God's eye view quite apart from the problem, much as we do when we talk about abstract thought experiments. So not only do you need to make clear that the experiments are causally disconnected from each other, you need to make clear that if the viewpoint from which we consider the problem is a part of the experiment, then that viewpoint must also be duplicated.

      > Yes, well put, this is a nice way to frame our difference.

      Good, I must come back to you on that so.

      Delete
  18. Hi DM,

    >It depends on whether all experiments in the ensemble have the same id number or not! ...

    Suppose all ids are different. Consider two problems:

    A. You don't know the id of your experiment, you know there are G disconnected experiments.
    B. You do know the id, otherwise same as A.

    Do you think your credences in these two versions should be different? I think they should be the same, since the id is an irrelevant piece of information. But then knowing or not knowing which experiment you are in makes no difference. Presumably in B you think the answer is the same as in

    C. You know there's only one experiment.

    > What you are saying to me strikes me as like saying "The possibility of a galaxy with 42 seeded civilisations is irrelevant to me because such a galaxy is causally disconnected from me".

    Not at all. EP is talking about other actual galaxies that are basically so far away (say outside our observable universe) that no signal from them could reach us. Your quote isn't anything like that, I don't even know what it means for a possible galaxy to be disconnected from you. If it means "if it actually exists then no signals from it can reach me", then obviously the galaxy in your quote is most certainly not causally disconnected.

    ReplyDelete
    Replies
    1. Hi Dmitriy,

      In case A, the ID is irrelevant.

      My first reaction to B is what you expected. The ID is not irrelevant, as it identifies your particular experiment in the ensemble of G experiments. This makes G-1 experiments irrelevant to your credences, and reduces the effective ensemble to one case, so you're back where you started with no ensemble at all, i.e. case C.

      But then I think again, and reassess. I agree that the ID doesn't seem all that relevant to the logic of the problem. As such, taking the ID too seriously would seem to be applying TER where I would be inclined to go more generic and disregard the ID. Disregarding the ID, I'm back to not knowing which experiment I'm in, and all experiments are relevant to my credences again, just as if I didn't know the ID.

      Either way, the ID doesn't help. The point of the ensemble is to place yourself in SLU. As long as you're under SLU, you don't know what experiment you're in, so all experiments are relevant even though G-1 of them are causally disconnected from you.

      > If it means "if it actually exists then no signals from it can reach me", then obviously the galaxy in your quote is most certainly not causally disconnected.

      I don't follow. Possibly you meant to say the opposite of this? If something exists but no signals from it can reach me, then it's not obvious that it is not causally disconnected.

      Delete
  19. Hi Dmitriy,

    Before I try to defend my approach for the fish's perspective, I want to recap my understanding of our agreements and disagreements to make sure we're on the same page.

    Previously I had been thinking of your way of approaching the problem as failing because it failed to multiply your own perspective along with everything else in the experiment. I had a hard time communicating what that meant. But we have discovered that your approach is also consistent with multiplying your perspective more than I think you should, and you seem to go along with that interpretation, so perhaps that's a more promising approach.

      I take it we're all on board with what I mean by the fisherman's perspective and the fish's perspective. To recap, the fisherman is a perspective which is unique to an experiment, and which is explicitly sampling one member of a population each time the experiment is run. The fish's perspective is not unique, being one of the population being sampled from. The fisherman is trying to predict what kind of fish he will catch based on knowing details about the experimental setup. The fish, which doesn't know which kind of fish it is (e.g. it doesn't know its colour because it is colour-blind), is trying to guess what kind of fish it is based on knowing the same details about the experimental setup.

    I think these two problems are exactly the same. You think they are different.

    What they have in common is that both problems can be thought of as sampling from a population of known parameters (e.g. with 50% prior probability the population will have makeup A, and with 50% prior probability it will have makeup B). What is different is that there is only one instance of a fisherman's perspective per experiment, while there are many instances of a fish's perspective per experiment.

    We agree on how to model the fisherman's perspective with an ensemble. We can think of running G experiments, and then working out what G fishermen (one for each experiment) will catch based on our knowledge of the probabilities for each kind of population, with one sampling operation per experiment. The final credences are reflected in the ratios across these G samplings. This is how my simulation models all problems of this sort, including the fish's perspective.

    We disagree on how to model the fish's perspective. Previously, you had argued that you need to sample only once across the whole ensemble. This seemed incorrect to me as the sampling seemed to be part of the experiment -- with G experiments it seemed to me you would need G samplings. It also seemed inconsistent to me with how you were modelling the fisherman's perspective. But it seems both problems can be resolved by instead regarding it as doing as many samplings as there are fish in each experiment, and then looking at the ratios of all these samplings. This appears to be mathematically equivalent to doing just one sampling across the whole ensemble while being more obviously (to me at any rate) consistent with your approach for the fisherman.
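
    To illustrate the two schemes with a minimal sketch (placeholder populations: 1 fish on tails, 42 on heads per experiment):

        import random

        def fish_schemes(G=100_000):
            tails_experiments = 0        # my scheme: one sampling per experiment
            tails_fish = all_fish = 0    # your scheme: one sampling per fish, pooled
            for _ in range(G):
                tails = random.random() < 0.5
                n = 1 if tails else 42   # placeholder population sizes
                tails_experiments += tails
                all_fish += n
                if tails:
                    tails_fish += n
            return tails_experiments / G, tails_fish / all_fish

        print(fish_schemes())   # ~(0.50, 0.023): credence in Tails under each scheme

    The pooled per-fish ratio comes out the same as doing a single sampling across the whole ensemble, which is the equivalence I mean.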

    My argument is pretty simple, but I don't expect it to be convincing. I think we're more or less back to SSA vs SIA territory, with competing intuitions but no clear contradictions or knock-downs either way. I still think the fish should sample only once per experiment, because it is just one fish. As a fish, I don't know what possible world I am in, but I'm only going to be born once. As such, for each experiment (which is really just a hypothetical abstract possible world for a circumstance in which I may find myself, as far as I know), I should only roll the dice once to simulate my birth (or my coming to be in whatever situation I'm in). While there may be 100 fish in a given possible world, any of which could be me, in any given experiment I don't get to be all of them. If I'm only sampling once per experiment, then the problem is equivalent to the fisherman's perspective.

    ReplyDelete
  20. Hi DM,

    I don't understand your position on my response to your objection that you don't know which experiments you are disconnected from. I think you are saying

    1. A and B have the same answer.
    2. In B, you know which experiment you are in.
    3. BUT, you claim, we must disregard this knowledge (not follow TER) because ID is irrelevant (i.e. because 1).

    Did I get it right? If so, then it seems like in B you know that other experiments are causally disconnected (since you know which experiment you are in, and your objection doesn't apply), BUT, 1 (ID is irrelevant) + TER denial means, you believe, that you must disregard that knowledge.

    Is that right?

    ReplyDelete
    Replies
    1. Hi Dmitriy,

      I think you have two options on B. Take the ID into account (respecting TER) or don't take the ID into account (disrespecting TER).

      If you take the ID into account, then you're back to case C, (so it's not the same as A). You effectively have one case so the ensemble is not relevant to your credences.

      But personally I would choose to disrespect TER, as I don't think the ID should be relevant to my credences. It's again taking my personal experience as special just because I'm me, whereas objectively there's nothing particularly special about my ID. Disrespecting TER and disregarding the ID makes B equivalent to A.

      But again, the point is moot. The ensemble is not actual, it's a mental construction I am using to help me model my situation. As such I'm free to construct it however best represents my epistemic situation. To do so I must construct the ensemble so that I am in SLU with respect to it. This means that I must either pretend I do not know my ID number or construct the ensemble such that all experiments have the same ID number. It makes no sense to take my ID number into account while constructing an ensemble where only one room has my ID number.

      Delete
  21. Hey Dmitriy,

    This is a two part post of mine:

    I've been thinking a bit about your causal disconnectedness principle; I think it may be helpful to expand on it a little. Firstly, I should begin by mentioning that I agree with your reasoning; indeed I think it almost self-evident under a certain interpretation. To see this, let's modify the aliens galaxy experimental setup so that we can all agree that singular and plural cases should yield different odds ratios. For instance, if the aliens planned to experiment in only one galaxy, then they decided beforehand to keep the M and S populations equivalent, whichever one was created (by, for instance, radically curtailing the population of M planets). If the aliens decide to conduct their experiment in many galaxies, then they will follow the usual 42:1 format after flipping a coin.

    Thus our knowledge of whether we are in a singular or plural case legitimately impacts the odds ratios. I think we can all agree on this (regardless if we subscribe to SIA or SSA); the question now is how you think this impacts your argument. Suppose that plural cases involved experimental replications beyond our present cosmological event horizon; then all galaxies are presently causally disconnected from each other. We would need to adopt a temporally capacious interpretation of what "causally disconnected" means, so that it is defined as no physical transfer of information ever having been (or being) possible between the galaxies.

    In my example, some information was possibly transferred (for instance the aliens could have left behind information billions of years ago that they were planning on travelling at near light speed beyond our present event horizon to create new experiments/planets, thus impacting our present credences). This does make the argument somewhat tautological though, for here causal disconnectedness is almost literally defined to be a lack of any possible information transfer. But as we saw above, present causal disconnectedness is an insufficient condition for the requirement that our credences must be unmodified. It's only when we adopt the temporally expanded version that we can get away with saying that our credences are unaffected by independent experimental replications. I wonder if you agree with my definitional interpretation?

    ReplyDelete
    Replies
    1. Part 2:

      But if we agree on this, then there arises an issue of scope. For in cases where the reference class is limited, like in an ordinary aliens galaxy scenario (not the modified one above) where it is stipulated that we can only be an observer in our local event horizon (so that we don't care if there exist other aliens in some different region of the multiverse who are replicating a similar setup), we can see that this principle, while true, isn't so applicable. That's because using SSA (quantifying over the actual population) means that our local knowledge regarding the number of galaxies will impact our credences, and the causal disconnectedness principle doesn't prevent this.

      So, I would refine your argument to only be about those experimental setups wherein the reference class is stated to include entities in causally disconnected space-time. Finally, I'm not so sure the argument proper works to discredit SSA in the intended fashion. Reasoning based on SSA yields different credences for singular cases. Now I need to introduce a caveat here, because what I really mean to say is that not reasoning based on SIA yields different results; we need to introduce all the possible entities to see that we have an M:S odds ratio (for the aliens galaxy case). But some limited amount of possible entities also needs to be introduced to argue that the correct credences for singular cases are 50/50. So DM's ensemble methodology does not, strictly speaking, rely on SSA.

      This caveat aside, I think we can save DM's reasoning for causally disconnected space-time (involving an expanded reference class beyond our causally-connected local area), by making a distinction between epistemic and frequentist probabilistic interpretations. As we saw above, it is literally true that our epistemic credences cannot be affected by what occurs in causally distant space-time regions. But that doesn't imply that the actual probabilities cannot be. If DM's methodology were sound, and if our reference class were expanded beyond our local environment, then it would matter whether such experiments were replicated multiple times or not regarding our actual probabilities of being born. Hence, the argument from causal disconnectedness cannot, I think, refute such reasoning, for it only works because of epistemic limitations.

      The hope was that this reasoning of yours based on causal disconnectedness could uphold EP, and thus refute the SSA-inspired (but not really SSA inspired) reasoning of DM's. But I think it only works provided we already assume such thinking is faulty in the first place. We still need some independent argument for why DM's probabilistic reasoning fails. I wonder if you agree or not?

      In any case, I think the simplest argument is that DM's methodology is more complex, and therefore requires additional justification (which was not provided). As I mentioned above, DM would still have us quantify over possible entities; that's because to say that we have a 50/50 chance of being born in an M or S galaxy in a singular case is to say that we could have been certain possible entities. So, we need some extra justification for why we can't quantify over the ratio of all the possible entities.

      Apologies if I have misconstrued you in any way.

      Best,

      Alex

      Delete
    2. Hi Alex,

      I'm not sure it's fair to say my methodology is more complex and needs special justification, since Dmitriy and I agree on how to analyse the fisherman's point of view, and both of us would use what you refer to as "my methodology" (but Dmitriy would see as an application of EP).

      Prima facie, before really thinking about it, it's not obvious why the fish's point of view should be much simpler than the fisherman's. It just so happens that, on your and Dmitriy's approach of treating each individual fish as a sampling, the complex approach simplifies to the one where you do a single sampling across the whole ensemble.

      Given that my approach for the fish is no more complex than Dmitriy's approach for the fisherman, I don't think this complexity needs any special justification. The only question is whether the two questions should be treated alike or differently.

      Delete
    3. Hey DM and Dmitriy,

      @DM: I plan on addressing your point against mine on the topic of complexity in a follow-up post.

      @Dmitriy:

      I wanted to make the clarification that the reason I think the argument for EP potentially fails due to epistemic limitations is that in such cases where we have an expanded reference class among causally independent space (CIS), meaning we think we could have been born in other universes, AND where DM’s reasoning holds, it follows that your argument entails:

      A) We can never have the proper information to change our credences in such scenarios (CIS).

      And not:

      B) The circumstances of CIS don’t actually impact the probabilities of our being here

      But, presumably, we wanted to know about whether the conduct of experiments in other causally independent places actually impacted the likelihood of our being born in some place. The only way you get to B from A is if you subscribe to certain views on probability (like a subjectivist one).

      So, one could simply claim that we shouldn’t reason about epistemic probabilities involving a reference class that includes entities living in independent/disparate space because of these epistemic limitations, and not because of some inherent deficiency in DM’s reasoning.

      Delete
  22. Hi Alex,

    > It seems that you simply intended a G plural ensemble to be used for a singular case. If so my question is, why?

    Because I'm arguing against the way Dmitriy is using EP.

    Dmitriy wants to say that the number of experiments doesn't matter, and that we should have the same credences for a singular case as for G cases. And I agree, as long as we duplicate the case correctly. So let's assume we think there is a singular case, but construct an equivalent ensemble of G cases that would give the same credences. Then I explain how I would do so.

    This doesn't mean that I think I should have the same credences if I think there is one actual experiment I could be in or G actual experiments I could be in. This is a different problem.

    > Why introduce the additional complexity of the agents/selves?

    I don't accept that it's additional complexity. I think it's a part of the problem that you and Dmitriy are mistakenly ignoring.

    From the fish's perspective, there may be *42* fish in the lake, but I am only *1* of them. That number *1* is a part of the problem which it seems to me you are blind to. The ratio of "me" to "lake" in the actual world is 1:1. You can't multiply everything else but keep that number 1 constant. To do so distorts the probabilities by changing this ratio. If you can't make sense of such ways of thinking about it as being born multiple times, or having multiple proxies/selves/viewpoints in the ensemble, or whatever, then you're better off just not constructing an ensemble at all.
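
    As a minimal sketch of what I mean, duplicate the experiments AND the selves together, tying self i to experiment i, and ask about a randomly chosen self (the coin-flip experiment types and counts here are placeholders):

        import random

        def duplicated_ensemble(G, trials=20_000):
            # G experiments AND G selves, with self i tied to experiment i.
            # Credence that "my" experiment came up tails:
            hits = 0
            for _ in range(trials):
                flips = [random.random() < 0.5 for _ in range(G)]  # True = tails
                me = random.randrange(G)      # which self I turn out to be
                hits += flips[me]
            return hits / trials

        print(duplicated_ensemble(1))      # ~0.5
        print(duplicated_ensemble(1000))   # ~0.5: the size of G makes no difference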

    > Note that there are still 42 M agents for every 1 S agent

    No, not necessarily. If there is only one actual experiment, then there is either 1 S agent or 42 M agents. The ratio of M agents to S agents is either 1:0 or 0:1. It is never 42:1. It is only 42:1 if there are many actual experiments.

    > is really just an alternative laying out of how you think we should do things, it's not really a criticism of Dmitriy's approach I would say. Do you disagree?

    On the assumption that there is only one right answer, then proposing an alternative is a criticism of a kind. Insofar as my proposal makes sense, then it undercuts the validity of Dmitriy's approach, and vice versa.

    At this point in the conversation, I feel the point of disagreement is not whether EP is right or wrong, or whether we should use a simple or a complex approach, or whether it makes sense to have multiple selves. The difference is only whether the fisherman's perspective is different from the fish's and why. I have articulated my reasons for thinking they are the same as best I can (i.e., because each fish knows it is only one fish and not many, so each perspective must sample once per experiment yielding the same analysis).

    ReplyDelete
    Replies
    1. Hey DM,

      I’m a bit confused by what you write here:

      “ No, not necessarily. If there is only one actual experiment, then there is either 1 S agent or 42 M agents. The ratio of M agents to S agents is either 1:0 or 0:1. It is never 42:1. It is only 42:1 if there are many actual experiments.”

      So in this scenario we have a singular case in actuality, but have constructed an ensemble with multiple (let’s say millions) of possible galaxies (G) and millions of selves/agents, where each possible self is tied to each possible observer. Is that not correct? In that case, my point still stands (about the possible ratio between agents/selves being 42:1).

      I get that you wish to quantify over actual and not possible entities, but that would be completely contrary to the way you suggest we construct the ensemble (for singular cases). If we should try to construct our ensemble so that it approximates quantification which is based (or more closely based) on SSA, then we shouldn’t extrapolate the scenario to include multiple G’s. That would be an invalid application of reasoning, for it, by default, assumes that we are quantifying over large numbers of possible entities.

      Delete
      I suppose you can get away with this by specifying that a self (by definition) is the thing we are supposed to quantify over, meaning that implicitly the SIA approach is refuted. But some justification is needed for this as before, so this just reduces to my earlier point about how this extra layer of complexity seems to be unnecessary, since we always needed such an argument to begin with.

      Delete
    3. Hi Alex,

      > where each possible self is tied to each possible observer

      That does not seem right to me. Each "possible self" is tied to one particular possible galaxy, I would say.

      > about the possible ratio between agents/selves being 42:1

      Not sure what you mean by "possible ratio". A ratio of 42:1 is impossible. No possible world has such a ratio. While the ratio between possible M observers and possible S observers may be 42:1 across all possible worlds, at least if you count each observer with the same weight (which I think is a mistake), the ratio of actual observers is either 0:1 or 1:0 in all possible worlds.

      > I get that you wish to quantify over actual and not possible entities, but that would be completely contrary to the way you suggest we construct the ensemble (for singular cases).

      I don't think so.

      Since we don't know the actual numbers, we have no choice but to consider possible worlds instead. But we should consider each possible world separately and aggregate the results according to the probability of each possible world. I'm claiming it's wrong to just lump all the observers into one undifferentiated ensemble for analysis, not that it's wrong to use possible worlds in analysis.

      I'm not sure I follow the rest of your points.

      Delete
    4. Just want to clarify one thing.

      I said:
      > Not sure what you mean by "possible ratio". A ratio of 42:1 is impossible. No possible world has such a ratio

      Here I mean, in the singular case no possible world has such a ratio. If we're constructing an ensemble to help us think about our credences in the singular case, no possible world in the ensemble should have such a ratio either. Obviously if we think that there are actually G galaxies, then almost all possible worlds will have a ratio of 42:1.

      Delete
    5. Hey DM,

      Yes that's what I meant by possible ratio (the ratio amongst all possible worlds), and yes each possible agent should be paired with one galaxy (pairing to an observer is needlessly restrictive). And based on this, "we have no choice but to consider possible worlds instead. But we should consider each possible world separately and aggregate the results according to the probability of each possible world. I'm claiming it's wrong to just lump all the observers into one undifferentiated ensemble for analysis, not that it's wrong to use possible worlds in analysis."

      It seems to me that such an analysis does not accord well with a multiple-G experimental ensemble. That's because such an ensemble would have us look at multiple G experiments at the same time; hence my point that your way of doing things should probably not use such an ensemble as representative.

      There's no point in introducing the additional complexity of an agent/self; by far the easiest way of doing things would be to construct your ensemble in such a way that it did away with (generic) possible observers altogether and instead only took into account possible worlds. In that case, we have an ensemble of possible worlds which yields the exact same credences as your simulation for both singular and plural cases.

      Delete
  23. Hey Alex,

    >Thus our knowledge of whether we are in a singular or plural case legitimately impacts the odds ratios. I think we can all agree on this (regardless if we subscribe to SIA or SSA);

    I don't see the relevance to EP, because in the example you gave the plural case is not G independent versions of the singular case, as is needed for EP. More generally, it's not an example of comparing X with X + causally disconnected parts (in which case, I am saying the credences about X should be the same).

    But in any case, your reasoning that the disconnectedness must extend through time is completely right, and that's what I already mean by being causally disconnected - that no causes/relevant information is exchanged or shared between experiments, that they are completely independent. In fact, that's why I used the word independent in the formulation of EP in the article.
    --------------------------------------------------------

    >For in cases where the reference class is limited, like in an ordinary aliens galaxy scenario (not the modified one above) where it is stipulated that we can only be an observer in our local event horizon (so that we don't care if there exist other aliens in some different region of the multiverse who are replicating a similar setup), we can see that this principle, while true, isn't so applicable.

    I think you have pinpointed another crucial aspect of the ensemble method, which is present in the examples that I work out, but needs to be made more explicit. The method involves two steps:

    1. Applying EP, meaning replacing the original problem with problem A (ensemble of G independent experiments).
    2. Applying self-locating uncertainty, SLU, to compute the credences by evaluating what fraction of the agents you could be have the desired properties.

    The part I quoted concerns the second part of the ensemble method rather than EP. I would say, if the second part is valid, then the case where one limits the reference class to some physical region is simply not legitimate. I think we don't actually have the freedom to select different reference classes. Selecting one is something that must be done in SSA, but this is actually a really bad problem for it: there are no non-ad-hoc prescriptions for which reference class is the right one and why, and different choices of reference class yield different answers, thus making SSA at best incomplete.

    By contrast, part 2 of the ensemble method states that if there's a world with a bunch of living beings and you are one of them, then you should consider yourself to be a random being among the ones who you could actually be. If you know you are not a lizard, or a five-legged Jovian, or a child, then you can't be selected from among them. This seems to me almost tautological, and if that's the right idea then it automatically creates the right reference class: the beings who, given the information you have, you could be. It's known as the trivial reference class in the literature.

    So if I am right about this then the scenario you propose is illegitimate, because it uses an invalid reference class; no actual epistemic situation corresponds to it.
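
    For concreteness, here is a minimal sketch of the two steps run on the Incubator itself (step 1 builds the ensemble of G experiments; step 2 applies SLU with the trivial reference class):

        import random

        def incubator_credences(G=100_000):
            # Step 1 (EP): ensemble of G independent incubator experiments.
            people = []                      # (coin, beard) for every created person
            for _ in range(G):
                if random.random() < 0.5:    # tails: one black-bearded man
                    people.append(('tails', 'black'))
                else:                        # heads: one black-, one white-bearded man
                    people.append(('heads', 'black'))
                    people.append(('heads', 'white'))
            # Step 2 (SLU): what fraction of the people you could be are in tails worlds?
            stage_a = sum(c == 'tails' for c, _ in people) / len(people)
            blacks = [c for c, b in people if b == 'black']  # after seeing your beard
            stage_b = sum(c == 'tails' for c in blacks) / len(blacks)
            return stage_a, stage_b

        print(incubator_credences())   # ~(1/3, 1/2), matching the article's answers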
    -----------------------------------------------------

    ReplyDelete
    Replies


    1. >As we saw above, it is literally true that our epistemic credences cannot be affected by what occurs in causally distant space-time regions. But that doesn't imply that the actual probabilities cannot be. If DM's methodology were sound, and if our reference class were expanded beyond our local environment, then it would matter whether such experiments were replicated multiple times or not regarding our actual probabilities of being born. Hence, the argument from causal disconnectedness cannot, I think, refute such reasoning, for it only works because of epistemic limitations.

      I don't quite get the logic here, so my responses might miss the thrust of what you are arguing:

      1. We want to know what the correct epistemic credences (CEC) should be. So if there is some sort of discrepancy between those and these other things, the "actual probabilities", then it should not concern us if all we want is CEC.

      2. I am not actually sure what those actual probabilities even are. After all, it's quite possible the world is deterministic.

      3. Whatever they are, I don't understand how causally disconnected things can affect them. But maybe that's a moot point since what we are interested in is CEC.

      4. It seems you are saying that my argument assumes DM's methodology is incorrect. But it doesn't, instead the logic is:
      certain assumptions -> justifications 1, 2 -> EP -> DM's methodology is incorrect. So it's not an assumption, it's a conclusion.

      Delete
  24. Hi DM,

    >But personally I would choose to disrespect TER, as I don't think the ID should be relevant to my credences. It's again taking my personal experience as special just because I'm me, whereas objectively there's nothing particularly special about my ID. Disrespecting TER and disregarding the ID makes B equivalent to A.

    So your view is then that problems A (G experiments, id unknown) and B (G experiments, id known) have the same answer, while problem C (one experiment) has a different answer. My view is they all have the same answer.

    In B, you know which experiment you are in, and you believe the correct approach involves you ignoring this knowledge. Your objection to justification 1 for EP was that you thought credences can be affected by stipulations about causally disconnected experiments because you don't know which experiment you are causally connected to. But this reason is inapplicable to B by construction, and yet you still think the answer is different from C.

    In other words, B is different from C only by stipulations about causally disconnected experiments AND you know which experiment you are in, so you need a new objection to explain why, your previous one doesn't apply.

    It seems your new objection would have to involve not just what you know but what you know AND actually use. So how would that work to argue that B and C have different answers? It seems you would then be saying something like: You know which experiments you are causally disconnected from, but your credences are still affected because for certain reasons you need to not use this knowledge. This seems very bizarre to me.

    ReplyDelete
  25. Hey Dmitriy,

    Part 1) About your recent comments:

    It seems to me that your argument for EP isn't applicable in cases involving G replicated experiments that are causally connected, provided we specify that our reference class is limited to such causally connected experimental replications. That's because if our reference class is expanded to causally disconnected areas (e.g. we could have been born in such areas), then it's enough to show that EP works for such areas to discredit DM's methodology.

    Basically that means that it must be argued that there are no such cases involving a limited reference class amongst causally connected G-replicated experiments. I hope we are in agreement, and this does indeed seem to be the argument you are making when you write:

    "I would say, if the second part is valid, then the case where one limits the reference class to some physical region is simply not legitimate. I think we don't actually have the freedom to select different reference classes"

    However, I'm not sure that what you say establishes your points. I agree that our reference class isn't something we can choose, rather it is given in the stipulations of the question itself. In the aliens case, we are asking whether the aliens created multiple planets or just earth. In that case, I don't see how the existence of different aliens in far-away universes/galaxies could impact our reference class. It seems clear that, by the conditions imposed by the question itself, we are only supposed to consider possible entities (in one galaxy) all of whom would be causally connected. Hence, this is just a question of whether we could have been born in different planets in the same galaxy, all of which are causally connected, and where the ultimate answer is dependent on what choices the aliens made.

    It's no different than the IVF case wherein we only consider a single doctor; there's simply no way that our reference class can include other doctors/babies living in causally disconnected space in other universes because the question at hand concerns the actions of the single doctor and what he/she was likely to do (e.g. how many babies did he/she attempt to create). It's only in the multiverse case that our reference class is expanded to include causally disconnected space. The point is that our reference class is not so expansive by fiat, but rather by the operational constraints of our experimental setup. I say all this just to make sure we are on the same page, because of course in the multiverse scenario the default assumption is in your favour; we should definitely assume that our reference class includes possible entities in causally disconnected space.

    Replies
    1. Part 2)

      This leads me onto my second point, regarding whether your argument for EP can work. I agree with you that our credences cannot be influenced by multiple G experiments being replicated in causally independent space (CIS), but again I think that's due to epistemic limitations. These are limitations you imposed by definition, since you defined independence as immunity from information transfer. Importantly, I argue that it is this limitation which is responsible for the credences being unaffected, and not necessarily some flaw in DM's methodology.

      Nor does this actually have to mean that there are two kinds of probability in actuality; I just used that example for convenience. However, one could still argue that one's credences should in fact be affected even if all credences were epistemic. So let's suppose that all credences are epistemic, and that we don't care about 'actual probabilities' but just our CEC. In that case we know:

      1) Our current credences cannot be affected by G-experimental replications in CIS
      2) If DM's methodology were sound, then our credences would be affected by experimental replications in causally connected space.
      3) If we knew about/had information about the g-replications in premise 1, then per premise 2, our current credences would be affected.
      4) We know that if we had more information about these places (e.g. if we were God), then our credences would be affected.
      5) The correct credences are based on those that use/take into account the most information.
      6) The correct credences should be affected by G-independent replications in CIS.

      That's the argument I am making, and that is why I said you would have to assume that DM is wrong (i.e. deny premise 2). An alternative would be to conflate our credences in 1 with the CEC, but that would just be begging the question I think. Finally, one could argue that premise 5 is unsound. But I think it's pretty straightforward. If for example I flipped a coin and asked you to predict heads, you would probably say it's a 50/50 chance. Now suppose that I told you (and suppose I'm always reliable) that your future self, who had way more knowledge than you do, actually thought that it was 30/70. In that case, it seems rational to say that your future self knows something you don't (i.e. the coin is weighted), and you should go with the 30/70 probability as being representative of the CEC.
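
      To illustrate premise 5 concretely, here is a quick simulation sketch (the 30/70 weighting and the use of a Brier score are just illustrative choices, not part of the argument above): a credence that uses the extra information scores better on average.

      ```python
      import random

      # Hypothetical setup: the coin is secretly weighted 30/70, which is
      # what the better-informed future self knows. Compare the average
      # Brier score (squared error; lower is better) of the 50/50 and
      # 30/70 credences.
      random.seed(0)
      TRIALS = 100_000
      P_HEADS = 0.3

      naive = informed = 0.0
      for _ in range(TRIALS):
          outcome = 1.0 if random.random() < P_HEADS else 0.0
          naive += (0.5 - outcome) ** 2
          informed += (P_HEADS - outcome) ** 2

      print("mean Brier, 50/50 credence:", naive / TRIALS)     # ~0.25
      print("mean Brier, 30/70 credence:", informed / TRIALS)  # ~0.21
      ```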

      Now suppose it was proven that you can never know the credences that your future self chose; call this the temporal limitation principle (TLP). The TLP doesn't, however, constrain you from getting the information that your future self chose different credences from your present choice. In that case, TLP prevents you from arriving at your future self's credences, but doesn't prevent you from realizing that your present credences are probably incorrect.

      In this example, your future self's credences are analogous to believing in DM's methodology, and TLP is analogous to the epistemic limitations imposed by your argument for EP. If DM's methodology is correct, then we know that, barring epistemic limitations, we would (if we could get the information) think that our credences should be affected by G replicated experiments. However, these epistemic limitations, similar to TLP, only prevent us from arriving at these possible credences, they do not prevent us from realizing that our credences would be different in such a case. This is enough to tell us that our present credences are not CEC (so long as we thought the possible credences are better).

  26. Hey Alex,

    I think the most important part I must address first is

    >It's no different than the IVF case wherein we only consider a single doctor; there's simply no way that our reference class can include other doctors/babies living in causally disconnected space in other universes because the question at hand concerns the actions of the single doctor

    I disagree, and maybe if we come to an agreement on this point the other issues will become clearer. I think our reference class must include all babies who we could be, and these babies do indeed live in causally disconnected places.

    Imagine an even simpler scenario: G rooms, causally disconnected, in each of which a bearded man is magicked into existence. 70% have white beards, 30% have black beards, but all rooms are dark so they can't see their beards. It seems that each man should reason as if he is randomly selected from among all entities who, as far as he knows, he could be. In this case that includes all G men, even though they live in causally disconnected rooms.
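
    To make the counting concrete, here is a minimal simulation sketch (the 70/30 split is from the example above; the value of G and the rest are illustrative choices). A man reasoning as a random member of the class of all G men assigns about 0.3 to having a black beard, whatever G is:

    ```python
    import random

    random.seed(1)
    G = 10_000  # number of causally disconnected rooms (illustrative)

    # One world: each room's beard color drawn independently, 30% black.
    rooms = ["black" if random.random() < 0.3 else "white" for _ in range(G)]

    # Each man reasons as if he is a random member of the reference class
    # of all G men; sample that reasoning many times.
    SAMPLES = 100_000
    black = sum(random.choice(rooms) == "black" for _ in range(SAMPLES))
    print("credence in 'my beard is black' ~", black / SAMPLES)  # ~0.3 for any G
    ```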

    Do you agree with this account? If so, it seems the same logic would apply in the IVF case: you of course were created by some specific doctor (just like each bearded man was created in some specific room), but overall there are a bunch of causally disconnected copies of Earth, some of which have clones of you. You can't tell which of them you are, so you must reason as if you are a random clone out of all the ones you could be, even though all these clones were made in causally disconnected places.

    Replies
    1. Hey,

      "Do you agree with this account? If so, it seems the same logic would apply in the IVF case: you of course were created by some specific doctor (just like each bearded man was created in some specific room), but overall there are a bunch of causally disconnected copies of Earth, some of which have clones of you. You can't tell which of them you are, so you must reason as if you are a random clone out of all the ones you could be, even though all these clones were made in causally disconnected places."

      I agree with this account in your specific example, but again it seems to me that it is the constraints of the question itself which stipulate our reference class. You specified that we don't know which doctor was responsible for our being born, so of course it follows that we must retain a capacious reference class and assume that we could have been born to any doctor. But suppose things were different; suppose that we knew that Doctor Minime had created some embryo(s), that you were one of these embryos, and that he did this process by selecting a lottery ticket. Here we are supposed to infer the number of embryos he made.

      In such a case wouldn't you agree that our knowledge, as given by the problem setup, constrains our reference class to be among those embryos potentially created by Doctor Minime? If we knew for sure Doctor Minime could only have created embryos from time period A to B, and between locations C and D, then we know for sure that any potential embryos have to exist in the same causally connected space.

      Again, this is not a refutation of the argument for EP. It still follows, by your definition, that G-replicated experiments have to take place in causally disconnected space; but I argue this is irrelevant if our reference class is limited. There is no way that such experimental replications in CIS can affect our credences, because of our restricted reference class, and not necessarily because of some problem with DM's methodology (as you are trying to show).

      In any case, I agree that our reference class for the multiverse case is actually expanded to include CIS; hence my second argument about the soundness of the reasoning for EP.

    2. I think we might have been talking past each other regarding the point in my first post. I should have emphasized more heavily that I didn't see it as an argument against your argument for EP (especially since our default reference class for the multiverse case is expansive by nature). I brought all this up just to make sure we were on the same page regarding how the stipulations of the question set up our reference class. Let me know if you disagree with any of the above; I think it's important because it defines the boundaries of your argument for EP. For example, it wouldn't be applicable in cases like the above involving a limited reference class.

  27. Hi Alex,

    I mean for EP to be applicable not just for cases, such as the multiverse, that already come with CIS, but also for cases such as the IVF case. So it's important to address your example:

    >In such a case wouldn't you agree that our knowledge, as given by the problem setup, constrains our reference class to be among those embryos potentially created by Doctor Minime?

    I am assuming you are still talking about a problem where it's stipulated that there are G disconnected copies of the experiment. So in a huge cosmos we have G copies of Earth, each of which has a copy of Doctor Minime. So there's no way for you to know which of the G copies of Earth you are on, or which of Doctor Minime's incarnations made you, right? So your reference class, if it is to correspond to your epistemic situation (otherwise I don't see its utility), must not be localized to only one Earth.

    The fact that you can point to the exact clone of Doctor Minime who made you doesn't change anything. Imagine an even simpler scenario, G identical copies of you in G disconnected rooms (maybe with different beard colors as before). You can of course point to the room you are in and say "only this specific room is mine". But that doesn't change anything, you still don't know which of the G rooms it is and which of the G copies you are. So you still need to reason as if you are a random one, hence your reference class includes all G.

    In this simple example, is there any way to avoid this conclusion, do you think?


    Replies
    1. Hey Dmitriy,

      I’m not sure how relevant this discussion is to the topic at hand, given that my main argument (my second one) against the causal disconnectedness (CD) case for EP concedes that the multiverse argument has a reference class expanded to CIS. The second argument, I feel, is independent of these concerns. If our reference class must always be expanded to include CIS, that simply means that the argument from CD for EP is unbounded. But presuming there are no limitations to the argument, and that it is applicable to all cases, doesn’t in any way impact its soundness (which is what my second argument was about).

      In any case, you wrote:
      “I am assuming you are still talking about a problem where it's stipulated that there are G disconnected copies of the experiment”

      Yes of course, but the point is that such a stipulation is just an artefact of our knowledge. I don’t view this talk about limiting our reference class as impacting in any way the correct probabilities. Suppose that we just wanted to know our chances of being born a singular embryo, given that we were born to doctor x on this earth in particular. Then it will follow that our reference class must be limited.

      Notice that arguments over quantification are inapplicable here. It is perfectly reasonable to argue against DM’s methodology in such a case (limited by reference class) by saying that the CECs should be determined using SIA, which will be analogous to the credences we get if we had expanded our reference class to all the actual G independent experiments.

      But that is different from saying that the reference class is expanded in actuality. Importantly, when I speak of these experiments I am talking about the actual world; so I am imagining (as you previously put it) that, if this were actual, should our credences/reference class be changed by the knowledge of replication in CIS? Only if you thought that your potential to have been born on other earths in some way impacted the probability of the question at hand; but it doesn’t if you’re just asking about your probability of having been born as embryo x for doctor y on earth z. What does impact the probabilities will be the ways we choose to quantify over possible entities.

      So I see it as two separate concerns, one relates to the proper way of quantifying these probabilities for singular cases (regarding the relationship between our reference class and possible entities) and the other to whether our potential ability to have been born in any actual experiment impacts the correct probabilities; which is going to depend on the type of question you want answered.

      Thus, whether a scenario counts as a singular case is determined both by physical reality and the conditions of the problem/question. I think it’s perfectly correct to say that, by definition, a scenario would involve plural cases only if such experimental replications occurred in non-CIS space.

      I do feel that much of this is perhaps a trivial distinction (let me know if you feel differently), and not just because of the points I made above relating to the independence of my second argument, but also because our viewpoints are functionally analogous. Ultimately, this whole side discussion is just about scope, meaning could there be side cases where your argument isn’t applicable? You say no, I say yes, but this is all tangential because the multiverse argument is not one of these side cases.

    2. Sorry you weren't able to follow a lot of what I said; I agree it came off quite obtuse.
      Basically, regarding what you say here: "So your reference class, if it is to correspond to your epistemic situation (otherwise I don't see its utility), must not be localized to only one Earth".

      I just don't see that our epistemic situation concerns multiple earths in actuality. That's because the knowledge of other earths is irrelevant; I still don't get why you think it relevant (i.e. why the knowledge tells us that we need to expand our reference class to CIS). After all, the whole point is that our self-locating uncertainty is limited. We really don't care that we could have been born on other earths, any more than we should care that we could have been born on other earths if we were trying to find out whether we were born in country C or A.

      In that case, the extent of my self-locating uncertainty would be limited to the countries on this earth. We both agree that we should quantify using SIA, so why do we also need to expand our reference class to be about actual worlds in CIS?

  28. Hey Alex,

    I wasn't able to follow a lot of what you said. But I understand that you want to switch focus to your second argument against justification 1 for EP (justification 1 is what you refer to as the CD-based case for EP). That argument is:

    1) Our current credences cannot be affected by G-experimental replications in CIS
    2) If DM's methodology were sound, then our credences would be affected by experimental replications in causally connected space.
    3) If we knew about/had information about the g-replications in premise 1, then per premise 2, our current credences would be affected.
    4) We know that if we had more information about these places (e.g. if we were God), then our credences would be affected.
    5) The correct credences are based on those that use/take into account the most information.
    6) The correct credences should be affected by G-independent replications in CIS.


    First, is that the argument you want to focus on? I think 6 should read "If DM's methodology is correct then the correct credences should be affected by stipulations about G causally disconnected replications". Otherwise the logic doesn't go through. Before I respond further, do you agree with this revision?

    Replies
      1. Also I should justify the leap from step 6 to step 7, because of course by the definition of CIS, one can't have knowledge of it. But that's not what step 5 means; it's not about the knowledge of replicated experiments in CIS. Rather, it's about the knowledge that, if we knew there were replicated experiments in CIS (i.e. we were God), then according to DM's methodology we should change our credences. We know this, which means we know that the correct credences (the credences we would have if we knew about the experiments in CIS) would be affected by replication in CIS.

      In other words, we have present credences that are unaffected by actual replication in CIS, but we also know because of the above argument that our credences are necessarily epistemically limited, therefore we should reason as if our credences are not the CEC. Just like in the coin case where we know our future self chose different credences, but we just don't know what those are (we don't know he picked 30/70). If someone asked us, "do you have the correct credences?" we should respond with a no, but at the same time we should still believe in our present probability estimates as being the best we got (or perhaps abstain from belief altogether); there's nothing paradoxical about that.

    2. I meant to write:
      “I should justify the leap from step 5 to step 6”
      And not
      “Step 6 to step 7”

  29. Hey Alex,

    >If someone asked us, "do you have the correct credences?" we should respond with a no, but at the same time we should still believe in our present probability estimates as being the best we got (or perhaps abstain from belief altogether); there's nothing paradoxical about that.

    I define the correct epistemic credences as the best present probability estimates; in other words, by my definition CEC = "the best we got". I am not sure what other credences we could be talking about in any of the problems we have been discussing.

    We can't hope to include in our credences information that we don't have, even if someone else has more information. If I know the result of my coin flip but don't tell you, you should assign the odds of 50-50, those are your correct credences for this problem until you get more information. Do you agree?

    --------------------------------------------------------------

    If you accept the revised version of your conclusion:

    "If DM's methodology is correct then the correct credences should be affected by stipulations about G causally disconnected replications,"

    then we don't even need to analyze the rest of the argument, because I agree with that conclusion - I think we have known that statement to be true for quite a while. However, I don't see this as disrupting in any way my argument for EP. My argument doesn't presuppose that DM's methodology is incorrect, that is a conclusion of the argument, not an assumption. Instead, the logic is this:

    Certain assumptions (not about DM's methodology) -> justifications 1, 2 -> EP -> DM's methodology is incorrect

    So I hope you agree that it can't be a serious objection to say: if DM's methodology is correct then EP is false. That statement is just logically identical to the last step in the logical chain above (EP -> DM is wrong), so it doesn't add new considerations.

    Replies
    1. Hey Dmitriy,

      "I define the correct epistemic credences as the best present probability estimates, in other words by my definition CEC = "the best we got"."

      Alright, let's define it this way. Firstly, it's important to keep in mind that, as mentioned, if one subscribes to the school of thought that there exist "objective" probabilities, then your objections don't go through. "The best we got" showing us one result is not good enough, especially if we have good grounds for thinking that the best we have conflicts with these objective probabilities.

      Again, we know that statement A: [If we were God and could see the experiments in CIS, then it must be true that the objective probability estimates are affected compared to the singular case (using DM's methodology)]
      is correct. Hence, we know that the statement, "the objective probabilities will be affected by experiments in CIS" is true, and we know this in our current position of epistemic limitation. Thus, it should be pointed out that on at least some philosophical positions (i.e. the one that endorses objective probabilities) your objections don't go through.

      Now, assuming that objective probability is false/meaningless (with which I agree, by the way): "We can't hope to include in our credences information that we don't have, even if someone else has more information. If I know the result of my coin flip but don't tell you, you should assign the odds of 50-50, those are your correct credences for this problem until you get more information. Do you agree?"

      Yes. But it's important to keep in mind that although there is no longer a difference between CEC and objective probabilities, there still exists a distinction between CEC and our desired probability estimates. This is obviously true, otherwise our credences would never change. Hence, we know that there is a level of credence which we desire to reach (the one informed by the most knowledge), we just don't know what those credences are. So, let's revise my argument:

      1) The CEC cannot be affected by G-experimental replications in CIS
      2) If DM's methodology were sound, then the CEC would be affected by experimental replications in causally connected space.
      3) If we knew about/had information about the g-replications in premise 1, then per premise 2, the CEC would be affected.
      4) We know 3, i.e. that if we had more information about these places (e.g. if we were God), then CEC would be affected.
      5) The desired credences (DC) are based on those that use/take into account the most information.
      6) If DM's methodology is sound, then DC is affected by g-experimental replications in CIS
      6) If we knew our DC would uphold x, but our CEC will not uphold x, then we should reason as if x is upheld.
      7) We know our DC does not contradict DM's methodology
      6) We should reason as if DM's methodology is not contradicted.

      "So I hope you agree that it can't be a serious objection to say: if DM's methodology is correct then EP is false. That statement is just logically identical to the last step in the logical chain above (EP -> DM is wrong), so it doesn't add new considerations."

      Yes, of course, but that's not the argument I'm making; I never claimed EP is false. Rather I showed, per above, that in order to demonstrate that EP is true you must refute the above argument. But the above argument can only be refuted if we assume that DM's methodology is incorrect (unless of course it's unsound in some other way).

    2. Or rather I showed that if the above argument is sound, then EP is indeed false. I agree this is not a serious objection to the belief that EP is true (for I don't actually believe the argument is sound), but it is a serious objection to your justification for EP, and the notion that DM is wrong because EP is true. In other words, in the absence of any argument, we have no good reason for thinking either that EP or DM's methodology is true or false. Your argument only works by presuming the above is unsound, on account of it presuming DM's methodology is false.

      It's important to realize that there is an enormous difference between saying 'if DM true then EP false', versus 'if DM true then your justification for EP is wrong'. I'm saying the latter, and that taints your justification. What we need is some independent justification that doesn't depend on assuming that DM is false. So the claim that my argument is circular is, I think, false. It assumes I'm saying something along the lines of the former, when in fact the former is derived from the latter statement.

  30. Hi guys,

    Sorry haven't been able to keep on top of this the past while.

    Alex:

    > That's because such an ensemble would have us look at multiple g-experiments at the same time

    Not at the same time, I'd say. You look at each experiment individually and then aggregate the results, just as Dmitriy (and you?) would do for the fisherman. We possibly all agree that this approach makes sense for the fisherman, so there's nothing wrong with an approach like this per se. The disagreement is only over whether this is the correct approach for the fish.

    > There's no point in introducing the additional complexity of an agent/self, by far the easiest way of doing things would be to construct your ensemble in such a way that it did away with (generic) possible observers altogether, but rather only took into account possible worlds

    I don't follow your argument here. Again, the additional complexity you're talking about isn't there for the sake of it. We all agree (I think) that a structure like this is required for the fisherman.

    > In that case, we have an ensemble of possible worlds which yields the exact same credences as your simulation for both singular and plural cases.

    This doesn't seem right to me, in that the ensemble analysis Dmitriy described gets a different answer from the simulation in singular cases.

  31. Hi Dmitriy,

    > So your view is then that problems A (G experiments, id unknown) and B (G experiments, id known) have the same answer, while problem C (one experiment) has a different answer. My view is they all have the same answer.

    I think the confusion arises because the problem is underspecified.

    If you not only know the ID of your experiment, but also the exact conditions of each experiment as identified by ID number, then your problem is straightforward and the other experiments are not relevant to you. You only care about your experiment. But this is so straightforward that I'm assuming you don't have this information.

    If all you know is your ID number, but you don't know what ID numbers are associated with which conditions, then your ID number is irrelevant. You can create a mental ensemble of all the experiments with all the conditions there would be in the real world, but your ID number does not help you solve your problem unless you know how to map ID numbers to experimental conditions. So you know which real-world experiment is yours in some sense (you know the experiment as identified by ID number), but you still don't know which experiment in the ensemble is yours, because the experiments in the mental ensemble are identified not by ID numbers but by various experimental conditions.

    Replies
    1. Hi DM,

      I have always been assuming the version where knowing the id doesn't tell you some extra details about how your experiment went.

      Can you clarify a couple of things:

      What do you mean by experimental conditions? Remember, all the experiments in the ensemble have the same setup.

      Are you saying knowing the id doesn't qualify as knowing which experiment you are causally connected to? If so, can you give some example that would illustrate how you define knowing which experiment you are in?

  32. Hey Alex,

    I am confused by the new version of your argument, and not because it has three premise 6's:)

    6) If we knew our DC would uphold x, but our CEC will not uphold x, then we should reason as if x is upheld.
    7) We know our DC does not contradict DM's methodology
    6) We should reason as if DM's methodology is not contradicted.


    What does upholding x mean? Why should we reason against what CEC say, if CEC are based on all the knowledge we have? Is the conclusion what you wrote or is it "If DM's methodology is correct then we should reason as if DM's methodology is not contradicted"? If it's the former then I think something went seriously wrong in the preceding two steps, but I can't say what until I know your definition of upholding.

    >It's important to realize that there is an enormous difference between saying 'if DM true then EP false', versus 'if DM true then your justification for EP is wrong'. I'm saying the latter, and that taints your justification. What we need is some independent justification that doesn't depend on assuming that DM is false. So the claim that my argument is circular is, I think, false. It assumes I'm saying something along the lines of the former, when in fact the former is derived from the latter statement.

    This confuses me too. If what you're saying is just "if DM true then your justification for EP is wrong" then I grant that immediately, no argument needed. This statement follows from the statement "if DM is true then EP is false", which, as I mentioned, is logically equivalent to the last step in the logical chain of my argument. So I immediately grant both statements, but my objection still stands: they don't add anything, they just repeat the last step in my argument.

    It is like if I had an argument A -> B -> C, and you said: but if notC then notB, would that be an objection to the argument?

    Having said this, I think the new argument you give says something stronger than just "if DM then not EP"; it actually says "we should reason as if DM is true and EP false". Is that correct, or does it only say the weaker claim? If it says the stronger one, then I would need to refute the argument.

    Replies
    1. Hi Dmitriy,

      > What do you mean by experimental conditions?

      e.g. which way the coin flipped.

      If you have a mental ensemble of experiments, in 50% the coin was heads and in 50% the coin was tails. Each experiment in your ensemble might have different populations of observers. You can see each kind of experiment in your mental ensemble, but knowing the ID doesn't help you know which one of these experiments might represent your own, because you don't know how the coin flipped in your own experiment.

      So your own experiment could correspond to any experiment in your ensemble. Even knowing the ID number, you don't know which experiments in your ensemble are causally disconnected from you. You only know that one of the experiments in your ensemble corresponds to yours and the others correspond to the experiments which are causally disconnected from you.
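
      A quick sketch of this point (all numbers are illustrative): if ID numbers are assigned independently of the coin flips, then conditioning on your ID leaves your credence about your coin exactly where it was.

      ```python
      import random

      # Hypothetical ensemble for *your* experiment: an ID (say 0-99) and
      # a fair coin flip, assigned independently. Conditioning on the ID
      # does not move the credence about the coin.
      random.seed(5)
      N = 100_000

      ensemble = [(random.randrange(100),
                   "tails" if random.random() < 0.5 else "heads")
                  for _ in range(N)]

      p_tails = sum(c == "tails" for _, c in ensemble) / N
      mine = [c for i, c in ensemble if i == 17]
      p_tails_given_id = sum(c == "tails" for c in mine) / len(mine)

      print("P(tails)         ~", p_tails)           # ~0.5
      print("P(tails | ID=17) ~", p_tails_given_id)  # ~0.5
      ```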

  33. Hi DM,

    so you are saying an agent doesn't know which experiment she is causally connected to unless she knows which way the coin flipped, i.e. (in the general case) which of the competing hypotheses is true?

    That seems like a very radical statement; it's like saying: I don't know which room I'm in unless I know which flavor of ice cream is in the fridge.

    Replies
    1. I don't think it's radical at all, but I don't think I'm communicating my point successfully.

      You always know you are in *this* room. But you don't know which room *this* room might correspond to in your mental ensemble, where the mental ensemble is the ensemble you construct to represent your knowledge of your circumstances.

      Suppose 50% of the rooms in your mental ensemble have vanilla ice cream in the fridge, and 50% of the rooms in your mental ensemble have chocolate ice cream in the fridge.

      If there is vanilla ice cream in the fridge, then none of the rooms with chocolate ice creams in the fridge can have any effect on your observations as they are all causally disconnected. And vice versa.

      But if you don't know what flavour ice cream is in the fridge, then you don't know which rooms in your mental ensemble you can assume to be causally disconnected from you. If you do know that there is vanilla in the fridge, then you can immediately disregard all those with chocolate ice cream and remove them from your mental ensemble.

      Unless all observers in the remaining mental ensemble are in the exact same conditions (regarding coin flips, beard colours, ice cream flavours or whatever else is deemed relevant to the question at hand), it's not generally the case that you can assume that the existence of many experiments ought to have no bearing on your credences simply because they're all causally disconnected.
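
      In bookkeeping terms, here is a minimal sketch of that updating move (the correlation between flavour and the hypothesis H is invented purely for illustration): conditioning on the observed flavour is just filtering the mental ensemble and renormalizing.

      ```python
      import random

      # Hypothetical mental ensemble: flavour is 50/50, and the hypothesis
      # of interest H is (by construction here) more common in vanilla rooms.
      random.seed(2)

      ensemble = []
      for _ in range(100_000):
          vanilla = random.random() < 0.5
          h = random.random() < (0.8 if vanilla else 0.2)
          ensemble.append((vanilla, h))

      prior = sum(h for _, h in ensemble) / len(ensemble)

      # Learn "there is vanilla in the fridge": drop the chocolate rooms.
      kept = [h for v, h in ensemble if v]
      posterior = sum(kept) / len(kept)

      print("P(H) before learning the flavour:", prior)      # ~0.5
      print("P(H) after learning vanilla:     ", posterior)  # ~0.8
      ```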

  34. Hey DM,

    I also don't think I am communicating my point successfully:)

    Maybe if we go step by step we can pinpoint the exact source of the disagreement. So, let me ask first:

    Suppose you're staying in a hotel in room number 17, out of 100 rooms total. Suppose you know that half the rooms are stocked with chocolate ice cream in the fridge and half with vanilla ice cream, but you don't yet know which is in your fridge.

    Would you agree with me that in this situation you know which room you are staying in? If you do, that doesn't yet refute the points in your last post; I would have to do more work.

    Replies
    1. Hi Dmitriy,

      I think the problem is that you're assuming that there is an intrinsic independent-of-other-considerations matter of fact regarding whether you know which room you're in, when I would say that whether you know which room you're in is context-dependent. "Which" here depends on what the context demands we know about the room in order to know which room we are in. It is sensitive to the question we are trying to answer.

      In most ordinary situations, there is a perfectly clear answer. But in these rarefied thought experiments we're likely to get ourselves confused.

      All I can say about your hotel room is that you know you are in a room with a number 17. You know which room you are in if we take it that knowing which room you are in means knowing your room number.

      But we could equally take other attributes to determine which room you are in. If you saw a photograph of the hotel's exterior, and you were asked to pick out which room (balcony, say) was yours, you might not know. In this context, you don't know which room you are in, because the context demands not that you know the room's number but that you know something more like the room's physical location.

      If we are primarily concerned with the flavour of ice cream in the fridge, and we want to know are we in a chocolate room or a vanilla room, then your room number is entirely irrelevant. Knowing your room's number doesn't tell you which room you are in in this context.

  35. Ok, so then your definition of knowing which room you are in is knowing the properties of the room that you are interested in. I am fine using this definition as long as that's the one you use when you argue that justification 1 for EP doesn't work because you don't know which experiment you are causally disconnected from. So is this the definition used in your objection?

  36. Hi DM,

    I will try to analyze your objection under the assumption that when you refer to knowing "which room you are in" or "which experiment you are causally connected to" you don't mean something like knowing the room number or the id number of the experiment. You indicated that what you mean is basically knowing which way the coin flipped in your experiment (if the result of the coin flip is what you are interested in, as in the Incubator).

    In light of this, let's look at your objection to justification 1. For convenience, let me first copy and paste that justification here:

    1. Rational credences about properties of a given situation can only be affected by information about things that can causally affect those properties.
    2. The stipulation about many independent repetitions of the situation is, by construction, a stipulation about parts of reality causally disconnected from one another.
    3. Therefore, adding such a stipulation should not affect an agent's credences about properties of his situation.

    The last point follows from the first two and is equivalent to EP proper.


    Your objection is that "the causal disconnectedness argument is a red herring as we don't know which experiments we are causally connected to." Substituting the definition you used for "knowing which experiment", this becomes:

    We don't know which way the coin fell in the experiment we are causally connected to.

    But now the problem is that this statement, while certainly true, doesn't contradict either premise 1 or 2. But that is not the end because I have a problem too: as stated, 1 and 2 don't actually immediately logically imply 3. So let's put my argument in a logically valid form and see if your objection applies to either premise:

    1'. A rational agent's credences about properties of a given situation cannot be affected by stipulations (known to him) about things that cannot causally affect those properties.
    2'. The stipulation about the presence or absence of many independent repetitions of a rational agent's situation is a stipulation about things that cannot causally affect the properties of his situation.
    3'. Therefore, a rational agent's credences about properties of his situation cannot be affected by such a stipulation.


    In this form, the argument is much more transparently logically valid; the question is: does your objection undermine either of the two premises? I don't see how it does. I am pretty sure you don't have a problem with the first premise (I think you indicated as much a while ago), so I think you mean to argue against the second premise.

    So let's consider more carefully exactly what 2' says and see what your objection has to say about it. Consider, for example, an agent from the Incubator and consider two stipulations:

    S1. Absence: the world is small; it consists of only this incubator.
    S2. Presence: the world is big; in addition to this incubator it contains other places, completely causally disconnected from each other. For example, each place is a separate parallel universe. Some of these places contain other instances of the incubator experiment. Agents in those incubators are in the same epistemic situation as the agent(s) in this incubator.

    Let a rigidly designate this incubator, and P(a) be the property you, one of the agents in a, are interested in, namely which way a's coin fell. S2 says that besides a there are other things, but these things are causally disconnected from a. S1 says there are no such things. Both are stipulations about things that cannot causally affect a, and in particular P(a). That's all that 2' asserts, which seems to me pretty straightforward. Your objection, that you don't know "which" incubator you are in, in the sense of not knowing which way the coin fell, doesn't seem to have anything to do with what 2' says; at least I don't see what part it could possibly undermine.
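
    To make 2' concrete, here is a minimal simulation sketch (the value of G, the trial counts, and the pooling over agents are illustrative choices). Pool every created agent and ask what fraction live in an incubator whose coin fell tails: the answer is about 1/3 under S1 and under S2 alike, so the stipulation about disconnected copies does no work.

    ```python
    import random

    # Each incubator flips its own fair coin: tails creates one man,
    # heads creates two. Compare a world with one incubator (S1) against
    # a world with many causally disconnected incubators (S2).
    random.seed(3)

    def tails_fraction(G, trials):
        tails_agents = all_agents = 0
        for _ in range(trials):
            for _ in range(G):
                if random.random() < 0.5:   # tails
                    tails_agents += 1
                    all_agents += 1
                else:                       # heads
                    all_agents += 2
        return tails_agents / all_agents

    print("S1, G = 1:   ", tails_fraction(1, 200_000))  # ~0.333
    print("S2, G = 1000:", tails_fraction(1000, 200))   # ~0.333
    ```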

    Replies
    1. Hi Dmitriy,

      Sorry for my absence for a while.

      > I am pretty sure you don't have a problem with the first premise (I think you indicated as much a while ago)

      I'm not as sure. I don't remember what I said before, but the premise doesn't seem completely obvious to me, because I'm not sure that information about causal factors is the only thing that might affect credence. I might realise some situation is logically or mathematically impossible (or unlikely) for reasons that have little to do with causality. For example, I don't think that your telephone number is the largest prime number, but my reasons for this credence seem to have nothing to do with causal factors (though perhaps you could tell a causal story if you were determined enough).

      For another example: suppose I have a model of my situation where 99.999% of observers in my reference class who are causally disconnected from me should expect to observe A, but I observe B. If I believe that these observers actually exist, this might give me reason to doubt that my model is correct. If I believe that these observers don't exist, then perhaps they shouldn't affect my credences (especially if the observation of B is tied to existence somehow). That is to say, their existence does affect my credences, even though they are entirely causally disconnected from me. But perhaps this is just restating the fine-tuning argument and so begging the question. In any case, the fact remains that I'm not convinced of premise 1.


  37. Hey DM,

    welcome back! I got kind of distracted too, working on VocabMeThis. It's my first web app - it was pretty cool to learn how to turn a program into a website.

    You are giving some counterexamples to the first premise, which is:

    1'. A rational agent's credences about properties of a given situation cannot be affected by stipulations (known to him) about things that cannot causally affect those properties.

    > I don't think that your telephone number is the largest prime number, but my reasons for this credence seem to have nothing to do with causal factors.

    I think this shows I haven't been able to make clear what the first premise is about. Of course your knowledge of logic and math should affect your credences. Logical statements are universal; they apply to all properties, including whatever properties you are forming credences about.

    The first premise is about knowledge/stipulations about what's happening in "distant" places. It says if you have two different versions of a problem:

    1. Information about how fridges in your hotel are stocked, plus blabla happening on Alpha Centauri.
    2. The same exact information about your hotel, plus blurgblurg happening on Alpha Centauri.

    then your credences about whether there is vanilla or chocolate ice cream in your fridge should be the same in these two versions.
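
    Here is a small check of this invariance (the 50/50 flavour split and the two distant events are stand-ins): since the distant goings-on are independent of the fridge by stipulation, conditioning on them leaves the fridge credence unchanged.

    ```python
    import random

    # Hypothetical worlds: fridge flavour drawn 50/50, and an independent
    # "distant" event (blabla vs blurgblurg) outside the lightcone.
    random.seed(4)
    N = 200_000

    worlds = [(random.random() < 0.5,
               random.choice(["blabla", "blurgblurg"]))
              for _ in range(N)]

    for distant in ("blabla", "blurgblurg"):
        subset = [vanilla for vanilla, d in worlds if d == distant]
        print(f"P(vanilla | {distant}) ~", sum(subset) / len(subset))
    # both ~0.5: the distant stipulation drops out
    ```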

    Your second example, as far as I can tell, is not so much a counterexample to premise 1 but just a statement of its denial:

    > If I believe that these observers actually exist, this might give me reason to doubt that my model is correct. If I believe that these observers don't exist, then [my credences about the model should be different, i.e. less doubt]

    Why? If there's a good argument for that, then 1' would be undermined (assuming I understood your situation correctly).

    Replies
    1. Hi Dmitriy,

      With work, and distractions, and the pace of the conversation slowing down, I'm not sure I really have a handle on the issues any more.

      What's happening on Alpha Centauri might affect my credences if I don't know for sure I'm not on Alpha Centauri. If I know that A is happening on earth and B is happening on Alpha Centauri, but I don't know whether I'm on earth or Alpha Centauri, then both A and B could affect my credences even though I'm only causally connected to one of them.

      If knowing that I'm on earth is like knowing the ID number of the room, then I can perhaps say that I know that A is happening on either earth or Alpha Centauri, and B is happening the other way around. So I have a model where there is a world where A is happening and a world where B is happening. I may know I'm on earth but I don't know if I'm on the A-world or the B-world. So again even though I'm only causally connected to one of these worlds, both affect my credences as long as I believe I could be in either.

      This kind of reasoning doesn't feel causal to me, it feels logical or mathematical, which is why I'm pushing against premise 1 a little.

      My basic issue with your argument is that you *seem* to be saying that I can't let both A and B affect my credences because I'm only causally connected to one of them, which is obviously wrong.

      Repeating the experiment is changing my model of the ensemble in which I am under self-locating uncertainty, so even though I'm not causally connected to the other experiments, it feels like I might have reasons to change my credences on the basis of logic/mathematical considerations because I don't know which of the experiments in my model corresponds to my actual experiment. This is true even if I know the id number of my experiment, because the ID number is not salient to my model -- my model is more concerned with coin flips etc.

  38. Hey DM,

    I think this captures the main thrust of what you are saying:

    >What's happening on Alpha Centauri might affect my credences if I don't know for sure I'm not on Alpha Centauri

    Sure, in this case stipulations about Alpha Centauri should indeed affect your credences. But that doesn't violate premise 1':

    1'. A rational agent's credences about properties of a given situation cannot be affected by stipulations (known to him) about things that cannot causally affect those properties.

    In 1' we are talking about stipulations about things that the agent knows cannot causally affect the properties of interest. Otherwise the premise would be silly and clearly false. In your example that requirement is not fulfilled - the agent doesn't know whether the stipulations about Alpha Centauri can affect his hotel, because the hotel could be on Alpha Centauri!

    You might say the same is true for premise 2' or EP, but there the situation is different: for premise 2' the role of Alpha Centauri is played by "not here". You can probably see why that's different: the agent most definitely knows that he is causally disconnected from "not here".

    Replies
    1. > You can probably see why that's different, the agent most definitely knows that he is causally disconnected from "not here".


      I'm not sure that follows when the agent is under self-locating uncertainty regarding the experiments in the model, because updating the model of where the agent could be can clearly change the agent's credences.

  39. Hi DM,

    To clarify, are you not granting that you are causally disconnected from things outside your lightcone? That's essentially all my quote was expressing, and denying it would contradict physics.

    Replies
    1. I grant that, but you don't always know what's outside your lightcone. If you have a model of your circumstances, where 99% of stuff in your model must be outside your lightcone, but you don't know which 99% of stuff, then 100% of the stuff is relevant to your credences.

      "Not here" doesn't really help if you don't know where "here" is, with respect to the attributes that are relevant for your model/credences.

  40. Hey DM,

    But that point doesn't apply to premise 1' in my justification for EP [for reference, I put the premises at the end]. I'll modify a passage from earlier to hopefully better convey this:

    The first premise is about knowledge/stipulations about what's happening in "distant" places. It says if you have two different versions of a problem:

    1. Information about how fridges in your hotel are stocked, plus blabla happening "far" from your hotel.
    2. The same exact information about your hotel, plus blurgblurg happening "far" from your hotel.

    then your credences about whether there is vanilla or chocolate ice cream in your fridge should be the same in these two versions. ("Far" can be interpreted as "outside the lightcone".)

    So "I grant that, but..." means that you grant premise 1', though perhaps you would say your objections then apply to premise 2'. Is this fair? If so, I would need to defend 2' now.

    To remind ourselves, here's the argument we are talking about:

    1'. A rational agent's credences about properties of a given situation cannot be affected by stipulations (known to him) about things that cannot causally affect those properties.
    2'. The stipulation about the presence or absence of many independent repetitions of a rational agent's situation is a stipulation about things that cannot causally affect the properties of his situation.
    3'. Therefore, a rational agent's credences about properties of his situation cannot be affected by such a stipulation.

