Thursday, May 23, 2024

A supertasked Sleeping Beauty

One of the unattractive ingredients of the Sleeping Beauty problem is that Beauty gets memory wipes. One might think that normal probabilistic reasoning presupposes no loss of evidence, and that weird things happen when evidence is lost. In particular, thirding in Sleeping Beauty is supposed to be a counterexample to van Fraassen’s reflection principle, that if you know for sure that you will have a rational credence of p, you should already have credence p. But that principle only applies to rational credences, and it has been claimed that forgetting makes one fail to be rational.

Anyway, it occurred to me that a causal infinitist can manufacture something like a version of Sleeping Beauty with no loss of evidence.

Suppose that:

  • On heads, Beauty is woken up at 8 + 1/n hours for n = 2, 4, 6, ... (i.e., at 8.5 hours or 8:30, at 8.25 hours or 8:15, at 8.166… hours or 8:10, and so on).

  • On tails, Beauty is woken up at 8 + 1/n hours for n = 1, 2, 3, ... (i.e., at 9:00, 8:30, 8:20, 8:15, 8:12, 8:10, …).
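
To make the schedules concrete, here is a minimal Python sketch (my illustration, not part of the original setup) that prints the first few wakeup times on each outcome, converting 8 + 1/n hours to clock time:

    # Print the first few wakeup times for each outcome of the coin toss.
    # On heads, Beauty wakes at 8 + 1/n hours for even n; on tails, for every n.

    def clock(hours):
        """Convert a time in hours (e.g., 8.25) to an H:MM string."""
        h = int(hours)
        m = round((hours - h) * 60)
        return f"{h}:{m:02d}"

    heads = [8 + 1 / n for n in range(2, 8, 2)]  # n = 2, 4, 6
    tails = [8 + 1 / n for n in range(1, 7)]     # n = 1, 2, ..., 6

    print("heads:", [clock(t) for t in heads])  # 8:30, 8:15, 8:10
    print("tails:", [clock(t) for t in tails])  # 9:00, 8:30, 8:20, 8:15, 8:12, 8:10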

Each time Beauty is woken up, she remembers all of her infinitely many previous wakeups. There is no forgetting. Intuitively, she has twice as many wakeups on tails, which would suggest that the probability of heads is 1/3. If so, we have a counterexample to the reflection principle with no loss of memory.

Alas, though, the “twice as many” intuition is fishy, given that both infinities have the same cardinality: the map n ↦ 2n pairs the tails wakeups one-to-one with the heads wakeups. So we’ve traded the forgetting problem for an infinity problem.

Still, there may be a way of avoiding the infinity problem. Suppose a second independent fair coin is tossed. We then proceed as follows:

  • On heads+heads, Beauty is woken up at 8 + 1/n hours for n = 2, 4, 6, ...

  • On heads+tails, Beauty is woken up at 8 + 1/n hours for n = 1, 3, 5, ...

  • On tails+whatever, Beauty is woken up at 8 + 1/n hours for n = 1, 2, 3, ...

Then when Beauty wakes up, she can engage in standard Bayesian reasoning. She can stipulatively rigidly define t1 to be the current time, so that t1 = 8 + 1/n hours for some particular n. If the first coin is tails, she is certain to be woken at t1. If the first coin is heads, she is woken at t1 just in case the second coin matches the parity of n, so the probability of her being woken at t1 is 1/2. And so by Bayes’ theorem, it seems her credence in heads should be (1/2 × 1/2)/(1/2 × 1/2 + 1/2 × 1) = 1/3.
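
As a sanity check on that Bayes computation, here is a minimal Monte Carlo sketch (again my illustration, not part of the post; the choice of n = 7 as the index picked out by t1 is arbitrary):

    import random

    # Estimate P(first coin = heads | Beauty is awake at t1), with t1 = 8 + 1/7
    # hours. Since 7 is odd, Beauty is awake at t1 on heads+tails and on
    # tails+whatever, but not on heads+heads.

    random.seed(0)
    trials = 10**6
    awake = heads_and_awake = 0
    for _ in range(trials):
        first_heads = random.random() < 0.5
        second_heads = random.random() < 0.5
        awake_at_t1 = not (first_heads and second_heads)  # asleep only on heads+heads
        if awake_at_t1:
            awake += 1
            heads_and_awake += first_heads

    print(heads_and_awake / awake)  # ~0.333, matching the Bayesian answer of 1/3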

There is now neither forgetting nor fishy infinity stuff.

That said, one can specify that the reflection principle only applies if one can be sure ahead of time that one will at a specific time have a specific rational credence. I think the above cases can be further modified to handle that (e.g., one could perhaps use time dilation to set up a case where in one reference frame the wakeups for heads+heads occur at different times from the wakeups for heads+tails, but in another frame they occur at the same times).

All that said, the above stories all involve a supertask, so they require causal infinitism, which I reject.

2 comments:

Deliberation Under Ideal Conditions said...

Weintraub has another case that shows that there’s nothing fishy about the update in Sleeping Beauty. Quoting her paper:
I propose to uphold the second response by rebutting the premiss of the first. Sleeping Beauty, I suggest, has received new relevant information. True, she knew all along that she would be wakened. But now she knows she is awake now.

Clearly, two such statements, 'It will at some point be p' and 'It is p now' have different implications for action. For instance, the belief that it will rain sometime doesn't motivate me to take an umbrella, whereas the belief that it is raining now does.

Of course, one's opponent (Lewis 2001) might question the relevance of this information to the case in hand. But I think we can meet the challenge by slightly altering the story. This time, Sleeping Beauty is told she will see three lights flashing (one after the other), being made to forget what she has seen after each flash. If the (fair) coin lands heads, one of the three flashes will be red and two will be green. If the coin lands tails, one will be green and two will be red.

Upon seeing a red flash, she should obviously assign probability 1/3 to the coin's having landed heads. But here, too, we may be challenged to justify the change in probabilities. She knew all along she would see a red flash! Here, the argument isn't even tempting. She believes a red light is flashing now, and that clearly makes a difference.

The cases are analogous. In both of them, what she knew all along would happen she now knows to be actual. That is the only new information. And if it makes (as everyone will agree) a difference in the first case, why not in the second.

Deliberation Under Ideal Conditions said...

Also, I think this scenario seems just obviously impossible, which is a good argument for causal finitism.