One of the unattractive ingredients of the Sleeping Beauty problem is that Beauty gets memory wipes. One might think that normal probabilistic reasoning presupposes no loss of evidence, and weird things happen when evidence is lost. In particular, thirding in Sleeping Beauty is supposed to be a counterexample to van Fraassen’s reflection principle, that if you know for sure you will have a rational credence of p in some proposition, you should already assign credence p to it. But that principle only applies to rational credences, and it has been claimed that forgetting renders one irrational.
Anyway, it occurred to me that a causal infinitist can manufacture something like a version of Sleeping Beauty with no loss of evidence.
Suppose that:
On heads, Beauty is woken up at 8 + 1/n hours for n = 2, 4, 6, ... (i.e., at 8.5 hours or 8:30, at 8.25 hours or 8:15, at 8.166… hours or 8:10, and so on).
On tails, Beauty is woken up at 8 + 1/n hours for n = 1, 2, 3, ... (i.e., at 9:00, 8:30, 8:20, 8:15, 8:12, 8:10, …).
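The clock-time arithmetic above can be checked with a small sketch (the helper name `clock_time` is mine, not part of the setup):

```python
# Convert a wakeup time of 8 + 1/n hours into a clock time.
# For the n listed in the setup, 60/n is a whole number of minutes.
def clock_time(n):
    minutes = 60 // n  # 1/n hours past 8:00, in minutes
    return "9:00" if minutes >= 60 else f"8:{minutes:02d}"

print([clock_time(n) for n in (2, 4, 6)])
# heads schedule: ['8:30', '8:15', '8:10']

print([clock_time(n) for n in (1, 2, 3, 4, 5, 6)])
# tails schedule: ['9:00', '8:30', '8:20', '8:15', '8:12', '8:10']
```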
Each time Beauty is woken up, she remembers infinitely many wakeups. There is no forgetting. Intuitively she has twice as many wakeups on tails, which would suggest that the probability of heads is 1/3. If so, we have a counterexample to the reflection principle with no loss of memory.
Alas, though, the “twice as many” intuition is fishy, given that both infinities have the same cardinality. So we’ve traded the forgetting problem for an infinity problem.
Still, there may be a way of avoiding the infinity problem. Suppose a second independent fair coin is tossed. We then proceed as follows:
On heads+heads, Beauty is woken up at 8 + 1/n hours for n = 2, 4, 6, ...
On heads+tails, Beauty is woken up at 8 + 1/n hours for n = 1, 3, 5, ...
On tails+whatever, Beauty is woken up at 8 + 1/n hours for n = 1, 2, 3, ....
Then when Beauty wakes up, she can engage in standard Bayesian reasoning. She can stipulatively rigidly define t1 to be the current time, so that t1 = 8 + 1/n hours for some particular n. If the first coin is heads, the probability of her waking up at t1 is 1/2: exactly one of the two heads scenarios (the one whose parity matches n) includes a wakeup at t1, and which scenario obtains depends on the second, independent fair coin. If the first coin is tails, the probability of her waking up at t1 is 1. So by Bayes’ theorem, her credence in heads should be (1/2 · 1/2)/(1/2 · 1/2 + 1/2 · 1) = 1/3.
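The Bayes computation can be spelled out exactly with rational arithmetic. This is only a sketch of the calculation in the text: the four coin outcomes get prior 1/4 each, the likelihood of a wakeup at t1 = 8 + 1/n is 0 or 1 given an outcome, and we condition on the wakeup.

```python
from fractions import Fraction

# The four equiprobable outcomes of the two independent fair coins.
OUTCOMES = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
PRIOR = Fraction(1, 4)

def wakes_at(outcome, n):
    """Is Beauty woken at 8 + 1/n hours under this outcome?"""
    first, second = outcome
    if first == "T":
        return True           # tails: woken for every n
    if second == "H":
        return n % 2 == 0     # heads+heads: even n only
    return n % 2 == 1         # heads+tails: odd n only

def posterior_heads(n):
    """P(first coin heads | Beauty wakes at t1 = 8 + 1/n), by Bayes."""
    total = sum(PRIOR for oc in OUTCOMES if wakes_at(oc, n))
    heads = sum(PRIOR for oc in OUTCOMES
                if oc[0] == "H" and wakes_at(oc, n))
    return heads / total

print(posterior_heads(4))   # t1 = 8:15 -> 1/3
print(posterior_heads(5))   # t1 = 8:12 -> 1/3
```

Whichever parity n has, the posterior comes out 1/3, matching the thirder answer without any forgetting.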
There is now neither forgetting nor fishy infinity stuff.
That said, one can specify that the reflection principle only applies if one can be sure ahead of time that one will at a specific time have a specific rational credence. I think the above cases can be modified further to handle that (e.g., one can maybe use time-dilation to set up a case where in one reference frame the wakeups for heads+heads are at different times from the wakeups for heads+tails, but in another frame they are the same).
All that said, the above stories all involve a supertask, so they require causal infinitism, which I reject.