A countably infinite number of people, including me, are about to roll fair indeterministic dice. What probability should I assign to rolling six?
Obviously, 1/6.
But suppose I describe the situation thus: “There are two equally sized groups of people: those who will roll six and those who won’t. How likely is it that I am in the former group rather than the latter?” (After all, I know that infinitely many will roll six and infinitely many won’t, and that it’ll be the same infinity in both cases.) So why 1/6, rather than 1/2, or undefined?
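(A sketch of the parenthetical fact, in notation not used in the post: write $X_i$ for the indicator that person $i$ rolls six, so the $X_i$ are i.i.d. with $P(X_i = 1) = 1/6$. Then

\[
P\Big(\textstyle\sum_i X_i = \infty \ \text{and}\ \sum_i (1 - X_i) = \infty\Big) = 1,
\qquad
\frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{\ \text{a.s.}\ } \frac{1}{6},
\]

by the second Borel–Cantelli lemma and the strong law of large numbers: both groups are almost surely countably infinite, even though the limiting relative frequency of six-rollers is 1/6.)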
Here’s what I want to say: “The objective chance of my rolling six is 1/6, and objective chances are king, in the absence of information posterior to the outcome.” Something like the Principal Principle should apply. And it should be irrelevant that there are infinitely many other people rolling dice.
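(Roughly, in Lewis’s formulation, and stated here only as a sketch: where $C$ is a reasonable initial credence function, $\mathrm{ch}$ the chance function, and $E$ any admissible evidence,

\[
C\big(A \mid \mathrm{ch}(A) = x \wedge E\big) = x .
\]

Information posterior to the outcome is the paradigm case of inadmissible evidence.)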
If I say this, then I may have to deny both the self-sampling assumption and the self-indication assumption. For if I really consider myself to be a person chosen completely at random from the set of die rollers, or (in the self-indication cases) from some larger set, it seems I shouldn’t think it less likely that I rolled six than that I didn’t, since equally many people did each.
It looks to me as if we have two competing ways of generating probabilities: counting and objective chance. I used to think that counting trumped objective chance. Now I am inclined to think that objective chance trumps counting, and that counting counts for nothing in the absence of objective chance.
I thought the resolution for this was that probabilities aren't sensible with infinities?
That's not a resolution. That's giving up. :-|
Maybe SIA and SSA both fail when there are countably infinitely many people you "could have been", because there's no uniform distribution over the integers. Maybe instead of denying them, you just say you need a more general theory to deal with scenarios where the set you are drawn from has no uniform distribution.
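(The no-uniform-distribution point is quick to check; a sketch in notation not used in the comment: if a countably additive probability measure $\mu$ on $\mathbb{N}$ assigned the same mass $c$ to every integer, then

\[
\mu(\mathbb{N}) = \sum_{n \in \mathbb{N}} c \in \{0, \infty\},
\]

so $\mu(\mathbb{N}) = 1$ is impossible. Only finitely additive surrogates, such as density-style measures, are available, and they are not unique.)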
You should use all the (relevant) information you have. If there are relevant objective chances, use them. If there aren’t, you have to either give up or resort to controversial ‘counting’ principles like Indifference. ‘Counting’ and objective chance don’t compete: ‘counting’ is the fall-back option.
Think about the following cases.

(Case A) There is a finite number N of fair, objectively chancy, 6-sided dice. An angel starts by arbitrarily choosing a die (‘your die’). The angel tells you that he will roll all the dice and note the outcomes. If there are not exactly M sixes and N - M non-sixes, he will roll all the dice again, and he will keep doing this until there are exactly M sixes and N - M non-sixes. The angel tells you that he has done what he said he would. What should be your credence that your die shows six? Answer: M/N. This may look like mere ‘counting’, but actually it can be calculated from the objective chances by Bayesian conditioning.

(Case B) As above, but this time the dice are biased. The angel tells you that all the dice have the same bias, but does not reveal the relevant probabilities. Again, what should be your credence that your die shows six? Answer: M/N again.

(Case C) As above, but all the dice are biased differently. This time, you can’t apply Bayes. If you want to say that the answer is M/N, you will have to invoke Indifference over the angel’s choice of ‘your die’.
It is amusing and instructive to compare these cases with similar infinite ones…
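A minimal Monte Carlo sketch of Case A (my illustration, not the commenter's; the values N = 10, M = 2 and all names are made up): implement the angel's rejection procedure with fair dice and check how often the arbitrarily chosen ‘your die’ shows six. The estimate should land near M/N = 0.2 rather than 1/6.

```python
import random

def case_a_frequency(n_dice=10, m_sixes=2, trials=20000, seed=0):
    """Estimate P(your die shows six | exactly m_sixes of the n_dice show six)
    via the angel's rejection procedure from Case A (fair dice)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Re-roll all the dice until exactly m_sixes of them show six.
        while True:
            rolls = [rng.randint(1, 6) for _ in range(n_dice)]
            if sum(r == 6 for r in rolls) == m_sixes:
                break
        # 'Your die' is the arbitrarily chosen die at index 0.
        if rolls[0] == 6:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    estimate = case_a_frequency()
    print(f"estimated credence: {estimate:.3f}  (M/N = {2 / 10:.3f})")
```

Replacing the fair roll with a common biased one reproduces Case B, and the answer is still M/N by the same exchangeability; giving each die its own bias, as in Case C, breaks that symmetry.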