The argument in this post is based on a construction by Dubins (see Example 2.1 here) that I've adapted to an infinitesimal case.
Suppose you can have an infinite lottery with ticket numbers 1,2,3,... and each ticket has infinitesimal probability (perhaps the same one for each). Then really weird stuff can happen. Say I toss a fair coin, but don't show you the result. Instead, you know for sure that I will do this:
- If the coin was tails, I run an infinite lottery with ticket numbers 1,2,3,... and with each ticket having infinitesimal probability
- If the coin was heads, I run an infinite lottery with the same ticket numbers, but now the probability of ticket n is 2^(−n).
Here's the oddity. No matter which ticket I announce, you will end up all but certain (i.e., assigning a probability infinitesimally short of 1) that the coin was heads. Here's why. Suppose I announce ticket n. Then P(n|heads) = 2^(−n), a non-infinitesimal positive number, while P(n|tails) is infinitesimal. Plug these into Bayes' theorem, P(heads|n) = P(n|heads)P(heads) / [P(n|heads)P(heads) + P(n|tails)P(tails)], and assume that your prior probability for heads was 1/2 (actually, all that's needed is that it be neither zero nor infinitesimal). The infinitesimal term in the denominator is negligible next to the non-infinitesimal one, so your posterior probability P(heads|n) ends up equal to 1−a, where a is infinitesimal.
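The computation above can be sketched numerically. This is just a finite stand-in, not the actual infinitesimal case: I use a tiny rational ε (a hypothetical choice of mine, 10^(−30)) as a proxy for the infinitesimal ticket probability under tails, and exact rational arithmetic so no precision is lost.

```python
from fractions import Fraction

def posterior_heads(n, eps):
    """Posterior P(heads | ticket n) by Bayes' theorem, with a 1/2 prior.

    P(n|heads) = 2^(-n); eps plays the role of the infinitesimal P(n|tails).
    """
    p_n_heads = Fraction(1, 2**n)
    p_n_tails = eps
    prior = Fraction(1, 2)
    # Bayes: P(n|heads)P(heads) / [P(n|heads)P(heads) + P(n|tails)P(tails)]
    return (p_n_heads * prior) / (p_n_heads * prior + p_n_tails * prior)

# eps is a stand-in for the infinitesimal; any positive eps vastly
# smaller than every 2^(-n) we consider gives the same qualitative result.
eps = Fraction(1, 10**30)
for n in (1, 10, 20):
    print(n, float(posterior_heads(n, eps)))
```

For every announced ticket n, the posterior comes out a hair short of 1, mirroring how a genuine infinitesimal P(n|tails) forces P(heads|n) = 1−a with a infinitesimal.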
So I can rationally force you to be all but certain that it was heads, simply by telling you the result of my lottery experiment. And by reversing the arrangement, I could force you to be all but certain that it was tails. Thus there is something pathological about the infinite lottery with infinitesimal probabilities.
This is, to me, yet another of the somewhat unhappy results that show that probability theory has a quite limited sphere of epistemological application.