Suppose I independently randomly and uniformly choose X and Y between 0 and 1, possibly including 0 but not including 1. Now in the diagram above, let the blue event B be that the point (X, Y) lies on one of the two blue line segments, and let the red event R be that it lies on one of the two red line segments. (The red event is the graph of the fractional part of 2x; the blue event is the reflection of this in the line y = x.) As usual, a filled circle indicates a point included and an unfilled circle indicates a point not included; the purple point at (0, 0) is in both the red and blue events.
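To make the setup precise (this is just one way to formalize it, assuming the sample space is the square [0, 1) × [0, 1) with X and Y as its coordinates), the two events can be written as:

```latex
% One possible formalization of the two events on the sample space [0,1)^2,
% writing "2x \bmod 1" for the fractional part of 2x:
R = \{\, (x, y) \in [0,1)^2 : y = 2x \bmod 1 \,\}, \qquad
B = \{\, (x, y) \in [0,1)^2 : x = 2y \bmod 1 \,\}.
```

Each of these is a union of two line segments, matching the diagram: R consists of segments of slope 2, and B of their reflections across y = x, of slope 1/2.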
It seems that B is twice as likely as R. For, given any value of X—see the dotted line in the diagram—there are two possible values of Y that put the point in B but only one possible value of Y that puts it in R.
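Spelled out under the formalization above (a sketch, not anything in the original post), the fibre of each event over a fixed value x of X is:

```latex
% For fixed X = x in [0,1), the y-values that put (x, y) in each event:
\{\, y : (x, y) \in B \,\} = \{\, x/2,\ (x+1)/2 \,\},   % two values, both in [0,1)
\qquad
\{\, y : (x, y) \in R \,\} = \{\, 2x \bmod 1 \,\}.      % a single value
```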
But of course the situation is completely symmetric between X and Y, and the above reasoning can be repeated with X and Y swapped to conclude that R is twice as likely as B.
Hmm.
Of course, there is no paradox in classical probability theory where we just say that the red and blue events have zero probability, and twice zero equals zero.
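Concretely (a routine Lebesgue-measure computation, nothing beyond the classical theory): each event is contained in a finite union of line segments, so

```latex
% Classical (Lebesgue) probabilities of the two events:
P(R) = \lambda^2(R) = 0, \qquad P(B) = \lambda^2(B) = 0,
\qquad\text{and}\qquad 2 \cdot 0 = 0,
% so "B is twice as likely as R" and "R is twice as likely as B"
% are both trivially and consistently true as claims about measure zero.
```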
But if we have any probability theory that distinguishes different events that are classically of zero-probability and says things like “it’s more likely that Y is 0.2 or 0.8 than that Y is 0.2” (say because both events have infinitesimal probability, with one of these infinitesimals being twice as big as the other), then the above reasoning should yield the absurd conclusion that B is more likely than R and R is more likely than B.
Technically, there is nothing new in the above. It just shows that when we have a probability theory that distinguishes classically zero-probability events, that probability theory will fail conglomerability. I.e., we have to reject the reasoning that just because conditionally on any value of X it’s twice as likely that we’re in B as in R, therefore it’s twice as likely that we’re in B as in R. We already knew that conglomerability reasoning had to be rejected in such probability theories. But I think this is a really vivid way of showing the point, as this instance of conglomerability reasoning seems super plausible. And I think the vividness of it makes it clear that the problem doesn’t depend on any kind of weird trickery with strange sets, and that no mere technical tweak (such as moving to qualitative or comparative probabilities) is likely to get us out of it.
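For reference, here is one standard way to state the conglomerability principle whose comparative analogue is doing the work above (the notation is mine, not the post's):

```latex
% Conglomerability in a partition \{E_i\}: if the conditional probabilities
% of A are bounded on every cell, the unconditional probability obeys the
% same bounds:
a \le P(A \mid E_i) \le b \ \text{for all } i
\quad\Longrightarrow\quad
a \le P(A) \le b.
% The reasoning in the post uses a comparative analogue of this, with the
% partition \{ X = x \}_{x \in [0,1)}:
P(B \mid X = x) = 2\, P(R \mid X = x) \ \text{for all } x
\quad\Longrightarrow\quad
P(B) = 2\, P(R).
```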
To match the illustration, shouldn't the text state "Suppose I independently randomly and uniformly choose X and Y between 0 and 1, possibly including 0 but not including 1"?
Fixed! Good catch.