There is nothing essentially new here, but it is a particularly vivid way of putting an observation due to Paul Bartha.

You are going to receive a sequence of a hundred tickets from a countably infinite fair lottery. When you get the first ticket, you will be nearly certain (your probability will be 1, or 1 minus an infinitesimal) that the next ticket will have a bigger number. When you get the second, you will be nearly certain that the third will be bigger. And so on. Thus, throughout the sequence you will be nearly certain that the next ticket will be bigger.

But surely at some point you will be wrong. After all, it's incredibly unlikely that a hundred tickets from a lottery will come out sorted in ascending order. To make the point clear, suppose the sequence of tickets is picked as follows. First, a hundred tickets are picked via a countably infinite fair lottery: either from the same lottery, in which case they are guaranteed to be distinct, or from independent lotteries, in which case they are nearly certain to be all distinct. Then the hundred tickets are shuffled, and you are given them one by one. Nonetheless, the above argument is unaffected by the shuffling: at each point you will be nearly certain that the next ticket you get will have a bigger number, since there are only finitely many ways for that to fail and infinitely many for it to succeed, with all the options equally likely.

Yet if you take a hundred numbers and shuffle them, it's extremely unlikely that they will be in ascending order. So you will be nearly certain of something, and yet very likely wrong in a number of the cases. And even while you are nearly certain of it, you will be able to go through this argument, see that many of your judgments that the next number is bigger will be wrong, and yet this won't affect your near certainty that the next number is bigger.
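For concreteness, both probabilities in play here can be checked numerically. This is only a sketch under an assumption of mine: the hundred tickets are modeled as distinct integers in uniformly random order. A uniform shuffle comes out ascending with probability exactly 1/100!, and on average about half of the ninety-nine "next ticket is bigger" judgments come out true.

```python
import math
import random

# Probability that a uniform shuffle of 100 distinct tickets
# comes out in ascending order: exactly 1/100!.
p_sorted = 1 / math.factorial(100)
print(f"p(ascending) = {p_sorted:.3e}")  # astronomically small

# Simulate: in a shuffled sequence, how often is the next ticket bigger?
random.seed(0)
trials = 10_000
ascents = 0
for _ in range(trials):
    # 100 distinct tickets in random order (a finite stand-in, by assumption)
    seq = random.sample(range(10**6), 100)
    ascents += sum(seq[i + 1] > seq[i] for i in range(99))

# Each of the 99 adjacent pairs is ascending with probability 1/2,
# so on average you are "wrong" about half the time.
print(f"mean ascents per sequence: {ascents / trials:.2f}")  # near 99/2 = 49.5
```

So even though each individual "the next one is bigger" judgment looks nearly certain in the infinite case, the shuffled order guarantees that roughly half of them fail.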

## 6 comments:

A countably infinite fair lottery is obviously impossible, for exactly this reason.

There seems to be a de re/de dicto distinction going on here, or a quantifier inversion.

Suppose we deal with a lottery of size two: we just think about two positive integers, X and Y, chosen sequentially. Then the way to frame the first conclusion is to say that once we have picked X but before we have picked Y, the probability that X is less than Y is near 1. The way to frame the second conclusion is to say that if we pick X and pick Y, the probability that X is less than Y is the same as the probability that Y is less than X, and so intuitively near 0.5.

But ‘Y’ is being used subtly differently in these two conclusions. In the first it is shorthand for a description, “the second number picked, whatever it is.” In the second it is a name for whatever number has been picked second.

Treat “a selected F” as a quantifier over Fs. Let F = the range of positive integers. Then the two claims seem to be

(1) For a selected F x, p(less_than(x, a selected F y)) is near 1.

(2) For a selected F x and a selected F y, p(less_than(x, y)) is about 0.5.

I agree with Heath. This issue has nothing to do with the fact that we have an infinite number of possibilities. The two ways of setting up the lottery are essentially different.

It seems we can put this in terms of the same proposition: "The number that was picked first is less than the number that was picked second." Before you learn what the first number is, you assign it probability close to 1/2. After you learn it, you assign probability close to 1. It's the same proposition in both cases.

Suppose we select two random positive integers X and Y sequentially. These can be thought of as determining a point (X,Y) on a quadrant of the Cartesian plane. (Blogger doesn’t like angle brackets.) To say that p(X lt Y) is near 1 is to say that the point almost certainly lies above the 45-degree line X=Y. To say that p(X lt Y) = p(Y lt X) is to say that the point has an equal chance of landing on either side of that line.

Intuitively, the second seems correct; the point could be anywhere. We get the first interpretation this way: suppose we look at the X value. It is finite, so the Y value is almost certainly greater than it. On that view, p(X lt Y) is very high. But by that logic, we could look at the Y value, note that it is finite, and figure that the X value is almost certainly greater, so that p(Y lt X) is very high. Paradox.
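One way to see why no finite lottery generates this paradox, sketched under my own assumption of a uniform lottery on 1..M (not a true infinite lottery): the law of total probability forces the conditional probabilities p(Y > x) to average back to the unconditional p(Y > X), so they cannot all be near 1. In the infinite case there is no such averaging constraint, which is where the trouble lives.

```python
import statistics

# Finite stand-in (assumption): X, Y independent and uniform on 1..M.
M = 10_000

# Conditional probabilities p(Y > x) for each possible first pick x.
conds = [(M - x) / M for x in range(1, M + 1)]

# Law of total probability: these must average to the unconditional
# p(Y > X) = (M - 1) / (2M), just under 1/2 -- so they cannot all be near 1.
avg = statistics.mean(conds)
print(avg)
```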

Maybe we are reasoning like this:

(1) If X=1, then p(X lt a randomly selected positive integer) is near 1.

(2) If X=2, then p(X lt a randomly selected positive integer) is near 1.

(3) If X=3, then p(X lt a randomly selected positive integer) is near 1.

….

(Omega) Therefore, for all positive integers X, p(X lt a randomly selected positive integer) is near 1.

(Rand) Therefore, for a randomly selected positive integer X, p(X lt a randomly selected positive integer) is near 1.

All these inferences seem good to me. But maybe there is a scope issue, because

(Rand-2) p(a randomly selected positive integer X lt a randomly selected positive integer Y) is near 1

is certainly false.

First, a cheat: I am a finite being. There is a large but finite number of numbers that I can grasp. To me, all other numbers are just “too big” (strictly, “too complex”). If I can grasp the first outcome, I assign probability 1 that the second is bigger. Otherwise, as will happen with probability 1, I stick with ½.

Cheating aside, with IFLs (infinite fair lotteries) conditioning does not work in the usual way. Your manipulation/bilking argument in “Infinite Lotteries, Perfectly Thin Darts etc.” also illustrates this. Maybe with IFLs it is better to give up credences and use decision rules. Suppose I have the option, after the first number is revealed, of accepting a bet that wins $1 if the second number exceeds the first and loses $2 otherwise. Clearly, “always accept” is a losing strategy, whatever my naive credences might suggest. “Pick a number N; accept if the first number is less than N, reject otherwise” is a reasonable strategy. [Note that a similar strategy works in your bilking argument.] Maybe it works like this: in normal cases strategies can be summarized by credences, but in paradoxical cases they cannot.
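The two strategies in this comment can be compared on a finite stand-in. This is a sketch under my own assumptions: a uniform lottery on 1..M (not a true IFL), with M and the threshold N chosen arbitrarily for illustration. "Always accept" the win-$1/lose-$2 bet has expectation about -$0.50 per round, while the threshold rule only accepts bets it is nearly sure to win.

```python
import random

random.seed(2)
M = 10**6          # finite stand-in for the infinite lottery (assumption)
N = 1000           # threshold for the "accept if first < N" rule (assumption)
trials = 100_000

def payoff(first: int, second: int) -> int:
    """Win $1 if the second number exceeds the first, else lose $2."""
    return 1 if second > first else -2

always, threshold = 0, 0
for _ in range(trials):
    first = random.randint(1, M)
    second = random.randint(1, M)
    always += payoff(first, second)          # "always accept"
    if first < N:                            # "accept only if first < N"
        threshold += payoff(first, second)

print(f"always-accept total: {always}")      # roughly -0.5 per round
print(f"threshold total:     {threshold}")   # bets rarely, almost never loses
```

The threshold strategy only fires when the first number is small relative to the lottery, which is exactly when "the second will be bigger" is a good bet even by the unconditional, symmetric reckoning.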
