## Wednesday, October 26, 2022

### The Law of Large Numbers and infinite run payoffs

In discussions of maximization of expected value, the Law of Large Numbers is sometimes invoked, at times (especially by me) off-handedly. According to the Strong Law of Large Numbers (SLLN), if you have an infinite sequence of independent random variables X_1, X_2, ... satisfying some conditions (e.g., in the Kolmogorov version, the sum over n of σ_n^2/n^2 is finite, where σ_n^2 is the variance of X_n), then with probability one, the average of the random variables converges to the average of the mathematical expectations of the random variables. The thought is that in that case, if the expectation of each X_n is positive, it is rationally required to accept the bet represented by X_n.

In a recent post, I showed how in some cases where the conditions of the Strong Law of Large Numbers are not met, in an infinite run it can be disastrous to bet in each case according to expected value.

Here I want to make a minor observation. The fact that the SLLN applies to some sequence of independent random variables is itself not sufficient to make it rational to bet in each case according to the expectations in an infinite run. Let X_n be 2^n/n with probability 1/2^n and −1/(2n) with probability 1 − 1/2^n. Then

• EX_n = (1/2^n)(2^n/n) − (1/(2n))(1 − 1/2^n) = (1/n)(1 − (1/2)(1 − 1/2^n)).

Clearly EX_n > 0. So in individual decisions based on expected value, each X_n will be a required bet.
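As a sanity check, here is a short Python computation (exact rational arithmetic; the function name `expected_value` is just mine) confirming the closed form and the positivity of the expectations:

```python
from fractions import Fraction

def expected_value(n):
    """E[X_n] for the bet X_n: pays 2^n/n with probability 1/2^n,
    and -1/(2n) with probability 1 - 1/2^n."""
    p = Fraction(1, 2 ** n)
    win = Fraction(2 ** n, n)
    loss = Fraction(-1, 2 * n)
    return p * win + (1 - p) * loss

# Agrees with the closed form (1/n)(1 - (1/2)(1 - 1/2^n));
# e.g., E[X_1] = 3/4, and every E[X_n] is positive.
for n in range(1, 6):
    print(n, expected_value(n))
```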

Now, just as in my previous post, almost surely (i.e., with probability one) only finitely many of the bets X_n will have the positive payoff: by the Borel-Cantelli lemma, since the sum of the win probabilities 1/2^n is finite, almost surely only finitely many of the winning events occur. Thus, with a finite number of exceptions, our sequence of payoffs will be the sequence −1/2, −1/4, −1/6, −1/8, .... Therefore, almost surely, the average of the first n payoffs converges to zero. Moreover, the average of the first n mathematical expectations converges to zero. Hence the variables X_1, X_2, ... satisfy the Strong Law of Large Numbers. But what is the infinite run payoff of accepting all the bets? Well, given that almost surely there are only a finite number of n such that the payoff of bet n is not of the form −1/(2n), it follows that almost surely the infinite run payoff differs by a finite amount from −1/2 − 1/4 − 1/6 − 1/8 − ... = −∞. Thus the infinite run payoff is negative infinity, a disaster.
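The disastrous behavior already shows up numerically. Below is a Monte Carlo sketch (my own code; `simulate_payoffs` is a name I am introducing): in a typical run only a handful of early bets ever pay off, while the losses track the divergent harmonic series.

```python
import random

def simulate_payoffs(num_bets, rng):
    """Accept bets X_1..X_{num_bets}: X_n pays 2^n/n with probability 1/2^n,
    and -1/(2n) otherwise. Returns (total payoff, number of winning bets)."""
    total, wins = 0.0, 0
    for n in range(1, num_bets + 1):
        if rng.random() < 0.5 ** n:
            total += 2.0 ** n / n
            wins += 1
        else:
            total -= 1.0 / (2 * n)
    return total, wins

rng = random.Random(0)
results = [simulate_payoffs(1000, rng) for _ in range(200)]
# The expected number of wins over even an infinite run is sum 1/2^n = 1,
# while the losses alone sum to -(1/2)(1 + 1/2 + 1/3 + ...) -> -infinity,
# so most runs end with a negative total.
```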

Hence even when the SLLN applies, we can have cases where almost surely there are only finitely many positive payoffs, infinitely many negative ones, and the negative ones add up to −∞.

In the above example, while the variables satisfy the SLLN, they do not satisfy the conditions for the Kolmogorov version of the SLLN: the variances grow exponentially. It is somewhat interesting to ask whether the variance condition in the Kolmogorov Law is enough to prevent this pathology. It’s not. Generalize my example by supposing that a_1, a_2, ... is a sequence of numbers strictly between 0 and 1 with finite sum. Let X_n be 1/(na_n) with probability a_n and −1/(2n) with probability 1 − a_n. As before, the expected value is positive: EX_n = 1/n − (1/(2n))(1 − a_n) > 1/(2n). And by Borel-Cantelli (given that the sum of the a_n is finite), almost surely the payoffs are −1/(2n) with finitely many exceptions, and hence in the infinite run the total positive payoff is almost surely finite while the negative payoffs add up to −∞.

But the variance σ_n^2 is less than a_n/(na_n)^2 + 1 = 1/(n^2 a_n) + 1. If we let a_n = 1/n^2 (the sum of these is finite), then each variance is at most 2, and so the conditions of the Kolmogorov version of the SLLN are satisfied.
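A numerical check of this bound (a sketch assuming a_n = 1/n^2 as above; the helper name `moments` is mine):

```python
def moments(n):
    """Mean and variance of X_n in the generalized example with a_n = 1/n^2,
    so X_n = 1/(n a_n) = n with probability a_n, and -1/(2n) otherwise."""
    a = 1.0 / n ** 2
    win, loss = 1.0 / (n * a), -1.0 / (2 * n)
    mean = a * win + (1 - a) * loss
    var = a * (win - mean) ** 2 + (1 - a) * (loss - mean) ** 2
    return mean, var

# Each mean is positive and each variance stays below 2, so
# sum_n sigma_n^2 / n^2 converges and Kolmogorov's condition holds.
assert all(m > 0 and v < 2 for m, v in map(moments, range(1, 1001)))
```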

In an earlier post, I suggested that perhaps the Central Limit Theorem (CLT) rather than the Law of Large Numbers is what one should use to justify betting according to expected utilities. If the variables X_1, X_2, ... satisfy the conditions of the CLT, and have non-negative expectations, then P(X_1 + ... + X_n ≥ 0) will eventually exceed any number less than 1/2. In particular, we won’t have the kind of disastrous situation where the overall payoffs almost surely go negative, and so no example like the one above can satisfy the conditions of the CLT.
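By way of contrast, in my first example the probabilities P(X_1 + ... + X_n ≥ 0) do not behave as the CLT conclusion would demand. A Monte Carlo estimate (my own sketch; `total_payoff` is a name I am introducing) suggests they fall well below 1/2 as n grows:

```python
import random

def total_payoff(num_bets, rng):
    """One sample of X_1 + ... + X_{num_bets}, where X_k pays 2^k/k
    with probability 1/2^k and -1/(2k) otherwise."""
    s = 0.0
    for k in range(1, num_bets + 1):
        s += 2.0 ** k / k if rng.random() < 0.5 ** k else -1.0 / (2 * k)
    return s

rng = random.Random(0)
trials = 2000
for num_bets in (10, 100, 1000):
    p_nonnegative = sum(total_payoff(num_bets, rng) >= 0 for _ in range(trials)) / trials
    print(num_bets, p_nonnegative)
```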

IanS said...

The last example (with a_n = 1/n^2) is very neat. I had been trying to think of something similar. :-)

The Peköz paper I mentioned in the other post has, in addition to the variance condition [sum over n of (nth variance)/n^2 finite], the condition that all the individual expectations are greater than some strictly positive constant. In the example, this is violated - the nth expectation is about 1/n. So again, there’s no formal contradiction. Of course, this is no surprise.

Alexander R Pruss said...

If you assume that the nth expectation is bigger than c>0, and the Strong Law of Large Numbers applies, then of course almost surely the person who accepts all the bets will eventually be better off than the person who rejects all the bets, and the difference between the two will grow without bound. And the variance condition is sufficient for the Strong Law.

Do you think this is true: If someone thinks the above result is a good reason to accept rather than reject all the bets, then they should also think that in my case we have good reason to reject rather than accept all the bets?

IanS said...

Yes, I’d say that, if, in the case of nth expectation greater than c>0, someone takes SLLN (if it applies) as a reason to accept all the bets, then in your example they should refuse to accept all the bets – if they reason on the basis of a ‘with probability 1’ result in one case, they should also do so in the other. That said, you should take care to note exactly what the various authors are actually arguing.

Speaking for myself, I don’t think that any result about an actual infinity of bets, or even just about limits of finite sequences of bets, is in itself a good reason to do anything. (Though, of course, such results can give useful hints.) What matters is the likely position when the game ends, as, in the real world, it must.

In your example, the distribution of partial sums has progressively increasing variance and skewness. Roughly (if I’m thinking straight), variance of the nth partial sum grows like n, 3rd moment grows like (n^2)/2. The normalized 3rd moment (i.e. with the outcome divided by its s.d. to make the variance 1) grows like (n^(1/2))/2. If I were really offered this sequence of bets, with the option of choosing in advance how many to accept, I’d feel that for large n, things would get pretty hairy, way too hairy to justify accepting on the basis of the positive expectation, which only grows like ln(n)/2. So I’d choose a smallish n I felt comfortable with.

Alexander R Pruss said...

I still feel that the fact that in my examples, almost surely, at some *finite* point in time the expected utility non-maximizer overtakes the expected utility maximizer, and after that the gap just increases, seems significant. But I can't put my finger on what exactly is significant about it.