Tuesday, October 18, 2022

Expected utility maximization

Suppose every day for eternity you will be offered a gamble, where on day n ≥ 1 you can choose to pay half a unit of utility to get a chance of 2^−n at winning 2^n units of utility.

At each step, the expected winnings are 2^−n ⋅ 2^n = 1 unit of utility, and at the price of half a unit, it looks like a good deal.

Here’s what will happen if you always go for this gamble. It is almost sure (i.e., it has probability one) that you will only win a finite number of times. This follows from the Borel-Cantelli lemma and the fact that ∑ 2^−n < ∞. So you will pay the price of half a unit of utility every day for eternity, and win only a finite amount. That’s a bad deal.
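To make the pattern vivid, here is a minimal simulation sketch in Python (the 200-day horizon and the trial count are arbitrary illustrative choices, not part of the argument):

```python
import random

def always_accept(days=200):
    """Always take the day-n gamble: pay 0.5, win 2**n with probability 2**-n."""
    utility = 0.0
    last_win = 0
    for n in range(1, days + 1):
        utility -= 0.5                     # the daily price
        if random.random() < 2.0 ** -n:    # win with probability 2^-n
            utility += 2.0 ** n
            last_win = n
    return utility, last_win

trials = 10_000
results = [always_accept() for _ in range(trials)]
print("fraction of runs with no win after day 5:",
      sum(last <= 5 for _, last in results) / trials)
print("median utility after 200 days:",
      sorted(u for u, _ in results)[trials // 2])
```

In a typical run the last win comes very early, and the cumulative utility then just drifts downward at half a unit per day, which is the pattern the Borel-Cantelli argument predicts.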

Granted, this assumes you will in fact play an infinite number of times. But it is enough to show that expected utility maximization in individual choices is not always the best policy (and suggests a limitation in the argument here).

Objection: All this has to do with aggregating an infinite number of payments, or traversing an infinite future, and hence is just another paradox of infinity.

Response: Actually the crucial point can be made without aggregating infinitely many payments. Suppose you adopt the policy of accepting the gamble. Then, with probability one, there will come a day M after which you never win again. By day M, you may well have won some (maybe very large) finite amount. But after that day, you will keep on paying to play and never win again. After some further finite number of days, your losses will overtake your winnings, and after that you will just fall further and further behind every day. This unhappy fate is almost sure if you always accept the gamble, and hence if you adopt expected utility maximization in individual decisions as your policy. And the unhappiness of this fate does not depend on aggregation of infinitely many utilities.

Question: What if the game ends after a fixed large finite number of steps?

Response: In any finite number of steps, of course the expected winnings are higher than the price you pay. But nonetheless, as the number of steps gets large, the chance at those expected winnings shrinks. Imagine that the game goes on for 200 days, day 100 has just finished, and you’re now choosing your policy for the remaining 100 days. The expected utility of playing for the next 100 days is 50 units. However, assuming you accept this policy, the probability that you will win anything over the next 100 days is less than 2^−100, and if you don’t win anything, you lose 50 units of utility. So it doesn’t seem crazy to think that the no-playing policy is better, even though it has worse expected utility. In fact, it seems quite reasonable to neglect that tiny probability of winning, less than 2^−100, and refuse to play. And knowing that expected utility reasoning, when extended over infinite time, leads to disaster (infinite loss!) should make one feel better about the decision to violate expected utility maximization.
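The arithmetic for this 100-day continuation is easy to check exactly (a sketch; the use of exact fractions is just to avoid rounding):

```python
from fractions import Fraction

days = range(101, 201)  # the remaining 100 days

# Net expected utility per day: 2^n * 2^-n - 1/2 = 1/2, so 50 over 100 days.
expected_net = sum(Fraction(2 ** n) * Fraction(1, 2 ** n) - Fraction(1, 2) for n in days)

# P(at least one win) = 1 - prod(1 - 2^-n), which is at most the sum of the 2^-n, i.e. < 2^-100.
p_no_win = Fraction(1)
for n in days:
    p_no_win *= 1 - Fraction(1, 2 ** n)

print("expected net utility over days 101-200:", expected_net)
print("P(at least one win) < 2**-100:", 1 - p_no_win < Fraction(1, 2 ** 100))
```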

Final remark: It is worth considering what happens in interpersonal cases, too. Suppose infinitely many people numbered 1, 2, 3, ... are given the opportunity to play the game, with person n being given the opportunity of winning 2^n units with probability 2^−n. If everyone goes for the game, then almost surely a finite number of people will win a finite amount while an infinite number pay the half-unit price. That’s disastrous: an infinite price is being paid for a finite benefit.

11 comments:

IanS said...

“… and suggests a limitation of the argument here.”

From a formal point of view, I think the issue is that the sequence of gambles is not ‘well-behaved’ in the sense of Zhao’s footnote 14. I can’t be sure, because Zhao does not spell this out, but refers to Stephen Ross, “Adding Risks: Samuelson’s Fallacy of Large Numbers Revisited,” Journal of Financial and Quantitative Analysis, 34:323–339, 1999, which is gated. (SUPPORT OPEN ACCESS!) Zhao says, “The requirement is meant to rule out improbable cases like those where one decision has stakes that swamp all others, …”. In this case, the last gamble is always about the size of all the previous ones together.

Alexander R Pruss said...

I guess I've tended to think that it's precisely in cases of rare large stakes gambles that it makes sense to depart from expected utility. For small repeatable gambles, of course we have central limit theorem or law of large numbers considerations.
By the way, we don't need anything as radical as exponential growth. Barely more than linear growth is enough. My point goes through with n (log n)^2 as the nth prize with probability the reciprocal of that.
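The only thing this variant needs is that the win probabilities are still summable, so that Borel-Cantelli applies. A quick numerical check of the partial sums of ∑ 1/(n (log n)^2), with an integral-test tail bound (a sketch; the cutoffs are arbitrary):

```python
import math

def partial_sum(N):
    """Partial sum of the win probabilities 1/(n (log n)^2) from n = 2 to N."""
    return sum(1.0 / (n * math.log(n) ** 2) for n in range(2, N + 1))

for N in (10 ** 3, 10 ** 5, 10 ** 6):
    # Integral test: the tail beyond N is at most 1/log(N), so the series converges.
    print(N, round(partial_sum(N), 4), "tail bound:", round(1.0 / math.log(N), 4))
```

So each such gamble still has expected winnings of 1 unit against the half-unit price, yet almost surely only finitely many of them are ever won.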

IanS said...

As I said, I can’t access Ross. But this paper [Erol A. Peköz: Samuelson's Fallacy of Large Numbers and Optional Stopping. Journal of Risk and Insurance, March 2002], which builds on it, states a similar result.

Peköz requires the condition that Σ((Nth variance)/N^2) is finite. I’m guessing that Ross’s condition may be similar. With even linear growth of prizes and reciprocal linear probabilities, the sum doesn’t converge. So it won’t converge with N (log N)^2 growth (and reciprocal probabilities) either.
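A rough numerical illustration of that non-convergence (a sketch only, taking the n-th payoff to be prize · Bernoulli(1/prize) − 1/2, whose variance is prize^2 · p(1 − p) ≈ prize; this is one reading of a variance condition of that form, not a quotation of Ross or Peköz):

```python
import math

def variance(prize):
    """Variance of prize * Bernoulli(1/prize) - 1/2."""
    p = 1.0 / prize
    return prize ** 2 * p * (1 - p)

def variance_sum(prize_fn, N):
    """Partial sum of Var_n / n^2 up to N."""
    return sum(variance(prize_fn(n)) / n ** 2 for n in range(2, N + 1))

for N in (10 ** 2, 10 ** 4, 10 ** 6):
    print(N,
          "linear prizes:", round(variance_sum(lambda n: n, N), 2),
          "n(log n)^2 prizes:", round(variance_sum(lambda n: n * math.log(n) ** 2, N), 2))
```

The partial sums keep growing in both cases (roughly like log N and (log N)^3 respectively), so neither sequence satisfies a condition of this form.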


For what little it’s worth, I share your doubts about EU for rare high-stakes gambles. My intent is to account for the apparent discrepancy between Zhao’s remarks and yours.

Alexander R Pruss said...

There are two interrelated questions: the conditions under which we have almost sure convergence to the mean and the conditions under which we have the particular kind of divergence that my argument uses--where almost surely if we accept the gambles, we lose an infinite amount and gain only a finite amount.

The convergence question concerns necessary and sufficient conditions for the Strong Law of Large Numbers. Kolmogorov showed that the variance condition is sufficient (assuming throughout that all the random variables have finite expectations). But it is not necessary for convergence (indeed Prokhorov showed that no condition solely on variances can be a necessary and sufficient condition for convergence). Necessary and sufficient conditions were given by Nagaev ( https://epubs.siam.org/doi/abs/10.1137/1117072 ), but research on refinement continues ( https://www.jstor.org/stable/2160636 ). In any case, the failure to meet the condition Pekoz gives does not imply lack of convergence.

Further, lack of convergence is not by itself enough to clearly show that it isn't rational to engage in expected utility maximization. After all, lack of convergence is compatible with the hypothesis that your total winnings will be infinite and your total losses will be infinite, in which case it's unclear if it's rational to play or not.

However, it is much simpler to characterize cases that look like my example if we assume independence (by the way, my original example does not assume independence, because the Borel-Cantelli Lemma, unlike its converse which I am about to use, does not need independence).

I can run my argument with any sequence Y_1, Y_2, ... of random variables each of which has positive expected value, but where the sum of the probabilities P(Y_n > -epsilon) over n is finite for some positive epsilon. In that case, expected utility maximization says to accept each gamble, but if you follow that advice, you will (almost surely) get a result at least as bad as -epsilon in all but finitely many cases. If the random variables are independent, then by the converse Borel-Cantelli Lemma, the condition that the sum of those probabilities is finite is necessary for the claim that almost surely you will get a result at least as bad as -epsilon in all but finitely many cases.

IanS said...

None of that suggests that there is anything wrong with Zhao and the others in a narrow formal sense – given their conditions, their formal results hold. Of course, the philosophical implications and practical relevance are a different matter. The same applies to your examples.


Against Zhao and the others, one could say this: it’s great that, given their conditions, choosing to accept every time ‘eventually’ becomes favoured, but is ‘eventually’ likely to be in your lifetime? It depends on the specifics.

Against your examples, one could say that they require you and the other party to have unlimited money and unlimited time to gamble with it. What matters is the likely conditions when time or money run out.

I’m doubtful about the applicability of standard decision theory for one-off high-stakes choices, but I’m not sure that any of these cases are decisive.

Our intuitions about sequences of fair (favourable, unfavourable) bets are reflected in the martingale (sub-martingale, super-martingale) optional stopping theorems. But these theorems have conditions which can be violated in quite ordinary setups. A simple example: repeated triple-or-nothing on fair coin flips. Each bet has positive expectation, but if you accept them all, you will lose your initial stake with probability 1.
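The triple-or-nothing example is easy to check by simulation (a sketch; the initial stake and the cap on rounds are arbitrary):

```python
import random

def triple_or_nothing(stake=1.0, max_rounds=1_000):
    """Bet the whole current stake each round: triple it on heads, lose it on tails."""
    for _ in range(max_rounds):
        if stake == 0:
            break
        stake = 3 * stake if random.random() < 0.5 else 0.0
    return stake

trials = 100_000
ruined = sum(triple_or_nothing() == 0 for _ in range(trials))
print("fraction of runs ending ruined:", ruined / trials)
```

Each individual bet multiplies the stake by 1.5 in expectation, yet the printed fraction is essentially 1: accepting every bet loses the initial stake almost surely.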

You don’t need exponential growth to get this sort of thing. A simple example is textbook Gambler’s Ruin. A fair coin is flipped. You win $1 on Heads, lose $1 on Tails. You play repeatedly against the house until either you go broke or the house does. You start with $M, the house with $N. Your chance of ruin is N/(M+N), of ruining the house M/(M+N). Your expected final fortune is of course the $M you started with.

But what if the house has unlimited money? Then you will be ruined, and suffer a loss of $M, with probability 1.
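Both points are easy to check by simulation: the empirical ruin probability matches N/(M+N), and it climbs toward 1 as the house’s bankroll grows (a sketch; M, the N values, and the trial count are arbitrary):

```python
import random

def ruined(M, N):
    """Fair $1 bets until you (starting with $M) or the house (starting with $N) is broke."""
    fortune = M
    while 0 < fortune < M + N:
        fortune += 1 if random.random() < 0.5 else -1
    return fortune == 0

M, trials = 10, 5_000
for N in (10, 40, 160):
    empirical = sum(ruined(M, N) for _ in range(trials)) / trials
    print("N =", N, "empirical ruin prob:", empirical, "theory N/(M+N):", round(N / (M + N), 3))
```

As N grows without bound, N/(M+N) tends to 1, which is the unlimited-house case.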

Alexander R Pruss said...

Ian:

I am trying to argue against the thesis that you should take the bets with positive expectation. Gambler's Ruin doesn't affect that because the bets have zero expectation.

I also have a weak intuition that cases where what goes on in each wager is independent of what goes on in the others are more compelling. Triple-or-nothing doesn't have this independence: once ruined, you get nothing. Gambler's Ruin has a changing fortune.

Another thing that makes my case particularly compelling to me is the interpersonal version, where each person faces a single wager, all the wagers completely independent, and yet if everyone maximizes their expected utility, with probability one, the result is disastrous--infinitely many paying and finitely many winning. It's like a tragedy of the commons, but with no interaction between the agents' decisions, no weird undefined probabilities, just everyone doing ordinary expected value maximization.

Alexander R Pruss said...

It's also interesting to think about whether one's intuitions would be different in a reverse case. On day n, if you take the gamble, you are sure to get 1/2 unit and have a 1/2^n chance of losing 2^n. By expected utilities, you should refuse. But if you always accept, then almost surely you lose only a finite amount and gain an infinite amount.
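The reverse case can be illustrated with the same kind of finite-horizon simulation as before (a sketch; horizon and trial count are arbitrary):

```python
import random

def always_accept_reverse(days=200):
    """Each day n: gain 0.5 for sure, but lose 2**n with probability 2**-n."""
    utility = 0.0
    for n in range(1, days + 1):
        utility += 0.5
        if random.random() < 2.0 ** -n:
            utility -= 2.0 ** n
    return utility

trials = 10_000
results = sorted(always_accept_reverse() for _ in range(trials))
print("median utility after 200 days:", results[trials // 2])
print("fraction of runs ending ahead:", sum(u > 0 for u in results) / trials)
```

Here the losses almost surely stop after finitely many days, and after that the sure half-unit accumulates forever.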

IanS said...

What worries me about this is the actual infinity of people and the unbounded bets. Suppose there are N people. It’s clear that for large N, most people are likely to lose small amounts. This is balanced by the very small probability that the last few people win very large amounts. To some people’s intuition (including mine) this does not seem like a good outcome. But I think that an argument for this has to be based on this finite case.
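The finite case is easy to tabulate exactly (a sketch; the particular values of N are arbitrary):

```python
from fractions import Fraction

def finite_case(N):
    """N people; person n pays 1/2 and wins 2**n with probability 2**-n."""
    total_paid = Fraction(N, 2)
    expected_total_winnings = Fraction(N)   # each gamble pays 1 unit in expectation
    expected_winners = sum(Fraction(1, 2 ** n) for n in range(1, N + 1))
    p_nobody_wins = Fraction(1)
    for n in range(1, N + 1):
        p_nobody_wins *= 1 - Fraction(1, 2 ** n)
    return total_paid, expected_total_winnings, float(expected_winners), float(p_nobody_wins)

for N in (10, 30, 100):
    paid, winnings, winners, p0 = finite_case(N)
    print(N, "paid:", paid, "E[winnings]:", winnings,
          "E[# winners]:", round(winners, 4), "P(nobody wins):", round(p0, 4))
```

The total price grows linearly in N, the expected number of winners stays below 1, and the chance that nobody at all wins settles around 0.29: most people lose a small amount, balanced in expectation by the very small chance of very large late wins.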

Alexander R Pruss said...

Unbounded bets, while difficult to handle in decision theory, don't seem metaphysically problematic. Suppose you have a friend you love as yourself but who is currently "scheduled" to have a headache for eternity, while you are currently "scheduled" to live forever without headache. Each time you win x units, your friend gets x days off from headache. Each time you lose x units, you get x days of headache. Specify that you don't get used to the headaches. (If you think it's metaphysically impossible not to be getting used to headaches, suppose your and your friend's memory of the previous day's headache or lack thereof is wiped each day.)

IanS said...

Well, here is one way to look at the headache setup…

If you make only a finite number of bets, you and your friend together are certain to suffer an infinite number of headache days. If you accept all the bets, it’s possible that you might win them all. Then you and your friend will suffer no headaches. (I’m assuming that the headache-free days you win for your friend are taken sequentially without gaps.) Of course, this outcome has probability zero. But isn’t the possibility of no headaches, even at probability zero, to be preferred to the certainty of an infinite number? :-). And note, this sort of ‘reasoning’ applies even if the expected value of each bet is negative. :-)

Hmm… This line of thought gives zero value to any merely finite change. But if you take the actual infinity seriously, and you compare by counting headache days, I’m not even sure that that is wrong. Maybe it’s better to stick with the original version with positive and negative payoffs.

Alexander R Pruss said...

Ian:

If Causal Finitism is false, you could tweak the situation to make sure you can't just win all the bets. For instance, you could run the story in a supertask, make the payoffs come after the end of the supertask, and if you "won" all the bets (or even infinitely many of the bets), you lose all the benefits. This does not affect the statistical independence of all the events, because winning all the bets has zero probability, and changing things on a zero-probability set doesn't affect independence. But it does affect "intuitive" independence.