Tuesday, November 8, 2011

Attitudes to risk and the law of large numbers

People do things that seem to be irrational with respect to maximizing expected utility. For instance, art collectors buy insurance, even though it seems that the expected payoff of buying insurance is negative—or else the insurance company wouldn't be selling it (some cases of insurance can be handled by distinguishing utilities from dollar amounts, as I do here, but I am inclined to think luxury items like art are not a case like that). Likewise, people buy lottery tickets, and choose the "wrong" option in the Allais Paradox.

Now, there are all sorts of clever decision-theoretic ways of modeling these phenomena and coming up with variations on utility-maximization that handle them. But rather than doing that I want to say something else about these cases.

Why is it good to maximize expected utilities in our choices (and let's bracket all deontic constraints here—let's suppose that none of the choices are deontically significant)? Well, a standard and plausible justification involves the Law of Large Numbers [LLN] (I actually wonder if we shouldn't be using the Central Limit Theorem instead—that might even strengthen the point I am going to make). Suppose you choose between option A and option B in a large number of independent trials. Then, on moderate assumptions on A and B, the LLN applies and says that if the number of trials N is large, probably the payoff for choosing A each time will be relatively close to N·E[A] and the payoff for choosing B each time will be relatively close to N·E[B], where E[A] and E[B] are the expected utilities of A and B, respectively. And so if E[A] > E[B], you will probably do better in the long run by choosing A rather than by choosing B, and you can (on moderate assumptions on A and B, again) make the probability that you will do better by choosing A as high as you like by making the number of trials large.
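
To make the shape of this argument concrete, here is a minimal simulation sketch; the two gambles and their payoffs are made up for illustration. With many repetitions, the observed averages settle near E[A] and E[B], so the option with the higher expectation almost surely comes out ahead.

```python
import random

def average_payoff(draw, n_trials):
    """Average payoff over n_trials independent repetitions of a gamble."""
    return sum(draw() for _ in range(n_trials)) / n_trials

# Hypothetical gambles: A pays 1 for sure, so E[A] = 1.0;
# B pays 10 with probability 0.05 and otherwise 0, so E[B] = 0.5.
draw_A = lambda: 1.0
draw_B = lambda: 10.0 if random.random() < 0.05 else 0.0

for n in (10, 100, 10_000):
    print(n, average_payoff(draw_A, n), average_payoff(draw_B, n))
# As n grows, the two averages settle near E[A] = 1.0 and E[B] = 0.5,
# so always choosing A (the higher expectation) almost surely does better.
```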

But here's the thing. My earthly life is finite (and I have no idea how decision theory is going to apply in the next life). I am not going to have an infinite number of trials. So how well this LLN-based argument works depends on how fast the observed average payoff converges to the statistically expected payoff in the LLN. If the convergence is too slow relative to the expected number of A/B-type choices in my life, the argument is irrelevant. But now here's the kicker. The rate of convergence in the LLN depends on the shape of the distributions of A and B, and does so in such a way that the lop-sided distributions involved in the problems mentioned in the first paragraph of this post are going to give particularly slow convergence. In other words, the standard LLN-based argument for expected utility maximization applies poorly precisely to the sorts of cases where people don't go for expected utility maximization.
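
Here is a back-of-the-envelope version of that worry, using the CLT picture and a made-up lottery-style gamble: the drift from the expectation grows like the mean times N, while the typical fluctuation grows like the standard deviation times the square root of N, so the expectation only dominates once N is on the order of (sd/mean) squared, which for a lop-sided payoff is enormous.

```python
import math

# A made-up lottery: a $1 ticket with a 1-in-10,000,000 chance of a $5,000,000 prize.
p, prize, cost = 1e-7, 5_000_000.0, 1.0
mean = p * prize - cost                                  # expected net payoff per play: -0.5
var = p * (prize - cost) ** 2 + (1 - p) * cost ** 2 - mean ** 2
sd = math.sqrt(var)                                      # roughly 1580

# Over N plays the drift is mean * N, while typical fluctuations are sd * sqrt(N).
# The drift only dominates once N is roughly (sd / mean) ** 2:
n_needed = (sd / mean) ** 2
print(f"mean per play = {mean:.2f}, sd per play = {sd:.0f}, plays needed = {n_needed:,.0f}")
# About ten million plays -- far more than a lifetime allows, which is why the
# LLN argument gets little grip on such lop-sided gambles.
```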

That said, I don't actually think this cuts it as a justification of people's attitudes towards things like lotteries and insurance. Here is why. Take the case of lotteries. With a small number of repetitions, the observed average payoff of playing the lottery will likely be rather smaller than the expected value of the payoff, because the expected value is driven by the improbable event of winning, and with a small number of repetitions you probably won't win at all. So taking into account the deviation from the LLN actually disfavors playing the lottery. The same goes for insurance and Allais: taking into account the deviation from the LLN should, if anything, tell against insuring and against choosing the "wrong" gamble in Allais.
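
Continuing the same made-up lottery, here is a quick check of what a realistic, finite number of plays looks like: the typical (median) lifetime outcome is a flat loss of the ticket money, which is even worse than the already negative expected value, just as the paragraph above says.

```python
import random

# Same made-up lottery as above: $1 ticket, 1-in-10,000,000 chance of $5,000,000.
p, prize, cost = 1e-7, 5_000_000.0, 1.0

def lifetime_payoff(n_plays):
    """Total net payoff from n_plays tickets."""
    wins = sum(1 for _ in range(n_plays) if random.random() < p)
    return wins * prize - n_plays * cost

# With, say, 5,000 tickets over a lifetime, the expected total is -2,500,
# but the typical (median) outcome is a flat loss of all 5,000 dollars:
outcomes = sorted(lifetime_payoff(5_000) for _ in range(1_001))
print(outcomes[500])   # the median outcome; almost certainly -5000.0
```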

Maybe there is a more complex explanation here (though not a justification). Maybe people sense (consciously or not—there might be some evolutionary mechanism here) that these cases don't play nice with the LLN, and so they don't do expected utility maximization, but do something heuristic instead, and the heuristic fails.

2 comments:

  1. It seems to me that there are two important points to note.

    First, although both deviate similarly from LLN-based expected-value calculations, insurance is not like lotteries. Insurance is about preventing low-frequency but catastrophic losses, while lotteries are about making low-frequency but gigantic gains. The human mind treats losses and gains differently; losses are worse than equivalent gains. I suspect this has to do with the fact that in evolutionary history, and today, if losses are sufficiently bad you cannot recover to have another large set of numbers go your way…Gambler’s Ruin. So I don’t think it’s irrational to buy insurance in the same way it’s irrational to play the lottery (one might explain this by saying that the utility deviates significantly from the monetary reward in the two cases).

    Second, who plays the lottery? Not people with a reasonably good life, but poor people. There are two ways to explain this: (1) poor people are more likely to be irrational—maybe this is part of the explanation of why they are poor. (2) The change in utility represented by winning the lottery is far out of proportion to the change in one’s bank account. Winning the lottery is the ticket to a whole different style of life, and a much better one. (I was introduced to this second interpretation by MacIntyre; he thought of it as a this-worldly version of Pascal’s Wager.)

    Anyway, the common thread is that you can preserve the idea that people are rational in expected-utility terms if you are willing to break the assumption that $1 = 1 utile in significant ways.

  2. Insurance is a lottery whose winning always correlates with one's suffering a loss. :-)

    I deliberately talked of art collectors insuring their collections, because I fully agree about catastrophic losses. I think what is going on in cases of catastrophic losses is that under the catastrophic circumstances the money one gets from insurance is worth more to one then than it was when one was paying the premiums. For instance, suppose one needs a car to hold on to one's employment, and one wouldn't be able to afford to buy another car if one's existing car were stolen. In that case, one's premiums are just worth their face value to one, but the insurance money when the car is stolen is worth not just the value of the car, but the value of the car plus the value of one's employment (see the toy calculation after these comments).

    But having one Old Master stolen from among twenty is not catastrophic in this way, except perhaps psychologically, and that's an irrational psychology. There is no Gambler's Ruin here. (Maybe having all your paintings stolen at the same time would be catastrophic to one's collection. But I assume that typically artwork is insured individually, and not just against the loss of the whole collection, or of, say, half of the collection.) Nor is it the case that when one has lost one Old Master from among twenty, a dollar suddenly becomes, say, twice as valuable as it was when one was paying the premiums, though it may become a little more valuable (say, 5% more).

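As a rough numerical illustration of the point made in both comments (all the numbers below are invented): once the insurance payout in the loss state is worth more to one than its face value, insuring can maximize expected utility even though it lowers expected dollars.

```python
# A toy version of the car example above (all numbers invented).
p_theft   = 0.05        # chance the car is stolen this year
premium   = 600.0       # actuarially unfair: expected payout is only 0.05 * 10_000 = 500
car_value = 10_000.0    # face value of the car, and of the insurance payout
job_value = 30_000.0    # value of the employment one keeps only if one still has a car

def expected_value(insured, count_job):
    """Expected outcome for the year: dollars if count_job=False, 'utiles' if count_job=True."""
    cost_of_theft = 0.0 if insured else car_value + (job_value if count_job else 0.0)
    return -(premium if insured else 0.0) - p_theft * cost_of_theft

# In bare dollar terms, insuring looks like the losing bet:
print(expected_value(True, count_job=False), expected_value(False, count_job=False))   # -600.0 vs -500.0
# Counting the job that the payout preserves, insuring maximizes expected utility:
print(expected_value(True, count_job=True), expected_value(False, count_job=True))     # -600.0 vs -2000.0
```

Swapping a concave utility function in for the state-dependent valuation would be the more standard textbook way of making the same reconciliation.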