Showing posts with label insurance.

Tuesday, October 17, 2017

Hope vs. despair

A well-known problem, noticed by Meirav, is that it is difficult to distinguish hope from despair. Both the hoper and the despairer are unsure about an outcome and they both have a positive attitude towards it. So what's the difference? Meirav has a story involving a special factor, but I want to try something else.

If I predict an outcome, and the outcome happens, there is the pleasure of correct prediction. When I despair and predict a negative outcome, that pleasure takes the distinctive, more intense "I told you so" form of vindicated despair. And if the good outcome happens, despite my despair, then I should be glad about the outcome, but there is a perverse kind of sadness at the frustration of the despair.

The opposite happens when I hope. When the better outcome happens, then even though I may not have predicted the better outcome, and hence I may not have the pleasure of correct prediction, I do have the pleasure of hope's vindication. And when the bad outcome happens, I forego the small comfort of the vindication of despair.

The pleasures of correct prediction and the pains of incorrect prediction are doxastic in nature: they are pleasures and pains of right and wrong opinion. Hope and despair can, of course, exist without prediction. But when I hope for a good outcome, I dispose myself for pleasures and pains of this doxastic sort much as if I were predicting the good outcome. When I despair of the good outcome, I dispose myself for these pleasures and pains much as if I were predicting the bad outcome.

We can think of hoping and despairing as moves in a game. If you hope for p, then you win if and only if p is true. If you despair of p, then you win if and only if p is false. In this game of hoping and despairing, you are respectively banking on the good and the bad outcomes.

But this banking is restricted. It is in general false that when I hope for a good outcome, I act as if it were to come true. I can hope for the best while preparing for the worst. But nonetheless, by hoping I align myself with the best.

This gives us an interesting emotional utility story about hope and despair. When I hope for a good outcome, I stack a second good outcome--a victory in the hope and despair game, and the pleasure of that victory--on top of the hoped-for good outcome, and I stack a second bad outcome--a sad loss in the game--on top of the hoped-against bad outcome. And when I despair of the good outcome, I moderate my goods and bads: when the bad outcome happens, the badness is moderated by the joy of victory in the game, but when the good outcome happens, the goodness is tempered by the pain of loss. Despair, thus, functions very much like an insurance policy, spreading some utility from worlds where things go well into worlds where things go badly.

If the four goods and bads that the hope/despair game super-adds (goods: vindicated hope and vindicated despair; bads: frustrated hope and needless despair) are equal in magnitude, and if we have additive expected utilities with expected utility maximization, then as far as this super-addition goes, you are better off hoping when the probability of the good outcome is greater than 1/2 and better off despairing when it is less than 1/2. And I suspect (without doing the calculations) that realistic risk aversion will shift the rationality cut-off higher up, so that with credences slightly above 1/2, despair will still be reasonable. Hope, on the other hand, intensifies risks: the person who hoped and whose hope was in vain is worse off than the person who despaired and was right. A particularly risk-averse person, by the above considerations, may have reason to despair even when the probability of the good outcome is fairly high. These considerations might give us a nice evolutionary explanation of why we developed the mechanisms of hope and despair as part of our emotional repertoire.
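
The equal-magnitude case can be sketched in a few lines of code. All the numbers here are illustrative assumptions, not from the post: the four super-added goods and bads are set to a single magnitude g, and the only claim checked is the cutoff at 1/2.

```python
# Sketch of the super-added utilities of the hope/despair game.
# Assumption: the four added goods/bads have the same magnitude g
# (vindicated hope, vindicated despair = +g; frustrated hope,
# needless despair = -g). These values are illustrative only.

def expected_bonus(p_good, stance, g=1.0):
    """Expected super-added utility of hoping or despairing,
    given probability p_good of the good outcome."""
    if stance == "hope":
        return p_good * g + (1 - p_good) * (-g)
    if stance == "despair":
        return p_good * (-g) + (1 - p_good) * g
    raise ValueError(stance)

# With equal magnitudes, hope beats despair exactly when p_good > 1/2.
assert expected_bonus(0.6, "hope") > expected_bonus(0.6, "despair")
assert expected_bonus(0.4, "hope") < expected_bonus(0.4, "despair")
assert abs(expected_bonus(0.5, "hope") - expected_bonus(0.5, "despair")) < 1e-12
```

With unequal magnitudes or a concave utility function the cutoff moves, which is the risk-aversion point made above.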

However, these considerations are crude. For there can be something qualitatively bad about despair: it makes one not be as single-minded. It aligns one's will with the bad outcome in such a way that one rejoices in it, and one is saddened by the good outcome. To engage in despair on the above utility grounds is like taking out life-insurance on someone one loves in order to be comforted should the person die, rather than for the normal reasons of fiscal prudence.

This suggests a reason why the New Testament calls Christians to hope. Hope in Christ is part and parcel of a single-minded betting of everything on Christ, rather than the hedging of despair or holding back from wagering in neither hoping nor despairing. We should not take out insurance policies against Christianity's truth. But when the hope is vindicated, the fact that we hoped will intensify the joy.

I am making no claim that the above is all there is to hope and despair.

Tuesday, November 8, 2011

Attitudes to risk and the law of large numbers

People do things that seem to be irrational in respect of maximizing expected utilities. For instance, art collectors buy insurance, even though it seems that the expected payoff of buying insurance is negative—or else the insurance company wouldn't be selling it (some cases of insurance can be handled by distinguishing utilities from dollar amounts, as I do here, but I am inclined to think luxury items like art are not a case like that). Likewise, people buy lottery tickets, and choose the "wrong" option in the Allais Paradox.

Now, there are all sorts of clever decision-theoretic ways of modeling these phenomena and coming up with variations on utility-maximization that handle them. But rather than doing that I want to say something else about these cases.

Why is it good to maximize expected utilities in our choices (and let's bracket all deontic constraints here—let's suppose that none of the choices are deontically significant)? Well, a standard and plausible justification involves the Law of Large Numbers [LLN] (I actually wonder if we shouldn't be using the Central Limit Theorem instead—that might even strengthen the point I am going to make). Suppose you choose between option A and option B in a large number of independent trials. Then, on moderate assumptions on A and B, the LLN applies and says that if the number of trials N is large, probably the payoff for choosing A each time will be relatively close to NE[A] and the payoff for choosing B each time will be relatively close to NE[B], where E[A] and E[B] are the expected utilities of A and B, respectively. And so if E[A]>E[B], you will probably do better in the long run by choosing A rather than by choosing B, and you can (on moderate assumptions on A and B, again) make the probability that you will do better by choosing A as high as you like by making the number of trials large.

But here's the thing. My earthly life is finite (and I have no idea how decision theory is going to apply in the next life). I am not going to have an infinite number of trials. So how well this LLN-based argument works depends on how fast the convergence of observed average payoff to the statistically expected payoff in the LLN is. If the convergence is too slow relative to the expected number of A/B-type choices in my life, the argument is irrelevant. But now here's the kicker. The rate of convergence in the LLN depends on the shape of the distributions of A and B, and does so in such a way that the lop-sided distributions involved in the problems mentioned in the first paragraph of this post are going to give particularly slow convergence. In other words, the standard LLN-based argument for expected utility maximization applies poorly precisely to the sorts of cases where people don't go for expected utility maximization.

That said, I don't actually think this cuts it as a justification of people's attitudes towards things like lotteries and insurance. Here is why. Take the case of lotteries. With a small number of repetitions, the observed average payoff of playing the lottery will likely be rather smaller than the expected value of the payoff, because the expected value of the payoff depends on winning, and probably you won't win with a small number of repetitions. So taking into account the deviation from the LLN actually disfavors playing the lottery. The same goes for insurance and Allais: taking into account the deviation from the LLN should, if anything, tell against insuring and choosing the "wrong" gamble in Allais.
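
The lottery case can be simulated directly. The odds and prize are invented for the sketch: a 1-in-10,000 chance of a 9,000-unit prize on a 1-unit ticket, so the expected net payoff per play is -0.1. Over a human-scale number of plays, most "lifetimes" contain no win at all, so the observed average sits below even the negative expectation.

```python
import random

# Illustration of how a lop-sided distribution breaks the LLN argument
# at a human-scale number of trials. Odds and prize are invented:
# 1-in-10,000 chance of a 9,000-unit prize on a 1-unit ticket,
# so the expected net payoff per play is 9000/10000 - 1 = -0.1.

random.seed(1)

def net_payoff():
    return 9000.0 - 1.0 if random.random() < 1e-4 else -1.0

def lifetime_average(n_plays=1000):
    """Observed average net payoff over one 1000-play 'lifetime'."""
    return sum(net_payoff() for _ in range(n_plays)) / n_plays

expected = -0.1
lifetimes = [lifetime_average() for _ in range(200)]

# About 90% of 1000-play lifetimes contain no win, giving an observed
# average of -1.0, well below the -0.1 expectation: the deviation from
# the LLN disfavors playing.
below = sum(avg < expected for avg in lifetimes)
assert below > 150
```

So the skew that makes the LLN argument inapplicable also makes the typical outcome of playing worse, not better, than the expectation suggests.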

Maybe there is a more complex explanation--but not justification--here. Maybe people sense (consciously or not—there might be some evolutionary mechanism here) that these cases don't play nice with the LLN, and so they don't do expected utility maximization, but do something heuristic, and the heuristic fails.

Wednesday, April 13, 2011

Insurance

Suppose you are insuring yourself against some event type E with an insurance company with claims ratio, say, 0.75. This means that the company pays out 75% of the net premiums in claims. On its face, this seems even more irrational than gambling at a casino—as far as I can determine with a bit of internet "research" (see for instance here), a casino tends to pay out a larger percentage of what is paid in than 75%. It seems irrational because unless you have special information about your case (in which case there are some integrity questions that might be raised), you can expect to get back 75% of what you put in.

But there is a crucial difference. One typically insures oneself against adverse circumstances. In adverse circumstances, money may well have higher utility than it does in normal circumstances. For instance, if your car is stolen and your employment depends on having a car, the value of having an amount of money sufficient to purchase a car is significantly greater than the value of having that amount of money in normal circumstances where you already have a car.

This suggests a rough heuristic: it is rational to insure yourself against E with a company whose claims ratio is r for a claim amount c only if the utility to you of receiving c in case of E is at least as great as the utility of receiving c/r in case of non-E. (For a better estimate, one would have to take into account potential investment returns on the money that would have gone out in premiums.) For an egregious example, extended warranties (a species of insurance) have a 0.43 claims ratio (UK data). Thus it makes sense to get an extended warranty for a $400 TV only if getting $400 in the event of the TV's breaking down has as much utility to you as getting $400/0.43=$930 under ordinary circumstances, which is unlikely to be the case. (Though it might be if you expected to be low on cash and your well-being is strongly enough tied to having a TV of the relevant price-level.) But in the case of, say, car theft coverage it might be worth it if you would be unlikely to be able to pay for a new car of sufficient quality and your well-being strongly depends on having a car of that quality.
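
The heuristic reduces to a one-line calculation. The $300 figure in the second example is made up; the $400 TV and the 0.43 and 0.75 claims ratios are the ones discussed above.

```python
# The insurance heuristic: insuring a claim amount c with a company
# whose claims ratio is r makes sense only if getting c in the bad
# event is worth at least as much to you as getting c / r in ordinary
# circumstances.

def breakeven_ordinary_amount(claim, claims_ratio):
    """Ordinary-circumstances cash amount whose utility the claim
    payout must match for the insurance to be worth buying."""
    return claim / claims_ratio

# The extended-warranty example: a $400 TV at a 0.43 claims ratio.
assert round(breakeven_ordinary_amount(400, 0.43)) == 930

# An illustrative $300 claim at the 0.75 claims ratio from above.
assert breakeven_ordinary_amount(300, 0.75) == 400.0
```

The lower the claims ratio, the larger the utility gap between the insured and uninsured circumstances has to be before insuring is rational.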

Interestingly, I think it follows that it shouldn't be worthwhile insuring luxury items, unless (a) you wouldn't be able to afford replacing them otherwise and (b) your well-being is tied to them to a high degree. But it is probably vicious to have your well-being be so tied to luxury items.

OK, except for the thing about luxury and vice, this is stuff that's no doubt obvious to every economics student, but it wasn't obvious to me, and the heuristic is kind of handy.