People do things that seem irrational with respect to maximizing expected utility. For instance, art collectors buy insurance, even though the expected payoff of buying insurance is presumably negative—or else the insurance company wouldn't be selling it (some cases of insurance can be handled by distinguishing utilities from dollar amounts, as I do here, but I am inclined to think luxury items like art are not a case like that). Likewise, people buy lottery tickets, and choose the "wrong" option in the Allais Paradox.
Now, there are all sorts of clever decision-theoretic ways of modeling these phenomena and coming up with variations on utility-maximization that handle them. But rather than doing that I want to say something else about these cases.
Why is it good to maximize expected utilities in our choices (and let's bracket all deontic constraints here—let's suppose that none of the choices are deontically significant)? Well, a standard and plausible justification involves the Law of Large Numbers [LLN] (I actually wonder if we shouldn't be using the Central Limit Theorem instead—that might even strengthen the point I am going to make). Suppose you choose between option A and option B in a large number of independent trials. Then, under moderate assumptions about A and B, the LLN applies and says that if the number of trials N is large, probably the total payoff for choosing A each time will be relatively close to N·E[A] and the total payoff for choosing B each time will be relatively close to N·E[B], where E[A] and E[B] are the expected utilities of A and B, respectively. And so if E[A]>E[B], you will probably do better in the long run by choosing A rather than by choosing B, and you can (under moderate assumptions about A and B, again) make the probability that you will do better by choosing A as high as you like by making the number of trials large.
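The LLN argument can be illustrated with a toy simulation (the gambles and numbers here are my own illustrative assumptions, not anything from the argument itself): a risky gamble A with the higher expected utility, and a safe gamble B with the lower one. Over many independent trials, the average payoff from always choosing A settles near E[A], which beats E[B].

```python
import random

random.seed(0)

def gamble_a():
    # Hypothetical gamble A: pays 10 with probability 0.5, else 0, so E[A] = 5.
    return 10 if random.random() < 0.5 else 0

def gamble_b():
    # Hypothetical gamble B: a sure payoff of 4, so E[B] = 4.
    return 4

def average_payoff(gamble, n):
    # Observed average payoff over n independent trials.
    return sum(gamble() for _ in range(n)) / n

n = 100_000
avg_a = average_payoff(gamble_a, n)
avg_b = average_payoff(gamble_b, n)

# With n this large, avg_a lands near E[A] = 5 and avg_b is exactly E[B] = 4,
# so the consistent A-chooser almost certainly comes out ahead in the long run.
print(avg_a, avg_b)
```

The point of the sketch is just the LLN mechanism: with enough repetitions, observed averages track expectations, so the higher-expectation option wins.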
But here's the thing. My earthly life is finite (and I have no idea how decision theory is going to apply in the next life). I am not going to have an infinite number of trials. So how well this LLN-based argument works depends on how fast the observed average payoff converges to the statistically expected payoff. If the convergence is too slow relative to the expected number of A/B-type choices in my life, the argument is irrelevant. But now here's the kicker. The rate of convergence in the LLN depends on the shape of the distributions of A and B, and does so in such a way that the lop-sided distributions involved in the problems mentioned in the first paragraph of the paper are going to give particularly slow convergence. In other words, the standard LLN-based argument for expected utility maximization applies poorly precisely to the sorts of cases where people don't go for expected utility maximization.
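The slow-convergence point can be made quantitative with a back-of-the-envelope CLT heuristic (again with illustrative numbers of my own choosing): the sample mean of n trials has standard error σ/√n, so roughly n ≈ (σ/(μ·ε))² trials are needed before the observed average is typically within a fraction ε of the true mean μ. For a moderate coin-flip-like gamble that's about a hundred trials; for a lottery-like gamble with the same expected payoff, it's on the order of a hundred million.

```python
import math

def bernoulli_stats(p, prize):
    # A gamble paying `prize` with probability p and 0 otherwise.
    mean = p * prize
    sd = math.sqrt(p * (1 - p)) * prize
    return mean, sd

# Two hypothetical gambles with the same expected payoff of 1 unit:
mean_mod, sd_mod = bernoulli_stats(0.5, 2)      # moderate: pays 2 half the time
mean_lot, sd_lot = bernoulli_stats(1e-6, 1e6)   # lop-sided: pays a million, one in a million

def trials_needed(sd, mean, rel_err=0.1):
    # CLT heuristic: the sample mean's standard error is sd / sqrt(n), so about
    # (sd / (mean * rel_err))**2 trials are needed before the observed average
    # is typically within rel_err of the true mean.
    return (sd / (mean * rel_err)) ** 2

n_mod = trials_needed(sd_mod, mean_mod)   # about 10^2 trials
n_lot = trials_needed(sd_lot, mean_lot)   # about 10^8 trials
print(n_mod, n_lot)
```

Same expectation, wildly different variances, and hence wildly different numbers of repetitions before the LLN argument has any grip—a lifetime supplies nowhere near 10^8 lottery-sized choices.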
That said, I don't actually think this cuts it as a justification of people's attitudes towards things like lotteries and insurance. Here is why. Take the case of lotteries. With a small number of repetitions, the observed average payoff of playing the lottery will likely fall well short of the expected value of the payoff, because most of the expected value comes from the unlikely jackpot, and with a small number of repetitions you probably won't win at all. So taking into account the deviation from the LLN actually disfavors playing the lottery. The same goes for insurance and Allais: taking into account the deviation from the LLN should, if anything, tell against insuring and choosing the "wrong" gamble in Allais.
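This can be checked exactly rather than by simulation (the play rate and lifetime here are made-up illustrative figures): the chance of never winning a one-in-a-million lottery over a lifetime of plays is (1−p)^N, which stays near 1 for any humanly realistic N.

```python
# Probability of never winning a one-in-a-million lottery over a lifetime
# of plays. The specific numbers are illustrative assumptions.
p_win = 1e-6
lifetime_plays = 5000   # say, a couple of tickets a week for fifty years

p_no_win = (1 - p_win) ** lifetime_plays

# With probability above 0.99 you never win once, so your observed average
# payoff is just the ticket price lost every single time, far below the
# expected value, which is propped up entirely by the rare jackpot.
print(p_no_win)
```

So for the small number of repetitions a life actually affords, the typical lottery player does strictly worse than the expected value suggests, which is why the finite-life observation explains rather than vindicates the behavior.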
Maybe there is a more complex explanation here, though not a justification. Maybe people sense (consciously or not—there might be some evolutionary mechanism here) that these cases don't play nice with the LLN, and so instead of maximizing expected utility they fall back on some heuristic, and the heuristic fails.