Kirk has come to a planet that is home to the only two intelligent species in the universe, the Oligons and the Pollakons. There are a million Oligons and a trillion (i.e., a million million) Pollakons. They are technologically unsophisticated and live equally happy lives on the same planet, but they have no interaction with each other, and the universal translator is currently broken, so Kirk can’t communicate with them either. A giant planetoid is about to graze the planet in a way that is certain to wipe out the Pollakons but leave the Oligons, given their different ecological niche, largely unaffected. Kirk can try to redirect the planetoid with his tractor beam. Spock’s accurate calculations give the following probabilities:
1 in 1000 chance that the planetoid will now miss the planet and the Oligons and Pollakons will continue to live their happy lives;
999 in 1000 chance that the planetoid will wipe out both the Oligons and the Pollakons.
If Kirk doesn’t redirect, the expected utility is 10⁶ happy lives (the Oligons). If Kirk does redirect, the expected utility is (1/1000)(10¹² + 10⁶) = 10⁹ + 10³ happy lives. So expected utility clearly favors redirecting.
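For concreteness, here is the same calculation as a small Python sketch (the variable names are my own bookkeeping, and utility is just the count of happy lives):

```python
# A minimal sketch of the arithmetic above; utility is counted as happy lives.
OLIGONS = 10**6      # a million Oligons
POLLAKONS = 10**12   # a trillion Pollakons

# Don't redirect: the Oligons survive for certain, the Pollakons are lost.
eu_no_redirect = OLIGONS

# Redirect: 1/1000 chance everyone survives, 999/1000 chance everyone dies.
eu_redirect = (1 / 1000) * (POLLAKONS + OLIGONS) + (999 / 1000) * 0

print(eu_no_redirect)  # 1000000
print(eu_redirect)     # 1000001000.0, i.e. 10**9 + 10**3
```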
But redirecting just seems wrong. Kirk is nearly certain—99.9%—that redirecting will not help the Pollakons but will wipe out the Oligons.
Perhaps the reason intuition seems to favor not redirecting is that we have a moral bias in favor of non-interference. So let’s turn the story around. Kirk sees the planetoid coming towards the planet. Spock tells him that there is a 1/1000 chance that nothing bad will happen, and a 999/1000 chance that the planetoid will wipe out all life on the planet. But Spock also tells him that he can beam the Oligons—but not the Pollakons, who are made of a type of matter incapable of being beamed—to the Enterprise. Doing so, however, will require the Enterprise to come closer to the planet, which will gravitationally affect the planetoid’s path in such a way that the 1/1000 chance of nothing bad happening will disappear, and the Pollakons will now be certain, and not merely 999/1000 likely, to die.
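Running the same arithmetic on Spock’s probabilities (the bookkeeping is mine, with utility again counted in happy lives): rescuing the Oligons yields 10⁶ happy lives for certain, while leaving them on the planet has expected utility (1/1000)(10¹² + 10⁶) = 10⁹ + 10³. So expected utility once again favors the option that gives up the Oligons’ certain survival for the small chance of saving everyone.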
Things are indeed a bit less clear to me now. I am inclined to think Kirk should rescue the Oligons (this may require Double Effect), but I worry that I am irrationally neglecting small probabilities. Still, my inclination is that Kirk should rescue them. If that intuition is correct, then even in other-concerning decisions, and even when we have no relevant deontological worries, we should not always go with expected utilities.
But now suppose that over his career Kirk will visit a million such planets. Then a policy of non-redirection in the original scenario, or of rescue in the modified scenario, would be disastrous by the Law of Large Numbers: the 1/1000 outcome would come up roughly a thousand times out of the million, and many, many lives would be lost that the expected-utility policy would have saved. If we’re talking about long-term policies, then, it seems that Kirk should have a policy of going with expected utilities (barring deontological concerns). But for single-shot decisions, I think it’s different.
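To make the Law of Large Numbers point concrete, here is a toy simulation (a rough sketch of my own, applying the original scenario’s numbers independently to a million planets):

```python
# Toy Monte Carlo comparison of the two policies across a million planets.
# Each planet: 10**6 Oligons, 10**12 Pollakons; redirecting saves everyone
# with probability 1/1000 and otherwise kills everyone.
import random

OLIGONS, POLLAKONS = 10**6, 10**12
PLANETS = 10**6
random.seed(0)

# Never redirect: on every planet the Oligons survive and the Pollakons die.
saved_never_redirect = PLANETS * OLIGONS

# Always redirect: on each planet there is a 1/1000 chance the planetoid misses.
saved_always_redirect = 0
for _ in range(PLANETS):
    if random.random() < 1 / 1000:
        saved_always_redirect += OLIGONS + POLLAKONS

print(saved_never_redirect)   # 10**12
print(saved_always_redirect)  # roughly 1000 * (10**12 + 10**6), i.e. ~10**15
```

Over the long run, the always-redirect policy saves on the order of a thousand times as many lives, which is just the expected-utility verdict writ large.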
This line of thought suggests two things to me:
maximization of expected utilities in ordinary circumstances has something to do with limit laws like the Law of Large Numbers, and
we need a moral theory on which we can morally bind ourselves to a policy, in a way that lets the policy override genuine moral concerns that would be decisive absent the policy (cf. this post on promises).
3 comments:
When is EU maximisation reasonable? When is it reasonable to ignore small probabilities? Should one-off choices (for which we may not expect the expectation, or anything like it) be treated differently from ones that are likely to be repeated? Issues like this have often worried me, but I have only half-baked thoughts.
I came across this (rather discursive) paper: Bradley Monton, “How to Avoid Maximizing Expected Utility.”
https://quod.lib.umich.edu/cgi/p/pod/dod-idx/how-to-avoid-maximizing-expected-utility.pdf?c=phimp;idno=3521354.0019.018;format=pdf
He defends ignoring probabilities below some threshold.
Good thinking, Alexander!! You have raised excellent questions and not asked too much of your generalized conclusion.
Ian:
Even if we ignore tiny probabilities, 1/1000 is not tiny enough to ignore.