Consider a version of consequentialism on which the right action is the one that has the best consequences. Now suppose you are captured by an eccentric evil dictator who always tells the truth. She informs you that she holds ten innocent prisoners, and that there is a game you can play.
- If you refuse to play, the prisoners will all be released.
- If you play, the number of hairs on your head will be quickly counted by a machine, and if that number is divisible by 50, all the prisoners will be tortured to death. If that number is not divisible by 50, they will be released and one of them will be given a tasty and nutritious muffin as well, which muffin will otherwise go to waste.
Since you have no idea whether the number of hairs on your head is divisible by 50, the actual-consequences theory gives you no usable guidance here: which act is actually best depends on a fact you cannot know. So the consequentialist had better not say that the right action is the one that has the best consequences. She would do better to say that the right action is the one that has the best expected consequences. But I think that is a significant concession to make. The claim that you should act so as to produce the best consequences has a very pleasing simplicity to it. In its simplicity, it is a lovely philosophical theory (even though it leads to morally abhorrent conclusions). But once we say that you should maximize expected utility, we lose that elegant simplicity. We are left wondering why we should maximize expected utility rather than do something more risk-averse.
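To see how the expected-consequences version handles the dictator's game, here is a minimal sketch of the calculation. The utility values and the probability model are my own assumptions, not from the post: I assume the hair count is effectively uniform modulo 50, so the chance of divisibility is 1/50, and I plug in stand-in numbers for the outcomes.

```python
# Assumption: hair counts are effectively uniform mod 50.
P_DIVISIBLE = 1 / 50

# Hypothetical utility values (stand-ins, not from the original post):
U_RELEASED = 0.0            # ten prisoners released, no muffin
U_RELEASED_MUFFIN = 1.0     # ten prisoners released, plus one muffin
U_TORTURED = -1_000_000.0   # ten prisoners tortured to death

def expected_utility_refuse():
    # Refusing yields a certain outcome: everyone is released.
    return U_RELEASED

def expected_utility_play():
    # Playing is a gamble: a 1/50 chance of catastrophe for a 49/50
    # chance of a marginally better outcome (the muffin).
    return P_DIVISIBLE * U_TORTURED + (1 - P_DIVISIBLE) * U_RELEASED_MUFFIN

print(expected_utility_refuse())  # 0.0
print(expected_utility_play())    # -19999.02
```

On any remotely sane assignment of utilities, the tiny muffin bonus cannot offset the 1/50 risk of catastrophe, so expected-consequences reasoning says to refuse, which matches the intuitive verdict the case is built to elicit.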
But even putting risk to one side, we should wonder why expected utility matters so much, morally speaking. The best story about why expected utility matters has to do with long-run consequences and the law of large numbers. But that story, first, tells us nothing about intrinsically one-shot situations. And, second, that justification of expected utility maximization is essentially a rule-utilitarian style of argument—it is the policy, not the particular act, that is being evaluated. Thus, anyone impressed by this line of thought should rather be a rule than an act consequentialist. And rule consequentialism has really serious theoretical problems.
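The law-of-large-numbers story can be illustrated with a quick simulation. The gamble below is a toy example of my own devising (win 1 with probability 0.98, lose 100 with probability 0.02): over many repetitions, the average realized utility of the *policy* of playing converges to its expected utility, which is exactly why the justification evaluates policies rather than individual acts.

```python
import random

random.seed(0)  # for reproducibility

# Toy gamble (assumed for illustration): small gain is likely,
# large loss is unlikely.
def play_once():
    return 1.0 if random.random() < 0.98 else -100.0

# Expected utility of a single play: 0.98 * 1 + 0.02 * (-100) = -1.02
expected = 0.98 * 1.0 + 0.02 * (-100.0)

# Average realized utility over many repetitions of the policy.
n = 100_000
average = sum(play_once() for _ in range(n)) / n

print(expected)  # -1.02
print(average)   # close to -1.02 over many trials
```

Note what the simulation does and does not show: the average converges to the expectation across repeated plays, but a one-shot agent never gets the benefit of that convergence, which is the first objection in the paragraph above.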
1 comment:
It's an interesting thought experiment. My intuitive response, if my task were to defend act utilitarianism, would be to argue that taking a 'large numbers' approach to calculating the utility value of a situation where I only know the probabilities of different outcomes doesn't mean that I'm adopting a 'rule-based' approach. Rather, it's merely one possible way to approach how probability factors into the calculation of utility. To the extent that it's using a rule, it's not a rule about how one should act, but a rule in the same sense as the formula for calculating the surface area of a sphere.