There are infinitely many people in existence, unable to communicate with one another. An angel makes it known to all that if, and only if, infinitely many of them make some minor sacrifice, he will give them all a great benefit far outweighing the sacrifice. (Maybe the minor sacrifice is the payment of a dollar and the great benefit is eternal bliss for all of them.) You are one of the people.
It seems you can reason: We are making our decisions independently. Either infinitely many people other than me make the sacrifice or not. If they do, then there is no gain for anyone in my making it—we get the benefit anyway, and I unnecessarily make the sacrifice. If they don't, then there is no gain for anyone in my making it—we don't get the benefit even if I do, so why should I make the sacrifice?
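Spelled out a bit more (the set notation is just my bookkeeping, not part of the story): let A be the set of people other than you who make the sacrifice. The benefit comes if and only if the set of sacrificers is infinite, and adding a single member to a set cannot change whether it is infinite:
\[
A \cup \{\text{you}\} \text{ is infinite} \iff A \text{ is infinite}.
\]
So whatever the others do, your sacrificing makes no difference to whether the benefit comes; the only difference it makes is that you are out a dollar. Refusing is thus at least as good for everyone and strictly better for you.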
If consequentialism is right, this reasoning seems exactly right. Yet one had better hope that not everyone reasons like this.
The case reminds me of both the Newcomb paradox—though without the need for prediction—and the Prisoner's Dilemma. As in the case of the Prisoner's Dilemma, it sounds like the problem is selfishness and free-riding. But perhaps unlike in the case of the Prisoner's Dilemma, the problem really isn't about selfishness.
For suppose that the infinitely many people each occupy a different room of Hilbert's Hotel (numbered 1, 2, 3, ...). Instead of being asked to make a sacrifice oneself, however, one is asked to agree to the imposition of a small inconvenience on the person in the next room. It seems quite unselfish to reason: My decision doesn't affect anyone else's (so I suppose: the inconveniences are only imposed after all the decisions have been made). Either infinitely many people other than me will agree or not. If so, then we get the benefit, and it is pointless to impose the inconvenience on my neighbor. If not, then we don't get the benefit, and it is pointless to add to this loss the inconvenience to my neighbor.
Perhaps, though, the right way to think is this: If I agree—either in the original or the modified case—then my action partly constitutes a good collective (though not joint) action. If I don't agree, then my action runs a risk of partly constituting a bad collective (though not joint) action. And I have good reason to be on the side of the angels. But the paradoxicality doesn't evaporate.
I suspect this case, or one very close to it, is in the literature.
I think it's a standard Prisoner's Dilemma. There's nothing necessarily selfish about a PD; that's just the obvious way to frame examples. The heart of a PD is dominance reasoning: if the other person Cooperates, my best choice is to Defect; and if the other person Defects, my best choice is to Defect. So I'll Defect. But then if everyone reasons this way, the optimal outcome of joint Cooperation is never attained.
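For reference, the standard two-person payoff pattern, with the usual labels (temptation T, reward R, punishment P, sucker's payoff S):
\[
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\
\hline
\text{Cooperate} & (R,R) & (S,T) \\
\text{Defect} & (T,S) & (P,P)
\end{array}
\qquad T > R > P > S.
\]
Given that ordering, Defect strictly dominates Cooperate for each player, yet mutual Defection gives each player P, which is worse than the R of joint Cooperation.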
'Defect' and 'Cooperate' are just stand-ins for some set of actions, selfish or not.
One difference from a prisoner's dilemma is that here if you refuse, everyone does better or at least as well (keeping other decisions fixed). In a prisoner's dilemma, if you defect, the other person does worse but you do better.
Also, with the right formalism, one can formulate this case with a single (possibly infinite) utility everyone is trying to maximize, while in PD it seems important that each person has a different aim.
There is also a more contingent difference. In a PD, one kind of expects trouble if one defects: if you defect, not improbably the other person will defect too, and disaster will result.
But in this scenario, it is all but certain that all will be well if you defect. For it is extremely likely, indeed seems to have probability 1, that of the infinitely many other people, infinitely many will agree to the offer.
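One way to make the probability-1 claim precise, under assumptions that go a bit beyond the story itself (the decisions are independent, and each person has at least some fixed chance $\epsilon > 0$ of agreeing): if person $n$ agrees with probability $p_n \ge \epsilon$, then
\[
\sum_{n=1}^{\infty} p_n \ge \sum_{n=1}^{\infty} \epsilon = \infty,
\]
and so by the second Borel-Cantelli lemma, with probability 1 infinitely many of them agree.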
The SEP has a nice, very formal discussion of PDs. I think section 4 is most relevant for this case.
Thanks for the pointer.
So, looking at that, the closest seems to be the Tragedy of the Commons. But there are a couple of differences.
First, in TC, we still have different individual utilities. In my case, one can have everybody trying to maximize total utility and the problem still arises.
Second, in TC, if we suppose a benevolent dictator, it's easy to solve the problem. The dictator just holds a lottery to pick the minimally effective number of cooperators.
Third, in TC there is a mixed strategy such that if everybody follows it, expected total utility is at least as good as it would be on any other universalized mixed strategy. (This strategy is everybody choosing independently with probability p0 whether to cooperate. I haven't worked out what p0 is, but I assume it's close to the fraction needed for minimally effective cooperation.) In my case, for any universalized mixed strategy assigning non-zero probability to cooperation, there is a better universalized mixed strategy. If everybody flips a coin to decide whether to cooperate, that's better than everybody cooperating. But it's even better if everybody flips a hundred coins and only cooperates if they're all heads. And so on.
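A sketch of why, under assumptions I'm adding just for concreteness (independent randomization and a fixed sacrifice cost $c > 0$ per cooperator): if everybody independently cooperates with probability $p > 0$, then by the second Borel-Cantelli lemma infinitely many cooperate with probability 1, so the benefit is secured; but each person's expected cost is $pc$. Hence for any $p'$ with $0 < p' < p$,
\[
P(\text{benefit} \mid p') = P(\text{benefit} \mid p) = 1 \quad \text{while} \quad p'c < pc,
\]
so the smaller probability is better for everyone, and yet $p = 0$ forfeits the benefit for sure. So no universalized mixed strategy is best.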
But on reflection I am not sure my case contributes anything *philosophically*. For it seems to me that my case brings together three separate ingredients and it may be more illuminating to think about them separately.
1. The coordination problem found in all PDs.
2. No optimal strategy problems, like the vacation from hell problem (Sam is in hell forever, but he gets to request one, and only one, vacation in heaven; if he requests the vacation on day n of his stay in hell, the vacation will last n days; so for any day, it seems to make more sense to wait an extra day before requesting; see the note after this list) or the problem where you're allowed to costlessly save any finite number out of an infinite number of sufferers. This shows up in the fact that in my case even a benevolent dictator doesn't have a perfect solution.
3. Outcome knowledge. In any somewhat realistic version of the story one will assign probability 1 to there being enough other people agreeing/cooperating. In this way, the problem is like voting cases where one knows from opinion polls who will win.
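To spell out the no-optimal-strategy pattern from point 2: in the vacation case, requesting on day $n$ yields $n$ days of vacation, so (writing $u$ for the number of vacation days obtained)
\[
u(\text{request on day } n) = n < n + 1 = u(\text{request on day } n+1),
\]
and hence every definite request day is beaten by waiting one more day, even though never requesting yields no vacation at all. My case has the same shape: every cooperation probability $p > 0$ is beaten by a smaller one, and $p = 0$ loses the benefit entirely.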
But it's better to separate out coordination, no optimal strategy, and outcome knowledge, and think about each of them separately, because they seem to be quite separate problems that, as it happens, are all brought together in this case.
This is just an interpersonal version of Satan's Apple.