Comments on Alexander Pruss's Blog: "Yet another infinite population problem"

Alexander R Pruss (2015-12-09 15:51):
This is just an interpersonal version of Satan's Apple.

Alexander R Pruss (2014-10-28 11:43):
Thanks for the pointer.

So, looking at that, the closest seems to be the Tragedy of the Commons. But there are a couple of differences.

First, in TC we still have different individual utilities. In my case, one can have everybody trying to maximize total utility and the problem remains.

Second, in TC, if we suppose a benevolent dictator, the problem is easy to solve: the dictator just holds a lottery to pick the minimal effective number of cooperators.

Third, in TC there is a mixed strategy such that if everybody follows it, expected total utility is at least as good as it would be on any other universalized mixed strategy. (This strategy is everybody choosing independently with probability p0 whether to cooperate. I haven't worked out what p0 is, but I assume it's close to the fraction needed for minimally effective cooperation.) In my case, for any universalized mixed strategy assigning non-zero probability to cooperation, there is a better universalized mixed strategy. If everybody flips a coin to decide whether to cooperate, that's better than everybody cooperating. But it's better still if everybody flips a hundred coins and cooperates only if they're all heads. And so on.

But on reflection I am not sure my case contributes anything *philosophically*. For it seems to me that my case brings together three separate ingredients, and it may be more illuminating to think about them separately.

1. The coordination problem found in all PDs.

2. No-optimal-strategy problems, like the vacation-from-hell problem (Sam is in hell forever, but he gets to request one, and only one, vacation in heaven; if he requests the vacation on day n of his stay in hell, the vacation will last n days; so for any day, it seems to make more sense to wait an extra day before requesting) or the problem where you're allowed to costlessly save any finite number out of an infinite number of sufferers. This shows up in the fact that in my case even a benevolent dictator doesn't have a perfect solution.

3. Outcome knowledge. In any somewhat realistic version of the story, one will assign probability 1 to there being enough other people agreeing/cooperating. In this way, the problem is like voting cases where one knows from opinion polls who will win.

But it's better to separate out coordination, no optimal strategy, and outcome knowledge, and to think about each of them on its own, because they seem to be quite separate problems that, as it happens, are all brought together in this case.

Heath White (2014-10-28 10:38):
The SEP has a nice, very formal discussion of PDs. I think section 4 is most relevant for this case.

Alexander R Pruss (2014-10-28 08:54):
There is also a more contingent difference. In a PD, one has some expectation of what will happen if you defect: if you defect, not improbably the other will too, and disaster will result.

But in this scenario, it is all but certain that all will be well if you defect. For it is extremely likely, indeed it seems to have probability 1, that of the infinitely many other people, infinitely many will agree to the offer.

Alexander R Pruss (2014-10-28 08:26):
One difference from a prisoner's dilemma is that here, if you refuse, everyone does better or at least as well (keeping other decisions fixed). In a prisoner's dilemma, if you defect, the other person does worse but you do better.

Also, with the right formalism, one can formulate this case with a single (possibly infinite) utility everyone is trying to maximize, while in a PD it seems important that each person has a different aim.

Heath White (2014-10-28 08:11):
I think it's a standard Prisoner's Dilemma. There's nothing necessarily selfish about a PD; that's just the obvious way to frame examples. The heart of a PD is dominance reasoning: if the other person Cooperates, my best choice is to Defect; and if the other person Defects, my best choice is to Defect. So I'll Defect. But if everyone reasons this way, the optimal outcome of joint Cooperation is never attained.

'Defect' and 'Cooperate' are just stand-ins for some set of actions, selfish or not.
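[Editorial note: the dominance reasoning in the last comment can be made concrete with a small sketch. The payoff numbers below are the standard textbook values (T=5 > R=3 > P=1 > S=0), not anything from the original post.]

```python
# Two-player Prisoner's Dilemma, row player's payoffs, using the
# conventional illustrative values T > R > P > S (not from the post).
# Keys: (my_move, other_move) -> my payoff.
payoffs = {
    ("C", "C"): 3,  # mutual cooperation (R, reward)
    ("C", "D"): 0,  # I cooperate, other defects (S, sucker)
    ("D", "C"): 5,  # I defect, other cooperates (T, temptation)
    ("D", "D"): 1,  # mutual defection (P, punishment)
}

def best_response(other_move):
    """Pick my move that maximizes my payoff, holding the other
    player's move fixed -- this is the dominance reasoning."""
    return max(["C", "D"], key=lambda my: payoffs[(my, other_move)])

# Whatever the other player does, Defect is my best choice...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet if both players follow this reasoning, each gets 1,
# worse than the 3 each would get from joint cooperation.
assert payoffs[("D", "D")] < payoffs[("C", "C")]
```

Since Defect strictly dominates regardless of the other's choice, universal dominance reasoning lands everyone at the jointly suboptimal (D, D) outcome, which is exactly the structure Heath White identifies.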