Monday, October 27, 2014

Yet another infinite population problem

There are infinitely many people in existence, unable to communicate with one another. An angel makes it known to all that if, and only if, infinitely many of them make some minor sacrifice, he will give them all a great benefit far outweighing the sacrifice. (Maybe the minor sacrifice is the payment of a dollar and the great benefit is eternal bliss for all of them.) You are one of the people.

It seems you can reason: We are making our decisions independently. Either infinitely many people other than me make the sacrifice or not. If they do, then there is no gain for anyone in my making it: we get the benefit anyway, and I make the sacrifice unnecessarily. If they don't, then there is no gain for anyone in my making it either: only finitely many others sacrifice, and adding me still leaves the number finite, so we don't get the benefit even if I pay. So why should I make the sacrifice?
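The dominance reasoning can be laid out compactly. This is just a sketch; the payoff scheme, with a cost c and a benefit B, is my own bookkeeping, though the dollar-and-bliss example above fits it. Write S for the set of people other than me who sacrifice.

```latex
\begin{align*}
\text{If } |S| = \infty:\ & u(\text{sacrifice}) = B - c \;<\; B = u(\text{refrain});\\
\text{If } |S| < \infty:\ & |S \cup \{\text{me}\}| < \infty \text{ as well, so } u(\text{sacrifice}) = -c \;<\; 0 = u(\text{refrain}).
\end{align*}
```

Refraining comes out ahead in both cases; yet if everyone refrains on this ground, everyone forgoes B.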

If consequentialism is right, this reasoning seems exactly right. Yet one had better hope that not everyone reasons like this.

The case reminds me of both the Newcomb paradox—though without the need for prediction—and the Prisoner's Dilemma. As in the Prisoner's Dilemma, it sounds as though the problem is one of selfishness and freeriding. But perhaps, unlike in the Prisoner's Dilemma, the problem really isn't about selfishness.

For suppose that the infinitely many people each occupy a different room of Hilbert's Hotel (numbered 1, 2, 3, ...). Instead of being asked to make a sacrifice oneself, however, one is asked to agree to the imposition of a small inconvenience on the person in the next room. It seems quite unselfish to reason: My decision doesn't affect anyone else's (I so suppose; the inconveniences are imposed only after all the decisions have been made). Either infinitely many people other than me will agree or not. If they do, then we get the benefit, and it is pointless to impose the inconvenience on my neighbor. If they don't, then we don't get the benefit, and it is pointless to add my neighbor's inconvenience to this loss.

Perhaps, though, the right way to think is this: If I agree—either in the original or the modified case—then my action partly constitutes a good collective (though not joint) action. If I don't agree, then my action runs a risk of partly constituting a bad collective (though not joint) action. And I have good reason to be on the side of the angels. But the paradoxicality doesn't evaporate.

I suspect this case, or one very close to it, is in the literature.

Tuesday, July 15, 2014

Trust and the prisoner's dilemma

This is pretty obvious, but I never quite thought of it in those terms: The prisoners' dilemma shows the need for the virtue of trust (or faith, in a non-theological sense). In the absence of contrary evidence, we should assume others to act well, to cooperate.

This assumption perhaps cannot be justified epistemically non-circularly, at least not without adverting to theism, since too much of our knowledge rests on the testimony of others, and hence is justified by trust. Our own observations simply are not sufficient to tell us that others are trustworthy. There is too much of a chance that people are betraying us behind our backs, and it is only by relying on theism, the testimony of others, or directly on trust, that we can conclude that this is not so.

It seems to me that the only way out of the circle of trust would be an argument for the existence of a perfect being (or for some similar thesis, like axiarchism) that does not depend on trust, so that I can then conclude that people created by a perfect being are likely to be trustworthy. But perhaps every argument rests on trust, if only a trust in our own faculties?

Wednesday, January 26, 2011

Epistemic self-sacrifice and prisoner's dilemma

In the fall, I attended a really neat talk by Patrick Grim, reporting on several of his computer simulation experiments. Suppose you have a bunch of investigators who are each trying to find the maximum ("the solution to the problem") of some function. They search, but they also talk to one another. When someone they are in communication with finds a better option than their own, they have a certain probability of switching to it. The question is: How much communication should there be between investigators if we want the community as a whole to do well vis-a-vis the maximization problem?

Consider two models. On the Local Model (my terminology), the investigators are arranged around the circumference of a circle, and each talks only to her immediate neighbors. On the Internet Model (also my tendentious terminology), every investigator is in communication with every other investigator. So, here's what you get. On both models, the investigators eventually communally converge on a solution. On the Internet Model, community opinion converges much faster than on the Local Model. But on the Internet Model the solution converged on is much more likely to be wrong (to be a local maximum rather than the global maximum).
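To make the setup concrete, here is a minimal simulation sketch along these lines. Grim's actual models surely differ in detail; the objective function, the switch probability, the step size, and the population size below are all my own illustrative assumptions.

```python
import math
import random

def f(x):
    """A rugged objective with many local maxima (an assumed stand-in)."""
    return math.sin(5 * x) + 0.3 * math.sin(23 * x) + 0.1 * x

N = 40             # number of investigators (assumed)
STEPS = 500        # rounds of search and communication
SWITCH_PROB = 0.5  # chance of adopting a better solution one hears about

def run(neighbors_of):
    """Run the search; neighbors_of(i) says whom investigator i listens to."""
    positions = [random.uniform(0, 10) for _ in range(N)]
    for _ in range(STEPS):
        # Individual search: small random local exploration, keep improvements.
        for i in range(N):
            candidate = positions[i] + random.gauss(0, 0.05)
            if f(candidate) > f(positions[i]):
                positions[i] = candidate
        # Communication: possibly switch to a better-performing contact.
        new_positions = positions[:]
        for i in range(N):
            best = max(neighbors_of(i), key=lambda j: f(positions[j]))
            if f(positions[best]) > f(positions[i]) and random.random() < SWITCH_PROB:
                new_positions[i] = positions[best]
        positions = new_positions
    return positions

# Local Model: each investigator hears only her two ring-neighbors.
local = run(lambda i: [(i - 1) % N, (i + 1) % N])
# Internet Model: each investigator hears everyone.
internet = run(lambda i: range(N))

print("Local Model best found:   ", max(f(x) for x in local))
print("Internet Model best found:", max(f(x) for x in internet))
```

The only difference between the two runs is the communication network passed to run: a ring for the Local Model and a complete graph for the Internet Model.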

So, here is a conclusion one might draw (which may not be the same as Grim's conclusion): If the task is satisficing or time is of the essence, the Internet Model may be better—we may need to get a decent working answer quickly for practical purposes, even if it's not the true one. But if the task is getting the true solution, it seems the Local Model is a better model for the community to adopt.

Suppose we're dealing with a problem where we really want the true solution, not solutions that are "good enough". This is more likely in more theoretical intellectual enterprises. Then the Local Model is epistemically better for the community. But what is epistemically better for the individual investigator?

Suppose that we have a certain hybrid of the Internet and Local Models. As in the Local Model, the investigators are arranged on a circle. Each investigator knows what every other investigator is up to. But each investigator has a bias in favor of her two neighbors over other investigators. Thus, she is more likely to switch her opinion to match that of her neighbors than to match that of the distant ones. There are two limiting cases: in one limiting case, the bias goes to zero, and we have the Internet Model. In the other limiting case, although she knows the opinions of investigators who aren't her neighbors, she ignores them and will never switch to them. This is the Parochial Model. The Parochial Model gives exactly the same predictions as the Local Model.
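Continuing the sketch above, a single bias parameter can hypothetically interpolate between these two limits; the linear form of the discount is my own assumption, not anything in Grim's models.

```python
def switch_prob(i, j, neighbor_bias, base=0.5, n=40):
    """Chance that investigator i adopts investigator j's better solution.

    neighbor_bias = 0: everyone is treated alike, i.e. the Internet Model.
    neighbor_bias = 1: non-neighbors are ignored entirely, i.e. the
    Parochial Model, which behaves just like the Local Model.
    """
    if j in ((i - 1) % n, (i + 1) % n):   # i's two ring-neighbors
        return base
    return base * (1 - neighbor_bias)     # discounted for distant investigators
```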

Thus, investigators' having an epistemic bias in favor of their neighbors can be good for the community. But such a bias can be bad for the individual investigator. Jane would be better off epistemically if she adopted the best solution currently available in the community. But if everybody always did that, then the community would be worse off epistemically with respect to eventually getting at the truth, since then we would have the Internet Model.

This suggests that we might well have the structure of a Prisoner's Dilemma. Everybody is better off epistemically if everybody has biases in favor of the local (and it need not be spatially local), but any individual would be better off defecting in favor of the best solution currently available. This suggests that epistemic self-sacrifice is called for by communal investigation: people ought not all adopt the best available solution—we need eccentrics investigating odd corners of the solution space, because the true solution may be there.

Of course, one could try to solve the problem like this. One keeps track of two solutions: the one that one comes to by the biased method, and the best one the community has so far. One bases one's publications on the former, while tying one's own personal opinion to the latter. The problem is that this kind of "double think" may be psychologically unworkable. It may be that investigation only works well when one is committed to one's solution.

If this double think doesn't work, then in some cases individual and group rationality could come apart. It is individually irrational to be intellectually eccentric, but good for the community that there be intellectual eccentrics.

My own pull is different in this case than in the classic non-epistemic Prisoner's Dilemma. In this case, I think one should individually go for individual rationality. One should not sacrifice oneself epistemically here by adopting biases. But in the classic Prisoner's Dilemma, one has very good reason to sacrifice oneself.