In the fall, I attended a really neat talk by Patrick Grim reporting on several of his computer simulation experiments. Suppose you have a bunch of investigators who are each trying to find the maximum ("the solution to the problem") of some function. They search, but they also talk to one another. When someone they are in communication with finds a better option than their own, they have a certain probability of switching to it. The question is: how much communication should there be between investigators if we want the community as a whole to do well vis-à-vis the maximization problem?
Consider two models. On the Local Model (my terminology), the investigators are arranged around the circumference of a circle, and each talks only to her immediate neighbors. On the Internet Model (also my tendentious terminology), every investigator is in communication with every other investigator. So, here's what you get. On both models, the investigators eventually communally converge on a solution. On the Internet Model, community opinion converges much faster than on the Local Model. But on the Internet Model the solution converged on is much more likely to be wrong (to be a local maximum rather than the global maximum).
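To fix ideas, here is a minimal sketch in Python of the kind of simulation I take Grim to have run. It is not his actual model: the landscape function, the switching probability, the step size, and the consensus test are all illustrative assumptions of mine. Agents hill-climb on a rugged function and sometimes copy a better-performing contact; the only difference between the two models is who counts as a contact.

```python
import math
import random

N = 20          # investigators on the circle
P_SWITCH = 0.3  # chance of copying a better-performing contact
ROUNDS = 500    # cap on simulation length

def landscape(x):
    # A rugged function on [0, 10]: several local peaks, global maximum near x = 8.9.
    return math.sin(3 * x) + 0.3 * x

def contacts(i, model):
    if model == "internet":
        return [j for j in range(N) if j != i]  # everyone hears everyone
    return [(i - 1) % N, (i + 1) % N]           # ring: two immediate neighbors

def run(model):
    pos = [random.uniform(0, 10) for _ in range(N)]
    for t in range(1, ROUNDS + 1):
        for i in range(N):
            # Individual search: keep a small random tweak if it improves things.
            trial = min(10.0, max(0.0, pos[i] + random.gauss(0, 0.1)))
            if landscape(trial) > landscape(pos[i]):
                pos[i] = trial
            # Social learning: maybe jump to the best-off contact's solution.
            best = max(contacts(i, model), key=lambda j: landscape(pos[j]))
            if landscape(pos[best]) > landscape(pos[i]) and random.random() < P_SWITCH:
                pos[i] = pos[best]
        if max(pos) - min(pos) < 0.05:  # the community has converged
            return t, landscape(sum(pos) / N)
    return ROUNDS, landscape(sum(pos) / N)

GLOBAL_MAX = max(landscape(k / 1000) for k in range(10001))
for model in ("internet", "local"):
    outcomes = [run(model) for _ in range(30)]
    mean_rounds = sum(t for t, _ in outcomes) / len(outcomes)
    hit_rate = sum(v > GLOBAL_MAX - 0.05 for _, v in outcomes) / len(outcomes)
    print(f"{model:8s} rounds to consensus: {mean_rounds:6.1f}  found global max: {hit_rate:.0%}")
```

If Grim's result carries over to this toy version, the Internet Model should reach consensus in fewer rounds but miss the global maximum more often; the exact figures depend entirely on the assumed parameters.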
So, here is a conclusion one might draw (which may not be the same as Grim's conclusion): if the task is satisficing or time is of the essence, the Internet Model may be better, since we may need a decent working answer quickly for practical purposes, even if it's not the true one. But if the task is getting at the true solution, the Local Model seems the better one for the community to adopt.
Suppose we're dealing with a problem where we really want the true solution, not solutions that are "good enough". This is more likely in more theoretical intellectual enterprises. Then the Local Model is epistemically better for the community. But what is epistemically better for the individual investigator?
Suppose that we have a certain hybrid of the Internet and Local Models. As in the Local Model, the investigators are arranged on a circle. As in the Internet Model, each investigator knows what every other investigator is up to. But each investigator has a bias in favor of her two neighbors over the rest: she is more likely to switch her opinion to match a neighbor's than to match a distant investigator's. There are two limiting cases. In one, the bias goes to zero, and we have the Internet Model. In the other, although she knows the opinions of investigators who aren't her neighbors, she ignores them and will never switch to them. Call this the Parochial Model. The Parochial Model gives exactly the same predictions as the Local Model.
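One way to make the bias concrete; the parametrization below is my own illustrative assumption, not Grim's. The probability of copying a better-off non-neighbor is discounted by a factor of (1 - b), where b is the strength of the bias. Substituting a rule like this for the fixed P_SWITCH in the earlier sketch interpolates smoothly between the two models.

```python
def switch_probability(i, j, b, n=20, base=0.3):
    """Chance that investigator i (on a ring of n) adopts j's better solution.

    b is the neighbor bias: at b = 0 everyone is treated alike and we have
    the Internet Model; at b = 1 non-neighbors are heard but never copied,
    which is the Parochial Model and behaves exactly like the Local Model.
    """
    is_neighbor = j in ((i - 1) % n, (i + 1) % n)
    return base if is_neighbor else (1.0 - b) * base
```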
Thus, investigators' having an epistemic bias in favor of their neighbors can be good for the community. But such a bias can be bad for the individual investigator. Take one investigator, Jane: Jane would be better off epistemically if she adopted the best solution currently available in the community. But if everybody always did that, then the community would be worse off epistemically with respect to eventually getting at the truth, since then we would have the Internet Model.
This suggests that we might well have the structure of a Prisoner's Dilemma: everybody is better off epistemically if everybody has biases in favor of the local (and the locality need not be spatial), but any individual would be better off defecting in favor of the best solution currently available. Communal investigation thus seems to call for epistemic self-sacrifice: people ought not all adopt the best available solution, since we need eccentrics investigating odd corners of the solution space, where the true solution may lie.
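To make the payoff structure vivid, here is a toy table with purely hypothetical numbers of my own; nothing in the simulations fixes these values, and only their ordering matters.

```python
# "Cooperate" = keep the parochial bias; "defect" = always adopt the
# community's current best solution. Entries map (my move, what the rest
# of the community does) to my expected epistemic payoff. The numbers are
# hypothetical; only their ordering is doing any work.
payoff = {
    ("cooperate", "cooperate"): 3,  # diverse search; the truth is likely found
    ("defect",    "cooperate"): 4,  # I free-ride on the community's diversity
    ("cooperate", "defect"):    1,  # I probe odd corners while the herd converges
    ("defect",    "defect"):    2,  # fast consensus, likely on a local maximum
}
# Defecting dominates (4 > 3 and 2 > 1), yet universal cooperation beats
# universal defection (3 > 2): the signature of a Prisoner's Dilemma.
```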
Of course, one could try to solve the problem like this. One keeps track of two solutions: the one that one comes to using the biased method, and the best one the community has so far. One bases one's publications on the former, while tying one's own personal opinion to the latter. The problem is that this kind of "double think" may be psychologically unworkable. It may be that investigation only works well when one is committed to one's own solution.
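A sketch of the bookkeeping this would require, with field names of my own invention:

```python
from dataclasses import dataclass

@dataclass
class Investigator:
    own_solution: float    # reached by the biased (parochial) method
    community_best: float  # best solution known anywhere in the community

    def published_view(self):
        # Publications, and hence the ongoing search, follow the biased method.
        return self.own_solution

    def personal_opinion(self):
        # Private belief tracks the best candidate the community has so far.
        return self.community_best
```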
If this double think doesn't work, then in some cases individual and group rationality come apart: it is individually irrational to be intellectually eccentric, but good for the community that there be intellectual eccentrics.
My own pull in this case is different from what it is in the classic non-epistemic Prisoner's Dilemma. Here, I think one should go for individual rationality: one should not sacrifice oneself epistemically by adopting biases. But in the classic Prisoner's Dilemma, one has very good reason to sacrifice oneself.
1 comment:
Wow, super fascinating stuff, with a lot of relevance to contemporary debates about the epistemic significance of disagreement and rational belief-formation/updating in the face of disagreement.
Q: What does "Jane would be better off epistemically if she adopted the best solution currently available in the community" mean? That she would be more likely to have a correct solution, at any given time?
I wonder how mixed populations would fare, with some highly-connected and some weakly-connected individuals.
If the situation really is PD-like, then my intuitions differ from yours, for two reasons. One is that I really do think that knowledge of any consequence is a community endeavor, and so the proper morality ("rationality") with respect to it is to focus on the common good. Epistemic individualists are free-riders.
Secondly, one (characteristically Reformed) way to think about the situation of Christians in the higher realms of the intellect is as (from a global point of view) "eccentrics investigating odd corners of the solution space". I would not want to adopt a theory on which such "eccentricity" is irrational.