Wednesday, April 8, 2015

The equal weight view

Suppose I assign a credence p to some proposition and you assign a different credence q to it, even though we have the same evidence. We learn of each other's credences. What should we do? The Equal Weight View says that:

  1. I shouldn't give any extra weight to my own credence just because it's mine.
It is also a standard part of the EWV as typically discussed in the literature that:
  2. Each of us should revise our credence in the direction of the other's credence.
Thus if p>q, then I should revise my credence down and you should revise your credence up.

It's an odd fact that the name "Equal Weight View" only connects up with tenet (1). Further, the main intuition behind (1) is the thought that I shouldn't hold myself out as epistemically special, and that does not yield (2). What (1) yields is at most the claim that the method I should use for computing my final credence upon learning of the disagreement should be agnostic as to which of the two initial credences was mine and which was yours. But this is quite compatible with (2) being false. The symmetry condition (1) does nothing to force the final credence to be between the two credences. It could be higher than both credences, or it could be lower than both.

In fact, it's easy to come up with cases where this seems reasonable. A standard case in the literature is where different people calculate their share of the bill in a restaurant differently. Vary the case as follows. You and I are eating together, we agree on a 20% tip and an equal share, and we both see the bill clearly. I calculate my share to be $14.53 with credence p=0.96. You calculate your share to be $14.53 with credence q=0.94. We share our results and credences. Should I lower my confidence, say to 0.95? On the contrary, I should raise it! How unlikely it is, after all, that you should have come to the same conclusion as me if we both made a mistake! Thus we have (1) but not (2): we both revise upward.

There is a general pattern here. We have a proposition that has very low prior probability (in the splitting case the proposition that my share will be $14.53 surely has prior credence less than 0.01). We both get the same evidence, and on the basis of the evidence revise to a high credence. But neither of us is completely confident in the evaluation of the evidence. However, the fact that the other evaluated the evidence in a pretty similar direction overcomes the lack of confidence.
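The pattern can be made concrete with a toy Bayesian model (my own illustration, not something from the post): suppose each person's calculation is correct with probability equal to their announced credence, and an erroneous calculation lands on one of some number of equally likely wrong answers, independently of the other person's error. The number of plausible wrong totals is an assumed parameter.

```python
# Toy Bayesian model of the bill-splitting case (an illustration only).
# Assume: each calculation is correct with probability equal to the
# announced credence, and a mistaken calculation lands uniformly on one
# of n_wrong possible wrong answers, independently between the two of us.

def posterior_given_agreement(p, q, n_wrong):
    """Posterior probability that the shared answer is correct,
    given that both calculations produced the same value."""
    both_right = p * q                               # agree because both correct
    both_wrong_agree = (1 - p) * (1 - q) / n_wrong   # agree by sheer coincidence
    return both_right / (both_right + both_wrong_agree)

# My credence 0.96, yours 0.94, and (say) 100 plausible wrong totals.
post = posterior_given_agreement(0.96, 0.94, 100)
print(post)  # well above both 0.96 and 0.94
```

On these (made-up but conservative) numbers the posterior exceeds 0.999: agreement swamps each party's individual doubts, so both credences go up, just as the post argues.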

One might think that (2) is at least true in the case where the two credences are on opposite sides of 1/2. But even that may be wrong. Suppose that you and I are looking at the results of some scientific experiment and are calculating the value of some statistic v that is determined entirely by the data. You calculate v at 4.884764447, with credence 0.8, being moderately sure of yourself. But I am much less confident in my arithmetical abilities, and so I conclude that v is 4.884764447 with credence 0.4. We're now on opposite sides of 1/2. Nonetheless, I think your credence should go up: it would be too unlikely that my calculations would support the exact same value that yours did.
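The same toy error model (again my own sketch, with assumed parameters) shows why even the 0.4 credence should jump: a nine-decimal statistic has an enormous space of plausible wrong values, so coincidental agreement is astronomically unlikely.

```python
# Toy model for the statistic v (a sketch; the size of the space of
# plausible wrong values is an assumption). Each calculation is correct
# with probability equal to the announced credence; errors land uniformly
# on one of n_wrong values, independently.

def posterior_given_agreement(p, q, n_wrong):
    both_right = p * q
    both_wrong_agree = (1 - p) * (1 - q) / n_wrong
    return both_right / (both_right + both_wrong_agree)

# Your credence 0.8, mine 0.4; take a million plausible wrong values
# for a nine-decimal statistic, for concreteness.
post = posterior_given_agreement(0.8, 0.4, 10**6)
print(post > 0.8 and post > 0.4)  # True: both credences should rise
```

Even starting from opposite sides of 1/2, both posteriors end up far above the higher of the two initial credences, which is the post's point against (2).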

One might worry that in these cases, the calculations are unshared evidence, and hence we're not epistemic peers. If that's right, then the bill-splitting story standard in the literature is not a case of epistemic peers, either. And I think it's going to be hard to come up with a useful notion of epistemic peerhood that gives this sort of judgment.

I think what all this suggests is that we aren't going to find some general formula for pooling our information in cases of disagreement, as some people in the literature have tried (e.g., here). Rather, to pool our information, we need a model of how you and I came to our conclusions, a model of the kinds of errors that we were liable to commit on this path, and then we need to use the model to evaluate how to revise our credences.


Heath White said...

This is a good point: causal/theoretical stories are almost always preferable to pure statistics.

For example: suppose (contrary to fact!) that I have a habit of standing in front of mirrors, next to other people, and asking for estimates of how handsome I am. Unaccountably, although we are looking at the same reflection, my estimates are systematically higher than theirs. Also, more confident. Something tells me we shouldn't just mutually revise our beliefs to meet in the middle....

Alexander R Pruss said...


Have you heard the joke about the statistician out hunting? The deer is at 40 yards. The first arrow lands 50 yards away. The second arrow lands 30 yards away. The statistician yells: "We got it!" Sorry, not quite relevant, but couldn't resist.