Thursday, October 19, 2017

Conciliationism is false or trivial

Suppose you and I are adding up a column of expenses, but for some reason our only interest is the last digit. You and I know that we are epistemic peers. We’ve both just calculated the last digit, and a third party, Carl, asks: Is the last digit a one? You and I speak up at the same time. You say: “Probably not; my credence that it’s a one is 0.27.” I say: “Very likely; my credence that it’s a one is 0.99.”

Conciliationists now seem to say that I should lower my credence and you should raise yours.

But now suppose that you determine your credence for the last digit as follows: You do the addition three times, each time knowing that you have an independent 1/10 chance of error. Then you assign your credence as the result of a Bayesian calculation with equal priors over all ten options for the last digit. And since I’m your epistemic peer, I do it the same way. Moreover, while we’re poor at adding digits, we’re really good at Bayesianism; maybe we’ve just memorized a lot of Bayes factor tables. So we don’t make mistakes in Bayesian calculations, but we do make them at addition.
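To fix ideas, here is one way to spell out that calculation: a sketch under the simplifying assumption that only whether each result is right or wrong enters the likelihood, not which wrong digit an erroneous calculation produces (the exact error model is left open above, so this is just one natural reading). If your three results are r_1, r_2, r_3 and d ranges over the ten candidate digits:

```latex
% Posterior over the last digit d, given results r_1, r_2, r_3, a uniform 1/10
% prior, and a 0.9 chance that any individual calculation is correct.
\[
  P(d \mid r_1, r_2, r_3)
    = \frac{\prod_{i=1}^{3} \ell(r_i \mid d)}
           {\sum_{d'=0}^{9} \prod_{i=1}^{3} \ell(r_i \mid d')},
  \qquad
  \ell(r \mid d) =
    \begin{cases}
      0.9 & \text{if } r = d,\\
      0.1 & \text{if } r \neq d,
    \end{cases}
\]
```

with the uniform 1/10 prior cancelling out of the numerator and denominator.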

Now I can reverse engineer your answer. If you say your credence in a one is 0.27, then I know that exactly one of your three calculations yielded a one. For if none of your calculations yielded a one, your credence that the digit is a one would have been very low, and if two or more of them yielded a one, your credence would have been quite high. There are then two options: either you came up with three different answers, or you had a one and then two answers that matched each other. In the latter case, it turns out that your credence in a one would have been fairly low, around 0.08. So it must be that your calculations yielded a one and then two other, distinct numbers.

And you can reverse engineer my answer. The only way my credence could be as high as 0.99 is if all three of my calculations yielded a one. So now we both know that my calculations were 1, 1, 1 and yours were 1, x, y, where 1, x and y are all distinct. So you aggregate this data, and I, as your peer, do the same. We have six calculations yielding 1, 1, 1, 1, x, y. A Bayesian analysis, given that the chance of error in each calculation is 0.1, yields a posterior probability of about 0.997 that the digit is a one.
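For readers who want to check the arithmetic, here is a minimal sketch, not from the original post, using the same simplifying assumption as the formula above (only the correct-versus-incorrect pattern enters the likelihood; the function name posterior_for_digit is merely illustrative). It gives roughly 0.26, 0.99, 0.09 and 0.996 for the cases discussed, close to, though not exactly matching, the figures quoted, presumably because of rounding or a slightly different error model:

```python
# Minimal sketch: posterior that the true last digit is `target`, given a list
# of calculation results, a uniform 1/10 prior over the digits 0-9, and a 0.9
# chance that any individual calculation is correct. Only the pattern of
# correct vs. incorrect results enters the likelihood here.
def posterior_for_digit(target, results, p_correct=0.9):
    def likelihood(digit):
        prob = 1.0
        for r in results:
            prob *= p_correct if r == digit else (1 - p_correct)
        return prob
    total = sum(likelihood(d) for d in range(10))
    return likelihood(target) / total

print(posterior_for_digit(1, [1, 2, 3]))           # three distinct answers: ~0.265
print(posterior_for_digit(1, [1, 1, 1]))           # three ones:             ~0.988
print(posterior_for_digit(1, [1, 2, 2]))           # a one plus a pair:      ~0.092
print(posterior_for_digit(1, [1, 1, 1, 1, 2, 3]))  # all six pooled:         ~0.996
```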

So, your credence did go up. But mine went up too. Thus we can have cases where the aggregation of a high credence with a low credence results in an even higher credence.

Of course, you may say that the case is a cheat. You and I are not epistemic peers, because we don’t have the same evidence: you have the evidence of your calculations and I have the evidence of mine. But if this counts as a difference of evidence, then the standard example conciliationists give, that of different people splitting a bill in a restaurant, is also not a case of epistemic peerhood. And if the results of internal calculations count as evidence for purposes of peerhood, then there just can’t be any peers who disagree, and conciliationism is trivial.

4 comments:

  1. A few scattered thoughts, mostly, but not wholly, in no particular order.

    1. Cool case!

    2. It is definitely widely known (at least among the formal epistemology crowd) that in a pair of divergent credences, they could both need to be revised up. I can't remember whether the cases had one of the credences below .5 though.

    3. I think the conclusion at the end should be "...or there can be no *persistent* peer disagreements" or "no disagreements upon sustained reflection," right? That wouldn't be new or trivial. It's supposed to be surprising that peers could ever disagree in the first place.

    4. Lots of people have said that the whole debate is trivial because there can't be people with the same evidence. I point out in "Dealing with Disagreement from the First Person Perspective: A Probabilist Proposal" that that's silly because, even for an experience-theorist about basic evidence, it's still puzzling how people with relevantly *isomorphic* evidence could disagree. It's also the case that *near* peers should be expected to *nearly* agree. So there are several ways to set up the puzzle.

    5. Lots of people have approached peer disagreement puzzles via aggregation principles, which seems to be the direction you are going in here: almost as a method of *resolving* disagreement rather than *diagnosing* the normative situation at the *time* of disagreement.

    6. In a certain sense, one I get close to in DDFFPP:PP, all Bayesians think the disagreement problem is trivial. That's kind of what Bayesians should think about any epistemic puzzle. Of course, the payoff for Bayesians is that proving something is trivial can be very non-trivial and fun. :-)

  2. Thanks, Trent.

    Ad 2: I don't know either. I've had other cases where both needed to be revised up, but I could prove that the other cases wouldn't work if one of the credences was below 0.5. I suspect that in cases like _this_ one, I can make the lower credence be arbitrarily small, though I haven't formally proved this.

    Ad 3: No, the point at the end is that if one counts the two people as non-peers for having different evidence because they think differently, then of course there are no peers who disagree--for people who disagree always think differently. :-)

    Ad 5: Aggregation is my interest here. I am not so much interested in the puzzle of how it could happen that peers disagree, as in the question of what they should do if they find out that they do.

  3. By the way, I am broadly in agreement with your paper.

    Note that my example doesn't meet your condition "that neither A nor B have any reason to think that the probability of A making a mistake about the matter in question differs from the probability of B making a mistake about the matter." For if A assigns credence 0.99 to p and B assigns credence 0.27 to p, then by their own lights, A has probability 1% of being wrong (in the sense of having a credence on the wrong side of 1/2) while B has probability 27% of being wrong (in the same sense).

    I wonder if it would be possible to manufacture a case like the one I gave but where the two agents have exactly opposite credences, r and 1-r. The trick could be to set things up so that B's having credence 1-r would indicate to A something very specific about how B got to her credence, and A's having credence r would indicate to A something very specific about how A got to her credence.

    Here's a way to do this. It is much more contrived than the example in my post. A and B each know that they both have the same biases that are randomly triggered by features of a situation. They are both judging whether some person is guilty. Here are some facts that they both know nearly for sure:
    - Bias alpha is triggered by and only by a consciously inaccessible feature F of the accused.
    - Bias alpha when triggered makes the agent assign credence 0.12 to guilt.
    - Coincidentally, 12% of accuseds with feature F are guilty.
    - Bias beta is triggered by and only by a consciously inaccessible feature G of the accused.
    - Bias beta when triggered makes the agent assign credence 0.88 to guilt.
    - Coincidentally, 88% of accuseds with feature G are guilty.
    - 92% of accuseds with both features F and G are guilty.
    - No unbiased reading of the consciously accessible evidence yields credence 12% or 88%.
    - Both agents initially either evaluate the consciously accessible evidence, or come to a credence by bias alpha, or come to a credence by bias beta.

    Alright. So, now you find yourself with credence 0.12. You conclude that you got it from bias alpha. You thus know that the accused has feature F which you were unconsciously aware of. And, coincidentally, 12% of those with feature F are guilty. So, after reflection on your initial credence, you keep that credence. And I find myself with credence 0.88, and likewise upon reflection I keep it. At this point, the symmetry condition you give is met: each of us has an equal probability of being wrong. But then we learn each other's credences, from which we learn that the accused has features F and G, and hence we both increase our credence to 0.92.

  4. Alex, this is an interesting case. Here is what I just posted on Trent's Facebook page (first responding to people who are confusing conciliation with equal weight, i.e. splitting the difference, which is a mistake you do not make): I don't think anything in Alexander's post assumes that conciliation is splitting the difference. He does seem to assume that if one credence is high and the other low, then conciliation can't lead to a credence even higher than the high one. Our paper on disagreement deals with this case pretty clearly, I think (https://quod.lib.umich.edu/p/phimp/3521354.0016.011/1). First, it is clear that we can have synergy (Bayesian revision goes outside the range of the initial credences). But if something is yes/no, then a credence >.5 and a credence <.5 combined should stay in the range. This, however, is a case where we don't really have a yes/no question. The correct way to think about it is as a partition where the last digit is one of ten possible cases (0-9), the obvious prior is 1/10 for each, and each person's reported credence is evidence that the last digit is a one, so the revised credence should be higher than either.
