Comments on Alexander Pruss's Blog: "Conciliationism is false or trivial"

----

Alex, this is an interesting case. Here is what I just posted on Trent's Facebook page (first responding to people who confuse conciliationism with the equal-weight view, i.e. splitting the difference, which is a mistake you do not make): I don't think anything in Alexander's post assumes that conciliation is splitting the difference. He does seem to assume that if one credence is high and the other low, then conciliation can't lead to a credence even higher than the high one. Our paper on disagreement deals with this case pretty clearly, I think (https://quod.lib.umich.edu/p/phimp/3521354.0016.011/1). First, it is clear that we can have synergy (Bayesian revision goes outside the range of the initial credences). But if the question is yes/no, then a credence above 0.5 combined with one below 0.5 should yield a result within the range. This, however, is not really a yes/no question.
The correct way to think about it is as a partition in which the last digit is one of ten possible values (0-9). The obvious prior is then 1/10 for each value, each person's reported credence is evidence that the last digit is one, and so the revised credence should be higher than either.

[Anonymous, October 20, 2017, 11:14 AM]

----

By the way, I am broadly in agreement with your paper.

Note that my example doesn't meet your condition "that neither A nor B have any reason to think that the probability of A making a mistake about the matter in question differs from the probability of B making a mistake about the matter." For if A assigns credence 0.99 to p and B assigns credence 0.27 to p, then by their own lights, A has probability 1% of being wrong (in the sense of having a credence on the wrong side of 1/2) while B has probability 27% of being wrong (in the same sense).

I wonder if it would be possible to manufacture a case like the one I gave, but where the two agents have exactly opposite credences, r and 1-r. The trick would be to set things up so that B's having credence 1-r would indicate to A something very specific about how B got to her credence, and A's having credence r would indicate to A something very specific about how A got to her credence.

Here's a way to do this. It is much more contrived than the example in my post. A and B each know that they both have the same biases, which are randomly triggered by features of a situation. They are both judging whether some person is guilty.
Here are some facts that they both know nearly for sure:

- Bias alpha is triggered by, and only by, a consciously inaccessible feature F of the accused.
- Bias alpha, when triggered, makes the agent assign credence 0.12 to guilt.
- Coincidentally, 12% of accuseds with feature F are guilty.
- Bias beta is triggered by, and only by, a consciously inaccessible feature G of the accused.
- Bias beta, when triggered, makes the agent assign credence 0.88 to guilt.
- Coincidentally, 88% of accuseds with feature G are guilty.
- 92% of accuseds with both features F and G are guilty.
- No unbiased reading of the consciously accessible evidence yields credence 12% or 88%.
- Each agent initially either evaluates the consciously accessible evidence, or comes to a credence by bias alpha, or comes to a credence by bias beta.

Alright. So now you find yourself with credence 0.12. You conclude that you got it from bias alpha. You thus know that the accused has feature F, of which you were unconsciously aware. And, coincidentally, 12% of those with feature F are guilty. So, after reflecting on your initial credence, you keep it. And I find myself with credence 0.88, and likewise upon reflection I keep it. At this point, the symmetry condition you give is met: each of us has an equal probability of being wrong. But then we learn each other's credences, from which we learn that the accused has both features F and G, and hence we both increase our credence to 0.92.

[Alexander R Pruss, October 19, 2017, 10:28 PM]

----

Thanks, Trent.
Ad 2: I don't know either. I've had other cases where both credences needed to be revised up, but I could prove that those cases wouldn't work if one of the credences was below 0.5. I suspect that in cases like _this_ one, I can make the lower credence arbitrarily small, though I haven't formally proved it.

Ad 3: No, the point at the end is that if one counts the two people as non-peers, on the ground that thinking differently gives them different evidence, then of course there are no peers who disagree, for people who disagree always think differently. :-)

Ad 5: Aggregation is my interest here. I am not so much interested in the puzzle of how peers could come to disagree as in the question of what they should do if they find out that they do.

[Alexander R Pruss, October 19, 2017, 9:45 PM]

----

A few scattered thoughts, mostly, but not wholly, in no particular order.

1. Cool case!

2. It is definitely widely known (at least among the formal epistemology crowd) that in a pair of divergent credences, both could need to be revised up. I can't remember whether any of those cases had one of the credences below 0.5, though.

3. I think the conclusion at the end should be "...or there can be no *persistent* peer disagreements" or "no disagreements upon sustained reflection," right? That wouldn't be new or trivial. It's supposed to be surprising that peers could ever disagree in the first place.

4. Lots of people have said that the whole debate is trivial because there can't be people with the same evidence.
I point out in "Dealing with Disagreement from the First Person Perspective: A Probabilist Proposal" that that's silly: even if, as an experience theorist about basic evidence, one holds that no two people share the very same evidence, it's still puzzling how people with relevantly *isomorphic* evidence can disagree. It's also the case that *near* peers should be expected to *nearly* agree. So there are several ways to set up the puzzle.

5. There are lots of people who have approached peer-disagreement puzzles via aggregation principles, which one might think is the direction you are going in here, almost as a method of *resolving* disagreement rather than of *diagnosing* the normative situation at the *time* of disagreement.

6. In a certain sense, one I get close to in DDFFPP:PP, all Bayesians think the disagreement problem is trivial. That's kind of what Bayesians should think about any epistemic puzzle. Of course, the payoff for Bayesians is that proving something trivial can be very non-trivial, and fun. :-)

[Trent Dougherty, October 19, 2017, 9:09 PM]
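[Editor's note: the probability claims in the thread can be checked with a short sketch. This is an illustration, not part of any comment. In part 1, the individual credences 0.7 and 0.6 are made-up stand-ins for the two agents' reports, and the conditional independence of their evidence is an assumption; part 2 simply encodes the guilt rates Pruss stipulates (12%, 88%, 92%).]

```python
# Part 1: "synergy" in the last-digit case. Each agent starts from the 1/10
# prior that the last digit is 1 and updates on private evidence. Pooling the
# two likelihood ratios (independence assumed) pushes the posterior above
# either individual credence.

def odds(p):
    return p / (1 - p)

def from_odds(o):
    return o / (1 + o)

prior = 1 / 10                    # ten equally likely last digits
c_a, c_b = 0.7, 0.6               # hypothetical individual credences

lr_a = odds(c_a) / odds(prior)    # likelihood ratio carried by A's evidence
lr_b = odds(c_b) / odds(prior)    # likelihood ratio carried by B's evidence
pooled = from_odds(odds(prior) * lr_a * lr_b)

assert pooled > max(c_a, c_b)     # outside the [0.6, 0.7] range: synergy

# Part 2: Pruss's bias case, with the stipulated guilt rates. A credence of
# 0.12 (0.88) can only come from bias alpha (beta), which reveals the
# unconscious feature F (G) that triggered it.

RATE = {frozenset("F"): 0.12, frozenset("G"): 0.88, frozenset("FG"): 0.92}
FEATURE = {0.12: "F", 0.88: "G"}

def reflect(c):
    # Inferring your own bias reveals one feature; coincidentally the guilt
    # rate for that feature equals your biased credence, so you keep it.
    return RATE[frozenset(FEATURE[c])]

def share(c_you, c_me):
    # Learning the other's credence reveals the second feature; both agents
    # update to the rate for accuseds with both features.
    return RATE[frozenset({FEATURE[c_you], FEATURE[c_me]})]

assert reflect(0.12) == 0.12 and reflect(0.88) == 0.88
assert share(0.12, 0.88) == 0.92  # both revise UP after the disagreement
```

Note that part 2 is pure bookkeeping over the stipulated rates: no Bayesian formula forces 0.92, since the joint behavior of F and G is fixed by fiat in the example, which is exactly why the case can break the "stay within the range" intuition.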