Suppose for simplicity that everyone is a good Bayesian and has the same priors for a hypothesis H, and also the same epistemic interests with respect to H. I now observe some evidence E relevant to H. My credence then diverges from everyone else’s, because I have new evidence that they lack. Suppose I could share this evidence with everyone. It seems obvious that if epistemic considerations are the only ones, I should share the evidence. (If the priors are not equal, then considerations in my previous post might lead me to withhold information, if I am willing to embrace epistemic paternalism.)
Besides the obvious value of revealing the truth, here are two ways of arguing for this highly intuitive conclusion.
First, good Bayesians will always expect to benefit from more evidence. If my place and that of some other agent, say Alice, were switched, I’d want the information regarding E to be released. So by the Golden Rule, I should release the information.
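To illustrate the first point, here is a small numerical sketch in Python; the Brier-style accuracy score and the likelihoods for E are illustrative assumptions of mine, not anything the argument requires. The point is that a Bayesian's expected accuracy after conditioning on whether E obtains is at least her expected accuracy if she sticks with her prior.

```python
# Sketch of the "more evidence is expected to help" point, using an
# illustrative Brier-style accuracy score and made-up likelihoods for E.

def brier(p, t):
    """Accuracy score of credence p in H when H's truth value is t (0 or 1)."""
    return 1 - (t - p) ** 2

prior = 0.5            # shared prior credence in H
p_e_given_h = 0.8      # assumed likelihood of E if H is true
p_e_given_not_h = 0.3  # assumed likelihood of E if H is false

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
post_e = prior * p_e_given_h / p_e                   # posterior if E is observed
post_not_e = prior * (1 - p_e_given_h) / (1 - p_e)   # posterior if E is not observed

def expected_score(credence, truth_prob):
    """Expected accuracy of holding `credence`, by the lights of `truth_prob`."""
    return truth_prob * brier(credence, 1) + (1 - truth_prob) * brier(credence, 0)

with_update = (p_e * expected_score(post_e, post_e)
               + (1 - p_e) * expected_score(post_not_e, post_not_e))
without_update = expected_score(prior, prior)

print(with_update >= without_update)  # True: I expect to benefit from the evidence
```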
Second, good Bayesians’ epistemic utilities are measured by a strictly proper scoring rule. Suppose Alice’s epistemic utilities for H are measured by a strictly proper (accuracy) scoring rule s that assigns an epistemic utility s(p,t) to a credence p when the actual truth value of H is t, which can be zero or one. By definition of strict propriety, the expectation by my lights of Alice’s epistemic utility for a given credence is strictly maximized when that credence equals my credence. Since Alice shares the priors I had before I observed E, if I can make E evident to her, her new posteriors will match my current ones, and so revealing E to her will maximize my expectation of her epistemic utility.
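As a sanity check on this second argument, here is a minimal sketch, again in Python, using the Brier score as a stand-in for a strictly proper rule and an arbitrary illustrative credence of mine: my expectation of Alice’s score peaks exactly when her credence matches mine.

```python
# By my lights (credence my_p in H), my expectation of Alice's epistemic
# utility for holding credence q is my_p*s(q,1) + (1-my_p)*s(q,0).
# With a strictly proper rule (Brier, as an illustration) this peaks at q = my_p.

def brier(p, t):
    return 1 - (t - p) ** 2

my_p = 0.3  # illustrative posterior credence in H after observing E

def my_expectation_of_alice(q):
    return my_p * brier(q, 1) + (1 - my_p) * brier(q, 0)

grid = [i / 1000 for i in range(1001)]
print(max(grid, key=my_expectation_of_alice))  # 0.3: moving Alice to my credence is best by my lights
```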
So far so good. But now suppose that the hypothesis H = HN is that there exist N people other than me, and my priors assign probability 1/2 to there being N such people and probability 1/2 to there being n, where N is much larger than n. Suppose further that my evidence E ends up significantly supporting the hypothesis Hn, so that my posterior p in HN is smaller than 1/2.
Now, my expectation of the total epistemic utility of other people if I reveal E is:
- UR = p·N·s(p,1) + (1−p)·n·s(p,0).
And if I conceal E, my expectation is:
- UC = p·N·s(1/2,1) + (1−p)·n·s(1/2,0).
If we had N = n, then strict propriety would guarantee that UR > UC, and so I should reveal. But we have N > n. Moreover, s(1/2,1) > s(p,1): when a hypothesis is true, a strictly proper accuracy scoring rule increases strictly monotonically in the credence assigned to it, and p < 1/2. So the first term of UC exceeds the first term of UR, and if N/n is sufficiently large, these first terms dominate. Hence UC > UR, and thus I should conceal.
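To make the point concrete, here is a quick numerical sketch in Python; the Brier score and the values p = 0.3, n = 10, N = 10000 are made up for illustration. By my own expectations, concealing beats revealing.

```python
# My expectation of the total epistemic utility of the others, with an
# illustrative Brier score and made-up values of p, n and N.

def brier(p, t):
    return 1 - (t - p) ** 2

p, n, N = 0.3, 10, 10_000  # my posterior in HN, and the two candidate population sizes

U_reveal  = p * N * brier(p, 1)   + (1 - p) * n * brier(p, 0)
U_conceal = p * N * brier(0.5, 1) + (1 - p) * n * brier(0.5, 0)

print(U_reveal, U_conceal, U_conceal > U_reveal)  # roughly 1536 vs 2255: concealing wins
```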
The intuition behind this technical argument is this. If I reveal the evidence, I decrease people’s credence in HN. If it turns out that the number of people other than me actually is N, I have done a lot of harm, because I have lowered a very large number N of people’s credence in what is in fact a truth. Since N is much larger than n, this consideration trumps considerations of what happens if the number of people is n.
I take it that this is the wrong conclusion. On epistemic grounds, if everyone’s priors are equal, we should release evidence. (See my previous post for what happens if priors are not equal.)
So what should we do? Well, one option is to opt for averaging rather than summing of epistemic utilities. But the problem reappears. For suppose that I can only communicate with members of my own local community, and we as a community have equal credence 1/2 in the hypothesis Hn that our local community of n people contains all the agents, and credence 1/2 in the hypothesis Hn+N that there is also a number N of agents outside our community, where N is much greater than n. Suppose, further, that my priors make me certain that all the agents outside our community know the truth about these hypotheses. I receive a piece of evidence E disfavoring Hn and leading me to credence p < 1/2 in it. Since my revealing E affects only the members of my own community, and the number of agents depends on which hypothesis is true, the relevant part of the expected contribution to the average epistemic utility with regard to Hn, if I reveal E, is:
- UR = p·((n−1)/n)·s(p,1) + (1−p)·((n−1)/(n+N))·s(p,0).
And if I conceal E, the corresponding expected contribution is:
- UC = p·((n−1)/n)·s(1/2,1) + (1−p)·((n−1)/(n+N))·s(1/2,0).
If N is sufficiently large, again UC will beat UR.
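A quick sketch with the same illustrative Brier score and made-up values (p = 0.3, n = 10, N = 1000000) bears this out:

```python
# The community members' contribution to my expected *average* epistemic
# utility, with an illustrative Brier score and made-up values of p, n and N.

def brier(p, t):
    return 1 - (t - p) ** 2

p, n, N = 0.3, 10, 1_000_000  # my posterior in Hn, community size, outsiders

U_reveal  = (p * ((n - 1) / n) * brier(p, 1)
             + (1 - p) * ((n - 1) / (n + N)) * brier(p, 0))
U_conceal = (p * ((n - 1) / n) * brier(0.5, 1)
             + (1 - p) * ((n - 1) / (n + N)) * brier(0.5, 0))

print(U_conceal > U_reveal)  # True: averaging does not make the problem go away
```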
I take it that there is something wrong with epistemic utilitarianism.