Suppose that your prior for some hypothesis H is 3/4 while my prior for it is 1/2. I now find a piece of evidence E for H that raises my credence in H to 3/4 and would raise yours above 3/4. If my concern is for your epistemic good, should I reveal this evidence E?
Here is an interesting reason for a negative answer. For any strictly proper (accuracy) scoring rule, the expected score of a credence in H, calculated from my own credence of 3/4, is uniquely maximized when that credence is 3/4: that is just what strict propriety says. I assume your epistemic utility is governed by a strictly proper scoring rule. So the expected epistemic utility, by my lights, of your credence in H is maximized when your credence is 3/4. But if I reveal E to you, your credence will go above 3/4. So I shouldn't reveal it.
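As a sanity check on that claim, here is a minimal numerical sketch, assuming the Brier score as one example of a strictly proper accuracy rule (the argument doesn't depend on which rule is used, and the helper name is mine):

```python
# Sketch: with Brier-based accuracy (one strictly proper rule), the expected
# accuracy of your credence q, computed from my credence p = 3/4, peaks at q = 3/4.
import numpy as np

p = 3/4  # my credence in H after conditioning on E

def expected_brier_accuracy(q, p):
    # Accuracy = 1 - (truth value - q)^2, averaged over H being true (probability p)
    # and H being false (probability 1 - p), by my lights.
    return p * (1 - (1 - q) ** 2) + (1 - p) * (1 - q ** 2)

qs = np.linspace(0, 1, 1001)
best_q = qs[np.argmax(expected_brier_accuracy(qs, p))]
print(best_q)  # 0.75: by my lights, your credence scores best at exactly 3/4
```

Any credence above 3/4 therefore looks epistemically worse to me than 3/4 does, which is what drives the paternalistic conclusion.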
This is epistemic paternalism. So, it seems, expected epistemic utility maximization (which I take it has to employ a strictly proper scoring rule) forces one to adopt epistemic paternalism. This is not a happy conclusion for expected epistemic utility maximization.
1 comment:
I doubt that epistemic paternalism works in any case. To apply Bayesian updating strictly, you have to condition not just on the evidence itself but on the way you came by it.
Suppose I think that you are an epistemic paternalist, so that you will report evidence that (taken naively) would move my credences closer to yours, but that you will withhold the evidence otherwise. Then I will take silence from you as suggesting that you have evidence that would move my credences away from yours. So I will update accordingly.
This can be formalized. There are two mutually exclusive and jointly exhaustive hypotheses, H and H1. Your prior for H is 1/2, mine is 3/4. At some specified time, you will observe an outcome that must be either E or EA, where P(E|H) is 3/4 and P(E|H1) is 1/4. If you observe E, you will update your credence in H to 3/4; taken naively, E would raise my credence in H from 3/4 to 9/10, so you will withhold it. If you observe EA, you will update your credence in H to 1/4; taken naively, EA would reduce my credence in H from 3/4 to 1/2, i.e. closer to your new credence, so you will tell me that you observed EA.
Suppose that I am Bayesian but not naïve and that I know all the above. I know that if you observe E you will say nothing, and that if you observe EA you will report it. So silence from you is just as much evidence for H as E itself is. I will take your silence as indicating that you observed E and update my credence in H to 9/10 accordingly.
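Those numbers check out. Here is a quick sketch of the arithmetic (the posterior helper is mine, just for illustration):

```python
# Bayes' theorem with H and H1 mutually exclusive and jointly exhaustive.
def posterior(prior_H, likelihood_H, likelihood_H1):
    """P(H | data) from P(H), P(data | H) and P(data | H1), where H1 = not-H."""
    num = prior_H * likelihood_H
    return num / (num + (1 - prior_H) * likelihood_H1)

# Your updates (prior 1/2): E takes you to 3/4, EA takes you to 1/4.
print(posterior(1/2, 3/4, 1/4), posterior(1/2, 1/4, 3/4))  # 0.75 0.25

# My naive updates (prior 3/4): E would take me to 9/10, EA to 1/2.
print(posterior(3/4, 3/4, 1/4), posterior(3/4, 1/4, 3/4))  # 0.9 0.5

# If you report EA whenever you see it and stay silent whenever you see E, then
# P(silence | H) = P(E | H) and P(silence | H1) = P(E | H1), so to a non-naive
# Bayesian your silence carries exactly the evidential force of E.
print(posterior(3/4, 3/4, 1/4))  # 0.9: silence pushes me to 9/10 anyway
```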
Of course, this sort of model cannot be applied strictly in practice. There is usually no ‘specified time’, we can’t know the required probabilities, and the calculations would be way too complex. But the model illustrates why we are wary of even perfectly valid evidence from advocates. We suspect, often rightly, that they have other evidence that they are not telling us.