Epistemic utility is the value of one’s beliefs or credences matching the truth.
Suppose your credences and mine differ. Then I will think that my credences better match the truth. This is automatic if I am measuring epistemic utilities with a strictly proper scoring rule. But that means that benevolence with respect to epistemic utilities gives me a reason to shift your credences closer to mine.
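(To make the propriety point concrete, here is a minimal worked sketch using the Brier score. The Brier score is just one strictly proper rule among many; the particular choice of rule is my illustration, not something the argument turns on.)

```latex
% A sketch, not part of the original post: the Brier score is assumed as the
% measure of epistemic (dis)utility. X is the indicator of the proposition
% (1 if true, 0 if false), p is my credence, q is a candidate credence (e.g. yours).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
By the lights of my credence $p$, the expected Brier loss of a credence $q$ is
\[
  \mathbb{E}_p\big[(q-X)^2\big] \;=\; p(1-q)^2 + (1-p)\,q^2 ,
\]
whose derivative in $q$ is
\[
  \frac{d}{dq}\,\mathbb{E}_p\big[(q-X)^2\big] \;=\; 2(q-p).
\]
So the expected loss is uniquely minimized at $q=p$: whenever your credence
differs from mine, I automatically expect mine to score better.
\end{document}
```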
At this point, there are honest and dishonest ways to proceed. The honest way is to share all my relevant evidence with you. Suppose I have done that. And you’ve reciprocated. And we still differ in credences. If we’re rational Bayesian agents, that’s presumably due to a difference in prior probabilities. What can I do, then, if the honest ways are exhausted?
I can lie! Suppose your credence that there was once life on Mars is 0.4 and mine is 0.5. So I tell you that I read that a recent experiment provided a little bit of evidence in favor of there once having been life on Mars, even though I read no such thing. That boosts your credence that there was once life on Mars. (Granted, it also boosts your credence in the falsehood that there was such a recent experiment. But, plausibly, getting right whether there was once life on Mars gets much more weight in a reasonable person’s epistemic utilities than getting right what recent experiments have found.)
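(Here is a quick numerical sketch of how the lie can come out ahead by my lights. The Brier score, the size of the credence shifts, and the weights are all illustrative assumptions of mine, not fixed by the case; and, as the parenthetical above notes, the lie only wins if getting Mars right is weighted much more heavily than getting the experiment right.)

```python
# A sketch with assumed numbers: expected Brier loss of your credences,
# computed from my credences, before and after the lie.

def expected_brier_loss(p_mine: float, q_yours: float) -> float:
    """Expected squared error of credence q_yours, by the lights of credence p_mine."""
    return p_mine * (1 - q_yours) ** 2 + (1 - p_mine) * q_yours ** 2

# Proposition M: there was once life on Mars.
p_mars_mine = 0.5      # my credence
q_mars_before = 0.4    # your credence before the lie
q_mars_after = 0.45    # your credence after the lie (assumed boost)

# Proposition E: a recent experiment gave evidence of past life on Mars.
# I know E is false, since I made it up, so my credence in it is 0.
p_exp_mine = 0.0
q_exp_before = 0.05    # you initially doubt there was such an experiment
q_exp_after = 0.95     # after the lie you mostly believe there was

# Assumed weights: getting Mars right matters far more to epistemic utility
# than getting the experiment right (the parenthetical caveat above).
w_mars, w_exp = 200.0, 1.0

def total_expected_loss(q_mars: float, q_exp: float) -> float:
    return (w_mars * expected_brier_loss(p_mars_mine, q_mars)
            + w_exp * expected_brier_loss(p_exp_mine, q_exp))

print(total_expected_loss(q_mars_before, q_exp_before))  # ~52.0 before the lie
print(total_expected_loss(q_mars_after, q_exp_after))    # ~51.4 after the lie

# With these assumed weights, the total expected loss drops after the lie,
# so by my lights the lie has made your overall credal state better.
```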
We often think of lying as an offense against truth. But in these kinds of cases, the lies are aimed precisely at moving the other person towards the truth. And they’re still wrong.
Thus, it seems that striving to maximize others’ epistemic utility is the wrong way to think of our shared epistemic life.
Maximizing others’ epistemic utility thus seems to lead to a really bad picture of our shared epistemic life. Should we, then, think of striving to maximize one’s own epistemic utility as the right approach to one’s individual epistemic life? Perhaps. For maybe what is apt to go wrong in maximizing others’ epistemic utility is paternalism, and paternalism is rarely a problem in one’s own case.