Thursday, December 2, 2021

Misleadingness simpliciter

It is quite routine that learning a truth leads to rationally believing new falsehoods. For we all rationally believe many falsehoods. Suppose I rationally believe a falsehood p and I don’t believe a truth q. Then, presumably, I don’t believe the conjunction of p and q. But suppose I learn q. Then, typically, I will rationally come to believe the conjunction of p and q, a falsehood I did not previously believe.

Thus there is a trivial sense in which every truth I learn is misleading. But a definition of misleadingness on which every truth is misleading doesn’t seem right. Or at least it’s not right to say that every truth is misleading simpliciter. What could misleadingness simpliciter be?

In a pair of papers (see references here), Lewis and Fallis argue that we should assign epistemic utilities to our credences in such a way that conditioning on the truth is never epistemically bad for us: updating on a truth should never decrease our actual epistemic utility.

I think this is an implausible constraint. Suppose a highly beneficial medication has been taken by a billion people. I randomly sample a hundred thousand of these people and see what happened to them in the week after receiving the medication. Now, out of a billion people, we can expect about two hundred thousand to die in any given week. Suppose that my random sampling is really, really unlucky, and I find that fifty thousand of the people in my sample died within a week of taking the medication. Completely coincidentally, of course, since, as I said, the medication is highly beneficial.
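To see just how unlucky this is, here is a quick back-of-the-envelope check in Python. The binomial model and all the numbers are simply the ones stipulated above, not real mortality data:

```python
import math

population = 1_000_000_000       # a billion people took the medication
weekly_deaths = 200_000          # background deaths expected in a week
p = weekly_deaths / population   # per-person chance of dying in a given week

n = 100_000  # sample size
k = 50_000   # observed deaths in the sample

print(f"expected deaths in the sample: {n * p:.0f}")  # about 20

# Log of the Binomial(n, p) probability of exactly k deaths, computed
# via log-gamma terms so the tiny number doesn't underflow to zero.
log10_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + k * math.log(p) + (n - k) * math.log1p(-p)) / math.log(10)
print(f"log10 P(exactly {k} deaths): {log10_pmf:.0f}")  # around -155,000
```

A sample this bad has probability on the order of one in 10^155,000: possible, but a freak occurrence, which is just what the example needs.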

Based on my data, I rationally come to believe the importantly false claim that the medication is very harmful. I also come to believe the true claim that half of my random sample died within a week of taking the medication. But while that claim is true, it is quite unimportant except as misleading evidence for the harmfulness of the medication. It is intuitively very plausible that after learning the truth about half of the people in my sample dying, I am worse off epistemically.

It seems clear that in the medication case, my data is true and misleading in a non-trivial way. This suggests a definition of misleadingness simpliciter:

  • A proposition p is misleading simpliciter if and only if one’s actual overall epistemic utility goes down when one updates on p.

And this account of misleadingness is non-trivial. If we measure epistemic utility with a strictly proper scoring rule, and if our credences are probabilistically coherent, then the expected epistemic value of updating on the outcome of a non-trivial observation is positive. So we should not expect the typical truth to be misleading in the above sense. But some truths are misleading.
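Here is a minimal numerical sketch of both halves of this, assuming epistemic utility is (negative) Brier inaccuracy summed over the two propositions H (the medication is harmful) and E (half the sample died). The joint prior is hypothetical, chosen only to mimic the medication case:

```python
# Toy model of the medication case. H = "the medication is harmful"
# (false at the actual world); E = "half my sample died within a week"
# (true at the actual world). The joint prior is hypothetical, chosen
# only so that E is strong evidence for H.
PRIOR = {
    ("H", "E"): 0.05,                # P(H) = 0.1 and P(E | H) = 0.5
    ("H", "~E"): 0.05,
    ("~H", "E"): 0.9 * 1e-6,         # P(E | ~H) = 1e-6: a freak coincidence
    ("~H", "~E"): 0.9 * (1 - 1e-6),
}

def credences(dist):
    """Credences in H and in E induced by a joint distribution over worlds."""
    return {"H": sum(p for w, p in dist.items() if w[0] == "H"),
            "E": sum(p for w, p in dist.items() if w[1] == "E")}

def condition(dist, e):
    """Conditionalize the joint distribution on evidence e ('E' or '~E')."""
    z = sum(p for w, p in dist.items() if w[1] == e)
    return {w: (p / z if w[1] == e else 0.0) for w, p in dist.items()}

def inaccuracy(cred, world):
    """Brier penalty at a world, summed over H and E (lower is better)."""
    truth = {"H": world[0] == "H", "E": world[1] == "E"}
    return sum((cred[q] - float(truth[q])) ** 2 for q in cred)

actual = ("~H", "E")  # the medication is fine, yet half the sample died

before = inaccuracy(credences(PRIOR), actual)
after = inaccuracy(credences(condition(PRIOR, "E")), actual)
print(f"inaccuracy before updating on E: {before:.4f}")  # ~0.91
print(f"inaccuracy after updating on E:  {after:.4f}")   # ~1.00, i.e. worse

# Yet by the prior's own lights, conditionalizing on the outcome of the
# observation (whichever way it goes) has positive expected epistemic value:
def expected_inaccuracy(creds_at):
    return sum(p * inaccuracy(creds_at(w), w) for w, p in PRIOR.items())

print(expected_inaccuracy(lambda w: credences(PRIOR)))                   # ~0.1375
print(expected_inaccuracy(lambda w: credences(condition(PRIOR, w[1])))) # ~0.0474
```

The last two lines are an instance of the standard propriety result: a coherent agent expects conditionalization on the outcome of an experiment to improve a strictly proper score. The first two show that at an unlucky actual world it can nonetheless make things worse, which is exactly misleadingness simpliciter.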

From this point of view, Lewis and Fallis are making a serious mistake: they are trying to measure epistemic utilities in such a way as to rule out the possibility of misleading truths.

By the way, I think I can prove that for any measure of epistemic utility obtained by summing a single strictly proper score across all events, there will be a possibility of misleadingness simpliciter.

Final note: We don’t need to buy into the formal machinery of epistemic utilities to go along with the above definition. We could just say that something is misleading iff rationally coming to believe it would make one epistemically worse off.
