Tuesday, October 9, 2018

Epistemic scores and consistency

Scoring rules measure the distance between a credence and the truth value, where true=1 and false=0. You want this distance to be as low as possible.
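For concreteness, here is one standard such rule, the Brier score, which is just squared distance. (This is only an illustrative sketch; nothing in the argument below depends on which reasonable scoring rule you pick.)

    # Minimal sketch of a scoring rule: the Brier score (squared distance).
    # Lower is better; 0 means the credence exactly matched the truth value.
    def brier(credence, truth):
        # truth is 1 if the proposition turned out true, 0 if it turned out false
        return (credence - truth) ** 2

    print(brier(0.9, 1))  # about 0.01 -- high credence in a truth scores well
    print(brier(0.9, 0))  # 0.81 -- high credence in a falsehood scores badly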

Here’s a fun paradox. Consider this sentence:

  1. At t1, my credence for (1) is less than 0.1.

(If you want more rigor, use Gödel's diagonalization lemma to remove the self-reference.) It's now a moment before t1, and I am trying to figure out what credence I should assign to (1) at t1. If I assign a credence less than 0.1, then (1) will be true, and the epistemic distance between my sub-0.1 credence and 1 will be large on any reasonable scoring rule. So, I should assign a credence greater than or equal to 0.1. In that case, (1) will be false, and I want to minimize the epistemic distance between the credence and 0. I do that by letting the credence be exactly 0.1.
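To see the optimization concretely, here is a small sketch (again assuming the Brier score for illustration) of the score a credence c for (1) receives, given that (1) is true exactly when c is below 0.1:

    # Score received by credence c for sentence (1), which is true iff c < 0.1.
    def score_for(c):
        truth = 1 if c < 0.1 else 0
        return (c - truth) ** 2

    # Any credence below 0.1 makes (1) true and scores worse than 0.81;
    # any credence at or above 0.1 makes (1) false and scores c**2,
    # which is minimized at exactly c = 0.1, with score 0.01.
    for c in [0.05, 0.0999, 0.1, 0.2, 0.5]:
        print(c, score_for(c))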

So, I should set my credence to be exactly 0.1 to optimize epistemic score. Suppose, however, that at t1 I will remember with near-certainty that I was setting my credence to 0.1. Thus, at t1 I will be in a position to know with near-certainty that my credence for (1) is not less than 0.1, and hence I will have evidence showing with near-certainty that (1) is false. And yet my credence for (1) will be 0.1. Thus, my credal state at t1 will be probabilistically inconsistent.
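A rough way to put a number on the inconsistency (the epsilon below is just an illustrative stand-in for "near-certainty"): if at t1 I give probability 1 - epsilon to my credence being at least 0.1, then consistency would push my credence in (1) down to roughly epsilon, far below the 0.1 I actually hold.

    # Sketch of the consistency check at t1, with an assumed near-certainty level.
    epsilon = 0.001  # illustrative: probability that I misremember what credence I set

    # P((1)) = P((1) | credence >= 0.1) * P(credence >= 0.1)
    #        + P((1) | credence <  0.1) * P(credence <  0.1)
    # If my credence is at least 0.1, (1) is false, so the first term is 0.
    consistent_credence = 0 * (1 - epsilon) + 1 * epsilon
    actual_credence = 0.1

    print(consistent_credence)  # 0.001 -- roughly what consistency demands
    print(actual_credence)      # 0.1   -- what score-optimization recommended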

Hence, there are times when optimizing epistemic score leads to inconsistency.

There are, of course, theorems on the books showing that optimizing epistemic score requires probabilistic consistency. But those theorems do not apply to cases where the truth of the matter depends on your credence, as in (1).

1 comment:

Alexander R Pruss said...

You also get a paradox in the same setting without any reference to scoring rules if you just assume that at t1 you know with near-certainty what your credence in (1) is, either by introspection or by remembering how you were updating.