Wednesday, May 15, 2024

Very open-minded scoring rules

An accuracy scoring rule is open-minded provided that the expected value of the score after a Bayesian update on a prospective observation is always greater than or equal to the current expected value of the score.

Now consider a single-proposition accuracy scoring rule for a hypothesis H. This can be thought of as a pair of functions T and F, where T(p) is the score for assigning credence p to H when H is true and F(p) is the score for assigning credence p to H when H is false. We say that the pair (T,F) is very open-minded provided that the conditional-on-H expected value of the T score after a Bayesian update on a prospective observation is greater than or equal to the current expected value of the T score, and provided that the same is true for the F score, with the expected values taken conditional on not-H.

An example of a very open-minded scoring rule is the logarithmic rule, where T(p) = log p and F(p) = log(1−p). The logarithmic rule has some nice philosophical properties which I discuss in this post, and it is easy to see that any very open-minded scoring rule has these properties. Basically, the idea is that if I measure epistemic utilities using a very open-minded scoring rule, then I will not worry that a Bayesian update on a prospective observation will damage other people's epistemic utilities, as long as these other people agree with me on the likelihoods.
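To make this concrete, here is a small numerical sketch (my own illustration, with made-up probabilities, not something from the post) that checks both conditional expectations for the logarithmic rule on a three-cell partition:

    # Check the very open-minded conditions for the logarithmic rule
    # T(p) = log p, F(p) = log(1-p) on one illustrative example.
    # All numbers below are made up for illustration.
    import math

    T = lambda p: math.log(p)
    F = lambda p: math.log(1 - p)

    prior_H = 0.3
    lik_H = [0.5, 0.3, 0.2]     # P(E_i | H)
    lik_notH = [0.2, 0.3, 0.5]  # P(E_i | not-H)

    # Posteriors P(H | E_i) by Bayes' theorem.
    post = [prior_H * lh / (prior_H * lh + (1 - prior_H) * ln)
            for lh, ln in zip(lik_H, lik_notH)]

    # Conditional-on-H expectation of the post-update T score, and
    # conditional-on-not-H expectation of the post-update F score.
    exp_T_after = sum(lh * T(q) for lh, q in zip(lik_H, post))
    exp_F_after = sum(ln * F(q) for ln, q in zip(lik_notH, post))

    print(exp_T_after >= T(prior_H))  # True: the T condition holds here
    print(exp_F_after >= F(prior_H))  # True: the F condition holds here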

One might wonder whether there are any non-trivial proper and very open-minded scoring rules besides the logarithmic one. There are. Here's a pretty easy-to-verify fact (see the Appendix):

  • A scoring rule (T,F) is very open-minded if and only if the functions xT(x) and (1−x)F(x) are both convex.
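As a sanity check on this criterion (a numeric sketch, not a proof; the grid and tolerance are my own choices), one can test midpoint convexity of xT(x) and (1−x)F(x) on a grid of credences for the logarithmic rule:

    # Grid-based sanity check of the convexity criterion for the logarithmic
    # rule: g(x) = x*T(x) and h(x) = (1-x)*F(x) should both be convex.
    # Midpoint convexity on a finite grid is only a heuristic check.
    import math

    def midpoint_convex(g, n=200, eps=1e-3):
        xs = [eps + (1 - 2 * eps) * i / n for i in range(n + 1)]
        return all(g((x + y) / 2) <= (g(x) + g(y)) / 2 + 1e-12
                   for x in xs for y in xs)

    T = lambda p: math.log(p)
    F = lambda p: math.log(1 - p)

    print(midpoint_convex(lambda x: x * T(x)))        # True
    print(midpoint_convex(lambda x: (1 - x) * F(x)))  # True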

Here’s a cute scoring rule that is proper and very open-minded and proper:

  • T(x) = −√((1−x)/x) and F(x) = T(1−x).

(For propriety, use Fact 1 here. For open-mindedness, note that the graph of xT(x) is the lower semicircle of the circle with radius 1/2 centered at (1/2,0), and hence is convex; by the symmetry F(x) = T(1−x), the graph of (1−x)F(x) is its mirror image about x = 1/2 and so is convex as well.)

What’s cute about this rule? Well, it is symmetric (F(x) = T(1−x)) and it has the additional symmetry property that xT(x) = (1−x)T(1−x) = (1−x)F(x). Alas, though, T is not concave, and I think a good scoring rule should have T concave (i.e., there should be diminishing returns from getting closer to the truth).
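For concreteness, here is a short numerical sketch (again my own illustration) checking the claimed symmetry, the circle description, and the convexity of xT(x) for this rule:

    # Checks for T(x) = -sqrt((1-x)/x), F(x) = T(1-x):
    #   (a) the symmetry x*T(x) = (1-x)*F(x),
    #   (b) that the graph of x*T(x) lies on the circle of radius 1/2
    #       centered at (1/2, 0), i.e. x*T(x) = -sqrt(x*(1-x)),
    #   (c) midpoint convexity of x*T(x) on a grid.
    import math

    T = lambda x: -math.sqrt((1 - x) / x)
    F = lambda x: T(1 - x)
    g = lambda x: x * T(x)

    xs = [i / 100 for i in range(1, 100)]

    print(max(abs(g(x) - (1 - x) * F(x)) for x in xs))              # ~0
    print(max(abs((x - 0.5) ** 2 + g(x) ** 2 - 0.25) for x in xs))  # ~0
    print(all(g((x + y) / 2) <= (g(x) + g(y)) / 2 + 1e-12
              for x in xs for y in xs))                             # True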

Appendix:

Suppose that the prospective observation tells us which cell of the partition E1, ..., En we are in. The open-mindedness property with respect to T then requires:

  1. ∑i P(Ei|H) T(P(H|Ei)) ≥ T(P(H)).

Now P(Ei|H) = P(H|Ei)P(Ei)/P(H). Thus what we need is:

  1. ∑i P(Ei) P(H|Ei) T(P(H|Ei)) ≥ P(H) T(P(H)).

Given that P(H) = ∑i P(Ei) P(H|Ei), this follows immediately from the convexity of xT(x) (apply Jensen's inequality with weights P(Ei)). The converse is easy, too. The F case is parallel: since P(Ei|not-H) = (1−P(H|Ei))P(Ei)/(1−P(H)), the requirement for F reduces in the same way to the convexity of (1−x)F(x).
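Here is a numeric sketch of the rewriting step (illustrative numbers only), showing that the two forms of the requirement agree and that the inequality holds on this example for the logarithmic T:

    # Appendix rewriting step with made-up numbers: compare
    #   sum_i P(E_i|H) T(P(H|E_i))   with
    #   (1/P(H)) * sum_i P(E_i) P(H|E_i) T(P(H|E_i)),
    # then check the inequality against T(P(H)) for T(p) = log p.
    import math

    T = lambda p: math.log(p)

    P_E = [0.4, 0.35, 0.25]          # P(E_i), made up
    P_H_given_E = [0.6, 0.2, 0.35]   # P(H | E_i), made up
    P_H = sum(pe * ph for pe, ph in zip(P_E, P_H_given_E))  # total probability

    # P(E_i | H) = P(H | E_i) P(E_i) / P(H)
    P_E_given_H = [ph * pe / P_H for pe, ph in zip(P_E, P_H_given_E)]

    lhs1 = sum(peh * T(ph) for peh, ph in zip(P_E_given_H, P_H_given_E))
    lhs2 = sum(pe * ph * T(ph) for pe, ph in zip(P_E, P_H_given_E)) / P_H

    print(abs(lhs1 - lhs2))    # ~0: the two forms agree
    print(lhs1 >= T(P_H))      # True on this example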
