Imagine an agent for whom being certain that a proposition p is true has infinite value if p is in fact true. This could be a general Cartesian attitude about all propositions, or it could be a special attitude to a particular proposition p.
Here is one way to model this kind of Cartesian attitude. Suppose we have a single-proposition accuracy scoring rule s(r, i) that represents the epistemic utility of having credence r when the proposition in fact has truth value i, where i is either 0 (false) or 1 (true). The scores can range over the whole extended real interval [−∞, ∞], and I will assume that s(r, i) is finite whenever 0 < r < 1, and continuous at r = 0 and r = 1. Additionally, I suppose that the scoring rule is proper, in the sense that the expected utility of sticking to your current credence r by your own lights is at least as good as the expected utility of any other credence. (When evaluating expected utilities with infinities, I use the rule 0 ⋅ (±∞) = 0.)
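Spelled out, propriety says that for every alternative credence q, r ⋅ s(q, 1) + (1 − r) ⋅ s(q, 0) ≤ r ⋅ s(r, 1) + (1 − r) ⋅ s(r, 0), where the left-hand side is the expected utility of credence q by the lights of credence r.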
Finally, I say the scoring rule is Cartesian with respect to p provided that s(1, 1) = ∞. (We might also have s(0, 0) = ∞, but I do not assume it. There are cases where being certain and right that p is much more valuable than being certain and right that ∼p.)
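For concreteness, here is a minimal sketch of one such rule, built with the standard Savage representation of proper scoring rules; the convex function G(r) = −log(1 − r) is my illustrative choice, not anything forced by the definition:

```python
import numpy as np

# A sketch of one strictly proper Cartesian rule, via the standard Savage
# representation: for a convex G, the rule
#   s(r, 1) = G(r) + (1 - r) * G'(r),   s(r, 0) = G(r) - r * G'(r)
# is proper (strictly proper when G is strictly convex). The illustrative
# choice G(r) = -log(1 - r) gives
#   s(r, 1) = -log(1 - r) + 1        -> +infinity as r -> 1  (Cartesian)
#   s(r, 0) = -log(1 - r) - r/(1 - r) -> -infinity as r -> 1

def s(r, i):
    """Score of credence r when the proposition's truth value is i."""
    if i == 1:
        return -np.log(1 - r) + 1.0
    return -np.log(1 - r) - r / (1 - r)

# Numerical propriety check: for each credence r, the expected score
# r*s(q, 1) + (1 - r)*s(q, 0) of announcing credence q should peak at q = r.
qs = np.linspace(0.001, 0.999, 999)
s1 = np.array([s(q, 1) for q in qs])
s0 = np.array([s(q, 0) for q in qs])
for r in [0.1, 0.5, 0.9, 0.99]:
    best_q = qs[np.argmax(r * s1 + (1 - r) * s0)]
    print(f"r = {r}: expected score is maximized at q = {best_q:.3f}")
```

Note that s(0, 0) = 0 here, so this rule rewards certainty in p infinitely but not certainty in ∼p, as in the asymmetric case just mentioned.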
Pretty much all research on scoring rules focuses on regular scoring rules. A regular scoring rule is allowed to assign epistemic utility −∞ when you are certain of a falsehood (i.e., s(1, 0) = −∞ and/or s(0, 1) = −∞), but the possibility of a +∞ epistemic utility is ruled out, and indeed epistemic utilities are taken to be bounded above. Our Cartesian rules are thus all non-regular.
I’ve been thinking about proper Cartesian scoring rules for about a day, and here are some simple things that I think I can show:
1. They exist. (As do strictly proper ones.)
2. One can have an arbitrarily fast rate of growth of s(r, 1) as r approaches 1.
3. However, s(r, 1)/s(r, 0) always goes to zero as r approaches 1. (There is a numerical illustration below.)
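Claim (3) can be sanity-checked numerically on the illustrative rule above (a check, not a proof): s(r, 1) blows up as r → 1, but |s(r, 0)| blows up far faster, so the ratio, which is negative near 1 since s(r, 0) < 0 there, shrinks to zero:

```python
import numpy as np

# s(r, 1) and s(r, 0) for the G(r) = -log(1 - r) rule sketched above.
for r in [0.9, 0.99, 0.999, 0.9999, 0.99999]:
    s1 = -np.log(1 - r) + 1.0
    s0 = -np.log(1 - r) - r / (1 - r)
    print(f"r = {r}: s(r,1) = {s1:8.2f}, s(r,0) = {s0:12.2f}, "
          f"ratio = {s1 / s0:.6f}")
```

Swapping in a faster-growing convex G illustrates (2): with G(r) = 1/(1 − r), one gets s(r, 1) = 2/(1 − r) and s(r, 0) = (1 − 2r)/(1 − r)², so the reward grows like 1/(1 − r) while the penalty grows like 1/(1 − r)², and the ratio still goes to zero.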
Claim (2) shows that we can value near-certainty-in-the-truth to an arbitrarily high degree, but there is a price to be paid: one must disvalue near-certainty-in-a-falsehood way more.
One thing that’s interesting to me is that (3) is not true for non-Cartesian proper scoring rules. There are bounded proper scoring rules, and then s(1, 1)/s(1, 0) can be some non-zero ratio. (Relevant to this is this post.) Thus, assuming propriety, going Cartesian—i.e., valuing certainty of truth infinitely—implies an infinitely greater revulsion from certainty in a falsehood.
A consequence of (2) is that you can have proper Cartesian scoring rules that support what one might call obsessive hypothesis confirmation: even if gathering further evidence becomes increasingly costly while delivering roughly the same Bayes factors, given a linear conversion between epistemic and practical utilities, it could be worthwhile to continue gathering evidence for a hypothesis no matter how close to certain one is. I don’t think all Cartesian scoring rules support obsessive hypothesis confirmation, however.
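Here is a minimal sketch of the phenomenon, under assumptions I am choosing purely for illustration: a repeatable binary experiment that comes out positive with probability a = 0.8 if the hypothesis is true and b = 0.2 if it is false (a Bayes factor of 4 for a positive result), together with the two illustrative rules above. For a proper rule with Savage representation G, the expected score of the honest credence p is G(p), so the expected epistemic value, by your current lights, of updating on one more outcome is the sum over outcomes o of P(o) ⋅ G(posterior after o), minus G(r).

```python
import numpy as np

# Expected epistemic gain, by current lights, of one more binary experiment.
# Assumptions (mine, illustrative): positive result with probability a if the
# hypothesis is true, b if false; honest updating; Savage representation G.

def expected_gain(G, r, a, b):
    p_pos = r * a + (1 - r) * b           # probability of a positive result
    post_pos = r * a / p_pos              # posterior after a positive
    post_neg = r * (1 - a) / (1 - p_pos)  # posterior after a negative
    return p_pos * G(post_pos) + (1 - p_pos) * G(post_neg) - G(r)

a, b = 0.8, 0.2                     # illustrative likelihoods (Bayes factor 4)
G_log = lambda r: -np.log(1 - r)    # the G(r) = -log(1 - r) rule above
G_fast = lambda r: 1 / (1 - r)      # the faster-growing G(r) = 1/(1 - r) rule

for r in [0.9, 0.99, 0.999, 0.9999]:
    print(f"r = {r}: gain, log rule = {expected_gain(G_log, r, a, b):.4f}; "
          f"gain, fast rule = {expected_gain(G_fast, r, a, b):.1f}")
```

Numerically, the log rule’s per-experiment gain levels off near 0.83 (the Kullback-Leibler divergence between the two outcome distributions), enough to justify any constant per-experiment cost below that forever, while the fast rule’s gain grows roughly like 2.25/(1 − r), which is the sort of growth that can outpace increasingly costly experiments.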