Monday, October 10, 2011

It is more than 2.588 times as important to avoid certainty about a falsehood as it is to have certainty about a truth

William James discusses two kinds of people: the person whose epistemic life is focused on getting to as many truths as possible, and the person whose epistemic life is focused on avoiding falsehoods. So, there is truth-pursuit and error-avoidance. We don't want to have one without the other. For instance, the person who desires truth without hating error might simply believe every proposition (and hence also the negation of every proposition) and thus get every truth—but that's not desirable. Conversely, a chair has perfectly achieved the good of not believing any falsehood. So a good life, obviously, needs to include both love of truth and hatred of error. But how much of which? William James suggests there is no right answer: different people will simply have different preferences.

But it turns out that while there may be different preferences one can have, there is a serious constraint. Given some very plausible assumptions on epistemic utilities, one can prove that one needs to set more than 2.588 times (more precisely: at least 1/(log 4 − 1) times) as great a disvalue on being certain of a falsehood as the value one sets on being certain of a truth!

Here are the assumptions. Let V(r) be the value of having credence r in a true proposition, for 1/2≤r≤1. Let D(r) be the disvalue of having credence r in a false proposition, again for 1/2≤r≤1. Then the assumptions are:

  1. V and D are continuous functions on the interval [1/2,1] and are differentiable except perhaps at the endpoints of the interval.
  2. V(1/2)=D(1/2)=0.
  3. V and D are increasing functions.
  4. D is convex.
  5. The pair V and D is stable.
Assumption 1 is a pretty plausible continuity assumption. Assumption 2 is also a reasonable way to set a neutral value for the utilities. Assumption 3 is very plausible: it is better to be more and more confident of a truth and worse to be more and more confident of a falsehood. Assumption 4 corresponds to a fairly standard, though controversial, assumption on calibration measures. It is, I think, quite intuitive. Suppose that p is false. Then you gain more by decreasing your credence from 1.00 to 0.99 than by decreasing your credence from 0.99 to 0.98, and you gain more by decreasing your credence from 0.99 to 0.98 than by decreasing your credence from 0.98 to 0.97. You really want to get away from certainty of a falsehood, and the further away you already are from that certainty, the less additional benefit there is in moving further away. The convexity assumption captures this intuition.
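
To see the convexity point with concrete numbers, here is a tiny check in Python; the disvalue function D(r)=(2r−1)^2 is merely an illustrative convex choice, not anything taken from the argument above.

    # A minimal numeric check of the convexity intuition, using the purely
    # illustrative convex disvalue function D(r) = (2r - 1)**2.
    def D(r):
        return (2 * r - 1) ** 2

    # Gain in epistemic utility from lowering one's credence in a falsehood:
    gain_100_to_99 = D(1.00) - D(0.99)   # about 0.0396
    gain_99_to_98 = D(0.99) - D(0.98)    # about 0.0388

    # Convexity of D guarantees the first step away from certainty helps most.
    assert gain_100_to_99 > gain_99_to_98
    print(gain_100_to_99, gain_99_to_98)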

Finally, the stability condition (which may have some name in the literature) needs explanation. Suppose you have assigned credence r≥1/2 to a proposition p. Then you should expect epistemic utility rV(r)−(1−r)D(r) from this assignment. But now suppose you consider changing your credence to s, without any further evidence. You would expect to have epistemic utility rV(s)−(1−r)D(s) from that. Stability says that this isn't ever going to be an improvement on what you get with s=r. For if it were sometimes an improvement, you would have reason to change your credence right after you set it evidentially, just to get better epistemic utility, like in this post (in which V(r)=D(r)=2r−1—and that's not stable). Stability is a very plausible constraint on epistemic utilities. (The folks working on epistemic utilities may have some other name for this condition—I'm just making this up.)
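
Here is a minimal sketch in Python of what goes wrong with the unstable pair V(r)=D(r)=2r−1 mentioned above: fixing the evidential credence r and shopping around for the credence s with the best expected epistemic utility pulls you away from s=r.

    # Sketch: the pair V(r) = D(r) = 2r - 1 from the post is not stable.
    # With credence r in p, the expected epistemic utility of moving to s is
    # U_r(s) = r*V(s) - (1 - r)*D(s); stability demands this peak at s = r.
    def expected_utility(r, s):
        V = lambda x: 2 * x - 1
        D = lambda x: 2 * x - 1
        return r * V(s) - (1 - r) * D(s)

    r = 0.7                                      # credence set on the evidence
    grid = [0.5 + i / 1000 for i in range(501)]  # candidate credences s in [1/2, 1]
    best_s = max(grid, key=lambda s: expected_utility(r, s))
    print(best_s)  # 1.0, not 0.7: jumping to certainty "pays", so the pair is unstable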

Now define the hate-love ratio: HL(r)=D(r)/V(r). This measures how much worse it is to assign credence r to a falsehood than it is good to assign credence r to a truth.

Theorem. Given (1)-(5), HL(r)≥(r−1/2)/(1/2+(log 2)−r+log r).

Corollary. Given (1)-(5), HL(1)≥1/(log 4 − 1)>2.588.

In other words, you should hate being certain of a falsehood more than 2.588 times as much as you love being certain of a truth.
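
The constant itself is easy to verify; this is nothing beyond the arithmetic of the Corollary.

    # Quick arithmetic check of the Corollary's constant 1/(log 4 - 1).
    import math

    bound = 1 / (math.log(4) - 1)
    print(bound)          # about 2.5887
    assert bound > 2.588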

Note 1: One can make V and D depend on the particular proposition p, to take account of how some propositions are more important to get right than others. The hate-love ratio inequality will hold for each proposition, then.

Note 2: There is no non-trivial upper bound on HL(1). It can even be equal to infinity (with a logarithmic measure, if memory serves me).
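
As a sketch, on one natural reading of the logarithmic measure, namely the usual logarithmic score rebased so that V(1/2)=D(1/2)=0, with V(r)=log(2r) and D(r)=−log(2(1−r)), the ratio does indeed grow without bound:

    # Sketch, assuming the "logarithmic measure" is the rebased log score:
    #     V(r) = log(2r),  D(r) = -log(2(1 - r)),  so V(1/2) = D(1/2) = 0.
    # D(r) blows up as r -> 1, so the hate-love ratio HL(r) tends to infinity.
    import math

    def HL(r):
        return -math.log(2 * (1 - r)) / math.log(2 * r)

    for r in (0.9, 0.99, 0.999, 0.9999):
        print(r, HL(r))   # grows without bound as r approaches 1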

Here is a graph of the right hand side of the inequality in the Theorem (the x-axis is r).
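
The curve can be reproduced with a short sketch along these lines (using matplotlib, with r running over (1/2,1]):

    # Sketch of the graph of the Theorem's right hand side as a function of r.
    import numpy as np
    import matplotlib.pyplot as plt

    r = np.linspace(0.501, 1.0, 500)
    lower_bound = (r - 0.5) / (0.5 + np.log(2) - r + np.log(r))

    plt.plot(r, lower_bound)
    plt.xlabel("r")
    plt.ylabel("lower bound on HL(r)")
    plt.show()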

Let me sketch the proof of the Theorem. Let U_r(s)=rV(s)−(1−r)D(s). Then for any fixed r, stability says that U_r(s) is maximized at s=r. Therefore the derivative U_r'(s) vanishes at s=r (at least for r in the interior of the interval). Hence rV'(r)−(1−r)D'(r)=0. Therefore V'(r)=(1−r)D'(r)/r=(1/r−1)D'(r). Since V(1/2)=0, it follows that V(r) is the integral from 1/2 to r of (1/x−1)D'(x)dx. Moreover, by convexity, D'(r) is an increasing function. One can then prove that the hate-love ratio D(r)/V(r) is minimal when D' is constant (this is actually the hardest part of the proof of the Theorem, but it's very intuitive), i.e., when D is linear, and an easy calculation then gives the right hand side of the inequality in the Theorem.
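
Here is a quick numerical illustration of the key step (not the proof itself): V is recovered from D via the integral above, and the hate-love ratio at r=1 is computed for two illustrative choices of D, a linear one, which attains the bound, and a strictly convex one, which exceeds it.

    # Numerical illustration of the proof sketch:
    #     V(r) = integral from 1/2 to r of (1/x - 1) D'(x) dx,
    # and HL(r) = D(r)/V(r) is smallest when D is linear.
    import math
    from scipy.integrate import quad

    def hate_love_ratio(D, D_prime, r):
        V, _ = quad(lambda x: (1 / x - 1) * D_prime(x), 0.5, r)
        return D(r) / V

    # Linear D attains the Theorem's lower bound at r = 1 ...
    linear = hate_love_ratio(lambda r: r - 0.5, lambda r: 1.0, 1.0)
    # ... while a strictly convex (illustrative) D exceeds it.
    convex = hate_love_ratio(lambda r: (2 * r - 1) ** 2, lambda r: 4 * (2 * r - 1), 1.0)

    print(linear)   # about 2.5887, i.e. 1/(log 4 - 1)
    print(convex)   # about 4.40
    assert abs(linear - 1 / (math.log(4) - 1)) < 1e-6
    assert linear < convex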

7 comments:

Dan Johnson said...

This is like something straight out of the TV show Numbers. Using math to fix ordinary life problems. :)

Alexander R Pruss said...

It's really weird that there would be a number that could be attached to this.

One thing I am not sure of is whether the assumption that V(1/2)=D(1/2)=0 is always reasonable. Suppose p is the claim that there is an external world. Then if p is true, the utility of credence 1/2 is negative--that's not the sort of proposition one should be on the fence about. But if p is false, there is no harm in credence 1/2--what you believe probably doesn't matter much if there is no external world--so D(1/2)=0. So in that case, V(1/2) < D(1/2).

Without the V(1/2) = D(1/2) assumption, all I can show is something like: (D(1)-D(1/2))/(V(1)-V(1/2)) > 2.588. I am not sure this is very useful.

Alexander R Pruss said...

Lara Buchak tells me that what I call "stability" is termed "propriety".

ryanb said...

Alex,

I find it hard to accept the value-related assumptions without further explanation. First, nothing is said about what *kind* of value is in view. Is it epistemic value? Prudential value? Moral value? You might think these are all the same ultimately, but this is at least another controversial assumption.

Second, I have trouble assigning values to V(1/2) and D(1/2) without knowing more about the circumstances under which a person holds the 1/2 credence. All I am told is that in the V(1/2) scenario, the proposition is true; and in the D(1/2) scenario it is false. Pretty much everyone I know in epistemology would ask for further information before they could pronounce on the relative values here, at least where we are talking about epistemic values. Some will want to hear about the etiology of the credence, others about the environment in which the credence was formed, others about the evidence the subject possessed, and so on.

Alexander R Pruss said...

Epistemic value is what I am thinking about here.

I was thinking that credence 1/2 is something like maximal uncommittedness, and so we can assign a default value of 0 to it.

Alexander R Pruss said...

This result is the heart of a paper accepted by Logos & Episteme.

Alexander R Pruss said...

The full paper is here.