William James discusses two kinds of people: the person whose epistemic life is focused on getting to as many truths as possible, and the person whose epistemic life is focused on avoiding falsehoods. So there is truth-pursuit and there is error-avoidance, and we don't want one without the other. The person who just desires truth, without hating error, might simply believe every proposition (and hence also the negation of every proposition) and thus get every truth; but that's not desirable. Conversely, the person who only hates error can avoid all falsehood by believing nothing, and a chair has perfectly achieved the good of not believing any falsehood. So a good epistemic life needs to include both love of truth and hatred of error. But how much of which? James suggests there is no right answer: different people will simply have different preferences.
But it turns out that while there may be different preferences one can permissibly have, there is a serious constraint. Given some very plausible assumptions on epistemic utilities, one can prove that one needs to set more than 2.588 times (more precisely: at least 1/(log 4 − 1) times, where log is the natural logarithm) as great a disvalue on being certain of a falsehood as the value one sets on being certain of a truth!
Here are the assumptions. Let V(r) be the value of having credence r in a true proposition, for 1/2≤r≤1. Let D(r) be the disvalue of having credence r in a false proposition, again for 1/2≤r≤1. Then the assumptions are:
1. V and D are continuous functions on the interval [1/2,1] and are differentiable except perhaps at the endpoints of the interval.
2. V(1/2)=D(1/2)=0.
3. V and D are increasing functions.
4. D is convex.
5. The pair V and D is stable.
Finally, the stability condition (5) needs explanation. Suppose you have assigned credence r≥1/2 to a proposition p. Then you should expect epistemic utility rV(r)−(1−r)D(r) from this assignment. But now suppose you consider changing your credence to s, without any further evidence. You would then expect epistemic utility rV(s)−(1−r)D(s). Stability says that this is never an improvement on what you get with s=r. For if it were sometimes an improvement, you would have reason to change your credence right after you set it evidentially, just to get better epistemic utility, as in this post (in which V(r)=D(r)=2r−1, and that pair is not stable). Stability is a very plausible constraint on epistemic utilities. (The folks working on epistemic utilities may have some other name for this condition; I'm just making the name up.)
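To make stability concrete, here is a minimal Python sketch (my own illustration, not from any paper): it checks stability numerically on a grid. The function names, the grid resolution, and the Brier-style pair below are my choices; the linear pair V(r)=D(r)=2r−1 is the unstable example mentioned above.

```python
import numpy as np

def expected_utility(r, s, V, D):
    """Expected epistemic utility of holding credence s when your
    evidential credence in p is r: r*V(s) - (1-r)*D(s)."""
    return r * V(s) - (1 - r) * D(s)

def is_stable(V, D, grid=np.linspace(0.5, 1.0, 501)):
    """Check stability: for every r, expected utility should be
    maximized (over the grid) at s = r."""
    for r in grid:
        utilities = expected_utility(r, grid, V, D)
        best_s = grid[np.argmax(utilities)]
        if expected_utility(r, best_s, V, D) > expected_utility(r, r, V, D) + 1e-12:
            return False
    return True

# The linear pair from the post: V(r) = D(r) = 2r - 1. Not stable:
# expected utility is r*(2s-1) - (1-r)*(2s-1) = (2r-1)(2s-1), which
# increases in s whenever r > 1/2, so s = 1 always beats s = r.
linear = lambda x: 2 * x - 1
print(is_stable(linear, linear))  # False

# A Brier-style pair (a standard quadratic scoring rule, rescaled so
# that V(1/2) = D(1/2) = 0) is stable: the expected utility
# 1/4 - r*(1-s)^2 - (1-r)*s^2 is maximized exactly at s = r.
V_brier = lambda x: 0.25 - (1 - x) ** 2
D_brier = lambda x: x ** 2 - 0.25
print(is_stable(V_brier, D_brier))  # True
```

Note, for later, that this Brier-style pair has HL(1) = 0.75/0.25 = 3, comfortably above the 2.588 bound proved below.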
Now define the hate-love ratio: HL(r)=D(r)/V(r). This measures how much worse it is to assign credence r to a falsehood than it is good to assign credence r to a truth.
Theorem. Given (1)-(5), HL(r)≥(r−1/2)/(1/2+(log 2)−r+log r) for 1/2<r≤1.
Corollary. Given (1)-(5), HL(1)≥1/(log 4 − 1)>2.588. (To see this, plug r=1 into the Theorem: the bound is (1/2)/(1/2+(log 2)−1+log 1)=(1/2)/((log 2)−1/2)=1/(2 log 2−1)=1/(log 4−1).)
In other words, you should hate being certain of a falsehood more than 2.588 times as much as you love being certain of a truth.
Note 1: One can make V and D depend on the particular proposition p, to take account of the fact that some propositions are more important to get right than others. The hate-love ratio inequality will then hold for each proposition separately.
Note 2: There is no non-trivial upper bound on HL(1). It can even be infinite (with a logarithmic scoring rule, if memory serves me).
Here is a graph of the right hand side of the inequality in the Theorem (the x-axis is r).
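For readers who want to reproduce the graph, here is a short matplotlib sketch of the right hand side (again, log is natural log; the plot starts slightly above 1/2 to avoid the 0/0 at the left endpoint, where the limiting value is 1):

```python
import numpy as np
import matplotlib.pyplot as plt

# Right hand side of the Theorem: (r - 1/2)/(1/2 + log 2 - r + log r).
r = np.linspace(0.51, 1.0, 200)
lower_bound = (r - 0.5) / (0.5 + np.log(2) - r + np.log(r))

plt.plot(r, lower_bound)
plt.xlabel("r")
plt.ylabel("lower bound on HL(r)")
plt.title("Lower bound on the hate-love ratio")
plt.show()
```

The curve rises from 1 (near r=1/2) to about 2.589 at r=1.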
Let me sketch the proof of the Theorem. Let U_r(s)=rV(s)−(1−r)D(s). For any fixed r, stability says that U_r(s) is maximized at s=r, so the derivative U_r'(s) vanishes at s=r. Hence rV'(r)−(1−r)D'(r)=0, and so V'(r)=(1−r)D'(r)/r. Thus, since V(1/2)=0, V(r) is the integral from 1/2 to r of (1/x−1)D'(x)dx. Moreover, by convexity, D'(r) is an increasing function. One can then prove that the hate-love ratio D(r)/V(r) is minimal when D' is constant, i.e., when D is linear (this is actually the hardest part of the proof of the Theorem, but it's very intuitive), and an easy calculation then gives the right hand side of the inequality in the Theorem.
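One can also check that final calculation numerically. Here is a small sketch (my own check, not from the paper): it recovers V from the first-order condition V'(x)=(1/x−1)D'(x) by trapezoid-rule integration, and confirms that linear D attains the bound while a strictly convex D exceeds it. The helper name and the quadratic example are illustrative choices.

```python
import numpy as np

def V_from_D_prime(D_prime, r, n=200001):
    """Recover V(r) from the first-order condition V'(x) = (1/x - 1)*D'(x),
    with V(1/2) = 0, by trapezoid-rule integration on [1/2, r]."""
    x = np.linspace(0.5, r, n)
    y = (1.0 / x - 1.0) * D_prime(x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

# Linear D: D(x) = x - 1/2, so D'(x) = 1 and D(1) = 1/2.
# This should attain the bound: V(1) = log 2 - 1/2 exactly.
HL_linear = 0.5 / V_from_D_prime(lambda x: np.ones_like(x), 1.0)
print(HL_linear)               # ~2.5887
print(1 / (np.log(4) - 1))     # the exact bound 1/(log 4 - 1), for comparison

# Strictly convex D: D(x) = (x - 1/2)^2, so D'(x) = 2(x - 1/2) and D(1) = 1/4.
HL_quad = 0.25 / V_from_D_prime(lambda x: 2 * (x - 0.5), 1.0)
print(HL_quad)                 # ~4.40 > 2.5887
```

The quadratic example illustrates the convexity step of the proof: making D' increasing only pushes the hate-love ratio up.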
This is like something straight out of the TV show Numbers. Using math to fix ordinary life problems. :)
It's really weird that there would be a number that could be attached to this.
ReplyDeleteOne thing I am not sure of is whether the assumption that V(1/2)=D(1/2)=0 is always reasonable. Suppose p is the claim that there is an external world. Then if p is true, the utility of credence 1/2 is negative--that's not the sort of proposition one should be on the fence about. But if p is false, there is no harm in credence 1/2--what you believe probably doesn't matter much if there is no external world--so D(1/2)=0. So in that case, V(1/2) < D(1/2).
Without the V(1/2) = D(1/2) assumption, all I can show is something like: (D(1)-D(1/2))/(V(1)-V(1/2)) > 2.588. I am not sure this is very useful.
Lara Buchak tells me that what I call "stability" is termed "propriety".
Alex,
I find it hard to accept the value-related assumptions without further explanation. First, nothing is said about what *kind* of value is in view. Is it epistemic value? Prudential value? Moral value? You might think these are all the same ultimately, but this is at least another controversial assumption.
Second, I have trouble assigning values to V(1/2) and D(1/2) without knowing more about the circumstances under which a person holds the 1/2 credence. All I am told is that in the V(1/2) scenario, the proposition is true; and in the D(1/2) scenario it is false. Pretty much everyone I know in epistemology would ask for further information before they could pronounce on the relative values here, at least where we are talking about epistemic values. Some will want to hear about the etiology of the credence, others about the environment in which the credence was formed, others about the evidence the subject possessed, and so on.
Epistemic value is what I am thinking about here.
I was thinking that credence 1/2 is something like maximal uncommittedness, and so we can assign a default value of 0 to it.
This result is the heart of a paper accepted by Logos & Episteme.
The full paper is here.