
Monday, October 8, 2018

Evidentialism, and self-defeating and self-guaranteeing beliefs

Consider this modified version of William James’ mountaineer case: the mountaineer’s survival depends on his jumping over a crevasse, and he knows that he will succeed in the jump if he believes he will succeed; but he does not know that he will succeed, since he does not know whether he will come to believe that he will succeed.

James used his version of the case to argue that pragmatic reasons can legitimately override lack of epistemic reasons.

But what is interesting to me in my variant is the way it provides a counterexample to evidentialism. Evidentialists say that you epistemically should form your beliefs only on the basis of evidence. But notice that although the belief that he will succeed at the jump needs to be formed in the absence of evidence for its truth, as soon as it is formed, the belief itself becomes its own evidence to the point that it turns into knowledge. The belief is self-guaranteeing. So there seems to be nothing to criticize epistemically about the formation of the belief, even though the formation is independent of evidence. In fact, it seems, there is a good epistemic reason to believe, since by believing the mountaineer increases the stock of his knowledge.

Moreover, we can even make the case one where the evidence on balance points against the proposition. Perhaps the mountaineer has attempted, in safer circumstances, to get himself to believe that he can make such a jump, and seven times out of ten he has failed both at the self-induction of belief and at the jump. In the remaining three times out of ten, he succeeded at both. So the mountaineer has non-conclusive evidence that he won’t manage to believe that he will succeed (and that he won’t succeed). If he comes to believe that he will succeed, he comes to believe this against the evidence. But, still, in doing so, he increases his stock of knowledge, since the belief, once believed, is self-guaranteeing.

(This phenomenon of self-guaranteeing belief reminds me of things that Kierkegaard says about faith, where faith itself is a miracle that hence is evidence for its truth.)

Interestingly, we might also be able to construct cases of well-evidenced but self-defeating beliefs. Consider a jeweler who has noticed that she is successful at cutting a diamond if and only if she does not believe she will be successful. Her theory is that belief in her success makes her insufficiently careful. Over time, she has learned to suspend judgment about her success, and hence to be successful. But now she reflects on her history, and she finds herself with evidence that she will be successful in cutting the next diamond. Yet if she believes on this evidence, this will render her overconfident, and hence render the belief false!

This is related to the examples in this paper on lying.

So perhaps what the evidentialist needs to say is that you epistemically may believe p if and only if the evidence says that if you believe p, p is true?

Monday, October 10, 2011

It is more than 2.588 times as important to avoid certainty about a falsehood as to have certainty about a truth

William James discusses two kinds of people: there is the person whose epistemic life is focused on getting to as many truths as possible and there is the person whose epistemic life is focused on avoiding falsehoods. So, there is truth-pursuit and error-avoidance. We don't want to have one without the other. For instance, the person who just desires truth, without hating error, might just believe every proposition (and hence also the negation of every proposition) and thus get every truth—but that's not desirable. And a chair has perfectly achieved the good of not believing any falsehood. So a good life, obviously, needs to include both love of truth and hatred of error. But how much of which? William James suggests there is no right answer: different people will simply have different preferences.

But it turns out that while there may be different preferences one can have, there is a serious constraint. Given some very plausible assumptions on epistemic utilities, one can prove that one needs to set more than 2.588 times (more precisely: at least 1/(log 4 − 1) times, where log is the natural logarithm) as great a disvalue on being certain of a falsehood as the value one sets on being certain of a truth!

Here are the assumptions. Let V(r) be the value of having credence r in a true proposition, for 1/2≤r≤1. Let D(r) be the disvalue of having credence r in a false proposition, again for 1/2≤r≤1. Then the assumptions are:

  1. V and D are continuous functions on the interval [1/2,1] and are differentiable except perhaps at the endpoints of the interval.
  2. V(1/2)=D(1/2)=0.
  3. V and D are increasing functions.
  4. D is convex.
  5. The pair V and D is stable.

Assumption 1 is a pretty plausible continuity assumption. Assumption 2 is also a reasonable way to set a neutral value for the utilities. Assumption 3 is very plausible: it is better to be more and more confident of a truth and worse to be more and more confident of a falsehood. Assumption 4 corresponds to a fairly standard, though controversial, assumption on calibration measures. It is, I think, quite intuitive. Suppose that p is false. Then you gain more by decreasing your credence from 1.00 to 0.99 than by decreasing your credence from 0.99 to 0.98, and you gain more by decreasing your credence from 0.99 to 0.98 than by decreasing your credence from 0.98 to 0.97. You really want to get away from certainty of a falsehood, and the further away you already are from that certainty, the less additional benefit there is in moving further away. The convexity assumption captures this intuition.
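
To make the convexity intuition concrete, here is a minimal numeric sketch in Python; the quadratic D(r) = (2r − 1)^2 is just an illustrative disvalue function (continuous, zero at 1/2, increasing, convex), not one the post is committed to.

    # Illustrative convex disvalue function on [1/2, 1]: D(r) = (2r - 1)^2.
    # It satisfies D(1/2) = 0, is increasing, and is convex.
    def D(r):
        return (2 * r - 1) ** 2

    # Gain from lowering one's credence in a falsehood by 0.01, at various points.
    for hi, lo in [(1.00, 0.99), (0.99, 0.98), (0.98, 0.97)]:
        print(f"gain from {hi} -> {lo}: {D(hi) - D(lo):.4f}")

    # Prints roughly 0.0396, 0.0388, 0.0380: the farther you already are from
    # certainty in the falsehood, the smaller the benefit of backing away another step.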

Finally, the stability condition needs explanation. Suppose you have assigned credence r≥1/2 to a proposition p. Then you should expect epistemic utility rV(r)−(1−r)D(r) from this assignment. But now suppose you consider changing your credence to s, without any further evidence. You would expect epistemic utility rV(s)−(1−r)D(s) from that. Stability says that this is never an improvement on what you get with s=r. For if it were sometimes an improvement, you would have reason to change your credence right after you set it evidentially, just to get better epistemic utility, as in this post (in which V(r)=D(r)=2r−1, and that pair is not stable). Stability is a very plausible constraint on epistemic utilities. (The folks working on epistemic utilities may have their own name for this condition; I'm just making this name up.)
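
Here is a minimal Python sketch of the stability condition. It checks the pair V(r)=D(r)=2r−1 mentioned just above, and, for contrast, a logarithmic pair V(r)=log(2r), D(r)=−log(2(1−r)); the logarithmic pair is my own illustrative example of utilities that do satisfy stability (it is the familiar logarithmic score, shifted so that V(1/2)=D(1/2)=0).

    import math

    def expected_utility(r, s, V, D):
        """Expected epistemic utility r*V(s) - (1-r)*D(s) of adopting credence s
        when one's evidential credence is r."""
        return r * V(s) - (1 - r) * D(s)

    def best_s(r, V, D, grid=2000):
        """Grid search for the credence s in [1/2, 1) maximizing expected utility."""
        candidates = [0.5 + 0.5 * i / grid for i in range(grid)]
        return max(candidates, key=lambda s: expected_utility(r, s, V, D))

    # The linear pair from the linked post: not stable.
    V_lin = lambda r: 2 * r - 1
    D_lin = lambda r: 2 * r - 1

    # An illustrative logarithmic pair (my example, not the post's): stable.
    V_log = lambda r: math.log(2 * r)
    D_log = lambda r: -math.log(2 * (1 - r))

    for r in (0.6, 0.75, 0.9):
        print(r, best_s(r, V_lin, D_lin), best_s(r, V_log, D_log))

    # For the linear pair the maximizer runs to the top of the grid (just below 1),
    # so it pays to inflate one's credence; for the logarithmic pair it stays at
    # (approximately) s = r, as stability requires.

Incidentally, for the logarithmic pair D(1) is infinite, which fits the remark in Note 2 below that the hate-love ratio at certainty can even be infinite with a logarithmic measure.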

Now define the hate-love ratio: HL(r)=D(r)/V(r). This measures how much worse it is to assign credence r to a falsehood than it is good to assign credence r to a truth.

Theorem. Given (1)-(5), HL(r)≥(r−1/2)/(1/2+(log 2)−r+log r) for 1/2<r≤1.

Corollary. Given (1)-(5), HL(1)≥1/(log 4 − 1)>2.588.

In other words, you should hate being certain of a falsehood more than 2.588 times as much as you love being certain of a truth.

Note 1: One can make V and D depend on the particular proposition p, to take account of how some propositions are more important to get right than others. The hate-love ratio inequality will then hold for each proposition.

Note 2: There is no non-trivial upper bound on HL(1). It can even be equal to infinity (with a logarithmic measure, if memory serves me).

Here is a graph of the right hand side of the inequality in the Theorem (the x-axis is r).
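
Since the graph is easy to regenerate, here is a minimal Python sketch (my own, using matplotlib, not the original plotting code) that graphs the right-hand side of the Theorem's inequality, (r−1/2)/(1/2+log 2−r+log r), for r between 1/2 and 1, and prints its value at r=1.

    import math
    import matplotlib.pyplot as plt

    def bound(r):
        """Right-hand side of the Theorem: the lower bound on the hate-love ratio HL(r)."""
        return (r - 0.5) / (0.5 + math.log(2) - r + math.log(r))

    # The bound tends to 1 as r -> 1/2 and climbs to 1/(log 4 - 1) = 2.588... at r = 1.
    rs = [0.501 + i * (1 - 0.501) / 999 for i in range(1000)]
    plt.plot(rs, [bound(r) for r in rs])
    plt.xlabel("r")
    plt.ylabel("lower bound on HL(r)")
    plt.show()

    print(bound(1.0))  # 2.5887...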

Let me sketch the proof of the Theorem. Let Ur(s)=rV(s)−(1−r)D(s). Then for any fixed r, stability says that Ur(s) is maximized at s=r. Therefore the derivative Ur'(s) vanishes at s=r. Hence rV'(r)−(1−r)D'(r)=0, and so V'(r)=(1−r)D'(r)/r. Thus, since V(1/2)=0, V(r) is the integral from 1/2 to r of (1/x−1)D'(x)dx. Moreover, by convexity, D'(r) is an increasing function. One can then prove that the hate-love ratio D(r)/V(r) is minimal when D' is constant (this is actually the hardest part of the proof of the Theorem, but it's very intuitive), i.e., when D is linear, and an easy calculation then gives the value for the hate-love ratio on the right hand side of the inequality in the Theorem.
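
For what it is worth, the easy calculation at the end can be checked symbolically. Here is a sketch using sympy (my own check, not part of the original argument), with the constant slope of the linear D set to 1, since the hate-love ratio does not depend on that scale.

    import sympy as sp

    r, x = sp.symbols('r x', positive=True)

    # Linear disvalue function with D(1/2) = 0; slope set to 1 (HL is scale-invariant).
    D = r - sp.Rational(1, 2)

    # Recover V from V'(x) = (1/x - 1) * D'(x) with V(1/2) = 0; here D'(x) = 1.
    V = sp.integrate((1 / x - 1) * 1, (x, sp.Rational(1, 2), r))

    HL = sp.simplify(D / V)
    print(HL)                          # (r - 1/2)/(log(r) - r + log(2) + 1/2), up to rearrangement
    print(sp.simplify(HL.subs(r, 1)))  # the exact value 1/(2*log(2) - 1), i.e. 1/(log 4 - 1)
    print(sp.N(HL.subs(r, 1)))         # approximately 2.5887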