
Friday, October 29, 2021

Evidentialism and epistemic utilities

Epistemic value is the value of true belief and disvalue of false belief.

Let p be the proposition that there is such a thing as epistemic value.

Suppose p is true. Then, plausibly, the higher your credence in p, the more epistemic value your credence has: the closer your credence is to certainty, the closer your representation is to the truth. Let t_p(r) be the epistemic value of having credence r in p when in fact p is true. Then t_p(r) is a strictly increasing function of r.

Suppose p is false. Then whatever credence you have in p, the epistemic value of that credence is zero.

Now suppose you are not sure about p, so your credence in p is some r with 0 < r < 1. Consider now the idea of setting your credence to some other value r′. What is the expected epistemic value of doing so? Well, if p is false, there will be no epistemic value, and if p is true, you will have epistemic value t_p(r′). Your current probability for p is r. So your expected epistemic value is

  • r⋅t_p(r′) + (1 − r)⋅0 = r⋅t_p(r′).

Thus, to maximize your expected epistemic value, you should set r′ = 1. In other words, even if your evidence does not support p, you should have credence one in p if you are to maximize expected epistemic value.
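
Here is a minimal numerical sketch of this point in Python (my own illustration, not from the post; the choice t_p(r) = r is just one example of a strictly increasing value function, and any such function gives the same result):

    # Sketch: expected epistemic value of moving to credence r_new, when the
    # current credence in p is r and any credence in a falsehood is worth 0.

    def expected_value(r, r_new, t_p=lambda x: x):
        return r * t_p(r_new) + (1 - r) * 0

    for r in (0.1, 0.5, 0.9):
        grid = [i / 100 for i in range(101)]
        best = max(grid, key=lambda r_new: expected_value(r, r_new))
        print(r, best)  # best is 1.0 for every starting credence r > 0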

What do we learn from this?

First, either evidentialism (the view that your degree of belief should be proportioned to the evidence) is false or else expected epistemic utility maximization is the wrong way to think about epistemic normativity.

Second, there are cases where the right epistemic scoring rule is improper. For given a proper epistemic scoring rule and a consistent credence assignment, we never get a recommendation of a change of credence. The scoring rule underlying the above epistemic value assignments is clearly improper, and yet is also clearly right.
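
For contrast, here is the same grid search run on the (negative) Brier score, a standard proper scoring rule (again just a sketch): by the lights of one's current credence, the expected score is maximized by staying put, so a consistent agent is never told to move. The rule above, which scores every credence in a falsehood as 0, lacks this property and so is improper.

    # Sketch: the Brier score assigns -(1 - r_new)^2 if p is true and
    # -(r_new)^2 if p is false; we compute the expectation under credence r.

    def expected_brier(r, r_new):
        return r * -((1 - r_new) ** 2) + (1 - r) * -(r_new ** 2)

    for r in (0.1, 0.5, 0.9):
        grid = [i / 1000 for i in range(1001)]
        best = max(grid, key=lambda r_new: expected_brier(r, r_new))
        print(r, best)  # best equals r: no change of credence is recommended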

Monday, October 8, 2018

Evidentialism, and self-defeating and self-guaranteeing beliefs

Consider this modified version of William James’ mountaineer case: The mountaineer’s survival depends on his jumping over a crevasse, and the mountaineer knows that he will succeed in jumping over the crevasse if he believes he will succeed, but doesn’t know that he will succeed as he doesn’t know whether he will come to believe that he will succeed.

James used his version of the case to argue that pragmatic reasons can legitimately override lack of epistemic reasons.

But what is interesting to me in my variant is the way it provides a counterexample to evidentialism. Evidentialists say that you epistemically should form your beliefs only on the basis of evidence. But notice that although the belief that he will succeed at the jump needs to be formed in the absence of evidence for its truth, as soon as it is formed, the belief itself becomes its own evidence to the point that it turns into knowledge. The belief is self-guaranteeing. So there seems to be nothing to criticize epistemically about the formation of the belief, even though the formation is independent of evidence. In fact, it seems, there is a good epistemic reason to believe, since by believing the mountaineer increases the stock of his knowledge.

Moreover, we can even make the case one where the evidence on balance points against the proposition. Perhaps the mountaineer has attempted, in safer circumstances, to get himself to believe that he can make such a jump, and seven times out of ten he has failed both at the self-induction of belief and at the jump. But in the remaining three times out of ten, he succeeded at both. So, then, the mountaineer has non-conclusive evidence that he won't manage to believe that he will succeed (and that he won't succeed). If he comes to believe that he will succeed, he comes to believe this against the evidence—but, still, in so doing, he increases his stock of knowledge, since the belief, once believed, is self-guaranteeing.

(This phenomenon of self-guaranteeing belief reminds me of things that Kierkegaard says about faith, where faith itself is a miracle that hence is evidence for its truth.)

Interestingly, we might also be able to construct cases of well-evidenced but self-defeating beliefs. Consider a jeweler who has noticed that she is successful at cutting a diamond if and only if she does not believe she will be successful. Her theory is that belief in her success makes her insufficiently careful. Over time, she has learned to suspend judgment about her success, and hence to be successful. But now she reflects on her history, and she finds herself with evidence that she will be successful in cutting the next diamond. Yet if she believes on this evidence, this will render her overconfident, and hence render the belief false!

This is related to the examples in this paper on lying.

So perhaps what the evidentialist needs to say is that you epistemically may believe p if and only if the evidence says that if you believe p, p is true?
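
A toy way to display the structure of the two cases (my own formalization, with names chosen purely for illustration): model a case as a map from whether the agent believes p to whether p comes out true, and ask which belief states are consistent with the truth value they produce.

    # Sketch: a case is a function from "agent believes p" to "p is true".

    def consistent_states(truth_given_belief):
        # A state b is consistent when believing-or-not matches how p comes out.
        return [b for b in (True, False) if truth_given_belief(b) == b]

    def mountaineer(believes):  # believing he will succeed makes him succeed
        return believes

    def jeweler(believes):      # believing she will succeed makes her fail
        return not believes

    print(consistent_states(mountaineer))  # [True, False]: believing is a fixed point
    print(consistent_states(jeweler))      # []: believing makes p false,
                                           # suspending leaves p an unbelieved truth

On this toy picture, the mountaineer's belief is a fixed point of the case while the jeweler's case has none, and the proposed evidentialist fix amounts to requiring evidence that believing would land you on a fixed point.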

Friday, February 3, 2017

Evidentialism and higher-order belief

It seems epistemically vicious to induce or maintain a belief for which one has insufficient evidence.

But suppose that my evidence supports a quite low degree of confidence about (a) whether I have or will have any higher-order beliefs, (b) the reliability of my introspection into higher-order beliefs, and (c) whether I am capable of self-inducing a belief. I now try to self-induce a belief that I have a higher-order belief, reasoning: either I'll succeed or I'll fail in self-induction. If I succeed, I will gain a true belief—for then I will have a higher-order belief. If I fail, no harm done. So I try, and I succeed.

Nothing epistemically vicious has been done, even though I self-induced a belief for which I had insufficient evidence.

In light of my evidenced low degree of confidence in the reliability of introspection into higher-order beliefs, once I have gained the belief, I still on balance have insufficient evidence for the belief. But it doesn’t seem irrational to try to maintain the belief, on the grounds that one can only successfully maintain it if one has it, and if one has it, it’s true. And so I try to maintain the belief, and I succeed. So I maintain the belief despite continuing insufficient evidence, and yet I am rational.

Here’s a reverse case. Let’s say that I find myself with very strong evidence that I do not have and will never have any higher-order beliefs. It would be irrational to try to get myself to believe this proposition on this evidence. For that belief would itself be a higher-order belief, and so it would falsify itself if I came to have it.

So perhaps we should tie rationality not to evidence for a belief, but to evidence for the material conditional: if I have the belief, it is true?
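
One probabilistic gloss on this proposal (my own formalization, not the post's wording, writing Bp for "I believe p"): tie rationality to the evidenced probability of the conditional Bp → p, or, on one natural reading, to the conditional probability P(p ∣ Bp), rather than to P(p) itself. In these terms the cases in this post and the previous one come apart as follows:

    \begin{align*}
      \text{self-guaranteeing belief:} &\quad P(p \mid Bp) = 1 \text{ even when } P(p) \text{ is low}\\
      \text{self-defeating belief:}    &\quad P(p \mid Bp) \approx 0 \text{ even when } P(p) \text{ is high}\\
      \text{ordinary belief:}          &\quad P(p \mid Bp) \approx P(p).
    \end{align*}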

Cf. this about assertion.

Thursday, October 29, 2015

A weakly-fallibilist evidentialist can't be an evidential Bayesian

The title is provocative, but the thesis is less provocative (and in essence well-known: Hawthorne's work on the deeply contingent a priori is relevant) once I spell out what I stipulatively mean by the terms. By evidential Bayesianism, I mean the view that evidence should only impact our credences by conditionalization. By evidentialism, I mean the view that high credence in contingent matters should not be had except by evidence (most evidentialists make stronger claims). By weak fallibilism, I mean the view that sometimes a correctly functioning epistemic agent would appropriately have high credence on the basis of non-entailing evidence. These three theses cannot all be true.

For suppose that they are all true, and that I am a correctly functioning epistemic agent who has appropriate high credence in a contingent matter H, even though my total evidence E does not entail H. By evidentialism, my credence comes from the evidence. By evidential Bayesianism, if P measures my prior probabilities, then P(H|E) is high. But it is a theorem that P(H|E) is less than or equal to P(E → H), where the arrow is the material conditional. So the prior probability of E → H is high. This conditional is not necessary, since E does not entail H. Hence, I have high prior credence in a contingent matter. But prior probabilities are by definition independent of my total evidence. So evidentialism is violated.
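
The theorem appealed to here has a short proof; here is a sketch, assuming P(E) > 0 so that the conditional probability is defined, and reading E → H as ¬E ∨ H:

    \begin{align*}
      P(E \rightarrow H) &= P(\neg E \lor H) = P(\neg E \lor (E \wedge H)) = P(\neg E) + P(E \wedge H)\\
                         &= P(\neg E) + P(H \mid E)\,P(E)\\
                         &\geq P(H \mid E)\,P(\neg E) + P(H \mid E)\,P(E) = P(H \mid E),
    \end{align*}

where the third equality uses the incompatibility of the two disjuncts and the inequality uses P(H ∣ E) ≤ 1.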

Monday, March 10, 2014

A counterexample to evidentialism?

Consider Williamson-style beliefs that obviously have the property that they have to be correct if they are believed. For instance, if I believe that I have a belief, then that belief is guaranteed to be correct. Call beliefs like this obviously self-guaranteeing.

Suppose now that I am unable to introspect my beliefs and am not a sufficiently good observer to gain evidence as to what I believe on the basis of my behavior. Unsure whether I have any beliefs, but thinking that true beliefs are valuable to have and false ones are valuable to avoid, I try to will myself to believe that I have a belief, because it is clear to me that that claim will be true if I believe it. (You might ask: If I do that, don't I already believe something, namely that the belief will be true if I believe it? Maybe, but that's beside the point, since I am unable to tell that I believe it.) I don't know if I will succeed—and even if I do succeed, I won't know that I have succeeded—since willing myself to have a belief is a notoriously shaky thing. There seems to be nothing incompatible with the love of truth in willing myself to believe that I have a belief; indeed, there seems to be nothing epistemically bad about it. But I am (a) willing myself to believe something I now do not have evidence for, and (b) if I do come to believe it, I will believe it without any evidence for it. If indeed there is nothing epistemically bad here, then (b) gives a counterexample to synchronic evidentialism and (a) gives a counterexample to diachronic evidentialism.

But perhaps there is something perverse here. See tomorrow's post.

Friday, July 2, 2010

P. G. Wodehouse on evidence and belief

From Uneasy Money (I am listening to the LibriVox recording of this while exercising, mowing, etc.):
'I do believe it,' he said. 'I believe every word you say.'
She shook her head.
'You can't in the face of the evidence.'
'I believe it.'
'No. You may persuade yourself for the moment that you do, but after a while you will have to go by the evidence. You won't be able to help yourself. You haven't realized what a crushing thing evidence is. You have to go by it against your will. You see, evidence is the only guide. You don't know that I am speaking the truth; you just feel it. You're trusting your heart and not your head. The head must win in the end. You might go on believing for a time, but sooner or later you would be bound to begin to doubt and worry and torment yourself. You couldn't fight against the evidence, when once your instinct—or whatever it is that tells you that I am speaking the truth—had begun to weaken. And it would weaken. Think what it would have to be fighting all the time. Think of the case your intelligence would be making out, day after day, till it crushed you. It's impossible that you could keep yourself from docketing the evidence and arranging it and absorbing it. Think! Consider what you know are actual facts. ...'
There is more, but that would make this post into a spoiler.

Actually, I think Elizabeth (the female speaker) underestimates the power of volitional belief.

Thursday, January 28, 2010

Miscellaneous thoughts on lying

1. Self-help book idea: A Year Without a Lie.

2. A number of vices in some sense require a willingness to lie. One can't really commit adultery without being willing to lie. Here the "can't" is that of a restricted prudential rationality. Similarly, in many cases one can't cheat on one's taxes without either lying (writing down a falsehood on a tax return) or at least being willing to lie (destroying records and not filing, while planning to lie about the destruction should one be audited). And it would probably be hard to be an unscrupulous politician without a willingness to lie.

An absolute unwillingness to lie will keep one from a number of vices. What about a merely prima facie unwillingness? An unwillingness to lie unless by lying one prevents a great evil? Well, psychologically speaking, such an unwillingness is likely to be weaker. Moreover, such a conditional unwillingness is unlikely to keep one from lying to one's spouse if one commits adultery since one is likely to think that by so lying one is preventing a great evil—especially if one has children. Similarly, it may not keep one from lying to a tax auditor, since a tax fraud conviction may result in a great evil to oneself and one's family. So there is reason to adopt an absolute unwillingness to lie in order to keep oneself from other vices.

3. There is a value in being the sort of person who can be trusted no matter what. If one is known conscientiously to follow the rule of not lying unless by lying one prevents a great evil, then before relying on one's testimony, others will have to figure out whether one might think that one is preventing a great evil by lying, and this will lower the value of one's testimony.

Then there will be circumstances in which one's testimony is of no weight, and yet it is of vital importance that one's testimony have weight. Suppose, for instance, that I believe that my friend is innocent of murder. I testify to the court: "He spent the evening with me, talking about Spinoza." Suppose all the other evidence is against my friend. If I am the sort of person who lies to prevent great evils, then I am the sort of person who would provide a friend whom I believe to be innocent with a false alibi under those circumstances, because I would thereby be preventing the great evil of his being falsely convicted—indeed, his life might be at stake. Therefore, for my testimony to carry the weight that it is desperately important for it to carry, I have to be believed to be the sort of person who wouldn't lie even to save a friend from what I believe to be an unjust murder conviction.

Or, for a more common case, consider an innocent wife who is asked by a jealous husband whether she was faithful to him. If she is known to follow the rule of not lying unless lying prevents a great evil, her testimony to her innocence is of little worth, because quite possibly a great evil would be prevented by her lying if she were unfaithful. But it is crucially important that her testimony be believed, and this requires that she be known to follow the rule of not lying simpliciter.

Now, granted, such cases may be rare. But I think they are no rarer than the cases where lying is needed to prevent a great evil; in fact, they are more common. Here's a handwaving argument for this. Both kinds of cases are a species of this situation: it prevents a great evil if one's interlocutor comes to believe that p. One might think that this situation occurs just as often with p false as with p true. But the more plausible view is that true belief is somewhat more likely to be beneficial to society than false belief. If so, then a majority (though perhaps a modest one) of cases of this situation are ones where in fact p is true. (To get the desired conclusion from this, one has to either assume that the speaker knows whether p or, more weakly, that the speaker is more likely to be right about p than to be wrong.)
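
To put the handwaving slightly more explicitly (this is my own formalization, not the post's): write S for the situation in which a great evil is prevented if one's interlocutor comes to believe that p. The argument then runs:

    \begin{align*}
      &P(p \mid S) > \tfrac{1}{2} && \text{(true belief is somewhat more likely to be beneficial)}\\
      &P(\text{the speaker is right about } p) > \tfrac{1}{2} && \text{(the weaker assumption above)}\\
      &\Rightarrow\ \text{among } S\text{-cases, those calling for credible truth-telling outnumber those calling for a lie.}
    \end{align*}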

So it is at least as important, and likely more important, that one be believed no matter what than that one be able to lie to prevent great evils.

Therefore, it is a good thing to be such as to be believed to be unwilling to lie, no matter what. But the best way to ensure that one is believed to have that sort of character is to have that sort of character. Moreover, if one fails to have that sort of character, but pretends to do so, such a constant pretence is likely to be harmful to one's character. And it is unlikely that someone who constantly pretends to have a character other than she does is going to be the sort of person whom people believe no matter what. Therefore, there is good reason—even good consequentialist reason—to adopt an absolute unwillingness to lie.

4. The above points apply particularly strongly to Christians because it is particularly crucial that people believe our testimony about Christ. We believe, after all, that whether people have faith in Christ affects their eternal well-being. Thus, whenever we speak with someone about Christ, this is a situation where a great evil and a great good are at stake in our testimony being trusted.

Suppose that I thought it was acceptable to lie to prevent great evils. Then if I have an atheist friend who trusted me, I might well conclude that the right thing for me to do is to testify to having seen some miracle that I haven't in fact seen. (I might try to limit the extent of the lie, for instance by choosing some miracle that I read about and that I believe happened, and lying only about whether I myself had witnessed it.) But of course if I am known to be the sort of person who lies to prevent great evils, my atheist friend would have no reason to trust me. On the other hand, if I am known to be the sort of person who would not lie even to save someone from eternal damnation (not that one's words literally do that—but they may in some way contribute, because God's grace works through them), then if I tell my atheist friend that I have seen something miraculous (not that any of the miracles that I've seen are going to be that convincing to the atheist, since they're all miracles of the beauty of nature, and miracles of moral transformation in myself—I am a sinner in a bad way, but you should just think what I'd have been like without Christ!), my friend may very well believe me. And likewise if I testify to less overtly miraculous things.

The above also gives the Christian reason to believe responsibly—this may or may not imply evidentialism.