Suppose there is a distinctive and significant value to knowledge. What I mean by that is that if two epistemic states are very similar in terms of truth, the level and type of justification, the subject matter and its relevance to life, the degree of belief, etc., but one is knowledge and the other is not, then the one that is knowledge has a significantly higher value because it is knowledge.
Plausibly, then, if we imagine Alice has some evidence for a truth p that is insufficient for knowledge, and slowly and continuously her evidence for p mounts up, when the evidence has crossed the threshold needed for knowledge, the value of Alice’s state with respect to p will have suddenly and discontinuously increased.
This hypothesis initially seemed to me to have an unfortunate consequence. Suppose Alice has just barely exceeded the threshold for knowledge of p, and she is offered a cost-free piece of information that may turn out to slightly increase or slightly decrease her overall evidence with respect to p, where the decrease would be sufficient to lose her knowledge of p (since she has only “barely” exceeded the evidential threshold for knowledge). It seems that Alice should refuse to look at the information, since the benefit of a slight improvement in credence if the evidence is non-misleading is outweighed by the danger of a significant and discontinuous loss of value due to loss of knowledge.
But that’s not quite right. For, from Alice’s point of view, because the threshold for knowledge is less than 1, there is a real possibility that p is false. And just as there may be a discontinuous gain in epistemic value when your (rational) credence becomes sufficient for knowledge of something that is in fact true, there may be a discontinuous loss of epistemic value when your credence becomes sufficient for knowledge of something false. (Of course, you can’t know anything false, but you can have evidence-sufficient-for-knowledge with respect to something false.) This is not implausible, and given it, by her own lights Alice also has a chance, in looking at the information, of a significant gain in value due to losing the illusion of knowledge of something false.
If we think that it’s never rational for a rational agent to refuse free information, then the above argument can be made rigorous to establish that any discontinuous rise in the epistemic value of credence at the point at which knowledge of a truth is reached is exactly mirrored by a discontinuous fall in the epistemic value of a state of credence where seeming-knowledge of a falsehood is reached. Moreover, the rise and the fall must be in the ratio 1 − r : r where r is the knowledge threshold. Note that for knowledge, r is plausibly pretty large, around 0.95 at least, and so the ratio between the special value of knowledge of a truth and the special disvalue of evidence-sufficient-for-knowledge of a falsehood will need to be at most 1:19. This kind of ratio seems intuitively implausible to me. It seems unlikely that the special disvalue of evidence-sufficient-for-knowledge of a falsehood is an order of magnitude greater than the special value of knowledge. This contributes to my scepticism that there is a special value of knowledge.
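To see where the 1:19 comes from, here is a minimal numeric sketch. The labels A and B for the jump sizes are mine, and the key assumption (motivated by the no-refusing-free-information argument above) is that at credence exactly r, the expected value of crossing the threshold is zero:

```python
# Back-of-the-envelope check of the ratio claim (a sketch; r, A, B are
# illustrative labels, not fixed by the argument).
r = 0.95                 # evidential threshold for knowledge
A = 1.0                  # discontinuous gain: knowledge of a truth
B = A * r / (1 - r)      # discontinuous loss: seeming-knowledge of a falsehood

# From Alice's point of view at credence r, p is true with probability r.
# Crossing the threshold then has expected value r*A - (1-r)*B, which
# indifference to free information forces to be zero:
expected_jump = r * A - (1 - r) * B
assert abs(expected_jump) < 1e-9

# With r = 0.95, the required ratio A : B is 1 : 19.
assert abs(B / A - 19) < 1e-9
```

Raising r only makes the required disproportion worse: at r = 0.99 the ratio would be 1:99.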
Can we rigorously model this kind of epistemic value assignment? I think so. Consider the following discontinuous accuracy scoring rule s1(x,t), where x is a probability and t is a 0 or 1 truth value:
s1(x,t) = 0 if 1 − r ≤ x ≤ r
s1(x,t) = a if (r < x and t = 1) or (x < 1 − r and t = 0)
s1(x,t) = −b if (r < x and t = 0) or (x < 1 − r and t = 1).
Suppose that a and b are positive and a/b = (1−r)/r. Then if my scribbled notes are correct, it is straightforward but annoying to check that s1 is proper, and it has a discontinuous reward a for meeting threshold r with respect to a truth and a discontinuous penalty −b for meeting threshold r with respect to a falsehood. To get a strictly proper scoring rule, just add to it any strictly proper continuous accuracy scoring rule (e.g., Brier).
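For readers who would rather not check the propriety claim by hand, here is a numerical spot-check (a sketch: s1 is coded straight from the definition above, and the grid search over credences is mine). Since s1 is constant on each of the three regions of reports, it suffices to compare the truthful report against one report from each region:

```python
# Numerical spot-check that s1 is proper when a/b = (1-r)/r.
r = 0.95
a = 1.0
b = a * r / (1 - r)   # ensures a/b = (1-r)/r

def s1(x, t):
    # Score for reporting credence x when the truth value t is 0 or 1.
    if 1 - r <= x <= r:
        return 0.0
    if (r < x and t == 1) or (x < 1 - r and t == 0):
        return a
    return -b

def expected_score(p, x):
    # Expected score of report x by the lights of actual credence p.
    return p * s1(x, 1) + (1 - p) * s1(x, 0)

# Propriety: reporting your actual credence weakly maximizes expected
# score.  The reports 0.0, 0.5, 1.0 cover the three constant regions.
for i in range(1001):
    p = i / 1000
    truthful = expected_score(p, p)
    for x in (0.0, 0.5, 1.0):
        assert truthful >= expected_score(p, x) - 1e-9
```

The check passes precisely because a/b = (1−r)/r: at p = r the expected score of claiming knowledge, r·a − (1−r)·b, is exactly zero, so a rational agent never profits by overshooting or undershooting her evidence.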