Saturday, January 21, 2023

Knowing you will soon have enough evidence to know

Suppose I am just the slightest bit short of the evidence needed for belief that I have some condition C. I consider taking a test for C that has a zero false negative rate and a middling false positive rate—neither close to zero nor close to one. On reasonable numerical interpretations of the previous two sentences:

  1. I have enough evidence to believe that the test would come out positive.

  2. If the test comes out positive, it will be another piece of evidence for the hypothesis that I have C, and it will push me over the edge to belief that I have C.

To see that (1) is true, note that the test is certain to come out positive if I have C and has a significant probability of coming out positive even if I don’t have C. Hence, the probability of a positive test result will be significantly higher than the probability that I have C. But I am just the slightest bit short of the evidence needed for belief that I have C, so the evidence that the test would be positive (let’s suppose a deterministic setting, so we have no worries about the sense of the subjunctive conditional here) is sufficient for belief.

To see that (2) is true, note that given that the false negative rate is zero, and the false positive rate is not close to one, I will indeed have non-negligible evidence for C if the test is positive.
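
To make (1) and (2) concrete, here is a minimal numeric sketch with illustrative numbers that are not from the post: a belief threshold of 0.9, a prior of 0.89 that falls just short of it, a false positive rate of 0.5, and a false negative rate of zero.

```python
# Illustrative numbers (not from the post): belief threshold 0.9, a prior
# just short of it, a 50% false-positive rate, and no false negatives.
r = 0.9        # credence threshold for belief
prior = 0.89   # P(C): just short of the threshold
fp = 0.5       # P(positive | not-C): false-positive rate
fn = 0.0       # P(negative | C): false-negative rate

# (1) The test is certain to be positive given C, and has probability fp otherwise.
p_pos = (1 - fn) * prior + fp * (1 - prior)    # = 0.945, above the threshold

# (2) Bayes' theorem: credence in C after a positive result.
p_c_given_pos = (1 - fn) * prior / p_pos       # ~ 0.942, also above the threshold

print(f"P(positive) = {p_pos:.3f}, believe it: {p_pos >= r}")
print(f"P(C | positive) = {p_c_given_pos:.3f}, believe C: {p_c_given_pos >= r}")
```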

If I am rational, my beliefs will follow the evidence. So if I am rational, in a situation like the above, I will take myself to have a way of bringing it about that I believe, and do so rationally, that I have C. Moreover, this way of bringing it about that I believe that I have C will itself be perfectly rational if the test is free. For of course it’s rational to accept free information. So I will be in a position where I am rationally able to bring it about that I rationally believe C, while not yet believing it.

In fact, the same thing can be said about knowledge, assuming there is knowledge in lottery situations. For suppose that I am just the slightest bit short of the evidence needed for knowledge that I have C. Then I can set up the story such that:

  3. I have enough evidence to know that the test would come out positive,

and:

  4. If the test comes out positive, I will have enough evidence to know that I have C.

In other words, oddly enough, just prior to getting the test results I can reasonably say:

  5. I don’t yet have enough evidence to know that I have C, but I know that in a moment I will.

This sounds like:

  6. I don’t know that I have C but I know that I will know.

But (6) is absurd: if I know that I will know something, then I am in a position to know that the matter is so, since that I will know p entails that p is true (assuming that p doesn’t concern an open future). However, there is no similar absurdity in (5). I may know that I will have enough evidence to know C, but that’s not the same as knowing that I will know C or even be in a position to know C. For it is possible to have enough evidence to know something without being in a position to know it (namely, when the thing isn’t true or when one is Gettiered).

Still, there is something odd about (5). It’s a bit like the line:

  7. After we have impartially reviewed the evidence, we will execute him.

Appendix: Suppose the threshold for belief or knowledge is r, where r < 1. Suppose that the false-positive rate for the test is 1/2 and the false-negative rate is zero. If E is a positive test result, then P(C|E) = P(C)P(E|C)/P(E) = P(C)/P(E) = 2P(C)/(1+P(C)). It follows by a bit of algebra that if my prior P(C) is more than r/(2−r), then P(C|E) is above the threshold r. Since r < 1, we have r/(2−r) < r, and so the story (either in the belief or knowledge form) works for the non-empty range of priors strictly between r/(2−r) and r.
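
As a quick check of the appendix, here is a minimal sketch, assuming an illustrative threshold of r = 0.9 and the false positive rate of 1/2 used above: any prior strictly between r/(2−r) ≈ 0.818 and r is short of the threshold before the test but exceeds it after a positive result.

```python
# Numerical check of the appendix, with an illustrative threshold r = 0.9
# and the false-positive rate of 1/2 assumed there.
r = 0.9
lower = r / (2 - r)                      # ~ 0.818, and indeed below r since r < 1

for prior in (0.82, 0.85, 0.89):         # priors strictly between r/(2-r) and r
    p_pos = (1 + prior) / 2              # P(positive) = P(C) + (1/2) P(not-C)
    posterior = 2 * prior / (1 + prior)  # P(C | positive), as derived above
    print(f"prior {prior:.2f} < r, "
          f"P(positive) = {p_pos:.3f} >= r: {p_pos >= r}, "
          f"posterior = {posterior:.3f} >= r: {posterior >= r}")
```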

4 comments:

  1. I’d take this as an argument against the view that belief is (or is justified by) credence above a threshold.

    But that view seems implausible in any case. It’s pretty much only in gambling setups with well-defined chances that we have sharp credences. But we all have vast numbers of beliefs (taken in an informal ‘folk’ sense). So it seems that some other approach is needed.

    A thorough Bayesian would be untroubled by this example. She would have her credences, and that would be enough. She would not care whether they exceeded some arbitrary threshold for belief. She would be similarly untroubled by the lottery paradox. But no one in practice is a thorough Bayesian, or even close. So again, it seems that some other approach is needed.

  2. Ian:

    I don't think the worries about sharpness of credences apply. For we can suppose that the cases we apply the argument to are precisely ones with sufficiently sharp credences. They might be cases concerning gambling scenarios, or they could be medical cases where the only relevant data one has is statistics from the most recent publications on the subject.

    I think all we really need for the argument is that in certain kinds of cases meeting a credence threshold is what makes the difference between not having a belief (or a justified belief) and having a belief (or a justified belief).

    By the way, suppose we think that the argument doesn't work for sharpness reasons. Let's say that u is a level of sharpness such that it only makes sense to say that you've met the threshold if you are at r+u or higher, that you haven't met it if you are at r-u or lower, and that it's indeterminate if you're between r-u and r+u. Then working through the Bayesian details should provide one with an interesting joint constraint on r and u. And then we can ask whether it's reasonable to think that r and u actually satisfy that joint constraint.
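
    As a rough illustration of what such a constraint might look like (my own sketch, not from the discussion, keeping the appendix's false positive rate of 1/2): the argument needs a prior p with p ≤ r−u that still makes both the predicted positive result and the posterior land at r+u or higher.

    ```python
    # A rough sketch (my own, keeping the appendix's false-positive rate of 1/2
    # and zero false-negative rate) of the joint constraint on r and u.
    # The argument needs a prior p such that:
    #   p <= r - u                (determinately short of the threshold)
    #   (1 + p) / 2 >= r + u      (determinately believe the test will be positive)
    #   2 * p / (1 + p) >= r + u  (a positive result determinately clears the threshold)
    def argument_goes_through(r, u):
        p = r - u  # the highest determinately-sub-threshold prior
        return (1 + p) / 2 >= r + u and 2 * p / (1 + p) >= r + u

    for r, u in [(0.9, 0.01), (0.9, 0.02), (0.9, 0.05), (0.95, 0.01)]:
        print(f"r = {r}, u = {u}: works = {argument_goes_through(r, u)}")
    ```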

  3. To be clear, I don’t doubt that the example works. It does, and it illustrates a genuine problem with the view that belief is (or is justified by) credence above a threshold. Specifically, this violates an apparently natural reflection principle, that if I rationally believe that I will rationally come to believe, then I should believe now. Issues like this seem unavoidable with a threshold (other than 1). The lottery paradox illustrates a different one: violation of logical closure. Of course, you may be prepared to live with such issues - maybe beliefs don’t have to be logically consistent.


    My problem with the threshold approach is more basic. If I don’t have a credence, I can’t apply the threshold test. If I do have a credence, then noting whether it is above or below a threshold adds nothing useful. One response to this is straight Bayesianism, which rejects any role for belief (beyond credence 1). Another is that credence and belief are separate and related, but not straightforwardly. In the example, I would say that I believe the relevant medical information about the chance that I have C and about the reliability of the test, and set my credences accordingly.

  4. Ian,

    I hadn't thought about this in the context of reflection. But technically there is no counterexample here to van Fraassen's reflection principle, which only says that P(H | credence at t will be p) = p. My examples satisfy that, assuming full transparency about my own credences and full certainty that I will conditionalize (cf.: https://jonathanweisberg.org/pdf/C_R_and_SKv2.SP.pdf ).

    The original reflection principle implies that my credence should be the expected value of my future credences (at a fixed future time; we might also generalize to any martingale stopping time). This is not a problem in my cases, because although it is likely that my future credence will be bigger than it is now, there is a small chance that my future credence will plummet to zero due to a negative test result, as William notes.
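
    Here is a minimal sketch of that point, with an illustrative prior of 0.89 and the appendix's false positive rate of 1/2: the likely posterior is higher than the prior, but the probability-weighted average of the possible posteriors equals the prior.

    ```python
    # Illustrative check (numbers mine) that the current credence equals the
    # expected value of the future credence, i.e. the reflection/martingale point.
    prior = 0.89
    p_pos = (1 + prior) / 2              # probability of a positive result (fp = 1/2, fn = 0)
    post_pos = 2 * prior / (1 + prior)   # credence after a positive result (above the prior)
    post_neg = 0.0                       # a negative result rules C out (no false negatives)

    expected_future = p_pos * post_pos + (1 - p_pos) * post_neg
    print(post_pos > prior)                      # True: the likely outcome raises my credence
    print(abs(expected_future - prior) < 1e-12)  # True: on average it stays exactly put
    ```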

    Upon further thought, I think it can be quite reasonable to say: "I know that tomorrow I will have knowledge-level evidence for p, but I don't yet." And if so, then my subsequent post on squishiness, even if technically correct, is moot.
