Two days ago, I posted on this topic based on an inequality that I claimed to have proved but that was just plain false. Oops! The surprising numerical results in that post were wrong as well.
But let's re-do it. Suppose that to count as being confident that p, you need P(p)≥α. But if you're confident that p, you still need not be confident that you will permanently remain confident, even assuming that you are sure you will remain rational. For if you are confident but barely so, there may well be a good chance that future data will push you below the confidence level.
Say that you are secure from future rational defeat when you are rationally confident that future data won't rationally push you below the confidence level. Assume you know for sure that:
1. Your probabilities are coherent.
2. They will remain coherent.
3. You won't forget data.
4. If N is the number of pieces of evidence you will receive in your (earthly) life, then E[N]<∞.
5. When you receive a piece of evidence in your life, you also know for sure that you're receiving a piece of evidence in your life.
Under these assumptions, if the confidence level is α, i.e., if P(p)≥α is what it is to be confident in p, then it is enough for security in p that P(p)≥1−(1−α)². And this inequality is sharp: for any P(p)<1−(1−α)², one can imagine an experiment that has a probability greater than 1−α of pushing your credence below α.
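Here is a quick numerical sketch of mine checking the sharpness claim, using the most damaging two-outcome experiment: one branch sends the credence to just below α, the other to 1. By the law of total probability, β = q·a + (1−q)·1, so the bad branch has probability q = (1−β)/(1−a).

```python
# Sketch (mine): worst-case two-outcome experiment for a prior credence beta.
# One branch sends the posterior to a (just below alpha), the other to 1.

def dip_probability(beta, a):
    """Probability of the posterior landing at a when the other branch is 1."""
    return (1 - beta) / (1 - a)

alpha = 0.95
security = 1 - (1 - alpha) ** 2  # = 0.9975

for beta in [0.96, 0.99, security, 0.999]:
    a = alpha - 1e-6  # posterior just below the confidence level
    q = dip_probability(beta, a)
    # Secure iff the dip probability is at most 1 - alpha, i.e., the
    # credence that no dip occurs is at least alpha.
    print(f"beta={beta:.4f}  dip probability={q:.4f}  secure: {q <= 1 - alpha}")
```

Only once β reaches 1−(1−α)² = 0.9975 does the worst-case dip probability fall to 1−α.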
Security from future rational defeat (in this life) normally requires quite a bit more than mere confidence. If your confidence level is 0.95, then the security level might need to be as high as 1−(0.05)²=0.9975, but no higher.
Of course, you might know for sure that no more information will come in. Then the security level will equal the confidence level. But normally you don't know that. So let's let the absolute security level be that level of credence which suffices for security regardless of what you know about the kinds of future data you might receive. That level just is 1−(1−α)² for a confidence level α. The graph shows the absolute security level for each given confidence level.
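Since the graph is just the formula plotted, here is a short matplotlib sketch of mine that regenerates it (the range of α shown is my choice):

```python
# Sketch (mine) regenerating the graph from the formula above:
# absolute security level 1 - (1 - alpha)^2 against the confidence level alpha.
import numpy as np
import matplotlib.pyplot as plt

alpha = np.linspace(0.5, 1.0, 200)  # range of confidence levels (my choice)
security = 1 - (1 - alpha) ** 2     # absolute security level

plt.plot(alpha, security, label="absolute security level")
plt.plot(alpha, alpha, linestyle="--", label="confidence level itself")
plt.xlabel("confidence level α")
plt.ylabel("credence")
plt.legend()
plt.show()
```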
Suppose the confidence level α is the credence you need for knowledge. Then absolute security ensures that, no matter what expectations you have about future information, you have enough credence to know that your knowledge won't be defeated.
Absolute security is something akin to moral certainty.
More generally, given a current credence of β, the probability that your credence will ever dip below α is at most (1−β)/(1−α). The above inequality for the security level is derived from this one. The inequality itself follows from Doob's optional stopping theorem and the fact that your conditional probabilities of p form a martingale when we condition on an increasing sequence of σ-fields (the increasingness just encodes the fact that you don't forget); we can then stop that martingale once we reach the end of the information or hit either α or 1.
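For the record, here is a sketch of that derivation in symbols. The notation X_n, T, D is mine, and I stop at the first dip below α rather than at α or 1, which gives the same bound:

```latex
% Sketch of the derivation (my notation). X_n is the credence in p after n
% pieces of evidence; by coherence and no forgetting it is a martingale.
\begin{align*}
X_n &= P(p \mid \mathcal{F}_n), \qquad X_0 = \beta,\\
T   &= \min\bigl(N,\ \inf\{n : X_n < \alpha\}\bigr), \qquad
D    = \{X_T < \alpha\}.
\end{align*}
% (X_n) is bounded in [0,1] and T <= N with E[N] < infinity, so Doob's
% optional stopping theorem gives E[X_T] = beta. On D we have X_T < alpha,
% and everywhere X_T <= 1, hence:
\begin{align*}
\beta = E[X_T] \le \alpha\,P(D) + 1\cdot\bigl(1 - P(D)\bigr)
      = 1 - (1-\alpha)\,P(D),
\qquad\text{so}\qquad
P(D) \le \frac{1-\beta}{1-\alpha}.
\end{align*}
% Demanding P(D) <= 1 - alpha (confidence that no dip occurs) then gives
% beta >= 1 - (1-alpha)^2, the security level above.
```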
Of course, maybe once again I made a mistake!
It seems pretty easy to imagine some very convincing evidence which could come our way to show that we are all living in a computer simulation. This entails that we are not absolutely secure in our knowledge that we are not all living in a computer simulation (right?). If something that basic is non-absolutely-secure, I wonder if the only truths about which we have absolute security are a priori truths and certain matters of introspection. Can you think of any others?
We can imagine this evidence, but it is very unlikely that we will get it.
Why? Well, it's extremely likely I have two hands. That's at least 0.9975, and probably way, way higher.
If I have two hands, any evidence to the contrary is misleading. But I am unlikely to get misleading evidence of such strength as to pull me too far down. That's because there are probabilistic bounds on how far misleading evidence is likely to push one. See for instance here.
And of course the result of this post shows that if I know for sure that my current credence is 0.9975, then my credence that I will never get evidence that pushes me below 0.95 should be at least 0.95.
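Spelling out that arithmetic via the inequality from the post, with β = 0.9975 and α = 0.95:

```latex
P(\text{credence ever dips below } 0.95)
  \;\le\; \frac{1-\beta}{1-\alpha}
  \;=\; \frac{1-0.9975}{1-0.95}
  \;=\; \frac{0.0025}{0.05}
  \;=\; 0.05.
```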
Thanks for the clarification. I think I may have misunderstood this sentence from the original post:
'So let's let the absolute security level be that level of credence which suffices for security regardless of what you know about the kinds of future data you might receive.'
I took this to mean that I have absolute security in my knowledge of P iff (I would still be secure in my knowledge of P even if nothing to which I had epistemic access counted as positive evidence against the view that I would, in the future, receive evidence that counts against P). Your reply shows me that this is not what you meant by this sentence, but I'm still not sure I'm tracking. Would it be fair to say that in the sentence I have quoted above, you mean that absolute security is the level of confidence such that, necessarily, if you have at least that level of confidence in any given proposition and you satisfy the stated conditions 1–5, then you have security with respect to that proposition?
The central results of this post will appear in "Being sure and being confident that you won't lose confidence", Logos and Episteme.