Consider this toy story about belief. It’s inconvenient to store probabilities in our minds. So instead of storing the probability of a proposition p, once we have evaluated the evidence to come up with a probability r for p, we store that we believe p if r ≥ 0.95, that we disbelieve p if r ≤ 0.05, and otherwise that we are undecided. (Of course, the “0.95” is only for the sake of an example.)
Now, here is a curious thing. Suppose I come across a belief p in my mind, having long forgotten the probability it came with, and I need to make some decision to which p is relevant. What probability should I treat p as having in my decision? A natural first guess is 0.95, which is my probabilistic threshold for belief. But that is a mistake. For if I follow the above practice perfectly, the average probability of my beliefs is greater than 0.95. I don’t just believe things that have probability 0.95. I also believe things that have probability 0.96, 0.97 and even 0.999999. Intuitively, however, I would expect there to be fewer and fewer propositions with higher and higher probability. So, intuitively, I would expect the average probability of a believed proposition to be somewhat above 0.95. How far above, I don’t know. And the average probability of a believed proposition is the probability that a believed proposition picked out of my mental hat will be true.
So even though my threshold for belief is 0.95 in this toy model, I should treat my beliefs as if they had a slightly higher probability than that.
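Here is a minimal simulation sketch of that point. The Beta(2, 2) distribution over evidential probabilities is purely an illustrative assumption (it makes higher probabilities rarer and rarer, as the story supposes), and the 0.95 threshold is the one from the example:

```python
# Minimal Monte Carlo sketch of the toy model.
# Assumption for illustration only: evidential probabilities are Beta(2, 2),
# so propositions with probabilities near 1 are increasingly rare.
import random

THRESHOLD = 0.95  # belief threshold from the toy example

def average_believed_probability(n=1_000_000, seed=0):
    rng = random.Random(seed)
    # Keep only the propositions that cross the belief threshold.
    believed = [r for r in (rng.betavariate(2, 2) for _ in range(n))
                if r >= THRESHOLD]
    return sum(believed) / len(believed)

if __name__ == "__main__":
    # Prints roughly 0.967: somewhat above the 0.95 threshold,
    # as the argument predicts.
    print(average_believed_probability())
```

Under that assumed distribution the average comes out around 0.967. The exact figure depends on the distribution, but any distribution that puts some weight above 0.95 will push the average above 0.95.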
This could provide an explanation for why people can sometimes treat their beliefs as backed by more evidence than they actually have, without positing any irrationality on their part (assuming that the practice of storing only disbelieve/suspend/believe rather than probabilities is not itself irrational).
Objection 1: I make mistakes. So I should take into account the fact that sometimes I evaluated the evidence wrong and believed things whose actual evidential probability was less than 0.95.
Response: We can both overestimate and underestimate probabilities. Without evidence that one kind of error is more common than the other, we can just ignore this.
Objection 2: We have more fine-grained data storage than disbelieve/suspend/believe. We confidently disbelieve some things, confidently believe others, are inclined or disinclined to believe some, etc.
Response: Sure. But the point remains. Let’s say that we add “confidently disbelieve” and “confidently believe”. It’ll still be true that we should treat the things in the “believe but not confidently” bin as having slightly higher probability than the threshold for “believe”, and the things in the “confidently believe” bin as having slightly higher probability than the threshold for “confidently believe”.
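A small variation on the earlier sketch illustrates the finer-grained case, again under the assumed Beta(2, 2) distribution; the 0.95 and 0.99 cut-offs for “believe” and “confidently believe” are hypothetical choices for illustration, not part of the post’s claims:

```python
# Sketch of the finer-grained version: two belief bins instead of one.
# Assumptions for illustration only: Beta(2, 2) evidential probabilities,
# "believe" at r >= 0.95 and "confidently believe" at r >= 0.99.
import random

def bin_averages(n=1_000_000, seed=0, lo=0.95, hi=0.99):
    rng = random.Random(seed)
    believe, confident = [], []
    for _ in range(n):
        r = rng.betavariate(2, 2)
        if r >= hi:
            confident.append(r)
        elif r >= lo:
            believe.append(r)
    return sum(believe) / len(believe), sum(confident) / len(confident)

if __name__ == "__main__":
    # Each bin's average sits a bit above that bin's own threshold.
    print(bin_averages())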