Suppose that a credence greater than 95% suffices to count as a belief, and that you are a rational agent who tossed ten fair coins but did not see the results. Then you have at least 638 false beliefs about coin toss outcomes.
To see this, suppose for simplicity that all the coins came up heads. Let Tn be the proposition that the nth coin came up tails. Then any disjunction of five or more of the Tn has probability at least 1 − 2⁻⁵ = 31/32 ≈ 97%, which exceeds 95%, and so you believe every disjunction of five or more of the Tn. Each such belief is false, because all the coins in fact came up heads. There are 638 (pairwise logically inequivalent) disjunctions of five or more of the Tn. So, you have at least 638 false beliefs here (even if we are counting only up to logical equivalence).
Things are slightly more complicated if not all the coins come up heads, but exactly the same conclusion holds: take, for each coin, the single-coin-outcome proposition that is in fact false; each of the 638 disjunctions of five or more of these propositions is then something you believe, and each is false.
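For those who want to check the count, here is a minimal Python sketch (my own illustration, not part of the original argument) that tallies the pairwise inequivalent disjunctions of five or more of the Tn and confirms that each clears the 95% threshold:

```python
from math import comb

n_coins = 10
threshold = 0.95

# One disjunction per subset of {1, ..., 10} of size >= 5, and distinct
# subsets give pairwise logically inequivalent disjunctions.
count = sum(comb(n_coins, k) for k in range(5, n_coins + 1))
print(count)  # 638

# The credence in a disjunction of k independent fair-coin propositions
# is 1 - (1/2)^k; for k = 5 that is 31/32 = 0.96875, and it only grows
# with k, so every such disjunction clears the 95% threshold.
for k in range(5, n_coins + 1):
    assert 1 - 0.5 ** k > threshold
```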
But it seems that nothing went wrong in the coin toss situation: everything is as it should be. There is no evil present. So, it seems, reasonable false belief is not an evil.
I am not sure what to make of this conclusion, since it also seems to me that it is the telos of our beliefs to correctly represent reality, and a failure to do that seems an evil.
Perhaps the thing to say is this: the belief itself is bad, but having a bad belief isn’t always intrinsically bad for the agent? This seems strange, but I think it can happen.
Consider a rather different case. I want to trigger an alarm given the presence of radiation above a certain threshold. I have a radiation sensor that has practically no chance of being triggered when the radiation is below the threshold but has a 5% independent failure rate when the radiation is above the threshold. And a 5% false negative rate is not good enough for my application. So I build a device with five independent sensors, and have the alarm be triggered if any one sensor goes off. My false negative rate goes down to 3 ⋅ 10⁻⁷. Suppose now four sensors are triggered and the fifth is not. The device is working correctly and triggers the alarm, even though one sensor has failed. The failure of the sensor is bad for the sensor but not bad for the device.
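For concreteness, a tiny sketch of that arithmetic (assuming, as in the example, independent 5% per-sensor false-negative rates):

```python
failure_rate = 0.05  # per-sensor false-negative rate above the threshold
n_sensors = 5

# The device misses the radiation only if all five sensors fail at once,
# and the failures are assumed independent.
device_false_negative = failure_rate ** n_sensors
print(device_false_negative)  # 3.125e-07, i.e. roughly 3 * 10^-7
```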
Another move is to say that there is an evil present in the false belief case, but it’s tiny.
And yet another move is to deny that one should have a belief when the credence rises above a threshold.
Perhaps some beliefs are correct when they specify a range of possibilities, and become incorrect if they over-specify. This is also why over-specification in a regression model (using too many variables and over-fitting the data) can go wrong.
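To illustrate the regression analogy, here is a rough numpy sketch (my own construction; the linear ground truth, noise level, and polynomial degrees are arbitrary choices) in which the over-specified model fits the sample more closely but tracks the underlying relationship less well:

```python
import numpy as np

rng = np.random.default_rng(0)

# The underlying relationship is linear; the data are noisy samples of it.
def true_f(x):
    return 2.0 * x + 1.0

x_train = np.linspace(0, 1, 10)
y_train = true_f(x_train) + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = true_f(x_test)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

# Typically the degree-9 fit has the lower training error but the higher
# error against the true relationship: it has over-specified the data.
```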
Thorough Bayesians would have no use for beliefs. They would have credences for everything. Some propositions might have credences close to 1, but calling them beliefs would add nothing.
If the coin flips are taken as objectively chancy, then their probabilities are the best representation of reality, and the best guide to action, possible to someone who does not know the outcomes. Beliefs can add nothing to this.
In real life, we are not thorough Bayesians. We apply Bayesian reasoning when we can, i.e. rarely. More often we act on the basis of beliefs. And yes, it’s true: we see true belief as intrinsically good, but sometimes reasonable false beliefs work just as well in guiding action.
Here is a view I find congenial. I will not say I believe it yet. :-)
Mentally representing the world takes some effort and resources. We often economize on these resources and use only as much as seems like a good idea. We have different levels of precision for representing the world, which I will call (a) belief, (b) credence, and (c) probability distribution. (There might be more levels/types.) It is fallacious to think that any of these is equivalent to any of the others, or that we only use one type.
A belief is a simple yes/no view about whether a proposition obtains. We might modify this to include some in-between values for vague predicates.
A credence is a probability assignment. (Values could be of varying degrees of precision.) It is harder to hold, and process, a probability assignment than a simple belief, so we don’t do it as much. (Vagueness in belief is metaphysical; probability in a credence is epistemic.) Bayesianism is an epistemic ideal for creatures with unlimited cognitive resources, which we are not.
A probability distribution is a range of credences about a range of values as applied to an open sentence. Probability distributions entail credences but not vice versa. It is even harder to hold and process probability distributions than individual credences and so we do this very rarely indeed. Or maybe credences are actually a philosopher’s fiction; what we have in real life is almost always either a belief or a probability distribution. (I can think of few instances where I know the probability that X is F(n) but not the probability that X is F(n-1) or F(n+1).)
So I would reject the inference from credence-above-a-threshold to belief. If what you know is that a bunch of coins are flipped, what you rightly have in mind is a certain credence (indeed, a probability distribution), and you have made no errors.
Semantics here. I think at least some Bayesians would consider Bayesian belief to be the degree of personal confidence we have in a hypothesis, which IanS would call a credence, not a belief.
I suppose that I would call a belief in a probability distribution a belief too. Sometimes that kind of belief is all we tend to have, until we know more.