Thursday, January 29, 2015

Wanting to be even more sure

We like being sure. No matter how high our confidence, we have a desire to be more sure, which taken to an extreme becomes a Cartesian desire for absolute certainty. It's tempting to dismiss the desire for greater and greater confidence, when one already has a very high confidence, as irrational.

But the desire is not irrational. Apart from certain moral considerations (e.g., respecting confidentiality), a rational person does not refuse costless information (pace Lara Buchak's account of faith). No matter how high my confidence, as long as it is less than 100%, I may be wrong, and by closing my ears to free data I close myself to being shown to have been wrong, i.e., I close myself to truth. I may think this is not a big deal. After all, if I am 99.9999% sure, then I will think it quite unlikely that I will ever be shown to have been wrong. For to be shown to be wrong, I have to actually be wrong ("shown wrong" is factive), and I think the probability that I am wrong is only 0.0001%. Moreover, even if I'm wrong, quite likely further evidence won't get me the vast distance from being 99.9999% sure to being unsure. So it seems like no big deal to reject new data. Except that it is. First, I have lots of confident beliefs, and while it is unlikely for any particular one of my 99.9999%-sure beliefs to be wrong, the probability that some one of them is wrong is quite a bit higher. And, second, I am a member of a community, and for Kantian reasons I should avoid epistemic policies that make an exception of myself. And of course I want others to be open to evidence even when 99.9999% sure, if only because sometimes they are 99.9999% sure of the negation of what I am 99.9999% sure of!
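To put a rough number on the first of those points (the figures are arbitrary, just for illustration): with enough independent beliefs each held at 99.9999% confidence, the chance that at least one of them is wrong stops being negligible.

```python
# Rough illustration with arbitrary numbers: many independent 99.9999%-sure
# beliefs still leave a non-negligible chance that at least one is false.
n = 100000                        # a hypothetical number of such beliefs
p_each_right = 0.999999
print(1 - p_each_right ** n)      # about 0.095, i.e. roughly a 10% chance of at least one error
```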

So we want rational people to be open to more evidence. And this puts a constraint on how we value our levels of confidence. Let's say that I do value having at least 99.9999% confidence, but above that level I set no additional premium on my confidence. Then I will refuse costless information when I have reached 99.9999% confidence. I will even pay (perhaps a very small amount) not to hear it! For there are two possibilities. The new evidence might increase my confidence or it might decrease it. If it increases it, I gain nothing, since I set no additional premium on higher confidence. If it decreases it, however, I am apt to lose (this may require some tweaking of the case). And a rational agent will pay to avoid a situation where she is sure to gain nothing and has a possibility of losing.
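Here is a minimal numerical sketch of that contrast, with made-up likelihoods of my own choosing: a value function that flattens out at the 99.9999% threshold makes a free observation look like a pure risk, while a strictly convex value function never does.

```python
# A minimal sketch, with made-up likelihoods: a value function that flattens
# out at the 99.9999% threshold makes free evidence a pure risk, while a
# strictly convex value function never does.

def posterior(prior, like_true, like_false, observed):
    """Bayesian update of the credence in H on a binary observation."""
    if observed:
        num, other = prior * like_true, (1 - prior) * like_false
    else:
        num, other = prior * (1 - like_true), (1 - prior) * (1 - like_false)
    return num / (num + other)

def expected_value_of_looking(prior, like_true, like_false, V):
    """The agent's own expectation of V(posterior) before seeing the data."""
    p_obs = prior * like_true + (1 - prior) * like_false   # prior predictive of a positive result
    return (p_obs * V(posterior(prior, like_true, like_false, True))
            + (1 - p_obs) * V(posterior(prior, like_true, like_false, False)))

prior = 0.999999
like_true, like_false = 0.9, 0.2          # hypothetical likelihoods of the observation

threshold_V = lambda r: 1.0 if r >= 0.999999 else 0.0   # no premium above the threshold
convex_V    = lambda r: r * r                            # a toy strictly convex value function

print(threshold_V(prior), expected_value_of_looking(prior, like_true, like_false, threshold_V))
# 1.0 vs. about 0.9: the thresholded agent would pay a little not to look.
print(convex_V(prior), expected_value_of_looking(prior, like_true, like_false, convex_V))
# V(prior) vs. something slightly larger: the convex agent always wants the data.
```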

So it's important that one's desire structure be such that it continues to set a premium on higher and higher levels of confidence. In fact, the desire structure should not only be such that one wouldn't pay to close one's ears to free data, but it should be such that one would always be willing to pay something (perhaps a very small amount) to get new relevant data.

Intuitively, this requires that we value a small increment in confidence more than we disvalue a small decrement. And indeed that's right.
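For illustration, with the same toy convex value function V(r) = r² as above (the particular choice doesn't matter), a small upward shift in credence is worth more than an equal downward shift costs, so even a fair 50/50 shift up or down has positive expected value:

```python
# With the toy convex value function V(r) = r*r, a small upward shift in
# credence is worth more than an equal downward shift costs, so a fair 50/50
# shift up or down by d has positive expected value.
V = lambda r: r * r
r, d = 0.9, 0.05
gain = V(r + d) - V(r)                      # 0.0925
loss = V(r) - V(r - d)                      # 0.0875
print(gain, loss, 0.5 * gain - 0.5 * loss)  # the fair shift is worth +0.0025
```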

So our desire for greater and greater confidence is indeed quite reasonable.

There is a lesson in the above for the reward structure in science. We should ensure that the rewards in science—say, publishing—do not exhibit thresholds, such as a special premium for a significance level of 0.05 or 0.01. Such thresholds in a reward structure inevitably reward irrational refusals of free information. (Interestingly, though, a threshold for absolute certainty would not reward irrational refusals of free information.)

I am, of course, assuming that we are dealing with rational agents, ones that always proceed by Bayesian update, but who are nonetheless asking themselves whether to gather more data or not. Of course, an irrational agent who sets a high value on confidence is apt to cheat and just boost her confidence by fiat.

Technical appendix: In fact, to ensure that I am always willing to pay some small amount to get more information, I need to set a value V(r) on the credence r in such a way that V is a strictly convex function. (The sufficiency of this follows from the fact that the evolving credences of a Bayesian agent are a martingale, and a convex function of a martingale is a submartingale. The necessity follows from some easy cases.)
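Here is a small Monte Carlo sketch of the sufficiency direction (a toy simulation of my own, with arbitrary prior and likelihoods): under the agent's own prior predictive, the average of V(credence) over many simulated histories of Bayesian updating does not decrease as the data come in.

```python
# A toy check of the submartingale point: Bayesian credences are a martingale
# under the agent's prior predictive, so E[V(credence_n)] is nondecreasing in n
# when V is convex. The prior and likelihoods below are arbitrary.
import random

def average_value_over_time(prior=0.6, p_true=0.8, p_false=0.3, steps=10, trials=100000):
    V = lambda r: r * r                    # the same toy convex value function
    random.seed(0)
    totals = [0.0] * (steps + 1)
    for _ in range(trials):
        h = random.random() < prior        # draw the hypothesis from the prior
        r = prior                          # the agent's credence in the hypothesis
        totals[0] += V(r)
        for n in range(1, steps + 1):
            x = random.random() < (p_true if h else p_false)   # a binary observation
            like_t = p_true if x else 1 - p_true
            like_f = p_false if x else 1 - p_false
            r = r * like_t / (r * like_t + (1 - r) * like_f)   # Bayesian update
            totals[n] += V(r)
    return [t / trials for t in totals]

print(average_value_over_time())   # increases (up to Monte Carlo noise) from 0.36 toward 0.6
```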

This line of thought now has a connection with the theory of scoring rules. A scoring rule measures our inaccuracy—it measures how far we are from truth. If a proposition is true and we assign credence r to it, then the scoring rule measures the distance between r and 1. Particularly desirable are strictly proper scoring rules. Now for any (single-proposition) scoring rule, we can measure the agent's own expectation as to what her score is. It turns out that the agent's expectation as to her score is a continuous, bounded, strictly concave function ψ(r) of her credence r and that every continuous, bounded, strictly concave function ψ defines a scoring rule such that ψ(r) is the agent's expectation of her score. (See this paper.) This means that if our convex value function V for levels of confidence is bounded and continuous—not unreasonable assumptions—then that value function V(r) is −ψ(r) where ψ(r) is the agent's expectation as to her score, given a credence of r, according to some strictly proper scoring rule.
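A concrete instance is the Brier score, just one example of a strictly proper scoring rule: if the proposition is true the inaccuracy is (1−r)², and if it is false it is r², so the agent's own expected inaccuracy is ψ(r) = r(1−r)² + (1−r)r² = r(1−r), which is bounded, continuous, and strictly concave. The corresponding value function V(r) = −ψ(r) = r² − r differs from the toy V(r) = r² above only by a linear term, which changes nothing in the argument.

```python
# The Brier score as a concrete strictly proper scoring rule: the agent's own
# expected inaccuracy at credence r is psi(r) = r*(1-r), and V(r) = -psi(r)
# is the corresponding strictly convex value of confidence.
psi = lambda r: r * (1 - r)**2 + (1 - r) * r**2   # simplifies to r * (1 - r)
V   = lambda r: -psi(r)                           # convex: equals r**2 - r
for r in (0.5, 0.9, 0.99, 0.999999):
    print(r, psi(r), V(r))
```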

In other words, assuming continuity and boundedness, the consideration that agents should value confidence in such a way that they are always willing to gather more data means that they should value their confidence in exactly the way they would if their assignment of value to their confidence were based on self-scoring their accuracy (i.e., calculating their own expectation of their score).

Interestingly, though, I am not quite sure that continuity and boundedness should be required of V. Maybe there is a special premium on certainty, so V is continuous within (0,1) (that's guaranteed by convexity) but has jumps—maybe even infinite ones—at the boundaries.
