Thursday, February 22, 2018

In practice priors do not wash out often enough

Bayesian reasoning starts with prior probabilities and gathers evidence that leads to posterior probabilities. It is occasionally said that prior probabilities do not matter much, because they wash out as evidence comes in.

It is true that in the cases where there is convergence of probability to 0 or to 1, the priors do wash out. But much of our life—scientific, philosophical and practical—deals with cases where our probabilities are not that close to 0 or 1. And in those cases priors matter.

Let’s take a case that clearly matters: climate change. (I am not doing this to make any first-order comment on climate change.) The 2013 IPCC report defines several likelihood levels:

  • virtually certain: 99-100%

  • very likely: 90-100%

  • likely: 66-100%

  • about as likely as not: 33-66%

  • unlikely: 0-33%

  • very unlikely: 0-10%

  • exceptionally unlikely: 0-1%.

They then assess that a human contribution to warmer and/or more frequent warm days over most land areas was “very likely”, and no higher likelihood level occurs in Table SPM.1 of their Summary for Policymakers. Let’s suppose that this “very likely” corresponds to the middle of its range, namely a credence of 0.95. How sensitive is this “very likely” to priors?

On a Bayesian reconstruction, there was some actual prior probability p0 for the claim, which, given the evidence, led to the posterior of (we’re assuming) 0.95. If that prior probability had been lower, the posterior would have been lower as well. So we can ask questions like this: How much lower than p0 would the prior have had to be for…

  • …the posterior to no longer be in the “very likely” range?

  • …the posterior to fall into the “about as likely as not” range?

These are precise and pretty simple mathematical questions. The Bayesian effect of evidence is purely additive when we work with log odds instead of probabilities, i.e., with log p/(1 − p) in place of p, so a difference in prior log odds generates an equal difference in posterior log odds. We can thus get a formula for how a change in the prior translates into a change in the posterior. Given an actual posterior of q0 and an actual prior of p0, to have got a posterior of q1, the prior would have to have been (1 − q0)p0q1/[(q1 − q0)p0 + (1 − q1)q0], or so says Derive.
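
For concreteness, here is a minimal Python sketch of that calculation (the helper names are just for illustration). It checks the closed-form formula against the log-odds form of the update:

```python
import math

def odds(p):
    """Convert a probability to the corresponding odds p / (1 - p)."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability o / (1 + o)."""
    return o / (1 + o)

def required_prior(p0, q0, q1):
    """Prior that would have been needed to reach posterior q1,
    given that the actual prior p0 led to the actual posterior q0."""
    return (1 - q0) * p0 * q1 / ((q1 - q0) * p0 + (1 - q1) * q0)

def required_prior_via_odds(p0, q0, q1):
    """Same quantity, computed from the additivity of the update in log odds:
    divide out the Bayes factor implied by p0 -> q0, starting from q1."""
    bayes_factor = odds(q0) / odds(p0)
    return prob(odds(q1) / bayes_factor)

# The two routes agree; e.g. with p0 = 0.10, q0 = 0.95, q1 = 0.90:
p0, q0, q1 = 0.10, 0.95, 0.90
assert math.isclose(required_prior(p0, q0, q1), required_prior_via_odds(p0, q0, q1))
print(round(required_prior(p0, q0, q1), 4))  # 0.05, as in the first bullet below
```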

We can now plug in a few numbers, all assuming that our actual confidence is 0.95 (the short script after this list reproduces these figures):

  • If our actual prior was 0.10, to leave the “very likely” range, our prior would have needed to be below 0.05.

  • If our actual prior was 0.50, to leave the “very likely” range, our prior would have needed to be below 0.32.

  • If our actual prior was 0.10, to get to the “about as likely as not” range, our prior would have needed to be below 0.01.

  • If our actual prior was 0.50, to get to the “about as likely as not” range, our prior would have needed to be below 0.09.
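
Here is a small check of the four thresholds, using the formula above with the actual posterior fixed at 0.95 (just a sketch; the variable names are for illustration):

```python
# Reproduce the four bullets: how low the prior would have had to be
# for the posterior to fall to 0.90 (leaving "very likely") or to 0.66
# (entering "about as likely as not"), given an actual posterior of 0.95.

def required_prior(p0, q0, q1):
    return (1 - q0) * p0 * q1 / ((q1 - q0) * p0 + (1 - q1) * q0)

q0 = 0.95
cases = [
    (0.10, 0.90, 'leave "very likely" from an actual prior of 0.10'),
    (0.50, 0.90, 'leave "very likely" from an actual prior of 0.50'),
    (0.10, 0.66, 'reach "about as likely as not" from an actual prior of 0.10'),
    (0.50, 0.66, 'reach "about as likely as not" from an actual prior of 0.50'),
]
for p0, q1, label in cases:
    print(label, "->", round(required_prior(p0, q0, q1), 2))
# prints 0.05, 0.32, 0.01 and 0.09, matching the four bullets above
```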

Now, we don’t know what our actual prior was, but we can see from the above that variation of priors well within an order of magnitude can push us out of the “very likely” range and into the merely “likely”. And it seems quite plausible that the difference between the “very likely” and merely “likely” matters practically, given the costs involved. And a variation in priors of about one order of magnitude moves us from “very likely” to “about as likely as not”.

Thus, as an empirical matter of fact, priors have not washed out in the case of global warming. Of course, if we observe long enough, eventually our credence about global warming is likely to converge to 1. But by then it will be too late for us to act on that evidence!

And there is nothing special about global warming here. Plausibly, many scientific and ordinary beliefs that we need to act on have a confidence level of no more than about 0.95. And so priors matter, and can matter a lot.

We can give a rough estimate, using the IPCC likelihood classifications, of how differences in priors translate into differences in posteriors. Roughly speaking, a change between one category and the next (e.g., “exceptionally unlikely” to “very unlikely”) in the priors results in a change between one category and the next (e.g., “likely” to “very likely”) in the posteriors.
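
Here is a rough numerical sketch of that claim. It assumes a particular strength of evidence, namely the Bayes factor implied by moving from a prior of 0.10 to a posterior of 0.95, and representative priors picked from each band, so different choices will shift the details:

```python
# Rough illustration: hold the evidence fixed at the Bayes factor implied by
# going from a prior of 0.10 to a posterior of 0.95, and see where representative
# priors drawn from successive IPCC bands send the posterior.

def posterior(prior, bayes_factor):
    post_odds = prior / (1 - prior) * bayes_factor
    return post_odds / (1 + post_odds)

bayes_factor = (0.95 / 0.05) / (0.10 / 0.90)  # about 171

for band, prior in [("exceptionally unlikely", 0.005),
                    ("very unlikely", 0.05),
                    ("unlikely", 0.20),
                    ("about as likely as not", 0.50)]:
    print(band, prior, "->", round(posterior(prior, bayes_factor), 3))
# posteriors of about 0.46, 0.90, 0.98 and 0.99: each step up in the prior band
# raises the posterior by very roughly one IPCC band
```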

The only cases where priors have washed out are those where our credences have converged very close to 0 or to 1. There are many scientific and ordinary claims in this category. But not nearly enough for us to be satisfied. We do need to worry about priors, and we had better not be subjective Bayesians.

1 comment:

Alexander R Pruss said...

This post of mine neglects an important thing discussed here: http://alexanderpruss.blogspot.com/2018/02/more-on-wobbling-of-priors.html