Friday, May 24, 2024

Three or four ways to implement Bayesianism

We tend to imagine a Bayesian agent as starting with some credences, “the ur-priors”, and then updating these credences as the observations come in. It’s as if there were a book of credences in the mind, with entries constantly erased and re-written as new observations arrive. When we ask the Bayesian agent for their credence in p, they search through the credence book for p and read off the number written beside it.

In this post, I will assume the ur-priors are “regular”: i.e., everything contingent has a credence strictly between zero and one. I will also assume that observations are always certain.
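Here is a minimal sketch of this first, one-book implementation. It assumes a finite stock of possible worlds, with propositions modelled as sets of worlds; the class and its names are mine, invented for illustration, with the regularity and certainty assumptions above baked in.

```python
# Sketch of the first implementation: a single credence book, erased and
# re-written on each observation. Propositions are sets of possible worlds;
# this modelling choice is mine, not the post's.

class CredenceBookAgent:
    def __init__(self, ur_prior):
        # ur_prior: dict mapping each world to a strictly positive weight
        # (regularity: nothing contingent starts at credence zero or one).
        total = sum(ur_prior.values())
        self.book = {w: p / total for w, p in ur_prior.items()}

    def credence(self, proposition):
        # Look p up in the book and read off the number.
        return sum(p for w, p in self.book.items() if w in proposition)

    def observe(self, evidence):
        # Certain observation: conditionalize by erasing and re-writing
        # the book in place.
        pe = self.credence(evidence)
        self.book = {w: (p / pe if w in evidence else 0.0)
                     for w, p in self.book.items()}
```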

Still, the above need not be the right model of how Bayesianism is actually implemented. Another way is to have a book of ur-priors in the mind, together with an ever-growing mental book of observations. When you ask such a Bayesian agent what their credence in p is, they consult their book of ur-priors and their book of observations on the spot, and calculate the posterior for p.
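A sketch of this second implementation, in the same made-up worlds-as-sets model: the ur-priors stay fixed, observations are merely logged, and every query recomputes the posterior from scratch.

```python
# Sketch of the second implementation: a fixed book of ur-priors plus an
# ever-growing log of observations; posteriors are computed on demand.

class UrPriorPlusLogAgent:
    def __init__(self, ur_prior):
        total = sum(ur_prior.values())
        self.ur_priors = {w: p / total for w, p in ur_prior.items()}
        self.observations = []              # the ever-growing book of observations

    def observe(self, evidence):
        self.observations.append(evidence)  # just record it; nothing is recomputed

    def credence(self, proposition):
        # Recompute the posterior from the ur-priors and the whole log.
        live = [w for w in self.ur_priors
                if all(w in e for e in self.observations)]
        total = sum(self.ur_priors[w] for w in live)
        return sum(self.ur_priors[w] for w in live if w in proposition) / total
```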

The second way is not very efficient: you are constantly recalculating, and you need an ever-growing memory store for all the accumulated evidence. If you were making a Bayesian agent in software, the ever-changing credence book would be more efficient.

But here is an interesting way in which the second way would be better. Suppose you came to conclude that some of your ur-priors were stupid, through some kind of epistemic conversion experience, say. Then you could simply change your ur-priors without rewriting anything else in your mind, and all your posteriors would automatically be computed correctly as needed.
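To make that concrete, here is a toy run with the UrPriorPlusLogAgent sketched above; the worlds and weights are invented for illustration. The conversion is just an overwrite of the ur-prior book, and the next query comes out right with nothing else rewritten.

```python
# The worlds "w1", "w2", "w3" and the weights below are purely illustrative.
agent = UrPriorPlusLogAgent({"w1": 1, "w2": 1, "w3": 2})
agent.observe({"w1", "w2"})          # evidence ruling out w3
print(agent.credence({"w1"}))        # 0.5, from the old ur-priors

agent.ur_priors = {"w1": 3, "w2": 1, "w3": 4}   # the conversion: new ur-priors
print(agent.credence({"w1"}))        # 0.75, recomputed correctly as needed
```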

In the first approach, if you had an epistemic conversion, you’d have to go back, reverse-engineer your ur-priors from your current credences, and fix them up. Unfortunately, some priors will no longer be recoverable: from your posteriors after conditionalizing on E, you cannot recover your original priors for situations incompatible with E. And yet knowing what these priors were might be relevant to rewriting all your priors, including the ones compatible with E, in light of your conversion experience.
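The unrecoverability is easy to see with the CredenceBookAgent sketch above: two agents whose ur-priors differ only over the world that E rules out end up with identical credence books after conditionalizing on E, so the difference cannot be read back off the book.

```python
# Two one-book agents whose ur-priors differ only on the world excluded by E.
a = CredenceBookAgent({"w1": 1, "w2": 1, "w3": 2})
b = CredenceBookAgent({"w1": 1, "w2": 1, "w3": 6})
E = {"w1", "w2"}
a.observe(E)
b.observe(E)
print(a.book == b.book)   # True: the differing priors for w3 have been erased
```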

Here is a third way to implement Bayesianism that combines the best of the two approaches. You have a book of ur-priors and a book of current credences. Ordinary updates are made to the latter. In case of an epistemic conversion experience, you rewrite your book of ur-priors, conditionalize the new ur-priors on the conjunction of all the propositions that you currently have credence one in, and replace the contents of your credence book with the result.
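Here is a sketch of this third implementation in the same toy model; the class and method names are mine. The one detail worth flagging: given certain evidence and regularity, the conjunction of all propositions currently held with credence one corresponds, in the worlds model, to the set of worlds not yet ruled out, so that is what the rewritten ur-priors get conditionalized on.

```python
# Sketch of the third implementation: a book of ur-priors plus a book of
# current credences, with a separate routine for epistemic conversions.

class TwoBookAgent:
    def __init__(self, ur_prior):
        total = sum(ur_prior.values())
        self.ur_priors = {w: p / total for w, p in ur_prior.items()}
        self.book = dict(self.ur_priors)          # current credences

    def credence(self, proposition):
        return sum(p for w, p in self.book.items() if w in proposition)

    def observe(self, evidence):
        # Ordinary update: conditionalize the current credence book only.
        pe = self.credence(evidence)
        self.book = {w: (p / pe if w in evidence else 0.0)
                     for w, p in self.book.items()}

    def convert(self, new_ur_prior):
        # Epistemic conversion: rewrite the book of ur-priors, then
        # conditionalize it on the conjunction of everything currently
        # held with credence one (here: the worlds not yet ruled out),
        # and make the result the new credence book.
        total = sum(new_ur_prior.values())
        self.ur_priors = {w: p / total for w, p in new_ur_prior.items()}
        live = {w for w, p in self.book.items() if p > 0.0}
        pe = sum(p for w, p in self.ur_priors.items() if w in live)
        self.book = {w: (p / pe if w in live else 0.0)
                     for w, p in self.ur_priors.items()}
```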

We’re not exactly Bayesian agents. Insofar as we approximate being Bayesian agents, I think we’re most like the agents of the first sort, the ones with one book which is ever rewritten. This makes epistemic conversions more difficult to conduct responsibly.

Perhaps we should try to make ourselves a bit more like Bayesian agents of the third sort by keeping track of our epistemic history—even if we cannot go all the way back to ur-priors. This could be done with a diary.
