Rough question: How much of a constraint does subjective Bayesianism put on the posteriors?
Let’s make the question precise. Suppose I started with some consistent and regular prior probability P on a countable sample space (regularity: every non-empty event gets positive prior probability), gathered some evidence E (a non-empty subset of the sample space), applied Bayesian conditionalization, and obtained a posterior probability distribution P_E.
Precise question: What constraints do the above claims put on P_E?
Well, here are some constraints that clearly follow from the above story:
1. P_E is a consistent probability distribution. (Bayesian conditionalization preserves the axioms of probability.)
2. P_E(E) = 1. (Obvious.)
3. If A ∩ E is non-empty, then P_E(A) > 0. (Follows from the regularity of the priors: regularity gives P(A ∩ E) > 0, and P_E(A) = P(A ∩ E)/P(E).)
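To see conditionalization and constraints (1)–(3) in action, here is a minimal Python sketch on a hypothetical four-point sample space; the particular prior, evidence E, and event A are my own illustrative choices, not from the post:

```python
from fractions import Fraction

# Hypothetical toy example: a four-point sample space with a regular prior
# (every point gets positive probability).
prior = {"w1": Fraction(1, 2), "w2": Fraction(1, 4),
         "w3": Fraction(1, 8), "w4": Fraction(1, 8)}
E = {"w1", "w3"}  # the evidence: a non-empty subset of the sample space

def conditionalize(p, evidence):
    """Bayesian conditionalization: P_E(w) = P(w)/P(evidence) for w in evidence, else 0."""
    p_of_e = sum(p[w] for w in evidence)
    return {w: (p[w] / p_of_e if w in evidence else Fraction(0)) for w in p}

posterior = conditionalize(prior, E)

# Check constraints (1)-(3):
assert sum(posterior.values()) == 1               # (1) a probability distribution
assert sum(posterior[w] for w in E) == 1          # (2) P_E(E) = 1
A = {"w3", "w4"}                                  # an event that meets E...
assert sum(posterior[w] for w in A) > 0           # (3) ...gets positive posterior probability
print(posterior)
```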
And it turns out that the above constraints are the only ones that my initial story places on P_E:
- Let P_E be any function satisfying (1)–(3). Then there is a consistent and regular probability function P such that P_E(A) = P(A|E) for all A.
Proof: Either E is all of the sample space or it isn’t. If E is all of the sample space, let P = P_E; by (3), P_E is itself regular, and we are done. Otherwise, let Q be some probability function that assigns a non-zero value to every point outside E and assigns zero to E (possible because the space is countable). Let P = (1/2)P_E + (1/2)Q. Then P is regular: every point in E gets positive probability from P_E by (3), and every point outside E gets positive probability from Q. Moreover, Q(E) = 0 and P_E(E) = 1, so P(E) = 1/2 and P(A ∩ E) = (1/2)P_E(A ∩ E) = (1/2)P_E(A); hence P(A|E) = P(A ∩ E)/P(E) = P_E(A) for all A.
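Here is a small numerical check of the construction in the proof, again on a hypothetical four-point space; the particular P_E and Q are my own illustrative choices:

```python
from fractions import Fraction

# Sample space and evidence (hypothetical example, chosen for illustration).
omega = {"w1", "w2", "w3", "w4"}
E = {"w1", "w2"}

# Any P_E satisfying (1)-(3): concentrated on E, positive on each point of E.
P_E = {"w1": Fraction(2, 3), "w2": Fraction(1, 3),
       "w3": Fraction(0), "w4": Fraction(0)}

# Q: zero on E, positive on every point outside E.
Q = {"w1": Fraction(0), "w2": Fraction(0),
     "w3": Fraction(1, 2), "w4": Fraction(1, 2)}

# The prior from the proof: P = (1/2) P_E + (1/2) Q.
P = {w: Fraction(1, 2) * P_E[w] + Fraction(1, 2) * Q[w] for w in omega}
assert all(P[w] > 0 for w in omega)  # P is regular on this space

# Conditionalizing P on E recovers the arbitrarily chosen P_E.
P_of_E = sum(P[w] for w in E)
recovered = {w: (P[w] / P_of_E if w in E else Fraction(0)) for w in omega}
assert recovered == P_E
```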
Thus, (1)–(3) are the only constraints subjective Bayesianism places on our posteriors.
I knew that subjective Bayesianism placed very little in the way of constraint on our posteriors, but I didn’t realize just how little.