Here’s a problem for Bayesianism and/or our rationality that I am not sure what exactly to do about.
Take a proposition that we are now pretty confident of, but which was highly counterintuitive so our priors were tiny. This will be a case where we were really surprised. Examples:
1. Simultaneity is relative.
2. Physical reality is indeterministic.
Let’s say our current level of credence is 0.95, but our priors were 0.001. Now, here is the problem. Currently we (let’s assume) believe the proposition. But if our priors were 0.0001, our credence would have been only 0.65, given the same evidence, and so we wouldn’t believe the claim. (Whatever the cut-off for belief is, it’s clearly higher than 2/3: nobody should believe on tossing a die that they will get 4 or less.)
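To make the arithmetic concrete, here is a minimal Python sketch; the helper names and the fixed Bayes factor standing in for "the same evidence" are my own modelling assumptions:

```python
def posterior(prior, bf):
    """Odds-form Bayes: posterior odds = prior odds * Bayes factor."""
    odds = prior / (1 - prior) * bf
    return odds / (1 + odds)

def bayes_factor(prior, post):
    """The likelihood ratio needed to move the given prior to the given posterior."""
    return (post / (1 - post)) / (prior / (1 - prior))

bf = bayes_factor(0.001, 0.95)  # strength of the evidence: about 19,000
print(posterior(0.001, bf))     # 0.95: we believe
print(posterior(0.0001, bf))    # about 0.65: below any plausible belief cut-off
```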
It’s really hard for us to tell the difference in counterintuitiveness between a prior of 0.001 and one of 0.0001. Such differences are psychologically wobbly: if we just squint a little differently when mentally examining (1) and (2) a priori, our credence can go up or down by an order of magnitude. And when our priors are even lower, say 0.00001, an order of magnitude difference in counterintuitiveness is even harder to discern. Yet an order of magnitude difference in priors is what makes the difference between a believable 0.95 posterior and an unbelievable 0.65 posterior. And yet our posteriors, I assume, don’t wobble between the two.
In other words, the problem is this: it seems that the tiny priors have an order of magnitude wobble, but our moderate posteriors don’t exhibit a corresponding wobble.
If our posteriors were higher, this wouldn’t be a problem. At a posterior of 0.9999, an order of magnitude wobble in priors results in a wobble between 0.9999 and 0.999, and that isn’t very psychologically noticeable (except maybe when we have really high payoffs).
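Continuing the sketch above (again with an assumed fixed Bayes factor):

```python
def posterior(prior, bf):
    odds = prior / (1 - prior) * bf
    return odds / (1 + odds)

# Evidence strong enough to take a 0.001 prior to a 0.9999 posterior:
bf = (0.9999 / 0.0001) / (0.001 / 0.999)  # about ten million
print(posterior(0.0001, bf))              # about 0.999: hard to tell apart from 0.9999
```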
There is a solution to this problem. Perhaps our priors in claims aren’t tiny just because the claims are counterintuitive. It makes perfect sense to have tiny priors for reasons of indifference. My prior in winning a lottery with a million tickets and one winner is about one in a million, but my intuitive wobbliness on the prior is less than an order of magnitude (I might have some uncertainty about whether the lottery is fair, etc.). But mere counterintuitiveness should not lead to such tiny priors. The counterintuitive happens all too often! So, perhaps, our priors in (1) and (2) were, or should have been, more like 0.10. And now the wobble in the priors will probably be rather less: the prior might vary between 0.05 and 0.15, which results in a less noticeable wobble in the posteriors, namely between 0.90 and 0.97.
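In the same sketch, with the evidence now calibrated (my assumption) so that a 0.10 prior lands at the post’s 0.95 posterior:

```python
def posterior(prior, bf):
    odds = prior / (1 - prior) * bf
    return odds / (1 + odds)

bf = (0.95 / 0.05) / (0.10 / 0.90)   # Bayes factor of 171
for prior in (0.05, 0.10, 0.15):     # the wobble band for a moderate prior
    print(prior, round(posterior(prior, bf), 3))
# 0.05 -> 0.9, 0.10 -> 0.95, 0.15 -> 0.968: a much less noticeable wobble
```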
Simple hypotheses like (1) and (2), thus, will have at worst moderately low priors, even if they are quite counterintuitive.
And here is an interesting corollary. The God hypothesis is a simple hypothesis—it says that there is something that has all perfections. Thus even if it is counterintuitive (as it is to many atheists), it still doesn’t have really tiny priors.
But perhaps we are irrational in not having our posteriors wobble in cases like (1) and (2).
Objection: When we apply our intuitions, we generate posteriors, not priors. So our priors in (1) and (2) can be moderate, maybe even 1/2; but when we update on the counterintuitiveness of (1) and (2), we get something small, and when we then update on the physics data, we get to 0.95.
Response: This objection is based on a merely verbal disagreement. For whatever wobble there is in the priors on the account I gave in the post will correspond to a similar wobble in the counterintuitiveness-based update in the objection.
I think that the scientists who were actually involved must have had a many-stage process in which they came to think of such hypotheses as not particularly unlikely, before they got the final scientific evidence. (That is all a bit mysterious to me, though.) The general public never gets the scientific evidence, but moves its priors as a result of testimony. Given that some source is plausible, it can say one thing, and then another, and our beliefs will simply follow that. Perhaps the Bayesian side of testimony is the plausibility of the source?
Martin:
I don't think it matters what the steps in between are, whether they are multiple scientific steps or testimony, since the worries in the post depend only on the starting and ending points. Unless, of course, there is some violation of Bayesian updating somewhere in there.
There might be some such violation in cases of testimony. Intuitively, when an expert tells us that something is 95% likely, and we trust the expert, our credence just jumps to 95%, ignoring our priors.
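A small sketch of the contrast (the Bayes factor of 19 assigned to the expert’s report is just an illustrative assumption):

```python
def bayesian_update(prior, report_bf):
    """Fold the expert's report into the prior odds, as strict Bayesianism requires."""
    odds = prior / (1 - prior) * report_bf
    return odds / (1 + odds)

def jump(prior, expert_credence):
    """The intuitive behavior: adopt the expert's number, ignoring the prior."""
    return expert_credence

print(bayesian_update(0.01, 19))  # about 0.16: the prior still matters
print(jump(0.01, 0.95))           # 0.95, whatever the prior was
```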
This post of mine neglects an important thing discussed here: http://alexanderpruss.blogspot.com/2018/02/more-on-wobbling-of-priors.html
If an expert tells us that something is 95% likely, would our credence jump to 95%? I find that very counter-intuitive. My credence is not so transparent to me, nor so malleable. (I think that my credence that it was 95% likely would depend strongly on my credence in the expert, which is not quite the same.)
I think that testimony is a sort of evidence, and that the Bayesian side of it is the plausibility of the expert. We have reasons for believing an expert, which may or may not be Bayesian. (It is very similar to the usual sort of evidence, where we believe that our equipment is working well enough, and that the common knowledge base is sound enough, and so on.)
Alex,
Regarding the corollary, why would that be a simple claim?
The claim that something has all perfections already has some very restrictive assumptions on what a perfection is, and what sort of things exist. Else, we get things like: a perfection in a mosquito is (say) to have a perfectly functioning blood-sucking mechanism. But that's not a perfection in, say, a cricket. So, that's a perfection crickets cannot have (and, I would think, neither could God, if he existed!). So, you need to make an argument limiting perfections.
Moreover, the claim about moral perfection seems on its own extremely complex, and/or to pack very complex claims about what exists and what doesn't and/or about metaethics. For example, when I try to evaluate a claim like that, I consider other potential species (say, species #2044385 of aliens on another planet), who ponder the existence of something omnipotent and #2044385-morally perfect, where #2044385-morality is whatever they have instead of morality. My point is that the claim of moral perfection does not seem simpler. Rather, it depends on how complex #2044385-morality turns out to be, but the systems of rules, preferences, values, or whatever best describes morality can be widely varied, and it seems extremely probable to me (think of AI with different value structures) that there are plenty of alternatives less complex than, or similarly complex to, morality.
It seems to me you would first need to defend (successfully) a very complex metaethical theory in order to say that the claim of moral perfection is simple within that theory, and so on.
The concept of a perfection could be primitive. Then the theory is very simple.
Martin:
Well, at least my betting behavior would jump to betting as if it was 95%.
I don't think that it follows from the primitiveness of a human concept that the theory is simple (at least if I'm reading "primitive" right - but if I'm not, then it seems to me it's not primitive).
Primitive concepts can pick out very complex properties. But even if they don't (there is a question of what "simple" and "complex" mean in this context), it seems to me that a statement assigning properties picked out by primitive concepts may still have (and sometimes does have) extremely low priors.
Take, for example, the concept of red. That seems primitive to me. But hypothetical advanced squid-like aliens (or some other possible beings; this is only an example) could have a primitive concept of squid-red, which is different from red. And it does not seem more probable to me (as a prior, before factoring in the existence of humans, etc.) that a hugely powerful nonhuman agent would like redness or any human color property more than the preference of any other potential species with similarly complex color-like vision ("complex" here in terms of number of colors, subtleties of the differences, etc.). In practice, when evaluating probabilities we do include things like humans in the background (probably!), so that probably results in a higher assessment, but still extremely low in my view.
Addition: a potential objection is that the hypothesis is that God is morally perfect, not that he values moral goodness. I don't think this changes matters, but if you like, we can consider whether an object is red. It seems more probable a priori that it isn't, but also, it seems not less probable that it's some other possible color-like variant.
Another variant: the concept of beauty seems primitive. But it does not seem probable a priori that a random object (of which we have no other info) would be beautiful (one can also make an argument with squid-like aliens and a concept of squid-like beauty, etc.).
The property of being a perfection can be primitive, too. :-)
Then so can the possible property of squid-like-perfection! :-)
Maybe the “violation of Bayesian updating” happens when we learn (or devise, or come to grasp) a new theory or framework.
Before I learned about special relativity, I implicitly took it as obvious that space and time are independent - I didn’t even suspect that it could be doubted. But then I read stories about people in space ships trying to synchronize their clocks by signalling with torches… These stories showed that given a finite speed of communication, remote simultaneity is problematic. Further reading showed that this was not just a philosophical nit-pick, but could be built into a precise and consistent theory. So I reformulated my priors.
If we were smart enough to understand in advance all possible relevant theories, and all their implications, maybe we could be true Bayesians :-)
Ian: that makes sense.