A typical Bayesian update gets one closer to the truth in some respects and further from the truth in others. For instance, suppose that you toss a coin and get heads. That gets you much closer to the truth with respect to the hypothesis that you got heads. But it confirms the hypothesis that the coin is double-headed, and this likely takes you away from the truth. Moreover, it confirms the conjunctive hypothesis that you got heads and there are unicorns, which takes you away from the truth (assuming there are no unicorns; if there are unicorns, insert a “not” before “are”). Whether the Bayesian update is on the whole a plus or a minus depends on how important the various propositions are. If for some reason saving humanity hangs on your getting it right whether you got heads and there are unicorns, it may well be that the update is on the whole a harm.
(To see the point in the context of scoring rules, take a weighted Brier score which puts an astronomically higher weight on “you got heads and there are unicorns” than on all the other propositions taken together. As long as all the weights are positive, the scoring rule will be strictly proper.)
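To see that numerically, here is a minimal Python sketch (the probability and the weight are illustrative assumptions of mine, not part of the theorem): however large the positive weight on a proposition, the expected weighted Brier penalty is still minimized by reporting your true credence.

```python
import numpy as np

def expected_penalty(q, p, w):
    # Expected weighted Brier penalty for reporting credence q in a
    # proposition whose true probability is p, under weight w:
    # w * (p*(1-q)^2 + (1-p)*q^2)
    return w * (p * (1 - q) ** 2 + (1 - p) * q ** 2)

p, w = 0.3, 1e9  # true probability, and an astronomically large weight
qs = np.linspace(0, 1, 1001)
print(qs[np.argmin(expected_penalty(qs, p, w))])  # 0.3: honesty still optimal
```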
This means that there are logically possible update rules that do better than Bayesian update. (In my example, leaving the probability of the proposition “you got heads and there are unicorns” unchanged after learning that you got heads is superior, even though it results in inconsistent probabilities. And by the domination theorem for strictly proper scoring rules, there is a still better method that results in consistent probabilities.)
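Here is a rough sketch of the whole example in code. The priors (1% that the coin is double-headed, 0.1% for unicorns, independent of the coin) and the weight are made-up numbers of mine; the point is just that, under the heavily weighted score, the incoherent “freeze the conjunction” rule beats full Bayesian conditioning in the actual world.

```python
# Actual world assumed: heads, fair coin, no unicorns.
W = 1e12  # astronomical weight on the conjunction
truth   = {"heads": 1.0, "double-headed": 0.0, "heads and unicorns": 0.0}
weights = {"heads": 1.0, "double-headed": 1.0, "heads and unicorns": W}

def weighted_brier(credences):
    # Penalty: sum of w_i * (credence_i - truth_i)^2; lower is better.
    return sum(weights[p] * (credences[p] - truth[p]) ** 2 for p in truth)

p_dh, p_uni = 0.01, 0.001                  # priors: double-headed; unicorns
p_heads = p_dh * 1.0 + (1 - p_dh) * 0.5    # total probability of heads: 0.505
prior = {"heads": p_heads, "double-headed": p_dh,
         "heads and unicorns": p_heads * p_uni}

# Full Bayesian conditioning on "heads" (unicorns independent of the coin):
bayes = {"heads": 1.0, "double-headed": p_dh / p_heads,
         "heads and unicorns": p_uni}

# Rival rule: Bayesian everywhere except the conjunction, which stays at its
# prior value (incoherent, but closer to the truth where it matters most).
rival = dict(bayes)
rival["heads and unicorns"] = prior["heads and unicorns"]

print(weighted_brier(bayes), weighted_brier(rival))  # Bayes scores worse
```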
Imagine that you are designing a robot that maneuvers intelligently around the world. You could make the robot a Bayesian. But you don’t have to. Depending on what the prioritizations among the propositions are, you might give the robot an update rule that’s superior to a Bayesian one. If you have no more information than you endow the robot with, you can’t expect to be able to design such an update rule. (Bayesian update has optimal expected accuracy given the pre-update information.) But if you know a lot more than you tell the robot—and of course you do—you might well be able to.
Imagine now that the robot is smart enough to engage in self-reflection. It then notices an odd thing: sometimes it feels itself pulled to make inferences that do not fit with Bayesian update. It starts to hypothesize that by nature it’s a bad reasoner. Perhaps it tries to change its programming to be more Bayesian. Would it be rational to do that? Or would it be rational for it to stick to its programming, which in fact is superior to Bayesian update? This is a difficult epistemology question.
The same could be true for humans. God and/or evolution could have designed us to update on evidence differently from Bayesian update, and this could be epistemically superior (God certainly has superior knowledge; evolution can “draw on” a myriad of information not available to individual humans). In such a case, switching from our “natural update rule” to Bayesian update would be epistemically harmful—it would take us further from the truth. Moreover, it would be literally unnatural. But what does rationality call on us to do? Does it tell us to do Bayesian update or to go with our special human rational nature?
My “natural law epistemology” says that sticking with what’s natural to us is the rational thing to do. We shouldn’t redesign our nature.
Your intuition might be captured by allowing a Bayesian prior of 0, where 0 is the prior of there being unicorns (I suppose a two-headed fair coin would be P = 0 too).
Since the Bayesian update is P(A given B) = P(B given A) * P(A) / P(B), where P(A) is the prior and B is the result of our coin flip, if P(A) is 0 we still have 0 after the update.
Normally we don't want any of the probabilities to be 0, because if P(B) = 0 the equation has a zero divisor. But if we pick P(A) to be 0 and B to be the updating data, your intuition still works with Bayes.
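A one-line check of that fixed-point behavior (the numbers are arbitrary):

```python
def bayes_update(prior_A, p_B_given_A, p_B):
    # P(A given B) = P(B given A) * P(A) / P(B)
    return p_B_given_A * prior_A / p_B

# A zero prior is a fixed point: no evidence B with P(B) > 0 can raise it.
print(bayes_update(prior_A=0.0, p_B_given_A=1.0, p_B=0.5))  # 0.0
```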
But my prior for there being unicorns is NOT zero.
There are some attempts to use complex arithmetic and squared magnitudes, basically quantum-mechanical probability techniques, to allow a prior for A strictly between 0 and 1 to stay unchanged under an update on B that would change it with standard Bayes. They mostly seem too ad hoc to me (where is the mechanism for it?) but might work for your case.
I’m not following the example. Almost any change in our credences can take us further from the truth for some state of the world. So it’s not a reasonable objection to Bayesian updating that it sometimes does this.
But this does not matter much, because we rarely use strict Bayesian updating (except for textbook-style problems). If for no other reason, the maths is usually too hard. More fundamentally, we usually have to formulate hypotheses as we go. This raises the familiar problems of old evidence and ur-priors. I doubt that they have any good solution. If they do, it’s not part of standard Bayesianism proper.
“The scalded cat fears even cold water.” Learning the right lessons from experience is not so easy. God and/or evolution have made us amazingly good at it. Every one of our ancestors did it well enough to survive to reproductive age. So yes, our innate faculties are very much to be respected. And how our innate faculties work is very much worth studying.
I wouldn't see what the problem is if your credence in unicorns did not go up overall. So the ratio P(unicorns exist)/P(unicorns don't exist) would be unchanged.
HP: The point is that if the conjunction "you got heads and there are unicorns" is what is really important, then the fact that its probability went up is epistemically bad. Thus it CAN be epistemically bad for you to do a Bayesian update. There are many other, perhaps more straightforward, examples that show that. (E.g., you flip an ordinary coin, see heads, and get confirmation that the coin is double-headed. Now imagine that it's more important for you to know whether the coin is double-headed than what the result of the flip is.)
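A quick worked version of that coin case, with made-up numbers (a sketch, nothing exact):

```python
p_dh = 0.01                               # prior: coin is double-headed
p_heads = p_dh * 1.0 + (1 - p_dh) * 0.5   # total probability of heads
posterior = p_dh * 1.0 / p_heads          # Bayes' theorem, given heads
print(posterior)                          # ~0.0198: credence roughly doubles
# If the coin is in fact fair, that doubled credence in "double-headed" is
# movement away from the truth on the question that matters most here.
```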