Suppose I am considering two different hypotheses, and I am sure exactly one of them is true. On H, the coin I toss is chancy, with different tosses being independent, and has a chance 1/2 of landing heads and a chance 1/2 of landing tails. On N, the way the coin falls is completely brute and unexplained--it's "fundamental chaos", in the sense of my ACPA talk. So, now, you observe n instances of the coin being tossed, about half of which are heads and half of which are tails. Intuitively, that should support H. But if N is an option, if the prior probability of N is non-zero, we actually get Bayesian divergence as n increases: we get further and further from confirmation of H.
Here's why. Let E be my total evidence--the full sequence of n observed tosses. By Bayes' Theorem we should have:
P(H|E) = P(E|H)P(H)/[P(E|H)P(H) + P(E|N)P(N)].

But there is a problem: P(E|N) is undefined. What shall we do about this? Well, since it is completely undefined, we should take it to be an interval of probabilities, the full interval [0,1] from 0 to 1. The posterior probability P(H|E), then, will also be an interval between:

P(E|H)P(H)/[P(E|H)P(H) + 0·P(N)] = 1

and

P(E|H)P(H)/[P(E|H)P(H) + 1·P(N)] ≤ P(E|H)/P(N) = 2⁻ⁿ/P(N).

(Remember that if H is true, E is a sequence of n independent fair tosses, so P(E|H) = 2⁻ⁿ.) Thus, as the number of observations increases, the posterior probability for the "sensible" hypothesis H becomes an interval [a,1], where a is very small. But something whose probability spans almost the whole interval [0,1] is not rationally confirmed. So the more data we have, the further we are from confirmation.
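The divergence can be made concrete with a small numerical sketch. The function name here is my own illustration, and I assume a specific observed sequence of n tosses, so that P(E|H) = 2⁻ⁿ:

```python
from fractions import Fraction

def posterior_interval(n, prior_H):
    """Interval-valued posterior for H after n tosses, when the undefined
    P(E|N) is allowed to range over the full interval [0, 1].
    (Hypothetical helper, for illustration only.)"""
    prior_N = 1 - prior_H
    likelihood_H = Fraction(1, 2**n)  # P(E|H) = 2^-n for a specific sequence
    # Setting P(E|N) = 0 gives the upper endpoint of the posterior interval:
    upper = 1
    # Setting P(E|N) = 1 gives the lower endpoint:
    lower = (likelihood_H * prior_H) / (likelihood_H * prior_H + 1 * prior_N)
    return float(lower), upper

for n in (1, 10, 50):
    lo, hi = posterior_interval(n, Fraction(1, 2))
    print(f"n={n}: posterior interval for H is [{lo:.3g}, {hi}]")
```

With equal priors, the interval is roughly [0.333, 1] after one toss and [0.000976, 1] after ten: the lower endpoint shrinks toward 0 while the upper endpoint stays at 1, so more data widens the interval rather than narrowing it.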
This means that no-explanation hypotheses like N are pernicious to Bayesians: if they are not ruled out as having zero or infinitesimal probability from the outset, they undercut science in a way that is worse and worse the more data we get.
Fortunately, we have the Principle of Sufficient Reason, which can rule out hypotheses like N.