When I was a grad student, I was taught that in Bayesian epistemology the prior probabilities wash out as evidence comes in.
But that's false, or at least deeply misleading.
Suppose Sam and Jennifer start with significantly different priors but have the same relevant conditional probabilities and receive the same evidence. Then their posterior probabilities will always differ significantly. For instance, suppose Sam and Jennifer start with priors of 0.1 and 0.9 respectively for some proposition p. They then get a ton of evidence, so that Sam's posterior probability is 0.99. But Jennifer's posterior will be much higher, about 0.99988. Suppose Sam's reaches 0.999. Then Jennifer's will be about 0.999988. And so on: Jennifer's probabilities will always be much higher.
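A quick numerical check of the arithmetic above (a sketch, not from the original text), assuming both agents update on the same evidence via the odds form of Bayes' theorem: posterior odds = prior odds × likelihood ratio.

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

def posterior(prior, likelihood_ratio):
    """Bayes' theorem in odds form: multiply prior odds by the likelihood ratio."""
    return prob(odds(prior) * likelihood_ratio)

# Pick the likelihood ratio that takes Sam from 0.1 to 0.99:
lr = odds(0.99) / odds(0.1)          # = 99 * 9 = 891

sam = posterior(0.1, lr)             # 0.99
jennifer = posterior(0.9, lr)        # 8019/8020, about 0.99988

print(sam, jennifer)
```

Since Jennifer's prior odds (9) are 81 times Sam's (1/9), her posterior odds are always exactly 81 times Sam's, whatever the shared likelihood ratio.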
If the difference between 0.999 and 0.999988 seems small, that's because we're using the wrong scale. Notice that Sam assigns about 81 times as high a probability to not-p as Jennifer does.
And in fact, if with Turing we measure probabilities by their log-odds (log-odds(A) = log(P(A)/(1−P(A)))), then no matter how much shared evidence Sam and Jennifer collect, Jennifer's log-odds for p minus Sam's will always equal exactly ln 81 ≈ 4.39 (taking natural logarithms).
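The constancy of the gap can be sketched numerically (an illustrative Python snippet, not from the original): each shared piece of evidence multiplies both agents' odds by the same likelihood ratio, so it adds the same amount to both log-odds, leaving the difference fixed at ln 81.

```python
import math

def log_odds(p):
    """Natural-log odds: log(P / (1 - P))."""
    return math.log(p / (1 - p))

def update(p, likelihood_ratio):
    """Update a probability on evidence with the given likelihood ratio."""
    o = p / (1 - p) * likelihood_ratio
    return o / (1 + o)

sam, jennifer = 0.1, 0.9
gaps = []
for lr in [10, 100, 891, 5]:         # arbitrary shared pieces of evidence
    sam = update(sam, lr)
    jennifer = update(jennifer, lr)
    gaps.append(log_odds(jennifer) - log_odds(sam))

print([round(g, 4) for g in gaps])   # every gap is ln(81), about 4.3944
```

The likelihood ratios here are made up for illustration; any sequence of shared evidence gives the same result.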