I enter a room with four walls, three of them red, and the fourth white, except for a small red patch, about 1 cm^{2} in size. I also find a dart stuck in that small red patch. (This is of course a variant of Leslie's story about the wasp and the dart.) What should I think about what happened here?

I don't know. But I know that what I should *not* think is that the dart was tossed in an unbiased random direction. Rather, I would conclude that for some reason whatever process or agency propelled the dart had both a bias in favor of this wall *and* a bias in favor of red. Here's, very roughly, how one would make a Bayesian model of this. There is the unbiased randomness hypothesis *U*. Let's give it credence 1/2. And there are four relevant strong bias hypotheses: *B*_{1}, *B*_{2}, *B*_{3} and *B*_{4}, according to which the dart was tossed with a strong bias for wall 1, 2, 3 or 4, respectively, as well as a strong bias in favor of red. These four bias hypotheses are *prima facie* roughly equally likely. The probability that at least one of them is true isn't going to be all that high, but also isn't going to be all that low. There may well be reasons beyond our ken for bias in favor of one wall or another. Let's say that the probability that some one of these bias hypotheses is true is about 1/16. Thus the prior probability of *B*_{4} will be about 1/64, as the bias hypotheses are approximately equally likely.

But note that our evidence--the dart in red on wall 4--is much better predicted by *B*_{4} than by *U*. How much better? Well, if the walls are three by four meters in size (a reasonable set of dimensions for a wall), the probability of hitting our small red patch will be one in 480,000 on *U*, but relatively high (depending on what we mean by "strong" in "strong bias") on *B*_{4}: let's say 1/10. Then Bayes' Theorem tells us that we have extremely strong confirmation of *B*_{4}: the posterior odds of *B*_{4} over *U* are (1/64 ÷ 1/2) × (1/10 ÷ 1/480,000) = 1500 to 1, for a posterior probability of about 99.93%.
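The arithmetic can be checked with a short calculation. This is just a sketch using the numbers assumed above (3 m × 4 m walls, a 1 cm² patch, 1/10 as the likelihood under a strong bias), and it compares *B*_{4} against *U* alone, since the other bias hypotheses give the evidence negligible probability:

```python
# Priors: unbiased hypothesis U and the strong-bias hypothesis B_4.
p_U = 1 / 2
p_B4 = 1 / 64  # (1/16 that some bias hypothesis is true) split over 4 walls

# Likelihoods of the evidence: dart in the 1 cm^2 red patch on wall 4.
wall_area_cm2 = 300 * 400            # one 3 m x 4 m wall, in cm^2
lik_U = 1 / (4 * wall_area_cm2)      # 1/480,000: uniform over all four walls
lik_B4 = 1 / 10                      # assumed "strong bias" likelihood

# Bayes' Theorem, with the other bias hypotheses contributing ~0.
posterior_B4 = (p_B4 * lik_B4) / (p_B4 * lik_B4 + p_U * lik_U)
print(round(posterior_B4, 4))  # -> 0.9993
```

The posterior odds come out to exactly 1500 to 1, i.e. 1500/1501 ≈ 99.93%.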

Suppose we go a little more extreme. The room now has 10,000 walls, each of the same size as before (it's a gigantic myriagonal room), all but the last being completely red, with the last being white except for a small patch with the same dimensions as before. Then what happens? Well, our uniform randomness hypothesis gives an even smaller probability of hitting the red patch on the 10,000th wall--one in 1.2 billion--though it gives a very high probability of hitting red somewhere. On the other hand, now our bias hypotheses need to be split between 10,000 walls. Thus, the *B*_{10000} hypothesis will have a prior probability of 1/160,000, assuming that the probability that some one of the bias hypotheses is true is 1/16 as before. Plugging this into Bayes' Theorem, we again get about 99.93%, the same probability as before! (The reason is pretty simple: as we increase the number of walls, the prior odds go down and the likelihood ratio goes up in inverse proportion, leaving the posterior odds unchanged.)
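The same sketch scales to the myriagonal room, with the number of walls as a parameter (again comparing the last-wall bias hypothesis against *U* alone, under the assumptions above):

```python
def posterior_last_wall(n_walls: int) -> float:
    """Posterior for a strong bias toward the red patch on the last wall."""
    p_U = 1 / 2
    p_B_last = (1 / 16) / n_walls        # bias prior mass split over all walls
    wall_area_cm2 = 300 * 400            # one 3 m x 4 m wall, in cm^2
    lik_U = 1 / (n_walls * wall_area_cm2)  # uniform chance of hitting the patch
    lik_B_last = 1 / 10                  # assumed "strong bias" likelihood
    return (p_B_last * lik_B_last) / (p_B_last * lik_B_last + p_U * lik_U)

print(round(posterior_last_wall(4), 4))       # -> 0.9993
print(round(posterior_last_wall(10_000), 4))  # -> 0.9993
```

The prior odds shrink by a factor of 2500 while the likelihood ratio grows by the same factor, so the posterior odds stay at 1500 to 1 regardless of the number of walls.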

This is, of course, supposed to be a response to the objection to the fine-tuning argument based on the claim that for all we know, if the parameters defining the physics were *very* different from what they are, life might be quite likely (this is supposed to correspond to the three red walls), even though in the vicinity of the actual values of the parameters, life-permissiveness is rare (this is the white wall with a small red patch). The reasonable conclusion is that whatever cause generated our physics had a bias in favor of both (a) life and (b) the rough vicinity of our place in the space of possible parameter values. And we have an obvious explanation of why a cause might have bias (a): the cause is a morally good agent. But bias (b) is something we may not have an explanation for. Nonetheless, even without an explanation, we can have a good Bayesian argument.