I suspect that sometimes we can just see that our priors were wrong, and we can see it in a way that outstrips the speed of Bayesian conditionalization. We can just see where the Bayesian conditionalizations are heading—and jump there.

For instance, suppose I know there is a hidden regular polygon on a page. I have a special device that I can point at any spot on the page: it makes a black dot if the spot is within the polygon and a yellow dot otherwise. I have no information on the number of sides of the polygon, so I assign some priors, like maybe 0.0000001 for a triangle, 0.0000001 for a square, and so on, eventually dropping off. But suppose that in fact it's a square. I put down a lot of random points. It might well happen that I can just *see* what the shape is, long before my priors converge.

If one is worried that the number of points is insufficient (it would be stupid to think it's a triangle after seeing only three points!), one can compare P(what one sees | n-gon) with P(what one sees | square) to check that one has enough points for confidence. But in all of this, one can, and perhaps should, sidestep the priors.
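This comparison can be sketched in a few lines (the particular shape placements and probe count below are my own assumptions, chosen for illustration). With the in/out device, P(what one sees | shape) is 1 if every probe's color matches membership in that shape and 0 otherwise, so comparing likelihoods amounts to a consistency check that never touches the priors:

```python
import random

def in_square(p):       # the true hidden shape: the square [0.3, 0.7]^2
    x, y = p
    return 0.3 <= x <= 0.7 and 0.3 <= y <= 0.7

def in_big_square(p):   # rival hypothesis: a larger square [0.2, 0.8]^2
    x, y = p
    return 0.2 <= x <= 0.8 and 0.2 <= y <= 0.8

def in_triangle(p):     # rival hypothesis: a triangle inside the true square
    x, y = p
    return 0.3 <= y <= 0.7 and 0.3 + (y - 0.3) / 2 <= x <= 0.7 - (y - 0.3) / 2

rng = random.Random(0)
probes = [(rng.random(), rng.random()) for _ in range(500)]
colors = [in_square(p) for p in probes]  # True = black dot, False = yellow

hypotheses = {"square": in_square, "big square": in_big_square,
              "triangle": in_triangle}
# Likelihood of the observed colors under each hypothesis: 1 iff consistent.
likelihood = {name: int(all(f(p) == c for p, c in zip(probes, colors)))
              for name, f in hypotheses.items()}
print(likelihood)
```

With enough random probes, every rival placement is ruled out by some probe whose color it gets wrong, and only the true square keeps likelihood 1.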

This is, of course, very much like the preceding post.

## 1 comment:

I think the argument fails when it is put in this way, because by the time you can see the shape, only a few shapes will be compatible with the arrangement of the colored dots.

Maybe a better setup would be as follows. You have a device that places a dot at a random location within the shape. In ten steps, say, maybe you can tell that the shape is a triangle. But if you're married to your priors, you'll need a lot more patience.
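The commenter's setup can be sketched numerically (the shape placements and the 0.99 threshold below are my own assumptions). Dots land uniformly inside the hidden shape, so P(dots | shape) = Area(shape)^(-n) when all n dots fit inside, and 0 otherwise. If the rival "square" contains the triangle, every dot is consistent with both shapes, and each dot just multiplies the triangle's weight by Area(square)/Area(triangle):

```python
# Hypotheses: the true triangle (0,0), (1,0), (0.5,1), area 0.5, versus the
# unit square, area 1, which contains it. Priors follow the post's example:
# a tiny 0.0000001 on the triangle.
priors = {"triangle": 1e-7, "square": 1 - 1e-7}
areas = {"triangle": 0.5, "square": 1.0}

post = dict(priors)
n = 0
while post["triangle"] < 0.99:
    n += 1
    # Bayesian update for one dot: multiply each hypothesis by 1/area
    # (its likelihood for a dot that lands in both shapes), renormalize.
    w = {h: post[h] / areas[h] for h in post}
    z = sum(w.values())
    post = {h: w[h] / z for h in w}

print(n)  # dots needed before the posterior finally favors the triangle
```

The maximum-likelihood shape is the triangle as soon as all the dots sit inside it, which is visible after a handful of dots; but with the 10^-7 prior the posterior only crosses 0.99 around dot 30.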
