Friday, July 19, 2013

Symmetry and Indifference

Suppose we have some situation where either event A or event B occurred, but not both, and the two events are on par: our epistemic situation is symmetric between them. Surely:

  1. One should not assign a different probability to A than to B.
After all, such a difference in probability would be unsupported by the evidence. It is tempting to conclude that:
  2. One should assign the same probability to A as to B.
From (2), the Principle of Indifference follows: if it's certain that exactly one of A1,...,An happened, and the epistemic situation is symmetric between them all, then by applying (2) to the different pairs, we conclude that they all have equal probability, and since the probabilities must add up to one, it follows that P(Ai)=1/n for all i.
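
Spelled out in symbols, the step from (2) to Indifference is just pairwise equality plus finite additivity over the partition:

    P(A_1) = P(A_2) = \cdots = P(A_n) = p, \qquad
    1 = P(A_1 \cup \cdots \cup A_n) = \sum_{i=1}^{n} P(A_i) = np,
    \qquad \text{so} \quad P(A_i) = \frac{1}{n}.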

But while (1) is very plausible (notwithstanding subjective Bayesianism), (2) does not follow from (1), and likewise Indifference does not follow. For (1) is compatible with not assigning any probability to either A or B. And sometimes that is just the right thing to do. For instance, in this post, A and D are on par, but the argument of the post shows that no probability can be assigned to either.

In fact, we can generalize (1):

  3. One should treat A probabilistically on par with B.
If one of the two has a probability, the other should have a probability, and the same one. If one of the two has an imprecise probability, the other should have one, and the same one. If one is taken as maximally nonmeasurable, so should the other one be. And even facts about conditional probabilities should be parallel.
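
As a toy rendering of (3) (the three-way representation below, with a sharp value, an interval, or None for "no probability at all", is only an illustrative sketch, not anything from the post):

    from typing import Tuple, Union

    # An event's probabilistic status: a sharp probability, an interval of
    # imprecise probability, or None for "no probability assigned at all"
    # (maximal nonmeasurability). Purely illustrative.
    Status = Union[float, Tuple[float, float], None]

    def on_par(a: Status, b: Status) -> bool:
        """Constraint (3): events that are epistemically on par must get
        exactly the same probabilistic treatment, whatever that is."""
        return a == b

    print(on_par(0.5, 0.5))                  # True: both sharp and equal
    print(on_par((0.0, 0.25), (0.0, 0.25)))  # True: same imprecise interval
    print(on_par(None, None))                # True: both left unassigned
    print(on_par(0.5, None))                 # False: unequal treatment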

Nonetheless, there is a puzzle. It is very intuitive that sometimes Indifference is correct. Sometimes, we correctly go from the fact that A and B are on par to the claim that they have the same probability. Given (1) (or (3)), to make that move, we need the auxiliary premise that at least one of A and B has a probability.

So the puzzle now is: Under what circumstances do we know of an event that it has a probability? (Cf. this post.)

4 comments:

lukebarnes said...

If we're using Bayes' theorem, things seem to get worse. We can write the probability of some theory T given some data D as

P(T | D) = P(D | T) P(T) / [sum P(D | Ti) P(Ti)]

where the Ti are a mutually exclusive, exhaustive list of possible theories (a partition): P(union Ti) = 1, and T is one of the Ti.

Thus, to calculate the probability of a certain theory given the evidence we have, we need to know the likelihoods and priors of the alternative theories to T. If we attempt this calculation at all, we usually assume that most of the terms in the sum are negligible.
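
As a minimal numerical sketch of that calculation (every number here is invented purely for illustration):

    # Hypothetical priors P(Ti) and likelihoods P(D | Ti) over a small
    # partition of theories; the values are made up for illustration only.
    priors      = {"T1": 0.45, "T2": 0.45, "T3": 0.10}
    likelihoods = {"T1": 0.80, "T2": 0.20, "T3": 0.50}

    # Bayes' theorem: P(T1 | D) = P(D | T1) P(T1) / sum_i P(D | Ti) P(Ti)
    evidence = sum(likelihoods[t] * priors[t] for t in priors)
    print(likelihoods["T1"] * priors["T1"] / evidence)  # about 0.72

The division above needs every term in the denominator to be a number.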

However, if any of the alternative theories fails to have a probability, then presumably the calculation of P(T | D) fails. I'm assuming that "anything times inscrutable equals inscrutable", and similarly for other arithmetic operations.

So if we're not careful, any old weird, inscrutable theory will ruin things for everyone, rendering all theories incapable of being tested. I assume this means that such theories should be excluded from the sum. Or something.

Alexander R Pruss said...

If we're lucky, the nonmeasurability of the theories could be bounded. For instance, let V be a Vitali subset of [0,1]. This is maximally nonmeasurable (all of its measurable subsets have measure zero and all of its measurable supersets have measure one). Let A = V intersect [0,1/4]. Then while P(A) is undefined as a number, we can confidently bound P(A) from above by 1/4. Or, in terms of interval-valued probability, we can say P(A) = [0,1/4].

So, if we're lucky, the nonmeasurable theories can be bounded above by measurable ones.

For instance, one might have two major scientific theories, T1 and T2, with priors of 0.45 each, and then a whole bunch of wacko theories. The wacko theories may be nonmeasurable, or E might be conditionally nonmeasurable on the wacko theories, but we can say that all together the wacko theories have probability no more than 0.10 (i.e., P(T3 or T4 or ...) <= 0.10, even though P(T3) and so on are individually undefined). And so we may be able to neglect them, depending on the other values in the ratio.
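
To see how the bounding might go numerically, here is a rough interval-style sketch (the likelihood values for T1 and T2 are invented; the only figures carried over from above are the 0.45/0.45 priors and the 0.10 cap on the wacko bundle):

    # Sharp priors for T1 and T2; the lumped wacko bundle W only gets the
    # bound P(W) <= 0.10, and P(E | W) is left wide open, so both are
    # treated as intervals. Likelihoods for T1 and T2 are invented numbers.
    def posterior_bounds(prior_T1, like_T1, prior_T2, like_T2,
                         prior_W, like_W):
        """Bounds on P(T1 | E) when the wacko term in the denominator is
        only known to lie between its interval endpoints."""
        numer = like_T1 * prior_T1
        fixed = numer + like_T2 * prior_T2
        low  = numer / (fixed + prior_W[1] * like_W[1])  # wacko term maximal
        high = numer / (fixed + prior_W[0] * like_W[0])  # wacko term zero
        return low, high

    print(posterior_bounds(0.45, 0.8, 0.45, 0.2,
                           prior_W=(0.0, 0.10), like_W=(0.0, 1.0)))
    # roughly (0.65, 0.80): the nonmeasurable bundle moves the posterior
    # only within a modest band, so it can sometimes be safely neglected.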

Anonymous said...

Alex, how does this (the failure of the principle of indifference) affect your thinking in the following passage from your Blackwell Companion: "Our empirical observations suggest that the probability of such events is very low. On the other hand, if we get our probabilities a priori from some sort of principle of indifference, supposing all arrangements to be equally likely, the messy PSR-violating arrangements would seem much more probable. How to explain the fact that bricks and photon clouds do not show up in the air for no discernible reason? I suggest that the best explanation is that the PSR holds, and that whatever beings there may be (e.g., God) who are capable of causing bricks and photon clouds to show up in the air for no discernible reason are in fact disposed not to do so."?

Alexander R Pruss said...

It makes the argument merely intuitive, and incapable of being formalized within standard probability theory.