Wednesday, September 13, 2017

Probabilities and Boolean operations

When people question the axioms of probability, they may neglect to question the assumptions that if A and B each have a probability, then so do A-or-B and A-and-B. (Maybe this is because in textbooks those assumptions are often not enumerated in the neat list of the “three Kolmogorov axioms”, but are given in a block of text in a preamble.)

First note that as long as one keeps the assumption that if A has a probability, so does not-A, then by De Morgan’s laws any counterexample to conjunctions having a probability will yield a counterexample to disjunctions having a probability (since A-and-B is just the negation of (not-A)-or-(not-B)). So I’ll focus on conjunctions.

I’m inclined to think there is reason to question these assumptions, in fact two reasons. The first, which impresses me a bit less, is that limiting-frequency frequentism can easily violate them: it is easy to come up with cases where A-type events have a limiting frequency and B-type ones do too, but (A-and-B)-type ones don’t. I’ve argued before that this is so much the worse for frequentism, but now I am not so sure, in light of the second reason.
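To make the frequentist worry concrete, here is a minimal sketch of one such construction (my own illustration, not something in the post): the A-events happen on odd-numbered trials, and the B-events copy the A-events on some long blocks of trials and negate them on others. Each sequence on its own has limiting frequency 1/2, but the running frequency of the conjunction keeps oscillating and never converges.

```python
# A minimal sketch (illustration only): two 0-1 sequences whose individual
# limiting frequencies exist (both are 1/2), but whose conjunction has no
# limiting frequency.  B copies A on geometrically growing even-numbered
# blocks and negates it on odd-numbered ones, so the running frequency of
# A-and-B oscillates forever (roughly between 1/6 and 1/3) instead of
# converging.

def running_frequencies(num_blocks=20):
    n = count_a = count_b = count_ab = 0
    for k in range(num_blocks):
        block_len = 2 ** k            # blocks of geometrically growing length
        agree = (k % 2 == 0)          # B agrees with A on even blocks only
        for _ in range(block_len):
            n += 1
            a = n % 2                 # A happens on odd trials: frequency -> 1/2
            b = a if agree else 1 - a # B also has frequency -> 1/2
            count_a += a
            count_b += b
            count_ab += a * b         # the conjunction A-and-B
        print(f"after block {k:2d} (n = {n:7d}): "
              f"freq(A) = {count_a / n:.3f}, freq(B) = {count_b / n:.3f}, "
              f"freq(A-and-B) = {count_ab / n:.3f}")

running_frequencies()
```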

The second reason is cases like this. You have an event C that has no probability whatsoever (maybe it’s the event of a dart hitting a nonmeasurable set) and a fair indeterministic coin flip causally independent of C. Let H and T be the events of the coin landing heads and tails, respectively. Then let A be the event:

  • (H and C) or (T and not C).

Here’s an argument that P(A)=1/2. Imagine a coin with erasable heads and tails images, and imagine that prior to flipping the coin a trickster is going to decide, by some procedure or other, whether to erase the heads and tails images and redraw each on the opposite side. “Clearly” (as we philosophers say when we have no further argument!) as long as the trickster has no way of seeing the future, her trick will not affect the probabilities of heads or tails: she can’t make the coin any more or less likely to land heads by changing which side heads lies on. But that’s basically what’s going on in A: we are asking for the probability of heads, with the convention that if C doesn’t happen, then we’ll have relabeled the two sides.

Another argument that P(A)=1/2 is this (due to a comment by Ian). Either C happens or it doesn’t. If C happens, then A happens just in case the coin lands heads; if C doesn’t happen, then A happens just in case it lands tails. Either way, A has a chance 1/2 of happening.
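As a sanity check on both arguments, here is a small simulation sketch. The real C has no probability at all, so the simulation has to cheat: it treats C as an event with some fixed chance p, chosen purely for illustration, and checks that the frequency of A stays near 1/2 for every choice of p, so long as C is independent of the flip.

```python
import random

# Sanity-check sketch (assumption: C is modelled as an event with a fixed
# chance p purely for illustration -- the post's C has no probability at all).
# A = (H and C) or (T and not C); its frequency should be about 1/2 for every p.

def simulate(p, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        heads = rng.random() < 0.5      # fair coin, independent of C
        c = rng.random() < p            # stand-in for the probability-less event C
        a = (heads and c) or (not heads and not c)
        hits += a
    return hits / trials

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"p = {p:.1f}: observed frequency of A = {simulate(p):.3f}")
```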

So A has probability 1/2. But now what is the probability of A-and-H? It is the same as the probability of C-and-H, which by independence is half of the probability of C, and the latter probability is undefined. Half of something undefined is still undefined, so A-and-H has an undefined probability, even though A has a perfectly reasonable probability of 1/2.
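Spelling out the Boolean step behind that claim: since H and T are mutually exclusive, the T disjunct drops out when we conjoin with H, so

  • A-and-H = ((H and C) or (T and not C)) and H = H and C.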

A lot of this is nicely handled by interval-valued theories of probability. For we can assign to C the interval [0, 1], and assign to H the sharp probability [1/2, 1/2], and off to the races we go: A has a sharp probability as does H, but their conjunction does not. This is good motivation for interval-valued theories of probability.
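On the usual credal-set reading of those intervals, one can see the numbers directly: let the candidate sharp values of P(C) sweep the whole of [0, 1] (treating C as independent of the coin, purely as an illustration), and look at what values this induces for P(A) and for P(A-and-H). Here is a small sketch of that computation.

```python
# Sketch of the credal-set reading of the interval assignments (illustration
# only): let P(C) range over every value in [0, 1], keep P(H) = 1/2, and
# assume C independent of the coin.  P(A) collapses to the sharp value 1/2,
# while P(A and H) sweeps out the whole interval [0, 1/2].

candidate_ps = [i / 100 for i in range(101)]   # candidate sharp values for P(C)

p_a   = [0.5 * p + 0.5 * (1 - p) for p in candidate_ps]  # P(H)P(C) + P(T)P(not C)
p_a_h = [0.5 * p for p in candidate_ps]                   # P(A and H) = P(H)P(C)

print(f"P(A)       ranges over [{min(p_a):.2f}, {max(p_a):.2f}]")      # sharp [0.50, 0.50]
print(f"P(A and H) ranges over [{min(p_a_h):.2f}, {max(p_a_h):.2f}]")  # dilated [0.00, 0.50]
```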

3 comments:

  1. That is a good point, and one that I had missed. It could apply equally to epistemic probabilities. I may think that C has an objective probability, but have no idea what it is. Or I may have no idea at all about the probability of C, and no interest in it. All that matters is that C is independent of the coin flip.

  2. So-called ‘Dilation’ is a standard problem for interval theories.

    Suppose, in the above setup, that you will be told after the event whether the coin landed heads or tails, but not whether A occurred. Before the event you will give A the sharp probability of 1/2. After the event, if you learn H, you will give A the interval probability of [0,1]. The same if you learn T. Either way, more evidence makes you less confident. And how does this relate to the Reflection Principle?
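    Spelled out on the interval picture from the post: conditional on H, A is equivalent to C, so P(A | H) = P(C), which could be anything in [0, 1]; conditional on T, A is equivalent to not-C, so P(A | T) = 1 - P(C), again anything in [0, 1]. Both conditional intervals are the whole of [0, 1], even though the unconditional probability of A is sharply 1/2.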

    I’m not sure what to make of this. It is discussed in the literature.
