Tuesday, July 28, 2020

Independence and probability

Thesis: If we stick to real-numbered probabilities, genuine independence of events A and B cannot be defined in terms of any condition on the conditional probabilities P(X|Y), where X and Y are events constructed from A and B by the boolean operations of union, intersection and complement — even if the conditional probabilities are taken to be primitive rather than defined as ratios.

Argument: Suppose that three genuinely independent darts are thrown uniformly at the interval [0, 1], and consider the events:

  • A: the first dart hits [0, 1/2) or the third hits 1/2

  • B: the second dart hits [0, 1/2) or the third hits 1/2.

The events A and B are not genuinely independent: the third dart’s hitting 1/2 would guarantee that both A and B happen. But the third dart hits 1/2 with probability zero, so each boolean combination of A and B differs from the corresponding combination of A′ and B′ only by an event of probability zero. Hence it is easy to check that the conditional probabilities for any boolean combination of A and B are exactly the same as for the corresponding boolean combination of A′ and B′, where:

  • A′: the first dart hits [0, 1/2)

  • B′: the second dart hits [0, 1/2).

So, conditional probabilities can’t distinguish the non-genuinely independent pair A and B from the genuinely independent pair A′ and B′.
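The statistical indistinguishability of the two pairs can be illustrated with a quick Monte Carlo sketch (a hypothetical check of my own, using NumPy; variable names are mine). Since the probability-zero disjunct "the third dart hits exactly 1/2" essentially never fires, the estimated probabilities for A and B match those for A′ and B′, and the product rule P(A ∩ B) = P(A)P(B) holds to within sampling error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
d1, d2, d3 = rng.random((3, n))  # three independent uniform darts on [0, 1)

# A: first dart hits [0, 1/2), or third dart hits exactly 1/2
# B: second dart hits [0, 1/2), or third dart hits exactly 1/2
# The second disjunct has probability zero, so it never triggers in practice.
A = (d1 < 0.5) | (d3 == 0.5)
B = (d2 < 0.5) | (d3 == 0.5)

p_A, p_B, p_AB = A.mean(), B.mean(), (A & B).mean()
print(p_A, p_B, p_AB)  # each close to 0.5, 0.5, 0.25
```

The estimates agree (up to sampling error) with those for the primed events A′ and B′, which drop the third dart entirely — which is just the thesis: no condition on these probabilities can register the difference in genuine independence.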

Nor should we mind this fact. For genuine independence is a concept about causation or rationality, while probabilities give us a statistical concept.

1 comment:

David Duffy said...

This strikes me as an odd property to focus on. Usually we are reversing this, generating correlated random variables with specified properties by appropriately combining independent random variables. The identification of true independence (and true randomness) requires statistical testing of the properties of the collection of your events of interest (e.g., the distribution of moments), which will only ever be inductive (which I presume you would classify as rational).