Wednesday, December 2, 2020

Another problem for infinitesimal probabilities

Here’s another problem with independence for friends of infinitesimal probabilities.

Let ..., X−2, X−1, X0, X1, X2, ... be a doubly infinite sequence of independent fair coin tosses. For i = 0, 1, 2, ..., define Ei to be heads if Xi and X−1−i are the same and tails otherwise.
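
For concreteness, here is a minimal Python sketch of the setup, truncated to the tosses X−n, ..., Xn−1 (the truncation and the names are mine, purely for illustration; the argument concerns the full doubly infinite sequence):

    import random

    def sample_tosses(n):
        # One fair toss X_i in {"H", "T"} for each index i = -n, ..., n-1.
        return {i: random.choice("HT") for i in range(-n, n)}

    def E(X, i):
        # E_i is heads exactly when X_i and X_{-1-i} agree.
        return "H" if X[i] == X[-1 - i] else "T"

    X = sample_tosses(1000)
    Es = [E(X, i) for i in range(1000)]  # E_0, ..., E_999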

Now define these three events:

  • L: X−1, X−2, ... are all heads

  • R: X0, X1, ... are all heads

  • E: E0, E1, ... are all heads.

Friends of infinitesimal probabilities insist that P(R) and P(L) are positive infinitesimals.

I now claim that E is independent of R, and the same argument will show that E is independent of L. This is because of the following principle:

  1. If Y0, Y1, ... is a sequence of independent random variables, and f and g are functions such that f(Yi) and g(Yi) are independent of each other for each fixed i, then the sequences f(Y0), f(Y1), ... and g(Y0), g(Y1), ... are independent of each other.

But now let Yi = (Xi, X−1−i). Then Y0, Y1, ... is a sequence of independent random variables, since the pairs draw on disjoint tosses. Let f(x, y) = x and let g(x, y) be heads if x = y and tails otherwise. Then it is easy to check that f(Yi) and g(Yi) are independent of each other for each fixed i. Thus, by (1), f(Y0), f(Y1), ... and g(Y0), g(Y1), ... are independent of each other. But f(Yi) = Xi and g(Yi) = Ei. So, X0, X1, ... and E0, E1, ... are independent of each other, and hence so are E and R.
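
The fixed-i check is a brute-force enumeration over the four equally likely values of Yi. A minimal Python sketch (the names are mine):

    from fractions import Fraction
    from itertools import product

    outcomes = list(product("HT", repeat=2))  # the four equally likely values of Y_i
    p = Fraction(1, len(outcomes))

    def f(x, y):
        return x  # f(Y_i) = X_i

    def g(x, y):
        return "H" if x == y else "T"  # g(Y_i) = E_i

    for a in "HT":
        for b in "HT":
            joint = sum(p for (x, y) in outcomes if f(x, y) == a and g(x, y) == b)
            p_f = sum(p for (x, y) in outcomes if f(x, y) == a)
            p_g = sum(p for (x, y) in outcomes if g(x, y) == b)
            assert joint == p_f * p_g  # P(f=a, g=b) = P(f=a) P(g=b)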

The same argument shows that E and L are independent.

Write AB for the conjunction of A and B and note that EL, ER and RL are the same event—namely, the event of all the coins being heads. Then:

  2. P(E)P(L) = P(EL) = P(RL) = P(R)P(L)

Since friends of infinitesimal probabilities insist that P(R) and P(L) are positive infinitesimals, P(L) is nonzero, and so in (2) we can divide both sides by P(L) and get P(E) = P(R). The same argument with L and R swapped shows that P(E) = P(L). So, P(L) = P(R).

But now let Xi* = Xi+1 (the sequence shifted by one place), and define L* to be the event of X−1*, X−2*, ... all being heads, and R* the event of X0*, X1*, ... all being heads. The exact same argument as above will show that P(L*) = P(R*). But friends of infinitesimal probabilities have to say that P(R*) > P(R) and P(L*) < P(L), since R is a proper subset of R* and L* is a proper subset of L, and so we have a contradiction given that P(L) = P(R) and P(L*) = P(R*).
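
Spelled out, the two strict inequalities are exactly where regularity (every nonempty event gets positive probability), which friends of infinitesimals endorse, does the work:

\[
R = \{X_0, X_1, \dots \text{ all heads}\} \subsetneq \{X_1, X_2, \dots \text{ all heads}\} = R^*,
\]
\[
L^* = \{X_0, X_{-1}, X_{-2}, \dots \text{ all heads}\} \subsetneq \{X_{-1}, X_{-2}, \dots \text{ all heads}\} = L,
\]

so that P(R) < P(R*) and P(L*) < P(L), and hence

\[
P(L) = P(R) < P(R^*) = P(L^*) < P(L),
\]

which is absurd.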

I think the crucial question is whether (1) is still true in settings with infinitesimal probabilities. I don’t have a great argument for it. It is, of course, true in classical probabilistic settings.

4 comments:

IanS said...

This is interesting. It illustrates the modelling issues mentioned in the other post.

Model the coin flips as independent, meaning that any random variable that depends only on the flips in some subset is independent of any random variable that depends only on the flips not in the subset. This applies to any subset of flips whatever, finite or infinite. Call this sort of independence ‘model’ independence. It reflects the intuitive idea of causal independence. It is possible to construct complete hyperreal probabilities that respect model independence. (Or so I believe; I couldn’t prove it myself.)

As an example, let Ln be ‘Heads for all flips with indices strictly less than n’ and Rn be ‘Heads for all flips with indices n and greater’. Then Ln and Rn are model independent. So it is true for any n that P(Ln).P(Rn) = P(Ln and Rn) = P(all Heads). (Note that P(Ln) decreases exponentially in n just as P(Rn) increases, keeping consistency.)

By contrast, Xn and En are ‘math’ independent (meaning that P(Xn and En) = P(Xn).P(En)) but not model independent (because both depend on flip n). The Xn are model independent for different n, as are the En. So (hyperreal) arithmetic guarantees that for finite n, (X0 and X1 … and Xn) is math independent of (E0 and E1 … and En). But without model independence, you cannot conclude that the same applies to the infinite conjunction.
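
For what it’s worth, the finite case can be checked by brute force. A Python sketch, enumerating all assignments of Heads/Tails to the flips with indices −(n+1), …, n (the truncation and the names are just for illustration):

    from fractions import Fraction
    from itertools import product

    n = 2
    indices = range(-(n + 1), n + 1)  # flips -(n+1), ..., n
    worlds = [dict(zip(indices, w)) for w in product("HT", repeat=len(indices))]

    def prob(event):
        # Uniform measure: every assignment of H/T to the flips is equally likely.
        return Fraction(sum(1 for X in worlds if event(X)), len(worlds))

    def A(X):
        return all(X[i] == "H" for i in range(n + 1))  # X_0 and X_1 ... and X_n

    def B(X):
        return all(X[i] == X[-1 - i] for i in range(n + 1))  # E_0 and E_1 ... and E_n

    assert prob(lambda X: A(X) and B(X)) == prob(A) * prob(B)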

As I see it, this example illustrates the standard infinitesimalist line that if you want to use infinitesimals, you have to pick a model and stick with it. It’s true that you can see the En as derived from a series of ‘virtual’ flips and you can argue that the virtual flips are similar to the modelled flips. In fact, you could build a model starting from the En and the non-negative Xn, modelled as causally independent, and treating the negative Xn as derived. But that would be a different model.

Alexander R Pruss said...

Ian:

First, the existence result can be proved using the method in the answer here: https://mathoverflow.net/questions/166060/finitely-additive-measures-on-mathbb-z-2-omega-with-invariance-and-independe
(In that answer, I end up with the standard part of the hyperreal measure in question.)

Now, the two kinds of independence are really interesting, and I am grateful to you for helping me see more clearly something that I've been groping towards.

The way I would prefer to think about it is that there is a strong independence, which is a kind of causal (or maybe rational, in the epistemic case) separation.

But then there is statistical independence. And one can have statistical independence despite causal dependence. Thus, I could generate two coin flips in this way. I toss a coin. That's my first coin flip. Then I roll a die. If the die is even, I reverse the coin. If the die is odd, I do nothing to the coin. The final position of the coin is my second coin flip. The two coin flips are statistically independent, but the second coin flip causally depends on the first.
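
To make this concrete, here is a brute-force check in Python (the encoding of the setup is just one way of doing it):

    from fractions import Fraction
    from itertools import product

    worlds = list(product("HT", range(1, 7)))  # (coin, die): 12 equally likely outcomes
    p = Fraction(1, len(worlds))

    def flip2(coin, die):
        # Reverse the coin if the die is even; leave it alone if the die is odd.
        if die % 2 == 0:
            return "T" if coin == "H" else "H"
        return coin

    for a in "HT":
        for b in "HT":
            joint = sum(p for (c, d) in worlds if c == a and flip2(c, d) == b)
            marg1 = sum(p for (c, d) in worlds if c == a)
            marg2 = sum(p for (c, d) in worlds if flip2(c, d) == b)
            assert joint == marg1 * marg2  # statistically independent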

So now your comment raises this interesting question: Is there a danger in intuitions about strong independence guiding one's thinking about statistical independence? And if so, does my principle (1) fall afoul of this danger?

I don't know.

IanS said...

Yes, there is a danger, and yes, (1) falls foul of it.

The post is in effect a proof of this. The argument, if it worked, would yield a mathematical inconsistency, not just an implausible conclusion. But the hyperreal probabilities are mathematically consistent (whatever you might make of them philosophically). So something must be wrong. Everything checks out (this is always a dangerous way to reason…), except the claimed independence of R and E (and of L and E), which follows from (1), so that must be the problem. Note also that taken purely mathematically, without the intuitive baggage which is based on finite cases, (1) does not seem obviously plausible.

Alexander R Pruss said...

The argument would only yield a mathematical inconsistency if "independent" in (1) is taken to have a specific mathematical meaning. But I don't mean it to have that. I think we have an intuitive notion of statistical independence, and there are various mathematical attempts to capture the notion, none of which are wholly satisfactory in infinitary contexts. I meant (1) to apply to that intuitive notion.