Monday, December 7, 2020

Independence, spinners and infinitesimals

Say that a “spinner” is a process whose output is an angle from 0 (inclusive) to 360 (exclusive). Take as primitive a notion of uniform spinner. I don’t know how to define it. A necessary condition for uniformity is that every angle has the same probability, but this necessary condition is not sufficient: classically, any distribution with a continuous density assigns probability zero to every single angle, uniform or not.

Consider two uniform and independent spinners, generating angles X and Y. Consider a third “virtual spinner”, which generates the angle Z obtained by adding X and Y and wrapping to be in the 0 to 360 range (thus, if X = 350 and Y = 20, then Z = 10). This virtual spinner is intuitively statistically independent of each of X and Y on its own but not of both.
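This intuitive independence can be checked exactly in a discrete stand-in (a sketch of mine, not the post’s continuum spinner): a fair wheel with n equally likely positions. Counting all outcomes confirms that Z = X + Y (mod n) is uniform and exactly independent of X on its own:

```python
from fractions import Fraction

n = 12  # a fair n-position wheel as a discrete stand-in for the spinner

def prob(pred):
    """Probability of a predicate over all n*n equally likely (x, y) pairs."""
    return Fraction(sum(pred(x, y) for x in range(n) for y in range(n)), n * n)

for a in range(n):
    for c in range(n):
        # Z = X + Y (mod n): the joint probability factors into the marginals,
        # so Z is exactly independent of X alone.
        joint = prob(lambda x, y: x == a and (x + y) % n == c)
        assert joint == prob(lambda x, y: x == a) * prob(lambda x, y: (x + y) % n == c)
```

The same count with the pair (X, Y) in place of X shows the failure of joint independence: P(X = 0, Y = 0, Z = 0) = 1/n², not the product 1/n³.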

Suppose we take the intuitive statistical independence at face value. Then:

  • P(Z = 0)P(X = 0) = P(Z = X = 0) = P(Y = X = 0) = P(Y = 0)P(X = 0),

where the second equality follows from the fact that if X = 0, then Z = 0 if and only if Y = 0. Suppose now that P(X = 0) is an infinitesimal α. Then we can divide both sides by α, and we get

  • P(Z = 0) = P(Y = 0).

By the same reasoning with X and Y swapped:

  • P(Z = 0) = P(X = 0).

We conclude that

  • P(X = 0) = P(Y = 0).

We thus now have an argument for a seemingly innocent thesis:

  1. Any two independent uniform spinners have the same probability of landing at 0.

But if we accept that uniform spinners have infinitesimal probabilities of landing at a particular value, then (1) is false. For suppose that X and Y are angles from two independent uniform spinners for which (1) is true. Consider a spinner whose angle is 2Y (wrapped to the [0, 360) range). This doubled spinner is clearly uniform, and independent of X. But its probability of yielding 0 is equal to the probability of Y being 0 or 180, which is twice the probability of Y being 0, and hence twice the probability of X being 0, in violation of (1) if P(X = 0)>0.
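The doubling step can be mirrored discretely (again a hypothetical fair wheel with an even number n of positions, my sketch rather than the post’s continuum spinner; discretely, 2Y is no longer uniform over all n positions, but the key step survives): both Y = 0 and Y = n/2 send the doubled wheel to 0, so its probability of yielding 0 is exactly twice P(Y = 0).

```python
from fractions import Fraction

n = 360  # even number of equally likely positions on a hypothetical fair wheel

# Y is uniform on {0, 1, ..., n-1}; the doubled wheel reads off (2*y) mod n.
p_y_zero = Fraction(sum(1 for y in range(n) if y == 0), n)
p_2y_zero = Fraction(sum(1 for y in range(n) if (2 * y) % n == 0), n)

assert p_2y_zero == 2 * p_y_zero  # y = 0 and y = n/2 both land the doubled wheel on 0
```

With the infinitesimal α playing the role of 1/n, this is exactly the violation of (1) described in the post.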

So, something has gone wrong for friends of infinitesimal probabilities. I see the following options available for them:

  2. Deny that Z = 0 has non-zero probability.

  3. Deny that Z is statistically independent of X as well as being statistically independent of Y.

I think (3) is probably the better option, though it strikes me as unintuitive. This option has an interesting consequence: we cannot independently re-randomize a spinner by giving it another spin.

The careful reader will notice that this is basically the same argument as the one here.

10 comments:

IanS said...

How about this?

Infinitesimal probabilities have no absolute ‘practical’ meaning. If an event has infinitesimal probability, then whatever its value, you would say that the event was ‘infinitely unlikely’, and you would reject a bet on it at any odds.

Infinitesimal probabilities do have relative ‘practical’ meaning. If (Z = a) and (Z = b) have the same infinitesimal probability, you would say they were ‘equally (un)likely’, and you would accept a bet on (Z = a) conditional on ((Z = a) or (Z = b)) at even odds or better. (As for whether a bet with an infinitely unlikely condition makes any ‘practical’ sense, well…)

So it is only the ratios that matter ‘practically’. Z is ‘practically’ independent of X in the sense that, for example P(Z = a) / P(Z = b) = 1 = P(Z = a | event E defined on X) / P(Z = b | event E defined on X), for any event E defined on X.

So yes, you can deny that Z is statistically independent of X and still say that it is ‘practically’ independent of X, and you can ‘practically’ re-randomize the spinner.

I can’t help feeling that though this is reasonable as far as it goes, it is not the whole story.

Alexander R Pruss said...

The pragmatic approach is promising, though as Pascal noted, there are bets with infinite payoffs in real life. Though I think nobody has any idea how to coordinate those with infinitesimal probabilities.

Alexander R Pruss said...

I think what these arguments do is push the infinitesimalist to the thesis that the notion of statistical independence has very limited application in contexts where there is no causal independence. This is a kind of parallel to the idea that invariance under symmetries has very limited application for the infinitesimalist.

Thus, in spinner cases, to get statistical independence, one needs to reset the spinner between spins, which ensures causal independence.

And in cases of tossing infinitely many coins, to get statistical independence, before tossing all the infinitely many coins again, one needs to reset all the coins by making particular sides face up.

Alexander R Pruss said...

And just as one can have approximate invariance under symmetries (e.g., P(gA)-P(A) is infinitesimal), one can have approximate statistical independence, as per your other comment.

IanS said...

“… statistical independence has very limited application in contexts where there is no causal independence.” I agree. Use a causal model to make the calculations, and the pragmatic approach to interpret the results. Statistical independence is of little (no?) use in itself, except perhaps as a convenience.

David Duffy said...

This seems to be related to the problem of the distribution of fractional parts of real random variables. The fractional part of k*U, where U is a uniform random variable on (0, 1) and k is an integer, is supposed to be uniform. You seem to be arguing that it is a problem that multiple values of the original distribution map to a single value (here, with k = 2, both 0 and 1/2 map to 0), but this is just as true of 1/4, 3/4 -> 1/2, etc.

Alexander R Pruss said...

David: Yes, one can run the problem that way, too.

IanS said...

Some thoughts on spinners and independence.

First, what is a spinner? It seems from the post that we want rotation invariance. But, by your theorem, no complete regular hyperreal probability can be invariant under all real rotations. To avoid this problem (and keep things simple!), restrict to rational angles and rational rotations. (For notational convenience, take the angles as the rationals in [0, 1) and the rotations as ‘folding around’ at 1. Read intervals as referring to the rational numbers in the interval.) Then, again by your theorem, complete regular hyperreal probabilities exist.

The following construction for two rational spinners is adapted from your proof. For n = 1, 2, 3, …, define a set Sn consisting of all points (x/n!, y/n!) with x and y from 0 to n! - 1. Define measures Pn on [0, 1) x [0, 1) by Pn(A) = (size of A intersect Sn) / n!^2. Use the Pn and an ultrafilter on the natural numbers n in the usual way to define a hyperreal probability P on [0, 1) x [0, 1).

Clearly, P is regular, is invariant under rational rotations of X and Y, gives identical marginal probabilities to X and Y, and makes X and Y (stochastically) independent. To see independence, note that p ‘vertical lines’ in Sn and q ‘horizontal lines’ in Sn always intersect at pq points in Sn.

P also makes X+Y (and X-Y) independent of X and Y individually (but not jointly, of course). To see this, note for example that p diagonals in Sn and q horizontal lines in Sn always intersect at pq points in Sn.

Now think about X+2Y. This is independent of Y. (Reason as above.) But it is not independent of X. (Because a slope 1/2 line in Sn may cross a vertical line in Sn between two points of Sn.) This is precisely option (3) of the post.

(As an aside, note that you can contrive a different joint probability that does make X+2Y independent of both X and Y. Instead of Sn, start from Tn consisting of all points (x/n!, y/(2*n!)) with x from 0 to n! - 1 and y from 0 to 2*n! – 1, then proceed similarly. But then, though X and Y will be independent, their probabilities will be different, and X+Y will be independent of X but not of Y.)
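These counting claims can be verified by brute force on a single Sn (a small sketch of mine, taking n = 4 and scaling all coordinates by n! = 24, so that the grid points of Sn become integer pairs):

```python
from fractions import Fraction
from math import factorial

n = 4
m = factorial(n)  # Sn scaled by n!: grid points (x, y) with x, y in 0..m-1

def P(pred):
    """Pn-probability of an event, by counting grid points."""
    return Fraction(sum(pred(x, y) for x in range(m) for y in range(m)), m * m)

# X and Y are independent, and X+Y (mod 1) is independent of X alone:
assert P(lambda x, y: x == 0 and y == 0) == \
       P(lambda x, y: x == 0) * P(lambda x, y: y == 0)
assert P(lambda x, y: (x + y) % m == 0 and x == 0) == \
       P(lambda x, y: (x + y) % m == 0) * P(lambda x, y: x == 0)

# But X+2Y is not independent of X: the joint probability of
# {X+2Y = 0 and X = 0} is twice the product of the marginals.
joint = P(lambda x, y: (x + 2 * y) % m == 0 and x == 0)
prod = P(lambda x, y: (x + 2 * y) % m == 0) * P(lambda x, y: x == 0)
assert joint == 2 * prod
```

The factor of two in the last check is the slope-1/2 effect described above: a slope 1/2 line meets a vertical of Sn at either two points or none.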

What to make of this? Well, you could still hope that under P, X+2Y is in some sense ‘practically’ independent of X. Note for example that if a condition on X is constructed by combining a finite number of intervals of form [p, q) with each p and q rational, the product formula will hold exactly. (Prove by finding a common denominator for all the ps and qs and applying invariance under rotations of X by 1/common denominator.) But this won’t work for conditions like X = 0 (as in the post), or for conditions like X is in {1/2, 1/3, 1/4, …}.

The moral I draw is that with hyperreal probabilities, intuition can be misleading, and you can rely only on what is explicitly modelled.

IanS said...

To spell out a point that was implicit in the above: In Sn, a line of slope ±1/2 intersects a vertical at either two points or none. ‘On the average’, lines of slope ±1/2 intersect verticals at one point, as expected. This average is in fact realized over pairs of adjacent lines, either sloped or vertical. So for conditions on either X or X+2Y that can be expressed as a union of a finite number of intervals of form [p, q) or (p, q] with p and q rational, the product formula will apply exactly. (Because for all sufficiently large n, such intervals will contain an even number of adjacent lines in Sn.) But not all conditions can be so expressed, e.g. X = 0 and X+2Y = 0 in your example.
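The contrast can be exhibited in the same finite model (a sketch of mine using the Sn construction from the earlier comment, with n = 4 and coordinates scaled by n! = 24): interval conditions on X and on X+2Y satisfy the product formula exactly, while the point conditions X = 0 and X+2Y = 0 do not.

```python
from fractions import Fraction
from math import factorial

n = 4
m = factorial(n)  # Sn scaled by n!: grid points (x, y) with x, y in 0..m-1

def P(pred):
    """Pn-probability of an event, by counting grid points."""
    return Fraction(sum(pred(x, y) for x in range(m) for y in range(m)), m * m)

half = m // 2  # the interval [0, 1/2), scaled by n!

# Interval conditions on X and on X+2Y: the product formula holds exactly.
joint = P(lambda x, y: x < half and (x + 2 * y) % m < half)
assert joint == P(lambda x, y: x < half) * P(lambda x, y: (x + 2 * y) % m < half)

# Point conditions: the product formula fails.
joint0 = P(lambda x, y: x == 0 and (x + 2 * y) % m == 0)
assert joint0 != P(lambda x, y: x == 0) * P(lambda x, y: (x + 2 * y) % m == 0)
```

Here [0, 1/2) contains an even number (twelve) of adjacent lines of Sn, so the two-or-none intersections average out exactly, as the comment explains.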

IanS said...

Here is an extreme example of the non-independence of X and X+2Y in the setup described above. It seems to rule out any sort of ‘pragmatic approach’.

For any rational number q, there is a smallest n for which q can be written as (integer)/n!. Call q ‘Even’ or ‘Odd’ according as (integer) is even or odd. For example, 1/2 (= 1/2!) is Odd, 1/3 (= 2/3!) is Even.

Then, up to infinitesimal differences, P(X Even) = P(X Odd) = P(X+2Y Even) = P(X+2Y Odd) = 1/2. But, crossing them, again up to infinitesimal differences, P(X Even and X+2Y Even) = P(X Odd and X+2Y Odd) = 1/2 and P(X Even and X+2Y Odd) = P(X Odd and X+2Y Even) = 0. This is perfect stochastic dependence up to infinitesimal differences. No ‘pragmatic approach’ will help here.

To prove the results for X, think for example about Pn(X Even). The idea is to prove that Pn(X Even) converges to 1/2. Then P(X Even) will equal 1/2 up to an infinitesimal. The columns in Sn have X values m/n!, m = 0, 1, …, n!-1. The values for m = 0, n, 2n, …, ((n-1)! – 1)n will have been assigned Even or Odd on the basis of smaller denominators. The rest will be Even or Odd according as m is Even or Odd. So a crude lower bound on Pn(X Even) is (n!/2 – (n-1)!)/n! = 1/2 – 1/n. Similarly, an upper bound is 1/2 + 1/n. These converge to 1/2 as required. Use the same idea for the other results. (The bounds for the crossed cases are wider, but Pn still converges as required.)
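The classification and the bounds can be checked by brute force at a single small n (a sketch of mine with n = 4, so denominators up to n! = 24; the parity helper below is my own naming, not from the comment):

```python
from fractions import Fraction
from math import factorial, gcd

n = 4
m = factorial(n)  # denominators up to n! = 24

def parity(num, den):
    """0 for 'Even', 1 for 'Odd': reduce num/den, write it as integer/k!
    for the smallest such k, and return that integer's parity.
    E.g. 1/2 = 1/2! is Odd; 1/3 = 2/3! is Even."""
    g = gcd(num, den)
    a, b = num // g, den // g
    k = 1
    while factorial(k) % b != 0:
        k += 1
    return (a * (factorial(k) // b)) % 2

# Pn(X Even) lies within 1/n of 1/2, matching the crude bounds above.
p_x_even = Fraction(sum(1 for x in range(m) if parity(x, m) == 0), m)
assert abs(p_x_even - Fraction(1, 2)) <= Fraction(1, n)

# The crossed event "X Even and X+2Y Odd" occupies at most 2/n of the grid:
# mismatches can only come from the columns reclassified by smaller denominators.
crossed = Fraction(sum(1 for x in range(m) for y in range(m)
                       if parity(x, m) == 0 and parity((x + 2 * y) % m, m) == 1),
                   m * m)
assert crossed <= Fraction(2, n)
```

As n grows, the marginal tends to 1/2 while the crossed probability tends to 0, which is the perfect dependence up to infinitesimals claimed above.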