Tuesday, June 11, 2013

A funny uniform distribution?

Let X and Y be independent random variables uniformly distributed over [0,1]. Let Z=max(X^2,Y^2). Then it's easy to check that Z is also uniformly distributed over [0,1].
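For the record, the check is just the standard CDF computation, using only the independence and uniformity assumed above: for 0 ≤ z ≤ 1,

\[
  P(Z \le z) = P(X^2 \le z)\,P(Y^2 \le z)
             = P(X \le \sqrt{z})\,P(Y \le \sqrt{z})
             = \sqrt{z}\cdot\sqrt{z} = z,
\]

which is precisely the CDF of the uniform distribution on [0,1].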

But now suppose we think that uniform random variables have equal infinitesimal probabilities of hitting every point. Thus, P(X=a)=P(Y=a)=α for every a in [0,1], where α is some infinitesimal. What, then, is P(Z=a)? Well, Z=a if and only if one of three mutually exclusive possibilities occurs:

  1. X<a^{1/2} and Y=a^{1/2}
  2. X=a^{1/2} and Y<a^{1/2}
  3. X=Y=a^{1/2}.
Now, P(X<a^{1/2})=a^{1/2}+O(α) (the last term is due to end-point effects: P(X<1)=1−α) and P(Y=a^{1/2})=α. Thus, P((1))=αa^{1/2}+O(α^2). By the same token, P((2))=αa^{1/2}+O(α^2). And P(X=Y=a^{1/2})=α^2 by independence. Thus, P(Z=a)=P((1))+P((2))+P((3))=2αa^{1/2}+O(α^2).
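For clarity, the three cases can be collected into a single display (this is just the computation above restated, with independence applied to each product):

\[
  P(Z=a) = P(X<a^{1/2})\,P(Y=a^{1/2})
         + P(X=a^{1/2})\,P(Y<a^{1/2})
         + P(X=a^{1/2})\,P(Y=a^{1/2})
         = 2\alpha a^{1/2} + O(\alpha^2).
\]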

In other words, Z is a uniformly distributed random variable by standard probabilistic criteria, but the probabilities of Z hitting different points are different: P(Z=a) is basically an infinitesimal multiple of √a.
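A quick Monte Carlo sanity check of the distributional half of this claim (a sketch, not part of the original argument; the sample size, seed, and test points are arbitrary choices):

import numpy as np

# Sketch: empirically check that Z = max(X^2, Y^2) looks uniform on [0,1]
# when X and Y are independent uniform [0,1] random variables.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(0.0, 1.0, n)
y = rng.uniform(0.0, 1.0, n)
z = np.maximum(x**2, y**2)

# Compare the empirical CDF of Z with the uniform CDF F(t) = t at a few
# points; the discrepancies should be on the order of 1/sqrt(n).
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"P(Z <= {t}) ~ {np.mean(z <= t):.4f}  (uniform predicts {t})")

Of course, no simulation can probe the infinitesimal point probabilities themselves; it only confirms that Z passes the standard uniformity criteria.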

What is happening here is that if one attempts to attach infinitesimal probabilities to the individual outcomes of bona fide classical probability distributions, the infinitesimal individual outcome probabilities float free from the distribution. You can have the same individual outcome probabilities and different distributions or, as in this post, different (nonuniform) individual outcome probabilities and the same (uniform!) distribution.
