Suppose that I choose a number x uniformly at random between 0, inclusive, and 1, exclusive. I then look at the bits b1, b2, ... after the binary point in the binary expansion x = 0.b1b2.... Each bit has equal probability 1/2 of being 0 or 1, and the bits are independent by the standard mathematical definition of independence.
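Here is a small Python sketch of this setup (the bits_of helper, the sample size, and the use of Python's random.random are illustrative choices of mine, not anything essential): it extracts bits by repeated doubling and checks empirically that each bit comes up 1 about half the time, and that two fixed bits agree about half the time, as independence predicts.

```python
import random

def bits_of(x, n):
    # First n binary digits of x in [0, 1), extracted by repeated doubling.
    # (For dyadic rationals this yields the expansion ending in zeroes.)
    out = []
    for _ in range(n):
        x *= 2
        b = int(x)
        out.append(b)
        x -= b
    return out

N = 100_000
samples = [bits_of(random.random(), 8) for _ in range(N)]

# Each bit is 1 roughly half the time...
for k in range(8):
    print("bit", k + 1, "mean:", sum(s[k] for s in samples) / N)

# ...and, for example, the first two bits agree roughly half the time,
# as independence predicts.
print("P(b1 = b2) approx", sum(s[0] == s[1] for s in samples) / N)
```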
Now, what I said is actually underspecified, for some numbers have two binary expansions. E.g., 1/2 can be written as 0.100000... or as 0.011111... (compare how in decimal we have 1/2 = 0.50000... = 0.49999...). So in talking of “the” binary expansion, I need to choose one of the two. Suppose I do the intuitive thing and consistently choose the expansion that ends with an infinite string of zeroes over the one that ends with an infinite string of ones.
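To see that the two expansions really name the same number, one can check the partial sums exactly (a small sketch using Python's fractions module, purely for illustration): the partial sums of 0.011111... are 1/2 − 2^−n, which converge to 1/2.

```python
from fractions import Fraction

# Partial sums of 0.011111..._2 = 2^-2 + 2^-3 + ... + 2^-n.
# Each equals 1/2 - 2^-n, so the series converges to 1/2,
# the number whose other expansion is 0.100000..._2.
for n in (5, 10, 20):
    s = sum(Fraction(1, 2**k) for k in range(2, n + 1))
    print(n, s, Fraction(1, 2) - s)
```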
This fine point doesn’t affect anything I said about independence, given the standard mathematical definition thereof. But there is an intuitive sense of independence in which we can now see that the bits are not independent. For instance, while each bit can be 1 on its own, it is impossible for all the bits to be 1 (this is impossible regardless of how I chose between the expansions, because x = 1 is excluded), and indeed impossible for the bits to all be 1 from some point on, since a tail of ones would give x a terminating expansion, which my convention would have chosen instead. There is a very subtle dependence between the bits that we cannot define within classical probability, a dependence that would be lacking if we tossed an infinite number of "really" independent fair coins.
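A concrete way to see the collapse (the prefix 0101 below is just an arbitrary example of mine, checked exactly with Python's fractions module): any all-ones tail turns x into a number with a terminating expansion, which is the one my convention assigns.

```python
from fractions import Fraction

# x = 0.0101111..._2: an arbitrary finite prefix followed by an all-ones tail.
prefix = [0, 1, 0, 1]
x = sum(Fraction(b, 2**(k + 1)) for k, b in enumerate(prefix)) \
    + Fraction(1, 2**len(prefix))   # the tail 2^-5 + 2^-6 + ... sums to 2^-4

print(x)                      # 3/8
print(x == Fraction(3, 8))    # True: 3/8 = 0.0110000..._2, the expansion my
                              # convention picks, so an all-ones tail never appears
```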