From time to time, I hear a philosopher suggest using infinitesimals to model single-point probabilities. For instance, suppose you uniformly choose a real number from the interval [0,1]. On standard probability theory, for any number x in [0,1], the probability that you picked x is zero. But this is counterintuitive. It seems to make sense to be more sure of your having picked a real number than of your having picked a real number other than 1/2, yet standard probability theory assigns both events probability one. Moreover, there is an intuition that if something has probability zero, it's impossible. But then every outcome would be impossible.
A tempting solution is to say that P({x})=i, where i is a positive infinitesimal. I've often heard this suggestion made. It seems not to be widely known that Tim McGrew has shown that this is a very problematic solution. So, in the interest of making this more widely known, here is Tim's argument.
Let x1,x2,... be any infinite sequence of distinct numbers in [0,1], and let U={x1,x2,...}. Then, P(U)=P({x1})+P({x2})+...=i+i+i+i+...=(i+i)+(i+i)+...=2i+2i+...=2(i+i+...)=2P(U). But if P(U)=2P(U), then P(U) is either zero or infinity. It can't be zero as it's at least i. But it can't be infinity as it's at most 1. So we have a contradiction.
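To make the regrouping easier to inspect, here is the same computation set out in display form (a LaTeX sketch, assuming amsmath, and assuming that the infinite sum of infinitesimals exists and may be rearranged term by term):

```latex
% Sketch of the regrouping argument; assumes the infinite sum of
% infinitesimals exists and may be rearranged term by term.
\begin{align*}
P(U) &= P(\{x_1\}) + P(\{x_2\}) + P(\{x_3\}) + \cdots \\
     &= i + i + i + i + \cdots \\
     &= (i + i) + (i + i) + \cdots && \text{(group into pairs)} \\
     &= 2i + 2i + \cdots \\
     &= 2(i + i + \cdots) = 2P(U).
\end{align*}
% From P(U) = 2P(U): either P(U) = 0, contradicting P(U) >= i > 0,
% or P(U) is infinite, contradicting P(U) <= 1.
```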
The argument used countable additivity. But it can be modified to use only countable superadditivity. Countable superadditivity says that if A1,A2,... are disjoint sets and A is their union, then P(A) is no less than P(A1)+P(A2)+.... Countable superadditivity seems pretty plausible. It follows from finite additivity together with the principle that if a is no less than a1+a2+...+an for every finite n, then a is no less than a1+a2+.... Given countable superadditivity, the argument still works. Let I=i+i+i+i+.... Then I=2I by the regrouping above. So I is zero or infinity. If it's zero, then i=0. If it's infinity, then P(U) is infinite by countable superadditivity. The only problem with this argument is that it's not clear that countable superadditivity makes sense in a context with infinitesimals, because an infinite sum of infinitesimals does not seem to make sense (or at least it is not clear that it means the same thing as an infinite sum of standard numbers).
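Written out, the variant runs as follows (again a sketch, on the assumption that I=i+i+... is a well-defined quantity that dominates each of its terms):

```latex
% Sketch of the variant via countable superadditivity; assumes
% I = i + i + ... is a well-defined quantity.
\begin{align*}
I &= i + i + i + i + \cdots \\
  &= (i + i) + (i + i) + \cdots = 2I.
\end{align*}
% So I = 0 or I is infinite.
% If I = 0: each term is nonnegative and I dominates the single term i,
% so i = 0, contradicting the assumption that i is positive.
% If I is infinite: countable superadditivity gives P(U) >= I, so P(U)
% is infinite, contradicting P(U) <= 1.
```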
Another way to run the argument without countable additivity is to posit the principle that when we are dealing with a uniform probability on [0,1], any two countably infinite subsets of [0,1] should have the same probability. But then let U={1/2,1/3,1/4,...}, U1={1/2,1/4,1/6,...} and U2={1/3,1/5,1/7,...}. Then U is the disjoint union of U1 and U2, so by finite additivity P(U)=P(U1)+P(U2). But P(U1)=P(U2)=P(U) by the principle above. So P(U)=2P(U), and this leads to the conclusion that P(U) is zero or infinity.
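In display form, the step is just finite additivity on a two-set partition plus the posited symmetry principle:

```latex
% U = {1/2, 1/3, 1/4, ...} is the disjoint union of
% U1 = {1/2, 1/4, 1/6, ...} and U2 = {1/3, 1/5, 1/7, ...}.
\begin{align*}
P(U) &= P(U_1) + P(U_2) && \text{(finite additivity)} \\
     &= P(U) + P(U)     && \text{(equal probability for countably infinite sets)} \\
     &= 2P(U),
\end{align*}
% so P(U) = 0 or P(U) is infinite, exactly as before.
```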
Here's another intuitive approach. Let U(a)={a/2,a/3,a/4,...}. If the probability of {a/n} is the same as the probability of {b/n} for all n, we'd expect that P(U(a))=P(U(b)). In particular, P(U(1))=P(U(1/2)), since all point probabilities are the same. Now U(1/2)={1/4,1/6,1/8,...} is a subset of U(1)={1/2,1/3,1/4,...}, so by finite additivity P(U(1)−U(1/2))=P(U(1))−P(U(1/2))=0 (where A−B is the set of all members of A that aren't members of B). However, U(1)−U(1/2)={1/2,1/3,1/5,1/7,...}, and this cannot have probability zero if the probability of a single point is a positive infinitesimal. So, once again, we get the conclusion that the probability of a single point can't be an infinitesimal.
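The bookkeeping in display form (a sketch; the annotations name the principle each line invokes):

```latex
% U(1) = {1/2, 1/3, 1/4, ...};  U(1/2) = {1/4, 1/6, 1/8, ...} is a
% subset of U(1), and U(1) - U(1/2) = {1/2, 1/3, 1/5, 1/7, ...}.
\begin{align*}
P(U(1)) &= P(U(1/2)) + P(U(1) \setminus U(1/2)) && \text{(finite additivity)} \\
P(U(1)) &= P(U(1/2)) && \text{(equal point probabilities)} \\
P(U(1) \setminus U(1/2)) &= 0 && \text{(subtracting the two lines)}
\end{align*}
% But U(1) - U(1/2) contains 1/2, so by monotonicity its probability
% is at least P({1/2}) = i > 0. Contradiction.
```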
Alan Rhoda informs me that it looks like Tim McGrew does not buy the argument (anymore?).
My memory was wrong. The argument I give is not Tim McGrew's. Rather, it is a modification of an argument of Tim McGrew's that he used for a different conclusion. Tim doesn't actually agree with my use of his argument.