Suppose we pick a random number uniformly from the interval [0,1] (where [a,b] is the closed interval from a to b, both endpoints included). On standard probabilistic measures, the probability of picking any particular number is zero. Some people don't like that and instead insist that the probability should be infinitesimal.
So, here are three plausible desiderata for uniform probabilities on [0,1]:
1. For every x in [0,1], P({x}) > 0.
2. P is a finitely additive probability measure that makes all intervals measurable.
3. If I and J are closed intervals and J has twice the length of I, then P(J) = 2P(I).
These three desiderata cannot all hold. By (3), P([0,1]) = 2P([0,1/2]) and P([0,1]) = 2P([1/2,1]), so P([0,1/2]) = P([1/2,1]). But [0,1] is the union of [0,1/2] and [1/2,1], which overlap only at the point 1/2, so by (2) we get P([0,1]) = P([0,1/2]) + P([1/2,1]) - P({1/2}) = P([0,1]) - P({1/2}). The only way this can be is if P({1/2})=0, contrary to (1).

I don't think this little fact is a very big deal, in that perhaps we can still have (3) for half-open intervals (ones that contain one endpoint but not the other). But it does show that we're not saving all the intuitions about the measure structure of [0,1] when we add infinitesimals: we violate the scalability intuition in (3).
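Since the derivation only uses the field operations, a quick symbolic check makes the forcing explicit. This is just an illustrative sketch in Python with sympy, not part of the original argument, and the variable names are mine:

```python
from sympy import symbols, solve

# Unknown "probabilities"; their values may live in any field extending the reals.
p_full, p_left, p_right, atom = symbols('p_full p_left p_right atom')

equations = [
    p_full - 2 * p_left,                 # (3): [0,1] is twice as long as [0,1/2]
    p_full - 2 * p_right,                # (3): [0,1] is twice as long as [1/2,1]
    p_full - (p_left + p_right - atom),  # (2): additivity, correcting for the shared point 1/2
]

print(solve(equations, [p_left, p_right, atom]))
# {p_left: p_full/2, p_right: p_full/2, atom: 0}
# The point mass at 1/2 is forced to be exactly 0, against (1).
```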
You write, "The only way this can be is if P({1/2})=0, contrary to (1)."
But is that right? I don't know exactly how infinitesimals are defined, but take an analogy with transfinite arithmetic. Aleph_0 = Aleph_0 + 1. One might think that the only way this equation can work is if 1 = 0, but one would be wrong. Why not think that infinitesimals work in a similar way? Perhaps the relation of an infinitesimal quantity to an ordinary finite quantity is like the relation of an ordinary finite quantity to an infinite one.
Interesting. You might have a system like that. I need to think about it. But normally infinitesimals are members of a field that extends the reals. And for the members of a field, if a=a+b, then b=0. So what I said is true for hyperreals, formal Laurent series, and other constructions of infinitesimals.
I think the problem is we're not being careful with our infinitesimals. Consider the discrete set {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1} broken into the "halves" 0-to-0.5 and 0.5-to-1. Each half contains 6 elements, but the whole is 11 elements, not 12, because of the double-counting of 0.5. That double-counting is handled by subtracting P({½}), but it's not accounted for in saying that P([0, ½]) is half of P([0, 1]): it's really half plus an infinitesimal amount. That extra amount is the missing discrepancy.
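To make the counting explicit, here is the toy example in exact arithmetic (my sketch; in this finite case the excess over exactly half comes out to half a point's weight):

```python
from fractions import Fraction

points = [Fraction(k, 10) for k in range(11)]   # 0, 0.1, ..., 1: eleven points
atom = Fraction(1, len(points))                 # uniform weight per point: 1/11

P_left  = sum(atom for x in points if x <= Fraction(1, 2))  # "P([0, 1/2])" = 6/11
P_right = sum(atom for x in points if x >= Fraction(1, 2))  # "P([1/2, 1])" = 6/11
P_full  = sum(atom for x in points)                         # "P([0, 1])"  = 1

print(P_left, P_right, P_full)             # 6/11 6/11 1
print(P_left + P_right - atom == P_full)   # True: additivity works once {1/2} is subtracted
print(P_left - P_full / 2 == atom / 2)     # True: each half exceeds exactly-half by half an atom
```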
DL:
That sounds right, and that's why the problem doesn't happen when we work with half-open intervals.
Still, the argument does show that an intuitive scaling property does not hold once we bring in infinitesimals.
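For contrast, the same kind of symbolic check (again only a sketch of mine) goes through without trouble when the halves are the half-open intervals [0,1/2) and [1/2,1):

```python
from sympy import symbols, solve

p_full, p_left, p_right = symbols('p_full p_left p_right')

equations = [
    p_full - 2 * p_left,          # scaling: [0,1) is twice as long as [0,1/2)
    p_full - 2 * p_right,         # scaling: [0,1) is twice as long as [1/2,1)
    p_full - (p_left + p_right),  # additivity: the half-open halves are disjoint
]

print(solve(equations, [p_left, p_right]))
# {p_left: p_full/2, p_right: p_full/2}
# No equation mentions P({1/2}), so nothing forces the point mass to vanish.
```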
The argument is giving me vague thoughts that we shouldn't think of a continuum as made up of points. Years ago, when I was a math grad student, my dad pushed against the idea that a continuum is made of points. As a mathematician, I couldn't think of a continuum in any other way. I still can't really, but these kinds of cases make me think that Dad might well have been right that a different way of thinking is needed, and they are giving me glimpses of it.
Maybe I should look at John Bell's work on continua again.