Wednesday, May 3, 2023

Conditionalizing on classically null events

Some events have probability zero in classical probability. For instance, if you spin a continuous and fair spinner, the probability of its landing on any specific value is classically zero.
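(A quick numerical illustration, not part of the original argument: modeling the spinner as a uniform draw on [0, 1), the chance of landing in an interval is its length, so a single point gets probability zero. The seed and trial count are arbitrary choices for the sketch.)

```python
import random

# Fair continuous spinner on [0, 1), modeled as random.random().
random.seed(0)
trials = 100_000
target = 0.5

# Landing within 0.01 of the target happens with frequency near the
# interval's width (0.02)...
hits_interval = sum(1 for _ in range(trials)
                    if abs(random.random() - target) < 0.01)

# ...but landing on the exact value essentially never happens.
random.seed(0)
hits_exact = sum(1 for _ in range(trials)
                 if random.random() == target)

print(hits_interval / trials)  # close to 0.02
print(hits_exact)              # 0
```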

Some philosophers think we should be able to conditionalize on possible events that classically have zero probability, say by assigning non-zero infinitesimal probabilities to such events or by using Popper functions. I think there is very good reason to be suspicious of this.

Consider these very plausible claims:

  1. For y equal to 1 or 2, let Hy be a hypothesis about the production of the random variable X such that the conditional distribution of X given Hy is uniform over the interval [0, y). Suppose H1 and H2 have non-zero priors. Then the fact that the value of X is x supports H1 over H2 if x < 1.

  2. If two claims are logically equivalent and can be conditionalized on, then if one of them supports a hypothesis over another hypothesis, so does the other.

  3. If a random variable is independent of each of two hypotheses, then no fact about the value of the random variable supports either hypothesis over the other.

But these claims jointly yield a counterexample to the method of conditionalization by infinitesimal probabilities. For suppose a random variable Z is chosen uniformly at random in [0, 1) by some specific method. Suppose further that a fair coin, independent of Z, was flipped, and that on heads we let X = Z and on tails we let X = 2Z. Let H1 be the heads hypothesis and H2 the tails hypothesis. Then X is uniformly distributed over [0, y) conditionally on Hy for y = 1, 2.
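As a sanity check on the setup (a simulation sketch I'm adding, with arbitrary seed and sample size), one can verify by Monte Carlo that X is indeed uniform on [0, 1) given heads and uniform on [0, 2) given tails:

```python
import random

# Z ~ U[0,1); fair coin independent of Z; X = Z on heads (H1), X = 2Z on
# tails (H2). Check the conditional distributions of X.
random.seed(1)
xs_h1, xs_h2 = [], []
for _ in range(200_000):
    z = random.random()
    if random.random() < 0.5:   # heads: H1
        xs_h1.append(z)         # X = Z
    else:                       # tails: H2
        xs_h2.append(2 * z)     # X = 2Z

# Under H1, X should be U[0, 1): mean near 0.5, values below 1.
# Under H2, X should be U[0, 2): mean near 1.0, values below 2.
print(sum(xs_h1) / len(xs_h1), max(xs_h1))
print(sum(xs_h2) / len(xs_h2), max(xs_h2))
```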

But now let E be the fact that X = 0, and suppose we can conditionalize on E. By (1), E supports H1 over H2, since 0 < 1. But E is logically equivalent to the fact that Z = 0: whether the coin landed heads or tails, X = 0 holds just in case Z = 0. By (2), Z = 0 then supports H1 over H2. But Z is independent of H1 and of H2. So we have a contradiction to (3).
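(An addition of mine, to motivate the 2:1 support in (1) classically: conditioning on the non-null event X < eps rather than on X = 0, the likelihood ratio is eps / (eps/2) = 2 whatever eps is, and the infinitesimal approach is meant to carry this support over to the limiting null event. The function name is mine, not the post's.)

```python
# Claim (1), classical version: with equal priors on H1 and H2,
# P(H1 | X < eps) / P(H2 | X < eps) equals the likelihood ratio.
def posterior_odds(eps: float) -> float:
    # X | H1 ~ U[0, 1), so P(X < eps | H1) = eps (for eps <= 1);
    # X | H2 ~ U[0, 2), so P(X < eps | H2) = eps / 2.
    like_h1 = eps
    like_h2 = eps / 2
    return like_h1 / like_h2  # equal priors cancel

for eps in (0.5, 0.1, 0.001):
    print(posterior_odds(eps))  # 2.0 for every eps
```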

I think this line of thought undercuts my toy model argument in my last post.

5 comments:

Andrew Dabrowski said...

Why do you say that Z is independent of the H_y?
P(H_1|Z>1) = 0.

Andrew Dabrowski said...

Also, probabilists are happy to take the ratio of probability densities, e.g. in the spinner scenario.

Alexander R Pruss said...

Z cannot ever be bigger than one.

Andrew Dabrowski said...

Oops, sorry, I confused X and Z.
You're right, in fact this seems to apply to any X<=1.
Might not this also undercut the lottery example in your book on infinity?

Alexander R Pruss said...

It wouldn't immensely surprise me, but I don't see how it undercuts it.