## Monday, September 22, 2014

### Low probability explanations

Some people think that for *C* to explain *E*, *P*(*E*|*C*) must be high. This is false. Suppose two events *E*_{1} and *E*_{2} have probabilistic explanations, and we understand that the events are independent. Then we understand why *E*_{1} occurs and why *E*_{2} occurs, and we also understand why their conjunction occurs. But of course their conjunction has lower probability than either of them, and by iterating this argument we reach a conjunctive event such that we understand why it occurs even though its probabilistic explanation is one of quite low probability.

## 7 comments:

Something is fishy with this argument. For consider: I roll a six-sided die, get a 1, and we want an explanation of this. Well, here are five redescriptions of the event: I rolled something other than 6, I rolled something other than 5, I rolled something other than 4, I rolled something other than 3, I rolled something other than 2. The probability of each of these events is 5/6, and their conjunction, of course, entails that I rolled a 1. So we can “explain” an event of probability 1/6 in terms of a conjunction of events of much higher probability. Following this technique, we can “explain” any arbitrarily low-probability event in terms of a conjunction of arbitrarily high-probability events just by redescribing it. This, however, seems like a pseudo-explanation if ever there was one.
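The arithmetic behind the redescription trick is easy to verify; here is a minimal sketch, assuming a fair die and the uniform measure:

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die.
outcomes = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event (a subset of outcomes) under the uniform measure."""
    return Fraction(len(event), len(outcomes))

# The five high-probability redescriptions: "I rolled something other than k".
not_k = {k: outcomes - {k} for k in (2, 3, 4, 5, 6)}

# Each redescription has probability 5/6 ...
assert all(prob(e) == Fraction(5, 6) for e in not_k.values())

# ... yet their conjunction (set intersection) is exactly the event
# "I rolled a 1", which has probability only 1/6.
conjunction = set.intersection(*not_k.values())
assert conjunction == {1}
assert prob(conjunction) == Fraction(1, 6)
```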

I would diagnose it like this. We need to think of explanations as coming in a variety of qualities, from “excellent” through “good” down to “very lame.” Just as a conjunction of probable events is not necessarily as probable as its conjuncts, so a conjunction of good explanations is not necessarily an equally good explanation. We may find ourselves conjoining very good explanations into a very lame one.

This affects how we think of the PSR. If we take that principle to be “every event has an explanation,” then how much and what kind of an explanation we insist on will affect how we understand the principle. For example, “every event has a causal explanation” is quite different from “every event has non-zero probability.” Even the naturalist who believes in quantum mechanics will give you the latter, because it’s a tautology.

Heath:

I did specify that one understands the events to be independent, which is not the case in your example. When they're not, there may be something more to understand about their coinciding, namely that they're not a coincidence.

I used to think that explanations do come in a variety of qualities and that the lower probability ones were less good. But I've been reflecting on Jeffreys' remark that in probabilistic cases, it is false that we understand the less likely events any less than we do the more likely ones. If we think of explanations as vehicles for a certain kind of understanding, this suggests that the low probability explanations are no worse.

Consider two quantum phenomena. In the first, an electron in (unnormalized) state 10|up>+|down> is fed into a Stern-Gerlach apparatus and is observed going up in the magnetic field. We have a fine probabilistic explanation: it had a probability of approximately 0.99 of going up. In the second, an electron in state |up>+10|down> is fed into the same apparatus and is also observed going up in the magnetic field. Now our probabilistic explanation involves probability 0.01. But the numbers don't affect how well we understand the phenomenon. We understand the 10|up>+|down> state exactly as well as we understand the |up>+10|down> state. We understand the outcome equally well. And we understand the probabilistic causation "in between" the state and outcome equally well--or more precisely, equally poorly.
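For concreteness, the two probabilities in this example come from applying the Born rule to the unnormalized amplitudes; a quick sketch, restricted to real amplitudes:

```python
# Born-rule probability of spin-up for an (unnormalized) state
# a_up|up> + a_down|down>, with real amplitudes.
def p_up(a_up, a_down):
    return a_up**2 / (a_up**2 + a_down**2)

# First electron, 10|up> + |down>:  P(up) = 100/101, approximately 0.99.
p1 = p_up(10, 1)
# Second electron, |up> + 10|down>: P(up) = 1/101, approximately 0.01.
p2 = p_up(1, 10)

assert abs(p1 - 100 / 101) < 1e-12
assert abs(p2 - 1 / 101) < 1e-12
# The two cases are mirror images of one another.
assert abs(p1 + p2 - 1) < 1e-12
```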

Alex,

Fair point about the independence of events.

Perhaps what bothers me is this. There is a very natural form of reasoning which observes the extraordinary unlikelihood of some coincidence of events and infers that there is some unified causal explanation behind them. The fine-tuning argument is a good example but we don’t need anything that elevated. Suppose I come home and find a broken window, tossed-about furniture and missing jewelry. I conclude that my house has been burgled. Now there is some alternative story, however unlikely, that explains all these events, and maybe a defense lawyer will try to tell that story. But I would resist the implication that these two explanations (a burglar did it vs. some preposterous story) are on a par, equally good. If they are, I no longer know what to make of abductive reasoning or scientific knowledge generally.

Maybe I am asking ‘explanation’ to do epistemological work while you think of it as something metaphysical. But I am a little suspicious of this metaphysical sense of ‘explanation.’ Since we are talking about probabilistic or statistical explanations, ‘explanation’ doesn’t mean ‘cause.’ I agree that once we have the whole causal story, there isn’t any more understanding to be had. Maybe a cause can be non-deterministic and then the causal explanation will look like a probabilistic one. But non-causal probabilistic explanations often leave more understanding to be had. For example, “Why does this philosophy major eat sushi on Thursdays?” is just not well explained by the (perhaps true) claim that 1% of philosophy majors eat sushi on Thursdays, and I am inclined to say that it’s because the probability is so low that my belief-forming processes get no help from this alleged explanation. On the other hand if 99% of philosophy majors ate sushi on Thursdays I would wonder why that was, but I would be less puzzled by this particular philosophy major.

Quite possibly I am making elementary errors here, and I’ve certainly rambled a bit. Make of it what you can.

1. Suppose you know the preposterous story from the defense attorney to be true. Might it not be just as good an explanation as the burglar story would have been?

2. When we say that C1 is a better explanation than C2 for E, maybe what we really should be saying is: C1 is a better putative explanation than C2 is. And then maybe "better putative explanation" doesn't mean that it would be a better explanation if it were the explanation, but it means that it's a better candidate for being the explanation?

3. That said, I don't want to endorse the point in 2 in general. I suspect there are better explanations and poorer explanations. But the difference does not neatly go with probabilities. The Jeffreys point seems just right: Simply changing the numbers doesn't change how much understanding you gain.

But other things may change how much understanding you gain. Thus, the hypothesis that the ten coins Bob tossed independently landed heads by chance is less explanatory than the hypothesis that the person who threw them had amazing skill. But the difference is not, I think, due to the fact that P(all heads | independent)=1/1024 and P(all heads | great skill)=0.8 (say). The difference seems to me to be due to the fact that the skill hypothesis is more unifying, more systematic, etc.
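The numbers in the coin case can be made explicit; a small sketch, where the 0.8 skill figure is just the illustrative value from the text:

```python
# Ten heads in a row under the two hypotheses.
p_chance = 0.5 ** 10   # P(all heads | ten independent fair tosses)
p_skill = 0.8          # P(all heads | great skill), the illustrative figure above

assert p_chance == 1 / 1024

# The likelihood ratio (Bayes factor) strongly favors the skill hypothesis ...
bayes_factor = p_skill / p_chance
assert round(bayes_factor) == 819

# ... but on the view in the post, this ratio measures evidential support,
# not how much understanding either hypothesis would confer if it were true.
```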

4. Though this is controversial, I do think statistical explanations need to be causal.

That 99% of philosophers eat sushi may or may not explain why philosopher x eats sushi. If x eats sushi because of the social influence of the other sushi-eating philosophers, then it does. But that's causal. But if it's just a brute fact that 99% of philosophers eat sushi, then that's no explanation of the individual case. Finally, sometimes it happens that there is some common cause--maybe the same gene that predisposes one to be a philosopher predisposes one to be willing to risk food poisoning. In that case, that 99% of philosophers eat sushi isn't what explains why x eats sushi; rather, the common cause is the explanation.

I tend to accept statistical explanations when they report an underlying propensity present in the particular case. Otherwise, I am inclined to say that either the statistical story is elliptical or it's a bit like a case of explaining the mysterious by the more mysterious.

5. Imagine that there are 100 foods that Bob chooses between each time he goes out. Sushi, pizza and pad thai are among them. He has no preferences between them. We understand his choosing sushi just as well as we do his choosing pizza. Do we understand the disjunctive state of affairs of his choosing pizza, pad thai, or any of the other 97 non-sushi items better than his choosing sushi? I am not sure. Yet the disjunctive state of affairs has probability 99%.
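The probabilities in this last example check out trivially; a sketch, assuming Bob is indifferent among the 100 foods:

```python
from fractions import Fraction

# Bob chooses uniformly among 100 foods.
n_foods = 100
p_sushi = Fraction(1, n_foods)

# The disjunctive state of affairs: he chooses pizza, pad thai,
# or any of the other 97 non-sushi items.
p_not_sushi = 1 - p_sushi
assert p_not_sushi == Fraction(99, 100)
```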

It seems to me that some distinction like the one in your 2 is essential. You are right in 3 that it does not neatly track probabilities.

Other than that, I am puzzled.

"Other than that, I am puzzled."

And that's a sign that we don't have an explanation of explanation yet.

Maybe we have a great one, it's just low-probability. :-)
