Monday, November 6, 2017

Statistically contrastive explanations of both heads and tails

Say that an explanation e of p rather than q is statistically contrastive if and only if P(p|e) > P(q|e).

For instance, suppose I rolled an indeterministic die and got a six. Then I can give a statistically contrastive explanation of why I rolled more than one (p) rather than rolling one (q). The explanation (e) is that I rolled a fair six-sided die. In that case: P(p|e)=5/6 > 1/6 = P(q|e). Suppose I had rolled a one. Then e would still have been an explanation of the outcome, but not a statistically contrastive one.

One might try to generalize the above remarks to conclude to this thesis:

  1. In indeterministic stochastic setups, there will always be a possible outcome that does not admit of a statistically contrastive explanation.

The intuitive argument for (1) is this. Let p be one indeterministic stochastic outcome. Either there is or is not a statistically contrastive explanation e of why p rather than not-p is the case. If there is no such statistically contrastive explanation, then (1) holds for this outcome. Suppose instead that there is a statistically contrastive explanation e, and let q be the negation of p. Then P(p|e) > P(q|e). Thus, e is a statistically contrastive explanation of why p rather than q, but it obviously cannot be a statistically contrastive explanation of why q rather than p.

The intuitive argument for (1) is logically invalid. For it only shows that e is not a statistically contrastive explanation of why q rather than p, while what needed to be shown is that there is no statistically contrastive explanation of q at all.

In fact, (1) is false. Here is a counterexample. The indeterministic stochastic situation is Alice’s flipping of a coin. There are two outcomes: heads and tails. But prior to the coin getting flipped, Bob uniformly chooses a random number r such that 0 < r < 1 and loads the coin in such a way that the chance of heads is r. Suppose that in the situation at hand r = 0.8. Let H be the heads outcome and T the tails outcome. Then here is a contrastive explanation for H rather than T:

  • e1: an unfair coin with chance 0.8 of heads was flipped.

Clearly P(H|e1) = 0.8 > 0.2 = P(T|e1). But suppose that instead tails was obtained. We can give a contrastive explanation of that, too:

  • e2: an unfair coin with chance at least 0.2 of tails was flipped.

Given only e2, the chance of tails is uniformly distributed between 0.2 and 1.0. Thus, given e2, the expected chance of tails is 0.6: P(T|e2) = 0.6. And P(H|e2) = 1 − P(T|e2) = 0.4. Since 0.6 > 0.4, e2 is a statistically contrastive explanation of T. And note that something like this will work no matter what value r has, as long as it’s strictly between 0 and 1.
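The conditional probability here can be checked numerically. The following is a quick Monte Carlo sketch (my own illustration, not from the post): Bob draws r uniformly from (0, 1), the coin lands tails with probability 1 − r, and we condition on e2, i.e., on the chance of tails being at least 0.2.

```python
import random

random.seed(0)
trials = 1_000_000

e2_count = 0     # flips where e2 holds: chance of tails >= 0.2
tails_count = 0  # of those, flips that actually landed tails

for _ in range(trials):
    r = random.random()           # Bob's uniform load: chance of heads is r
    if 1 - r >= 0.2:              # condition on e2
        e2_count += 1
        if random.random() < 1 - r:  # the flip lands tails with chance 1 - r
            tails_count += 1

p_tails_given_e2 = tails_count / e2_count
print(p_tails_given_e2)  # close to 0.6, matching P(T|e2) = 0.6
```

The estimate converges on 0.6 because, conditional on e2, the chance of tails is uniform on [0.2, 1), whose mean is 0.6.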

It might still be arguable that, given indeterministic stochastic situations, something will lack a statistically contrastive explanation. For instance, we can give a statistically contrastive explanation of heads rather than tails, and a statistically contrastive explanation of tails rather than heads. But it does not seem that we can give a statistically contrastive explanation of why the coin was loaded to exactly degree 0.8, since that outcome has zero probability. Of course, that’s an outcome of a different stochastic process than the coin flip, so it doesn’t support (1). And the argument would need to be more complicated than the invalid argument for (1).

7 comments:

Heath White said...

It seems to me that one should say something like:

e2: an unfair coin with chance at least 0.2 of tails was flipped

is grounded in

e1: an unfair coin with chance 0.8 of heads was flipped.

And if an explanation dissolves when the grounds of the alleged explanans are substituted in, then you did not actually have an explanation in the first place.

(Principle: if A grounds B, and B explains C, then A explains C.)
(Weaker Principle: if A grounds B, and B explains C, then it is not the case that ~A explains C.)

Alexander R Pruss said...

I don't think explanation dissolves when grounds are substituted, because I think e1 is a perfectly fine explanation of tails, but it does become non-contrastive then. I don't know what to make of its becoming non-contrastive.

Alexander R Pruss said...

There is a principle that pushes the other way. If we have two factors, one pushing towards X and one pushing towards not-X, then only the one pushing towards X helps explain X. The other one is "counterexplanatory". But in the case at hand, we have two factors:
- the chance of tails is at least 0.2
- the chance of tails is at most 0.2
The second factor is counterexplanatory, so it should be left out of the explanation.

Heath White said...

I don't know a great deal about explanation but I think I am stingier with the notion than you are. Suppose everything in the universe were a little bit contingent (as it may be, given QM). Then on your view, everything would have an explanation, no matter what happened, as the chance of anything at all happening is >0. But that seems pretty promiscuous with the notion of explanation.

What is the _point_ of saying that "well, there was a 1% chance of that happening" is an _explanation_? If "X has an explanation" just means "X is not impossible"... well, after X happens we know that already.

(And suppose we thought everything were deterministic, but then a law-breaking miracle occurred. We might just conclude that the universe was not deterministic after all, and therefore our miracle had an explanation. Again, that seems too easy.)

Alexander R Pruss said...

There is a nice argument from the 1960s which convinced a number of philosophers of science. We understand why the coin which was loaded 99% in favor of heads landed heads. But the case where it was loaded 1% in favor of heads and landed heads is no harder to understand. So we understand why that happened, too.

Heath White said...

I think I would say, rather, that there is a sorites here. A deterministic explanation is real explanation. In the case of a coin 99% weighted toward heads, we have 99% of an explanation for it landing heads, which may be good enough for many purposes. But when we get to 1%, that is good enough for practically no purposes.

A way to put a point on this: is "P explains Q" a vague claim or not? I would say yes ... there seem to be lots of examples. Economics and folk-psych explanations are generally not terribly good explanations.

Alexander R Pruss said...

It's not meant to be a sorites. We understand the 99% case. But the 1% case has no additional mystery to it.