Suppose that we have ruled out all but one possible explanation for a phenomenon. The remaining explanation *H* is the poorest and most contrived explanation imaginable. But a poor and contrived explanation is still an explanation. And in this case it is the best explanation by a mile: no other comes close, as all others have been ruled out. Thus by Inference to Best Explanation:

- We should accept *H*.

- If there were any chance that there simply was no explanation, the hypothesis that there is no explanation would be preferable to *H*, given how poor and contrived *H* is.

- So, there is no chance that there simply is no explanation.

## 28 comments:

Interesting post. I think it should push us to reject an unrestricted principle of inference to the best explanation. Sometimes we are better off with no explanation than with an explanation that gets us nowhere. How we should restrict abduction so that it respects this seems to me an interesting and potentially fruitful question. I've been meaning to write a post on this on my blog for a while - I'll make it a priority now that the issue has some currency. It also links in with the last post on wo's weblog - I think the explanation of certain facts about numbers that he says we get when we identify numbers with certain sets is just such an example of an explanation in a case where we'd be better off without one.

We should reject p1 in cases where the "no explanation" or "brute fact" hypothesis (call this H0) has a higher probability than H. This fact is sufficient to defeat the argument presented.

In slightly longer terms, "inference to the best explanation" is a useful informal heuristic, but it is outweighed by formal induction via probability theory, which has no qualms at all about evaluating "brute fact" hypotheses.

So, you might accept H in cases where you haven't thought it through very much, but if you actually bother to evaluate P(H|E) and P(H0|E) and find that H0 is more probable, you are warranted in rejecting H in favor of the "explanation-less" H0--which means that not only is the argument unsound (p1 is not true in general) but that the conclusion is also false. The fact that we certainly can be warranted in accepting H0 over any other hypothesis means that the PSR is not true as an epistemic rule (and it is vacuous as a metaphysical one).

And, for what it's worth, in cases where P(H|E) > P(H0|E), I don't think we would call H a "crummy" hypothesis.
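The comparison of P(H|E) and P(H0|E) described above can be made concrete with a minimal Bayesian sketch. The priors and likelihoods below are invented purely for illustration; nothing in the discussion fixes their values:

```python
# Hedged sketch: comparing a "crummy explanation" hypothesis H against a
# "brute fact" hypothesis H0 via Bayes' theorem. All numbers are invented
# for illustration, not taken from the discussion above.

def posterior(prior_h, lik_h, prior_h0, lik_h0):
    """Return normalized posteriors P(H|E), P(H0|E), assuming H and H0
    are the only hypotheses in play."""
    joint_h = prior_h * lik_h      # P(H)  * P(E|H)
    joint_h0 = prior_h0 * lik_h0   # P(H0) * P(E|H0)
    total = joint_h + joint_h0     # P(E), given the two exhaust the options
    return joint_h / total, joint_h0 / total

# A contrived explanation: low prior, but it predicts the evidence strongly.
# The brute-fact hypothesis: higher prior, but it predicts the evidence weakly.
p_h, p_h0 = posterior(prior_h=0.05, lik_h=0.9, prior_h0=0.95, lik_h0=0.2)
print(f"P(H|E) = {p_h:.3f}, P(H0|E) = {p_h0:.3f}")
```

With these particular made-up numbers the brute-fact hypothesis comes out more probable, which is exactly the situation the commenter says defeats p1.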

It's far from clear that IBE is just a heuristic and that probabilistic reasoning is the right way to go.

Moreover, if H0 is a no-explanation hypothesis, then P(E|H0) is not going to be defined in any non-arbitrary way.

Probability theory allows for optimal application of information to models. IBE doesn't. Probability theory is always a better use of available information--that actually is a provable fact, and it seems to render the discussion pretty much closed.

Moreover, that's actually not correct. Like any hypothesis, H0 is simply going to be some computable model, and its outputs are its predictions. Indeed, all extant physical models include brute facts in them--gravitational constants, the speed of light, etc. These are simply present in models with no further explanation given. That's what it *means* to have a brute fact hypothesis, and since essentially all models eventually terminate this way, the question isn't really whether a model is a "brute fact" hypothesis, but rather which facts are presented as brute facts. For example:

I might want to construct a model of how a cannon shot will fall under specified conditions. I might just do the calculation myself in my head and present my model as "the shot will fall here." In that model, the fact that the shot is going to fall "here" is presented as a brute fact--no explanation given.

On the other hand, I might construct a model based on the rules of simple ballistic motion (including, say, the typical "9.8m/s/s" used as the acceleration due to gravity near the Earth's surface) which predicts that the shot will fall "here." In this case, the fact that the shot will fall "here" is not presented as brute--it's given an explanation. The rules of ballistic motion, however, are not.

I might go further and construct a model where the rules of simple ballistic motion are derived from proper Newtonian physics. In this case, the rules of simple ballistic motion are not presented as brute, but Newton's laws and, say, the Universal Gravitational Constant are.

All of these are, at some level, brute fact hypotheses, yet they all make clear, non-arbitrary predictions (with probabilities equal to 1).
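The middle model in the sequence above--simple ballistic motion with 9.8 m/s/s taken as brute--can be sketched in a few lines. The launch speed and angle are assumed illustrative inputs, not values from the comment:

```python
# Hedged sketch of the simple-ballistic-motion model described above.
# G is the model's brute fact: it is used with no further explanation given.
import math

G = 9.8  # acceleration due to gravity near Earth's surface (m/s^2)

def landing_distance(speed, angle_deg):
    """Horizontal range of a shot on flat ground, ignoring air resistance."""
    angle = math.radians(angle_deg)
    # Range formula R = v^2 * sin(2*theta) / G, from the rules of ballistic
    # motion -- which are themselves left unexplained in this model.
    return speed ** 2 * math.sin(2 * angle) / G

# Assumed inputs: 100 m/s launch speed at a 45-degree angle.
print(f"The shot will fall {landing_distance(100.0, 45.0):.1f} m away")
```

The point of the sketch is the one made in the comment: the model issues a clear, non-arbitrary prediction even though its foundation (G, the range formula) is presented as brute.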

Of course, my predictions don't have to be P(1) predictions--I can program in distributions, like we do for predicting market behavior or test scores or quantum tunneling, and these distributions can themselves be presented as brute facts or explained by way of predication on other facts, which may be brute or...(and so on, until eventually you reach some brute fact).

Of course, you may terminate your model in something you call a "necessary" fact, rather than a brute fact, but this distinction is irrelevant as far as probability and induction are concerned. Whether you choose to call the Universal Gravitational Constant "necessary" or "brute" doesn't affect the probability of the model which uses it as foundational.

How good probabilistic reasoning is depends on how good the priors are.

But in any case I think we have a misunderstanding as to what the no-explanation hypothesis is going to be like.

The no-explanation hypothesis simply says that the phenomenon in question has no explanation: that it just happened for no reason at all.

How do you get the probability of the phenomenon occurring conditionally on it happening for no reason at all?

Well, you can of course have some arbitrary priors. But they will be arbitrary.

Solomonoff priors are non-arbitrary, and useful in principle, but all priors drop away as more evidence is applied.
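The claim that priors "drop away" as evidence accumulates can be illustrated with a toy sequence of Bayesian updates: two agents who start with wildly different priors over the same binary hypothesis converge once they update on enough shared evidence. All numbers here are invented for illustration:

```python
# Hedged sketch: priors wash out under repeated Bayesian updating.
# The likelihoods (0.8 vs 0.3 per observation) are illustrative assumptions.

def update(prior, lik_true, lik_false):
    """One Bayesian update on a binary hypothesis given one observation."""
    joint_true = prior * lik_true
    joint_false = (1 - prior) * lik_false
    return joint_true / (joint_true + joint_false)

# The hypothesis predicts each observed datum with probability 0.8;
# its rival predicts the same datum with probability 0.3.
p_a, p_b = 0.99, 0.01  # two agents, wildly different starting priors
for _ in range(20):    # twenty observations favoring the hypothesis
    p_a = update(p_a, 0.8, 0.3)
    p_b = update(p_b, 0.8, 0.3)
print(f"posterior A = {p_a:.4f}, posterior B = {p_b:.4f}")
```

After twenty favorable observations the two posteriors are essentially indistinguishable, despite the 0.99 vs 0.01 starting point.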

As for a hypothesis which simply says "this phenomenon will occur, no explanation given," that's precisely the sort of hypothesis that I illustrate with the first example in my last post: "the shot will fall here, no explanation given."

That's a brute fact hypothesis, where the phenomenon itself is the brute fact, and it makes its particular prediction (that the shot will fall here) with P(E|H) = 1.

But, somewhat more importantly, IBE is never really good reasoning at all. It's always just a sort of rough guess with no actual rational epistemic warrant behind it.

Probability theory provides epistemic warrant. IBE does not.

And, just so we're clear, we understand that "P(E|H)" is *not* the prior for H, right? That's P(H).

I'm trying to decide whether you're really trying to say that P(E|H) is arbitrary and that P(H) is arbitrary, or just that P(H) is arbitrary.

Neither would be true, of course (or, at least, neither needs to be true) but let's at least make sure we're talking about the same thing.

Sorry to keep posting in a row like this, but one more quick summary:

P(E|H) for a brute fact hypothesis is not arbitrary.

P(H) for a brute fact hypothesis can be arbitrary, but it can also be non-arbitrary.

IBE is *always* arbitrary.

If arbitrariness is a problem, probability theory wins out over IBE handily.

And, finally, if it is even *possible* for us to be epistemically warranted in believing a brute fact hypothesis over the "crummy explanation" hypothesis, then your p1 is false, your argument is unsound, and the PSR is false.

Since, clearly, it can be possible, using probability theory, to be warranted in believing brute-fact hypotheses over "crummy explanation" hypotheses, your p1 is false, your argument is unsound, and the PSR is false as well.

That's about where we're at on this one.

PSR can be true, even necessarily true, despite the possibility, or even actuality, of our being warranted in disbelieving it.

What is P(sky is blue | there is no explanation of the sky being blue)?

It can't be an epistemic principle if we can be epistemically warranted in violating it, and the argument you've presented doesn't speak to it as a metaphysical principle at all.

The hypothesis we'd want to evaluate is "the sky is blue--no explanation given," and the probability of the sky being blue given that hypothesis is clearly one.

Though, really, "there is no explanation for the fact that the sky is blue" and "the sky is blue, no explanation given," are basically the same thing. And both predict that the sky is blue with probability one.

1. The argument is meant to show that in fact we *accept* the Principle of Sufficient Reason. That's different from saying that it's an epistemic principle.

2. I carefully formulated the hypothesis so as not to immediately entail that the sky is blue. Only then is the hypothesis parallel to hypotheses like the correct one about the scattering of light in the upper atmosphere (which does not immediately entail that the sky is blue, but makes it very probable).

1.) Sure, but your argument doesn't, in fact, accomplish that for people versed in probability theory.

2.) You may have tried to do that, but, in fact, as it is written your hypothesis *does* directly entail that the sky is blue. It's right there: "...the sky being blue."

Even were you to somehow change the hypothesis so that it does not accomplish this, we would still have examples of brute-fact hypotheses that do clearly entail predictions, and we would still be left in a position where your argument is refuted.

After all, I have no doubt that some people *do* accept the PSR. What is in question is whether people are *warranted* in doing so, and, in fact, they are not. Your argument does not offer any warrant to the contrary. Its conclusion is false, and its first premise is false.

We are warranted in rejecting the PSR. We are not warranted in accepting it. The fact that we are warranted in accepting brute-fact hypotheses, such as that the UGC is what it is--no explanation given -- demonstrates this quite clearly.

We could put it like this:

1.) Some "brute fact" hypotheses are accessible to evaluation by probabilistic induction.

2.) If some "brute fact" hypotheses are accessible to evaluation by probabilistic induction, then it is in principle possible for one to be warranted in believing some "brute fact" hypothesis.

3.) If it is, in principle, possible for one to be epistemically warranted in believing some brute-fact hypothesis, one is not warranted in believing the PSR.

4.) Therefore, one is not warranted in believing the PSR.

If you don't like the phrasing, you can rephrase as: The proposition that the sky is blue has no explanation.

Of course, you can build the evidence (say, the sky being blue) right into your hypothesis. But that just shifts the problem back to the priors. Let B be the proposition that the sky is blue. Let N be the proposition that there is no explanation of the proposition that the sky is blue. Then P(B and N)=P(B|N)P(N). But P(B|N) has sensible value.

I have no doubt that, eventually, you could build some hypothesis that fails to make a prediction, but so what? What matters is that you can build a brute-fact hypothesis that *does* make predictions, and therefore is accessible to induction.

And yes, of course priors matter, but (first) priors don't have to be arbitrary and (second) even if they did, so what? It wouldn't pose any sort of problem for my refutation of both your argument and the PSR.

Correction: P(B|N) has NO sensible value.

Yeah, I figured that's what you meant.

Sorry, my phone is acting very strange and double posting a lot
