In the 1960s, it dawned on philosophers of science that:
1. Other things being equal, a low-probability explanation confers understanding just as good as that conferred by a high-probability explanation.
If I have a quantum coin with a probability 0.4 of heads and 0.6 of tails, and it yields heads, I understand why it yielded heads no less well than I would have understood why it yielded tails had it done so—the number is simply different.
On the other hand, consider the following thesis, which for years I have conceded to opponents of low-probability explanations:
2. Other things being equal, low-probability explanations are less good than high-probability ones.
Finally, add this plausible comparative thesis:
3. What makes an explanation good is how much understanding it confers (or at least would confer were it true).
This plausibly fits with the maxim, which I have often been happy to concede, that the job of an explanation is to provide understanding.
But (1)–(3) cannot all be true: given (3), (1) implies that, other things being equal, a low-probability explanation is just as good as a high-probability one, contradicting (2). Something must go. If (2) goes, then Inference to Best Explanation goes as well (I learned this from Yunus Prasetya’s very recent work on IBE and scientific explanation). I don’t want that (unlike Prasetya). And (1) seems right to me, and it also seems important for defending the Principle of Sufficient Reason in stochastic contexts.
Reluctantly, I conclude that (3) needs to go. And this means that I’ve overestimated the connection between explanation and understanding.
I'm not sure rejecting (2) entails rejecting IBE. On Salmon's SR model and Railton's DNP model, that might be the case: there, P(E|H) seems to be the only term in Bayes' theorem that explanatory considerations can affect.
But Kitcher's unificationist account might provide a way out. You've got a schematic argument that, when filled out properly, likely describes part of the currently accepted body of scientific knowledge (K). Where H is a good unifying (in Kitcher's sense of the term) explanation, H would be the premises of a filled-out schematic argument. That seems to make P(H) high.
The problem is that the same line of reasoning also makes P(E) high, as E would be the conclusion in a filled-out schematic argument that fits with K. So, the two priors may offset each other.
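To make the offsetting worry explicit (just a sketch, using the symbols above): by Bayes' theorem, P(H|E) = P(E|H) × P(H) / P(E). If unification with K raises P(H), the same reasoning raises P(E), so the boost in the numerator can be cancelled by the boost in the denominator, leaving the posterior P(H|E) roughly where it started.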
I think you're right: IBE might survive the rejection of (2). But some things that sound like good applications of IBE would need to go.
For instance, someone offers me a lottery ticket, telling me that either ten or a million tickets are being sold this week (she can't remember which). I win the lottery. It seems like the following is a good instance of IBE: "The ten-ticket hypothesis much better explains why I won than the million-ticket hypothesis would. So, probably the ten-ticket hypothesis is true." Of course, we can reconstruct this as a straight Bayesian argument. But it sounds to my ear like a good application of IBE.
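To put rough numbers on the Bayesian reconstruction (my numbers, purely for illustration): give each hypothesis a prior of 0.5, with P(win | ten tickets) = 1/10 and P(win | million tickets) = 1/1,000,000. Then P(ten tickets | win) = (0.1 × 0.5) / (0.1 × 0.5 + 0.000001 × 0.5) ≈ 0.99999, which matches the verdict the IBE-style reasoning delivers.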
What is meant by "understanding" in this case? In (1), do we have any understanding of why heads is less likely than tails, or is it purely and inexplicably stochastic?
I guess what I'm concerned about is whether simply knowing that heads is 40% likely and tails is 60% likely counts as an "explanation" in any sense. If I'm puzzled by why heads came up, does it really remove puzzlement to tell me that this probabilistic imbalance just does exist, without appeal to causes or powers or anything? Why would I not just be puzzled as to why that probability distribution holds rather than some other?
I was thinking that the chances are grounded in causal powers.
Also, I think Lipton's account of IBE allows you to reject (3). The goodness of an explanation has to involve more than how much understanding it produces. He uses the example of conspiracy theories. A conspiracy theory might produce understanding (if it's true), but it is also wildly implausible. Any plausible version of IBE has to account for these cases.
-Yunus