Thursday, August 21, 2014

A possible limitation of explicitly probabilistic reasoning

Bayesian reasoners will have their credences converge to the truth at different rates depending on their prior probabilities. But it's not as if there is one set of prior probabilities that will always lead to optimal convergence. Rather, some sets of priors lead to truth faster in some worlds and some lead to truth faster in others. This is trivially obvious: for any world w, one can have priors that are uniquely tuned for w, say by assigning a probability extremely close to 1 to every contingent proposition true at w and a probability extremely close to 0 to every contingent proposition false at w. Of course, there is the question of how one could get to have such priors, but one might just have got lucky!

So, a Bayesian reasoner's credences converge to the truth at rates that depend on her priors and on what kind of world she is in. For instance, if she is in a very messy world, she will get to the truth faster if she has lower prior credences for elegant universal generalizations, while if she is in a more elegant world (like ours!), higher prior credences for such generalizations will lead her to truth more readily.
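To make the point concrete, here is a toy simulation of my own (the two hypotheses and all the numbers are invented for the illustration, not anything argued for above): two Bayesian agents differ only in their prior over the same pair of hypotheses, and each reaches high confidence faster in the world that its prior happens to favor.

```python
# Toy illustration (assumed hypotheses and numbers): convergence speed
# depends jointly on the prior and on which world the agent is in.

import random

HYPOTHESES = {"elegant": 0.9, "messy": 0.5}  # P(an observation fits the pattern | hypothesis)

def steps_to_confidence(prior_elegant, true_world, threshold=0.99, seed=0):
    """Update P(elegant) on observations drawn from true_world; return how
    many observations it takes to put >= threshold credence on whichever
    hypothesis is actually true."""
    rng = random.Random(seed)
    p = prior_elegant  # current credence that the world is 'elegant'
    for n in range(1, 10_000):
        obs = rng.random() < HYPOTHESES[true_world]  # does this observation fit the pattern?
        like_e = HYPOTHESES["elegant"] if obs else 1 - HYPOTHESES["elegant"]
        like_m = HYPOTHESES["messy"] if obs else 1 - HYPOTHESES["messy"]
        p = p * like_e / (p * like_e + (1 - p) * like_m)  # Bayes' theorem, two hypotheses
        credence_in_truth = p if true_world == "elegant" else 1 - p
        if credence_in_truth >= threshold:
            return n
    return None

for prior in (0.9, 0.1):          # one prior tuned for elegance, one for messiness
    for world in ("elegant", "messy"):
        print(f"prior P(elegant)={prior}, world={world}: "
              f"{steps_to_confidence(prior, world)} observations to reach 0.99")
```

The prior tuned for the elegant world gets to 0.99 quickly there and slowly in the messy world, and vice versa; neither prior dominates across both worlds.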

Now suppose that our ordinary rational ampliative reasoning processes are not explicitly probabilistic but can, to a good approximation, be modeled by a Bayesian system with a prior probability assignment P0. It is tempting to think that we would then do better to reason explicitly probabilistically according to this Bayesian system. That may be so. But unless we have a good guess as to what the prior assignment P0 is, this is not an option: we cannot ditch our native reasoning processes in favor of the Bayesian system without a good guess at the priors P0. And we have no direct introspective access to the priors P0 implicit in our reasoning processes, while our indirect access to them (e.g., through psychological experiments about people's responses to evidence) is pretty crude and inaccurate.

Imagine now that, due to God and/or natural selection, we have ampliative reasoning processes that are tuned for a world like ours. These processes can be modeled by Bayesian reasoning with priors P0, and those priors would then be well tuned to a world like ours. But it may be that our best informed guess Q0 as to the priors is much more poorly tuned to our world than the priors P0 actually implicit in our reasoning. In that case, switching from our ordinary reasoning processes to something explicitly probabilistic would throw away the information contained in the implicit priors P0, information placed there by the divine and/or evolutionary tuning process.
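Here is an equally toy sketch of that last point (again with numbers I have simply made up: a prior concentrated near the truth stands in for the well-tuned implicit P0, and a flat prior stands in for the crude explicit guess Q0): on the same modest body of evidence, the tuned prior lands closer to the truth than the guessed one, which is the sense in which switching to Q0 throws information away.

```python
# Toy illustration (assumed setup): a prior that happens to be concentrated
# near the true state of the world (standing in for the tuned P0) beats a
# crude flat guess (standing in for Q0) on the same modest evidence.

import random

def posterior_mean(prior_alpha, prior_beta, observations):
    """Beta-Bernoulli conjugate update: posterior mean for the chance that an
    observation fits the pattern, given a Beta(prior_alpha, prior_beta) prior."""
    fits = sum(observations)
    return (prior_alpha + fits) / (prior_alpha + prior_beta + len(observations))

rng = random.Random(1)
true_chance = 0.85                                        # the actual world (assumed)
data = [rng.random() < true_chance for _ in range(20)]    # a modest amount of evidence

p0_estimate = posterior_mean(17, 3, data)   # "tuned" prior: Beta(17, 3), mean 0.85
q0_estimate = posterior_mean(1, 1, data)    # crude explicit guess: flat Beta(1, 1)

print(f"true chance:     {true_chance}")
print(f"with tuned P0:   {p0_estimate:.3f} (error {abs(p0_estimate - true_chance):.3f})")
print(f"with guessed Q0: {q0_estimate:.3f} (error {abs(q0_estimate - true_chance):.3f})")
```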

If this is right, then sometimes, or often, a formal probabilistic reconstruction of an intuitive inductive argument will do less well than simply sticking with the intuitive argument. For our ordinary intuitive inductive reasoning is, on this hypothesis, tuned well for our world. But our probabilistic reconstruction may not benefit from this tuning.

On this tuning hypothesis, experimental philosophy is actually a good route into epistemological research. For how people in fact reason carries implicit information as to which priors fit our world well.
