The central problem for Bayesian epistemology is where we get our prior probabilities from. Here are three solutions that have something in common:
1. Set our priors based on our common intuitions as to what the priors should be.
2. Set our priors in such a way as to best model human everyday and scientific inductive reasoning.
3. Use Solomonoff priors, with an idealization of the human mind as the reference Turing machine (a sketch of the definition follows the list).
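Since (3) does the most idealizing, it may help to have the standard definition on the table. A minimal sketch, using the usual formulation (the notation is textbook-standard, not original to this post): the Solomonoff prior of a finite binary string x is

```latex
% Solomonoff prior of a finite binary string x, relative to a universal
% monotone Turing machine U; on the anthropocentric proposal (3),
% U would be an idealization of the human mind.
% "U(p) = x*" means: run on program p, U outputs a string extending x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

The anthropocentrism enters entirely through the choice of the reference machine U: hypotheses with shorter programs for our kind of mind get higher prior weight.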
Is that bad? I was once teaching Philosophy of Love and Sex, and one of the students complained that the ethics we were talking about was only applicable to humans. I think his worry was that the ethics wouldn't apply to aliens. That's a Kantian concern. But it's off-base: of course sexual ethics will be different for asexually reproducing plasma beings. However, while it's off-base for sexual ethics, it sounds very reasonable to say that our epistemology should apply to all agents, or at least all embodied discursive agents. Unfortunately, there is little hope of solving the problem of priors subject to that assumption.
Let me try to soften you up in favor of anthropocentrism about priors with an ethics analogy. If sharks developed rationality, we wouldn't expect their flourishing to involve quite as much friendship as our flourishing does. Autonomy and friendship are both of value, and yet are in tension, and we would expect different species to resolve that tension differently based on the different ways that they are characteristically adapted to their environment. This is, indeed, an argument for a significant Natural Law component in ethics: even if values are kind-independent, the appropriate resolution of tensions between them is something that may well be relative to a kind.
But there are similar kinds of tensions in the doxastic life. For instance, there is a value to quickly grasping patterns in nature and generalizing them and a value to being more doxastically cautious. We can imagine that agents with one characteristic way of life might flourish in their doxastic lives better if they are eager patterners—they see three tigers and conclude all tigers are dangerous—and agents with a different characteristic way of life might do better to be more cautious in generalizing. Moreover, the appropriate resolution of the tension is likely to be dependent on the subject matter.
This particular tension is nicely modeled within the priors: the balance is fixed by how high the priors are for "nice patterns" (and for what counts as one) versus how high they are for "mess".
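Here is a minimal numerical sketch of that balance (the hypotheses and numbers are mine, purely for illustration): two agents see the same three dangerous tigers but start with different priors on the neat pattern, and only the eager patterner ends up generalizing.

```python
# Toy illustration: two hypotheses about tigers, given three dangerous ones in a row.
#   H_pattern: "all tigers are dangerous"       -> P(dangerous | H_pattern) = 1.0
#   H_mess:    "tigers vary; half are dangerous" -> P(dangerous | H_mess)   = 0.5

def posterior_pattern(prior_pattern: float, n_dangerous: int) -> float:
    """Posterior probability of H_pattern after n_dangerous dangerous tigers."""
    like_pattern = 1.0 ** n_dangerous   # H_pattern predicts danger every time
    like_mess = 0.5 ** n_dangerous      # H_mess predicts danger half the time
    num = prior_pattern * like_pattern
    den = num + (1.0 - prior_pattern) * like_mess
    return num / den

# An eager patterner starts with a high prior on the nice pattern;
# a cautious agent starts with a low one. Same three observations:
for label, prior in [("eager", 0.5), ("cautious", 0.05)]:
    print(label, round(posterior_pattern(prior, 3), 3))
# eager    -> 0.889  (already concludes all tigers are dangerous)
# cautious -> 0.296  (still withholds the generalization)
```

The evidence is identical; the divergent verdicts come entirely from the priors, which is the sense in which the priors encode the resolution of the tension.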
So one way of living with the anthropocentrism of proposals like (1)-(3) (which are not, of course, all of a piece: there is a spectrum there, with more and more idealization as one goes down the list) is to accept a Natural Law epistemology. For each kind of rational agent, there is a natural way for the minds of agents of that kind to think. This natural way yields decisions between competing doxastic values. In a Bayesian setting, this is most prominently embodied in the choice of priors. There are, literally, such things as natural and unnatural priors for a rational agent of a given kind.
There is, however, something disquieting. What about truth? Don't we want our reasoning to get us to truth? What if our kind-relative norms don't get us there?
Well, first of all, we want to both get to truth and avoid falsehood. As William James famously notes, the two desiderata are in tension—you can get all truth by believing every proposition and you can avoid all falsehood by believing none—and there are different ways of resolving the tension. James's own solution was to relativize to the individual. But my Natural Law suggestion is that the resolution of the tension is to be relativized to the kind (and perhaps subject matter). (There may be some absolute constraints, of course.)
But the worry remains. What if our priors are just not conducive to getting to the truth? (It won't help to say that in the limit we get convergence, because we don't want to wait for the limit!) What if our epistemic procedures, appropriate as they are to our kind, fail to get at the truth?
After all, we can imagine eager pattern identifiers in Humean worlds, where the patterns they identify are always spurious, and cautious agents in extremely nicely arranged worlds who keep on missing out on the order around them, as well as less extreme cases.
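To see why waiting for the limit is cold comfort, here is a toy calculation (my numbers, purely illustrative): if an agent's kind-relative prior on the true hypothesis is tiny, the number of observations needed before that hypothesis is even as probable as not grows with the logarithm of the prior odds.

```python
import math

# Suppose the true hypothesis H predicts each observation with probability 0.9
# and its rival predicts it with probability 0.8. Then after n observations,
# posterior odds on H = prior odds * (0.9/0.8)^n, so the number of observations
# needed before H is even as probable as not is:

def observations_to_even_odds(prior: float, lr: float = 0.9 / 0.8) -> int:
    """Smallest n with posterior probability of H at least 1/2."""
    prior_odds = prior / (1.0 - prior)
    return math.ceil(-math.log(prior_odds) / math.log(lr))

for prior in [1e-3, 1e-6, 1e-9]:
    print(prior, observations_to_even_odds(prior))
# 0.001 -> 59 observations; 1e-06 -> 118; 1e-09 -> 176
```

Convergence does come, but only after the agent has spent much of its doxastic life badly wrong; and with worse priors, or noisier evidence, "the limit" recedes further still.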
Here is where theism can help. God put us, with the natures we have, in a certain environment. It is reasonable to think that there would then be a fit between truth-conduciveness (and falsehood-avoidingness) and the characteristic ways of reasoning normative for our kind. This role for God in Natural Law epistemology is somewhat similar to the role of God in Kantian ethics. Kant has this deep concern: "What if in fact doing the right thing doesn't lead to happiness?" To feel the concern, make it a universal concern: "What if everybody's doing the right thing didn't actually lead to anybody being happy?" And although he thinks in a scenario like this people should still do the right thing, despite the cost for everyone, he thinks that we should postulate God to rule out such unhappy thoughts.
I suppose one might hope that evolution could help relieve the worry. Our characteristic doxastic ways of life evolved for our environment, so we would expect some fit between our priors and our environment (interesting question: if our individual priors have evolved, are they still priors?). I agree, but only in a limited way. The evolutionary argument will only help in those areas of doxastic life that were important to our fitness while those ways of life were evolving. It's not going to help us much in modern physics or metaphysics. It will help in those areas of doxastic life where the level of abstraction and complexity is much lower.
While in the above I held out for a Natural Law epistemology (or metaepistemology), I could also see someone defending a Divine Command epistemology.
3 comments:
Very interesting. I wonder if there's another route to kind-relative priors from the epistemology of "other minds." We spontaneously form beliefs about the mental states of others on the basis of their behavior/facial expressions which are hard to justify on the basis of general epistemic norms of induction, etc. that apply beyond the area of mind-reading. Some think that we have a dedicated mind-reading module for making these judgments. It's natural to think that if intelligent space worms (whose behavioral manifestations of their mental states differ drastically from ours) observed our facial expressions, behavior, etc., their conclusions about our mental states would (rightly) be a lot more tentative than our conclusions on the basis of the same evidence, which are usually (rightly) very confident. This suggests that, for conditionals of the form "If someone exhibits such-and-such [human-like] facial expressions and behavior, he/she is in mental state M," our rational priors will differ from those of the intelligent space worms.
That sounds right, and of course is illustrative of a lot of such cases.
It appears that God gave a twist to the physical universe because of man's sin: weeds and painful childbirth, for example. Is it not the case that such a twist affected our rationality in addition to the physical universe? If so, we will have trouble understanding rationality. Or am I totally off base?