Suppose that I have epistemic reason not to believe p, but a failure to believe p will likely lead to a non-epistemic bad for another person. Maybe I have epistemic reason not to believe that George is innocent, but a failure to believe that George is innocent will likely lead me to treat him unfairly. On what grounds should I decide whether to believe p? Not epistemic—for epistemic rationality is blind to non-epistemic bads, and hence either will automatically tell me to go with the epistemic good, which is not the right answer since epistemic goods do not outweigh all others, or will be unable to weigh the epistemic and non-epistemic goods and bads. On the other hand, morality tells me to pursue the good, and that includes the epistemic good. Morality, thus, is able to weigh both the epistemic and the non-epistemic goods and bads. Sometimes it will say to go for the epistemic good at the expense of the non-epistemic, and sometimes it will say to go for the non-epistemic good at the expense of the epistemic. It seems, thus, that the question of what I should do simpliciter in a case like this is a question of moral normativity.
Insofar, then, as morality can weigh epistemic goods, and morality is what delivers the answer to what I should do simpliciter, it seems that epistemic normativity tells me what I should or should not do considering only a subset of the actual reasons—the subset of epistemically relevant ones. This subset is, however, considered by morality and weighed against other reasons. Insofar as the epistemic reasons are weighed by morality, they also constitute moral reasons. Moreover, since the answer to the question of what I should do simpliciter comes from morality, it does not appear that the epistemic reasons have any independent normative force: epistemic reasons are simply a special kind of moral reason. There are many kinds of moral reasons. For instance, there are reasons relating to aretaic goods to self; reasons relating to non-aretaic goods to other persons; reasons relating to goods to non-persons; etc. And there are reasons relating to epistemic goods.
What I said about epistemic goods applies to prudential ones. (In fact, I now think that C. S. Lewis makes a similar argument early in Mere Christianity when he talks of how morality judges between instincts.)
One might think that what I said only applies in cases where the epistemic reasons have moral implications. But even if they didn't, the fact that they didn't would itself be a matter for moral judgment, and hence the reasons would not escape the purview of moral normativity. And, anyway, since epistemic goods are goods—if they weren't, they wouldn't be worth pursuing—and since morality tells us to pursue the goods, there are always moral implications.
7 comments:
Let me add another argument: Suppose that in wisely weighing the epistemic and non-epistemic reasons, I conclude that I should, all things considered, act on the non-epistemic reason, and I do so. In that case, the thing to say is: "The right thing to do was the one that was epistemically worse." Suppose, however, that I wisely conclude that I should, all things considered, act on the epistemic reason, and I do so. In that case, it would be mistaken to say: "The right thing to do was the one that was morally worse."
In other words, when I weighed the epistemic and non-epistemic reasons and went with the non-epistemic, I acted against epistemic normativity. But when I went with the epistemic, I didn't act against moral normativity. Yet if only the non-epistemic reason were a moral reason, and the epistemic reason were not a moral reason, then by acting on the epistemic reason I would necessarily be going against morality. Since I wasn't, the epistemic reason must itself be a moral reason.
It may be necessary to do some justice to the fact that people do distinguish moral reasons from non-moral ones. So we can say that there are moral reasons and narrowly moral reasons. The epistemic reasons are moral, but not narrowly so. But there is nothing morally special about the narrowly moral reasons.
Nice post - it'd be interesting indeed to see this crossed with the findings of various psychological studies, because we do in fact have a pretty solid library of connections between beliefs/attitudes and resulting behaviors.
You mention: “epistemic rationality is blind to non-epistemic bads...”
It depends how you view epistemic rationality. If one endorses a view of epistemic rationality that is relativized to epistemic situations, then cases arise where epistemic rationality is blind in the way you mention. If, however, one endorses an objective account of epistemic rationality, then it will seek to account for facts that are not limited to facts about what one knows in a given epistemic situation. In such a case, the normative facts will supervene on the physical facts. A change in the physical facts may result in a change in the normative facts about what one ought to believe. Epistemic rationality may, in turn, endorse a non-epistemic good. I’ll borrow a case from Achinstein’s “The Book of Evidence” to illustrate this point.
Imagine a community governed by authorities who preach the benefits of arsenic. They extol the health benefits of arsenic when sprinkled on food. Imagine, ceteris paribus, that all members of this community trust the authorities, believe what they say about arsenic, and have no reason to doubt that arsenic is beneficial to their health. A member of the community named Ann ate a lot of arsenic a little while ago. Is it reasonable for members of the community to believe that Ann is dead or dying? Relativized to an epistemic situation, it is not reasonable for members of the community to believe Ann is dead or dying. They have never seen anyone, including Ann, adversely affected by arsenic (as the example assumes), so knowing that Ann ate arsenic, even a lot of it, does not make it reasonable for them to believe Ann is dead or dying. For all they know, Ann is enjoying better health as a result of ingesting the beneficial substance. The members of the community have an epistemic reason not to believe Ann is dead or dying (p), but a failure to believe Ann is dead or dying will lead to a non-epistemic bad, namely, a failure to rush to her house in an effort to save her by inducing vomiting, etc. It will lead to her certain death. If, on the other hand, epistemic rationality is objective, and it is reasonable for members of the community to believe, even if such knowledge is not luminous, that Ann is dead or dying (based on the physical facts of arsenic's impact on the human body), then such epistemic rationality endorses a non-epistemic good, namely, attempting to save Ann's life.
I think there is a shorter way to put this basic point. You (and not only you; I originally saw this in McInerny's book on Aquinas) use "moral" to mean, approximately, "all things considered". The morally correct action is the one that is correct ATC, for example. It then follows trivially that all reasons are moral reasons, that any narrower form of normativity is a subset of moral normativity, and so on.
Although this usage is different from how some people use "moral", we need something in the ATC role.
Heath:
There is a little more to it than this--namely, there is the argument in my first comment. Take, for instance, this case. You have no evidence that there are aliens in other galaxies. But if you believe that there are aliens in other galaxies, you will cut short a headache of mine by one second (maybe you come to believe by swallowing a pill, and your swallowing the pill motivates someone to give me Tylenol half a second earlier). You thus have a moral reason to believe it. You have an epistemic reason to withhold belief. Suppose you choose not to believe it. Nobody, I think, would say you acted immorally.
So, at least, the following is right: if x is permitted to do A all things considered, then it is not immoral for x to do A.
On the view on which there are both moral and epistemic reasons, and the two are distinct, this is hard to explain.
But one rarely if ever gets to choose what one believes; and insofar as one does weigh up evidence and deliberate and such, only epistemic grounds should influence one's decision about what to believe. One should be able to separate what one believes from how one acts in situations such as the George case.
Similarly with the aliens. You have a moral reason to take a pill, not to believe that there are aliens. The moral reason arises from the relative importance of helping you compared with the presumed insignificance of wrongly forming a belief about aliens. But you have an epistemic reason not to believe in aliens, which operates in the absence of taking the pill.
Suppose it was not just your headache, but the fate of the entire planet that was at stake; and suppose the belief was not one that might be important, such as the existence of aliens, but was something like a large axiom of infinity. You have no evidence that such an axiom is true, but some mathematicians believe it; others do not, but then, you are not a mathematician (say) and so it hardly matters what you believe about this. Then I think I would say you acted immorally by not taking the pill.
And I think we are often in situations like this, where our decisions affect what we are likely to end up believing, e.g. when we decide how much time to spend on gathering and weighing up evidence, or when we decide whether to take some drug such as alcohol, or whether to trust someone.
The thing about the epistemic reasons is that they are not moral reasons, and hence they do not compete with moral reasons but operate in an entirely different manner. They are practically automatic, but have to be considered when we are describing how and why people act as they do. There is no epistemic reason not to take the pill, but there is a moral reason to value truth, i.e. to value epistemic reasons (maybe:)
Korsgaard's discussion of the disanalogy between reasons for belief and reasons for action in "The Activity of Reason" is really helpful on this, I think (though you have to wade through her anti-substantive realism arguments).