It is widely thought that our actions are governed by multiple types of normativity, including the moral, the prudential and the epistemic, and that each type of normativity comes with its own store of reasons and its own ought. Moreover, some actions (mental ones) can simultaneously fall under all three types of normativity.
Let’s explore this hypothesis. If we make this distinction between types of normativity, we will presumably say that morality is the realm of other-concerned reasons and prudence is the realm of self-concerned reasons. Suppose that at the cost of an hour of torture, you can save me from a minor inconvenience. Then (a) you have a moral reason to save me from the inconvenience and (b) you have a prudential reason not to save me.
It seems clear that you ought not to save me from the inconvenience. But what is this ought? It isn't moral, since you have no moral reasons not to save me. Moreover, what explains the existence of this ought seems to be prudential reasons. So it seems to be a prudential ought.
But actually it's not so clear that this is a prudential ought. For a further part of the explanation of why you ought not to save me is that the moral reasons in favor of saving me from a minor inconvenience are so weak. So this is an ought that is explained both by the presence of prudential reasons and by the weakness of the opposed moral reasons. That doesn't sound like an ought belonging to prudential normativity. It seems to be a fourth kind of ought: an overall ought.
But perhaps moving to a fourth kind of ought was too quick. Consider that it would be wrongheaded in this case to say that you morally ought to save me, even though all the relevant moral reasons favor saving me, and even though, if these were all the reasons you had (i.e., if there were no cost to you in saving me from the inconvenience), it would be the case that you morally ought to save me. (Or so I think. Add background assumptions about our relationship as needed to make it true if you're not sure.) So whether you morally ought to save me depends on what non-moral reasons you have. So maybe we can say that in the original case the ought really is a prudential ought, even though its existence depends on the weakness of the opposed moral reasons.
This, however, is probably not the way to go, for it leads to a great multiplication of types of ought. Consider a situation where you have moral and prudential reasons in favor of some action A, but epistemic reasons to the contrary. We can suppose that the moral reasons by themselves are insufficient to make it the case that you ought to perform A, and the prudential reasons by themselves are likewise insufficient, but that when combined they are strong enough, relative to the epistemic reasons, to generate an ought. The ought they generate, then, is neither moral nor prudential. Unless we have admitted the overall ought as a fourth kind, it seems we have to say that the moral and prudential reasons generate a moral-and-prudential ought. And then we immediately get two further kinds of ought in other cases: a moral-and-epistemic ought and a prudential-and-epistemic ought. So now we have six types of ought.
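To make the combination concrete, here is a toy weighing picture; the numbers and the additive rule are purely illustrative assumptions of mine, not anything the argument depends on. Suppose the moral reasons for A have strength 3, the prudential reasons for A have strength 3, and the opposed epistemic reasons have strength 5. Then

$$3 < 5 \quad\text{and}\quad 3 < 5, \quad\text{but}\quad 3 + 3 = 6 > 5,$$

so neither the moral nor the prudential reasons alone outweigh the epistemic ones, while together they do, and it is the combination that generates the ought.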
And the types multiply. Suppose you learn, by consulting an expert, that an action has no cost and that there are either moral or prudential considerations in favor of it, but not both. You ought to do the action. But what kind of ought is that? It's some kind of seventh ought, a disjunctive moral-exclusive-or-prudential kind. Furthermore, there will be graded versions: a mostly-moral-but-slightly-epistemic ought, a slightly-moral-but-mostly-epistemic ought, and so on. And what if this happens? An expert tells you, correctly or not, that she has discovered a fourth kind of reason, beyond the moral, the prudential and the epistemic, and that some action A has no cost but is overwhelmingly favored by this fourth kind of reason. If you trust the expert, you ought to perform the action. But what is the ought here? Is it an "unknown type" ought?
It is not plausible to think that oughts divide in any fundamental way into all these many kinds, corresponding to different kinds of normativity.
Rather, it seems, we should just say that there is a single type of ought, an overall ought. If we still want to maintain that there are different kinds of reasons, we should say that there is variation in which kinds of reasons, and in what proportions, explain that overall ought.
But the kinds of reasons are subject to the same line of thought. You learn that some action benefits you or a stranger, but you don't know which. Is this a moral or a prudential reason to do the action? I suppose one could say: you have a prudential reason to do the action in light of the fact that it has a chance of benefiting you, and you have a moral reason to do the action in light of the fact that it has a chance of benefiting a stranger. But the reason-giving force of the fact that the action benefits you or a stranger is different from the reason-giving force of the pair of facts that it has a chance of benefiting you and that it has a chance of benefiting the stranger.
Here’s a technical example of this. Suppose you have no evidence at all whether the action benefits you or the stranger, but it must be one or the other, to the point that no meaningful probability can be assigned to either hypothesis. (Maybe a dart is thrown at a target, and you are benefited if it hits a saturated non-measurable subset and a stranger is benefited otherwise.) That you have no meaningful probability that the action benefits you is a reason whose prudential reason-giving force is quite unclear. That you have no meaningful probability that the action benefits a stranger is a reason whose moral reason-giving force is quite unclear. But the disjunctive fact, that the action benefits you or the stranger, is a quite clear reason.
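For readers who want the measure-theoretic background to the dart case, the standard fact being relied on (stated here under the simplifying assumption that the target has total area 1) is that a saturated non-measurable subset $S$ of the target satisfies

$$\mu_*(S) = 0 \qquad\text{and}\qquad \mu^*(S) = 1,$$

i.e., its Lebesgue inner measure is 0 and its outer measure is 1, and likewise for its complement. Any value in $[0,1]$ is then consistent with the measurable data as "the probability that the dart lands in $S$", which is why no meaningful probability attaches to the hypothesis that you are the one benefited.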
All this makes me think that reasons do not divide into discrete boxes like the moral, the prudential and the epistemic.
Alex,
I'm inclined to say that many cases of "should" (usually regarded as a "prudential" "should") are not overall judgments, but judgments from the perspective of some goal or ordered set of goals (or, perhaps better, with respect to some possibly partial and usually implicit evaluative function, since some agents might not have such goals; more on that below).
I found what I think is a good example in support of the view that many judgments are not overall ones in Street's paper "Constructivism about Reasons" (page 29, footnote 41 in the version I have). In the example, she asks her husband whether they should stop to pick up milk on their way back from seeing a movie. Her judgment does not seem to be an overall one, and that seems to be a common way of making "should" judgments (I don't agree with Street's views on judgments in general, but I'm inclined to agree with this particular point she makes, though without looking at the matter from her theory about judgments).
But if you think there is only the overall "ought" (and "should"), do you think judgments like the one in Street's example, or "We should go see the movie tonight", etc., are meant as overall judgments? Or do you make a distinction between "should" and "ought to" in this context?
With regard to moral and prudential judgments, I'm tentatively considering the view that moral "should" judgments are judgments with respect to some implicit goal/function, namely the goal of not breaking the moral rules (at least in usual contexts; in some contexts the goal might be to do what's morally best, but that's not usually how we treat moral "ought" judgments, I think); the same would apply to judgments generally considered prudential.
In a sense, I think the suggestion above also reduces the types of "ought" relative to the usual view, since (leaving epistemic "ought" aside) the only type would seem to be "ought-with-respect-to-some-goal/function". But in a different sense, does a view like that multiply the "oughts" in a way similar to what you find objectionable?
Regarding epistemic judgments, a similar possibility would be that they are also judgments with respect to a goal, namely the goal of not being epistemically irrational, though I find that one less probable.
An overall "should" (or "ought") would be a should with respect to the entire evaluative function of an agent; I sketch below what I mean. I can think of two difficult issues in this context, though:
1. Perhaps, in some cases, different parts of an agent's mind have different goals/values, and there is no single evaluative function that can settle the matter.
2. It may be that in the case of some very weird agents, the overall "should" does not factor in, say, the moral "should", or the epistemic "should", if an agent's evaluative function assigns no positive value at all to not behaving immorally or to being epistemically rational.
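Here is a minimal sketch of the kind of thing I have in mind; the function names, the numbers, and the weighted-sum rule are just illustrative assumptions on my part, not a claim about how evaluative functions actually work:

```python
# Toy model (illustrative only): a "should" relative to a single goal vs. an
# overall "should" relative to an agent's whole evaluative function, modeled
# here as a weighted sum over goals.

def should_for_goal(action_values, goal):
    """Goal-relative 'should': the action scores positively on that one goal,
    ignoring everything else the agent cares about."""
    return action_values.get(goal, 0) > 0

def overall_should(action_values, goal_weights):
    """Overall 'should': the action scores positively on the agent's entire
    (weighted) evaluative function."""
    return sum(goal_weights.get(goal, 0) * value
               for goal, value in action_values.items()) > 0

# Street's milk example with made-up numbers: stopping for milk serves the
# "have milk tomorrow" goal and slightly costs the "get home soon" goal.
values = {"have milk tomorrow": 2, "get home soon": -1}

ordinary_agent = {"have milk tomorrow": 1, "get home soon": 1}
weird_agent = {"get home soon": 1}  # assigns no weight at all to having milk

print(should_for_goal(values, "have milk tomorrow"))  # True: goal-relative judgment
print(overall_should(values, ordinary_agent))         # True: 2*1 + (-1)*1 = 1 > 0
print(overall_should(values, weird_agent))            # False: that goal never factors in
```

The last line illustrates the structural worry in (2): if an agent's evaluative function assigns no weight at all to some consideration, the overall "should" simply never factors it in.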
I think there are just reasons, which sum up to an overall 'ought', but which are naturally or conveniently divided into different kinds. Self-regarding, other-regarding, and epistemic reasons are all fairly natural divisions. But "As far as the law goes, you ought to ...", "Just considering your friendship with Nellie, you should ...", and "Professionally speaking, the right thing to do is ..." are further ways of dividing up the space of reasons. We use them to break hard problems down into smaller problems.
Compare: in Aristotle, all the virtues are forms of practical wisdom, phronesis, but we can usefully talk about "courage" by considering the phronesis required in the context of fear and danger, "temperance" by considering the phronesis required in facing temptation, etc. These are fairly natural divisions of the kinds of challenges we face. But that's not to say that they are hard-and-fast, that they are metaphysically interesting, or that they have no disputed/borderline/hard-to-classify cases.