Showing posts with label prudential reasons. Show all posts

Tuesday, December 6, 2022

Dividing up reasons

One might think that reasons for action are exhaustively and exclusively divided into the moral and the prudential. Here is a problem with this. Suppose that you have a spinner divided into red and green areas. If you spin it and it lands on red, something nice happens to you; if it lands on green, something nice happens to a deserving stranger. You clearly have reason to spin the spinner. But, assuming the division of reasons, your reason for spinning it is neither moral nor prudential.

So what should we say? One possibility is to say that there are only reasons of one type, say the moral. I find that attractive. Then benefits to yourself also give you moral reason to act, and so you simply have a moral reason to spin the spinner. Another possibility is to say that in addition to moral and prudential reasons there is some third class of “mixed” or “combination” reasons.

Objection: The chance p of the spinner landing on red is a prudential reason and the chance 1 − p of its landing on green is a moral reason. So you have two reasons, one moral and one prudential.
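The objection's decomposition can be put numerically. Here is a toy sketch (all strengths hypothetical, and assuming, as below, that the strength of a chancy reason is proportional to its probability):

```python
# Toy model of the objection: the reason to spin the spinner splits into
# a prudential component (chance the benefit goes to you) and a moral
# component (chance it goes to the stranger). All numbers hypothetical.
def reason_strengths(p, nice_for_you=1.0, nice_for_stranger=1.0):
    """p is the probability of landing on red (you benefit);
    1 - p is the probability of landing on green (a stranger benefits)."""
    prudential = p * nice_for_you           # strength of the prudential reason
    moral = (1 - p) * nice_for_stranger     # strength of the moral reason
    return prudential, moral

prudential, moral = reason_strengths(0.5)
total = prudential + moral  # combined strength of the reason to spin
```

The sketch only works because p is defined; that is exactly what the response below denies in the nonmeasurable case.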

Response: That may be right in the simple case. But now imagine that the “red” set is a saturated nonmeasurable subset of the spinner edge, and so is the “green” set. A saturated nonmeasurable subset has no reasonable probability assignment, not even a non-trivial range of probabilities like from 1/3 to 1/2 (at best we can assign it the full range from 0 to 1). Now the reason-giving strength of a chancy outcome is proportional to the probability. But in the saturated nonmeasurable case, there is no probability, and hence no meaningful strength for the red-based reason or for the green-based reason. But there is a meaningful strength for the red-or-green moral-cum-prudential reason. The red-or-green-based reason hence does not reduce to two separate reasons, one moral and one prudential.

Now, one might have technical worries about saturated nonmeasurable sets figuring in decisions. I do. (E.g., see the Axiom of Choice chapter in my infinity book.) But now instead of supposing saturated nonmeasurable sets, suppose a case where an agent subjectively has literally no idea whether some event E will happen—has no probability assignment for E whatsoever, not even a ranged one (except for the full range from 0 to 1). The spinner landing on a set believed to be saturated nonmeasurable might be an example of such a case, but the case could be more humdrum—it’s just a case of extreme agnosticism. And now suppose that the agent is told that if they so opt, then they will get something nice on E and a deserving stranger will get something nice otherwise. The same problem recurs: the agent clearly has a reason to opt, but that reason does not decompose into a prudential reason and a moral reason of meaningful strength.

Final remark: The argument applies to any exclusive and exhaustive division of reasons into “simple” (i.e., non-combination) types.

Wednesday, April 12, 2017

Types of normativity

It is widely thought that our actions are governed by multiple types of normativity, including the moral, the prudential and the epistemic, and that each type of normativity comes along with a store of reasons and an ought. Moreover, some actions—mental ones—can simultaneously fall under all three types of normativity.

Let’s explore this hypothesis. If we make this distinction between types of normativity, we will presumably say that morality is the realm of other-concerned reasons and prudence is the realm of self-concerned reasons. Suppose that at the cost of an hour of torture, you can save me from a minor inconvenience. Then (a) you have a moral reason to save me from the inconvenience and (b) you have a prudential reason not to save me.

It seems clear that you ought not to save me from the inconvenience. But what is this ought? It isn’t moral, since you have no moral reasons not to save me. Moreover, what explains the existence of this ought seems to be prudential reasons. So it seems to be a prudential ought.

But actually it’s not so clear that this is a prudential ought. For a further part of the explanation of why you ought not save me is that the moral reasons in favor of saving me from a minor inconvenience are so very weak. So this is an ought that is explained by the presence of prudential reasons and the weakness of the opposed moral reasons. That doesn’t sound like an ought belonging to prudential normativity. It seems to be a fourth kind of ought—an overall ought.

But perhaps moving to a fourth kind of ought was too quick. Consider that it would be wrongheaded in this case to say that you morally ought to save me, even though all the relevant moral reasons favor saving me. And if these were all the reasons you had, i.e., if there were no cost to saving me from the inconvenience, it would be the case that you morally ought to save me. (Or so I think. Add background assumptions about our relationship as needed to make it true if you’re not sure.) So whether you morally ought to save me depends on what non-moral reasons you have. So maybe we can say that in the original case, the ought really is a prudential ought, even though its existence depends on the weakness of the opposed moral reasons.

This, however, is probably not the way to go. For it leads to a great multiplication of types of ought. Consider a situation where you have moral and prudential reasons in favor of some action A, but epistemic reasons to the contrary. We can suppose that the situation is such that the moral reasons by themselves are insufficient to make it the case that you ought to perform A, and the prudential reasons by themselves are insufficient, but when combined they become sufficiently strong relative to the epistemic reasons to generate an ought. The ought which they generate, then, is neither moral nor prudential. Unless we’ve admitted the overall ought as a fourth kind, it seems we have to say that the moral and prudential reasons generate a moral-and-prudential ought. And then we immediately get two other kinds of ought in other cases: a moral-and-epistemic ought and a prudential-and-epistemic ought. So now we have six types of ought.
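The combination scenario can be made concrete with a toy threshold model (all numbers hypothetical, and assuming, simplistically, that reason strengths add and an ought arises when supporting reasons outweigh opposing ones by some margin):

```python
# Toy model: an "ought" to do A is generated when the supporting reasons
# outweigh the opposing reasons by a fixed margin. All numbers hypothetical.
MARGIN = 1.0

def generates_ought(supporting, opposing):
    return sum(supporting) > sum(opposing) + MARGIN

moral, prudential, epistemic = 3.0, 3.0, 4.0

alone_moral = generates_ought([moral], [epistemic])           # 3 vs 5: no ought
alone_prudential = generates_ought([prudential], [epistemic]) # 3 vs 5: no ought
combined = generates_ought([moral, prudential], [epistemic])  # 6 vs 5: ought
```

Here neither the moral nor the prudential reasons alone generate the ought, but together they do, which is just the case that forces the choice between an overall ought and a moral-and-prudential ought.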

And the types multiply. Suppose you learn, by consulting an expert, that an action has no cost and there are either moral or prudential considerations in favor of the action, but not both. You ought to do the action. But what kind of ought is that? It’s some kind of seventh ought, a disjunctive moral-exclusive-or-prudential kind. Furthermore, there will be graded versions. There will be a mostly-moral-but-slightly-epistemic ought, and a slightly-moral-but-mostly-epistemic ought, and so on. And what if this happens? An expert tells you, correctly or not, that she has discovered there is a fourth kind of reason, beyond the moral, prudential and epistemic, and that some action A has no cost but is overwhelmingly favored by the fourth kind of reason. If you trust the expert, you ought to perform the action. But what is the ought here? Is it an “unknown-type” ought?

It is not plausible to think that oughts divide in any fundamental way into all these many kinds, corresponding to different kinds of normativity.

Rather, it seems, we should just say that there is a single type of ought, an overall ought. If we still want to maintain there are different kinds of reasons, we should say that there is variation in what kinds of reasons and in what proportion explain that overall ought.

But the kinds of reasons are subject to the same line of thought. You learn that some action benefits you or a stranger, but you don’t know which. Is this a moral or a prudential reason to do the action? I suppose one could say: you have a prudential reason to do the action in light of the fact that the action has a chance of benefiting you, and you have a moral reason to do the action in light of the fact that the action has a chance of benefiting a stranger. But the reason-giving force of the fact that the action benefits you or a stranger is different from the reason-giving force of the facts that it has a chance of benefiting you and a chance of benefiting the stranger.

Here’s a technical example of this. Suppose you have no evidence at all whether the action benefits you or the stranger, but it must be one or the other, to the point that no meaningful probability can be assigned to either hypothesis. (Maybe a dart is thrown at a target, and you are benefited if it hits a saturated non-measurable subset and a stranger is benefited otherwise.) That you have no meaningful probability that the action benefits you is a reason whose prudential reason-giving force is quite unclear. That you have no meaningful probability that the action benefits a stranger is a reason whose moral reason-giving force is quite unclear. But the disjunctive fact, that the action benefits you or the stranger, is a quite clear reason.
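The asymmetry can be sketched schematically, modeling the absence of a meaningful probability as `None` (a hypothetical representation, not a claim about how such cases should formally be handled):

```python
# Sketch: when no probability can be assigned to "the action benefits you"
# vs. "it benefits the stranger" (modeled here as None), neither component
# reason has a meaningful strength, but the disjunctive reason still does.
def component_strength(prob, benefit=1.0):
    """Strength of a chancy reason; undefined when the probability is."""
    return None if prob is None else prob * benefit

p_you = None        # no meaningful probability (e.g., nonmeasurable target)
p_stranger = None

prudential = component_strength(p_you)        # None: undefined strength
moral = component_strength(p_stranger)        # None: undefined strength
disjunctive = 1.0   # "you or the stranger benefits" is certain: clear strength
```

The point of the sketch is only that the disjunctive reason's strength does not derive from the component strengths, since those are undefined.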

All this makes me think that reasons do not divide into discrete boxes like the moral, the prudential and the epistemic.