Tuesday, April 3, 2012

An asymmetry between good and evil, and an argument against utilitarianism

Here is an asymmetry between good and evil actions. It is very easy to generate cases of infinitely morally bad actions: just imagine an agent who believes that if she raises her right hand, she will cause torment to an infinite number of people, and who raises her right hand in order to do so. But there seems to be no correspondingly easy way to generate infinitely morally good actions. Take an agent who thinks that if she raises her right hand, she will save infinitely many people from misery. Her raising her right hand will be a good action, but it will not be an infinitely morally good action. In fact, it will not be morally better than raising her right hand in a case where she believes that doing so will relieve finitely many people from misery.

To make the point clearer, observe that it is a morally great thing to sacrifice one's life to save ten people. But it is a morally more impressive thing to sacrifice one's life to save one person. Compare St Paul's sentiment in Romans 5:7, that it is more impressive to die for an unrighteous person than for a righteous one.

Chris Tweedt, when I mentioned some of these ideas, noted that they provide an argument against utilitarianism: utilitarianism cannot explain why sacrificing one's life to save one person would be better than sacrificing it to save ten.

Now of course if the choice is between saving one life and saving ten lives with the sacrifice, then saving ten lives is normally the better action. In fact, if the one life is that of a person among the ten, to save only that one life would normally[note 1] be irrational, and we morally ought not be irrational. But that's because choices should be considered contrastively. Previously, when I said that giving one's life for one is better than giving one's life for ten, I meant that

  1. choosing to save one other's life over saving one's own life
was a better choice than
  2. choosing to save ten others' lives over saving one's own life.
But the present judgment was, instead, that:
  3. choosing to save one other's life over saving one's own life or saving ten others' lives
is normally rationally and morally inferior to
  4. choosing to save ten others' lives over saving one's own life or saving one other's life.
The comparison between (1) and (2) was between choices made in different choice situations, while the comparison between (3) and (4) was between choices made in the same choice situation. The moral value of a choice obviously depends not just on what one is choosing but on what one is choosing over.

But even after taking this into account, it's hard to see how a utilitarian can make sense of the judgment that (1) is morally superior to (2). In fact, from the utilitarian's point of view, if everything relevant is equal, (1) is morally neutral—it makes no net difference—while (2) is morally positive.
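On a crude act-utilitarian count, the verdict on (1) and (2) comes out as simple arithmetic. Here is a toy sketch of that bookkeeping; the one-unit-per-life utility scale and the `net_utility` helper are illustrative assumptions, not part of the original argument:

```python
# Toy act-utilitarian tally, assuming each life contributes one
# (arbitrary) unit of utility. An illustrative simplification only.
def net_utility(lives_saved: int, lives_lost: int) -> int:
    """Net change in total utility on a crude utilitarian count."""
    return lives_saved - lives_lost

choice_1 = net_utility(lives_saved=1, lives_lost=1)   # sacrifice self to save one
choice_2 = net_utility(lives_saved=10, lives_lost=1)  # sacrifice self to save ten

print(choice_1)  # 0: on this count, choice (1) is morally neutral
print(choice_2)  # 9: on this count, choice (2) is morally positive
```

So the utilitarian tally ranks (2) strictly above (1), which is just the judgment the post is resisting.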

Perhaps, though, we need a distinction between moral impressiveness and moral goodness? So maybe (1) is more morally impressive than (2), but (2) is still morally better. This distinction would be analogous to that between moral repugnance and moral badness. Pulling wings off flies for fun is perhaps more morally repugnant than killing someone in a fair fight to defend one's reputation, but the latter is a morally worse act.

But I do not think the difference between (1) and (2) is just that of moral impressiveness. Here's one way to see this. It is plausible that as one increases the number of people whose lives are saved, or starts to include among them people one has special duties of care towards, one will reach the point where the sacrifice of one's life is morally obligatory. But to sacrifice one's life to save one stranger is morally supererogatory. And while I don't want to say that every supererogatory action is better than every obligatory action, this seems to be a case where the supererogatory action is better than the obligatory one.

On reflection, however, it is quite possible to increase the moral value of a good act without bound. Just imagine, for instance, that you believe that you will suffer forever unless you murder an innocent person. Then refraining from the murder will be infinitely good (or just infinitely impressive?). So we can increase the badness of an action apparently without bound by making the intended result worse, and we can increase the goodness of an action by making the expected cost to self worse (as long as one does not thereby render the action irrational; cases need to be chosen carefully).

2 comments:

Luis G. Oliveira said...

Hi Alex,

I'm not pulled by your present formulation. I'm inclined to say that doing (2) is better than doing (1).

I wonder if we are not blurring together evaluations of "agents" and evaluations of "actions."

I would be more inclined to agree with you--and St. Paul--if the claim was that it is more "morally praiseworthy" to sacrifice one's life to save one than to save ten. But this would now be an evaluation of agents, presumably on the spectrum extremely-blameworthy to extremely-praiseworthy, and utilitarianism (traditionally) says nothing about how we should evaluate such things.

Put a bit differently: being a praiseworthy action is surely a moral value, so there is a sense in which utilitarianism can't explain why (1) has more of this moral value (praiseworthiness) than (2). But since utilitarianism is not proposed as a theory about this kind of moral value, it is not a fault of utilitarianism that it doesn't explain our judgements about it.

Thoughts?

Alexander R Pruss said...

There are two kinds of evaluation of agents:
A. We can evaluate the agent as having a certain character.
B. We can evaluate the agent as doing a certain action.

Now, I am definitely not dealing with A. After all, an agent with exactly the same character can do 1 as can do 2.

So if I am evaluating agents, I am evaluating agents as doing certain actions, not as having certain characters. But I do not think there is a significant difference, except in focus, between evaluating an action and evaluating the agent qua doer of it.


Here's another thing in the vicinity. Compare these two:

5. Choosing to save ten lives over eating a cookie.
6. Choosing to save ten lives over saving one's own life.

Here, I think it is clear that 6 is a morally better action. (Though to refrain from 5 is morally much worse than to refrain from 6.) But I don't think this is a judgment utilitarianism supports. In terms of consequences, 6 is less valuable: netting out the loss of the agent's own life, only nine lives are gained.
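The consequentialist comparison of 5 and 6 can likewise be sketched as toy arithmetic; the utility values assigned to a life and to a cookie below are illustrative assumptions, chosen only so that a cookie counts for far less than a life:

```python
LIFE = 1.0      # assumed utility of one life (arbitrary unit)
COOKIE = 0.001  # assumed utility of eating a cookie; tiny by comparison

net_5 = 10 * LIFE - COOKIE  # choice 5: save ten, forgo a cookie
net_6 = 10 * LIFE - LIFE    # choice 6: save ten, forgo one's own life

print(net_5 > net_6)  # True: on consequences alone, 5 outranks 6
```

So on any such assignment the utilitarian tally ranks 5 above 6, against the judgment that 6 is the morally better action.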