Monday, October 21, 2013

Utilitarianism and trivializing the value of life

Consider these scenarios:

  • Jim killed ten people to save ten people and a squirrel.
  • Sam killed ten people to save ten people and receive a yummy and healthy cookie that would have otherwise gone to waste.
  • Frederica killed ten people to save ten people and to have some sadistic fun.
If utilitarianism is true, then in an appropriate setting where all other things are equal and no option produces greater utility, the actions of Jim, Sam and Frederica are not only permissible but are duties in their circumstances. But clearly these actions are all wrong.

I find these counterexamples against utilitarianism particularly compelling. But I also think they tell us something about deontological theories. I think a deontological theory, in order not to paralyze us, will have to include some version of the Principle of Double Effect. But consider these cases (I am not sure I can come up with a good parallel to the Frederica case):

  • John saved ten people and a squirrel by a method that had the death of ten other people as a side-effect.
  • Sally saved ten people and received a yummy and healthy cookie that would have otherwise gone to waste by a method that had the death of ten other people as a side-effect.
These seem wrong. Not quite as wrong as Jim's, Sam's and Frederica's actions, but still wrong. These actions trivialize the non-fungible loss of human life. The Principle of Double Effect typically has a proportionality constraint: the bad effects must not be out of proportion to the good. It is widely accepted among Double Effect theorists that this constraint should not be read in a utilitarian way, and the above cases show this. Ten people dying is out of proportion to saving ten people and a squirrel. (What about a hundred to save a hundred and one? Tough question!)


John Moore said...

You did say "where all other things are equal," but then you brushed it aside. That's actually a huge caveat. You need to discuss what it really means for all other things to be equal.

Most importantly, everything else being equal means that people wouldn't think your act was outrageous and shun you. In this "appropriate setting" there's some odd culture in which people do not think this is trivializing life.

Another consideration: Killing 10 people takes a lot of effort and causes a big mess. Getting a cookie is probably not sufficient payoff for all that labor.

See, if all things are really equal, you have to explicitly describe all those circumstances. You can't just assume things are similar to our world.

Alexander R Pruss said...

It's not hard to set up these things so these considerations don't matter. Maybe it's your last action in life, so your being shunned won't matter. Maybe you're on a desert island. As for labor, it need be no more than giving an order or pressing a button.

John Moore said...

You say it's not hard? How convoluted do you want to get? Clearly you don't take seriously the caveat that all other things are equal.

By definition, if all things were equal and you got some kind of profit, then the transaction would be good. But in all your examples things are not really equal.

Don't blame utilitarianism. The fault lies in your own unstated assumptions.

Richard Davis said...

John, perhaps it would be fair to ask you yourself to spell out a scenario in which it does seem to you that all things are equal (except that when you kill ten people to save ten other people, you also get a cookie)?

I suspect that even if you do so, Dr. Pruss will still deny that in that situation, it is right to kill the ten people. This is because (I think) he denies one of the things you claimed: that by definition, if all things were equal and you got some kind of profit, the transaction would be good. I think Pruss would say that in such a scenario (even if you were to spell it out), even though all things are equal and you make a profit, it still is not right to voluntarily kill ten people in order to save ten others. The view is that when you act so as to kill ten people, even with the intention to save ten others, the action itself --- or else the fact that the action takes place --- is somehow intrinsically bad in a way that does not reduce to the badness or goodness of its causal consequences.

Of course, one of the things which isn't covered in the word 'all' in this context is the moral considerations themselves. We cannot usefully rig the case merely by stipulating that all the moral considerations are themselves equal whether you kill the ten people or not. The claim that all the moral considerations are equal just outright entails that it is not morally worse for you to kill the people than for you not to do so. But the question we're asking is (roughly) whether in such a case, it is morally worse to kill the people. So if we specify the scenario so as to outright entail that it is not morally worse to kill the people, then we have rendered the question uninteresting. Thus, not strictly 'all' things can be equal. I think instead we mean something like 'all things, besides the gaining of the cookie, which can be described in non-moral language'.

Does that seem right?

Michael Rabenberg said...

Couple of things.

1. I've long thought the worst cases for utilitarianism to be ones in which a person would maximize aggregate utility by maximizing his own pleasure over pain in the process of doing something with no moral worth, even notional, like sexually abusing a comatose person and making sure the person never finds out. Utilitarianism would have it that doctors are *morally required* to sexually abuse their comatose patients if they're psychologically constituted in the requisite way. This is delusional.

2. I'm intrigued by your parenthetical at the end, Prof. Pruss. What makes you think increasing the numbers of saved persons and killed persons might make things permissible?

Michael Rabenberg said...

(I take it you changed it from ten+insignificant addition vs ten to 101 vs 100 instead of to 11 vs 10 for a reason.)

John Moore said...

OK, thanks for this discussion. It's really stimulating my ideas.

Yes, to argue against utilitarianism, you really have to say that (material) profit isn't the main source of good. So where I might say 10+1>10, you'd have to say no, there's something higher than just what's measurable or quantifiable. At that point we could discuss what that higher thing is.

I just don't like these emotional arguments where you appeal to people's gut feelings using a very convoluted and unrealistic situation.

Two big problems I see with utilitarianism:

(1) The idea of maximizing for the greatest number. It's fine just to seek your own individual good.

(2) The idea that you're morally obligated to maximize. It's fine to stop at "good enough."

Sorry if I'm going off topic here ...

Alexander R Pruss said...

"Yes, to argue against utilitarianism, you really have to say that (material) profit isn't the main source of good."

Not quite.

Utilitarianism is the conjunction of two theories:
1. A theory of the good: The good is happiness.
2. A theory of the right: The right is the maximization of the total good.

While you can argue against utilitarianism by disputing 1, and that would be "say[ing] that (material) profit isn't the main source of good", that's not the only way to dispute utilitarianism. One can bracket 1 and dispute 2. One can coherently say that there are cases where we should not do the thing that results in more good.

Richard Davis said...

Shouldn't a theist hold that given God's power and character, it is guaranteed that whatever is right always maximizes both happiness and the total good (whether or not those are the same thing!) in the long run?

For instance, in the case of abusing the comatose patient, such an act would injure the soul of the doctor who performed it. In the long run, this would reduce the doctor's happiness. (The view here would be that given God's character and power, a doctor cannot be so constituted that in the long run such an act does not reduce his happiness.)

If so, then we theists might still dispute utilitarianism (on the formulation Dr. Pruss gave one post back), but it seems we may dispute it only by denying thesis one: that the good is happiness. We might say instead that while maximizing the good always entails the long-run maximization of happiness, nevertheless the good and happiness are two different things.

Not to say that we must deny utilitarianism. It's not altogether obvious to me that a theist should deny that the good is happiness. As for Dr. Pruss's examples, there may necessarily be long-run harmful consequences to killing the ten to save ten others.

Alexander R Pruss said...

Maybe, but it's not obvious. Imagine this scenario. If you don't kill one innocent person, ten innocent people will be killed and a hundred innocent people will each be given an offer to kill an innocent person to save ten.

Now you know that if you don't take your offer, some of the hundred people will take their offer. Thus if you don't commit the sin of murder, a greater number of previously innocent people will commit the sin of murder. So even if we count the badness of sin, the better consequences seem to come if one does the immoral thing.

Moreover, the tradition strongly calls Adam's sin the "felix culpa", happy fault. The thought is that through Christ's redemption, a greater good comes from Adam's sin than we would have had had Adam not sinned. And yet Adam did wrong. (Of course, Adam wouldn't have known that such a great good would come. But even if he did, the sin would have been a sin.)

Richard Davis said...

John, you said:

'I just don't like these emotional arguments where you appeal to people's gut feelings using a very convoluted and unrealistic situation.'

I'm not sure how unrealistic the situation must be. Suppose you're a guard in a Nazi death camp. You can save the life of one prisoner who is scheduled to be killed by your fellow guards, simply by secretly using your own gun to shoot another prisoner instead (one who was scheduled to be released instead of killed!) and then setting the first prisoner free in his stead. Also, you know that the second prisoner, but not the first, will eventually send you a small gift if he is released.

It seems that this scenario isn't too far from realistic, and it can be used to assess utilitarianism in much the same way as Dr. Pruss's scenarios. Do you think it's right to let our 'gut feelings' play a large role in assessing the morality of the Nazi guard's actions in such a case? If so, then since this scenario is realistic, is the problem that gut feelings just are not usually a good guide to moral facts, even in realistic cases?

See, in my view, gut feelings do play an important role in assessing moral truths. I remember as a child that the faculty for 'feeling' that something was right or wrong --- the faculty I was taught to call 'my conscience' --- had a huge and beneficial impact toward getting me to behave rightly rather than wrongly. Even now I find that I very frequently appeal to intuitive, gut-feeling-ish assessments of the morality of actions whose moral status I cannot otherwise determine.

Richard Davis said...

Dr. Pruss, great point. It's not altogether clear to me whether in such a situation, the most good (and happiness) would result from killing the innocent person or not. It would be nice to hold that it is still wrong, however, to kill the innocent person.

One major concern: If the right does not necessarily maximize the good, then mustn't we conclude that God does not always will both the good and the right?