It's noon. You and two other innocents, A and B, are imprisoned by a dictator in separate blast-proof cells. All the innocents are strangers, and you know of no morally relevant differences between them (whether absolutely or relative to you). A's and B's cells both contain bomb and timer apparatuses that A and B cannot do anything about. B's bomb timer is turned off. A's timer is set to blow her up at 1:00 pm. In your cell, there is a yummy mint on a weight-sensitive switch connected to the apparatus in B's cell. If the mint is removed, B's timer will be set to go off at 1:00 pm. The dictator will check up on the situation shortly before 1:00 pm, and will turn off A's timer if you've done something that caused B's timer to turn on. Anybody who survives past 1:00 pm will then be released.[note 1]
So you reason to yourself. "I like mints. If I eat the mint, I will cause B's death, but A will be saved. My causing of B's death will be non-intentional, and on balance the consequences to human life are neutral. But I get a mint out of it. So the Principle of Double Effect should permit me to eat the mint."
If this reasoning is good, the Principle of Double Effect is close to useless. Strict deontologists think it's wrong to kill one innocent to save millions. Most think it's wrong to kill one innocent to save two. But just about every deontologist will say that it's wrong to kill one innocent to save one innocent and one cat. Now, consider this case. The dictator hands you a gun, and tells you that if you don't kill innocent B, she'll kill innocent A and a cat. You clearly shouldn't kill B. But if you thought it was acceptable to take the mint, then you could reason thus: "It would be interesting to see what a bullet hole in a shirt pocket looks like (and the shirt doesn't belong to B—it is prison attire, belonging to the dictator). If I aim the gun at B's shirt pocket and press the trigger, the bullet will make a hole in the shirt pocket. And as a non-intended side-effect, it will subsequently cause B's death. But that's fine, because on balance the consequences to human life are neutral, as then A will be saved—plus a cat!" And since you can always think up some minor good that is served by pulling a trigger (finger exercise, practice aiming, etc.), you will get results any deontologist should reject.
So something is wrong with the reasoning—or Double Effect is wrong. I do not think, however, that Double Effect is wrong—I think it's indispensable. So what I will say is this. Double Effect requires that the evil effect not be intended and that there be a proportionality between the side-effect and the intended effect. What the above cases show is that, as a number of authors have noted, proportionality is not a matter of utilitarian calculation. Not only should the consequences be positive on balance, but the intended effect should be a good proportionate to the foreseen evil. And the foreseen evil is not "that one person fewer will be alive than otherwise", but that this particular person should die. The deaths of different people are incommensurable evils even when we know of no morally significant differences between the people.
In some cases the virtuous agent may count the numbers of people. But not in these cases. It is callous and unloving to get a mint or produce a bullet hole at the cost of B's death. It trivializes the value of B's life. There is a dilemma here. Either one performs the act that causes B's death for the sake of saving A, or one does not. If one does not, then B literally died so that one might have a mint or be intellectually gratified by the sight of a bullet hole. And so one trivializes B's life. If one is acting to save A, then one is not trivializing B's life. But in that case one is intending B's death, and deontology forbids that.
Here is a variant analysis that comes, perhaps, to the same thing. There are cases where one can do something in only one of two ways: by intending a basic evil or by having a morally vicious set of intentions. The cases I gave are like that: one can only take the mint or produce the bullet hole by intending B's death or by having a set of intentions that trivialize B's life. In either case, one is unloving to B. It is hard to say which is worse.
(This is related to the looping trolley case. There, I think one is either intending the absorption of kinetic energy by the one person, which is problematic, or one is intending a slight increase in length of life or a slight increase in probability of survival on the part of the five, which trivializes the death of the one.)
6 comments:
How do you feel about this live action deception business?
(Note: I deliberately say "deception" here and not "lying" in an attempt at more neutrality.)
On reflection, and after discussion with Daniel Hill, I need to modify what I said about the loop trolley.
Either you intend the absorption of kinetic energy by the one, or you don't. If you do intend it, you've got a problematic intention. If you don't intend it, then you either intend that the lives of the five are saved or you intend an increase in the probability of their being saved. If you intend an increase in the probability of saving, you have a proportionality problem as indicated in the post. If you intend saving but not the smunching of the one victim, you have another proportionality problem: your probability of success is too low to have proportionality (proportionality must take account of the probability of success). So in all three cases, you do something problematic. (I say problematic and not wrong, because intending the absorption of kinetic energy is not the same as intending death. Given appropriate authority, one might be permitted to intend the absorption of kinetic energy.)
Dan:
I have not followed any of the details. For the record, I think it is always wrong to lie, and that knowingly, deliberately and deceptively (with the deception being of the relevant sort) asserting a falsehood is a lie. The little I've heard suggests that there was relevant knowledge, deliberation and deception. The only question is probably whether what was asserted was a falsehood, and to determine that one would have to look carefully at all the sentences uttered.
What do you mean by the deception being of the relevant sort?
My own view is that you don't need the deception condition, but since I was only stating a set of sufficient conditions, I left it in.
One might want to distinguish between three kinds of deception:
- deception as to the content (the usual case)
- deception as to the speaker's beliefs (a less common case)
- deception as to something else.
And the relevancy condition rules out the last case. Suppose that by speaking with Reagan's voice, I try to convince you that I am Reagan. That's not a relevant deception; it has nothing to do with the content of what I am saying, be it true or false.
But like I said, I am happy to drop the deception condition.
Interesting cases! I need to think about these.