Showing posts with label triple effect. Show all posts

Monday, August 1, 2022

Triple effect, looping trolley and felix culpa

Frances Kamm uses her principle of triple effect to resolve the loop version of the trolley problem. On the loop version, as usual, the main track branches into two tracks, track A with five people and track B with one person, and the trolley is heading for track A. But now the two tracks join via a loop, so if there were no one on either track, a trolley that went down track A would come back up track B, and vice versa. If there were five people on track A and no one on track B, and we redirected the trolley to track B, it would go down track B, loop around, and fatally hit the five people on track A anyway. But the one person actually on track B is big enough that if the trolley goes down track B, it will be stopped by the impact and the five people will be saved.

The problem with redirecting to track B on the loop version of the trolley problem is that it seems that a part of your intention is that the trolley should hit the person on track B, since it is that impact which stops the trolley from hitting the five people on track A. And so you are intending harm to the person on track B.

In her book Intricate Ethics, Kamm gives essentially this story about redirecting the trolley in the loop case:

  • Initial Intention: Redirect the trolley to track B to prevent the danger of the five people being hit from the front.

  • Initial Defeater: The five people come to be in danger of being hit from the back by the trolley.

  • Defeater to Initial Defeater: The one person on track B blocks the trolley and prevents the danger of their being hit from the back.

The important point here is that the defeater to the defeater is not intended—it is just a defeater to a defeater. Thus there is no intention to block the trolley via the one person on track B, and hence that person’s being hit is not a case of their intentionally being used as a means to saving lives.

But this defeater-defeater story is mistaken as it stands. For given the presence of the person on track B, there is no danger of the five people being hit from the back. Thus, there is no initial defeater here.

Now, if you don’t know about the one person on track B, you would have a defeater to the redirection, namely the defeater that there is danger of being hit from the back. But learning about the person on track B would not provide a defeater to that defeater—it would simply remove the defeater by showing that the danger doesn’t exist.

That the story doesn’t have a defeater-defeater structure does not mean that one is intending the one person to be hit. Kamm might still be right in thinking there is no intention to block the trolley via the one person on track B. But I am dubious of Kamm’s story now, because I am dubious that the danger of being hit from the front yields a worthy initial intention. For there is nothing particularly bad about being hit from the front. It is only the danger of being hit simpliciter that seems worth preventing.

It is interesting to me to note that even if Kamm’s story doesn’t have defeater-defeater form, the main place where I want to use her triple effect account seems to still have defeater-defeater form. That place is the felix culpa, where God allows Adam and Eve to exercise their free will, even though he knows that this would or might well (depending on details about theories of foreknowledge and middle knowledge) result in their sinning, and God’s reasoning involves the great goods of salvation history that come from Adam and Eve’s sin.

  • Initial Intention: Allow Adam and Eve to exercise their free will.

  • Initial Defeater: They will or might well sin.

  • Defeater to Initial Defeater: Great goods will come about.

Here the initial defeater is not mistaken, as it is in the looping trolley case: the sin, or its possibility, is real. Moreover, while it is not an initially worthy intention to prevent people from being hit from the front, unless they aren’t going to be hit from behind (or some other direction) either, it is an initially worthy intention to allow Adam and Eve to exercise their free will, even if no further goods come about, because free will is intrinsically good.

Thus we can criticize Kamm’s own use of triple effect while yet preserving what I think is a really important theological application.

Friday, November 13, 2020

Reducing Triple Effect to Double Effect

Kamm’s Principle of Triple Effect (PTE) says something like this:

  • Sometimes it is permissible to perform an act ϕ that has a good intended effect G1 and a foreseen evil effect E where E causally leads to a further good effect G2 that is not intended but is a part of one’s reasons for performing ϕ (e.g., as a defeater for the defeater provided by E).

Here is Kamm’s illustration by a case that does not have much moral significance: you throw a party in order to have a good time (G1); you foresee this will result in a mess (E); but you expect the partygoers will help you clean up (G2). You don’t throw the party in order that they help you clean up, and you don’t intend their help, but your expectation of their help is a part of your reasons for throwing the party (e.g., it defeats the mess defeater).

It looks now like PTE is essentially just the Principle of Double Effect (PDE) with a particular way of understanding the proportionality condition. Specifically, PTE is PDE with the understanding that foreseen goods that are causally downstream of foreseen evils can be legitimately used as part of the proportionality calculation.

One can, of course, have a hard-line PDE that forbids foreseen goods causally downstream of foreseen evils to be legitimately used as part of the proportionality calculation. But that hard-line PDE would be mistaken.

Suppose Alice has her leg trapped under a tree, and if you do not move the tree immediately, the leg will have to be amputated. Additionally, there is a hungry grizzly near Bob and Carl, who are unable to escape and you cannot help either of them. The bear is just hungry enough to eat one of Bob and Carl. If it does so, then because of eating that one, it won’t eat the other. The bear is heading for Bob. If you move the tree to help Alice, the bear will look in your direction, and will notice Carl while doing so, and will eat Carl instead of Bob. All three people are strangers to you.

It is reasonable to say that the fact that your rescuing Alice switches whom the bear eats does not remove your good moral reason to rescue Alice. However, if we have the hard-line PDE, then we have a problem. Your rescuing Alice leads to a good effect, Alice’s leg being saved, and an evil, Carl being eaten. As far as this goes, we don’t have proportionality: we should not save a stranger’s leg at the expense of another stranger’s life. So the hard-line PDE forbids the action. But the PDE with the softer way of understanding proportionality gives the correct answer: once we take into account the fact that the bear’s eating Carl saves Bob, proportionality is restored, and you can save Alice’s leg.

At the same time, I think it is important that the good G1 that you intend not be trivial in comparison to the evil E. If instead of its being a matter of rescuing Alice’s leg, it were a matter of picking up a penny, you shouldn’t do that (for more argument in that direction, see here).

So, if I am right, the proportionality evaluation in PDE has the following features:

  • we allow unintended goods that are causally downstream of unintended evils to count for proportionality, but

  • the triviality of the intended goods when compared to the unintended evils undercuts proportionality.

In other words, while the intended goods need not be sufficient on their own to make for proportionality, and unintended downstream goods may need to be taken into account for proportionality, nonetheless the intended goods must make a significant contribution towards proportionality.

Monday, October 9, 2017

Preventing someone from murdering Hitler

You are a secret opponent of the Nazi regime, and you happen to see Schmidt sneaking up on Hitler with an axe and murderous intent. You know what’s happening: Schmidt believes that Hitler has been committing adultery with Mrs. Schmidt, and is going to murder Hitler. Should you warn Hitler’s guards?

  1. Intuition: No! If Hitler stays alive, millions will die.

  2. Objection: You would be intending Schmidt to kill Hitler, a killing that you know would be a murder, and you are morally speaking an accomplice. And it is wrong to intend an evil to prevent more evil.

There is a subtlety here. Perhaps you think: “It is permissible to kill an evil tyrant like Hitler, and so Schmidt is doing the right thing, but for the wrong reasons. So by not warning the guards, I am not intending Schmidt to commit a murder, but only a killing that is objectively morally right, albeit one I foresee that Schmidt will commit for the wrong reasons.” I think this reasoning is flawed: I don’t think one can say that Schmidt is doing anything morally permissible, even if the same physical actions would be morally permissible if they had another motive. But if you’re impressed by the reasoning, tweak the case a little. All this is happening before Hitler has done any of the evil tyrannical deeds that would justify killing him. However, you foresee with certainty that if Hitler is not stopped, he will do them. So Schmidt’s killing would be wrong, even if Schmidt were doing it to prevent millions of deaths.

What’s behind (2) is the thought that Double Effect forbids you to intend an evil, even if it’s for the purpose of preventing a greater evil.

But here is the fascinating thing. Double Effect forbids you from warning the guards. The action of warning the guards is an action that has two effects: (i) prevention of a murder, and (ii) the foreseen deaths of millions. Double Effect has a proportionality condition: it is only permissible to do an action with a good and a bad effect when the bad effect is proportionate to the good effect. But millions of deaths are not proportionate to the prevention of one murder. So Double Effect forbids you from warning the guards.

Now it seems that we have a conflict between Double Effect and Double Effect. On the one hand, Double Effect seems to say that you may not warn the guards, because doing so will cause millions of deaths. On the other hand, it seems to say that you may not refrain from warning the guards in order to save millions because in so doing you are intending Schmidt to kill Hitler.

I know of three ways out of this conflict.

Resolution 1: Double Effect applies only to commissions and not omissions. It is permissible to omit warning the guards in order that Schmidt may have a free hand to kill Hitler, even though it would not be permissible to help Schmidt by any positive act. One may intend the killing of Hitler in the context of one’s omission but not in the context of one’s commission.

Resolution 2: This is a case of Triple Effect or, equivalently, of a defeater-defeater. You have some reason not to warn the guards. Maybe it’s just the general moral reason that you have not to invoke the stern apparatus of Nazi law, or the very minor reason not to bother straining one’s voice. There is a defeater for that reason, namely that warning the guards will prevent a murder. And there is a defeater-defeater: preventing that murder will lead to the deaths of millions. Thus, the defeater to your initial relatively minor moral reason not to warn the guards—viz., that if you don’t, a murder will be committed—is defeated, and so you can just go with the initial moral reason. On this story, the initial Objection to the Intuition is wrong-headed, because it is not your intention to save millions—that is just a defeater to a defeater.

Resolution 3: Your intention is simply to refrain from acting in ways that have a disproportionately bad effect. We should simply not perform such actions. You aren’t refraining as a means to the prevention of the disproportionately bad effect, as the initial Objection claimed. Rather, you are refraining as a means to prevent yourself from contributing to a disproportionately bad effect, namely to prevent yourself from defending the life of the man who will kill millions.

Evaluation:

While Resolution 1 is in some ways attractive, it requires an explanation why intentions for evils are permissible in the context of omissions but not of commissions.

I used to really like something like Resolution 2. But now it seems forced to me, because it claims that your primary intention in the omission can be something very minor—perhaps as minor as not straining one’s voice in some versions of the story. That just doesn’t seem psychologically realistic, and it seems to trivialize the goods and evils involved if one is focused on something minor. I still think that Triple Effect reasoning has much to be said for it, but only in those cases where there is a significant good at stake in the initial intention.

I find myself now pulled to Resolution 3. The worry is that Resolution 3 pulls one towards the consequentialist justification of the initial intuition. But I think Resolution 3 is distinguishable from consequentialism, both logically and psychologically. Logically: the intention is not to contribute to an overwhelmingly bad outcome. Psychologically: one can refrain from warning the guards even if one wouldn’t raise a finger to help Schmidt. Resolution 3 suggests that there is an asymmetry between commission and omission, but it locates that asymmetry more plausibly than Resolution 1 did. Resolution 1 claimed that it was permissible to intend evils in the context of omissions. That is implausible for the same reason why it is impermissible to intend evils in the context of commissions: the will of someone who intends evil is a corrupt will. But Resolution 3 is an intuitively plausible non-consequentialist principle about avoiding being a contributor to evil.

In fact, if one so wishes, one can use Resolution 3 to fix the problem with Resolution 2. The initial intention becomes: Don’t be a contributor to evil. Defeater: If you don’t warn, a murder will happen. Defeater-defeater: But millions will die. Now the initial intention is very much non-trivial.

Saturday, February 26, 2011

Mints, cats, double effect and proportionality

It's noon. You and two other innocents, A and B, are imprisoned by a dictator in separate blast-proof cells. All the innocents are strangers, and you know of no morally relevant differences between them (whether absolutely or relative to you). A's and B's cells both contain bomb and timer apparatuses that A and B cannot do anything about. B's bomb timer is turned off. A's timer is set to blow her up at 1:00 pm. In your cell, there is a yummy mint on a weight-sensitive switch connected to the apparatus in B's cell. If the mint is removed, B's timer will be set to go off at 1:00 pm. The dictator will check up on the situation shortly before 1:00 pm, and will turn off A's timer if you've done something that caused B's timer to turn on. Anybody who survives past 1:00 pm will then be released.[note 1]

So you reason to yourself. "I like mints. If I eat the mint, I will cause B's death, but A will be saved. My causing of B's death will be non-intentional, and on balance the consequences to human life are neutral. But I get a mint out of it. So the Principle of Double Effect should permit me to eat the mint."

If this reasoning is good, the Principle of Double Effect is close to useless. Strict deontologists think it’s wrong to kill one innocent to save millions. Most think it’s wrong to kill one innocent to save two. But just about every deontologist will say that it’s wrong to kill one innocent to save one innocent and one cat. Now, consider this case. The dictator hands you a gun, and tells you that if you don’t kill innocent B, she’ll kill innocent A and a cat. You clearly shouldn’t. But if you thought it was acceptable to take the mint, then you could reason thus: “It would be interesting to see what a bullet hole in a shirt pocket looks like (and the shirt doesn’t belong to B—it is prison attire, belonging to the dictator). If I aim the gun at B’s shirt pocket, and press the trigger, the bullet will make a hole in the shirt pocket. And as a non-intended side-effect, it will subsequently cause B’s death. But that’s fine, because on balance the consequences to human life are neutral, as then A will be saved—plus a cat!” And since you can always think up some minor good that is served by pulling a trigger (finger exercise, practice aiming, etc.), you will get results any deontologist should reject.

So something is wrong with the reasoning—or Double Effect is wrong. I do not think, however, that Double Effect is wrong—I think it’s indispensable. So what I will say is this. Double Effect requires that the evil effect not be intended and that there be a proportionality between the side-effect and the intended effect. What the above cases show is that, as a number of authors have noted, proportionality is not a matter of utilitarian calculation. Not only should we have on-balance positive consequences, but the intended effect should be a good proportionate to the foreseen evil. And the foreseen evil is not “that one person fewer will be alive than otherwise”; the foreseen evil is that a particular person should die. The deaths of different people are incommensurable evils even when we know no morally significant differences between the people.

In some cases the virtuous agent may count the numbers of people. But not in these cases. It is callous and unloving to get a mint or produce a bullet hole at the cost of B's death. It trivializes the value of B's life. There is a dilemma here. Either one is acting in the way that causes B's death for the sake of saving A, or not. If one is not, then B literally died so that one might have a mint or be intellectually gratified by the sight of a bullet hole. And so one trivializes B's life. If one is acting to save A, then one is not trivializing B's life. But in that case one is intending B's death, and deontology forbids that.

Here is a variant analysis that comes to the same thing, perhaps. There are cases where one can only do something in one of two ways: by intending a basic evil or by having a morally vicious set of intentions. The cases I gave are like that: one can only take the mint or produce the bullet hole by intending B's death or by having a set of intentions that trivialize B's life. In either case, one is unloving to B. It's hard to say which is the worse.

(This is related to the looping trolley case. There, I think one is either intending the absorption of kinetic energy by the one person, which is problematic, or one is intending a slight increase in length of life or a slight increase in probability of survival on the part of the five, which trivializes the death of the one.)