
Wednesday, October 9, 2024

Proportionality and deterrence

There are many contexts where a necessary condition of the permissibility of a course of action is a kind of proportionality between the goods and bads resulting from the course of action. (If utilitarianism is true, then given a utilitarian understanding of the proportionality, it’s not only necessary but sufficient for permissibility.) Two examples:

  • The Principle of Double Effect says it is permissible to do things that are foreseen to have a basic evil as an effect, if that evil is not intended, and if proportionality between the evil effect and the good effects holds.

  • The conditions for entry into a just war typically include both a justice condition and a proportionality condition (sometimes split into two conditions, one about likely consequences of the war and the other about the probability of victory).

But here is an interesting and difficult kind of scenario. Before giving a general formulation, consider the example that made me think about this. Country A has a bellicose neighbor B. However, B’s regime, while bellicose, is not sufficiently evil that on a straightforward reading of proportionality it would be worthwhile for A to fight back if invaded. Sure, one would lose sovereignty by not fighting back, but B’s track record suggests that the individual citizens of A would retain the freedoms that matter most (maybe this is what it would have been like to be taken over by Alexander the Great or Napoleon—I don’t know enough history to say), while a war would obviously be very bloody. However, suppose that a policy of not fighting back would likely result in an instant invasion, while a policy of fighting back would have a high probability of resulting in peace for the foreseeable future. We can then imagine that the benefits of likely avoiding even a non-violent takeover by B outweigh the small risk that, despite A’s having a policy of armed resistance, B would still invade.

The general case is this: We have a policy that is likely to prevent an unhappy situation, but following through on the policy violates a straightforward reading of proportionality if the unhappy situation eventuates.

One solution is to take into account the value of following through on the policy with respect to one’s credibility in the future. But in some cases this will be a doubtful justification. Consider a policy of fighting back against an invader—at least initially—even if there is no chance of victory. There are surely many cases of bellicose countries that could successfully take over a neighbor, but judge that the costs of doing so are too high given the expected resistance. But if the neighbor has such a policy, then in case the invasion nonetheless eventuates, whatever is done, sovereignty will be lost, and the policy will be irrelevant in the future. (One might speculate about the benefits for other countries of following through on the policy, but that’s very speculative.)

One line of thought on these kinds of cases is that we need to forego such policies, despite their benefits. One can’t permissibly act on them, so one can’t have them, and that’s that. This is unsatisfying, but I think there is a serious chance that this is right.

One might think that the best of both worlds is to make it seem like one has the policy, but not in fact have it. A problem with this is that it might involve lying, and I think lying is wrong. But even aside from that, in some cases this may not be practicable. Imagine training an army to defend one’s country, and then having a secret plan, known only to a very small number of top commanders, that one will surrender at the first moment of an invasion. Can one really count on that surrender? The deterrent policy is more effective the fiercer and more patriotic the army, but those factors are precisely likely to make them fight despite the surrender at the top.

Another move is this. Perhaps proportionality itself takes into account not just the straightforward computation of costs and benefits, but also the value of remaining steadfast in reasonably adopted policies. I find this somewhat attractive, but this approach has to have limits, and I don’t know where to draw them. Suppose one has invented a weapon which will kill every human being in enemy territory. Use of this weapon, with a Double Effect style intention of killing only the enemy soldiers, is clearly unjustified no matter what policies one might have, but a policy to use this weapon might be a nearly perfect protection against invasion. (Obviously this connects with the question of nuclear deterrence.) I suppose what one needs to say is that the importance of steadfastness in policies affects how proportionality evaluations go, but should not be decisive.

I find myself pulled between the strict view that we should not have policies acting on which would violate a straightforward reading of proportionality, and the view that we should abandon the straightforward reading of proportionality and take into account—to a degree that is difficult to weigh—the value of following through on policies.

Monday, September 19, 2022

More on proportionality in Double Effect and prevention

In my previous post, I discuss cases where someone is doing an evil for the sake of preventing significantly worse evils—say, murdering a patient to save four others with the organs from the one—and note that a straightforward reading of the Principle of Double Effect’s proportionality condition seems to forbid one from stopping that evil. I offer the suggestion, due to a graduate student, that failure to stop the evil in such cases implies complicity in the evil.

I now think that complicity doesn’t solve the problem, because we can imagine cases where there is no relevant evildoer. Take a trolley problem where the trolley is coming to a fork and about to turn onto the left track and kill Alice. There is no one on the right track. So far this is straightforward and doesn’t involve Double Effect at all—you should obviously redirect the trolley. But now add that if Alice dies, four people will be saved with her organs, and if Alice lives, they will die.

Among the results of redirecting the trolley, now, are the deaths of the four who won’t be saved, and hence Double Effect does apply. To save one person at the expense of four is disproportionate, and so it seems that one violates Double Effect in saving the one. And in this case, a failure to save Alice would not involve any complicity in anyone else’s evildoing.

It is tempting to say that the deaths of the four are due to their medical condition and not the result of trolley redirection, and hence do not count for Double Effect proportionality purposes. But now imagine that the four people can be saved with synthetic organs, though only if the surgery happens very quickly. However, the only four surgeons in the region are all on an automated trolley, which is heading towards the hospital along the left track, is expected to kill Alice along the way, but will continue on until it stops at the hospital. If the trolley is redirected onto the right track, it will go far away and not reach the hospital in time.

In this case, it does seem correct to say that Double Effect forbids one from redirecting the trolley—you should not stop the surgeons’ trolley even if a person is expected to die from a trolley accident along the way. (Perhaps you are unconvinced if the number of patients needing to be saved is only four. If so, increase the number.) But for Double Effect to have this consequence, the deaths of the patients in the hospital have to count as effects of your trolley redirection.

And if the deaths count in this case, they should count in the original case where Alice’s organs are needed. After all, in both cases the patients die of their medical condition because the trolley redirection has prevented the only possible way of saving them.

Here’s another tempting response. In the original version of the story, if one refrains from redirecting the trolley in light of the people needing Alice’s organs, one is intending that Alice die as a means to saving the four, and hence one is violating Double Effect. But this response would not save Double Effect: it would make Double Effect be in conflict with itself. For if my earlier argument that Double Effect prohibits redirecting the trolley stands, and this response does nothing to counter it, then Double Effect both prohibits redirecting and prohibits refraining from redirecting!

I think what we need is some careful way of computing proportionality in Double Effect. Here is a thought. Start by saying in both versions of the case that the deaths of the four patients are not the effects of the trolley redirection. This was very intuitive, but seemed to cause a problem in the delayed-surgeons version. However, there is a fairly natural way to reconstrue things. Take it that leaving the trolley to go along the left track results in the good of saving the four patients. So far we’ve only shifted whether we count the deaths of the four as an evil on the redirection side of the ledger or the saving of the four as a good on the non-redirection side. This makes no difference to the comparison. But now add one more move: don’t count goods that result from evils in the ledger at all. This second move doesn’t affect the delayed-surgeons case. For the good of saving lives in that case is not a result of Alice’s death, and the proportionality calculation is unaffected. In particular, in that case we still get the correct result that you should not redirect the trolley, since the events relevant to proportionality are the evil of Alice’s death and the good of saving four lives, and so preventing Alice’s death is disproportionate. But in the organ case, the good of saving lives is a result of Alice’s death. So in that case, Double Effect’s proportionality calculation does not include the lives saved, and hence, quite correctly, we conclude that you should redirect to save Alice’s life.
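The bookkeeping proposal above can be put schematically. Here is a small sketch of my own (the numeric weights and function names are purely illustrative assumptions, not anything the argument supplies, and no real moral arithmetic is implied): goods that result from evils are struck from the ledger, and then the goods on each side of the choice are compared.

```python
# Illustrative sketch only: weights are stand-ins, not real moral values.
# A "good" is a (weight, results_from_evil) pair; goods that result from
# evils are struck from the ledger before the comparison.

def counted(goods):
    """Sum the goods that survive the screening rule."""
    return sum(w for (w, results_from_evil) in goods if not results_from_evil)

def should_redirect(goods_if_redirected, goods_if_not):
    """Redirect iff the screened goods of redirecting are at least
    as great as the screened goods of leaving the trolley alone."""
    return counted(goods_if_redirected) >= counted(goods_if_not)

# Organ case: the saving of the four results from Alice's death,
# so it is struck from the ledger; redirecting (saving Alice) wins.
organ_case = should_redirect([(1, False)], [(4, True)])       # True

# Delayed-surgeons case: the saving of the four does not result
# from Alice's death, so it counts; redirecting loses.
surgeons_case = should_redirect([(1, False)], [(4, False)])   # False
```

The sketch simply makes vivid that the two cases come apart only through the screening rule, not through the raw numbers, which are the same in both.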

Maybe. But I am not sure. Maybe my initial intuition is wrong, and one should not redirect the trolley in the organ case. What pulls me the other way is the hungry bear case here.

Friday, September 16, 2022

Proportionality in Double Effect and prevention cases

Suppose you are visiting a hospital and you see Bob, a nurse, sneaking into Alice’s hospital room. Unnoticed, you look at what is going on, and you see that Bob is about to add a lethal drug to Alice’s IV, a drug that would undetectably kill Alice while leaving her organs intact. You recall with horror that two days ago you had a conversation with Bob and he described to you how compelling he finds the argument that it is sometimes obligatory to kill one patient in order to provide organs to save multiple other patients, when this can be done secretly. At the time, you unsuccessfully tried to persuade Bob that the consequentialism behind the argument was implausible. You happen to know that if Alice were to die right now, then four people could be saved. You could now yell, push Bob away, and prevent Alice’s murder.

Here is a Double Effect argument that you shouldn’t stop the murder. Your action of pushing Bob away has two sets of effects: (a) Alice isn’t murdered and (b) four patients who would be saved by Alice’s organs die. Of these, (a) is an intended good and (b) is an unintended evil. So your action is an action to which Double Effect is relevant: it is an action with two effects, an intended good and an unintended evil. But Double Effect makes it a necessary condition for the permissibility of an action that the evils not be disproportionate to the goods. And here the evils are disproportionate to the goods. So you shouldn’t stop Bob, it seems.

Now, one might question the proportionality judgment. Maybe while four deaths are disproportionate to one death, four deaths are not disproportionate to one murder? This is mistaken, however. For suppose you see an assassin trying to murder someone with a long-range shot, and you see four innocent people near the assassin. The only way you have to stop the assassin is with a hand-grenade, which would kill the four innocents as well. It is clear that four deaths of innocents are disproportionate to the one murder: you should not stop the murder by blowing up the assassin.

Suppose you bite the bullet and agree that you shouldn’t stop Bob. Then I have an even more problematic version. Go back to your disquieting conversation with Bob about killing patients for their organs. Suppose that Bob disclosed to you in the course of that conversation that it wasn’t a merely hypothetical question, as you assumed, but that he was actually planning on acting on it. It seems completely clear that you should try to persuade him out of this murderous plan. But the exact same Double Effect argument seems to apply here: There are two sets of effects of your persuading Bob not to do it—one person isn’t murdered and a number of people die. The bad effects are disproportionate to the good ones, so Double Effect seems to prohibit you from persuading Bob out of his plan.

Maybe, though, this second case is different from the first, in that it is one of the basic tasks of a fellow human being to persuade others to act well—this is a central part of our human communal interaction. So it may be that once we take into account the good of persuading others to act well, and add that good to the intended goods, now the four deaths are no longer disproportionate. But now increase the numbers. Perhaps Alice has some weird mutation in her heart tissue such that culturing her heart tissue would save a thousand lives. Now the death of a thousand seems clearly disproportionate to preventing one murder and obtaining the goods of persuading others to act well. Imagine that I had a choice between preventing an explosion that would completely destroy a ship with a thousand people on board and persuading someone not to commit an “ordinary” murder. I should prevent the destruction of the ship. Yet even in the thousand patient case I have the intuition—admittedly, now weaker—that I should try to persuade Bob not to murder Alice, or at least that it is permissible to do so. Especially if Bob is my friend.

What’s going on? Is it the case that when we consider the good of persuading someone to act well, we should not count against that any goods that would result from their acting badly? Is it—a graduate student suggested this to me—that if I fail to persuade them to act well in order to obtain the goods that would result from their acting badly, then I become complicit in their bad action? I think there is something to this idea. It may even apply in my earlier case of not stopping Bob physically from the murder, but it seems particularly plausible in the case of refraining from persuading.

In any case, if I am right that it is right to persuade Bob out of his plan to murder Alice, we really do need to understand the proportionality condition in Double Effect very carefully. That condition seems to become significantly context-sensitive. Double Effect is not a simple structural principle by any means.

Objection: When it’s a matter of stopping Bob’s murder of Alice, you don’t cause the deaths of the patients who need Alice’s organs to live. The patients die of whatever conditions they die of, rather than from your action.
So those deaths don’t figure in the Double Effect proportionality calculus.

Response: Imagine that I could stop an ordinary murder, but to do that I would have to park my car in a place that would block an ambulance from getting to the scene of an unrelated accident, where a number of people would die of their injuries if the ambulance were not to get there in time. When considering my action of parking my car, I do need to consider the deaths of the people the ambulance would save, even though they die from their injuries rather than from my action. If the number of people the ambulance would save is large enough, I ought not block the ambulance’s path to prevent one murder.

Friday, November 13, 2020

Reducing Triple Effect to Double Effect

Kamm’s Principle of Triple Effect (PTE) says something like this:

  • Sometimes it is permissible to perform an act ϕ that has a good intended effect G1 and a foreseen evil effect E where E causally leads to a further good effect G2 that is not intended but is a part of one’s reasons for performing ϕ (e.g., as a defeater for the defeater provided by E).

Here is Kamm’s illustration by a case that does not have much moral significance: you throw a party in order to have a good time (G1); you foresee this will result in a mess (E); but you expect the partygoers will help you clean up (G2). You don’t throw the party in order that they help you clean up, and you don’t intend their help, but your expectation of their help is a part of your reasons for throwing the party (e.g., it defeats the mess defeater).

It looks now like PTE is essentially just the Principle of Double Effect (PDE) with a particular way of understanding the proportionality condition. Specifically, PTE is PDE with the understanding that foreseen goods that are causally downstream of foreseen evils can be legitimately used as part of the proportionality calculation.

One can, of course, have a hard-line PDE that forbids foreseen goods causally downstream of foreseen evils to be legitimately used as part of the proportionality calculation. But that hard-line PDE would be mistaken.

Suppose Alice has her leg trapped under a tree, and if you do not move the tree immediately, the leg will have to be amputated. Additionally, there is a hungry grizzly near Bob and Carl, who are unable to escape and you cannot help either of them. The bear is just hungry enough to eat one of Bob and Carl. If it does so, then because of eating that one, it won’t eat the other. The bear is heading for Bob. If you move the tree to help Alice, the bear will look in your direction, and will notice Carl while doing so, and will eat Carl instead of Bob. All three people are strangers to you.

It is reasonable to say that the fact that your rescuing Alice switches whom the bear eats does not remove your good moral reason to rescue Alice. However, if we have the hard-line PDE, then we have a problem. Your rescuing Alice leads to a good effect, Alice’s leg being saved, and an evil, Carl being eaten. As far as this goes, we don’t have proportionality: we should not save a stranger’s leg at the expense of another stranger’s life. So the hard-line PDE forbids the action. But the PDE with the softer way of understanding proportionality gives the correct answer: once we take into account the fact that the bear’s eating Carl saves Bob, proportionality is restored, and you can save Alice’s leg.

At the same time, I think it is important that the good G1 that you intend not be trivial in comparison to the evil E. If instead of its being a matter of rescuing Alice’s leg, it were a matter of picking up a penny, you shouldn’t do that (for more argument in that direction, see here).

So, if I am right, the proportionality evaluation in PDE has the following features:

  • we allow unintended goods that are causally downstream of unintended evils to count for proportionality, but

  • the triviality of the intended goods when compared to the unintended evils undercuts proportionality.

In other words, while the intended goods need not be sufficient on their own to make for proportionality, and unintended downstream goods may need to be taken into account for proportionality, nonetheless the intended goods must make a significant contribution towards proportionality.

Monday, September 23, 2019

Fulfilling requests

One of the most moving stories in Rosenbaum’s deeply moving Holocaust and the Halakhah tells of how one can be a great moral hero even when acting out of mistaken conscience. A man in a concentration camp comes to his rabbi with a problem. His son has been scheduled to be executed. But it is possible to bribe the kapo to get him off the death list. However, the kapo have a quota to fill, and if they let off his son, they will kill another child. Is it permissible to bribe the kapo knowing that this will result in the death of another child? The rabbi answers that, of course, it is permissible. The man goes away, but he is not convinced. He does not bribe the kapo. Instead, he concludes that God has called him to the great sacrifice of not shifting his son’s death onto another. The father finds a joy in the sacrifice amidst his mourning.

The rabbi was certainly right. The father’s conscience presumably was mistaken (unless God specifically spoke to him and required the sacrifice). Yet the father is a moral hero in acting from this mistaken conscience. (Here are two relevant features of this case. First, while he was mistaken, he was not mistaken in a way that shows moral callousness—on the contrary, he is obviously a man of moral sensitivity. Second, while he was mistaken in thinking the sacrifice was morally required, nonetheless the sacrifice was—I think—at least permissible.)

The analytic philosopher will see this as a variant of a trolley case (with some complications, such as that the deaths were mediated by the free agency of the kapo). It is permissible to redirect the trolley away from one’s child and towards a stranger’s child. This is another way in which the proportionality condition in the Principle of Double Effect is not a utilitarian calculation: the agent has a proportional reason to save their own child even when this is foreseen (but not intended) to cost the life of another’s child.

But at the same time it would not be permissible to redirect the trolley away from one stranger’s child towards another stranger’s child. Such redirection would be a grotesque toying with lives. It would be a needless and callous making of oneself into a cause of another’s death, even if unintentionally.

Here, however, is a case that puzzles me. Suppose Alice’s child is on the track the trolley is speeding towards, and a stranger’s child is on another track. Alice is physically incapable of redirecting the trolley but Bob is capable of it. Alice and both children are strangers to Bob. Would it be permissible for Alice to ask Bob to redirect the trolley?

Here is an argument to the contrary. It is impermissible for Bob to redirect the trolley from one stranger to another: that is just playing with lives. But it is impermissible to request someone to perform an impermissible action. Hence, it is impermissible to ask Bob to redirect the trolley.

That seems mistaken. The case of asking Bob to redirect the trolley need not be that different from begging the kapo to take one’s child off the death list, depending on the details of the latter story. So what is going on?

I think there are at least two ways to justify Bob’s acquiescence to the request and hence Alice’s making of the request:

  1. Once Alice asks Bob to redirect the trolley, Alice is no longer a stranger to Bob. There is a way in which Bob in receiving her request can become an agent of Alice’s, and hence those that Alice cares for become ones that he has a special reason to care for.

  2. On receipt of the request, Bob has two options coming with distinctive incommensurable reasons. The first is not to redirect, the reason being to promote equality, in this case equality between children who don’t have a parent in place to speak up for them and ones who do. The second is to fulfill the request of an anguished parent to save their child. Both reasons are grave, and it is permissible for him (other things being equal) to act on either reason. Requests really do add weight to reasons.

There is another complicating factor. I do have the intuition that if Bob is an employee in charge of the trolley, he should do nothing. The reason is this. Insofar as he is in charge of the trolley, Bob has a role duty of mitigating damage done by the trolley. It is generally good policy that such a role come along with a significant independence from outside influences, such as bribes or even requests. So, in that case, Bob should act as if he did not receive the request. But if he did not receive any request, he shouldn’t do anything, for it is better not to become the cause of the child’s death—as one would if one redirected.

Here is a variant case. There are three tracks. The trolley is on track A with five people. The other two tracks, B and C, have one person each, and Alice is asking Bob not to redirect to track B, as her child is there. Bob has to redirect to either track B or C, but everything other than Alice’s request is equal between these tracks. Here it seems to me that Bob should flip a coin (if there is time; if not, just act as randomly as he can) if he is an employee. And if he is not an employee, then he has a choice to accede to Alice’s request or flip a coin.

Three versions of proportionality in Double Effect

Bob sees a trolley speeding towards five strangers on a track and can redirect the trolley towards another track which has one stranger on it. Alice, who is herself unable to redirect the trolley, offers Bob a dollar to redirect it. Suppose Bob redirects the trolley solely for the sake of the dollar. Bob is clearly a callous individual. But has Bob violated the strictures of the Principle of Double Effect?

Well, Bob has done an action that’s intrinsically good or neutral (redirecting a trolley). The bad effect—the death of the one stranger—was not intended either as an end or as a means, and indeed does not in any way (we assume) contribute to the intended good effect, which is getting a dollar. What remains to check is the proportionality condition.

Here it depends on exactly how the proportionality condition is formulated. There are at least three formulations:

  1. The bad effects are not disproportionate to the intended good effects.

  2. The bad effects are not disproportionate to the good effects.

  3. The bad effects are not disproportionate to those good effects that are not themselves the outcomes of bad effects.

On formulation (1), Bob has violated Double Effect: the death of the one stranger is disproportionate to the sole intended good, namely Bob getting a dollar. On formulations (2) and (3), Bob has not violated Double Effect, since the good effect—the saving of the five—is proportionate (and is not the outcome of a bad effect).
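For concreteness, the way the three formulations come apart on Bob’s case can be sketched in a toy computation. This is my own illustration, not anything in the argument itself: the magnitudes are invented stand-ins for moral weight, and the field names are assumptions made purely for the sketch.

```python
# Toy illustration: magnitudes are invented stand-ins for "moral weight".
from dataclasses import dataclass

@dataclass
class Effect:
    name: str
    weight: float               # > 0 for goods, < 0 for bads
    intended: bool = False
    from_bad_effect: bool = False

def proportionate(effects, formulation):
    """Compare the bads against the goods counted by each formulation."""
    bads = -sum(e.weight for e in effects if e.weight < 0)
    if formulation == 1:        # only intended goods count
        goods = sum(e.weight for e in effects if e.weight > 0 and e.intended)
    elif formulation == 2:      # all goods count
        goods = sum(e.weight for e in effects if e.weight > 0)
    else:                       # (3): goods that are outcomes of bads don't count
        goods = sum(e.weight for e in effects
                    if e.weight > 0 and not e.from_bad_effect)
    return goods >= bads

bobs_redirection = [
    Effect("Bob gets a dollar", 0.001, intended=True),
    Effect("five strangers saved", 5.0),
    Effect("one stranger dies", -1.0),
]
# Formulation (1) fails: a dollar is not proportionate to a death.
# Formulations (2) and (3) pass: the saving of the five counts,
# and it is not the outcome of a bad effect.
```

The sketch only restates the verbal comparison; nothing hangs on the particular numbers beyond their ordering.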

My intuition is that the case supports (1). But I worry that this rides on our desire to get the obviously vicious Bob on some charge or other, and violating Double Effect is the obvious one. But there may be another charge. Bob had a moral duty to do the following: to redirect the trolley in order to save five lives. He failed to do that. His failure to do that is a morally wrong abstention. So even if (2) or (3) is the right story, we can still get Bob on some moral charge or other.

So I am not sure how far the case helps adjudicate between (1)–(3).

Note one nice thing about (1), though. If we go for (1), we automatically filter out any good effects that are the outcomes of bad effects, since if we intended such good effects, we would be intending a bad means and violating the means condition of Double Effect. So (1) implicitly contains the same restriction as is found in (3).

Thursday, September 19, 2019

Cupcakes and trolleys

A trolley is heading towards a person lying on the tracks. Also lying on the tracks is a delicious cupcake. You could redirect the trolley to a second track where there is a different person lying on the tracks, but no cupcake.

Utilitarianism suggests that, as long as you are able to enjoy the cupcake under the circumstances and not feel bad about the whole affair, you have a moral duty to redirect the trolley in order to save the cupcake for yourself. This is morally perverse.

Besides showing that utilitarianism is false, this example shows that the proportionality condition in the Principle of Double Effect cannot simply consist in a simple calculation comparing the goods and bads resulting from the action. For there is something morally disproportionate in choosing who lives and dies for the sake of a cupcake.

Thursday, November 9, 2017

Proportionality in Double Effect is not a simple comparison

It is tempting to make the final “proportionality” condition of the Principle of Double Effect say that the overall consequences of the action are good or neutral, perhaps after screening off any consequences that come through evil (cf. the discussion here).

But “good or neutral” is not a necessary condition for permissibility. Alice is on a bridge above Bob, and sees an active grenade roll towards Bob. If she does nothing, Alice will be shielded by the bridge from the explosion. But instead she leaps off the bridge and covers the grenade with her body, saving Bob’s life at the cost of her own.

If “good or neutral” consequences are required for permissibility, then to evaluate the permissibility of Alice’s action it seems we would need to evaluate whether Alice’s death is a worse thing than Bob’s. Suppose Alice owns three goldfish while Bob owns two goldfish, and in either case the goldfish will be less well cared for by the heirs (and to the same degree). Then Alice’s death is mildly worse than Bob’s death, other things being equal. But it would be absurd to say that Alice acted wrongly in jumping on the grenade because of the impact of this act on her goldfish.

Thus, the proportionality condition in PDE needs to be able to tolerate some differences in the size of the evils, even when these differences disfavor the course of action that is being taken. In other words, although the consequences of jumping on the grenade are slightly worse than those of not doing so, because of the impact on the goldfish, the bad consequences of jumping are not disproportionate to the bad consequences of not jumping.

On the other hand, if it was Bob’s goldfish bowl, rather than Bob, that was near the grenade, the consequences of jumping would be disproportionate to the consequences of not jumping, since Alice’s death is disproportionately bad as compared to the death of Bob’s goldfish.

Objection: The initial case where Alice jumps to save Bob’s life fails to take into account the fact that Alice’s act of self-sacrifice adds great value to the consequences of jumping, because it is a heroic act of self-sacrifice. This added increment of value outweighs the loss to Alice’s extra goldfish, and so I was incorrect to judge that the consequences are mildly negative.

Response: First, it seems to be circular to count the value of the act itself when evaluating the act’s permissibility, since the act itself only has positive value if it is permissible. And anyway one can tweak the case to avoid this difficulty. Suppose that it is known that if Alice does not jump on the grenade, Carl, who is standing beside her, will. And Carl only owns one goldfish. Then whether Alice jumps or not, the world includes a heroic act. And it is better that Carl jump than that Alice do so, other things being equal, as Carl only has one goldfish depending on him. But it is absurd that Alice is forbidden from jumping in order that a man with fewer goldfish might do it in her place.

Question: How much of a difference in value can proportionality tolerate?

Response: I don’t know. And I suspect that this is one of those parameters in ethics that needs explaining.

Thursday, October 26, 2017

A two-stage view of proportionality in the Principle of Double Effect

A question about Double Effect that hasn’t been sufficiently handled is in what way, if any, the good effects of bad effects are screened off when judging proportionality.

It seems that some sort of screening off is needed. Consider this case. An evildoer says that he’ll free five innocents if you kill one innocent; otherwise, he’ll kill the five. So you shoot at the innocent’s shirt covering his chest, intending to learn how the fabric is rent by the bullet (knowledge is a good thing!), while foreseeing without intending that the innocent should die, and also foreseeing without intending that the evildoer will free the five.

This is clearly a travesty of double effect reasoning. But the only condition that isn’t obviously satisfied is the proportionality condition. So let’s think about proportionality. Here are two ways to think about it:

  1. All good and bad effects count for proportionality. Thus, both the death of the one and the saving of the five count, as does the trivial good of knowing how the shirt rips. Thus proportionality is satisfied: the goods are proportionate.

  2. The good effects that are causally downstream of the bad effects of one’s action don’t count. On this view, it is the intended effect that must be proportionate to the unintended bad effects. Thus, the death of the one counts, and the trivial good of knowing about how the fabric rips counts, but the saving of the five does not count, as it is downstream of the death and is not intended (if it were intended, the act would be impermissible, of course). But of course the good of knowing how the fabric rips is not proportionate to the death of the one innocent.

Option 2 fits better with the intuition that the initial case was a travesty of double effect reasoning.

But option 2 doesn’t seem to be the right one in all other cases. Suppose I am guarding five innocents sentenced to death by an evil dictator. If I free them, I will be killed. I also know that unless the innocents leave the country, they will be recaptured soon. The innocents are planning to bribe the border officials, which is quite likely to work. But it will be wrong for the border officials to let them escape, because the border officials will have the false belief that these people are justly sentenced, and will let them through only out of venality.

It seems permissible to free the innocents. Here, the unintended but foreseen bad effect is my own death. The good effect is the innocents’ being allowed out of prison. But it seems that if we don’t get to consider effects downstream of bad stuff, we don’t get to consider the fact that the innocents will escape the country, as that’s downstream of the border officials’ venal acceptance of bribes.

Here’s one theory I developed today in conversation with a graduate student. Proportionality is very complex. Perhaps there are two stages.

Stage I: Are the intended good effect and the foreseen bad effects in the same ballpark? This is a very loose proportionality consideration. One life and ten lives are in the same ballpark, but knowing how the fabric rips is far out of that ballpark. If the intended good effect is so much less than the foreseen bad effects that they are not in the same ballpark, proportionality is not met. Here, the good effects that are downstream of the bad effects don’t count.

If the Stage I proportionality condition is violated, the act is wrong. If it’s met, I proceed to Stage II.

Stage II: Now I get to do a proportionality calculation taking into account all the foreseen goods and bads, regardless of how they are connected to intentions.

The proportionality condition now requires a positive evaluation by means of both stages.
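For concreteness, the two-stage test can be sketched in code. This is only a toy model: the numeric values, the `downstream_of_bad` flag, and the 100-to-1 ballpark ratio are my own illustrative stand-ins, and nothing in the theory itself fixes them.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    value: float              # positive for goods, negative for bads
    intended: bool
    downstream_of_bad: bool   # causally downstream of a bad effect?

def proportionate(effects, ballpark_ratio=100.0):
    """Two-stage proportionality check (toy model)."""
    intended_good = sum(e.value for e in effects
                        if e.intended and e.value > 0 and not e.downstream_of_bad)
    foreseen_bad = -sum(e.value for e in effects if e.value < 0)

    # Stage I: the intended good must be in the same ballpark as the
    # foreseen bads; goods downstream of the bads do not count here.
    if foreseen_bad > 0 and intended_good * ballpark_ratio < foreseen_bad:
        return False

    # Stage II: an all-things-considered calculation over every foreseen
    # effect, however it is related to one's intentions.
    return sum(e.value for e in effects) >= 0

# The shirt-shooting case: a trivial intended good, one death, and the
# saving of the five downstream of that death.
shirt_case = [Effect(0.001, True, False),   # knowing how the fabric rips
              Effect(-1.0, False, False),   # death of the one
              Effect(5.0, False, True)]     # five freed, downstream of death
```

On these stand-in numbers the shirt case fails at Stage I, while the prison-guard case (a big intended good of freeing the five, with my one death as the foreseen bad) passes both stages.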

On this two stage theory, shooting the innocent’s shirt in the initial case is wrong, as proportionality is violated at Stage I. On the other hand, the release of the prisoners may be permissible. For the freedom of the innocents is in the same ballpark as my life—it’s a big ballpark—even if they are going to be recaptured. It’s not a trivial good, like the taste of a mint.

I am not happy with this. It’s too complicated!

Monday, October 9, 2017

Preventing someone from murdering Hitler

You are a secret opponent of the Nazi regime, and you happen to see Schmidt sneaking up on Hitler with an axe and murderous intent. You know what’s happening: Schmidt believes that Hitler has been committing adultery with Mrs. Schmidt, and is going to murder Hitler. Should you warn Hitler’s guards?

  1. Intuition: No! If Hitler stays alive, millions will die.

  2. Objection: You would be intending Schmidt to kill Hitler, a killing that you know would be a murder, and you are morally speaking an accomplice. And it is wrong to intend an evil to prevent more evil.

There is a subtlety here. Perhaps you think: “It is permissible to kill an evil tyrant like Hitler, and so Schmidt is doing the right thing, but for the wrong reasons. So by not warning the guards, I am not intending Schmidt to commit a murder, but only a killing that is objectively morally right, albeit I foresee that Schmidt will commit it for the wrong reasons.” I think this reasoning is flawed—I don’t think one can say that Schmidt is doing anything morally permissible, even if the same physical actions would be morally permissible if done from another motive. But if you’re impressed by the reasoning, tweak the case a little. All this is happening before Hitler has done any of the evil tyrannical deeds that would justify killing him. However, you foresee with certainty that if Hitler is not stopped, he will do them. So Schmidt’s killing would be wrong, even if Schmidt were doing it to prevent millions of deaths.

What’s behind (2) is the thought that Double Effect forbids you to intend an evil, even if it’s for the purpose of preventing a greater evil.

But here is the fascinating thing. Double Effect forbids you from warning the guards. The action of warning the guards is an action that has two effects: (i) prevention of a murder, and (ii) the foreseen deaths of millions. Double Effect has a proportionality condition: it is only permissible to do an action with a good and a bad effect when the bad effect is proportionate to the good effect. But millions of deaths are not proportionate to the prevention of one murder. So Double Effect forbids you from warning the guards.

Now it seems that we have a conflict between Double Effect and Double Effect. On the one hand, Double Effect seems to say that you may not warn the guards, because doing so will cause millions of deaths. On the other hand, it seems to say that you may not refrain from warning the guards in order to save millions because in so doing you are intending Schmidt to kill Hitler.

I know of three ways out of this conflict.

Resolution 1: Double Effect applies only to commissions and not omissions. It is permissible to omit warning the guards in order that Schmidt may have a free hand to kill Hitler, even though it would not be permissible to help Schmidt by any positive act. One may intend the killing of Hitler in the context of one’s omission but not in the context of one’s commission.

Resolution 2: This is a case of Triple Effect or, equivalently, of a defeater-defeater. You have some reason not to warn the guards. Maybe it’s just the general moral reason that you have not to invoke the stern apparatus of Nazi law, or the very minor reason not to bother straining one’s voice. There is a defeater for that reason, namely that warning the guards will prevent a murder. And there is a defeater-defeater: preventing that murder will lead to the deaths of millions. Thus, the defeater to your initial relatively minor moral reason not to warn the guards—viz., that if you don’t, a murder will be committed—is itself defeated, and so you can just go with the initial moral reason. On this story, the initial Objection to the Intuition is wrong-headed, because it is not your intention to save millions—that is just a defeater to a defeater.

Resolution 3: Your intention is simply to refrain from acting in ways that have a disproportionately bad effect. We should simply not perform such actions. You aren’t refraining as a means to the prevention of the disproportionately bad effect, as the initial Objection claimed. Rather, you are refraining as a means to prevent yourself from contributing to a disproportionately bad effect, namely to prevent yourself from defending the life of the man who will kill millions.

Evaluation:

While Resolution 1 is in some ways attractive, it requires an explanation why intentions for evils are permissible in the context of omissions but not of commissions.

I used to really like something like Resolution 2. But now it seems forced to me, because it claims that your primary intention in the omission can be something so very minor—perhaps as minor as not straining one’s voice in some versions of the story. That just doesn’t seem psychologically realistic, and it seems to trivialize the goods and evils involved if one is focused on something minor. I still think that Triple Effect reasoning has much to be said for it, but only in those cases where there is a significant good at stake in the initial intention.

I find myself now pulled to Resolution 3. The worry is that Resolution 3 pulls one towards the consequentialist justification of the initial intuition. But I think Resolution 3 is distinguishable from consequentialism, both logically and psychologically. Logically: the intention is not to contribute to an overwhelmingly bad outcome. Psychologically: one can refrain from warning the guards even if one wouldn’t raise a finger to help Schmidt. Resolution 3 suggests that there is an asymmetry between commission and omission, but it locates that asymmetry more plausibly than Resolution 1 did. Resolution 1 claimed that it was permissible to intend evils in the context of omissions. That is implausible for the same reason why it is impermissible to intend evils in the context of commissions: the will of someone who intends evil is a corrupt will. But Resolution 3 is an intuitively plausible non-consequentialist principle about avoiding being a contributor to evil.

In fact, if one so wishes, one can use Resolution 3 to fix the problem with Resolution 2. The initial intention becomes: Don’t be a contributor to evil. Defeater: If you don’t warn, a murder will happen. Defeater-defeater: But millions will die. Now the initial intention is very much non-trivial.

Saturday, February 26, 2011

Mints, cats, double effect and proportionality

It's noon. You and two other innocents, A and B, are imprisoned by a dictator in separate blast-proof cells. All the innocents are strangers, and you know of no morally relevant differences between them (whether absolutely or relative to you). A's and B's cells both contain bomb and timer apparatuses that A and B cannot do anything about. B's bomb timer is turned off. A's timer is set to blow her up at 1:00 pm. In your cell, there is a yummy mint on a weight-sensitive switch connected to the apparatus in B's cell. If the mint is removed, B's timer will be set to go off at 1:00 pm. The dictator will check up on the situation shortly before 1:00 pm, and will turn off A's timer if you've done something that caused B's timer to turn on. Anybody who survives past 1:00 pm will then be released.[note 1]

So you reason to yourself. "I like mints. If I eat the mint, I will cause B's death, but A will be saved. My causing of B's death will be non-intentional, and on balance the consequences to human life are neutral. But I get a mint out of it. So the Principle of Double Effect should permit me to eat the mint."

If this reasoning is good, the Principle of Double Effect is close to useless. Strict deontologists think it's wrong to kill one innocent to save millions. Most think it's wrong to kill one innocent to save two. But just about every deontologist will say that it's wrong to kill one innocent to save one innocent and one cat. Now, consider this case. The dictator hands you a gun, and tells you that if you don't kill innocent B, she'll kill innocent A and a cat. You clearly shouldn't. But if you thought it was acceptable to take the mint, then you could reason thus: "It would be interesting to see what a bullet hole in a shirt pocket looks like (and the shirt doesn't belong to B—it is prison attire, belonging to the dictator). If I aim the gun at B's shirt pocket, and press the trigger, the bullet will make a hole in the shirt pocket. And as a non-intended side-effect, it will subsequently cause B's death. But that's fine, because on balance the consequences to human life are neutral, as then A will be saved—plus a cat!" And since you can always think up some minor good that is served by pulling a trigger (finger exercise, practice aiming, etc.), you will get results any deontologist should reject.

So something is wrong with the reasoning—or Double Effect is wrong. I do not think, however, that Double Effect is wrong—I think it's indispensable. So what I will say is this. Double Effect requires that the evil effect not be intended and that there be a proportionality between the side-effect and the intended effect. What the above cases show is that, as a number of authors have noted, proportionality is not a matter of utilitarian calculation. Not only should we have on-balance positive consequences, but the intended effect should be a good proportionate to the foreseen evil. And the foreseen evil is not "that one person fewer will be alive than otherwise", but the foreseen evil is that a particular person should die. The deaths of different people are incommensurable evils even when we know no morally significant differences between the people.

In some cases the virtuous agent may count the numbers of people. But not in these cases. It is callous and unloving to get a mint or produce a bullet hole at the cost of B's death. It trivializes the value of B's life. There is a dilemma here. Either one is acting in the way that causes B's death for the sake of saving A, or not. If one is not, then B literally died so that one might have a mint or be intellectually gratified by the sight of a bullet hole. And so one trivializes B's life. If one is acting to save A, then one is not trivializing B's life. But in that case one is intending B's death, and deontology forbids that.

Here is a variant analysis that comes to the same thing, perhaps. There are cases where one can only do something in one of two ways: by intending a basic evil or by having a morally vicious set of intentions. The cases I gave are like that: one can only take the mint or produce the bullet hole by intending B's death or by having a set of intentions that trivialize B's life. In either case, one is unloving to B. It's hard to say which is the worse.

(This is related to the looping trolley case. There, I think one is either intending the absorption of kinetic energy by the one person, which is problematic, or one is intending a slight increase in length of life or a slight increase in probability of survival on the part of the five, which trivializes the death of the one.)

Thursday, April 8, 2010

Proportionality in Double Effect

A standard formulation of Double Effect allows doing an action that has a basic evil E as a consequence provided:

  1. The action in itself is neutral or good.
  2. E is not intended, whether as means or as end.
  3. There is a good G intended.
  4. The bad effects are not disproportionate to the good effects.

Now, it is tempting to take the proportionality condition (4) to be a simple consequentialist condition that says that the overall consequences of the action are positive. It is well-known among people who work on Double Effect that proportionality is not a consequentialist condition. However, I am not sure it is well-known just how important it is that it not be a consequentialist condition.

In fact, if we take the proportionality condition simply to say that the overall consequences of the action are positive, then Double Effect will allow too-close variants of paradigm cases of what it is taken not to allow. For instance, a paradigm example of what Double Effect is taken not to permit is terror bombing in war. Terror bombing is a bombing intended to cause civilian casualties so as to terrify the enemy into surrender. That violates condition (2), since the evils are intended as a means to enemy surrender.

But now imagine that the person operating the bombs is an ethicist who believes in Double Effect with the consequentialist condition in place of (4). She realizes that knowledge is a good. In particular, it is good to know what it looks like from close up when civilian buildings have bombs dropped on them. So, she plans to drop bombs on civilian buildings to find out what that looks like. The action of dropping bombs in itself is neutral. (For instance, one might drop bombs as a means to mining.) She is pursuing a good G, of learning what it looks like when civilian buildings are bombed. She does not intend civilian deaths, either as an end or as a means: that there be civilians in the buildings does not contribute to her end, which is to see what it looks like when the buildings are bombed. So, (1)-(3) are satisfied. And if the bombing can be reasonably expected to end the war, thereby preventing further bloodshed, the overall consequences of the bombing can be assumed to be positive. But while this bomber is not intending civilian deaths, her variation on terror bombing is surely impermissible. Moreover, we see the pattern now: all that is needed for Double Effect with a consequentialist proportionality condition to justify a consequentialistically acceptable action is that the agent find some trivial good served by the action, and then the agent can act for that end. In other words, Double Effect ends up working like consequentialism with a bit of clever mental juggling.

So, the proportionality condition cannot be taken to be overall positive consequences. Maybe, though, there is a modification of the overall positive consequences criterion that works. Let C be the set of causal consequences of the action. At least one member of C is a basic evil. Let C* be the subset of C of those consequences c with the property that c does not have any basic evil in C as a necessary cause. Then, the modified consequentialist proportionality condition is that the overall value of C* is positive. This takes care of the above case, because the relevant increase in the probability of ending the war has as a necessary cause the deaths of the civilians.
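The filtered condition can likewise be sketched in code. The representation here (consequences as named nodes, with `causes[c]` giving the necessary causes of c) and the example numbers are my own illustrative assumptions, not part of the original formulation.

```python
def filtered_value(consequences, causes, values, basic_evils):
    """Sum the values of C*, i.e. of those consequences that do not
    have any basic evil among their (possibly indirect) necessary causes."""

    def downstream_of_evil(c, seen=None):
        # True if some basic evil is a necessary cause of c, directly
        # or through a chain of necessary causes.
        seen = set() if seen is None else seen
        for cause in causes.get(c, set()):
            if cause in seen:
                continue
            seen.add(cause)
            if cause in basic_evils or downstream_of_evil(cause, seen):
                return True
        return False

    c_star = [c for c in consequences if not downstream_of_evil(c)]
    return sum(values[c] for c in c_star)

# The bomber case: the increased chance of ending the war has the
# civilian deaths as a necessary cause, so it is filtered out of C*,
# leaving only the deaths and the trivial good of knowledge.
consequences = ["deaths", "knowledge", "war_ends"]
causes = {"war_ends": {"deaths"}}
values = {"deaths": -100.0, "knowledge": 0.1, "war_ends": 150.0}
```

With these stand-in numbers the unfiltered sum of consequences is positive, but the filtered sum over C* is negative, so the modified condition correctly blocks the bombing.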

Incommensurability, however, precludes even this kind of consequentialist criterion. Also, I wonder if the above filtered consequentialist criterion isn't too restrictive. For instance, it wouldn't allow the defeater-defeater move I make in the second comment here, and it may be that theodicy requires such a move at some point.

All this suggests to me that proportionality is a very complex notion. It may be one of those things that can't be codified (at least sufficiently briefly for us humans to do it in this life), but needs to be weighed by the Aristotelian phronimos.