Thursday, April 19, 2012

Choices and intentions

Start with this situation:

  • Six innocents are drowning: A, B, C, D, E, F.
  • One innocent is in no antecedent danger: G.
  • Sam, who is truthful but evil, tells me that if I do anything that kills G, he will rescue D, E and F, and only then.
I cannot reach the innocents except by activating remote drones with rescue equipment. There are two buttons in front of me that activate the drones.
  • If I press the green button, A, B and C are rescued.
  • If I press the red button, A, B and C are rescued and G is killed along the way (maybe G is standing so close to the relevant drone that he will be killed by the drone when it launches; his death is not a means to the rescue of A, B and C, however).
So, if I press the green button, then four (A, B, C, G) live and three die (D, E, F). If I press the red button, then six (A, B, C, D, E and F) live and one dies (G).

Suppose first that my choice is between the green button and nothing. (Maybe the red button is covered beneath an unbreakable dome.) Then I should press the green button.

Suppose instead that my choice is between the red button and nothing. Then I should press the red button. My intention would be to rescue A, B and C. The death of G is an unintended side-effect of rescuing A, B and C. The rescue of D, E and F by Sam is welcome but not intended (since if it were intended, then the death of G would have to be intended as a means thereto, and it is wrong to intend G's death).

But now suppose that my choice is between the green button, the red button and nothing. The red button has the best consequences, because two more innocents live. But if I choose the red button over the green button because of this fact, then I am intending the rescue of D, E and F, and therefore I am intending the means to that rescue, namely G's death. To make the point clearer, suppose that the way things work, when I press the green button, a signal gets sent to the drone to go rescue A, B and C, and when I press the red button, that happens and an additional signal is sent to the drone to activate a powerful booster that kills G. To choose the red over the green button seems to involve a choice to activate the booster, since otherwise there is no reason for that choice. Imagine, after all, that one could directly control the two signals without pressing the buttons. It would be wrong to send both the launch signal and the booster signal if one was capable of only sending the launch signal.

So it seems that although the red button is one that it is permissible to press in a binary choice between pressing it and doing nothing, it is not permissible to press the red button in preference to pressing the green button, even though pressing the red button has better consequences than pressing the green button.

If this line of reasoning is correct, then to figure out what someone intends, one needs to look not just at what they chose, but also at what alternative they chose it against. This fits neatly with my view of our choice and responsibility as essentially contrastive.

But I am not so confident of this line of reasoning.

20 comments:

  1. Very interesting post. Thanks!

  2. I expect that many will disagree with your intuition that you should press the red button when the choice is between that and doing nothing--at least when this scenario is spelled out in certain ways, or even when the option is presented by itself more simply (even without the graphic details I will use).

    Suppose that a disabled man has been, through no fault of his own, propped up against the drone's propeller. To remotely activate the drone is to start its propeller, the obvious consequence of which will be to sever the disabled man's head. Is it (even) permissible to activate the drone, given that this is the only way to rescue six from drowning?

    The fact that the evil (killing the one) is "upstreamish" from the good (rescuing the six) often seems to affect people's moral intuitions.

  3. Causal order matters. If the beheading is a means to the rescue, then it's wrong to press the red button.

    But I don't think temporal order matters. Consider two versions: one like yours and another where the drone's takeoff causes an instability in the disabled man's house, so that ten hours later the house will collapse around him and kill him.

    Maybe, though, your "upstreamish" doesn't refer to temporal order, but depends on the idea that it matters whether the bad effect is a side-effect of the rescue itself or of something upstream from the rescue. That doesn't seem right. Probably the least controversial case of double effect is the polio vaccine, which is very much like your disabled man case. The decision to distribute the vaccine causes a number of people to die quickly through side-effects, while many others are saved.

  4. Alex, I like the example. Your answer seems right to me.

    Here is a similar point about causal order, relevant to Craig’s question, that I have thought about.

    Suppose a terrorist has a trigger that will set off a nuclear device, wiping out the whole city, and the only way to stop him is to kill him. He has been cornered by police snipers but is protecting himself with a hostage held directly in front of him.

    Scenario A: You are a sniper, armed with a high-powered rifle, directly behind the terrorist. If you shoot him lethally (e.g. through the head) the high-powered bullet will pass through his skull into the hostage’s skull, killing the hostage also.

    Scenario B: You are a sniper with a high-powered rifle, but directly in front of the terrorist. You can only shoot the terrorist by shooting through the hostage (this will work).

    Question: In either case, should you kill the terrorist? (Assume that only you can do so.) My view is that (a) the cases should have the same answer, and (b) that in Scenario A you should shoot the terrorist, so (c) you should shoot him in Scenario B also. The fact that the temporal order of killing the terrorist/hostage is different is irrelevant. What is relevant is that in neither case are you killing the hostage as a method of killing the terrorist. (In both scenarios the hostage is just in the way.)

    In Alex’s example, killing G is the only method of saving DEF, and so if you are counting saving DEF as a reason to push the red button, then you are counting G’s death as a method of accomplishing that end.

  5. Heath:

    Neil Delaney disagrees with you about Scenario B (Neil F. Delaney, “Two Cheers for ‘Closeness’: Terror, Targeting and Double Effect”, Philosophical Studies 137 (2008) 335-367), but I am with you.

    Here's a tougher case. (I am quoting from my paper "The Accomplishment of Plans", forthcoming in Phil Studies.)

    "Let me end this section with a case due to Neil Delaney, where a bomber drops the bombs on top of civilians who are located in a school that is placed on top of a weapons cache, because these civilians are a good indicator of where the militarily significant legitimate target is. On the present account, the deaths of these civilians are not an accomplishment of the bomber. However, the bomber does accomplish their being in grave danger, since by his targeting, he accomplishes the nearness between the place where the bombs fall and where the people sit. Thus, the bomber seems to act wrongly, which fits with Delaney’s intuitions, unless this is one of those rare cases where grave endangerment of non-consenting parties is nonetheless permissible."

    I am no longer sure I'm right about this.

  6. This is quite speculative, but my suspicion (which I'd very much like to correct, if it's inaccurate) is that the rules we've internalized, and which generate our moral intuitions about cases, tend to have these features: (a) for the sorts of scenarios we are likely to face, they tend to steer us well clear of interventions that might kill a bystander without successfully rescuing anyone, and, relatedly, (b) they tend to be somewhat insensitive to stipulations about certainty in the hypothetical cases when these are particularly unrealistic (e.g., the certainty that the house will collapse ten hours later due to vibrations from the drone's takeoff).

    If these suspicions are right, they might help us to make sense of any intuitive differences between the two ways in which the drone's takeoff kills a bystander. They also might help to spell out what "upstreamishness" is all about, and even why, in some cases, we react so strongly against causing evil as a means to something good.

  7. There is a different way to decide what should be done. You need to act towards the people you are confused about by doing what you would want done if it were you in the situation. The person we are confused about is person G. Is it ok to kill him if several others will be saved? In any given situation, you are morally fine if you decide it as if it were you. If I am person G and I happen to be 75 while persons D, E, and F are children, I would want to give my life to save them, so I could choose that if I had the button. If I am person G and I have small children to care for and persons D, E, and F have no such responsibilities, I would not want to give my life to save them. So, if I have the button in that case, I will not kill person G.

    Ditto for the disabled guy near the prop. If you were him, what would you decide? (I have a feeling he would want to give his life). You need to act in accord with what you would want if you were him.

    Ditto for the hostage. Presumably he/she is a member of that city that will be destroyed. If that were you, would you give your life to save the city (your loved ones included)? I would, and so I would have no problem shooting the hostage if I were the sniper.

    Golden rule - it works every time.

  8. The application of the Golden Rule (GR) is tricky. When I was a kid, I would rationalize: "I wouldn't mind if someone did X to me, so I can do X to them", where X was some aspect of social etiquette, since I did not care about etiquette as a kid. My mistake was to take the GR as a way of generalizing my own preferences by imposing them on others.

    Here's a trivial example of this misuse: "I will prohibit others from eating desserts with bananas. After all, I wouldn't mind if others prohibited me from eating desserts with bananas in them, because I find such desserts disgusting."

    So, we had better not interpret the GR in a way that makes it generalize our personal preferences. Perhaps what we should do is interpret "what you would have them do unto you" as meaning "what would fulfill your deepest desires". But now the application is far from clear, because you have to discern which of your desires are deepest, and which are merely shallower surface desires.

    My own view of the GR is that it's not meant to be a criterion for deciding hard cases. Rather, it's meant to be a re-orienting reminder that we must pursue the good of others in the same way that we need to pursue our own good.

  9. I disagree. I think that is exactly when the GR is most useful: for deciding hard cases. It doesn't mean you need to torment others in your daily life. ;-)

    At times, you must make a hard decision that is 'in loco' of a person who cannot physically make the decision. Person G, the hostage, Terri Schiavo, your mother with dementia ... etc. God puts it in your lap when it really is a decision for that other person to make.

    To make a moral decision you simply have to ask yourself what you would want done if it were you. Of course, you have to consider facts and opinions (as you certainly would also need to do to decide something for yourself), but ultimately it has to be what you would choose for yourself in that position. [And it's ok to substitute "what a normal moral person would choose for himself" if you truly think you are too biased about the case.]

  10. I think I disagree with Delaney. Here’s why.

    Suppose I am Superman battling Lex Luthor, who as per usual has a doomsday device. I can destroy it with my heat vision; however, I don’t know where it is. If I find it and destroy it, there is likely to be some collateral damage to innocents, which I am prepared to put up with for DDE reasons.

    Lex also is fiendish enough to have captured Lois Lane. And he baits me by sending me a message that says, “Lois Lane is with the doomsday device.” Of course he lets me find LL, and she is strapped to the device, and the only way to destroy it will involve killing LL. This is, I take it, a paradigm case of the kind of thing Delaney is talking about: Lois’ location is excellent evidence of where the device is.

    It seems that if I was prepared to put up with the death of some innocent civilians to begin with, I have no further reason not to put up with the death of LL. (Caveat: she is very special to me. Maybe that is a reason.) What I mean is that I have no reasons that derive from the fact that her location is evidence of where the device is. Superman has a hard choice, but I don’t think it’s immoral to destroy the device along with Lois.

  11. Doesn't this justify the moral reasoning of terrorists? Suppose I'm al-Qaeda, and I don't want to kill innocent civilians, and I don't want to intend to kill innocent civilians.

    But suppose I want to cripple the economy of my enemy in warfare, the United States, by destroying the World Trade Center.

    Method A: I fly planes into the WTC, accomplishing the goal of harming the economy of the U.S., but incidentally killing thousands of innocent civilians.

    Suppose that al-Qaeda's resources are such that other methods of accomplishing a similar goal are unfeasible. What does this mean for the permissibility of Method A?


    Or suppose I'm robbing a bank vault, and the only feasible method for opening the vault is an explosive device that is certain to be lethal to anyone standing close enough to it. Suppose that a bank security guard refuses to move away from the vault. Can I trigger the explosive, killing the guard, without intending the death of the guard?

  12. "Suppose instead that my choice is between the red button and nothing. Then I should press the red button. My intention would be to rescue A, B and C. The death of G is an unintended side-effect of rescuing A, B and C."

    This is proportionalism, a principle contrary to Catholic moral teaching.

  13. Not at all. This is just an application of the well-established Principle of Double Effect which comes from Aquinas.

  14. (Though Aquinas would not endorse the modern versions of it.)

  15. It is a proportionalist application of the PDE to say, "The act is moral because a) I don't intend the innocent death I cause; and b) the net survival of innocents is plus 2."

  16. Causing the death of innocents is out of proportion to the end of saving the lives of innocents. You can't simply subtract the body count from the number of lives saved and conclude an act is moral when the result is greater than zero.

  17. In my case it's not just a matter of weighing the outcomes, but also of ensuring that no innocent deaths are intended.

    Find an official definition of proportionalism and see if what I said falls under it.

    A standard Catholic application of double effect is removing a cancerous uterus from a pregnant woman. That seems very similar: one does something that unintentionally causes the death of the fetus in order to save a life, reasoning that non-intentionally causing one innocent death is proportionate to avoiding the deaths of two innocent people.

  18. Thanks, Alex. This seems to me the crucial sentence in what you say:

    "But if I choose Button 2 because of these preferable consequences--and why else would I choose it?--then it seems that I am intending that G die as a means to these consequences."

    I don't agree. I am persuaded by Kamm that one can choose to perform an action A because it has certain consequences without intending that one bring about those consequences, or even that those consequences obtain. Take her example of choosing to drive to shop A rather than shop B because shop A will reimburse me for my fuel. I don't drive to A intending that I be reimbursed. Or her example of the party: I am torn between having a party and not having a party. If I do have a party there'll be a mess; if I don't there won't. But, hang on, if I have a party the guests will feel indebted (a bad thing, says Kamm) and will clear up. I don't intend that they feel indebted, and I don't intend that they clear up, but I choose to have a party because it will have that consequence.

  19. Daniel:

    You could be right.

    But can't one handle the fuel case more naturally by saying that you intend not to have an unreimbursed fuel expense, and that's why you go to Shop A rather than Shop B?

    I am drawn to thinking of facts about intentions as supervening on facts about reasons. But notice the symmetry here. I have three options, let's say. Shop A, shop B, no shop (C). I have two relevant reasons:
    R1: avoiding unreimbursed fuel expenses.
    R2: arriving at a shop.

    R1 favors A and C over B.
    R2 favors A and B over C.

    I end up acting on both R1 and R2. There seems to be a neat symmetry between the reasons. So if my acting on R2 makes it the case that I intend to arrive at a shop, then by parallel my acting on R1 makes it the case that I intend to avoid unreimbursed fuel expenses.

    But the particular way in which I avoid unreimbursed fuel expenses in the case of A is by getting reimbursed. My getting reimbursed is, then, something I intend.


    Vary my original case. Suppose that D, E and F, and only they, are drowning. If Fred does anything that results in G dying, Sam will rescue D, E and F, and there is no other way of rescuing them. Fred has a shotgun and wants to point and fire it just for fun. Fred has to choose a direction to point the shotgun first. He chooses to point it in the direction of G, reasoning that his intention is solely to enjoy the firing of the gun, but that nonetheless the further fact that G will then die and Sam will rescue D, E and F favors his aiming in that direction.

    Here, surely, we want to say that Fred intended to kill G.

    One could try to handle this by using the "trivializing the value of human life" move from this post, but that move, especially in the present case, doesn't seem to quite capture what is going on here.


    A different move in the fuel case would be to talk of defeater-defeaters. I want to go shopping. A partial defeater for that is that I will incur a fuel expense. But shop A's reimbursement defeats the partial defeater rather than providing me with a reason to go. I think there may be something to this move, but it doesn't change my judgment in the case in this post. For D, E and F's deaths are not a defeater for my rescuing A, B and C. (They might be in a modified case where my rescuing A, B and C causes D, E and F to be endangered. I haven't thought about that case yet.)


    This is all theologically very important to Christians, because we want to preserve the ideas that:
    1. God didn't intend Adam to sin.
    2. God's considerations of the consequences of the felix culpa--the great work of redemption--were rationally relevant to permitting Adam to sin.

    But I think a divine incompatibilist (though maybe not a divine determinist!) can handle the theological case without changing the judgment in this post. We could say that God's intention is that either Adam choose rightly or the great work of redemption happens. Adam's choosing wrongly would be a means to the second disjunct, but not to the disjunction.
