Monday, November 9, 2020

Logically complex intentions

In a paper that was very important to me when I wrote it, I argue that the Principle of Double Effect should deal with accomplishment rather than intention. In particular, I consider cases of logically complex intentions: “I am a peanut farmer and I hate people with severe peanut allergies…. I secretly slip peanuts into Jones’ food in order that she should die if she has a severe peanut allergy. I do not intend Jones’ death—I only intend the logically complex state of Jones dying if she has a severe peanut allergy.” I then say that what is wrong with this action is that if Jones has an allergy, then I have accomplished her death, though I did not intend it. What was wrong with my action is that my plan was open to a possibility that included my accomplishing her death.

But now consider a different case. A killer robot is loose in the building and all the doors are locked. The robot will stop precisely when it kills someone: it has a gun with practically unlimited ammunition and a kill detector that shuts it down as soon as it has killed someone. It’s heading for Bob’s office, and Alice bravely runs in front of it to save his life. And my intuition is that Alice did not commit suicide. Yet it seems that Alice intended her death as a means to saving Bob’s life.

But perhaps it is not right to say that Alice intended her death at all. Instead, it seems plausible that Alice’s intention is:

  1. If the robot will kill someone, it will kill Alice.

An additional reason to think that (1) is a better interpretation of Alice’s intentions than just her unconditionally intending to die is that if the robot breaks down before killing Alice, we wouldn’t say that Alice’s action failed. Rather, we would say that it was made moot.
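
One way to make the failed/moot distinction precise is a small outcome table. This is a minimal sketch assuming a material-conditional reading of (1), which the post itself does not commit to:

```latex
% Outcomes for the conditional intention (1), with
%   p = the robot will kill someone,  q = the robot kills Alice.
\[
\begin{array}{lll}
p \wedge q      & \text{conditional fulfilled by the action} & \text{success} \\
p \wedge \neg q & \text{conditional falsified}               & \text{failure} \\
\neg p          & \text{conditional vacuously true}          & \text{moot}
\end{array}
\]
```

On this reading, the robot’s breaking down puts Alice in the ¬p row: her plan is neither carried out nor falsified, but simply mooted.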

But according to what I say in the accomplishment paper, if in fact the robot does not break down, then Alice accomplishes her own death. And that’s wrong. (I take it that suicide is wrong.)

Perhaps what we want to say is this. In conditional intention cases, when one intends:

  2. If p, then q

and p happens and one’s action is successful, then what one has contrastively accomplished is:

  3. its being the case that p and q rather than p and not q.

To contrastively accomplish A rather than B is not the same as to accomplish A simply. And there is nothing evil about contrastively accomplishing the robot’s killing someone and killing Alice rather than its killing someone and not killing Alice. On the other hand, if we apply this analysis to the peanut allergy case, what the crazy peanut farmer contrastively accomplishes is:

  4. Jones having a peanut allergy and dying rather than having a peanut allergy and not dying.

And this is an evil thing to contrastively accomplish. Roughly, it is evil to contrastively accomplish A rather than B just in case A is not insignificantly more evil than B.
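
It may help to put the schema and the rough principle into symbols. This is only a sketch: Acc, CAcc, the badness ranking b, and the threshold ε are my glosses, not notation from the paper or the post:

```latex
% Schema: one intends (if p then q), p obtains, and the action succeeds.
% What is contrastively accomplished is (p and q) rather than (p and not q):
\[
\mathrm{CAcc}\big( (p \wedge q) \,/\, (p \wedge \neg q) \big),
\qquad \text{which does not entail } \mathrm{Acc}(q).
\]
% Rough evil-making condition, with b a badness ranking and \epsilon a
% significance threshold (both hypothetical glosses):
\[
\mathrm{CAcc}(A \,/\, B) \text{ is evil} \iff b(A) - b(B) > \epsilon.
\]
% Robot case: b(someone is killed and it is Alice) does not significantly
% exceed b(someone is killed and it is not Alice), so the condition fails.
% Peanut case: b(allergy and death) far exceeds b(allergy and survival),
% so the condition holds.
```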

But what about a variant case? The robot is so programmed that it stops as soon as someone in the building dies. The robot is heading for Bob, and it’s too late for Alice to jump in front of it. So instead Alice shoots herself. Can’t we say that she shot herself rather than let Bob die, and that the contrastive accomplishment of her death rather than Bob’s is laudable? I don’t think so. For her contrastive accomplishment was achieved by simply accomplishing her death, which, while in a sense brave, was a suicide and hence wrong.
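
The variant suggests a constraint that a worked-out logic should record; the “realized via” clause below is my own tentative gloss on the verdict just given:

```latex
% Conjectural constraint: a contrastive accomplishment realized by simply
% accomplishing one of its contrasts inherits the moral status of that
% simple accomplishment.
\[
\big( \mathrm{CAcc}(A \,/\, B) \text{ realized via } \mathrm{Acc}(A) \big)
\wedge \mathrm{Wrong}\big(\mathrm{Acc}(A)\big)
\;\Rightarrow\; \mathrm{Wrong}(\text{the act}).
\]
% Second robot case: A = Alice dies, B = Bob dies. Alice realizes the
% contrast by shooting herself, i.e., via Acc(A); that is a suicide and
% hence wrong, so the act is wrong despite the laudable-looking contrast.
```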

A difficult but important task someone should do: Work out the logic of accomplishment and contrastive accomplishment for logically complex intentions.

2 comments:

Harrison Lee said...

Hi Alex,

If a terrorist intends to kill a group of people by blowing up a building, his intention might be framed as follows: "If anything x is such that x is a person and x is in the building, let x die." Framed in this way, the intention is conditional. So, it cannot be the case that the peanut farmer's killing is unintentional just because he only intends death conditionally. What is the difference between these two cases?

Regarding the task you set at the end--to "work out the logic of accomplishment and contrastive accomplishment for logically complex intentions"--here is a thought.

You note that in the first robot case, "if the robot breaks down before killing Alice, we wouldn’t say that Alice’s action failed. Rather, we would say that it [her plan?] was made moot."

But suppose Alice survives shooting herself and the robot carries on its way toward Bob. Without a change of heart, she would be disappointed that she failed to carry out her plan. Suppose that the robot then broke down before killing Bob. Wouldn't it still be true that Alice once tried to kill herself--in order to save Bob--and failed?

(Clicking the "I'm not a robot" button before publishing this comment felt especially significant.)

Alexander R Pruss said...

Harrison,

Your terrorist as described ("intends to kill a group of people") also wants someone to die: he won't be satisfied if the building turns out to be empty. So I think the conditional formulation doesn't fully capture his intentions.

What you say about Alice's suicide sounds right.