Alice has tools in a shed and sees a clearly unarmed thief approaching the shed. She knows her life and limb are in no danger (she can easily move away from the thief), but she points a gun at the thief and shouts: “Stop or I’ll shoot to kill.” The thief doesn’t stop. Alice fulfills the threat and kills the thief.
Bob has a farm of man-eating crocodiles and some tools he wants to store safely. He places the tools in a shed in the middle of the crocodile farm, in order to dissuade thieves. The farm is correctly marked all around with “Man-eating crocodiles” signs, and the crocodiles are quite visible to all and sundry. An unarmed thief breaks into Bob’s property attempting to get to the tool shed, but a crocodile eats him on the way.
Regardless of what local laws may say, Alice is a murderer. In fulfilling the threat, by definition she intended to kill a thief who posed no danger to life or limb. (The case might be different if the tools were needed for Alice to survive, but even then I think she shouldn’t intend death.) What about Bob? Well, there we don’t know what his intentions are. Here are two possible intentions:
1. Prospective thieves are dissuaded by the presence of the man-eating crocodiles, but, as a backup, any that are not dissuaded are eaten.
2. Prospective thieves are dissuaded by the presence of the man-eating crocodiles.
If Bob’s intention is (1), then I think he’s no different from Alice. But Bob’s intention could simply be (2), whereas Alice’s intention couldn’t simply be to dissuade the thief, since if that were simply her intention, she wouldn’t have fired. (Note: the promise to shoot to kill is not morally binding.) Rather, when making the threat, Alice intended to dissuade, with shooting to kill as a backup, and then when she shot in fulfillment of the threat, she intended to kill. If Bob’s intention is simply (2), then Bob may be guilty of some variety of endangerment, but he’s not a murderer. I am inclined to think this can be true even if Bob trained the crocodiles to be man-eaters (in which case it becomes much clearer that he’s guilty of a variety of endangerment).
But let’s think a bit more about (2). The means of dissuading thieves is to put the shed in a place where there are crocodiles with a disposition to eat intruders. So Bob is also intending something like this:
3. There be a dispositional state of affairs where any thieves (and maybe other intruders) tend to die.
However, in intending this dispositional state of affairs, Bob need not be intending the disposition’s actuation. He can simply intend the dispositional state of affairs to function not by actuation but by dissuasion. Moreover, if the thief dies, that’s not an accomplishment of Bob’s. On the other hand, if Bob intended the universal conditional
4. All thieves die
or even:
5. Most thieves die
then he would be accomplishing the deaths of thieves if any were eaten. Thus there is a difference between the logically complex intention that (4) or (5) be true, and the intention that there be a dispositional state of affairs to the effect of (4) or (5). This would seem to be the case even if the dispositional state of affairs entailed (4) or (5). Here’s why there is such a difference. If many thieves come and none die, then that constitutes or grounds the falsity of (4) and (5). But it does not constitute or ground the falsity of (3), and that would remain true even if their coming and surviving entailed the falsity of (3).
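One rough way to display the contrast (the symbolization is mine, not anything in the original cases) is to read T(x) as “x is an intruding thief”, E(x) as “x dies”, and Disp(p) as “there is a standing dispositional state of affairs in virtue of which p tends to hold”:

$$
\begin{aligned}
(4)\ & \forall x\,\bigl(T(x) \to E(x)\bigr)\\
(5)\ & \text{most } x \text{ such that } T(x) \text{ are such that } E(x)\\
(3)\ & \mathrm{Disp}\!\bigl(\forall x\,\bigl(T(x) \to E(x)\bigr)\bigr)
\end{aligned}
$$

On this reading, a run of surviving thieves constitutes the falsity of (4) and (5), but at most entails, without constituting or grounding, the falsity of (3).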
This line of thought, though, has a curious consequence. Automated lethally-armed guard robots are in principle preferable to human lethally-armed guards. For the human guard either has a policy of killing if the threat doesn’t stop the intruder, or has a policy of deceiving the intruder into thinking that she has such a policy. Deception is morally problematic and a policy of intending to kill is morally problematic. On the other hand, with the robotic lethally-armed guards, nobody needs to deceive and nobody needs to have a policy of killing under any circumstances. All that’s needed is the intending of a dispositional state of affairs. This seems preferable even in circumstances, such as wartime, where intentional killing is permissible, since it is surely better to avoid intentional killing.
But isn’t it paradoxical to think there is a moral difference between setting up a human guard and setting up a robotic guard? Yet a lethally-armed robotic guard doesn’t seem significantly different from locating what is being guarded in the middle of a deadly crocodile farm. So if we think there is no moral difference between the human and the robotic guard, then we have to say that there is no difference between Alice’s policy of shooting intruders dead and Bob’s setup.
I think the moral difference between the human guard and the robotic guard can be defended. Think about it this way. In the case of the robotic guard, we can say that the death of the intruder is simply up to the intruder, whereas the human guard would still have to make a decision to go with the lethal policy in response to the intruder’s decision not to comply with the threat. The human guard could say “It’s on the intruder’s head” or “I had no choice—I had a policy”, but these are simply false: both she and the intruder had a choice.
None of this should be construed as a defence of autonomous lethal robots in practice. There are obvious practical worries about false positives, malfunctions, misuse, and lowering the bar to a country’s initiating lethal hostilities.