Alice has tools in a shed and sees a clearly unarmed thief approaching the shed. She knows she is in no danger to life or limb—she can easily move away from the thief—but she points a gun at the thief and shouts: “Stop or I’ll shoot to kill.” The thief doesn’t stop. Alice fulfills the threat and kills the thief.
Bob has a farm of man-eating crocodiles and some tools he wants to store safely. He places the tools in a shed in the middle of the crocodile farm in order to dissuade thieves. The farm is correctly marked all around with “Man-eating crocodiles” signs, and the crocodiles are quite visible to all and sundry. An unarmed thief breaks into Bob’s property attempting to get to his tool shed, but a crocodile eats him on the way.
Regardless of what local laws may say, Alice is a murderer. In fulfilling the threat, by definition she intended to kill a thief who posed no danger to life or limb. (The case might be different if the tools were needed for Alice to survive, but even then I think she shouldn’t intend death.) What about Bob? Well, there we don’t know what his intentions are. Here are two possible intentions:
1. Prospective thieves are dissuaded by the presence of the man-eating crocodiles, but as a backup any that are not dissuaded are eaten.
2. Prospective thieves are dissuaded by the presence of the man-eating crocodiles.
If Bob’s intention is (1), then I think he’s no different from Alice. But Bob’s intention could simply be (2), whereas Alice’s intention couldn’t simply be to dissuade the thief, since if that were simply her intention, she wouldn’t have fired. (Note: the promise to shoot to kill is not morally binding.) Rather, when offering the threat, Alice intended to dissuade and shoot to kill as a backup, and then when she shot in fulfillment of the threat, she intended to kill. If Bob’s intention is simply (2), then Bob may be guilty of some variety of endangerment, but he’s not a murderer. I am inclined to think this can be true even if Bob trained the crocodiles to be man-eaters (in which case it becomes much clearer that he’s guilty of a variety of endangerment).
But let’s think a bit more about (2). The means to dissuading thieves is to put the shed in a place where there are crocodiles with a disposition to eat intruders. So Bob is also intending something like this:
3. There be a dispositional state of affairs where any thieves (and maybe other intruders) tend to die.
However, in intending this dispositional state of affairs, Bob need not be intending the disposition’s actuation. He can simply intend the dispositional state of affairs to function not by actuation but by dissuasion. Moreover, if the thief dies, that’s not an accomplishment of Bob’s. On the other hand, if Bob intended the universal conditional
4. All thieves die
or even:
5. Most thieves die
then he would be accomplishing the deaths of thieves if any were eaten. Thus there is a difference between the logically complex intention that (4) or (5) be true, and the intention that there be a dispositional state of affairs to the effect of (4) or (5). This would seem to be the case even if the dispositional state of affairs entailed (4) or (5). Here’s why there is such a difference. If many thieves come and none die, then that constitutes or grounds the falsity of (4) and (5). But it does not constitute or ground the falsity of (3), and that would be true even if the falsity of (4) and (5) entailed the falsity of (3).
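A rough formalization of my own may bring out the contrast (nothing in the post fixes this notation; read $Tx$ as “$x$ is a thief who enters the property” and $Dx$ as “$x$ dies”):

\[
\text{(4)}\ \forall x\,(Tx \rightarrow Dx) \qquad
\text{(5)}\ \mathrm{Most}_x(Tx,\, Dx) \qquad
\text{(3)}\ \mathrm{Disp}\big(\forall x\,(Tx \rightarrow Dx)\big)
\]

where $\mathrm{Disp}$ is a dispositional operator: the setup tends to make its argument true. On this gloss, the survival of many intruding thieves is what makes (4) and (5) false; even if that fact also entails the falsity of (3), it does not constitute or ground that falsity.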
This line of thought, though, has a curious consequence. Automated lethally-armed guard robots are in principle preferable to human lethally-armed guards. For the human guard either has a policy of killing if the threat doesn’t stop the intruder or has a policy of deceiving the intruder into thinking that she has such a policy. Deception is morally problematic and a policy of intending to kill is morally problematic. On the other hand, with the robotic lethally-armed guards, nobody needs to deceive and nobody needs to have a policy of killing under any circumstances. All that’s needed is the intending of a dispositional state of affairs. This seems preferable even in circumstances—say, wartime—where intentional killing is permissible, since it is surely better to avoid intentional killing.
But isn’t it paradoxical to think there is a moral difference between setting up a human guard and a robotic guard? Yet a lethally-armed robotic guard doesn’t seem significantly different from locating the guarded goods on a deadly crocodile farm. So if we think there is no moral difference here, then we have to say that there is no difference between Alice’s policy of shooting intruders dead and Bob’s setup.
I think the moral difference between the human guard and the robotic guard can be defended. Think about it this way. In the case of the robotic guard, we can say that the death of the intruder is simply up to the intruder, whereas the human guard would still have to make a decision to go with the lethal policy in response to the intruder’s decision not to comply with the threat. The human guard could say “It’s on the intruder’s head” or “I had no choice—I had a policy”, but these are simply false: both she and the intruder had a choice.
None of this should be construed as a defence in practice of autonomous lethal robots. There are obvious practical worries about false positives, malfunctions, misuse, and the lowering of the bar to a country’s initiating lethal hostilities.
A few months ago, some friends of mine asked me to play a trolley car problem game at moralmachine.mit.edu. The question on that site is, “What should the self-driving car do?” I found myself far more willing to have the self-driving car be programmed to hit people who were flouting the law than I would have been to recommend such a policy to human drivers.
In the case of the robot guard, isn't the person who makes "a decision to go with the lethal policy in response to the intruder's decision not to comply with the threat" the programmer of the robot? The programmer and the intruder both have a choice, it seems.
Matthew:
That's very interesting, and it makes sense. Creating natural conditions (in the sense that no further voluntary intervention is needed for them to work) which the vicious know about but where the vicious would suffer does seem different from directly causing the vicious to suffer.
johnsonav:
The programmer had a choice to cause a dispositional state of affairs. That dispositional state of affairs is in place, and the programmer's action is complete and successful, even if the lethal disposition is never triggered.
I don't think Bob is guilty of murder under that scenario. I don't know if you draw that conclusion based on Catholic moral theology or your own moral intuition. If intuitive, I don't share your intuition.
The thief attempts to steal the tools at his own risk. Surely there's no obligation to make thievery safe. Surely Bob is under no obligation to protect the thief from harm. The set-up by itself doesn't put the thief in mortal danger. Rather, the thief puts himself in mortal danger by assuming a risk. In that respect it's different from a mantrap, which the unwary don't detect until it's too late. They never knew what hit them. But in this case, the hazard is in plain sight, and they defy the hazard, at their own cost.
(In fairness, you say Bob may or may not be guilty of murder depending on his intentions.)
Hmm...
Suppose Chuck is a criminal mastermind. He designs a bomb-collar that, once attached around a person's neck, can't be removed, and is set to go off in 30 minutes. Chuck kidnaps Dave, attaches the collar to him, and tells him that he must rob a bank and return to Chuck's hideout with the money in time, or the bomb will detonate, killing Dave. Chuck doesn't intend for Dave to die; he intends a dispositional state of affairs that functions by dissuasion.
But, Dave is unable to return to the hideout with the money in time. The bomb explodes, killing him. Following the reasoning in the post, Chuck didn't murder Dave. That doesn't seem right to me, but I can't put my finger on the problem.
Interesting post.
Steve:
I also don't think Bob has committed murder.
johnsonav:
I think there are actions that are not intentional killings but that are, nonetheless, wrongful killings, and it is not unjust for the law to impose penalties similar to those imposed for murder. For instance, consider an environmentalist terrorist who shoots down an airplane because airplanes pollute. His aim is to destroy the airplane. He doesn't care either way about the people in it. He doesn't intend their deaths. But my understanding is that the law would convict him of murder--and rightly so.
Likewise, what Chuck did may be close enough to murder (if Dave doesn't make it in time) or to attempted murder (if he does) that it's reasonable to apply similar penalties. But it's not literally murder.
Consider, though, this variant. Dave is an associate of Chuck's. Dave wants Chuck to put the collar on him, as it will motivate him to hurry, and he is confident that with the collar he'll make it in time, but he might not make it in time without it. Nonetheless, things go awry, and Dave doesn't make it. Has Dave committed suicide? Not literally.
Alex,
You said "If Bob’s intention is (1), then I think he’s no different from Alice," and you said Alice is a murderer.
Steve:
That's right.
I was thinking that the charitable interpretation of intentions would be (2).
Your argument does not work if Bob's intention is (1). For if Bob's intention is (1), then Bob is intending that everyone who is not dissuaded should die. That's a murderous intent. It is not different in logical form from intending, say, that everyone who owes one money should die, a clearly murderous intent.
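To make the point about logical form explicit (my own gloss, with a hypothetical predicate letter $F$): both intentions have the shape

\[
\forall x\,(Fx \rightarrow \mathrm{Dies}(x)),
\]

with $Fx$ read as "$x$ is an intruder who is not dissuaded" in the one case and "$x$ owes me money" in the other; in each, every instance of the condition is intended to be answered with a death.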
Isn't the fundamental difference between Alice and Bob that Alice is the direct cause of the thief's death? She performs an action in the relevant circumstances, the direct result of which is a human death. That's murder. Creating a situation where a person will die *without the direct action of any human causing that death*, if the person freely chooses to step into that situation, is not murder.
Why would there be a paradox in the case of "setting up a human guard" vs. "setting up a robot guard"? In the latter case, it is never true that a human freely chooses to kill another human (which is part of what it means to murder).
"But isn’t it paradoxical to think there is a moral difference between setting up a human guard and a robotic guard? Yet a lethally-armed robotic guard doesn’t seem significantly different from locating the guarded goods on a deadly crocodile farm. So if we think there is no moral difference here, then we have to say that there is no difference between Alice’s policy of shooting intruders dead and Bob’s setup."
ReplyDeleteIt would seem to me that main difference between Alice's policy of shooting intruders dead and Bob's setup is that Bob has construed his created dispositional state to be one of 'man against nature' whereas Alice's created dispositional state is clearly one of 'man against man' since we consider the gun an extension of herself. Thus Bob has less culpability in the death of the intruder.
In the case of lethally armed robots, it is not difficult to imagine a civilization plagued by lethally armed robots, relics of a past civilization, that kill thieves; that would create a scenario akin to the 'man versus nature' situation that Bob has created. In the here and now, though, it is not irrational to foresee a day in the not-too-distant future when a person, corporation, or state could employ lethally armed robots. I would find the lethally armed robots to be an extension of the entity that controlled them, and that entity should be guilty of murder, if it is indeed murder to directly cause the death of a human trying to steal material property.
Here's a thought. Suppose that one is sincerely only pursuing deterrence by setting up the robot guard.
Then it would be ideal not to put any bullets in the robot guard's gun. In practice, however, one may have to put bullets in, because thieves would find out if one had a policy of having the robot guard's gun be unloaded. But in that case, one's intention isn't that the gun *be* lethal, but only that thieves should *think* it's lethal, though as it happens the only way of securing that might be by putting bullets in it.
So there is a real difference between the robot guard and the human guard. If the robot guard shoots, nobody needs to have intended the shot. The owner might simply have intended that there be a robot guard that appears as if it will shoot, and the robot guard intended nothing, since it's not an agent. On the other hand, a human guard that shoots intends to shoot. The shooting no longer serves the purpose of deterring this thief. (It may deter other thieves, but if so, then the shooting is intended.)
(Of course, if the robot has a mind and is an agent, then there is no longer this difference.)