Showing posts with label robots. Show all posts

Tuesday, December 20, 2016

Bestowing harms and benefits

A virtuous person happily confers justified benefits and unhappily bestows even justified harms. Moreover, it is not just that the virtuous person is happy about someone being benefitted and unhappy about someone being harmed, though she does have those attitudes. Rather, the virtuous person is happy to be the conferrer of justified benefits and unhappy to be the bestower even of justified harms. These attitudes on the part of the virtuous person are evidence that it is non-instrumentally good for one to confer justified benefits and non-instrumentally bad for one to bestow even justified harms. Of course, the bestowal of justified harms can be virtuous, and virtuous action is non-instrumentally good for one. But an action can be good for one qua virtuous and bad for one in another way—cases of self-sacrifice are like that. Virtuously bestowing justified harms is a case of self-sacrifice on the part of the virtuous agent.

When multiple agents are necessary and voluntary causes of a single harm, the total bad of being a bestower of harm is not significantly diluted between the agents. Each agent non-instrumentally suffers from the total bad of bestowing harm, though the contingent psychological effects may—but need not—be diluted. (A thought experiment: One person hits a criminal in an instance of morally justified and legally sentenced corporal punishment while the other holds down the punishee. Both agents are equally responsible. It makes no difference to the badness of being the imposer of corporal punishment if instead of the other holding down the punishee, the punishee is simply tied down. Interestingly, one may have a different intuition on the other side—it might seem worse to hold down the punishee to be hit by a robot than by a person. But that’s a mistake.)

If this is right, then we have a non-instrumental reason to reduce the number of people involved in the justified imposition of a harm, though in particular cases there may also be reasons, instrumental and otherwise, to increase the number of people involved (e.g., a larger number of people involved in punishing may better convey societal disapproval).

This in turn gives a non-instrumental reason to develop autonomous fighting robots for the military, since the use of such robots decreases the number of people who are non-instrumentally (as well as psychologically) harmed by killing. Of course, there are obvious serious practical problems there.

Monday, December 19, 2016

Intending material conditionals and dispositions, with an excursus on lethally-armed robots

Alice has tools in a shed and sees a clearly unarmed thief approaching the shed. She knows her life and limb are in no danger—she can easily move away from the thief—but she points a gun at the thief and shouts: “Stop or I’ll shoot to kill.” The thief doesn’t stop. Alice fulfills the threat and kills the thief.

Bob has a farm of man-eating crocodiles and some tools he wants to store safely. He places the tools in a shed in the middle of the crocodile farm, in order to dissuade thieves. The farm is correctly marked all around with “Man-eating crocodiles” signs, and the crocodiles are quite visible to all and sundry. An unarmed thief breaks into Bob’s property attempting to get to his tool shed, but a crocodile eats him on the way.

Regardless of what local laws may say, Alice is a murderer. In fulfilling the threat, by definition she intended to kill the thief who posed no danger to life or limb. (The case might be different if the tools were needed for Alice to survive, but even then I think she shouldn’t intend death.) What about Bob? Well, there we don’t know what the intentions are. Here are two possible intentions:

  1. Prospective thieves are dissuaded by the presence of the man-eating crocodiles but, as a backup, any who are not dissuaded are eaten.

  2. Prospective thieves are dissuaded by the presence of the man-eating crocodiles.

If Bob’s intention is (1), then I think he’s no different from Alice. But Bob’s intention could simply be (2), whereas Alice’s intention couldn’t simply be to dissuade the thief, since if that were simply her intention, she wouldn’t have fired. (Note: the promise to shoot to kill is not morally binding.) Rather, when offering the threat, Alice intended to dissuade and shoot to kill as a backup, and then when she shot in fulfillment of the threat, she intended to kill. If Bob’s intention is simply (2), then Bob may be guilty of some variety of endangerment, but he’s not a murderer. I am inclined to think this can be true even if Bob trained the crocodiles to be man-eaters (in which case it becomes much clearer that he’s guilty of a variety of endangerment).

But let’s think a bit more about (2). The means to dissuading thieves is to put the shed in a place where there are crocodiles with a disposition to eat intruders. So Bob is also intending something like this:

  3. There be a dispositional state of affairs where any thieves (and maybe other intruders) tend to die.

However, in intending this dispositional state of affairs, Bob need not be intending the disposition’s actuation. He can simply intend the dispositional state of affairs to function not by actuation but by dissuasion. Moreover, if the thief dies, that’s not an accomplishment of Bob’s. On the other hand, if Bob intended the universal conditional

  4. All thieves die

or even:

  5. Most thieves die

then he would be accomplishing the deaths of thieves if any were eaten. Thus there is a difference between the logically complex intention that (4) or (5) be true, and the intention that there be a dispositional state of affairs to the effect of (4) or (5). This would seem to be the case even if the dispositional state of affairs entailed (4) or (5). Here’s why there is such a difference. If many thieves come and none die, then that constitutes or grounds the falsity of (4) and (5). But it does not constitute or ground the falsity of (3), and this remains so even if the state of affairs entails the falsity of (3).

This line of thought, though, has a curious consequence. Automated lethally-armed guard robots are in principle preferable to human lethally-armed guards. For the human guard either has a policy of killing if the threat doesn’t stop the intruder or has a policy of deceiving the intruder that she has such a policy. Deception is morally problematic and a policy of intending to kill is morally problematic. On the other hand, with the robotic lethally-armed guards, nobody needs to deceive and nobody needs to have a policy of killing under any circumstances. All that’s needed is the intending of a dispositional state of affairs. This seems preferable even in circumstances—say, wartime—where intentional killing is permissible, since it is surely better to avoid intentional killing.

But isn’t it paradoxical to think there is a moral difference between setting up a human guard and a robotic guard? Yet a lethally-armed robotic guard doesn’t seem significantly different from locating what is being guarded on a deadly crocodile farm. So if we think there is no moral difference here, then we have to say that there is no difference between Alice’s policy of shooting intruders dead and Bob’s setup.

I think the moral difference between the human guard and the robotic guard can be defended. Think about it this way. In the case of the robotic guard, we can say that the death of the intruder is simply up to the intruder, whereas the human guard would still have to make a decision to go with the lethal policy in response to the intruder’s decision not to comply with the threat. The human guard could say “It’s on the intruder’s head” or “I had no choice—I had a policy”, but these are simply false: both she and the intruder had a choice.

None of this should be construed as a defence in practice of autonomous lethal robots. There are obvious practical worries about false positives, malfunctions, misuse and lowering the bar to a country’s initiating lethal hostilities.

Thursday, April 24, 2014

Brainlink on sale

I got an email earlier this week from Surplus Shed about the Brainlink being on sale for $20, in the aftermath of its discontinuation. It looks like a really cool device. It can hook up via Bluetooth to a computer or an Android phone on one end, and to many things on the other end: it has two PWM motor controllers, some DAC I/O, some analogue I/O (low resolution, but the firmware is user-upgradeable), a proximity sensor, accelerometers, and an IR transmitter. The last of these is supposed to make it capable of controlling Roombas, TVs, DVD players and toy robots (I plan to try it with some of our IR helicopters, though the range of the IR on the Brainlink is supposed to be short, and maybe with our Pleo if we can make its battery pack work), and you can control it with Java code (there is an SDK). It's all beautifully open and well-documented. Very sad it's discontinued, but the original price was way more than a Raspberry Pi, so it's not surprising it didn't fly. For $20 it's a steal. The official website for the product is here. My eldest daughter and I are really looking forward to it! (Of course we may end up disappointed.) Techie readers may want to check it out.

Thursday, November 6, 2008

Artificial Intelligence and Personal Identity

Today at Baylor's Science and Human Nature conference I am giving a talk where I argue that absurdities follow from the assumption that a robot is a person. For a quick argument, note that the question of how many persons there are ought always to have an objective answer, but the question of how many robots there are does not always have an objective answer. (Think of a larger robot made up of smaller ones, for instance.)