Showing posts with label benefit. Show all posts

Tuesday, December 20, 2016

Bestowing harms and benefits

A virtuous person happily confers justified benefits and unhappily bestows even justified harms. Moreover, it is not just that the virtuous person is happy about someone being benefitted and unhappy about someone being harmed, though she does have those attitudes. Rather, the virtuous person is happy to be the conferrer of justified benefits and unhappy to be the bestower even of justified harms. These attitudes on the part of the virtuous person are evidence that it is non-instrumentally good for one to confer justified benefits and non-instrumentally bad for one to bestow even justified harms. Of course, the bestowal of justified harms can be virtuous, and virtuous action is non-instrumentally good for one. But an action can be good for one qua virtuous and bad for one in another way—cases of self-sacrifice are like that. Virtuously bestowing justified harms is a case of self-sacrifice on the part of the virtuous agent.

When multiple agents are necessary and voluntary causes of a single harm, the total bad of being a bestower of harm is not significantly diluted among the agents. Each agent non-instrumentally suffers the total bad of bestowing harm, though the contingent psychological effects may—but need not—be diluted. (A thought experiment: One person hits a criminal in an instance of morally justified and legally sentenced corporal punishment while the other holds down the punishee. Both agents are equally responsible. It makes no difference to the badness of being the imposer of corporal punishment whether the punishee is held down by the other agent or simply tied down. Interestingly, one may have a different intuition on the other side—it might seem worse to hold down the punishee to be hit by a robot than by a person. But that’s a mistake.)

If this is right, then we have a non-instrumental reason to reduce the number of people involved in the justified imposition of a harm, though in particular cases there may also be reasons, instrumental and otherwise, to increase the number of people involved (e.g., a larger number of people involved in punishing may better convey societal disapproval).

This in turn gives a non-instrumental reason to develop autonomous fighting robots for the military, since the use of such robots decreases the number of people who are non-instrumentally (as well as psychologically) harmed by killing. Of course, there are obvious serious practical problems there.

Tuesday, May 17, 2016

Universal beneficence and love

Take as true this plausible thesis:

  1. If you love someone, you have moral reason to benefit that person.
This is curious: it means that if I brainwash you into loving me, you will have moral reason to benefit me. But surely you did not gain a moral reason to benefit me from my brainwashing you. So you must have already had that moral reason before I brainwashed you into loving me. Hence you always already had a moral reason to benefit me, and since I'm not special in this respect, you always already had a moral reason to benefit everyone.

Here's another thesis:

  2. You should never try to stop loving.
But again suppose I brainwash you into loving me. If loving me were something optional, something you had no duty to do, then it should be permissible for you to undo my imposition of love. But by (2) it's not permissible. So although I did wrong in forcing you to love me, loving me is indeed the right thing for you to do: it is your duty. But I'm not special. So you always already had a moral reason to love everyone.

But it is not in general wrong to try to stop having a particular form of love. We can find ourselves with the wrong form of love: we can love grown children as if they were small children, for instance, or have a romantic love toward someone we ought not. In those cases, it is right to try to stop having that particular form of love, trying as best we can to replace it with the right form.

Tuesday, October 15, 2013

Benefitting some and harming none

Consider the principle that if an action benefits some and harms none, then it's permissible. Now imagine a lottery run by uniformly choosing a random number between 0 and 1, with each number equally likely. There are infinitely many tickets, each bearing a different number between 0 and 1. Each ticket has been sold to a different person (there are lots of people in this story!). At night, I steal all the tickets and rearrange them as follows. I take all the tickets numbered between 0 and 0.990, and my best friend gets all the tickets numbered between 0.990 and 0.999. I then redistribute the remaining tickets to all the people who bought tickets. So by morning, my friend and I have all the tickets numbered between 0 and 0.999, but everybody who had a ticket still has a ticket, and the ticket she has is just as good as the one she had before. I have made it pretty much certain that I will win, but I haven't lowered anybody else's chances of winning.
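The arithmetic here can be sketched in a quick Monte Carlo simulation (the remapping function and the sample counts below are my own illustration, not part of the story): the redistribution is a bijection from all the old tickets onto the leftover interval, each individual ticket has probability zero of winning under the uniform draw, and yet the block of tickets I keep wins 99% of the time.

```python
import random

def redistribute(x):
    """Hypothetical remapping (an illustration only): a buyer's old
    ticket x in [0, 1) becomes the leftover ticket 0.999 + 0.001*x
    in [0.999, 1). The map is a bijection, so every original buyer
    still ends up holding exactly one distinct ticket."""
    return 0.999 + 0.001 * x

random.seed(0)
trials = 100_000
draws = [random.random() for _ in range(trials)]  # uniform winning numbers

# The interval I keep, [0, 0.990), wins about 99% of the time...
p_me = sum(d < 0.990 for d in draws) / trials
# ...my friend's interval [0.990, 0.999) wins about 0.9% of the time...
p_friend = sum(0.990 <= d < 0.999 for d in draws) / trials
# ...while any single ticket, old or new, has measure zero under the
# uniform draw: in the simulation it essentially never wins.
p_single = sum(d == redistribute(0.458) for d in draws) / trials

print(p_me, p_friend, p_single)
```

Since every buyer holds exactly one ticket both before and after the shuffle, and any single ticket wins with probability zero, no buyer's chance of winning has been lowered—even as my own chance jumps to 0.99.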

Bracketing contingent considerations of public peace and of positive law, it seems that:

  1. I have harmed none—no one's chance of winning has gone down.
  2. I have benefited some (myself and my friend)—our chances of winning have gone up.
  3. I have done wrong.
  4. Thus, an action that benefits some and harms none can still be wrong.

One might object. Suppose ticket number 0.458 wins. Previously it was assigned to Mr Smith. Now it's mine. Haven't I harmed Mr Smith? Maybe, but maybe not. Let me fill out the case by saying that there is no fact of the matter as to who would have won had I not shuffled the tickets. In the story, we live in a very chaotic universe, and any activity—be it stretching one's arms in the morning or shuffling tickets—affects the random choice of winning number. There is no fact about what that random choice would have been had things gone differently. Thus, just because ticket 0.458 wins and it was Mr Smith's before my night-time activity, one cannot say that I have harmed Mr Smith. (Molinists won't like this. But surely whether I have harmed anybody shouldn't depend on the truth values of Molinist conditionals.)