
Tuesday, August 27, 2024

The need for a fine-grained deontology

It’s tempting to say that what justifies lethal self-defense is a wrongful lethal threat, perhaps with voluntariness and/or culpability added (see discussion and comments here).

But that’s not quite right. Suppose that a police officer, in addition to carrying her own gun, has her best friend’s gun with her, which she was taking in to a shop for minor cosmetic repairs. She promised her friend that she wouldn’t use his gun. Now, you threaten the officer, and she pulls her friend’s gun out, in blatant disregard of her promise, because she has always wanted to see what it feels like to threaten someone with this particular gun. The officer is now lethally threatening you, and doing so wrongfully, voluntarily and culpably, but that does not justify lethal self-defense.

One might note here that the officer is not wronging you by breaking her promise to her best friend. So perhaps what justifies lethal self-defense is a lethal threat that wrongs you. But that can’t be the solution. If you are the best friend in question—no doubt now the former best friend—then it is you who is being wronged by the breaking of the promise. But that wrong is irrelevant to your lethal self-defense. Furthermore, we want an account of self-defense to generalize to an account of the defense of innocent victims.

One might say that lethal self-defense is permitted only against a gravely wrongful threat, and this promise-breaking is not gravely wrongful. But we can tweak the case to make it gravely wrongful. Maybe the police officer swore an oath before God and the community not to use this particular gun. That surely doesn’t justify your using lethal force to defend yourself against the officer’s threat.

Maybe what we want to say is that the kind of wrongful lethal threat that justifies lethal self-defense is one that wrongs by violating the right to life of the person threatened (rather than, say, being wrong by violating a promise). That sounds right to me. But what’s interesting about this is that it forces us to have a more fine-grained deontology. Not only do we need to talk about actions being wrong, but about actions being wrong against someone, and against someone in a particular way.

It’s interesting that considerations of self-defense require such a fine-grained deontology even if we do not think that in general every wrongful action wrongs someone.

Thursday, April 18, 2024

Evaluating some theses on dignity and value

I’ve been thinking a bit about the relationship between dignity and value. Here are four plausible principles:

  1. If x has dignity, then x has great non-instrumental value.

  2. If x has dignity, then x has great non-instrumental value because it has dignity.

  3. If x has dignity and y does not, then x has more non-instrumental value than y.

  4. Dignity just is great value (variant: great non-instrumental value).

Of these theses, I am pretty confident that (1) is true. I am fairly confident (3) is false, except perhaps in the special case where y is a substance. I am even more confident that (4) is false.

I am not sure about (2), but I incline against it.

Here is my reason to suspect that (2) is false. It seems that things have dignity in virtue of some further fact F about them, such as that they are rational beings, or that they are in the image and likeness of God, or that they are sacred. In such a case, it seems plausible to think that F directly gives the dignified entity both the great value and dignity, and hence the great value derives directly from F and not from the dignity. For instance, maybe what makes persons have great value is that they are rational, and the same fact—namely that they are rational—gives them dignity. But the dignity doesn’t give them additional value beyond that bestowed on them by their rationality.

My reason to deny (4) is that great value does not give rise to the kinds of deontological consequences that dignity does. One may not desecrate something with dignity no matter what consequences come of it. But it is plausible that mere great value can be destroyed for the sake of dignity.

This leaves principle (3). The argument in my recent post (which I now have some reservations about, in light of some powerful criticisms from a colleague) points to the falsity of (3). Here is another, related reason. Suppose we find out that the Andromeda Galaxy is full of life, of great diversity and wonder, including both sentient and non-sentient organisms, but has nothing close to sapient life—nothing like a person. An evil alien is about to launch a weapon that will destroy the Andromeda Galaxy. You can either stop that alien or save a drowning human. It seems to me that either option is permissible. If I am right, then the value of the human is not much greater than that of the Andromeda Galaxy.

But now imagine that the Whirlpool Galaxy has an order of magnitude more life than the Andromeda Galaxy, with much greater diversity and wonder, but still with nothing sapient. Then even if the value of the human is greater than that of the Andromeda Galaxy, it is not much greater; and since the value of the Whirlpool Galaxy is much greater than that of the Andromeda Galaxy, it follows that the human does not have greater value than the Whirlpool Galaxy.

However, the Whirlpool Galaxy, assuming it has no sapience in it, lacks dignity. A sign of this is that it would be permissible to deliberately destroy it in order to save two similar galaxies from destruction.

Thus, the human is not greater in value than the Whirlpool Galaxy (in my story), but the human has dignity while the Whirlpool Galaxy lacks it.

That said, on my ontology, galaxies are unlikely to be substances (especially if the life in the galaxy is considered a part of the galaxy, since following Aristotle I doubt that a substance can be a proper part of a substance). So it is still possible that principle (3) is true for substances.

But I am not sure even of (3) in the case of substances. Suppose elephants are not persons, and imagine an alien sentient but not sapient creature which is like an elephant in the temporal density of the richness of life (i.e., richness per unit time), except that (a) its rich elephantine life lasts millions of years, and (b) there can only be one member of the kind, because they naturally do not reproduce. On the other hand, consider an alien person who naturally only has a life that lasts ten minutes, and has the same temporal density of richness of life that we do. I doubt that the alien person is much more valuable than the elephantine alien. And if the alien person is not much more valuable, then by imagining a non-personal animal that is much more valuable than the elephantine alien, we have imagined that some person is not more valuable than some non-person. Assuming all non-persons lack dignity and all persons have dignity, we have a case where an entity with dignity is not more valuable than an entity without dignity.

That said, I am not very confident of my arguments against (3). And while I am dubious of (3), I do accept:

  5. If x has dignity and y does not, then y is not more valuable than x.

I think the cases of the human and the galaxy, and of the alien person and the alien elephantine creature, are cases of incommensurability.

Tuesday, April 16, 2024

Value and dignity

  1. If it can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life, then the life of a typical human being is not of greater value than that of the lion species.

  2. It can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life.

  3. So, the life of a typical innocent human being is not of greater value than that of the lion species.

  4. It is wrong to intentionally kill an innocent human being in order to save tigers, elephants and giraffes from extinction.

  5. It is not wrong to intentionally destroy the lion species in order to save tigers, elephants and giraffes from extinction.

  6. If (3), (4) and (5), then the right to life of innocent human beings is not grounded in how great the value of human life is.

  7. So, the right to life of innocent human beings is not grounded in how great the value of human life is.

I think the conclusion to draw from this is the Kantian one: that dignity, the property of human beings that grounds respect, is not a form of value. A human being has a dignity greater than that of all lions taken together, as indicated by the deontological claims (4) and (5), but a human being does not have a value greater than that of all lions taken together.

One might be unconvinced by (2). But if so, then tweak the argument. It is reasonable to accept a 25% chance of death in order to stop an alien attack aimed at killing off all the lions. If so, then on the plausible assumption that the value of all the lions, tigers, elephants and giraffes is at least four times that of the lions (note that there are multiple species of elephants and giraffes, but only one of lions), it is reasonable to accept a 100% chance of death in order to stop the alien attack aimed at killing off all four types of animals. But now we can easily imagine sixteen types of animals such that it is permissible to intentionally kill off the lions, tigers, elephants and giraffes in order to save the 16 types, but it is not permissible to intentionally kill a human in order to save the 16 types.
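To make the arithmetic explicit, here is a minimal sketch of the tweaked argument (illustrative numbers only, and using the argument’s own rough working assumption that the risk of death one may reasonably accept scales with the value at stake):

```python
# Illustrative numbers; "value" units are arbitrary.
lions = 1.0                  # value of the lions
four_types = 4 * lions       # lions + tigers + elephants + giraffes: at least four times the lions

risk_for_lions = 0.25        # premise: a 25% chance of death is a reasonable sacrifice to save the lions

# Working assumption: reasonably acceptable risk scales with the value at stake.
risk_for_four_types = min(1.0, risk_for_lions * four_types / lions)
print(risk_for_four_types)   # 1.0 -- accepting certain death to save the four types is reasonable

# The deontological contrast: we can then imagine sixteen such types whose value makes it
# permissible to intentionally destroy the lions, tigers, elephants and giraffes to save them,
# while it remains impermissible to intentionally kill one human to save the sixteen.
```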

Thursday, April 4, 2024

Intending the bad as such

Here is a plausible thesis:

  1. You should never intend to produce a bad effect qua bad.

Now, even the most hardnosed deontologist (like me!) will admit that there are minor bads which it is permissible to intentionally produce for instrumental reasons. If a gun is held to your head, and you are told that you will die unless you come up to a stranger and give them a moderate slap with a dead fish, then the slap is the right thing to do. And if the only way for you to survive a bear attack is to wake up your fellow camper who is much more handy with a rifle than you are, and the only way to wake them up is to poke them with a sharp stick, then the poke is the right thing. But these cases are not counterexamples to (1), since while the slap and poke are bad, one is not intending them qua bad.

However, there are more contrived cases where it seems that you should intend to produce a bad effect qua bad. For instance, suppose that you are informed that you will die unless you do something clearly bad to a stranger, but it is left entirely up to you what the bad thing is. Then it seems obvious that the right thing to do is to choose the least bad thing you can think of—the lightest slap with a dead fish, perhaps, that still clearly counts as bad—and do that. But if you do that, then you are intending the bad qua bad.

Yet I find (1) plausible. I feel a pull towards thinking that you shouldn’t set your will on the bad qua bad, no matter what. However, it seems weird to think that it would be right to give a stranger a moderate slap with a dead fish if that was specifically what you were required to do to save your life, but it would be wrong to give them a mild slap if it were left up to you what bad thing to do. So, very cautiously, I am inclined to deny (1) in the case of minor bads.

Thursday, January 11, 2024

A deontological asymmetry

Consider these two cases:

  1. You know that your freely killing one innocent person will lead to three innocent drowning people being saved.

  2. You know that saving three innocent drowning people will lead to your freely killing one innocent person.

It’s easy to imagine cases like (1). If compatibilism is true, it’s also pretty easy to imagine cases like (2)—we just suppose that your saving the innocent people produces a state of affairs where your psychology gradually changes in such a way that you kill one innocent person. If libertarianism and Molinism are true, we can also get (2): God can reveal to you the relevant conditional of free will.

If libertarianism is true but Molinism is false, it’s harder to get (2), but we can still get it, or something very close to it. We can, for instance, imagine that if you rescue the three people, you will be kidnapped by someone who will offer increasingly difficult to resist temptations to kill an innocent person, and it can be very likely that one day you will give in.

Deontological ethics says that in (1) killing the innocent person is wrong.

Does it say that saving the three innocents is wrong in (2)? It might, but not obviously so. For the action is in itself good, and one might reasonably say that becoming a murderer is a consequence that is not disproportionate to saving the three lives. After all, imagine this variant:

  3. You know that saving three innocent drowning people will lead to a fourth person freely killing one innocent person.

Here it seems that it is at least permissible to save the three innocents. That someone will through a weird chain of events become a murderer if you save the three innocents does not make it wrong to save the three.

I am inclined to think that saving the three is permissible in (2). But if you disagree, change the three to thirty. Now it seems pretty clear to me that saving the drowning people is permissible in (2). But it is still wrong to kill an innocent person to save thirty.

Even on threshold deontology, it seems pretty plausible that the thresholds in (1) and (2) are different. If n is the smallest number such that it is permissible to save n drowning people at the expense of the side effect of your eventually killing one innocent, then it seems plausible that n is not big enough to make it permissible to kill one innocent to save n.

So, let’s suppose we have this asymmetry between (1) and (2), with the “three” replaced by some other number as needed (the same one in both statements), so that the action described in (1) is wrong but the one in (2) is permissible.

This then will be yet another counterexample to the project of consequentializing deontology: of finding a utility assignment that renders conclusions equivalent to those of deontology. For the consequences of (1) and (2) are the same, even if one assigns a very big disutility to killing innocents.
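To see the problem concretely: the very same things end up happening in (1) and (2)—you freely kill one innocent and the drowning innocents are saved—so any outcome-based utility assignment scores the two actions identically. Here is a toy illustration (the penalty figure is arbitrary):

```python
# Toy illustration: no assignment of utilities to outcomes can mark the act in (1) wrong
# and the act in (2) permissible, since the resulting outcome is the same in both cases.

LIFE_SAVED = 1.0
KILLING_BY_YOU = -1_000_000.0   # arbitrarily large disutility for your freely killing an innocent

def outcome_value(killings_by_you: int, lives_saved: int) -> float:
    return killings_by_you * KILLING_BY_YOU + lives_saved * LIFE_SAVED

value_case_1 = outcome_value(killings_by_you=1, lives_saved=3)  # you kill, which leads to the saving
value_case_2 = outcome_value(killings_by_you=1, lives_saved=3)  # you save, which leads to the killing

assert value_case_1 == value_case_2  # identical, however large the killing penalty is made
```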

Monday, August 28, 2023

Are we finite?

Here’s a valid argument with plausible premises:

  1. A finite being has finite value.

  2. Any being with finite value may be permissibly sacrificed for a sufficiently large finite benefit.

  3. It is wrong to sacrifice a human for any finite benefit.

  4. So, a human has infinite value. (2 and 3)

  5. So, a human is an infinite being. (1 and 4)

That conclusion itself is interesting. But also:

  6. Any purely material being made of a finite amount of matter is a finite being.

  7. If human beings are purely material, they are made of a finite amount of matter.

  8. So, human beings are not purely material. (5, 6 and 7)

I am not sure, all that said, whether I buy (2). I think a deontology might provide a way of denying it.

And, of course, work needs to be done to reconcile (5) with the tradition that holds that all creatures are finite, and only God is infinite. Off-hand, I think one would need to distinguish between senses of being “infinite”. Famously, Augustine said that the numbers are finite because they are contained in the mind of God. There is, thus, an absolute sense of the infinite, where only God is infinite, and anything fully contained in the divine mind is absolutely finite. But surely there is also a sense in which there are infinitely many numbers! So there must be another sense of the infinite, and that might be a sense in which humans might be infinite.

Nor do I really know what it means to say that a human is infinite.

Lots of room for further research if one doesn’t just reject the whole line of thought.

Monday, June 5, 2023

Forced choice and deontology

Suppose the only way to save five innocent people is by killing one innocent person. The deontologist says: you must refrain.

But what if you have a forced choice between killing one and killing five?

How could that be? I know of two ways.

First, a psychological block. Perhaps you are brainwashed into killing, but it’s left to your free will whether you are to choose one victim or five.

Second, you can tie the outcomes to cases where effort is required to maintain the status quo but where at least mental effort is needed to stop maintaining it.

As a little introspective experiment, I held my breath for 20 seconds. It took a mild to moderate amount of effort to do so, increasing towards the end of the time period. As our language of “holding” indicates, holding one’s breath is an action. But at the same time, it was clear at all the times that breathing would also be an action—a deliberate interruption of the holding that would also take a mental and physical effort.

Similarly, imagine you’re holding an extremely heavy suitcase. The on-going holding is an effort. But at the same time, to let the suitcase go would also be an effort: you would need to bend your knees to lower it to the ground, or at least move your fingers to release your grip.

In both the breath and suitcase cases, there is no such thing as refraining from action. Holding is an action and letting go is an action.

Very well, now imagine that an evildoer has set things up as follows. They informed you that if you don’t kill the one innocent, five innocents will die. And then they set up a machine that will shoot the one innocent if you let go of the suitcase in the next thirty seconds. What should you do?

If you hold on for thirty seconds, then your effort will ensure that the five die rather than the one—four more deaths overall. Even if we grant that you are not intending this tragic consequence, it is wrong to act in a way that produces such a consequence. Think about this in terms of Double Effect (I am grateful to one of our grad students for the connection): holding on to the suitcase has an evil consequence that is disproportionate to whatever goods are involved in holding on.

If you let go, however, then only one person will die. This seems better. But if that’s why you let go, then you are letting go in order that that one person’s death should prevent the deaths of the five. And that violates deontology.

Here is a tentative suggestion. Standard deontological principles have an unstated presupposition: refraining from action is possible. If refraining is impossible, the principles apply at best in modified form.

Friday, October 14, 2022

Another thought on consequentializing deontology

One strategy for accounting for deontology while allowing the tools of decision theory to be used is to set such a high disvalue on violations of deontic constraints that we end up having to obey the constraints.

I think this leads to a very implausible consequence. Suppose you shouldn’t violate a deontic constraint to save a million lives. But now imagine you’re in a situation where you need to ϕ to save ten thousand lives, and suppose that the non-deontic-consequence badness of ϕing is negligible as compared to ten thousand lives. Further, you think it’s pretty likely that there is no deontic constraint against ϕing, but you’ve heard that a small number of morally sensitive people think there is. You conclude that there is a 1% chance that there is a deontic constraint against ϕing. If we account for the fact that you shouldn’t violate a deontic constraint to save a million lives by setting a disvalue on violation of deontic constraints greater than the disvalue of a million deaths, then a 1% risk of violating a deontic constraint is worse than ten thousand deaths, and so you shouldn’t ϕ because of the 1% risk of violating a deontic constraint. But this is surely the wrong result. One understands a person of principle refusing to do something that clearly violates a deontic constraint to save lots of lives. But to refuse to do something that has a 99% chance of not violating a deontic constraint to save lots of lives, solely because of that 1% chance of deontic violation, is very implausible.
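For concreteness, here is the back-of-the-envelope expected-disutility calculation behind the worry (the particular disvalue figure is only meant to illustrate “greater than the disvalue of a million deaths”):

```python
# Hypothetical figures; disvalue measured in "lives".
LIFE = 1.0
VIOLATION = 1_000_001 * LIFE         # disvalue of a deontic violation, set above a million deaths

p_violation = 0.01                   # your credence that phi-ing violates a deontic constraint
lives_saved_by_phi = 10_000

expected_loss_if_phi = p_violation * VIOLATION         # about 10,000.01 lives' worth of disvalue
expected_loss_if_not_phi = lives_saved_by_phi * LIFE   # 10,000 lives lost

print(expected_loss_if_phi > expected_loss_if_not_phi)  # True: the consequentialized theory says not to phi,
# which is the implausible verdict: letting ten thousand die because of a mere 1% risk of violation.
```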

While I think this argument is basically correct, it is also puzzling. Why is it that it is so morally awful to knowingly violate a deontic constraint, but a small risk of violation can be tolerated? My guess is it has to do with where deontic constraints come from: they come from the fact that in certain prohibited actions one is setting one’s will against a basic good, like the life of the innocent. In cases where violation is very likely, one simply is setting one’s will against the good. But when it is unlikely, one simply is not.

Objection: The above argument assumes that the disvalue of deaths varies linearly in the number of deaths and that expected utility maximization is the way to go.

Response: Vary the case. Imagine that there is a ticking bomb that has a 99% chance of being defective and a 1% chance of being functional. If it’s functional, then when the timer goes off a million people die. And now suppose that the only way to disarm the bomb is to do something that has a 1% chance of violating a deontic constraint, with the two chances (functionality of the bomb and violation of constraint) being independent. It seems plausible that you should take the 1% risk of violating a deontic constraint to avoid a 1% chance of a million people dying.

Wednesday, April 6, 2022

Consequentialism and probability

Classic utilitarianism holds that the right thing to do is what actually maximizes utility. But:

  1. If the best science says that drug A is better for the patient than drug B, then a doctor does the right thing by prescribing drug A, even if due to unknowable idiosyncrasies of the patient, drug B is actually better for the patient.

  2. Unless generalized Molinism is true, in indeterministic situations there is often no fact of the matter of what would really have happened had you acted otherwise than you did.

  3. In typical cases what maximizes utility is saying what is true, but the right thing to do is to say what one actually thinks, even if that is not the truth.

These suggest that perhaps the right thing to do is the one that is more likely to maximize utility. But that’s mistaken, too. In the following case getting coffee from the machine is more likely to maximize utility.

  4. You know that one of the three coffee machines in the breakroom has been wired to a bomb by a terrorist, but don’t know which one, and you get your morning coffee fix by using one of the three machines at random.

Clearly that is the wrong thing to do, even though there is a 2/3 probability that this coffee machine is just fine and utility is maximized (we suppose) by your drinking coffee.

This, in turn, suggests that the right thing to do is what has the highest expected utility.

But this, too, has a counterexample:

  5. The inquisitor tortures heretics while confident that this maximizes their and others’ chance of getting into heaven.

Whatever we may wish to say about the inquisitor’s culpability, it is clear that he is not doing the right thing.

Perhaps, though, we can say that the inquisitor’s credences are irrational given his evidence, and the expected utilities in determining what is right and wrong need to be calculated according to the credences of the ideal agent who has the same evidence.

This also doesn’t work. First, it could be that a particular inquisitor’s evidence does yield the credences that they actually have—perhaps they have formed their relevant beliefs on the basis of the most reliable testimony they could find, and they were just really epistemically unlucky. Second, suppose that you know that all the coffee machines with serial numbers whose last digit is the same as the quadrillionth digit of π have been rigged to explode. You’ve looked at the coffee machine’s serial number’s last digit, but of course you have no idea what the quadrillionth digit of π is. In fact, the two digits are different. You did the wrong thing by using the coffee machine, even though the ideal agent’s expected utilities given your evidence would say that you did the right thing—for the ideal agent would know a priori what the quadrillionth digit of π is.

So it seems that there really isn’t a good thing for the consequentialist to say about this stuff.

The classic consequentialist might try to dig in their heels and distinguish the right from the praiseworthy, and the wrong from the blameworthy. Perhaps maximizing expected utility is praiseworthy, but an action is right if and only if it actually maximizes utility. But this still has problems with (2), and it still gets the inquisitor wrong, because it implies that the inquisitor is praiseworthy, which is also absurd.

The more I think about it, the more I think that if I were a consequentialist I might want to bite the bullet on the inquisitor cases and say that either the inquisitor is acting rightly or is praiseworthy. But as the non-consequentialist that I am, I think this is a horrible conclusion.

Thursday, March 31, 2022

Deontology and the Spanish Inquisition

  1. If a person acts in a way that would be right if their relevant non-moral beliefs were correct, they are not subject to moral criticism for their action.

  2. If consequentialism or threshold deontology is correct, then inquisitors who tortured heretics for the good of the heretics’ souls acted in ways that would be right if the inquisitors’ relevant non-moral beliefs were correct.

  3. The torture of heretics is subject to moral criticism.

  4. So, neither consequentialism nor threshold deontology is correct.

Let me expand on 2. The inquisitors had the non-moral beliefs that heretics were bound for eternal misery, and that torturing the heretics had a significant chance of turning them to a path leading to eternal bliss and generally increasing the number of people receiving eternal bliss and avoiding eternal misery. If these non-moral beliefs were correct, then the inquisitors would have been acting in a way that maximizes good consequences, and hence in a way that would have been right if consequentialism is true. The same is true on threshold deontology. For while a threshold deontologist has deontic constraints on such things as torturing people for their beliefs, these constraints disappear once the stakes are high enough. And the stakes here are infinitely high: eternal bliss and eternal misery. Infinitely high had better be high enough!

Another way to put the argument is this: If consequentialism or threshold deontology is correct, then the only criticism we can make of the inquisitors is for their non-moral beliefs. And yet surely we should do more than that!

If we are to condemn the inquisitors on moral grounds, we need genuine absolute deontic prohibitions.

Monday, January 24, 2022

Against divine desire theories of duty

On the divine desire version of divine command theory, the right thing to do is what God wants us to do.

But what if God’s desires conflict? God doesn’t want us to commit murder. But suppose a truthful evildoer tells me that if I don’t murder one innocent person, then a thousand persons will be given a choice to murder an innocent person or die. Knowing humanity, I can be quite confident that of that thousand people, a significant number, maybe as many as fifty or more, will opt to murder rather than be murdered. Thus, if I commit murder, God’s desire that there be no murder will be violated by one murder. If I don’t commit murder, God’s desire that there be no murder will be violated by about fifty or more murders. It seems that in this case murder fulfills God’s desires better. And yet murder is wrong.

(Some Christians these days have consequentialist inclinations and may want to accept the conclusion that in this case murder is acceptable. I will assume in this post that they are wrong.)

Perhaps we can say this: Desires should be divided into instrumental and non-instrumental ones, and it is only non-instrumental divine desires that define moral obligations. The fact that by murdering an innocent person I prevent fifty or so murders only gives God an instrumental desire for me to murder that innocent.

But this line of thought is risky. For suppose that God’s reasons for wanting the Israelites to refrain from pork were instrumental. What God really wanted was for the Israelites to have a cultural distinctiveness from other peoples, and refraining from pork served to produce that. On the view that instrumental desires do not produce obligations, it follows that the Israelites had no obligation to refrain from pork, which is wrong.

Perhaps, though, another move is possible. Maybe we should say that in the scenario I gave earlier God knows that his desires will be better served by my committing murder, but he does not want me to do so, whether instrumentally or not. For we need not suppose that whenever a rational being desires y and sees that x is instrumental to y then the rational being desires x. This does indeed get us out of the initial problem.

But we still have a bit of a puzzle. For suppose that someone you love has multiple desires and they cannot all be satisfied. Among that person’s desires, there will be desires concerning what you do and desires concerning other matters. Is it the case that in your love for them, their desires concerning what you do should automatically take precedence over their other desires? No! Suppose Alice and Bob love each other. Now imagine that Bob would really like a certain expensive item that he cannot afford to buy for himself, but that Alice, who is wealthier, can buy for him with only a minor hardship to her. We can now imagine that Bob’s desire that Alice spend no money is weaker than his desire for the expensive item. In that case, surely, given her love for Bob, Alice has good reason to buy the gift for Bob, and it is false that Bob’s desire concerning what Alice does (namely, his desire that she not spend money) should take precedence over Bob’s stronger desire concerning other matters (namely, his desire for the item). It would be a loving thing for Alice, thus, to transgress Bob’s desire that she not spend money.

But presumably God’s desire that I not commit murder is weaker than God’s desire that fifty other people not commit murder. Thus, it seems that committing the murder would exhibit love of God—assuming that God’s desires are all that is at issue, and there are no moral obligations independent of God’s desires. Hence, there is a tension between love for God and obedience to God on the divine desire version of divine command theory. And that’s a tension we should avoid.

Monday, November 15, 2021

Intrinsic evil

Consider this argument:

  1. An action is intrinsically evil if and only if it is wrong to do no matter what.

  2. In doing anything wrong, one does something (at least) prima facie bad with insufficient moral reason.

  3. No matter what, it is wrong to do something prima facie bad with insufficient moral reason.

  4. So in doing anything wrong, one performs an intrinsically evil action.

This conclusion seems mistaken. Lightly slapping a stranger on a bus in the face is wrong, but not intrinsically wrong, because if a malefactor was going to kill everyone on the bus who wasn’t slapped by you, then you should go and slap everybody. Yet the argument would imply that in lightly slapping a stranger on a bus you do something intrinsically wrong, namely slap a stranger with insufficient moral reason. But it seems mistaken to think that in slapping a stranger lightly you perform an intrinsically evil action.

The above argument threatens to eviscerate the traditional Christian distinction between intrinsic and extrinsic evil. What should we say?

Here is a suggestion. Perhaps we should abandon (1) and instead distinguish between reasons why an action is wrong. Intrinsically evil actions are wrong for reasons that do not depend on consideration of consequences and extrinsically evil actions are wrong but not for any reasons that do not depend on consideration of consequences.

Thus, lightly slapping a stranger with insufficient moral reason is extrinsically evil because any reason that makes it wrong is a reason that depends on consideration of consequences. On the other hand, one can completely explain what makes an act of murder wrong without adverting to consequences.

But isn’t the death of the victim a crucial part of the wrongness of murder, and yet a consequence? After all, if the cause of death is murder, then the death is a consequence of the murder. Fortunately we can solve this: the act is no less wrong if the victim does not die. It is the intention of death, not the actuality of death, that is a part of the reasons for wrongness.

So, when we distinguish between acts made wrong by consequences and wrong acts not made wrong by consequences, by “consequences” we do not mean intended consequences, but only actual or foreseen or risked consequences.

But what if Alice slaps Bob with the intention of producing an on-balance bad outcome? That act is wrong for reasons that have nothing to do with actual, foreseen or risked consequences, but only with her intention. Here I think we can bite the bullet: to slap an innocent stranger with the intention of producing an on-balance bad outcome is intrinsically wrong, just as it is intrinsically wrong to slap an innocent stranger with the intention of causing death.

Note that this would show that an intrinsically evil action need not be very evil. A light slap with the intention of producing an on-balance slightly bad outcome is wrong, but not very wrong. (Similarly, the Christian tradition holds that every lie is intrinsically evil, but some lies are only slight wrongs.)

Here is another advantage of running the distinction in this way, given the Jewish and Christian tradition. If an intrinsically evil action is one that is evil independently of consequences, it could be that such an action could still be turned into a permissible one on the basis of circumstantial factors not based in consequences. And God’s commands can be such circumstantial factors. Thus, when God commands Abraham to kill Isaac, the killing of Isaac becomes right not because of any new consequences, but because of the circumstance of God commanding the killing.

Could we maybe narrow down the scope of intrinsically evil actions even more, by saying that not just consequences, but circumstances in general, aren’t supposed to be among the reasons for wrongness? But if we do that, then most paradigm cases of intrinsically evil actions will fail: for instance, that the victim of a murder is innocent is a circumstance (it is not a part of the agent’s intention).

Monday, April 19, 2021

Desires for another's action

Suppose that Alice is a morally upright officer fighting against an unjust aggressor in a bloody war. The aggressor’s murderous acts include continual slaughter of children. Alice has sent Bob on a mission behind enemy lines. Bob’s last message said that Bob has found a way to end the war. The enemy has been led to war by a regent representing a three-year-old king. If the three-year-old were to die, the crown would pass to a peace-loving older cousin who would immediately end the war. And Bob has just found a way to kill the toddler king. Moreover, he can do it in such a way that it looks like it is a death of natural causes and will not lead to vengeful enemy action.

Alice responds to the message by saying that the child-king is an innocent noncombatant and that she forbids killing him as that would be murder. It seems that Alice now has two incompatible desires:

  • that Bob will do the right thing by refraining from murdering the child, and

  • that Bob will assassinate the child king, thereby preventing much slaughter, including of children.

And there is a sense in which Alice wants the assassination more than she wants Bob to do the right thing. For what makes the assassination undesirable—the murder of a child—occurs in greater numbers in the no-assassination scenario.

But in another sense, it was the desire to have Bob do the right thing that was greater. For that was the desire that guided Alice’s action of forbidding the assassination.

What should we say?

Here is a suggestion: Alice desires that Bob do the right thing, but Alice wishes that Bob would assassinate the king. What Alice desires and what Alice wishes for are in this case in conflict.

And here is a related question. Suppose someone you care about wants you to do one thing but wishes you to do another. Which should you do?

In the above case, the answer is given by morality: assassinating the three-year-old king is wrong, no matter the consequences. And considerations of authority concur. But what if we bracket morality and authority, and simply ask what Bob should do insofar as he cares about Alice who is his friend. Should he follow Alice’s desires or her wishes? I think this is not so clear. On the one hand, it seems more respectful to follow someone’s desires. On the other hand, it seems more beneficent to follow someone’s wishes.

Thursday, February 18, 2021

Moral risk

Say that an action is deontologically doubtful (DD) provided that the probability of the action being forbidden by the correct deontology is significant but less than 1/2.

There are cases where we clearly should not risk performing a DD action. A clear example is when you’re hunting and you see a shape that has a 40% chance of being human: you should not shoot. But notice that in this case, deontology need play no role: expected-utility reasoning tells you that you shouldn’t shoot.

There are, on the other hand, cases where you should take a significant risk of performing a DD action.

Beast Case: The shape in the distance has a 30% chance of being human and a 70% chance of being a beast that is going to devour a dozen people in your village if not shot by you right now. In that case, it seems it might well be permissible to shoot.

This suggests this principle:

  1. If a DD action has significantly higher expected utility than refraining from the action, it is permissible to perform it.

But this is false. I will assume here the standard deontological claim that it is wrong to shoot one innocent to save two.

Villain Case: You are hunting and you see a dark shape in the woods. The shape has a 40% chance of being an innocent human and a 60% chance of being a log. A villain who is with you has just instructed a minion to go and check in a minute on the identity of the shape. If the shape turns out to be a human, the minion is to murder two innocents. You can’t kill the villain or the minion, as they have bulletproof jackets.

The expected utility of shooting is significantly higher than of refraining from the action. If you shoot, the expected lives lost are (0.4)(1)=0.4, and if you don’t shoot the expected lives lost are (0.4)(2)=0.8. So shooting has an expected utility that’s 0.4 lives better than not shooting. But it is also clear, assuming the deontological claim that it is wrong to kill one to save two, that it is wrong to shoot in this case.

What is different between the Villain Case and the Beast Case is that in the Villain Case, the difference in expected utilities comes precisely from the scenario where the shape is human. Intuition suggests we should tweak (1) to evaluate expected utilities in a way that ignores the good effects of deontologically forbidden things. This tweak does not affect the Beast Case, but it does affect the Villain Case, where the difference in utilities came precisely from counting the life-saving benefits of killing the human.

I don’t know how to precisely formulate the tweaked version of (1), and I don’t know if it is sufficiently strong to cover all cases.
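For what it’s worth, here is a numerical sketch of the two cases, together with one merely illustrative way of running the tweak (namely: the expected-utility advantage must survive once we ignore benefits that flow through the possibly forbidden killing):

```python
# Numbers are from the text; the "tweak" commented on below is only one illustrative formulation.

# Beast Case: 30% chance the shape is human, 70% chance it is a beast that will
# kill a dozen villagers unless shot right now.
beast_shoot   = 0.3 * 1    # expected innocent deaths if you shoot:   0.3
beast_refrain = 0.7 * 12   # expected innocent deaths if you refrain: 8.4
# The advantage of shooting (about 8.1 expected lives) comes from the beast branch,
# not from any forbidden killing, so the tweak leaves this verdict intact.

# Villain Case: 40% chance the shape is an innocent human; if it is human and you
# refrain, the minion murders two innocents.
villain_shoot   = 0.4 * 1  # expected innocent deaths if you shoot:   0.4
villain_refrain = 0.4 * 2  # expected innocent deaths if you refrain: 0.8
# The 0.4-life advantage of shooting flows entirely through the branch in which you
# kill the human, i.e. through the life-saving benefit of a forbidden killing. Ignoring
# benefits that flow through the forbidden killing, shooting has no remaining
# expected-utility advantage, so the tweaked principle does not sanction it.
```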

Tuesday, June 30, 2020

Do promises sometimes make otherwise wrong actions permissible?

Consider a variant of my teenage Hitler case. You’re a hospital anesthetist and teenage Hitler is about to have an emergency appendectomy. The only anesthetic you have available is one that requires a neutralizer to take the patient out of anesthesia—without the neutralizer, the patient dies. You know (an oracle told you) that if teenage Hitler survives, he’ll kill millions. And you’re the only person who knows how to apply anesthesia or the neutralizer in this town.

You’re now asked to apply anesthesia. You have two options: apply or refuse. If you refuse, the surgeon will perform the appendectomy without anesthesia, causing excruciating pain to a (still) innocent teenager, who will still go on to kill millions. Nobody benefits from your refusal.

But if you apply anesthesia, you will put yourself in a very awkward moral position. Here is why. Once the surgery is over, standard practice will be to apply the neutralizer. But the Principle of Double Effect (PDE) will forbid you from applying the neutralizer. For applying the neutralizer is an action that has two effects: the good effect of saving teenage Hitler’s life and the evil effect of millions dying. PDE allows you to do actions that have a foreseen evil effect only when the evil effect is not disproportionate to the good effect. But here the evil effect is disproportionate. So, PDE forbids application of the neutralizer. Thus if you know yourself to be a morally upright person, you also know that if you apply the anesthesia, you will later refuse to apply the neutralizer. But surely it is wrong to apply the anesthesia to an innocent teenager while expecting not to apply the neutralizer. For instance, it would be clearly wrong to apply the anesthesia if one were out of neutralizer.

So, it seems you need to refuse to apply anesthesia. But your reasons for the refusal will be very odd: you must refuse to apply anesthesia, because it would be morally wrong for you to neutralize the anesthesia, even though no one is worse off—and at least one person is better off—in the scenario where you apply anesthesia and neutralize it than in the scenario where the operation happens without anesthesia. To make the puzzle even sharper, we can suppose that if teenage Hitler has the operation without anesthesia, he will blame you for the pain, and eventually add your ethnic group—which otherwise he would have no prejudice against—to his death lists. So your refusal to apply anesthesia not only causes pain to an innocent teenager but causes many deaths.

The logical structure here is this: If you do A, you will be forbidden from doing B. But you are not permitted to do A if you expect not to do B. And some are much better off and no one is worse off if you do both A and B than if you do neither.

Here is a much more moderate case that seems to have a similar structure. Bob credibly threatens to break all of Carl’s house windows unless Alice breaks one of Carl’s windows. It seems that it would be right for Alice to break the window since any reasonable person would choose to have one window broken rather than all of them. But suppose instead Bob threatens to break all of Carl’s windows unless Alice promises to break one of Carl’s windows tomorrow. And Alice knows that by tomorrow Bob will be in jail. Alice knows that if she makes the promise, she would do wrong to keep it, for Carl’s presumed permission of one window being broken to save the other windows would not extend to the pointless window-breaking tomorrow. And one shouldn’t make a promise one is planning not to keep (bracketing extreme cases, which this is not one of). So Alice shouldn’t make the promise. But no one would be worse off if Alice made the promise and kept it.

I wonder if there isn’t a way out of both puzzles, namely to suppose that in some cases a promise makes permissible something that would not otherwise be permissible. Thus, it would normally be wrong to apply the neutralizer to teenage Hitler. But if you promised to do so (e.g., implicitly when you agree to perform your ordinary medical duties at the hospital, or explicitly when you reassured his mom that you’ll bring him out of anesthesia), then it becomes permissible, despite the fact that many would die if you kept the promise. Similarly, if Alice promised Bob to break the window, it could become permissible to do so. Of course, we better not say in general that promises make permissible things that would otherwise be impermissible.

The principle here could be roughly something like this:

  1. If it would be permissible for you to now intentionally ensure that a state of affairs F occurs at a later time t, then it is permissible for you to promise to bring about F at t and then to do so if no relevant difference in the circumstances occurs.

Consider how (1) applies to the teenage Hitler and window-breaking cases.

It would be permissible for you to set up a machine that would automatically neutralize Hitler’s anesthesia at the end of the operation, and then to administer anesthesia. Thus, it is now—i.e., prior to your administering the anesthesia—permissible for you to ensure that Hitler’s anesthesia will be neutralized. Hence, by (1) it is permissible for you to promise to neutralize the anesthesia and then to keep the promise, barring some relevant change in the circumstances.

Similarly, it would be permissible for you to throw a rock at Carl’s window from very far away (out in space, say) so that it would only reach the window tomorrow. So, by (1) it is permissible for you to promise to break the window tomorrow and then to keep the promise.

On the other hand, take the case where an evildoer asks you to promise to kill an innocent tomorrow or else she’ll kill ten today, and suppose that tomorrow the evildoer will be in jail and unable to check up on what you did. It would be wrong for you to now intentionally ensure the innocent dies tomorrow, so (1) does not apply and does not give you permission to make and keep the promise. (Some people will think it’s OK to make and break this promise. But no one thinks it’s OK to make and keep this promise.)

Principle (1) seems really ad hoc. But perhaps this impression is reduced when we think of promises as a way of projecting our activity forward in time. Principle (1) basically says that if it would be permissible to project our activity forward in time by making a robot—or by self-hypnosis—then we should be able to accomplish something similar by a promise.

The above is reminiscent of cases where you promise to ignore someone’s releasing you from a promise. For instance, Alice, a staunch promoter of environmental causes, lends Bob a large sum of money, on the condition of Bob making the following promise: Bob will give the money back in ten years, unless Alice’s ideals shift away from environmentalism in which case he will give it to the Sierra Fund, notwithstanding any pleas to the contrary from Alice. The current context—Alice’s requirements at borrowing time—becomes normative at the time for the promise to be kept, notwithstanding some feared changes.

I am far from confident of (1). But it would let one escape the unhappy position of saying that in cases with the above structure one is required to let the worst happen. I expect there are counterexamples to (1), too. But perhaps (1) is true ceteris paribus.

Friday, May 22, 2020

Lying to save lives

I’m imagining a conversation between Alice, who thinks it is permissible to lie to Nazis to protect innocents, and a Nazi. Alice has just lied to the Nazi to protect innocents hiding in her house. The Nazi then asks her: “Do you think it is permissible to lie to protect innocents from people like me?” If Alice says “Yes”, the Nazi will discount her statement, search her house and find innocents. So, she has to say “No.” But then the Nazi goes on to ask: “Why not? Isn’t life more important than truth? And I know that you think me an unjust aggressor (no, don’t deny it, I know you know it, but I’m not going to get you just for that).” And now Alice has to either cave and say that she does think it permissible to lie to unjust aggressors, in which case the game is up, and the innocents will die, or she has to exercise her philosophical mind to find the best arguments she can for a moral conclusion that she believes to be perverse. The latter seems really bad.

Or imagine that Alice thinks that the only way she will convince the Nazi that she is telling the truth in her initial lie is by adding lies about how much she appreciates the Nazi’s fearless work against Jews. That also seems really wrong to me.

Or imagine that Alice’s non-Nazi friend Bob can’t keep secrets and asks her if she is hiding any Jews. Moreover, Alice knows that Bob knows that Alice fearlessly does what she thinks is right. And so Bob will conclude that Alice is hiding Jews unless he thinks Alice believes Jews deserve death. And if Bob comes to believe that Alice is hiding Jews, the game will be up through no fault of Bob’s, since Bob can’t keep secrets. Now it looks like the only way Alice can keep the innocents she is hiding safe is by advocating genocide to Bob.

It is very intuitive that a Nazi at the door doesn’t deserve the truth about who is living in that house. And yet at the same time, it seems like everyone deserves the truth about what is right and wrong. But at the same time, it is difficult to limit a permission of lying to the former kinds of cases. There is a slippery slope here, with two stable positions: an absolutist prohibition on lying and a consequentialist calculus. An in-between position will be difficult to specify and defend.

Monday, April 1, 2019

The infinite disvalue strategy for modeling deontological constraints

A standard way to handle deontological constraints is to simply specify an infinite disvalue for breaking the constraints. Thus, you shouldn’t kill an innocent person to save ten innocents, because the disvalue of your murdering the one is infinitely greater than the finite value of the lives of the ten.

A standard response to this is to imagine cases where one deontological violation prevents multiple similar deontological violations. That cannot be handled by disvalue, since the multiple violations should have greater cumulative disvalue than the single violation. However, such cases may seem contrived.

But I just realized recently that they need not be contrived. In fact, the standard strategic bombing of civilian targets cases may be like that. These cases—which were probably exemplified by the bombings of Hiroshima, Nagasaki and Dresden—are usually described as cases where it is expected (or at least hoped) that the deaths of the innocents will persuade the enemy to surrender and stop a greater number of deaths.

However, in cases—like the World War II cases—where one is fighting an unjust aggressor, killings performed by the enemy (whether the victims are civilians or military personnel) are typically murders. Thus such cases may very well be cases where a smaller number of murders—committed by means of bombing—by one’s side prevents a greater number of murders committed by the other side. Thus, we have historical cases, or at least cases very close to historical cases, where a smaller number of immoral acts is thought to prevent a greater number of immoral acts of the same kind. And hence we have uncontrived cases where the disvalue strategy for modeling deontological constraints fails.

Tuesday, January 15, 2019

Truth, life and deontology

Absolutists who think that lying is wrong even to save a life are sometimes accused of thinking truth to be more valuable than life. Whether or not absolutism about lying is true (I think it is), the accusation is a misunderstanding of the structure of deontological prohibitions. For it would be silly to suppose that a deontologist who thinks it’s wrong to kill one innocent in order to save ten thinks that one life is more valuable than ten! If there is a mistake in deontology, it is not the mistake of thinking 1 > 10.

Deontology and future hypothetical wrongs

Molinism makes possible a curious kind of moral dilemma. God could reveal to Alice that if Alice doesn’t kill Bob today, she will kill Carl and David tomorrow (all these being innocents), and if she does kill Bob today, she won’t kill anyone tomorrow. Should she, thus, kill Bob today in order to prevent herself from murdering Carl and David tomorrow?

One might think that the possibility that Molinism allows for such a moral dilemma is a count against Molinism. But even without Molinism, one could have a probabilistic version of the dilemma where God reveals to Alice that if she doesn’t kill Bob today, she is very likely to kill both Carl and David tomorrow, and if she does kill Bob today, she is very unlikely to kill anyone tomorrow.

One way to make consequentialism fit with deontological intuitions is to set a high, perhaps infinite, disvalue on wrong action. That would imply that in the dilemma Alice should kill Bob in order to prevent the two murders tomorrow.

I think this is a mistake. Just as on deontological grounds it would be wrong for Alice to murder Bob to keep Eva from murdering Carl and David, so too it’s wrong for her to murder Bob to keep herself from murdering them. A eudaimonist may disagree here, holding that we should be promoting our own flourishing, so that when the choice is between committing two murders tomorrow and one today, we should go for the one today, but when the choice is between oneself committing one murder and another party committing two, we should let the other party commit the two. So much the worse, I say, for that kind of eudaimonist.

What makes it wrong for Alice to murder Bob is that we shouldn’t perform bad acts. It’s not that we should minimize the number of bad acts performed, by others or oneself, but that we shouldn’t perform them. Of course, all other things being equal we should minimize the number of bad acts performed, by others and oneself, but a bad act is an act not to be done. And the lesson of deontology is that certain acts, such as intentionally killing without proper authority, are bad acts in virtue of their nature.

But isn’t killing Bob today the lesser evil?

Yet imagine Alice is debating whether she should eat ice cream, with its having been revealed to her that if she eats ice cream today, tomorrow she will kill Bob, and if she does not, then tomorrow she will kill Carl and David. In that case, it is clear: she should eat the ice cream. For the eating of ice cream isn’t the sort of act that is bad in virtue of its nature (unless a very strong form of moral veganism is true). Note, however, that if she eats the ice cream today, then her killing of Bob tomorrow is still wrong. (If you disagree, it may be simply because you disagree with Molinism, and you hold that the inevitability of her killing Bob takes away her freedom; if you think that, then go for a probabilistic version of the story.) This is true even though it is a lesser evil than her killing Carl and David.

In the original case, we can look at Alice doing two things when killing Bob:

  1. Killing Bob

  2. Bringing it about that she doesn’t kill Carl and David.

Her action is bad qua (1) and good qua (2). But we learn from Aquinas that for an action to be right, it must be right in every respect. So her action is wrong simpliciter.

On the other hand, in the ice cream version, in consuming the ice cream, Alice is doing two things:

  3. Eating ice cream

  4. Bringing it about that she doesn’t kill Carl and David.

Now her action is good or neutral qua (3) and good qua (4). In fact, it’s right in every respect. But her later killing of Bob is still wrong.

Wednesday, August 22, 2018

Dentistry, deontology, Double Effect and hypnosis

You are a dentist and a teenage Hitler comes to you to have a bad tooth removed. You only have available an anaesthetic with this feature: Within eight hours of the start of anaesthesia, a neutralizer must be given, otherwise the patient dies. This is not a problem: the extraction will only take an hour.

You remove the tooth, and are about to administer the neutralizer when you learn that if Hitler survives, he will kill tens of millions of people. And now it seems you face the question whether to save the life of a person who will kill millions if saved. You apply the Principle of Double Effect and check whether the conditions are satisfied:

  • Your end is good: Yup, saving the life of an innocent teenager.

  • The action is good or neutral in itself: Yes, administering a neutralizer.

  • The foreseen evils are not intended by you either as a means or as an end: Yes, you do not intend the deaths either as an end or as a means.

  • The foreseen evil is not disproportionate to the intended good: Ah, here is the rub. How can the deaths of tens of millions not be disproportionate to the saving of the life of one?

So it seems that the Principle of Double Effect forbids you to administer the neutralizer, and you must allow Hitler to die. In so doing, you will be violating your professional code of ethics, and you will no doubt have to resign from the dental profession. But at least you won’t have done something that would cover the world with blood.

This is still counterintuitive to me. It feels wrong for a medical professional to deliberately stop mid-procedure in this way.

One can try to soften the worry by thinking of other cases. Suppose that the neutralizer bottle has been linked by a terrorist to a bomb a mile away, so that picking up the bottle will result in the death of dozens of people. In that case it is clearly wrong for the dentist to complete the operation. But the Hitler case still feels different, because it is the very survival of Hitler that one doesn’t want to happen. It is a bit more like a case where the terrorist informs you that if the patient survives the procedure, the terrorist will kill many innocents. I still think that in that case you shouldn’t finish the procedure. But it’s a tough case.

Suppose you are with me so far. Now, here is a twist. You learn of Hitler’s future murders prior to the start of the procedure. You are the only dentist around. Should you perform the procedure?

Here are four possible courses of action:

  1. You do nothing. The teenage Hitler suffers toothache for many a day, and then later on kills tens of millions.

  2. You perform the extraction without anaesthesia. The teenage Hitler suffers excruciating pain, and then later on kills tens of millions.

  3. You perform the procedure, including both anaesthesia and neutralizer. The teenage Hitler’s pain is relieved, but then later on he kills tens of millions.

  4. You administer the anaesthesia, remove the bad tooth, and stop there. The teenage Hitler dies, but the world is a far better place.

Assume for simplicity that it is the same tens of millions who die in cases 1, 2 and 3.

So, now, which course of action should you intend to embark on? Option 4, while consequentialistically best, is not acceptable given correct deontology (if you are a consequentialist, the rest won’t be very interesting to you). For if you intend to go for Option 4, you will do so in order to kill Hitler by administering the anaesthesia while planning not to administer the neutralizer. And that’s wrong, because he is a juridically innocent teenager.

Option 3 seems clearly morally superior to Options 1 and 2. After all, one innocent person—the teenage Hitler—is better off in Option 3, and nobody is worse off there.

But you cannot morally go through with Option 3. For as soon as you’ve applied the anaesthesia, the Double Effect reasoning we went through above would prohibit you from applying the neutralizer. So Option 3 is not available to you if you expect to continue to act morally, because if you continue to act morally, you will be unable to administer the neutralizer.

What should you do? If you had a time-delay neutralizer, that would be the morally upright solution. You give the time-delay neutralizer, administer anaesthesia, remove the bad tooth, and you’re done. Tens of millions still die, but at least this innocent teenager won’t be suffering. It seems a little paradoxical that Option 3 is morally impossible, but if you tweak the order of the procedures by using a time-delay, you get things right. But there really is a difference between the time-delay case and Option 3. In Option 3, your administering the neutralizer kills tens of millions. But administering the time-delay neutralizer prior to the procedure doesn’t counterfactually result in the deaths of tens of millions, because had you not administered the time-delay neutralizer, you wouldn’t then administer the anaesthesia (Option 2) or you wouldn’t then perform the procedure at all (Option 1), and so tens of millions would still die.

Here is another interesting option. Suppose you could get yourself hypnotized so that as soon as the tooth is removed, you just find yourself administering the neutralizer with no choice on your part. That, I think, would be just like the time-delay neutralizer, and thus it seems permissible. But on the other hand, it seems that it is wrong to get yourself hypnotized to involuntarily do something that it would be wrong to do voluntarily, and to administer to Hitler the neutralizer after the anaesthesia is something that it would be wrong to do voluntarily. Perhaps, though, it is always wrong to get yourself hypnotized with the intention of taking away your freedom of choice (maybe that’s a failure of respect for oneself)? Or maybe it is sometimes permissible to hypnotize yourself to involuntarily do something that it would be wrong to voluntarily do. (Here is a case that seems acceptable. You hypnotize yourself to involuntarily say: “I am now speaking involuntarily.” It would be a lie to say that voluntarily!)