Showing posts with label double effect. Show all posts

Thursday, August 1, 2024

Double effect and causal remoteness

I think some people feel that more immediate effects count for more than more remote ones in moral choices, including in the context of the Principle of Double Effect. I used to think this was wrong, as long as the probabilities of the effects are the same (typically more remote effects are more uncertain, but we can easily imagine cases where this is not so). But then I thought of two strange trolley cases.

In both cases, the trolley is heading for a track with Fluffy the cat asleep on it. The trolley can be redirected to a second track on which an innocent human is sleeping. Moreover, in a nearby hospital there are five people who will die if they do not receive a simple medical treatment. There is only one surgeon available.

But now we have two cases:

  1. All five people love Fluffy very much and have specified that they consent to life-saving treatment if and only if Fluffy is alive. The surgeon refuses to perform surgery that the patients have not consented to.

  2. The surgeon loves Fluffy and after hearing of the situation has informed you that they will perform surgery if and only if Fluffy is alive.

In both cases, I am rather uncomfortable with the idea of redirecting the trolley. But if we don’t take immediacy into account, both cases seem straightforward applications of Double Effect. The intention in both cases is to save five human lives by saving Fluffy, with the death of the person on the second track being an unintended side-effect. Proportionality between the good and the bad effects seems indisputable.

However, in both cases, redirecting the trolley leads much more directly to the death of the one person than to the saving of the five. The causal chain from redirection to life-saving in both cases is mediated by the surgeon’s choice to perform surgery. (In Case 1, the surgeon is reasonable and in Case 2, the surgeon is unreasonable.) So perhaps in considerations of proportionality, the more immediate but smaller bad effect (the death of the person on the side-track) outweighs the more remote but larger good effect (the saving of the five).

I can feel the pull of this. Here is a test. Suppose we make the death of the sixth innocent person equally indirect, by supposing instead that Rover the dog is on the second track, and is connected to someone’s survival in the way that Fluffy is connected to the survival of the five. In that case, it seems pretty plausible that you should redirect. (Though I am not completely certain, because I worry that in redirecting the trolley even in this case you are unduly cooperating with immoral people—the five people who care more about a cat than about their own human dignity, or the crazy surgeon.)

If this is right, how do we measure the remoteness of causal chains? Is it the number of independent free choices that have to be made, perhaps? That doesn’t seem quite right. Suppose that we have a trolley heading towards Alice who is tied to the track, and we can redirect the trolley towards Bob. Alice is a surgeon needed to save ten people. Bob is a surgeon needed to save one. However, Alice works in a hospital that has vastly more red tape, and hence for her to save the ten people, thirty times as many people need to sign off on the paperwork. But in both cases the probabilities of success (including the signing off on the paperwork) are the same. In this case, maybe we should ignore the red tape, and redirect?

So the measure of the remoteness of causal chains is going to have to be quite complex.
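The weighting idea can be put in a toy model. This is purely my illustrative sketch, not anything from the post: the discount factor and the "depth" measure (number of mediating events, such as independent free choices) are hypothetical stand-ins for whatever the right measure of causal remoteness turns out to be.

```python
# A toy sketch of remoteness-discounted proportionality.
# Each effect has a moral magnitude and a causal depth: the number of
# mediating events (e.g., independent free choices) between the act
# and the effect. The discount factor is purely hypothetical.

def weighted_value(magnitude, depth, discount=0.5):
    """Discount an effect's magnitude by its causal remoteness."""
    return magnitude * (discount ** depth)

def proportionate(effects, discount=0.5):
    """Sum remoteness-weighted values; positive means the good outweighs the bad."""
    return sum(weighted_value(m, d, discount) for m, d in effects) > 0

# Fluffy case: saving five lives (+5) is mediated by the surgeon's choice
# (depth 1); the death on the side track (-1) is immediate (depth 0).
fluffy_case = [(5, 1), (-1, 0)]

# Undiscounted (discount = 1), redirecting looks clearly proportionate...
print(proportionate(fluffy_case, discount=1.0))   # True
# ...but with a steep enough hypothetical discount, it no longer does:
# 5 * 0.15 - 1 = -0.25, so the weighted sum is negative.
print(proportionate(fluffy_case, discount=0.15))  # False
```

On this sketch the Rover variant comes out differently: both the good and the bad effects sit at depth 1, so the discount cancels and the five outweigh the one at any discount rate, which matches the intuition that redirecting is then plausible.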

All this confirms my conviction that the proportionality condition in Double Effect is much more complex than it initially seems.

Wednesday, May 24, 2023

The five-five trolley

The standard trolley case is where a trolley is heading to a track with five people, and you can redirect it to a track with one person. It seems permissible to do so.

But now imagine that a trolley is heading to a track with five people, and you can redirect it to another track also with five people. Why would you bother? Well, suppose that you enjoy turning the steering wheel on the trolley, and you reason that there is no overall harm in your redirecting the trolley.

This seems callous.

Yet we are in cases like the five-five trolley all the time. By the butterfly effect, many minor actions of ours affect the timings of human mating (you have a short conversation with someone as they are leaving work; this affects traffic patterns, and changes the timing of sexual acts for a number of people in the traffic), which then changes which sperm reaches an ovum, and hence affects which human beings exist in the next generation, and the changes balloon, and pretty soon there are major differences as to who is in the path of a hurricane, and so on.

But of course there is still a difference between the five-five trolley and the butterfly effect cases. In the five-five trolley, you know some of the details of the effects of your action: you know that these five will die if you don’t redirect and those five if you do. But note that these details are not much. You still may not know any of the ten people from Adam. In the butterfly effect cases, you can say a fair amount about the sort of effects your minor action has, but not much more than that.

What’s going on? I am inclined to think that here we should invoke something about the symbolic meaning of one’s actions. In the case where one turns the steering wheel on the trolley for fun, while knowing epistemically close effects, one exhibits a callous disregard for the sanctity of human life. But when one has a conversation with someone after work, given the epistemic distance, one does not exhibit the same callous disregard.

It is not surprising if callousness and regard for sacredness should depend on fine details of epistemic and other distance. Think of the phenomenon of jokes that come “too soon” after a terrible event: they show a callous disregard for evil. But similar jokes about temporally, personally and/or epistemically distant events may be acceptable.

Saturday, November 23, 2019

Intending the end without intending the known means

It is said that:

  • he who intends the end intends the means.

But the person who doesn’t know how a computer keyboard works does not intend to close circuits when writing an email, even though the closing of circuits is the means to writing emails.

Perhaps, though, when they learn how computer keyboards work, that might change their intention, so that now they intend to close circuits whenever they intentionally type a character? But that is psychologically implausible: the activity of the practical intellect involved in typing is normally unchanged by learning how a keyboard works. (There are, of course, special circumstances where it may change. For instance, if one knows that one is near some very delicate electrical equipment whose functioning could be affected by the closing of these circuits, then one’s deliberation might change.) The knowledge of what happens in typing remains merely non-occurrent knowledge, not affecting the activity of the practical intellect or the will.

One might think, though, that if one occurrently knows that the means to typing an email is the closing of circuits, one is intending to close circuits. But even this need not be true. For instance, a person who is writing a technical article on how keyboards work may well be occurrently knowing that their movements are transformed into data in computer memory by means of closing electrical circuits, but this occurrent knowledge may very well still not affect either their practical intellect or their will. (Indeed, when I wrote the opening paragraph of this post, I no doubt occurrently knew how keyboards work, but I don’t think this affected my intentions.)

For one’s knowledge to affect one’s intentions it needs to enter into the deliberation. For that, it needs to be occurrently and practically taken by the agent as practically relevant. For most people under most circumstances, that computer keyboards work by closing circuits is not practically relevant. But if it is Sabbath and one is an Orthodox Jew who believes that closing circuits is forbidden on the Sabbath, then the knowledge is apt to be taken as practically relevant: if one still types, that is apt to become an act of rebellion or of akrasia, and if one refrains from typing, that act is apt to be done as a mitzvah. However, one could imagine the sad case of such an Orthodox Jew who types on the Sabbath anyway, and eventually becomes so calloused that the fact that circuits are being closed stops entering into deliberation, though the fact is still known by the theoretical intellect. Such a person’s intentions may eventually drift to those of the typical gentile.

So, what is one to say about the principle that he who intends the end intends the means? There is of course a trivial version:

  • he who intends the end intends the intended means.

Maybe we can do a little better:

  • in intending the end one intends the means insofar as they enter into deliberation.

I am not sure this is right, but it’s the best I can do right now.

Note an interesting thing. If this last version is right, then the means may enter into deliberation on the opposite side, against the action. For instance, if one thinks it’s forbidden to close electrical circuits on the Sabbath, but one chooses to do so, that the means involve the closing of electrical circuits is apt to enter into deliberation on the con side of typing, not on the pro side (unless one is positively rebellious).

Friday, November 22, 2019

Foresight and intention

The Principle of Double Effect controversially teaches that when an effect is bad, it is typically worse to intend it than to merely foresee it. I think it's interesting and somewhat refreshing to reflect on reverse cases, where we distinguish between intending and foreseeing a good effect. Obviously it's better if a legislator intends rather than merely foresees that the legislation furthers the public good. Here it's clear that the foresight-intention distinction captures something important.

Thursday, May 16, 2019

Analogies to ectopic pregnancy

The standard Catholic view of tubal pregnancy is that it is permissible to remove the tube with the child. The idea seems to be that the danger to the mother comes from the potential rupture of the tube, and hence removal of the tube is removal of that which poses the danger, and the death of the child is a non-intended side-effect, with the action justified by double effect. I’ve always been queasy about this reasoning, but I now have two related analogies that make me feel better about this.

Case 1: There are two astronauts on a spaceship, with no oxygen left in the air. The astronauts are wearing spacesuits with oxygen tanks. The oxygen tanks are sufficient for the astronauts to survive until they get home: 50% of the oxygen can be expected to be used up before getting home. However, one of the tanks is rigged by a malefactor with an explosive device such that if more than 20% of the oxygen is used, it will explode, killing both astronauts. The astronaut wearing that particular spacesuit is unconscious and cannot be consulted. It is not feasible to disarm the bomb or to swap tanks. The conscious astronaut removes the explosive tank from the other astronaut’s space suit and throws it into space, knowing that this will result in the unconscious astronaut dying from lack of oxygen. The intention, however, is to remove the item that will dangerously rupture if it is left in place. It is not the intention to kill the other astronaut. This is true even though it is the other astronaut’s breathing that would trigger the tank’s explosion.

The proximate source of the danger is the oxygen tank. But the more distant source is the breathing. It seems very plausible that it makes a moral difference whether the conscious astronaut shoots the unconscious astronaut to stop their breathing (wrong) or removes their tank to expel the danger (right action). This seems a legitimate case of double effect reasoning.

Case 2: Much as in Case 1, but (a) there is intense radiation outside the spaceship’s shielding, so that getting pushed into space even while wearing a spacesuit will be fatal, and (b) there is no way to separate the tank from the astronaut. Thus, the conscious astronaut picks up the explosive tank and throws it far into space. The tank is connected to the unconscious astronaut, so the unconscious astronaut flies out with the tank, and is killed by radiation. The tank never explodes, because the oxygen doesn't get depleted.

Again, this seems a perfectly legitimate case of double effect reasoning.

What about the alternative of removing the child from the tube, which orthodox Catholic ethicists tend to reject (unless done in the hope of reattaching the child in the correct place)? Well, the child is connected to the tube via a placenta. The placenta is to a large degree an organ of the child. As I understand it, removal of the child from the tube would require intentionally cutting the placenta, in a way that is fatal to the child. This directly fatal intervention seems akin to slicing the astronaut to remove them from the suit. This seems harder to justify.

Wednesday, January 18, 2012

"He who intends the end intends..."

It is a classic maxim that:

  1. He who intends the end intends the means.

Here is a problem. I take a pill to relieve a headache. Unbeknownst to me, the pill relieves the headache by means of numbing certain pain receptors I know nothing about. Plainly, I don't intend to numb these pain receptors, since I don't know anything about them. So I intend the end but don't intend the means.

One might weaken (1):

  2. He who intends the end intends the known means.

This also doesn't work. Suppose I have always taken a pill to relieve a headache. My reasoning has always been: "This pill relieves headaches and has few side-effects. I have good reason to relieve my headache. So I will take this pill." At a certain age, I learned how that pill works. But my knowledge of how that pill works in no way affected my practical reasoning, since it didn't undercut any part of the practical syllogism I employed. But intention is a matter of practical reasoning, so my newly gained knowledge did not affect my intentions. Alternate argument: intentions are explanatory of action, but the knowledge of how that pill works did not change the explanation of my actions, so it did not change my intentions.

Moreover, there are cases where two causal pathways are known to causally contribute to an end, but only one is intended. For instance, take the classic case of bombing the enemy HQ in order to end the war sooner, while accepting that civilians on the streets around the HQ will die. Suppose, for instance, that one expects that the destruction of the enemy HQ in itself hastens the end of the war by a month, but that the deaths of the civilians are expected to hasten the end of the war by another month. The bombing can still be legitimate, as long as one only intends the first of these two means. In fact, it can still be legitimate even if the deaths of the civilians are a greater effect. Imagine that one is planning to bomb the enemy HQ because it hastens the end of the war by a month and one has prudently decided that the proportionality condition in the Principle of Double Effect holds. An analyst then announces that the deaths of the civilians will hasten the end of the war by another two months. Surely the analyst's announcement shouldn't stop one from bombing.

Now the last case may seem a bit unfair. We might say: there are two causal pathways to hastening the end of the war, but only one of them is the means to it. But if we say that, then by "means" we mean "intended means" and (1) becomes:

  3. He who intends the end intends the intended means.

But this is trivial if by "the intended means" we mean "all the intended means" and dubious if we mean "the one and only intended means", since there may be several intended means in an action.

I suggest a very simple alternative repair to (1). Just replace a definite article by an indefinite one:

  4. He who intends the end intends a means.

This is not trivial: it implies that every action has an intended means. One might worry about God's creating ex nihilo. I think there we can stipulate that God's creating A is a means to the existence of A, even if it turns out that God's creating A just is the existence of A (cf. chapter 12 of my PSR book), by generalizing the notion of a means to that of "the way in which the event is made to happen."

(I would expect that (1) would be a translation of some Latin maxim. Latin doesn't have articles, so whatever Latin would be behind (1) might well be understandable as (4).)

Now go back to the original pill case. I don't intend to numb my pain receptors. So what means do I intend? Answer: I don't intend any specific means—I simply intend whatever means it is by which the pill relieves headaches. That's why my intentions don't need to change when I learn how the pill works.

Now consider this wackier case. Suppose that I learn that the way the headache relief pill works is this. There is a homunculus inside me that has the power to relieve my headaches. When I take the pill, I cause horrific pain (much greater than my headache) to the homunculus, and he rushes to relieve my headache, afraid that if he doesn't, I'll take another dose. If I am right that given a normal story about how pain relief works, I need not be intending to numb pain receptors, likewise in this story I needn't be intending to torture the homunculus, even though I know about the homunculus and his pain. However, I do intend whatever means it is by which the pill relieves headaches. And that means is in fact horrific pain for the homunculus. I accomplish my means, and so my accomplishment in fact includes horrific pain for the homunculus. And it is really bad when one's accomplishment is known to have horrific pain for someone else as a part of it.

Sunday, August 14, 2011

A sufficient condition for not intending

Suppose that I do A, foreseeing that p.  Here is a sufficient condition for my action not to have p among its intentions:

  • All my active reasons in favor of my acting as I did in doing A could have been operative for me, and to at least as great a degree, had I not foreseen that p.
(A reason is active provided that it not only favors my acting a certain way, but that acting in this way comes from the reason in the right way.)  This logically follows from the general thesis, which I am inclined to accept, that my intentions supervene on my active reasons in favor of my acting as I did.

Consider two standard cases.  Terror bombing: bomb is dropped on civilian-occupied area to terrify enemy civilians into forcing their government to surrender.  Strategic bombing: bomb is dropped on civilian-occupied area to hit a military target.  In strategic bombing, one could act as one did, and for exactly the same reasons as one did, even if one did not foresee the deaths of the civilians.  But in terror bombing, one couldn't.  If one did not foresee the deaths of the civilians, one of the active reasons for dropping the bomb where one dropped it would not have been available: that dropping it there will terrify civilians by killing them.

Notice that reasons in favor of my acting as I do in doing A include reasons that concern the value of the end but also include reasons that concern how the means leads up to the end.  Why did you act as you did?  "Because it would save the patient's life" gives the one kind of reason, and "Because it would remove the tumor" gives the other kind of reason.  

The condition I offer is sufficient but not necessary.  Frances Kamm's triple-effect cases, if they work as she thinks they do, show that the condition is not necessary, unless one distinguishes reasons directly favoring the action from reasons that act as defeater-defeaters, and includes only the former in the sufficient condition.  Another kind of case is given by Neil Delaney's targeting cases, where the presence of civilians is evidence that the military target is there.  Or consider a case where I modify the manner in which I act in light of my foresight that p.  For instance, I expect that there will be civilian casualties, and so I drop the bomb early enough in the raid that I will have time to do a second pass once the smoke clears and drop medical supplies.  My actual active reasons for acting as I did, namely bombing at the time I did, couldn't have existed had I not believed that there would be civilian casualties.  (I could have had another similar reason still, such as that there might be civilian casualties.)

It would be nice if the sufficient condition could be made into a necessary one.  But even without being so made, I think the condition helps with a wide range of cases.  Moreover, sometimes one may be able to handle a recalcitrant case by the following stratagem.  Modify the case in such a way that intuitively if the original case had an intention that p, so does the modified one, and then show that the modified case doesn't have an intention that p by using the condition.

Thursday, August 11, 2011

Reasons and intentions

  1. If an action has an intention, that intention is always a part of the full rational explanation of the action.
  2. Only facts that are identical with or grounded in the agent's reasons are found in a rational explanation of an action.
  3. Therefore, the intentions in an action are identical with or grounded in the agent's reasons for the action. ("The Grounding Claim")
Here, grounding is a relation stronger than supervenience. I think the Grounding Claim is very plausible even apart from the little argument for it. One nice thing about the Grounding Claim is that it helps demystify intentions. Once we get clear on the reasons for an action, there is nothing more to intentions.

The Grounding Claim is very abstract, but it has a concrete and controversial consequence:

  4. It is possible to have two agents who differ in the foreseen consequences of an action but who do not differ in intentions.
This follows from the fact that a foreseen consequence only affects the reasons for an action when the agent cares about the consequences in some sense, and mere foresight does not entail care. When one agent finds out about a consequence of an action that she doesn't care about—either because the consequence is morally irrelevant or because the agent is morally insensitive—this does not by itself affect her reasons. Thus, two agents can have the same reasons but foresee different things, at least if they are things they do not care about. This, in turn, shows:
  5. Foresight is not the same as intention.

The challenge for a theory of intention, then, is to figure out in what way an agent's intentions are grounded in (or identical with) her reasons—how to read her intentions off from her reasons.

I don't know how to do that.

Wednesday, August 10, 2011

Tough double effect cases

I think the distinction between the foreseen and the intended is central to a lot of our moral thinking, and that some form of the Principle of Double Effect is correct.

Here is a pair of cases that I find particularly difficult, however.  This post owes things to a discussion I've been having with Daniel Hill.

Case 1: Matilda knows that a house contains two people, one an innocent and the other a terrorist.  Matilda is flying over the house and can drop a bomb on it, and if she does so, both people will die.  However, the terrorist will then be unable to detonate a bomb that would kill hundreds.  Matilda has no other way of preventing the detonation of the bomb.  Is it permissible for her to drop the bomb?

Case 2: This time Matilda is on the ground, and has a gun.  The two residents of the house are in front of her.  One of them is the innocent and the other is the terrorist.  She can't tell which is which.  The only way to prevent the detonation of the bomb is by killing the terrorist.  (Why is wounding not good enough?  Maybe from her position, she can only aim for the head, and so she'll either kill or miss.)  Is it permissible for her to shoot both?

The first case seems to be just like the standard double effect case of tactical bombing where if you drop the bomb on the enemy HQ, innocent visitors to the HQ (e.g., spouses of officers), will also die.  It is hard to distinguish Case 2 from Case 1, since it doesn't seem like it should morally matter whether one drops one bomb or takes two shots.  But the action in Case 2 seems wrong if you have strong deontological intuitions.  It sure seems like you're intentionally killing two people, where you know that one of them (but you do not know which--and that may change things) is an innocent.

So the challenge for the defender of double effect reasoning is to either show in a morally compelling way how Case 2 differs from Case 1, or show that the intuitions that the shooting in Case 2 is wrong are mistaken.

I will try for the first, but I don't know how morally compelling my story will be.  I think it will only be compelling to those who find double effect reasoning compelling.  Still I hope the story will have some plausibility.  Let the two people in the house be Susan and Tricia.  Matilda's intention in Case 1 is that the terrorist in the house die.  By what means?  By means of the place where the terrorist is being seared by an explosion.  Matilda need have no intention in Case 1 regarding the non-terrorist, or regarding Susan qua Susan or Tricia qua Tricia.  Her intention is explicitly about the terrorist as such.

Now consider Case 2.  Suppose Matilda has Susan in her gunsights and squeezes the trigger.  What are Matilda's reasons for so doing?  The most plausible account seems to be something like this: "Susan may be a terrorist, and if so, then many lives will be saved by her death, so I will shoot her."  In other words, the plan of action seems to be: "Shoot Susan dead, so that if she is the terrorist, the terrorist is dead."  If that's the plan of action, then Matilda is (literally) aiming to kill Susan.  And by the same token, Matilda is aiming to kill Tricia. Therefore, Matilda is intending the death of two persons, one of whom she knows to be an innocent.  She knows, thus, that in her overall action plan there is an innocent whose death she is aiming at.  And that is wrong.

Elsewhere, I have speculated that there are some actions that are only permissible with certain intentions.  For instance, perhaps it is only permissible to assert with the intention of avoiding the assertion of a falsehood and perhaps sexual relations are only permissible with the intention of uniting maritally.  It now seems quite plausible to me that intentional killing is like that.  To kill someone intentionally and permissibly it is not enough that one believe that the person is an aggressor (or is probably an aggressor?) that one is duly authorized to kill, or however exactly the exceptions on the prohibition of killing should be put.  The soldier or police officer needs to kill because the person is an aggressor that one is duly authorized to kill.  The Allied soldier who justly kills a German soldier must do so because the German soldier is an aggressor.  If the Allied soldier, instead, solely kills Helmut because Helmut is German or because Helmut has a long nose or because target practice is fun, the Allied soldier is morally corrupt.  (What if the Allied soldier kills Helmut both because Helmut is an aggressor and because Helmut is German?  I think more detail will be needed about the deliberative structure here, and I want to bracket this case.)

Now, let us suppose that in fact Susan is the terrorist.  Then Matilda in intentionally killing Susan is not killing Susan because Susan is a terrorist.  Rather, Matilda is killing Susan because Susan might be a terrorist.  And that is not good enough.  The intention to kill someone because she is a terrorist is compatible with love of that person, since doing justice to someone is compatible with love, and sometimes required by love.  But that is not Matilda's intention.

This has an interesting implication for military ethics.  It is often said to be necessary for soldiers to dehumanize their enemy in order to kill, to see them as enemies rather than as people, and this is often seen as a criticism of the military enterprise.  But if I am right, it is morally required that the soldier kill Helmut under a description that includes something like "enemy aggressor" rather than simply under the description "Helmut" or "that man over there, who no doubt has a family who are awaiting his return."  Perhaps in the ideal the humanity of the enemy, and the fact that he has a family who are awaiting his return, does enter into how the action is done--with compassion, sadness and only as necessary for due defense of the innocent.  But Helmut's aggressor status needs to be in the soldier's intentions.

But let us go back to Case 2.  One might cleverly object that it need not in fact be Matilda's intention that Susan die (Daniel Hill queried me about such an idea).  It could perhaps be Matilda's intention that Susan die if she is a terrorist.  Now, it is certainly possible to have such intentions.  If one has, or thinks one has, a magic bullet that kills only terrorists, one could shoot Susan intending that she die if she is a terrorist.  In that case, one's means to the conditional end that Susan die if she is a terrorist is shooting a bullet that discriminates between terrorists and non-terrorists.  But in the actual Case 2, one brings it about that Susan dies if she is a terrorist by bringing it about that Susan dies: the conditional end is brought about, in this case, by the unconditional means.  So one still intends that Susan die, as a means to the conditional end that Susan die if she is a terrorist.
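The logical core of that last point can be checked mechanically. Here is a minimal sketch of my own (in propositional logic, not the post's formalism): bringing about P unconditionally is always sufficient for the conditional end "P if T".

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b
    return (not a) or b

# P: Susan dies; T: Susan is a terrorist.
# P -> (T -> P) is a tautology: securing the unconditional outcome P
# automatically secures the conditional end "Susan dies if she is a terrorist",
# which is why the unconditional means still does the work in Case 2.
tautology = all(implies(P, implies(T, P)) for P, T in product([True, False], repeat=2))
print(tautology)  # True
```

The magic-bullet variant differs precisely in that it does not route the conditional end through the unconditional P: there the means discriminates on T, so P itself need not be secured.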

But what if Matilda is a clever double effect casuist, and says: "My intention is that a bullet should go through such and such a location in space, and that if there be the head of a terrorist in that location, that terrorist should die"?   However, I think this is an incorrect statement of Matilda's intentions.  Intentions aren't inner speeches.  They embody our actual reasons for acting.  Matilda's reason for sending the bullet to that location in space is that she can see Susan's head there.  Her plan for making sure that the terrorist in that location should die seems to be that Susan should die, and hence if the terrorist is there, the terrorist should die.  Susan's death still seems to be a means to the death of the terrorist in that location, if there be one there.  And likewise for Tricia's death.  I am not completely happy about this story, but it has some plausibility.

In Case 1, however, the aim is less personal, and that does actually matter: the only death aimed at is the death of "the terrorist", under that definite description.  Certainly, we would expect Matilda to be much more traumatized by Case 2 than by Case 1 (and if she weren't, we would think there is something wrong with her), and we should take such trauma to be defeasible evidence for a morally relevant difference between the two cases.

[Typo fixed.]

Saturday, February 26, 2011

Mints, cats, double effect and proportionality

It's noon. You and two other innocents, A and B, are imprisoned by a dictator in separate blast-proof cells. All the innocents are strangers, and you know of no morally relevant differences between them (whether absolutely or relative to you). A's and B's cells both contain bomb and timer apparatuses that A and B cannot do anything about. B's bomb timer is turned off. A's timer is set to blow her up at 1:00 pm. In your cell, there is a yummy mint on a weight-sensitive switch connected to the apparatus in B's cell. If the mint is removed, B's timer will be set to go off at 1:00 pm. The dictator will check up on the situation shortly before 1:00 pm, and will turn off A's timer if you've done something that caused B's timer to turn on. Anybody who survives past 1:00 pm will then be released.

So you reason to yourself. "I like mints. If I eat the mint, I will cause B's death, but A will be saved. My causing of B's death will be non-intentional, and on balance the consequences to human life are neutral. But I get a mint out of it. So the Principle of Double Effect should permit me to eat the mint."

If this reasoning is good, the Principle of Double Effect is close to useless. Strict deontologists think it's wrong to kill one innocent to save millions. Most think it's wrong to kill one innocent to save two. But just about every deontologist will say that it's wrong to kill one innocent to save one innocent and one cat. Now, consider this case. The dictator hands you a gun, and tells you that if you don't kill innocent B, she'll kill innocent A and a cat. You clearly shouldn't. But if you thought it was acceptable to take the mint, then you could reason thus: "It would be interesting to see what a bullet hole in a shirt pocket looks like (and the shirt doesn't belong to B—it is prison attire, belonging to the dictator). If I aim the gun at B's shirt pocket, and press the trigger, the bullet will make a hole in the shirt pocket. And as a non-intended side-effect, it will subsequently cause B's death. But that's fine, because on balance the consequences to human life are neutral, as then A will be saved—plus a cat!" And since you can always think up some minor good that is served by pulling a trigger (finger exercise, practice aiming, etc.), you will get results any deontologist should reject.

So something is wrong with the reasoning—or Double Effect is wrong. I do not think, however, that Double Effect is wrong—I think it's indispensable. So what I will say is this. Double Effect requires that the evil effect not be intended and that there be a proportionality between the side-effect and the intended effect. What the above cases show is that, as a number of authors have noted, proportionality is not a matter of utilitarian calculation. Not only should we have on-balance positive consequences, but the intended effect should be a good proportionate to the foreseen evil. And the foreseen evil is not "that one person fewer will be alive than otherwise", but the foreseen evil is that a particular person should die. The deaths of different people are incommensurable evils even when we know no morally significant differences between the people.

In some cases the virtuous agent may count the numbers of people. But not in these cases. It is callous and unloving to get a mint or produce a bullet hole at the cost of B's death. It trivializes the value of B's life. There is a dilemma here. Either one is acting in the way that causes B's death for the sake of saving A, or not. If one is not, then B literally died so that one might have a mint or be intellectually gratified by the sight of a bullet hole. And so one trivializes B's life. If one is acting to save A, then one is not trivializing B's life. But in that case one is intending B's death, and deontology forbids that.

Here is a variant analysis that comes to the same thing, perhaps. There are cases where one can only do something in one of two ways: by intending a basic evil or by having a morally vicious set of intentions. The cases I gave are like that: one can only take the mint or produce the bullet hole by intending B's death or by having a set of intentions that trivialize B's life. In either case, one is unloving to B. It's hard to say which is the worse.

(This is related to the looping trolley case. There, I think one is either intending the absorption of kinetic energy by the one person, which is problematic, or one is intending a slight increase in length of life or a slight increase in probability of survival on the part of the five, which trivializes the death of the one.)

Friday, February 18, 2011

Double Effect conference online tomorrow

The Anscombe Centre is running what looks to be a really good conference on Double Effect [PDF] on Saturday, February 19th, 9:30-17:30 GMT. They'll be accepting questions by email (and a selection of the email questions will be asked by the chair). They ask that you send an email to s.barrie@bioethics.org.uk if you're planning on attending electronically. No registration fee for online attendance, but they do accept donations.
Here is a fuller schedule.
I'll be there virtually, albeit sleepily, starting from around the second talk (the first starts at 3:30 am my time). If anybody wants to chat with me during or between sessions, go here, and make up a nickname (ideally one such that I'll know who you are). (That's not an official conference venue.)

Monday, October 11, 2010

Threats of self-torture

This post is inspired by the (public domain) story "Warrior Race" by Robert Sheckley (of whom I am a big fan).

Suppose I want a hundred dollars from you, but have no claim on it. So I resolve to torture myself in a way that would have significantly more disvalue than whatever good you can do with a hundred dollars on the condition that you don't give me the money, and convince you of my resolve. I also ensure you have no way of stopping me except by paying up. Or perhaps, if you're not sure of my resolve, I set up a machine that will torture me until you pay up. I also convince you that (a) I won't do this again, and (b) I will ensure nobody will ever find out about it. (If there are worries about the epistemic appropriateness of your trusting me, suppose that I have a little device implanted in my brain which will kill me if I am about to violate these rules.)

If you're a consistent utilitarian, you will pay up. Utilitarians, thus, are open to this particularly odd sort of blackmail.[note 1] Intuitively, I think, there is no duty for you to pay up. You could just say: "You made your bed, now lie in it." And so this is an argument against utilitarianism.

But why is it that non-utilitarians don't have to pay up? After all, it seems plausible independently of utilitarianism that if a moderate expenditure can prevent an immense amount of suffering, one has a duty to do that.

Or if that's not right, other forms of threat might work. You wanted to vote against Smith's getting tenure. But Smith informs you that if you vote against his tenure, he'll literally torture himself for the rest of his life to an intensity far disproportionate to the values involved in a fair tenure process. It is plausible that something like the proportionality condition from the Principle of Double Effect is a necessary condition on the permissibility of an action with a foreseen bad effect: the bad effect cannot be disproportionate to the good effect. But here the bad effect seems to be disproportionate to the good effect. (If causation doesn't filter through others' decisions, then suppose Smith set up a machine to torture him if you vote against him.) If this is right, then we don't have an argument against utilitarianism. We just have the observation that threats of self-harm will be effective against virtuous people.

One might think that anybody who would issue such threats of self-harm is insane, and maybe it is not so implausible to suppose that an insane person could get you to do whatever (within very broad limits) she wants by means of threats of self-harm. But if you're known to consistently act by a moral theory, like utilitarianism, that requires you to give in to the demand, then it is not insane to threaten self-harm in this way, as the threatener knows that she won't have to carry out the threat. It can, indeed, be narrowly self-interestedly rational.

I think there may be a move available to the non-utilitarian. She could insist that your suffering the torture involves goods of justice. There are (at least) two kinds of punishment: imposed and natural. And justice is involved with both. It does seem plausible that if two people are drowning, and only one can be rescued, and one is there because she murderously pushed the other in and in the process toppled in with her victim, the innocent has a call on us that the other does not.

Notice, though, that in these sorts of cases as individuals we have no right to impose a punishment on the person other than public disapproval. As individuals we certainly have no right to impose torture on someone who threatens self-harm and no right to impose death on the drowning attempted-murderer. So if the right story involves natural punishment, and "You made your bed..." suggests that, then we will still need a doing/non-doing or foreseeing/intending distinction. Actually, doing/non-doing won't work in the tenure case, since there Smith threatens you with self-harm if you vote against him, and voting is a doing. So it seems one needs a foreseeing/intending distinction to make this work out: you foresee that Smith will suffer, but because the suffering would be a good of justice, that shouldn't sway you from your vote against him.[note 2]

Furthermore, the concept of punishment without a punisher appears incoherent. So to make the "natural punishment" line go through, one may need a God behind nature. Maybe one could try for "natural consequences" that aren't punishment. But if they aren't punishment, it's not clear how the threatener's suffering the torments is a good of justice. If they aren't punishment, all we can get is that it's not unjust that the threatener should suffer. But that could leave intact the argument that you shouldn't vote in such a way that will cause this disproportionate suffering as, plausibly, that wasn't an argument from justice but from non-maleficence.

So it could well be that supporting the "You made your bed..." line in these cases requires a fair amount of philosophical doctrine: justice and natural punishment, foreseeing/intending and maybe even theism.

Of course, it could be that the hard-nosed "You made your bed..." intuitions are wrong.

Monday, May 10, 2010

Double effect reasoning without a Principle of Double Effect

Consider a fairly normal formulation of the Principle of Double Effect (PDE). An action that is foreseen to result in an evil is permissible if:

  1. the action is not intrinsically wrong
  2. at least one good is intended
  3. no evil is intended, either as a means or as an end
  4. the foreseen evils are not disproportionate to the intended good or goods.
In the previous post I argued that if this (or something like it) is true, it isn't just a peripheral principle, but all of normative ethics—a necessary and sufficient condition for permissibility. Unfortunately, I do not think this is true, not because of any difficult issues about intentions, but for the simple reason that there are many ways for an action to go wrong, and (1)-(4) do not exclude all of them. Suppose, for instance, I drop bombs on the enemy headquarters, even though I know that some civilians in the vicinity will die. It may well be that (1)-(4) are satisfied. But that is not enough for permissibility if, for instance, my commanding officer has forbidden the bombing or I am under a valid vow of non-violence. Yet that the bombing was forbidden or that I am under a vow of non-violence does not affect (1)-(3).

Granted, the proportionality question is going to be affected by the command or vow, but proportionality does not do justice to the reasons that come from commands or vows. Proportionality weighs goods, while commands and vows give rise to exclusionary reasons, which make some goods no longer count. Of course, one could redefine proportionality to take into account all the reasons available, including the exclusionary ones, but if we do that, then (4) will presumably by itself be sufficient for permissibility. Moreover, the sort of "proportionality" that will let one adequately account for reasons arising from commands and vows will likely just be another word for permissibility, and then the account is trivial. Further, intuitively, even when one considers the evil of disobeying one's commander's prohibition, the bombing of the enemy headquarters could be proportionate. But when one's commander prohibits it, the bombing is no longer an act of war, but a private lethal act which one has no right to perform.

Now, one could add conditions like that the action is not forbidden by vow, promise or command. But the resulting PDE would look ad hoc, and I don't think we could be sure we listed everything needed.

So what is to be done about PDE? Here is a short and dogmatic suggestion. One of the basic deontic moral intuitions is that one should produce no evil. However, as soon as we start reflecting on the world around us, we realize that many of our actions have bad consequences for some people. A letter of recommendation that I write for my student is likely to either hurt my student or hurt my student's competitor. When I cross the road, I incur the harm of an increased risk of being run over. And so on, in various day-to-day things. Moreover, there are less day-to-day cases, such as the polio vaccine manufacturer who knows that the vaccine will kill some patients, but also knows it will save more lives. The consequentialist solution is to refine "Produce no evil" into "Do nothing that produces less utility than you could produce." I think it's easy to see that this doesn't do justice to the deontic "Produce no evil" insight.

The basic insight of double effect reasoning is that "Produce no evil" should be refined into "Intend no evil", with a supplement of "Do nothing disproportionate." Discomfort over trolley cases then shows that we are sometimes unsure whether "Intend no evil" (together with the proportionality condition) really does capture all of the force of "Produce no evil", but I think "Intend no evil" does in fact come close to capturing the force. (I prefer: "Accomplish no evil.")

What is the PDE, then? It is simply an observation of the conditions under which the refinement of "Produce no evil" is satisfied. Seen in this way, it does not provide a sufficient condition for permissibility. It does provide a necessary condition for permissibility, and satisfaction of the conditions does show that the action is permissible insofar as the deontic question of the production of evil is concerned. But there may be other deontic questions. Seen in this way, the PDE can be simplified greatly. It simply says that "Produce no evil" is not violated when one intends no evil and does not act disproportionately.

Sunday, May 9, 2010

Is the Principle of Double Effect the sum total of normative ethics?

The Principle of Double Effect (PDE) is often stated in something like this form: An action that is foreseen to have an evil effect (maybe better: an effect that is a basic evil—I shall not worry about this) E is permissible if:

  1. the action is not intrinsically wrong
  2. a good is intended
  3. E is not intended, either as a means or as an end
  4. E is not disproportionate to the intended good.
This sort of formulation is obviously incorrect. After all, conditions (1)-(4) which are supposed to be sufficient for permissibility are compatible with the action also being intended to have another evil effect E*. That particular problem is easily fixed. We just say that an action that is foreseen to have at least one evil effect is permissible if:
  5. the action is not intrinsically wrong
  6. at least one good is intended
  7. no evil is intended, either as a means or as an end
  8. the foreseen evils are not disproportionate to the intended good or goods.
This is better.

Now, observe two interesting facts. First, if an action that is foreseen to have at least one evil effect and that satisfies (5)-(8) is permissible, then a fortiori an action that is not foreseen to have any evil effects and that satisfies (5)-(8) should also be permissible. (It would be really, really weird if an action would become permissible as soon as one noticed some tiny evil side-effect.) So, in fact, the PDE gives a set of sufficient conditions for permissibility. Second, it is clear that (5) and (8) are necessary conditions for permissibility. Moreover, if (7) weren't a necessary condition for permissibility, there would be little need for a PDE. Finally, it is plausible that an action that aims at no good is a perversion of will, and hence on Natural Law grounds (6) will be necessary. Moreover, it may even be the case that every action has to intend at least one good in order to be an action, and hence (6) may be trivial.

If so, then if (5)-(8) are the right conditions to put in the PDE, they are a complete account of permissibility. This makes the PDE not just something peripheral to a deontic ethics, an epicycle for handling some wartime and medical cases, but in fact it makes the PDE be all of normative ethics.

However, as tomorrow's post will show, a PDE like (5)-(8) is not the right way to think of the insights embodied in double effect reasoning (to use Cavanaugh's phrase).

Thursday, April 8, 2010

Proportionality in Double Effect

A standard formulation of Double Effect allows doing an action that has a basic evil E as a consequence provided:

  1. The action in itself is neutral or good.
  2. E is not intended, whether as means or as end.
  3. There is a good G intended.
  4. The bad effects are not disproportionate to the good effects.
Now, it is tempting to take the proportionality condition (4) to be a simple consequentialist condition that says that the overall consequences of the action are positive. It is well-known among people who work on Double Effect that proportionality is not a consequentialist condition. However, I am not sure it is well-known just how important it is that it not be a consequentialist condition.

In fact, if we take the proportionality condition simply to say that the overall consequences of the action are positive, then Double Effect will allow too-close variants of paradigm cases of what it is taken not to allow. For instance, a paradigm example of what Double Effect is taken not to permit is terror bombing in war. Terror bombing is a bombing intended to cause civilian casualties so as to terrify the enemy into surrender. That violates condition (2), since the evils are intended as a means to enemy surrender.

But now imagine that the person operating the bombs is an ethicist who believes in Double Effect with the consequentialist condition in place of (4). She realizes that knowledge is a good. In particular, it is good to know what it looks like from close up when civilian buildings have bombs dropped on them. So, she plans to drop bombs on civilian buildings to find out what that looks like. The action of dropping bombs in itself is neutral. (For instance, one might drop bombs as a means to mining.) She is pursuing a good G, of learning what it looks like when civilian buildings are bombed. She does not intend civilian deaths, either as an end or as a means: that there be civilians in the buildings does not contribute to her end, which is to see what it looks like when the buildings are bombed. So, (1)-(3) are satisfied. And if the bombing can be reasonably expected to end the war, thereby preventing further bloodshed, the overall consequences of the bombing can be assumed to be positive. But while this bomber is not intending civilian deaths, her variation on terror bombing is surely impermissible. Moreover, we see the pattern now: all that is needed for Double Effect with a consequentialist proportionality condition to justify a consequentialistically acceptable action is that the agent find some trivial good served by the action, and then the agent can act for that end. In other words, Double Effect ends up working like consequentialism with a bit of clever mental juggling.

So, the proportionality condition cannot be taken to be overall positive consequences. Maybe, though, there is a modification of the overall positive consequences criterion that works. Let C be the set of causal consequences of the action. At least one member of C is a basic evil. Let C* be the subset of C of those consequences c with the property that c does not have any basic evil in C as a necessary cause. Then, the modified consequentialist proportionality condition is that the overall value of C* is positive. This takes care of the above case, because the relevant increase in the probability of ending of the war has as a necessary cause the deaths of the civilians.
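The filtered criterion can be sketched in code. This is a minimal illustration only: the consequence names, causal graph, and numerical values below are all hypothetical, and "necessary cause" is modeled simply as ancestry in a cause graph.

```python
# A minimal sketch (all names and values hypothetical) of the filtered
# consequentialist proportionality condition: C* keeps only consequences
# that do not have a basic evil among their necessary causes, and the
# condition asks whether the overall value of C* is positive.

def necessary_ancestors(c, causes):
    """All necessary causes of c, direct or indirect."""
    seen = set()
    stack = list(causes.get(c, ()))
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(causes.get(a, ()))
    return seen

def filtered_value(consequences, causes, value, is_basic_evil):
    """Overall value of C*; positive means the filtered condition is met."""
    evils = {c for c in consequences if is_basic_evil(c)}
    c_star = [c for c in consequences
              if not (necessary_ancestors(c, causes) & evils)]
    return sum(value[c] for c in c_star)

# The bomber case: the war's end has the civilian deaths as a necessary
# cause, so it is filtered out; the deaths themselves remain and count against.
consequences = ["deaths", "war_ends", "knowledge"]
causes = {"war_ends": ["deaths"]}
value = {"deaths": -100, "war_ends": 120, "knowledge": 1}
print(filtered_value(consequences, causes, value,
                     lambda c: c == "deaths"))   # -99: not positive, so impermissible
```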

Incommensurability, however, precludes even this kind of consequentialist criterion. Also, I wonder if the above filtered consequentialist criterion isn't too restrictive. For instance, it wouldn't allow the defeater-defeater move I make in the second comment here, and it may be that theodicy requires such a move at some point.

All this suggests to me that proportionality is a very complex notion. It may be one of those things that can't be codified (at least sufficiently briefly for us humans to do it in this life), but needs to be weighed by the Aristotelian phronimos.

Wednesday, March 31, 2010

Calvinism and the problem of sin

According to St Paul, we do not do evil that good might come of it. A plausible version of this principle is:

  1. It is wrong to intend to produce an intrinsic evil, either as an end in itself or as a means (causal or constitutive) to an end.
Now, sin is an intrinsic evil. If (1) applies to God as well as to us, then a perfectly righteous God cannot intend anyone to sin. As far as (1) goes, it might be acceptable to permit someone to sin in order to bring a good out of it, but to cause someone to sin, even in order to bring a good out of it, is wrong. We might then formulate one of the distinctive views of Calvinism as the thesis that God intends sin in order to be glorified either by redeeming or by punishing the sinner. But then God violates (1).

Presumably, a standard Calvinist response will be that (1) applies to us but not to God. However, our ethics is supposed to be an ethics of love, and God, whether necessarily or by his contingent decision, always acts in love as well. (Some Calvinists think God doesn't love the reprobate. But the argument still applies insofar as on Calvinist views it seems that God intends the elect—whom he loves—to sin, in order that he be able to redeem them.) And principle (1) seems to be at such a high level of generality that if it follows from the duty to love in our case, it is likely to follow from that duty in the case of God, as well.

I think the Calvinist should deny that God intends sin. Instead, the Calvinist should give some sort of a Double Effect story on which God causes something that entails the existence of sin, but which is distinct from the sin and itself good, and to which the sin is not a means. Maybe instead of willing Sally to punch George, which would be evil, God can intend Sally to swing her fist forward, which is not in itself an evil, but which, along with the other things God has willed, entails that George is punched by Sally. Then one ends up denying that God intends people to sin for the sake of his glory, instead asserting that for the sake of his glory he permits them to sin, while he (God) wills something that entails their sinning. Whether it is possible for a Calvinist to walk this fine line is not clear.

Thursday, December 31, 2009

Intentions

Here is the thesis I will argue for: that x intentionally kills y does not entail that x intended that y die.

I need a relevance principle for intentions:

(RPI) If x intends that p in an action A, then x takes the epistemic conditional probability of p on A to be higher than the epistemic conditional probability of p on some relevant alternative (such as refraining from A).

Now suppose that George is falling past Fred's balcony. Under the building there is a net spread out. George's fall is such that he is virtually certain to miss the net unless his path is modified, and it is virtually certain that he'll die if he misses the net. Fred has always wanted to kill George. It's not that Fred has wanted George dead, but Fred wanted to take revenge on George, to be the cause of George's death. Fred has a baseball bat. If he hits George on the head, George's downward path will be modified and George will land on the net. There is no other intervention within Fred's power that can stop George from hitting the ground. If Fred hits George on the head with the bat, George has a 90% chance of dying from the blow, and a 1% chance of dying from landing on the net. Fred knows all this. He hits George on the head, and George dies from the blow.

Fred has intentionally killed George. But by RPI, Fred did not intend George's death, roughly because the action of hitting George with the bat did not increase the conditional (epistemic) probability of George's death.
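The RPI verdict here can be checked numerically. This is an illustrative sketch: "virtually certain" is modeled as 0.99, which is my assumption rather than a figure from the text, and the 1% net-landing death chance is read as conditional on surviving the blow.

```python
# Illustrative check of RPI in the Fred/George case. The 0.99 figure for
# "virtually certain" is an assumption; 0.90 and 0.01 are from the text,
# with the 1% read as conditional on surviving the blow.

p_death_no_hit = 0.99                 # George misses the net and dies
p_blow_kills = 0.90                   # the bat blow kills George
p_net_kills = 0.01                    # the net landing kills him, given he survives the blow
p_death_hit = p_blow_kills + (1 - p_blow_kills) * p_net_kills

print(round(p_death_hit, 3))          # 0.901
# RPI: Fred intends George's death only if hitting raises its probability.
print(p_death_hit > p_death_no_hit)   # False: by RPI, the death is not intended
# But dying *of the blow* goes from 0 (no hit) to 0.90 (hit), so that can be intended.
```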

Clearly, Fred intended that George should die of Fred's blow. Therefore, one can intend that George die of Fred's blow without intending that George die.

This shows that there isn't going to be any plausible entailment principle for intentions, even if entailment is going to be understood along the lines of any plausible relevant logic (for the entailment from George dying of Fred's blow to George dying had better be relevant). Interestingly, it is not even true that he who intends the conjunction intends the conjuncts. Suppose that if I do nothing, p has probability 0.7 and q has probability 0.7, but the probability of the conjunction is only 0.4. But if I perform A, then p has probability 0.6, as does q, and the conjunction of p and q has probability 0.55. I want the conjunction of p and q, and I don't care about each conjunct (maybe I get a payoff if the conjunction holds, but I get nothing for each conjunct on its own). I know all this. So I perform A. I do not intend p, since I lowered the probability of p. Likewise, I do not intend q. But I do intend their conjunction.
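Since the example turns on pairs of probability assignments, it is worth checking them against the Kolmogorov constraint max(0, P(p)+P(q)−1) ≤ P(p∧q) ≤ min(P(p), P(q)): a baseline of 0.7 and 0.7 forces the conjunction's probability up to at least 0.4. A quick sketch:

```python
# Coherence check for joint probability assignments: a single probability
# measure requires max(0, P(p)+P(q)-1) <= P(p and q) <= min(P(p), P(q)).

def coherent(p, q, p_and_q):
    """True iff (P(p), P(q), P(p&q)) can come from one probability measure."""
    return max(0.0, p + q - 1.0) <= p_and_q <= min(p, q)

print(coherent(0.7, 0.7, 0.4))    # True: the smallest coherent baseline conjunction
print(coherent(0.6, 0.6, 0.55))   # True: the assignment after performing A
print(coherent(0.7, 0.7, 0.1))    # False: 0.1 would force P(p or q) up to 1.3
```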

In particular, this implies that in the Principle of Double Effect, the condition that one not intend the evil has to be expanded. Consider this weird case. If a peanut is eaten and Fred is killed, ten innocent people are saved. If the conjunction does not hold, ten innocent people die. Double Effect advocates (like me) will not permit me to wave a magic wand that simultaneously causes Fred to die and the peanut to be eaten (maybe by Fred who is allergic to peanuts?). However, note that Fred's death is not intended here, either as an end or as a means. Only the conjunction of Fred's death and the ingestion of the peanut is intended.

The correct way to expand the condition that one not intend the evil is tricky, and what I sketch in this paper still seems to me to be the best way to go.

Monday, May 11, 2009

Intention and understanding

George, who is quite happy thinking that he has just aced his logic exam (actually, he failed miserably), sees a first-order logic proposition on a board:

  1. (x)(~toothache(x) → ~(x = George)).
On a whim, he desires that this be the case. He rubs a lamp, the genie appears, and George says to the genie: "Make it be the case that (x)(~toothache(x) → ~(x = George))." To George's surprise, he immediately gets a toothache. The surprise isn't at the fulfillment of the wish—he fully expected the wish to be fulfilled—but at the toothache, since George did not see that (1) is logically equivalent to:
  2. toothache(George).

Did George get what he intended? Well, yes: he wanted (1) to be true, and the genie did make (1) be true. But while George got what he intended, he also got a toothache, which he clearly did not intend to get. Thus, one can intend (1) without intending (2). Intention cuts more finely than logical equivalence.
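The equivalence of (1) and (2) can also be verified mechanically. This brute-force sketch (the encoding is mine, not from the post) checks that the two sentences agree in every interpretation over small finite domains containing George.

```python
# Brute-force check over small finite domains that
# (x)(~toothache(x) -> ~(x = George)) is logically equivalent to toothache(George).
from itertools import product

def sentence1(domain, toothache, george):
    # (x)(~toothache(x) -> ~(x = George)), i.e. for all x: toothache(x) or x != George
    return all(x in toothache or x != george for x in domain)

def sentence2(toothache, george):
    # toothache(George)
    return george in toothache

equivalent = True
for n in range(1, 4):                          # domains of size 1 to 3
    domain = list(range(n))
    george = 0                                 # George denotes some fixed element
    for bits in product([False, True], repeat=n):
        toothache = {x for x, b in zip(domain, bits) if b}
        if sentence1(domain, toothache, george) != sentence2(toothache, george):
            equivalent = False
print(equivalent)                              # True
```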

Suppose George were better at logic, so that it was obvious to him that (1) and (2) are equivalent. Could he then intend (1) without intending (2)? I am inclined to answer affirmatively. Belief does not automatically affect intentions—intentions are a matter of the will, not of the intellect. Of course, if he were better at logic, the toothache would not be a surprise.

Once we admit that intentions can cut this finely, we have to be really careful with Double Effect, lest we end up justifying the unjustifiable. We don't want to allow Janine to get away with murder by saying that she asked the genie to bring it about that either Fred is dead or 2+2=5, and so she never intended Fred to be dead. My way of doing that is to introduce the notion of accomplishment. As long as George intended (1), whether or not he knew that (1) entailed (2), George accomplished his toothache: the toothache was a part of the accomplishment of the action. As long as Janine intended the disjunction, the disjunct (or, more precisely, the truthmaker of the disjunct) which she (through the genie) accomplished is a part of her accomplishment.

Wednesday, August 20, 2008

Deception and lying

There is good reason to think lying is always wrong. Lying is wrong on Kantian grounds: it treats the other person as a tool to one's ends rather than as an autonomous rational being, and the practice of lying would undercut itself if universalized. Lying is wrong on natural law grounds: it is clearly a perversion of the nature of assertoric speech, using speech for the opposite of its natural end of communicating truth. Lying is malevolent, except perhaps in outré cases: in lying, we act to bring it about that the other has a false belief, and it is surely intrinsically bad to have a false belief. Lying is wrong on personalist grounds: in making an assertion one solicits the other's trust, but in deliberately speaking falsely, one betrays that trust in the act of soliciting it. And lying is wrong on theological grounds: God is truth, and the Book of Revelation lists liars among the damned.

On the other hand, even those who are willing to agree that lying is always wrong are unlikely to think there is anything wrong with sticking one's hat out on a stick so that one's enemy might shoot at it while one sneaks away. It is hard enough to protect the innocent against unjust aggressors without lying (and, alas, sometimes impossible). But to do so without any deceit is nigh impossible.

But some people—even very smart people—do in fact consider lying and deceit to be the same thing. After all, in both cases, it seems, one is trying to do the same thing, namely to induce a false belief, and if so, then the malevolence argument would make deceit be wrong for one of the reasons that lying is.

I once found this very puzzling. And then a colleague gave me the beginning of an answer. In cases of deceit, one is trying to get the other to do something, rather than trying to get the other to believe something. I think this story can be filled out in a way that makes for a neat distinction between deceit and all but perhaps outré cases of lying (more on those later). On the face of it, one might argue that if I stick out my hat, my intention is to bring it about that

  1. my enemy will think I am under the hat, and will shoot, and the commotion will cover my escape.
It seems that the enemy's belief that I am under the hat is essential to the success of the plan.

But this argument is mistaken. What is essential is that the enemy should take herself to have evidence that I am under the hat. She does not have to believe that I am under the hat to shoot. She only needs to take herself to have more evidence for my being there than for my being in any other particular place. That is all that is needed to rationally justify her shooting under the hat. And her belief that she has this evidence is in fact a true belief—she indeed does have such evidence. Now, an epistemically less cautious enemy may actually form the belief that I am under the hat. But here I can apply double effect. She forms the false belief on the basis of the evidence. I intend her to have the evidence and to shoot. The evidence is sufficient to lead to her shooting. I do not have to intend her to form that false belief. I suppose things go better for me if she does, but I need only intend that

  2. my enemy will take herself to have more evidence for me to be under the hat than anywhere else, and will shoot, and the commotion will cover my escape.
(A lot of these ideas developed in conversation with the aforementioned colleague. In fact it may be that there is very little that is mine here.)

The same can be said when I lay a false trail at a crossroads when I am pursued by the enemy. I only intend what is needed for the accomplishment of my plan. Belief that I've taken road A, when I've taken road B, is not needed. All that's needed is that my enemy have strong evidence that I've taken road A, since having strong evidence that I've taken road A is sufficient to justify her following road A. There is no evil in her having such strong evidence. The evidence consists, after all, of a truth—the truth that there are footprints leading A-ward.

The principle of double effect can justify some cases of deception—I may foresee the other's forming a false belief, but I don't intend that belief formation, either as an end or as a means. And, typically, I don't even foresee that belief formation—I only foresee the possibility of it, since I do not know how epistemically cautious the other person will be. All that I intend is for the other person to have evidence for a false belief, and to act on that evidence.

Of course, in some cases of deceit, one is positively intending that the other have a false belief. For instance, a student plagiarist might desire not merely that her parents have evidence of her innocence, but that her parents positively believe her innocent. If she then manufactures evidence for her innocence with the intention that her parents believe her innocent, the above will be no excuse.

If this story is right, and if it is not to justify well-intentioned lies every bit as much as deceits, then there must be a crucial difference between how assertions function and how evidence functions. Assertions cannot simply be intended as yet another piece of evidence. For if they are, then in affirming a falsehood, we are not trying to induce any false belief in the other, but we are simply manufacturing misleading evidence. And, indeed, I do think assertions directly justify beliefs, in ways that are not merely evidential.

We can now go back to the reasons for believing lying to be wrong, and see if they apply to cases of deceit where one is not intending false belief but only misleading evidence. The Kantian "using" argument may not work (I used to think it would work, but I am not so clear on that). Maybe one is not circumventing the other's rationality, but only ensuring that the other act on unclear evidence. Nor is it clear that the practice of generating misleading evidence is not universalizable. Even if everybody who has good reason to deceive generates misleading evidence, there will be enough cases where non-misleading evidence is generated unconsciously that the evidence will still have some weight. Making footprints or putting a hat on a stick are not actions that have a natural end that is being circumvented here in the way in which lying circumvents the natural end of assertion. So the natural law argument against lying fails to show deception to be wrong. If the double effect considerations above are correct, the malevolence argument fails. The personalist argument also fails, because when we take something as evidence, rather than as testimony, trust in another person need not be involved. I do not trust persons to leave footprints leading to them—I have no right to feel betrayed if they leave footprints pointing in other directions. God is truth, but the cases of deceit that I have defended are not directly opposed to truth, since they do not involve an attempt to cause a false belief.

Final comment: Twice I mentioned that there could be outré cases of lying where there is no intention of causing false belief. These would be cases where one does not expect to be believed. There could, for instance, be cases where one knows that the other person is expecting one to lie, and so one says something false, in order to lead the other to true belief. I don't know if this is really a betrayal of trust since there is no trust. I don't know if people would count this as lying—it doesn't, for instance, meet the Catholic Catechism's definition of lying as a false assertion intended to deceive. But if one wishes to count this as a case of lying, it is a form of lying that may be significantly morally different from the others.

Saturday, May 3, 2008

Philosophy needs the Principle of Sufficient Reason

I claim that much of philosophy depends on the Principle of Sufficient Reason (PSR), namely the claim that all facts have explanations.

It is morally acceptable to redirect a speeding trolley from a track on which there are five people onto a track with only one person. On the other hand, it is not right to shoot one innocent person to save five. What is the morally relevant difference between the two cases? If we denied the PSR, then we could simply say: “Who cares? Both of these moral facts are just brute facts, with no explanation.” Why, indeed, suppose that there should be some explanation of the difference in moral evaluation if we accept the denial of the PSR, and hence accept that there can be facts with no explanation at all?

Almost all moral theorists accept the supervenience of the moral on the non-moral. But without the PSR, would we really have reason to accept that? We could simply suppose brute contingent facts. In this world, torture is wrong. In that world, exactly alike in every other respect, torture is a duty. Why? No reason, just contingent brute fact.

The denial of the PSR, thus, would bring much philosophical argumentation to a standstill.

Note: This, like the previous post and at least one earlier post, is an excerpt from a paper I am finishing on Leibnizian cosmological arguments.