Consider a variant of my teenage Hitler case. You’re a hospital anesthetist and teenage Hitler is about to have an emergency appendectomy. The only anesthetic you have available is one that requires a neutralizer to take the patient out of anesthesia—without the neutralizer, the patient dies. You know (an oracle told you) that if teenage Hitler survives, he’ll kill millions. And you’re the only person who knows how to apply anesthesia or the neutralizer in this town.
You’re now asked to apply anesthesia. You have two options: apply or refuse. If you refuse, the surgeon will perform the appendectomy without anesthesia, causing excruciating pain to a (still) innocent teenager, who will still go on to kill millions. Nobody benefits from your refusal.
But if you apply anesthesia, you will put yourself in a very awkward moral position. Here is why. Once the surgery is over, standard practice will be to apply the neutralizer. But the Principle of Double Effect (PDE) will forbid you from applying the neutralizer. For applying the neutralizer is an action that has two effects: the good effect of saving teenage Hitler’s life and the evil effect of millions dying. PDE allows you to perform actions that have a foreseen evil effect only when the evil effect is not disproportionate to the good effect. But here the evil effect is disproportionate. So, PDE forbids application of the neutralizer. Thus, if you know yourself to be a morally upright person, you also know that if you apply the anesthesia, you will later refuse to apply the neutralizer. But surely it is wrong to apply the anesthesia to an innocent teenager while expecting not to apply the neutralizer. For instance, it would clearly be wrong to apply the anesthesia if one were out of neutralizer.
So, it seems you need to refuse to apply anesthesia. But your reasons for the refusal will be very odd: you must refuse to apply anesthesia, because it would be morally wrong for you to neutralize the anesthesia, even though no one is worse off, and the teenager is better off, in the scenario where you apply anesthesia and neutralize it than in the scenario where the operation happens without anesthesia. To make the puzzle even sharper, we can suppose that if teenage Hitler has the operation without anesthesia, he will blame you for the pain, and eventually add your ethnic group—which otherwise he would have no prejudice against—to his death lists. So your refusal to apply anesthesia not only causes pain to an innocent teenager but causes many deaths.
The logical structure here is this: If you do A, you will be forbidden from doing B. But you are not permitted to do A if you expect not to do B. And some are much better off and no one is worse off if you do both A and B than if you do neither.
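That three-part structure can be put in a rough deontic-logic sketch. (The notation is my gloss, not part of the original argument: $P$ abbreviates "it is permissible that," and $\mathrm{Exp}$ the agent's expectation.)

```latex
% A rough deontic sketch of the puzzle's structure (an editorial gloss).
% A, B: the paired acts (apply anesthesia / apply neutralizer,
% or make the promise / keep it).
\begin{align*}
\text{(i)} \quad & A \rightarrow \neg P(B)
  && \text{doing $A$ makes $B$ impermissible} \\
\text{(ii)} \quad & \neg P\bigl(A \wedge \mathrm{Exp}(\neg B)\bigr)
  && \text{one may not do $A$ while expecting not to do $B$} \\
\text{(iii)} \quad & (A \wedge B) \succ (\neg A \wedge \neg B)
  && \text{doing both leaves some better off and none worse off than doing neither}
\end{align*}
```

The puzzle is that (i) and (ii) together push you toward doing neither act, even though by (iii) the do-both outcome dominates the do-neither outcome.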
Here is a much more moderate case that seems to have a similar structure. Bob credibly threatens to break all of Carl’s house windows unless Alice breaks one of Carl’s windows. It seems that it would be right for Alice to break the window, since any reasonable person would choose to have one window broken rather than all of them. But suppose instead Bob threatens to break all of Carl’s windows unless Alice promises to break one of Carl’s windows tomorrow. And Alice knows that by tomorrow Bob will be in jail. Alice knows that if she makes the promise, she would do wrong to keep it, for Carl’s presumed permission of one window being broken to save the other windows would not extend to the pointless window-breaking tomorrow. And one shouldn’t make a promise one is planning not to keep (bracketing extreme cases, of which this is not one). So Alice shouldn’t make the promise. But no one would be worse off if Alice made the promise and kept it.
I wonder if there isn’t a way out of both puzzles, namely to suppose that in some cases a promise makes permissible something that would not otherwise be permissible. Thus, it would normally be wrong to apply the neutralizer to teenage Hitler. But if you promised to do so (e.g., implicitly when you agreed to perform your ordinary medical duties at the hospital, or explicitly when you reassured his mom that you’ll bring him out of anesthesia), then it becomes permissible, despite the fact that many would die if you kept the promise. Similarly, if Alice promised Bob to break the window, it could become permissible to do so. Of course, we had better not say in general that promises make permissible things that would otherwise be impermissible.
The principle here could be roughly something like this:
- (1) If it would be permissible for you to now intentionally ensure that a state of affairs F occurs at a later time t, then it is permissible for you to promise to bring about F at t and then to do so if no relevant difference in the circumstances occurs.
Consider how (1) applies to the teenage Hitler and window-breaking cases.
It would be permissible for you to set up a machine that would automatically neutralize Hitler’s anesthesia at the end of the operation, and then to administer anesthesia. Thus, it is now—i.e., prior to your administering the anesthesia—permissible for you to ensure that Hitler’s anesthesia will be neutralized. Hence, by (1) it is permissible for you to promise to neutralize the anesthesia and then to keep the promise, barring some relevant change in the circumstances.
Similarly, it would be permissible for you to throw a rock at Carl’s window from very far away (out in space, say) so that it would only reach the window tomorrow. So, by (1) it is permissible for you to promise to break the window tomorrow and then to keep the promise.
On the other hand, take the case where an evildoer asks you to promise to kill an innocent tomorrow or else she’ll kill ten today, and suppose that tomorrow the evildoer will be in jail and unable to check up on what you did. It would be wrong for you to now intentionally ensure the innocent dies tomorrow, so (1) does not apply and does not give you permission to make and keep the promise. (Some people will think it’s OK to make and break this promise. But no one thinks it’s OK to make and keep this promise.)
Principle (1) seems really ad hoc. But perhaps this impression is reduced when we think of promises as a way of projecting our activity forward in time. Principle (1) basically says that if it would be permissible to project our activity forward in time by making a robot—or by self-hypnosis—then we should be able to accomplish something similar by a promise.
The above is reminiscent of cases where you promise to ignore someone’s releasing you from a promise. For instance, Alice, a staunch promoter of environmental causes, lends Bob a large sum of money, on the condition that Bob make the following promise: Bob will give the money back in ten years, unless Alice’s ideals shift away from environmentalism, in which case he will give it to the Sierra Fund, notwithstanding any pleas to the contrary from Alice. The current context—Alice’s requirements at borrowing time—becomes normative at the time the promise is to be kept, notwithstanding some feared changes.
I am far from confident of (1). But it would let one escape the unhappy position of saying that in cases with the above structure one is required to let the worst happen. I expect there are counterexamples to (1), too. But perhaps (1) is true ceteris paribus.