Tuesday, January 14, 2025

The overridingness of morality and Double Effect

You’ve been imprisoned in a cell with a torture robot. The cell is locked by a combination lock, and your estimate is that you will be able to open it in a week. If the torture robot is left running, it will stimulate your pain center, causing horrible pain but no lasting damage, and without slowing your escape at all. An infallible oracle reveals to you that if you disable the robot, through a random confluence of events this will affect your character in such a way that in a year you will be 0.1% less patient for the rest of your life than you would otherwise be.

Now, sometimes, a small difference in the degree of a virtue could make a big difference. For instance, perhaps, you will one day be in a position where an extremely arduous task will need to be done to save someone’s life, and you just barely have enough patience for it, so that if you were 0.1% less patient, you wouldn’t do it. You ask the oracle whether something like this will happen if you turn off the robot. The oracle replies: “No, it’s just that you will be 0.1% more annoyed whenever you engage in an arduous task, but that’s never going to push you past any significant threshold—you’re not going to blow up in a big way at your child, or neglect a duty, or anything like that.”

It seems obviously reasonable to disable the robot. Thus, enormous short-term hedonic considerations can win out over tiny long-term virtue considerations. It is thus not the case that considerations of virtue always beat hedonic considerations.

What are we to make, then, of the deep insight—perhaps the most important insight in the history of Western philosophy—about the primacy of morality over other considerations?

Two things. First, moral considerations tend to be much more important than non-moral considerations.

Second, we should never do what is morally wrong, no matter what the price for avoiding it, and no matter how small the wrong. But there is a difference between doing what is morally wrong and doing something morally permissible that makes one less virtuous.

Here is a second case. You and an innocent stranger are in the cell. The robot is set to torture the stranger. The oracle now reveals to you that right after the escape, you will forget the last two weeks of your life, and your life will go the same way whether you disabled the robot or not, with exactly one morally relevant exception: if you have chosen to disable the robot, then one day, feeling peckish and having forgotten your wallet, you will culpably steal a candy bar from a corner store.

It seems obvious that you should disable the robot, despite the fact that doing so leads to your doing a minor moral wrong. The point isn’t that disabling the robot justifies stealing the candy bar—at the time that you steal it, you will have forgotten all about the robot, so there is no justification. The point is that even though you should never do wrong that good might come of it, nonetheless sometimes for the sake of a great good it is permissible to do something that you know will lead to your later doing something impermissible.

Sometimes theologians have incautiously said things like this: the smallest sin outweighs the greatest evil that is not a sin. I think this is incorrect. But what is correct is that you shouldn’t commit the smallest sin for the sake of the greatest good. However, the Principle of Double Effect applies to future sins: you can foresee but not intend that if you perform a certain action—turning off the robot, say—you will commit a future sin.

2 comments:

Jan_Laskowik said...

I've been regularly reading your blog for some time and I am so glad you touched on this subject. I have two questions related to your post: 1) You wrote "you will CULPABLY steal a candy bar from a corner store," but what do you think of the value of preventing immoral actions (done by yourself or others) that are not culpable? How important must other considerations be to justify not preventing an action that is wrong but not blameworthy? 2) What if we can choose between saving 100 people from brutal and premature death from disease and preventing the sin of the brutal murder of one person? On one hand the moral evil of murder seems to outweigh the evil of 100 deaths, but it is bad mainly for the murderer, and for the victim it is only as bad as a brutal death. (Suppose we can prevent the murder by preventing the murderer from ever having murderous intent.)