Thursday, January 11, 2024

A deontological asymmetry

Consider these two cases:

  1. You know that your freely killing one innocent person will lead to three innocent drowning people being saved.

  2. You know that saving three innocent drowning people will lead to your freely killing one innocent person.

It’s easy to imagine cases like (1). If compatibilism is true, it’s also pretty easy to imagine cases like (2)—we just suppose that your saving the innocent people produces a state of affairs where your psychology gradually changes in such a way that you kill one innocent person. If libertarianism and Molinism are true, we can also get (2): God can reveal to you the relevant conditional of free will, namely that if you were to save the three, you would freely kill the one.

If libertarianism is true but Molinism is false, it’s harder to get (2), but we can still get it, or something very close to it. We can, for instance, imagine that if you rescue the three people, you will be kidnapped by someone who will offer increasingly difficult-to-resist temptations to kill an innocent person, and it can be very likely that one day you will give in.

Deontological ethics says that in (1) killing the innocent person is wrong.

Does it say that saving the three innocents is wrong in (2)? It might, but not obviously so. For the action is in itself good, and one might reasonably say that becoming a murderer is a consequence that is not disproportionate to saving the three lives. After all, imagine this variant:

  3. You know that saving three innocent drowning people will lead to a fourth person freely killing one innocent person.

Here it seems that it is at least permissible to save the three innocents. That someone will, through a weird chain of events, become a murderer if you save the three innocents does not make it wrong to save the three.

I am inclined to think that saving the three is permissible in (2). But if you disagree, change the three to thirty. Now it seems pretty clear to me that saving the drowning people is permissible in (2). But it is still wrong to kill an innocent person to save thirty.

Even on threshold deontology, it seems pretty plausible that the thresholds in (1) and (2) are different. If n is the smallest number such that it is permissible to save n drowning people at the expense of the side effect of your eventually killing one innocent, then it seems plausible that n is not big enough to make it permissible to kill one innocent to save n.

So, let’s suppose we have this asymmetry between (1) and (2), with the “three” replaced by some other number as needed (the same one in both statements), so that the action described in (1) is wrong but the one in (2) is permissible.

This, then, is yet another counterexample to the project of consequentializing deontology: the project of finding a utility assignment that yields verdicts equivalent to those of deontology. For the consequences of the actions in (1) and (2) are the same, even if one assigns a very large disutility to killing innocents.
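The structure of the counterexample can be put formally (a sketch, where u is any candidate utility assignment over total outcomes):

```latex
Let $O_{\mathrm{act}}$ be the outcome in which the $n$ innocents are saved and
you freely kill one innocent, and $O_{\mathrm{ref}}$ the outcome in which the
$n$ innocents drown and no one is killed. In both (1) and (2), the agent's two
options realize exactly $O_{\mathrm{act}}$ and $O_{\mathrm{ref}}$. A
consequentialization by a utility assignment $u$ would then require:
\begin{align*}
\text{(1) is wrong:} &\quad u(O_{\mathrm{act}}) < u(O_{\mathrm{ref}}),\\
\text{(2) is permissible:} &\quad u(O_{\mathrm{act}}) \ge u(O_{\mathrm{ref}}).
\end{align*}
These two conditions are jointly unsatisfiable, however large a disutility $u$
attaches to your killing an innocent.
```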

5 comments:

Walter Van den Acker said...

Alex

If libertarianism is true, it is impossible to know that saving three innocent drowning people will lead to your freely killing one innocent person.

Alexander R Pruss said...

That's not right on Molinism.

And even without Molinism, one can have a very high rational probability that saving three innocents would lead to freely killing, and one can replace knowledge with very high rational probability.

Walter Van den Acker said...

Molinism is not compatible with LFW.
And your argument does not work with high probabilities. The very idea of LFW is that, no matter how high the probability of choosing X, it always remains a real possibility to choose Y instead.
Anyway, the high probability could perhaps be relevant to your 3, but not to your 2. If I think that killing an innocent person is wrong and that I will never kill an innocent person, I won't even consider that "saving three innocent drowning people will lead to my freely killing one innocent person".

Alexander R Pruss said...

Very high probabilities are all I need to refute the consequentialization of deontology. For the most plausible consequentialization of deontology involves assigning heavy disutility to deontologically wrong actions, and then calculating expected utilities. It makes little difference there whether you have very high probabilities or knowledge.

Here's a scenario for you. Suppose you are an exemplarily virtuous person who thinks it's always wrong to kill an innocent person. But if you save three people, you will be captured and brainwashed "slightly", to the point where you lose your exemplary virtue, but not to the point of losing free will, becoming insane, or even being vicious. You just have the "average" level of virtue and sanity of a person in our society. And then you will be tempted, strongly, over and over, to commit murder. Now, I take it that if one takes an average person in our society, the probability is very high that with enough temptation, of the "right" sort, they will give in and commit a murder.

Walter Van den Acker said...

If I am brainwashed to the point where I lose my exemplary virtue, my free will is compromised.
If I suddenly become able to commit murder, my personality has changed. If my personality doesn't change, I don't commit murder.