Friday, October 14, 2022

Another thought on consequentializing deontology

One strategy for accommodating deontology while still allowing the tools of decision theory to be used is to assign so high a disvalue to violations of deontic constraints that we end up having to obey the constraints.
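For concreteness, here is one natural way of spelling the strategy out (a formalization of mine, not something its proponents need be committed to): evaluate an act $A$ by

$$U(A) = G(A) - V\cdot \mathbf{1}[A \text{ violates a deontic constraint}],$$

where $G(A)$ is the ordinary consequentialist value of $A$'s outcomes, $\mathbf{1}[\cdot]$ is an indicator, and $V$ is stipulated to be so large that no amount of ordinary good to be gained by violating can outweigh it, e.g. larger than the disvalue of a million deaths.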

I think this leads to a very implausible consequence. Suppose you shouldn’t violate a deontic constraint to save a million lives. But now imagine you’re in a situation where you need to ϕ to save ten thousand lives, and suppose that the non-deontic badness of ϕing’s consequences is negligible compared to ten thousand lives. Further, you think it’s pretty likely that there is no deontic constraint against ϕing, but you’ve heard that a small number of morally sensitive people think there is. You conclude that there is a 1% chance that there is a deontic constraint against ϕing. If we account for the fact that you shouldn’t violate a deontic constraint to save a million lives by setting the disvalue of violating a deontic constraint higher than the disvalue of a million deaths, then the expected disvalue of a 1% risk of violating a deontic constraint exceeds the disvalue of ten thousand deaths, and so you shouldn’t ϕ, solely because of the 1% risk of violating a deontic constraint. But this is surely the wrong result. One understands a person of principle refusing to do something that clearly violates a deontic constraint even to save lots of lives. But to refuse to do something that has a 99% chance of not violating a deontic constraint, and that would save lots of lives, solely because of that 1% chance of violation, is very implausible.
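To make the arithmetic explicit, using the labels above plus $D$ for the disvalue of one death (my notation, not the post's): the strategy stipulates $V > 1{,}000{,}000\,D$, so on the linearity and expected-utility assumptions being granted here,

$$0.01\,V > 0.01 \times 1{,}000{,}000\,D = 10{,}000\,D,$$

and the expected disvalue of ϕing, driven almost entirely by the 1% chance of violation, exceeds the disvalue of the ten thousand deaths that refusing to ϕ allows. So the strategy tells you not to ϕ.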

While I think this argument is basically correct, it is also puzzling. Why is it so morally awful to knowingly violate a deontic constraint, while a small risk of violation can be tolerated? My guess is that it has to do with where deontic constraints come from: they come from the fact that in certain prohibited actions one is setting one’s will against a basic good, like the life of the innocent. In cases where violation is very likely, one simply is setting one’s will against the good; when it is unlikely, one is not.

Objection: The above argument assumes that the disvalue of deaths varies linearly in the number of deaths and that expected utility maximization is the way to go.

Response: Vary the case. Imagine that there is a ticking bomb that has a 99% chance of being defective and a 1% chance of being functional. If it’s functional, then when the timer goes off a million people die. And now suppose that the only way to disarm the bomb is to do something that has a 1% chance of violating a deontic constraint, with the two chances (functionality of the bomb and violation of constraint) being independent. It seems plausible that you should take the 1% risk of violating a deontic constraint to avoid a 1% chance of a million people dying.
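Spelling out the comparison with my own labels, and now without assuming linearity in the number of deaths: let $M$ be the disvalue of a million deaths, whatever that is, and let $V > M$ as the strategy stipulates. Setting aside the negligible non-deontic badness of the disarming act, the expected disvalues are

$$\text{disarm: } 0.01\,V \qquad\text{vs.}\qquad \text{don't disarm: } 0.01\,M,$$

and since $V > M$, the strategy still says not to disarm, against the plausible verdict that you should.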
