Thursday, December 31, 2015

Utilities and deontology

You are a police officer, and it is looking to you like Glossop is about to kill Fink-Nottle with a shotgun. You hear Glossop say to Fink-Nottle: "This will pay you back for stealing Madeline's affections." Your justified credence that Glossop is about to murder Fink-Nottle is, say, 0.9999, though there is some small chance that, say, Glossop and Fink-Nottle are practicing a fight scene for an amateur theatrical. The only thing you can see that you can do to save Fink-Nottle is to shoot Glossop dead (you can't yell at them, as you're too far away for them to hear you--you only know what Glossop is saying because you can read his lips). This seems to be the right thing to do, even though you risk a probability of 0.0001 that you are killing an innocent man.

On the other hand, it is not permissible to kill someone you know for sure to be innocent in order to save 9999 others.

There is an apparent tension between these two judgments. Standard decision-theoretic considerations suggest that if it is worth taking a 0.0001 probability risk of an adverse outcome (the killing of an innocent person, in this case) in order to secure a 0.9999 chance of some benefit (saving an innocent's life), then the disvalue of the adverse outcome must be less than 9999 times the value of the benefit. Thus, it would follow from the first judgment that the disvalue of killing an innocent person is less than 9999 times the value of saving the life of an innocent person. But if so, then it seems it would be worthwhile to kill one innocent to save 9999 others.
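As a quick sanity check on the arithmetic, the expected-value comparison in the paragraph above can be sketched as follows (the unit value of saving a life is an arbitrary illustrative assumption, not something in the post):

```python
# Expected-value reading of the argument (illustrative units).
# Let B = value of saving one innocent life, D = disvalue of killing
# one innocent. On the standard decision-theoretic reading, shooting
# Glossop is worth it iff 0.9999 * B > 0.0001 * D, i.e. iff D < 9999 * B.

p_guilty = 0.9999
p_innocent = 1 - p_guilty  # 0.0001

B = 1.0  # assumed value of saving one innocent (arbitrary unit)

# Break-even disvalue of killing an innocent: D must be below this
# for shooting Glossop to maximize expected value.
D_threshold = (p_guilty / p_innocent) * B

print(D_threshold)  # approximately 9999

# The alleged tension: if D < 9999 * B, then killing one known innocent
# to save 9999 others also seems to come out ahead, since 9999 * B > D.
```

This is only a restatement of the fallacious inference the post goes on to diagnose: it treats "killing an innocent who looks guilty" and "killing a known innocent" as the same outcome with a single disvalue D, which is exactly the assumption the post rejects.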

Risk aversion is relevant to such judgments. But risk aversion tends to reduce the choiceworthiness (or at least apparent choiceworthiness) of actions involving uncertainty, so it's going to make it harder to justify killing Glossop, and that only strengthens the argument that if it's permissible to kill Glossop, it's permissible to kill one innocent to save 9999.

The deontologist might use the above line of argument to challenge the applicability of standard decision-theoretic considerations to moral questions. The person committed to such a decision theory might, instead, use the line of argument to undermine deontology.

The whole above line of thought is, however, fallacious. For in killing Glossop, you accept a risk of 0.0001 of

  1. killing an innocent who looks guilty to you.
But in killing the person you know for sure to be innocent, you accept a certainty of
  2. killing an innocent who is known by you for sure to be innocent.
These are different actions, and hence it is unsurprising that they have different disvalues. Indeed, suppose you have amnesia and you know that you did (1) or (2) yesterday. You then clearly have reason to hope that what you did was (1). So the above argument for a tension between decision theory and deontology fails. I suspect others succeed, but that's for other occasions.


David Gordon said...

This doesn't affect the substance of your argument, but shouldn't killing Glossop by shooting him also be assigned a probability value?

Alexander R Pruss said...

Yeah. I'm assuming that all the unspecified stuff is known for sure.

Erenan said...

Maybe Glossop doesn't love Madeleine any longer and instead of simply breaking it off with her directly and potentially wounding her sensitive emotions, he promised Fink-Nottle his valuable antique shotgun in exchange for Fink-Nottle's stealing her affections and making the breakup easier on all parties?