Let’s suppose disease X, if medically unchecked, will kill 4.00% of the population, and that there is one and only one intervention available: a costless vaccine that is 100% effective at preventing X but that kills 3.99% of those who take it. (This is, of course, a very different situation from the one we are in regarding COVID-19, where we have extremely safe vaccines.) Moreover, there is no correlation between those who would be killed by X and those who would be killed by the vaccine.
Assuming there are no other relevant consequences (e.g., people’s loss of faith in vaccines leading to lower vaccine uptake in other cases), a utilitarian calculation says that the vaccine should be used: with a world population of roughly 7.9 billion, instead of 316.0 million people dying, 315.2 million people would die, so about 800,000 fewer people would die. That’s an enormous benefit.
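The arithmetic behind these figures can be checked in a few lines, assuming (as the 316.0 million figure implies, though the post does not state it) a world population of about 7.9 billion:

```python
# Expected-deaths arithmetic for the hypothetical vaccine.
# Assumption: world population of about 7.9 billion (implied by
# the 316.0 million figure, not stated in the post).
population = 7.9e9

deaths_disease = 0.0400 * population  # everyone unvaccinated: X kills 4.00%
deaths_vaccine = 0.0399 * population  # everyone vaccinated: vaccine kills 3.99%
saved = deaths_disease - deaths_vaccine

print(f"{deaths_disease:,.0f}")  # 316,000,000
print(f"{deaths_vaccine:,.0f}")  # 315,210,000
print(f"{saved:,.0f}")           # 790,000 (about 0.8 million)
```

Note that the exact difference is 790,000; the round figure of 800,000 comes from working with the numbers rounded to one decimal place (316.0 and 315.2 million).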
But it’s not completely clear that this costless vaccine should be promoted. For the 315.2 million who would die from the vaccine would be killed by us (i.e., us humans). There is at least a case to be made that allowing 316.0 million deaths is preferable to causing 315.2 million. The Principle of Double Effect may justify the vaccination because the deaths are not intentional—they are neither ends nor means—but still one might think that there is a doing/allowing distinction that favors allowing the deaths.
I am not confident what to say in the above case. But suppose the numbers are even closer. Suppose that we have extremely precise predictions and they show that the hypothetical costless vaccine would kill exactly one fewer person than would be killed by X. In that case, I do feel a strong pull to thinking this vaccine should not be marketed. On the other hand, if the numbers are further apart, it becomes clearer to me that the vaccine is worth it. If the vaccine kills 2% of the population while X kills 4%, the vaccine seems worthwhile (assuming no other relevant consequences). In that case, wanting to keep our hands clean by refusing to vaccinate would result in 158 million more people dying. (That said, I doubt our medical establishment would allow a vaccine that kills 2% of the population even if the vaccine would result in 158 million fewer people dying. I think our medical establishment is excessively risk averse and disvalues medically-caused deaths above deaths from disease to a degree that is morally unjustified.)
From a first-person view, though, I lose my intuition that if the vaccine only kills one fewer person than the disease, then the vaccine should not be administered. Suppose I am biking and my bike is coasting down a smooth hill. I can let the bike continue to coast to the bottom of the hill, or I can turn off into a side path that has just appeared. Suddenly I acquire the following information: by the main path there will be a tiger that has a 4% chance of eating any cyclist passing by, while by the side path there will be a different tiger that has “only” a 3.99999999% chance of eating a cyclist. Clearly, I should turn to the side path, notwithstanding the fact that if the tiger on the side path eats me, it will have eaten me because of my free choice to turn, while if the tiger on the main path eats me, that’s just due to my bike’s inertia. Similarly, then, if the vaccine is truly costless (i.e., no inconvenience, no pain, etc.), and it decreases my chance of death from 4% to 3.99999999% (that’s roughly what a one-person difference worldwide translates to), I should go for it.
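The parenthetical claim that one fewer death worldwide corresponds to roughly the 4% vs. 3.99999999% gap can also be made precise. A sketch, again assuming a population of about 7.9 billion (an assumption; the post does not fix the number):

```python
# How much does "one fewer death worldwide" change an individual's risk?
# Assumption: world population of about 7.9 billion.
population = 7.9e9

p_disease = 0.04                        # 4% chance of death from X
p_vaccine = p_disease - 1 / population  # vaccine kills exactly one fewer person

print(f"{p_vaccine:.10%}")  # 3.9999999873%
```

So the per-person reduction is about 1.3 × 10⁻¹⁰, consistent with the 3.99999999% figure in the text once rounded.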
So, in the case where the vaccine kills only one fewer person than the disease would have killed, from a first-person view, I get the intuition that I should get the vaccine. From a third-person view, I get the intuition that the vaccine shouldn’t be promoted. Perhaps the two intuitions can be made to fit together: perhaps the costless vaccine that kills only one fewer person should not be promoted, but the facts should be made public and the vaccine should be made freely available (since it is costless) to anyone who asks for it.
This suggests an interesting distinction between first-person and third-person decision-making. The doing/allowing distinction, which favors evils not of our causing over evils of our causing even when the latter are non-intentional, seems more compelling in third-person cases. And perhaps one can transform third-person cases into something more like first-person ones through unencouraged informed consent.
(Of course, in practice, nothing is costless. And in a case where there is such a slight difference in danger as 4% vs. 3.99999999%, the costs are going to be the decisive factor. Even in my tiger case, if we construe it realistically, the effort and risk of making a turn on a hill will override the probabilistic benefits of facing the slightly less hungry tiger.)