Bob and Carl are drowning and you can save only one of them. Bob is a human being in the prime of life, physically and mentally healthy, highly intelligent, and leading a happy and fulfilling life as a physicist committed to lifelong celibacy. To look at him, Carl is Bob’s identical twin. Carl has the same physical and mental powers as Bob, and leads a very similar happy and fulfilling life as a physicist committed to lifelong celibacy.
But there is one crucial difference, which you know about but Carl does not. Carl is actually a member of a superintelligent humanoid alien species. However, due to an unfortunate and untreatable genetic condition, Carl suffers from a severe intellectual impairment, having merely the intelligence of a highly intelligent human. In order that Carl might avoid the stigma of the impairment, his parents had some highly sophisticated surgery done on him to make him fit into human society, and arranged for him to be adopted by a human family and raised as a human. No one on earth except you will ever know that Carl isn't human. You know because you happened to see the aliens arranging this (but you haven't told anyone, because you don't want people to think you are crazy).
Should you save Bob or Carl from drowning? My intuition is that if the above is all that you know, you have no reason to prefer saving one over the other. If one of them is slightly more likely to be saved by you (e.g., they are slightly closer to you), you should go for that one, but otherwise it’s a toss-up.
But notice that if you save Carl, there will be more natural evil in the world: There will be a severe intellectual impairment, which won’t be present if you choose to save Bob instead. It seems pretty plausible that:
1. If you have a choice between two otherwise permissible courses of action, which result in the same goods, but one of them results in exactly one additional evil, you have a moral reason to choose the course of action that does not result in the evil.
Thus, it seems, you should save Bob.
So there is something paradoxical here. On the one hand, there seems to be no reason to pick Bob over Carl. On the other hand, the plausible general ethical principle (1) suggests you should pick Bob.
How can we get out of this paradox? Here are two options.
First, one could say that the impairment is not an evil at all. As long as Carl leads a fulfilling life—even if it is merely fulfilling by human standards and not those of his species—his impairment is no evil. Indeed, we might even take the above story to be a reductio ad absurdum of an Aristotelian picture on which species have norms attached to them and on which falling short of these norms is a harm.
Second, one could argue that principle (1) does not actually apply to the case. For there is a difference of goods in saving Carl: you are saving a member of a superintelligent species, while in saving Bob you are saving a mere human. For this to fit with the intuition that it's a toss-up whether to save Bob or Carl, it has to be the case that what the superintelligence of his species adds to the reasons for saving Carl is balanced by what his abnormally low intelligence subtracts from those reasons.
Of these options, I am more attracted to the second. And the second has an interesting and important consequence: "mere" membership in a natural kind can have significant value. This has important repercussions for the status of the human fetus.
There is another interesting evil that arises in such a case. If one takes knowledge to be an intrinsic normative good, then note that in a world where you save Carl over Bob, billions of agents will have a false belief, namely, that "Carl is a human". So, there is another reason to save Bob: you prevent billions of cases of slight evil.
But it still seems as though you shouldn't have a reason to save Bob over Carl. Maybe this could be used as a reductio against the position that *all* knowledge is an intrinsic normative good, suggesting instead that only certain types of knowledge constitute normative goods.
The morally correct path is to try to save both, and to settle for saving only one if circumstances force that outcome. In practice, save the closer one and, while doing so, try to save the second as well.
It occurred to me that Bob might fall in love and decide to have a family. That possibility could be another reason to choose Bob. And if he was unable to father children because of some impairment, then both Bob and Carl would have impairments. I also wonder if we should not choose Bob because he, like us, is human. If we do not care whether someone is human or not, why should we care how clever a person is? It also occurred to me that the aliens are not really better than us. Would we turn an intellectually subnormal person into an intellectually normal dog if we could? In short, we do seem to have some reason to save Bob over Carl.
The slight evils due to false beliefs are interesting, and it is interesting how much we ignore them. For instance, there was a time when "everybody" thought no one could run a sub-four-minute mile. Was that *any* reason for Bannister not to try? Multiply the numbers: suppose an infinity of aliens followed human sports, and suppose they all thought Bannister would not succeed. Would *that* be any reason for him not to try?
I don't know what to make of it.
I share the same feeling of mystery. I don't know what to make of it either. I am nonetheless convinced that at least some knowledge is an intrinsic normative good.
Perhaps the answer to the Bannister case, though, is that the fact that all those agents hold the false belief is only an evil *in general*; i.e., it is only correct to say that an otherwise identical world W in which the agents hold the true belief that a sub-four-minute mile is possible is a better world. It isn't correct, though, to say that any agent ought to make decisions based on the number of false beliefs in a world. Perhaps "false beliefs" are a peculiar type of evil with different moral consequences than other evils.
It could be that there is a hierarchy of goods: kinds of good G1 and G2 such that no amount of G1 -- not even an infinite amount! -- is worth any amount of G2. Or maybe it's not so much a matter of how much the goods are worth as of what sorts of reasons they generate for us. Perhaps the value of true belief generates very little or no reason to adjust reality to fit belief, but strong reason to adjust belief to fit reality.
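One might make the lexical-priority idea precise as follows (just a sketch; $v_1$ and $v_2$ are assumed value functions measuring the value contributed by given amounts of G1 and G2): the claim would be that for every amount $x$ of G1, even an infinite one, and every positive amount $y$ of G2,

$$v_1(x) < v_2(y),$$

so that outcomes are ranked lexicographically: compare them by their G2 content first, and appeal to G1 only to break ties.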
Ah. Yes, I think that's correct. At least, your latter point. True beliefs are goods, but just different kinds, at least in the reasons/obligations they generate for us. This has been very illuminating.