You are the captain of a small, damaged spaceship two light years from Earth, with a crew of ten. Your hyperdrive is failing. You can activate it right now, in a last burst of energy, and then get home. If you delay activating the hyperdrive, it will become irreparable, and you will have to travel to Earth at sublight speed, which will take 10 years, causing severe disruption to the personal lives of the crew.
The problem is this. When such a failing hyperdrive is activated, everything within a million kilometers of the spaceship’s position will be briefly bathed in lethal radiation, though the spaceship itself will be protected and the radiation will quickly dissipate. Your scanners, fortunately, show no planets or spaceships within a million kilometers, but they do show one large asteroid. You know there are two asteroids that pass through that area of space: one of them is inhabited, with a population of 10 million, while the other is barren. You turn your telescope to the asteroid. It looks like the uninhabited asteroid.
So, you come to believe there is no life within a million kilometers. Moreover, you believe that as the captain of the ship you have a responsibility to get the crew home in a reasonable amount of time, unless of course this would cause undue harm. Thus, you believe:
- (1) You are obligated to activate the hyperdrive.
You reflect, however, on the fact that ships' captains have made mistakes in asteroid identification before. You pull up the training database and find that at this distance, captains with your level of training make the relevant mistake only once in a million times. So you still believe that this is the lifeless asteroid, but now you get worried. You imagine a million starship captains making the same kind of decision as you. As a result, 10 million crew members get home on time to their friends and families, but in one case, 10 million people are wiped out on an asteroid. You conclude, reasonably, that this is an unacceptable level of risk. One in a million isn't good enough. So, you conclude:
- (2) You are obligated not to activate the hyperdrive.
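For concreteness, here is the frequency argument as a back-of-the-envelope calculation, sketched in Python; the numbers are just the ones given in the story, and aggregating over a million imagined captains is purely an expository device:

```python
# The captain's frequency argument, using only numbers from the story.
error_rate = 1e-6            # captains at this distance misidentify the asteroid once in a million times
population_at_risk = 10_000_000
crew_size = 10
delay_years = 10

# Imagine a million captains in the same position, all activating.
captains = 1_000_000
crew_home_on_time = captains * crew_size                      # 10,000,000 crew members reunited on time
expected_catastrophes = captains * error_rate                 # about 1 inhabited asteroid irradiated
expected_deaths = expected_catastrophes * population_at_risk  # about 10,000,000 deaths

# Per individual captain, the trade is roughly 10 expected deaths
# against sparing 10 crew members a 10-year voyage.
print(expected_deaths / captains, crew_size * delay_years)
```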
This reflection on the possibility of perceptual error does not remove your belief in (1), indeed your knowledge of (1). After all, a one in a million chance of error is less than the chance of error in many cases of ordinary everyday perceptual knowledge—and, indeed, asteroid identification just is a case of everyday perceptual knowledge for a captain like yourself.
Maybe this is just a case of your knowing you are in a real moral dilemma: you have two conflicting duties, one to activate the hyperdrive and the other not to. But this fails to account for the asymmetry in the case, namely that caution should prevail, and there has to be an important sense of “right” in which the right decision is not to activate the hyperdrive.
I don’t know what to say about cases like this. Here is my best start. First, make a distinction between subjective and objective obligations. This disambiguates (1) and (2) as:
- (3) You are objectively obligated to activate the hyperdrive.
- (4) You are subjectively obligated not to activate the hyperdrive.
Second, deny the plausible bridge principle:
- (5) If you believe you are objectively obligated to ϕ, then you are subjectively obligated to ϕ.
You need to deny (5), since you believe (3), and if (5) were true, it would follow that you are subjectively obligated to activate the hyperdrive, and we would once again have lost sight of the asymmetric "right" on which the right thing is not to activate.
This works as far as it goes, though we need some sort of replacement for (5), some other principle bridging from the objective to the subjective. It is not clear to me what that principle is. A first try is some sort of analogue of expected utility calculations, where instead of utilities we have the moral weights of non-violated duties. But I doubt that these weights can be handled numerically.
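Just to make the shape of that first try concrete, and not as a serious proposal, here is a toy sketch of an expected-moral-weight rule; the duty names, probabilities, and weights below are all invented for illustration, and choosing such weights is exactly the numerical judgment I just said I doubt can be made:

```python
# Toy version of the "first try": score each act by the probability-weighted
# moral weight of the duties it leaves unviolated, and pick the highest score.
# All names and numbers are placeholders for illustration only.

P_BARREN = 1 - 1e-6     # epistemic probability that the asteroid is barren
P_INHABITED = 1e-6      # epistemic probability that it is inhabited

W_TIMELY_RETURN = 1.0   # placeholder weight of the duty to get the crew home on time
W_NOT_KILL = 1e9        # placeholder weight of the duty not to kill millions

def unviolated_weight(act: str, state: str) -> float:
    """Total weight of the duties not violated by this act in this state."""
    weight = 0.0
    if act == "activate":
        weight += W_TIMELY_RETURN   # activating gets the crew home on time
    if not (act == "activate" and state == "inhabited"):
        weight += W_NOT_KILL        # only activating near the inhabited asteroid kills
    return weight

def expected_unviolated_weight(act: str) -> float:
    return (P_BARREN * unviolated_weight(act, "barren")
            + P_INHABITED * unviolated_weight(act, "inhabited"))

best = max(["activate", "wait"], key=expected_unviolated_weight)
print(best, {act: expected_unviolated_weight(act) for act in ["activate", "wait"]})
```

With these placeholders the rule delivers the cautious verdict (don't activate), but only because of how heavily the duty not to kill is weighted relative to the probabilities, which is just the sort of numerical judgment that worries me.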
And I still don’t know how to handle the problem of ignorance of the bridge principles between the objective and the subjective.
It seems there is some complex function from one’s total mental state to one’s full-stop subjective obligation, a function that is not known to us at present. (Which is a bit weird, given that it is the function that governs subjective obligation.)
A way out of this mess would be to have some sort of infallibilism about subjective obligation. Perhaps there is some specially epistemically illuminated state that we are in when we are subjectively obligated, a state that is a deliverance of a conscience that is at least infallible with respect to subjective obligation. I see difficulties for this approach, but maybe there is some hope, too.
Objection: Because of pragmatic encroachment, the standards for knowledge go up heavily when ten million lives are at stake, and you don’t know that the asteroid is uninhabited when lives depend on this. Thus, you don’t know (1), whereas you do know (2), which restores the crucial action-guiding asymmetry.
Response: I don’t buy pragmatic encroachment. I think the only rational process by which you lose knowledge is getting counterevidence; the stakes going up does not make for counterevidence.
But this is a big discussion in epistemology. I think I can avoid it by supposing (as I expect is true) that you are no more than 99.9999% sure of the risk principles underlying the cautionary judgment in (2). Moreover, the stakes go up for that judgment just as much as they do for (1). Hence, I can suppose that you know neither (1) nor (2), but are merely very confident, and rationally so, of both. This restores the symmetry between (1) and (2).