Tuesday, March 5, 2019

More on moral risk

You are the captain of a small damaged spaceship two light years from Earth, with a crew of ten. Your hyperdrive is failing. You can activate it right now, in a last burst of energy, and then get home. If you delay activating the hyperdrive, it will become irreparable, and you will have to travel to Earth at sublight speed, which will take 10 years, causing severe disruption to the personal lives of the crew.

The problem is this. When such a failing hyperdrive is activated, everything within a million kilometers of the spaceship’s position will be briefly bathed in lethal radiation, though the spaceship itself will be protected and the radiation will quickly dissipate. Your scanners, fortunately, show no planets or spaceships within a million kilometers, but they do show one large asteroid. You know there are two asteroids that pass through that area of space: one of them is inhabited, with a population of 10 million, while the other is barren. You turn your telescope to the asteroid. It looks like the uninhabited asteroid.

So, you come to believe there is no life within a million kilometers. Moreover, you believe that, as the captain of the ship, you have a responsibility to get the crew home in a reasonable amount of time, unless of course doing so causes undue harm. Thus, you believe:

  1. You are obligated to activate the hyperdrive.

You reflect, however, on the fact that ship’s captains have made mistakes in asteroid identification before. You pull up the training database, and find that at this distance, captains with your level of training make the relevant mistake only once in a million cases. So you still believe that this is the lifeless asteroid, but now you get worried. You imagine a million starship captains making the same kind of decision as you. As a result, 10 million crew members get home on time to their friends and families, but in one case, 10 million people are wiped out on an asteroid. You conclude, reasonably, that this is an unacceptable level of risk. One in a million isn’t good enough. So, you conclude:

  2. You are obligated not to activate the hyperdrive.
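
To see why one in a million isn’t good enough here, it may help to spell out the expected-cost comparison implicit in the imagined million captains. This is a rough sketch only: let d stand for the disvalue of one crew member’s ten-year delay and k for the disvalue of one innocent death, with k vastly greater than d; these symbols are placeholders, not anything the case itself fixes.

```latex
% A rough sketch only; d and k are placeholder disvalues (needs amsmath for \text).
\[
  \underbrace{\frac{1}{10^{6}} \times 10^{7} \times k}_{\text{expected cost of activating}} = 10k
  \qquad \text{versus} \qquad
  \underbrace{10 \times d}_{\text{cost of waiting}} = 10d .
\]
% In expectation, activating costs ten deaths' worth of disvalue, while waiting
% costs ten decade-long delays; since k >> d, the cautious judgment in (2) wins.
```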

This reflection on the possibility of perceptual error does not remove your belief in (1), indeed your knowledge of (1). After all, a one in a million chance of error is less than the chance of error in many cases of ordinary everyday perceptual knowledge—and, indeed, asteroid identification just is a case of everyday perceptual knowledge for a captain like yourself.

Maybe this is just a case of your knowing you are in a real moral dilemma: you have two conflicting duties, one to activate the hyperdrive and the other not to. But this fails to account for the asymmetry in the case, namely that caution should prevail: there has to be an important sense of “right” in which the right decision is not to activate the hyperdrive.

I don’t know what to say about cases like this. Here is my best start. First, make a distinction between subjective and objective obligations. This disambiguates (1) and (2) as:

  3. You are objectively obligated to activate the hyperdrive.

  4. You are subjectively obligated not to activate the hyperdrive.

Second, deny the plausible bridge principle:

  5. If you believe you are objectively obligated to ϕ, then you are subjectively obligated to ϕ.

You need to deny (5), since you believe (3), and if (5) were true, then it would follow that you are subjectively obligated to activate the hyperdrive, and we would once again have lost sight of the asymmetric “right” on which the right thing is not to activate.

This works as far as it goes, though we need some sort of replacement for (5), some other principle bridging from the objective to the subjective. What that principle is, is not clear to me. A first try, sketched schematically below, is some sort of analogue to expected utility calculations, where instead of utilities we have the moral weights of non-violated duties. But I doubt that these weights can be handled numerically.
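
In the sketch, P is a credence function over epistemically possible states and W assigns a numerical moral weight to the duties an action would leave non-violated in a state; both are placeholders, and the doubt just expressed is precisely whether any such numerical W exists.

```latex
% Purely schematic: P(s) is the agent's credence in state s, and W(phi, s) is
% the combined moral weight of the duties that phi-ing would leave non-violated in s.
\[
  \text{You are subjectively obligated to } \phi
  \quad \text{iff} \quad
  \phi \in \arg\max_{\psi} \; \sum_{s} P(s)\, W(\psi, s).
\]
```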

And I still don’t know how to handle the problem of ignorance of the bridge principles between the objective and the subjective.

It seems there is some complex function from one’s total mental state to one’s full-stop subjective obligation, a function that is not known to us at present. (Which is a bit weird, in that it is the function that governs subjective obligation.)

A way out of this mess would be to have some sort of infallibilism about subjective obligation. Perhaps there is some specially epistemically illuminated state that we are in when we are subjectively obligated, a state that is a deliverance of a conscience that is at least infallible with respect to subjective obligation. I see difficulties for this approach, but maybe there is some hope, too.

Objection: Because of pragmatic encroachment, the standards for knowledge go up heavily when ten million lives are at stake, and you don’t know that the asteroid is uninhabited when lives depend on this. Thus, you don’t know (1), whereas you do know (2), which restores the crucial action-guiding asymmetry.

Response: I don’t buy pragmatic encroachment. I think the only rational process by which you lose knowledge is getting counterevidence; the stakes going up does not make for counterevidence.

But this is a big discussion in epistemology. I think I can avoid it by supposing (as I expect is true) that you are no more than 99.9999% sure of the risk principles underlying the cautionary judgment in (2). Moreover, the stakes go up for that judgment just as much as they do for (1). Hence, I can suppose that you know neither (1) nor (2), but are merely very confident, and rationally so, of both. This restores the symmetry between (1) and (2).

13 comments:

  1. Genuine real-life examples of similar situations involve things such as large ships near recreational waters such as beaches, or cars on streets near schools. There is generally a legal obligation in such situations not to activate the engines in a potentially lethal way (i.e., speeding in the case of schools and cars) even if it is more or less certain that no children are on the street or in the water.

    The usual "real life" solution in these cases thus becomes one of adding a legal or company policy obligation (or not adding one in cases where there is no need for such) to substitute for the subjective obligation, which tends to help resolve such dilemmas in a typical social setting.

    So I would expect that the ship's captain's manual would have a "distance from nearest inhabited asteroid" rule for the hyperdrive that would recommend against it being activated.

  2. This is a good thought experiment.

    As I think about it, the basic structure of the problem seems to be this. There is a clear right thing to do, given the facts. (If it’s the inhabited asteroid, don’t fire the hyperdrive, otherwise do.) You are calling this “objective obligation.” But you don’t know all these facts, or not for certain. So then we invoke risk-handling principles to decide what the right thing to do is, given your epistemic awareness of the facts (e.g. choose the option with greater expected utility). You are calling this “subjective obligation.”

    One kind of problem (the present kind) arises from the observation that we can *know* what objective obligations we have, even while our subjective obligations are different because we don’t know them with absolute certainty. The conflict you are pointing out amounts to asking whether we should act on objective or subjective obligations. The answer is subjective obligations. This is paradoxical because subjective obligations are in a sense derived from objective obligations. But what we learn from the paradox is that, strictly speaking, knowledge of one’s objective obligations is irrelevant. What matters is your *epistemic situation*, and this can be more complicated and interesting than just knowledge.

  3. Another kind of problem is that we might be uncertain (or: in a poor epistemic situation) with regard to the risk-handling principles. Maybe Carl is unsure between (5) and (6), for example: what then? Do we invoke meta-risk-handling principles? Of course, whatever we say, Carl could be unsure of that too.

    It seems to me that what is driving this problem is the conviction that there *must* be some fully internally accessible right answer for the agent to act on. Anything else would not be “fair”; it would violate the ought-implies-can principle. (I think something like this drives incompatibilism about moral responsibility, FWIW.) There is an exact parallel in epistemology with the concept of justification. There are the first-order facts that determine the truth; there is the “justified belief” that is what one’s evidence (i.e. one’s awareness of the facts) indicates; but then we get problems if someone doesn’t know how to handle evidence correctly or is unsure between different evidence-handling principles.

    But people question the idea of justification in epistemology. If you are enough of an externalist, there is (1) the truth; (2) what your evidence really indicates on correct evidence-handling principles; (3) what your evidence implies given the evidence-handling principles you actually accept; etc. But what you don’t need is the idea of a “justified belief” that you are blame-free for accepting. You can just say, there are various different kinds of mistakes to make, which involve different kinds of failures.

    If we give up the idea that there *has* to be a “right answer,” fully internally accessible to an agent, so that we can blame them for going wrong in ways that were fully in their power—if we give up this idea, then we become “externalists” about right action and the objective/subjective obligations business clears up considerably.

  4. Heath:

    The meta-level problem is what worries me the most.

    In the precise case I give, it is not plausible to give up the idea that there is a right answer. Not activating the engines seems to be clearly the right thing to do.

    Now, in the meta-level cases things do become harder to see.

    So, I think, the "no right answer always" option still requires us to have a story about which cases--like the one in my post--have a right answer, and which cases don't.

  5. William:

    That's interesting. Yes, that helps break the asymmetry. But of course there will be cases where there are no relevantly applicable rules, say when you are working in a completely new industry, or outside of any government jurisdiction (a factory in international waters), or when you must make a quick decision and don't know what the law says.

  6. Alex,

    About the hyperdrive case, I am agreeing with you. But there is still the larger question of when there is a right answer and when there isn't.

    Consider this progression:
    Zeroth-order facts: the actual facts on the ground.
    First-order facts: Your evidence about the zeroth-order facts.
    Second-order facts: What the first-order facts imply about what you should believe about zeroth-order facts.
    Third-order facts: Your evidence (etc.) for the second-order facts.
    Fourth-order facts: What the third-order facts imply about what you should believe about the second-order facts.
    Fifth-order facts: Your evidence (etc.) for the fourth-order facts.
    Etc.

    The zeroth order grounds objective obligations. If someone makes a mistake at the zeroth order but not at higher orders, those higher-order facts ground subjective obligations. Puzzles arise when mistakes are made at higher orders.

    In most cases, I think, either (1) people don’t make mistakes at the higher order, or (2) they make a mistake but one we think is blameless—say, they were raised in a culture where sheep entrails were taken to be reliable portents of the future, so we don’t blame them for acting on that mistaken second-order belief. If (1), there is no puzzle; if (2) there is no blame and hence no obligation. The difficulties arise when we don’t wish to take either of these routes.

  7. By the way, I think there are two problems here. One problem is practical and forward-looking: Given the facts about my evidence, what should I do? In hard cases, the agent agonizes about this before the decision.

    The second is backwards-looking: Given the facts about the agent's evidence, are they blameworthy or praiseworthy for their decision?

    It would be really nice if the two came together. But perhaps they don't.

    I have a story that I sort of like. It is highly idealized, and it may be that there is no way to make that story work when one removes the idealization.

    An agent first tries to figure out what they morally ought to do in the situation all things considered, including the evidence at all the orders, balancing all the uncertainties. This is a work of the conscience. The conscience then delivers or does not deliver a verdict. The verdict, unlike all the messy stuff that came beforehand, is clear: "Do this!" (This could be based on messy probabilities, of course. And it could be negative or disjunctive.) If the conscience fails to deliver a verdict, you are not praiseworthy or blameworthy whatever you do. But if the conscience delivers a verdict, you now have a choice whether to follow the verdict or not. If you choose to follow it, you are praiseworthy. If you choose not to follow it, you are blameworthy.

    The answer to the backwards-looking problem is simple: The agent is blameworthy iff (conscience delivered a verdict and the choice did not follow it); the agent is praiseworthy iff (conscience delivered a verdict and the choice followed it). (One of the idealizations is that I am not yet taking account of the supererogatory.)
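
    In symbols, with V for “conscience delivered a verdict” and F for “the choice followed the verdict” (this merely restates the idealized story; V and F are labels introduced here):

    ```latex
    % Merely a restatement of the idealized story above.
    \[
      \text{Blameworthy} \leftrightarrow (V \wedge \neg F),
      \qquad
      \text{Praiseworthy} \leftrightarrow (V \wedge F),
      \qquad
      \neg V \rightarrow \neg(\text{Blameworthy} \vee \text{Praiseworthy}).
    \]
    ```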

    The answer to the forwards-looking problem, however, is much more complex. For the agonizing forwards-looking thinking comes *before* the verdict of conscience. This thinking is what the verdict of conscience is based on. It won't do to just say: "Well, all I need to do is to follow conscience." For you don't yet know what that verdict will be. So the verdict of conscience story doesn't solve everything: there is still the question of what to do.

    Note that on this story, we don't need to say that there is an obligation to follow conscience. We can say that there is on the one hand what one ought to do all things considered (including the higher order stuff?), and there is on the other hand what one is praiseworthy or blameworthy for doing.

  8. What do you want to say about poorly formed consciences?

    a) you are never to blame for either having them, or acting on them
    b) you can be blamed for having them, but not for acting on them
    c) you can be blamed for both having them and acting on them

    Your position rules out (c). (a) seems doubtful to me. That leaves (b), which is an interesting nuanced position.

  9. On this view, I think you can be blamed for having poorly formed your conscience, but only if your conscience told you to form it better.

    I think blame fundamentally attaches to freely chosen actions (or chosen abstentions) against conscience, and derivatively to the relevant consequences. However, when counting "how blameworthy you are altogether" or "how much punishment you deserve", I want to avoid double counting of the derivative and fundamental blameworthiness. The person whose gun jams when he tries to shoot someone and the one whose gun fires and kills have the same fundamental blameworthiness; the latter is derivatively blameworthy for the victim's death (assuming this was a wrongful killing), but when counting deserved punishment, it would be double counting to count the derivative guilt on top of the fundamental guilt.

    When you freely form your conscience poorly (say, by abstaining from thinking harder when your conscience tells you that you should), the future wrong actions that relevantly flow from the mistaken conscience are apt to be relevant consequences of the poor formation, and your blame for them is merely derivative. In fact, I am inclined to say that you may be derivatively blameworthy but fundamentally praiseworthy for these actions if you did them because your mistaken conscience required them, though I know that will sound implausible to many.

    So my position is, roughly, (c) if blame is derivative and (a) if it's fundamental.

  10. All very interesting! If we see this in terms of the virtues, should we say an agent can have the virtue of conscientiousness/integrity (they don't go against conscience) while lacking some other relevant virtue (the mean between recklessness and timidity in risk-taking, for example)? Not necessarily their fault they lack that virtue - but they may lack it for all that.

    Must doesn't always imply can (we know that anyway since the right thing to do can be too complicated even to occur to the moral agent in the time at hand).

    So we can praise someone for showing the virtue of conscientiousness/integrity: they have fulfilled the negative obligation not to act against conscience - an objective negative obligation quite compatible with the negative obligation not to make objectively wrongful choices. Someone fully morally equipped to weigh risks would not have made such a reckless (or timid) choice - but the person still deserves credit for the virtue of conscientiousness shown in respecting the negative duty not to make any choice with the further thought: this is the wrong choice to make.

    Of course, all this is at a higher level than the zeroth-order facts, which don't seem to bear on moral character, although someone with good moral character tries their best to arrive at the zeroth-order facts.

  11. This comment has been removed by the author.

  12. Why can't both decisions be equally "right" or "moral"?

  13. White Dragon: We would blame the captain for activating the hyperdrive, even if it turned out that the asteroid was vacant. But if the captain exercises caution, there is nothing to blame them for. There is some sense in which one of the choices is the better one.
