What I have always found most attractive about utilitarianism is its elegant simplicity. What, according to the utilitarian, is the obligatory thing to do? That which maximizes the good. What is the good? The total welfare of all beings capable of having a welfare. Thus, facts about duty can either be fully characterized in terms of welfare (normative utilitarianism) or reduced to facts about welfare (metaethical utilitarianism). Moreover, we might further give a full characterization of welfare as pleasure and the absence of pain, or as the fulfillment of desire, thereby either fully characterizing facts about welfare in terms of prima facie non-normative facts, or maybe even reducing facts about welfare to these apparently non-normative facts. Thus, utilitarianism gives a characterization (necessary and sufficient conditions) of duty in terms of apparently non-normative facts, and maybe even reduces moral normativity to non-normative facts. This is a lovely theory, though false.
But this appearance of having given a description of all of obligation in non-normative terms is deceptive. There are two ways of putting the problem. One is to invoke uncertainty and the other is to invoke ubiquitous indeterminism (UI) and anti-Molinism (AM). I'll start with the second. According to anti-Molinism, there is no fact of the matter about what would result from a non-actual action when the action is connected to its consequences through an indeterministic chain of causes. Thus, if Frank doesn't take an aspirin, and if aspirin takings are connected indeterministically to headache reliefs, there is no fact of the matter about whether Frank's headache would be relieved by an aspirin. And according to ubiquitous indeterminism, all physical chains of causes are indeterministic. The most common interpretations of quantum mechanics give us reason to believe in ubiquitous indeterminism, while libertarianism about free will gives us reason to believe in practically ubiquitous indeterminism (because human beings might intervene in just about any chain of causes).
Of course, this means that given UI and AM, duty cannot simply be equated with the maximization of the good. A more complex formula is needed, and this, I think, introduces a significant degree of freedom into the theory—namely, how we handle the objective probabilities. This, in turn, makes the resulting theory significantly more complex and less elegant.
But, perhaps, it will be retorted that there is a canonical formula, namely maximizing the expected value of each action. This, however, is only one of many formulae that could be chosen. Another is maximizing the worst possible outcome (maximin). Yet another is maximizing the best possible outcome (maximax). And there are lots of other formulae available. For instance, for any positive number p, we might say that we should maximize E[|U|^p sgn U] (where sgn x = 1 if x > 0 and -1 if x < 0), or maybe E[pi/2 + arctan(U)], where U is utility.
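To make the space of options concrete, here is a small Python sketch of my own (the toy lottery and function names are purely illustrative, not part of any canonical formulation), scoring a single action, represented as a list of (probability, utility) pairs, under each of the rules just mentioned:

```python
import math

# A toy action, represented as a lottery over utility outcomes:
# (probability, utility) pairs. The numbers are purely illustrative.
lottery = [(0.5, 10.0), (0.3, -2.0), (0.2, 0.0)]

def expected_value(lottery):
    # The "canonical" rule: E[U].
    return sum(pr * u for pr, u in lottery)

def maximin_score(lottery):
    # Rank actions by their worst possible outcome.
    return min(u for _, u in lottery)

def maximax_score(lottery):
    # Rank actions by their best possible outcome.
    return max(u for _, u in lottery)

def signed_power_score(lottery, p_exp=2.0):
    # E[|U|^p sgn U]: for p > 1 this weights extreme outcomes more heavily.
    # (The sign of a zero outcome is irrelevant, since |0|^p = 0.)
    return sum(pr * (abs(u) ** p_exp) * math.copysign(1.0, u)
               for pr, u in lottery)

def arctan_score(lottery):
    # E[pi/2 + arctan(U)]: a bounded transform, so extreme outcomes
    # can only count for so much.
    return sum(pr * (math.pi / 2 + math.atan(u)) for pr, u in lottery)
```

Each rule then tells you to perform whichever available action gets the highest score, and different rules can easily disagree about which action that is.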
But perhaps maximizing the expected value is the simplest of all plausible formulae (maximax is implausible, and maximin is trivialized by the kind of ubiquitous indeterminism we have, which ensures that each action has basically the same set of possible utility outcomes, only with different probabilities). However, maximizing expected value leads to implausibilities even greater than those of standard deterministic utilitarianism. It is implausible enough that one should kill one innocent person to save two or three innocent lives. But that one should kill one innocent person for a 51 percent chance of saving two innocent lives, or for a 34 percent chance of saving three (which the expected value rule will imply in the case where the future happinesses of all the persons are equal), is yet more implausible. Or suppose that there are a hundred people, each of whom faces an independent 50 percent chance of death. By killing one innocent person, you can reduce the danger of death for each of these hundred people to 48.5 percent. Then, according to expected value maximization utilitarianism, you should do that.
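Spelled out, here is the expected-value bookkeeping behind these three cases (just my own back-of-the-envelope check in Python, on the assumption that each person at stake has the same future happiness, normalized to 1 per person):

```python
# Kill one innocent for a 51 percent chance of saving two:
# expected lives saved = 0.51 * 2 = 1.02, versus 1 life taken.
print(0.51 * 2)               # 1.02

# Kill one innocent for a 34 percent chance of saving three:
# expected lives saved = 0.34 * 3 = 1.02, again more than 1.
print(0.34 * 3)               # approximately 1.02

# Kill one innocent to cut each of 100 independent 50 percent death
# risks down to 48.5 percent:
# expected lives saved = 100 * (0.500 - 0.485) = 1.5, more than 1.
print(100 * (0.500 - 0.485))  # approximately 1.5
```

In each case the expected number of lives saved exceeds one, so the expected value rule endorses the killing.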
Or let's try a different sort of example. Suppose action A has a 51 percent chance of doubling the total future happiness of the human race (assume this happiness is positive), and a 49 percent chance of painlessly destroying the whole of the human race. Then (at least on the hedonistic version—desire satisfaction would require some more careful considerations), according to expected value maximization utilitarianism, you should do A. But clearly A is an irresponsible action.
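The arithmetic behind that verdict, with the present total future happiness of the human race normalized to H = 1 (again just an illustrative check of my own, taking painless annihilation to yield zero future happiness on the hedonistic version):

```python
H = 1.0                               # total future happiness if we do nothing
gamble = 0.51 * (2 * H) + 0.49 * 0.0  # 51% chance of 2H, 49% chance of annihilation
print(gamble)                         # 1.02, which exceeds H = 1
```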
There may be ways of avoiding such paradoxes. But any way of avoiding such paradoxes will be far from the elegant simplicity of utilitarianism.
Exactly the same problems come up in a deterministic or Molinist case in situations of uncertainty (and we are always in situations of uncertainty). We need an action-guiding concept of obligation that works in such situations. Whether we call this "subjective obligation" or "obligation" simpliciter, it is needed. And to handle this, we will lose the elegant simplicity of utilitarianism. Consider for instance the following case. Suppose action A is 99 percent likely in light of the evidence to increase the happiness of the human race by two percent, and has a one percent chance of destroying the human race. Then, you might actually justifiedly believe, maybe even know, that A will increase the happiness of the human race, since 99 percent likelihood may be enough for belief. But plainly you shouldn't do A in this case. Hence a justified belief that an action would maximize utility, and maybe even knowledge, is not enough.
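For what it is worth, the expected value rule from the earlier cases also endorses A here (same illustrative normalization, H = 1, with destruction of the human race counted as zero future happiness):

```python
H = 1.0                                    # current total future happiness
expected = 0.99 * (1.02 * H) + 0.01 * 0.0  # 99% chance of a 2% gain, 1% chance of losing all
print(expected)                            # about 1.0098, again more than H = 1
```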