Monday, January 26, 2015

Act and rule utilitarianism

Rule utilitarianism holds that one should act according to those rules, or those usable rules, that if adopted universally would produce the highest utility. Act utilitarianism holds that one should do that act which produces the highest utility. There is an obvious worry that rule utilitarianism collapses into act utilitarianism. After all, wouldn't utility be maximized if everyone adopted the rule of performing that act which produces the highest utility? If so, then the rule utilitarian will have one rule, that of maximizing the utility in each act, and the two theories will be the same.

A standard answer to the collapse worry is either to focus on the fact that some rules are not humanly usable or to distinguish between adopting and following a rule. The rule of maximizing utility is so difficult to follow (both for epistemic reasons and because it's onerous) that even if everyone adopted it, it still wouldn't be universally followed.

Interestingly, though, in cases with infinitely many agents the two theories can differ even if we assume the agents would follow whatever rule they adopted.

Here's such a case. You are one of countably infinitely many agents, numbered 1,2,3,..., and one special subject, Jane. (Jane may or may not be among the infinitely many agents—it doesn't matter.) Each of the infinitely many agents has the opportunity to independently decide whether to costlessly press a button. What happens to Jane depends on who, if anyone, pressed the button:

  • If a finite number n of people press the button, then Jane gets n+1 units of utility.
  • If an infinite number of people press the button, then Jane gets a little bit of utility from each button press: specifically, she gets (1/2^k)/10 units of utility from person number k, if that person presses the button.

So, if infinitely many people press the button, Jane gets at most (1/2+1/4+1/8+...)/10=1/10 units of utility. If finitely many people press the button, Jane gets at least 1 unit of utility (if that finite number is zero), and possibly quite a lot more. So she's much better off if finitely many people press.
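
To make the payoffs concrete, here is a minimal Python sketch of Jane's utility in the two regimes (the function names and the truncation of the series are my own illustration, not part of the case):

    def jane_utility_finite(n):
        """Jane's utility when a finite number n of agents press: n + 1."""
        return n + 1

    def jane_utility_all_press(terms=200):
        """Jane's utility if everyone presses: the series sum of (1/2^k)/10,
        truncated at `terms` terms; the full series converges to 1/10."""
        return sum((0.5 ** k) / 10 for k in range(1, terms + 1))

    print(jane_utility_finite(0))    # 1: nobody presses
    print(jane_utility_finite(9))    # 10: nine pressers
    print(jane_utility_all_press())  # ~0.1: everyone presses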

Now suppose all of the agents are act utilitarians. Then each reasons:

My decision is independent of all the other decisions. If infinitely many other people press the button, then my pressing the button contributes (1/2^k)/10 units of utility to Jane, where k is my number, and costs nothing, so I should press. If only finitely many other people press the button, then my pressing the button contributes a full unit of utility to Jane and costs nothing, so I should press. In any case, I should press.
And so if everyone follows the rule of doing that individual act that maximizes utility, Jane ends up with one tenth of a unit of utility, an unsatisfactory result.
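
The dominance in this reasoning is easy to spell out. In a Python sketch (with names of my own choosing), person k's marginal contribution from pressing is positive whatever the others do:

    def marginal_contribution(k, infinitely_many_others_press):
        """Jane's utility gain from person k pressing, holding the others fixed."""
        if infinitely_many_others_press:
            return (0.5 ** k) / 10  # tiny, but still positive
        else:
            return 1.0              # Jane's total rises from n+1 to n+2

    print(marginal_contribution(3, True))   # 0.0125 > 0
    print(marginal_contribution(3, False))  # 1.0 > 0

Since the contribution is positive in both cases and pressing costs nothing, the act utilitarian presses no matter what.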

So from the point of view of act utilitarianism, in this scenario there is a clear answer as to what each person should do, and it's a rather unfortunate answer—it leads to a poor result for Jane.

Now assume rule utilitarianism, and let's suppose that we are dealing with perfect agents who can adopt any rule, no matter how complex, and who would follow any rule, no matter how difficult it is. Despite these stipulations, rule utilitarianism does not recommend that everyone maximize utility in this scenario. For if everyone maximizes utility, only a tenth of a unit is produced, and there are much better rules than that. For instance, the rule that one should press the button if and only if one's number is less than ten will produce ten units of utility if universally adopted and followed. And the rule that one should press the button if and only if one's number is less than 10^100 will produce even more utility.
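
A quick sketch of the comparison (Python again; the cutoff N parametrizes the family of rules just described) shows that these rules do strictly better the larger the cutoff:

    def cutoff_rule_utility(N):
        """Utility if everyone adopts and follows "press iff your number is < N".
        Agents 1 through N-1 press, so n = N-1 and Jane gets n + 1 = N units."""
        return (N - 1) + 1

    for N in (10, 100, 10**100):
        print(N, cutoff_rule_utility(N))  # the utility equals N itself

The utility of the cutoff rule is N, and N can be made as large as we please.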

In fact, it's easy to see that in our idealized case, rule utilitarianism fails to yield a verdict as to what we should do, as there is no optimal rule. We want to ensure that only finitely many people press the button, but as long as we keep to that, the more the better. So far from collapsing into the act utilitarian verdict, rule utilitarianism fails to yield a verdict.

A reasonable modification of rule utilitarianism, however, may allow for satisficing in cases where there is no optimal rule. Such a version of rule utilitarianism will presumably tell us that it's permissible to adopt the rule of pressing the button if and only if one's number is less than 10100. This version of rule utilitarianism also does not collapse into act utilitarianism, since the act utilitarian verdict, namely that one should unconditionally press the button, fails to satisfice, as it yields only 1/10 units of utility.
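
One way to formalize the satisficing version is with an explicit threshold. In the Python sketch below the threshold value is entirely my stipulation, chosen only for illustration:

    THRESHOLD = 1.0  # assumed satisficing level: secure at least one unit for Jane

    def rule_is_permissible(rule_utility):
        """A rule satisfices iff it guarantees at least THRESHOLD units."""
        return rule_utility >= THRESHOLD

    print(rule_is_permissible(0.1))      # False: the rule that everyone presses
    print(rule_is_permissible(10**100))  # True: press iff your number is < 10^100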

What about less idealized versions of rule utilitarianism, ones with more realistic assumptions about agents? Interestingly, those versions may collapse into act utilitarianism. Here's why. Given realistic assumptions about agents, we can expect that no matter what rule is given, there is some small independent chance that any given agent will press the button even if the rule says not to, just because the agent has made a mistake or is feeling malicious or has forgotten the rule. No matter how small that chance is, the result is that in any realistic version of the scenario we can expect that infinitely many people will press the button. And given that infinitely many other people will press the button, if only by mistake, the act utilitarian advice to press the button oneself is exactly right.
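
A rough simulation makes the point vivid (Python; the per-agent error probability p is an assumption of the model, not something fixed by the scenario). Among the first m agents the expected number of accidental pressers is m*p, which grows without bound in m; with infinitely many independent agents, the second Borel-Cantelli lemma yields infinitely many pressers with probability 1.

    import random

    def accidental_pressers(m, p, seed=0):
        """Count how many of the first m agents press by mistake (prob. p each)."""
        rng = random.Random(seed)
        return sum(rng.random() < p for _ in range(m))

    for m in (10**3, 10**4, 10**5, 10**6):
        print(m, accidental_pressers(m, p=1e-3))  # roughly m/1000 each time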

So, interestingly, in our infinitary case the more realistic versions of rule utilitarianism end up giving the same advice as act utilitarianism, while an idealized version ends up failing to yield a verdict, unless supplemented with a permission to satisfice.

But in any case, no version of rule utilitarianism generally collapses into act utilitarianism if such infinitary cases are possible. For there are standard finitary cases where realistic versions of rule utilitarianism fail to collapse, and now we see that there are infinitary ones where idealized versions fail to collapse.

Of course, the big question here is whether such cases are possible. My Causal Finitism (the view that nothing can have infinitely many items in its causal history) says they're not, and I think oddities such as the above give further evidence for Causal Finitism.

9 comments:

Mike Almeida said...

So from the point of view of act utilitarianism, in this scenario there is a clear answer as to what each person should do, and it's a rather unfortunate answer—it leads to a poor result for Jane.

Actually, AU offers a prescription to All that is inconsistent with its prescription to Each. Each ought to push the button, but it is not true that All should. And there is no way, in the case you describe, for all to fulfill their obligations. If everyone pushes, then infinitely many have failed to fulfill their obligations (despite everyone doing what she ought). If finitely many push the button, then again infinitely many fail to fulfill their obligations. And that is on the act utilitarian principle alone. No matter what anyone does, infinitely many fail to fulfill their obligations. It's a utilitarian moral dilemma.

Alexander R Pruss said...

Mike:

Who is All? :-)

Mike Almeida said...

All is everyone, all utilitarian agents as a group. It's a common phenomenon for AU that its collective recommendation (what the group ought to do) conflicts with its individual recommendations (what each member of the group ought to do). For instance, it might recommend that each of us goes onto the ice to save the drowning child. But if we all go onto the ice--following our AU recommendation--we will all violate our AU recommendation, since we should not all go out onto the ice (it will collapse and many more will drown).

Incidentally, totally off the point, but your captcha is asking for verification three and four times in a row.

Alexander R Pruss said...

The ice case is pretty complicated. It depends on what one's expectations are about others going on the ice. If the probability that a close to or above critical mass of people is going to be on the ice is high enough, then AU will recommend that I not go on the ice.

But in my case, the individual recommendation is to press the button no matter what you think or know others will do.

Mike Almeida said...

The ice case is pretty simple, actually. Assume the following is true for each person: (i) if you were not to go onto the ice to save the child, no one else would and (ii) if you were to go onto the ice, no one else would. It follows from (i) and (ii) that each person should go onto the ice to save the child. But if everyone does go, no one would fulfill his obligations.

Your case is of course different. In your case each person ought to push the button, but if everyone does, then an infinite number would not fulfill his obligations. Your case has a structure similar to generalized quasi-PD's. Take the case of over-fishing. If one more person over-fishes, no matter how many others are doing it, it makes no difference to the badness of the outcome, and it benefits him. He should over-fish. But if everyone over-fishes the outcome is much worse than if a small number do (in which case it is better). In your case, if everyone pushes the button it is much worse than if a finite number do.

Alexander R Pruss said...

If everybody goes, then (i) and (ii) aren't true.

As for overfishing kinds of cases, I deny that it's true that if one more person overfishes, no matter how many others are doing it, it makes no difference. By transitivity of "it makes no difference" one can then derive that it makes no difference if everyone overfishes.

Mike Almeida said...

If everybody goes, then (i) and (ii) aren't true.

That's not quite right. (i) and (ii) are true. If everyone were to do what AU prescribes, then neither would be true. But there's nothing new or surprising in that, certainly.

I deny that it's true that if one more person overfishes, no matter how many others are doing it, it makes no difference.

That's a position that some people take, but it is not a typical utilitarian position. It's a position that commits you to there being imperceptible decrements in value. I, and most utilitarians, deny that there are such imperceptible decrements. So I deny that every additional increase in overfishing makes a moral (i.e., utilitarian) difference. It is a difference that is below the level of perception. (For what it's worth, in many cases there is literally no difference at all, as in voting cases, where any difference your individual act makes comes only after some distant threshold is met.)

Hassnain Jamil said...
This comment has been removed by the author.
Hassnain Jamil said...

Act Utilitarianism

Act utilitarianism is the view that the right action is the one that brings the greatest happiness to the greatest number of people. On this view, the morality of an action is determined by its usefulness to most of the people affected: an act accords with the moral rules when it brings about the greater good or happiness.

Rule Utilitarianism

Rule utilitarianism, on the other hand, is the view that an action is morally right if it conforms to rules that lead to the greatest good or happiness. On this view, the correctness of an action is determined by the correctness of the rule it follows, and if the correct rule is followed, the greatest good or happiness is achieved.
