Thursday, December 8, 2022

Utilitarianism, egoism and promises

Suppose Alice and Bob are perfect utilitarians or perfect amoral egoists in any combination. They are about to play a game where each, in a separate booth, raises a left hand or a right hand, and if they both raise the same hand, they both get something good. Otherwise, nobody gets that good. Nobody sees what they’re doing in the game: the game is fully automated. And they both have full shared knowledge of the above.
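The setup is a pure coordination game. As a minimal sketch (the payoff value of 1 for the good is an illustrative assumption, not from the post), we can enumerate the strategy profiles and check which ones are equilibria, in the sense that neither player gains by deviating alone:

```python
from itertools import product

GOOD = 1  # illustrative value of the good; 0 when hands mismatch

def payoff(alice, bob):
    """Both get the good iff they raise the same hand."""
    return (GOOD, GOOD) if alice == bob else (0, 0)

HANDS = ("left", "right")

def is_equilibrium(alice, bob):
    """Neither player can do strictly better by unilaterally switching hands."""
    a_pay, b_pay = payoff(alice, bob)
    a_best = all(payoff(h, bob)[0] <= a_pay for h in HANDS)
    b_best = all(payoff(alice, h)[1] <= b_pay for h in HANDS)
    return a_best and b_best

equilibria = [p for p in product(HANDS, HANDS) if is_equilibrium(*p)]
print(equilibria)  # the two matching profiles: (left, left) and (right, right)
```

Nothing in the payoffs distinguishes the two equilibria, which is why the promise, having no reason-giving force for these agents, does no work in selecting between them.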

They confer before the game and promise to one another to raise the right hand. They go into their separate rooms. And what happens next?

Take first the case where they are both perfect amoral egoists. Amoral egoists don’t care about promises. So the fact that an amoral egoist promised to raise the right hand is no evidence at all that they will raise the right hand, unless there is something in it for them. But is there anything in it for them? Well, if Bob raises his right hand, then there is something in it for Alice to raise her right hand. But note that this conditional is true regardless of whether they’ve made any promises to each other, and it is equally true that if Bob raises his left hand, then there is something in it for Alice to raise her left hand.

The promise is simply irrelevant here. It is true that in normal circumstances, it makes sense for egoists to keep promises in order to fool people into thinking that they have morality. But I’ve assumed full shared knowledge of each other’s tendencies, and so no such considerations apply here.

It is true that if Alice expects Bob to expect her to keep her promise, then Alice will expect Bob to raise his right hand, and hence she should raise her right hand. But since she’s known to be an amoral egoist, there is no reason for Bob to expect Alice to keep her promise. And the same vice versa.

What if they are utilitarians? It makes no difference. Since in this case both always get the same outcome, there is no difference between utilitarians and amoral egoists.

This means that in cases like this, with full transparency of behavioral tendencies, utilitarians and amoral egoists will do well to brainwash or hypnotize themselves into promise-keeping.

In ordinary life, this problem doesn’t arise as much: as long as at least one party is more typical, and hence takes promises to have reason-giving force, or public opinion is around to enforce promise-keeping, the issue doesn’t come up. But I think there is a lesson here and in the previous post: in many an ordinary practice, the utilitarian is free-riding on the non-utilitarians.


Isaac LaGrand said...

This is a fun puzzle. It seems like David Lewis’ convention work, or Thomas Schelling’s coordination and focal points stuff must be relevant.

Alexander R Pruss said...

My feeling is it's not hard to solve -- as long as you don't place artificial restrictions on what counts as a reason, in the way the utilitarian or egoist does.

Alexander R Pruss said...

Here's a further line of thought. Suppose that Alice and Bob are not fully utilitarian, but they incorporate into their ethics (which they follow perfectly) an anti-conventionalist element (and that's also a part of their shared knowledge). Thus, they think that the fact that one has promised to do p is a fairly weak reason, of degree epsilon, against doing p. If epsilon>0, then it seems that by their lights Alice and Bob should lift their left hands rather than their right, given that they promised to lift their right: their behavioral bias is anti-promissory, and each knows the other to have such a bias. Very well. Now take the limit as epsilon (the strength of the reason they think favors breaking promises) goes to zero. For every epsilon>0, they should lift their left hands rather than their right. It seems to be a reasonable continuity conclusion that at epsilon=0 they should either be neutral between lifting their left hands and lifting their right, or should still prefer the left, but definitely should not prefer the right. And yet epsilon=0 is just the full utilitarian case.