Wednesday, February 28, 2024

More on benefiting infinitely many people

Once again let’s suppose that there are infinitely many people on a line infinite in both directions, one meter apart, at positions numbered in meters. Suppose all the people are on par. Fix some benefit (e.g., saving a life or giving a cookie). Let Ln be the action of giving the benefit to all the people to the left of position n. Let Rn be the action of giving the benefit to all the people to the right of position n.
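
To fix ideas, here is a minimal Python sketch (my own illustration, not part of the original setup) that models an action as the set of integer positions it benefits, reading “to the left of n” as strictly left:

    # Model an action by the positions it benefits.
    def L(n):
        """Benefit everyone strictly to the left of position n."""
        return lambda x: x < n

    def R(n):
        """Benefit everyone strictly to the right of position n."""
        return lambda x: x > n

    # On any finite window the containments behave as expected:
    window = range(-20, 21)
    assert all(L(4)(x) for x in window if L(3)(x))   # L3's beneficiaries ⊆ L4's
    assert all(R(3)(x) for x in window if R(4)(x))   # R4's beneficiaries ⊆ R3's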

Write A ≤ B to mean that action B is at least as good as action A, and write A < B to mean that A ≤ B but not B ≤ A. If neither A ≤ B nor B ≤ A, then we say that A and B are incomparable.

Consider these three conditions:

  • Transitivity: If A ≤ B and B ≤ C, then A ≤ C for any actions A, B and C from among the {Lk} and the {Rk}.

  • Strict monotonicity: Ln < Ln+1 and Rn > Rn+1 for all n.

  • Weak translation invariance: If Ln ≤ Rm, then Ln+k ≤ Rm+k, and if Ln ≥ Rm, then Ln+k ≥ Rm+k, for any n, m and k.

Theorem: If we have transitivity, strict monotonicity and weak translation invariance, then exactly one of the following three statements is true:

  1. For all m and n, Lm and Rn are incomparable.

  2. For all m and n, Lm < Rn.

  3. For all m and n, Lm > Rn.

In other words, if any of the left-benefit actions is comparable with any of the right-benefit actions, there is an overwhelming moral skew whereby either all the left-benefit actions beat all the right-benefit actions or all the right-benefit actions beat all the left-benefit actions.
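
As a sanity check (my own illustration, not from the post), here is a small Python sketch of a concrete ordering of type (2): every left-benefit action is bested by every right-benefit action, left-benefit actions improve as n grows, and right-benefit actions improve as n shrinks. The script spot-checks transitivity, strict monotonicity and weak translation invariance on a finite range of indices:

    def leq(A, B):
        """A concrete 'at least as good' relation of type (2)."""
        (s, m), (t, n) = A, B
        if s == 'L' and t == 'L':
            return m <= n          # among L's, bigger index is better
        if s == 'R' and t == 'R':
            return m >= n          # among R's, smaller index is better
        return s == 'L'            # any L is at most as good as any R

    def lt(A, B):
        return leq(A, B) and not leq(B, A)

    idx = range(-5, 6)
    acts = [(s, n) for s in 'LR' for n in idx]

    # Transitivity.
    assert all(leq(A, C) for A in acts for B in acts for C in acts
               if leq(A, B) and leq(B, C))
    # Strict monotonicity: Ln < Ln+1 and Rn > Rn+1.
    assert all(lt(('L', n), ('L', n + 1)) and lt(('R', n + 1), ('R', n))
               for n in range(-5, 5))
    # Weak translation invariance (the Ln ≤ Rm direction).
    assert all(leq(('L', n + k), ('R', m + k))
               for n in idx for m in idx for k in idx
               if leq(('L', n), ('R', m)))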

Proposition 1 in this paper is a special case of the above theorem, but the proof of the theorem proceeds in basically the same way. For a reductio, assume that (1) is false. Then either Lm ≥ Rn or Lm ≤ Rn for some m and n. First suppose that Lm ≥ Rn. Then the second and third paragraphs of the proof of Proposition 1 show that (3) holds. Now suppose that Lm ≤ Rn. Let Lk* = R−k and Rk* = L−k, and say that A ≤* B iff A* ≤ B*. Then transitivity, strict monotonicity and weak translation invariance hold for ≤*. Moreover, we have Lm ≤ Rn, and hence L−n ≥* R−m. Applying the previous case, with −n and −m in place of m and n respectively, we conclude that we always have Lj >* Rk, and hence that we always have Lj < Rk, i.e., (2).
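
For readers who want the index bookkeeping spelled out, here is the key verification behind the second case, in LaTeX (my own expansion of the proof sketch above):

    % Star transform: L*_k = R_{-k} and R*_k = L_{-k}; A <=* B iff A* <= B*.
    \begin{align*}
    &\text{Monotonicity: } L_n <^* L_{n+1} \iff R_{-n} < R_{-n-1} \iff R_{-n-1} > R_{-n},\\
    &\qquad\text{an instance of } R_m > R_{m+1} \text{ with } m = -n-1.\\
    &\text{Translation: } L_n \le^* R_m \iff L_{-m} \ge R_{-n}
      \implies L_{-m-k} \ge R_{-n-k} \iff L_{n+k} \le^* R_{m+k}.\\
    &\text{Assumed case: } L_m \le R_n \iff R^*_{-m} \le L^*_{-n} \iff L_{-n} \ge^* R_{-m}.
    \end{align*}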

I suppose the most reasonable conclusion is that there is complete incomparability between the left- and right-benefit actions. But this seems implausible, too.

Again, I think the big conclusion is that human ethics has limits of applicability.

I hasten to add this. One might reasonably think (Ian suggested this in a recent comment) that decisions about benefiting or harming infinitely many people (at once) do not come up for humans. Well, that’s a little quick. To vary the Pascal’s Mugger situation, suppose a strange guy comes up to you on the street, tells you that there are infinitely many people in a line drowning in a parallel universe, and asks you whether you want him to save all the ones to the left of position 123 or all the ones to the right of position −11, because he can magically do either one, and nothing else, and he needs help with his moral dilemma. You are, of course, very dubious of what he is saying. Your credence that he is telling the truth is very, very small. But as any good Bayesian will tell you, it shouldn’t be zero. And now the decision you need to make is a real one.

Tuesday, February 27, 2024

Saving infinitely many lives

Suppose there is an infinitely long line with equally-spaced positions numbered sequentially with the integers. At each position there is a person drowning. All the persons are on par in all relevant respects and equally related to you. Consider first a choice between two actions:

  1. Save people at 0, 2, 4, 6, 8, ... (red circles).

  2. Save people at 1, 3, 5, 7, 9, ... (blue circles).

It seems pretty intuitive that (1) and (2) are morally on par. The non-negative evens and odds are alike!

But now add a third option:

  3. Save people at 2, 4, 6, 8, ... (yellow circles).

The relation between (2) and (3) is exactly the same as the relation between (1) and (2)—after all, there doesn’t seem to be anything special about the point labeled with the zero. So, if (1) and (2) are on par, so are (2) and (3).
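
Concretely, the “same relation” here is a unit shift: the translation x ↦ x + 1 carries the set saved by (1) onto the set saved by (2), and the set saved by (2) onto the set saved by (3). A quick Python check on a finite window (my own illustration):

    # Options as sets of saved positions, truncated to the window [0, N).
    N = 1000  # even, so the truncation edge works out cleanly
    window = set(range(N))
    opt1 = {x for x in window if x % 2 == 0}        # option (1): 0, 2, 4, ...
    opt2 = {x for x in window if x % 2 == 1}        # option (2): 1, 3, 5, ...
    opt3 = opt1 - {0}                               # option (3): 2, 4, 6, ...

    def shift(s):
        """The unit translation x -> x + 1, restricted to the window."""
        return {x + 1 for x in s} & window

    # The very same map carries (1) onto (2), and (2) onto (3):
    assert shift(opt1) == opt2
    assert shift(opt2) == opt3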

But by transitivity of being on par, (1) and (3) are on par. But they’re not! It is better to perform action (1), since that saves all the people that action (3) saves, plus the person at the zero point.

So maybe (1) is after all better than (2), and (2) is better than (3)? But this leads to the following strange thing. We know how much better (1) is than (3): it is better by exactly one person. If (1) is better than (2) and (2) is better than (3), then since the relationship between (1) and (2) is the same as the relationship between (2) and (3), it follows that (1) must be better than (2) by half a person, and (2) must be better than (3) by that same amount.
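
To spell out the arithmetic in LaTeX (with vi a purely hypothetical numerical value for option (i), and d the common difference between successive options):

    \[
      1 \;=\; (v_1 - v_2) + (v_2 - v_3) \;=\; d + d \;=\; 2d
      \qquad\Longrightarrow\qquad d \;=\; \tfrac{1}{2}.
    \]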

But when you are choosing which people to save, and they’re all on par, and the saving is always certain, how can you get two options that are “half a person” apart?

Very strange.

In fact, it seems we can get options that are apart by even smaller intervals. Consider:

  4. Save people at 0, 10, 20, 30, 40, ....

  5. Save people at 1, 11, 21, 31, 41, ....

and so on up to:

  14. Save people at 10, 20, 30, 40, ....

Each of options (4)–(13) is related in the same way to the next. Option (4) is better than option (14) by exactly one person. So it seems that each of options (4)–(13) is better by a tenth of a person than the next!
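
The same finite-window check as before confirms the structure of this series (again my own illustration):

    # option(i) below is the post's option (4 + i), for i = 0, ..., 10.
    N = 1000  # a multiple of 10, so the truncation edge works out cleanly
    window = set(range(N))

    def option(i):
        """Save positions congruent to i mod 10, dropping 0 itself when i == 10."""
        return {x for x in window if x % 10 == i % 10} - ({0} if i == 10 else set())

    def shift(s):
        return {x + 1 for x in s} & window

    # Each option is carried onto the next by the very same unit shift...
    assert all(shift(option(i)) == option(i + 1) for i in range(10))
    # ...while option (4) exceeds option (14) by exactly one person, at position 0.
    assert option(0) - option(10) == {0}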

I think there is only one at all reasonable way out, and it is to say that in both the (1)–(3) series and the (4)–(14) series, each option is incomparable with the succeeding one, but we have comparability between the start and end of each series.

Maybe, but is the incomparability claim really correct? It still feels like (1) and (2) should be exactly on par. If you had a choice between (1) and (2), and one of the two actions involved a slight benefit to another person (say, a small probability of saving the life of the person at −17), then you should go for the action with that slight benefit. And this makes it implausible that the two are incomparable.

My own present preferred solution is that the various things here seem implausible to us because human morality is not meant for cases with infinitely many beneficiaries. I think this is another piece of evidence for the species-relativity of morality: our morality is grounded in human nature.