Tuesday, February 27, 2024

Saving infinitely many lives

Suppose there is an infinitely long line with equally-spaced positions numbered sequentially with the integers. At each position there is a person drowning. All the persons are on par in all relevant respects and equally related to you. Consider first a choice between two actions:

  1. Save people at 0, 2, 4, 6, 8, ....

  2. Save people at 1, 3, 5, 7, 9, ....

It seems pretty intuitive that (1) and (2) are morally on par. The non-negative evens and odds are alike!

But now add a third option:

  3. Save people at 2, 4, 6, 8, ....

The relation between (2) and (3) is exactly the same as the relation between (1) and (2): relabeling every position n as n - 1 turns the description of (2) into the description of (1), and likewise turns the description of (3) into the description of (2). After all, there doesn’t seem to be anything special about the point labeled with the zero. So, if (1) and (2) are on par, so are (2) and (3).

But by transitivity of being on par, (1) and (3) are on par. But they’re not! It is better to perform action (1), since that saves all the people that action (3) saves, plus the person at the zero point.

So maybe (1) is better than (2) after all, and (2) is better than (3)? But this leads to the following strange result. We know how much better (1) is than (3): it is better by one person. If (1) is better than (2) and (2) is better than (3), then since the relationship between (1) and (2) and the relationship between (2) and (3) are the same, (1) must be better than (2) by half a person, and (2) must be better than (3) by that same amount.
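To make the arithmetic explicit, suppose purely for illustration that there is a numerical value function v on the options and a common gap d between each option and the next. Then:

    v(1) - v(3) = [v(1) - v(2)] + [v(2) - v(3)] = d + d = 2d = 1, so d = 1/2.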

But when you are choosing which people to save, and they’re all on par, and the saving is always certain, how can you get two options that are “half a person” apart?

Very strange.

In fact, it seems we can get options that are apart by even smaller intervals. Consider:

  4. Save people at 0, 10, 20, 30, 40, ....

  5. Save people at 1, 11, 21, 31, 41, ....

and so on up to:

  14. Save people at 10, 20, 30, 40, ....

Each of options (4)–(13) is related to the next in exactly the same way. And option (4) is better than option (14) by exactly one person. So it seems that each of options (4)–(13) is better than the next by a tenth of a person!
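In the same illustrative notation as before, the gap between (4) and (14) is the sum of ten equal gaps:

    v(4) - v(14) = 10d = 1, so d = 1/10.

And nothing depends on the number ten: running the same construction with the residues modulo m yields options that are 1/m of a person apart.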

I think there is only one way out that is at all reasonable: to say that, in both the (1)–(3) series and the (4)–(14) series, each option is incomparable with the one that follows it, but the first and last options of each series are comparable.

Maybe, but is the incomparability claim really correct? It still feels like (1) and (2) should be exactly on par. If you had a choice between (1) and (2), and one of the two actions involved a slight benefit to another person (say, a small probability of saving the life of the person at −17), then you should go for the action with that slight benefit. And this makes it implausible that the two are incomparable.

My own present preferred solution is that the various things here seem implausible to us because human morality is not meant for cases with infinitely many beneficiaries. I think this is another piece of evidence for the species-relativity of morality: our morality is grounded in human nature.

4 comments:

IanS said...

In practical applications, it is usual to apply an exponentially decaying discount factor. This naturally yields non-integral valuations. If the rate of decay is small (so that there is very little decay over one ‘cycle’), then the differences in valuations are close to the relevant fraction of the cycle length.

The appropriate discount rate (zero, or merely small) is a live issue in discussions of (for example) policies relating to climate change and economic growth.

Alexander R Pruss said...

Do you mean that the incremental benefit of saving one more life, given that N were already saved, is something like exp(-kN) for some small k?

That seems quite mistaken ethically.

But in any case, if that's the kind of decay, it doesn't help with the problem at hand. On the contrary, it predicts that there is no difference between (1) and (3), since N=infinity.
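To spell out the arithmetic: on that reading, the total value of saving infinitely many people is the same geometric sum no matter which infinite set of people is saved:

    1 + exp(-k) + exp(-2k) + ... = 1/(1 - exp(-k)).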

IanS said...

Apologies, I misread the post.

I was thinking that you would save the lives sequentially. I was suggesting that saving a life at time N units in the future might be valued proportionally to exp(-kN). [Note that a wide range of discounting functions would have a similar effect. For example, max(1 - kN, 0), for suitably small k, would also give differences in valuation close to the relevant fractions. See the numerical sketch at the end of this comment.]

But that’s not what the post says. You have to assume that (for example) you press a button, and all the even-numbered people are saved.

That said, saving an infinite number of people at the press of a button is not something that any human could do. We can only do things sequentially, or at least in finite chunks. And, rightly or wrongly, we do often discount outcomes expected in the distant future, or far away. As finite beings, with finite mental capacities, we pretty much have to. This is consistent with your ‘preferred solution’.

But suppose there were an ‘angel’ who could save an infinite number of people at the press of a button. There would still be the usual rearrangement issues. Should switching 0-1, 10-11, 20-21 … make any difference? Under what sort of permutations, if any, should valuations be invariant?
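For what it’s worth, here is a minimal numerical sketch of the sequential picture above. It assumes, purely for illustration, that the person at position n (if saved) is reached at time n, that a rescue at time n is valued at exp(-kn), and it takes options (4)–(14), which save every tenth person starting at offsets 0 through 10:

    import math

    def value(offset, k, cycle=10, terms=10**5):
        # Discounted value of saving the people at positions
        # offset, offset + cycle, offset + 2*cycle, ..., where a
        # rescue at position n is valued at exp(-k*n).
        return sum(math.exp(-k * (offset + cycle * j)) for j in range(terms))

    k = 1e-4  # small discount rate: very little decay over one cycle
    vals = [value(r, k) for r in range(11)]  # offsets 0..10 = options (4)..(14)

    print(vals[0] - vals[1])   # consecutive options differ by about 0.1
    print(vals[0] - vals[10])  # first and last differ by about 1.0

As k shrinks toward zero, the consecutive differences tend to exactly a tenth of a person, the relevant fraction of the cycle length.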

James Anderson said...

Alex,

Fascinating! But I believe a small correction is in order:

"We know how much better (1) is than (2): it is better by one person."

Shouldn't this read "than (3)"?