Let me take another look at the interpersonal moral Satan’s Apple, but start with a finite case.
Consider a situation where a finite number N of people independently make a choice between A and B, and some disastrous outcome happens if the number of people choosing B hits a threshold M. Suppose further that if you fix whether the disaster happens, then it is better for you to choose B than A, but the disastrous outcome outweighs the combined benefits even if everyone chooses B.
For instance, maybe B is feeding an apple to a hungry child, and A is refraining from doing so, but there is an evil dictator who likes children to be miserable, and once enough children are not hungry, he will throw all the children in jail.
Intuitively, you should do some sort of expected utility calculation based on your best estimate of the probability p that among the N − 1 people other than you, exactly M − 1 will choose B. For if fewer or more than M − 1 of them choose B, your choice will make no difference to whether the disaster happens, and you should choose B. If F is the difference between the utilities of B and A, e.g., the utility of feeding the apple to the hungry child (assumed to be fairly positive), and D is the utility of the disaster (very negative), then you need to see whether pD + F is positive, negative, or zero. Modulo some concerns about attitudes to risk, if pD + F is positive, you should choose B (feed the child), and if it’s negative, you shouldn’t.
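To make the arithmetic concrete, here is a minimal sketch of that check in Python; the numbers F, D, and p are made-up illustrations, not values from the case described above:

# Illustrative values only (assumptions for the sketch):
F = 1.0      # utility of B minus utility of A, e.g. the good of feeding the child
D = -1000.0  # utility of the disaster, very negative
p = 0.0005   # your estimate that exactly M - 1 of the other N - 1 people choose B

expected_gain_from_B = p * D + F
if expected_gain_from_B > 0:
    print("Choose B (feed the child); expected gain:", expected_gain_from_B)
else:
    print("Choose A (refrain); expected gain from B:", expected_gain_from_B)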
If you have a uniform distribution over the possible number of people other than you choosing B, the probability that this number is M − 1 will be 1/N (since the number of people other than you choosing B is one of 0, 1, ..., N − 1). Now, we assumed that the benefits of B are such that they don’t outweigh the disaster even if everyone chooses B, so D + NF < 0. Therefore (1/N)D + F < 0, and so in the uniform distribution case you shouldn’t choose B.
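A quick numeric check of that inequality, with made-up utilities chosen so that D + NF < 0:

N = 1000
F = 1.0
D = -(N * F + 500.0)  # assumed so that D + N*F < 0 (the disaster outweighs all the benefits)

p_uniform = 1.0 / N   # probability that exactly M - 1 others choose B under the uniform prior
print("D + N*F =", D + N * F)           # negative by assumption
print("p*D + F =", p_uniform * D + F)   # equals (D + N*F)/N, so also negative: don't choose B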
But you might not have a uniform distribution. You might, for instance, have a reasonable estimate that a proportion p of other people will choose B while the threshold is M ≈ qN for some fixed ratio q between 0 and 1. If q is not close to p, then facts about the binomial distribution show that the probability that exactly M − 1 other people choose B goes approximately exponentially to zero as N increases. Assuming that the badness of the disaster is linear or at most polynomial in the number of agents, if the number of agents is large enough, choosing B will be a good thing. Of course, you might have the unlucky situation that q (the ratio of threshold to number of people) and p (the probability of an agent choosing B) are approximately equal, in which case even for large N, the risk that you’re near the threshold will be too high to allow you to choose B.
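Here is a small sketch of that exponential drop-off, computing the binomial probability in log space; the values p = 0.3 and q = 0.5 are illustrative assumptions:

import math

def log_binom_pmf(k, n, r):
    """Natural log of P(exactly k successes in n independent trials with success chance r)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(r) + (n - k) * math.log(1 - r))

p = 0.3  # assumed chance that any given other person chooses B
q = 0.5  # assumed threshold ratio, so M is about q*N

for N in [100, 1000, 10000]:
    M = int(q * N)
    # Probability that exactly M - 1 of the other N - 1 people choose B,
    # i.e. that your own choice of B is what triggers the disaster.
    log10_prob = log_binom_pmf(M - 1, N - 1, p) / math.log(10)
    print(f"N = {N:6d}: P(your choice is pivotal) is about 10^{log10_prob:.1f}")

Since this pivotal probability shrinks much faster than any linear or polynomial growth in the badness of the disaster, pD + F eventually turns positive as N grows, which is the point of the paragraph above.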
But now back to infinity. In the interpersonal moral Satan’s Apple, we have infinitely many agents choosing between A and B. But now instead of the threshold being a finite number, the threshold is an infinite cardinality (one can also make a version where it’s a co-cardinality). And this threshold has the property that other people’s choices can never be such that your choice will put things above the threshold—either the threshold has already been met without your choice, or your choice can’t make it hit the threshold. In the finite case, it depended on the numbers involved whether you should choose A or B. But here the exact same reasoning as in the finite case, now without any statistical inputs being needed, shows that you should choose B. For your choice literally cannot make any difference to whether the disaster happens, no matter what other people choose.
In my previous post, I suggested that the interpersonal moral Satan’s Apple was a reason to embrace causal finitism: to deny that an outcome (say, the disaster) can causally depend on infinitely many inputs (the agents’ choices). But the finite cases make me less confident. In the case where N is large, and our best estimate of the probability of another agent choosing B is a value p not close to the threshold ratio q, it still seems counterintuitive that you should morally choose B, and so should everyone else, even though that yields the disaster.
But I think in the finite case one can remove the counterintuitiveness. For there are mixed strategies that, if adopted by everyone, are better than everyone choosing A or everyone choosing B. The mixed strategy involves some number pbest with 0 < pbest < q (where q is the threshold ratio at which the disaster happens), and everyone choosing B with probability pbest and A with probability 1 − pbest, where pbest is carefully optimized to allow as many people as possible to feed hungry children without a significant risk of disaster. The exact value of pbest will depend on the exact utilities involved, but will be close to q if the number of agents is large, as long as the badness of the disaster doesn’t scale exponentially. Now our statistical reasoning shows that when your best estimate of the probability of other people choosing B is not close to the threshold ratio q, you should just straight out choose B. And the worry I had is that everyone doing that results in the disaster. But it does not seem problematic that in a case where your data shows that people’s behavior is not close to optimal, i.e., their behavioral propensities do not match pbest, you need to act in a way that doesn’t universalize very nicely. This is no more paradoxical than the fact that when there are criminals, we need to have a police force, even though ideally we wouldn’t have one.
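As a rough sketch of how pbest could be found numerically (the utilities, the linear scaling of the disaster, and the grid search are all illustrative assumptions):

import math

def binom_pmf(k, n, r):
    """P(exactly k of n agents choose B when each does so independently with probability r)."""
    if r == 0.0:
        return 1.0 if k == 0 else 0.0
    if r == 1.0:
        return 1.0 if k == n else 0.0
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                    + k * math.log(r) + (n - k) * math.log(1 - r))

def prob_disaster(n, m, r):
    """Probability that at least m of the n agents choose B."""
    return sum(binom_pmf(k, n, r) for k in range(m, n + 1))

# Illustrative values only (assumptions):
N = 1000           # number of agents
q = 0.5            # threshold ratio; disaster once M = q*N agents choose B
M = int(q * N)
F = 1.0            # benefit per agent who chooses B (feeds a child)
D = -10.0 * N      # disaster assumed to scale linearly with the number of agents

def expected_total_utility(r):
    return N * r * F + prob_disaster(N, M, r) * D

# Coarse grid search for the best common mixing probability.
best_utility, p_best = max((expected_total_utility(i / 200), i / 200) for i in range(201))
print("p_best is about", p_best, "with expected total utility about", round(best_utility, 1))

On these made-up numbers the optimum comes out a bit below the threshold ratio q, which matches the claim above that pbest stays below q but approaches it when the number of agents is large.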
But in the infinite case, no matter what strategy other people adopt, whether pure or mixed, choosing B is better.
2 comments:
Well, I think that regardless of these probability considerations there is a good reason, independent of your probability analysis, for choosing option B here.
I mean that a fed and living child in prison is always better to have than the same child dying from hunger.
Sooo... I guess option B is objectively a better choice than option A utility-wise, regardless of all that "threshold"-this and "probability"-that.
But I guess that's just my personal opinion here.
"But now instead of the threshold being a finite number, the threshold is an infinite cardinality (one can also make a version where it’s a co-cardinality). And this threshold has the property that other people’s choices can never be such that your choice will put things above the threshold — either the threshold has already been met without your choice, or your choice can’t make it hit the threshold."
Such a bad formulation.
Rather than pointing out what's "never" the case, one should point out what's always the case there. The point about such a threshold is that it is either met or not met with or without your choice. In other words, either the threshold is met with or without your choice, or it is not met with or without your choice.
So specifically and particularly, your making a choice doesn't matter for whether such a threshold is met or not met. What matters for whether it is met or not met is the actual state of affairs of a specific and particular kind of set, in which you might or might not be contained.
It doesn't matter which particular and specific bricks are used to build a specific and particular wall. What matters for the wall is whether the set of all bricks making up that wall corresponds to the set of all natural numbers.
As for you as a single, specific and particular brick - well, you can now choose between being a part of that infinite wall or not being a part of that infinite wall - holding out terrorists, illegal immigrants AND legal immigrants, such as your ancestors were at one point in time and history, OR NOT doing that - you can give an apple to a child OR NOT do that.
It's your choice. You kinda have to make that choice and also have to live with the consequences resulting from that choice of yours.