Wednesday, January 28, 2015

Individual and group interest, and infinity

There are infinitely many people. A random process causes each one to independently develop a cancer, either of type A or of type B. The chance that a given individual develops a type A cancer is 9/10 and the chance that she develops a type B cancer is 1/10. It is not possible to diagnose whether an individual has type A or type B cancer. There are two drugs available, either of which—but not both, because they are toxic when combined—could be distributed by you en masse to all of the infinitely many people. There is no possibility of distributing different drugs to different people—the logistics only make it possible for you to distribute the same drug to everyone. Drug Alpha cures type A cancer but does not affect type B, and drug Beta cures type B cancer but does not affect type A.

What should you do? Clearly, you should distribute Alpha to everyone. After all, each individual is much more likely to have type A cancer.

But now suppose that an angel reveals to everyone the following interesting fact:

  • (F) Only finitely many people have type A cancer.
You're very surprised. You would have expected infinitely many to have type A cancer and infinitely many to have type B cancer. But even though F is a very unlikely outcome—indeed, classically it has zero probability—it is possible. So, what should you do now?
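
(A side note on why F classically gets probability zero, given the independence assumption above: since the cancers are independent and

$$\sum_{i=1}^{\infty} \Pr(\text{person } i \text{ has type A cancer}) = \sum_{i=1}^{\infty} \tfrac{9}{10} = \infty,$$

the second Borel–Cantelli lemma gives probability 1 to infinitely many people developing type A cancer, and likewise for type B.)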

The obvious answer is that you should distribute Beta to everyone. After all, if you distribute Alpha, finitely many people will be cured, while if you distribute Beta, infinitely many will be. Clear choice!

But not so fast. Here is a plausible principle:

  • (I) If you're choosing between intrinsically morally permissible options X and Y, and for every relevant individual x, option X is in x's best interest, then option X is the best option to choose.
But there is an argument that it is in every individual's interest that you distribute Alpha to her. Here's why. Let x be any individual. Before the angel's revelation of F, it was clearly in x's best interest that she get Alpha. But now we have all learned F. Does that affect what's in x's best interest? There is a very convincing argument that it does not. Consider this proposition:
  • (Fx) Among people other than x, only finitely many have type A cancer.
Clearly, learning Fx does not affect what is to be done in x's best interest, because the development of cancer in all the patients is independent, so learning about which cancers people other than x have tells us nothing about x's cancer. To dispute this is to buy into something akin to the Gambler's Fallacy. But now notice that Fx is logically equivalent to F. Necessarily, if only finitely many people other than x have type A cancer, then only finitely many people have type A cancer (one individual won't make the difference between the finite and the infinite!), and the converse is trivial. If learning Fx does not affect what is to be done in x's best interest, neither should learning the equivalent fact F. So, learning F does not affect what is in x's best interest, and so the initial judgment that drug Alpha is in x's best interest stands.
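
To make the key equivalence explicit, write A for the set of people with type A cancer (notation introduced only for this sketch). Then F says that A is finite, Fx says that A with x removed is finite, and

$$A \setminus \{x\} \;\subseteq\; A \;\subseteq\; (A \setminus \{x\}) \cup \{x\},$$

so A is finite if and only if A with x removed is finite: removing or adding the single element x cannot make the difference between the finite and the infinite.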

Thus:

  1. Necessarily, if I is true, then in the infinitary case above, you should distribute Alpha.

But at the same time it really was quite obvious that you should save infinitely many rather than finitely many people, so you should distribute Beta. So it seems we should reject I.

Yet I seems so very obviously true! So, what to do?

There are some possibilities. Maybe one can deny I in cases of incomplete knowledge, such as this one. Perhaps I is true when you know for sure how the action will affect each individual, but only then. Yet I seems true without the restriction.

A very different suggestion is simply to reject the case. It is impossible to have a case like the one I described. Yet surely it is possible for the outcome of the random process to satisfy F. So where lies the impossibility? I think the impossibility lies in the fact that one would be acting on fact F. And the best explanation here is Causal Finitism: the doctrine that there cannot be infinitely many things among the causal antecedents of a single event. In the case as I described it, the angel's utterance is presumably caused by the infinitary distribution of the cancers.

22 comments:

  1. This is a puzzling case, but I don’t think we need to go to causal finitism for the solution. For the angel could be the Angel of Death, who is giving you an inkling of his plans for the future. No infinite causal background there.

  2. Could you expand on this? I don't follow.

  3. There are infinitely many people. A random process WILL cause them to get cancer.... Drug Alpha PREVENTS Type A cancer, and Drug Beta PREVENTS Type B cancer....

    Now you learn an interesting fact. The Angel of Death appears to you and tells you that

    (F*) Only finitely many people WILL get type A cancer.

    F* is true because the Angel of Death gets to decide how people die.

    Etc.

  4. Isn't it infinitely likely that you are in any given case medicating a patient who has cancer type b?

  5. Heath:

    Yeah, but once the angel of death appears, because he is the common cause of all the cancers he seems to destroy the independence between the different people.

    lynch-patrick:

    It sure seems so, but I give what seems to be a solid argument to the contrary, namely that learning F is the same as learning F_x, and learning F_x doesn't tell you anything about x, so learning F doesn't tell you anything about x.

  6. This comment has been removed by the author.

  7. I may not understand your puzzle, so bear with me! - aren't you only giving Beta based solely on the warrant that there is not a finite number of people with type B cancer? If the angel gave you this fact instead of F, then you would not distribute Beta.

    You have no information on the type of cancer in front of you in any particular case x - I think the probabilities you give tell that giving Alpha is the same as doing nothing for an infinite number of people.

    Since it's not an intrinsically morally permissible option to do nothing for x when you could potentially cure their cancer, then, I think, affirming I argues for Beta. (typos fixed)

  8. It seems, however, that "it [is] clearly in x's best interest that she get Alpha" is false. Perhaps without knowing F it seems that for any x, she should receive drug Alpha, but we would simply be mistaken about the true proportion of persons with cancer a.

    Suppose there are two diseases, both with identical symptoms, the significantly more common of which is cured with penicillin, and the less common of which becomes instantly fatal upon being treated with penicillin. If I have the second disease, then no matter how much more probable it is I would have the first one, it would not actually be in my best interest to take penicillin.

    So in any particular x's case, not knowing F does not change that it really is in the best interest of the infinitude of persons to distribute drug Beta.
    It seems, then, that "best interest" arguments cannot be applied through probabilities, as the correct answer is independent of any epistemic barriers we have. Perhaps it is, without knowledge of F, morally permissible to distribute alpha, so long as consequentialism isn't true.

  9. One third of the Angels followed Satan, so there is a 33% chance that the information in F may be false. Also when something sounds too good to be true... It is usually not true. Carry on with the Alpha drug.

  10. Kolten:

    That's a very nice move. I guess what I'm looking at is a more subjective notion of best interest given the information at hand. This notion is the one at play when we say that it's in your best interest to buy the lottery ticket for the lottery where you have a one in a million chance of winning a million dollars when the ticket is being offered to you for a penny. And this is still true even if it turns out the ticket is not a winning ticket.
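
    In sketch form, the arithmetic behind the lottery example: the ticket's expected payoff is

    $$\tfrac{1}{1{,}000{,}000} \times \$1{,}000{,}000 = \$1 \gg \$0.01,$$

    so in this prospective, information-relative sense buying is in your interest even if the ticket in fact loses.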

  11. Mark:

    Yeah, a worrisome feature of these kinds of examples is that we get information that epistemically undercuts our confidence in the information. I think it's OK to bracket this worry, but I could be wrong.

  12. Professor Pruss:

    A question about causal finitism, as opposed to other types of finitism. Is the idea, for example, that God could create an infinite number of people probabilistically subject to cancer (one cause can have an infinite number of effects), but that no-one, not even God in his omniscience, could determine after the event whether a finite or an infinite number of them had Type A cancer (no event can have an infinite number of causal antecedents)? I don’t say this is unreasonable, I am just asking whether it is what you mean.

  13. Does it matter whether the cancer has already formed? If it has not, then clearly the probability for any person's forming cancer would make drug A in their interest. If it has, however, then the population now contains a proportion of essentially zero in regards to the number of persons with cancer a. In the former case, we would expect nine out of ten persons to have cancer a (or perhaps half and half, I'm not very well versed in the rules of comparing infinities), while in the latter we would expect zero out of ten people to have cancer a, in which case knowing Fx would be very pertinent to determining best interest for any particular person.

  14. Ian:

    God can know that only finitely many people will (or did) get type A cancer, but it is not knowledge he can act on, for that would allow a path from an infinite number of events to another event (namely, the outcome of God's action). (The path goes through God, so maybe it's not strictly speaking causal, but it's close enough to causal, I think).

  15. Here's something that may make the issues clear. Think of a finite case. There are 1000 people.

    Consider:
    F10: Exactly ten people have type A cancer.
    F10x: Of the people other than you, exactly ten have type A cancer.

    Learning F10 should make you think: "So probably I have type B cancer." This is a standard Bayesian update case.

    But learning F10x should not affect your thinking about your own case. For it would be a variant of the Gambler's Fallacy to go from information about what other people have to information about you--assuming that all the cases were independent.

    The difference is that in the infinite case, F and Fx are logically equivalent, and so we either have to say that both F and Fx should affect your thinking, or that neither F nor Fx should.
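
    Here is a minimal numerical sketch of this finite case, assuming independent cases with chance 9/10 of type A and using exact rational arithmetic (the variable names are only for this sketch):

        from fractions import Fraction
        from math import comb

        p = Fraction(9, 10)   # chance that a given person has type A cancer
        q = 1 - p
        N, k = 1000, 10       # 1000 people; exactly 10 type A cases revealed

        # F10: exactly k type A's among ALL N people.
        pr_F10 = comb(N, k) * p**k * q**(N - k)

        # Probability that I am type A *and* F10 holds
        # (so the other N - 1 people contain exactly k - 1 type A's).
        pr_me_A_and_F10 = p * comb(N - 1, k - 1) * p**(k - 1) * q**(N - k)

        # Learning F10 is a genuine Bayesian update about my own case:
        print(float(pr_me_A_and_F10 / pr_F10))   # 0.01, i.e. k/N: "probably I have type B"

        # Learning F10x (exactly k type A's among the OTHER N - 1 people) tells me
        # nothing about my own case, by independence; my chance of type A stays at 9/10.
        print(float(p))                          # 0.9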

  16. Professor Pruss

    Thank you for your answer. It raises some interesting questions, but I will leave them alone for now.

    On the problem itself, I don’t think we need to invoke causal finitism. Rather, the trouble is with the probability theory. We can’t apply standard theory, because we have to condition on an event of probability zero. So we invoke the Popper function intuition, that each person has type A cancer with probability 0.9 independently of any knowledge about the other people. The catch is that the resulting probabilities are only finitely additive. Example: what is the probability that exactly N people have type A cancer? Answer: 0 for all finite N, barring infinitesimal/non-standard probabilities. (Sketch proof: the probability of exactly N type A’s can be no more than the probability of at most N type A’s in the first M people, for any finite M. Using binomial probabilities, and taking M sufficiently large, this can be made arbitrarily small.) But if we add these probabilities over N we get 0, not 1 as required. It is well-known that failure of countable additivity implies that conglomerability does not hold in general. Principle (I) seems to require conglomerability. To be sure, this does not show that Principle (I) fails in this case, but it does raise a doubt.
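
    To illustrate the sketch proof numerically (a minimal sketch, assuming each person independently has type A cancer with probability 0.9): the probability of exactly N type A's is bounded by the probability of at most N type A's among the first M people, and that bound shrinks toward zero as M grows.

        from math import comb

        p, q, N = 0.9, 0.1, 10

        def at_most_N_in_first_M(M, N=N, p=p, q=q):
            # Binomial probability of at most N type A cases among the first M people.
            return sum(comb(M, j) * p**j * q**(M - j) for j in range(N + 1))

        for M in (20, 50, 100, 200):
            print(M, at_most_N_in_first_M(M))
        # The printed bound is already tiny at M = 20 and shrinks very rapidly as M
        # grows, consistent with Pr(exactly N type A's) = 0 for real-valued probabilities.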

  17. This comment has been removed by the author.

  18. Ian:

    I don't know that principle (I) is about probabilities. It seems to be more about rationality in general, whether this is understood in a probabilistic or non-probabilistic way.

    It's also not clear that (I) presupposes conglomerability, though you might convince me.

    In fact, I think probabilities only really enter into the argument at the initial uncontroversial point when we haven't learned F. After that, we make use of (I) together with the principles:
    (IND) Learning something independent of the factors relevant to a decision does not affect a rational decision.
    (EQ) If P and Q are clearly logically equivalent, learning one affects a rational decision iff learning the other does.

    Now you might think that IND uses probabilities when it talks of independence. But I don't think we should take this to be statistical independence. Statistical independence is too weak for the argument and doesn't capture the full intuitions. Rather, we have here something like causal or explanatory independence. To make the point vivid, suppose that each patient is completely causally isolated from the others and there are no laws tying cancers between them.

  19. Professor Pruss:

    You are right that Principle (I) is about rationality in general, not in itself about probability. Probabilities come in because “best for each” depends on them. We don’t know which drug is best for any individual, only which is more likely to be best. The implicit idea is, say, 1 util benefit from the right drug, 0 utils for the wrong one, weight by probability.

    You are right that non-conglomerability seems to be a red herring. It could arise, but it doesn’t seem to be the cause of the problem.

    Is it really so uncontroversial that before we know F, we should choose alpha? With probability 1 there will be an infinite number of A’s and an infinite number of B’s, so maybe the answer is indeterminate. Suppose an angel who can diagnose rearranges the people so that every 10th one is an A, the rest B. What do we choose now? To get sensible answers, I suggest we need some sort of limiting process or weighting. To formalize this, pick a set of coefficients c(i) that add up to 1. Assign utilities c(i) to person i getting the right drug, 0 the wrong. Base our choice on the expected total utility. Apply this to the problem, before we know F. We get 0.9 for alpha, 0.1 for beta, so we choose alpha, as seems sensible. After we know F, the probabilities for each person are the same (we accept the Popper function intuition), so we get the same answers, and again choose alpha.

    So where did the paradox go? Roughly, the weighting transformed “infinite” to something between 0 and 1 and “infinite – finite” to “infinitesimal”.

    Problem solved? Well, I have changed the problem. But the original problem required us to compare infinities, so maybe it should be changed.
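
    A minimal numerical sketch of the weighting idea above (the coefficients c(i) = 2^-i are just one illustrative choice that sums to 1, and the truncation at 60 terms is only to keep the sum finite):

        p_A = 0.9                                      # chance of type A for each person
        terms = 60                                     # truncate the infinite sum; the tail is negligible
        c = [2.0 ** -i for i in range(1, terms + 1)]   # weights c(i) summing to ~1

        # Expected total weighted utility: person i contributes c(i) if given the right drug.
        expected_alpha = sum(p_A * ci for ci in c)         # ~0.9
        expected_beta = sum((1 - p_A) * ci for ci in c)    # ~0.1
        print(expected_alpha, expected_beta)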

  20. Conglomerability again. I spoke too soon. It fails here and this matters.

    The following applies after we know (F). I assume what I have called the Popper function intuition, which I take to be the same as your strong independence. I use real-valued (i.e. not infinitesimal/non-standard) probabilities.

    Number of A’s is finite is equivalent to: number of A’s is 0 or number of A’s is 1 or number of A’s is 2... For brevity, call these #=0, #=1, #=2, etc. As shown in my second post, they all have probability zero. But, by (F), number of A’s is finite has probability 1, so the probabilities are only finitely additive.

    Think about a particular individual. The probability that she is type A is 0.9. Call this Pr(A). What about Pr(A given #=n)? This is zero for all n. [Sketch proof: Pr(A given #=n) is at most Pr(A given n A’s in the first m people), for any m > n. By taking m sufficiently large, this can be made arbitrarily small.] If conglomerability held, this would imply Pr(A) = 0. But Pr(A) is 0.9, so conglomerability fails. Which drug should we choose for our individual? Alpha. Which drug should we choose given #=n? Beta, regardless of n. So even for an individual, best given #=n for all n does not imply best. Note that there is no issue here of best for each vs best for all (i.e. principle (I)). Rather the issue is best conditional on each item in a partition vs best.
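
    For what it's worth, the finite conditional in the sketch proof can be computed exactly (assuming the individual in question is among the first m people, with p = 0.9 and q = 0.1):

    $$\Pr(A \mid n \text{ A's among the first } m) = \frac{p \cdot \binom{m-1}{n-1} p^{\,n-1} q^{\,m-n}}{\binom{m}{n} p^{\,n} q^{\,m-n}} = \frac{\binom{m-1}{n-1}}{\binom{m}{n}} = \frac{n}{m},$$

    which indeed goes to 0 as m grows, for each fixed n.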

  21. Yes, there is a failure of conglomerability, but I don't see where in my argument it's used. I never condition on the number being n.

  22. Think about your argument for Beta:

    The obvious answer is that you should distribute Beta to everyone. After all, if you distribute Alpha, finitely many people will be cured, while if you distribute Beta, infinitely many will be. Clear choice!

    It is indeed clear that given any particular finite number of A’s, Beta is better. It is also true that “a finite number of A’s” is logically equivalent to “0 A’s or 1 A or 2 A’s...”. But this is an infinite union and conglomerability fails, so “best given each condition” need not imply “best given the union”.
