Suppose I am choosing between options A and B. Evidential decision theory tells me to calculate the expected utility E(U|A) given the news that I did A and the expected utility E(U|B) given the news that I did B, and go for the bigger of the two. This is well known to lead to the following absurd result. Suppose there is a gene G that both causes one to die a horrible death one day and makes one very likely to choose A, while absence of the gene makes one very likely to choose B. Then if A and B are different flavors of ice cream, I should always choose B, because E(U|A) ≪ E(U|B), since the horrible death from G trumps any advantage of flavor that A might have over B. This is silly, of course, because one’s choice does not affect whether one has G.
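To see the arithmetic behind the absurd result, here is a small sketch of the evidential calculation. The utilities and the conditional probabilities P(G | my choice) are made-up numbers for illustration only; the point is just that the huge death term, weighted by how strongly the choice indicates G, swamps any flavor difference.

```python
# Evidential expected utility in the gene case.
# All numbers are illustrative assumptions, not part of the case itself.

U_FLAVOR_A = 1.0      # assumed: A has a slight flavor advantage
U_FLAVOR_B = 0.0
U_DEATH = -1000.0     # assumed: the horrible death that G causes

# Assumed evidential links: choosing A is strong evidence of having G.
P_G_GIVEN_A = 0.9     # P(G | I chose A)
P_G_GIVEN_B = 0.1     # P(G | I chose B)

def edt_value(flavor_utility, p_g_given_choice):
    """E(U | choice) = flavor utility + P(G | choice) * utility of the death."""
    return flavor_utility + p_g_given_choice * U_DEATH

print("E(U|A) =", edt_value(U_FLAVOR_A, P_G_GIVEN_A))   # -899.0
print("E(U|B) =", edt_value(U_FLAVOR_B, P_G_GIVEN_B))   # -100.0
# EDT says choose B, even though the choice has no effect on whether one has G.
```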
Causal decision theorists proceed as follows. We have a set of “causal hypotheses” about what the relevant parts of the world are like at the time of the decision. For each causal hypothesis H we calculate E(U|H∧A) and E(U|H∧B), weight these by our probabilities for the hypotheses, and decide accordingly. In other words, for an option D we have a causal expected utility
- Ec(U|D) = ∑H E(U|H∧D) P(H), where the sum is over the causal hypotheses H,
and are to choose A over B provided that Ec(U|A) > Ec(U|B). In the gene case, the “bad news” of the horrible death that comes with G is a constant addition to both Ec(U|A) and Ec(U|B), and so it can be ignored, as is right, since it is not in our control.
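For contrast, a sketch of the causal calculation on the same made-up numbers, with the causal hypotheses being “I have G” and “I lack G” and the prior P(G) (here an assumed 1/2) held fixed regardless of what is chosen:

```python
# Causal expected utility Ec(U|D) = sum over H of E(U|H∧D) * P(H),
# on the same illustrative numbers, with an assumed prior P(G) = 0.5.

U_FLAVOR = {"A": 1.0, "B": 0.0}   # assumed flavor utilities
U_DEATH = -1000.0                 # assumed penalty carried by G
P_G = 0.5                         # assumed prior; not updated on the choice

def cdt_value(choice):
    # Hypothesis "G": flavor utility plus the death penalty; "not G": flavor only.
    value_if_G = U_FLAVOR[choice] + U_DEATH
    value_if_not_G = U_FLAVOR[choice]
    return value_if_G * P_G + value_if_not_G * (1 - P_G)

print("Ec(U|A) =", cdt_value("A"))   # -499.0
print("Ec(U|B) =", cdt_value("B"))   # -500.0
# The P(G) * U_DEATH term is the same constant in both, so only flavor matters:
# CDT says choose the tastier A.
```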
But here is a variant case that worries me. Suppose that you are choosing between flavors A and B of ice cream, and you will only ever get to taste one of them, and only once. You can’t figure out which one will taste better for you (maybe one is oyster ice cream and the other is sea urchin ice cream). However, the data shows that not only does G make one likely to choose A and its absence make one likely to choose B, but everyone who has G derives pleasure from A and displeasure from B, everyone who lacks G gets the opposite results, and all the pleasures and displeasures are of the same magnitude.
Now, background information says that you have a 3/4 chance of having G. On causal decision theory, this means that you should choose A, because likely you have G, and those who have G all enjoy A. Evidential decision theory, however, tells you that you should choose B, since if you choose B then likely you don’t have the terrible gene G.
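Here is a sketch of how the two theories come apart in the variant case, reading the case as keeping the horrible death attached to G (the “terrible gene”). The 3/4 prior is as stated; the pleasure magnitude, the death penalty, and the evidential strength of the choice are again made up for illustration.

```python
# Variant case: G still carries the horrible death, and now those with G enjoy A
# (+1) and dislike B (-1), while those without G are the reverse.
# Only the 3/4 prior is from the case; the other numbers are assumptions.

PLEASURE = 1.0
U_DEATH = -1000.0
P_G = 0.75                            # prior probability of having G
P_G_GIVEN_A, P_G_GIVEN_B = 0.9, 0.1   # assumed P(G | choice)

def utility(choice, has_G):
    taste = PLEASURE if (choice == "A") == has_G else -PLEASURE
    return taste + (U_DEATH if has_G else 0.0)

def cdt_value(choice):
    # Causal: weight the hypotheses by the prior, unmoved by the choice.
    return utility(choice, True) * P_G + utility(choice, False) * (1 - P_G)

def edt_value(choice):
    # Evidential: weight the hypotheses by P(G | choice).
    p = P_G_GIVEN_A if choice == "A" else P_G_GIVEN_B
    return utility(choice, True) * p + utility(choice, False) * (1 - p)

print("CDT:", {c: cdt_value(c) for c in "AB"})
# {'A': -749.5, 'B': -750.5} -> CDT prefers A: you probably have G, so A tastes better.
print("EDT:", {c: edt_value(c) for c in "AB"})
# A ≈ -899.2, B ≈ -99.2 -> EDT prefers B: choosing B is good news about G.
```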
In this case, I feel causal decision theory isn’t quite right. Suppose I choose A. Then after I have made my choice, but before I have consumed the ice cream, I will be glad that I chose A: my choice of A will make me think I have G, and hence that A is tastier. But similarly, if I choose B, then after I have made my choice, and again before consumption, I will be glad that I chose B, since my choice of B will make me think I don’t have G and hence that B was a good choice. Whatever I choose, I will be glad I chose it. This suggests to me that there is nothing wrong with either choice!
Here is the beginning of a third decision theory, then, one that is neither causal nor evidential. An option A is permissible provided that causal decision theory, with the credences in the causal hypotheses conditioned on one’s choosing A, permits one to do A. An option A is required provided that no alternative is permissible. (There are cases where no option is permissible. That’s weird, I admit.)
In the initial case, where the pleasure of each flavor does not depend on G, this third decision theory gives the same answer as causal decision theory—it says to go for the tastier flavor. In the second case, however, where the pleasure/displeasure depends on G, it permits one to go for either flavor. In a probabilistic-predictor Newcomb’s Paradox, it says to two-box.
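Here is a self-contained sketch of the proposed rule on the variant case: an option counts as permissible when it maximizes causal expected utility computed with the credence in G conditioned on that very option’s being chosen. Everything except the structure of the case and the rule itself is an illustrative assumption.

```python
# Third decision theory, sketched on the variant ice cream case: option D is
# permissible iff D maximizes causal expected utility when the credence in the
# causal hypotheses is conditioned on choosing D. The numbers below are
# illustrative assumptions.

PLEASURE, U_DEATH = 1.0, -1000.0
P_G_GIVEN = {"A": 0.9, "B": 0.1}      # assumed P(G | I choose ...)

def utility(choice, has_G):
    taste = PLEASURE if (choice == "A") == has_G else -PLEASURE
    return taste + (U_DEATH if has_G else 0.0)

def cdt_value(choice, p_g):
    # Causal expected utility with credence p_g in the hypothesis "I have G".
    return utility(choice, True) * p_g + utility(choice, False) * (1 - p_g)

def permissible(option):
    # Condition the credence in G on the supposition that `option` is chosen...
    p_g = P_G_GIVEN[option]
    # ...then ask whether CDT, run with that credence, still allows `option`.
    values = {c: cdt_value(c, p_g) for c in "AB"}
    return values[option] >= max(values.values())

print({c: permissible(c) for c in "AB"})   # {'A': True, 'B': True}
# Both options come out permissible: whichever flavor I pick, the credences that
# the pick gives me make that very pick causally optimal, so neither is required.
```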