Let U be utility, and let H1,…,Hn be all the epistemically open causal hypotheses. To evaluate the value of an option A, causal decision theory says we should calculate the weighted sum:
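The displayed formula seems to have been lost; reconstructed from the surrounding text, with U(A & Hi) the utility of choosing A under hypothesis Hi, it is presumably:

```latex
V_1(A) \;=\; \sum_{i=1}^{n} P(H_i)\, U(A \,\&\, H_i) \tag{1}
```

Note that this sum is undefined whenever some conjunction A & Hi is impossible, since U is then undefined on that conjunction.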
The natural solution is to modify the sum (1) to include only those hypotheses Hi such that P(Hi&A)>0, and renormalize the sum to compensate. For instance, if P(Hi&A)>0 for 1≤i≤m and P(Hi&A)=0 for m<i≤n, then we should calculate the value of option A at:
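Reconstructing the renormalized sum from the text (the original display appears to be missing):

```latex
V_2(A) \;=\; \frac{\sum_{i=1}^{m} P(H_i)\, U(A \,\&\, H_i)}{\sum_{i=1}^{m} P(H_i)} \tag{2}
```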
But this turns out to betray the core intuitions of causal decision theory. Suppose there are two equally likely causal hypotheses: H1 says you're free to choose between pizza and falafel and H2 says you're brainwashed into eating falafel. Suppose you prefer falafel to pizza. But of course your future is bleak if you're brainwashed into eating falafel. Wonderful as falafel is, eating falafel at every meal is going to be miserable. Now, V2(pizza) includes in the sum only the term for hypothesis H1, since H2 is incompatible with pizza. Thus, V2(pizza) is the value of eating pizza. But V2(falafel) includes in the sum terms for both hypotheses H1 and H2. And since half of the weight of that sum will correspond to H2, and on H2 we have the misery of being brainwashed into falafel for the rest of our lives, the theory based on V2 requires us to choose pizza, lest it turn out that we were brainwashed into eating falafel. But that's exactly the sort of silliness that causal decision theory was created to eliminate: your present decision whether to eat falafel or not makes not a whit of difference to whether you've been brainwashed. To go for pizza here is to act like an evidential decision theorist.
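The arithmetic here can be made concrete. The utilities below are illustrative assumptions, not from the text: pizza under H1 is worth 5, falafel under H1 is worth 6 (slightly preferred), and a lifetime of brainwashed falafel under H2 is worth −50.

```python
# Sketch of the V2 calculation for the falafel/pizza case.
# Utilities are illustrative assumptions:
#   U(pizza & H1)   = 5    (free choice, you eat pizza)
#   U(falafel & H1) = 6    (free choice, falafel slightly preferred)
#   U(falafel & H2) = -50  (brainwashed into lifelong falafel: misery)

P = {"H1": 0.5, "H2": 0.5}  # the two equally likely causal hypotheses

def v2(utilities):
    """V2(A): sum P(Hi)*U(A & Hi) over hypotheses compatible with A, renormalized."""
    total = sum(P[h] for h in utilities)
    return sum(P[h] * u for h, u in utilities.items()) / total

v2_pizza = v2({"H1": 5})               # H2 is incompatible with pizza
v2_falafel = v2({"H1": 6, "H2": -50})  # both hypotheses count for falafel

# v2_pizza = 5.0 while v2_falafel = -22.0: V2 ranks pizza above falafel,
# even though the choice makes no difference to whether you were brainwashed.
```

Any choice of numbers with the same ordinal structure gives the same verdict: the H2 term drags falafel's V2 value down, so the V2 theory recommends pizza.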
So our simple modification of V1 to V2 solves the problem of V1 being undefined, but at the price of betraying causal decision theory intuitions.
There is, however, another modification. When choosing between some options, say pizza and falafel, we should define the value by including in our sums only those hypotheses on which all of the options are causally possible. Interestingly, this means that the value of an option depends on which contrast class of options we are considering: the value of option Aj as chosen from among A1,...,Ak will in general depend on what the other options are. Thus, we should write the value as:
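A reconstruction of the missing display, writing S for the set of indices i such that every option is causally possible on Hi:

```latex
V_3(A_j; A_1,\dots,A_k) \;=\; \frac{\sum_{i \in S} P(H_i)\, U(A_j \,\&\, H_i)}{\sum_{i \in S} P(H_i)},
\qquad S = \{\, i : P(H_i \,\&\, A_{j'}) > 0 \text{ for all } j' \le k \,\} \tag{3}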
The fact that the value of an option in general depends on what other options are in view will be a controversial but not fatal consequence.
Another interesting consequence is that if all the causal hypotheses are deterministic, then no hypothesis leaves more than one option causally open, so no hypothesis is compatible with all the options, and the value V3 will be undefined. Thus, decisions presuppose the possibility of indeterminism. It's plausible that an argument can be made from this for incompatibilism, an argument well worth exploring.
Here, however, is a puzzle. Suppose that hitherto your preferences have been that you liked meatballs a little more than falafel, which in turn you liked a little more than pizza (with transitivity). There are now two causal hypotheses, each with probability 1/2: according to H1 you have had no intervention, but according to H2 you've been brainwashed against eating pizza and your brain's response to falafel has been modified so that you will enjoy falafel far more than any food you've ever had. You are now choosing between falafel, meatballs and pizza. The V3 calculation tells you to consider only those hypotheses compatible with all three options, i.e., to consider only hypothesis H1. So you will go for meatballs. But surely that's a mistake. Falafel is a better option than meatballs, since if you weren't brainwashed against pizza, meatballs are only slightly better, while if you were brainwashed against pizza, falafel is far better. When evaluating falafel, you do need to take the possibility of H2 into account.
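The puzzle can be checked with numbers. Again the utilities are illustrative assumptions: under H1 (no intervention) meatballs = 7, falafel = 6, pizza = 5; under H2 (brainwashed) falafel = 100, meatballs = 7, and pizza is causally impossible.

```python
# Sketch of the meatballs/falafel/pizza puzzle for V3.
# Utilities are illustrative assumptions, not from the text.

P = {"H1": 0.5, "H2": 0.5}
U = {
    ("meatballs", "H1"): 7, ("falafel", "H1"): 6, ("pizza", "H1"): 5,
    ("meatballs", "H2"): 7, ("falafel", "H2"): 100,  # pizza impossible on H2
}

def v3(option, options):
    """Restrict to hypotheses on which *every* option is possible, renormalize."""
    compatible = [h for h in P if all((o, h) in U for o in options)]
    total = sum(P[h] for h in compatible)
    return sum(P[h] * U[(option, h)] for h in compatible) / total

opts = ["meatballs", "falafel", "pizza"]
v3_meatballs = v3("meatballs", opts)  # only H1 survives, so this is 7.0
v3_falafel = v3("falafel", opts)      # likewise just H1's value, 6.0

# V3 recommends meatballs. But weighing H2 as well, falafel's expected
# value is 0.5*6 + 0.5*100 = 53.0, far above meatballs' 7 -- the
# intuitive verdict the text argues for.
ev_falafel = P["H1"] * U[("falafel", "H1")] + P["H2"] * U[("falafel", "H2")]
```

The discrepancy doesn't depend on the particular numbers: any H2 bonus for falafel large enough to outweigh the small H1 preference for meatballs yields the same conflict between V3 and intuition.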
So maybe the whole approach that fixes causal decision theory by restricting the list of causal hypotheses is wrong? Maybe instead we should restrict outcomes, only considering those outcomes that are caused by one's choice. Thus, in our falafel-pizza story, you simply don't count the disvalue of being brainwashed, since that doesn't causally depend on your decision, when evaluating the utilities. Unfortunately, it's not so simple, since surely we can come up with cases where there are subtle interactions between prior conditions, such as brainwashing, and one's decision. I don't know what to do. But fortunately I don't believe in these kinds of decision theories.