Suppose there are two opaque boxes, A and B, of which I can choose
one. A nearly perfect predictor of my actions put $100 in the box that
they thought I would choose. Suppose I find myself with evidence that
it’s 75% likely that I will choose box A (maybe in 75% of cases like
this, people like me choose A). I then reason: “So, probably, the money
is in box A”, and I take box A.
This reasoning is supported by causal decision theory. There are two
causal hypotheses: that there is money in box A and that there is money
in box B. Evidence that it’s 75% likely that I will choose box A
provides me with evidence that it’s close to 75% likely that the
predictor put the money in box A. The causal expected value of my
choosing box A is thus around $75 and the causal expected value of my
choosing box B is around $25.
On evidential decision theory, it’s a near toss-up what to do: the
expected news value of my choosing A is close to $100 and so is that of
my choosing B.
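To make the arithmetic explicit, here is a minimal sketch of both calculations; the 0.95 accuracy figure is just an assumption standing in for “nearly perfect”:

```python
# Two-box predictor case: $100 is in whichever box the predictor thought I'd pick.
p_choose_A = 0.75   # my evidence about my own choice
accuracy = 0.95     # assumed stand-in for a "nearly perfect" predictor

# Causal decision theory: fix credences in the causal hypotheses
# ("money in A" vs. "money in B") and weigh each act against them.
p_money_A = p_choose_A * accuracy + (1 - p_choose_A) * (1 - accuracy)  # ~0.725
cdt_A = p_money_A * 100          # ~$72.50, i.e. around $75
cdt_B = (1 - p_money_A) * 100    # ~$27.50, i.e. around $25

# Evidential decision theory: condition on the act itself.
edt_A = accuracy * 100           # news value of choosing A: ~$95
edt_B = accuracy * 100           # news value of choosing B: ~$95, a near toss-up

print(cdt_A, cdt_B, edt_A, edt_B)
```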
Thus, on causal decision theory, if I have to pay a $10 fee for
choosing box A, while choosing box B is free, I should still go for box
A. But on evidential decision theory, since it’s nearly certain that
I’ll get a prize no matter what I do, it’s pointless to pay any fee. And
that seems to me to be the right answer here. But evidential decision theory gives the clearly wrong answer in some other cases, such as the infamous hypothetical case where an undetected cancer would make you likely to smoke, with no causation in the other direction, so that on evidential decision theory you refrain from smoking to make sure you didn’t get the cancer.
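Going back to the fee variant, the same sketch (same assumed numbers as above) gives:

```python
# $10 fee variant: choosing A costs $10, choosing B is free.
accuracy = 0.95     # assumed stand-in for "nearly perfect" prediction
p_money_A = 0.725   # as computed above from my 75% evidence about choosing A

cdt_A = p_money_A * 100 - 10      # ~$62.50: causal decision theory still says take A
cdt_B = (1 - p_money_A) * 100     # ~$27.50

edt_A = accuracy * 100 - 10       # ~$85: evidential decision theory says skip the fee
edt_B = accuracy * 100            # ~$95
```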
In recent posts, I’ve been groping towards an alternative to both
theories. The alternative depends on the idea of imagining looking at
the options from the standpoint of causal decision theory after updating
on the hypothesis that one has made a specific choice. In my current predictor cases, if you were to learn that you chose A, you would think: Very likely the money is in box A, so choosing box A was a good choice, while if you were to learn that you chose B, you would think: Very likely the money is in box B, so choosing box B was a good choice. As a result, it’s tempting to
say that both choices are fine—they both ratify themselves, or something
like that. But that misses the plausible claim that if there is a
$10 fee for choosing A, you should choose B. I don’t know how best to
get that claim. Evidential decision theory gets it, but
evidential decision theory has other problems.
Here’s something gerrymandered that might work for some binary choices. For options X and Y, which may or may not be the same, let e_X(Y) be the causal expected value of Y with respect to the credences for the causal hypotheses updated on your having chosen X. Now say that the differential retrospective causal expectation d(X) of option X equals e_X(X) − e_X(Y). This measures how much you would think you gained, from the standpoint of causal decision theory, in choosing X rather than Y, by the lights of having updated on choosing X. Then you should choose the option with the bigger d(X).
In the case where there is a $10 fee for choosing box A, d(B) is approximately $110 (having chosen B, you think the money is almost certainly in B, so choosing B was worth about $100 while choosing A would have lost you the $10 fee), while d(A) is approximately $90, so you should go for box B, as my intuition says. Thus you end up agreeing with evidential decision theory here.
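Here is a sketch of the whole computation for the binary fee case; the 0.99 accuracy is again an assumed stand-in for near-perfect prediction:

```python
# Differential retrospective causal expectation for the binary fee case.
# e(X, Y) is e_X(Y): the causal expected value of option Y, with the credences
# about where the money is updated on the supposition that I chose X.
accuracy = 0.99            # assumed stand-in for "nearly perfect" prediction
fee = {"A": 10, "B": 0}    # $10 fee for choosing A, B is free

def payoff(option, money_location):
    return (100 if option == money_location else 0) - fee[option]

def e(chosen, evaluated):
    other = "B" if chosen == "A" else "A"
    p = {chosen: accuracy, other: 1 - accuracy}  # P(money location | I chose `chosen`)
    return sum(p[loc] * payoff(evaluated, loc) for loc in ("A", "B"))

def d(option):
    other = "B" if option == "A" else "A"
    return e(option, option) - e(option, other)

print(d("A"))  # ~88, i.e. roughly $90
print(d("B"))  # ~108, i.e. roughly $110, so the theory says: take B
```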
You avoid the conclusion that you should refrain from smoking to make sure you don’t have cancer in the hypothetical case where cancer causes smoking but not
conversely, because the differential retrospective causal expectation of
smoking is positive while the differential retrospective causal
expectation of not smoking is negative, assuming smoking is fun (is
it?). So here you agree with causal decision theory.
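A sketch of that, with made-up utilities (the numbers for fun and for cancer are pure assumptions; only their signs matter):

```python
# Cancer-causes-smoking case: cancer raises the chance that I smoke, but smoking
# does nothing to cancer. The utilities below are made up purely for illustration.
fun = 5                      # assumed value of the pleasure of smoking
cancer_cost = -1000          # assumed disvalue of having cancer
p_cancer_if_smoke = 0.8      # assumed P(cancer | I chose to smoke)
p_cancer_if_abstain = 0.1    # assumed P(cancer | I chose to abstain)

def e(chosen_smoke, evaluated_smoke):
    # Causal expected value of the evaluated act, with the credence in the causal
    # hypothesis (cancer or not) updated on the supposed choice. Since smoking has
    # no causal effect on cancer, the cancer term is the same for both acts.
    p_cancer = p_cancer_if_smoke if chosen_smoke else p_cancer_if_abstain
    return p_cancer * cancer_cost + (fun if evaluated_smoke else 0)

d_smoke = e(True, True) - e(True, False)      # = fun, positive
d_abstain = e(False, False) - e(False, True)  # = -fun, negative
```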
What about Newcomb’s paradox? If the clear box has a thousand dollars
and the opaque box has a million or nothing (depending on whether you
are predicted to take just the opaque box or to take both), then the
differential retrospective causal expectation of two-boxing is a
thousand dollars (when you learn that you two-boxed, you learn that the opaque box was likely empty) and the differential retrospective causal
expectation of one-boxing is minus a thousand dollars.
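And the Newcomb numbers, in the same sketch style (0.99 again an assumed stand-in for near-perfect prediction):

```python
# Newcomb: the clear box holds $1,000; the opaque box holds $1,000,000 if I was
# predicted to take only the opaque box, and nothing otherwise.
accuracy = 0.99  # assumed stand-in for near-perfect prediction

def e(chosen_two_box, evaluated_two_box):
    # P(opaque box is full | my supposed choice), then the causal value of the act.
    p_full = (1 - accuracy) if chosen_two_box else accuracy
    return p_full * 1_000_000 + (1_000 if evaluated_two_box else 0)

d_two_box = e(True, True) - e(True, False)    # = $1,000
d_one_box = e(False, False) - e(False, True)  # = -$1,000, so the theory says: two-box
```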
So the differential retrospective causal expectation theory agrees with causal decision theory in the clear case (cancer causes smoking) and in the difficult case (Newcomb), but agrees with evidential decision theory in the $10 fee variant of my two-box scenario, and that last verdict seems plausible.
But (a) it’s gerrymandered and (b) I don’t know how to generalize it
to cases with more than two options. I feel lost.
Maybe I should stop worrying about this stuff, because maybe there
just is no good general way of making rational decisions in cases where
there is probabilistic information available to you about how you will
make your choice.