Let's say that something very good will happen to you if and only if the universe is in state S at midnight today. You labor mightily up until midnight to make the universe be in S. But then, surely, you stop and relax. There is no point to anything you may do after midnight with respect to the universe being in S at midnight, except for prayer or research on time machines or some other method of affecting the past. It's too late for anything else!
This line of thought immediately implies two-boxing in Newcomb's Paradox. For suppose that the predictor will decide on the contents of the boxes on the basis of her predictions tonight at midnight about your actions tomorrow at noon, when you will be shown the two boxes. Her predictions are based on the state of the universe at midnight. Let S be the state of the universe being such as to make her predict that you will engage in one-boxing. Then until midnight you will labor mightily to make the universe be in S. You will read the works of epistemic decision theorists, and shut out from your mind the two-boxers' responses. But then midnight strikes. And then, surely, you stop and relax. There is no point to anything you may do after midnight with respect to whether the universe was in S at midnight or not, except for prayer or research on time machines or some other method of affecting the past, and in the Newcomb paradox one normally stipulates that such techniques are not relevant. In particular, with respect to the universe being in S at midnight tonight, it makes no sense to choose a single box tomorrow at noon. So you might as well choose two. Though, if you're lucky, by midnight tonight you will have got yourself into such a firm spirit of one-boxing that by noon tomorrow you will be blind to this thought and will choose only one box.
"There is no point to anything you may do after midnight with respect to whether the universe was in S at midnight or not, except for prayer or research on time machines or some other method of affecting the past, and in the Newcomb paradox one normally stipulates that such techniques are not relevant."
The standard causal response is to two-box (though Horgan argued interestingly for one-boxing based on backtracking). I say you ought to one-box, and here's why.
Let the outcomes be: if you take both and G predicts you take both, then no $million and you lose your life; and if you take one and G predicts you take one, then $million and you do not lose your life. Suppose G is 100% accurate in prediction, and let him make his decision at any time you'd like.
There's no in-principle difference between the 100% case and the 99% case. Nozick urges that we should two-box in the 100% case as well. But everyone who one-boxes lives, everyone who two-boxes dies, and everyone playing knows that. The sort of dominance reasoning that leads an agent to do something that will surely lead to their own death is, at the least, dubious. I'm certain that no causal theorist would actually two-box in this case.
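To put numbers on that claim (a sketch; the 99% line assumes, as nothing in the case as stated settles, that accuracy means the prediction matches the act 99% of the time and that death attaches only to the matched take-both/predicted-both cell):

\[
\begin{aligned}
&\text{100\% case:} && P(\text{death} \mid \text{take both}) = 1, && P(\text{\$million and survival} \mid \text{take one}) = 1;\\
&\text{99\% case:} && P(\text{death} \mid \text{take both}) = 0.99, && P(\text{\$million and survival} \mid \text{take one}) = 0.99.
\end{aligned}
\]

In the 100% case the two conditional probabilities just are the predictor's accuracy, so anyone who knows the setup knows that two-boxing means certain death and one-boxing means the million and survival.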
It is easy to make the case even more costly, for those willing to bite the enormous bullet of insisting that it is rational to choose certain death. Let the bad outcome be eternal damnation. If the response is that there is some sense in which you should act to avoid eternal damnation, and it is not the 'should' of rationality or prudence, then what 'should' is it?
It is inaccurate to say that choosing two boxes "leads" to death. Assuming that the predictor is a genuine predictor, it's not choosing two boxes that leads to death, but it's having the sort of antecedent character that makes you choose two boxes that leads to death. And that's beyond your control at the time of choice. At that point, it makes no difference what you choose (barring miracles, time travel, etc.). All you can do is hope that you have the one-boxing character.
The 100% case is one where it's hard to see what causal decision theory will say. On causal decision theory, the standard calculation is to take all the epistemically relevant theories T1, T2, ... and calculate the sums sum_i E(Utility | Ti and A)P(Ti) and sum_i E(Utility | Ti and B)P(Ti) to decide between A and B. But if one of the epistemically relevant theories Ti is such that P(Ti and A) or P(Ti and B) is zero, then the corresponding expected utility is undefined, and so standard causal decision theory fails to give an answer.
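Written out a little more explicitly (a sketch in my own notation, with U for utility and 1_E for the indicator of an event E), the calculation for option A is:

\[
V(A) = \sum_i P(T_i)\, E(U \mid T_i \wedge A), \qquad E(U \mid T_i \wedge A) = \frac{E(U \cdot 1_{T_i \wedge A})}{P(T_i \wedge A)},
\]

and the trouble is that the elementary definition of the conditional expectation on the right requires P(T_i \wedge A) > 0, so a single zero-probability conjunction leaves the i-th summand, and hence V(A), undefined.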
This is part of why standard causal decision theory requires indeterminism on the part of all the relevant theories. For if Ti is a deterministic causal theory and includes the initial conditions, then we will automatically have P(Ti and A)=0 or P(Ti and B)=0, and so we know ahead of time that one of the two sums is undefined.
There is probably something in the literature on this issue. And there are ways of fixing this up, in at least some cases. Sometimes, even though P(Ti and A)=0, there will be a well-defined expected utility. (Suppose you get paid a dollar per meter of distance that you jump. Then E(Utility | jump exactly 1/2 meter) is well defined--it's fifty cents--even though P(jump exactly 1/2 meter)=0.) But when Ti and A are logically incompatible, it's going to be hard to define E(Utility | Ti and A).
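The jumping example can be made precise by approximation, assuming only that every interval around the half-meter mark has positive probability of containing the jump: with X the jumped distance in meters and the payoff U = X dollars,

\[
\tfrac12 - \epsilon \;<\; E\!\left(U \;\middle|\; \tfrac12 - \epsilon < X < \tfrac12 + \epsilon\right) \;<\; \tfrac12 + \epsilon \quad \text{for every } \epsilon > 0,
\]

so letting \epsilon \to 0 pins the value at fifty cents even though P(X = 1/2) = 0. When T_i and A are logically incompatible, there is no natural family of positive-probability events shrinking down to T_i \wedge A, so no analogous limit is available.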
In the 100%-accurate Newcomb case, the problem comes up as follows. E(Utility | I two-box AND predictor predicts one-boxing) is difficult to define when P(I two-box AND predictor predicts one-boxing) = 0. It will depend on further details of the story whether this expected utility can be defined.
Here's how I think it will work out in the end. If the predictor is truly perfect (not just probability 1, but actual certainty), then it follows that either my decision is determined and not (non-derivatively) free OR there is something like backwards causation. If there is something like backwards causation, then clearly one-boxing is rational--the causalist and the evidentialist agree, I take it. I think causal decision theory should only apply to free decisions. Strictly speaking, I don't think non-free (or merely derivatively free) decisions should be evaluated for rationality (except in some extended sense).
I don't think this is a view you can consistently hold. God is a perfect predictor in the relevant sense. But it does not require backward causation (or even backtracking) to choose freely in God-worlds. I'm assuming you are not a theological fatalist.
Causal theorists don't need, as far as I can see, clearly defined expected utilities in order to decide rationally in the 100% case. They use dominance reasoning: no matter what God predicts, we're better off (so say the causal theorists) two-boxing. (The case would have to be changed slightly to accommodate this: if God predicts two-boxing and you one-box, make the outcome slightly worse.)
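One illustrative way to fill in the full payoff table so that this dominance argument runs (the $1000 in the always-available box is carried over from the standard Newcomb setup, and the two mismatch cells, which the case above leaves unspecified, are my own fill-in, incorporating the 'slightly worse' tweak just mentioned):

\[
\begin{array}{l|c|c}
 & \text{G predicts one-boxing} & \text{G predicts two-boxing}\\
\hline
\text{take one box} & \text{live, } \$1{,}000{,}000 & \text{die, } \$0\\
\text{take both boxes} & \text{live, } \$1{,}001{,}000 & \text{die, } \$1{,}000
\end{array}
\]

In each column, taking both boxes comes out $1,000 ahead; that column-by-column comparison is all the dominance reasoning consults, even though with a perfect predictor only the two diagonal cells can actually be reached.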
"It is inaccurate to say that choosing two boxes 'leads' to death. Assuming that the predictor is a genuine predictor, it's not choosing two boxes that leads to death, but it's having the sort of antecedent character that makes you choose two boxes that leads to death. And that's beyond your control at the time of..."
I have no idea why you say this other than (perhaps) the assumption that the 100% case entails determinism. But it doesn't. God is 100% accurate in indeterministic worlds.
God is outside of time, so he neither predicts nor retrodicts.
In any case, even if God were in time, the crucial thing to observe would be that God believes I will do A *because* I will do A. Now, maybe this "because" isn't causal. But in any case this is "something like backwards causation"--an unusual sort of dependence relation running backwards in time. And I think the "causal" part of causal decision theory needs to be extended to other kinds of dependence relationships besides strict causal ones.
The essential thing about a Newcomb-style predictor, however, is that the predictor's prediction isn't dependent on your action.
As for dominance reasoning, we know that this sort of reasoning is fallacious when it doesn't play nice with the causal/explanatory relations. Consider this "reversed Newcomb" case. Box A has $1000 and box B has nothing in it. You then point to which box or boxes you want. And then the experimenter puts or doesn't put money into box B according to your choice, just as in the Newcomb case. I.e., the experimenter puts a million dollars into B iff you did not point to both boxes. And then you get the box or boxes you pointed to.
The same dominance reasoning works just fine here. Either there will be money in both boxes or just in box A. In either case, you do better by choosing both boxes rather than just box B.
Of course, it's obvious why the dominance reasoning is fallacious here: whether box B will have money in it depends on your choice.
This point is obvious when the dependence is causal, but it would be just as clear if the dependence were of some sort other than causal. (One can cook up cases like that.) So of course the causal decision theorist must treat non-causal kinds of dependence similarly to causal dependence.
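For concreteness, the actual payoffs in the reversed case described above are:

\[
\text{point to both boxes: } \$1000 \ \text{(B is left empty)}; \qquad \text{point to box B alone: } \$1{,}000{,}000 \ \text{(the experimenter fills B)}.
\]

The dominance comparison over the two possible contents of B still favors pointing to both, yet doing so is plainly the worse choice, because which contents B ends up with is settled by the very act being evaluated.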
In the perfect predictor case we can have both conditions met: fixity and infallibility. What we can't have is fixity, infallibility, and (alternative-possibilities) freedom. So there's no need to assume in Newcomb Problems that infallibility, (A1 -> H1) & (A2 -> H2), is false. If we add infallibility to fixity, [(A1 '> H1) & (A2 '> H1)] v [(A1 '> H2) & (A2 '> H2)], we simply end up with ~MA1 v ~MA2. Fixity ensures independence; infallibility ensures perfect prediction. But which action is impossible, A1 or A2, depends on what you actually do.
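Spelled out in symbols (my transcription: A1 and A2 are the two available actions, H1 and H2 the predictor's having predicted A1 and A2 respectively, '->' the indicative conditional, ''>' the counterfactual, and 'M' possibility):

\[
\begin{aligned}
\text{Infallibility:}\quad & (A_1 \rightarrow H_1) \wedge (A_2 \rightarrow H_2)\\
\text{Fixity:}\quad & [(A_1 \,\Box\!\!\rightarrow H_1) \wedge (A_2 \,\Box\!\!\rightarrow H_1)] \vee [(A_1 \,\Box\!\!\rightarrow H_2) \wedge (A_2 \,\Box\!\!\rightarrow H_2)]\\
\text{Jointly:}\quad & \neg \mathrm{M} A_1 \vee \neg \mathrm{M} A_2
\end{aligned}
\]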
For what it's worth, whether God's being 'outside time' matters to whether he can make a prediction at some time t will depend on whether it also affects whether he can perform miracles at t, answer prayers at t, incarnate at t, speak to Moses at t, or otherwise come into time at t.
I think that there is no such thing as a choice without alternative possibilities. And without a choice, there is nothing for decision theory to evaluate.
I'm inclined to agree. But what is curious about this case is that ~MA1 if A2, and ~MA2 if A1. So what's possible depends on what you will do. You can still choose which of A1 and A2 you will do.
I don't see how it *depends* on me which is possible. Sure, there is a true counterfactual there, but not all counterfactuals imply a dependence.
It does not depend on what I would do; it depends on what I will do. Whether that sort of dependence is metaphysically interesting is another question. It does seem to me interesting that what I will do affects what I can do, but that might be an idiosyncratic interest.
I still don't see how what's possible *depends* on what I will do.
Infallibility ensures that (A1 -> H1) & (A2 -> H2): the perfect predictor predicts perfectly. So, if you do A1, then H1 will have been predicted. But since we have assumed fixity, it will be true that A2 '> H1: had you chosen A2, it would still have been the case that H1. And since we have assumed infallibility, it would also have been the case that H2, so A2 '> (H1 & H2). But H1 & H2 is impossible, so A2 is impossible. If instead you do A2, then you get the opposite conclusion, that A1 is impossible.
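As a compact rendering of that derivation (same notation as above; the last step uses the principle, implicit here, that a possible action cannot counterfactually imply an impossibility):

\[
\begin{aligned}
1.\;& A_1 && \text{you in fact do } A_1\\
2.\;& H_1 && \text{from 1 and infallibility}\\
3.\;& A_2 \,\Box\!\!\rightarrow H_1 && \text{from 2 and fixity of the past prediction}\\
4.\;& A_2 \,\Box\!\!\rightarrow H_2 && \text{infallibility, counterfactualized}\\
5.\;& A_2 \,\Box\!\!\rightarrow (H_1 \wedge H_2) && \text{from 3 and 4}\\
6.\;& \neg \mathrm{M}(H_1 \wedge H_2) && \text{the two predictions exclude each other}\\
7.\;& \neg \mathrm{M} A_2 && \text{from 5 and 6}
\end{aligned}
\]

Running the same steps from A2 instead yields ~MA1.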
This yields: If you do A1, then A2 is impossible.
But that's just a conditional. A conditional is not a dependence relation. What am I missing?
I'm also at a bit of a loss. Would it help you if I said that what was possible is conditional on what you actually do? And if that does help, why does it help?
No, it doesn't help.
For all A, it is true that if A, then MA. But surely it is not in general true that MA depends on A.
Put it in terms of explanation. Does A1 explain why ~MA2? If so, how?