Suppose an ever-truthful dictator tells you that he is interested in the truth value of a proposition *p* solely about the laws of nature and what the world was like 1000 years ago. He tells you that in an hour he will infallibly find out whether *p* is true. If so, he will execute you. If not, he will let you go free.

Unless you have a time machine, or unless a God who answers prayers before they are made is on your side, fatalism is surely the appropriate attitude. There is nothing you can do about whether you will be executed.

Next suppose that determinism is true. That shouldn't affect what was just said. Determinism does not create new supernatural powers to affect the past.

Now add that your identical twin is told by the dictator that he will be executed if and only if he scratches his head in an hour.

Finally suppose that your proposition *p* is the proposition that one thousand years ago the universe was such as to nomically determine that in an hour you will scratch your head, and suppose that the dictator's infallible method for finding out whether *p* is true is simply to see if you scratch your head. Then you are in the same boat as your twin! Each of you will be killed if and only if he scratches his head. Since fatalism is true for you, it's true for your twin. Thus he can't do anything about his execution, and in particular he has no freedom about scratching his head. And so compatibilism is false.

## 16 comments:

I agree that, in the last scenario, you and your twin are in the same boat.

But we could shorten the analysis and just say this: suppose I want an infallible method of finding out whether the past nomically determines that I will commit suicide (execute myself) in an hour. My method is to wait an hour and see what happens. It is much less clear to me that what happens is not up to me, when *I* am doing the executing, than when the dictator is.

I think the difference is that the incompatibilist must endorse a transfer principle like Beta for not-up-to-me-ness, while for the compatibilist such principles fail, because what makes an action up to me is just (roughly) whether it is a product of my will.

Here is an interesting variant. Imagine a third twin. He, too, will be executed iff the analogue of p is true for him. But the dictator there, instead of simply observing whether the twin scratches his head, engages in direct historical investigation (perhaps by consulting an infallible history book) into whether p is true.

Does the method by which the dictator finds out whether p is true affect whether you have the power to save your life?

Another variant:

Suppose determinism.

Let P(X) be the proposition that the state of the universe 1000 years ago plus the laws entails that X will raise his hand.

Now imagine triplets.

A is told that he will get $100 iff P(A) is true. The method for determining whether P(A) is true is historical investigation--completely reliable books of science and history will be consulted.

B is told that he will get $100 iff P(B) is true. The method for determining whether P(B) is true is to see if B raises his hand.

C is told that he will get $100 iff he raises his hand.

If compatibilism is true, it is clearly rational for C to raise his hand.

But if it's rational for C to raise his hand, it seems to be rational for B to raise his hand.

But if it's rational for B to raise his hand, isn't it rational for A to do so? Why should the method by which the truth value of P(A) or P(B) is determined matter, as long as the methods are of equal reliability?

But now suppose it's rational for A to raise his hand. What is his practical means-end reasoning? The end is $100. What are the means? Making P(A) true??!
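As a toy sketch of the triplets case (the function names are mine, not from the post): assuming determinism, P(X) is true exactly when X in fact raises his hand, so the three payoff rules collapse into one and the same function of the action.

```python
# Hypothetical sketch, assuming determinism: P(X) holds iff X in fact
# raises his hand, so all three payoff rules are extensionally identical.

def payoff_A(raises_hand: bool) -> int:
    # A is paid iff P(A) is true; under determinism P(A) is true
    # exactly when A raises his hand.
    p_a_true = raises_hand
    return 100 if p_a_true else 0

def payoff_B(raises_hand: bool) -> int:
    # B is paid iff P(B) is true; the dictator's method just checks
    # whether B raises his hand.
    p_b_true = raises_hand
    return 100 if p_b_true else 0

def payoff_C(raises_hand: bool) -> int:
    # C is paid iff he raises his hand, with no detour through P(C).
    return 100 if raises_hand else 0

# Given determinism, the three payoff functions agree on every action:
for act in (True, False):
    assert payoff_A(act) == payoff_B(act) == payoff_C(act)
```

The sketch only makes vivid what the parity argument asserts: if the payoff functions are identical, it is hard to see how the rationality of raising one's hand could differ between A, B, and C.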

This seems to be a bit like Newcomb's paradox, since a great deal depends upon how much we trust the predictor to be reliable in Newcomb's and your story.

If we trust the predictor in Newcomb's scenario, we take the high value box based on the prediction. If we trust the predictor in your scenario, we act to do what benefits us regardless of whether or not it is predicted.

There is some difference in details though.

In Newcomb's scenario, we benefit slightly more if we correctly believe either that the predictor is wrong or that we can prove them wrong (==have free will).

But in your scenario there is no benefit to us if we prove the oracle to be wrong.

Yes, this version is like Newcomb. I think what it shows is that if you're a two-boxer in Newcomb, then you can't be a compatibilist.

The connection to Newcomb makes this an interesting and novel argument.

It's a way of connecting this with my intuition that *causal* decision theory is incompatible with determinism. Aren't you basically a causal decision theorist? At least, your distinction between two ways of updating in your (really good) paper with Mark Lance seems very similar to the distinction between the causal and epistemic ways of evaluating conditional probabilities.

> Next suppose that determinism is true. That shouldn't affect what was just said. Determinism does not create new supernatural powers to affect the past.

The fixity of the past is no different from the fixity of the future. If it will be true that p, then there is again nothing you can do to make it the case that not-p, and your fate is sealed. You can say, "but I can act in such a way that it will not be the case that p". That's true, but irrelevant. If in fact it will be the case that p, then that is what will happen, despite the fact that you can see to it that not-p. This is one of the reasons that it does not matter to what you can do that you are world-bound. You're in w and only w. What will happen to you, of course, will happen. That's true, despite the fact that you can do lots of things otherwise.

This is all by way of saying that the arguments you're running could be run with respect to the fixity of the future thesis. But they are not credible at all when considering a fixed future.

While of course you can't literally change the future, you can causally affect it. But you normally (barring time machines or miracles) cannot causally affect the past. That's a difference with respect to fixity.

It's tricky. You cannot make a true proposition false, no matter what you do, nor a false one true. So, you cannot cause it to be the case that a true proposition about the future is a false one. What we can do is act in such a way that p was never true. That is not causing p to be false. It is the same relation that we stand in to "changing" the past. We can (sometimes) act in such a way that the past never included the fact that p. But we cannot change the past. The past/future asymmetry wrt "change" is one of degree, not kind, as far as I can tell.

Plausibly, if a state of affairs P grounds a state of affairs Q, then by causing P, we are causing Q.

But there being a glass of water on the table grounds its being true that there is a glass of water on the table. So by causing there to be a glass of water on the table--something well within our power--we cause it to be true that there is a glass of water on the table.

Alex,

Were I to raise my hand at t I would not be making it true that I raised my hand at t. It would already have been true that I raise my hand at t. My act does not cause that proposition to be true.

Mike:

Sure, it would have already been true. But it would have been true because of my raising of my hand. So why not say that I cause it to have always already been true?

Certainly, my decision explains why it's true. Maybe that's not causal explanation, but I don't see why not.

I am starting to wonder if one can't substantiate this claim:

Given standard means-end reasoning:

1. You should be a one-boxer iff compatibilism is true.

2. You should be a two-boxer iff libertarianism is true.

Let's say you're a one-boxer. What's your means-end reasoning? Presumably your end is to get a million dollars. What's your means? Well, it's to open box B without opening box A. But how does the failure to open box A contribute to your end? The only suggestion I can see is that it contributes in some counterfactual fashion. But if you hold that, then you hold that the past is counterfactually dependent on your action. And then you can get alternate possibilities even given determinism. And if you can have alternate possibilities given determinism, then compatibilism is looking pretty good.

Let's say you're a two-boxer. Then you will think that in my ABC variant, there is no reason for A to raise his hand, and hence none for B, and hence none for C to do so. And that implies incompatibilism.

I’ve been trying to get this straight too. Here is the Newcomb version of your case: a very smart alien has infallibly promised to give you the standard choices in the Newcomb problem. If he predicts you will pick only the opaque box (B) then he puts $1M in it. If he predicts you will pick both then he puts $1T in the clear box (A) and nothing in B. His method of prediction is to examine the past and the laws. He is not going to make any inferential mistakes, though he may have to make some probabilistic guesses if the laws are indeterministic or have exceptions. You know all this.

Now suppose determinism. Then you know, as truths having the modality of natural necessity,

(1) If you pick only B, then you will get $1M.

(2) If you pick both A and B, then you will get $1T.

So surely the rational thing to do, in this case, is pick only box B. If our alien is omniscient about a deterministic world, you should be a one-boxer.
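The two conditionals can be checked with a toy sketch. Two assumptions here are mine, not the commenter's: that the infallible predictor's prediction always matches the actual choice, and that "$1T" denotes the standard thousand-dollar Newcomb amount.

```python
# Minimal sketch of the deterministic Newcomb variant. Assumptions (mine):
# the prediction always equals the actual choice, and "$1T" = $1,000.

M = 1_000_000  # placed in opaque box B if one-boxing is predicted
T = 1_000      # placed in clear box A if two-boxing is predicted

def payout(choice: str) -> int:
    # With determinism and an infallible predictor, prediction == choice.
    prediction = choice
    box_b = M if prediction == "one-box" else 0
    box_a = T if prediction == "two-box" else 0
    if choice == "one-box":
        return box_b
    return box_a + box_b

assert payout("one-box") == 1_000_000   # conditional (1)
assert payout("two-box") == 1_000       # conditional (2)
```

Since both conditionals hold with natural necessity in this setup, the comparison is simply $1M against $1K, which is why one-boxing looks rational here.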

I don’t know this lit well enough to say exactly what explains this but here is a go at it. Causal decision theory tells us to evaluate counterfactuals, e.g. “If I were to pick box A I would get ____” rather than material conditionals, e.g. “If I do pick box A I will get ____”. Even in a deterministic world we normally evaluate counterfactuals by imagining “small miracles.” But this context is one which essentially tells us that small miracles are ruled out. The nearest possible world (for present purposes) where I pick both boxes is one where the past has determined that action, the alien knows about it, and has left the second box empty. So in this context, the truth values or probabilities of counterfactuals and material conditionals align.

I am not sure you can extract any substantive arguments out of these observations but it is an interesting point.
