Thursday, October 31, 2013

Decision theory and compatibilism

Here's a decision-theoretic picture of how to make the decision between A and B. First, gain as much knowledge K as is reasonably possible about the laws and present conditions in the universe. The more information, the better our decision is likely to be (cf. Good's Theorem). Then calculate the conditional expected utility of the future given A together with K, do the same for B, and perform the action whose conditional expected utility is higher.
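
As a quick toy illustration of the Good's Theorem point (deciding after receiving cost-free information can be expected to do at least as well as deciding without it), here is a minimal sketch in Python; the states, probabilities, and payoffs are made up purely for illustration:

```python
# Toy decision problem (hypothetical numbers): two states of the world,
# two available actions, and the value of each action in each state.
P = {"s1": 0.5, "s2": 0.5}                      # prior over states
V = {("A", "s1"): 10, ("A", "s2"): 0,           # payoffs of action A
     ("B", "s1"): 0,  ("B", "s2"): 8}           # payoffs of action B

# Expected utility of deciding now, without any further information:
# pick the action with the best expectation under the prior.
eu_without_info = max(sum(P[s] * V[(a, s)] for s in P) for a in ("A", "B"))

# Expected utility of deciding after a cost-free, truthful report of the
# state: in each state, pick whatever action is best given that state.
eu_with_info = sum(P[s] * max(V[(a, s)] for a in ("A", "B")) for s in P)

print(eu_without_info)  # 5.0 (choose A blindly)
print(eu_with_info)     # 9.0 (A in s1, B in s2): never less than the above
```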

Let U(A,K) and U(B,K) be the two conditional expected utilities. (Note: I mean this to be neutral between causal and epistemic decision theories, but if I have to commit to one, it'll be causal.) We want to base our decision on U(A,K) and U(B,K) for the most inclusive K we can get.
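
To fix notation, on the epistemic (evidential) reading one standard way to unpack these quantities is

$$U(A,K) \;=\; \sum_{w} P(w \mid A \wedge K)\, V(w),$$

where the w range over the relevant possible futures and V(w) is the value of w, and similarly for U(B,K); a causal reading would replace the straight conditioning with whatever dependence structure the causal theory prescribes.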

Now imagine that we could ask an angel for any piece of information I about the present and the laws (e.g., by asking "How many hairs do I have on my head?"), and then form a new body of information K2 that includes I, on which to calculate U(A,K2) and U(B,K2). Then we should ask for as much information as we can. But now here is a problem: if determinism holds, then once we get enough information, the resulting Kn will entail which of A and B happens. Let's say it entails A. Then U(B,Kn) is undefined. The information tells us that we will do A, but it makes decision-making impossible.
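
Here is a minimal sketch of why U(B,Kn) goes undefined, using a made-up toy world-model: once Kn rules out B, conditioning on B together with Kn is conditioning on an event of probability zero, so there is no conditional distribution left to take an expectation over.

```python
# Toy model (hypothetical): each "world" specifies which action is taken,
# how valuable the resulting future is, and how probable the world is.
worlds = [
    {"act": "A", "value": 3, "prob": 0.4},
    {"act": "A", "value": 1, "prob": 0.2},
    {"act": "B", "value": 5, "prob": 0.3},
    {"act": "B", "value": 2, "prob": 0.1},
]

def expected_utility(act, admissible):
    """U(act, K): expected value over the worlds compatible with K and act."""
    live = [w for w in worlds if admissible(w) and w["act"] == act]
    total = sum(w["prob"] for w in live)
    if total == 0:
        raise ValueError(f"U({act}, K) is undefined: K rules out {act}")
    return sum(w["prob"] * w["value"] for w in live) / total

# A modest K that leaves both actions open:
K = lambda w: True
print(expected_utility("A", K), expected_utility("B", K))

# A maximally informative Kn that (under determinism) entails doing A:
Kn = lambda w: w["act"] == "A"
print(expected_utility("A", Kn))        # still fine
try:
    print(expected_utility("B", Kn))
except ValueError as e:
    print(e)                            # no worlds left to condition on
```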

So how much cost-free information should we get from the angel? If we ask for so much that it entails what we're going to do, we won't be able to decide. If our choice is indeterministic, we have a simple principled answer: Ask for everything about the laws and the present. But if our choice is determined, we must stop short of full information. But where?

Perhaps we ask for full information about the laws and about everything outside our minds. But the contents of our minds are often highly relevant to our decisions. For instance, if we leave the contents of our minds out of our decision-making, we won't have information about what we like and what we dislike. And in some decisions, such as when deciding whether to see a psychologist, information about our character is crucial.

Here's another interesting question. Our angel knows all about the present and the laws. It seems that he's got all the information we want to have about how we should act. So we just ask: Given all you know, does A or does B maximize utility? And he can't answer this question. For given all that he knows, only one of the two conditional utility values makes sense.

Of course, a similar problem comes up in asking an omniscient being in a case where our choices are indeterministic. We might think that we can make a better decision if that being tells us about the future. ("What will AAPL close at tomorrow?") But there is a bright line that can be drawn. We cannot use in our decision any information that depends on things that depend on our decision, since then we have a vicious loop in the order of explanation. So an omniscient being metaphysically cannot give us information that essentially depends on our decisions. (In particular, if we're deciding whether to buy AAPL stock, he can't tell us what it will close at tomorrow, unless he has a commitment to make it close at that price no matter what we do, since without such a commitment, what it will close at tomorrow depends, in a complex and stochastic and perhaps chaotic way, on whether we buy the stock today.)

Let me end with this curious question:

  • If you have a character that determines you never to ask for help, isn't that a reason to get professional help?
I think this is an interesting question both for compatibilists and for incompatibilists.

4 comments:

  1. Hilary Bok addresses this sort of issue with her example of the Pocket Oracle. She argues that, in principle, you could not get information about what you deterministically will do (I’m simplifying a bit) since the act of giving you the information changes the present state of affairs and therefore may change the implications for the future.

    Suppose the closing price of AAPL depends on whether you buy it, and you are making that decision. You ask the angel “what will AAPL close at tomorrow?” The angel can only answer this question, in a deterministic universe, by looking at its present state, which includes your ignorance. But if he alters that state (i.e. alters your beliefs about AAPL’s closing price by giving you an answer) he may very well falsify his statement. In short, there may be (and quite likely is) no way for the angel to give you true information about states which depend on your decisions.

  2. That's helpful and does adversely impact the argument.

    Here's a variant that may escape this. The angel is himself deterministic. And he announces to me not what I will do--for that would be problematic for these reasons--but what the present state of the universe is and what the laws are. Call this knowledge K.

    Suppose that in fact I *won't* make the extremely complicated calculation of U(A,K) and U(B,K) on the basis of K. There are two cases here, and maybe they need to be considered separately: I won't because I can't (the realistic case), and I can but I won't.

    In any case, standard decision theory says that it is rational to choose A iff U(A,K) > U(B,K), regardless of whether I know what these two utilities are. You don't get off the hook for irrationality just because you didn't do the calculation.

    But it isn't true that U(A,K) > U(B,K), and it isn't true that U(A,K) < U(B,K), and it isn't true that U(A,K) = U(B,K), because one of these two quantities is undefined. So no choice is rational under these circumstances. Which is odd.

  3. Or I could stick to my guns in the original story. Sure, it *might* be that the angel can't give me all the answers. But suppose it can. Then we still have the problem of my post, that to act rationally we need to stop our request for information short of asking for the complete state of the universe and the laws (if the angel is indeterministic: the complete state of the universe after the answer).

  4. There's a trick for getting around this problem, provided that you have the power to pre-commit to respond in a particular way to a pre-specified condition. (For example, you would have this power if you were a robot that could edit its own source code.)

    Here is the trick: Simply pre-commit to the following: "If the angel tells me that I will do A, then do B, and vice versa."

    The effect of this will be that the angel simply cannot tell you what you will do. For whatever he told you, you would do the opposite, so a contradiction would result. It would be like asking him, "Will you say 'no' in reply to this question?" He would be unable to give a yes-or-no answer without falsifying that very answer. (There is a small sketch of this diagonalization at the end of this comment.)

    Thus, if you make the pre-commitment above, you can ask him for whatever information you want without fear of having your power of choice taken away, because none of the information he gives you can contain a prediction of what you will do.

    I learned this idea from decision theorists associated with the Machine Intelligence Research Institute. They sometimes call this trick "playing chicken with the universe".

    I'm not sure what happens if you are worried even about learning what you will do with merely high probability, rather than with certainty.
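
    Here is a minimal sketch of that diagonalization in Python, with a made-up predictor that must announce one of the agent's two options; since the pre-committed agent does the opposite of whatever is announced, no announcement can come out true:

    ```python
    # Toy sketch (hypothetical setup): an agent pre-committed to "playing
    # chicken" does the opposite of whatever the predictor announces.
    OPTIONS = ("A", "B")

    def chicken_agent(prediction):
        """Pre-commitment: if told I will do A, do B, and vice versa."""
        return "B" if prediction == "A" else "A"

    # The predictor needs an announcement that the agent then fulfills.
    consistent = [p for p in OPTIONS if chicken_agent(p) == p]
    print(consistent)  # [] : no announcement about the choice can be correct
    ```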
