Friday, May 14, 2010

Epistemic probabilities, decisions and determinism

Suppose I have a brother, and I want to ask him to lend me money. When I go to ask him for a loan, I need to decide whether to wear a blue shirt. How should I decide? An obvious answer is:

  1. I need to evaluate the conditional epistemic probabilities P(I get loan | I wear blue) and P(I get loan | I don't wear blue), and act according to the higher of these.

The obvious answer is wrong. Here is a case. My brother is completely colorblind. But my mother informs me that

  2. my father has brought me up to have a tendency to wear blue shirts if and only if he brought up my brother to have a tendency to give loans to relatives.

If I know (2) with certainty (or even with high credence), the epistemic probability P(get loan | wear blue) is higher than P(get loan | don't wear blue), because the information in (2) induces a correlation between loan-getting and wearing blue. However, this correlation gives me no more reason to wear a blue shirt than the woman who wishes to avoid a hereditary disease afflicting Germans has reason to move to France. (One can sharpen the case by supposing the brother isn't colorblind but has a slight distrust of people who wear blue, with the probabilistic effect of that distrust being smaller than that of the tendencies in (2); in that case, I have positive reason not to wear the blue shirt, contrary to (1).)
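To make the induced correlation concrete, here is a minimal Python sketch with invented numbers (the fifty-fifty prior over upbringings and the 0.8/0.2 and 0.7/0.3 tendencies are illustrative assumptions, not part of the case): conditioning on the biconditional in (2) makes wearing blue evidence of a loan-friendly brother, even though the colorblind brother's decision is causally untouched by the shirt.

```python
from itertools import product

# Two equally likely upbringing states: in state A my father conditioned me
# toward blue shirts AND conditioned my brother toward lending; in state B
# he did neither. The brother is colorblind, so the chance of a loan
# depends only on his disposition, never on the shirt.
P_STATE = {"A": 0.5, "B": 0.5}   # epistemic prior over upbringing states
P_BLUE  = {"A": 0.8, "B": 0.2}   # P(I wear blue | state)
P_LOAN  = {"A": 0.7, "B": 0.3}   # P(loan | state), independent of the shirt

def joint():
    """Enumerate the joint distribution over (state, wear_blue, get_loan)."""
    for s, blue, loan in product(P_STATE, (True, False), (True, False)):
        p = (P_STATE[s]
             * (P_BLUE[s] if blue else 1 - P_BLUE[s])
             * (P_LOAN[s] if loan else 1 - P_LOAN[s]))
        yield s, blue, loan, p

def p_loan_given(blue):
    """Evidential conditional probability P(loan | wear blue = blue)."""
    num = sum(p for _, b, l, p in joint() if b == blue and l)
    den = sum(p for _, b, _, p in joint() if b == blue)
    return num / den

print(round(p_loan_given(True), 2))    # 0.62 -- P(get loan | wear blue)
print(round(p_loan_given(False), 2))   # 0.38 -- P(get loan | don't wear blue)
```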

This has got to be in the literature. It's reminiscent of standard examples involving probabilistic theories of causation.

So what is the right answer as to how to decide? My strong intuition is that I need to estimate, as best I can, the integral ∫ P*(I get loan | Q & S=x) dP(x), where P* is objective chance, S is an epistemic random variable representing the complete state of the world (including the laws) just before my choice, and P is the epistemic probability measure on the set of values of S compatible with my making a choice; I evaluate this for Q = "I choose to wear blue" and for Q = "I don't choose to wear blue" and compare. Since P*(I get loan | I choose to wear blue & S=x) = P*(I get loan | I don't choose to wear blue & S=x) for every x, given my brother's colorblindness, the two integrals are equal, and so I have no reason either way with respect to the shirt. And that's the right answer.
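Under the same illustrative numbers, the proposed comparison comes out as claimed: since the objective chance of a loan given the complete state is the same whichever shirt I choose, the two integrals (here, finite sums over the two toy states) coincide. A minimal sketch, continuing the model above:

```python
# Toy numbers repeated from the sketch above for self-containment.
P_STATE = {"A": 0.5, "B": 0.5}   # epistemic prior over complete states
P_LOAN  = {"A": 0.7, "B": 0.3}   # objective chance of a loan in each state

def causal_expectation(choose_blue):
    """The integral of P*(I get loan | Q & S=x) dP(x), as a sum over states.

    The brother's colorblindness is encoded by the integrand not depending
    on choose_blue at all: P*(loan | Q & S=x) = P_LOAN[x] for either Q.
    """
    return sum(P_STATE[s] * P_LOAN[s] for s in P_STATE)

print(round(causal_expectation(True), 2))    # 0.5 for Q = "I choose to wear blue"
print(round(causal_expectation(False), 2))   # 0.5 for the other Q -- a tie
```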

But notice an interesting fact. If determinism holds, then for any complete state x of the world just before my choice, either S=x entails that I will choose to wear blue or S=x entails that I won't choose to wear blue. In the former case, P*(I get loan | I don't choose to wear blue & S=x) is undefined, and in the latter case P*(I get loan | I choose to wear blue & S=x) is undefined. Thus, if determinism holds, the integrands of the two integrals I am supposed to compare are never both defined at the same x. Therefore, if the above is the right way to make decisions—and I think it is—then knowing determinism to hold would make decision theory non-viable.
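A deterministic variant of the toy model shows the undefinedness concretely (again, the details are invented for illustration): each complete state settles the choice, so one of the two integrands involves conditioning the chance function on a chance-zero event at every state.

```python
def chance_loan_given(choose_blue, state_entails_blue, p_loan_in_state):
    """P*(loan | Q & S=x) when the complete state x determines the choice.

    Under determinism P*(choose blue | S=x) is 1 or 0, so conditioning on
    the choice that the state rules out is conditioning on a chance-zero
    event, and the conditional chance is undefined (returned as None).
    """
    if choose_blue != state_entails_blue:
        return None            # undefined: P*(A | B) with P*(B) = 0
    return p_loan_in_state

# A state that entails choosing blue:
print(chance_loan_given(True,  True, 0.7))   # 0.7
print(chance_loan_given(False, True, 0.7))   # None -- undefined integrand
```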

8 comments:

  1. Couldn't you run a version of this with divine foreknowledge, rather than causal determinism, and get the same result?

  2. my father has brought me up to have a tendency to wear blue shirts if and only if he brought up my brother to have a tendency to give loans to relatives
    If I know (2) with certainty (or even high credence), the epistemic probability P(get loan | wear blue) is higher than P(get loan | don't wear blue), because the information in (2) induces a correlation between loan-getting and wearing blue. However, this correlation gives me no more reason to wear a blue shirt than the woman who wishes to avoid a hereditary disease afflicting Germans has reason to move to France


    It's a Newcomb-like problem. I think it is wrong to conclude so quickly that you're not given a reason to wear blue and the woman is not given a reason to move to France. Make the correlation perfect, i.e., 1. Then it seems obvious to me that you have a reason to wear blue, since it makes certain that you get the loan. Similarly for the move to France. If it is certain (and not merely certain up to time t, prior to the move and prior to the request for a loan), then you have very good reason, from the correlation, to wear blue. Similar results are forthcoming from knowing that the perfect predictor really is "perfect". That knowledge gives you a reason to one-box. Nozick comes to this conclusion as well in the high-probability cases.

  3. Heath:

    No, I don't think you can run this with foreknowledge.

    Mike:

    Yes, it's a Newcomb-like problem. I knew it was Newcomb-like, but didn't notice how extremely close it is to Newcomb. What I do like about this case is that it is easier to imagine than a predictor.

    It is obvious to me that the verdict that one should wear a blue shirt is simply an artifact of doing one's expected utility calculations the wrong way, whether the correlation is high or low. :-) (The perfect case is incompatible with free will and hence with decisions.)

    Here's one way to run an argument for this. If I could have a no-cost answer to a question, I would expect to make a better decision, and hence have reason to ask the question (Good's Theorem, I guess). So, now, suppose that my brother has the tiniest prejudice against giving loans to folks who wear blue shirts. I ask my mother: "Did my dad condition me to wear blue shirts?" My mother answers, but in a quiet voice. Whatever she answers, I then shouldn't wear a blue shirt. So why should I bother listening to her answer? But if I don't listen, it's like the original case.

    Or think of it this way. I listened to her, and she said "No." I then decided not to wear a blue shirt. But as I am getting ready to put on a white shirt, I realize that I no longer remember what she said. On the one-boxer view, now that I no longer remember what she said, I should change my mind and put on a blue shirt, or else ask her once again. But that's silly--why ask her, when whatever she says, I'd still be putting on a white shirt?
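    A rough numeric rendering of this argument, reusing the toy model sketched in the post (the numeric value of EPS, the brother's tiniest prejudice against blue shirts, is invented for illustration):

```python
P_STATE = {"A": 0.5, "B": 0.5}   # toy prior from the post's sketch
P_BLUE  = {"A": 0.8, "B": 0.2}
P_LOAN  = {"A": 0.7, "B": 0.3}
EPS     = 0.02                   # assumed tiny causal penalty for wearing blue

def p_loan(state, blue):
    """Chance of a loan: disposition minus the slight anti-blue prejudice."""
    return P_LOAN[state] - (EPS if blue else 0.0)

def evidential(blue, known_state=None):
    """P(loan | shirt choice), optionally conditioned on mother's answer."""
    states = [known_state] if known_state else list(P_STATE)
    num = den = 0.0
    for s in states:
        w = P_STATE[s] * (P_BLUE[s] if blue else 1 - P_BLUE[s])
        num += w * p_loan(s, blue)
        den += w
    return num / den

# Before hearing the answer, one-boxing reasoning favors blue:
print(round(evidential(True), 2), round(evidential(False), 2))            # 0.6 vs 0.38
# After EITHER answer, the recommendation flips to not-blue:
print(round(evidential(True, "A"), 2), round(evidential(False, "A"), 2))  # 0.68 vs 0.7
print(round(evidential(True, "B"), 2), round(evidential(False, "B"), 2))  # 0.28 vs 0.3
```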

  4. The perfect case is incompatible with free will and hence with decisions.

    I can't see how. Take the perfect Newcomb case. The fact that the predictor is 100% accurate does not make me unfree. How could it? It doesn't even make the predictor unfree, or not that I can see. I can one-box or not, knowing what I know. But in failing to one-box I act irrationally (or, so say I). Where does the loss of freedom come in?

  5. Given incompatibilism, the only way the predictor's knowledge could be perfect is if it is explanatorily posterior to the choice (e.g., backwards causation). But in that case, we have a circularity in the order of explanation, because the predictor's knowledge is prior to the predictor's statement which is prior to the decision.

  6. Given incompatibilism, the only way the predictor's knowledge could be perfect is if it is explanatorily posterior to the choice (e.g., backwards causation).

    I'm sure I'd deny it, since I can't see a reason to believe it. There are true propositions about what I will do, the explanation of whose truth does not require that they be true only posterior to my action. Such a proposition is true prior to my doing anything, since it is a proposition about what I will do (but haven't done). I see no conflict with free, incompatibilist action here.

  7. I am not sure if I prefer the account in the post to Skyrms' causal decision theory.

  8. This, too, is just a medical Newcomb case. My integral solution is a version of Lewis's formulation of causal decision theory.
