## Monday, September 22, 2008

### Conditionals, Adams' Thesis and Molinism

The Theorem below is surely known. But the consequence about Molinism is interesting. It is related to arguments by Mike Almeida.

Definition. The claim A→B is a conditional providing A→B entails the material conditional "if A, then B".

Remark: This is of course a very lax definition of a conditional (B counts as a conditional, as does not-A), so the results below will be fairly general.

Definition. A→B is localized provided A&B entails A→B.

Remark: Lewisian and Molinist subjunctives are always localized.

Definition. Adams' Thesis holds for a conditional claim A→B providing P(A→B)=P(B|A).

Definition. The claim B is (probabilistically) independent of A provided P(B|A)=P(B). (If P(A)>0, this is equivalent to P(A&B)=P(A)P(B).)

Theorem 1. Suppose A→B is a localized conditional. Then Adams' Thesis holds for A→B if and only if A→B is independent of A.

Proof. First note that if A→B is a localized conditional, then, necessarily, A&(A→B) holds if and only if A&B holds. (The left-to-right direction holds because A→B entails the material conditional; the right-to-left direction is just localization.) Therefore P(A→B|A)=P(A&(A→B)|A)=P(A&B|A)=P(B|A). Since P(A→B|A)=P(B|A), Adams' Thesis holds just in case P(A→B|A)=P(A→B), i.e., just in case A→B is independent of A. ■
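Here is a quick brute-force check of the theorem on a toy finite probability space. The encoding is just one way to model the definitions: a world is a truth-value triple for A, B, and a proposition C playing the role of the conditional A→B, with the admissible worlds restricted by the two definitions above.

```python
from itertools import product
from fractions import Fraction

# Worlds are truth-value triples (A, B, C), where C plays the role of A→B.
# The two definitional constraints:
#   - conditional: C entails the material conditional, so no world has A, not-B, C;
#   - localized:   A&B entails C, so no world has A, B, not-C.
worlds = [w for w in product([False, True], repeat=3)
          if not (w[0] and not w[1] and w[2])      # C entails "if A then B"
          and not (w[0] and w[1] and not w[2])]    # A&B entails C

def check(probs):
    """probs: dict mapping each world to a Fraction; must sum to 1, with P(A)>0.
    Verifies the key step of the proof and the biconditional of Theorem 1;
    returns True iff Adams' Thesis holds on this distribution."""
    P = lambda pred: sum(p for w, p in probs.items() if pred(w))
    pA = P(lambda w: w[0])
    pB_given_A = P(lambda w: w[0] and w[1]) / pA
    pC_given_A = P(lambda w: w[0] and w[2]) / pA
    pC = P(lambda w: w[2])
    # Key step of the proof: P(A→B | A) = P(B | A) for any localized conditional.
    assert pC_given_A == pB_given_A
    adams = (pC == pB_given_A)            # Adams' Thesis: P(A→B) = P(B|A)
    independent = (pC == pC_given_A)      # A→B is independent of A
    assert adams == independent           # Theorem 1
    return adams

# Example: the uniform distribution over the admissible worlds.
uniform = {w: Fraction(1, len(worlds)) for w in worlds}
check(uniform)
```

Any assignment over the admissible worlds with P(A)>0 passes the two assertions; on distributions where A→B is correlated with A, `check` returns False, i.e., Adams' Thesis fails, exactly as the theorem says.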

Remark: It follows that Molinist conditionals do not satisfy Adams' Thesis. For in Molinist cases, God providentially decides what antecedents of conditionals to strongly actualize on the basis of what Molinist conditionals are true, and hence A is in general dependent on A→B (and thus A→B is in general dependent on A).[note 1]

Mike Almeida said...

Alex,

Neat proof! It looks like you've shown that rational agents in Molinist worlds ought not to maximize U-utility, but ought to maximize V-utility. They ought to be causal conditional utility maximizers.

Alexander R Pruss said...

Mike,

Thanks! What are U- and V-utilities?

Mike Almeida said...

The distinction is in Gibbard and Harper, 'Two Kinds of Expected Utility'. I may have reversed the names. If you are a U-maximizer, you calculate the expected utility of A by multiplying the probability of the conditionals Pr(A []-> O) by the value of outcome O (for every O in the partition). But if you are a V-maximizer, you multiply Pr(O/A) by the value of each O in the partition. But the latter would be a mistake in Molinist worlds, since Pr(A []-> O) does not equal Pr(O/A), and the indeterministic outcomes O in those worlds, given that you perform A, have the probability Pr(A []-> O) and not Pr(O/A). Let me think a bit more about this, but it is close to saying that Molinist worlds are quasi-Newcomb worlds. God is determining the causal consequences of your actions ahead of time, so you should choose actions by their causal consequences and not by their best-news consequences.

Alexander R Pruss said...

Mike,

Interesting. Let me think. Take a case where P(A→B) differs a lot from P(B|A). This is going to be a case where whether God allows A to happen is highly dependent on whether A→B holds. Here is one such case. Let's say that I have a device that involves the throw of a fair die. If the die is 1, everybody on earth suffers horribly for ten days--let this outcome be B. If the die is anything else, I get $10. Let A be my activating the device. Now, let us suppose that for providential reasons, God would be very unlikely to let A happen unless A→~B. Then, P(A→B|A) is astronomically small. By the same token P(B|A) is astronomically small, since P(A→B|A)=P(B|A) (that was part of the proof of Theorem 1).

Now, P(A→B)=1/6. (By some version of the Principal Principle?) So, I have two ways to calculate utilities:
U = P(A→B)Util(B)+P(A→~B)Util(~B)
or:
V = P(B|A)Util(B)+P(~B|A)Util(~B).

Since Util(B) is a large (but not astronomically large) negative number, and Util(~B) is a relatively small number, and P(B|A) is astronomically small while P(A→B)=1/6, we get that U is a large negative number, while V is a small positive number.
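For concreteness, here is the arithmetic with stand-in numbers; the utilities and the "astronomically small" probability below are illustrative placeholders, not anything fixed by the case:

```python
# Stand-in numbers for the die-device case (all four values are illustrative).
p_cond_B = 1 / 6        # P(A→B): the die comes up 1
p_B_given_A = 1e-12     # P(B|A): astronomically small, by the Molinist setup
util_B = -1e6           # large (but not astronomically large) negative utility of B
util_notB = 10          # relatively small utility of ~B (the $10 payoff)

# The two ways of calculating utilities:
U = p_cond_B * util_B + (1 - p_cond_B) * util_notB          # conditional-based
V = p_B_given_A * util_B + (1 - p_B_given_A) * util_notB    # conditional-probability-based

# U comes out a large negative number; V comes out a small positive number.
```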

Your proposal, I take it, is that the utilitarian should act in accordance with U and not activate the device. This is far from obvious to me. (Acting in accordance with V, though, would involve the sin of testing God, I think. But that's not a utilitarian consideration, unless one adds in the disutility of eternal damnation, but that gums up the utilitarian calculus badly.)

This is a Newcomb-type case, but a probabilistic one.

Mike Almeida said...

Alex,

I don't think this is a Newcomb. You won't have a Newcomb without dominance reasoning conflicting with V-maximization. But there is no dominant choice here (unlike, for instance, two-boxing, which dominates one-boxing).
So you need a case where, no matter what God has chosen to do, you are better off performing some action A, even though the alternative B raises the probability of very good news.

Take the case of my freely choosing to chew tobacco, supposing that chewing is causally unrelated to getting throat cancer. I might say, in a Molinist world, that I have no reason to stop chewing that is derived from the high correlation of cancer with chewing. I know right now, since this world is Molinistic, that the probability is either 1 or 0 that I get cancer, since God decided that long ago. He even knew (like any perfect predictor) whether I would choose to chew or not. Chewing dominates not chewing. If God planned on my getting cancer, then I am better off chewing. If God planned on my not getting cancer, then I am better off chewing.
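A toy payoff table makes the dominance explicit; the numbers below are illustrative placeholders, not anything in the case itself:

```python
# Hypothetical payoffs: rows are God's already-fixed plan, columns are my choice.
# Chewing adds a small pleasure (+1) and, by hypothesis, has no causal
# bearing on cancer (-100 if planned, 0 if not).
payoff = {
    ("cancer", "chew"):      -100 + 1,
    ("cancer", "abstain"):   -100,
    ("no cancer", "chew"):      0 + 1,
    ("no cancer", "abstain"):   0,
}

# Chewing dominates: it does better in every state God may have fixed.
dominates = all(payoff[(state, "chew")] > payoff[(state, "abstain")]
                for state in ("cancer", "no cancer"))
```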