## Thursday, January 17, 2013

The following is known as the Adams Thesis for a conditional →:

1. P(A→B)=P(B|A).

This is very plausible. However, Brian Weatherson expresses a widely shared conviction when he says:
As with so many formal theories, accepting this thesis leads to paradox. Lewis (1976) showed that any probability function Pr satisfying [(1)] would be trivial in the sense that the domain of the function could not contain three possible but pairwise incompatible sentences.
And indeed in the literature, the Lewis result gets used to argue that a conditional cannot have a truth value, since if it had one, that value would have to satisfy the Adams Thesis.

But what Lewis actually showed was somewhat weaker. Lewis showed that triviality results from:

2. P(A→B|C)=P(B|AC).
Now, Lewis perhaps correctly concludes from this that the Adams Thesis can't hold for subjective probabilities. For given a probability distribution P satisfying (1) and any C with P(C)>0, we could imagine another rational agent, or the same one at a later point, who has conditionalized her subjective probabilities on C, and when we apply (1) to her newly conditionalized probabilities we get (2).
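Lewis's argument can be checked by brute force on a toy space. The sketch below (my own illustration, not Lewis's proof) takes a space with three pairwise incompatible positive-probability outcomes, fixes events A and B that are not independent, and searches every candidate event for the role of A→B. Requiring (2) for every C with P(A∩C)>0 forces, with C=B, that P(A→B|B)=1 and, with C=¬B, that P(A→B|¬B)=0, which pins A→B down to B itself; but then C=Ω demands P(B)=P(B|A), which fails here. So no candidate survives:

```python
from fractions import Fraction
from itertools import chain, combinations

# A nontrivial space: three pairwise incompatible positive-probability outcomes.
P = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}
outcomes = set(P)

def prob(E):
    return sum(P[w] for w in E)

def cond(E, F):  # P(E|F), assuming prob(F) > 0
    return prob(E & F) / prob(F)

def powerset(s):
    s = list(s)
    return [set(x) for x in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A, B = {"a", "b"}, {"b", "c"}

# Search for an event playing the role of A->B, i.e. satisfying (2):
# P(arrow|C) = P(B|A&C) for every C with P(A&C) > 0.
found = [arrow for arrow in powerset(outcomes)
         if all(cond(arrow, C) == cond(B, A & C)
                for C in powerset(outcomes) if prob(A & C) > 0)]

assert found == []  # no event satisfies (2) on this nontrivial space
```

The empty search result is exactly the triviality phenomenon: on a space rich enough to contain three pairwise incompatible events, nothing can satisfy (2) across all conditionalizations.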

But suppose that the probabilities we are dealing with are objective chances. Then one might well accept (1) for the objective chance probability function, without insisting on (2) in general. For instance, a reasonable restriction on (2) would match the restriction on the background knowledge in Lewis's Principal Principle, namely that the background knowledge is admissible, i.e., does not contain any information about B or stuff later than B.

Perhaps, though, clever people will find triviality results for (1), much as Lewis did for (2)? I doubt it. My reason for doubt is that I think I can prove the following two results, which show that any probability space, no matter how nontrivial, can be extended to an Adams-Thesis-verifying probability space for all A with P(A)>0.

Proposition 1: Let <P,F,Ω> be any probability space. Then there is a probability space <P',F',Ω'> that extends <P,F,Ω> in the sense that there is a function e:F→F' that preserves intersections, unions and complements and where P'(e(A))=P(A), and such that for every A in F with P(A)>0 and every B in F, there is an event A→B in F' satisfying P'(A→B)=P(B|A).
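One natural way to get such an extension (a sketch of my own, not necessarily the proof of the Proposition) is to adjoin an independent auxiliary coordinate and let A→B agree with B on A, while off A it is an independent event of probability P(B|A) carved out of the auxiliary coordinate. On a toy finite space, with N chosen so that P(B|A)·N is an integer:

```python
from fractions import Fraction

# Toy original space (the proposition covers arbitrary spaces).
P = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}

def prob(E):
    return sum(P[w] for w in E)

A, B = {"w1", "w2"}, {"w2", "w3"}
p_b_given_a = prob(A & B) / prob(A)  # P(B|A) = (1/4)/(3/4) = 1/3

# Extended space: adjoin an independent uniform coordinate {0,...,N-1}.
# (In general one would use [0,1]; N=12 makes P(B|A)*N an integer here.)
N = 12
P2 = {(w, i): P[w] / N for w in P for i in range(N)}

def prob2(E):
    return sum(P2[x] for x in E)

def e(E):  # the embedding F -> F': ignore the auxiliary coordinate
    return {(w, i) for w in E for i in range(N)}

# A->B: agrees with B on A; off A, an independent event of probability
# P(B|A) built from the auxiliary coordinate.
k = int(p_b_given_a * N)
arrow = e(A & B) | {(w, i) for w in P if w not in A for i in range(k)}

assert prob2(arrow) == p_b_given_a  # Adams Thesis: P'(A->B) = P(B|A)
assert prob2(e(A)) == prob(A)       # the embedding preserves probabilities
```

The construction only handles arrows whose antecedent and consequent come from the original field, which is why the Proposition, as the next paragraph notes, covers conditionals without embedded conditionals.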

This result only yields a probability space verifying the Adams Thesis for conditionals where neither the antecedent nor the consequent contains a conditional. Since conditionals that have conditionals in the antecedent and consequent can be at least somewhat hairy, this restriction may not be so bad. And one can iterate the Proposition n times to get an extension that allows the antecedent and consequent to have n conditional arrows in them. But if we are willing to allow merely finite additivity, then we have:

Proposition 2: Let <P,F,Ω> be any probability space and assume the Axiom of Choice. Then there is a finitely additive probability space <P',F',Ω'> (in particular, F' is a field, perhaps not a sigma-field) that extends <P,F,Ω> and is such that for any events A and B with P'(A)>0 there is an event A→B such that P'(A→B)=P'(B|A).

To prove Proposition 2 from Proposition 1, let <Pn,Fn,Ωn> be the probability space resulting from applying Proposition 1 n times. Let N be an infinite hypernatural number. Then there will be a hyperreal-valued *-probability space <*PN,*FN,*ΩN>, and when we restrict this appropriately to include only finitely many iterations of arrows in an event and take the standard part of *PN, we should be able to get <P',F',Ω'>.

Heath White said...

If I am following this right (big 'if') this result would take a lot of the wind out of the sails of the "no truth conditions for conditionals" views. It is not so bad to have no truth conditions for sentences with infinitely many embedded conditionals.

But I am quite likely not following this right.

Alexander R Pruss said...

You are following this right. :-)

Alexander R Pruss said...

But I am inclined to think the "no truth conditions" people may be right anyway.

Alexander R Pruss said...

Addition to the theorems: the extended probability field can be taken to satisfy the rules (a) A&(A→B) = A&B; (b) A→B is a subset of A→C whenever B is a subset of C.

I doubt we have uniqueness up to isomorphism. It would be interesting if adding further good rules yielded uniqueness.

Alexander R Pruss said...

I'm running into some technical difficulties in the proofs. I may need the Axiom of Choice for Prop 1, too.

Alexander R Pruss said...

Or maybe I can do the whole thing without AC, and even with countable additivity.

Alexander R Pruss said...

Doesn't look like I can do it without AC.

Alexander R Pruss said...

Looks like a lot of this is scooped by van Fraassen 1976.