The following is known as the Adams Thesis for a conditional →:
- (1) P(A→B) = P(B|A).
As with so many formal theses, accepting this one leads to paradox. Lewis (1976) showed that any probability function P satisfying (1) would be trivial, in the sense that the domain of the function could not contain three possible but pairwise incompatible events. And indeed in the literature, Lewis's result gets used to argue that a conditional cannot have a truth value, since if it had one, that value would have to satisfy the Adams Thesis.
But what Lewis actually showed was somewhat weaker. Lewis showed that triviality results from:
- (2) P(A→B|C) = P(B|AC).
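To see why (2) trivializes, here is the core of Lewis's argument in sketch form, writing ~B for the complement of B and assuming P(AB) > 0 and P(A~B) > 0. Conditioning (2) on C = B and on C = ~B gives:

- P(A→B|B) = P(B|AB) = 1
- P(A→B|~B) = P(B|A~B) = 0

By the law of total probability, P(A→B) = P(A→B|B)P(B) + P(A→B|~B)P(~B) = P(B). Combined with (1), this forces P(B|A) = P(B), i.e., A and B are independent. But if C1, C2, C3 are possible and pairwise incompatible, take A = C1∪C2 and B = C1: independence would require P(C1∪C2) = 1, contradicting P(C3) > 0.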
But suppose that the probabilities we are dealing with are objective chances. Then one might well accept (1) for the objective chance function without insisting on (2) in general. For instance, a reasonable restriction on (2) would match the restriction on background knowledge in Lewis's Principal Principle, namely that the background C be admissible, i.e., that it contain no information about B or about anything later than B.
Perhaps, though, clever people will find triviality results for (1), much as Lewis did for (2)? I doubt it. My reason for doubt is that I think I can prove the following two results, which show that any probability space, no matter how nontrivial, can be extended to a probability space verifying the Adams Thesis for all A with P(A) > 0.
Proposition 1: Let <P,F,Ω> be any probability space. Then there is a probability space <P',F',Ω'> that extends <P,F,Ω>, in the sense that there is a function e:F→F' that preserves intersections, unions and complements and satisfies P'(e(A)) = P(A) for all A in F, and such that for every A in F with P(A) > 0 and every B in F, there is an event A→B in F' satisfying P'(A→B) = P(B|A).
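As a sanity check on the kind of extension Proposition 1 promises, here is a Monte Carlo sketch in Python (with illustrative numbers of my own choosing) of the Stalnaker Bernoulli style of construction found in van Fraassen (1976): points of the extended space are infinite i.i.d. sequences of draws from Ω, and A→B holds at a sequence iff B holds at the first coordinate lying in A. A geometric-series calculation then gives P'(A→B) = P(AB)/P(A).

```python
import random

# A toy finite sample space with nonuniform chances (illustrative numbers).
omega = [0, 1, 2, 3, 4, 5]
probs = [0.1, 0.2, 0.15, 0.25, 0.2, 0.1]

A = {1, 2, 3}  # antecedent event
B = {0, 2, 4}  # consequent event

p_A = sum(p for w, p in zip(omega, probs) if w in A)
p_AB = sum(p for w, p in zip(omega, probs) if w in A and w in B)
exact = p_AB / p_A  # P(B|A) = 0.15/0.6 = 0.25

random.seed(0)

def arrow_holds():
    # A point of the extended space is an infinite i.i.d. sequence of draws;
    # A→B holds there iff B holds at the first coordinate in A. We only
    # need to draw until A occurs, so the simulation terminates.
    while True:
        w = random.choices(omega, weights=probs)[0]
        if w in A:
            return w in B

N = 200_000
estimate = sum(arrow_holds() for _ in range(N)) / N
print("P(B|A) =", exact)
print("Monte Carlo P'(A→B) ≈", estimate)
```

With this many samples the estimate lands within about a percentage point of P(B|A), which is the Adams Thesis for this one conditional; the proposition claims the construction works uniformly.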
This result only yields a probability space verifying the Adams Thesis for conditionals whose antecedent and consequent contain no conditionals. Since conditionals with conditionals in the antecedent or consequent can be at least somewhat hairy, this restriction may not be so bad. And one can iterate Proposition 1 n times to get an extension that allows the antecedent and consequent to contain up to n conditional arrows. But if we are willing to settle for mere finite additivity, then we have:
Proposition 2: Let <P,F,Ω> be any probability space and assume the Axiom of Choice. Then there is a finitely additive probability space <P',F',Ω'> (in particular, F' is a field, perhaps not a sigma-field) that extends <P,F,Ω> and is such that for any events A and B in F' with P'(A) > 0 there is an event A→B in F' such that P'(A→B) = P'(B|A).
To prove Proposition 2 from Proposition 1, let <Pn,Fn,Ωn> be the probability space resulting from applying Proposition 1 n times. Let N be an infinite hypernatural number. Then there will be a hyperreal-valued *-probability space <*PN,*FN,*ΩN>, and when we restrict this appropriately to include only events with finitely many iterations of arrows and take the standard part of *PN, we should be able to get <P',F',Ω'>.
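One step worth spelling out (my gloss on the sketch): taking standard parts preserves finite but not countable additivity. For finitely many pairwise disjoint E1, ..., Ek in F',

P'(E1 ∪ ... ∪ Ek) = st(*PN(E1) + ... + *PN(Ek)) = P'(E1) + ... + P'(Ek),

while a countably infinite disjoint union need not even lie in F', which is why Proposition 2 promises only a field and a finitely additive P'.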
8 comments:
If I am following this right (big 'if') this result would take a lot of the wind out of the sails of the "no truth conditions for conditionals" views. It is not so bad to have no truth conditions for sentences with infinitely many embedded conditionals.
But I am quite likely not following this right.
You are following this right. :-)
But I am inclined to think the "no truth conditions" people may be right anyway.
Addition to the propositions: the extended probability field can be taken to satisfy the rules (a) A&(A→B) = A&B; (b) A→B is a subset of A→C whenever B is a subset of C.
I doubt we have uniqueness up to isomorphism. It would be interesting if adding more good rules would yield such.
I'm running into some technical difficulties in the proofs. I may need the Axiom of Choice for Prop 1, too.
Or maybe I can do the whole thing without AC, and even with countable additivity.
Doesn't look like I can do it without AC.
Looks like a lot of this is scooped by van Fraassen 1976.