Sunday, January 2, 2011

A derivation of the likelihood-ratio measure of confirmation

Let CK(E,H) be the degree of confirmation that E lends to H given background K. Here is a derivation of the likelihood-ratio measure. The derivation is, I think, more compelling than Milne's.
We need several assumptions. For simplicity, write PK(A)=P(A|K) and PK(A|B)=P(A|BK). Our first assumption is uncontroversial. (I shall also suppose throughout that we're working with events such that none of the relevant Boolean combinations has probability zero or one.)
  1. CK(E,H) is a continuous function of the probabilities PK(−), where the blank can be filled in by any Boolean combination of E and H.
Everybody in the measure of confirmation business accepts (1). We now need two more complex theses to get some interesting results. One is this:
  2. If I is independent of all Boolean combinations of E, H and K, then CK(IE,H)=CK(E,H).
Such an I is obviously irrelevant, and so E and IE confirm H equally. This should be uncontroversial, though it is sufficient to refute the Eells-Jeffrey measure, as the sketch below illustrates.
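Here is the point in a few lines of Python (all the numbers are stipulations for illustration): conjoining an I that is independent of everything relevant merely rescales every probability involving E by PK(I), and the rescaling cancels out.

    # Illustrative numbers; I is independent of every Boolean combination
    # of E, H and K.
    p_I = 0.5                      # P_K(I)
    p_H = 0.4                      # P_K(H)
    p_E_H, p_E_notH = 0.8, 0.3     # P_K(E|H), P_K(E|~H)

    p_E = p_E_H * p_H + p_E_notH * (1 - p_H)     # P_K(E), total probability
    post_E = p_E_H * p_H / p_E                   # P_K(H|E), by Bayes

    # Independence factors P_K(I) out of numerator and denominator alike,
    # so conjoining I leaves the posterior on H untouched:
    post_IE = (p_I * p_E_H * p_H) / (p_I * p_E)  # P_K(H|IE)
    assert abs(post_IE - post_E) < 1e-12

The next thesis is more controversial: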
  3. If E and F are events that are conditionally independent given HK as well as given (~H)K, then CKE(F,H)=CK(F,H).
This thesis says that independent evidence has the same evidential force regardless of the order in which it comes in. For instance, if you flip a coin twice to gather evidence about whether the coin is biased in favor of heads, and you get heads twice, each heads result provides exactly the same degree of confirmation.
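In Python, with the bias values stipulated for illustration (0.7 under H, fair under ~H):

    # Coin example: E = first toss heads, F = second toss heads. The tosses
    # are conditionally independent given H and given ~H.
    pH, pN = 0.7, 0.5   # P_K(heads|H), P_K(heads|~H), assumed for illustration

    # Conditional independence gives P_K(EF|H) = P_K(E|H) * P_K(F|H), so
    # conditioning on E leaves the likelihoods of F untouched:
    p_F_given_EH = (pH * pH) / pH       # P_K(EF|H)/P_K(E|H) = P_K(F|H)
    p_F_given_EnotH = (pN * pN) / pN    # P_K(EF|~H)/P_K(E|~H) = P_K(F|~H)

    # The second heads therefore carries the same evidential force as the first:
    assert abs(p_F_given_EH / p_F_given_EnotH - pH / pN) < 1e-12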
Now we get some substantive results. The first one is easy.
Theorem 1. If (1), then CK(E,H) is a function solely of PK(H), PK(E|H) and PK(E|~H), i.e., there is a function f such that CK(E,H)=f(PK(E|H),PK(E|~H),PK(H)).
This is because the probabilities of all the relevant Boolean combinations can be computed from these three probabilities.
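A small sketch of the bookkeeping (numbers made up): the four atoms EH, E~H, ~EH and ~E~H get their probabilities from the three inputs, and every Boolean combination of E and H is a union of atoms.

    # Every Boolean combination of E and H is a union of the four atoms,
    # so its probability is a sum of atom probabilities.
    def atom_probs(p_H, p_E_H, p_E_notH):
        return {
            "EH": p_E_H * p_H,
            "E~H": p_E_notH * (1 - p_H),
            "~EH": (1 - p_E_H) * p_H,
            "~E~H": (1 - p_E_notH) * (1 - p_H),
        }

    a = atom_probs(0.4, 0.8, 0.3)             # illustrative numbers
    p_E = a["EH"] + a["E~H"]                  # P_K(E)
    p_E_or_H = a["EH"] + a["E~H"] + a["~EH"]  # P_K(E v H)
    p_H_given_E = a["EH"] / p_E               # P_K(H|E), by Bayes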
I am omitting the proofs of the next couple of theorems, except to note that Theorem 4 obviously follows from Theorems 2 and 3. I haven't written any of the proofs out, but I am confident of theoremhood (maybe with some minor additional assumption). Of course, I could be wrong.
Theorem 2. If (1) and (2), then CK(E,H) is a function solely of PK(H) and the likelihood ratio PK(E|H)/PK(E|~H).
Observe that Theorem 2 refutes the Eells-Jeffrey likelihood-difference measure, given that (1) and (2) are so plausible.
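The underlying conflict can be seen numerically (made-up numbers again): conjoining an irrelevant I rescales the likelihood difference PK(E|H)−PK(E|~H) by PK(I) while leaving the likelihood ratio fixed, so a difference measure cannot satisfy (2).

    # An irrelevant conjunct I shrinks the likelihood difference but not
    # the likelihood ratio.
    p_I, p_E_H, p_E_notH = 0.5, 0.8, 0.3    # illustrative

    diff_E = p_E_H - p_E_notH               # 0.5
    diff_IE = p_I * p_E_H - p_I * p_E_notH  # 0.25: rescaled by P_K(I)
    assert diff_IE != diff_E

    ratio_E = p_E_H / p_E_notH
    ratio_IE = (p_I * p_E_H) / (p_I * p_E_notH)
    assert abs(ratio_IE - ratio_E) < 1e-12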
Theorem 3. If (1) and (3), then CK(E,H) is a function solely of the likelihoods PK(E|H) and PK(E|~H).
Theorem 4. If (1), (2) and (3), then CK(E,H) is a function solely of the likelihood ratio PK(E|H)/PK(E|~H).
Therefore, given (1)-(3), CK(E,H)=f(PK(E|H)/PK(E|~H)) for some function f. Moreover, f must be an increasing function (given some additional assumptions). If all that is of interest is the comparison of degrees of confirmation, this is all we need. But perhaps we want to combine degrees of confirmation. This could be done additively or multiplicatively, i.e.,
  4. If E and F are conditionally independent given HK and (~H)K, then CK(EF,H)=CK(E,H)+CKE(F,H)
or
  5. If E and F are conditionally independent given HK and (~H)K, then CK(EF,H)=CK(E,H)CKE(F,H).
Now we get two final results.
Theorem 5. Assume (1), (2), (3) and (4). Then there is a constant c such that CK(E,H)=c·log(PK(E|H)/PK(E|~H)).
Theorem 6. Assume (1), (2), (3) and (5). Then there is a constant c such that CK(E,H)=(PK(E|H)/PK(E|~H))^c.
And for simplicity in both cases we should normalize by setting c=1.
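As a sanity check with the coin numbers from above (and c=1): the log-likelihood-ratio measure adds over conditionally independent evidence, and the plain ratio multiplies, as Theorems 5 and 6 require.

    import math

    pH, pN = 0.7, 0.5                # P_K(heads|H), P_K(heads|~H), assumed

    lr_one = pH / pN                 # likelihood ratio of one heads
    lr_two = (pH * pH) / (pN * pN)   # of two heads, conditionally independent

    # Theorem 5 (c = 1): log-likelihood ratios add.
    assert abs(math.log(lr_two) - 2 * math.log(lr_one)) < 1e-12
    # Theorem 6 (c = 1): likelihood ratios multiply.
    assert abs(lr_two - lr_one * lr_one) < 1e-12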

3 comments:

  1. Embarrassing mistake: "Eells-Jeffrey measure" should be "Nozick and Carnap measures".

  2. A more embarrassing mistake: Milne has a different measure. I still think this is the right one. :-)

  3. A paper based on this post, entitled "Independent Tests and the Log-Likelihood-Ratio Measure of Confirmation", has just been accepted by _Thought_.
