Carnap's objective prior probability measure was designed to make induction possible. Almost nobody uses Carnap's probability measure any more—the only exception I am aware of is Tooley in his debate book with Plantinga on evil. I have no idea why Tooley is using the Carnap measure—I thought it was out of date. In any case, it's easy to point out at least two things that are wrong with the Carnap measure, and hence why Tooley's arguments based on it need to be reworked. To explain the problems with the Carnap measure, I need some details. If you're familiar with Carnap measure, you can skip ahead to "Problem 1".
Carnap's prior probability measure is best seen as a measure for the probability of claims made by sentences of a truth-functional language with n names, a1,...,an, and k unary predicates, Q1,...,Qk. Let N be the set of names, Q the set of predicates and T the set {True, False}. Call the language L(Q,N). Say that a state s is a function from the Cartesian product QxN to T, and let S be the set of all states. There is a natural way of saying whether a sentence u of L(Q,N) is true at a state s. Basically, you say that the sentence Qi(aj) is true at s if and only if s(Qi,aj)=True, and then extend truth-functionally to all sentences.
There is a natural probability measure on S, which I will call the "Wittgenstein measure", defined by PW(A)=|A|/|S| for every subset A of S, where |X| is the cardinality of the set X. This probability measure assigns equal probability to every state. Given a probability measure P on states, we get a probability measure for the sentences of L(Q,N). If u is such a sentence, define the subset uT={s:u is true at s} of S. Then, we can let P(u)=P(uT). The Wittgenstein measure does not allow induction. Suppose that we have three names, and two predicates, Raven and Black. Our evidence E is: Raven(a1), Raven(a2), Raven(a3), Black(a1) and Black(a2). Then, PW(Black(a3)|E)=1/2=PW(Black(a3)), as can be easily verified, because all states are equally likely, and hence the state that makes all the ai be black ravens is no more likely than the state that makes all the ai be ravens but with only a1 and a2 black.
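A quick brute-force check bears this out. (The encoding is mine, just for computation: a state is a tuple giving each name a bitmask of which predicates hold of it.)

```python
from itertools import product
from fractions import Fraction

# n=3 names, k=2 predicates: bit 0 = Raven, bit 1 = Black.
RAVEN, BLACK = 1, 2
states = list(product(range(4), repeat=3))  # 4^3 = 64 states, all equally likely under PW

def satisfies_E(s):
    # E: Raven(a1), Raven(a2), Raven(a3), Black(a1), Black(a2)
    return all(x & RAVEN for x in s) and (s[0] & BLACK) and (s[1] & BLACK)

E = [s for s in states if satisfies_E(s)]
H_and_E = [s for s in E if s[2] & BLACK]  # ...and Black(a3)

pw_H_given_E = Fraction(len(H_and_E), len(E))
pw_H = Fraction(sum(1 for s in states if s[2] & BLACK), len(states))
print(pw_H_given_E, pw_H)  # 1/2 1/2: the evidence gives no confirmation
```

Only two states are compatible with E (Black(a3) true or false), and PW weighs them equally, so conditioning on E does nothing for Black(a3).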
So, Carnap wanted to come up with a probability measure that allows induction but is still fairly natural. What he did was this. Instead of assigning equal probability to each state, he assigned equal probability to each equivalence class of states. Say that s~t for states s and t if there is some permutation p of the names N such that s(R,p(a))=t(R,a) for every predicate R and every name a. Let [s] be the equivalence class of s under this relation: [s]={t:t~s}. Let S* be the set of these equivalence classes. Then, if s is a state, we define: PC({s})=1/(|[s]||S*|). In other words, each state in an equivalence class has equal probability, and each equivalence class has equal probability. If A is any subset of S, we then define PC(A) as the sum of PC({a}) as a ranges over the elements of A.
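Here is the definition in code, on the same bitmask encoding of states as above, together with the smallest interesting case (n=2 names, k=1 predicate), where the three equivalence classes get 1/3 apiece:

```python
from itertools import product, permutations
from fractions import Fraction
from math import comb

def carnap_weight(s, num_types):
    """PC({s}) = 1/(|[s]| |S*|) for a state s given as a tuple of per-name types."""
    n = len(s)
    num_classes = comb(num_types + n - 1, n)  # |S*|: multisets of n types
    orbit = len(set(permutations(s)))         # |[s]|: distinct name-permutations of s
    return Fraction(1, orbit * num_classes)

# Tiny case: n=2 names, k=1 predicate, hence 2 types (Q holds or not).
weights = {s: carnap_weight(s, 2) for s in product(range(2), repeat=2)}
print(weights)
# (0,0) and (1,1) each get 1/3; (0,1) and (1,0) each get 1/6.
# Each of the three equivalence classes gets 1/3, and the weights sum to 1.
```

The number of equivalence classes is the number of multisets of name-types, hence the binomial coefficient.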
The merit of Carnap measure is that it assigns a greater probability to more uniform states. Thus, PC(Black(a3)|E) should be greater than 1/2 (I haven't actually worked the numbers).
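Working the numbers by brute force (on my bitmask encoding of states) suggests the value is 3/4:

```python
from itertools import product, permutations
from fractions import Fraction
from math import comb

def carnap_weight(s, num_types):
    n = len(s)
    return Fraction(1, len(set(permutations(s))) * comb(num_types + n - 1, n))

def pc(event, num_types):
    return sum(carnap_weight(s, num_types) for s in event)

RAVEN, BLACK = 1, 2
states = list(product(range(4), repeat=3))  # n=3 names, k=2 predicates
E = [s for s in states
     if all(x & RAVEN for x in s) and (s[0] & BLACK) and (s[1] & BLACK)]
H_and_E = [s for s in E if s[2] & BLACK]

pc_H_given_E = pc(H_and_E, 4) / pc(E, 4)
print(pc_H_given_E)  # 3/4 > 1/2: the evidence confirms Black(a3)
```

The all-black-ravens state is alone in its equivalence class (weight 1/20), while the state with a3 non-black shares its class with two permuted states (weight 1/60), whence 3/4.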
Problem 1: Carnap measure is not invariant under increase of the number of predicates. Intuitively, adding irrelevant predicates to the language, predicates that do not appear in either the evidence or the hypothesis, should not change the degree of confirmation. But it does. In fact, we have the following theorem. Let u be any sentence of L(Q,N). Let Qr be Q with r additional predicates thrown in. Let ur be a sentence of L(Qr,N) which is just like u (i.e., ur is u considered qua sentence of L(Qr,N)).
Theorem 1: PC(ur) tends to PW(u) as r tends to infinity.
In other words, as one increases the number of predicates, one loses the ability to do induction, since PW is no good for induction. The proof (which is non-trivial, but not insanely hard) is left to the reader.
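Though the proof is left to the reader, a brute-force computation illustrates the trend for the raven example above: with k = 2, 3, 4, 5 predicates (Raven, Black, plus k-2 irrelevant ones), PC(Black(a3)|E) falls toward the Wittgensteinian 1/2.

```python
from itertools import product, permutations
from fractions import Fraction
from math import comb

def carnap_weight(s, num_types):
    n = len(s)
    return Fraction(1, len(set(permutations(s))) * comb(num_types + n - 1, n))

def pc_black_a3_given_E(k):
    """PC(Black(a3)|E) with k predicates: Raven (bit 0), Black (bit 1),
    and k-2 irrelevant predicates (the higher bits)."""
    RAVEN, BLACK = 1, 2
    E = [s for s in product(range(2**k), repeat=3)
         if all(x & RAVEN for x in s) and (s[0] & BLACK) and (s[1] & BLACK)]
    num = sum(carnap_weight(s, 2**k) for s in E if s[2] & BLACK)
    den = sum(carnap_weight(s, 2**k) for s in E)
    return num / den

probs = [pc_black_a3_given_E(k) for k in range(2, 6)]
print(probs)  # [3/4, 2/3, 3/5, 5/9], decreasing toward 1/2
```

In this instance the values fit the closed form (F+2)/(2F+2) with F=2^(k-2), which indeed tends to 1/2 as k tends to infinity.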
Problem 2: Let d be a sentence of L(Q,N) saying that indiscernibles are identical. For instance, let dij be the disjunction ~(Q1(ai) iff Q1(aj)) or ... or ~(Qk(ai) iff Qk(aj)), and let d be the conjunction of the dij for all distinct i and j.
Theorem 2: PC(u|d)=PW(u|d).
Thus, when we condition on the identity of indiscernibles, Carnap measure collapses to Wittgenstein measure. But Wittgenstein measure is worthless for induction. And often the identity of indiscernibles holds. For instance, suppose we have a1,a2,a3 as our individuals, and our evidence is this: a1,a2,a3 are each a raven, a1 and a2 are black. So far so good, we can do induction and we get some confirmation of a3 being black. But suppose we also learn that identity of indiscernibles holds for these three ravens. Then we lose the confirmation! And we might well learn this. For instance, we might learn that exactly a1 and a3 are male, and exactly a1 and a2 each have an even number of feathers, and that means that identity of indiscernibles holds.
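The raven example can be checked directly with the extra predicates Male and EvenFeathers (bitmask encoding mine): once the evidence makes the three ravens pairwise discernible, PC and PW agree and the confirmation vanishes.

```python
from itertools import product, permutations
from fractions import Fraction
from math import comb

def carnap_weight(s, num_types):
    n = len(s)
    return Fraction(1, len(set(permutations(s))) * comb(num_types + n - 1, n))

# k=4 predicates as bits: Raven=1, Black=2, Male=4, EvenFeathers=8.
RAVEN, BLACK, MALE, EVEN = 1, 2, 4, 8
states = list(product(range(16), repeat=3))

def satisfies(s):
    return (all(x & RAVEN for x in s)              # a1, a2, a3 are ravens
            and (s[0] & BLACK) and (s[1] & BLACK)  # a1, a2 are black
            and [bool(x & MALE) for x in s] == [True, False, True]   # exactly a1, a3 male
            and [bool(x & EVEN) for x in s] == [True, True, False])  # exactly a1, a2 even-feathered

E = [s for s in states if satisfies(s)]
H_and_E = [s for s in E if s[2] & BLACK]

pc_H = sum(carnap_weight(s, 16) for s in H_and_E) / sum(carnap_weight(s, 16) for s in E)
pw_H = Fraction(len(H_and_E), len(E))
print(pc_H, pw_H)  # 1/2 1/2: the confirmation is gone
```

Whether or not a3 is black, the three names get distinct predicate-patterns, so both remaining states sit in equivalence classes of the same size and PC weighs them equally.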
Moreover, I think most of us have a background belief that our world has such richness of properties that, at least as a contingent matter of fact, the identity of indiscernibles holds for macroscopic objects. If so, then Carnap measure makes induction impossible for macroscopic objects.
Sketch of proof of Theorem 2: Let D be the set of states at which identity of indiscernibles holds. Thus, D is the set of states s with the property that if a and b are distinct names, then there is a predicate R such that s(R,a) differs from s(R,b). Observe that if s is any state in D, then |[s]|=n!, where n is the number of names. For, given the identity of indiscernibles, distinct permutations of the names induce distinct states, and there are n! permutations. Therefore, PC({s})=1/(n!|S*|). Hence, PC({s}) has the same value for every s in D, and so PC({s}|D)=1/|D|. But, likewise, PW({s}|D)=1/|D|. The Theorem follows easily from this.
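The key step of the sketch, that PC is uniform on D, is easy to verify by brute force for n=3, k=2 (again on my bitmask encoding):

```python
from itertools import product, permutations
from fractions import Fraction
from math import comb

def carnap_weight(s, num_types):
    n = len(s)
    return Fraction(1, len(set(permutations(s))) * comb(num_types + n - 1, n))

# n=3 names, k=2 predicates; D = states where all names get distinct patterns.
states = list(product(range(4), repeat=3))
D = [s for s in states if len(set(s)) == 3]

w = [carnap_weight(s, 4) for s in D]
assert len(set(w)) == 1                    # PC is uniform on D...
pc_given_D = w[0] / sum(w)
assert pc_given_D == Fraction(1, len(D))   # ...so PC({s}|D) = 1/|D| = PW({s}|D)
print(len(D), pc_given_D)  # 24 1/24
```

Every state in D has an orbit of size 3! = 6, so all 24 states in D get weight 1/120, and conditioning on D yields the uniform measure, just as PW does.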
Remark: Theorem 2 gives an intuitive reason to believe Theorem 1. As one increases the number of predicates while keeping fixed the number of names, a greater and greater share of the state space satisfies the identity of indiscernibles.
2 comments:
Theorem 3: Let E_1 be any evidence solely about the particulars r_1,...,r_n. Let H be a hypothesis solely about r_{n+1}. Let E_2 be the evidence that P(r_1)&...&P(r_n)&~P(r_{n+1}), where P is some predicate. Then, according to PC, E_1 and H are conditionally independent given E_2.
Remark: This means that if you're trying to do simple induction, and you also know that there is any property that the particulars involved in the inductive data have and which the particular you are trying to learn about does not have, the inductive data tells you nothing about the particular. But this is always going to be the case--the unobserved particular will be differently located, less accessible, whatever. Minimally, the unobserved particular will be unobserved!
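One instance of the Theorem is straightforward to confirm by brute force (my encoding: two predicates P and Q as bits, E1 = Q(r1)&Q(r2), H = Q(r3)):

```python
from itertools import product, permutations
from fractions import Fraction
from math import comb

def carnap_weight(s, num_types):
    n = len(s)
    return Fraction(1, len(set(permutations(s))) * comb(num_types + n - 1, n))

def pc(event, num_types):
    return sum(carnap_weight(s, num_types) for s in event)

# Two predicates as bits: P=1, Q=2; three particulars r1, r2, r3.
P, Q = 1, 2
states = list(product(range(4), repeat=3))

E2 = [s for s in states if (s[0] & P) and (s[1] & P) and not (s[2] & P)]
in_E1 = lambda s: bool((s[0] & Q) and (s[1] & Q))  # E1: Q(r1) & Q(r2)
in_H = lambda s: bool(s[2] & Q)                    # H: Q(r3)

p_E1 = pc([s for s in E2 if in_E1(s)], 4) / pc(E2, 4)
p_H = pc([s for s in E2 if in_H(s)], 4) / pc(E2, 4)
p_both = pc([s for s in E2 if in_E1(s) and in_H(s)], 4) / pc(E2, 4)
print(p_both == p_E1 * p_H)  # True: E1 and H are independent given E2
```

In this run PC(H|E2) = 1/2, and conditioning further on E1 leaves it at 1/2: the inductive data about r1 and r2 tells us nothing about r3 once we know r3 lacks P.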
Looks like Tooley (and I) should have both been thinking about newer Carnapian systems. See the discussion on prosblogion.