Suppose there are n (physically, including neurally) healthy mature humans on earth. Let Q1, ..., Qn be their non-mental qualitative profiles: complete descriptions of their non-mental life in qualitative terms. Let Hi be the hypothesis that everything with profile Qi is conscious. Now, consider the hypotheses:
M: All healthy mature humans have a mental life.
N: Exactly one healthy mature human has a mental life.
Z: No healthy mature human has a mental life.
Assume our background information contains the fact that there are at least two healthy mature humans. Given that background, the hypotheses are mutually exclusive. Now add that there are n healthy mature humans on earth, where n is in the billions, and that they have profiles Q1, ..., Qn, which are all different. What's a reasonable thing to think now? Well, N is no more likely than M or Z. Conservatively, let's just suppose they are all equally likely, and hence all have probability 1/3. Furthermore, if N is true, exactly one Hi is true. Moreover, all the Hi are just about on par given N, so P(Hi|N)≈1/n for all i, and hence P(Hi&N) is at most about 1/(3n). On the other hand, P(Hi|Z)=0 and P(Hi|M)=1.
Now suppose I learn that Qm is my profile. Then I learn that Hm is true. That rules out the all-zombie hypothesis Z, and most of the Hi&N conjunctions. What remains compatible with my data are two mutually exclusive hypotheses: Hm&N and M. It's easy to check (e.g., with Bayes' theorem) that my posterior probability for Hm&N will then be at most approximately 1/(n+1). Thus, the probability that there is another mind is greater than 0.999999999.
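Here is a minimal numeric sketch of that calculation. It assumes a hypothetical n of 8 billion and the equal 1/3 priors above, and it sets aside hypotheses strictly between M and N (see the remarks below), so the figure it gives for Hm&N is an upper bound.

```python
from fractions import Fraction

# Hypothetical population size; the argument only needs n to be in the billions.
n = 8_000_000_000

# Priors assumed above: P(M) = P(N) = P(Z) = 1/3,
# and P(Hm & N) = P(Hm | N) * P(N) = (1/n) * (1/3).
p_M = Fraction(1, 3)
p_HmN = Fraction(1, n) * Fraction(1, 3)

# Learning that Qm is my profile rules out Z and the other Hi & N conjunctions.
# Renormalizing over the two surviving hypotheses gives (an upper bound on)
# the posterior of Hm & N.
posterior_HmN = p_HmN / (p_HmN + p_M)

print(posterior_HmN == Fraction(1, n + 1))  # True: exactly 1/(n + 1)
print(float(1 - posterior_HmN))             # 0.999999999875 > 0.999999999
```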
Whether we can argue for M in this way depends on how the priors for M compare to the priors of hypotheses in between M and N, such as the hypothesis that all but seven healthy mature humans have consciousness.
My original post erroneously assumed that M, N and Z are exhaustive. It's been amended and the conclusion has been weakened.
Thanks for the wonderful post. I was a bit confused, though, about the posterior probability being 1/(n+1). This is probably because I have not studied Bayes' theorem in depth. I tried to run an analysis on it but failed. You stated in the post that the hypothesis for which we wanted the posterior probability was Hm&N. Presumably, this is posterior to our finding out the relevant evidence in question, namely Hm. But then don't we get Bayes' calculation as:
P(Hm&N given Hm) = [P(Hm&N) x P(Hm given Hm&N)] / P(Hm)
As we found earlier in the post, for a given Hi, the P(Hi&N) is just P(Hi given N) x P(N), which is (1/n) x (1/3) = 1/(3n). So, it seems to me that, for a given Hm, the P(Hm&N) is 1/(3n). But the P(Hm given Hm&N) is surely 1, as the probability of A on the supposition of the truth of a conjunction of A and B is surely 1. And since P(Hm) is part of our background knowledge (we are supposing we just learned it, after all), surely P(Hm) is 1. Plugging these three values into the equation, don't we get:
P(Hm&N given Hm) = [(1/(3n)) x 1] / 1 = 1/(3n)
?
I must have gone wrong somewhere, since your Bayesian skills far surpass mine. But I am hoping you can help me see why my calculation is wrong and why I ought to get 1/(n+1). Thanks!!!
You seem to be conditioning on Hm instead of on Qm.
I didn't actually use Bayes' Theorem, but thought about ratios (inspired by how John Hawthorne likes to think about conditionalization).
The trick is to use this fact: If A and B both entail E, then the ratio of P(A) to P(B) is the same as the ratio of P(A|E) to P(B|E). (Proof: A and A&E are equivalent if A entails E. So, P(A|E)=P(A&E)/P(E)=P(A)/P(E). By the same token P(B|E)=P(B)/P(E). So, P(A|E)/P(B|E)=P(A)/P(B).)
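As a quick toy check of that fact, here is a sketch with a made-up four-point distribution, chosen purely for illustration:

```python
from fractions import Fraction

# A made-up probability space, just to illustrate the ratio fact.
weights = {'w1': Fraction(1, 2), 'w2': Fraction(1, 4),
           'w3': Fraction(1, 8), 'w4': Fraction(1, 8)}

def prob(event):
    return sum(weights[w] for w in event)

def cond(event, given):
    return prob(event & given) / prob(given)

E = {'w1', 'w2', 'w3'}   # A and B are subsets of E, so each entails E
A = {'w1'}
B = {'w2', 'w3'}

# The ratio of the posteriors given E equals the ratio of the priors.
assert cond(A, E) / cond(B, E) == prob(A) / prob(B)
print(prob(A) / prob(B), cond(A, E) / cond(B, E))  # both 4/3
```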
OK, now back to the post. Given the earlier assumed background, Hm&N entails Qm and M entails Qm. So the ratio of the posterior of Hm&N to the posterior of M, given Qm, is the same as the ratio of their priors. The priors are 1/(3n) and 1/3, respectively. So, the ratio of the posterior of Hm&N to the posterior of M is 1:n. But Hm&N and M are mutually exclusive (given the background before we learned Qm). Hence, their posterior probabilities must add up to some value c that is no greater than 1, and as their ratio is 1:n, they have to equal c/(n+1) and nc/(n+1), respectively. In particular, the posterior of Hm&N must be less than or equal to 1/(n+1).
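And a small sketch of that last step, with the total posterior mass c left as a parameter and a hypothetical population size n: since the two posteriors stand in the ratio 1:n and sum to c ≤ 1, they come out as c/(n+1) and nc/(n+1).

```python
from fractions import Fraction

def split_posterior(n, c):
    """Split total posterior mass c between Hm&N and M in the ratio 1 : n."""
    return c * Fraction(1, n + 1), c * Fraction(n, n + 1)

n = 8_000_000_000                         # hypothetical population size
for c in (Fraction(1), Fraction(9, 10)):  # c = 1 is the extreme case
    post_HmN, post_M = split_posterior(n, c)
    assert post_M == n * post_HmN           # ratio is indeed 1 : n
    assert post_HmN <= Fraction(1, n + 1)   # so Hm&N gets at most 1/(n + 1)
```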
I understand it now! Thank you so much. It is a beautiful argument. Do you have any suggestions for reasonably inferring that every Hi is true (i.e., that all normal, healthy, functioning adult humans have a qualitative, subjective mental life) from the fact that we are nearly certain that at least one other such human has a mind (a mind apart from our own, that is)?
Of course, it would seem to be arbitrary and inexplicable if there were just, say, two minds in existence associated with functioning human profiles, but billions of other non-minded functioning humans with similar profiles. We could thus run an explicability argument, perhaps. And this arbitrariness/explicability worry is magnified if we accept a principle of relevant differences according to which, roughly, if x and y differ only in respects that are irrelevant to possessing some further property P (or having some further fact F true of them), then it is inexplicable (or else: metaphysically impossible) for one of x or y to have P (or F) while the other lacks P (or F). From this, perhaps we could argue that the only differences between the Qi profiles seem quite clearly to be irrelevant with respect to having Hi be true of them.
What do you think?
I have no idea what it would mean for a mature, healthy human to not have a mental life.... I mean, are we assuming that they are mature and healthy but in a coma? Or are we entertaining the idea that a human being could report what was in front of them in normal light, but not when in darkness, that they would recoil when stabbed but not when embraced, that they would sing along to the song that is playing... but nevertheless they have no "mental life"?? What on Earth is a mental life, then?
I think the math makes sense; I just don't understand the fundamental problem it's addressing.
Michael:
Dualism could be true and they could be soulless zombies.
Or it could be that the mental supervenes on the physical, but that among the physical preconditions of mental function there is the condition that the individual have a certain specific shade of eye color which in fact only I have.
Pruss:
I think that just demonstrates the incoherence of both of those views. Our criteria for ascription of consciousness are the sorts of behaviors I mentioned, and we would not even have the conceptual scheme to discuss any of this without taking those criteria for granted.
If Dualism permits that a human being could lie down, close her eyes, and become unresponsive until she opens her eyes in the morning (though being less able to report what is around her until a little later, after plenty of blinking and eye-rubbing...), but nevertheless she did not lose consciousness and regain it, then Dualism is meaningless. Likewise for "supervenience" views with similar consequences. I mean, we wouldn't even know what we were talking about as we decided among these views if we didn't agree on a common meaning for terms like "lose consciousness" or "become conscious of..." based on criteria for ascription.