Wednesday, September 29, 2010

Probabilities of propositions and beliefs

Start with this thesis:

  1. A random belief of a random person is at least as likely true as not.
To deny (1) would be overly pessimistic; indeed, (1) seems pretty innocuous. But I shall argue that, given two additional theses, (1) implies a substantive Principle of Credulity, namely that the mere fact that someone believes p is significant evidence for p.

Now observe this:

  2. A random atomic proposition is significantly more likely false than true.
This is somewhat surprising and counterintuitive, so I shall argue for it. First, consider unary atomic propositions: the attribution of a property to an object. Now, it is plausible that most of the properties we have terms for are at least somewhat natural. And natural properties tend to have a number of relevant alternatives that are in some intuitive taxonomic sense "on the same level". For instance, being a horse has as relevant alternatives being an F, where F ranges over all biological species, plus a number of alternatives whose level is harder to gauge, like being an electron or being a number. Since there are lots of species, and we have no specific information about how many critters there are in each, it is reasonable to think that the probability that Sam is a horse, with no specific information about Sam or horses, is quite low. Or take the way that many of the basic properties are determinates of determinables for which there is an infinite number of options: e.g., having mass exactly 17.3 kg. (I am grateful for discussions with Trent Dougherty on this point.) The low probability point is less obvious for relations. But, first, one might think that there are a lot more natural properties than relations, so the case of properties swamps that of relations. Second, it does seem prima facie plausible that typical natural n-ary relations relate only a small subset of the n-tuples. E.g., relatively few events are related by causation. The closest to an exception is earlier than, which we would expect to relate about half of the pairs of events. But actually it relates less than half: for every pair (E,F) related by earlier than there is a pair (F,E) related by later than, but there are also pairs related by neither earlier than nor later than, namely those that are simultaneous (or spacelike separated).

I now claim:

  3. A random proposition is significantly more likely to be false than true.
My plausibility argument for (3) proceeds by considering the special case of the propositions that are expressed by sentences of propositional logic with natural predicates. I suspect that the claim extends to quantified sentences and even modalized ones, but right now I can only prove it for propositional logic. For the proof, we need a reasonable account of a random proposition. I shall do this by generating random grammatically correct sentences of propositional logic.

The method is this. I shall suppose that the basic connectives are "and", "or" and "not", and that we have a stock of basic predicates and names for all objects, one name per object. We first randomly choose an item from the set of basic connectives and basic predicates. If the item is an n-ary predicate P, we randomly choose a sequence (n₁,...,nₙ) of n names, and write down the sentence P(n₁,...,nₙ). If the item is a unary connective (i.e., "not"), we write down the connective followed by a random sentence (we recurse here). If the item is a binary connective, we write down a random sentence (recursing) in parentheses, followed by the connective, followed by another random sentence (recursing again) in parentheses. The recursion is not logically guaranteed to finish, but we can conditionalize on the recursion finishing (plus I think it's going to finish with probability one if there are enough predicates).
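
To make this concrete, here is a quick sketch in Python; the code and the particular values N = 20 and p₀ = 0.3 are purely illustrative and not part of the argument. It generates a sentence by the recursion just described and reports whether it comes out true, treating each atomic occurrence as independently true with probability p₀ (the same independence assumption I rely on in the calculation below), and it also computes the probability that the recursion terminates, viewing the generation as a branching process in which each step spawns zero, one or two further sub-sentences.

```python
import random

def random_sentence_truth(num_predicates, p0):
    """Generate a random sentence by the recursive procedure above and
    report whether it comes out true.  Each atomic occurrence is treated
    as independently true with probability p0, the same independence
    assumption used in the probability calculation below."""
    # Uniform choice among the three connectives and the N predicates.
    choice = random.randrange(3 + num_predicates)
    if choice >= 3:      # an atomic sentence P(n1,...,nn)
        return random.random() < p0
    if choice == 2:      # "not": true iff its operand is false
        return not random_sentence_truth(num_predicates, p0)
    left = random_sentence_truth(num_predicates, p0)
    right = random_sentence_truth(num_predicates, p0)
    return (left and right) if choice == 0 else (left or right)  # "and" / "or"

def termination_probability(num_predicates):
    """Probability that the recursion ever finishes.  Each step spawns 0, 1
    or 2 further sub-sentences with probabilities N/(3+N), 1/(3+N) and
    2/(3+N), and the smallest fixed point of the offspring generating
    function of this branching process works out to min(1, N/2)."""
    return min(1.0, num_predicates / 2)

N, p0, trials = 20, 0.3, 100_000          # illustrative values only
hits = sum(random_sentence_truth(N, p0) for _ in range(trials))
print("Monte Carlo estimate of p:", hits / trials)
print("Termination probability:", termination_probability(N))
```

The same branching-process calculation shows that with at least two basic predicates the recursion finishes with probability one, while with only one predicate it finishes only half the time.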

Now, let p₀ be the probability that a random atomic sentence is true. Let N be the number of predicates. Let p be the probability that a sentence generated by the above procedure is true. Then:

  4. p = p²/(3+N) + (1−(1−p)²)/(3+N) + (1−p)/(3+N) + Np₀/(3+N).
The first term comes from the fact that we have a 1/(3+N) chance of initially generating an "and", in which case we have a probability p² of truth since both random conjuncts will have to be true; we have a 1/(3+N) chance of generating "or", and then a probability 1−(1−p)² that at least one disjunct is true; a 1/(3+N) chance of generating "not", in which case truth has probability 1−p since its operand must be false; and then an N/(3+N) probability of generating an atomic sentence, which has probability p₀ of truth. Solving (4) for p we get:
  5. p = (Np₀+1)/(N+2).
It is easy to verify that if p₀ < 1/2, then p < 1/2. Moreover, for large N (recall that N is the number of predicates), and surely N is in fact large, p is going to converge to p₀. We can thus conclude that (3) holds in the special case of propositions expressed by randomly generated sentences of propositional logic, and this provides significant support for (3) in general.
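
For those who want to check the algebra mechanically, here is another small, purely illustrative sketch: it plugs the closed form (5) back into (4), confirms that p stays below 1/2 whenever p₀ < 1/2, and shows p approaching p₀ as N grows.

```python
def rhs(p, N, p0):
    """Right-hand side of (4): the truth probability decomposed
    by the first item chosen in the generation procedure."""
    return (p**2 + (1 - (1 - p)**2) + (1 - p) + N * p0) / (3 + N)

def closed_form(N, p0):
    """The solution (5) of equation (4)."""
    return (N * p0 + 1) / (N + 2)

for N in (2, 10, 100, 10_000):
    for p0 in (0.05, 0.3, 0.49):                    # all below 1/2
        p = closed_form(N, p0)
        assert abs(rhs(p, N, p0) - p) < 1e-12       # the closed form really solves (4)
        assert p < 0.5                              # so p < 1/2 whenever p0 < 1/2
        print(f"N={N:>6}  p0={p0:.2f}  p={p:.4f}")  # and p approaches p0 as N grows
```

With p₀ = 1/100 and N = 1000, for instance, (5) gives p = 11/1002, which is about 0.011.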

Now an interesting result follows from (1)-(3). The seemingly innocuous claim (1) commits us to a pretty substantive Principle of Credulity. For the fact that someone believes a proposition raises its probability from something that, according to (3), was significantly less than 1/2 to at least 1/2. Thus the mere fact that someone believes something is significant evidence for its truth, even if it does not suffice to make the conclusion reasonable to believe.

In fact, I suspect that p₀ is very small, maybe as small as 1/100 or even much smaller. In that case, the probability of a random proposition being true might be very small. And yet we have (1). So belief is significant evidence.

8 comments:

  1. A little bit of fiddling shows that as long as we have at least two basic predicates, the random recursion that I use to generate sentences has probability 1 of converging.

  2. Surely there is a problem in the math?

    I remember a conversation with my logic prof in grad school. My intuitive sense was that there were more ways to be wrong than right. He pointed out that for every p there is a ~p, so there are just as many true propositions as false ones.

    Why doesn't that same fact apply here?

  3. There are infinitely many propositions, so one can group them in various ways. One way will group p with ~p, and so the numbers will be equal. But there are other ways of grouping, too.

    What I do is I randomly generate propositions. This random process favors shorter propositions (just as our belief processes do). So it favors p over ~p. And that leads to the result.

  4. People are naturally credulous, of course, not least because we are naturally inclined to be honest most of the time, and tend to be epistemically adequate. But we don't usually think about it; and is it not precisely when we have cause to think about it that the mere fact that someone holds a belief is insignificant evidence for its truth? (Nevertheless I like your post, which seems to me to be a neat argument for an important epistemological principle:)

  5. This comment has been removed by the author.

  6. Here are a few thoughts.

    1. It looks like we can weaken your (1) quite a bit without damaging your conclusion, provided that the probability in (1) is still higher than the probability in (3). That means the argument is relatively robust.

    2. If someone believes p, your argument says that I now have evidence for p. But suppose the person stops believing p. It looks like I may have lost my evidence for p (or at least the contribution to that evidence from the person's former belief in p). This seems strange.

    To avoid this, maybe (1) shouldn't be only about present beliefs but also about past or future beliefs. But this might introduce too much instability.

    We could also revise (1) to concern what could or would be believed. This is a common move in the epistemology of disagreement, I think: what matters can't just be the actual distribution of opinion; otherwise part of your evidence for theism would vanish if all the theists vanished.

    If we make this change, though, it's not as clear that (1) would be true, even on a weakened version. (1) wouldn't be true in evil demon worlds, for example.

    3. I wonder if there are reference class issues here. You argued that p's being believed raises the subjective probability of p, and that this means that there is now evidence for p. I wonder if the claim about evidence depends on restricting the reference class for the subjective probabilities to something like "positions you might take on Q," where Q is a question you might answer by believing p.

    Here's why this might matter: even if a randomly chosen believed proposition is at least as likely true as false, it doesn't follow that a randomly chosen believed proposition about the ontology of possible worlds, or the simplicity of God, is at least as likely true as false, since the reference classes are different.

  7. The fact that someone ever believed p is evidence for p. I hadn't thought about stopping. We want to distinguish between forgetting and changing one's mind. Forgetting shouldn't affect the evidence. But if she changes her mind, that cancels out the initial evidence if she moves from belief to suspension of judgment, and makes for evidence for the negation if she moves from belief to disbelief.

  8. There may be other problems with (1) too, e.g. do you mean by 'belief' to include beliefs that we never express or even explicitly hold? Suppose you walk into a busy room, see someone and start up a conversation. Do you believe that the roof is there? You probably didn't notice that it was, but let us say that you did believe that it was there.

    Then such beliefs form an infinite set, and the notion of randomness becomes messy. E.g., for each true belief there are lots of false ones. Furthermore, even for beliefs expressible in a few words, that may well be true. We are of course careful in what we say, and so I think that the plausibility of (1) may rely on an implicit confusion of the beliefs we are likely to express with the beliefs that we actually hold.

    For just one example, I see a green tree and so I believe that it's green. But at the same time I believe that it's that colour sensation I have, that I call 'green', that is out there, the colour of the tree. And the latter is false, I think (I've been thinking about this recently, in my Putting the green back in the greenery:) Now, I also believe that that's a false belief, but that just makes my total belief-set inconsistent.

    I try to address that problem when I say things, by being careful in my expressions, but the problem of perception is so innate that I can hardly make it go away just by knowing that there is a problem. And of course, if our belief-sets are inconsistent, then there are a lot of false propositions that we would be inclined to agree with (more than we would think). So in short, I suspect that denying (1) is not necessarily overly pessimistic.
