Sunday, August 4, 2019

Belief, testimony and trust

Suppose that to believe a proposition is to have a credence in that proposition above some (perhaps contextual) threshold pb, where pb is bigger than 1/2 (I think it's somewhere around 0.95 to 0.98). Then, by the results of my previous post, because of the very fast decay of the normal distribution, most propositions with credence above the threshold pb have a credence extremely close to pb.
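
Here is a minimal numerical sketch of that concentration claim, on an assumed normal model of credences; the mean of 0.5 and standard deviation of 0.05 are my illustrative choices, not figures from the previous post.

```python
import numpy as np
from scipy.stats import norm

mu, sigma, pb = 0.5, 0.05, 0.95   # illustrative values, not from the post
a = (pb - mu) / sigma             # threshold in standard-deviation units

# Closed form for the conditional mean of a normal above a cutoff:
#   E[credence | credence > pb] = mu + sigma * pdf(a) / sf(a)
cond_mean = mu + sigma * norm.pdf(a) / norm.sf(a)
print(f"E[credence | credence > pb] = {cond_mean:.4f}")  # about 0.9555
```

With these numbers, the typical credence above the 0.95 threshold is only about 0.005 above it, and tightening the spread tightens the concentration further.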

Now suppose I assert precisely when my credence is above the threshold pb. If you trusted my rationality and honesty perfectly and had no further relevant evidence, it would make sense to set your credences to mine when I tell you something. But normally, we don’t tell each other our credences. We just assert. From the fact that I assert, given perfect trust, you could conclude that my credence is probably very slightly above pb. Thus you would set your credence to slightly above pb, and in particular you would believe the proposition I asserted.

But in practice, we don’t trust each other perfectly. Thus, you might think something like this about my assertion:

If Alex was honest and a good measurer of his own credences, his credence was probably a tiny bit above pb, and if I were certain of that, I'd make that my credence. But he might not have been honest, or he might have been self-deceived, in which case his credence could very well be significantly below pb, especially given the fast decay in the distribution of credences, which yields high priors for the credence being significantly below pb.

Since the chance of dishonesty or self-deceit is normally not all that tiny, your overall credence would be below pb. Note that this is the case even for people we take to be decent and careful interlocutors. Thus, in typical circumstances, if we assert at the threshold for belief, even interlocutors who think of us as ordinarily rational and honest shouldn’t believe us.
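
To see the arithmetic, here is a toy mixture calculation; the trust level of 0.9 and the fallback credence of 0.7 are made-up illustrative numbers, not anything argued for above.

```python
pb = 0.95               # belief/assertion threshold

p_reliable = 0.9        # hearer's trust that the assertor is honest and well-calibrated
c_if_reliable = 0.955   # typical assertor credence just above pb (cf. the sketch above)
c_if_not = 0.70         # plausible credence given dishonesty or self-deception

hearer_credence = p_reliable * c_if_reliable + (1 - p_reliable) * c_if_not
print(f"hearer's credence: {hearer_credence:.3f}")  # about 0.93, below pb: no belief
```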

This seems to me to be an unacceptable consequence. It seems to me that if someone we take to be at least ordinarily rational and honest tells us something, we should believe it, absent defeaters. Given the above argument, it seems that the credence threshold for assertion has to be significantly higher than the credence threshold for belief. In particular, it seems, the belief norm of assertion is insufficiently strong.

Intuitively, the knowledge norm of assertion is strong enough (maybe it's too strong). If this is right, then it follows that knowledge has a credence threshold significantly above that for belief. Then, if someone asserts, we will think that their credence is just slightly above the threshold for knowledge, and even if we discount that because of worries that even an ordinarily decent person might not be reporting their credence correctly, we will likely stay above the threshold for belief. The conclusion will be that in ordinary circumstances, if someone asserts something, we will be able to believe it—but not know it.

I am not happy with this. I would like to be able to say that we can go from another’s assertion to our knowledge, in cases of ordinary degrees of trust. I could just be wrong about that. Maybe I am too credulous.

Here is a way of going beyond this. Perhaps the norms of assertion should be seen not as all-or-nothing, but as more complex:

  1. When the credence is at or below pb, we are forbidden to assert.

  2. When the credence is above pb, but close to pb, we have permission to assert, but we also have a strong defeasible reason not to assert, with the strength of that reason increasing to infinity the closer we are to pb.

If someone abides by these norms, they will be unlikely to assert a proposition whose credence is only slightly above pb, because they will have a strong reason not to. Thus, their asserting in accordance with the norms will give us evidence that their credence is significantly above pb. And hence we will be able to believe, given a decent degree of trust.
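
Here is one toy way of modeling the two norms; the 1/(c - pb) form for the countervailing reason is my own illustrative choice, since the argument above only requires that the strength grow without bound near pb.

```python
pb = 0.95

def reason_against_asserting(c: float) -> float:
    """Strength of the defeasible reason not to assert at credence c."""
    if c <= pb:
        return float("inf")    # norm 1: assertion forbidden at or below pb
    return 1.0 / (c - pb)      # norm 2: reason grows without bound near pb

for c in (0.951, 0.96, 0.98, 0.995):
    print(f"credence {c:.3f}: reason strength {reason_against_asserting(c):,.0f}")
```

On such a model, a compliant speaker will typically assert only when well clear of pb, which is what licenses the hearer's belief.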

Note, however, that the second norm will not apply if there is a qualifier like “I think” or “I believe”. In that case, the earlier argument will still work. Thus, we have this interesting consequence: If someone trustworthy merely says that they believe something, that testimony is still insufficient for our belief. But if they assert it outright, that is sufficient for our belief.

This line of thought arose out of conversations I had with Trent Dougherty a number of years ago and my wife more recently. I don’t know if either would endorse my conclusions, though.

10 comments:

  1. Maybe it matters that ordinary contexts of testimony are cooperative endeavors. That is, you want some information and I am trying to give it to you.

    If you could read a printout of "things Heath thinks he believes"--a non-cooperative context--then the conclusions in the post might follow. However if I am trying to help you, I am more likely to signal if my confidence in the information is borderline, and the absence of that signal implicates that my confidence is well above the belief threshold.

  2. Most of the propositions in a logically complete set will be complex combinations (Trump is POTUS and it’s raining and violets are blue and …). It is these combinations that lead to the normal limiting distribution of credences.

    By contrast, the propositions we usually think about and assert are mostly simple. It is not obvious that credences in simple propositions would be expected to follow any particular distribution.

    In any case, I doubt that belief can be identified with credence above a threshold, if only because we rarely assign credences to the required precision.

  3. Ian:

    That's a good point, though the simple propositions will often be equivalent to various complex propositions in a more natural (in the Lewis sense) language. Take "It's raining". What that claims is pretty complex. Start: Liquid H2O is falling from clouds as a result of precipitation within the clouds. But then one has to replace "clouds" with something more natural.

  4. Heath:

    I want it to be the case that, roughly, if we are sure that someone is following the norms of assertion and that this is a typical case, then that should be sufficient to accomplish the information reception goals. So if further cooperative assumptions are needed for information reception, they need in some way to be built into the norms of assertion.

  5. Ian:

    There is a second line of thought: evidence gathering takes time and is expensive. So we would expect a strong drop-off in how much evidence has been gathered for various propositions.

  6. Here's a slightly different move from the one in my post. We could say that not being certain is a defeasible reason not to assert. And the more uncertain one is, the stronger that reason is.

    Here is a general consideration. I tend to think of the "information reception goals" as transmitting the epistemic status of the assertor to the hearer. So, e.g., testimony transmits knowledge, or justified belief. So a typical example of this idea would be that (i) the norm of assertion is knowledge, i.e. you should assert P only if you know P; and (ii) in a normal case, the hearer knows P as a result of hearing the assertion.

    However ... testimony itself can introduce a degree of uncertainty (maybe they don't know but think they do, maybe they are lying), so *degree of credence* is not an epistemic status that is going to get transmitted across testimony. So if you think of the information reception goals as transmitting an epistemic status, and you think of this epistemic status in terms of degrees of credence, then any instance of the model I started with will fail.

    That could show (1) the model is wrong, or (2) the relevant epistemic status is not well-modeled by degrees of credence.

  8. But even if this is right, degree of credence may be a necessary condition for the epistemic status. And if the credence falls in transmission, as it might, then we are apt to have a failure of transmission of epistemic status.

  9. Not strictly about this post, but reading it (via the concept of trust) got me thinking about the absolute prohibition against lying, which we discussed years ago now. I recall you saying that it would be OK to hide the truth, given a good enough reason, but never OK to lie. I see the point of such a rule. Lying is too easy and antisocial. But there is an easy way to, in effect, lie, which tends to undermine that point.

    Words can be given special meanings, for example in subcultures, or in science. So without lying you could use words that have been given special meanings. Basically, you say the words of the lie, but you tell yourself that those words mean something true in a theoretical language that could be made up. You can tell yourself as much detail about that theoretical language as you feel you need to. (You could even have arranged that the language is public by discussing it with other people in advance. All meanings have to start with one person.)

    You simply omit to tell the person you are saying those words to that you are not using the usual meanings. This technicality could be used to get around the absolute prohibition against lying. Consequently one could say to the Jew-hunter at the door "There are no Jews here" even when there are.

    A variation of this is that you think that the question from the Jew-hunter "Are there any Jews here?" already uses a different meaning of "Jews" because of the antisemitic beliefs of the questioner. Just thinking that (whether it is true or not) means that you have been applying this notion of a different language, so that you are not actually lying. And in this case, you would not have needed to know that you were doing this. You would still not have been lying (just so long as you did think that the person asking the question had a bad semantics).

    But, does this get around the classic problem with the absolute prohibition? Or does it undermine it?

  10. The variation at the end is one that I do defend: https://muse.jhu.edu/article/637086/summary . However, if you read that paper, you find that it doesn't undermine anything, but in a way makes the classic prohibition stricter.

    As for your first suggestion, I think there is a publicness to language which prevents uncommunicated stipulations. In stipulating a new meaning to a set of sounds, you need to communicate--or at least attempt to communicate--the stipulation.

    Here is another variant. Suppose that you ask me a question, to which the correct answer is "yes", but I don't want to reveal that answer to you, as moderately serious harms will result from the revelation. So, instead, I take this moment to engage in some voice exercises by uttering the noises: "No, no, NO!" My intention is solely to exercise my vocal cords: I am not intentionally *saying* anything. I foresee that you will mistakenly take my "No" to be an answer, but I do not intend you to do so, and proportionality seems to apply, as overall the harm of your being deceived is small. I think this is still wrong, but I am not quite sure what is wrong with it. I incline to think that the proportionality calculus is wrong. I am still betraying your trust, and that is fairly serious? I don't know what to say about this case, though.
