Tuesday, October 18, 2011


I'm going to offer three arguments for a conclusion I found quite counterintuitive when I first reached it, and which I still find counterintuitive, but I cannot find a way out of the arguments for it.

Argument 1. There is a game being played in my sight. The player chooses some value (e.g., a number, a pair of numbers, etc.) and gets a payoff that is a function of the value she chose and some facts that I have no information whatsoever about. Moreover, the payoff function is the same for each player, and the facts don't change between players. I see Jones playing and choosing some value v. I don't get to see what payoff Jones gets. What value should I choose? I think there is a very good case that I should choose v, just as Jones did. After all, I know that I have no information about the unknown facts, but for all I know, Jones knows something more about them than I do (if that's not true, then I do know something about the unknown facts, namely that Jones doesn't know anything about them).

Now, suppose that the game is the game of assigning credences (whether these be point values, intervals, fuzzy intervals, etc.) to a proposition p, and that the payoff function is the right epistemic utility function measuring how close one's credence is to the actual truth value of p. If I should maximize epistemic utility, I get the conclusion that if I know nothing about p other than that you assign to it a credence r, then I should assign to it credence r. Note: I will assume throughout this post that the credences we are talking about are neither 0 nor 1—there are some exceptional edge effects in the case of those extreme credences, such as that Bayesian updating won't shift us out of them (we might have special worries about irreversible decisions, which may trump the above argument).
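The point can be checked numerically. Here is a minimal sketch, assuming the Brier score as the epistemic utility function (the post leaves the measure unspecified) and an arbitrary illustrative value of 0.7 for Jones's credence:

```python
import numpy as np

def brier_utility(credence, truth):
    # Negative squared distance from the truth value (0 or 1):
    # higher is better, so this serves as an epistemic utility.
    return -(credence - truth) ** 2

def expected_utility(credence, prob_true):
    # Expected epistemic utility of holding `credence` when my best
    # estimate of the probability that p is true is `prob_true`.
    return (prob_true * brier_utility(credence, 1.0)
            + (1 - prob_true) * brier_utility(credence, 0.0))

r = 0.7  # Jones's credence (hypothetical value for illustration)
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmax([expected_utility(c, r) for c in grid])]
# `best` equals r: given that my best estimate of p's probability is
# Jones's credence, matching it maximizes expected epistemic utility.
```

The same holds for any proper scoring rule, since propriety just is the property that expected score is maximized by reporting one's own probability.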

I find this result quite counterintuitive. My own intuition is that when I know nothing about p other than the credence you assign to p, I should assign to p a downgrade of your credence—I should shift your credence closer to 1/2. But that contradicts the conclusion I draw from the above argument.

I can get to the more intuitive result if I have reason to think Jones is less risk averse than I am. In the case of many reasonable epistemic utility measures, risk averseness will push one towards 1/2. So perhaps my intuition that you should downgrade the other's credence, that you should not epistemically trust the other as you trust yourself, comes from an intuition that I am more epistemically risk averse than others. But, really, I have little reason to think that I am more epistemically risk averse than others (though I do have reason to think that I am more non-epistemically risk averse than others).
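The pull toward 1/2 under risk aversion can be illustrated numerically. In this sketch (my own illustration, not anything from the post) a risk-averse agent minimizes the expectation of an exponentially weighted Brier penalty; the larger the aversion parameter, the closer the optimal credence moves to 1/2:

```python
import numpy as np

def risk_averse_loss(credence, prob_true, aversion):
    # Expected value of exp(aversion * squared error): a convex
    # transform of the Brier penalty that weights bad outcomes more
    # heavily as `aversion` grows (aversion near 0 ~ risk neutrality).
    return (prob_true * np.exp(aversion * (1 - credence) ** 2)
            + (1 - prob_true) * np.exp(aversion * credence ** 2))

def optimal_credence(prob_true, aversion):
    grid = np.linspace(0.01, 0.99, 99)
    return grid[np.argmin(risk_averse_loss(grid, prob_true, aversion))]

r = 0.7  # the other agent's credence (illustrative)
nearly_neutral = optimal_credence(r, 0.01)  # roughly r itself
very_averse = optimal_credence(r, 10.0)     # pulled toward 1/2
```

So if I believed myself more epistemically risk averse than Jones, I would indeed discount her credence toward 1/2; the trouble, as noted above, is that I have little reason to believe that.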

Argument 2: Suppose I have no information about some quantity Q (say, the number of hairs you've got, the gravitational constant, etc.) other than that Jones' best estimate for Q is r. What should my best estimate for Q be? Surely r. But now suppose I have no information about a proposition p, except that Jones' best estimate for how well p is supported by her evidence is r. Then my best estimate for how well p is supported by Jones' evidence is r. And since I have no evidence to add to the pot, and since my credence should match evidential support (barring some additional moral or pragmatic considerations, which I don't have reason to think apply, since I have no additional information about p), I should have credence r. (Again, it doesn't matter if credences are points or intervals vel caetera.)

Let me make a part of my thinking more explicit. If I have no further information on Q, which Jones estimates to be r, it is equally likely that Jones is under-estimating Q as that Jones is over-estimating Q, so even if I don't trust Jones very much, unless I have specific information that Jones is likely to over-estimate or under-estimate, I should take r as my best estimate of Q. If Q is the degree to which p is supported by Jones' evidence, then the thought is that Jones might over-estimate this (epistemic incautiousness) or Jones might under-estimate it (undue epistemic caution). Here the assumption that we're not working with extreme credences comes in, since, say, if Jones assigns 1, she can't be under-estimating.

Argument 3: This is the argument that got me started on this line of thought. Imagine two scenarios.
Scenario 1: I have partial amnesia—I forget all information relevant to the proposition p, including information as to how reliable I am in judgments of the p sort. And I don't gain any new evidence. But I do find a notebook where I wrote that I assign credence r to p. I am certain the notebook is accurate as to what credence I assigned. What credence should I assign?
Scenario 2: Same as Scenario 1, except that the notebook lists Jones' credence r for p, not my credence. And I have no information on Jones' reliability, etc.

In Scenario 1, I should assign credence r to p. After all, I shouldn't downgrade (I assume upgrading is out of the question) credences that are stored in my memory, or else all my credences will have an implausible downward slide absent new evidence, and it shouldn't matter whether the credence is stored in memory or on paper.

But I should do in Scenario 2 exactly what I would do in Scenario 1. After all, barring information about reliability, why take my past self to be any more reliable than Jones? So, in Scenario 2, I should assign credence r, too. But the partial amnesia is doing no work in Scenario 2 other than ensuring I have no other information about p. So, given no other information about p, I should assign the same credence as Jones.

Final off-the-cuff remark: I am inclined to take this as a way of loving one's neighbor as oneself.[note 1]


Alexander R Pruss said...

It's very plausible to suppose that it matters a lot just how reliable one thinks the person to be. But I think it doesn't in this case. (It would matter in a case where we were comparing the testimony of two different individuals.)

Here's a way to see that it doesn't matter, in the setting of Argument 3. It doesn't matter in Scenario 1 how reliable I was with regard to p: I should accept my own credences barring further evidence. (Note: If I find out that I am less reliable than I thought myself at the time I formed the credence, then perhaps I should downgrade. But that's not a question of how reliable I am, but of what disparity exists between my actual reliability and the reliability I thought myself to have. (Here, general empirical data may affect things--we have reason to think the incompetent overestimate their competence.))

The same is a fortiori true in Scenario 2: it doesn't matter how reliable I was with regard to p in Scenario 2. :-) But, clearly, if in Scenario 1 I should take my own old credence back however unreliable I was, then in Scenario 2 I should take Jones' credence as long as Jones was at least as reliable as I was. But since how reliable I was is irrelevant in Scenario 2, my credence in Scenario 2 cannot depend on whether Jones was at least as reliable as I was.

A second line of thought about Scenario 2. Plot the credence we should take after getting the data from Jones as a function of Jones' reliability. Once Jones' reliability is the same as or greater than mine, I should take Jones' credence at face value (in particular, I shouldn't inflate it--a reliable agent no more underestimates than overestimates credences). But it would be an odd thing that we downgrade Jones' credence when Jones is less reliable than we are but don't upgrade it when he's more reliable.

Alexander R Pruss said...

I wonder if some of our puzzlement over the claim of this post isn't due to a lopsided view of unreliability on which we worry about agents having credences that are too big (in the case of credences greater than 1/2, of course) but don't worry about agents having credences that are too small. But both kinds of mistakes go into unreliability, and underestimating some of one's credences leads to overestimating others (if I underestimate the credences on the defeaters to p, I overestimate p).

Alexander R Pruss said...

How about this thought? Incompetent people think themselves more competent than competent people do. It is plausible that this leads to a mechanism on which incompetent people unduly inflate their credences away from 1/2. If we then add the pessimistic premise that more people are incompetent than competent in respect of p, we conclude that we should downgrade the credences of randomly chosen others (i.e., move them towards 1/2). But this relies on having additional data beyond what I assumed myself to have in my scenarios: namely data that p is something that more people are incompetent with regard to than are competent with regard to, and that the kind of incompetence at issue here is one that inflates credences.

Heath White said...


With some caveats, I find the conclusion that I should accept Jones’ credences very intuitive. What it amounts to is some trust in Jones, at least as much trust as you have in yourself. (Alternatively put: a form of efficient markets hypothesis in information.)

I would add this: ordinarily there is an “abstention” option when playing games with payoffs, or making claims about p—no one has to do either. Thus if we see individuals engaging in these behaviors, this is prima facie evidence that they know what they’re doing, so prima facie evidence that they are trustworthy.

As for your instincts: the fact, I believe, is that philosophers are somewhat better than regular folks at assigning credences. We are, in particular, highly critical of our own and others’ beliefs and thus epistemically conservative relative to the general population. I suspect that accounts for your tendency to downgrade Jones’ credences. Note also that reasoning abilities, like language abilities or motor skills, are not lost in amnesia, so you might reasonably maintain your “downgrade” practice even if you lose some memory.

James Bejon said...

So, I'm not sure I've thought this out as carefully as I might. But then I'm not studying philosophy so hopefully that's allowed.

Anyway, the following seems to be another way of coming to the conclusion arrived at in Argument 2:

Consider the case where I have no information about some quantity Q (say, the number of hairs I have got, the gravitational constant, etc.) other than that Jones' best estimate for Q is r. What should my best estimate for Q be? Well, suppose it's greater than r. Then suppose a friend of mine knows about me what I know about Jones (and knows nothing about Jones). What should my friend's estimate for Q be? Higher still, right? So, now suppose I have an infinite number of friends in similar positions. In the end, we arrive at a situation where someone can be absolutely certain about the value of Q (or in the case where the initial estimate is lower than r, we arrive at a situation where someone can be absolutely certain that Jones is wrong)--which doesn't seem right. So, doesn't it follow that our estimate should be r in this way too?
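The divergence in this argument can be made vivid with a toy simulation (my own sketch; the credence framing and the 5% step size are arbitrary choices for illustration). If each person in the chain upgrades the estimate they receive even slightly before passing it on, the chain manufactures near-certainty out of no new evidence; only taking the estimate at face value is a stable policy:

```python
def upgrade(credence, step=0.05):
    # Hypothetical policy: on receiving a credence, move 5% of the
    # remaining distance toward certainty before passing it on.
    return credence + step * (1.0 - credence)

c = 0.6  # Jones's original estimate (illustrative)
for _ in range(200):  # pass the estimate down a chain of 200 friends
    c = upgrade(c)
# c is now within about 1e-5 of 1: the chain has approached certainty
# with no evidence added anywhere. The face-value policy (step=0) is
# the only one that leaves the estimate fixed under iteration.
```

A symmetric downgrade policy (moving toward 0, or toward 1/2) diverges in the same way to its own fixed point, which is the "absolutely certain that Jones is wrong" horn of the argument.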

Alexander R Pruss said...


In Argument 1, I say nothing about there being only two choices. The story is compatible with there being lots of choices. Maybe Jones puts in a number between 0 and 1. You can put in Jones' number. You can put in 1/2. You can put in some number between Jones' number and 1/2. On one variant of the story, you even know how the payoff function depends on the unknown facts and the number chosen, just as you might in the epistemic case (the epistemic payoff function would be given by the correct scoring rule).

As to reliability, I am now thinking that we want to distinguish between good judgment and reliability. Say that an agent is reliable to the extent that she tends to not have high credences for falsehoods or low credences for truths, or something like that. (Come up with some sort of a measure that captures that.) Say that an agent exhibits good judgment to the extent that her credence in p matches the degree to which her evidence (understood in some internalist way, I think) supports p.

So, what I said in previous comments is right if "reliability" is replaced with "good judgment" (I think that what I said in the post is true on either reading). But I agree that reliability in the new sense, which is closer to the usual sense in epistemology, really does matter in these cases, because unreliability in this sense is asymmetrical: the unreliable in respect of p are more likely to have a credence for p that's too far away from 1/2 rather than too close to 1/2. So to the extent that I have reason to think Jones unreliable, I should downgrade her credence.

But in the examples given, I have no information about Jones' reliability. She forms her belief on some (possibly empty) body of evidence. I don't know what that body of evidence is. If she had perfect judgment, I plainly should adopt her credence, since the evidence available to her is the total evidence available to both of us. But I don't know how good her judgment is. Still, I have no more reason to think her judgment errs through excessive caution than through excessive risk-taking, and so it's still reasonable to adopt her credence.

Jonathan Livengood said...

I don't see why you find Argument 1 persuasive. It seems to me that for all you know, Jones knows nothing about the facts. So, you have no more reason to trust Jones than not to trust Jones. In fact, if you did have reason to trust Jones, then conditioning on Jones' credence for p would tell you something about the probability of p. So, if Jones' credence is evidence for you at all, then it is not the case that you know nothing about the facts.

I think you can see the same thing happening in the first part of Argument 2. Unless I know something about the connection between Jones' reporting and the truth about Q, I have no more reason to accept Jones' estimate than I do to pick one out of a hat!

Again, even with respect to my former self, I should want to know how reliable (in your good judgment sense) I was with respect to propositions like p. Maybe it all comes down to this: I find it implausible that I should just accept my old credences in the amnesia case.

But with all that said, I feel like I must be missing something ... especially with respect to the amnesia case, I would really like to hear more.