Saturday, October 8, 2011

Going beyond the evidence out of love for the truth

I want to have correct beliefs. Consider some proposition p that isn't going to affect my actions in a significant way, whether directly or indirectly (say, by epistemic connections to other propositions); for instance, that there is life outside our galaxy. Suppose that my evidence supports p to degree r with 0<r<1. What credence should I assign to p? The evidentialist will say that I should assign r. But that's not the answer from decision theory on the following model of my desires.

As I said, I want to be right about p. If I assign credence 1/2 to p, I get no utility, regardless of whether p is true or not. If p is true and I assign credence 1 to p, then I get utility +1, and if I assign credence 0 to p then I get utility −1. Between these two extremes, I interpolate linearly: if p is true, the utility of credence s is 2s−1. And this gives the right answer for credence 1/2, namely zero utility. If, on the other hand, p is false, then I get utility +1 if I assign credence 0 to p, and I get utility −1 if I assign credence 1 to p, and linearly interpolating tells me that the utility of credence s is 1−2s.
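To make the model concrete, here is a minimal Python sketch of that piecewise-linear utility (the function name is just mine, not anything in the post):

```python
def truth_love_utility(s, p_true):
    """Utility of holding credence s in p under the linear truth-love model:
    2s - 1 if p is true, 1 - 2s if p is false."""
    return 2 * s - 1 if p_true else 1 - 2 * s

# Sanity checks matching the text: credence 1/2 is worth nothing either way;
# full credence in a truth is worth +1; full credence in a falsehood is worth -1.
assert truth_love_utility(0.5, True) == 0 and truth_love_utility(0.5, False) == 0
assert truth_love_utility(1, True) == 1 and truth_love_utility(1, False) == -1
```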

These utilities are a kind of model of love of truth. I want to have the truth firmly in my grasp, and I want to avoid error, and there is complete symmetry between love of truth and hatred of error. And nothing else matters.

What credence should I, from a self-interested decision-theoretic standpoint, assign to p? Well, if I assign credence s to p, my expected utility will be:

  • U(r,s)=r(2s−1)+(1−r)(1−2s)=(2s−1)(2r−1).
It is easy to see that if r>1/2, then I maximize utility when s=1. In other words, on the above model, if nothing matters but truth, and the evidence favors p to any degree, I do best by snapping my credence to 1. Similarly, if r<1/2, then I do best by snapping my credence to 0. The only time when it's not optimal to snap credences to 0 or 1 is when r=1/2, in which case U(r,s)=0 no matter what the credence s is.
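A quick numerical check of this claim, as a sketch (the grid search over s is only for illustration):

```python
def expected_utility(r, s):
    """Expected utility of credence s when the evidence supports p to degree r:
    r*(2s-1) + (1-r)*(1-2s) = (2s-1)*(2r-1)."""
    return (2 * s - 1) * (2 * r - 1)

# For any r > 1/2, expected utility is increasing in s, so it peaks at s = 1.
grid = [i / 100 for i in range(101)]
for r in (0.51, 0.55, 0.9):
    best_s = max(grid, key=lambda s: expected_utility(r, s))
    print(r, best_s)  # best_s comes out as 1.0 in each case
```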

So, love of truth, on the above model, requires me to go beyond the evidence: I should assign the extreme credence on the side that the evidence favors, howsoever slightly the evidence favors it.

Now, if I can't set the credence directly, then I may still be able to ask someone to brainwash me into the right credence, or I might try to instill a habit of snapping credence to 0 or 1 in the case of propositions that don't affect my actions in a significant way.

The conclusion seems wrong. So what's gone wrong? Is it decision theory being fundamentally flawed? Neglect of the existence of epistemic duties that trump even truth-based utilities? The assumption that all that matters here is truth (maybe rational connections matter, too)?

Objection: The rational thing to do is not to snap the credence to 0 or 1, but to investigate p further, which is likely to result in a credence closer to 0 or to 1 than r, as Bayesian convergence sets in.

Response 1: If you do that, you miss out on the higher expected utility during the time you're investigating.

Response 2: In some cases, you may have good reason to think that you're not going to get much more evidence than you already have. For instance, suppose that currently you assign credence 0.55 to p, and you have good reason to think you'll never get closer to 0 or 1 than a credence of 0.60 or 0.40. It turns out that you can do an eventual expected utility calculation comparing two plans. Plan 1 is to just snap your credence to 1. Then your eventual expected utility is U(0.55,1)=0.1. Plan 2 is the evidentialist plan of seeking more evidence and proportioning your belief to it. Then, plausibly, your eventual expected utility is no bigger than aU(0.60,0.60)+(1−a)U(0.40,0.40), where a is the probability that you'll end up with a subjective probability bigger than 1/2 (that you'll end up exactly at 1/2 has zero chance). But U(0.60,0.60)=U(0.40,0.40)=0.04. So you'll do better going with Plan 1. Your eventual expected utility (and here you need to look at what weight should be assigned to considerations of how much more valuable it is to have the truth earlier), however, will be even better if you try to have the best of both worlds. Plan 3: investigate until it looks like you are unlikely to get anything significantly more definite, and then snap your credence to 0 or 1. You might expect, say, to get credence 0.60 or 0.40 after such investigation, and then your utility will be aU(0.60,1)+(1−a)U(0.40,0)=0.2. This plan combines an evidentialist element together with a non-evidentialist snapping of credences way past the evidence.
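Here is a sketch of the Plan 1/2/3 comparison with the numbers from the example above (the value of a is an arbitrary stand-in; the comparison comes out the same for any a, since both branches of Plans 2 and 3 have equal utility):

```python
def expected_utility(r, s):
    # (2s-1)*(2r-1), as in the post
    return (2 * s - 1) * (2 * r - 1)

a = 0.5  # assumed probability of ending up at 0.60 rather than 0.40

plan1 = expected_utility(0.55, 1)  # snap to 1 right away: 0.1
plan2 = a * expected_utility(0.60, 0.60) + (1 - a) * expected_utility(0.40, 0.40)  # evidentialist: 0.04
plan3 = a * expected_utility(0.60, 1) + (1 - a) * expected_utility(0.40, 0)  # investigate, then snap: 0.2

print(plan1, plan2, plan3)  # roughly 0.1, 0.04, 0.2
```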

2 comments:

  1. Dr. Pruss, you just ruined my paper... I'm currently working on a paper outlining my epistemic theory as a moderate evidentialist and this is a stick in my spokes! Well, thanks for the new information to consider!

  2. Lara Buchak has pointed out to me that there is a literature on this stuff--see here.

    What I am using in this paper is, I think, basically the absolute value rule. But its symmetry is unsatisfactory. The disutility of assigning 1 to something false shouldn't be equal to the utility of assigning 1 to something true (in fact, I have an argument that the disutility should be at least 2.59 times greater than the utility--I may post that eventually).
