I want to have correct beliefs. Consider some proposition p that isn't going to affect my actions in a significant way, directly or indirectly (say, by epistemic connections to other propositions): for instance, that there is life outside our galaxy. Suppose that my evidence supports p to degree r, with 0&lt;r&lt;1. What credence should I assign to p? The evidentialist will say that I should assign r. But that's not the answer decision theory gives on the following model of my desires.
As I said, I want to be right about p. If I assign credence 1/2 to p, I get no utility, regardless of whether p is true or not. If p is true and I assign credence 1 to p, then I get utility +1, and if I assign credence 0 to p then I get utility −1. Between these two extremes, I interpolate linearly: if p is true, the utility of credence s is 2s−1. And this gives the right answer for credence 1/2, namely zero utility. If, on the other hand, p is false, then I get utility +1 if I assign credence 0 to p, and utility −1 if I assign credence 1 to p, and linear interpolation tells me that the utility of credence s is 1−2s.
These utilities are a kind of model of love of truth. I want to have the truth firmly in my grasp, and I want to avoid error, and there is complete symmetry between love of truth and hatred of error. And nothing else matters.
What credence should I, from a self-interested decision-theoretic standpoint, assign to p? Well, if I assign credence s to p, my expected utility will be: U(r,s) = r(2s−1) + (1−r)(1−2s) = (2r−1)(2s−1). This is linear in s, so it is maximized at s=1 when r&gt;1/2 and at s=0 when r&lt;1/2.
So, love of truth, on the above model, requires me to go beyond the evidence: I should assign the extreme credence on the side that the evidence favors, howsoever slightly the evidence favors it.
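To see the arithmetic, here is a quick sketch of the love-of-truth utility model; the function name U and the grid search are just for illustration, with U(r,s) = r(2s−1) + (1−r)(1−2s) as defined above:

```python
def U(r, s):
    """Expected utility of assigning credence s to p when the evidence
    supports p to degree r, on the love-of-truth model:
    U(r, s) = r*(2s - 1) + (1 - r)*(1 - 2s) = (2r - 1)*(2s - 1)."""
    return r * (2 * s - 1) + (1 - r) * (1 - 2 * s)

# U is linear in s, so the optimum sits at an endpoint: s = 1 whenever
# r > 1/2, and s = 0 whenever r < 1/2. Check by brute force over a grid.
for r in (0.51, 0.55, 0.9):
    best = max((s / 100 for s in range(101)), key=lambda s: U(r, s))
    print(r, best)  # best credence is 1.0 for each of these r > 1/2
```

Even when the evidence favors p only barely (r = 0.51), the expected-utility-maximizing credence jumps all the way to 1.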
Now, if I can't set the credence directly, then I may still be able to ask someone to brainwash me into the right credence, or I might try to instill a habit of snapping credence to 0 or 1 in the case of propositions that don't affect my actions in a significant way.
The conclusion seems wrong. So what's gone wrong? Is decision theory fundamentally flawed? Is it neglect of the existence of epistemic duties that trump even truth-based utilities? Or the assumption that all that matters here is truth (maybe rational connections matter, too)?
Objection: The rational thing to do is not to snap the credence to 0 or 1, but to investigate p further, which is likely to result in a credence closer to 0 or to 1 than r, as Bayesian convergence sets in.
Response 1: If you do that, you miss out on the higher expected utility during the time you're investigating.
Response 2: In some cases, you may have good reason to think that you're not going to get much more evidence than you already have. For instance, suppose that you currently assign credence 0.55 to p, and you have good reason to think you'll never get closer to 0 or 1 than a credence of 0.60 or 0.40. It turns out that you can do an eventual expected utility calculation comparing two plans. Plan 1 is to just snap your credence to 1. Then your eventual expected utility is U(0.55,1)=0.1. Plan 2 is the evidentialist plan of seeking more evidence and proportioning your belief to it. Then, plausibly, your eventual expected utility is no bigger than aU(0.60,0.60)+(1−a)U(0.40,0.40), where a is the probability that you'll end up with a subjective probability bigger than 1/2 (that you'll end up exactly at 1/2 has zero chance). But U(0.60,0.60)=U(0.40,0.40)=0.04. So you'll do better going with Plan 1. Your eventual expected utility (and here you need to look at what weight should be assigned to considerations of how much more valuable it is to have the truth earlier), however, will be even better if you try to have the best of both worlds. Plan 3: investigate until it looks like you are unlikely to get anything significantly more definite, and then snap your credence to 0 or 1. You might expect, say, to get credence 0.60 or 0.40 after such investigation, and then your utility will be aU(0.60,1)+(1−a)U(0.40,0)=0.2. This plan combines an evidentialist element with a non-evidentialist snapping of credences well past the evidence.
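The comparison of the three plans can be checked numerically. This is a sketch using the figures from the text (0.55 current credence, 0.60/0.40 as the best attainable evidential extremes); the particular value of a is arbitrary, since a cancels out of the Plan 2 and Plan 3 calculations:

```python
def U(r, s):
    """Expected utility of credence s given evidential probability r:
    U(r, s) = r*(2s - 1) + (1 - r)*(1 - 2s)."""
    return r * (2 * s - 1) + (1 - r) * (1 - 2 * s)

a = 0.7  # probability of ending up above 1/2; results below hold for any a

plan1 = U(0.55, 1)                                   # snap to 1 right now
plan2 = a * U(0.60, 0.60) + (1 - a) * U(0.40, 0.40)  # proportion belief to evidence
plan3 = a * U(0.60, 1) + (1 - a) * U(0.40, 0)        # investigate, then snap

print(plan1, plan2, plan3)  # approximately 0.1, 0.04, 0.2
```

Plan 3 dominates Plan 1, which in turn dominates the evidentialist Plan 2, matching the calculation in the text (modulo the question of how much extra weight having the truth earlier should get).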