Friday, October 29, 2021

Evidentialism and epistemic utilities

Epistemic value is the value of true belief and disvalue of false belief.

Let p be the proposition that there is such a thing as epistemic value.

Suppose p is true. Then, plausibly, the higher your credence in p, the more epistemic value your credence has. The closer your credence is to certainty, the closer to truth your representation is. Let tp(r) be the value of having credence r in p when in fact p is true. Then tp(r) is a strictly increasing function of r.

Suppose p is false. Then whatever credence you have in p, the epistemic value of that credence is zero.

Now suppose you are not sure about p, so your credence in p is an r such that 0 < r < 1. Consider now the idea of setting your credence to some other value r′. What is the expected epistemic value of doing so? Well, if p is false, there will be no epistemic value, and if p is true, you will have epistemic value tp(r′). Your current probability for p is r. So your expected epistemic value is

  • r⋅tp(r′) + (1 − r)⋅0 = r⋅tp(r′).

Thus, to maximize your expected epistemic value, you should set r′ = 1. In other words, even if your evidence does not support p, you should still have credence one in p, if you should maximize expected epistemic value.
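The derivation above can be checked numerically. This is a minimal sketch, assuming for illustration the strictly increasing choice tp(r′) = r′ (any strictly increasing tp gives the same verdict):

```python
# Expected epistemic value of moving credence in p from r to r_new:
# r * t_p(r_new) if p is true, plus (1 - r) * 0 if p is false.
# t_p(r') = r' is an illustrative assumption; the argument only needs
# t_p to be strictly increasing.

def expected_value(r, r_new, t_p=lambda x: x):
    """Expected epistemic value of adopting credence r_new, by current lights r."""
    return r * t_p(r_new) + (1 - r) * 0

r = 0.3  # current credence in p
candidates = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
best = max(candidates, key=lambda r_new: expected_value(r, r_new))
print(best)  # 1.0: the maximizer is r' = 1 for any current r > 0
```

Since the expected value is just r times an increasing function of r′, the maximizer is r′ = 1 whenever r > 0, which is the point of the post.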

What do we learn from this?

First, either evidentialism (the view that your degree of belief should be proportioned to the evidence) is false or else expected epistemic utility maximization is the wrong way to think about epistemic normativity.

Second, there are cases where the right epistemic scoring rule is improper. For, given a proper epistemic scoring rule and a consistent credence assignment, we never get a recommendation of a change of credence. The scoring rule underlying the above epistemic value assignments is clearly improper, and yet is also clearly right.
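The contrast with a proper rule can be made concrete. In this sketch the improper rule is the one from the post (with the illustrative assumption tp(r′) = r′), and the proper rule is Brier accuracy, i.e. the negative of squared distance from truth:

```python
# Improper rule from the post: score r' if p is true, 0 if p is false
# (t_p(r') = r' is an illustrative choice of increasing t_p).
def expected_improper(r, r_new):
    return r * r_new + (1 - r) * 0

# Brier accuracy, a standard proper rule:
# -(1 - r')^2 if p is true, -(r')^2 if p is false.
def expected_brier(r, r_new):
    return r * -(1 - r_new) ** 2 + (1 - r) * -(r_new ** 2)

r = 0.3  # current credence in p
grid = [i / 100 for i in range(101)]
print(max(grid, key=lambda x: expected_improper(r, x)))  # 1.0: recommends moving
print(max(grid, key=lambda x: expected_brier(r, x)))     # 0.3: recommends staying put
```

Under the proper Brier rule, expected accuracy by your current lights is maximized by keeping your current credence, whereas the improper rule from the post always recommends jumping to 1.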


IanS said...

Is p, as applied in the post, really a proposition, or is it implicitly a norm of epistemic rationality?

To spell out this interpretation of p: “Epistemically rational agents should value true belief over false belief, even in propositions that have no immediate practical implications. For partial belief expressed as a numerical credence, they should value proximity to truth according to a scoring rule. They should seek to maximize expected proximity to truth.”

Taken as a norm rather than a proposition, p would not be subject to the rules it prescribes for credences in propositions. Strictly, you don’t have credences in norms, you accept them or reject them or judge them more or less reasonable.

Alexander R Pruss said...

I think norms, or at least objective norms, are propositions. They are true or false: true if one should do what the norm says one should do, and false otherwise.

jqb said...

"I think norms, or at least objective norms, are propositions. They are true or false: true if one should do what it says one should do, and false otherwise."

What the heck does "should" mean here? It's just an opinion, or at least it's just an opinion that it's not just an opinion. In any case, there's no proposition, or the claim that there is presupposes itself.

What I learn from this is not your First and Second, but rather that getting this sort of reasoning right is very difficult and we should not trust its conclusions.

IanS said...

If I’m thinking straight, your example seems to fit into the framework of this paper: B. A. Levinstein, “An Objection of Varying Importance to Epistemic Utility Theory”. (The paper does not appear to have many citations.)

To spell out the relation: if p is true, it is ‘epistemically important’; if false, it isn’t. The paper doesn’t give self-referential examples like p, but I don’t think that matters.

Alexander R Pruss said...

Yeah, I've thought a bit about that, but I had a somewhat hard time finding other cases.

Alexander R Pruss said...

It's a nice paper. I wish it didn't assume additivity in its main results, since additivity seems implausible to me.