Epistemic value is the value of true belief and disvalue of false belief.
Let p be the proposition that there is such a thing as epistemic value.
Suppose p is true. Then, plausibly, the higher your credence in p, the more epistemic value your credence has. The closer your credence is to certainty, the closer to truth your representation is. Let tp(r) be the value of having credence r in p when in fact p is true. Then tp(r) is a strictly increasing function of r.
Suppose p is false. Then there is no such thing as epistemic value, so whatever credence you have in p, the epistemic value of that credence is zero.
Now suppose you are not sure about p, so your credence in p is some r with 0 < r < 1. Consider the idea of setting your credence to some other value r′. What is the expected epistemic value of doing so? Well, if p is false, there will be no epistemic value, and if p is true, you will have epistemic value tp(r′). Your current probability for p is r. So your expected epistemic value is
r⋅tp(r′) + (1 − r)⋅0 = r⋅tp(r′).
Thus, since tp is strictly increasing, to maximize your expected epistemic value you should set r′ = 1. In other words, if you ought to maximize expected epistemic value, then you should have credence one in p even if your evidence does not support p.
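Here is a minimal numerical sketch of that calculation, assuming the illustrative choice tp(r) = r; the names `t_p` and `expected_value` are mine, and any strictly increasing tp would give the same verdict.

```python
# Minimal sketch of the expected-value calculation above.
# t_p stands for tp in the text: the value of credence r in p when p is true.
# The choice t_p(r) = r is only illustrative; any strictly increasing function works.

def t_p(r):
    return r

def expected_value(r, r_new):
    # Expected epistemic value of adopting credence r_new, computed by the
    # lights of current credence r: r*t_p(r_new) + (1 - r)*0.
    return r * t_p(r_new) + (1 - r) * 0

r = 0.3  # current credence in p, strictly between 0 and 1
candidates = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
print(max(candidates, key=lambda x: expected_value(r, x)))  # 1.0
```

Whatever the current credence r, the expected value r⋅tp(r′) is increasing in r′, so the recommendation is always r′ = 1.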
What do we learn from this?
First, either evidentialism (the view that your degree of belief should be proportioned to the evidence) is false or else expected epistemic utility maximization is the wrong way to think about epistemic normativity.
Second, there are cases where the right epistemic scoring rule is improper. For, given a proper epistemic scoring rule (one on which every consistent credence expects itself to do at least as well as any alternative) and a consistent credence assignment, we never get a recommendation to change credence. The scoring rule underlying the above epistemic value assignments is clearly improper, and yet it is also clearly right.
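To make the contrast concrete, here is a hedged sketch comparing the rule above with the Brier rule, written here as an accuracy measure 1 − (r′ − truth)², which is a standard proper scoring rule. The function names are mine, and tp(r) = r is again just an illustrative choice.

```python
# Compare the (improper) truth-only rule above with the (proper) Brier rule.
# Scores are written so that higher = epistemically better.

def truth_only_value(r_new, truth):
    # The rule from the argument: value only if p is true, zero if p is false.
    # tp(r) = r is an illustrative choice of strictly increasing function.
    return r_new if truth else 0.0

def brier_value(r_new, truth):
    # Proper rule: 1 - (credence - truth value)^2.
    return 1.0 - (r_new - (1.0 if truth else 0.0)) ** 2

def expected(score, r, r_new):
    # Expected score of adopting r_new, by the lights of current credence r.
    return r * score(r_new, True) + (1 - r) * score(r_new, False)

r = 0.3
candidates = [i / 100 for i in range(101)]
print(max(candidates, key=lambda x: expected(truth_only_value, r, x)))  # 1.0
print(max(candidates, key=lambda x: expected(brier_value, r, x)))       # 0.3
```

The proper rule leaves a consistent credence where it is; the rule above always pushes it to certainty in p.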