There are two ways of evaluating a credence assignment. There is the decision-theoretic method: you consider how well you will do, given this credence assignment, when presented with some gambles. And there is the scoring rule method: you consider how far you are from "the truth", i.e., the credence assignment that assigns 0 to the falsehoods and 1 to the truths, and you measure this distance with respect to a proper scoring rule.
There are various parallel results for the two methods.
It turns out that there is a good reason why there are parallel results. The two methods are equivalent. Assume an underlying probability space Ω. To avoid measurability issues, suppose Ω is finite. Denote a scoring rule by a function s(p,q), where p is a consistent credence assignment for some family of sentences and q is a consistent extreme credence assignment (0/1-valued) for the same family. By "the truth", I mean the extreme credence assignment that assigns 1 to each true sentence and 0 to each false one. A scoring rule is proper provided that Ep(−s(p',T))≤Ep(−s(p,T)) for all consistent credence assignments p and p', where T is the random variable that assigns to each point ω of Ω a function T(ω) that in turn assigns to each sentence its truth value at ω (i.e., 1 if true, 0 if false), and where Ep is expectation with respect to the credence assignment p.
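As a concrete sanity check (my own illustrative sketch, not part of the post's formal setup), here is a numerical verification that the Brier score s(p,q) = (p−q)² on a single sentence satisfies the propriety inequality on a grid of credences:

```python
# Numerical check that the Brier score on one sentence is proper:
# E_p(-s(p', T)) <= E_p(-s(p, T)) for all p, p' on a grid.

def brier(p_reported, truth):
    """Brier penalty for credence p_reported when the sentence's truth value is 0 or 1."""
    return (p_reported - truth) ** 2

def expected_payoff(p, p_reported):
    """p-expectation of -brier(p_reported, T), where T = 1 with probability p."""
    return p * -brier(p_reported, 1) + (1 - p) * -brier(p_reported, 0)

grid = [i / 100 for i in range(101)]
is_proper = all(
    expected_payoff(p, pp) <= expected_payoff(p, p) + 1e-12
    for p in grid for pp in grid
)
print(is_proper)  # True
```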
Theorem. For any proper scoring rule s(p,q), there exists a family F of gambles such that for any consistent credence assignment p there is a gamble in F that maximizes the expected payoff, and when you choose that maximizing gamble your payoff will be −s(p,T). Conversely, suppose that F is a family of gambles such that for any consistent credence assignment p there is a gamble in F that maximizes the expected payoff. Let V(p,q) be the payoff of such a gamble for credence assignment p when q is the truth. Then −V(p,q) is a proper scoring rule.
The proof is actually very simple. (I had very complicated proofs of special cases of this Theorem in the past, but now I see it is all very simple, even trivial.) For the left-to-right direction, for any possible credence assignment p, define the gamble Gp as follows: at ω, you get paid −s(p,T(ω)). Then the propriety of the scoring rule guarantees that Gp maximizes the expected payoff when p is your credence assignment. Conversely, let s(p,q)=−V(p,q). Propriety is easy to check: it just follows from maximization.
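The left-to-right construction can be sketched numerically as well (again an illustrative sketch of mine, using the Brier score and a single sentence): the family F = {Gq : q a credence} pays −s(q,T(ω)) at ω, and the expected-payoff-maximizing member of F under credence p turns out to be Gp itself.

```python
# The gamble G_q pays -s(q, T(omega)); under credence p, propriety makes
# G_p the best choice in the family {G_q}. Brier score, one sentence.

def gamble_payoff(q, truth):
    """Payoff of G_q when the sentence has the given truth value: -s(q, truth)."""
    return -((q - truth) ** 2)

def expected_payoff(p, q):
    """p-expected payoff of choosing G_q (sentence true with probability p)."""
    return p * gamble_payoff(q, 1) + (1 - p) * gamble_payoff(q, 0)

p = 0.7
candidates = [i / 100 for i in range(101)]  # a grid of gambles G_q
best_q = max(candidates, key=lambda q: expected_payoff(p, q))
print(best_q)  # 0.7: the maximizing gamble is G_p, and its payoff is -s(p, T)
```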
In the converse, one should restrict to consistent credences, and then say that −V extends to a proper scoring rule.
Wait! What are you supposed to do if the gamble that maximizes expected utility is not unique? I am not sure the alleged theorem is right.
Maybe one should rephrase the left-to-right part of the theorem:
For any proper scoring rule s(p,q), there exists a family F of gambles such that for any consistent credence assignment p there is a gamble G in F such that (a) G maximizes the expected payoff and (b) when you choose G, your payoff will be −s(p,T).
In other words, (b) need not be true for every expected-payoff-maximizing gamble when that gamble is not unique.
When the scoring rule is strictly proper, we do get uniqueness.
For the right-to-left direction, we should fix some tie-breaking method if there are ties between gambles.