Accuracy scoring rules measure the epistemic utility of having some credence assignment. For simplicity, let’s assume that all credence assignments are probabilistically coherent. A strictly proper scoring rule has the property that, by one’s own lights, the expected value of one’s actual credence assignment is always strictly better than that of any other credence assignment.
It is a well-known fact that a strictly proper scoring rule always makes it rational to update on non-trivial evidence. That is, by one’s present lights, the expected epistemic utility of examining and updating on non-trivial evidence is higher than the expected epistemic utility of ignoring that evidence. We might put this by saying that a strictly proper scoring rule is strictly open-minded.
The logarithmic scoring rule makes the score of assigning credence r be log r when the hypothesis is true and log (1−r) when the hypothesis is false. It is strictly proper and hence strictly open-minded.
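Strict propriety of the logarithmic rule can be checked numerically (a sketch in Python; the function name is mine, not from the post): by the lights of credence p, the expected log score of reporting credence r is p·log r + (1−p)·log(1−r), and it is maximized exactly at r = p.

```python
import math

def expected_log_score(p, r):
    """Expected log score, by the lights of credence p, of reporting credence r in H:
    log r if H is true, log(1 - r) if H is false."""
    return p * math.log(r) + (1 - p) * math.log(1 - r)

# Strict propriety: for any credence p, honestly reporting r = p beats any other report.
p = 0.7
honest = expected_log_score(p, p)
for r in [0.1, 0.3, 0.5, 0.69, 0.71, 0.9]:
    assert expected_log_score(p, r) < honest
```

A quick calculus check gives the same result: the derivative p/r − (1−p)/(1−r) vanishes only at r = p.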
The logarithmic scoring rule, however, satisfies a condition even stronger than strict open-mindedness. This condition is easiest to describe in a binary case where one is simply evaluating the score of one’s credence in a single hypothesis H. Given some non-triviality assumptions, it turns out that not only is the expected epistemic utility increased by examining evidence, but the expected epistemic utility conditional on H is increased by examining evidence. (This is a pretty easy calculation.)
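One way to see the calculation (a Python sketch with illustrative numbers of my choosing): conditional on H, evidence E arrives with probability P(E|H), and the expected gain in log score works out to the Kullback–Leibler divergence of P(·|H) from one’s prior marginal over the evidence, which is strictly positive when the evidence is non-trivial.

```python
import math

def posterior(p, like_h, like_not_h):
    """Bayesian update of credence p in H on evidence with likelihoods
    P(evidence | H) = like_h and P(evidence | not-H) = like_not_h."""
    return p * like_h / (p * like_h + (1 - p) * like_not_h)

# Illustrative prior credence in H and likelihoods P(E|H) = a, P(E|not-H) = b, a != b.
p, a, b = 0.3, 0.8, 0.4

# Expected log score of one's credence in H, *conditional on H*:
# given H, E occurs with probability a and not-E with probability 1 - a.
look = a * math.log(posterior(p, a, b)) + (1 - a) * math.log(posterior(p, 1 - a, 1 - b))
dont_look = math.log(p)
assert look > dont_look  # looking beats ignoring, even conditionally on H
```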
So what?
Well, there are several reasons this matters. First, on my recent account of what it is to have a no-hedge commitment to a hypothesis H, if your epistemic utilities are measured by some scoring rule (e.g., the Brier score) and you have a no-hedge commitment to H but you do not have credence 1 in H, then you will sometimes have reason to refuse to look at evidence. But the above fact about the logarithmic scoring rule shows that this is not so for the logarithmic scoring rule. With the logarithmic scoring rule, it makes sense to look at the evidence even if you have a no-hedge commitment to H—i.e., even if all your betting behavior is “as if H”.
Second, let’s imagine that I run a funding agency and you come to me with an interest in doing some experiment relevant to a hypothesis H. Let’s suppose that the relevant epistemic community agrees on the relevant likelihoods with respect to the evidence obtainable from the experiment, and is perfectly rational, but differs with regard to the priors for H. I might then have a paternalistic worry about funding the experiment. Even though updating on the results of the experiment by my lights is expected to benefit me epistemically, if a strictly proper scoring rule is the appropriate measure of benefit, it may not be true that by my lights other members of the community will benefit epistemically from updating on the results of the experiment. I may, for instance, be close to certain of H, and think that some members of the community have credences that are sufficiently high that the benefit to them of getting a boost in credence in H from the experiment is outweighed by the risk of misleading evidence. If it is my job to watch out for the epistemic good of the community, this could give me reason to refuse funding.
But not so if I think the logarithmic rule is the right way to evaluate epistemic utility. If everyone shares likelihoods, and we differ only in priors for H, and everyone is rational, then when we measure epistemic utility with the logarithmic rule, I have a positive expectation of the epistemic utility effect of examining the experiment’s results on each member of the community. This is easily shown to follow from my above observation about the logarithmic scoring rule. (By my lights the expectation of a fellow community member’s epistemic utility after updating on the experimental results is a weighted sum of an expectation given H and an expectation given not-H. Each improves given the experiment.)
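The contrast can be illustrated numerically (a Python sketch; the parameter values and function names are mine, not from the post). Suppose I am nearly certain of H (prior 0.99), a colleague’s prior is 0.9, and we share the likelihoods P(E|H) = 0.6 and P(E|not-H) = 0.4. By my lights, the colleague’s expected epistemic gain from updating on the experiment is positive under the logarithmic rule but negative under the Brier rule.

```python
import math

def post(p, lh, lnh):
    """Bayesian posterior in H from prior p with likelihoods P(e|H)=lh, P(e|not-H)=lnh."""
    return p * lh / (p * lh + (1 - p) * lnh)

def log_util(r, h):    # logarithmic epistemic utility of credence r in H
    return math.log(r) if h else math.log(1 - r)

def brier_util(r, h):  # (negative) Brier epistemic utility of credence r in H
    return -(1 - r) ** 2 if h else -r ** 2

def my_expected_benefit(q, p, a, b, util):
    """By my lights (prior q in H), the expected change in a colleague's (prior p)
    epistemic utility from updating on the experiment's outcome.
    Shared likelihoods: P(E|H) = a, P(E|not-H) = b."""
    total = 0.0
    for h, p_h in [(True, q), (False, 1 - q)]:
        for e_lh, e_lnh in [(a, b), (1 - a, 1 - b)]:  # outcome E or not-E
            p_e = e_lh if h else e_lnh                # my chance of this outcome, given h
            total += p_h * p_e * (util(post(p, e_lh, e_lnh), h) - util(p, h))
    return total

q, p, a, b = 0.99, 0.9, 0.6, 0.4
assert my_expected_benefit(q, p, a, b, log_util) > 0    # log: I expect the experiment to help them
assert my_expected_benefit(q, p, a, b, brier_util) < 0  # Brier: I expect it to harm them
```

On these illustrative numbers, the Brier-measured expectation is negative because the (by my lights unlikely) misleading outcome not-E drags the colleague’s credence down more than the likely outcome E raises it, while the logarithmic rule’s conditional improvement given H and given not-H guarantees a positive expectation.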