Saturday, May 11, 2024

What is it like not to be hedging?

Plausibly, a Christian commitment prohibits hedging. Even if one’s own credence in Christianity is less than 100%, in some sense one should act “as if it were 100%”, without hedging one’s bets. One shouldn’t have a backup plan in case Christianity is false.

Understanding exactly what this means is difficult. Suppose Alice has a Christian commitment, but her credence in Christianity is 97%. If someone asks Alice her credence in Christianity, she should not lie and say “100%”, even though that would literally be acting “as if it is 100%”.

Here is a more controversial issue. Suppose Alice has a 97% credence in Christianity, but has the opportunity to examine a piece of evidence that will settle the question one way or the other: it will make her 100% certain Christianity is true or 100% certain it’s not. (Maybe she has an opportunity for a conversation with God.) If she were literally acting as if her credence were 100%, there would be no point in looking at any more evidence. But that seems the wrong answer: refusing to look seems to be a way of being scared that the evidence will refute Christianity, and that kind of fear is opposed to the no-hedge attitude.

Here is a suggestion about how no-hedge decision-making should work. When I think about my credences, say in the context of decision-making, I can:

  1. think about the credences as psychological facts about me, or

  2. regulate my epistemic and practical behavior by the credences (use them to compute expected values, etc.).

The distinction between these two approaches to my credences is especially clear from a third-person perspective. Bob, who is Alice’s therapist, thinks about Alice’s credences as psychological facts about her, but does not regulate his own behavior by these credences: Alice’s credences have a psychologically descriptive role for Bob, but not a regulative role in his actions. In fact, they probably don’t even have a regulative role for Bob when he thinks about what actions are good for Alice. If Alice has a high credence in the danger of housecats, and Bob does not, Bob will not encourage Alice to avoid housecats; on the contrary, he may well try to change Alice’s credence, in order to get Alice to act more normally around them.

So, here is my suggestion about no-hedging commitments. When you have a no-hedging commitment to a set of claims, you regulate your behavior by them as if the claims had credence 100%, but when you take the credences into account as psychological facts about you, you give them the credence they actually have.

(I am neglecting here a subtle issue. Should we regulate our behavior by our credences or by our opinion about our credences? I suspect that it is by our credences—else a regress results. If that’s right, then there might be a very nice way to clarify the distinction between taking credences into account as psychological facts and taking them into account as regulative facts. When we take them into account as psychological facts, our behavior is regulated by our credences about the credences. When we take them into account regulatively, our behavior is directly regulated by the credences. If I am right about this, the whole story becomes neater.)

Thus, when Alice is asked what her credence in Christianity is, her decision about how to answer depends on the credence qua psychological fact. Hence, she answers “97%”. But when Alice decides whether or not to engage in Christian worship in a time of persecution, her decision would normally depend on the credence qua regulative, and so she does not take into account the 3% probability of being wrong about Christianity: she just acts as if Christianity were certain.

Similarly, when Alice considers whether to look at a piece of evidence that might raise or lower her credence in Christianity, she does need to consider what her credence is as a psychological fact, because her interest is in what might happen to her actual psychological credence.

Let’s think about this in terms of epistemic utilities (or accuracy scoring rules). Suppose Alice were proceeding “normally”, without any no-hedge commitment. When she evaluates the expected epistemic value of examining some piece of evidence (after all, examining it may be practically costly: it may involve digging at an archaeological site, or studying a new language), she needs to take her credences into account in two different ways: psychologically, when calculating the potential epistemic gain from her credence getting closer to the truth and the potential epistemic loss from it getting further from the truth; and regulatively, when calculating the expectations as well as when thinking about what is or is not true.

Now on to some fun technical stuff. Let ϕ(r,t) be the epistemic utility of having credence r in some fixed hypothesis of interest H when the truth value is t (which can be 0 or 1). Let’s suppose there is no as-if stuff going on, and I am evaluating the expected epistemic value of examining whether some piece of evidence E obtains. Then, if P is my credence function, the expected epistemic utility V_E of examining the evidence is:

  1. V_E = P(H)(P(E|H)ϕ(P(H|E),1)+P(∼E|H)ϕ(P(H|∼E),1)) + P(∼H)(P(E|∼H)ϕ(P(H|E),0)+P(∼E|∼H)ϕ(P(H|∼E),0)).

Basically, I am partitioning logical space based on whether H and E obtain.
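
To make the bookkeeping concrete, here is a minimal Python sketch of the formula above. The function and argument names are my own illustrative choices, and I assume 0 < P(E) < 1 so that both posteriors are defined.

```python
# A sketch of V_E for an ordinary Bayesian agent with a single
# credence function P; phi(x, t) is the epistemic utility of
# credence x when the truth value is t.

def expected_epistemic_utility(p_H, p_E_given_H, p_E_given_notH, phi):
    p_notH = 1 - p_H
    p_E = p_E_given_H * p_H + p_E_given_notH * p_notH  # assume 0 < p_E < 1
    # Posteriors by Bayes' theorem.
    p_H_given_E = p_E_given_H * p_H / p_E
    p_H_given_notE = (1 - p_E_given_H) * p_H / (1 - p_E)
    # Partition logical space by H and E, as in the formula above.
    return (p_H * (p_E_given_H * phi(p_H_given_E, 1)
                   + (1 - p_E_given_H) * phi(p_H_given_notE, 1))
            + p_notH * (p_E_given_notH * phi(p_H_given_E, 0)
                        + (1 - p_E_given_notH) * phi(p_H_given_notE, 0)))
```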

Now, in the as-if case, the agent basically has two sets of credences, psychological credences and regulative credences, and they come apart. Let Ψ be the psychological credences and R the regulative ones. Then the formula above becomes:

  1. V_E = R(H)(R(E|H)ϕ(Ψ(H|E),1)+R(∼E|H)ϕ(Ψ(H|∼E),1)) + R(∼H)(R(E|∼H)ϕ(Ψ(H|E),0)+R(∼E|∼H)ϕ(Ψ(H|∼E),0)).
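
In code, the key change is that R weights the branches of the expectation while Ψ supplies the credences that get scored. Here is a sketch in the same vein as the one above; the helper name and its arguments are mine, and the psychological posteriors Ψ(H|E) and Ψ(H|∼E) are passed in directly (computing them is taken up below).

```python
def two_credence_value(r_H, r_E_given_H, r_E_given_notH,
                       psi_H_given_E, psi_H_given_notE, phi):
    # The regulative credences R weight the branches of the expectation;
    # the psychological posteriors Psi are what phi scores.
    return (r_H * (r_E_given_H * phi(psi_H_given_E, 1)
                   + (1 - r_E_given_H) * phi(psi_H_given_notE, 1))
            + (1 - r_H) * (r_E_given_notH * phi(psi_H_given_E, 0)
                           + (1 - r_E_given_notH) * phi(psi_H_given_notE, 0)))
```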

The no-hedging case that interests us makes R(H) = 1: we regulatively ignore the possibility that the hypothesis is false. Our expected value of examining whether E obtains is then:

  1. V_E = R(E|H)ϕ(Ψ(H|E),1) + R(∼E|H)ϕ(Ψ(H|∼E),1).

Let’s make a simplifying assumption that the doctrines that we are as-if committed to do not affect the likelihoods P(E|H) and P(E|∼H) (granted, the latter may be a bit fishy if P(H) = 1, but let’s suppose we have Popper functions or something like that to take care of that), so that R(E|H) = Ψ(E|H) and R(E|∼H) = Ψ(E|∼H).

We then have:

  1. Ψ(H|E) = Ψ(H)R(E|H)/(R(E|H)Ψ(H)+R(E|∼H)Ψ(∼H)).

  2. Ψ(H|∼E) = Ψ(H)R(∼E|H)/(R(∼E|H)Ψ(H)+R(∼E|∼H)Ψ(∼H)).
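
Combining the no-hedge formula for V_E with these two expressions, and continuing the illustrative code above, here is a sketch of the no-hedge value of looking; it needs only Ψ(H) and the shared likelihoods.

```python
def no_hedge_value(psi_H, p_E_given_H, p_E_given_notH, phi):
    psi_notH = 1 - psi_H
    # Psychological posteriors, per the two formulas just above.
    psi_H_given_E = (psi_H * p_E_given_H
                     / (p_E_given_H * psi_H + p_E_given_notH * psi_notH))
    psi_H_given_notE = (psi_H * (1 - p_E_given_H)
                        / ((1 - p_E_given_H) * psi_H
                           + (1 - p_E_given_notH) * psi_notH))
    # R(H) = 1 drops the ~H branch, and R(E|H) = Psi(E|H) by assumption.
    return two_credence_value(1.0, p_E_given_H, p_E_given_notH,
                              psi_H_given_E, psi_H_given_notE, phi)
```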

Assuming Alice has a preferred scoring rule, we now have a formula that can guide Alice’s choice of which evidence to look at: she can just check whether V_E is bigger than ϕ(Ψ(H),1), which is her current score regulatively evaluated, i.e., evaluated in the as-if-H-is-true way. If V_E is bigger, it’s worth checking whether E is true.

One might hope for something really nice here: for instance, that if the scoring rule ϕ is strictly proper, then it is always worth looking at the evidence. Not so, alas.

It’s easy to see that V_E beats the current epistemic utility when E is perfectly correlated with H, assuming ϕ(x,1) is strictly increasing in x: for then R(E|H) = 1 and Ψ(H|E) = 1, so V_E = ϕ(1,1), which beats ϕ(Ψ(H),1) whenever Ψ(H) < 1.

Surprisingly and sadly, numerical calculations with the Brier score ϕ(x,t) = −(x−t)² show that if Alice’s credence is 0.97, then unless the Bayes factor of the evidence is very far from 1, the current epistemic utility beats V_E, and so no-hedging Alice should not look at the evidence except in rare cases where the evidence is extreme. Interestingly, though, if Alice’s current credence were 0.5, then Alice should always look at the evidence. I suppose the reason is that if Alice is at 0.97, there is not much room for her Brier score to go up assuming the hypothesis is correct, but there is a lot of room for her score to go down. If we took seriously the possibility that the hypothesis could be false, it would be worth examining the evidence just in case the hypothesis is false. But that would be a form of hedging.
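
For what it’s worth, a quick check with the sketches above reproduces this pattern. The likelihoods below are arbitrary illustrative choices (P(E|∼H) = 0.1 and a few Bayes factors): at Ψ(H) = 0.97 none of these moderate pieces of evidence are worth looking at, while at Ψ(H) = 0.5 all of them are.

```python
def brier(x, t):
    """Brier epistemic utility: phi(x, t) = -(x - t)**2."""
    return -(x - t) ** 2

# Decision rule from above: look at E iff V_E > phi(Psi(H), 1).
for psi_H in (0.97, 0.5):
    for bayes_factor in (2, 5, 9):
        p_E_given_notH = 0.1
        p_E_given_H = bayes_factor * p_E_given_notH
        v_E = no_hedge_value(psi_H, p_E_given_H, p_E_given_notH, brier)
        worth_looking = v_E > brier(psi_H, 1)
        print(f"Psi(H)={psi_H}, Bayes factor={bayes_factor}: "
              f"{'look' if worth_looking else 'do not look'}")
```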

1 comment:

  1. Interestingly, if instead of Brier one uses the logarithmic score, no-hedge Alice will not neglect the evidence. See my post from today.
