Friday, March 26, 2021

Credences and decision-theoretic behavior

Let p be the proposition that among the last six coin tosses worldwide that preceded my typing the period at the end of this sentence, there were exactly two heads tosses. The probability of p is 6!/(2⁶⋅2!⋅4!) = 15/64.
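(Purely as an illustration, here is a minimal Python check of that arithmetic; the variable name is mine and not anything from the argument.)

```python
from fractions import Fraction
from math import factorial

# Probability of exactly two heads in six fair tosses:
# C(6,2) / 2^6 = 6!/(2^6 * 2! * 4!) = 15/64
p_two_heads = Fraction(factorial(6), 2**6 * factorial(2) * factorial(4))
print(p_two_heads)         # 15/64
print(float(p_two_heads))  # 0.234375
```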

Now that I know that, what is my credence in p? Is it 15/64? I don’t think so. I don’t think my credences are that precise. But if I were engaging in gambling behavior with amounts small enough that risk aversion wouldn’t come into play, now that I’ve done the calculation, I would carefully and precisely gamble according to 15/64. Thus, I do not think my decision-theoretic behavior reflects my credence—and not through any irrationality in my decision-theoretic behavior.

Here’s a case that makes the point perhaps even more strongly. Suppose I didn’t bother to calculate what fraction 6!/(2⁶⋅2!⋅4!) was, but given any decision concerning p, I calculate the expected utilities by using 6!/(2⁶⋅2!⋅4!) as the probability. Thus, if you offer to sell me a gamble where I get $19 if p is true, I would value the gamble at $19⋅6!/(2⁶⋅2!⋅4!), and I would calculate that quantity as $4.45 without actually calculating 6!/(2⁶⋅2!⋅4!). (E.g., I might multiply 19 by 6! first, then divide by 2⁶⋅2!⋅4!.) I could do this kind of thing fairly mechanically, without noticing that $4.45 is about a quarter of $19, and hence without having much of an idea as to where 6!/(2⁶⋅2!⋅4!) lies in the 0 to 1 probability range. If I did that, then my decision-theoretic behavior would be quite rational, and would indicate a credence of 15/64 in p, but in fact it would be pretty clearly incorrect to say that my credence in p is 15/64. In fact, it might not even be correct to say that I assigned a credence less than a half to p.
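(Again just for illustration, a sketch of that mechanical order of operations in Python; the variable names are my own.)

```python
from fractions import Fraction
from math import factorial

# "Mechanical" valuation of the gamble: multiply the $19 prize by 6! first,
# then divide by 2^6 * 2! * 4!, without ever reducing the probability to
# 15/64 or noticing that it is roughly a quarter.
numerator = 19 * factorial(6)                     # 19 * 720 = 13680
denominator = 2**6 * factorial(2) * factorial(4)  # 64 * 2 * 24 = 3072
value = Fraction(numerator, denominator)          # 285/64
print(f"${float(value):.2f}")                     # $4.45
```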

I could even imagine a case like this. I make an initial mental estimate of what 6!/(2⁶⋅2!⋅4!) is, and I mistakenly think it’s about three quarters. As a result, I am moderately confident in p. But whenever a gambling situation is offered to me, instead of relying on my moderate confidence, I do an explicit numerical calculation, and then go with the decision recommended to me by expected utility maximization. However, I don’t bother to figure out how the results of these calculations match up with what I think about p. If you were to ask me, I would say that p is likely true. But if you were to offer me a gamble, I would do calculations that better fit with the hypothesis of my having a credence close to a quarter. In this case, I think my real credence is about three quarters, but my rational decision-theoretic behavior is something else altogether.

Furthermore, there seems to me to be a continuum running from decision-theoretic behavior based on mental calculation, through pencil-and-paper calculation and the use of a calculator, to the use of a natural language query system that can be asked “What is the expected utility of gambling on exactly two of six coin tosses being heads when the prize for being right is $19?” (a souped-up Wolfram Alpha, say). Clearly, the last two need not reflect one’s credences. And by the same token, I think the first two need not either.

All this suggests to me that decision-theoretic behavior lacks the kind of tight conceptual connection to credences that people enamored of representation theorems would like.

1 comment:

  1. This seems reasonable. Economists like to talk about ‘revealed preference’ vs ‘stated preference’. Maybe something similar applies here. Maybe your ‘implicit credence’ is ‘6!/(2^6 * 2! * 4!), whatever that works out as’. Or if you don’t even bother with the formula, but rely on the natural language query system, your ‘implicit credence’ is ‘whatever the computer says’.

    These sorts of examples raise an issue that sometimes worries me. Judged by the standards of formal models in epistemology and decision theory, humans are far from rational. But we still manage to get by in everyday life, learn and believe lots of (presumably) true things, discover general relativity, etc. How does that work? And where do the formal models fit in?
