One picture of credences is that they are derived from agents’ valuations of wagers (i.e., previsions) as follows: the agent’s credence in a proposition p equals the agent’s valuation of a gamble that pays one unit if p is true and zero units if p is false.
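In symbols (my gloss, writing $V$ for the agent’s prevision and $1_p$ for the indicator gamble):

$$ C(p) := V(1_p), \qquad 1_p = \begin{cases} 1 & \text{if } p \text{ is true} \\ 0 & \text{otherwise,} \end{cases} $$

and for an expected-utility agent with probability function $P$ we have $V(g) = E_P[g]$, so $C(p) = E_P[1_p] = P(p)$.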
While this may give the right answer for a rational agent, it does not work for an irrational one. Here are two closely related problems. First, the above definition of credence depends on the unit system in which the gambles are denominated. A rational agent who values a gamble that pays one dollar on heads and zero dollars otherwise at half a dollar will also value a gamble that pays one yen on heads and zero yen otherwise at half a yen, and we can attribute to the agent a credence of 1/2 in heads. In general, a rational agent’s valuations are invariant under affine transformations of the unit system, so there is no problem. But Bob, an irrational agent, might value the first gamble at $0.60 and the second at 0.30 yen. What, then, is his credence in heads?
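To spell out the invariance (again my gloss): a rational agent with credence $P(\text{heads})$ values a gamble paying $h$ units on heads and $l$ units otherwise at

$$ v = P(\text{heads})\,h + (1 - P(\text{heads}))\,l, \quad\text{so}\quad \frac{v - l}{h - l} = P(\text{heads}) $$

in any unit system. Bob’s valuations, by contrast, yield an implied credence of $0.60/1 = 0.6$ in dollars but $0.30/1 = 0.3$ in yen.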
If there were a privileged unit system for utilities, we could use that, and equate an agent’s credence in p with their valuation of a wager that pays one privileged unit on p and zero on not-p. But there are many units of utility, none of them privileged: dollars, yen, hours of rock climbing, glazed donuts, etc.
And even if there were a privileged unit system, there is a second problem. Suppose Alice is an irrational agent with two different probability functions, P and Q. When Alice calculates the value of a gamble that pays exactly one unit on some proposition and exactly zero units on its negation, she uses classical mathematical expectation based on P. When she calculates the value of any other gamble (i.e., one with fewer or more than two possible payoffs, or with two payoffs at values other than exactly one and zero), she uses classical mathematical expectation based on Q.
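Here is a minimal sketch of Alice’s two-track valuation rule; the representation is my illustrative choice (a gamble is a mapping from the outcomes of a finite space to payoffs in the privileged unit):

```python
def expectation(prob, gamble):
    """Classical mathematical expectation of a gamble under a probability function."""
    return sum(prob[w] * x for w, x in gamble.items())

def alice_value(gamble, P, Q):
    """P for gambles paying exactly 1 on a proposition and 0 on its negation; Q otherwise."""
    if set(gamble.values()) == {0.0, 1.0}:
        return expectation(P, gamble)
    return expectation(Q, gamble)

P = {"H": 0.5, "T": 0.5}
Q = {"H": 0.9, "T": 0.1}
alice_value({"H": 1.0, "T": 0.0}, P, Q)   # 0.5: the 1/0 test gamble sees P
alice_value({"H": 1.01, "T": 0.0}, P, Q)  # 0.909: every other gamble is governed by Q
```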
Then the proposed procedure attributes to Alice the credence function P. But it is in fact Q that is predictive of Alice’s behavior. For we are never in practice offered gambles that have exactly two payoffs. Coin-toss games are rare in real life, and even they have more than two payoffs. For instance, suppose I tell you that I will give you a dollar on heads and zero otherwise. Well, a dollar is worth a different amount depending on when exactly I give it to you: a dollar given earlier is typically more valuable, since you can invest it for longer. And it’s random when exactly I will pay you. So on heads, there are actually infinitely many possible payoffs, some slightly larger than others. Moreover, there is a slight chance of the coin landing on the edge. While that eventuality is extremely unlikely, it has a payoff that’s likely to be more than a dollar: if you ever see a coin landing on edge, you will get pleasure out of telling your friends about it afterwards. Moreover, even if we were offered a gamble that had exactly two payoffs, it is extremely unlikely that these payoffs would be exactly one and zero in the privileged unit system.
The above cases do not undercut a more sophisticated story about the relationship between credences and valuations, a story on which one counts as having the credences that best fit one’s practical valuations of two-valued gambles, and where there is a tie, one’s credences are underdetermined or interval-valued. In Alice’s case, for instance, it is easy to say that Q fits best, while in Bob’s case the credence in heads might be the range from 0.3 to 0.6.
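One naive way to implement the interval-valued option, offered purely as an illustration: read off an implied credence from each two-valued gamble (using the formula above) and take the spread as the interval.

```python
def implied_credence(value, high, low):
    """Credence in p implied by valuing 'high units if p, low units otherwise' at `value`."""
    return (value - low) / (high - low)

# Bob: $0.60 for ($1 on heads, $0 otherwise); 0.30 yen for (1 yen on heads, 0 otherwise).
implied = [implied_credence(0.60, 1.0, 0.0),   # 0.6
           implied_credence(0.30, 1.0, 0.0)]   # 0.3
interval = (min(implied), max(implied))        # (0.3, 0.6): interval-valued credence
```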
But we can imagine a variant of Alice who uses P whenever she is offered a gamble with exactly two payoffs, and Q at all other times. Since two-payoff gambles do not occur in practice, she always uses Q. But if we use two-payoff gambles to define credences, then P will be attributed to Alice as her credence function, despite her never actually using P.
Can we have a more sophisticated story that allows credences to be defined in terms of valuations of gambles with more than two payoffs? I doubt it. For there are multiple ways of relating a prevision to a credence when we are dealing with an inconsistent agent, and none of them seems privileged. Even my favorite way, the Level Set Integral, comes in two versions: the Split and the Shifted.
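For concreteness, here is a sketch of a level set integral on a finite outcome space, assuming the Choquet-style definition $\mathrm{LS}_C(f) = \int_0^\infty C(\{f > y\})\,dy$ for a nonnegative gamble $f$. The Split and Shifted extensions to signed gambles below are my reconstructions, not official definitions:

```python
def ls_nonneg(C, f):
    """Level set integral of a nonnegative gamble f under a credence function C.

    C maps frozenset events to credences; f maps outcomes to payoffs >= 0.
    """
    assert min(f.values()) >= 0
    vals = sorted(set(f.values()) | {0.0})
    total = 0.0
    for lo, hi in zip(vals, vals[1:]):
        event = frozenset(w for w, x in f.items() if x >= hi)  # = {f > y} for y in (lo, hi)
        total += (hi - lo) * C[event]
    return total

def ls_shifted(C, f):
    """Shifted version (my reconstruction): translate f up to be nonnegative, translate back."""
    m = min(f.values())
    return ls_nonneg(C, {w: x - m for w, x in f.items()}) + m

def ls_split(C, f):
    """Split version (my reconstruction): integrate positive and negative parts separately."""
    pos = {w: max(x, 0.0) for w, x in f.items()}
    neg = {w: max(-x, 0.0) for w, x in f.items()}
    return ls_nonneg(C, pos) - ls_nonneg(C, neg)

# An inconsistent credence function: C(Heads) + C(Tails) > 1.
C = {frozenset(): 0.0, frozenset({"H"}): 0.6,
     frozenset({"T"}): 0.6, frozenset({"H", "T"}): 1.0}
f = {"H": 1.0, "T": -1.0}  # win 1 utile on heads, lose 1 utile on tails
print(ls_split(C, f))    # 0.0
print(ls_shifted(C, f))  # 0.2
```

For a consistent probability function both versions reduce to classical expectation, but for the inconsistent C above they disagree (0.0 versus 0.2), which is the point: neither seems privileged.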
I’m not sure I follow this.
An agent’s preferences can be summarized by utilities and credences only if they conform to a suitable model. For the textbook rational agent, the model is expected utility based on consistent credences.
But for irrational agents, possible models are endless. For example, if an irrational agent’s preferences are intransitive, they cannot be represented by utilities. This rules out any sort of utility-and-credence representation. Even for agents whose preferences can be represented by utilities, there is still room for lots of weirdness. An irrational agent’s preferences may have no relation to their epistemic credences. (This may seem irrational, but that’s kinda the point…)
But there are much ‘tamer’ types of irrationality. Inconsistent credences + Level Set Integral, Split version, is one such. For this model, all the relevant credences can be defined by the valuations of suitable test gambles. (E.g., for a coin flip, the previsions of two ‘single proposition’ gambles would suffice: the first paying 1 utile on Heads, the second 1 utile on Tails. For cases with more possible outcomes, more test gambles would be required, one for each possible combination of outcomes.)
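A minimal sketch of this elicitation recipe, assuming the agent’s prevision is available as a function `V` from gambles to utiles (the names are mine, for illustration):

```python
from itertools import combinations

def indicator(event, outcomes):
    """Test gamble paying 1 utile on the event and 0 utiles otherwise."""
    return {w: (1.0 if w in event else 0.0) for w in outcomes}

def elicit_credences(V, outcomes):
    """One test gamble per combination of outcomes, as suggested above."""
    credences = {}
    for k in range(len(outcomes) + 1):
        for combo in combinations(sorted(outcomes), k):
            event = frozenset(combo)
            credences[event] = V(indicator(event, outcomes))
    return credences
```

If the agent really conforms to the ‘inconsistent credences + Split level set integral’ model, feeding the elicited credences back into the `ls_split` sketch above should reproduce all of the agent’s previsions.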
The moral: there cannot be a ‘more sophisticated story’ that works in general, because weird irrational systems of preferences are endless. But for some simple sorts of irrationality, the previsions of suitably selected ‘single proposition’ gambles can determine the implicit model credences.
A note on units: I had understood that utility, where it exists, strictly has no units other than arbitrarily scaled ‘utiles’. One utile might represent the utility of X dollars, Y yen, or Z hours of rock climbing, but utility does not have units of dollars, etc. Further, it need not be linear in dollars, etc.
I suppose this makes sense if one is willing to say that a sufficiently practically irrational agent doesn't have credences. But I don't want to say that: I think one could be arbitrarily practically irrational and still have credences.