## Wednesday, January 15, 2014

### Dutch Books and probabilistic inconsistency

It is often said that if your credences are probabilistically inconsistent, e.g., because you assign probability 0.6 to p and 0.6 to its negation ~p, then you are subject to a Dutch Book: a bookie can present you with a sequence of betting deals such that, by your lights, you will want to accept each one, and yet if you accept them all, you are certain to lose money no matter how things turn out.

While this is often said, and there is indeed a theorem that says roughly this, it is not exactly true as stated.

Take the above case where you assign 0.6 to p and 0.6 to ~p. The standard way to construct a Dutch Book goes something like this. Since you assign 0.6 to p, you'd be happy to pay \$5.50 for a ticket that wins \$10 if p is true. And since you assign 0.6 to ~p, you'd be happy to pay \$5.50 for a ticket that wins \$10 if p is false. So if you're offered both bets, you'll happily accept each, but then no matter whether p turns out to be true or false, you'll have paid out \$11 and won only \$10: a sure net loss of \$1.
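The sure-loss arithmetic is easy to check; a minimal sketch, using the \$5.50 price and \$10 stake from the example above:

```python
# Standard Dutch Book against an agent with P(p) = P(~p) = 0.6:
# the agent buys a $10-if-p ticket and a $10-if-not-p ticket, each for $5.50.
price = 5.50
stake = 10.00

def net(p_true):
    # Combined net payoff of both tickets in a world where p has truth value p_true.
    ticket_on_p = (stake if p_true else 0.0) - price        # wins iff p
    ticket_on_not_p = (stake if not p_true else 0.0) - price  # wins iff ~p
    return ticket_on_p + ticket_on_not_p

print(net(True), net(False))  # -1.0 -1.0: a sure loss either way
```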

But the thought that you'll be happy to pay \$5.50 for the ticket that wins \$10 if p is true can be questioned. The justification for the thought goes like this: You will value the \$10-if-p option at its expected value of (0.6)(\$10)=\$6, calculated with your probability assignment. Hence, you will be happy to buy the \$10-if-p option for any amount less than \$6. And ditto for ~p.

However, this is not the only way to think about the case. The question of whether to accept the first deal, namely to pay \$5.50 for the chance to win \$10 if p is true, can be thought of as the choice between the accept and reject moves in this game:

|        | p      | ~p      |
|--------|--------|---------|
| accept | \$4.50 | −\$5.50 |
| reject | \$0    | \$0     |
Now the natural way to evaluate the accept line is: (0.6)(\$4.50)+(0.6)(−\$5.50) = \$2.70−\$3.30 = −\$0.60, since you assign 0.6 to p and 0.6 to ~p. And of course the value of the reject line is (0.6)(\$0)+(0.6)(\$0)=\$0. So the reject move is the better one. And of course the same goes for the evaluation of the second deal offered by the bookie. So if you evaluate the choices according to the above method, you will in fact reject both of the bookie's deals.
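A minimal sketch of this table-based evaluation (the net payoffs are +\$4.50 if p, since you pay \$5.50 and collect \$10, and −\$5.50 if ~p, since you just lose the price):

```python
cred = {"p": 0.6, "not_p": 0.6}  # inconsistent: the two credences sum to 1.2

# Net payoff of each move in each outcome.
moves = {
    "accept": {"p": 4.50, "not_p": -5.50},
    "reject": {"p": 0.00, "not_p": 0.00},
}

def line_value(payoffs):
    # Value of a row of the table, computed with the agent's (inconsistent) credences.
    return sum(cred[w] * v for w, v in payoffs.items())

best = max(moves, key=lambda m: line_value(moves[m]))
for m in moves:
    print(m, round(line_value(moves[m]), 2))  # accept -0.6, reject 0.0
print(best)  # reject
```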

In fact, if you adopt the above way of calculating whether to accept a deal, then in the case where there is just one proposition whose truth or falsity is at issue, and you assign equal positive probabilities to its truth and to its falsity, you will come up with the very same decisions as the consistent decision theorist who assigns 0.5 to p and 0.5 to ~p. (Weighting both outcomes equally just rescales every expected value by a common positive factor, which never changes which option comes out on top.) Since the consistent decision theorist is not subject to a Dutch Book, neither are you.
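A quick check of that equivalence on a few arbitrary illustrative bets (the 0.6/0.6 weights rescale every expected value by 1.2 relative to 0.5/0.5, so the ranking of options is identical):

```python
def ev(payoffs, w):
    # payoffs = (value if p, value if ~p); w = the common weight on each outcome.
    return w * payoffs[0] + w * payoffs[1]

# Some arbitrary bets, given as (payoff if p, payoff if ~p).
bets = [(4.50, -5.50), (10.0, -2.0), (-1.0, 3.0), (0.0, 0.0)]

ranking_inconsistent = sorted(bets, key=lambda b: ev(b, 0.6))
ranking_consistent = sorted(bets, key=lambda b: ev(b, 0.5))
print(ranking_inconsistent == ranking_consistent)  # True: same decisions
```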

So what just happened? Well, what happened is that there are two ways of figuring out whether to pay \$5.50 for the ticket that wins \$10 if p is true. The standard way is to calculate the value of the ticket, using the obvious calculation (0.6)(\$10) = \$6, and then compare that to the \$5.50 price of the ticket. Basically, we are comparing two values: the value of the ticket and the value of a sure \$5.50. We are, further, assuming that the value of a sure \$5.50 is, well, \$5.50.

But the latter assumption can be questioned when the probabilities are inconsistent. For while you might say that the value of a sure \$5.50 is just (1.0)(\$5.50) = \$5.50, you might also break up that sure \$5.50 according to the two options at issue, namely p and ~p, and calculate its value as (0.6)(\$5.50)+(0.6)(\$5.50) = \$6.60. (Of course, that value looks wrong, but we shouldn't expect things to look right with inconsistent probabilities!) And now we ask whether it's worth giving up that sure \$5.50 for the ticket, and we will say that it's not, since the ticket's value is \$6 while the sure \$5.50 is worth \$6.60. This calculation is equivalent to the one implicit in the game-based calculation above.

Here's a more formal way to look at it. When you're evaluating the value of a betting portfolio B that has only finitely many values, a natural thing to do is to break up the sample space into a partition E1,...,En with the property that B takes a constant value V(B,Ei) on each of the Ei. Then the value of B is naturally given by the formula:

• V(B,E1)P(E1)+...+V(B,En)P(En).
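As a quick sketch of this formula (the `value` helper and the sample credences, with 0.6 on each of p and ~p and 1.0 on the sure event, are my own illustration): it values the sure \$5.50 from the example at \$5.50 or at \$6.60, depending on the partition used.

```python
# Credences over cells of the space {p, not_p}; a cell is a tuple of outcomes.
cred = {("p",): 0.6, ("not_p",): 0.6, ("p", "not_p"): 1.0}

def value(bet, partition):
    # V(B,E1)P(E1) + ... + V(B,En)P(En): bet maps each outcome to a payoff
    # and must take a constant value on every cell of the partition.
    total = 0.0
    for cell in partition:
        vals = {bet[w] for w in cell}
        assert len(vals) == 1, "bet is not constant on this cell"
        total += cred[cell] * vals.pop()
    return total

sure = {"p": 5.50, "not_p": 5.50}  # a sure $5.50

coarsest = [("p", "not_p")]   # one cell: the bet is constant everywhere
finer = [("p",), ("not_p",)]  # split by p vs ~p

print(round(value(sure, coarsest), 2))  # 5.5
print(round(value(sure, finer), 2))     # 6.6
```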
If the probabilities are consistent, then it doesn't matter which partition is chosen for the calculation, as long as the value of B is constant on each element of the partition. But when the probabilities are not consistent, then in general the value depends on the choice of partition. The standard calculation makes the following stipulation:
1. Let E1,...,En be the coarsest partition with the property that B takes a constant value on each Ei.
But that is not the only reasonable stipulation available. Here is another:
2. When comparing the values of bets B1,...,Bk, let E1,...,En be the coarsest partition with the property that each of the Bi takes a constant value on each of the Ej.
The second stipulation leads to results equivalent to those coming from thinking about things in terms of the table I gave earlier. This stipulation does mean that the comparative values of two bets will in general depend on what other bets they are being compared to, and hence we do not satisfy independence of irrelevant alternatives. But things like that shouldn't surprise us given that we're reasoning with inconsistent probabilities!
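The failure of independence of irrelevant alternatives can be made concrete with a three-outcome example. Everything below is a hypothetical illustration: the outcomes w1, w2, w3, the bets, and credences chosen so that the agent gives the cell {w1, w2} more weight than w1 and w2 get separately.

```python
# Inconsistent credences over cells of the space {w1, w2, w3}.
cred = {("w1",): 0.25, ("w2",): 0.25, ("w3",): 0.4, ("w1", "w2"): 0.7}

def value(bet, partition):
    # V(B,E1)P(E1) + ... + V(B,En)P(En): bet maps each outcome to a payoff
    # and must take a constant value on every cell of the partition.
    total = 0.0
    for cell in partition:
        vals = {bet[w] for w in cell}
        assert len(vals) == 1, "bet is not constant on this cell"
        total += cred[cell] * vals.pop()
    return total

B1 = {"w1": 10.0, "w2": 10.0, "w3": 0.0}  # pays $10 on w1 or w2
S = {"w1": 6.0, "w2": 6.0, "w3": 6.0}     # a sure $6
B2 = {"w1": 10.0, "w2": 0.0, "w3": 0.0}   # pays $10 on w1 only

# Comparing B1 and S alone: the coarsest common partition keeps w1 and w2 together.
coarse = [("w1", "w2"), ("w3",)]
print(round(value(B1, coarse), 2), round(value(S, coarse), 2))  # 7.0 6.6 -> B1 wins

# Adding B2 to the comparison forces w1 apart from w2, and the ranking flips.
fine = [("w1",), ("w2",), ("w3",)]
print(round(value(B1, fine), 2), round(value(S, fine), 2))      # 5.0 5.4 -> S wins
```

So whether B1 or the sure \$6 looks better depends on whether the "irrelevant" bet B2 is also on the table, exactly the dependence described above.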

Moreover, the above is not a complete get-out-of-Dutch-Book card for inconsistent reasoners. There will still be probability assignments that are subject to Dutch Books. But it will not be the case that every inconsistent assignment is subject to a Dutch Book.

Further, there is an interesting practical question. We have good reason to think that real agents have inconsistent probabilities. When they make decisions on the basis of inconsistent probabilities, we can ask: What should they do, given that their probabilities are inconsistent? Should they decide using the standard method that partitions the sample space according to rule (1) or should they partition it via rule (2)? There is some reason to think that rule (2) is actually the better one for inconsistent reasoners—after all, it less often leads to Dutch Books!