Suppose Alice has an inconsistent probabilistic assignment PA. Then, famously, there is a series of bets on single propositions (call these binary bets) that is a Dutch Book against Alice: i.e., Alice by her lights will accept each bet, and is guaranteed to lose money.
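To make the mechanics concrete, here is a minimal sketch of the simplest such book (my own illustration, with made-up numbers): if Alice's credences in p and its negation sum to more than 1, a bookie who knows those credences can sell her a ticket on each side and pocket the difference.

```python
# A minimal sketch of the classic binary-bet Dutch Book (illustrative numbers).
# Suppose Alice's credences violate additivity: P(p) + P(not-p) > 1.
# The bookie sells her a $1 ticket on p priced at P(p) and a $1 ticket on
# not-p priced at P(not-p); by her lights each is an acceptable deal, so she
# takes both. Exactly one ticket pays out, whatever the world is like.

def guaranteed_payoff(cred_p: float, cred_not_p: float) -> float:
    """Alice's net payoff from buying both tickets at her credences."""
    total_paid = cred_p + cred_not_p  # cost of the two tickets
    total_won = 1.0                   # exactly one of p, not-p is true
    return total_won - total_paid     # negative whenever the credences sum past 1

# With P(p) = 0.7 and P(not-p) = 0.5, Alice loses 0.2 however things turn out.
print(guaranteed_payoff(0.7, 0.5))  # about -0.2
```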
But now suppose Bob has a probabilistic assignment PB—perhaps a consistent one—that is strictly further from the truth than Alice’s inconsistent one in the sense that
for any p, if p is false, then PB(p) ≥ PA(p),
for any p, if p is true, then PB(p) ≤ PA(p), and
at least one of the inequalities is strict.
Then Alice will do at least as well as Bob on every portfolio of offers of binary bets, and on some portfolios she will do strictly better than Bob. The reason is that an agent accepts a binary bet exactly when the price looks favorable by her credence in the proposition, so credences closer to the truth can only add acceptances of winning bets and drop acceptances of losing ones. In particular, even if Bob's probabilistic assignment is consistent, and there is a binary-bet Dutch Book against Alice, Bob will fare no better than Alice with respect to that book.
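Here is a small simulation of that dominance claim; the propositions, credences, and the buy/sell portfolio model are all my own illustrative assumptions, not part of the argument above.

```python
import random

random.seed(0)

# Illustrative setup: p is true, not-p false, r true.
truth = {"p": True, "not_p": False, "r": True}
# Alice is inconsistent (her credences in p and not-p sum to 1.2) but is
# pointwise at least as close to the truth as Bob on every proposition.
alice = {"p": 0.9, "not_p": 0.3, "r": 0.8}
# Bob is consistent (0.7 + 0.3 = 1) yet strictly further from the truth.
bob = {"p": 0.7, "not_p": 0.3, "r": 0.6}

def payoff(cred, offers):
    """Net payoff on a portfolio of binary bet offers.

    Each offer is (proposition, price, side): 'buy' means pay `price` now
    for $1 if the proposition is true; 'sell' means collect `price` now
    and pay $1 if it is true. The agent accepts exactly the offers that
    look favorable by her own credences.
    """
    total = 0.0
    for prop, price, side in offers:
        worth = 1.0 if truth[prop] else 0.0
        if side == "buy" and cred[prop] > price:
            total += worth - price
        elif side == "sell" and cred[prop] < price:
            total += price - worth
    return total

for _ in range(10_000):
    offers = [(random.choice(list(truth)), random.random(),
               random.choice(["buy", "sell"]))
              for _ in range(5)]
    # Small tolerance only for floating-point rounding in the sums.
    assert payoff(alice, offers) >= payoff(bob, offers) - 1e-9
print("Alice matched or beat Bob on every sampled portfolio.")
```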
Thus, if we start with a consistent assignment and then by some process move towards the truth, we will do better (against binary-bet portfolios) even if we lose consistency.
So why is Alice’s probabilistic assignment supposed to be rationally bad in a way that Bob’s isn’t? Well, the difference is this. A bookie can fleece Alice simply on the basis of knowing Alice’s probability assignment. But merely knowing Bob’s probability assignment won’t be enough to know which portfolio will fleece him.
However, the more I think about this, the more I lose the intuition that all this shows there is something particularly rationally problematic about Alice’s assignments just because they are inconsistent. Why should game-theoretic performance against a competitor who knows one’s credences be particularly indicative of rationality or the lack thereof? When nature offers us betting portfolios (whether to pursue this trail or that after a wounded deer in the woods, say), these portfolios are normally independent of our credences. Of course, in business and war, we have to worry about mind-reading competitors. But for much of our life, we don’t.
Suppose I find myself with inconsistent credences. What should I do? Should I force them to be consistent? If I am dealing with mind-reading competitors who have no more information about the external world than I do, then I should go for consistency. But going for consistency will force me to modify some of my probabilities, and for all I know, these probabilities may get modified away from the truth. And that might be more harmful.
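A toy arithmetic illustration of that worry (the numbers are made up, and rescaling is just one hypothetical way of restoring consistency): repairing an inconsistent pair of credences can lower my credence in a truth.

```python
# Hypothetical numbers: p is in fact true, and my credences are inconsistent.
p, not_p = 0.9, 0.3           # P(p) + P(not-p) = 1.2
total = p + not_p
# One natural repair: rescale the pair so it sums to 1.
p_fixed, not_p_fixed = p / total, not_p / total
print(p_fixed, not_p_fixed)   # 0.75 0.25
# Consistency is restored, but my credence in the true proposition p fell
# from 0.9 to 0.75: the repair moved me away from the truth on p.
```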
There may be interesting trade-offs. Maybe some intellectual strategies work better against mind-reading competitors, and others work better with the portfolios set by nature. We should not take doing well against one way of selecting portfolios to be particularly informative about the nature of rationality.
1 comment:
I think the worst way to approach Dutch book arguments is as a pragmatic rationale for probabilistically consistent credences in the event that you face some sly Dutch bookies.
Rather, I take them to be ways of making a certain logical point. It seems intuitive that beliefs (or: a set of propositions) have a certain logical flaw if they are inconsistent. What flaw? Well, they can't all be true at once. There are various ways to demonstrate this.
When you stop dealing in T/F and start dealing in [0.0..1.0], you might ask what the analog to logical consistency is. Answer: probabilistic consistency. Absence of this feature in a set of beliefs is a logical flaw. What flaw? They can't be the correct probability assignments. We can *illustrate* this flaw by a Dutch book argument, but I take it this is merely a pedagogical device.
For us imperfect reasoners, logically inconsistent or probabilistically inconsistent beliefs might be the best we can do at some points. The best path from our present state of ignorance to the truth might go through some inconsistent points. That doesn't mean inconsistency isn't a logical flaw, though.