Recently, I found myself puzzled by the difficulty in applying “classical” evidential decision theory to a perfectly rational agent. The problem was that the rational agent decides whether to do A or B based on a comparison between the conditional expectations E(U|A) and E(U|B) of the utility function U. But supposing that in fact E(U|A) > E(U|B), the perfectly rational agent has no chance of doing B, so P(B) = 0, and hence E(U|B) is undefined.
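To spell out the problem in ratio terms (one standard way of defining conditional probability), for a discrete utility U we have P(U = u | B) = P(U = u and B)/P(B). When P(B) = 0, this ratio is 0/0, and so E(U|B), the sum of u·P(U = u | B) over the possible values u of U, has no defined value.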
But then I thought this isn’t a big deal, because we aren’t perfectly rational agents, so we always have a chance of screwing up and hence P(B) > 0 even if E(U|B) is much less than E(U|A).
I am not entirely satisfied with this. After all, you might think: “I may be pretty imperfect, but if I am choosing between a donut D and a year of torture T, I have zero chance of choosing the year of torture. But then E(U|T) is undefined, so how am I being rational in this choice?” Maybe that’s a good objection, maybe not.
But here is another reason why the “We’re imperfect” solution isn’t completely ideal. We want to say that Good’s Theorem tells us something important about rationality—namely, that more information leads rational agents to make better decisions. Good’s Theorem is usually interpreted as saying that, under some independence conditions, the expected value of a perfectly rational choice made with more information is no less than that of a perfectly rational choice made with less information. Notice that this is obviously false for an imperfectly rational agent, who can, for instance, misuse the extra information and end up with a worse choice. Thus, we have to make sense of what a perfectly rational agent would choose in order to make sense of the standard interpretation of Good’s Theorem. Moreover, in the setting of Good’s Theorem, the perfectly rational agent has to be choosing based on expected utilities—and that’s precisely what generates the zero-probability-conditioning problem.
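The positive content of the theorem is easy to illustrate numerically. Here is a minimal sketch with made-up utilities: an expected-utility maximizer facing two equally likely states does at least as well, in expectation, if it gets to observe the state before choosing. (All the numbers and names here are hypothetical, chosen only to make the comparison concrete.)

```python
# Hypothetical two-state, two-action decision problem illustrating
# Good's Theorem: observing the state before choosing cannot lower
# the expected utility of a perfectly rational choice.

# Two equally likely states of the world (assumed prior).
prior = {"s1": 0.5, "s2": 0.5}

# Utility of each action in each state (made-up values).
utility = {
    "A": {"s1": 10, "s2": 0},
    "B": {"s1": 0, "s2": 8},
}

def expected_utility(action, dist):
    """Expected utility of an action under a probability distribution."""
    return sum(p * utility[action][s] for s, p in dist.items())

# Uninformed agent: picks the one action maximizing prior expected utility.
uninformed = max(utility, key=lambda a: expected_utility(a, prior))
v_uninformed = expected_utility(uninformed, prior)

# Informed agent: observes the state, then picks the best action in it.
v_informed = sum(p * max(utility[a][s] for a in utility)
                 for s, p in prior.items())

print(v_uninformed)  # 5.0 (chooses A regardless of state)
print(v_informed)    # 9.0 (A in s1, B in s2)
assert v_informed >= v_uninformed
```

Note that the informed agent's choice rule is still “maximize expected utility,” just conditional on the observation—which is exactly why the theorem presupposes an agent who is certain to maximize.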
Now, the Theorem is still true as an abstract bit of mathematics. But the application is difficult if we can’t make sense of a perfectly rational agent who is certain to maximize expected utility.
Likely we can extend Good’s Theorem to talk about the limiting case of imperfect agents getting more and more perfect. But it would be nice if we didn’t have to.