Comments on Alexander Pruss's Blog: Decision theory and compatibilism

[2013-11-03 18:57] There's a trick for getting around this problem, provided that you have the power to pre-commit to respond in a particular way to a pre-specified condition. (For example, you would have this power if you were a robot who could edit its own source code.)<br /><br />Here is the trick: simply pre-commit to the following: "If the angel tells me that I will do <i>A</i>, then do <i>B</i>, and vice versa."<br /><br />The effect of this is that the angel simply cannot tell you what you will do. Whatever he told you, you would do the opposite, so a contradiction would result. It would be like asking him, "Will you say 'no' in reply to this question?" He could not give a yes-or-no answer without falsifying that very answer.<br /><br />Thus, if you make the pre-commitment above, you can ask him for whatever information you want without fear of having your power of choice taken away, because none of the information he gives you can contain a prediction of what you will do.<br /><br />I learned this idea from decision theorists associated with the <a href="http://intelligence.org/" rel="nofollow">Machine Intelligence Research Institute</a>.
They sometimes call this trick "playing chicken with the universe".<br /><br />I'm not sure what happens if you are worried about learning what you will do with merely high probability, rather than with certainty. — <a href="https://www.blogger.com/profile/03742116091097551615" rel="nofollow">Tyrrell McAllister</a>

[2013-11-01 09:24] Or I could stick to my guns in the original story. Sure, it *might* be that the angel can't give me all the answers. But suppose it can. Then we still have the problem of my post: to act rationally, we need to stop our request for information short of asking for the complete state of the universe and the laws (or, if the angel is indeterministic, the complete state of the universe after the answer). — <a href="https://www.blogger.com/profile/05989277655934827117" rel="nofollow">Alexander R Pruss</a>

[2013-11-01 09:22] That's helpful and does adversely impact the argument.<br /><br />Here's a variant that may escape this. The angel is himself deterministic. And he announces to me not what I will do (for that would be problematic for these reasons) but what the present state of the universe is and what the laws are. Call this knowledge K.<br /><br />Suppose that in fact I *won't* make the extremely complicated calculation of U(A,K) and U(B,K) on the basis of K. There are two cases here, and maybe they need to be considered separately: I won't because I can't (the realistic case), and I can but I won't.<br /><br />In any case, standard decision theory says that it is rational to choose A iff U(A,K) > U(B,K), regardless of whether I know what these two utilities are.
You don't get off the hook for irrationality just because you didn't do the calculation.<br /><br />But it isn't true that U(A,K) > U(B,K), it isn't true that U(A,K) < U(B,K), and it isn't true that U(A,K) = U(B,K), because one of these two quantities is undefined. So no choice is rational under these circumstances. Which is odd. — <a href="https://www.blogger.com/profile/05989277655934827117" rel="nofollow">Alexander R Pruss</a>

[2013-11-01 08:53] Hilary Bok addresses this sort of issue with her example of the Pocket Oracle. She argues that, in principle, you could not get information about what you deterministically will do (I'm simplifying a bit), since the act of giving you the information changes the present state of affairs and therefore may change the implications for the future.<br /><br />Suppose the closing price of AAPL depends on whether you buy it, and you are making that decision. You ask the angel, "What will AAPL close at tomorrow?" In a deterministic universe, the angel can only answer this question by looking at its present state, which includes your ignorance. But if he alters that state (i.e., alters your beliefs about AAPL's closing price by giving you an answer), he may very well falsify his statement. In short, there may be (and quite likely is) no way for the angel to give you true information about states which depend on your decisions. — <a href="https://www.blogger.com/profile/13535886546816778688" rel="nofollow">Heath White</a>
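The shared structure of the "playing chicken with the universe" pre-commitment and Bok's Pocket Oracle can be put in fixed-point terms: a deterministic angel can only truthfully announce a prediction that remains true after being announced. A minimal toy sketch of this (two actions, illustrative function names; not taken from any of the comments above):

```python
# Toy model of a self-falsifying prediction. A truthful announcement
# must be a fixed point: the agent, upon hearing prediction p, does p.

def precommitted_agent(prediction: str) -> str:
    """McAllister's pre-commitment: do the opposite of whatever the
    angel announces ("If told A, do B, and vice versa")."""
    return "B" if prediction == "A" else "A"

def docile_agent(prediction: str) -> str:
    """An agent with no such pre-commitment: always does A."""
    return "A"

def consistent_predictions(agent) -> list[str]:
    """Announcements p that stay true once made, i.e. agent(p) == p."""
    return [p for p in ("A", "B") if agent(p) == p]

print(consistent_predictions(precommitted_agent))  # -> []
print(consistent_predictions(docile_agent))        # -> ['A']
```

The empty list for the pre-committed agent is the point of the trick: no announcement is self-consistent, so the angel cannot tell that agent what it will do. The docile agent shows the contrast case, where a truthful announcement exists.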