Comment on Alexander Pruss's blog post "A fallacy of probabilistic reasoning with an application to sceptical theism" (2015-02-18):

Hi Alex,
With regard to the Godot case, I'd like to consider an alternative:

Let's say that Godot makes the choice first, and he chooses to play the dime game (DG). Let H1 be the hypothesis that Godot is an omniscient self-interested agent who chose DG.
Let's say that Bob reckons Pr(H1) = 1.
In this case, it's clear that Bob should play DG (assuming throughout that Bob is self-interested in this context, and that he does not revise his probabilistic assessment of H1).
So it seems that, as long as Bob holds that he should play the quarter game (QG), he shouldn't assign probability 1 to H1.

Let's say now that Bob only holds that Pr(H1) > 0.5, without assigning it a specific value. Which game should Bob play?
From the opposite direction, let's say Bob holds that he should play the QG. What is the range of values of Pr(H1) rationally compatible with that?

That aside, I'm not sure what "expected utility" means in your interpretation of (5).

In (3), going by your assessments, "utility" seems to mean "value" (or at least be equivalent to it) in the monetary sense.
So the expected utility of a game is the probability of winning times the money one gets if one wins, $1000. In your second example, EU(A) = $1000 * 1/6, or about $167 as you say, and EU(B) = $1 * 1 = $1.

However, in (5), the same interpretation does not work. I guess it's about moral value, so it would be something like expected moral value, or EMV. But I haven't been able to find a plausible definition of EMV that works in (5). That's not because of the difficulty of assigning specific numbers to moral value. Even if we could (say, MV(E) = -8331, where E is an evil, so its moral value is negative), I would have trouble understanding (5) in terms of the EMV of preventing vs. not preventing E.
What would we be multiplying here (even hypothetically)?
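The expected-utility arithmetic above can be sketched in a few lines (a minimal illustration, using the payoffs and probabilities from the comment's own example; the function name is mine):

```python
# Expected utility as probability of winning times monetary payoff,
# following the comment's reading of "utility" in (3).
def expected_utility(p_win, payoff):
    return p_win * payoff

# Game A: 1/6 chance of winning $1000; Game B: a certain $1.
eu_a = expected_utility(1 / 6, 1000)  # about $166.67
eu_b = expected_utility(1.0, 1)       # exactly $1

print(f"EU(A) = ${eu_a:.2f}, EU(B) = ${eu_b:.2f}")
```

The difficulty raised about (5) is precisely that no analogous pair of factors (a probability and a "moral payoff") is on offer, so there is nothing for this multiplication to operate on.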
Angra Mainyu (https://www.blogger.com/profile/16342860692268708455)