Comments on Alexander Pruss's Blog: Valuations and credences

I suppose this makes sense if one is willing to say that a sufficiently practically irrational agent doesn't have credences. But I don't want to say that: I think one could be arbitrarily practically irrational and still have credences.

Alexander R Pruss, 2021-03-23
I'm not sure I follow this.

An agent's preferences can be summarized by utilities and credences only if they conform to a suitable model. For the textbook rational agent, the model is expected utility based on consistent credences.

But for irrational agents, the possible models are endless. For example, if an irrational agent's preferences are intransitive, they cannot be represented by utilities at all, which rules out any sort of utility-and-credence representation. Even for agents whose preferences can be represented by utilities, there is still room for lots of weirdness: an irrational agent's preferences may bear no relation to their epistemic credences. (This may seem irrational, but that's kinda the point…)

But there are much 'tamer' types of irrationality. Inconsistent credences + Level Set Integral, Split version, is one such. For this model, all the relevant credences can be defined by the valuations of suitable test gambles. (E.g. for a coin flip, the previsions of two 'single proposition' gambles would suffice, the first paying 1 utile on Heads and the second paying 1 utile on Tails. For cases with more possible outcomes, more test gambles would be required, one for each possible combination of outcomes.)

The moral: there cannot be a 'more sophisticated story' that works in general, because weird irrational systems of preferences are endless. But for some simple sorts of irrationality, the previsions of suitably selected 'single proposition' gambles can determine the implicit model credences.

A note on units: I had understood that utility, where it exists, strictly has no units other than arbitrarily scaled 'utiles'. One utile might represent the utility of X dollars, Y yen, or Z hours of rock climbing, but utility does not have units of dollars, etc. Further, it need not be linear in dollars, etc.

IanS, 2021-03-20
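The test-gamble idea above can be sketched in code. This is a minimal illustration, not anything from the comments themselves: it assumes the agent's prevision (fair price) for a gamble paying 1 utile on an event equals their implicit credence in that event, and it models the agent's prevision as a simple credence-weighted sum for concreteness. The agent and its numbers are hypothetical; note the elicited credences need not sum to 1.

```python
# Illustrative sketch: recover implicit credences from previsions of
# "single proposition" test gambles, one per event. The key assumption
# is prevision(gamble paying 1 utile on E) = credence(E).

def elicit_credences(prevision, events):
    # Credence in each event = prevision of the gamble paying 1 utile on it.
    return {e: prevision({e: 1.0}) for e in events}

def coin_agent_prevision(gamble):
    # Hypothetical agent whose implicit credences are inconsistent
    # (they sum to 1.15), modelled as a credence-weighted sum.
    implicit = {"Heads": 0.6, "Tails": 0.55}
    return sum(p * gamble.get(e, 0.0) for e, p in implicit.items())

credences = elicit_credences(coin_agent_prevision, ["Heads", "Tails"])
print(credences)  # {'Heads': 0.6, 'Tails': 0.55}
```

The two test gambles recover exactly the agent's implicit (inconsistent) credences, which is the point of the comment: for tame enough irrationality, suitably chosen gambles pin down the model's credences.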