One of the things I’ve learned from the St Petersburg Paradox and Pascal’s Wager is that we are rationally required to have attitudes to risk that significantly discount tiny chances of benefits, rather than to maximize expected utility. This requirement is rational because failure to have such attitudes to risk makes one subject to two-person diachronic Dutch Books. But it is also clearly irrational to significantly discount large chances of benefits.
But where are the lines to be drawn? Maybe it’s not worth enduring an hour of sitting on an uncomfortable chair for a 1/10^1000 chance of any finite length of bliss, but enduring an hour of sitting in such a chair for a 45% chance of 1000 years of bliss is worthwhile. As long as we thought the decisions were to be made on the basis of expected utility, we could have said that the lines are to be non-arbitrarily drawn by multiplying probabilities and utilities. But that fails.
It is possible, I suppose, that there is a metaphysically necessary principle of rationality that says where the line of the negligibility of chances is to be drawn. Perhaps an hour in the uncomfortable chair for a 1/10^1000 chance of a finite benefit cannot possibly be worthwhile, but for a 1/10^6 chance of a large enough finite benefit it is worth it, and there is a cut-off precisely at π ⋅ 10^-9. But the existence of any such metaphysically necessary cut-off is just as implausible as it is to think that the constants in the laws of nature are metaphysically necessary.
(Vagueness is of no help. For even if the cut-off is vague, the shape—vague or exact—of the vagueness profile of the cut-off will still look metaphysically contingent.)
One could leave it to the individual. Perhaps rationality requires each individual to have a cut-off, but where the cut-off lies is up to the individual. But rationality also places constraints on that cut-off: the person who is unwilling to sit in an uncomfortable chair for an hour for a 45% chance of 1000 years of bliss is irrational. (I deliberately made it 45%, so that the cut-off isn’t at 1/2, which would have been satisfyingly non-arbitrary.) And where the constraints on the cut-off lie is itself something to be explained, and again it is implausible that this is metaphysically necessary.
In morals, we also have similar cut-off phenomena. It is morally wrong to put someone in prison for life for stealing an ordinary book, while a week of community service is morally permissible. Whence the cut-off? The problem in both cases comes from two features of the situation:
1. We have a parameter that seems to have a normative force independent of our minds.
2. That parameter appears to be contingent.
Utilitarianism provides an elegant answer, but no analogue of that answer seems to apply in the rationality/risk case. Kantianism is out of luck. Divine command theory provides an answer, but one whose analogue in the case of rationality is quite implausible: what needs explaining is that it is irrational, and not merely forbidden by God, to be unwilling to sit in the uncomfortable chair for the 45% chance of the great benefit.
Natural Law, on the other hand, provides a framework for both the moral and the rational cases by saying that the parameter necessarily comes from our nature. Our nature is independent of our minds, and hence we do justice to (1). But while it is presumably not a contingent fact that we have the nature we do, it is a contingent fact that the persons that inhabit the world have the natures they do. Humans couldn’t have had normative risk or moral parameters other than the ones they do, but there could easily have existed non-humans somewhat similar to us who did. The explanation is parallel to the Kripkean explanation of the seeming arbitrariness of water having two hydrogen atoms. Water couldn’t have had a different number of hydrogen atoms, but something similar to water could have had.
More and more, I think something like Natural Law is a powerful framework in normative areas outside of what is normally construed to be moral theory: in decision theory and epistemology. (I hedge with the “normally construed”, because I happen to think that both decision theory and epistemology are branches of moral theory.)
I think you are dismissing expected utility too quickly.
Expected utility requires the possibility of measuring value with numbers. There is necessarily going to be something arbitrary about that, just as a numerical measure of temperature is arbitrary. Nonetheless, we can generate a continuous scale of value with the stipulation:
Utility(X) is said to be exactly double Utility(Y) when we are indifferent between a 50% chance of X and a certainty of Y.
But given this definition, for exactly the reasons you have been giving, a utility of a googolplex times the utility of Y cannot exist. In order to have a utility Z which is 4 times Y, we need to equally value Y and a 25% chance of Z. And so on. Given a Y which is some reasonable finite good, there will be nothing that satisfies the stipulation needed for sufficiently large utilities. Given that we generate our numerical scale of value in the above way, numerically astronomical utilities simply cannot exist.
Several things follow from this. Utility has to be bounded, in the sense that our numerical scale has an upper limit. That does not mean that you can reach that limit and then nothing else is considered beneficial. It means it is a limit which you might approach continuously; e.g. we could set the limit of utility to 1,000,000. It would be impossible to achieve that exact utility, but increasingly large benefits would be increasingly close to it.
Second, values are not multiplied directly with the multiplication of good things. E.g. 10,000 years of bliss is not 10 times as valuable as 1,000 years of bliss, but the coefficient is somewhat less than 10. This is actually reasonable: 10% chance of the 10,000 years is not in fact exactly equal to a certainty of the 1,000, but somewhat less in value because of the risk. This is what generates the result that sufficiently low probabilities are not worth anything.
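To make that concrete, here is a minimal numerical sketch. The saturating functional form and the constants are my own illustrative assumptions, not anything the argument commits us to; any bounded, increasing assignment of values would make the same two points.

```python
import math

# Illustrative bounded utility of y years of bliss, saturating at B.
# The form U(y) = B * (1 - exp(-y / TAU)) and both constants are
# assumptions chosen only for illustration.
B = 1_000_000     # upper bound of the numerical scale
TAU = 5_000       # how quickly the scale saturates, in years

def utility(years):
    """Bounded, strictly increasing value of a stretch of bliss."""
    return B * (1 - math.exp(-years / TAU))

# 10,000 years is worth less than 10 times 1,000 years:
print(utility(10_000) / utility(1_000))   # roughly 4.8, not 10

# An astronomically small chance of any finite good contributes almost
# nothing, because no finite good can be worth more than B:
p = 1e-300
print(p * B)   # 1e-294, negligible next to utility(1), which is about 200
```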
Yes, taking boundedness of utilities is another way out. Note, however, that this leads to the same big picture issue: just where do the bounds lie? If 10,000 years of bliss is not 10 times as valuable as 1,000 years of bliss, what determines the actual multiplier?
I think our intuitions are misled by thoughts about boredom and the like. Suppose boredom is not a part of the equation. (Maybe you are one of the people who don't get bored with what you enjoy.)
Also, I think that even if the coefficient isn't 10, it is implausible that eventually the increments go to zero.
Here's an argument. It is very plausible that *pain*, absent habituation effects, is at least additive: 10,000 years of pain is at least 10 times as bad as 1,000 years of pain.
Let's say you've just had N years of bliss, where N is at least 1. Now, you're offered another N for the cost of a minute of mild pain. Of course, it's worth getting that, no matter how large N already is. (In fact, it's a better deal if N is larger.) It's always worth doubling your years of bliss for the cost of a minute of mild pain.
But this shows that 2^N years of bliss is worth paying for with N minutes of pain. Since N minutes of pain is at least N times as bad as a minute of pain, 2^N years of bliss must go to infinity in value.
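Here is the bookkeeping, as a sketch; "pain-minutes" is just a label for the disvalue of one minute of mild pain, and the only substantive premises are the ones in the argument above (each doubling is worth a minute of mild pain, and pain is at least additive).

```python
# Start with 1 year of bliss. By the argument above, each doubling of the
# years is worth at least one more minute of mild pain, and pain is at
# least additive, so after n doublings the extra bliss must be worth more
# than n pain-minutes. That lower bound grows without limit.
def doubling_ledger(n_doublings):
    years_of_bliss = 2 ** n_doublings
    pain_minutes_paid = n_doublings
    return years_of_bliss, pain_minutes_paid

for n in (10, 100, 1000):
    years, minutes = doubling_ledger(n)
    print(f"{n} doublings: a {len(str(years))}-digit number of years of bliss, "
          f"worth more than {minutes} pain-minutes")
```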
Alex,
I'm not sure what you mean by "two-person diachronic Dutch book argument" (due to my lack of familiarity with this area) but I would think that if you could specify the DBA with precision, then that would yield the rational answer: the rational cutoff lies wherever the Dutch book argument no longer works.
Incidentally, shortly after I read your earlier post on St. Petersburg, I ran across a post where a guy was literally denying the Archimedean principle in a very practical context:
http://www.philosophicaleconomics.com/2017/04/diversification-adaptation-and-stock-market-valuation/
Search for "I sat down and worked out my own price-allocation mapping"
Alex,
Leaving aside other issues, it seems to me that what's rational depends on what sort of mind we have, because it depends on our preferences.
It may be impossible for a human to have certain parameters, but that seems to be because of the sort of mind humans have. If it's about natures, it's about the nature of our minds, it seems to me.
Take, for example, the case of someone who is unwilling to sit in an uncomfortable chair for an hour for a 45% chance of 1000 years of bliss. You say that's irrational for any human. I think there is an issue with how "bliss" is defined, but leaving that aside and granting it's irrational for any humans, let's say that there is some agent E for whom it's not irrational. It seems clear to me that E must have a different mind from the mind of any human. If there is a human H1 such that the mind of H1 and the mind of E are identical (not the same person, but the same memories, preference structure, etc.), then it would be irrational of E to be unwilling to do so, just as it would be irrational of H1.
What seems to be happening is that the range of possible human minds is bounded (beyond some fuzzy boundary, the agent would not be human), and so that bounds the range of possible rational choices.
It is not so clear that your switching paradox requires you to reject Archimedeanism. You could also evade the paradox by rejecting conditionalization. To add to my comment on your earlier post (Fun with St Petersburg, Apr 28):
de Finetti took the view that decision theory tells you which bets and conditional bets you should accept in advance, knowing the setup. Suppose the setup is explained to you in advance: you will receive initial St Petersburg winnings, then be invited to swap them for independent St Petersburg winnings, less a penalty. In effect, you get to choose a strategy, anything from ‘never accept’, which has zero expected value, through ‘accept on one of a specified finite number of initial winnings’ (infinite expected value), to ‘always accept’ (indeterminate expected value). [Note: the values given are the expected values of the swap, not of the total payout.] Even granted Archimedeanism, you are not rationally obliged to choose ‘always accept’, with its indeterminate value.
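A rough Monte Carlo sketch of this setup, for concreteness. The $1 penalty and the particular threshold policy are my own illustrative choices; and since for the non-trivial policies the relevant expectations are infinite or indeterminate, the printed sample averages do not settle down and are only suggestive.

```python
import random

def st_petersburg():
    """One St Petersburg payout: toss until heads; heads on toss k pays 2^k."""
    k = 1
    while random.random() < 0.5:
        k += 1
    return 2 ** k

def swap_gain(policy, penalty=1):
    """Realized gain of the swap under a policy fixed in advance.

    policy(initial) says whether to accept the swap on that initial payout;
    accepting trades the initial payout for an independent payout minus the penalty.
    """
    initial = st_petersburg()
    if policy(initial):
        return st_petersburg() - penalty - initial
    return 0

policies = {
    "never accept": lambda w: False,
    "accept only on small initial winnings": lambda w: w <= 2 ** 10,
    "always accept": lambda w: True,
}
for name, policy in policies.items():
    gains = [swap_gain(policy) for _ in range(100_000)]
    print(name, sum(gains) / len(gains))
```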
Of course, ‘in advance’ decision theory is not much use in real life, where choices are often forced on us ‘out of the blue’. I think this is as it should be. The further from textbook gambling scenarios, where the rules are known in advance, the less relevant is decision theory.
Ian:
ReplyDeleteThat's a good solution. But then why keep to the policy? It seems that you have an infinite expected utility for breaking the policy.
De Finetti’s line is that you choose a policy in advance and stick with it. In a slogan: ‘Don’t conditionalize’. You can choose in advance a conditional policy (e.g. ‘accept the swap on initial winnings less than N’), but you don’t update it in the light of your initial winnings.
Note that in ‘normal’ setups the optimal ‘in advance’ policy is in fact to maximize conditional expectation. But in weird setups (infinite expectations, non-conglomerability) it need not be. As de Finetti saw it, the general agreement is reassuring, but where the two approaches differ, it is the ‘in advance’ approach that is basic.
How might you justify this? One line: your initial winnings tell you nothing new about the setup, only that something that could have happened did. So they can give you no reason to change your plan. Another line: if you are thinking of many repetitions and the laws of large numbers, it is the whole setup that will be repeated, not the setup with particular initial winnings.
My argument depends on agreeing that assigning numbers to values is something that is done basically by stipulation.
To illustrate this, "twice as hot" does not have a natural meaning, because heat is a quality, not a quantity. Nonetheless some things are hotter than others. And for the sake of making the matter more tractable, we can assign values to heats, saying, "This temperature is assigned the value of 50, and this other, hotter one, the value of 100." And once we have completely matched heats to values, we can then say, "According to this scale, this particular heat is twice as hot as that other."
In the same way, some good things are better than others. But goodness is not a quantity, so "twice as good" does not have a natural meaning. We can create a scale by stipulation, however, with these axioms or similar ones:
1. The value of some particular good at some particular time for me is set equal to one unit of value. Of course in order to get a real scale, we would actually have to choose something: e.g. the value of writing this comment at the moment.
2. The value of A is greater than the value of B if A is better than B.
3. The value of A is twice the value of B if a 50% chance of A is exactly as good as a certainty of B.
Once we have done this, we can assign exact values to all goods, as long as we can always compare A and B as better, worse, or equal, including chances in the comparison (e.g. which is better, a 30% chance of having a cancer operation and surviving, or a 25% chance of surviving but without needing the operation?)
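One way to operationalize stipulations 1-3, as a sketch. The `indifference_probability` oracle is hypothetical: it stands in for the agent's own judgments about which chance of a good is exactly as good as a certainty of the reference good, and stipulation 3 is applied in a natural generalized form (value = 1/p rather than only the 50% case), which goes slightly beyond what is stated above.

```python
def assign_value(good, indifference_probability, reference_value=1.0):
    """Assign a number to `good` from comparative judgments alone.

    indifference_probability(good) is a hypothetical oracle returning the
    probability p such that a p chance of `good` is exactly as good as a
    certainty of the reference good of stipulation 1. Generalizing
    stipulation 3, we set Value(good) = reference_value / p, so p = 0.5
    gives exactly twice the reference value.
    """
    p = indifference_probability(good)
    if not 0 < p <= 1:
        # For 'too large' goods no such p may exist; this is where the
        # boundedness claim in the next paragraph gets its grip.
        raise ValueError("no indifference probability exists for this good")
    return reference_value / p

# Example: if a 25% chance of a week of vacation is judged exactly as good
# as a certainty of the reference good, the week gets value 4.
print(assign_value("a week of vacation", lambda g: 0.25))
```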
Then, I am saying that if we do generate our scale of value using such stipulations, in order to be reasonably realistic, the scale simply must be bounded. That does not mean we could not generate an unbounded scale using other stipulations; for example we could take our bounded scale and project it onto the real line. But then the 3rd stipulation above would be false, in that unbounded scale of value.
It works like this: we know that astronomically low probabilities of finite goods are worthless. But it follows from stipulation 3 that if the numerical assessment of a finite good is unbounded, then the value of even an astronomically low probability of a finite good can be very high. Therefore the numerical assessment of a finite good must be bounded.
"Also, I think that even if the coefficient isn't 10, it is implausible that eventually the increments go to zero."
I agree. Suppose the bound is 100,000. This does not mean that after you get a million years of bliss, additional years are worthless. It means that the value of a million years will be something like 99,999.99, while the value of a billion years might be 99,999.999, and so on. You won't actually reach the bound.
"It is very plausible that *pain*, absent habituation effects, is at least additive: 10,000 years of pain is at least 10 times as bad as 1,000 years of pain."
The problem with this is that you seem to be assuming that there is a natural meaning to "10 times as bad" apart from our stipulations. But as I said before, goodness (and badness) are not intrinsically quantities. So there is no natural meaning to "10 times as bad". We get the impression that there is, here, because the good and bad things are made of parts which can be counted (the years). But in reality goodness and badness are still not quantities. And given that we are generating the quantitative assignments in the manner stated, it is impossible for pain to be additive in that way, precisely because an astronomically low probability of a very great pain cannot be very bad, just as an astronomically low probability of something very good cannot be very good.
Also (this was too long to include), getting real numbers from my system depends on knowing, in every case, "Is A better than B?", whether A and B are real things, or probabilities of things. And of course "which is better for us" depends on our natures, so in that sense I did not mean to disagree with your point about natural law. I was simply saying that we don't have to get rid of expected utility. You can keep both.
Ian:
This becomes similar to the discussion between rule and act utilitarianism, doesn't it?
One technical worry is that we never make decisions "in advance". We are always _in media res_ in some way or another. So the de Finetti approach seems to require stepping back and thinking hypothetically what policy one would have set prior to any data. I am not sure this can be done.
Another technical worry is this. Suppose that a dictator tortures everyone who sets policies in advance. What should one do?
entirelyuseless:
1. For any N, a 0.959 chance of 2^N years of bliss beats a 0.960 chance of N years of bliss.
Therefore, as I understand your stipulation:
2. U(2^N years of bliss) ≥ (0.960/0.959) U(N years of bliss).
Now, stipulate that U(1 year of bliss) = 1. It follows from (2) that U(2 years of bliss) ≥ 1.001, and hence U(2^2 years of bliss) ≥ 1.001^2, and hence U(2^(2^2) years of bliss) ≥ 1.001^3, and so on. The right hand side will go to infinity, and so the left hand side will as well, and hence utilities are unbounded.
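Spelling out the iteration, as a sketch: 1.001 rounds 0.960/0.959 down as in the text, and the loop stops early only because the tower of exponents quickly becomes astronomically large.

```python
ratio = 0.960 / 0.959   # roughly 1.001

years = 1               # start from U(1 year of bliss) = 1
lower_bound = 1.0
for step in range(1, 6):
    years = 2 ** years          # 2, 4, 16, 65536, 2^65536, ...
    lower_bound *= ratio        # (2) gives U(years years of bliss) >= ratio^step
    print(step, lower_bound)

# Since ratio > 1, ratio**step exceeds any fixed bound for large enough step,
# so no bounded utility assignment can satisfy (1) for every N.
```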
I suppose you will say that for N sufficiently large, (1) is false. But this is implausible.
Yes, I would say that for sufficiently large N, (1) is false. And in particular, it is inconsistent with the bird-in-the-hand principle. This is why:
It follows from the stipulations regarding how we are measuring utility that once U(2 years) is given a definite value, say 1.002, then a certainty of a year of bliss is just as good as a 1/1.002 chance of two years of bliss. And the 1/1.002 chance of two years of bliss is just as good as an even smaller chance of four years. And so on. It follows that a certainty of a year of bliss will be equal to a probability of less than one in a googolplex of a sufficiently large number of years.
But the conclusion is false, since you agree that if the probability is made low enough and the number of years finite, the certainty will be better. Therefore (1) is false.
Right: it's incompatible with the bird in the hand principle, and hence the "stipulation" is incorrect. It's not really a stipulation, because it carries with it substantive assumptions about the structure of one's preferences--otherwise, there is no guarantee that there is an assignment of numbers to outcomes that satisfies the constraints you place on the assignment (viz., that if a p chance of U is preferable to a q chance of V, then p Utility(U) ≥ q Utility(V)).
Do you accept the negative version of (1)?
1neg. For any N, a 0.959 chance of 2^N years of pain is worse than a 0.960 chance of N years of pain.
The only assumptions about the structure of preferences implied by my stipulation are:
1) Consistency requirements like transitivity, and that a higher probability of a good thing is better than a lower probability.
2) If A is just as good as B, a probability p of getting A is just as good as an equal probability p of getting B. This is obviously true, but I am including it because it is implied by the stipulation, since "twice as good" is defined by the goodness of a 50% chance of something, which will not be well defined if a 50% chance of a good thing is not just as good as a 50% chance of another equally good thing. Also, while I think that this is also a reasonable consistency requirement, it may be the point you end up disagreeing with.
Consider the following indefinite series of questions:
Which is better, a certainty of 1 day of bliss, or a probability of 0.999999 of 2 days of bliss?
Which is better, a probability of 0.999999 of 2 days of bliss, or a probability of (0.999999^2) of 2^2 days?
Which is better, a probability of (0.999999^2) of 4 days, or a probability of (0.999999^3) of 2^4 days?
Which is better, a probability of (0.999999^3) of 16 days, or a probability of (0.999999^4) of 2^16 days?
...
If you always choose the second option, then transitivity implies that the bird in the hand principle is false, since the last second option, which will end up having a probability as small as we like, will be better than the certainty of one day of bliss, after you trace back the series.
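Generating the series explicitly, as a sketch: the 0.999999 factor is the one used in the questions, and the loop stops where the day counts become astronomically large.

```python
# Option offered at each step: probability q of `days` days of bliss.
# Each question trades this for probability q * 0.999999 of 2**days days.
q, days = 1.0, 1
for step in range(5):
    print(step, round(q, 7), days)
    q *= 0.999999        # probability of the next (second) option
    days = 2 ** days     # 1 -> 2 -> 4 -> 16 -> 65536, and then it explodes
# Run long enough, q drops below any threshold you care to name while the
# number of days dwarfs anything comprehensible; always taking the second
# option then conflicts, by transitivity, with the bird in the hand principle.
```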
This means that the bird in the hand principle absolutely requires that at some point you say that the first option is better, even though the first option has N days and the second option 2^N days. You seem to be suggesting that the reason for this is simply that the probability of the second option is too low, and that it has nothing to do with N and 2^N. But this cannot be true, at least if you accept the second requirement above. The statement where we finally agree that the first option is better looks like this:
"Which is better, a probability of (0.999999^X) of N days, or a probability of (0.999999^(X+1)) of 2^N days?", where X is some presumably high integer, and N presumably an extremely high value.
By my second requirement, we can adjust the probabilities by the same factor, and the comparison should remain the same. But we just said the first option is better; therefore a certainty of N days will be better than a probability of 0.999999 of 2^N days, since we just have to adjust both probabilities by a factor of (0.999999^X).
This proves that the real reason that we change from asserting that the second option is better, to the first, is that once N is high enough, 2^N days is not worth much more than N days.
The same thing is true about 1neg. It is not true for all N, because for very high N, 2^N years of pain is not much worse than N years. E.g. if N is the busy beaver function evaluated at Graham's number, it will be better to choose the 0.959 chance of 2^N years, because the small extra chance of avoiding the pain will be more important than the extremely small difference in value between N years of pain and 2^N years of pain (which are both essentially as bad as possible).
Imagine you've lived through N days of pain. Isn't it obvious that there is an enormous difference between zero more days and 2^N - N more days of pain?
I do like your initial argument, though. I guess I have to say that your second requirement is false. For instance, certainty of A and chance p of B might be equally valuable, but chance p of A might beat p^2 of B as p^2 might be negligible while p is not.
"Isn't it obvious that there is an enormous difference between zero more days and 2^N - N more days of pain?"
I thought you might be thinking something like this, but the comment was already too long to discuss it. There are two different ways of thinking about this objection. First, it could be a psychological story: every day, for 2^N - N more days, you realize what a bad choice you made by picking the 2^N days instead of the N. Second, it could be the claim that badness simply must be additive, because the bad things are distinct things that keep coming: you already had all the badness of N days, and now there will still be a lot more.
In regard to the psychological story, I have a corresponding psychological story to show that N days and 2^N days don't differ very much. N is already far more than you can count or calculate or remember in any way. So if you were keeping track of the days, trying to figure out when you were going to be released, you have already lost track ages ago, by the time you reach N days. So the days are just blurred together: the psychological experience is just one of pain that lasts "forever," but nonetheless somehow comes to an end at some point.
And of course that is exactly the same experience for the 2^N days.
Of course, if someone explicitly tells you that you have come to N days and that you would be released if you hadn't chosen the 2^N days, that will be painful in an additional way, but then your comparison is different: N days of regular pain, or N days of regular pain followed by (2^N - N) days of regular pain plus the knowledge that you could have got out by now.
It is still true, in my account, that those two will not differ much. But you have added something that might change the exact point where you switch from preferring the probability of N days to the probability of 2^N days.
The answer to the second interpretation of the objection is that goodness and being are not convertible materially, but only insofar as being is seen by reason as desirable. And reason sees very little difference between N and 2^N; given that N already far exceeds anything we can comprehend, for reason both are just "incomprehensibly large numbers." Of course you know the general fact that 2^N is more, and a bit about the structure of that, but nothing that would affect the desirability much. Finally, we have to refer desirability to a point in time, namely the point when you choose between the N and 2^N in the beginning. Obviously, once you have gotten through the N days, it is very bad if you are still going to have the 2^N - N days. But it is very bad from the point of view you have at that moment; it does not add anything to the point of view you had at the beginning when you made the choice.
This also means that a being that had a better comprehension of very large numbers would have different preferences, and would likely accept lower probabilities of very great goods as well.
About the probability factor requirement:
Again, you seem to be saying that what makes us switch from the second option to the first is just that the overall probability becomes "too low." I do not think this is a possible explanation, and one way to investigate it would be to think about how your preferences would change in that series of questions if you make different kinds of changes to the probability and to N. E.g. instead of 2^N, what happens to the switching point if we say N+1?
Let's suppose you say that a probability of 1 in 10,000 is fine, but a probability of 1 in 100,000,000 is not fine. Let's suppose we are offered this choice:
Which is better, a certainty of $1, or a probability of 1 in 10,000 of $10,000,000? Presumably we agree that the second is better.
Now suppose we are offered this choice:
Which is better, a probability of 1 in 10,000 of $1, or a probability of 1 in 100,000,000 of $10,000,000?
My argument is that since the things have changed by the same factor, we should keep our choice the same: the second is better. Your argument is that since the probability of 1 in 100,000,000 is worthless, we should switch and choose the first.
My answer to your argument is that we cannot look only at the probability, but also the thing: perhaps I agree that a probability of 1 in 100,000,000 is worthless, but a probability of 1 in 10,000 of $1 is also worthless. So there is no longer any intuitive reason for switching. Both are worthless, but there is no reason to say the second is more worthless; on the contrary, the factor argument suggests that the first is more worthless.
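The dollar arithmetic behind the factor argument, as a sketch. Expected dollar value is used here only as a crude proxy for comparing the options; nothing in the comment assumes value is linear in money.

```python
def expected_dollars(prob, payout):
    return prob * payout

# Original choice: a certain $1 versus a 1-in-10,000 shot at $10,000,000.
print(expected_dollars(1.0, 1), expected_dollars(1 / 10_000, 10_000_000))
# -> 1.0 versus 1000.0: the gamble looks better.

# Now scale both probabilities by the same factor of 1/10,000.
print(expected_dollars(1 / 10_000, 1), expected_dollars(1 / 100_000_000, 10_000_000))
# -> 0.0001 versus 0.1: the ratio between the two options is unchanged,
# which is the point of the factor argument. Whether both options have
# thereby become worthless is exactly what is in dispute.
```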
Also, we can always consider probabilities as being the certainty of something: e.g.:
Which is better, a certainty of $1, or the certainty of being invited into my lottery establishment? [It is a known fact that if you get into the establishment, you have a probability of 1 in 10,000 of $10,000,000.]
If you prefer the second, then we can ask which is better:
A probability of 1 in 10,000 of $1, or a probability of 1 in 10,000 of being invited into my lottery establishment?
Since the probabilities are the same, and both are of something certain, there is no reason to switch to the first option.
Alex, in response to your comment May 24:
Dictators who punish you for how you reason make rational choice difficult. But you can achieve a similar effect like this:
You are told that you will receive an initial payout. How the payout will be determined has already been set, but will not be revealed to you until after you have received it. You will then be invited to swap it for a St. Petersburg payout less a penalty. As promised, you receive the initial payout, are told that it was a St. Petersburg payout, and offered the swap. Should you accept?
Can you (and if so, should you) formulate a policy retrospectively, ‘as if’ you did not already know the initial payout? I’m not sure.
The example certainly illustrates the strangeness of a policy. If the initial payout were, say, $2^30, and you were told that it had been settled from the beginning, you would accept the swap (at least if you were Archimedean). But if you were told that it was a St. Petersburg outcome, you might not.
I've deleted spam from an online casino. :-)