Thursday, April 13, 2023

Some problems with neglecting small probabilities

While I’ve been very friendly to the idea that tiny probabilities should be neglected, here is a serious difficulty. Suppose what we do is neglect probabilities smaller than some positive ϵ that is much smaller than one. Now suppose someone gives you offer A:

  • With probability 1.1ϵ, get a penny.

  • With probability 0.9ϵ, get a year of torture.

If you neglect probabilities less than ϵ, then you ought to accept A. For you will neglect the year of torture, but not the penny. (This follows both on a simplistic “drop events with probability less than ϵ” reading of “neglect tiny probabilities” and on the more sophisticated version described here.)

But it is absurd to think you should accept an offer where the probability of the positive payoff is only about 20% bigger than that of the negative payoff, while the magnitude of the negative payoff is many orders of magnitude bigger.
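To see the absurdity numerically, here is a minimal sketch of offer A under the sharp-cutoff rule. The specific utilities (a penny worth +0.01, a year of torture worth −10⁶) and ϵ = 10⁻⁶ are illustrative assumptions of mine, not values from the post:

```python
# Offer A under a simple "neglect probabilities below eps" rule.
# The utilities (penny = +0.01, torture = -1e6) and eps = 1e-6 are
# illustrative assumptions, not taken from the post.

EPS = 1e-6
offer_a = [(1.1 * EPS, 0.01),        # get a penny
           (0.9 * EPS, -1_000_000)]  # a year of torture

def true_value(offer):
    return sum(p * u for p, u in offer)

def cutoff_value(offer, eps=EPS):
    # Drop any outcome whose probability is below eps.
    return sum(p * u for p, u in offer if p >= eps)

print(true_value(offer_a))    # about -0.9: a terrible deal
print(cutoff_value(offer_a))  # positive: the cutoff rule says accept
```

The true expected value is dominated by the torture branch, while the cutoff rule sees only the penny.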

Consider, too, that if we think probabilities less than ϵ to be negligible, shouldn’t we by the same token think that differences of probability of 0.2ϵ are negligible as well? Yet that is the difference in probabilities between the penny and the year of torture, and this difference is what makes A allegedly obligatory.

Next, consider this. Let’s say that offer B is as follows:

  • With probability 0.55, get a penny.

  • Otherwise, get a year of torture.

Obviously, this is a terrible deal and you should refuse. But now consider offer Bx for a constant 0 < x ≤ 1:

  • With probability x, get offer B.

On the neglect of tiny probabilities account, we get the following oddity. You ought to refuse B, but you ought to accept probability 2ϵ of B. For B2ϵ is equivalent to A. It seems very odd indeed that a tiny probability of a terrible deal could be a good deal!
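The equivalence of B2ϵ and A is simple arithmetic; here is a quick check, with ϵ set to the illustrative value 10⁻⁶:

```python
import math

# Check that a 2*eps chance of offer B yields the same outcome
# probabilities as offer A; eps = 1e-6 is an illustrative value.
EPS = 1e-6

# Offer B: penny with probability 0.55, a year of torture otherwise.
p_penny_B, p_torture_B = 0.55, 0.45

# B2eps: take offer B with probability 2*eps, nothing otherwise.
x = 2 * EPS
p_penny = x * p_penny_B      # probability of getting the penny
p_torture = x * p_torture_B  # probability of the torture

assert math.isclose(p_penny, 1.1 * EPS)    # matches A's penny branch
assert math.isclose(p_torture, 0.9 * EPS)  # matches A's torture branch
```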

It may be that the above problems can be solved by more careful tweaking of the utility calculations, so that instead of sharply cutting off the probabilities, you attenuate them continuously to zero.
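One hypothetical way to implement such a continuous attenuation is to multiply each probability by a weight that is 0 below ϵ and ramps up to 1 by, say, 2ϵ. The linear ramp and the utilities below are my own illustrative choices, not anything the post commits to:

```python
# A hypothetical continuous attenuation: probabilities at or below eps
# get weight 0, probabilities at or above 2*eps get full weight, with
# a linear ramp in between. One illustrative choice among many.
EPS = 1e-6

def weight(p, eps=EPS):
    if p <= eps:
        return 0.0
    if p >= 2 * eps:
        return 1.0
    return (p - eps) / eps  # linear ramp on [eps, 2*eps]

def attenuated_value(offer, eps=EPS):
    return sum(weight(p, eps) * p * u for p, u in offer)

# Under this particular ramp, offer A's torture branch (probability
# 0.9*eps) is still fully neglected, and the penny branch is only
# partially counted, so A still comes out (barely) positive.
offer_a = [(1.1 * EPS, 0.01), (0.9 * EPS, -1_000_000)]
print(attenuated_value(offer_a))
```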

But there is a final problem that cannot be solved in such a technical way. Any reasonable neglect-of-small-probabilities account on which probabilities less than ϵ are completely neglected, while probabilities bigger than ϵ are not completely neglected, will admit a case where C is a deal to be refused and yet a probability x of C, for a certain 0 < x < 1, is to be accepted. For instance, suppose C is as follows:

  • With probability 4ϵ, get X.

  • With probability 2ϵ, pay Y.

(I am assuming that ϵ < 1/4. If we neglect a 1/4 chance, then we’re crazy.) Whatever the attenuation factors on probabilities are, we can choose positive amounts X and Y such that C is a bad deal and to be refused. But now let C1/3 be a 1/3 chance of C. For concreteness, suppose a die is rolled and you get C if the die shows 1 or 2. Then C1/3 has this profile:

  • With probability (4/3)ϵ, get X.

  • With probability (2/3)ϵ, pay Y.

The second option will be neglected. The first one may be attenuated, but not to zero, and so C1/3 is guaranteed to have some small but positive value δ > 0. Now consider a final deal D:

  • Pay δ/2 to get C1/3.

You ought to go for D on the account we are considering, since the value of C1/3 is δ. But now imagine you’ve gone for D. Now the die is rolled to see if you will get C. If the die comes up 1 or 2, then you know you will get C. But C is a bad deal, we have agreed. So in that case you will have regrets. But if the die comes up 3, 4, 5 or 6, then you know you will get nothing, though you will have paid δ/2, so you will also have regrets. So no matter what, you will have regrets.
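The whole construction can be run numerically with the simplest (sharp-cutoff) version of the account. The amounts X = 1 and Y = 10 and the value ϵ = 10⁻⁶ are illustrative choices of mine:

```python
# Numerical version of the C / C1/3 / D construction, using the sharp
# cutoff rule (neglect below eps, full weight at or above eps).
# X = 1, Y = 10 and eps = 1e-6 are illustrative choices.
EPS = 1e-6
X, Y = 1.0, 10.0

def cutoff_value(offer, eps=EPS):
    return sum(p * u for p, u in offer if p >= eps)

# Offer C: probability 4*eps of gaining X, probability 2*eps of paying Y.
C = [(4 * EPS, X), (2 * EPS, -Y)]
value_C = cutoff_value(C)      # 4*eps*X - 2*eps*Y = -16*eps < 0: refuse

# C1/3: a 1/3 chance of C. The loss branch falls below eps and vanishes.
C_third = [((4 / 3) * EPS, X), ((2 / 3) * EPS, -Y)]
delta = cutoff_value(C_third)  # (4/3)*eps > 0

# Offer D: pay delta/2 up front to get C1/3. The account values this at
# delta - delta/2 > 0, so you accept -- yet once the die is rolled you
# regret the decision either way.
value_D = delta - delta / 2
print(value_C, delta, value_D)
```

So the account refuses C outright yet pays for a 1/3 chance of it, which is exactly the regret structure described above.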

Basically, we have here a violation of a decision-theoretic version of conglomerability. I expect this isn’t really new, because a variant of the regret argument can be applied to any decision procedure that violates independence given some reasonable assumptions.

I think it may be worth biting the bullet on the regret argument.

4 comments:

SMatthewStolte said...

It seems like you are setting up epistemically hostile environments for normal, well-functioning human cognition.

How would you compare these sorts of decisions with the kind of thing you mention here?
https://alexanderpruss.blogspot.com/2021/10/a-simple-moral-preference-circle-with.html

Alexander R Pruss said...

It may all be in the same boat.

I think of it as a giant dilemma. Either (a) we have a complex and messy system of species-relative norms of rationality or (b) we have some variant on standard decision theory. A lot of my research is exploring (b), and in the end I think all this pushes me to (a).

Alexander R Pruss said...

What's so bad about the regrets at the end of my post? Well, for one, it leads to the following counterintuitive Dutch Book. You have agreed to contract to D. But now when you learn how the die comes up, you regret your decision, and you will pay some small amount to get out of the contract. In other words, there is a nice money pump for someone else: Offer D, watch you accept it, wait for the die roll result, offer to take D back for a small fee, repeat.
