In yesterday’s post, I argued that there is something problematic about the idea of discounting small probabilities, given that in a large enough lottery every possibility has a small probability. I then offered a way of making sense of the idea by “trimming” the utility function at the top and bottom.
This morning, however, I noticed that one can also take the idea of discounting small probabilities more literally and still get exactly the same results as by trimming utility functions. Specifically, given a probability function P and a probability discount threshold ϵ, we form a credence function Pϵ by letting Pϵ(A) = P(A) if ϵ ≤ P(A) ≤ 1 − ϵ, Pϵ(A) = 0 if P(A) < ϵ, and Pϵ(A) = 1 if P(A) > 1 − ϵ. This discounts close-to-zero probabilities to zero and raises close-to-one probabilities to one. (We shouldn’t forget the second clause, or things won’t work well.)
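For concreteness, here is a minimal sketch of the discounting step in Python; the function names are mine, and the only assumption is the piecewise definition of Pϵ above.

```python
def discount(p: float, eps: float) -> float:
    """Discount a single probability value: send values below eps to 0,
    values above 1 - eps to 1, and leave everything in between unchanged."""
    if p < eps:
        return 0.0
    if p > 1.0 - eps:
        return 1.0
    return p


def discounted_credence(P, eps: float):
    """Given a probability function P (mapping events to [0, 1]) and a
    threshold eps, return the credence function P_eps = discount ∘ P."""
    return lambda A: discount(P(A), eps)
```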
Of course, Pϵ is not in general a probability, but it does satisfy the Zero, Non-Negativity, Normalization and Monotonicity axioms, and we can now use the level-set integral LSI↑ to calculate expected utilities with Pϵ.
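For a finite outcome space and a nonnegative utility function, and assuming LSI↑ is the usual increasing level-set (Choquet-style) construction ∫₀^∞ Pϵ(U > y) dy from the earlier posts, the calculation reduces to a sorted sum over utility levels. Here is a sketch under that assumption; the helper names and the toy lottery are mine.

```python
from typing import Callable, Dict, FrozenSet


def level_set_integral(U: Dict[str, float],
                       credence: Callable[[FrozenSet[str]], float]) -> float:
    """Increasing level-set integral of a nonnegative utility U on a finite
    outcome space, with respect to a monotone credence (additivity not needed):
    sum over sorted utility levels y of (y - previous y) * credence({U >= y})."""
    total, prev = 0.0, 0.0
    for y in sorted(set(U.values())):
        upper = frozenset(w for w, u in U.items() if u >= y)
        total += (y - prev) * credence(upper)
        prev = y
    return total


# Toy lottery with a tiny-probability jackpot, discounted at eps = 0.02.
probs = {"low": 0.50, "mid": 0.49, "jackpot": 0.01}
U = {"low": 1.0, "mid": 2.0, "jackpot": 1000.0}
P = lambda A: sum(probs[w] for w in A)
eps = 0.02
P_eps = lambda A: 0.0 if P(A) < eps else (1.0 if P(A) > 1 - eps else P(A))

print(level_set_integral(U, P))      # ≈ 11.48: matches ordinary expected utility, since P is additive
print(level_set_integral(U, P_eps))  # 1.5: the 0.01-probability jackpot is discounted away
```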
If U_δ is the utility function “trimmed” at level δ as in my previous post, then LSI↑Pϵ(U) = E(U_2ϵ), so the two approaches are equivalent.
One can also do the same thing within Buchak’s REU theory, since that theory is equivalent to applying LSI↑ to a probability transformed by a monotone map of [0,1] to [0,1] that keeps the endpoints fixed, which is exactly what I did when moving from P to Pϵ.
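Spelled out (the notation r is mine), the map in question here is

```latex
r(x) =
\begin{cases}
0 & \text{if } x < \epsilon, \\
x & \text{if } \epsilon \le x \le 1 - \epsilon, \\
1 & \text{if } x > 1 - \epsilon,
\end{cases}
\qquad P_\epsilon = r \circ P,
```

which is monotone (nondecreasing) and fixes the endpoints: r(0) = 0, since 0 < ϵ, and r(1) = 1, since 1 > 1 − ϵ.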
Why calculate the expected utility via "level-set integrals" when, according to Fubini, you could also calculate it with "block integrals", or with any other appropriate transformation of the x and y coordinates, and get exactly the same result?
Why not calculate it with polar coordinates?
Sure, that would be more difficult to do without any rotational symmetries here.
But you can do it, and done properly it should give the same result for the expected utility.
Sooo...
What exactly makes "level-set integrals" so special here?!?
I don't see any particularly good reason to prefer this specific approach to calculating expected utilities over "block integrals" here.
Fubini's theorem applies to expected values defined with respect to a measure. The credence function Pϵ is not in general a measure, because in general it fails finite additivity. Thus the standard Lebesgue integral with respect to Pϵ is undefined. I don't know what a "block integral" is.
The point of level-set integrals, for me, is that they allow one to define a fairly well-behaved expectation or prevision with respect to credence assignments that are not probabilities because, instead of additivity, they only satisfy monotonicity (P(A) ≤ P(B) whenever A ⊆ B).
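To make the contrast concrete, here is a toy monotone but non-additive credence (a squared-probability distortion; the example is mine, not from the post). An ordinary expectation with respect to it is undefined, but the same sorted-sum level-set construction as in the sketch above still returns a well-defined value.

```python
# nu(A) = P(A)**2 is monotone with nu(empty) = 0 and nu(whole space) = 1, but
# nu({"b"}) + nu({"c"}) = 0.13 != nu({"b", "c"}) = 0.25, so it is not
# finitely additive and has no standard Lebesgue expectation.
probs = {"a": 0.5, "b": 0.3, "c": 0.2}
U = {"a": 0.0, "b": 10.0, "c": 20.0}
nu = lambda A: sum(probs[w] for w in A) ** 2

# Increasing level-set integral of the nonnegative utility U with respect to nu.
total, prev = 0.0, 0.0
for y in sorted(set(U.values())):
    total += (y - prev) * nu(frozenset(w for w in U if U[w] >= y))
    prev = y
print(total)  # 2.9: a well-defined prevision despite the failure of additivity
```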
Ah, "block" is my term. :-)
Here's the background for why I am interested in expected values with respect to non-probabilities. The credences or degrees of belief of real human beings are unlikely to be consistent. In particular, they are unlikely to satisfy the axioms of probability, especially additivity. At the same time, real human beings need a way of making predictions. Mathematical expectation is out, because that requires at least a finitely-additive measure (normally Lebesgue integrals are defined with respect to a countably-additive measure, but they can also be defined with respect to a finitely-additive one). So we need some other method for making predictions or generating expectations when the credences do not satisfy the axioms of probability.