Suppose that we wish to model an infinite fair lottery with tickets numbered by integers by means of qualitative probabilities, i.e., a reflexive and transitive relation ≲ between sets of tickets that satisfies the non-negativity constraint that ∅ ≲ A for all A and the additivity constraint that A ≲ B iff A − B ≲ B − A. Suppose, further, that we want to have the regularity constraint that ∅ < A if A is not empty.
At this point, we want to ask what “fairness” is. One proposal is that fairness is strong translation invariance: if A is a set of integers and n + A is the set {n + m : m ∈ A} of all the members of A shifted over by n, then A and n + A are equally probable. Unfortunately, if we require strong translation invariance, then we violate the regularity constraint, since we will have to assign the same probability to the winning ticket being in {1, 2, 3, ...} as to the winning ticket being in {2, 3, 4, ...}, which (given additivity) violates the constraint that ∅ < {1}.
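The set-difference step that additivity appeals to here can be checked concretely. A minimal Python sketch on a finite truncation (the bound N is an artifact of the illustration, not of the argument):

```python
# Additivity reduces A ≲ B to (A − B) ≲ (B − A). For A = {1, 2, 3, ...}
# and its shift B = 1 + A = {2, 3, 4, ...}, those differences are {1} and ∅.
# We verify this on a finite truncation; the bound N is illustrative only.
N = 1000
window = set(range(1, N + 1))
A = window                            # truncation of {1, 2, 3, ...}
B = {n + 1 for n in A} & window       # the shift 1 + A, clipped to the window

print(A - B)   # {1}
print(B - A)   # set()
```

Since strong invariance makes A and B equally probable, additivity would then force {1} to be equally probable with ∅, against regularity.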
One possible option that I’ve been thinking about is to require weak translation invariance. Weak translation invariance says that A ≲ B iff n + A ≲ n + B. Thus, a set might not have the same probability as a shift of itself, but comparisons between sets are not changed by shifts. I’ve spent a good chunk of the last week or two trying to figure out whether (given the other constraints) it is coherent to require weak translation invariance. Last night, Harry West gave an elegant affirmative proof on MathOverflow. So, yes, one can require weak translation invariance.
However, weak translation invariance does not capture the concept of fairness. Here is one reason why.
Say that a set B of integers is right-to-left (RTL) bigger than a set A of integers provided that there is an integer n such that:
n ∈ B but not n ∈ A, and
for every m > n, if m ∈ A, then m ∈ B.
RTL comparison of sets of integers thus always favors sets with larger integers. Thus, the set {2, 3} is RTL bigger than the infinite set {..., − 3, −2, −1, 0, 1, 3}, because the former set has 2 in it while the latter does not.
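The RTL definition can be made concrete for sets given by membership predicates. A minimal sketch (the predicate representation, scan window, and function names are illustrative choices; the window must contain all members of both sets above any candidate witness):

```python
# B is RTL bigger than A iff there is a witness n with n ∈ B, n ∉ A,
# such that every m > n in A is also in B. We scan witnesses in [lo, hi],
# assuming both sets agree (are empty) above hi.
def rtl_bigger(in_B, in_A, lo, hi):
    for n in range(lo, hi + 1):
        if in_B(n) and not in_A(n):
            if all(in_B(m) for m in range(n + 1, hi + 1) if in_A(m)):
                return True
    return False

# The example from the post: B = {2, 3}, A = {..., -2, -1, 0, 1, 3}
in_B = lambda k: k in (2, 3)
in_A = lambda k: k <= 1 or k == 3
print(rtl_bigger(in_B, in_A, -50, 50))   # True: witness n = 2
print(rtl_bigger(in_A, in_B, -50, 50))   # False
```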
It looks to me that West’s proof straightforwardly adapts to show that there is a weakly translation invariant qualitative probability that coheres with RTL ordering: if B is RTL bigger than A, then B is strictly more likely than A. But a probability comparison that coheres with RTL ordering is about as far from fairness as we can imagine: a bigger ticket number is always more likely than a smaller one, and indeed each ticket number is more likely to be the winner than the disjunction of all the smaller ticket numbers!
So, weak translation invariance doesn’t capture the concept of fairness.
Here is a natural suggestion. Let’s add to weak translation invariance the following constraint: any two tickets are equally likely.
I think—but here I need to check more details—that a variant of West’s proof again shows that this won’t do. Say that a set B of integers is right-skewed (RS) at least as big as a set A of integers provided that at least one of the following holds:
(1) A is finite and B has at least as many members as A, or
(2) B has infinitely many positive integers and A does not, or
(3) A is a subset of B.
Intuitively, a probability ordering that coheres with RS ordering fails to be fair, because, for instance, it makes it more likely that the winning ticket will be, say, a power of two than that it be a negative number. But at the same time, a probability ordering that coheres with RS ordering makes all individual tickets be equally likely by (1).
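The three RS clauses can be sketched in code. The symbolic set representation below is an assumption of the illustration, not part of the post: `members` is the set itself when finite and None otherwise, `inf_pos` records whether the set contains infinitely many positive integers, and the subset clause is checked only on a finite window of witnesses.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Optional

@dataclass
class SymSet:
    members: Optional[FrozenSet[int]]   # None means the set is infinite
    inf_pos: bool                       # infinitely many positive members?
    contains: Callable[[int], bool]     # membership predicate

def rs_at_least(B: SymSet, A: SymSet, witnesses=range(-100, 100)) -> bool:
    # (1) A is finite and B has at least as many members as A
    if A.members is not None and (B.members is None or len(B.members) >= len(A.members)):
        return True
    # (2) B has infinitely many positive integers and A does not
    if B.inf_pos and not A.inf_pos:
        return True
    # (3) A ⊆ B, checked only on the finite witness window (illustration)
    return all(B.contains(k) for k in witnesses if A.contains(k))

neg  = SymSet(None, False, lambda k: k < 0)                         # negative integers
pow2 = SymSet(None, True,  lambda k: k > 0 and (k & (k - 1)) == 0)  # powers of two
t5   = SymSet(frozenset({5}),  False, lambda k: k == 5)             # single tickets
t7   = SymSet(frozenset({-7}), False, lambda k: k == -7)

print(rs_at_least(pow2, neg), rs_at_least(neg, pow2))  # True False
print(rs_at_least(t5, t7), rs_at_least(t7, t5))        # True True
```

The first comparison shows the unfairness (powers of two beat the negatives via clause (2)); the second shows that clause (1) makes any two single tickets rank equal.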
To make this work with West’s proof, replace his C0 with the set of bounded functions that have a well-defined and non-negative sum or whose positive part has an infinite sum.
13 comments:
I have no comment on the purely mathematical problem. But it seems unlikely that there could be an intuitively reasonable ordering.
Think about the ‘Left’ sets (-∞, L] and the ‘Right’ sets [R, ∞). Clearly, (by additivity and regularity), the Left sets have to strictly increase in L and the Right sets have to strictly decrease in R. So they cannot all rank equal. They cannot ‘cross over’ (that would be inconsistent with weak translation invariance, monotone increase and decrease, and transitivity – if I’m thinking straight). So either all the Left sets rank higher than all the Right sets, or vice versa. Either way would be arbitrary and intuitively unreasonable.
Right. :-)
Apologies. I see you already noted that at MathOverflow.
I think (warning: sorcerer’s apprentice at work) that left or right bias is implicit in West’s approach, even for single ticket comparisons. The comparison function for two tickets is +1 for one, -1 for the other, and zero elsewhere. This is not in the equivalence class of the zero function. (It does not convolve with any finitely supported non-negative function to make the zero function.) So it has to rank either above or below zero. By translation and transitivity, the left or right bias of a single adjacent pair will apply to the whole line.
Can this be changed by simple tweaking? I will leave that to others.
I didn't notice that: it's a nice point. It can be tweaked, however: We can throw into C0 the set of all finitely supported functions whose sum is non-negative (since that's not changed by convolutions). If we do that, then any two tickets will be equally likely.
I’m not seeing how that works. I am no doubt missing the point. For two tickets to rank equal, both the comparison function and its negative must be in the maximal good cone C. So the function itself must be in both C and -C. West’s condition C3 for a good cone requires C ∩ (−C) = [0]. So to change which tickets rank equal, you have to change the equivalence class of the zero function. So you have to use a different equivalence relation (or perhaps a different initial set of functions).
Good point. Would this work? After we've generated his maximal cone C, we add to it all the functions whose difference from a function in C sums to zero? I think that gives us a cone that satisfies C1, C2 and C ∩ (−C) = {[z] : Σ z = 0}.
Transitivity could be a problem.
Take X = {even numbers with 0 removed and 1 added}, Y = {even numbers}, Z = {odd numbers}. Then X ranks equal to Y (comparison function sums finitely to zero), Y ranks equal to Z (translation or comparison function is periodic and averages zero, hence in [0]). But for X vs Z, the comparison function is neither finitely supported nor in [0].
Here is a possible fix. Include not just functions that sum finitely to zero, but also functions that can be convolved with some finitely supported, non-negative, not identically zero function to give a function that sums finitely to zero. (In the example, use the function φ with φ(0) = φ(1) = 1 and φ(j) = 0 elsewhere. The convolution of the X vs Z comparison function with φ sums finitely to zero.)
In a similar vein, you might start from the equivalence relation that functions a and b are equivalent if for some φ and ψ as above, a * φ - b * ψ sums finitely to zero.
Does either idea actually work, or do they introduce their own problems? I don’t know. :-)
I should have specified that the difference sums to zero and the sum is absolutely convergent.
BTW, West's ordering also yields weakly translation invariant qualitative expectations, which is kind of neat, until you realize the implausible directional bias.
Or that the difference sums to zero and the function is finitely supported?
Maybe the latter is what you meant by "sums finitely to zero". But then note that if b is finitely supported and non-negative and not identically zero, then a*b sums finitely (or absolutely convergently) to zero iff a does.
BTW, the problem I am now working on is whether there is a weakly invariant comparison for the free group on two elements. That's an interesting case, because (unlike for Z) there is no finitely additive invariant real-valued probability measure on it (i.e., it's not amenable).
Yes, by “sums finitely to zero” I did mean “finitely supported and sums to zero”. Apologies: brevis esse laboro, obscurus fio (“I strive to be brief, and I become obscure”)... And yes, this, rather than absolute convergence of the sum to zero, is crucial. In my example, the X - Z difference function is not finitely supported, nor does its sum converge absolutely. But its convolution with the 1, 1 function I described is -1 at 0, 0 at 1, +1 at 2, and zero elsewhere. So it is finitely supported and sums to zero. On my proposal, this would make X and Z rank equal, as transitivity requires.
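The convolution computation in this exchange can be checked numerically. A minimal sketch (the window size and function names are artifacts of the illustration):

```python
# f is the X-vs-Z comparison function from the example:
# X = (even numbers with 0 removed and 1 added), Z = odd numbers.
def f(k):
    in_X = (k % 2 == 0 and k != 0) or k == 1
    in_Z = k % 2 != 0
    return int(in_X) - int(in_Z)

# phi(0) = phi(1) = 1, zero elsewhere, so (f * phi)(k) = f(k) + f(k - 1)
def conv(k):
    return f(k) + f(k - 1)

support = {k: conv(k) for k in range(-50, 51) if conv(k) != 0}
print(support)   # {0: -1, 2: 1}: finitely supported and sums to zero
```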
Excuse my ignorance, what is a “qualitative expectation”? But here is a trivial but possibly relevant fact. Think about comparison of finite sets. Then a comparison of characteristic functions based on Σ f(j)exp(αj), with any real α, and j being the index, is weakly translation invariant. But it is clearly not ‘fair’ unless α = 0.
No comment on amenable groups, at least for now.
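The α-weighted comparison of finite sets mentioned above can be illustrated concretely. A minimal sketch (the particular sets and shift are illustrative choices):

```python
import math

def weight(A, alpha):
    # compare finite sets by the weighted sum  sum_{j in A} exp(alpha * j)
    return sum(math.exp(alpha * j) for j in A)

# Weak translation invariance: shifting both sets by n multiplies both
# weights by exp(alpha * n), so the comparison is unchanged.
A, B, n, alpha = {0, 1}, {2}, 7, 1.0
assert (weight(A, alpha) < weight(B, alpha)) == \
       (weight({j + n for j in A}, alpha) < weight({j + n for j in B}, alpha))

# Unfairness for alpha != 0: the single ticket {2} outweighs {0, 1},
# while for alpha = 0 the weights simply count tickets.
print(weight(B, 1.0) > weight(A, 1.0))   # True  (e^2 > e^0 + e^1)
print(weight(B, 0.0) > weight(A, 0.0))   # False (1 < 2)
```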
I haven't thought through the details.
But the relevant sequences all have finitely many values (e.g., -1, 0, 1), and for a series whose terms take on only finitely many values, absolute convergence is the same thing as having only finitely many non-zero values. So I can't think why the choice between the two would make a difference.
Yes, you are right. West’s method seems to go through (I have checked, but the usual warning applies…) with this definition of equivalence: bounded functions a and b are equivalent if there are finitely supported non-negative not identically zero functions φ and ψ such that a * φ - b * ψ is finitely supported and sums to zero. West’s method also seems to work if equivalence requires only that there be finitely supported etc. φ and ψ such that a * φ - b * ψ sums absolutely convergently to zero.
Both methods force individual tickets to rank equal.
But there is a worrying ‘feature’. Think about functions that sum absolutely convergently to zero but are not finitely supported. (Of course, such functions cannot result from set comparisons, but West’s method still applies to them.) The ‘finitely supported’ approach seems to force all such functions (even antisymmetric ones) to rank unequal to zero. This seems weird.