Consider any game, like St Petersburg, where the expected payoff is infinite but the prizes are guaranteed to be finite. For instance, a number x is picked uniformly at random in the open interval from 0 to 1, and your prize is 1/x.
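The expectation diverges because the integral of 1/x over (0, 1) is infinite. Here is a minimal simulation sketch (the function names and sample sizes are my own, purely for illustration) showing the sample average of the prize creeping upward instead of settling down:

```python
import random

def play_once():
    # One run of the game: x uniform on (0, 1), prize 1/x.
    x = random.random()          # random() lies in [0, 1); re-draw the measure-zero x == 0
    while x == 0.0:
        x = random.random()
    return 1.0 / x

total = 0.0
for n in range(1, 1_000_001):
    total += play_once()
    if n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"average prize after {n:>9,} runs: {total / n:,.1f}")
```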
Suppose you and I independently play this game, and we each collect our winnings. Now I go up to you and say: “Hey, I’ve got a deal for you: you give me your winnings plus a million dollars, and then you’ll toss a hundred coins, and if they’re all heads, you’ll get one percent of what I won.” That’s a deal you can’t rationally refuse (assuming I’m dead-set against your negotiating a better one). For the payoff for refusing is just the finite winnings you already have. The expected payoff for accepting is −1000000 + 2^(−100)⋅0.01⋅(+∞) = +∞.
Wow!
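In simulation, of course, the hundred heads essentially never come up, so accepting just loses the million plus your winnings nearly every time; the infinite expectation lives entirely in that 2^(−100) tail. A rough sketch of the deal (again with my own illustrative names and trial counts):

```python
import random

def prize():
    # 1/x for x uniform on (0, 1), re-drawing the measure-zero case x == 0.
    x = random.random()
    while x == 0.0:
        x = random.random()
    return 1.0 / x

def accept_deal():
    # You surrender your winnings plus $1,000,000, then toss 100 coins;
    # if all land heads you receive 1% of my (independent) winnings.
    yours, mine = prize(), prize()
    all_heads = all(random.random() < 0.5 for _ in range(100))
    return -yours - 1_000_000 + (0.01 * mine if all_heads else 0.0)

results = [accept_deal() for _ in range(10_000)]
print(sum(r > 0 for r in results), "profitable acceptances out of", len(results))
```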
Now let’s play doubles! There are two teams: (i) Garibaldi and I, and (ii) you and Delenn. The members of each team don’t get to talk to each other during the game, but after the game each team evenly splits its winnings. This is what happens. The house calculates two payoffs, w1 and w2, using independent runs of our St Petersburg-style game. I am in a room with you; Garibaldi is in a room with Delenn. Delenn and I are each given w1; you and Garibaldi are each given w2. Now, by pre-arrangement with Garibaldi, I offer you the deal above: you give me your winnings plus a million, and then toss a hundred coins, and then you get one percent of my winnings if they’re all heads. You certainly accept. And Garibaldi offers exactly the same deal to Delenn, and she accepts. What’s the result? Well, the vast majority of the time, the Pruss and Garibaldi team ends up with all the winnings (w1 + w2 + w1 + w2 = 2w1 + 2w2), plus two million, and the you-and-Delenn team ends up out two million. But when both sets of coin tosses come up all heads (about once in 2^200 runs), the Pruss and Garibaldi team ends up with only 1.99w1 + 1.99w2, plus two million, while you and Delenn end up with 0.01w1 + 0.01w2 − 2000000.
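A sketch of the doubles accounting (my own reconstruction of the setup just described, with an arbitrary trial count) confirms that your team almost always finishes two million down while my team pockets everything:

```python
import random

def prize():
    x = random.random()
    while x == 0.0:
        x = random.random()
    return 1.0 / x

def hundred_heads():
    return all(random.random() < 0.5 for _ in range(100))

def doubles_round():
    w1, w2 = prize(), prize()
    # Pruss and Delenn are each given w1; you and Garibaldi are each given w2.
    # You hand Pruss your w2 plus $1,000,000 for a 2^-100 shot at 1% of his w1;
    # Delenn and Garibaldi run the mirror-image deal.
    you_heads, delenn_heads = hundred_heads(), hundred_heads()
    you       = -1_000_000 + (0.01 * w1 if you_heads else 0.0)
    delenn    = -1_000_000 + (0.01 * w2 if delenn_heads else 0.0)
    pruss     = w1 + w2 + 1_000_000 - (0.01 * w1 if you_heads else 0.0)
    garibaldi = w2 + w1 + 1_000_000 - (0.01 * w2 if delenn_heads else 0.0)
    return pruss + garibaldi, you + delenn   # net team positions, before splitting

rounds = [doubles_round() for _ in range(10_000)]
print("rounds in which Team You+Delenn ended up ahead:",
      sum(yd > 0 for _, yd in rounds))
```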
And, alas, I don’t see a way to use Causal Finitism to solve this paradox.
This is unnecessarily complicated. The expected wealth of all parties at the end will be infinite, but with the distribution much worse for some than for others.
What the argument actually proves is that it is foolish to use an unbounded utility function.
The expected utility will be infinite but the actual utility will be finite after a finite number of runs. So doubling that finite payoff seems worth it.
Wouldn’t a simpler deal, asking for $1 to swap winnings, make the same point, albeit less dramatically?
On causal finitism: Think about the original St Petersburg game (the coin is flipped until it lands heads). You have to be prepared for the coin to land tails every time, even though it almost certainly won’t. Then you would need an infinite number of flips. Maybe this makes the setup count as causally infinite.
Put differently: To run St Petersburg ‘synchronously’ with coins, you would need an infinite number of them.
An obvious objection to this idea is that there is no problem with a probability distribution like P(n) = 2^(-n) in itself. The problem comes from the probabilities and the rapidly increasing payouts together. A possible response: Strictly, causal finitism would rule out all infinite distributions. But normal cases could be approximated for practical purposes by a sufficiently large number of coins, and infinite expectation cases could not. [uniform convergence]
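A quick check of that last point, using the classic payoff convention in which the first heads on flip n (probability 2^(-n)) pays 2^n (my choice, just to make it concrete): truncating the game at a finite number of flips leaves the probabilities almost untouched but the expectation nowhere near infinite.

```python
def truncated_expectation(max_flips):
    # Classic St Petersburg: first heads on flip n (probability 2^-n) pays 2^n.
    # Cutting the game off after max_flips flips misdescribes outcomes of total
    # probability only 2^-max_flips, yet the truncated expectation grows only
    # linearly in the cutoff, never approximating "infinite".
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

for cutoff in (10, 100, 1000):
    print(cutoff, truncated_expectation(cutoff))   # prints 10.0, 100.0, 1000.0
```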
But what about just rolling a suitably biased die with a countably infinite number of faces? I’m not entirely convinced by any of this.
A different line: Conditional on any particular winnings, the deal in the OP has infinite expected value. But unconditionally, its expected value is indeterminate. The maths is fine, if a bit unintuitive. But what do we make of it? One answer: Don’t conditionalize. Don’t accept that we have to act only on the latest conditional probabilities, and that nothing else is relevant. In normal cases, it can be proved as a theorem that it is indeed safe to ignore everything else. But in weird cases like this, it need not be. Note that non-conglomerability raises similar issues.
Thinking in this way, ‘you’ (and Delenn) could adopt a strategy like this: accept the offer on ‘low’ winnings (i.e. winnings less than some number N) and reject otherwise. Adopting such a strategy would give you infinite expected gains without the risk of unbounded losses.
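A small sketch of what such a cutoff policy amounts to (the cutoff value and names are mine, purely illustrative): accepting only when your own payoff is below N caps your possible loss below N plus the million, while the 2^(−100) shot at one percent of an infinite-expectation prize keeps the expected gain infinite.

```python
import random

def prize():
    x = random.random()
    while x == 0.0:
        x = random.random()
    return 1.0 / x

def accept(my_winnings, cutoff=1_000):
    # Take the OP's deal only when your own winnings are 'low'.
    # Worst case on accepting: lose my_winnings + $1,000,000 < cutoff + $1,000,000,
    # so losses stay bounded; the 2^-100 chance at 1% of an infinite-expectation
    # prize still makes the expected gain infinite.
    return my_winnings < cutoff

accepted = sum(accept(prize()) for _ in range(100_000))
print(f"with cutoff $1,000 the policy accepts about {accepted / 1_000:.1f}% of offers")
```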
On reflection, Ian, I like your simpler version a lot and I may use it in print, but it has the disadvantage that (right? I haven't checked) eventually the dollars will likely be swamped and insignificant.
Accepting my deal will always lose Team ‘You’ exactly $2. I would have thought that any such loss, however small, would suffice to make the point. But you can easily make it more dramatic: ‘half my winnings for all yours’, or ‘half my winnings less $x for all yours’ etc.
As a point of exposition, you would do well to say explicitly that Team ‘You’ know that Team Pruss make their offers before they see their winnings, or else that they are bound to make their offers in any case. Team Pruss must not have the option of offering or not, depending on their winnings, and Team ‘You’ must know this.
Unless I'm missing something, you lose $2 each time, but the team also gets two St Petersburg payoffs which swamp the loss.
Here's a way to make it just a solid loss. You and Delenn play against the house. The house offers you a free St Petersburg payoff and offers Delenn an independent St Petersburg payoff, also for free. Then the house pays you and Delenn, but you don't see what the other got. Then the house offers you and Delenn this second deal: if you pay in twice what you won, plus a dollar, you get the other's winnings. Now it's a solid $2 loss to the team if you both go for the swap.
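The arithmetic behind that solid $2 loss, spelled out in a tiny check (the variable names are mine): whatever the two payoffs are, the team hands the house $2 more than it gets back.

```python
def team_net(a, b):
    # a, b: the free St Petersburg payoffs given to you and to Delenn.
    # Each of you then pays in twice your own payoff plus $1 and receives the other's.
    you    = a - (2 * a + 1) + b   # = b - a - 1
    delenn = b - (2 * b + 1) + a   # = a - b - 1
    return you + delenn            # = -2 no matter what a and b are

for a, b in [(3.0, 7.0), (1000.0, 2.5), (42.0, 42.0)]:
    print(team_net(a, b))          # -2.0 every time
```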
Here's a possible solution, though.
This problem is most compelling if both you and Delenn know that the other is getting this deal; but it seems you're still each rational in going for the swap.
But now it's not so clear. Let p be the probability of Delenn swapping, and let A and B be what you and Delenn, respectively, got in the first round. Then here are the overall payoffs for your choice:
SWAP: With probability p, get -1. With probability 1-p, get (1/2)(-1-A+2B).
NOT: With probability p, get (1/2)(-1-B+2A). With probability 1-p, get (1/2)(A+B).
Since you know A, E(A) is finite; but you don't know anything about what B is, so E(B) is infinite. Hence E(SWAP) = infinity and E(NOT) = infinity. Now it's not so clear that you should go for the swap.
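For what it's worth, a quick enumeration of the four swap/not-swap combinations (with illustrative numbers of my own choosing) reproduces the per-member payoffs above:

```python
from itertools import product

def member_payoff(a, b, you_swap, delenn_swap):
    # a, b: your and Delenn's first-round payoffs. Swapping means paying in
    # twice your own payoff plus $1 to receive the other's; the team then
    # splits its total evenly.
    you    = a + (b - 2 * a - 1 if you_swap else 0.0)
    delenn = b + (a - 2 * b - 1 if delenn_swap else 0.0)
    return (you + delenn) / 2

a, b = 5.0, 8.0
for you_swap, delenn_swap in product([True, False], repeat=2):
    print(you_swap, delenn_swap, member_payoff(a, b, you_swap, delenn_swap))
# Matches the cases above, in order: both swap gives -1; then
# (1/2)(-1 - A + 2B), (1/2)(-1 - B + 2A), and (1/2)(A + B).
```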
Does this really avoid the swamping problem? If you can afford to pay in $2A + 1, having just received $A, you must have had more than $A to start with. And A could be arbitrarily large. So, for the setup to work, you must have started with infinite assets. This will swamp any finite loss.
But does swamping matter? Isn’t the paradox about the value of the deal? Judged by expected return, conditional on knowing your winnings, it seems to have infinite value. But by the pairing, you can force a sure total loss.