Good’s Theorem basically says that a utility-maximizing agent can expect the decisions made after receiving cost-free information to be at least as good as the decisions made without it. (And under some additional conditions, one can expect the decisions to be strictly better.)
Now consider this case:
1. You will be offered a chance to make a bet at certain odds on the result of a coin toss, where as far as you can tell it’s equally likely that the coin is fair and that it is double-headed. Someone offers to tell you how the previous toss of the coin went.
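To see what the offered information is worth, here is a minimal Bayesian sketch of this case. The 1/2–1/2 prior comes from the setup; everything else is just Bayes’ theorem:

```python
from fractions import Fraction as F

# Prior from the setup: the coin is equally likely fair or double-headed.
prior_fair = F(1, 2)
p_h_fair, p_h_double = F(1, 2), F(1, 1)  # chance of heads under each hypothesis

# Chance that the previous toss was heads.
p_prev_heads = prior_fair * p_h_fair + (1 - prior_fair) * p_h_double  # 3/4

# Bayes: posterior probability that the coin is fair, given the announced result.
post_fair_given_h = prior_fair * p_h_fair / p_prev_heads  # 1/3
post_fair_given_t = F(1)  # tails is only possible if the coin is fair

# Predicted chance of heads on the toss you are betting on, by announced result.
p_next_h_after_heads = post_fair_given_h * p_h_fair + (1 - post_fair_given_h) * p_h_double
p_next_h_after_tails = post_fair_given_t * p_h_fair

print(p_prev_heads, p_next_h_after_heads, p_next_h_after_tails)  # 3/4 5/6 1/2
```

Whatever the odds offered, deciding after this update cannot be worse in expectation than deciding on the unconditional 3/4, which is exactly what Good’s Theorem guarantees.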
Good’s Theorem says your decision whether to make the bet will be at least as good given the information about the previous toss as without that information. Hence, if the information is being announced, you don’t need to cover your ears. This is, of course, very intuitive. But now consider a slightly different case:
2. Things are set up just as in (1), except now instead of information about the previous toss, you are offered a chance to have the following experiment get performed before your decision: the coin will be tossed an extra time and the result will be announced to you.
The difference is that in (2) you are not simply being offered additional information about how things are. For whether you go for the experiment or not, either way, you have full information about the experiment and its results. If you don’t go for the experiment, that full information is that the coin was not tossed an extra time (and hence did not land either heads or tails). If you do go for the experiment, the full information is that the coin was tossed and it landed heads, or else that it was tossed and it landed tails. In (2), you are not just finding out information by going for the deal: you are making something happen—an extra toss—and then finding out something about that.
So you can’t apply Good’s Theorem directly to (2). It would be nice to have a formulation of Good’s Theorem that works in cases where instead of merely finding out information, you perform an experiment.
I initially thought this would be easy. Maybe it is, but I don’t see it. There are, after all, cases where performing a cost-free experiment is not a good idea. Suppose, for instance, that you will be allowed to bet tomorrow that a certain car has more than 10 gallons of gasoline. The experiment is to start up the car and look at the gas gauge. But starting the car reduces the amount of gasoline in it, and one can easily rig the case so that benefits from the information gain are outweighed by the fact that you have made that bet less favorable.
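One way to rig the numbers (all of them invented for illustration): suppose the tank holds 10.05 gallons with probability 0.7 and 5 gallons otherwise, starting the car burns 0.1 gallons, and the bet costs 0.6 for a 1.00 payout if the tank holds more than 10 gallons tomorrow:

```python
# A toy model of the gas-gauge case (all numbers are made up for illustration).
P_FULLISH = 0.7          # chance the tank holds 10.05 gallons (else 5 gallons)
BURN = 0.1               # gallons consumed by starting the car
STAKE, PRIZE = 0.6, 1.0  # the bet: pay 0.6, win 1.0 if the tank holds > 10 gallons

def eu_bet(p_over_10):
    # A rational agent bets only if the bet is favorable.
    return max(0.0, p_over_10 * PRIZE - STAKE)

# Without the experiment the tank is untouched: > 10 gallons with probability 0.7.
eu_no_experiment = eu_bet(P_FULLISH)

# With the experiment, 10.05 - BURN = 9.95, so the tank never holds > 10 gallons.
# The gauge reading is now worthless: whatever it says, the bet is a loser.
eu_experiment = P_FULLISH * eu_bet(0.0) + (1 - P_FULLISH) * eu_bet(0.0)

print(round(eu_no_experiment, 3), eu_experiment)  # 0.1 0.0
```

Here the seemingly cost-free experiment is strictly worse: the information gained is outweighed by what performing the experiment does to the bet.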
So, we want to rule out cases where there is dependence between whether you perform the experiment and the payoffs of the wagers. If F is the event of performing the experiment, it may seem initially that we should assume something like:
3. E(U|Wi∩F) = E(U|Wi∩Fc) for all i,
where Wi is your choosing wager i and U is the utility random variable. In other words, the expected utility of each wager is unaffected by whether the experiment has been performed. But no! Suppose a coin has been tossed, and you are choosing between W1 where you get a dollar on heads and W2 where you get a dollar on tails. But let F be the experiment of looking at the coin. (This is a case for the original Good’s Theorem.) Then E(U|Wi∩Fc) = 0.50, while E(U|Wi∩F) is very close to 1.00 for the reason that when you find out what the coin is like, you are close to certain to bet on what you see, and hence you are close to certain to win your bet.
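A quick Monte Carlo check of that asymmetry. The 1% error rate is an assumption standing in for “close to certain to bet on what you see”:

```python
import random

random.seed(0)
EPS = 0.01    # assumed small chance of betting against what you saw
N = 200_000

wins_look = n_w1_look = 0    # trials where you looked (F) and chose W1
wins_blind = n_w1_blind = 0  # trials where you didn't look (Fc) and chose W1
for _ in range(N):
    heads = random.random() < 0.5
    # F: you look, then pick the matching wager except with probability EPS.
    pick_w1 = heads if random.random() >= EPS else not heads
    if pick_w1:
        n_w1_look += 1
        wins_look += heads   # W1 pays a dollar on heads
    # Fc: you don't look, so you pick W1 at random.
    if random.random() < 0.5:
        n_w1_blind += 1
        wins_blind += heads

print(wins_look / n_w1_look)    # E(U|W1∩F): close to 0.99
print(wins_blind / n_w1_blind)  # E(U|W1∩Fc): close to 0.50
```

Conditioning on the wager *and* the looking makes the wager choice carry information about the coin, which is exactly why (3) fails.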
If F1 is heads and F2 is tails, we solve the problem by replacing (3) with:
4. E(U|Wi∩Fj∩F) = E(U|Wi∩Fj∩Fc) for all i and j.
Namely, the expected utility of wager Wi given information Fj is independent of whether you performed the experiment F. But that only works because it makes sense to ask what the coin is showing if you aren’t looking: it makes sense to conditionalize on Fj ∩ Fc. But in the cases that interest me, there is no fact of the matter as to the result of the experiment when the experiment is not performed, since Molinism is false and we live in an indeterministic world. And in these cases, Fj ∩ Fc is the empty set: the Fj represent the possible results of the experiment but the experiment has no result when it is not performed.
I can get something by supposing a two-step procedure. You perform the experiment, event F, and you learn the result, event L. Then we can assume:
5. E(U|Wi∩F∩Lc) = E(U|Wi∩Fc) for all i
6. E(U|Wi∩Fj∩F∩L) = E(U|Wi∩Fj∩F∩Lc) for all i and j
7. P(Fj|F∩L) = P(Fj|F∩Lc) for all j.
Assumption (5) says that it makes no difference to the expected utility of a wager whether (a) the experiment is performed but its result is not learned or (b) the experiment is not performed at all. In other words, the experiment itself doesn’t affect things. Assumption (6) says that given a specific experimental result, learning the result makes no difference to the expected utility of each wager–result pair. Assumption (7) says that the results of the experiment are unaffected by whether you learn the result of the experiment.
Without (6) or (7), we wouldn’t expect to get the result we want. If we don’t have (6), it might be that utilities are wildly affected by whether you learn the result. (The simplest case is that the wagers all have a big negative payoff on L.) If we don’t have (7), then learning the result might have some evidential or retrocausal impact on what the result is, and then again we shouldn’t expect that learning the result is a good thing.
Given (5)–(7), I think we can now reason as follows. You are choosing between:
- (i) performing the experiment and learning the results
and
- (ii) not performing the experiment and (hence) not learning the results.
By (5), a rational agent will decide the same way in (ii) as in:
- (iii) performing the experiment and not learning the results,
and the expected utilities of (ii) and (iii) will be the same for this rational agent.
We now apply Good’s Theorem to the choice between (i) and (iii) (we will use (6) and (7) here, and assume the case is non-Newcombian and hence allows the use of Evidential Decision Theory) and get the result that (i) is at least as good as (iii). Since we have indifference between (ii) and (iii), it follows that (i) is at least as good as (ii). (We can also analyze the cases of a strict expected utility inequality.)
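Under assumptions (5)–(7), the chain can be checked numerically in a toy version of case (2). The 0.8 stake for a 1.00 heads payout is invented for illustration:

```python
from fractions import Fraction as F

# Hypothetical bet: pay 0.8, win 1 if the decisive toss lands heads.
COST = F(4, 5)
prior = {"fair": F(1, 2), "double": F(1, 2)}   # prior over coin hypotheses
p_heads = {"fair": F(1, 2), "double": F(1, 1)}  # chance of heads under each

def p_h(belief):
    return sum(belief[c] * p_heads[c] for c in belief)

def best_eu(belief):
    # A rational agent bets only if the bet is favorable given the belief.
    return max(F(0), p_h(belief) - COST)

# (ii) no experiment: decide on the prior alone.
eu_ii = best_eu(prior)

# (iii) extra toss performed but unlearned: by (5), your credences are untouched.
eu_iii = best_eu(prior)

# (i) extra toss performed and learned: condition on its result, then decide.
ph = p_h(prior)
post_h = {c: prior[c] * p_heads[c] / ph for c in prior}
post_t = {c: prior[c] * (1 - p_heads[c]) / (1 - ph) for c in prior}
eu_i = ph * best_eu(post_h) + (1 - ph) * best_eu(post_t)

assert eu_i >= eu_iii == eu_ii  # (i) >= (iii) = (ii), as the argument predicts
print(eu_i, eu_iii, eu_ii)      # 1/40 0 0
```

In this model the inequality is strict: learning the extra toss sometimes turns an unfavorable bet into a favorable one, and never the reverse.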
This is roundabout, but that’s not my main worry.
What I am really worried about is one technicality. To run the above argument, I had to assume that there is a way of performing the experiment without learning the result, namely that F ∩ Lc is non-empty. In general, however, we cannot assume this. Suppose, for instance, that we have a world with a quantum mechanics where observation causes collapse. Then the experiment of collapsing a wavefunction by means of observation cannot be done without observing the result of the experiment. In such scenarios, I cannot simply introduce a third option of performing the experiment and not learning the results, since that third option may not be consistent with the laws of physics. (And, of course, the utilities for breaking the laws of physics could be wild.)
But without introducing that third option, namely F ∩ Lc, I don’t know how to formulate the independence assumptions that are needed. I also don’t know if the problem is “merely technical” or “deep”. If I had to bet at even odds, I would bet on its being merely technical. But it might be deep.