You've existed for an infinite amount of time, and each day until the year 2100 a coin is tossed. You always know the results of the past tosses, and before each toss you are asked to guess the next toss. Given the Axiom of Choice, there is a mathematical strategy that guarantees you make only a finite number of mistakes.
Here's a simpler fact, no doubt well-known, but not dependent on the Axiom of Choice. There is a mathematical strategy that guarantees that you guess correctly infinitely often. This is surprising. Granted, it's not surprising that you guess correctly infinitely often--that is what you would expect. But what is surprising is that there is a guarantee of it! Here's the simple strategy:
- If among the past tosses, there were infinitely many heads, guess "heads"
- Otherwise, guess "tails".
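Here is a minimal finite-horizon sketch of the strategy in Python. It rests on a modeling assumption of mine, not anything in the post: an infinite past cannot be stored, so the past is compressed into the one bit the strategy actually consults, and all names are hypothetical.

```python
import random

def guess(past_has_infinitely_many_heads: bool) -> str:
    # The strategy from the post: "heads" iff the past holds
    # infinitely many heads, else "tails".
    return "heads" if past_has_infinitely_many_heads else "tails"

def simulate(days: int, past_has_infinitely_many_heads: bool) -> int:
    # Toss a fair coin on each remaining day and count correct guesses.
    # New tosses join the past, but finitely many tosses can never
    # change whether the past contains infinitely many heads, so the
    # flag (and hence the guess) stays fixed for the whole run.
    correct = 0
    for _ in range(days):
        toss = random.choice(["heads", "tails"])
        if guess(past_has_infinitely_many_heads) == toss:
            correct += 1
    return correct

print(simulate(10_000, past_has_infinitely_many_heads=True))
```

A finite simulation cannot, of course, exhibit "infinitely many correct guesses"; the point of the sketch is only that the guess is constant from day to day, which is the feature the guarantee argument exploits.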
I take the paradoxical existence of this mathematical strategy to be evidence for causal finitism: causal finitism rules out the possibility of your having observational information from infinitely many past tosses. Thus the strategy remains purely mathematical: it cannot be implemented in practice.
26 comments:
Let's suppose God offers to play a game with a human being. It works like this. God chooses heads or tails, and tells the human being his choice. Then the human being gets to choose heads or tails. They repeat, and this goes on forever.
God wins if his choice matches the human's infinitely often, and the human wins if God's choice never matches his, or only finitely often.
God says, "I have a guaranteed win: I can look at the future sequence, and if the human chooses heads infinitely often, I say heads, otherwise I say tails."
The human says, "I have a guaranteed win. At each point if God says heads, I say tails. If God says tails, I say heads."
There clearly is a problem here, but it is not clear that the problem is God's foreknowledge, while it seems to follow from your argument that God's knowledge is precisely the problem.
God's announcement will depend -- either causally or quasi-causally -- on infinitely many decisions by you. Thus, causal finitism rules this out. It does not, I think, rule out God's *knowledge* of the coin tosses, but it renders the knowledge unusable for practical purposes.
However, the case you give seems closely akin to this one: You play a game with God. God chooses heads or tails once and tells you his choice; then you choose heads or tails. God wins if the choices match. Otherwise you win. This happens only once. And, again, there is an argument that God can win and an argument that the human can win.
In this case, there are no infinities involved. The problem with this scenario is that although God foreknows what you will choose, his foreknowledge depends on your choice. If your choice depends on God's announcement, then we have a circular dependence, which is impossible. Hence in this case while God can foreknow, he cannot announce to you what he foreknew, for that would create a circular dependency.
There is also a fun non-divine version of my story. Notice how the argument I gave seems to work no matter how the coin tosses are produced. Thus, it should work even if the tosses are controlled by another agent (say, with magnets, or just crudely by direct placement).
Very well. Suppose you play with me. You apply the strategy in my post. But I control the coin tosses, and I make sure that the toss comes out the opposite of your prediction.
It seems you have a perfectly well-defined strategy: "heads" iff infinitely many heads; else, "tails". I have a perfectly well-defined strategy: place the opposite of what you say.
But we can't both follow our strategies. Note that finitely many further tosses can never change whether the past contains infinitely many heads, so your strategy guarantees you always guess "heads" or you always guess "tails". If you always guess "heads", I will have always placed tails, in which case the past holds only finitely many heads and you won't have guessed "heads". If you always guess "tails", I will have always placed heads, in which case the past holds infinitely many heads and you won't have guessed "tails".
I am not sure what to make of this.
Alex:
I don't think I get your scenario:
1. If you're guessing about past coin tosses and you see infinitely many heads (or tails), then just by repeating what you see, you'll "guess" right infinitely many times. But that's not even a guess; you're just saying what you know.
2. If you're guessing about future coin tosses, you're guaranteed to get it right infinitely many times only in this sense: if there is probability 1 that the future sequence will contain infinitely many heads and infinitely many tails, then just guess "heads" always, and the probability of getting it right infinitely many times in the future is 1. But I don't know that you have probability 1: if you're talking epistemic probability, there is a non-zero chance that something will disrupt the sequence, or your life, in a finite amount of time. If you're talking about some other probability, I would ask what it is, and why the probability is 1 if you have info from the past but not if you don't.
3. If you're talking about guessing only the immediate future coin toss, point 2 applies.
I guess I'm missing something here, but I don't know what it is.
It's the third one. Sure, you get probability one without any strategy. But probability one is not a guarantee.
Okay, so let's suppose you're about to guess a future coin toss, at some time t0. You see infinitely many past heads. You guess "heads". It's tails. You guess "heads" again. It's tails again. In fact, there will be no more heads, so you fail every single time. Sure, you got it right before t0, infinitely many times if you always guessed "heads". But at t0, you already had it right infinitely many times.
So, at each point in time (i.e., when you have to guess), you have a guarantee that, counting all of your guesses, you get it right infinitely many times only if you have already guessed right infinitely many times. That's not surprising.
In fact, we can divide the scenario in two possibilities as follows:
You've existed for an infinite amount of time, and each day until the year 2100 a coin is tossed. Then:
1. If you already guessed it right infinitely many times, there is a guarantee that, in total, you will guess it right infinitely many times. That's trivially true.
2. If you only guessed it right finitely many times, then there is no known strategy you can adopt that will guarantee that you will get it right infinitely many times.
Think of the strategy as a policy. It's surprising there is a policy that is guaranteed to succeed.
Consider a reward structure. If you guessed right even once, once the game ends, namely in 2100, you get happiness for eternity. Otherwise, the opposite. It's surprising there is a policy such that if you were to adopt it, you'd be guaranteed happiness.
Ok, after thinking about it, I think you are right that something at least similar to causal finitism follows from this. But you stated it incorrectly in saying that what follows is that you cannot have observational information from an infinite number of tosses.
The real problem is not having information. As you say, the strategy implies either that you always guess heads or that you always guess tails. Let's suppose it is heads (as you would expect with a fair coin). Then you have always been guessing heads. Suppose someone asks, "Why have you always been guessing heads?" The response is, "Because I have always been looking at the sequence and there were always an infinite number of previous cases of heads." Then every individual case of guessing depends on a past, but there is no past relative to the whole series of cases of guessing, since the series has been going on for an infinite time. So the impossibility here seems to result from the way that the infinite series of guesses depends on the infinite series of tosses; causality would be violated because the series of tosses is not in fact prior in the way that would be necessary in order for the series of tosses to cause the series of guesses.
This implies that the real problem is not saying that the person can look at an infinite series of tosses, but that he can have a strategy of always already having looked at that series and acted on it, for an infinite time. This is analogous to your saying that God could have foreknowledge but not act on it.
Alex:
But the description I gave in my previous post is accurate, and unsurprising.
Another way of stating the result is: "You existed for an infinite amount of time, and each day until the year 2100 a coin was tossed. At every time, you've always known the results of previous tosses. Also, you've always guessed "tails" if you knew that infinitely many coins were already tails. Otherwise, you have always guessed "heads". Then, you have guessed correctly infinitely many times."
That's true, but not surprising at all. Then again, what is surprising to a person isn't always surprising to another person, so maybe it's just not surprising to me.
Addition: I do not agree with the characterization of the scenario as "there is a policy that is guaranteed to succeed", because that seems to indicate that a person can make a choice and adopt that policy, but that is not so.
In fact, if you make a choice to follow a certain policy PX (whatever PX is) at some time t0 (whatever t0 is), the policy is not guaranteed to make you succeed infinitely many times in the future (or even more than n times, for all natural n), as far as we know, and regardless of what we know about the past. On the other hand, no matter what PX is, if you've already succeeded infinitely many times, then that's that: you have.
Granted, you can define "guaranteed to succeed" in a way that makes the statement true by counting past successes as in my immediately previous post above, but that is still unsurprising to me.
For me the surprising thing is that using information about past performance yields a strategy that, when generalized, does in some sense better than just guessing heads all the time.
When you say "using information about past performance" do you mean that the person making the choices is using information about past performance, or we are?
If it's the former, then information about past performance does not give the person making the choices any advantageous strategy - at least, not the strategy under consideration.
For example, let's say that Alice exists on 03/12/2015 (DD/MM/YYYY), and existed for an infinite amount of time. She decides to use info about the past (which she has; she knows the previous results), in order to guess. She sees that infinitely many coin tosses were heads. She picks "heads". What's her probability of winning? 1/2. What's the probability that she'll win more than n times, for any n (not counting the number of times she already won)? Almost 1, or 1 if we assume that the tossing and guessing will continue forever, and she will continue with that strategy. But on the other hand, the same probability results if she picks "tails" instead (here, I'm talking about epistemic probability from the perspective of Alice's info, and assuming that the earlier series does not allow her to properly reckon that the coin isn't fair). The strategy is neither better nor worse than picking "tails" all the time, or "heads" all the time.
If it's the latter: from the assumption that she's been using that strategy daily for an infinite amount of time, it follows that she got it right infinitely many times, whereas from the assumption that she just guessed "heads" all the time, it does not follow that she already got it right infinitely many times. But that does not seem at all surprising to me (rather, it seems clear from the description of the strategy), and I wouldn't be inclined to say that the strategy in some sense does better when generalized. That she already got it right infinitely many times seems to me like an immediate consequence of the way the scenario is described.
But as I said, sometimes people get surprised by different things.
I am thinking that strategy A is better than strategy B if (but not only if) (a) in some cases following-A-always is better than following-B-always and (b) in no cases is following-A-always worse than following-B-always.
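If I read this as a weak-dominance condition, it can be encoded as a toy check. A minimal sketch, assuming the outcomes of always-following each strategy across cases can be scored numerically; the function name and encoding are mine, purely for illustration:

```python
def dominates(a_outcomes: list[float], b_outcomes: list[float]) -> bool:
    # A is better than B in the stated sense: (a) following-A-always
    # beats following-B-always in some case, and (b) it is worse in
    # no case. a_outcomes[i] and b_outcomes[i] score the same case i.
    better_somewhere = any(a > b for a, b in zip(a_outcomes, b_outcomes))
    never_worse = all(a >= b for a, b in zip(a_outcomes, b_outcomes))
    return better_somewhere and never_worse
```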
If "better" is established in terms of epistemic rationality, assuming fair coins, neither OP (i.e., the strategy described in the OP) nor AT (i.e., "always tails") is a better strategy than the other.
If it's results that count (which is what you're saying, if I got this right), we may consider the following scenarios:
S1: Bob, at t0, is pondering what strategy to follow from that moment on. He chooses the strategy "OP" from that point on. He ends up picking always heads, and losing every single time in the future.
S2: Alice, at t0, decides to pick AT. She wins every single time in the future.
In that case, choosing "AT" turned out (at any time after t0) to be better than following OP.
But if I got this right, your definition also counts what happened in the past, so that rules out S1 and S2 as counterexamples.
I guess in that case, OP would be better than AT, but I don't think this reflects what usually is described as "better". For example, we may consider the following scenarios:
S3-4-5:
Before t0, the results were:
10^999 immediately previous tosses: tails.
1 toss before that: heads.
10^999 tosses before that: tails.
1 toss before that: heads.
10^999 tosses before that: tails.
1 toss before that: heads.
Etc.
S3: Bob, at t0, is pondering what strategy to follow from that moment on. As he has always done, he chooses to continue picking the strategy "OP" from that point on.
It turns out that, counting from t0 into the future (i.e., apart from his previous guesses), his number of victories over total number of tosses is never greater than 2/(10^999).
S4: Alice, at t0, is pondering what strategy to follow from that moment on. As she has always done, she chooses to continue with the strategy "AT" from that point on.
It turns out that, counting from t0 into the future (i.e., apart from her previous guesses), her number of defeats over total number of tosses is never greater than 2/(10^999).
S5: José, at t0, decides to continue with his earlier strategy (say, JS), which involves picking 10^999 times tails, then one head, etc. So far, he has always won. He keeps winning always.
It turns out that in that scenario, JS does better than OP in your terminology, but AT does not. That is not what I would intuitively call "not better", if results are what count (and the same goes for the S1 and S2 cases; counting previous cases when assessing the overall quality of strategies picked at a certain time seems weird to me).
Anyway, by your definition, OP is better than AT or AH, but not better than, say, 1T1H (i.e., picking tails, heads, tails, etc.), or JS.
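A scaled-down numerical check of the S3 and S4 ratio claims, with a period of 1,000 standing in for 10^999 and a truncated horizon; the names and the truncation are my own assumptions:

```python
P = 1_000  # stands in for 10^999

def toss(n: int) -> str:
    # Toss on day n, counting from t0 and continuing the stated
    # pattern: P tails, then one head, repeating.
    return "heads" if n % (P + 1) == P else "tails"

days = 100 * (P + 1)
# The infinite past contains one head per period, hence infinitely
# many heads, so OP guesses "heads" every day; AT guesses "tails"
# every day and loses exactly on the heads.
op_win_ratio = sum(toss(n) == "heads" for n in range(days)) / days
at_loss_ratio = sum(toss(n) == "heads" for n in range(days)) / days

print(op_win_ratio, at_loss_ratio, 2 / P)  # both ratios are 1/(P+1) < 2/P
```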
I'm not thinking about what policy to adopt at a particular time, but what policy it would be best to have always adopted. There are some Newcomb related issues that I may want to think about here, though.
Alex:
I was not familiar with the result, but I’m not surprised. Our intuition is tuned to our experience. We (i.e. humans) have beginnings and finite capacities. The scenario has no beginning and the rule requires infinite capacity, so it’s not surprising that the outcome is unintuitive.
The “hats” scenario avoids the problem of beginnings – this may address Angra’s concern. (A countable infinity of numbered people, each randomly given a red or black hat, are placed so that each can see all and only the higher-numbered hats. Each person has to guess the colour of his own hat. Collusion is allowed before the hats are assigned, but not afterwards.) It still requires infinite capacities and also funny physics.
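For reference, here is a sketch (my summary, not Ian's wording) of the standard Axiom of Choice argument behind such hat strategies:

```latex
% Call two hat colourings equivalent iff they agree from some point on:
\[
  x \sim y \iff \exists N\ \forall n \ge N:\ x_n = y_n.
\]
% By the Axiom of Choice, the people agree in advance on a
% representative r(C) of each equivalence class C. Person n sees
% x_{n+1}, x_{n+2}, \dots, which already determines the class [x],
% and guesses r([x])_n. Since x and r([x]) differ at only finitely
% many indices, all but finitely many people guess correctly.
```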
On the “fun non-divine” version: The “guesser’s” rule and the “placer’s” rule can be merged into a single rule for the “placer”: Place “heads” iff finitely many “heads”, else place “tails”. This is mathematically impossible (at least in a past-infinite scenario), even leaving metaphysics aside: whether the past holds finitely many “heads” never changes from day to day, so the merged rule dictates either placing “heads” every day (which makes the past “heads” infinite) or placing “tails” every day (which makes them finite), and either way the rule contradicts its own condition. Note that the “rule” is really an infinity of rules, one for each coin-toss. A finite number of such rules would be compatible. So would an open future. But a past infinity is not.
Another response to the scenario:
The analysis of temporal relations ('happened before,' 'happened after') precludes any event P whose probability is zero relative to event Q from happening before Q. In other words, it is analytically necessary that the past always have a probability greater than 0 relative to the future.
I'm suggesting that 'does not have a probability of 0 relative to' is, so to speak, written into 'is in the past of' in something like the same way that 'has interior angles that sum to 180 degrees' is written into 'is a Euclidean triangle.'
If this is so, then it is analytically impossible for there ever to be a time before which there already have been an infinite series of fair heads-tails coin tosses which did not include infinitely many heads. The probability of any given fair infinite series of coin-tosses turning out not to include infinitely many heads is 0, relative to any future. So it is a probability 0 event (relative to every future). Given my hypothesis about the analysis of time, it is analytically impossible for a probability 0 event (relative to every future) to be in the past of any future.
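The probability-0 claim here has a short standard derivation, sketched below in ordinary measure-theoretic terms (my summary, not Richard's):

```latex
% For each N, by the independence of fair tosses,
\[
  P(\text{no heads after toss } N) = \lim_{k \to \infty} (1/2)^k = 0,
\]
% and "only finitely many heads" is the countable union over N of
% these events, so
\[
  P(\text{only finitely many heads}) \le \sum_{N=1}^{\infty} 0 = 0.
\]
```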
I don't know of any good empirical or theological reason for thinking that there are any past events whose probability is 0 relative to their futures. Quantum mechanics doesn't give us such a case: particles in superpositions of infinitely many different point-sized locations have never been observed to collapse into exactly one of those point-locations. Instead, any such particle is observed to collapse into a superposition of some infinite proper subset of the original infinite set of point-locations. The odds of THAT happening are greater than 0, so the fact that it happens does not give us a case of a probability 0 event.
So the proposal is that the analysis of temporal relations precludes --- as a matter of analytic necessity --- that any event with a probability 0 relative to every future should happen at all. It seems to me this idea should be on the table, both because we haven't (so far as I know) found any empirical reason to reject it AND because this proposal may fill an important explanatory role: it helps explain the curious fact that the future has any evidential relation to the past at all. On my proposal about analysis, the future MUST evidence the past to some non-zero degree, because it is analytically necessary that the past has a non-zero probability given the future.
It should also help with the policy paradox about coin-guessing, since it rules out the coherence (analytically) of one of the two cases (the one where there aren't infinitely many heads). Instead, the proposal entails that given an infinite series of fair tosses, there MUST be infinitely many heads and there MUST be infinitely many tails. So the policy of guessing the same way every time guarantees you will be right infinitely often.
(continued)
This is still surprising --- viz., that there *is* a policy which carries a guarantee of infinitely many correct guesses --- but the advance made by the thesis about the analysis of 'happened before' is that it gives us an /explanation/ of why this surprising fact obtains. For comparison, causal finitism resolves the paradox by providing grounds for denying that there is such a policy, rather than by explaining why there is one.
It looks like causal finitism has one advantage over the analysis proposal: the causal finitist response to the paradox affirms one of our intuitions --- the intuition that there *shouldn't* be a policy which guarantees infinite correct guesses --- which the analysis proposal must deny (because on the analysis proposal, there *is* such a policy). But then, I'm concerned that if causal finitism is true, then some broader sort of finitism about the past must also be true: one which rules out the very possibility of infinitely many past events, be they coin tosses or physical events or anything else. So I think it's an advantage of the analysis proposal (vis-a-vis causal finitism) that the analysis proposal is consistent with the possibility of infinitely many past events.
Ian,
Assuming the funny physics and infinite capacity, I don't find the existence of a strategy under those conditions counterintuitive, but I'm not sure that answers whether the hat problem addresses one of my concerns. Which one do you have in mind?
Richard:
There are, indeed, other ways out of the paradox than by embracing causal finitism. But it is unlikely that the other ways will handle the many other paradoxes that causal finitism does. :-)
Ian:
I do like the simultaneous version, too. Thanks!
Alex:
Do you have anywhere a list of the motivations for causal finitism?
Without seeing the list, I'm concerned that most of the motivations will involve infinitely complex situations (by which I mean events --- for example, a series of infinitely many past coin flips --- which consist of infinitely many subsidiary events). Many infinitely complex situations invite the reply that those situations are impossible because they have a probability of 0 (and probability 0 events are impossible because of the analysis of temporal relations). For instance, there's a Reaper paradox which I believe you and Rob Koons have treated which motivates finitism about past events, and it involves a series of infinitely many numerically distinct Reapers. So then I'm suspicious that the denial of the analytic possibility of probability 0 events might, in fact, resolve most or all of the same paradoxes that motivate causal finitism.
Angra,
The “hats” version (unlike the original version) gives the people time to discuss and agree on a policy before the hats are assigned. Of course, there is still a question of how an infinite number of people could come to an agreement...
Ian, thanks for clarifying.
Richard:
It's hard to say, because for a lot of the cases it is very hard to see what the prior probabilities would be.
That said, I suspect that there are lots of truths that have prior probability zero. For instance, the precise values of constants in the laws of nature.
Alex,
I take the trouble with situations with infinite definite complexity to be that they have prior probabilities of 0 almost no matter how we set the priors for the probabilities of their component parts. If there are infinitely many independent elements of the situation each of which has a probability less than 1, the whole situation tends to have a probability of 0 due to the way that the probability of a conjunction depends on the probabilities of its conjuncts (the more probabilistically independent conjuncts, the lower the conjunction's overall probability).
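A worked instance of the tendency described here, together with the reason the hedge "tends to" matters (the example is mine):

```latex
% Fully specify an infinite sequence s of independent fair tosses.
% Each toss contributes a factor of 1/2, so
\[
  P(s) \le (1/2)^N \text{ for every } N, \qquad \text{hence } P(s) = 0.
\]
% The hedge matters: if the n-th conjunct instead has probability
% 1 - 2^{-n}, the infinite product converges to a positive value,
\[
  \prod_{n=1}^{\infty} \left(1 - 2^{-n}\right) > 0,
\]
% so not every infinite independent conjunction is null.
```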
You mentioned the precise values of constants of nature as a plausible example of a probability zero fact. But my proposal about time only causes trouble for situations which (i) have a past or future and (ii) have probability 0 even with respect to that past or future.
If there are definite values of the constants of nature, which those constants have at particular times, and yet those constants do not randomly vary over time, then it seems likely that their being what they are at a given time has a probability of 1 with respect to those times' immediate pasts or futures. In such a world, for instance, the fact that the gravitational constant = G at t1 predicts with probability 1 that the gravitational constant still = G before and after t1. So this sort of case does not give rise to probability zero.
Alternatively, if the constants of nature do NOT have definite and exact values at times, then even if they DO change randomly over time, consider the fact of their having the (indefinite and/or inexact) values which they happen to have at a given time: this fact should have a non-zero probability with regard to that time's immediate past or future. The indefiniteness and/or inexactness of the constants grants a certain width in logical space to the fact of those values obtaining, and this keeps the probability of the fact in question from being forced to 0 by the indifference principle.
My analysis would only rule out definite exact values for the constants (moreover, such values had at times) which do change randomly over time. This sort of commitment seems an advantage rather than a drawback: it makes it look as though the proposed analysis of temporal relations is doing positive work in predicting and explaining something of the weird superpositional (indefinite and/or inexact) nature of the empirical universe. The universe must be superpositioned enough to guarantee that its definite complexity is not infinite in a way which gives its overall state at any particular time a probability of zero relative to that time's future or past.
Are there any other truths which seem to have probability 0 relative to the pasts or futures of the times at which they are true?