Consider algorithms for sequentially placing a coin each day either in a heads or a tails configuration depending on how coins were placed on past days. For instance, the rule might say that if you placed heads yesterday, today you place tails, and if yesterday you placed tails, this time you place heads. The algorithm might depend on the date, too: maybe on Wednesdays you place heads if and only if you placed tails last Wednesday, but on all other days of the week you place heads.
Here's an interesting question about a coin-placing algorithm: Is it mathematically coherent to suppose that the algorithm had been running from eternity? For some algorithms, the answer is positive. Both of the algorithms I described above have that property. But not all algorithms are like that. For instance, here's an algorithm based on a comment by Ian: if infinitely many heads have been placed, place tails; otherwise, place heads. This algorithm could not have been running from eternity. [Proof: Suppose it had been. Then either it placed infinitely many heads or only finitely many. Suppose it placed infinitely many heads up to today. Then there was a latest heads day, say day n. Prior to day n there would still have been infinitely many heads (since from day n to today only one heads was placed, namely on day n itself), and so on day n the algorithm would have placed tails, which is a contradiction. So the algorithm would have placed only finitely many heads. But then prior to each day there would have been only finitely many heads, and so on each day a heads should have been placed, so that infinitely many heads would have been placed after all, and again we have a contradiction.]
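The impossibility argument can be checked mechanically for one tractable class of candidate eternal histories, the eventually-periodic ones. This is my own finite sketch, not part of the post's proof: represent a backwards-infinite history as ...pattern pattern pattern | suffix, note that whether infinitely many heads lie strictly in the past of a given position depends only on whether "H" occurs in the repeating pattern, and test whether Ian's rule is obeyed at every position. An exhaustive search over short patterns and suffixes finds no consistent history.

```python
from itertools import product

def ian_consistent(pattern, suffix):
    """Check whether the left-infinite history ...pattern|suffix obeys
    Ian's rule (tails if infinitely many heads lie strictly in the
    past, else heads) at every position."""
    # Infinitely many heads precede every position iff the repeating
    # pattern contains a heads; the finite stretch just before a given
    # position can't change that.
    prescribed = "T" if "H" in pattern else "H"
    return all(c == prescribed for c in pattern + suffix)

# Exhaustive search over all nonempty patterns up to length 4 and all
# suffixes up to length 3: no eventually-periodic eternal history works.
found = [
    (p, s)
    for n in range(1, 5)
    for p in map("".join, product("HT", repeat=n))
    for m in range(0, 4)
    for s in map("".join, product("HT", repeat=m))
    if ian_consistent(p, s)
]
print(found)  # → []
```

The search comes up empty for the same reason as the proof: a heads-containing pattern forces tails everywhere, and a heads-free pattern forces heads everywhere, and either way the pattern contradicts itself.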
Now here's a fun fact:
Theorem: If the algorithm only needs to check a finite number of past placements to determine a given day's placement, then there is a backwards-infinite sequence of coin placements that fits with the algorithm.
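For the special case where the algorithm reads a fixed window of the last k placements (a stronger assumption than the theorem's, and a different argument from the proof sketched below), there is a very short way to see this, sketched here in code of my own: with window size k there are only 2^k possible states, so iterating the rule must revisit a state, and unrolling that cycle leftwards forever gives a rule-consistent history with no beginning.

```python
def eternal_cycle(rule, k):
    """Return a string of placements that, repeated bi-infinitely,
    is consistent with `rule`, a function from the tuple of the last
    k placements to "H" or "T"."""
    state = ("H",) * k            # arbitrary starting window
    seen = {}                     # state -> index at which first seen
    trace = []
    while state not in seen:      # at most 2**k iterations by pigeonhole
        seen[state] = len(trace)
        nxt = rule(state)
        trace.append(nxt)
        state = state[1:] + (nxt,)
    return "".join(trace[seen[state]:])   # the cycle portion

# Example: the alternating rule ("opposite of yesterday") with k = 1
# yields the period "TH", i.e. the eternal run ...THTHTH...
period = eternal_cycle(lambda s: "T" if s[-1] == "H" else "H", 1)
print(period)  # → "TH"
```

Every placement inside the cycle is the rule's output on the k placements preceding it, and far enough back those k placements are themselves in the cycle, so the periodic history is consistent all the way back.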
In Ian's algorithm, the placement on each day of course depends on infinitely many past placements. [I find it moderately surprising that the proof doesn't use the Axiom of Choice. Basically, you first generate a sequence of finite sequences of heads or tails, each with the property that it could be generated by applying the algorithm a finite number of times to some pre-given backwards-infinite sequence, and each being alphabetically the smallest sequence with this property (writing heads = H and tails = T, and reading right-to-left). Then we're guaranteed convergence at any particular distance from the end of these sequences as their length goes to infinity. The result generalizes to any finite alphabet, not just heads/tails.]
So what? Maybe nothing much. Still, it's an interesting difference between the finite and the infinite: it is always mathematically coherent to suppose that an algorithm with finite memory for history has run from eternity, but not always mathematically coherent to suppose this of algorithms that look infinitely far back. (Causal finitism, however, rules out both kinds. I wonder if that's a count against causal finitism? If so, not much of one, since metaphysical possibility implies mathematical coherence but not conversely.)