Friday, October 31, 2014

Antipresentism

Presentists think that the past and future are unreal but the present is real. I was going to do a tongue-in-cheek post about an opposed view where we have the past and future but no present. But as I thought about it, the position grew on me a little philosophically, at some expense of the tongue-in-cheekness. Still, please take all I say below in good fun. If you get a plausible philosophical view out of it, that's great, but it's really just an exercise in philosophical imagination.

One way to think about antipresentism is to imagine the eternalist's four-dimensional universe, but then to remove one slice from it. Thus, we might have 1:59 pm and 2:01 pm, but no 2:00 pm. Put that way, the view isn't particularly attractive. Still, I do wonder why it would be more unattractive to remove just one time slice than to remove everything but that one time slice as the presentist does. It would, of course, be weird for the antipresentist to say that events first exist in the future, then pop out of existence just as one would have thought that they would come to be present, and then pop back into existence in the past. But perhaps no weirder than events coming out of nothing and going back into nothing, as on presentism. This way to think about antipresentism makes it a species of the A-theory.

But the antipresentisms I want to think about are ones that might be compatible with the B-theory. Start with the famous puzzles of Zeno and Augustine about the now. Augustine worried about the infinite thinness of the now. Zeno on the other hand worried about the fact that there are no processes in the now; there is no change in the now since within a single moment all is still.

One way of taking these ideas seriously is to see the present as an imaginary dividing line between the past and the future. There is in fact no dividing line: there is just the past and the future. (I think Joseph Diekemper's work inspired this thought.)

We might, for instance, instead of thinking of times as instants, think of the basic entities as temporally extended events or time intervals, not made out of instantaneous events or moments. An event or interval might be past, or it might be future, or—like the writing of this post—it might be both past and future. (Thus, "past" and "future" are taken weakly: "at least partly past" and "at least partly future".) Some events or time intervals have the special property of being both past and future. We can stipulate that those events or time intervals are present. But they aren't real because they are present. They're just lucky enough to have two holds on reality: they are past and they are future. (In this framework, the presentist's claim that only present events are real sounds very strange. For why should reality require both pastness and futurity—why wouldn't one be enough?) There are no events or time intervals that are solely present.

There is a natural weakly-earlier-than relation e on events. If we had instants of time, we would say that EeF if and only if some time at which E happens is earlier than some time at which F happens. But that's just to aid intuition. Because there are no instantaneous events, every event is weakly earlier than itself: e is reflexive. It is not transitive, however. The antipresentist theory I am sketching takes e to be primitive. There is also a symmetric temporal overlap relation o that can be defined in terms of e: EoF if and only if EeF and FeE.

If we like, we can now introduce abstract times. Maybe we can say that an abstract time is a maximally pairwise overlapping set of time intervals (or of events, if we prefer). We can say that t1 is earlier than t2 provided that some element of t1 is strictly earlier than some element of t2 (where E is strictly earlier than F provided EeF but not FeE). I haven't checked what formal properties this satisfies—I need to get ready for class now (!).
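As a sanity check on the formal claims about e (reflexivity, non-transitivity, and the definitions of o and strict earliness), here is a minimal Python sketch. It models events as open intervals of reals given by endpoint pairs, which the view itself deliberately does not take as basic, so this is only an external model; the encoding and function names are my own.

```python
# Toy model: an event is an open interval (start, end) with start < end.

def e(E, F):
    """Weakly-earlier-than: some time in E is earlier than some time in F."""
    return E[0] < F[1]

def o(E, F):
    """Temporal overlap, defined from e as in the post: EoF iff EeF and FeE."""
    return e(E, F) and e(F, E)

def strictly_earlier(E, F):
    """E is strictly earlier than F iff EeF but not FeE."""
    return e(E, F) and not e(F, E)

# e is reflexive: every (non-instantaneous) event is weakly earlier than itself.
writing = (1.0, 2.0)
assert e(writing, writing)

# e is not transitive: E e F and F e G can hold while E e G fails.
E, F, G = (5, 6), (0, 10), (1, 2)
assert e(E, F) and e(F, G) and not e(E, G)
```

The non-transitivity example works because F overlaps both E and G even though all of G is strictly earlier than all of E.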

Wednesday, October 29, 2014

How to make an infinite fair lottery out of infinitely many coin flips

This is a technical post arising from a question Rob Koons asked me.

An infinite sequence of fair and independent coin flips determines a sequence of zeroes and ones (e.g., zero = tails, one = heads). Let Ω be the set of all infinite zero/one sequences, equipped with the probability measure P corresponding to the fair and independent coin flips.

Notice an invariance property capturing at least part of the independence and fairness assumption. If ρn is the operation of flipping the nth element in the sequence, and ρnA for a subset A of Ω is the set obtained by applying ρn to every sequence in A, then P(ρnA)=P(A) whenever A is measurable. Moreover, intuition extends this idea beyond the measurable sets: A and ρnA are always going to be probabilistically on par.

Let Ω0 be the subset of Ω consisting of those sequences that have only finitely many ones in them. There is a natural one-to-one correspondence between Ω0 and the natural numbers N. Suppose a=(a0,a1,...,ak,0,0,0,...) is a member of Ω0. Then let N(a) be the natural number whose binary digits are ak...a1a0. Conversely, given a natural number n with binary digits ak...a1a0, let n* be the sequence (a0,a1,...,ak,0,0,0,...) in Ω0. Thus, we can interpret the members of Ω0 as binary numbers written least significant digit first.

For any members a and b of Ω, write a#b for the sequence whose nth element is the sum modulo 2 (xor) of the nth elements of a and b. For a subset B of Ω, let a#B = { a#b : b ∈ B }. We can think of a#B as a twist of B by a. If a is in Ω0, I will call it a finite twist. Any finite twist can be written as a finite sequence of flips ρn, where the positions n correspond to the non-zero digits in the sequence we twist by. Thus, if A is measurable, a finite twist of it will have the same probability as A does, and even if A is not measurable, a finite twist will be intuitively equivalent to A.
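The twist operation can likewise be sketched on the finite-prefix representation: a#b is coordinatewise addition mod 2 (xor), and a finite twist decomposes into single-position flips ρn. The names here are my own.

```python
from itertools import zip_longest

def twist(a, b):
    """a#b: xor the sequences coordinatewise (finite prefixes, zero-padded)."""
    return [x ^ y for x, y in zip_longest(a, b, fillvalue=0)]

def flip(s, n):
    """The flip ρn: toggle position n of the (finite-prefix) sequence s."""
    t = list(s) + [0] * max(0, n + 1 - len(s))
    t[n] ^= 1
    return t

a = [1, 0, 1]          # a finite twist: non-zero digits at positions 0 and 2
b = [0, 1, 1, 1]

# Twisting is self-inverse, since addition mod 2 is: a#(a#b) = b.
assert twist(a, twist(a, b)) == b

# The finite twist by a equals the composition of flips at a's non-zero positions.
assert flip(flip(b, 0), 2) == twist(a, b)
```

Self-inverseness is what the post uses later in the form (a#b)#a = b.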

Say that a~b if and only if a and b differ in only finitely many places. Thus, a~b if and only if a#b is a member of Ω0. This is an equivalence relation. By the Axiom of Choice, there is a set A0 such that for every b in Ω, there is a unique a in A0 with a~b. (Thus, A0 contains exactly one member of each equivalence class.) For any natural number n other than 0, let An=n*#A0; it's easy to check that this equation holds for n=0 as well, since 0* is the all-zeros sequence and so 0*#A0=A0.

It's easy to see that the An are disjoint and their union is all of Ω. They are disjoint because if a is in n*#A0 and m*#A0, then a=n*#b and a=m*#c for b and c in A0. It follows that b~c. But A0 contains only one member from each equivalence class, so b=c, and so n*#b=m*#b, from which it obviously follows that n*=m* and so n=m. Their union is all of Ω, because if b is in Ω, and a is the unique member of A0 such that a~b, then N(a#b)*#a=(a#b)#a=b (by obvious properties of addition modulo 2), and so b is a member of AN(a#b).

But all the An are going to be intuitively probabilistically on par: they are each a finite twist of A0.

Our lottery is now obvious. Given a random sequence of coin flips, we take its representation a in Ω and choose the unique number n such that a is in An.

This is really the Vitali-set construction applied directly to sequences of coin flips. Note that along the way we basically showed that Ω has nonmeasurable subsets. For the sets An cannot be measurable with respect to P, since they would all have equal probability, and so by countable additivity they would have to have probability zero, which would violate the total probability axiom.

The construction in this post is more complicated than the one here, I guess, but it has the advantage that it always works, while that construction only worked with probability 1.

Tuesday, October 28, 2014

A divine command and an open future

I'm piling on to the argument here.

Suppose God creates Adam and Eve, and gives them eternal life. He then commands them that:

  1. They freely pray for at least a minute on each of the infinitely many Sabbaths starting with day t7 (the day after their creation).
This seems a reasonable command. But it is unreasonable to command something that the agent cannot ever make true. And on open future views, it is impossible for (1) ever to be true. For at any time, (1) depends on future free choices. So on open future views, the command (1) is unreasonable. And that's a problem for open future views.

Monday, October 27, 2014

Yet another infinite population problem

There are infinitely many people in existence, unable to communicate with one another. An angel makes it known to all that if, and only if, infinitely many of them make some minor sacrifice, he will give them all a great benefit far outweighing the sacrifice. (Maybe the minor sacrifice is the payment of a dollar and the great benefit is eternal bliss for all of them.) You are one of the people.

It seems you can reason: We are making our decisions independently. Either infinitely many people other than me make the sacrifice or not. If they do, then there is no gain for anyone to my making it—we get the benefit anyway, and I unnecessarily make the sacrifice. If they don't, then there is no gain for anyone to my making it—we don't get the benefit even if I do, so why should I make the sacrifice?

If consequentialism is right, this reasoning seems exactly right. Yet one had better hope that it's not the case that everyone reasons like this.

The case reminds me of both the Newcomb paradox—though without the need for prediction—and the Prisoner's Dilemma. Like in the case of the Prisoner's Dilemma, it sounds like the problem is with selfishness and freeriding. But perhaps unlike in the case of the Prisoner's Dilemma, the problem really isn't about selfishness.

For suppose that the infinitely many people each occupy a different room of Hilbert's Hotel (numbered 1,2,3,...). Instead of being asked to make a sacrifice oneself, however, one is asked to agree to the imposition of a small inconvenience on the person in the next room. It seems quite unselfish to reason: My decision doesn't affect anyone else's (I so suppose—so the inconveniences are only imposed after all the decisions have been made). Either infinitely many people other than me will agree or not. If so, then we get the benefit, and it is pointless to impose the inconvenience on my neighbor. If not, then we don't get the benefit, and it is pointless to add to this loss the inconvenience to my neighbor.

Perhaps, though, the right way to think is this: If I agree—either in the original or the modified case—then my action partly constitutes a good collective (though not joint) action. If I don't agree, then my action runs a risk of partly constituting a bad collective (though not joint) action. And I have good reason to be on the side of the angels. But the paradoxicality doesn't evaporate.

I suspect this case, or one very close to it, is in the literature.

Aristotelian propositions, promises and an open future

Aristotelian propositions are "tensed propositions" that are supposed to be able to change their truth value. If I say "It is sunny", this is supposed to express an Aristotelian proposition p such that p is true today, but p was false on cloudy days.

Now, a necessary condition for me to have fulfilled a promise is that

  1. the proposition that was the object of the promise is true.
Suppose yesterday—i.e., on Sunday—I promised:
  2. Tomorrow, I will do a blog post on Aristotelian propositions.
And I do make such a post today, i.e., on Monday, but I won't make another one on Tuesday. If the propositions expressed by tensed sentences are Aristotelian, then I have not fulfilled my promise. For the tensed proposition expressed by (2) is not true.

So tensed sentences don't express Aristotelian propositions, it seems. Rather, the proposition that yesterday I expressed with (2) is different from the proposition that would have been expressed with (2) today. The proposition that I expressed with (2) yesterday is "tenseless".

The advocate of Aristotelian propositions does have a way out. She can modify the condition (1) for promise fulfillment to:

  3. the proposition that was the object of the promise was true at the time of the promise.
Now, there is no difficulty. The Aristotelian proposition that would have been expressed by (2) was true yesterday (since today I do make such a blog post) but isn't true today (since tomorrow I won't—I hope!).

But note that the advocates of an open future cannot go for (3). For on their view, the proposition that was the object of the promise wasn't true when I made the promise. Thus, there is a tension between holding that tensed sentences express Aristotelian propositions and accepting an Open Future. But a lot of Open Futurists do just that.

This is not an insoluble difficulty. One can, for instance, suppose an operator By that acts on an Aristotelian proposition and "shifts it backward" by y. Thus, B1 day applied to the Aristotelian proposition that tomorrow I will do a blog post on Aristotelian propositions is the Aristotelian proposition that today I do such a blog post. Then we replace condition (1) with:

  4. I fulfill at t2 a promise I made at t1 only if By(p) is true at t2, where y = t2 − t1 and p is the object of the promise I made at t1.
Still, it's weird, isn't it, that I fulfill a promise by bringing about something other than what I promised?

Saturday, October 25, 2014

Propositions that never become true but are probable

According to open future views, the proposition that in 2015 a fair and indeterministic coin lands heads has some probability but is not true. However, that proposition is apt to become true in 2015. So the probability of the proposition isn't the same as the probability of the proposition being true, since it's certainly not true now, but might well become true in 2015.

So far so good (or bad). Suppose God promises you that from 2015 onward, every year, a fair and indeterministic coin will be tossed. Now let Q be the proposition that infinitely many of the yearly coin tosses from 2015 onward land heads. Now note that on open future views Q can never possibly become true. For on any date, the proposition requires for its truth that there will be infinitely many fair and indeterministic heads results still future to that date, and on open future views a proposition that requires an undetermined future event won't be true.

So, open future views have to say that it's impossible for Q to ever be true. But a proposition such that it's impossible for it ever to be true should get probability zero. But the probability that of the infinitely many coin tosses, infinitely many will be heads is 1 according to classical probability theory. So open future views should be rejected.
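The probability-1 claim invoked here is backed by a standard result of classical probability theory, the second Borel–Cantelli lemma; the statement below is my addition, not part of the original post.

```latex
% Second Borel--Cantelli lemma (standard; my addition).
\textbf{Lemma.} Let $A_1, A_2, \ldots$ be independent events with
$\sum_{n=1}^{\infty} P(A_n) = \infty$. Then
\[
  P(\text{infinitely many } A_n \text{ occur}) = 1.
\]
% With $A_n$ the event that the year-$n$ coin lands heads, $P(A_n) = 1/2$,
% so the sum diverges and infinitely many heads occur with probability 1.
```

Applied to the yearly tosses, this gives P(Q) = 1 classically, while open future views say Q can never be true.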

Here's another argument in the same vein. Suppose I know I will have an eternal afterlife, and I promise you that I will freely pray for you every day, ad infinitum, starting November 1, 2014. On open future views, the object of my promise is a proposition that can never be true. But it's clearly a bad thing to promise something that can never be true. Yet what I promised wasn't a bad thing to promise. So open future views are false.

One might even have the direct intuition that one could keep the promise. That intuition is incompatible with open future views.

Friday, October 24, 2014

Yet yet another probability paradox

Start with a set M of countably infinitely many people, and a set D of countably infinitely many fair dice. Suppose that there is no natural ordering on the set D, and that each person in M has exactly one of the dice in D assigned to her. (Or if you prefer, these are sets of unique names of people and dice respectively.) You are a person in M, and you know what all the members of D are but have no information whatsoever on which member of D is yours. Now all the dice are simultaneously and independently tossed. Obviously, your probability that your die showed a six is 1/6.

Then the set of all the dice that landed sixes is revealed to you. Call the revealed set D6.

Suppose—this will be no surprise, as it had probability one—that the set of six-landing dice is infinite and the set of non-six-landing dice is infinite as well. Before it was revealed to you which dice landed sixes, your probability that your die yielded a six was 1/6. Did that probability change after you learned which set was the set of dice that landed sixes?

There are three options:

  1. No, it didn't change at all—it stayed at 1/6.
  2. Yes, it changed to an undefined value.
  3. Yes, it changed to some other defined value.
To choose between the options, observe first that your current probability that your die landed six must now be exactly the same as the probability that your die is a member of D6. But the fact that D6 is in fact the set of the six-showing dice carries no information as to whether your die is in D6. Since all the dice are independent and fair, learning which dice landed sixes is completely irrelevant to finding your die. So whatever probability you assign to your die being among the members of D6 after the revelation must be the same as the probability you assigned to it before the revelation.

So, if we choose option (1), then already before you found out that the six-showing dice were the members of D6, you would have had to assign probability 1/6 to your die being in D6. But there was no natural ordering on the set D of dice, so the set D6 will be epistemically on par with its complement D∖D6. Both are simply countably infinite sets with countably infinite complements, and we can easily define an isomorphism of D onto itself that swaps the two sets. So if prior to learning the dice results you assigned 1/6 to your die being in D6, you should have equally assigned 1/6 to your die being in D∖D6. But that's incoherent, since it's a given that the die is in D6 or D∖D6, but 1/6+1/6=1/3<1. So it seems that (1) is not an option.
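The swap isomorphism appealed to above is easy to exhibit: given an infinite, co-infinite set S of naturals, pair the i-th element of S with the i-th element of its complement and swap them. Here is a Python sketch restricted to an initial segment; the construction and names are my own illustration.

```python
from itertools import count, islice

def swap_map(in_S, n_pairs):
    """Swap on the first n_pairs elements of S and of its complement.

    in_S: predicate picking out the set S (standing in for D6).
    Returns a dict giving the (partial) bijection.
    """
    S = islice((k for k in count() if in_S(k)), n_pairs)
    comp = islice((k for k in count() if not in_S(k)), n_pairs)
    sigma = {}
    for s, c in zip(S, comp):
        sigma[s], sigma[c] = c, s    # swap the i-th elements of the two sets
    return sigma

# Example: let S be the even numbers (any infinite, co-infinite set works).
sigma = swap_map(lambda k: k % 2 == 0, 3)
assert sigma == {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}

# sigma is an involution, hence a bijection exchanging S with its complement.
assert all(sigma[sigma[k]] == k for k in sigma)
```

Extended over all of D, such a map carries D6 onto D∖D6 and back, which is why the two sets are epistemically on par.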

That leaves (2) and (3). But those options are very strange. They imply that in such infinite die rolling scenarios, more data can always destroy your reasonable initial probability assignments.

Now, you might think that the above scenario only works when you don't know which die is yours, and that's kind of a strange scenario. But one can modify the scenario to work even when you do know which die is yours, but there is some other unique feature you don't know about your die, say, which of infinitely many (metaphysically) possible exotic particles is hidden inside the die, which of infinitely many angels has your die as a personal favorite, or what an independent sequence of rolls of the die yielded. Then the set D will be the set of these unique features, and D6 will be the set of these features belonging to the dice that landed six.

Thursday, October 23, 2014

Yet another probability paradox

You know for sure that infinitely many people, including yourself, are each independently tossing fair coins. You don't see your coin's result. But then you learn for sure something amazing: only finitely many of the coins came up heads. This is extremely unlikely—indeed, by the Law of Large Numbers it has zero probability—but it seems nonetheless possible. What probability should you now assign to your coin being heads?

Intuition: Very small, maybe zero, maybe infinitesimal.

Here's an argument, however, that you should stick to your guns and continue to assign 1/2. Let F be the proposition that only finitely many of the coins landed heads. Let G be the proposition that of the coins other than yours, only finitely many of the coins landed heads. Learning G does not affect your probability that your coin landed heads. The coins are all independent, so no information about the other tosses tells you about yours. But, now, necessarily (given the setup that you toss only one coin) F is true if and only if G is true. For your coin won't make the difference between infinitely and finitely many heads. So learning F does not affect your probability that your coin landed heads.

To make sticking to your guns even more amazing, note that this works for any infinity of people, even a very high uncountable infinity. Wow!

Wednesday, October 22, 2014

Scoring rules and epistemic rationality

Scoring rules measure the inaccuracy of one's credences. Roughly, when p is true, and one assigns credence r to p, then a scoring rule measures the distance between r and 1, while when p is false, the scoring rule measures the distance between r and 0. The smaller the score, the better.

Some scoring rules are better than others. Let's suppose some scoring rules are right. Then this thesis seems to be implicit in some applications of scoring rules (e.g., here):

  1. If S is the right scoring rule, then a credence-assignment policy is epistemically rational only if following the policy minimizes expected total or average S-scores.
(And there will be a debate about whether we should have "total" or "average"—see link.)

But (1) is false. Here's a simple counterexample that works for most reasonable scoring rules. Consider a situation like this: A fair coin is flipped. If you assign credence 0.51 to heads, a mindreader who knows your credence assignments will immediately reveal to you how the coin landed. Otherwise, you will never have any information on how the coin landed.

Obviously, the epistemically rational thing to do is to assign 0.5 to heads. But this leads to higher expected total and average scores on most reasonable scoring rules. For if you assign 0.51, then once the mindreader tells you how the coin landed, you will update your credence to be very close to 0 or 1, and your score will be very low (i.e., very good). And the only cost of this is the slight suboptimality from briefly having credence 0.51 instead of the optimal 0.5. So the epistemically rational policy for dealing with situations like this, namely assigning 0.5, does less well in expected scores than the epistemically irrational policy of assigning 0.51.
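The arithmetic can be made concrete with the Brier score (squared distance from the truth value), one common scoring rule. The two-step horizon below is my own simplification of the scenario: one step before the mindreader would speak, one after.

```python
def brier(credence, truth):
    """Brier score: squared distance of the credence from the truth value (0 or 1)."""
    return (credence - truth) ** 2

# Policy A: assign 0.5. The mindreader stays silent, so the credence never moves.
# Expected score per step, averaging over heads and tails:
score_A = 0.5 * brier(0.5, 1) + 0.5 * brier(0.5, 0)          # 0.25

# Policy B: assign 0.51. The mindreader reveals the result, and at the next
# step the credence has jumped to (essentially) 1 or 0, scoring (essentially) 0.
score_B_step1 = 0.5 * brier(0.51, 1) + 0.5 * brier(0.51, 0)  # 0.2501
score_B_step2 = 0.5 * brier(1.0, 1) + 0.5 * brier(0.0, 0)    # 0.0

# Average over the two steps: the "irrational" policy B does better.
avg_A = (score_A + score_A) / 2
avg_B = (score_B_step1 + score_B_step2) / 2
assert avg_B < avg_A
```

The same comparison holds for total scores, and the gap only widens over longer horizons, since policy B keeps scoring near 0 at every later step.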

The case may seem farfetched. But there are real-life cases that may be similar. It may be that for psychological reasons when you are a bit more sure, or a bit less sure (depending on your character and the thesis), of a thesis than rationality calls for, you will be better able to investigate whether the thesis is true. Thus it may be better for your long term epistemic score that you do what is epistemically irrational.

Tuesday, October 21, 2014

Hair

For a while, I've thought that:

  1. Hair is not alive.
  2. Every part of me is alive.
  3. So, hair is not a part of me.
This goes against the wisdom embodied in court precedent which has, I understand, held that cutting someone's hair without consent is battery rather than, say, theft.

Interestingly, in L'usage de la Raison, Mersenne talks of the human as a microcosm and mentions that humans, like the universe, have non-living parts, and gives hair as an example. So Mersenne denies (2). And on further reflection, I don't think I really had much reason to accept (2). Indeed, there seem to be other clear counterexamples to (2), such as the electrons in my heart (they are parts of my heart, and parthood seems transitive, at least in this case). Maybe one could argue that while the electrons are at least parts of a living part of me, hair isn't a part of a living part of me. But that would beg the question. For if my hair is a part of me, it's also a part of my head, and my head is surely a living part of me.

So I don't see much ground for denying that hair is a part of me. It's just one of my many nonliving parts.

Of course, speaking fundamentally, there is no such thing as hair (just as there are no hearts, chairs, stones, etc.). There is only I, who am hirsute.

Monday, October 20, 2014

Limiting frequencies and probabilities

You are one of infinitely many blindfolded people arranged in a line with a beginning and no end. Some people have a red hat and others have a white hat. The process by which hat colors were assigned took no account of the order of people. You don't know where you are in the line. Suppose you learn the exact sequence of hat colors, say, RWRRRRWRWRWWWWRWWWR.... But you still don't know your position. What should your probability be that your hat is red?

A natural way to answer this is to compute the limiting frequency of reds. Let R(n) be the number of red hats among the first n people, and then see if R(n)/n converges to some number. If so, then that number, call it r, seems to be a reasonable value for the probability. Call the assignment of r to the probability when the limit r exists the frequency rule.
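The frequency rule is easy to sketch in Python: compute the running frequency R(n)/n along the sequence and watch where it settles. The encoding and names are my own.

```python
from fractions import Fraction

def running_frequency(hats, n):
    """R(n)/n: the fraction of red hats among the first n people."""
    return Fraction(sum(1 for h in hats[:n] if h == 'R'), n)

# An alternating sequence RWRWRW... has limiting frequency 1/2.
hats = ['R', 'W'] * 500
assert running_frequency(hats, 1000) == Fraction(1, 2)

# A sequence of n white hats followed by all reds: the running frequency
# R(m)/m tends to 1 as m grows, even though the first n hats are white.
n = 10
hats2 = ['W'] * n + ['R'] * 990
assert running_frequency(hats2, 1000) == Fraction(99, 100)
```

The second example is the one the next paragraph exploits: the rule assigns probability 1 to having a red hat, and hence to being past position n.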

Here's a curious and simple thing I hadn't noticed before. If you think the frequency rule is always the right rule, then for all integers n, you are committed to being almost certain that your position is greater than n. Here's why. Suppose that the sequence that comes up is n white hats followed by just red hats. The limit of R(m)/m as m goes to infinity is 1. So by the frequency rule, you're committed to assigning probability 1 to having a red hat. But since you have a red hat if and only if your position is greater than n, you are committed to assigning probability 1 to your position being greater than n. And since there is no connection between the hat color arrangement and the order of people on the line, if you have this commitment after learning the sequence of hat colors, you also had it before. The argument applies for all n, so for all n you must have been almost certain that your position in the sequence is greater than n.

And this in turn leads to the paradoxes of nonconglomerability. For instance, suppose that I flip a fair coin. If it's heads, I let N be your position number. If it's tails, I choose a number N at random such that P(N=n)=2^(−n). In either case, I reveal to you the value of N, but not how the flip went. For any number n, the probability that N=n is zero given heads (since you're almost certain that your position is greater than n), and the probability that N=n is greater than zero given tails, so by Bayes' Theorem you will be almost certain that the coin landed tails. So I can make you be sure that a coin landed tails, and thereby exploit you in paradoxical ways.
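The Bayes computation above can be sketched numerically. Since "probability zero given heads" is really a limit, I parametrize the heads-likelihood by an epsilon of my own and let it shrink; that parametrization is my illustrative device, not part of the post.

```python
def posterior_tails(n, eps):
    """P(tails | N = n), with P(N=n | heads) = eps and P(N=n | tails) = 2**(-n).

    Prior on the fair coin is 1/2 each way.
    """
    like_heads = eps
    like_tails = 2.0 ** (-n)
    return (0.5 * like_tails) / (0.5 * like_heads + 0.5 * like_tails)

# As eps shrinks to zero, the posterior for tails approaches 1, whatever n is.
for n in (1, 5, 20):
    assert posterior_tails(n, 1e-30) > 1 - 1e-9

# For any fixed eps > 0 the posterior is strictly less than 1.
assert posterior_tails(5, 0.01) < 1
```

So in the limit the revelation of N drives you to near-certainty in tails for every possible value of N, which is the nonconglomerability at issue.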

So the frequency rule isn't as innocent as it seems. It commits one to something like an infinite fair lottery.

Friday, October 17, 2014

Too late!

Let's say that something very good will happen to you if and only if the universe is in state S at midnight today. You labor mightily up until midnight to make the universe be in S. But then, surely, you stop and relax. There is no point to anything you may do after midnight with respect to the universe being in S at midnight, except for prayer or research on time machines or some other method of affecting the past. It's too late for anything else!

This line of thought immediately implies two-boxing in Newcomb's Paradox. For suppose that the predictor will decide on the contents of the boxes on the basis of her predictions tonight at midnight about your actions tomorrow at noon when you will be shown the two boxes. Her predictions are based on the state of the universe at midnight. Let S be the state of the universe being such as to make her predict that you will engage in one-boxing. Then until midnight you will labor mightily to make the universe be in S. You will read the works of epistemic decision theorists, and shut out from your mind the two-boxers' responses. But then midnight strikes. And then, surely, you stop and relax. There is no point to anything you may do after midnight with respect to whether the universe was in S at midnight or not, except for prayer or research on time machines or some other method of affecting the past, and in the Newcomb paradox one normally stipulates that such techniques are not relevant. In particular, with respect to the universe being in S at midnight tonight, it makes no sense to choose a single box tomorrow at noon. So you might as well choose two. Though, if you're lucky, by midnight tonight you will have got yourself into such a firm spirit of one-boxing that by noon tomorrow you will be blind to this thought and will choose only one box.

Thursday, October 16, 2014

Continuous Sleeping Beauty

A coin is tossed without the result being shown to you. If it's heads, you are put in a sensory deprivation chamber for 61 minutes. If it's tails, you are put in it for 121 minutes. Data from your past sensory deprivation chamber visits shows that after about a minute, you will lose all track of how long you've been in the chamber. So now you find yourself in the chamber, and realize that you've lost track of how long you've been there. What should your credence be that the coin landed heads?

Why is this a Sleeping Beauty case? Well, take the following discretized version. If it's heads, you get woken up 1,001,000 times and if it's tails, you get woken up 2,001,000 times. There is no memory wiping, but empirical data from past experiments shows that you completely stop keeping track of wake-up counts after you've been woken up a thousand times. So now you've been woken up, and you know you've stopped counting. What should your credence be? This is clearly a version of Sleeping Beauty, except that instead of memory-wiping we have a cessation of keeping count, which plays the same role of being a non-rational process disturbing normal rational processes.

Oddly, though, in the sensory deprivation chamber case, I have the intuition that you should go for 1/2, even though in the original Sleeping Beauty case I've argued for 1/3. I don't have much intuition about my discretized version of the sensory deprivation chamber case.

P.S. I was thinking of blogging another Sleeping Beauty case, but it looks like LessWrong has essentially beaten me to it. (There may be a published version somewhere, too.)

Tuesday, October 14, 2014

Clumps and continuity

Our backyard had been free of black cats for as long as we've lived in this house, well over 400 days, except that over the last two nights, a black cat has visited our yard, meowing at the doors and windows. It's reasonable to think that it will visit again tonight. Yet 99.5% of evenings have been free of black cats. So how can it be inductively reasonable to think a black cat will visit tonight?

Presumably, it is because the data from the last two days is more relevant than the data from the earlier days, even though there are two orders of magnitude more black-cat-free days. But why is that data more relevant?

Granted, yesterday and the day before are more temporally similar to today than the other days. But why should temporal similarity override other kinds of similarity? No doubt there are many features (say, temperature, lunar phase, etc.) in respect of which today is more like some other day in the past 400 than like yesterday or the day before—after all, the earlier 398 days have a wide diversity of properties. But temporal similarity seems particularly important.

Maybe it is because we expect clumping, both in time and in space. Two black-cat evenings suggest the beginning of a clump.

I am curious: Is our expectation of clumping a priori justified or only a posteriori? Clumping seems to be a kind of
continuity. Is an expectation of continuity a priori justified or only a posteriori?

Monday, October 13, 2014

Not a finetuning argument

In The Impiety... (1624), as part of the 6th argument for the existence of God, Mersenne writes:

The proportion found between all the bodies of the world also shows that there is a God who has made all the universe in weight, in number and in measure: for the earth has no other ratio with the sun than 1:140, with the moon than 40:1, ... (pp. 98-99)
(I don't know off hand what the ratios are exactly meant to be; if they are ratios of volume, the moon is within 25% of the truth but the sun is off several orders of magnitude; if they are ratios of diameter, the sun is within an order of magnitude of the truth but the moon is an order of magnitude off.)
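The parenthetical's checks can be verified against modern mean radii. The numerical values below (in km) are my assumptions: earth 6371, moon 1737, sun 695700.

```python
# Modern mean radii in km (assumed values, not from the post).
earth, moon, sun = 6371.0, 1737.0, 695_700.0

# Diameter ratios (the same as radius ratios):
assert round(earth / moon, 2) == 3.67   # vs Mersenne's 40:1 -> an order of magnitude off
assert round(sun / earth) == 109        # vs 140:1 -> within an order of magnitude

# Volume ratios scale as the cube of the radius ratio:
assert round((earth / moon) ** 3) == 49  # vs 40:1 -> within about 25%
assert (sun / earth) ** 3 > 1e6          # vs 140:1 -> off by several orders of magnitude
```

This matches the parenthetical: as volume ratios, the moon figure is close and the sun figure wildly off; as diameter ratios, the reverse.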

Mersenne's argument is full of such numerical (claimed) facts (the sun goes around the earth in 365.241 days, the moon traverses the Zodiac in 27 days, etc., etc.) and claims that God is needed to explain these facts. Now, I'm right now teaching on the fine-tuning argument, so I am sensitized to seeing such numbers in an argument for the existence of God. But it's striking that nowhere can I see Mersenne saying why these numbers are at all better than others, especially since surely some tuning facts seem very close at hand--surely, for instance, if the sun were much bigger or much smaller than it is, it would be too hot or too cold for life.

Mersenne explicitly insists that the numbers aren't explained by the essential natures of the objects, just before the above quote:

For the sun wouldn't be any the less the sun if it were closer or further from the earth, just as the stars could still be stars if they absented themselves from us by more than 14,000 earth radii.
Mersenne's argument seems to be a pure application of the idea that all contingent facts need explanation, and the arbitrariness of the numbers in the numerical statements seems to be cited precisely in order to show the contingency of the numerical statements. The argument suggests a strikingly strong commitment to a Principle of Sufficient Reason for contingent facts: all he needs to argue for a cosmic cause is to argue that there are contingent cosmic facts. Mersenne is confident that God has "many reasons" (as he says in the case of one of the numerical claims) for making the numbers be what they are, but these are reasons "which we aren't going to know except in Paradise" (101-102).

Mersenne's argument isn't a design argument--it doesn't advert to value-laden features that a God would have good reason to actualize. I think it's a kind of cosmological argument, but an eccentric one. Rather than arguing from generic features like motion or causation as Aquinas did, it focuses on very particular features.

The focus on these very particular features seems to have two benefits. The first is that it makes any appeal to necessity as the explanation implausible. Maybe it's necessary that there is motion, but it is incredible that it be necessary that the ratio of the diameter of the earth to that of the moon be 3.665:1 (to use modern numbers). So we get contingency very easily. The second benefit is one I didn't notice right away. The astronomical features cited by Mersenne are ones that would reasonably be thought to be permanent features. They are thus prime candidates to be dismissed with "it is so, as it has always been so." Mersenne's focus on the seeming arbitrariness of these features makes it very clear that that would be no explanation. Thus Mersenne's cosmological argument works whether or not the past is finite. It is not disturbed by an infinite regress but does not need one either.

Of course, we no longer think that these particular features are permanent in the same way--the earth and sun changed in size in the formation of the solar system. But impermanent features are no better explained by an infinite regress than permanent ones--the permanence of the features in Mersenne's argument is only heuristic (and I don't see him explicitly drawing the reader's attention to the permanence). Plus we could run the argument on the basis of the apparently permanent but seemingly arbitrary elements in the laws of nature, such as precise values of constants.

The downside of Mersenne's argument, however, is that unless it is explained why the features are desirable, it is difficult to show that the cause of these features of the universe must be intelligent.