Suppose there is an infinite line of paving stones, labeled 1, 2, 3, ..., on each of which there is a blindfolded person. You are one of these persons. That's all you know. How likely is it you're on a number not divisible by ten? The obvious answer is: 9/10. But now I give you a bit more information. Yesterday, all the same people were standing on the paving stones, but differently arranged. At midnight, all the people were teleported, in such a way that the people who yesterday were standing on numbers divisible by ten are now standing on numbers not divisible by ten, and *vice versa*. Should you change your estimate of the likelihood you're on a number not divisible by ten?

Suppose you stick to your current estimate. Then we can ask: How likely is it that you were yesterday on a number not divisible by ten? Exactly the same reasoning that led to your 9/10 answer now should give you a 9/10 answer to the back-dated question. But the two probabilities are inconsistent: you've assigned probability 9/10 to the proposition *p*_{1} that yesterday you were on a number not divisible by ten and 9/10 to the proposition *p*_{2} that today you are on a number not divisible by ten, even though *p*_{1} holds if and only if *p*_{2} does not (this violates finite additivity).

So you had better not stick to your current estimate. You have two natural choices left: switch to 1/2 or switch to 1/10. Switching to 1/2 is not reasonable. Let's imagine that today is the earlier day, and you have a button you can choose to press. If you press it, the big switch will happen—the folks on numbers divisible by ten will be swapped with the folks on numbers not divisible by ten. If you had switched to 1/2 in my earlier story, then if you press the button, you should also switch your probabilities to 1/2, while if you don't press the button, you should clearly stick with 9/10. But it's absurd that your decision whether to press the button or not should affect your probabilities (assume that there is no correlation between what decision you make and what number you're on).

Alright, so the right answer seems to be: switch to 1/10. But this means that the governing probabilities in infinite cases are those derived from the *initial* arrangements. Why should that be so?

Here is a suggestion. We assume that the initial arrangement came from some sort of a regular process, perhaps stochastic (where "regular" is understood in the same sense as "regularity" in discussions of laws). For instance, maybe God or a natural process brought about which squares the people go on by taking the people one by one, and assigning them to squares using some natural probability distribution, like probability 1/2 to 1, 1/4 to 2, 1/8 to 3, and so on, with the assignment being iterated until a vacant square is found (equivalently do it in one step: use this distribution but condition on vacancy). And, maybe, for most of the "regular" distributions, once enough people are laid down, we get about a 9/10 chance that the process will land you on a square not divisible by ten.
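To make the suggestion concrete, the toy distribution above (probability 2^-k for square k, with each placement conditioned on the square being vacant) can be simulated, and the share of people landing on squares not divisible by ten does come out near 9/10. This is only an illustrative sketch: the inverse-CDF sampling method, the population size, and the random seed are my own choices, not anything specified in the post.

```python
import random
from fractions import Fraction

def place_people(n, seed=1):
    """Place n people on squares 1, 2, 3, ..., where square k has prior
    probability 2**-k and each placement is conditioned on the square
    being vacant (the one-step form of 'redraw until a vacant square')."""
    rng = random.Random(seed)
    occupied = set()
    for _ in range(n):
        # exact probability mass sitting on the vacant squares
        vacant_mass = Fraction(1) - sum(Fraction(1, 2**k) for k in occupied)
        # inverse-CDF sampling with a fine dyadic uniform draw
        bits = len(occupied) + 64
        target = Fraction(rng.getrandbits(bits), 2**bits) * vacant_mass
        acc, k = Fraction(0), 1
        while True:
            if k not in occupied:
                acc += Fraction(1, 2**k)
                if acc > target:
                    occupied.add(k)
                    break
            k += 1
    return occupied

squares = place_people(200)
share = sum(1 for k in squares if k % 10 != 0) / len(squares)
print(share)  # roughly 0.9
```

Exact rational arithmetic (`Fraction`) is used because the weights 2^-k underflow ordinary floats long before 200 people are placed; the occupied squares end up being, roughly, an initial segment of the line, which is why the 9/10 share appears.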

This assumes, however, that there is a *process* that puts people on squares. Suppose this assumption is false. Then there seems to be no reason to privilege the probability distribution from the first time the folks are put on squares. And our intuitions now lead to inconsistency: assigning 9/10 to *p*_{1} and 9/10 to *p*_{2}.

Where has all this got us? I think there is an argument here that absurdity results from an actual, simultaneous infinity of uncaused objects. But if an actual infinity of objects is possible, and it is possible to have a contingent uncaused object, then it is very plausible (this is an ampliative inference) that it is possible to have an actual infinity of simultaneous contingent uncaused objects.

Therefore: either it is impossible to have an uncaused object or it is impossible to have an actual infinity of simultaneous contingent objects. But it is possible to have an actual infinity of simultaneous contingent objects if it is possible to have an infinite past. This follows by al-Ghazali's argument: just imagine at each past day a new immortal soul coming into existence, and observe that by now we'll have a simultaneous infinity of objects. So, it is either impossible to have an uncaused contingent object or it is impossible to have an infinite past. We thus have an argument for the disjunction of the premises of the Kalaam argument, which is kind of cool, since both of the premises of the argument have been disputed. Of course, it would be nicer to have an argument for their conjunction. But this is some progress. And it may be that further thought along these lines will yield more fruit.

## 37 comments:

A round number being a number divisible by 10, I ask myself how likely I am to be on a round number in your scenario. The obvious answer is 1/10. But having just read your post, another obvious answer must be that there is no likelihood. Why should there be? I am on some stone, and it is either round-numbered or not. Now, I do feel the pull of a tendency to say that there is a likelihood, and that it is obviously 0.1, but then I ask myself why I feel such a pull.

For myself, it is by analogy with similar finite situations. Were I on a line of very many paving stones, labeled 1, 2, 3 and so forth to some very big number, then the likelihood of my being on a round-numbered stone would be very close to 1/10. I could have arrived at my position by any method imaginable, for all I know, but there are still 9 non-round-numbered paving stones for every round-numbered stone, and so my guesstimate of 0.1 seems reasonable to me.

But in your scenario, there are infinitely many paving stones, and in your post you have given me a good reason to doubt that I can extrapolate from a similar finite scenario to yours (and in my background knowledge of Hilbert's Hotel I have more reasons to doubt such extrapolations in general).

I ask myself again why there should be any likelihood at all, and I answer that there was not even much of a reason to guesstimate a likelihood in the finite case. My only justification for doing so was that there seemed to be no problem doing so. And if asked to bet on it, those are the natural odds for me to choose, given how little I know. Still I would wonder, was there something else I knew in the back of my mind about how the world is which could tell me something more about how I could have got onto the stones and which would give me a better estimate than 9/10?

Your paradox reminds me, incidentally, of quite an old paradox of the infinite, due I think to Galileo: There are clearly twice as many natural numbers as even numbers, but if you double all the natural numbers, you get just as many even numbers. So twice as many is just as many, which is absurd. Your paradox reminds me of that because although there are nine times as many non-round-numbered paving stones as there are round-numbered stones (by analogy with the very large number of stones case, or obviously), there are also the same number (as you can see by multiplying each number by ten) or rather, slightly fewer (via the hundreds and thousands).

Galileo's paradox was resolved by Cantor, of course; although Cantor's own paradox remains a problem for those who want to argue more positively for the so-called Actual infinite; or rather, for the natural numbers being collectively Actual infinite (as neither that problem nor the supertask problems, nor any other I know of, is any problem at all for the Actual infinite that is the number of points in a line of points that is given by the cardinal number 1/0 that I discovered a few years ago and about which I love to go on and on :) Incidentally, Cantor's paradox is an argument for Open Theism, from the supposition that God is omnipotent and omniscient.

"the people who yesterday were standing on numbers divisible by ten are now standing on numbers not divisible by ten, and vice versa"

Is this possible? Or is that exactly the question you're raising? (I am not familiar with, and cannot find the meaning of, the term 'simultaneous infinity of uncaused objects'.)

Suppose there is an infinite line of paving stones, with labels of 1, 10, 2, 20, 3, 30, ..., 9, 90, 11, 100, 12, 110, ..., 19, 180, 21, 190, 22, 200, ..., on each of which there is a blindfolded person. You are one of these persons. That's all you know. How likely is it you're on a number not divisible by ten? The obvious answer is: 1/2. But all I have done is to describe the same thing you described but in a slightly different way (whence the similarity with Galileo’s paradox). You might think that the differing likelihoods following from those different descriptions were contradiction enough to refute the standard view of the natural numbers, but in view of the Water/Wine paradox, for example, it would arguably be reasonable enough to think that (if the natural numbers form a set then) there would not be any logical likelihood. There might be a likelihood of 9/10 if your description was more relevant to the likelihood, e.g. because of how the people were put on the stones, but as you say, there might be no such process. And while your description more accurately described how the stones were laid out (presumably), it seems implausible that one's likelihood of being on a non-round-number-labelled stone should depend upon the arrangement of the stones. Would it depend upon one's point of view as one looked at them?

The argument for Open Theism via Cantor’s paradox, incidentally, from God being omnipotent and omniscient, first uses Cantor’s paradox to show that the whole numbers are indefinitely extensible. Then since God is omnipotent—has the most power He could possibly have—so His knowledge of arithmetic is indefinitely extensible, since only He could make worlds containing numbers of things for all whole numbers. He would be able to make a world of N things even before He knew the cardinal number N because He would be able to know N and able to make any possible world. And He could also have, at all times, infallible knowledge of all arithmetical truths if arithmetic is for Him Intuitionistic, if mathematics is something that is divinely created (cf. Divine Command metaethics) from the basic concepts of a thing and a possibility (which He would always have known perfectly). (And that mathematics is divinely created might also follow from God’s omnipotence.)

If you have an infinite number of stones, each labeled by a natural number, wouldn't the probability of being on a number divisible by 10 be:

P = aleph-0 / aleph-0

Why would it be 9/10? Isn't P undefinable in transfinite math? But if there is no probability of being on a stone divisible by 10, then how can this be? The person must be on some stone. The following seems plausible:

A) In order for something to be true in the real world there must be a definable probability of it being true.

If A is correct, then we have the reductio. There cannot be an infinite number of stones.

A possible refutation of A is the probability of my cat being yellow. It is, incidentally, true that my cat is yellow. But what is the *probability* that it is yellow? Given that it is, there is a trivial probability of 1; but then, that would apply to all truths. So in that sense, A is irrefutably true, but trivial. The probability of you being on a number divisible by 10 is 1 if you are, and 0 if you are not.

So we are considering other sorts of probability. Let's say that you don't know whether or not I was lying about my cat's colour. What is, for you, the probability that my cat is yellow? Is it the probability that I was lying? But maybe it is really an orange colour that you *would* call 'yellow'!

So let's begin again. Is the probability that some cat (about which you know nothing) is yellow 1 in 3, there being three primary colours, or 1 in 10 (3 primary, 3 secondary, 1 tertiary and black, white and grey)? Or 1 in 22 (as there are striped and mottled coats too), and so forth.

It may be reasonable to say that there is in fact no definite probability of my cat being yellow, but rather lots of probabilities, many of them rather fuzzy (is the chance of something about which you have no information either way really exactly 0.500000000000..., or is it rather more like 0/0 where that division is given some fuzzy interpretation, such as the famous triangular one)...

I think (A) may be false. I think there are at least aleph-0 logically possible worlds. What's the probability of the present world existing?

W = 1 / aleph-0

Does that mean that the probability of our world is 0? If so, it's rather strange that it exists. How can something be true with a 0 probability of it being true?

It may mean that, or it may mean that there is no probability. A probability is a number satisfying the axioms of some mathematical theory of probability. Or, more informally, a probability may be not a number but high or low, and so again not be a definite number. As to whether the probability is 1/aleph-null, that depends upon whether such a division is defined. It is not standardly defined, but it may be defined. And it may be defined to be 0, or to be a nonzero infinitesimal. As to what one should do, there are again all sorts of criteria that one might use to judge such theories. Which are the best criteria? Here we have a risk of an infinite regress; what are the best criteria for deciding which the best criteria are?

Personally, I don't see why a probability of 0 has to mean an impossibility. A simple argument is an endless sequence of coin tosses. That gives us an endless string of heads and tails, perhaps completely randomly, which is isomorphic to a real number from the interval [0, 1] in binary notation. Now, there is a standard uniform probability distribution over [0, 1], which is just the unit square. That does not give us the probability of getting any particular result from the endless coin-tosses, but it does take areas to be probabilities, and the area over a single real number is of course 0, standardly. So it would be quite natural to take the probability of any particular outcome from endlessly tossing a coin to be 0. And of course, all those outcomes are possible.
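The correspondence between toss sequences and points of [0, 1] can be made concrete: a finite prefix of tosses picks out a subinterval whose length is exactly the probability of that prefix, and the length shrinks to 0 as the prefix grows, which is the sense in which any one complete outcome gets probability 0. A small sketch of my own, reading H as the binary digit 0 and T as 1:

```python
from fractions import Fraction

def interval_for_prefix(prefix):
    """Subinterval of [0, 1] of all infinite toss sequences extending the
    given finite prefix, reading H as binary digit 0 and T as 1."""
    lo, width = Fraction(0), Fraction(1)
    for toss in prefix:
        width /= 2
        if toss == 'T':
            lo += width
    return lo, lo + width

lo, hi = interval_for_prefix('HTH')
print(lo, hi, hi - lo)            # the interval [1/4, 3/8], of length 1/8 = (1/2)**3
print(float(Fraction(1, 2**50)))  # length left after 50 tosses: about 8.9e-16
```

Under the uniform ("area") measure on [0, 1], the interval length is the probability of matching the prefix, so the limit of these lengths for an infinite sequence is 0, even though every such sequence is possible.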

enigMan,

But maybe there can't be an infinite number of randomly tossed coins. It seems to lead to the strange result that each possible outcome has a probability of 1/aleph-0, but also the outcome itself is undefinable. You could put all the heads in a one-to-one correspondence with all the tails. All the instances with 2 heads in a row could be put in one-to-one correspondence with all the tails. We could do the same with 3 heads and 4 heads etc.

The result doesn't seem to be definable and the probability would have to be either undefined or 0. But how can something really happen by chance and have an either undefined or zero probability, if it actually happens?

You try to argue that a zero probability doesn't mean impossibility, but you only do this by setting up an infinite series of coin tosses. Perhaps this is a reductio of the possibility of an infinite number of coin tosses, as opposed to an argument that 0 probability isn't an impossibility.

Yes, perhaps it is; indeed, I think it is. However, the outcome is not so much indefinable as a particular random sequence. It might be a string of nothing but heads, for example; but even if there were as many heads as tails, their particular order would define the particular result. But I do think that there are paradoxes involved in such a thing.

Indeed, infinity divided by infinity is, as such, undefined, or an indeterminate form, but that does not mean that when that division crops up in a particular scenario there is not some other way of defining its particular value in that scenario. The observed frequency may be an indeterminate form whilst the propensity in each case is a definite number, such as 9/10. A similar thing happens in the calculus (according to d'Alembert), where the derivative of a curve at a point is in general 0/0, but for any particular curve takes the value of the gradient of that curve (if it has one).

A probability with the value of 0 is not necessarily the same as there being no possibility. Similarly, in non-standard analysis there are infinitesimals, which have a zero standard part. Or in complex analysis, there are complex numbers with a non-zero modulus but a zero real part. A numerical probability depends upon a mathematical theory of probability, whereas a possibility depends upon a metaphysical theory. So there is this conceptual possibility, of probability 0 of something that turns out to be actual. There may be no reason why you should allow it in your world-view; but nor does there seem to be a reason why it is absurd.

In a finite space, a probability of 0 probably should mean impossibility, I think (whence our intuitions, perhaps); but when we entertain the concept of an infinite space of possibilities (e.g. aleph-null logically possible worlds) it is apposite that in general, in mathematics, zero times infinity is an indeterminate form. Insofar as it is not defined within some particular mathematics (e.g. standard set theory), that does not mean that it is logically indefinable, that it cannot be defined to be a range of values, or even some particular value.

Dr. Pruss,

Thanks very much for pointing me to this interesting paradox. However, I must disagree with your initial remark that a 1/10 probability assignment is "obvious." I must instead opt for what Dr. Castell pejoratively called the abstention position (Brit J Phil Sci 49, 1998). In other words, as enigMan has already suggested in his first comment, another obvious answer is that there is no likelihood which may be justifiably assigned.

Now, you seem to interpret this as being a problem with actual infinities, but I rather think it shows us a limitation of probability. We are not guaranteed to always have sufficient information to begin talking about our propensity to guess correctly certain facts. This paradox uses our intuition against us; but I do not find it a persuasive argument against infinite collections of physical objects.

Still, it's very interesting. Thanks again!

If we go for the abstention position, then we seem to get this interesting result: In an infinite multiverse, we can't do probability. But science requires probability. Thus, insofar as the multiverse is a scientific hypothesis, it is self-defeating.

Okay, I can see how that might be an interesting point: An infinite multiverse poses problems for probability assignments insofar as it prevents us from assigning certain values which could potentially be helpful under a finite multiverse hypothesis. But it seems like you're taking that problem further than I'd be willing to do.

I would make two claims regarding how to temper your view: First, probability assignments regarding physical systems inside our own universe remain demonstrably useful. I think this is pretty clear from an empirical perspective, and I don't see how we could demonstrate that an infinite multiverse leads us to any logical problems, so long as we stay within the confines of our own context---this universe. Second, I only find it to be problematic as a scientific hypothesis because it seems not to be falsifiable. So, if you have some other criticism in mind when you call it "self-defeating," I'd be curious to hear what that is.

Anyway, I don't mean to overwhelm you with comments. I thank you for your responses so far.

The post for the day after this one addresses the in-universe case. If that argument works, then you can't assign probabilities to the outcomes of random processes. And that would be pretty bad, since science needs to be able to do that.

Your argument is interesting, and makes me think of general objections to probability involving infinite sets. We can't compare the size of, say, the integers and the odd numbers, since both can be Cantor-matched as equivalent cardinality. So you're tempted to say, there's no way to talk about probabilities if there are infinite sets involved.

However, consider a coin-toss exercise. You expect 50/50 from a fair coin over time, and whatever appropriate results from doing other things like cards, dice. But suppose the universe is infinite (and it seems flat, so it supposedly really is.) Now there are Aleph-null people tossing coins, Aleph-null tossings, etc. all over this infinite universe. Aleph null head-landings and Aleph null tails ... Do you really want to say that this exotic boundary condition makes it impossible for you to "expect" normal outcomes?

Somehow, the proportions for probability are based on intrinsic tendencies of the finite limit and not the idealized set properties. If you "fill it in" by making it infinite, that doesn't change the proportion (almost like taking dy/dx in reverse.) So suppose I looked at integers between one and ten and the chance of hitting two, we'd say "1/10". If it's numbers made into tenths like 1, 1.1, ... 9.9, 10 then the chance of hitting from say 1.6 through 2.5 is again 1/10. We can keep making it finer, and the ratio holds. Indeed, we can take a range "1.5-2.5" of the continuum (Aleph = ?!) and it's still intelligible despite there being infinitely many targets.
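The refinement point can be checked mechanically. Here is a small sketch of my own, using a grid over (0, 10] and the half-open window (1.5, 2.5] (a slight tidying of the 1, 1.1, ..., 10 example so the ratio comes out exact): the hit ratio stays at 1/10 no matter how fine the grain.

```python
from fractions import Fraction

def hit_fraction(m):
    """With (0, 10] cut into steps of size 1/m, the fraction of grid
    points i/m that land in the window (1.5, 2.5]."""
    total = 10 * m
    hits = sum(1 for i in range(1, total + 1)
               if Fraction(3, 2) < Fraction(i, m) <= Fraction(5, 2))
    return Fraction(hits, total)

for m in (1, 10, 100, 1000):
    print(m, hit_fraction(m))   # 1/10 at every grain size
```

The exact `Fraction` comparisons avoid floating-point fuzz at the window endpoints; the stable ratio under refinement is the "proportionality" the comment appeals to.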

That's how I look at the problem of constants and features of possible universes. Imagine it being grainy to some fine degree (like 0.001 increments to each spec.), and there's various chances of this or that. Then cut the grain to 1/10, and so on ... The proportionality should hold. That's what matters, not the limit infinities. Think of it more like the chance a dart would hit one colored region rather than another on a picture. So yes there are various "chances" we're likely to end up in various universes depending upon the other factors (ie, Bayesian reasoning.) It is not invalid.

Neil:

But in the infinite sequence of coin tosses, there are subsequences where it's just heads for a very, very long time, and there are subsequences where it's just tails for a very, very long time. So how do we know that we're in a place in the infinite sequence where heads and tails are roughly in equal proportion, rather than in one of those portions where it's just heads or just tails?

Well, we might say: There are a lot more portions in the sequence where heads and tails are roughly in equal proportion. But what does "a lot more portions" mean in respect of an infinite sequence?

Thanks for a quick reply, Dr. Pruss! Well, you have a point, but we could wonder about similar issues of "am I stuck in a statistical fluke" if the universe had "only" 10^1200 such tossings or card games going on. My point was: do you really think you can't expect a likely outcome, just because you're part of an infinite set of such activities v. a finite one, however large the latter? How could that be? Our universe likely is infinite and that means infinite copies of us and what we do - should I care? Like I said, the number being infinite should be of no consequence, only the local proportionality I observe at each successive scale. (That is, ignore the boundary condition.)

If there were a small number of sites, it would make sense to assume that each site is equally likely, and then the number of flukish sites would be a small fraction of the total number of sites, so the probability of a fluke would be low. But (a) it doesn't seem to make sense to assume all sites are equally likely given an infinite number of sites, and (b) the number of flukish sites is the same as the number of non-flukish ones.

Moreover, whether it makes sense to talk of the "local", if it includes other close-by universes, depends on what kind of a multiverse we have. A multiverse where the universes are not embedded in a metric space may not allow one to talk of which universes are local to us.

How about this:

Let's assume we have an infinitely long array of squares. And a fair 6-sided die.

We roll the dice an infinite number of times and write each roll's number into a square.

When we finish, how many squares have a "1" written in them? An infinite number, right?

How many squares have an even number written in them? Also an infinite number.

How many squares have a number OTHER than "1" written in them? Again, an infinite number.

Therefore, the squares with "1" can be put into a one-to-one correspondence with the "not-1" squares...correct?

Now, while we have this one-to-one correspondence between "1" and "not-1" squares set up, let's put a sticker with an "A" on it in the "1" squares. And a sticker with a "B" on it in the "not-1" squares. We'll need the same number of "A" and "B" stickers, obviously. Aleph-null.

So, if we throw a dart at a random location on the array of squares, what is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker?

The two questions don't have compatible answers, right? So, in this scenario, probability is useless. It just doesn't apply. You should have no expectations about either outcome.

BUT. NOW. Let's erase the numbers and remove the stickers and start over.

This time, let's just fill in the squares with a repeating sequence of 1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,...

And then, let's do our same trick about putting the "1" squares into a one-to-one mapping with the "not-1" squares, and putting an "A" sticker on the "1" squares, and a "B" sticker on the "not-1" squares.

Now, let's throw a dart at a random location on the array of squares. What is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker on it?

THIS time we have some extra information! There is a repeating pattern to the numbers and the stickers. No matter where the dart hits, we know the layout of the area. This is our "measure" that allows us to ignore the infinite aspect of the problem and apply probability.

For any area the dart hits, there will always be an equal probability of hitting a 1, 2, 3, 4, 5, *or* 6. As you'd expect. So the probability of hitting a square with a "1" in it is ~16.67%.

Any area where the dart hits will have a repeating pattern of one "A" sticker followed by five "B" stickers. So the probability of hitting an "A" sticker is ~16.67%.

The answers are now compatible, thanks to the extra "structural" information that gave us a way to ignore the infinity.

In other words, you can't apply probability to infinite sets, but you can apply it to the *structure* of an infinite set.

If the infinite set has no structure, then you're out of luck. At best you can talk about the method used to generate the infinite set...but if this method involves randomness, it's not quite the same thing.
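The structured case above is easy to simulate: because the 1, 2, 3, 4, 5, 6 pattern repeats, a dart thrown anywhere on a very long stretch of the array has the same chance of landing on a "1", with no appeal to the infinite totality needed. A quick sketch (the dart range, trial count, and seed are arbitrary choices of mine):

```python
import random

def square_value(i):
    """Value written in square i (1-indexed) under the repeating
    1, 2, 3, 4, 5, 6 pattern."""
    return (i - 1) % 6 + 1

rng = random.Random(0)
trials = 100_000
# throw darts uniformly at a huge stretch of the (conceptually infinite) array
hits = sum(1 for _ in range(trials)
           if square_value(rng.randrange(1, 10**12)) == 1)
print(hits / trials)   # close to 1/6
```

The answer is insensitive to where the stretch sits or how long it is, which is the "structural" information doing the work.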

Alexander, Allen: I accept the intrinsic logical problems with infinite sets per se. Right, we can't compare infinite ratios like we can finite sets. But that's just taking them in the abstract and asking which is bigger and by how much - and finding we can't, "per se." As we've seen argued, that isn't all there is to it. We need some context. Whether that has to be the idea of structure that Allen poses or not is perhaps debatable.

So I want people to peel away from the pure abstraction some more and consider again the counter-example of betting in an infinite universe. There are infinite cases of the tosses etc. yet still we expect (and really do find) the appropriate probabilities. That is supposed to show that the pure abstract argument is inadequate by *reductio*. We should not imagine, as Mr. Pruss seems to, that the counter-argument must be wrong because the abstraction is unassailable. I think that taking the limit of fine graining even if the "real case" goes to continuum is also valid, and there could be more.

Here's a final challenge to taking problems with infinity too seriously, albeit it regards "potential infinity" and not the expressed set per se: probability theory itself is often couched in terms of how the frequentist relative proportions converge to a distinct ratio as we approach an infinite number of trials (presumably Aleph null but could be others I suppose.) In a logically possible world I could keep tossing "forever" and most thinkers would say, I can expect whatever chances per ordinary probability theory. (Note: some thinkers find problems with extension into an infinite past, e.g. "I was always doing it and never started ...." One could argue it's just a time reversal of the former.)

Few would think that approaching infinity, as in the definition of probability, invalidates the essential concept. (However, note that probabilistic claims are not strictly falsifiable in Popperian terms, since no particular run (and a stated "set of runs" is of course just a fragmented "run") can be dispositive! It's a judgment call, FWIW ...) So it seems to be a valid concept. The pure abstractions of set theory are inadequate as a critique.

BTW, I recommend Rudy Rucker's *Infinity and the Mind* for mind-bending excursions into the transfinite world.

"There are infinite cases of the tosses etc. yet still we expect (and really do find) the appropriate probabilities."

The "really do find" depends on the claim that we are actually in an infinite universe, which we do not know. The "expect" could be explained away as an expectation unduly generalized from finite cases.

As for frequentism, unless there are well-defined probabilities, there is no guarantee that there are any limiting frequencies. The limit might just not exist. (For instance, in the pattern 010011000011110000000011111111..., there is no limiting frequency of 1's.) Even if there are well-defined probabilities, the existence of the limit is not logically guaranteed--it is only guaranteed to exist with probability 1.
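Reading that pattern as alternating runs of 0s and 1s whose lengths double each pair, the failure of convergence is easy to tabulate: the running frequency of 1s returns to exactly 1/2 after each 1-run and sags toward 1/3 after each 0-run, so it oscillates forever. A short sketch of the bookkeeping (my own construction, matching that run-doubling reading):

```python
def run_end_frequencies(n_pairs):
    """Running frequency of 1s at the end of each run in the sequence
    0 1 00 11 0000 1111 ..., where run lengths double each pair."""
    freqs = []
    total = ones = 0
    length = 1
    for _ in range(n_pairs):
        total += length              # a run of 0s
        freqs.append(ones / total)
        total += length              # then a run of 1s of the same length
        ones += length
        freqs.append(ones / total)
        length *= 2
    return freqs

f = run_end_frequencies(20)
print(f[-2], f[-1])   # near 1/3 after the last 0-run, exactly 1/2 after the last 1-run
```

Since the liminf and limsup of the frequency differ (1/3 versus 1/2), no limiting frequency exists for this sequence.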

OK, what if we *were* in an infinite universe. Do you think probability would cease to be either meaningful or a tool for study and expressing expectations? Really? Why should it matter, whether or not there's all that "out there," to what happens here?

Why should it matter if the universe is infinite? Because of this argument (and the ones in my other posts on the issue). :-)

I came to the question of the Kalaam argument with very strong intuitions that an infinite universe is possible. I find the conclusions of arguments, like the present one, against the possibility of an infinite universe to be very counterintuitive. It seems almost obvious that there could be an infinite universe.

But sometimes arguments force one to accept one thing that is counterintuitive, in order to escape another that is even more counterintuitive. Even so, I am reluctant to accept the conclusions--that's just a psychological statement about me. But I don't know of a plausible refutation of my arguments that lead to the conclusion that it's impossible to have an infinite universe (or, more precisely, that it's impossible for there to be an infinite number of items in the past of any item).

OK ... I can't blame you for wanting to take seriously the implications of an argument that seems valid. I'm not sure your argument about the stones has the implication that probability for infinite sets (given the actual context) simply must be absurd in all cases. All I can say is, the counterargument e.g. that local context can't be invalidated by infinite boundary conditions also seems valid, so we have conflict over what to accept. The limit graining argument seems valid to me also.

And again, consider that "0/0" is in itself absurd, yet dy/dx somehow makes sense. Have you asked colleagues about this argument? I wonder if "surreal numbers" and other ways of handling infinities and infinitesimals (as in non-standard analysis) could help out here.

I have a further argument directly about the paving stones. It is very much like my previous arguments. Let's say we had groups of ten blindfolded people each, and indeed an infinite number (aleph-null) of those groups of ten. You're in one of them. Each group is assigned to a set of stones: 1-10, 11-20, ... 4361-4370, .... You don't know which set you end up on. So I expect that I and my nine comrades will end up arranged "randomly" on some set of stones.

How could I not think it plausible that I have a "1/10 chance" of ending up on #10, or maybe #23,780, etc.? After all, each group doesn't have to give a hoot - in advance of any further meddling - whether there are other groups at all, or even other stones! Their existence is not "felt" by my group; how could it be?

Now sure, if you rearrange people, that changes the chances, but almost by circular argument. You're taking the people who were on e.g. 1, 2, ... 9, 11, 12, ... 137, 138, 139, 141, ... and switching them with those on 10, 20, 30, ... I know you can do that; it's just another Hilbert Hotel. But you changed something from what it was before. If we accepted the original probability as accurate and random, then I should believe I was likely not on a decadal stone. Hence it would make sense to change my estimate, regardless of its being done in that manner.
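The "just another Hilbert Hotel" swap can be made fully explicit. Here is a small sketch (my addition, not from the thread) of one natural bijection behind the midnight teleportation: pair the k-th positive integer not divisible by 10 with the k-th one that is.

```python
# Explicit bijection behind the midnight swap: the k-th positive integer
# NOT divisible by 10 is exchanged with the k-th one that IS divisible by 10.

def kth_non_multiple(k):
    """k-th positive integer not divisible by 10 (k = 1, 2, ...)."""
    # each run of 9 consecutive non-multiples occupies a block of 10 integers
    return (k - 1) // 9 * 10 + (k - 1) % 9 + 1

def kth_multiple(k):
    """k-th positive multiple of 10."""
    return 10 * k

# Each map enumerates its set with no gaps and no repeats, so the pairing
# really does swap the two sets one-to-one:
assert [kth_non_multiple(k) for k in range(1, 9001)] == \
       [n for n in range(1, 10001) if n % 10 != 0]
assert [kth_multiple(k) for k in range(1, 1001)] == \
       [n for n in range(1, 10001) if n % 10 == 0]
```

The check on an initial segment is only illustrative, of course; the bijection itself is defined on all of the naturals.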

So I think it is the argument justifying the original probability that matters. Sure, after the shuffle the sets are "equivalent," but the second set wasn't "used" to establish the probability directly. I don't know how else to handle this. I admit it is perplexing, and we have a conflict here between two compelling ways to look at it: it's a paradox!

I want to put this problem up at my own blog. Is the stones argument specifically yours, Dr. Pruss? In any case I'd like a way to cite it. tx

So, start with the groups 1-10, 11-20, etc. Maybe each group is defined by the fact that they have the same color of shoes. (There are infinitely many colors, I suppose.)

Question: How likely is it that your number is divisible by 10? You want to say: 1/10.

Fine. But now keep the very same people and recolor the shoes. So now the people with numbers 1,2,3,4,5,6,7,8,10,20 have the same color of shoes; the people with numbers 9,11,12,13,14,15,16,17,30,40 form another group, with new shoe colors; so do the people with numbers 18,19,21,22,23,24,25,26,50,60, etc. You get the point. After recoloring the shoes, two out of ten people in each group have a number divisible by 10. So using the grouping method, we now conclude that the probability you have a number divisible by 10 is 2/10 = 1/5.

But your probability of being on a number divisible by 10 should not change when someone repaints your shoes!
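To check that such a recoloring really is available, here is a small sketch (my addition; the exact group boundaries are my own choice): each new group takes the next eight numbers not divisible by 10 together with the next two that are, so every group of ten contains exactly two multiples of 10, and no person is ever used twice.

```python
# Recoloring sketch: build groups of ten in which exactly 2 of the 10
# numbers are divisible by 10, while still using each person exactly once.
from itertools import count, islice

def regrouped(k):
    """Yield the first k recolored groups of ten."""
    non_mults = (n for n in count(1) if n % 10 != 0)  # 1, 2, ..., 9, 11, ...
    mults = count(10, 10)                             # 10, 20, 30, ...
    for _ in range(k):
        yield list(islice(non_mults, 8)) + list(islice(mults, 2))

groups = list(regrouped(1000))
# every group: 10 people, exactly 2 of them on numbers divisible by 10
assert all(len(g) == 10 for g in groups)
assert all(sum(n % 10 == 0 for n in g) == 2 for g in groups)
# no person appears in two groups
flat = [n for g in groups for n in g]
assert len(flat) == len(set(flat))
print(groups[0])  # [1, 2, 3, 4, 5, 6, 7, 8, 10, 20]
```

Since the generators never repeat and every natural number is eventually drawn from one stream or the other, the recolored groups partition the whole line of stones.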

Perhaps, though, you think there is something special about a case where the grouping is done spatially, rather than by color. I don't know exactly why that would be. Though I kind of feel the pull of that claim when the grouping is done temporally.

Neil:

I expect problems like this have been produced independently by lots of people, but no, I didn't get my initial stones scenario from anybody else.

Yes of course, Alexander, a regrouping attempt can rearrange the people and make the expectation different than before. But if my original reasoning is sound, and since it contradicts what you consider the implications of regrouping, then we have a paradox. Maybe it just can't be resolved; that's the sort of problem it is.

Perhaps you can write up the problem along with my attempted rebuttal, and see what splash it makes. (I'd love to get credit, even as an informal citation.) I will put something up at my blog meanwhile, but go ahead and write it up if you wish.

This stuff is tough. Right now, my project is a modality book. Once that's out the door--it's due Sept. 15--I can get on to other publication projects, and this is one option. I am also thinking of running a conference on probability and infinity, with some good people (I have two really good people who in principle agreed to come). If I run the conference, I might just present this stuff at it, and then probably there'd be a book from the conference.

Hi again, Alex. I notice that at the end of last month you wrote: "I came to the question of the Kalaam argument with very strong intuitions that an infinite universe is possible. I find the conclusions of arguments, like the present one, against the possibility of an infinite universe to be very counterintuitive. It seems almost obvious that there could be an infinite universe."

I'd like to reiterate my observation that such arguments as this, even if they do work, do not show that an infinite universe is impossible. They would seem to show most directly that simple, countable infinities of physical objects are impossible. But mathematicians have long believed that the natural numbers (the intuitive ones, not the formal ones within ZF) might be indefinitely extensible. If so then we could only have finite numbers of ordinary objects in an infinite space, but we could have the infinite space. The reason why we could not use all that space to get a simple transfinity of objects would be the nonsensicality of such transfinities.

I've thought about this observation quite deeply over several years, and I can see why most people (myself included, most of the time) do not make it. But it appears to be correct.

Different locations in a non-Euclidean space can have different geometrical properties. If so, then an infinite space on its own, assuming we're substantivalists about space (if we're relationalists, I don't know that the hypothesis of an infinite space without infinitely many objects or objects of infinite extent makes sense) can be enough to generate the sorts of problems involved here. Instead of people on different locations, just imagine different bits of space with different local geometrical properties.

I wouldn't suggest we have to take what physicists believe without any grain of salt, but: most of them think the spatial extent of our universe is infinite, due to GR considerations. IOW, it is neither just an expanding cloud of stuff with empty space beyond (an explosion in classical space) nor a closed hypersphere of finite volume.

I do accept that the paradoxes discussed here are thought-provoking and give pause to glib acceptance of the reasonableness of infinite sets of real objects. However, if space is infinite it's hard to imagine it not being populated with the same kinds of objects "forever" into space (or else we'd have a special boundary, even if "pure space" could go on.) That means an Aleph_null of every card game, whatever. And as far as I'm concerned the probabilities are still what they're "supposed to be."

If we put people on squares, I guess it is how we do that, and not the sheer logic of set theory, that matters most. Note also that in cases like coin tossing there is an intrinsic "generator" of the probabilities, not just a hollow comparison between outcome sets. But that doesn't help my own argument about expectations for possible worlds as much as I'd like ...

Dr. Pruss,

I agree with hatsoff that your conclusion to this thought experiment is too hasty. Rather than pointing to problems with infinities, this more likely points to problems with assigning probabilities. Such problems are nothing new. Our intuitions about "natural" distributions can become confused as setups become more complicated. Faced with an apparent paradox involving assignments of probabilities, I would much sooner give up on probabilities than leap to such a sweeping metaphysical conclusion.

In this particular situation it is important to realize that in the second case we have additional information, and it is only natural to expect that additional information changes our probability assignments. At least, it is natural on the view that probability is a function of our ignorance and the measure of our uncertainty. I don’t actually see an inconsistency in assigning p=9/10 in the first case (where all we know is the state of affairs today) and p1=9/10, p2=1/10 in the second case (where we know about the teleportation). Our knowledge of the universe has changed – therefore, our probability assignments must change too.

In fact, your conundrum is not specific to scenarios involving infinities. Consider this modified version of the thought experiment:

(a) There are 10 paving stones labeled 1-10, on each of which there is a blindfolded person. You are one of these persons. That's all you know. How likely is it you're not on number 10? The obvious answer is: 9/10.

(b) But now I give you a bit more information. Yesterday, all the same people were standing on the paving stones, but differently arranged. At midnight, all the people were teleported, in such a way that you end up on number 10. Should you change your estimate of the likelihood you're on number 10?

What makes your original and my modified scenarios similar is that additional information that you receive in (b) modifies your initial ignorance-based assignment of probabilities. In your case the effect is achieved by a non-uniform mapping of the initial distribution, which would not be possible in my case. But that detail does not seem significant. What makes the two experiments essentially similar is that our knowledge (or our ignorance) changes from (a) to (b), and our probability assessment follows.

In the two paragraphs starting with "Here is a suggestion..." and "This assumes, however, that there is a process...," you hint at the Principle of Indifference. The Principle of Indifference, or some generalized form of it, is pretty much the only game in town when it comes to objectively assigning prior probabilities. However, it has known challenges: apparent paradoxes not unlike the one you outlined here (see for instance the Bertrand paradox). These issues are not necessarily related to infinities.
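Since the Bertrand paradox comes up here, a small simulation may make the point vivid (this sketch is my addition, not part of the thread): three equally "natural" recipes for drawing a random chord of the unit circle give three different probabilities that the chord is longer than the side of the inscribed equilateral triangle, with no infinities of objects in sight.

```python
# Bertrand's chord paradox by Monte Carlo: three "uniform" ways to draw a
# random chord of the unit circle disagree about P(chord > sqrt(3)), where
# sqrt(3) is the side length of the inscribed equilateral triangle.
import math
import random

random.seed(0)
N = 200_000
SIDE = math.sqrt(3)

def endpoints():
    # two uniform endpoints on the circumference -> P tends to 1/3
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def radial():
    # uniform distance from the center along a random radius -> P tends to 1/2
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def midpoint():
    # uniform chord midpoint inside the disk (rejection sampling) -> P tends to 1/4
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

results = {f.__name__: sum(f() > SIDE for _ in range(N)) / N
           for f in (endpoints, radial, midpoint)}
print(results)
```

Each method is a perfectly reasonable reading of "choose a chord at random," yet the three estimates converge to 1/3, 1/2, and 1/4: the Principle of Indifference underdetermines the prior even in this finite-dimensional setting.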
