Sunday, May 28, 2023

An observation about the backwards-infinite branching view of possibility

In my dissertation, I defended a causal power account of modality on which something is possible just in case either it’s actual or something can bring about a causal chain leading to its being actual. I noted at the time that unless there is a necessary first cause, this leads to an odd infinite branching view on which any possible world matches our world exactly once you get far enough back, but nonetheless every individual event is contingent, because if you go back far enough, you get a causal power to generate something else in its place. Rejecting this branching view yields a cosmological argument for a necessary being. To my surprise when I went around giving talks on the account, I found that some atheists were willing to embrace the branching view. And since then Graham Oppy has defended it, and Schmid and Malpass have cleverly used it to attack certain cosmological arguments.

I want to note a curious, and somewhat unappealing, probabilistic feature of the backwards-infinite branching view. While it is essential to the view that it be through-and-through contingentist, if classical probabilities can be applied to the setup, then the further back you go on a view like that, the closer it gets to fatalism.

For let St be a proposition describing the total state of our world at time t. Let Qt be the conjunction of Su for all u ≤ t: this is the total present and past at t. Here is what I mean by saying that the further back you go, the closer you get to fatalism on the backwards-infinite branching view:

  1. limt→−∞ P(Qt) = 1.

I.e., the further back we go, the less randomness there is. In our time, there are many sources of randomness, and as a result the current state of the world is extremely unlikely—it is unlikely that I would be typing this in precisely this way at precisely this time, it is unlikely that the die throws in casinos right now come out as they do, and so on. But as we go back in time, the randomness fades away, and things are more and more likely.

This is not a completely absurd consequence (see Appendix). But it is also a surprising prediction about the past, one that we would not expect in a world with physics similar to ours.

Proof of (1): Let tn be any decreasing sequence of times going to −∞. Let Q be the infinite disjunction Qt1 ∨ Qt2 ∨ .... The backwards-infinite branching view tells us that Q is a necessary truth (because in any possible world, Qt is true for all sufficiently low t). Thus, P(Q) = 1. But now observe that Qt1 implies Qt2 implies Qt3 and so on. It follows from countable additivity that limn→∞ P(Qtn) = P(Q) = 1.

Appendix: Above, I said that the probabilistic thesis is not absurd. Here is a specific model. Imagine a particle that on day −n for n > 0 has probability 2^(−n) of moving one meter to the left and probability 2^(−n) of moving one meter to the right, and otherwise it remains still. Suppose all these steps are independent. Then with probability one, there is a time before which the particle did not move (by the Borel-Cantelli lemma, since the movement probabilities have a finite sum). We can coherently suppose that necessarily the particle was at position 0 if you go far enough back, and then the system models backwards-infinite branching. However, note an unappealing aspect of this model: the movement probabilities are time-dependent. The model does not seem to fit our laws of nature, which are time-translation symmetric (which is why we have energy conservation by Noether’s theorem).
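The appendix model is easy to check numerically. Here is a minimal Monte Carlo sketch (the truncation at 40 days and the trial count are my own choices): it estimates the probability that the particle moves on any day earlier than day −10, which Borel-Cantelli bounds by the tail sum of the move probabilities, 2^(−9) ≈ 0.002.

```python
import random

def simulate_moves(max_n=40, rng=random):
    """Return the days -n on which the particle moves.

    On day -n the particle moves left with probability 2**-n and
    right with probability 2**-n, so it moves at all with probability
    2**(1-n). Days with n > max_n are ignored, which is harmless:
    the total move probability beyond max_n is below 2**(1 - max_n).
    """
    moves = []
    for n in range(1, max_n + 1):
        if rng.random() < 2 ** (1 - n):
            moves.append(n)
    return moves

random.seed(0)
trials = 20000
# Fraction of simulated histories with any movement earlier than day -10.
early = sum(any(n > 10 for n in simulate_moves()) for _ in range(trials)) / trials
print(early)  # should be tiny, in line with the 2**-9 bound
```

The estimate comes out around 0.002, matching the bound; pushing the cutoff further back shrinks it geometrically, which is the numerical face of “with probability one, the particle eventually did not move.”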

Wednesday, May 24, 2023

The five-five trolley

The standard trolley case is where a trolley is heading to a track with five people, and you can redirect it to a track with one person. It seems permissible to do so.

But now imagine that a trolley is heading to a track with five people, and you can redirect it to another track also with five people. Why would you bother? Well, suppose that you enjoy turning the steering wheel on the trolley, and you reason that there is no overall harm in your redirecting the trolley.

This seems callous.

Yet we are in cases like the five-five trolley all the time. By the butterfly effect, many minor actions of ours affect the timings of human mating (you have a short conversation with someone as they are leaving work; this affects traffic patterns, and changes the timing of sexual acts for a number of people in the traffic), which then changes which sperm reaches an ovum, and hence affects which human beings exist in the next generation, and the changes balloon, and pretty soon there are major differences as to who is in the path of a hurricane, and so on.

But of course there is still a difference between the five-five trolley and the butterfly effect cases. In the five-five trolley, you know some of the details of the effects of your action: you know that these five will die if you don’t redirect and those five if you do. But note that these details are not much. You still may not know any of the ten people from Adam. In the butterfly effect cases, you can say a fair amount about the sort of effects your minor action has, but not much more than that.

What’s going on? I am inclined to think that here we should invoke something about the symbolic meaning of one’s actions. In the case where one turns the steering wheel on the trolley for fun, while knowing epistemically close effects, one exhibits a callous disregard for the sanctity of human life. But when one has a conversation with someone after work, given the epistemic distance, one does not exhibit the same callous disregard.

It is not surprising if callousness and regard for sacredness should depend on fine details of epistemic and other distance. Think of the phenomenon of jokes that come “too soon” after a terrible event: they show a callous disregard for evil. But similar jokes about temporally, personally and/or epistemically distant events may be acceptable.


Suppose that Alice wishes to steal an item she can only take if Bob is dead. She plans to go to Bob’s house and ensure he is dead, by first checking if he is already dead, and shooting him if he is not, and then she plans to take the item. She goes to Bob’s house and finds Bob dead. And then she successfully takes the item.

Did Alice intend Bob’s death as a means to her theft? It seems she did. But one might think this:

  1. If x intends y as a means to z, and x’s plan of action succeeds, then x caused y.

However, Alice’s plan does succeed and yet Alice did not cause Bob’s death.

Perhaps I was too quick to say that Alice intended Bob’s death. Maybe instead Alice has a conditional intention that Bob die if he is not already dead. This is, of course, a wicked conditional intention, but it is a different thing to have this conditional intention than to intend Bob’s death.

I am now inclined to think this is the right way to understand ensuring: one ensures X provided that one conditionally intends to bring about X if X does not happen for some other reason.

Bidirectionality in means and ends

I never seem to tire of this action-theoretic case. You need to send a nerve signal to your arm muscles because there is a machine that detects these signals and dispenses food, and you’re hungry. So you raise your arm. What is your end? Food. What is your means to the food? Sending a nerve signal. But what is the means to the nerve signal?

The following seems correct to say: You raised your arm in order that a nerve signal go to your arm. What has puzzled me greatly about this case in the past is this. The nerve signal is a cause of the arm’s rising, and the effect can’t be the means to the cause. But I now think I was confused. For while the nerve signal is a cause of the arm’s rising, the nerve signal is not a cause of your raising your arm. For your raising your arm is a complex event C that includes an act of will W, a nerve signal S, and the rising of the arm R. The nerve signal S is a part, but not a cause, of the raising C, though it is a cause of the rising R.

So it seems that the right way to analyze the case is this. You make the complex event C happen in order that its middle part S should happen. Thus we can say that you make C happen in order that its part S should happen in order that you should get food. Then C is a means to S, and S is a means to food, but while S is a causal means to food, C is a non-causal means to S. But it’s not a particularly mysterious non-causal means. It sometimes happens that to get an item X you buy an item Y that includes X as a part (for instance, you might buy an old camera for the sake of the lens). There is nothing mysterious about this. Your obtaining Y is a means to your obtaining X, but there is no causation between the obtaining of Y and the obtaining of X.

Interestingly, sometimes a part serves as a means to a whole, but sometimes a whole serves as a means to the part. And this can be true of the very same whole and the very same part in different circumstances. Suppose that as a prop for a film, I need a white chess queen. I buy a whole set of pieces to get the white queen, and then throw out the remaining pieces in the newly purchased set to avoid clutter. Years later, an archaeologist digs up the 31 pieces I threw out, and buys my white queen from a collector to complete the set. Thus, I acquired the complete set to have the white queen, while the archaeologist acquired the white queen to have the complete set. This is no more mysterious than the fact that sometimes one starts a fire to get heat and sometimes one produces heat to light a fire.

Just as in some circumstances an event of type A can cause an event of type B and in other circumstances the causation can go the other way, so too sometimes an event of type A may partly constitute an event of type B, and sometimes the constitution can go the other way. Thus, my legal title to the white queen is constituted by my legal title to the set, but the archaeologist’s legal title to the set is partly constituted by their legal title to the white queen.

There still seems to be an oddity. In the original arm case, you intend your arm’s rise not in order that your arm might rise—that you don’t care about—but in order that you might send a nerve signal. Thus, you intend something that you don’t care about. This seems different from buying the chess set for the sake of the queen. For there you do care about your title to the whole set, since it constitutes your title to the queen. But I think the oddity can probably be resolved. For you only intend your arm’s rising by intending the whole complex event C of your raising your arm. Intending something you don’t care about as part of intending a whole you do care about is not that unusual.

Magnetic sensor arcade spinner

For a while I've wanted to have an arcade spinner for games like Tempest and Arkanoid. I made one with an Austria Microsystems hall-effect magnetic sensor. The spinner is mounted on ball-bearings and has satisfyingly smooth motion with lots of inertia.

Build instructions are here.

Tuesday, May 16, 2023

Morality and intention

Some philosophers (Thomson and Rachels, for instance) think that intention does not affect the rightness or wrongness of an act.

This view is quite implausible in the special case of speech acts, where the existence, type and content of a speech act is determined in part by intentions. If I enter a password into a computer by voice, I am not engaging in a speech act, even if I know there is a person near me who may think that I am speaking to them. Whether I am promising or predicting a future action depends in part on my intentions (“If you give me a paper outside of class time, I will lose it” could be a promise when said by a mean professor, but ordinarily is just a prediction). Who “you” refers to depends on the speaker’s intention to address a particular person.

And of course whether a speech act of a particular type and content is being engaged in can be quite relevant to the moral status of what one is doing. For a police officer to assert a racist proposition is wrong, but it need not be wrong for them to quote a racist proposition asserted by a suspect or to enter a racist sentence by voice as a password into a suspect’s computer, and in ambiguous contexts the difference can simply be intention.

One might say that speech acts are not a counterexample to the moral irrelevance of intention thesis because here the intention determines the type of act, and the irrelevance of intention thesis only applies when we fix the type of act:

  1. Two acts of the same type in the same circumstances have the same moral status, even if the intentions behind them are different.

If this is right, then the moral irrelevance of intention thesis is one that typical action theorists who think intention is morally important can agree with. For they think that intention is crucial to determining the type of act—an intentional killing, for instance, being a different kind of act from the causing of a foreseen but unintended death.

Perhaps what the advocates of the irrelevance of intention need to do is to combine the moral irrelevance of intention thesis, for acts of fixed type, with the thesis:

  2. Many acts other than speech acts do not depend on intention for the identification of their type.

It’s hard to criticize such a squishy thesis. But it’s worth noting that most acts that involve another person have an expressive component, and expressive acts are like speech acts in having intention as a crucial component. One respects, disrespects, regards or disregards other people in typical interactions, and these things depend in part on intention. This is compatible with (2), but it makes the moral irrelevance of intention thesis much less powerful.

Thursday, May 11, 2023

Two ways of evaluating rationality

Suppose that I am reliably informed that I am about to be cloned, with all my memories and personality. Tomorrow, I and my clone will both have apparent memories of having been so informed, though of course my clone will be wrong—my clone will not have been informed of the impending cloning, since he didn’t exist prior to the cloning.

After the cloning, what probability should I assign to the hypothesis that I am the original Alexander Pruss? It seems obvious that it should be 1/2. My evidence is the same as my clone’s, and exactly one of us is right, and so at this time it seems rational to assign 1/2.

But things look different from the forward-looking point of view. Suppose that after being informed that I am about to be cloned, and before the cloning is done, I have the ability to adopt any future epistemic strategy I wish, including the ability to unshakeably force myself to think that I am the original Alexander Pruss. The catch, of course, is that my clone will unshakeably think it’s the original, too. This may cause various inconveniences to me, and is unfortunate for the clone as one of its central beliefs will be wrong. But when one considers what is epistemically rational, one only considers what is epistemically good for oneself. And it is clear that I will have more of the truth if both I and the clone each think ourselves to be the original person than if we are both sceptical. It thus seems that I ought to adopt the strategy of thinking myself to be the original, come what may.

Of course, after the cloning, I and my clone will both have apparent memories of having adopted that strategy. We may be impressed by the argument that we should assign probability 1/2, and each of us may struggle to suspend judgment on being the original, but in the end we will be stuck with the unshakeable belief—true in my case and false in his.

If my suggestions above are right, then a lesson of the story is that we need to be very cautious about inferring what is rational to think now from what was a rational policy to have adopted. This should, for instance, make one cautious about arguments for Bayesian conditionalization on the grounds that such conditionalization is the optimal policy to adopt.

Monday, May 8, 2023

Gaining and losing personhood?

  1. Love (of the relevant sort) is appropriately only a relation towards a person.

  2. Someone appropriately has an unconditional love for another human.

  3. One can only appropriately have an unconditional R for an individual if the individual cannot cease to have the features that make R appropriate towards them.

  4. Therefore, at least one human is such that they cannot cease to be a person. (1–3)

  5. If at least one human is such that they cannot cease to be a person, then all humans are such that they cannot cease to be a person.

  6. If all humans are such that they cannot cease to be a person, then it is impossible for a non-person to become a human person.

  7. All humans are such that they cannot cease to be a person. (4,5)

  8. It is impossible for a non-person to become a human person. (6,7)

  9. Any normal human fetus can become a human person.

  10. Therefore, any normal human fetus is a person. (8,9)

(I think this holds of non-normal human fetuses as well, but that’ll take a bit more argument.)

It’s important here to distinguish the relevant sort of love—the intrinsically interpersonal kind—from other things that are analogously called love, but might perhaps better be called, say, liking or affection, which one can have towards a non-person.

I think the most controversial premises are 2 and 9. Against 2, I could imagine someone who denies 7 insisting that the most that is appropriate is to love someone on the condition of their remaining a person. But I still think this is problematic. Those who deny 7 presumably do so in part because they think that some real-world conditions like advanced Alzheimer’s rob us of our personhood. But now consider the repugnance of wedding vows that promise to love until death or damage to mental function do us part.

Standing against 9 would be “constitution views” on which, normally, human fetuses become human animals, and these animals constitute but are not identical with human persons. These are ontologies on which two distinct things sit in my chair, I and the mammal that constitutes me, ontologies on which we are not mammals. Again, this is not very plausible, but it is a not uncommon view among philosophers.

Glitches in the moral law?

Human law is a blunt instrument. We often replace the thing that we actually care about by a proxy for it, because it makes the law easier to formulate, follow and/or enforce. Thus, to get a driver’s license, you need to pass a multiple choice test about the rules of the road. Nobody actually cares whether you can pass the test: what we care about is whether you know the rules of the road. But the law requires passing a test, not knowledge.

When a thing is replaced by (sometimes we say “operationalized by”) a proxy in law, sometimes the law can be practically “exploited”, i.e., it is possible to literally follow the law while defeating its purpose. Someone with good test-taking skills might be able to pass a driving rules test with minimal knowledge (I definitely had a feeling like that in regard to the test I took).

A multiple-choice test is not a terrible proxy for knowledge, but not great. Night is a very good proxy for times of significant natural darkness, but eclipses show it’s not a perfect proxy. In both cases, a law based on the proxy can be exploited and will in more or less rare cases have unfortunate consequences.

But whether a law can be practically exploited or not, pretty much any law involving a proxy will have unfortunate or even ridiculous consequences in far-out scenarios. For instance, suppose some jurisdiction defines chronological age as the difference in years between today’s date and the date of birth, and then has some legal right that kicks in at age 18. Then if a six-month-old travels to another stellar system at close to the speed of light, and returns as a toddler, but 18 years have elapsed on earth, they will have the legal rights accruing to an 18-year-old. The difference in years between today’s date and the date of birth is only a proxy for the chronological age, but it is a practically nearly perfect proxy—as long as we don’t have near-light-speed travel.
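The arithmetic behind the example is just special-relativistic time dilation. A small sketch, with a hypothetical cruising speed of 99.5% of c chosen by me to make the numbers come out roughly like the story:

```python
import math

def proper_time_years(earth_years, beta):
    """Proper time elapsed for a traveler moving at speed beta * c,
    given coordinate (earth) time: tau = t * sqrt(1 - beta**2)."""
    return earth_years * math.sqrt(1 - beta ** 2)

# If 18 earth years elapse while the child cruises at 0.995c, the child
# ages only about 1.8 years in flight: a legally 18-year-old toddler.
tau = proper_time_years(18.0, 0.995)
print(round(tau, 2))
```

So a six-month-old who leaves and spends the whole trip at that speed comes back a bit over two years old while the birth-date proxy says eighteen.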

If a law involves a proxy that does not match the reality we care about in too common or too easy to engineer circumstances, then that’s a problem. On the other hand, if the mismatch happens only in circumstances that the lawmaker knows for sure won’t actually happen, that’s not an imperfection in the law.

Now suppose that God is the lawmaker. By the above observations, it does not reflect badly on a lawmaker if a law involves a proxy that fails only in circumstances that the lawmaker knows for sure won’t happen. More generally, it does not reflect badly on a lawmaker if a law has unfortunate or ridiculous consequences in cases that the lawmaker knows for sure won’t happen. Our experience with human law suggests that such cases are difficult to avoid without making the law unwieldy. And while there is no great difficulty for God in making an unwieldy law, such a law would be hard for us to follow.

In a context where a law is instituted by God (whether by command, or by desire, or by the choice of a nature for a created person), we thus should not be surprised if the law “glitches” out in far-out scenarios. Such “glitches” are no more an imperfection than it is an imperfection of a helicopter that it can’t fly on the moon. This should put a significant limitation on the use of counterexamples in ethics (and likely epistemology) in contexts where we are allowing for the possibility of a divine institution of normativity (say, divine command or theistic natural law).

One way that this “glitching” can be manifested is this. The moral law does not present itself to us as just a random sequence of rules. Rather, it is an organized body, with more or less vague reasons for the rules. For instance “Do not murder” and “Do not torture” may come under a head of “Human life is sacred.” (Compare how US federal law has “titles” like “Title 17: Copyright” and “Title 52: Voting and Elections”, and presumably there are vague value-laden principles that go with the title, such as promoting progress with copyright and giving voice to people with voting.) In far-out scenarios, the rules may end up conflicting with their reasons. Thus, to many people “Do not murder” would not seem a good way to respect the sacredness of human life in far-out cases where murdering an innocent person is the only way to save the human race from extinction. But suppose that God in instituting the law on murder knew for sure that there would never occur a situation where the only way to save the human race from extinction is murder. Then there would be no imperfection in making the moral law be “Do not murder.” Indeed, this would be arguably a better law than “Do not murder unless the extinction of humanity is at stake”, because the latter law is needlessly complex if the extinction of humanity will never be at stake in a potential murder.

Thus the theistic deontologist faced with the question of whether it would be right to murder if that were the only way to save the human race can say this: The law prohibits murder even in this case. But if this case was going to have a chance of happening, then God would likely have made a different law. Thus, there are two ways of interpreting the counterfactual question of what would happen if we were in this far-out situation. We can either keep fixed the moral law, and say that the murder would be wrong, or we can keep fixed God’s love of human life, and say that in that case God would likely have made a different law and so it wouldn’t be wrong.

We should, thus, avoid counterexamples in ethics that involve situations that we don’t expect to happen, unless our target is an ethical theory (Kantianism?) that can’t make the above move.

But what about counterexamples in ethics that involve rare situations that do not make a big overall difference (unlike the case of the extinction of the human race)? We might think that for the sake of making the moral law more usable by the limited beings governed by it, God could have good reason for making laws that in some situations conflict with the reasons for the laws, as long as these situations are not of great importance to the human species. (The case of murdering to prevent the extinction of the human race would be of great importance even if it were extremely rare!)

If this is right—and I rather wish it isn’t—then the method of counterexamples is even more limited.

Friday, May 5, 2023

The joy of error bars

We normally think of approximation and imprecision as an unfortunate fact of our epistemic lives. But if the arguments of my previous two posts are correct, then there is a really serious problem with updating our information on precise data, such as that a spinner landed exactly at 102.34, or that my height is exactly 182.01 cm, etc. Basically, it seems, in a number of probabilistic cases that information is useless, because it has zero probability on all the hypotheses under consideration. (A technical way to see the problem is that in contemporary probability theory, when dealing with continuous probability distributions, conditional probabilities are only defined up to sets of measure zero.)

But there is no problem at all in dealing with ranged information, such as that I am 182.0 ± 0.3 cm tall, since that is apt to have non-zero probability on the hypotheses that I am likely to be evaluating (say, hypotheses about the distribution of heights among male members of different professions). That our observations have error bars is essential for us to make use of them, at least sometimes.
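A small numerical sketch of the contrast. The hypotheses, priors and normal model below are illustrative assumptions of mine, not anything from the post: a point datum like “exactly 182.01 cm” has probability zero under any continuous height distribution, but the ranged datum 182.0 ± 0.3 cm has positive probability under each hypothesis, so Bayes’ theorem applies directly.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def interval_likelihood(lo, hi, mu, sigma):
    """P(lo <= height <= hi) under N(mu, sigma): strictly positive,
    unlike the probability of any exact point value."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Two hypothetical hypotheses about mean male height (cm), equal priors.
h1 = interval_likelihood(181.7, 182.3, mu=178, sigma=7)
h2 = interval_likelihood(181.7, 182.3, mu=170, sigma=7)
posterior_h1 = h1 / (h1 + h2)
print(posterior_h1)  # the ranged datum favors the 178 cm hypothesis
```

Replace the interval with a single point and both likelihoods collapse to zero, leaving the posterior 0/0: exactly the uselessness of exact data described above.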

Curious, isn’t it? It suggests something deep about the connection between our epistemology and our sensory limitations. But I don’t know what exactly it suggests.

Thursday, May 4, 2023

Reflection and null probability

Suppose a number Z is chosen uniformly randomly in (0, 1] (i.e., 0 is not allowed but 1 is) and an independent fair coin is flipped. Then the number X is defined as follows. If the coin is heads, then X = Z; otherwise, X = 2Z.

  • At t0, you have no information about what Z, X and the coin toss result are, but you know the above setup.

  • At t2, you learn the exact value of X.

Here’s the puzzling thing. At t2, when you are informed that X = x (for some specific value of x) your total evidence since t0 is:

  • Ex: Either the coin landed heads and Z = x, or the coin landed tails and Z = x/2.

Now, if x > 1, then when you learn Ex, you know for sure that the coin was tails.

On the other hand, if x ≤ 1, then Ex gives you no information about whether the coin landed heads or tails. For Z is chosen uniformly and independently of the coin toss, and so as long as both x and x/2 are within the range of possibilities for Z, learning Ex seems to tell you nothing about the coin toss. For instance, if you learn:

  • E1/4: Either the coin landed heads and Z = 1/4, or the coin landed tails and Z = 1/8,

that seems to give you no information about whether the coin landed heads or tails.

Now add one more stage:

  • At t1, you are informed whether x ≤ 1 or x > 1.

Suppose that at t1 what you learn is that x ≤ 1. That is clearly evidence for the heads hypothesis (since x > 1 would conclusively prove the tails hypothesis). In fact, standard Bayesian reasoning implies you will assign probability 2/3 to heads and 1/3 to tails at this point.
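The 2/3 figure is easy to confirm: heads makes x ≤ 1 certain, while tails makes it happen only half the time, so the posterior on heads is 1/(1 + 1/2) = 2/3. A minimal Monte Carlo sketch of the setup:

```python
import random

random.seed(1)
heads_given_small, total_small = 0, 0
for _ in range(100000):
    z = random.random()           # stand-in for Z uniform on (0, 1]
    heads = random.random() < 0.5  # independent fair coin
    x = z if heads else 2 * z      # X = Z on heads, 2Z on tails
    if x <= 1:                     # condition on what you learn at t1
        total_small += 1
        heads_given_small += heads
print(heads_given_small / total_small)  # close to 2/3
```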

But now we have a puzzle. For at t1, you assign credence 2/3 to heads, but the above reasoning shows you that at t2, you will assign credence 1/2 to heads. For at t2 your total evidence since t0 will be summed up by Ex for some specific x ≤ 1 (Ex already includes the information given to you at t1). And we saw that if x ≤ 1, then Ex conveys no evidence about whether the coin was heads or tails, so your credence in heads at t2 must be the same as at t1.

So at t1 you assign 2/3 to heads, but you know that when you receive further more specific evidence, you will move to assign 1/2 to heads. This is counterintuitive, violates van Fraassen’s reflection principle, and lays you open to a Dutch Book.

What went wrong? I don’t really know! This has been really puzzling me. I have four solutions, but none makes me very happy.

The first is to insist that Ex has zero probability and hence we simply cannot probabilistically update on it. (At most we can take P(H|Ex) to be an almost-everywhere defined function of x, but that does not provide a meaningful result for any particular value of x.)

The second is to say that true uniformity of distribution is impossible. One can have the kind of uniformity that measure theorists talk about (basically, translation invariance), but that’s not enough to yield non-trivial comparisons of the probabilities of individual values of Z (we assumed that x and x/2 were equally likely options for Z if x ≤ 1).

The third is some sort of finitist thesis that rules out probabilistic scenarios with infinitely many possible outcomes, like the choice of Z.

The fourth is to bite the bullet, deny the reflection principle, and accept the Dutch Book.

Wednesday, May 3, 2023

Conditionalizing on classically null events

Some events have probability zero in classical probability. For instance, if you spin a continuous and fair spinner, the probability of its landing on any specific value is classically zero.

Some philosophers think we should be able to conditionalize on possible events that classically have zero probability, say by assigning non-zero infinitesimal probabilities to such events or by using Popper functions. I think there is very good reason to be suspicious of this.

Consider these very plausible claims:

  1. For y equal to 1 or 2, let Hy be a hypothesis about the production of the random variable X such that the conditional distribution of X on Hy is uniform over the interval [0, y). Suppose H1 and H2 have non-zero priors. Then the fact that the value of X is x supports H1 over H2 if x < 1.

  2. If two claims are logically equivalent and they can be conditionalized on, and one supports a hypothesis over another hypothesis, so does the other.

  3. If a random variable is independent of each of two hypotheses, then no fact about the value of the random variable supports either hypothesis over the other.

But (1) yields a counterexample to the method of conditionalization by infinitesimal probabilities. For suppose a random variable Z is uniformly randomly chosen in [0, 1) by some specific method. Suppose further that a fair coin, independent of Z, was flipped, and on heads we let X = Z and on tails we let X = 2Z. Let H1 be the heads hypothesis and let H2 be the tails hypothesis. Then X is uniformly distributed over [0, y) conditionally on Hy for y = 1, 2.
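A quick simulation check that the construction really does make X uniform over [0, y) conditionally on each Hy (the sampler below is my straightforward rendering of the setup; Python’s random.random() returns values in [0, 1), matching the stated range for Z):

```python
import random

random.seed(2)

def sample_x(tails, rng=random):
    """Z uniform on [0, 1); X = Z on heads (H1), X = 2Z on tails (H2)."""
    z = rng.random()
    return 2 * z if tails else z

n = 100000
stats = {}
for tails, y in ((False, 1.0), (True, 2.0)):
    xs = [sample_x(tails) for _ in range(n)]
    # If X is uniform on [0, y), its mean is y/2 and about a quarter
    # of the samples fall below y/4.
    stats[y] = (sum(xs) / n, sum(x < y / 4 for x in xs) / n)
print(stats)
```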

But now let E be the fact that X = 0, and suppose we can conditionalize on E. By (1), E supports H1 over H2 as 0 < 1. But E is logically equivalent to the fact that Z = 0. By (2), then Z = 0 supports H1 over H2. But Z is independent of H1 and of H2. So we have a contradiction to (3).

I think this line of thought undercuts my toy model argument in my last post.

Monday, May 1, 2023

Does my existence by itself confirm a multiverse?

Suppose I am considering two hypotheses, H1 and H2, and according to H2 there are more people. Does the fact that I exist give me reason to prefer H2, all other things being equal? If so, then my existence is apt to confirm the existence of a multiverse over a single universe.

Here is one reason to think this works. The probability that I exist in a given world, all other things being equal, seems proportional to the number of people in that world. Each person in that world corresponds to another opportunity for me to exist.

While this is tempting, here is a toy model that should give us pause. Suppose that I am defined by a real number parameter between 0 (inclusive) and 1 (not inclusive). According to hypothesis H1, a single real number is picked uniformly at random in the range, and the person with that parameter is created. According to hypothesis H2, two real numbers are picked uniformly and independently in the range, and persons corresponding to these are created. Learning that a person with my parameter is created seems to provide me with evidence for H2, since it’s twice as likely on H2 as on H1.

But this is tricky. In classical probability theory, it is correct to say that my parameter is twice as likely to be generated on H2 as on H1, but only because both probabilities are zero, and zero is twice zero. By the same token, generating my parameter on H1 is twice as likely as generating it on H2!
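The zero-probability point is easy to check numerically, with floating-point doubles as a crude stand-in for the continuum:

```python
import random

random.seed(2)
target = 1 / 3  # a fixed parameter value in [0, 1)

# random.random() returns a multiple of 2**-53, and the double closest
# to 1/3 is not one, so an exact hit never occurs here; in the genuine
# continuum, an exact hit has probability zero on any hypothesis.
hits = sum(random.random() == target for _ in range(10 ** 6))
print(hits)  # 0
```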

Perhaps, though, we want to depart from classical probability theory in some way, say by allowing non-zero infinitesimal probabilities or by an intuitive handwavy “this is twice as likely as that”. However, it is then no longer clear that on H2 there is twice as big a chance of hitting my parameter. For there are (infinitely) many ways of picking a number between 0 and 1 uniformly at random.

Here’s one way (call it method A):

  1. You write down “0.”, then roll a fair ten-sided die infinitely many times, writing down the results as the digits after the decimal point, thereby generating a decimal representation of a number. If the number ends with infinitely many nines, try again.

(The final proviso is to ensure that intuitively each number is equally likely. Without that proviso, 1/10 would be more likely than 1/3, as there would be two ways of getting 1/10, namely 0.1000... and 0.0999..., but only one way to get 1/3, namely 0.3333....)

Here is another way (call it method B):

  1. You write down “0.”, then roll a fair ten-sided die infinitely many times, discard the result of the first throw, and write down the remaining results as the digits after the decimal point, thereby generating a decimal representation of a number. If the number ends with infinitely many nines, try again.

Intuitively, method B has ten times the probability of generating any given number that method A has, provided literally the numerically same die throws occur in the two cases. For consider the number 1/3 = 0.3333.... To generate it by method A, you need every die throw to show a three. To generate it by method B, you need only the throws after the first to show threes; the first throw can be anything, so there are ten times as many ways to generate the number.
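The ten-to-one intuition can be sketched in a finite truncation (a stand-in for the infinitary comparison, not the hyperreal calculation): fix the same number of die throws for both methods, so that method B determines one fewer digit than method A, and count how often each method's digits match the corresponding truncation of 0.333...:

```python
import random

random.seed(3)
N = 10 ** 6
throws = 3  # both methods consume the same number of die throws

def method_a() -> tuple:
    # every throw becomes a digit
    return tuple(random.randrange(10) for _ in range(throws))

def method_b() -> tuple:
    # the first throw is discarded, so one fewer digit from the same throws
    rolls = [random.randrange(10) for _ in range(throws)]
    return tuple(rolls[1:])

target = (3, 3, 3)  # truncated digits of 1/3 = 0.333...
pa = sum(method_a() == target for _ in range(N)) / N
pb = sum(method_b() == target[: throws - 1] for _ in range(N)) / N
print(pb / pa)  # roughly 10
```

The first throw in method B is a free slot with ten possible outcomes, which is where the factor of ten comes from.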

Now, if the single selection of a parameter on H1 uses method B while the double selection of parameters on H2 uses method A, then intuitively we are five times as likely to generate my parameter on H1 as on H2. Thus, merely saying that on both hypotheses the parameters are generated uniformly does not settle how the probabilities of generating my parameter compare.

We might insist that the same method for generating parameters is used on both hypotheses. But notice that in cosmological applications this is implausible. If H2 is some multiverse hypothesis and H1 is a single-universe hypothesis, we are unlikely to be able to count on the two hypotheses involving even the same laws of nature, much less the same selection process for the parameters of the persons. (Besides all this, it is really unclear what it even means to say that there are two different runs of method A.)

So, here’s what I am thinking. On classical probability theory, there is no difference between the probability of my parameter being generated on H2 and on H1, because both probabilities are zero. On non-classical probability theory, we can perhaps make sense of a difference between the probabilities, but we cannot count on the hypothesis with more people being more likely to generate my parameter.

Given all this, there does not seem to be a way to compare the evidential impact of my existence on the two hypotheses using probabilistic methods. Maybe all we have is intuition.