Wednesday, April 26, 2023

Multiverses as a skeptical hypothesis

  1. A multiverse hypothesis that counters the fine-tuning argument posits laws of nature that vary across physical reality.

  2. A hypothesis that posits laws of nature that vary across physical reality contradicts the uniformity of nature.

  3. A hypothesis that contradicts the uniformity of nature is a global skeptical hypothesis.

  4. Global skeptical hypotheses should be denied.

  5. So, a multiverse hypothesis that counters the fine-tuning argument should be denied.

The thought behind (1) is that the constants in the laws of nature are part and parcel of the laws. This can be denied. But still, the above argument seems to have some plausibility.

Cable Guy and van Fraassen's Reflection Principle

Van Fraassen’s Reflection Principle (RP) says that if you are sure you will have a specific credence at a specific future time, you should have that credence now. To avoid easy counterexamples, the RP needs some qualifications such that there is no loss of memory, no irrationality, no suspicion of either, full knowledge of one’s own credences at any time, etc.

Suppose:

  1. Time can be continuous and causal finitism is false.

  2. There are non-zero infinitesimal probabilities.

Then we have an interesting argument against van Fraassen’s Reflection Principle. Start by letting RP+ be the strengthened version of RP which says that, with the same qualifications as needed for RP, if you are sure you will have at least credence r at a specific future time, then you should have at least credence r now. I claim:

  3. If RP is true, so is RP+.

This is pretty intuitive. I think one can actually give a decent argument for (3) beyond its intuitiveness, and I’ll do that in the appendix to the post.

Now, let’s use Cable Guy to give a counterexample to RP+ assuming (1) and (2). Recall that in the Cable Guy (CG) paradox, you know that CG will show at one exact time uniformly randomly distributed between 8:00 and 16:00, with 8:00 excluded and 16:00 included. You want to know if CG is coming in the afternoon, which is stipulated to be between 12:00 (exclusive) and 16:00 (inclusive). You know there will come a time, say one shortly after 8:00, when CG hasn’t yet shown up. At that time, you will have evidence that CG is coming in the afternoon—the fact that they haven’t shown up between 8:00 and, say, 8:00+δ for some δ > 0 increases the probability that CG is coming in the afternoon. So even before 8:00, you know that there will come a time when your credence in the afternoon hypothesis will be higher than it is now, assuming you’re going to be rational and observing continuously (this uses (1)). But clearly before 8:00 your credence should be 1/2.
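To make the update concrete, here is a minimal numerical sketch (mine, not part of the original argument; the δ values are just illustrations) of how the credence in the afternoon hypothesis rises once CG hasn’t shown up shortly after 8:00, with the arrival time uniform on (8:00, 16:00]:

```python
import random

def afternoon_credence(t):
    """Credence that CG comes in the afternoon (12:00, 16:00],
    given that CG has not shown up by hour t (8 <= t <= 12),
    with the arrival time uniform on (8:00, 16:00]."""
    return 4.0 / (16.0 - t)

# Analytic values: exactly 1/2 at 8:00, strictly higher at any 8:00 + delta.
for t in [8.0, 8.25, 9.0, 11.0, 12.0]:
    print(f"not shown up by {t:5.2f}: credence = {afternoon_credence(t):.3f}")

# Monte Carlo check at t = 8.25 (delta = 15 minutes).
arrivals = [random.uniform(8.0, 16.0) for _ in range(10**6)]
late = [a for a in arrivals if a > 8.25]
print(sum(a > 12.0 for a in late) / len(late))  # ~ 4 / (16 - 8.25) ≈ 0.516
```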

This is not yet a counterexample to RP+ for two reasons. First, there isn’t a specific time such that you know ahead of time for sure your credence will be higher than 1/2, and, second, there isn’t a specific credence bigger than 1/2 that you know for sure you will have. We now need to do some tricksy stuff to overcome these two barriers to a counterexample to RP+.

The specific time barrier is actually pretty easy. Suppose that a continuous (i.e., not based on frames, but truly continuously recording—this may require other laws of physics than we have) video tape is being made of your front door. You aren’t yourself observing your front door. You are out of the country, and will return around 17:00. At that point, you will have no new information on whether CG showed up in the afternoon or before the afternoon. An associate will then play the tape back to you. The associate will begin playing the tape back strictly between 17:59:59 and 18:00:00, with the start of the playback so chosen that exactly at 18:00:00, CG won’t have shown up in the playback. However, you don’t get to see the clock after your return, so you can’t get any information from noticing the exact time at which playback starts. Thus, exactly at 18:00:00 you won’t know that it is exactly 18:00:00. However, exactly at 18:00:00, your credence that CG came in the afternoon will be bigger than 1/2, because you will know that the tape has already been playing for a certain period of time and CG hasn’t shown up yet on the tape. Thus, you know ahead of time that exactly at 18:00:00 your credence in the afternoon hypothesis will be higher than 1/2.

But you don’t know how much higher it will be. Overcoming that requires a second trick. Suppose that your associate is guaranteed to start the tape playback a non-infinitesimal amount of time before 18:00:00. Then at 18:00:00 your credence in the afternoon hypothesis will be more than 1/2 + α for any infinitesimal α. By RP+, before the tape playback, your credence in the afternoon hypothesis should be at least 1/2 + α for every infinitesimal α. But this is absurd: it should be exactly 1/2.

So, we now have a full counterexample to RP+, assuming infinitesimal probabilities and the coherence of the CG setup (i.e., something like (1)). At exactly 18:00:00, with no irrationality, memory loss or the like involved (ignorance of what time it is is neither irrational nor a type of memory loss), you will have a credence of at least 1/2 + α for some positive infinitesimal α, but right now your credence should be exactly 1/2.

Appendix: Here’s an argument that if RP is true, so is RP+. For simplicity, I will work with real-valued probabilities. Suppose all the qualifications of RP hold, and you are now sure that at t1 your credence in p will be at least r. Let X be a real number uniformly randomly chosen between 0 and 1 independently of p and any evidence you will acquire by t1. Let Ct(q) be your credence in q at t. Let u be the following proposition: X < r/Ct1(p) and p is true. Then at t1, your credence in u will be (r/Ct1(p))Ct1(p) = r (where we use the fact that r ≤ Ct1(p)). Hence, by RP your credence now in u should be r. But since u is a conjunction of two propositions, one of them being p, your credence now in p should be at least r.

(One may rightly worry about difficulties in dropping the restriction that we are working with real-valued probabilities.)
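As a sanity check on the key step, here is a quick Monte Carlo sketch (my own illustration with made-up numbers: prior 0.7 in p, one binary piece of evidence, and r = 0.4 chosen so that the posterior in p is guaranteed to be at least r). Whatever the evidence turns out to be, the time-t1 credence in u comes out to r:

```python
import random

random.seed(0)
PRIOR_P = 0.7   # assumed prior credence in p
R = 0.4         # you are sure the posterior credence in p will be at least R
N = 10**6

def posterior(heads):
    """Posterior in p after the evidence: a coin that lands heads with
    probability 0.8 if p is true and 0.5 if p is false (assumed model)."""
    num = PRIOR_P * (0.8 if heads else 0.2)
    return num / (num + (1 - PRIOR_P) * 0.5)   # ≈ 0.789 on heads, ≈ 0.483 on tails

counts = {True: [0, 0], False: [0, 0]}   # evidence -> [times u true, total]
for _ in range(N):
    p = random.random() < PRIOR_P                  # whether p is true
    heads = random.random() < (0.8 if p else 0.5)  # the evidence
    x = random.random()                            # X, uniform and independent
    u = (x < R / posterior(heads)) and p           # the proposition u
    counts[heads][0] += u
    counts[heads][1] += 1

for ev, (u_true, total) in counts.items():
    print(f"evidence heads={ev}: frequency of u = {u_true/total:.3f} (should be ≈ {R})")
```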

Tuesday, April 25, 2023

The light and clap game

Suppose a light turns on at a uniformly chosen random time between 10 and 11 am, not including 10 am, and Alice wins a prize if she claps her hands exactly once after 10 am but before the light is on. Alice is capable of clapping or not clapping her hands instantaneously at any time, and at every time she knows whether the light is already on.

It seems that no matter when the light turns on, Alice could have clapped her hands before that, and hence if she does not clap, she can be rationally faulted.

But is there a strategy by which Alice is sure to win? Here is a reason to doubt it. Suppose there is such a strategy, and let C be the time at which Alice claps according to the strategy. Let L be the time at which the light turns on. Then we must have P(10<C<L) = 1: the strategy is sure to work. But let’s think about how C depends on L. If L ≤ 10 + x, for some specific x > 0, then it’s guaranteed that C < 10 + x. But because the only information available for deciding at a time t is whether the light is on or off, the probability that we have C < 10 + x cannot depend on what exact value L has as long as that value is at least 10 + x. You can’t retroactively affect the probability of C being before 10 + x once 10 + x comes around. Thus, P(C<10+x|L∈[t,t+δ]) will be the same for any t ≥ 10 + x and any δ > 0. But P(C<10+x|L∈[10+x,11]) = 1. (Why? Since the strategy is sure to work, for almost every value ℓ that L can take we have P(C<ℓ|L=ℓ) = 1. Pick such an ℓ with ℓ ≤ 10+x. By the no-retroaction point just made, P(C<ℓ|L=ℓ′) is the same for every ℓ′ ≥ ℓ, and hence equals 1; and C < ℓ entails C < 10+x.) So, P(C<10+x|L∈[t,t+δ]) = 1 whenever t ≥ 10 + x and x > 0. By countable additivity, it follows that P(C≤10|L∈[t,t+δ]) = 1, which is impossible since C > 10. Contradiction!

So there is no measurable random variable C that yields the time at which Alice claps and that depends only on the information available to Alice at the relevant time. So there is no winning strategy. Yet there is always a way to win!
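For illustration (not part of the argument above), here is a minimal simulation of the natural threshold strategies: clap at 10 + d hours if the light is still off then. Each such strategy wins with probability 1 − d, which approaches but never reaches 1:

```python
import random

random.seed(1)

def win_frequency(d, trials=10**5):
    """Strategy: clap at time 10 + d (hours) if the light is still off then.
    Alice wins iff she claps strictly after 10 and strictly before the light."""
    wins = 0
    for _ in range(trials):
        light = 10 + random.random()   # light time, uniform on (10, 11)
        clap = 10 + d
        if clap < light:               # clapped before the light came on
            wins += 1
    return wins / trials

for d in [0.5, 0.1, 0.01, 0.001]:
    print(f"clap at 10 + {d}: win frequency ≈ {win_frequency(d):.4f} (analytically {1 - d})")
```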

I don’t know how paradoxical this is. But if you think it’s paradoxical, then I guess it’s another argument for causal finitism.

Monday, April 24, 2023

Instrumental and non-instrumental goods

Aristotle writes at times as if the fact that something is non-instrumentally good for one makes it a greater good than merely instrumental goods. That’s surely false. Among the non-instrumental goods are many quite trivial goods: knowing how many blades of grass there are on one’s front lawn, enjoying a quick game of Space Invaders, etc.

Predictability of future credence changes

Suppose you update your credences via Bayesian update at discrete moments of time (i.e., at any future time, your credence is the result of a finite number of Bayesian updates from your present credence). Then it can be proved that you cannot be sure (i.e., assign probability one) that your credence will ever be higher than it is now, and similarly you cannot be sure that your credence will ever be lower than it is now.

The same is not true for continuous Bayesian update, as is shown by Alan Hajek’s Cable Guy story. Cable Guy will come tomorrow between 8:00 am and 4:00 pm, with 4:00 pm included but 8:00 am excluded. Your current credence that they will come in the morning is 1/2 and your current credence that they will come in the afternoon is also 1/2.

Then it is guaranteed that there will be a time after 8:00 am when Cable Guy hasn’t come yet. At that time, because you have ruled out some of the morning possibilities but none of the afternoon possibilities, your credence that the Cable Guy will come in the afternoon will have increased and your credence that the Cable Guy will come in the morning will have decreased.

Proof of fact in first paragraph: A Bayesian agent’s credences are a martingale. To obtain a contradiction, suppose there is probability 1 that the credences will go above their current value. Let Cn be the agent’s credence after the nth update, and consider everything from the point of view of the agent right now, before the updates, with current credence r. Let τ be the first n such that Cn > r (by our assumption, this is defined with probability one). Since credences are bounded, Doob’s Optional Sampling Theorem applies and E[Cτ] = r. But this contradicts the fact that Cτ > r with probability one.
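Here is a small simulation sketch of the martingale point, with an assumed two-hypothesis setup of my own (a coin that is either fair or lands heads with probability 0.7, prior 1/2): stop the first time the credence exceeds its starting value, and the stopped credence still averages out to the starting value, while a positive fraction of runs never see the credence rise at all:

```python
import random

random.seed(2)
STEPS, TRIALS = 200, 20000
R = 0.5   # prior credence that the coin is biased (heads probability 0.7)

stopped_values, never_rose = [], 0
for _ in range(TRIALS):
    biased = random.random() < R          # draw the hypothesis from the prior
    heads_prob = 0.7 if biased else 0.5
    cred, stopped = R, None
    for _ in range(STEPS):
        heads = random.random() < heads_prob
        like_biased = 0.7 if heads else 0.3
        like_fair = 0.5
        cred = cred * like_biased / (cred * like_biased + (1 - cred) * like_fair)
        if stopped is None and cred > R:  # τ: first update at which the credence exceeds R
            stopped = cred
    if stopped is None:
        never_rose += 1
        stopped = cred                    # no such time within STEPS updates: use final credence
    stopped_values.append(stopped)

print("mean credence at τ (or at the end):", sum(stopped_values) / TRIALS)  # ≈ R
print("fraction of runs where the credence never rose above R:", never_rose / TRIALS)
```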

Friday, April 21, 2023

Binding and small probabilities

Suppose we neglect events with probability less than ϵ for some small ϵ > 0. Let’s suppose two independent random things have happened. First, an event E may or may not have happened, and P(E) = 2ϵ. Second, a fair die was rolled. You don’t have any information on whether E happened or on how the die landed. The following complex deal (The Deal) is offered to you.

If you accept The Deal, it will be revealed to you whether E happened. Then the following will happen:

  1. If E happened, you get a choice between:

    a. you pay a dollar, or

    b. the following happens:

      i. you get a dollar if the die showed 1, 2, 3 or 4, but

      ii. you get a year of torture if the die showed 5 or 6.

  2. If E did not happen, you pay ϵ cents.

The event of E happening and the die showing 5 or 6 has probability 2ϵ ⋅ (2/6) = (2/3)ϵ which we have supposed is negligible. So, it seems that 1.b.ii can be completely neglected. On the other hand, the event of E happening and the die showing 1, 2, 3 or 4 has probability 2ϵ ⋅ (4/6) = (4/3)ϵ, which is not negligible.

What should you do? The difficulty here is that a full probabilistic evaluation of what you will do depends on what your choice in case 1.a will be. One way to handle such cases is through binding: you think of your choice as being a choice between strategies and then you stick to your strategy no matter what. This seems to be a good way to handle various paradoxes like Satan’s Apple.

What are the relevant strategies here? Well, in terms of pure strategies (we can consider mixed strategies, but in this case I think they won’t change anything), they are:

  A. Reject The Deal.

  B. Accept The Deal, and if E happened, pay the dollar.

  C. Accept The Deal, and if E happened, don’t pay the dollar.

If you don’t neglect small probabilities, then clearly (A) is the right strategy to choose (and stick to).

Also, clearly, (B) is never the right strategy: whatever happens, you pay.

Now, suppose you do neglect small probabilities, and let’s evaluate (A) and (C). The payoff for (A) is zero. The payoff for (C) is, in dollars:

  • (4/3)ϵ − (1−2ϵ)(0.01)ϵ > 0.

For the torture option drops out, as it has the negligible probability (2/3)ϵ.
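To make the bookkeeping explicit, here is a small sketch of the three strategies’ expected payoffs with and without the naive “drop branches of probability below ϵ” rule. The dollar disvalue of a year of torture (−10^6 here) and the value of ϵ are my own illustrative assumptions:

```python
EPS = 1e-6            # illustrative neglect threshold
TORTURE = -1_000_000  # assumed dollar disvalue of a year of torture (illustrative)

def value(branches, neglect=False):
    """Expected dollar value of a list of (probability, payoff) branches,
    optionally dropping branches whose probability is below EPS."""
    return sum(p * v for p, v in branches if not (neglect and p < EPS))

# (A) reject The Deal; (B) accept and pay the dollar if E happened;
# (C) accept and take the die gamble if E happened.
A = [(1.0, 0.0)]
B = [(2 * EPS, -1.0), (1 - 2 * EPS, -0.01 * EPS)]
C = [(2 * EPS * 4 / 6, 1.0),      # E and die shows 1-4: win a dollar
     (2 * EPS * 2 / 6, TORTURE),  # E and die shows 5-6: torture, probability (2/3)eps < eps
     (1 - 2 * EPS, -0.01 * EPS)]  # no E: pay eps cents

for name, branches in [("A", A), ("B", B), ("C", C)]:
    print(name, "full value:", value(branches), " value with neglect:", value(branches, True))
```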

So, if you neglect small probabilities, and take binding to a strategy to be the right approach to such puzzles, you should accept The Deal and bind yourself to not pay the dollar. But now notice how psychologically impossible the binding is. If in fact E happened—and the probability of E is 2ϵ, which is not negligible—then you have to choose between paying a dollar and a wager that has a 2/3 chance of yielding a dollar and a 1/3 chance of a year of torture. How could you possibly accept a 1/3 chance of a year of torture in exchange for about $1.67? Real brainwashing would be required, not just a mere resolution to stick to a strategy.

So what? Why can’t the proponent of the binding solution simply agree that (C) is the abstractly best strategy, but since we can’t practically bind ourselves to it, we are stuck with (A)? But there is something counterintuitive about thinking that (C) is the abstractly best strategy when it requires brainwashing that is this extreme.

Thursday, April 20, 2023

Brownian motion and regret

Let Bt be a one-dimensional Brownian motion, i.e., Wiener process, with B0 = 0. Let’s say that at time 0 you are offered, for free, a game where your payoff at time 1 will be B1. Since the expected value of a Brownian motion at any future time equals its current value, this game has zero value, so you are indifferent and go for it.

But here is a fun fact. With probability one, at infinitely many times t between 0 and 1 we will have Bt < 0 (this follows from Th. 27.24 here). At any such time, you will expect your payout B1 to be negative. Thus, at infinitely many times you will regret your decision to play the game.

Of course, by symmetry, with probability one, at infinitely many times between 0 and 1 we will have Bt > 0. Thus if you refuse to play, then at infinitely many times you will regret your decision not to play the game.

So we have a case where regret is basically inevitable.
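A discretized sketch of the point (illustrative only; an exact Brownian path cannot be simulated): on a fine grid, nearly every sample path spends many grid times below 0 before time 1, and at each such time the conditional expectation of B1 given the path so far is just the current negative value Bt:

```python
import random, math

random.seed(3)
N = 10_000          # grid points on [0, 1]
DT = 1.0 / N

def one_path():
    b, below = 0.0, 0
    for _ in range(N):
        b += random.gauss(0.0, math.sqrt(DT))
        if b < 0:
            below += 1   # a grid time with B_t < 0; here E[B_1 | path so far] = B_t < 0
    return below

counts = [one_path() for _ in range(200)]
print("fraction of paths that dip below 0:", sum(c > 0 for c in counts) / len(counts))
print("average number of grid times with B_t < 0:", sum(counts) / len(counts))
```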

That said, the story only works if causal finitism is false. So if one is convinced (I am not) that regret should always be avoidable, we have some evidence for causal finitism.

Wednesday, April 19, 2023

Avoiding regrets

I’ve recently been troubled by cases where you are sure to regret your decision, but the decision still seems reasonable. Some of these cases involve reasonable-seeming violations of expected utility maximization, but there is also the Cable Guy paradox, though admittedly I think I can probably exclude the Cable Guy paradox with causal finitism.

I shared Cable Guy with Clare Pruss, and she said that the principle of avoiding future regrets is false, and should be modified to a principle of avoiding final future regrets, because there are ordinary cases where you expect to regret something temporarily. For instance, you volunteer to do something onerous, and you expect that while volunteering, you will be regretting your choice, but you will be glad afterwards.

In all the cases that I’ve been interested in, while you are sure that there will be regret at some point in the future, you are not sure that there will be regret at the end (half the time the Cable Guy comes at the time you bet on him coming, after all).

Tuesday, April 18, 2023

An odd motive for love

Here is an odd reason for love. Someone is doing really well, and when we come to love them, their wellbeing becomes in part our wellbeing. So it’s especially good for us to love all the people who are doing well. The most extreme version of this is loving God. For God has infinite wellbeing. And so by loving God, that infinite wellbeing becomes ours in a way.

My first reaction to the above thoughts was that this is ridiculous. It’s too cheap a way of increasing one’s wellbeing and seems to be a reductio of the thesis that whenever you love anyone, their wellbeing is incorporated into yours.

But it’s not actually all that cheap. For consider one of the paradigm attitudes opposed to love: envy. In envy, the other’s wellbeing makes us suffer. It seems exactly right to say that in addition to the other-centered reasons to avoid envy, envy is just stupid, because it increases your suffering with no benefit to anyone. But if so, and if love is opposed to envy, then it is not surprising that there is a benefit to love. And because envy is hard to avoid, the opposed love is not cheap, since it requires one to renounce envy.

But what about the oddity? Well, that oddity, I think, comes from the fact that while the benefit to ourselves from loving is indeed a reason to love, it cannot be our only reason, since love is essentially an attitude focused on the other’s good. At most, the realization that loving someone is good for us will help overcome reasons against love (the costs of love, say), and motivate us to try to become the kind of person who is less envious and more loving. But we can’t just say: “It’s good for me to love, ergo I love.” It’s harder than that. And in particular it requires a certain degree of commitment to the other person for good and ill, so if an attitude is solely focused on the desire to share the other’s good, that attitude is not love.

Friday, April 14, 2023

Time and extended wellbeing

It is said that when your friend has bad things happen, the occurrence of these bad things constitutes a loss of wellbeing for you. Not just because you are saddened, but simply directly in virtue of your interest in your friend’s wellbeing. This is said to happen even if you don’t know about your friend’s misfortune.

But when do you lose wellbeing? On the story above, you lose wellbeing when your friend suffers. But if we say that your loss of wellbeing is simultaneous with your friend’s, what does that mean, given that simultaneity is relative? What is the relevant reference frame?

There are two obvious candidates:

  1. Your reference frame.

  2. Your friend’s reference frame.

These may be quite different if you and your friend are traveling at high speeds through space. And there doesn’t seem to be a compelling argument for choosing one over the other. Furthermore, there really isn’t such a thing as the reference frame of a squishy object like a human being. Different parts of a human being are always moving in different directions. My chest moves away from my backbone, and soon moves towards it. It is tempting to define my reference frame as the reference frame of my center of mass. I am not sure this makes complete sense in the framework of general relativity (a center of mass is a weighted average of the positions of my parts, but when the positions lie in a curved spacetime, I don’t know if a weighted average is well-defined). But even in special relativity there are problems, since it is possible for an organism’s center of mass to move faster than light. (Imagine that a knife moving at nearly the speed of light cuts a stretched-out snake in half, and the snake briefly survives. During the time that the knife moved through the thickness of the snake, the center of mass of the snake moved by a quarter of the snake’s length.)
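For concreteness, a back-of-the-envelope sketch with made-up numbers (a 2 m snake, 5 cm thick, knife at 0.99c), on the reading that the surviving front half is what counts as the organism after the cut: its center of mass jumps a quarter of the body length in the time the knife takes to pass through the thickness, which works out to far faster than light:

```python
C = 299_792_458.0   # speed of light, m/s
LENGTH = 2.0        # assumed snake length, m
THICKNESS = 0.05    # assumed snake thickness, m
KNIFE_SPEED = 0.99 * C

cut_time = THICKNESS / KNIFE_SPEED   # time for the knife to pass through the body
com_shift = LENGTH / 4               # whole-body midpoint -> front-half midpoint
com_speed = com_shift / cut_time

print(f"cut time: {cut_time:.2e} s")
print(f"center-of-mass speed: {com_speed:.2e} m/s ≈ {com_speed / C:.0f} c")
```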

Here is another option. Perhaps your friend’s misfortunes are yours precisely when a ray of light from the misfortune could have illuminated you, i.e., precisely when you are at the surface of the future lightcone centered on some portion of the misfortune. There is something a bit wacky about this: misfortune propagates just as idealized light (not taking into account collisions with matter) would. In particular, this means that misfortune is subject to gravitational lensing. That seems really weird.

All of the above seems like it’s barking up the wrong tree. Here is a suggestion. While some aspects of wellbeing or illbeing can be temporally localized—pains, for instance—others cannot be. Having a rich and varied life is not temporally localized. Perhaps the contribution to your illbeing from the misfortune of your friends is similarly not temporally localized in your life. It’s just a negative in your life as a whole.

But I am not very happy with that, either. For it seems that if your friend is in pain, and then is no longer in pain, there is some change in the wellbeing of your life.

I don’t know.

Independence axiom

Here is an argument for the von Neumann – Morgenstern axiom of independence.

Consider these axioms for a preference structure on lotteries.

  1. If L ≺ M and K ≺ N, then pL + (1−p)K ≾ pM + (1−p)N.

  2. If M dominates L, then pL + (1−p)N ≾ pM + (1−p)N.

  3. If A ≾ pM + (1−p)N′ for all N′ dominating N, then A ≾ pM + (1−p)N.

  4. If L ≺ M, there is an M′ that dominates L but is such that M′ ≺ M.

  5. If M dominates L, then L ≺ M.

  6. Transitivity and completeness.

  7. 0 ⋅ L + 1 ⋅ N ∼ 1 ⋅ N + 0 ⋅ L ∼ N.

Now suppose that L ≺ M and 0 < p < 1. By (4), let M′ dominate L and be such that M′ ≺ M. Let N′ dominate N, so that N ≺ N′ by (5). Then pL + (1−p)N ≾ pM′ + (1−p)N by (2) and pM′ + (1−p)N ≾ pM + (1−p)N′ by (1), so by transitivity pL + (1−p)N ≾ pM + (1−p)N′. This is true for every N′ that dominates N, so pL + (1−p)N ≾ pM + (1−p)N by (3).

Now suppose that L ≾ M. Let M′ dominate M. Then L ≺ M′ by (5) and transitivity. By the above, pL + (1−p)N ≾ pM′ + (1−p)N. This is true for all M′ dominating M, so by (3) we have pL + (1−p)N ≾ pM + (1−p)N. Hence we have independence for 0 < p < 1. And by (7) we get it for p = 0 and p = 1.

Enough mathematics. Now some philosophy. Can we say something in favor of the axioms? I think so. Axioms (5)–(7) are pretty standard fare. Axioms (3) and (4) are something like continuity axioms for the space of values. (I think axiom (4) actually follows from the other axioms.)

Axioms (1) and (2) are basically variants on independence. That’s where most of the philosophical work happens.

Axiom (2) is pretty plausible: it is a weak domination principle.

That leaves Axiom (1). I am thinking of it as a no-regret posit. For suppose the antecedent of (1) is true but the consequent is false, so by completeness pM + (1−p)N ≺ pL + (1−p)K. Suppose you chose pL + (1−p)K over pM + (1−p)N. Now imagine that the lottery is run in a step-wise fashion. First a coin that has probability p of heads is tossed to decide if the first (heads) or second (tails) option in the two complex lotteries is materialized, and then later M, N, L, K are resolved. If the coin is heads, then you now know you’re going to get L. But L ≺ M, so you regret your choice: it would have been much nicer to have gone for pM + (1−p)N. If the coin is tails, then you’re going to get K. But K ≺ N, so you regret your choice, too: it would have been much nicer to have gone for pM + (1−p)N. So you regret your initial choice no matter how the coin flip goes.
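A toy numerical illustration of that reasoning (my own, with degenerate dollar lotteries standing in for L, M, K, N): whichever way the p-coin lands, the agent who took pL + (1−p)K ends up holding the dispreferred prong:

```python
p = 0.5
# Illustrative degenerate lotteries, identified with their sure dollar payoffs,
# chosen so that L ≺ M and K ≺ N.
L, M, K, N = 10, 20, 0, 5

print("value of pL + (1-p)K:", p * L + (1 - p) * K)   # 5.0
print("value of pM + (1-p)N:", p * M + (1 - p) * N)   # 12.5

# Step-wise resolution: once the p-coin has landed, you are holding one prong.
print("heads: holding L =", L, "rather than M =", M)  # regret
print("tails: holding K =", K, "rather than N =", N)  # regret
```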

Moreover, if there are regrets, there is money to be made. Your opponent can offer to switch you to pM + (1−p)N for a small fee. And you’ll do it. So you have made a choice such that you will pay to undo it. That’s not rational.

So, we have good reason to accept Axiom (1).

This is a fairly convincing argument to me. A pity that the conclusion—the independence axiom—is false.

Thursday, April 13, 2023

Some problems with neglecting small probabilities

While I’ve been very friendly to the idea that tiny probabilities should be neglected, here is a serious difficulty. Suppose what we do is neglect probabilities smaller than some positive ϵ that is much smaller than one. Now suppose someone gives you offer A:

  • With probability 1.1ϵ, get a penny.

  • With probability 0.9ϵ, get a year of torture.

If you neglect probabilities less than ϵ, then you ought to accept A. For you will neglect the year of torture, but not the penny. (This follows both on a simplistic “drop events with probability less than ϵ” reading of “neglect tiny probabilities” and on the more sophisticated version described here.)

But it is absurd to think you should accept an offer where the probability of the positive payoff is only about 20% bigger than that of the negative payoff, while the magnitude of the negative payoff is many orders of magnitude bigger.

Consider, too, that if we think probabilities less than ϵ to be negligible, shouldn’t we by the same token think that differences of probability of 0.2ϵ are negligible as well? Yet that is the difference in probabilities between the penny and the year of torture, and this difference is what makes A allegedly obligatory.

Next, consider this. Let’s say that offer B is as follows:

  • With probability 0.55, get a penny.

  • Otherwise, get a year of torture.

Obviously, this is a terrible deal and you should refuse. But now consider offer Bx for a constant x > 0:

  • With probability x, get offer B.

On the neglect of tiny probabilities account, we get the following oddity. You ought to refuse B, but you ought to accept probability 2ϵ of B. For B2ϵ is equivalent to A. It seems very odd indeed that a tiny probability of a terrible deal could be a good deal!
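A small sketch of the oddity, with an assumed dollar disvalue for the year of torture (−10^6) and a $0.01 penny: under a sharp “ignore branches below ϵ” rule, B comes out badly negative and is refused, while a 2ϵ chance of B, which is exactly offer A, comes out positive:

```python
EPS = 1e-6
PENNY, TORTURE = 0.01, -1_000_000   # illustrative dollar values

def value(branches, threshold=EPS):
    """Naive neglect: drop branches whose probability is below the threshold."""
    return sum(p * v for p, v in branches if p >= threshold)

B = [(0.55, PENNY), (0.45, TORTURE)]              # a terrible deal
A = [(1.1 * EPS, PENNY), (0.9 * EPS, TORTURE)]    # = a 2*EPS chance of B

print("value of B under neglect:", value(B))              # hugely negative: refuse
print("value of A (= B with probability 2ϵ):", value(A))  # small but positive: accept?!
```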

It may be that the above problems can be solved by a more careful tweaking of the utility calculations, so that you don’t just sharply cut off the probabilities, but attenuate them continuously to zero.

But there is a final problem that cannot be solved in such technical way. For any reasonable neglect of small probabilities account on which probabilities less than ϵ are completely neglected and probabilities bigger than ϵ are not completely neglected will admit a case where C is a deal to be refused but there is a probability x of C, for a certain 0 < x < 1, that is to be accepted. For instance, suppose C is as follows:

  • With probability 4ϵ, get X.

  • With probability 2ϵ, pay Y.

(I am assuming that ϵ < 1/4. If we neglect a 1/4 chance, then we’re crazy.) Whatever the attenuation factors on probabilities are, we can choose positive amounts X and Y such that C is a bad deal and to be refused. But now let C1/3 be a 1/3 chance of C. For concreteness, suppose a die is rolled and you get C if the die shows 1 or 2. Then C1/3 has this profile:

  • With probability (4/3)ϵ, get X.

  • With probability (2/3)ϵ, pay Y.

The second option will be neglected. The first one may be attenuated, but not to zero, and so C1/3 is guaranteed to have some small but positive value δ > 0. Now consider a final deal D:

  • Pay δ/2 to get C1/3.

You ought to go for D on the account we are considering, since the value of C1/3 is δ. But now imagine you’ve gone for D. Now the die is rolled to see if you will get C. If the die comes up 1 or 2, then you know you will get C. But C is a bad deal, we have agreed. So in that case you will have regrets. But if the die comes up 3, 4, 5 or 6, then you know you will get nothing, but will have paid δ/2, so you will also have regrets. So no matter what, you will have regrets.

Basically, we have here a violation of a decision-theoretic version of conglomerability. I expect this isn’t really new, because a variant of the regret argument can be applied to any decision procedure that violates independence given some reasonable assumptions.

I think it may be worth biting the bullet on the regret argument.

Barn facades and random numbers

Suppose we have a long street with building slots officially numbered 0-999, but with the numbers not posted. At numbers 990–994 and 996–999, we have barn facades with no barn behind them. At all the other numbers, we have normal barns. You know all these facts.

I will assume that the barns are sufficiently widely spaced that you can’t tell by looking around where you are on the street.

Suppose you find yourself at #5 and judge that you are in front of a barn. Intuitively, you know you are in front of a barn. But if you find yourself at #995 and judge that you are in front of a barn, you are right, but you don’t know it, as you are surrounded by mere barn facades nearby.

At least that’s the initial intuition (it’s a “safety” intuition in epistemology parlance). But note first that this intuition is based on an unstated assumption, that the buildings are numbered in order. Suppose, instead, that the building numbers were allocated by someone suffering from a numeral reversal disorder, so that, from east to west, the slots are:

  • 000, 100, 200, …, 900, 010, 110, 210, …, 999.

Then when you are at #995, your immediate neighborhood looks like:

  • 595, 695, 795, 895, 995, 006, 106, 206, 306.

And all these are perfectly normal barns. So it seems you know.
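A quick sketch to check the neighborhood claim: lay the official numbers out east to west in order of their reversed three-digit numerals and look at who sits next to #995:

```python
def reversed_value(n):
    """Read the zero-padded three-digit numeral of n backwards."""
    return int(f"{n:03d}"[::-1])

# East-to-west layout: official numbers sorted by their reversed numerals.
layout = sorted(range(1000), key=reversed_value)
print([f"{n:03d}" for n in layout[:12]])   # 000, 100, 200, ..., 900, 010, 110

pos = layout.index(995)
neighbors = [f"{n:03d}" for n in layout[pos - 4: pos + 5]]
print(neighbors)   # ['595', '695', '795', '895', '995', '006', '106', '206', '306']

facades = set(range(990, 995)) | set(range(996, 1000))
print(any(int(n) in facades for n in neighbors))   # False: every neighbor is a real barn
```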

But why should knowledge depend on geometry? Why should it matter whether the numerals are apportioned east to west in standard order, or in the order going with the least-significant-digit-first reinterpretation?

Perhaps the intuition here is that when you are at a given number, you could “easily have been” a few buildings to the east or to the west, while it would have been “harder” for you to have been at one of the further away numbers. Thus, it matters whether you are geometrically surrounded by mere barn facades or not.

Let’s assume from now on that the buildings are arranged east to west in standard order: 000, 001, 002, …, 999, and you are at #995.

But how did you get there? Here is one possibility. A random number was uniformly chosen between 0 and 999, hidden from you, and you were randomly teleported to that number. In this case, is there a sense in which it was “easy” for you to have been assigned a neighboring number (say, #994)? That depends on details of the random selection. Here are four cases:

  1. A spinner with a thousand slots was spun.

  2. A ten-sided die (sides numbered 0-9) was rolled thrice, generating the digits in order from left to right.

  3. The same as the previous, except the digits were generated in order from right to left.

  4. A computer picked the random number by first accessing a source of randomness, such as the time, to the millisecond, at which the program was started (or timings of keystrokes or fine details of mouse movements). Then a mathematical transformation was applied to the initial random number, to generate a sequence of cryptographically secure pseudorandom numbers whose relationship to the initial source of randomness is quite complex, eventually yielding the selected number. The mathematical transformations are so designed that one cannot assume that when the inputs are close to each other, the outputs are as well.

In case 1, it is intuitively true that if you landed at #995, you could “easily have been” at 994 or 996, since a small perturbation in the input conditions (starting position of spinner and force applied) would have resulted in a small change in the output.

In case 2, you could “easily have been” at 990-994 or 996-999 instead of 995, since all of these would have simply required the last die roll to have been different. In case 3, it is tempting to say that you could easily have been at these neighboring numbers since that would have simply required the first die roll to have been different. But actually I think cases 2 and 3 are further apart than they initially seem. If the first die roll came out differently, likely rolls two and three would have been different as well. Why? Well, die rolls are sensitive to initial conditions (the height from which the die is dropped, the force with which it is thrown, the spin imparted, the initial position, etc.). If the initial conditions for the first roll were different for some reason, it is very likely that this would have disturbed the initial conditions for the second roll. And getting a different result for the first roll would have affected the roller’s psychological state, and that psychological state feeds in a complex way into the way they will do the second and third rolls. So in case 3, I don’t think we can say that you could “easily” have ended up at a neighboring number. That would have required the first die roll to be different, and then, likely, you would have ended up quite far off.

Finally, in case 4, a good pseudorandom number generator is so designed that the relationship between the initial source of randomness and the outputs is sufficiently messy that a slight change in the inputs is apt to lead to a large change in the outputs, so it is false that you could easily have ended up at a neighboring number—intuitively, had things been different, you wouldn’t have been any more likely to end up at 994 or 996 than at 123 or 378.

I think at this point we can’t hold on to the initial intuition, that at #995 you don’t know you’re at a barn but at #5 you would have known, without further qualifications about how you ended up where you are. Maybe if you ended up at #995 via the spinner and the left-to-right die rolls, you don’t know, but if you ended up there via the right-to-left die rolls or the cryptographically secure pseudorandom number generator, then there is no relevant difference between #995 and #5.

At this point, I think, the initial intuition should start getting destabilized. There is something rather counterintuitive about the idea that the details of the random number generation matter. Does it really matter for knowledge whether the building number you were transported to was generated right-to-left or left-to-right by die rolls?

Why not just say that you know in all the cases? In all the cases, you engage in simple statistical reasoning: of the 1000 buildings, 991 are fronts of real barns and only nine are mere facades, and it’s random which one is in front of you, so it is reasonable to think that you are in front of a real barn. Why should the neighboring buildings matter at all?

Perhaps it is this. In your reasoning, you are assuming you’re not in the 990-999 neighborhood. For if you realized you were in that neighborhood, you wouldn’t conclude you’re in front of a barn. But this response seems off-base for two reasons. First, by the same token you could say that when you are at #5, you are assuming you’re not in front of any of the buildings from the following set: {990, 991, 992, 993, 994, 5, 996, 997, 998, 999}. For if you realized you were in front of a building from that set, you wouldn’t have thought you are in front of a barn. But that’s silly. Second, you aren’t assuming that you’re not in the 990-999 neighborhood. For if you were assuming that, then your confidence that you’re in front of a real barn would have been the same as your confidence that you’re not in the 990-999 neighborhood, namely 0.990. But in fact, your confidence that you’re in front of a real barn is slightly higher than that, it is 0.991. For your confidence that you’re in front of a real barn takes into account the possibility that you are at #995, and hence that you are in the 990-999 neighborhood.

Wednesday, April 12, 2023

Generating qualitative probabilities

A (partial) qualitative probability is a relation ≾ defined on an algebra of sets and satisfying the following axioms:

  1. Preorder: ≾ is reflexive and transitive

  2. Zero: ⌀ ≾ A

  3. Additivity: if A ∪ B and C are disjoint, then A ≾ B if and only if A ∪ C ≾ B ∪ C.

It’s just occurred to me that there is a nice way to construct qualitative probabilities out of a family of finitely additive probabilities. Fix an algebra F of subsets of Ω. Let Q be a non-empty set of finitely additive probability functions on F taking values in totally ordered fields. Then say that A ≾Q B if and only if p(A) ≤ p(B) for all p in Q. This clearly satisfies the three axioms above.
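Here is a small sketch of the construction over a three-point Ω, with two real-valued probability assignments standing in for Q (my own toy example), brute-force checking the preorder, zero, and additivity axioms and showing that the resulting ≾Q is merely partial:

```python
from itertools import chain, combinations

OMEGA = (0, 1, 2)
SETS = [frozenset(s) for s in chain.from_iterable(combinations(OMEGA, k) for k in range(4))]

# Q: two probability functions on the powerset of OMEGA, given by point masses.
Q = [{0: 0.5, 1: 0.3, 2: 0.2},
     {0: 0.2, 1: 0.2, 2: 0.6}]

def prob(p, A):
    return sum(p[x] for x in A)

def leq(A, B):   # A ≾_Q B iff p(A) <= p(B) for every p in Q
    return all(prob(p, A) <= prob(p, B) + 1e-12 for p in Q)

# Preorder: reflexivity and transitivity.
assert all(leq(A, A) for A in SETS)
assert all(leq(A, C) for A in SETS for B in SETS for C in SETS if leq(A, B) and leq(B, C))
# Zero: the empty set is below everything.
assert all(leq(frozenset(), A) for A in SETS)
# Additivity: if A ∪ B is disjoint from C, then A ≾ B iff A ∪ C ≾ B ∪ C.
assert all(leq(A, B) == leq(A | C, B | C)
           for A in SETS for B in SETS for C in SETS if not (A | B) & C)
# The order is merely partial: {0} and {2} are incomparable for this Q.
print(leq({0}, {2}), leq({2}, {0}))   # False False
```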

This may seem like a major extension of the concept of a total qualitative probability ≾p generated by a single probability function p, but it’s not as much of an extension as it may seem.

Remark 1: Not every qualitative probability, not even every total qualitative probability, can be constructed as ≾Q for some Q.

At least one 1959 Kraft, Pratt and Seidenberg counterexample to the thesis that every total qualitative probability is generated by a single real-valued probability function trivially extends to show this.

Now let ≺ be the strict order generated by ≾ (and similarly with subscripts): A ≺ B iff A ≾ B but not B ≾ A.

Theorem 1: Assume Choice. Given a non-empty set Q of finitely additive probability functions on F there is a probability function p taking values in some ordered field such that ≾p extends ≾Q and ≺p extends ≺Q.

Sketch of proof: All the ordered fields that the members of Q take values in can be embedded in the surreals. There will be a set-sized field that contains the ranges of all the embeddings. So we can assume that all the members of Q take values in a single field. The rest follows by an ultrafilter construction. Let H be the set of non-empty finite subsets of Q ordered by inclusion. Given K in H, let pK be the average of the members of K. Then let p be the ultraproduct of the pK with respect to some ultrafilter on H containing, for each K in H, the set {K′ ∈ H : K ⊆ K′}. Verifying finite additivity and the fact that ≾p extends ≾Q is trivial. Verifying that ≺p extends ≺Q is only slightly harder. Suppose A ≺Q B. Then A ≾Q B. For some q in Q we must have q(A) < q(B). Then for any K in H containing q, we have pK(A) < pK(B), and so p(A) < p(B).

Theorem 2: Assume Choice. Given a non-empty set Q of finitely additive real-valued probability functions on F there is a real-valued probability function p such that ≾p extends ≾Q.

Sketch of proof: Let H be the set of non-empty finite subsets of Q ordered by inclusion. For K in H, let pK be the average of the members of K. By compactness, the net of the pK has a limit point p.

Remark 2: One cannot require that ≺p extend ≺Q in Theorem 2. For let Ω have cardinality greater than the continuum. Then there is no regular real-valued finitely additive probability on the powerset of Ω (a probability is regular if P(A) > 0 for every non-empty A), since if there were, then, fixing a total ordering < of Ω, the sets Ωz = {x ∈ Ω : x < z} would all have different probabilities, and so the probability would have more values than the continuum. Let Q be all finitely additive real-valued probabilities on the powerset of Ω. Then ⌀ ≺Q A for any non-empty A (since 0 < q(A) for q concentrated on some point of A). But if we had ⌀ ≺p A for every non-empty A, then p would be regular. I am not sure what to say if Ω has continuum cardinality.

Tuesday, April 11, 2023

Is St Petersburg really a paradox of infinity?

In the St Petersburg game, you keep on tossing a coin until you get heads, and you get a payoff of 2^n units (e.g., 2^n days of fun) if you tossed n tails. Your expected payoff is:

  • (1/2) ⋅ 1 + (1/4) ⋅ 2 + (1/8) ⋅ 4 + ⋯ = ∞.

This infinite payoff leads to a variety of paradoxes (e.g., this).

But note that the infinite payoff is by itself paradoxical. If the expected payoff is infinite, it seems that it’s worth being tortured for a decade by the most effective torturers in the world for the sake of playing the game. And yet this is paradoxical!

However, the paradox here is not actually a paradox of infinity. For there will be some cut-off version of the game—a version where you get to toss the coin some predetermined finite number of times and if you don’t get heads then you get nothing—where the value of the cut-off game exceeds the disvalue of the decade of torture. And it’s even more paradoxical to think that in the cut-off game it makes sense to pay that much to play, since the cut-off game is dominated by the infinite game.
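A tiny sketch of the arithmetic: the version of the game cut off after N tosses has expected payoff N/2 units, so for any finite disvalue D (in the same units, an assumption of mine below) assigned to the decade of torture, some finite cutoff already beats it:

```python
def cutoff_value(n_tosses):
    """Expected payoff of the game truncated at n_tosses tosses: you get 2**n
    if the first heads comes on toss n + 1 (n = 0 .. n_tosses - 1), else nothing."""
    return sum((2 ** n) / (2 ** (n + 1)) for n in range(n_tosses))

print(cutoff_value(10))   # 5.0 -- each term contributes exactly 1/2, so the value is n_tosses / 2

# Assumed disvalue of a decade of torture: 3650 days, each as bad as 100 days of fun.
D = 3650 * 100
n_needed = 2 * D + 2      # any cutoff longer than 2 * D tosses has value greater than D
print(n_needed / 2 > D)   # True
```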

This line of thought supports Monton’s thesis that we should neglect small probabilities.

Wednesday, April 5, 2023

Naturalism and perdurance

According to one version of naturalism, the only objects that have causal influence are objects posited by a completed science.

According to perdurantism, changing persisting objects have temporal parts which have the changing properties more fundamentally.

The changing properties of objects are causally efficacious. Thus, if perdurantism is true, the temporal parts have causal influence.

But the temporal parts of changing persisting objects are not among the objects posited by our current science. For instance, our current physics holds electrons and quarks to be fundamental, i.e., not made up of other objects studied by physics. The temporal parts of electrons and quarks are thus not studied by physics. Yet they have causal influence. If the completed physics is relevantly similar to our current physics in this regard, it won’t include temporal parts, either. And hence perdurantism posits causally efficacious entities that are unlikely to be posited by a complete science.

Thus perdurantism does not appear to fit with naturalism understood as above.

Tuesday, April 4, 2023

The vagueness of a prioricity

Let L be the following property of a positive integer:

  • being large or being greater than the number of carbon atoms in a water molecule.

Necessarily, every positive integer has L. But notice that 1 has L a posteriori, while 10^100 has L a priori. For 1 is not large, and it has L because it is greater than the number of carbon atoms in a water molecule, but the latter is an a posteriori fact. On the other hand, it’s a priori that 10^100 is large.

If n is a positive number such that it is vague whether it is large (e.g., maybe n = 50), it will be vague whether the fact that n has L is a priori or a posteriori. For the largeness of a number is, I assume, an a priori matter, and so it will be vaguely true that it is a priori true that n is large, and hence that n has L.

Hyperintensional vagueness

“Water” and “H2O” don’t mean the same thing in ordinary English: it is not a priori that water is H2O. But I suspect that when a chemist uses the word “water” in the right kind of professional context, they use it synonymously with “H2O”. Suppose this is right. But what if the chemist uses the word with fellow chemists in an “ordinary” way, telling a colleague that the tea water has boiled?

Here is a possibility: we then have a case of merely hyperintensional vagueness. In cases of merely hyperintensional vagueness, there is vagueness as to what an utterance means, but this vagueness has no effect on truth value.

I suspect that hyperintensional vagueness is a common phenomenon. Likely some people use “triangle” to mean a polygon with three angles (as the etymology indicates) and some use it to mean a polygon with three sides. (We can capture the difference by noting that to the latter group it is trivial that triangles have three sides while for the former it is a not entirely trivial theorem.) But consider a child who inherits the word “triangle” from two parents, one of whom uses it in the angle way and the other in the side way. This is surely not an unusual phenomenon: much of the semantics of our language is inherited from users around us, and these users often have hyperintensional (or worse!) differences in meaning.