Thursday, June 26, 2025

A failed Deep Thought

I was going to post the following as Deep Thoughts XLIII, in a series of posts meant to be largely tautologous or at least trivial statements:

  1. Everyone older than you was once your age.

And then I realized that this is not actually a tautology. It might not even be true.

Suppose time is discrete in an Aristotelian way, so that the intervals between successive times are not always the same. Basically, the idea is that times are aligned with the endpoints of change, and these can happen at all sorts of seemingly random times, rather than at multiples of some interval. But in that case, (1) is likely false. For it is unlikely that the random-length intervals of time in someone else’s life are so coordinated with yours that the exact length of time that you have lived equals the sum of the lengths of intervals from the beginning to some point in the life of a specific other person.

Of course, on any version of the Aristotelian theory that fits with our observations, the intervals between times are very short, and so everyone older than you was once approximately your age.

One might try to replace (1) by:

  2. Everyone older than you was once younger than you are now.

But while (2) is nearly certainly true, it is still not a tautology. For if Alice has lived forever, then she’s older than you, but she was never younger than you are now! And while there probably are no individuals who are infinitely old (God is timelessly eternal), this fact is far from trivial.

Tuesday, June 24, 2025

Punishment, causation and time

I want to argue for this thesis:

  1. For a punishment P for a fault F to be right, F must stand in a causal-like relation to P.

What is a causal-like relation? Well, causation is a causal-like relation. But there is probably one other causal-like relation, namely when because of the occurrence of a contingent event E, God knows that E occurred, and this knowledge in turn explains why God did something. This is not exactly causation, because God is not causally affected by anything, but it is very much like causation. If you don’t agree, then just remove the “like” from (1).

Thesis (1) helps explain what is wrong with punishing people on purely statistical grounds, such as sending a traffic ticket to Smith on the grounds that Smith has driven 30,000 miles in the last five years and anyone who drove that amount must have committed a traffic offense.

Are there other arguments for (1)? I think so. Consider forward-looking punishment, where by knowing someone’s present character you know that they will commit some crime in ten days, so you punish them now (I assume that they will commit the crime even if you do not punish them). Or, even more oddly, consider circular forward-looking punishment. Suppose Alice has such a character that it is known that if we jail her, she will escape from jail. But assume that in our society an escape from jail is itself a crime punishable by jail, and that Alice is not currently guilty of anything. We then jail her, on the grounds that she will escape from jail, for which escape the punishment is our jailing her now.

One may try to rule out the forward-looking cases on the grounds that instead of (1) we should hold:

  2. For a punishment P for a fault F to be right, P must come after F.

But that’s not right. Simultaneous causation seems possible, and it does not seem unjust to set up a system where a shoplifter feels punitive pain at the very moment of the shoplifting, as long as the pain is caused by the shoplifting.

Or consider this kind of a case. You know that Bob will commit a crime in ten days, so you set up an automated system that will punish him at a preset future date. It does not seem to be of much significance whether the system is set to go off in nine or eleven days.

Or consider cases where Special Relativity is involved, and the punishment occurs at a location distant from the criminal. For instance, Carl, born on Earth, could be sentenced to public infamy on Earth for a crime he commits around Alpha Centauri. Suppose that we have prior knowledge that he will commit the crime on such and such a date. If (2) is the right principle, when should we make him infamous on Earth? Presumably after the crime. But in what reference frame? That seems a silly question. It is silly, because (2) isn’t the right principle—(1) is better.

Objection: One cannot predict what someone will freely do.

Response: One perhaps cannot predict with 100% certainty what someone will freely do, but punishment does not require 100% certainty.

Friday, June 20, 2025

Punishment, reward and theistic natural law

I’ve always found punishment and (to a lesser extent) reward puzzling. Why is it that when someone does something wrong there is moral reason to impose a harsh treatment on them, and why is it that when someone does something right—and especially something supererogatory—there is moral reason to do something nice for them?

Of course, it’s easy to explain why it’s good for our species that there be a practice of reward and punishment: such a practice in obvious ways helps to maintain a cooperative society. But what makes it morally appropriate to impose a sacrifice on the individual for the good of the species in this way, whether the sacrifice falls on the person receiving the punishment or on the person giving the reward when the reward has a cost?

Punishment and reward thus fit into a schema where we would like to be able to make use of this argument form:

  1. It would be good (respectively, bad) for humans if moral fact F did (did not) obtain.

  2. Thus, probably, moral fact F does obtain.

(The argument form is better on the parenthetical negative version.) It would be bad for humans if we did not have distinctive moral reasons to reward and punish, since our cooperative society would be more liable to fall apart due to cheating, freeriding and neglect of others. So we have such moral reasons.

As I have said on a number of occasions, we want a metaethics on which this is a good argument. Rule-utilitarianism is such a metaethics. So is Adams’ divine command theory with a loving God. And so is theistic natural law, where God chooses which natures to exemplify because of the good features in these natures. I want to say something about this last option in our case, and why it is superior to the others.

Human nature encodes what is right and wrong for us. Thus, it can encode that it is right for us to punish and reward. An answer as to why it’s right for us to reward and punish, then, is that God wanted to make cooperative creatures, and chose a nature of cooperative creatures that have moral reasons to punish and reward, since that improves the cooperation.

But there is a way that the theistic natural law solution stands out from the others: it can incorporate Boethius’ insight that it is intrinsically bad for one to get away unpunished with wrongdoing. For our nature not only encodes what is right and wrong for us to do, but also what is good or bad for us. And so it can encode that it is bad for us to get away unpunished. It is good for us that it be bad for us to get away unpunished, since its being bad for us to get away unpunished means that we have additional reason to avoid wrongdoing—if we do wrong, we either get punished or we get away unpunished, and both options are bad for us.

The rule-utilitarian and divine-command options only explain what is right and wrong, not what is good and bad, and so they don’t give us Boethius’ insight.

Thursday, June 5, 2025

What is an existential quantifier?

What is an existential quantifier?

The inferentialist answer is that an existential quantifier is any symbol that has the syntactic features of a one-place quantifier and obeys the logical rules of an existential quantifier (we can precisely specify both the syntax and logic, of course). Since Carnap, we’ve had good reason to reject this answer (see, e.g., here).

Here is a modified suggestion. Consider all possible symbols that have the syntactic features of a one-place quantifier and obey the rules of an existential quantifier. Now say that a symbol is an existential quantifier provided that it is a symbol among these that maximizes naturalness, in the David Lewis sense of “naturalness”.

Moreover, this provides the quantifier variantist or pluralist (who thinks there are multiple existential quantifiers, none of them being the existential quantifier) with an answer to a thorny problem: Why not simply disjoin all the existential quantifiers to make a truly unrestricted existential quantifier, and say that that is the existential quantifier? The quantifier variantist can say: Go ahead and disjoin them, but a disjunction of quantifiers is less natural than its disjuncts and hence isn’t an existential quantifier.

This account also allows for quantifier variance, the possibility that there is more than one existential quantifier, as long as none of these existential quantifiers is more natural than any other. But it also fits with quantifier invariance as long as there is a unique maximizer of naturalness.

Until today, I thought that the problem of characterizing existential quantifiers was insoluble for a quantifier variantist. I was mistaken.

It is tempting to take the above to say something deep about the nature of an existential quantifier, and maybe even the nature of being. But I think it doesn’t quite. We have a characterization of existential quantifiers among all possible symbols, but this characterization doesn’t really tell us what they mean, just how they behave.

Tuesday, June 3, 2025

Combining epistemic utilities

Suppose that the right way to combine epistemic utilities or scores across individuals is averaging, and I am an epistemic act expected-utility utilitarian—I act for the sake of expected overall epistemic utility. Now suppose I am considering two different hypotheses:

  • Many: There are many epistemic agents (e.g., because I live in a multiverse).

  • Few: There are few epistemic agents (e.g., because I live in a relatively small universe).

If Many is true, given averaging my credence makes very little difference to overall epistemic utility. On Few, my credence makes much more of a difference to overall epistemic utility. So I should have a high credence for Few. For while a high credence for Few will have an unfortunate impact on overall epistemic utility if Many is true, because the impact of my credence on overall epistemic utility will be small on Many, I can largely ignore the Many hypothesis.

In other words, given epistemic act utilitarianism and averaging as a way of combining epistemic utilities, we get a strong epistemic preference for hypotheses with fewer agents. (One can make this precise with strictly proper scoring rules.) This is weird, and does not match any of the standard methods (self-sampling, self-indication, etc.) for accounting for self-locating evidence.
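As a rough numerical illustration (the agent counts and the even prior here are my own illustrative choices, and I use the Brier accuracy score as the strictly proper rule), one can solve for the credence in Few that maximizes expected average epistemic utility:

```python
# Sketch: with averaged epistemic utilities, only the weight of my own
# score (1 / number-of-agents) depends on which hypothesis is true, so I
# maximize expected overall utility by tuning my credence c in Few.
# Brier accuracy score: s(c, true) = 1-(1-c)^2, s(c, false) = 1-c^2.

def optimal_credence_in_few(p, n_few, n_many):
    """Credence c in Few maximizing the part of expected average
    utility my credence controls:
        p*(1/n_few)*(1-(1-c)**2) + (1-p)*(1/n_many)*(1-c**2).
    Setting the derivative to zero gives this closed form."""
    return p * n_many / (p * n_many + (1 - p) * n_few)

# With an even prior over the hypotheses, but a million agents on Many
# versus ten on Few, the "optimal" credence in Few is nearly 1:
print(optimal_credence_in_few(0.5, 10, 10**6))  # ≈ 0.99999
```

The point is that the prior p barely matters once the agent counts diverge: the Many term is damped by its huge denominator, so the maximizer is dragged toward certainty in Few.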

(I should note that I once thought I had a serious objection to the above argument, but I can't remember what it was.)

Here’s another argument against averaging epistemic utilities. It is a live hypothesis that there are infinitely many people. But on averaging, my epistemic utility makes no difference to overall epistemic utility. So I might as well believe anything on that hypothesis.

One might toy with another option. Instead of averaging epistemic utilities, we could average credences across agents, and then calculate the overall epistemic utility by applying a proper scoring rule to the average credence. This has a different problematic result. Given that there are at least billions of agents, for any of the standard scoring rules, as long as the average credence of agents other than you is neither very near zero nor very near one, your own credence’s contribution to overall score will be approximately linear. But it’s not hard to see that then to maximize expected overall epistemic utility, you will typically make your credence extreme, which isn’t right.
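The near-linearity can be seen numerically (the agent counts and priors below are my own illustrative choices): grid-search the credence that maximizes the expected Brier accuracy of the averaged credence.

```python
# Sketch: credences are averaged across agents, then a Brier accuracy
# score 1 - (truth - average)^2 is applied to the average. With a billion
# other agents at average credence a, my credence c moves the average by
# only (c - a)/(n+1), so expected score is approximately linear in c and
# the maximizer sits at an extreme.

def best_credence(p, others_avg, n_others):
    """Grid-search the credence c maximizing expected accuracy of the
    group-average credence, given prior p that the proposition is true."""
    def expected_score(c):
        avg = (n_others * others_avg + c) / (n_others + 1)
        return p * (1 - (1 - avg) ** 2) + (1 - p) * (1 - avg ** 2)
    grid = [i / 100 for i in range(101)]
    return max(grid, key=expected_score)

# Alone, the proper scoring rule recommends credence = prior (0.6).
# Embedded among a billion agents averaging 0.5, the "optimum" is 1.0:
print(best_credence(0.6, 0.0, 0))       # 0.6
print(best_credence(0.6, 0.5, 10**9))   # 1.0
```

Whenever the others' average sits below (above) your prior, the locally linear score pushes your optimal credence all the way to 1 (to 0), which is the problematic extremism noted above.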

If not averaging, then what? Summing is the main alternative.

Closed time loop

Imagine two scenarios:

  1. An infinitely long life of repetition of a session of meaningful pleasure followed by a memory wipe.

  2. A closed time loop involving one session of the meaningful pleasure followed by a memory wipe.

Scenario (1) involves infinitely many sessions of the meaningful pleasure. This seems better than having only one session as in (2). But subjectively, I have a hard time feeling any preference for (1). In both cases, you have your pleasure, and it’s true that you will have it again.

I suppose this is some evidence that we’re not meant to live in a closed time loop. :-)

Monday, June 2, 2025

Shuffling an infinite deck

Suppose infinitely many blindfolded people, including yourself, are uniformly randomly arranged on positions one meter apart numbered 1, 2, 3, 4, ….

Intuition: The probability that you’re on an even-numbered position is 1/2 and that you’re on a position divisible by four is 1/4.

But then, while asleep, the people are rearranged according to the following rule. The people on each even-numbered position 2n are moved to position 4n. The people on the odd-numbered positions are then shifted leftward as needed to fill up the positions not divisible by 4. Thus, we have the following movements:

  • 1 → 1

  • 2 → 4

  • 3 → 2

  • 4 → 8

  • 5 → 3

  • 6 → 12

  • 7 → 5

  • 8 → 16

  • 9 → 6

  • and so on.

If the initial intuition was correct, then the probability that now you’re on a position that’s divisible by four is 1/2, since you’re now on a position divisible by four if and only if initially you were on a position divisible by two. Thus it seems that now people are no longer uniformly randomly arranged, since for a uniform arrangement you’d expect your probability of being in a position divisible by four to be 1/4.
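The rearrangement can be coded up directly (a small sketch; the closed form for where the k-th odd position goes is my own): even positions 2n go to 4n, and the k-th odd position goes to the k-th positive integer not divisible by 4.

```python
def new_position(n):
    """Where the person initially at position n (1-based) ends up."""
    if n % 2 == 0:           # position 2n moves to position 4n
        return 2 * n
    k = (n + 1) // 2         # n is the k-th odd position
    q, r = divmod(k - 1, 3)  # k-th positive integer not divisible by 4:
    return 4 * q + r + 1     # the pattern 1,2,3, 5,6,7, 9,10,11, ...

# Matches the listed movements: 1→1, 2→4, 3→2, 4→8, 5→3, 6→12, 7→5, ...
print([new_position(n) for n in range(1, 10)])  # [1, 4, 2, 8, 3, 12, 5, 16, 6]

# The map is a bijection: checking an initial segment, every position
# is hit, and no two people land on the same position.
targets = {new_position(n) for n in range(1, 30000)}
print(all(m in targets for m in range(1, 10000)))  # True
```

So the rearrangement is a perfectly good permutation of the positions, even though it doubles the probability of being on a multiple of 4.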

This shows an interesting difference between shuffling a finite and an infinite deck of cards. If you shuffle a finite deck of cards that’s already uniformly distributed, it remains uniformly distributed no matter what algorithm you use to shuffle it, as long as you do so in a content-agnostic way (i.e., you don’t look at the faces of the cards). But if you shuffle an infinite deck of distinct cards that’s uniformly distributed in a content-agnostic way, you can destroy the uniform distribution, for instance by doubling the probability that a specific card is in a position divisible by four.
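For the finite case, the invariance claim is easy to check exhaustively (a toy sketch with a 3-card deck; the particular position-permutation chosen is arbitrary):

```python
from itertools import permutations

# A uniformly shuffled 3-card deck: all 6 orderings equally likely.
deck_states = list(permutations("ABC"))

# A content-agnostic shuffle is a fixed permutation of *positions*,
# e.g. move position 1's card to 0, 2's to 1, and 0's to 2:
def reshuffle(state):
    return (state[1], state[2], state[0])

# Applying it to each equally likely state just permutes the list of
# states, so the distribution over orderings stays uniform:
after = [reshuffle(s) for s in deck_states]
print(sorted(after) == sorted(deck_states))  # True
```

Any fixed permutation of finitely many positions merely relabels the equally likely orderings; it is only with infinitely many positions that a position-permutation can distort the distribution as above.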

I am inclined to take this as evidence that the whole concept of a “uniformly shuffled” infinite deck of cards is confused.

Saturday, May 31, 2025

Four-flour pancakes

I was watching an old Aunt Jemima pancake mix commercial which touted it as being made from four flours: wheat, corn, rye and rice, and I decided to see what pancakes made from them are like. I started with this wheat flour pancake recipe, but tweaked some things, and made them this morning. Pretty good. Perhaps more hearty than standard pancakes, and the texture was a bit more crunchy, which I liked.

  • 1/2 cup of wheat flour

  • 1/2 cup of whole-grain rye flour

  • 1/2 cup of corn flour

  • 1/2 cup of (non-glutinous) rice flour

  • 4 3/4 teaspoons baking powder

  • 4 teaspoons white sugar

  • 1/3 teaspoon salt

  • 1 2/3 cup milk

  • 4 tablespoons melted butter

  • 1 large egg

  • 4 teaspoons apple sauce (or skip and use 1 1/3 egg, if you have some use for the remaining 2/3 of the egg)

  • cooking spray (I used canola spray)

  • optional: chocolate chips

Mix dry ingredients. Add wet ingredients. Mix well. Heat pan to medium heat. Spray with oil. Put a big serving spoon of mix on the pan. If you want to add chocolate chips, drop them in on top. Wait until the edges are getting dry. (It was surprisingly fast, about 1-2 minutes, and they would burn easily when I wasn’t fast enough.) Flip and brown the other side (again, it’s fast).



Yields 9-10 not very large pancakes. The frying took half an hour with two pans in simultaneous use. I measured out all the ingredients the night before and pre-mixed the dry ingredients so I could be fast in the morning before a pickleball game.

Friday, May 30, 2025

The value of moral norms

Here is a very odd question that occurred to me: Is it good for there to be moral norms?

Imagine a world just like this one, except that there are no moral norms for its intelligent denizens—but nonetheless they behave as we do. They feel repelled by the idea of murder and torture, and find the life of a Mother Teresa attractive, but there are no moral truths behind these things.

Such a world would have one great advantage over ours: there would be no moral evil. That world’s Hitler and Stalin would cause just as much pain and suffering, but they wouldn’t be wicked in so doing. Given the Socratic insight that it is worse to do than to suffer evil, a vast amount of evil would disappear in such a world. At least a third of the evil in the world would be gone. Our world has three categories of evil:

  I. Undergoing of natural evils,

  II. Undergoing of moral evils, and

  III. Performance of moral evils.

The third category would be gone, and it is probably the biggest of the three. Wouldn’t that be worth it?

Here is one answer. For cooperative intelligent social animals, a belief in morality is very useful. But to live one’s life by a belief that is false seems a significant harm. Cooperative intelligent social animals in the alternative world would be constantly deceived by their belief in morality. That is a great evil. But is it as great an evil as all Category III evils taken together? I suspect it is but a small fraction of the sum of all Category III evils.

Here is a second answer. In removing moral norms, one would admittedly remove a vast category of evils, but also a vast category of goods: the performance of moral good. If we have the intuition that having moral norms is a good thing—that it would be a disappointment to learn that moral norms were an illusion—then we have to think that the performances of moral good are a very great thing indeed, one comparable to the sum of all Category III evils.

I am attracted to a combination of the two answers. But I can also see someone saying: “It doesn’t matter whether it’s worth having moral norms or not, but it is simply impossible to have cooperative intelligent social animals that believe in morality without their being under moral norms.” A Platonist may say that on the grounds that moral norms are necessary. A theist may say it on the grounds that it is contrary to the character of a perfect God to manufacture the vast deceit that would be involved in us thinking there are moral norms if there were no moral norms. These aren’t bad answers. But I still feel it’s good that there really are moral norms.

Thursday, May 29, 2025

Philosophy and child-raising

Philosophy Departments often try to attract undergraduates by telling them about instrumental benefits of philosophy classes: learning generalizable reading, writing and reasoning skills, doing better on the LSAT, etc.

But here is a very real and much more direct reason why lots of people should take philosophy classes. Most people end up having children. And children ask lots of questions. These questions include philosophical ones. Moreover, as they grow, especially around the teenage years, philosophical questions come to have special existential import: why should I be virtuous, what is the point of life, is there life after death, is there a God, can I be sure of anything?

For children’s scientific questions, there is always Wikipedia. But that won’t be very helpful with the philosophical ones. In a less diverse society, where parents can count on agreeing philosophically with the schools, parents could outsource children’s philosophical questions to a teacher they agree with. Perhaps religious parents can count on such agreement if they send their children to a religious school, but in a public school this is unlikely. (And in any case, outsourcing to schools is still a way of buying into something like universal philosophical education.) So it seems that vast numbers of parents need philosophical education to raise their children well.

Friday, May 23, 2025

Hyperreal infinitesimal probabilities and definability

In order to assign non-zero probabilities to such things as a lottery ticket in an infinite fair lottery or hitting a specific point on a target with a uniformly distributed dart throw, some people have proposed using non-zero infinitesimal probabilities in a hyperreal field. Hajek and Easwaran criticized this on the grounds that we cannot mathematically specify a specific hyperreal field for the infinitesimal probability. If that were right, then if there are hyperreal infinitesimal probabilities for such a situation, nonetheless we would not be able to say what they are. But it’s not quite right: there is a hyperreal field that is "definable", or fully specifiable in the language of ZFC set theory.

However, for the Hajek-Easwaran argument against hyperreal infinitesimal probabilities to work, we don’t need the hyperreal field to be non-definable. All we need is that the pair (*R,α) be non-definable, where *R is a hyperreal field and α is the non-zero infinitesimal assigned to something specific (say, a single ticket or the center of the target).

But here is a fun fact, much of the proof of which comes from some remarks that Michael Nielsen sent me:

Theorem: Assume ZFC is consistent. Then ZFC is consistent with there not being any definable pair (*R,α) where *R is a hyperreal field and α is a non-zero infinitesimal in that field.

[Proof: Solovay showed there is a model of ZFC where every definable set is measurable. But every free ultrafilter on the powerset of the naturals is nonmeasurable. However, an infinite integer in a hyperreal field defines a free ultrafilter on the naturals—given an infinite integer M, say that a subset A of the naturals is a member of the ultrafilter iff |M| ∈ *A. And a non-zero infinitesimal defines an infinite integer—say, as the floor of its reciprocal.]
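The chain of definitions in the bracketed proof can be displayed explicitly (my own formalization of the sketch above): given a non-zero infinitesimal α, form an infinite hyperinteger and from it a free ultrafilter.

```latex
\[
  M = \left\lfloor 1/\alpha \right\rfloor, \qquad
  \mathcal{U} = \{\, A \subseteq \mathbb{N} : |M| \in {}^{*}\!A \,\}.
\]
```

Here 𝒰 is an ultrafilter because for each A exactly one of |M| ∈ *A and |M| ∈ *(ℕ∖A) holds, and it is free because |M| exceeds every standard natural number, so no finite set belongs to 𝒰. Hence a definable pair (*R,α) would yield a definable free ultrafilter, and thus a definable nonmeasurable set, which is impossible in Solovay’s model.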

Given the Theorem, without going beyond ZFC, we cannot count on being able to define a specific hyperreal non-zero infinitesimal probability for situations like a ticket in an infinite lottery or hitting the center of a target. Thus, if a friend of hyperreal infinitesimal probabilities wants to be able to define one, they must go beyond ZFC (ZFC plus the axiom of constructibility will do).

Wednesday, May 21, 2025

Doxastic moral relativism

Reductive doxastic moral relativism is the view that an action type’s being morally wrong is nothing but an individual or society’s belief that the action type is morally wrong.

But this is viciously circular, since we reduce wrongness to a belief about wrongness. Indeed, it now seems that murder is wrong provided that it is believed to be wrong; but the wrongness inside that belief must itself be reduced, so that murder is wrong provided that it is believed that it is believed that it is believed…, ad infinitum.

A non-reductive biconditional moral relativism fares better. This is a theory on which (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if it is believed that it does. Compare this: There is such a property as mass, and necessarily an object has mass if and only if God believes that it has mass.

There is a biconditional-explanatory version. On this theory (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if, and if so then because, it is believed that it does.

While both the biconditional and biconditional-explanatory versions appear logically coherent, I think they are not particularly plausible. If there really is such a property as moral wrongness, and it does not reduce to our beliefs, then it just does not seem particularly plausible to think that it obtains solely because of our beliefs or that it obtains necessarily if and only if we believe it does. The only clear and non-gerrymandered examples we have of properties that obtain solely because of our beliefs or necessarily if and only if we believe they do are properties that reduce to our beliefs.

All this suggests to me that if one wishes to be a relativist, one should base the relativism on a different attitude than belief.

Monday, May 19, 2025

Sacraments and New Testament law

Christians believe that Jesus commanded us to baptize new Christians. However, there is a fundamental division in views: some Christians (such as Catholics and the Orthodox) have a sacramental view of baptism, on which baptism as such leads to an actual supernaturally-produced change in the person baptized, while others hold a symbolic view of it.

Here is an argument for the sacramental view. We learn from Paul that there is a radical change in God’s law from Old to New Testament times. I think our best account of that change is that we are no longer under divinely-commanded ceremonial and symbolic laws, but as we learn from the First Letter of John, we are clearly still under the moral law.

On the symbolic view, however, baptism is precisely a ceremonial and symbolic law—precisely the kind of thing that we are no longer under. On the sacramental view, however, it is easy to explain how baptism falls under the moral law. Love of neighbor morally enjoins on us that we provide effective medical treatment to our neighbor, and love of self requires us to seek such treatment for ourselves. Similarly, if baptism is crucial to the provision of grace for moral healing, then love of neighbor morally enjoins on us that we baptize and love of self requires us to seek baptism for ourselves.

The same kind of argument applies to the Eucharist: since it is commanded by God in New Testament times, it is not merely symbolic.

Wednesday, May 14, 2025

Semantics of syntactically incorrect language

As anyone who has talked with a language-learner knows, syntactically incorrect sentences often succeed in expressing a proposition. This is true even in the case of formal languages.

Formal semantics, say of the Tarski sort, has difficulties with syntactically incorrect sentences. One approach to saving the formal semantics is as follows: Given a syntactically incorrect sentence, we find a contextually appropriate syntactically correct sentence in the vicinity (and what counts as vicinity depends on the pattern of errors made by the language user), and apply the formal semantics to that. For instance, if someone says “The sky are blue”, we replace it with “The sky is blue” in typical contexts and “The skies are blue” in some atypical contexts (e.g., discussion of multiple planets), and then apply formal semantics to that.

Sometimes this is what we actually do when communicating with someone who makes grammatical errors. But typically we don’t bother to translate to a correct sentence: we can just tell what is meant. In fact, in some cases, we might not even ourselves know how to translate to a correct sentence, because the proposition being expressed is such that it is very difficult even for a native speaker to get the grammar right.

There can even be cases where there is no grammatically correct sentence that expresses the exact idea. For instance, English has a simple present and a present continuous, while many other languages have just one present tense. In those languages, we sometimes cannot produce an exact grammatically correct translation of an English sentence. One can use some explicit markers to compensate for the lack of, say, a present continuous, but the semantic value of a sentence using these markers is unlikely to correspond exactly to the meaning of the present continuous (the markers may have a more determinate semantics than the present continuous). But we can imagine a speaker of such a language who imitates the English present continuous by a literal word-by-word translation of “I am” followed by the other language’s closest equivalent to a gerund, even when such translation is grammatically incorrect. In such a case, assuming the listener knows English, the meaning may be grasped, but nobody is capable of expressing the exact meaning in a syntactically correct way. (One might object that one can just express the meaning in English. But that need not be true. The verb in question may be one that does not have a precise equivalent in English.)

Thus we cannot account for the semantics of syntactically incorrect sentences by applying semantics to a syntactically corrected version. We need a semantics that works directly for syntactically incorrect sentences. This suggests that formal semantics are necessarily mere approximate models.

Similar issues, of course, arise with poetry.

Tuesday, May 13, 2025

Truth-value realisms about arithmetic

Arithmetical truth-value realists hold that any proposition in the language of arithmetic has a fully determined truth value. Arithmetical truth-value necessitists add that this truth value is necessary rather than merely contingent. Although we know from the incompleteness theorems that there are alternate non-standard natural number structures, with different truth values (e.g., there is a non-standard natural number structure according to which the Peano Axioms are inconsistent), the realist and necessitist hold that when we engage in arithmetical language, we aren’t talking about these structures. (I am assuming either first-order arithmetic or second-order arithmetic with Henkin semantics.)

Start by assuming arithmetical truth-value necessitism.

There is an interesting decision point for truth-value necessitism about arithmetic: Are these necessary truths twin-earthable? I.e., could there be a world whose denizens talk arithmetically as we do, and function physically as we do, but whose arithmetical sentences express different propositions, with different yet still necessary truth values? This would be akin to a world where instead of water there is XYZ, a world whose denizens would be saying something false if they said “Water has hydrogen in it”.

Here is a theory on which we have twin-earthability. Suppose that the correct semantics of natural number talk works as follows. Our universe has an infinite future sequence of days, and the truth-values of arithmetical language are fixed by requiring the Peano Axioms (or just the Robinson Axioms) together with the thesis that the natural number ordering is order-isomorphic to our universe’s infinite future sequence of days, and then are rigidified by rigid reference to the actual world’s sequence of future days. But in another world—and perhaps even in another universe in our multiverse if we live in a multiverse—the infinite future sequence of days is different (presumably longer!), and hence the denizens of that world end up rigidifying a different future sequence of days to define the truth values of their arithmetical language. Their propositions expressed by arithmetical sentences sometimes have different truth values from ours, but that’s because they are different propositions—and they’re still as necessary as ours. (This kind of a theory will violate causal finitism.)

One may think of a twin-earthable necessitism about arithmetic as a kind of cheaper version of necessitism.

Should a necessitist go cheap and allow for such twin-earthing?

Here is a reason not to. On such a twin-earthable necessitism, there are possible universes for whose denizens the sentence “The Peano Axioms are consistent” expresses a necessary falsehood and there are possible universes for whose denizens the sentence expresses a necessary truth. Now, in fact, pretty much everybody with great confidence thinks that the sentence “The Peano Axioms are consistent” expresses a truth. But it is difficult to hold on to this confidence on twin-earthable necessitism. Why should we think that the universes with non-standard future sequences of days are less likely?

Here is the only way I can think of answering this question. The standard naturals embed into the non-standard naturals. There is a sense in which they are the simplest possible natural number structure. Simplicity is a guide to truth, and so the universes with simpler future sequences of days are more likely.

But this answer does not lead to a stable view. For if we grant that what I just said makes sense—that the simplest future sequences of days are the ones that correspond to the standard naturals—then we have a non-twin-earthable way of fixing the meaning of arithmetical language: assuming S5, we fix it by the shortest possible future sequence of days that can be made to satisfy the requisite axioms by adding appropriate addition and multiplication operations. And this seems a superior way to fix the meaning of arithmetical language, because it better fits with common intuitions about the “absoluteness” of arithmetical language. Thus it provides a better theory than twin-earthable necessitism did.

I think the skepticism-based argument against twin-earthable necessitism about arithmetic also applies to non-necessitist truth-value realism about arithmetic. On non-necessitist truth-value realism, why should we think we are so lucky as to live in a world where the Peano Axioms are consistent?

Putting the above together, I think we get an argument like this:

  1. Twin-earthable truth-value necessitism about arithmetic leads to skepticism about the consistency of arithmetic or is unstable.

  2. Non-necessitist truth-value realism about arithmetic leads to skepticism about the consistency of arithmetic.

  3. Thus, probably, if truth-value realism about arithmetic is true, non-twin-earthable truth-value necessitism about arithmetic is true.

The resulting realist view holds arithmetical truth to be fixed along both dimensions of Chalmers’ two-dimensional semantics.

(In the argument I assumed that there is no tenable way to be a truth-value realist only about Σ⁰₁ claims like “Peano Arithmetic is consistent” while resisting realism about higher levels of the hierarchy. If I am wrong about that, then in the above argument and conclusions “truth-value” should be replaced by “Σ⁰₁-truth-value”.)