Friday, August 8, 2025

Extrinsic well-being and the open future

Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. How well off you were in performing the action depends on whether the action succeeded—which depends on whether E eventuates at t2. But now suppose the future is open. Then in a world with as much indeterminacy as ours, in many cases it will be contingent at t1 whether the event at t2 on which your well-being at t1 depends eventuates. On open future views, at t1 there will then be no fact of the matter about your well-being. But surely there is a fact of the matter about how well off you are at t1. Hence, the future is not open.

Opie: In such cases, your well-being should be located at t2 rather than at t1. If you jump the crevasse, it is only when you land that you have the well-being of success.

Klaus: This does not work as well in cases where you are dead at t2. And yet our well-being does sometimes depend on what happens after we are dead. The action at t1 might be a heroic sacrifice of one’s life to save one’s friends—but whether one is a successful hero or a tragic hero depends on whether the friends will be saved, which may depend on what happens after one is already dead.

Opie: Thanks! You just gave me an argument for an afterlife. In cases like this, you are obviously better off if you manage to save your friends, but you aren’t better off in this life, so there must be life after death.

Klaus: But we also have the intuition that even if there were no afterlife, it would be better to be the successful hero than the tragic hero, and that posthumous fame is better than posthumous infamy.

Opie: There is an afterlife. You’ve convinced me. And moral intuitions about how things would be if our existence had a radically different shape from the one it in fact has are suspect. And, given that there is an afterlife, a scenario without an afterlife is a scenario where our existence has a radically different shape. Thus the intuition you cite is unreliable.

Klaus: That’s a good response. Let me try a different case. Suppose you perform an onerous action with a goal within this life, but then you change your mind about the goal and work to prevent it. This works best if both goals, and the switch between them, are morally acceptable. For instance, you initially worked to help the Niners train to win their baseball game against the Logicians, but then your allegiance shifted to the Logicians in a way that isn't morally questionable. And then suppose the Niners won. Your actions in favor of the Niners are successful, and you have well-being. But it is incorrect to locate that well-being at the time of the actual victory, since at that time you are working for the Logicians, not the Niners. So the well-being must be located at the time of your activity, and at that time it depends on future contingents.

Opie: Perhaps I should say that at the time the Niners beat the Logicians, you are both well-off and badly-off, since one of your past goals is successful and the other is unsuccessful. But I agree that this doesn’t quite seem right. After all, if you are loyal to your current employer, you’re bummed out about the Logicians’ loss and you’re bummed out that you weren’t working for them from the beginning. So intuitively you're just badly off at this time, not both badly and well off. So, I admit, this is a little bit of evidence against open future views.

Consciousness and the open future

Plausibly:

  1. There is a “minimal humanly observable duration” (mhod) such that a human cannot have a conscious state—say, a pain—shorter than an mhod, but can have a conscious state that’s an mhod long.

The “cannot” here expresses nomic rather than metaphysical impossibility.

Let δ denote an mhod. Now, suppose that you feel a pain precisely from t0 to t2. Then t2 ≥ t0 + δ. Now, let t1 = t0 + δ/2. Then you feel a pain at t1. But at t1, you only felt a pain for half an mhod. Thus:

  2. At t1, that you feel pain depends on substantive facts about your mental state at times after t1.

For if your head were suddenly zapped by a giant laser a quarter of an mhod after t1, then you would not have felt a pain at t1, because you would have been in a position to feel pain only from t0 to t0 + (3/4)δ.

But in a universe full of quantum indeterminacy:

  3. These substantive facts are contingent.

After all, your brain could just fail a quarter of an mhod after t1 due to a random quantum event.

But:

  4. Given an open future, at t1 there are no substantive contingent facts about the future.

Thus:

  5. Given an open future, at t1 there is no fact that you are conscious.

Which is absurd!

Tuesday, July 29, 2025

Discrete time and Aristotle's argument for an infinite past

Aristotle had a famous argument that time had no beginning or end. In the case of beginnings, this argument caused immense philosophical suffering in the Middle Ages, since combined with the idea that time requires change it implies that the universe was eternal, contrary to the Jewish, Muslim and Christian doctrine that God created the universe a finite amount of time ago.

The argument is a reductio ad absurdum and can be put for instance like this:

  1. Suppose t0 is the beginning of time.

  2. Before t0 there is no time.

  3. It is a contradiction to talk of what happened before the beginning of time.

  4. But if (1) is true, then (2) talks of what is before the beginning of time.

  5. Contradiction!

It’s pretty easy to see what’s wrong with the argument. Claim (2) should be charitably read as:

  • Not (before t0 there is time).

Seen that way, (2) doesn’t talk about what happened before t0, but is just a denial that there was any such thing as time-before-t0.

It just struck me that a similar argument could be used to establish something that Aristotle himself rejects. Aristotle famously believed that time was discrete. But now argue:

  6. Suppose t0 and t1 are two successive instants of time.

  7. After t0 and before t1 there is no time.

  8. It is a contradiction to talk of what happened when there is no time.

  9. But if (7) is true, then (7) talks of what is when there is no time.

  10. Contradiction!

Again, the problem is the same. We should take (7) to deny that there is any such thing as time-after-t0-and-before-t1.

So Aristotle needed to choose between his preference for the discreteness of time and his argument for an infinite past.

What if there is no tomorrow?

There are two parts of Aristotle’s theory that are hard to fit together.

First, we have Aristotle’s view of future contingents, on which

  1. It is neither true nor false that tomorrow there will be a sea battle

but, of course:

  2. It is true that tomorrow there will be a sea battle or no sea battle.

Of course, nothing rides on “tomorrow” in (1) and (2): any future metric interval of times will do. Thus:

  3. It is true that in 86,400,000 milliseconds there will be a sea battle or not.

(Here I adopt the convention that “in x units” denotes the interval of time corresponding to the displayed number of significant digits in x. Thus, “in 86,400,000 ms” means “at a time between 86,399,999.5 (inclusive) and 86,400,000.5 (exclusive) ms from now.”)

Second, we have Aristotle’s view of time, on which time is infinitely divisible but not infinitely divided. Times correspond to what one might call happenings, the beginnings and ends of processes of change. Now which happenings there will be, and when they will fall with respect to metric time (say, 3.74 seconds after some other happening), is presumably something that is, or can be, contingent.

In particular, in a world full of contingency and with slow-moving processes of change, it is contingent whether there will be a time in 86,400,000 ms. But (3) entails that there will be such a time, since if there is no such time, then it is not true that anything will be the case in 86,400,000 ms, since there will be no such time.

Thus, Aristotle cannot uphold (3) in a world full of contingency and slow processes. Hence, (3) cannot be a matter of temporal logic, and thus neither can (2) be, since logic doesn’t care about the difference between days and milliseconds.

If we want to make the point in our world, we would need units smaller than milliseconds. Maybe Planck times will work.

Objection: Suppose that no moment of time will occur in exactly x1 seconds, because x1 falls between all the endpoints of processes of change. But perhaps we can still say what is happening in x1 seconds. Thus, if there are x0 < x1 < x2 such that x0 seconds from now and x2 seconds from now (imagine all this paragraph being said in one moment!) are both real moments of time, we can say things about what will happen in x1 seconds. If I will be sitting in both x0 and x2 seconds, maybe I can say that I will be sitting in x1 seconds. Similarly, if Themistocles is leading a sea battle in 86,399,999 ms and is leading a sea battle in 86,400,001 ms, then we can say that he is leading a sea battle in 86,400,000 ms, even though there is no moment of time then. And if he won’t lead a sea battle in either 86,399,999 ms or in 86,400,001 ms, neither will he lead one in 86,400,000 ms.

Response: Yes, but (3) is supposed to be true as a matter of logic. And it’s logically possible that Themistocles leads a sea battle in 86,399,999 ms but not in 86,400,001 ms, in which case if there will be no moment in 86,400,000 ms, we cannot meaningfully say if he will be leading a sea battle then or not. So we cannot save (3) as a matter of logic.

A possible solution: Perhaps Aristotle should just replace (2) with:

  4. It is true that it will be: no tomorrow, or tomorrow a sea battle, or tomorrow no sea battle.

I am a bit worried about the "will" attached to a “no tomorrow”. Maybe more on that later.

Monday, July 28, 2025

An attempt to define possible futures for open futurism

On all-false open future (AFOF), future contingent claims are all false. The standard way to define “Will p” is to say that p is true in all possible futures. But defining a possible future is difficult. Patrick Todd does it in terms of possible worlds apparently of the classical sort—ones that have well-defined facts about how things are at all times. But such worlds are not in general possible given open future views—it is not possible to simultaneously have a fact about how contingent events go on all future days (assuming the future is infinite).

Here is an approach that maybe has some hope of working better for open future views. Take as primitive not classical possible worlds, but possible moments, ways that things could be purely at a time. Possible moments do not include facts about the past and future.

Now put a temporal ordering on the possible moments, where we say that m1 is earlier than m2 provided that it is possible to have had m1 obtaining before m2.

For a possible moment m, define:

  • open m-world: a maximal set of possible moments including m such that (a) all moments in the set other than m are earlier or later than m and (b) the subset of moments earlier than m is totally ordered

  • possible history: a maximal totally ordered set of possible moments

  • possible future of m: a possible history that contains m.

Exactly one possible moment is currently actual. Then:

  • possible future: a possible future of the currently actual moment.

Now consider the problem of entailment on AFOF. The problem is this. Intuitively, that I will freely mow my lawn entails that I will mow my lawn, but does not entail that I will eat my lawn. However, since on AFOF “I will freely mow my lawn” is necessarily false—it is false at every possible moment, since “will” claims concerning future contingents are always false—both entailments have necessarily false antecedents and hence are trivially true.

Given a set S of moments and a moment m ∈ S, any sentence of Prior’s (or Brand’s) temporal logic can be evaluated for truth at (S,m). We can now define two modalities:

  • p is OW-necessary: p is true at (W,m) for every open m-world W

  • p is PH-necessary: p is true at (H,m) for every possible history H that contains m.

And now we have two entailments: p OW/PH-entails q if and only if the material conditional p → q is OW/PH-necessary.

Then that I will freely mow my lawn is OW-impossible, but PH-possible, and that I will freely mow my lawn OW-entails that I will eat my lawn, but does not PH-entail it. The open futurist can now say that our intuitive concept of entailment, in temporal contexts, corresponds to PH-entailment rather than OW-entailment.
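
To see the contrast concretely, here is a small computational sketch on a toy branching structure. The three-moment model, the Peircean reading of “will” on branching structures, and all the names in the code are my own illustrative assumptions, not anything from Todd or Prior:

```python
# A toy model of the definitions above (my own illustrative construction).
# Moments: m0 is the present; m1 and m2 are two incompatible possible next
# moments. At m1 I freely mow the lawn; at m2 I do not mow at all.
from itertools import combinations

earlier = {("m0", "m1"), ("m0", "m2")}
val = {"m0": set(), "m1": {"mow", "freely_mow"}, "m2": set()}
moments = set(val)

def lt(a, b):
    return (a, b) in earlier

def maximal_chains(S, m):
    """Maximal totally ordered subsets of S that contain m."""
    def is_chain(c):
        return all(a == b or lt(a, b) or lt(b, a) for a in c for b in c)
    cands = [set(c) for r in range(1, len(S) + 1)
             for c in combinations(sorted(S), r) if m in c and is_chain(c)]
    return [c for c in cands if not any(c < d for d in cands)]

def possible_histories(m):
    # Maximal totally ordered sets of moments containing m.
    return maximal_chains(moments, m)

def open_worlds(m):
    # Maximal sets containing m whose other members are all earlier or later
    # than m, with a totally ordered past of m. In this toy model there is
    # exactly one such set.
    return [{x for x in moments if x == m or lt(x, m) or lt(m, x)}]

def will(p, S, m):
    """'Will p' at (S, m), read Peirceanly (my assumption about how AFOF handles
    'will' on branching structures): p holds at some moment after m on every
    maximal branch of S through m."""
    return all(any(lt(m, x) and p in val[x] for x in branch)
               for branch in maximal_chains(S, m))

def ow_entails(p, q, m):
    # The material conditional Will(p) -> Will(q) holds in every open m-world.
    return all((not will(p, W, m)) or will(q, W, m) for W in open_worlds(m))

def ph_entails(p, q, m):
    # The material conditional holds in every possible history through m.
    return all((not will(p, H, m)) or will(q, H, m) for H in possible_histories(m))

print(ph_entails("freely_mow", "mow", "m0"))       # True: the intuitive entailment
print(ph_entails("freely_mow", "eat_lawn", "m0"))  # False: blocked, as desired
print(ow_entails("freely_mow", "eat_lawn", "m0"))  # True: trivial, since 'Will freely_mow'
                                                   # fails in the lone open m0-world
```

On this toy structure, the trivial OW-entailment and the discriminating PH-entailment come apart exactly as described above.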

I think this is helpful to the open futurist, but still has a serious problem. Consider the sentence “I will mow or I will not-mow.” On AFOF, this is false. But it is true at every possible history. Hence, it is PH-necessary. Thus, PH-necessity does not satisfy the T-axiom. Thus PH-entailment is such that a truth can PH-entail a falsehood. For instance, since “I will mow or I will not-mow” is PH-necessary, it is PH-entailed by every tautology.

On trivalent logics, if "I will mow or I will not-mow" is neither true nor false, we have a similar problem: a truth PH-entails a non-truth.

There is a more technical problem on some metaphysical views. Suppose that it is contingent whether time continues past a certain moment. For instance, suppose there is no God and empty time is impossible, and there is a particle which can indeterministically cease to exist, and the world contains just that particle, so at any time it is possible that that time is the last—the particle can pop out of existence. Oddly, because of the maximality condition on possible histories, there is no possible future where the particle pops out of existence.

I wonder if there is a better way to define entailment and possible futures that works with open future views.

Wednesday, July 23, 2025

Aristotelianism and transformative technology

The Aristotelian picture of us is that like other organisms, we flourish in fulfilling our nature. Our nature specifies the proper way of interacting with the world. We do not expect an organism’s nature to specify proper ways of interacting with scenarios far from its niche: how bats should fly in weightless conditions; how cats should feed in an environment with unlimited food supply; how tardigrades should live on the moon.

But with technology, we have shifted far from the environment we evolved for. While adaptability is a part of our nature, some technological innovations seem to go beyond the adaptability we expect, in that they appear to impact central aspects of the life of the social beings we are: innovations like the city, writing, and fast and widely accessible global communication. We should not expect our nature to specify how we should behave with respect to these new social technologies, and we should be skeptical that it contains sensible answers to questions about how we should behave in these cases.

Thus we appear to have an Aristotelian argument for avoiding the more transformative types of technology, since we are more likely to have meaningful answers to questions about how to lead our lives if our lives are less affected by social transformations. To be on the safe side, we should live in the country, and have most of our social interaction with a relatively small number of neighbors in person.

The theistic Aristotelian, however, has an answer to this. While evolution cannot foresee the Internet, God can, and he can give us a normative nature that specifies how we should adapt to vast changes in the shape of our lives. We do not need to avoid transformative technology in general, though of course we must be careful lest the transformation be for ill.

Friday, July 18, 2025

Optimalism and logical possibility

Optimalism holds that, of metaphysical necessity, the best world is actualized.

There are two ways to understand “the best world”: (1) the best of all metaphysically possible worlds and (2) the best of all (narrowly) logically possible worlds.

If we understand it in sense (1), then, since of metaphysical necessity the best world is actualized, there is only one metaphysically possible world. The best world is then the best out of a class of one, and hence it’s also the worst world in the same class. So on reading (1), optimalism=pessimalism.

So sense (2) seems to be a better choice. But here is an argument against (2). It seems to be an a posteriori truth that I am living life LAP (the life in our world associated with the name “Alexander Pruss”) and that Napoleon is living life LNB (the life in our world associated with the name “Napoleon Bonaparte”). There seems to be a narrowly logically possible world just like this one where I live LNB and Napoleon lives LAP. That world with me and Napoleon swapped is neither better nor worse than this one. Hence our world is not the best one. It is tied or incommensurable with a whole bunch of worlds where the identities of individuals are permuted.

Maybe my identity is logically tied to certain aspects of my life, though? Leibniz certainly thought so—he thought it was tied to all the aspects of my life. But this is a controversial view.

Thursday, July 17, 2025

All-false open futurism

On All-False Open Futurism (AFOF), any future tensed statement about a future contingent must be false. It is false that there will be a sea battle tomorrow, for instance.

Suppose now I realize that due to a bug, tomorrow I will be able to transfer ten million dollars from a client’s account to mine, and then retire to a country that won’t extradite me. A little angel says to me:

  1. Your freely taking your client’s money without permission tomorrow entails your being a thief tomorrow.

I don’t want to be a thief, tomorrow or ever, so I am about to decide not to do it. But now a little devil convinces me of AFOF and says that while (1) is true, so is:

  2. Your freely taking your client’s money without permission tomorrow entails your being a saint tomorrow.

Perhaps I am not very good at modal logic and the devil needs to explain. Given AFOF, it is necessarily false that I will freely take my client’s money without permission tomorrow, and a necessary falsehood entails everything. So, the devil adds, I might as well buy my plane tickets now.

The angel, however, grants AFOF for the sake of argument, but says that notwithstanding (2), the following holds:

  3. Tomorrow it will be the case that your taking your client’s money without permission entails your being a thief.

For the entailment holds always.

At this point, we have an interesting question. Given AFOF, should I guide my actions by the entailment between future-tensed claims in (2) or by the future-tensed entailment claim in (3)? The angel urges that the devil’s reasoning undercuts all rationality, while the angel’s reasoning does not, and hence is superior.

But the devil has one more trick up his sleeve. He notes that it is a contingent question whether there will be a tomorrow at all. For God might freely decide to end time before tomorrow. Thus, that there will be a tomorrow is false on AFOF. But (3) implies that there will be a tomorrow, and so (3) is false as well. I try to argue on the basis of Scripture that God has made promises that entail a future eternity, but the devil is a lot better at citing the Bible than I, and convinces me that God might transfer us to a timeless state or maybe eternal life is a supertask lasting from 8 to 9 pm tonight. And in any case, surely it should not depend on revelation whether the angel has a good argument not to take the client’s money. This is a problem for AFOF.

Maybe this is the way out. The angel could say this:

  4. Necessarily, if there will be a tomorrow, then it will be true tomorrow that taking your client’s money without permission entails your being a thief.

But while this conditional is true on AFOF, if the devil has made his case that God hasn’t promised there will be a tomorrow, he can respond with:

  5. Necessarily, if God hasn’t promised there will be a tomorrow and there will be a tomorrow, then it will be true tomorrow that taking your client’s money without permission entails your being a saint.

For the antecedent of the conditional here is necessarily false on AFOF, it being contingent that there will be a tomorrow absent a divine promise. And it seems that (5) is even more relevant to guiding action than (4), then.

Maybe the defender of AFOF can insist that the future must be infinite. But this does not seem plausible.

Wednesday, July 16, 2025

Yet another counterexample to act utilitarianism

It is wrong to torture a stranger for 99 minutes in order to avoid 100 minutes of equal torture to oneself.

Entailment and Open Future views

This is probably an old thing that has been discussed to death, but I only now noticed it. Suppose an open future view on which future contingents cannot have truth value. What happens to entailments? We want to say:

  1. That Jones will freely mow the lawn tomorrow entails that he will mow the lawn tomorrow

and to deny:

  2. That Jones will freely mow the lawn tomorrow entails that he will not mow the lawn tomorrow.

Now, a plausible view of entailment is that:

  3. p entails q if and only if it is impossible for p to be true while q is false.

But if future contingents cannot have truth value, then that Jones will freely mow the lawn tomorrow cannot be true, and hence by (3) it entails everything. In particular, both (1) and (2) will be true.

Presumably, the open futurist who believes future contingents cannot have truth value will give a different account of entailment, such as:

  4. p entails q if and only if there is no history in which p is true and q is false.

But what is a history? Here is a possible story. For a time t, let a t-possibility be a maximal set of propositions that could all be true together at t. Given the open future view we are exploring, a t-possibility will not include any propositions reporting contingent events after t. If t1 < t2, and A1 is a t1-possibility while A2 is a t2-possibility, we can say that A1 is included in A2 provided that for any proposition p in A1, the proposition that p was true at t1 is a member of A2. We can then say that a history h is a function that assigns a t-possibility h(t) to every time t such that h(t1) is included in h(t2) whenever t1 < t2.

(Technical note: Open theism implies a theory of tensed propositions, I assume. Thus if A is a t1-possibility, then it is not a t2-possibility if t2 ≠ t1, since any t-possibility will include the proposition that t is present.)

But what does it mean to say that a proposition p is true in a history h? Here is a plausible approach. Suppose t0 is the present time. Given a proposition p that says that s, let pt0 be the backdated proposition that at t0 it was the case that s (with whatever shifts of tense are needed in s to make this grammatical). Then p is true in h provided that there is a time t1 > t0 such that pt0 is a member of h(t1). In other words, a proposition p is true in h provided that eventually h settles its truth value.

This works nicely for letting us affirm (1) and deny (2). In every history in which it becomes true that Jones will freely mow the lawn it becomes true that Jones will mow the lawn, while this is not so if we replace the consequent with “Jones will not mow the lawn.” But what about statements that quantify over times? Consider:

  5. Jones will mow the lawn, and for every time t at which Jones will mow the lawn, there will be a time t′ that is more than a year after t such that Jones will freely mow the lawn at t′.

This entails:

  6. Jones will mow the lawn, and for every time t at which Jones will mow the lawn, there will be a time t′ that is more than a year after t such that Jones will mow the lawn at t′.

but does not entail:

  7. Jones will not mow the lawn.

But there is no history h at which (5) is true by the above account of truth-at-a-history given our open future view. For let t0 be the present and let p be the proposition expressed by (5). Then at any future time t and any history h, the proposition pt0 is not a member of h(t). For if it were a member of h(t), it would be affirming the existence of an infinite number of future free mowings, and such a proposition cannot be true on our open future view. Since there is no history h at which (5) is true, by (4) we have it that (5) entails both (6) and (7), which is the wrong result.

What if instead of saying that future contingents lack truth value, we say that they are all false? This requires a slight modification to the account of p being true at a history. Instead of saying that p is true at h provided that there is some future time t such that pt0 is in h(t), we need to say that there is some future time t such that pt0 is in h(t′) for all t′ ≥ t. This gives the right truth values for (1) and (2), but it also makes (7) true.
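
To make the contrast between the two proposals explicit, here they are side by side in my own symbolization (t0 is the present and pt0 the backdated proposition):

```latex
\begin{align*}
\text{(truth-value-gap version):}\quad & p \text{ is true in } h \iff \exists t > t_0:\ p_{t_0} \in h(t)\\
\text{(all-false version):}\quad & p \text{ is true in } h \iff \exists t > t_0\ \forall t' \ge t:\ p_{t_0} \in h(t')
\end{align*}
```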

I think the above open futurist accounts of entailment work nicely for statements with a single unbounded quantifier over times, but once we get alternating quantifiers like in (5), where the second conjunct is of the form ∀t ∃t′ ϕ(t, t′), things break down.

Perhaps the open futurist just needs to be willing to bite the bullet and say that (5) entails (7)?

Open Theism and divine promises

Open Theist Christians tend to think that there are some things God knows about the future, and these include the content of God’s promises to us. God’s promises are always fulfilled.

But it seems that the content of many of God’s promises depends on free choices. For imagine that all the recipients of God’s promise freely choose to release God from the promise; then God would be free not to follow the promise, it appears, and so he could freely choose not to act in accordance with the promise. Thus there seems to be a sequence of creaturely and divine free choices on which the content of the promise does not come about.

This argument may not work for all of God’s promises. Some of God’s promises are covenants, and it may be that covenants are a type of agreement in which neither party can release the other. There may be other unreleasable promises: perhaps when x promises to punish y, that’s a promise y cannot release x from. But do we have reason to think that God makes no “simple promises”, promises other than covenants and promises of punishment?

I do not think this is a definitive argument against open theism. The open theist can bite the bullet and say that God doesn’t always know he will fulfill his promises. But it is interesting to see that on open theism, God’s knowledge of the future is even more limited than we might have initially thought.

Tuesday, July 15, 2025

Open theism and the Incarnation

Here is a very plausible pair of claims:

  1. The Son could have become incarnate as a different human being.

  2. God foreknew many centuries ahead of time which human being the Son would become incarnate as.

Regarding 1, of course, the Son could not have been a different person—the person the Son is and was and ever shall be is the second person of the Trinity. But the Son could have been a different human being.

Here is a sketch of an argument for 1:

  • If the identity of a human being depends on the body, then if the Son became incarnate as a 3rd century BC woman in India, this would be a different human being from Jesus (albeit the same person).

  • If the identity of a human being depends on the soul, then God could have created a different soul for the Son’s incarnation.

  • The identity of a human being depends on either the body or the soul.

I don’t have as good an argument for 2 as I do for 1, but I think 2 is quite plausible given what Scripture says about God’s having planned out the mission of Jesus from of old.

Now add:

  3. If the Son could have become incarnate as a different human being, which human being he became incarnate as depends on a number of free human choices in the century preceding the incarnation.

Now, 1, 2 and 3 lead to an immediate problem for an open theist Christian who thinks God doesn’t foreknow human free choices (my thinking on this is inspired by a paper of David Alexander, though his argument is different).

Why is 3 true? Well, if the identity of a human being even partly depends on the body (as is plausible), given that (plausibly) Mary was truly a biological mother of Jesus, then if Mary’s parents had not had any children, the body that Jesus actually had would not have existed, and an incarnation would have happened with a different body and hence a different human being.

Objection: God could have created Mary—or the body for the incarnation—directly ex nihilo in such a case, or God could have overridden human free will if some human were about to make a decision that would lead to Mary not existing.

Response: If essentiality of origins is true, then it is logically impossible for the same body to be created ex nihilo as actually had a partial non-divine cause. But I don’t want the argument to depend on essentiality of origins. Instead, I want to argue as follows. Both of the solutions in the objection require God to foreknow that he would in fact engage in such intervention if human free choices didn’t cooperate with his plan. God’s own interventions would be free choices, and so on open theism God wouldn’t know that he would thus intervene. One might respond that God could resolve to ensure that a certain body would become available, and a morally perfect being always keeps his resolutions. But while perhaps a morally perfect being always keeps his promises, I think it is false that a morally perfect being always keeps his resolutions. Unless one is resolving to do something that one is already obligated to do, it is not wrong to change one’s mind about a resolution. I suppose God could have promised someone that he would ensure the existence of a certain specific body, but we have no evidence of such a specific promise in Scripture, and it seems an odd maneuver for God to have to make in order to know ahead of time who the human being that would save the world would be.

What if the identity of a human depends solely on the soul? But then the identity of the human being that the Son would become incarnate as would depend on God’s free decision which soul to create for that human being, and the same remarks as I made about resolutions in the previous paragraph would apply.

Monday, July 14, 2025

The Reverse Special Composition Question

Van Inwagen famously raised the Special Composition Question (SCQ): What is an informative criterion for when a proper plurality of objects composes a whole?

There is, however, the Reverse Special Composition Question (RSCQ): What is an informative criterion for when an object is composed of a proper plurality?

The SCQ seems a more fruitful question when we think of parts as prior to the whole. The RSCQ seems a more fruitful question when we think of wholes as prior to the parts.

If by parts we mean something like “integral parts”, we have a pretty quick starter option for answering the RSCQ:

  1. An object is composed of a proper plurality of parts just in case it takes up more than a point of space.

I am not inclined to accept (1) because I like the possibility of extended simples, but it is a pretty neat and simple answer. Suppose that (1) is correct. Then we have a kind of simplicity argument for the thesis that the whole is prior to its parts. If the parts are prior to the whole, SCQ is a reasonable question, but doesn’t have an elegant and plausible answer (let us suppose). If the whole is prior to the parts, SCQ is not a reasonable question but RSCQ instead is, and RSCQ has an elegant and plausible answer (let us suppose). So we have some reason to accept that the whole is prior to the parts.

Natural kinds across categories

Most philosophical discussions of natural kinds concern entities in the category of substance: particles, chemical substances, organisms, etc. But I think we shouldn’t forget that there is good reason to posit natural kinds of entities in other categories.

For instance, you and I are each engaging in a token activity that falls under the natural kind (say) mammalian breathing. The natural kind specifies some essential properties of the kind, namely that it is a kind of filling and/or emptying of the lungs, as well as some teleological features, such as that the filling and emptying should be rhythmic. Instances of the kind may be better or worse: given that I am congested after a long drawn-out cold, likely your breathing is better than mine.

There are, plausibly, such things as natural activities, which fall under activity natural kinds. These kinds may include gravitational attraction, mating, fish respiration, etc.

Dispositions, too, may fall under natural kinds, indeed a nested sequence of them. We might say that some dispositions are habits, and some habits are virtues. Thus, perhaps, you and I each have a certain disposition to rationally withstand danger, a disposition that is a token of courage, a kind of virtue. Your and my courages are different: for instance, perhaps, I am more willing to withstand social danger while you are more willing to withstand physical danger. Whether indeed virtues are natural kinds seems to me to be a central question for the metaphysics of virtue ethics.

There may be natural kinds of relations, too. Thus, I think marriage is a natural kind. On the other hand, I think presidency is not.

Friday, July 11, 2025

Reasons and direct support

A standard view of reasons is that reasons are propositions or facts that support an action. Thus, that I promised to visit is a reason to visit, that pain is bad is a reason to take an aspirin, and that I am hungry is a reason to eat.

But notice that any such fact can also be a reason for the opposite action. That I promised to visit is a reason not to visit, if you begged me not to keep any of my promises to you. That pain is bad is a reason not to take an aspirin, and that I am hungry is a reason not to eat when I am striving to learn to endure hardship.

One might think that this kind of contingency in what the reasons—considered as propositions or facts—support disappears when the reasons are fully normatively loaded. That I owe you a visit is always a reason to visit, and that I ought to relieve my hunger is always a reason to eat.

This is actually mistaken, too. That I owe you a visit is indeed always a reason to visit. But it can also be a reason—and even a moral one—not to visit. For instance, if a trickster informs me that if I engage in an owed visit to you, they will cause you some minor harm—say, give you a hangnail—then the fact that I owe you a visit gives me a reason not to visit you, though that reason will be outweighed (indeed, it has to be outweighed, or else it wouldn’t be true that I owe you the visit).

In fact, plausibly, that an action is the right one is typically also a moral reason not to perform the action. For whenever we do the right thing, that has a potential of feeding our pride, and we have reason not to feed our pride. Of course, that reason is always outweighed. But it’s still there. And we might even say that the fact that an action is wrong is a reason, albeit not a moral one, to perform that action in order to exhibit one’s will to power (this is a morally bad reason to act on, but one that is probably minimally rational—we understand someone who does this).

All this suggests to me that we need a distinction: some reasons directly support doing something. That I owe you a visit directly supports my visiting you, but only indirectly supports my not visiting you to avoid pride in fulfilling my duties.

But now it is an interesting question what determines which reasons directly support which actions. One option is that the relation is due to entailment: a reason directly supports ϕing provided that that reason entails that ϕing is good or right. But this misses the hyperintensionality in reasons. It is necessarily true that it’s right for me to respect my neighbor; a necessary truth is entailed by every proposition; but that my neighbor is annoying is not directly a reason to respect my neighbor. One might try for some “relevant entailment”, but I am dubious. Perhaps the fact that an action is wrong relevantly entails that there is reason to do it to exhibit one’s will to power, but that ϕing is wrong is directly a reason not to ϕ, and only indirectly a reason to ϕ.

I suspect the right answer is that this direct support relation comes from our human nature: if it is our nature to be directly motivated to ϕ because of R, then R directly supports ϕing. Hmm. This may work for epistemic support, too.

Wednesday, July 9, 2025

Habitual action

Alice has lived a long and reasonable life. She developed a lot of good habits. Every morning, she goes on a walk. On her walk, she looks at the lovely views, she smells the flowers in season, she gathers mushrooms, she listens to the birds chirping, she climbs a tree, and so on. Some of these things she does for their own sake and some she does instrumentally. For instance, she climbs a tree because she saw research that daily exercise promotes health, but she smells the flowers for the sake of the smelling itself.

She figured all this out when she was in her 30s, but now she is 60. One day, she realizes that she has long since forgotten the reasoning that led to her habits. In particular, she no longer knows which of her daily activities have intrinsic value and which ones are merely instrumental.

So what can we say about her habitual activities?

One option is that they retain the teleology with which they were established. Although Alice no longer remembers that she climbs a tree solely for the sake of health, that is indeed what she climbs the tree for. On this picture, when we perform actions from habit, they retain the teleology they had when the habit was established. In particular, it follows that agential teleology need not be grounded in occurrent mental states of the agent. This is a difficult bullet to bite.

The other option is that they have lost their teleological characterization. This implies, interestingly, that there is no fact about whether the actions are being done for their own sake or instrumentally. In particular, it follows that the standard division of actions into those done for their own sake and those done instrumentally is not exhaustive. That is also a difficult bullet to bite.

I am not sure what to say. I suspect one lesson is that action is more complicated than we philosophers think, and our simple characterizations of it miss the complexity.

Acting without knowledge of rightness

Some philosophers think that for your right action to be morally worthy you have to know that the action is right.

On the contrary, there are cases where an action is even more morally worthy when you don’t know it’s right.

  1. Alice is tasked with a dangerous mission to rescue hikers stranded on a mountain. She knows it’s right, and she fulfills the mission.

  2. Bob is tasked with a dangerous mission to rescue hikers stranded on a mountain. He knows it’s right, but then just before he heads out, a clever philosopher gives him a powerful argument that there is no right or wrong. He is not fully convinced, but he has no time to figure out whether the argument works before the mission starts. Instead, he reasons quickly: “Well, there is a 50% chance that the argument is sound and there is no such thing as right and wrong, in which case at least I’m not doing anything wrong by rescuing. But there is a 50% chance that there is such a thing as right and wrong, and if anything is right, it’s rescuing these hikers.” And he fulfills the mission.

Bob’s action is, I think, even more worthy and praiseworthy than Alice’s. For while Alice risks her life for a certainty of doing the right thing, Bob is willing to risk his life in the face of uncertainty. Some people would take the uncertainty as an excuse, but Bob does not.

Monday, July 7, 2025

Acting because of and for reasons

It seems that:

  1. If you pursue friendship because friendship is non-instrumentally valuable, then you pursue friendship non-instrumentally.

But not so. Imagine a rich eccentric offers you $10,000 to pursue something that is non-instrumentally valuable. You think about it, correctly decide friendship is non-instrumentally valuable, and pursue it to gain the $10,000. You are pursuing friendship because it is non-instrumentally valuable, but you are pursuing it merely instrumentally.

More generally, is there any conditional of the form:

  2. If you pursue friendship because p, then you pursue friendship non-instrumentally

that is true in all cases, where p states some known reason for the pursuit of friendship? I don’t think so. For the rich eccentric can tell you that you will get $10,000 if it is both the case that p and you pursue friendship. In that case, if you know that it is the case that p, then your reason for pursuing friendship is p, since it is given p, and only given p, that you will get $10,000 for your pursuit of friendship.

Maybe the lesson from the above is that there is a difference between doing something because of a reason and doing it for the reason. That friendship is non-instrumentally valuable is a reason. In the first rich eccentric case, you are pursuing friendship because of that reason, but you are not pursuing it for that reason. Thus maybe we can say:

  3. If you pursue friendship for the reason that friendship is non-instrumentally valuable, then you pursue friendship non-instrumentally.

In the case where you are aiming only at the $10,000, you are pursuing friendship for the reason that pursuing friendship will get you $10,000, or more explicitly for the conjunctive reason that (a) if friendship is non-instrumentally valuable it will get you $10,000 to pursue it and (b) it is non-instrumentally valuable. But you are nonetheless pursuing friendship because it is non-instrumentally valuable.

There is thus a rather mysterious “acting for R” relation in regard to actions which does not reduce to “acting because R”.

Thursday, June 26, 2025

A failed Deep Thought

I was going to post the following as Deep Thoughts XLIII, in a series of posts meant to be largely tautologous or at least trivial statements:

  1. Everyone older than you was once your age.

And then I realized that this is not actually a tautology. It might not even be true.

Suppose time is discrete in an Aristotelian way, so that the intervals between successive times are not always the same. Basically, the idea is that times are aligned with the endpoints of change, and these can happen at all sorts of seemingly random times, rather than at multiples of some interval. But in that case, (1) is likely false. For it is unlikely that the random-length intervals of time in someone else’s life are so coordinated with yours that the exact length of time that you have lived equals the sum of the lengths of intervals from the beginning to some point in the life of a specific other person.

Of course, on any version of the Aristotelian theory that fits with our observations, the intervals between times are very short, and so everyone older than you was once approximately your age.

One might try to replace (1) by:

  2. Everyone older than you was once younger than you are now.

But while (2) is nearly certainly true, it is still not a tautology. For if Alice has lived forever, then she’s older than you, but she was never younger than you are now! And while there probably are no individuals who are infinitely old (God is timelessly eternal), this fact is far from trivial.

Tuesday, June 24, 2025

Punishment, causation and time

I want to argue for this thesis:

  1. For a punishment P for a fault F to be right, F must stand in a causal-like relation to P.

What is a causal-like relation? Well, causation is a causal-like relation. But there is probably one other causal-like relation, namely when because of the occurrence of a contingent event E, God knows that E occurred, and this knowledge in turn explains why God did something. This is not exactly causation, because God is not causally affected by anything, but it is very much like causation. If you don’t agree, then just remove the “like” from (1).

Thesis (1) helps explain what is wrong with punishing people on purely statistical grounds, such as sending a traffic ticket to Smith on the grounds that Smith has driven 30,000 miles in the last five years and anyone who drove that amount must have committed a traffic offense.

Are there other arguments for (1)? I think so. Consider forward-looking punishment where by knowing someone’s present character you know that they will commit some crime in ten days, so you punish them now (I assume that they will commit the crime even if you do not punish them). Or, even more oddly, consider circular forward-looking punishment. Suppose Alice has such a character that it is known that if we jail her, she will escape from jail. But assume that in our society an escape from jail is itself a crime punishable by jail, and that Alice is not currently guilty of anything. We then jail her on the grounds that she will escape from jail, the punishment for which is the very jailing we are now imposing.

One may try to rule out the forward-looking cases on the grounds that instead of (1) we should hold:

  2. For a punishment P for a fault F to be right, P must come after F.

But that’s not right. Simultaneous causation seems possible, and it does not seem unjust to set up a system where a shoplifter feels punitive pain at the very moment of the shoplifting, as long as the pain is caused by the shoplifting.

Or consider this kind of a case. You know that Bob will commit a crime in ten days, so you set up an automated system that will punish him at a preset future date. It does not seem to be of much significance whether the system is set to go off in nine or eleven days.

Or consider cases where Special Relativity is involved, and the punishment occurs at a location distant from the criminal. For instance, Carl, born on Earth, could be sentenced to public infamy on Earth for a crime he commits around Alpha Centauri. Suppose that we have prior knowledge that he will commit the crime on such and such a date. If (2) is the right principle, when should we make him infamous on Earth? Presumably after the crime. But in what reference frame? That seems a silly question. It is silly, because (2) isn’t the right principle—(1) is better.

Objection: One cannot predict what someone will freely do.

Response: One perhaps cannot predict with 100% certainty what someone will freely do, but punishment does not require 100% certainty.

Friday, June 20, 2025

Punishment, reward and theistic natural law

I’ve always found punishment and (to a lesser extent) reward puzzling. Why is it that when someone does something wrong there is moral reason to impose a harsh treatment on them, and why is it that when someone does something right—and especially something supererogatory—there is moral reason to do something nice for them?

Of course, it’s easy to explain why it’s good for our species that there be a practice of reward and punishment: such a practice in obvious ways helps to maintain a cooperative society. But what makes it morally appropriate to impose a sacrifice on an individual for the good of the species in this way, whether the sacrifice falls on the person receiving the punishment or on the person giving the reward when the reward has a cost?

Punishment and reward thus fit into a schema where we would like to be able to make use of this argument form:

  1. It would be good (respectively, bad) for humans if moral fact F did (did not) obtain.

  2. Thus, probably, moral fact F does obtain.

(The argument form is better on the parenthetical negative version.) It would be bad for humans if we did not have distinctive moral reasons to reward and punish, since our cooperative society would be more liable to fall apart due to cheating, freeriding and neglect of others. So we have such moral reasons.

As I have said on a number of occasions, we want a metaethics on which this is a good argument. Rule-utilitarianism is such a metaethics. So is Adams’ divine command theory with a loving God. And so is theistic natural law, where God chooses which natures to exemplify because of the good features in these natures. I want to say something about this last option in our case, and why it is superior to the others.

Human nature encodes what is right and wrong for us. Thus, it can encode that it is right for us to punish and reward. An answer as to why it’s right for us to reward and punish, then, is that God wanted to make cooperative creatures, and chose a nature of cooperative creatures that have moral reasons to punish and reward, since that improves the cooperation.

But there is a way that the theistic natural law solution stands out from the others: it can incorporate Boethius’ insight that it is intrinsically bad for one to get away unpunished with wrongdoing. For our nature not only encodes what is right and wrong for us to do, but also what is good or bad for us. And so it can encode that it is bad for us to get away unpunished. It is good for us that it be bad for us to get away unpunished, since its being bad for us to get away unpunished means that we have additional reason to avoid wrongdoing—if we do wrong, we either get punished or we get away unpunished, and both options are bad for us.

The rule-utilitarian and divine-command options only explain what is right and wrong, not what is good and bad, and so they don’t give us Boethius’ insight.

Thursday, June 5, 2025

What is an existential quantifier?

What is an existential quantifier?

The inferentialist answer is that an existential quantifier is any symbol that has the syntactic features of a one-place quantifier and obeys the same logical rules as an existential quantifier (we can precisely specify both the syntax and logic, of course). Since Carnap, we’ve had good reason to reject this answer (see, e.g., here).

Here is a modified suggestion. Consider all possible symbols that have the syntactic features of a one-place quantifier and obey the rules of an existential quantifier. Now say that a symbol is an existential quantifier provided that it is a symbol among these that maximizes naturalness, in the David Lewis sense of “naturalness”.

Moreover, this provides the quantifier variantist or pluralist (who thinks there are multiple existential quantifiers, none of them being the existential quantifier) with an answer to a thorny problem: Why not simply disjoin all the existential quantifiers to make a truly unrestricted existential quantifier, and say that that is the existential quantifier? The quantifier variantist can say: Go ahead and disjoin them, but a disjunction of quantifiers is less natural than its disjuncts and hence isn’t an existential quantifier.

This account also allows for quantifier variance, the possibility that there is more than one existential quantifier, as long as none of these existential quantifiers is more natural than any other. But it also fits with quantifier invariance as long as there is a unique maximizer of naturalness.

Until today, I thought that the problem of characterizing existential quantifiers was insoluble for a quantifier variantist. I was mistaken.

It is tempting to take the above to say something deep about the nature of an existential quantifier, and maybe even the nature of being. But I think it doesn’t quite. We have a characterization of existential quantifiers among all possible symbols, but this characterization doesn’t really tell us what they mean, just how they behave.

Tuesday, June 3, 2025

Combining epistemic utilities

Suppose that the right way to combine epistemic utilities or scores across individuals is averaging, and I am an epistemic act expected-utility utilitarian—I act for the sake of expected overall epistemic utility. Now suppose I am considering two different hypotheses:

  • Many: There are many epistemic agents (e.g., because I live in a multiverse).

  • Few: There are few epistemic agents (e.g., because I live in a relatively small universe).

If Many is true, given averaging my credence makes very little difference to overall epistemic utility. On Few, my credence makes much more of a difference to overall epistemic utility. So I should have a high credence for Few. For while a high credence for Few will have an unfortunate impact on overall epistemic utility if Many is true, because the impact of my credence on overall epistemic utility will be small on Many, I can largely ignore the Many hypothesis.

In other words, given epistemic act utilitarianism and averaging as a way of combining epistemic utilities, we get a strong epistemic preference for hypotheses with fewer agents. (One can make this precise with strictly proper scoring rules.) This is weird, and does not match any of the standard methods (self-sampling, self-indication, etc.) for accounting for self-locating evidence.
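
Here is a minimal numerical sketch of that point with a Brier-type accuracy score. The agent counts, the 50/50 evidential probability, and the simplification of treating everyone else's scores as a fixed constant are all just illustrative assumptions of mine:

```python
# Brier-type accuracy: 1 - (credence - truth)^2, with truth 1 for Few, 0 for Many.
def accuracy(x, truth):
    return 1 - (x - truth) ** 2

def expected_average_utility(x, p_few=0.5, n_few=10, n_many=10**6):
    """Expected averaged epistemic utility as a function of my credence x in Few,
    ignoring everyone else's (constant) contribution. My score gets weight 1/N,
    where N is the number of agents on the hypothesis that turns out true."""
    return p_few * accuracy(x, 1) / n_few + (1 - p_few) * accuracy(x, 0) / n_many

best = max((expected_average_utility(x / 100), x / 100) for x in range(101))
print(best[1])  # 1.0 on this grid; the exact optimum is n_many/(n_many + n_few), about 0.99999
```

So even with the evidence split 50/50 between Many and Few, the maximizer of expected average utility is pushed to near-certainty in Few.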

(I should note that I once thought I had a serious objection to the above argument, but I can't remember what it was.)

Here’s another argument against averaging epistemic utilities. It is a live hypothesis that there are infinitely many people. But on averaging, my epistemic utility makes no difference to overall epistemic utility. So I might as well believe anything on that hypothesis.

One might toy with another option. Instead of averaging epistemic utilities, we could average credences across agents, and then calculate the overall epistemic utility by applying a proper scoring rule to the average credence. This has a different problematic result. Given that there are at least billions of agents, for any of the standard scoring rules, as long as the average credence of agents other than you is neither very near zero nor very near one, your own credence’s contribution to overall score will be approximately linear. But it’s not hard to see that then to maximize expected overall epistemic utility, you will typically make your credence extreme, which isn’t right.
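
Here is a similar sketch of the averaged-credences problem, again with made-up numbers (a million other agents rather than billions, their average credence 0.5, my own probability 0.6):

```python
def expected_group_score(x, my_prob=0.6, others_avg=0.5, n_others=10**6):
    """Expected Brier accuracy of the group-average credence, as a function of my
    reported credence x. With many other agents, x enters the average almost linearly."""
    avg = (n_others * others_avg + x) / (n_others + 1)
    return my_prob * (1 - (avg - 1) ** 2) + (1 - my_prob) * (1 - avg ** 2)

best_score, best_x = max((expected_group_score(x / 100), x / 100) for x in range(101))
print(best_x)  # 1.0: an extreme credence, even though my own probability is only 0.6
```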

If not averaging, then what? Summing is the main alternative.

Closed time loop

Imagine two scenarios:

  1. An infinitely long life of repetition of a session of meaningful pleasure followed by a memory wipe.

  2. A closed time loop involving one session of the meaningful pleasure followed by a memory wipe.

Scenario (1) involves infinitely many sessions of the meaningful pleasure. This seems better than having only one session as in (2). But subjectively, I have a hard time feeling any preference for (1). In both cases, you have your pleasure, and it’s true that you will have it again.

I suppose this is some evidence that we’re not meant to live in a closed time loop. :-)

Monday, June 2, 2025

Shuffling an infinite deck

Suppose infinitely many blindfolded people, including yourself, are uniformly randomly arranged on positions one meter apart numbered 1, 2, 3, 4, ….

Intuition: The probability that you’re on an even-numbered position is 1/2 and that you’re on a position divisible by four is 1/4.

But then, while asleep, the people are rearranged according to the following rule. The person at each even-numbered position 2n is moved to position 4n. The people at odd-numbered positions are then shifted leftward as needed to fill up the positions not divisible by 4. Thus, we have the following movements:

  • 1 → 1

  • 2 → 4

  • 3 → 2

  • 4 → 8

  • 5 → 3

  • 6 → 12

  • 7 → 5

  • 8 → 16

  • 9 → 6

  • and so on.

If the initial intuition was correct, then the probability that now you’re on a position that’s divisible by four is 1/2, since you’re now on a position divisible by four if and only if initially you were on a position divisible by two. Thus it seems that now people are no longer uniformly randomly arranged, since for a uniform arrangement you’d expect your probability of being in a position divisible by four to be 1/4.
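
Here is a short sketch of the rearrangement (the implementation is just my own rendering of the rule above), together with a check of the resulting fraction of people who land on a multiple of four:

```python
def new_position(k):
    """Where the person starting at position k ends up under the rule above."""
    if k % 2 == 0:
        return 2 * k  # position 2n goes to position 4n
    # k is the ((k+1)//2)-th odd position; send it to the ((k+1)//2)-th
    # positive integer that is not a multiple of 4.
    target, count, pos = (k + 1) // 2, 0, 0
    while count < target:
        pos += 1
        if pos % 4 != 0:
            count += 1
    return pos

print([(k, new_position(k)) for k in range(1, 10)])
# [(1, 1), (2, 4), (3, 2), (4, 8), (5, 3), (6, 12), (7, 5), (8, 16), (9, 6)]

# Among the first N starting positions, exactly the even ones land on a multiple
# of four, so the fraction is 1/2 rather than the 1/4 a uniform arrangement suggests.
N = 1000
print(sum(new_position(k) % 4 == 0 for k in range(1, N + 1)) / N)  # 0.5
```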

This shows an interesting difference between shuffling a finite and an infinite deck of cards. If you shuffle a finite deck of cards that’s already uniformly distributed, it remains uniformly distributed no matter what algorithm you use to shuffle it, as long as you do so in a content-agnostic way (i.e., you don’t look at the faces of the cards). But if you shuffle an infinite deck of distinct cards that’s uniformly distributed in a content-agnostic way, you can destroy the uniform distribution, for instance by doubling the probability that a specific card is in a position divisible by four.

I am inclined to take this as evidence that the whole concept of a “uniformly shuffled” infinite deck of cards is confused.

Saturday, May 31, 2025

Four-flour pancakes

I was watching an old Aunt Jemima pancake mix commercial which touted it as being made from four flours: wheat, corn, rye and rice, and I decided to see what pancakes made from them are like. I started with this wheat flour pancake recipe, but tweaked some things, and made them this morning. Pretty good. Perhaps more hearty than standard pancakes, and the texture was a bit more crunchy, which I liked.

  • 1/2 cup of wheat flour

  • 1/2 cup of whole-grain rye flour

  • 1/2 cup of corn flour

  • 1/2 cup of (non-glutinous) rice flour

  • 4 3/4 teaspoons baking powder

  • 4 teaspoons white sugar

  • 1/3 teaspoon salt

  • 1 2/3 cup milk

  • 4 tablespoons melted butter

  • 1 large egg

  • 4 teaspoons apple sauce (or skip and use 1 1/3 egg, if you have some use for the remaining 2/3 of the egg)

  • cooking spray (I used canola spray)

  • optional: chocolate chips

Mix dry ingredients. Add wet ingredients. Mix well. Heat pan to medium heat. Spray with oil. Put a big serving spoon of mix on the pan. If you want to add chocolate chips, drop them in on top. Wait until the edges are getting dry. (It was surprisingly fast, about 1-2 minutes, and they would burn easily when I wasn’t fast enough.) Flip and brown the other side (again, it’s fast).



Yields 9-10 not very large pancakes. The frying took half an hour with two pans in simultaneous use. I measured out all the ingredients the night before and pre-mixed the dry ingredients so I could be fast in the morning before a pickleball game.

Friday, May 30, 2025

The value of moral norms

Here is a very odd question that occurred to me: Is it good for there to be moral norms?

Imagine a world just like this one, except that there are no moral norms for its intelligent denizens—but nonetheless they behave as we do. They feel repelled by the idea of murder and torture, and find the life of a Mother Teresa attractive, but there are no moral truths behind these things.

Such a world would have one great advantage over ours: there would be no moral evil. That world’s Hitler and Stalin would cause just as much pain and suffering, but they wouldn’t be wicked in so doing. Given the Socratic insight that it is worse to do than to suffer evil, a vast amount of evil would disappear in such a world. At least a third of the evil in the world would be gone. Our world has three categories of evil:

  I. Undergoing of natural evils,

  II. Undergoing of moral evils, and

  III. Performance of moral evils.

The third category would be gone, and it is probably the biggest of the three. Wouldn’t that be worth it?

Here is one answer. For cooperative intelligent social animals, a belief in morality is very useful. But to live one’s life by a belief that is false seems a significant harm. Cooperative intelligent social animals in the alternative world would be constantly deceived by their belief in morality. That is a great evil. But is it as great an evil as all Category III evils taken together? I suspect it is but a small fraction of the sum of all Category III evils.

Here is a second answer. In removing moral norms, one would admittedly remove a vast category of evils, but also a vast category of goods: the performance of moral good. If we have the intuition that having moral norms is a good thing—that it would be a disappointment to learn that moral norms were an illusion—then we have to think that the performances of moral good are a very great thing indeed, one comparable to the sum of all Category III evils.

I am attracted to a combination of the two answers. But I can also see someone saying: “It doesn’t matter whether it’s worth having moral norms or not, but it is simply impossible to have cooperative intelligent social animals that believe in morality without their being under moral norms.” A Platonist may say that on the grounds that moral norms are necessary. A theist may say it on the grounds that it is contrary to the character of a perfect God to manufacture the vast deceit that would be involved in us thinking there are moral norms if there were no moral norms. These aren’t bad answers. But I still feel it’s good that there really are moral norms.

Thursday, May 29, 2025

Philosophy and child-raising

Philosophy Departments often try to attract undergraduates by telling them about instrumental benefits of philosophy classes: learning generalizable reading, writing and reasoning skills, doing better on the LSAT, etc.

But here is a very real and much more direct reason why lots of people should take philosophy classes. Most people end up having children. And children ask lots of questions. These questions include philosophical ones. Moreover, as they grow, especially around the teenage years, philosophical questions come to have special existential import: why should I be virtuous, what is the point of life, is there life after death, is there a God, can I be sure of anything?

For children’s scientific questions, there is always Wikipedia. But that won’t be very helpful with the philosophical ones. In a less diverse society, where parents can count on agreeing philosophically with the schools, parents could outsource children’s philosophical questions to a teacher they agree with. Perhaps religious parents can count on such agreement if they send their children to a religious school, but in a public school this is unlikely. (And in any case, outsourcing to schools is still a way of buying into something like universal philosophical education.) So it seems that vast numbers of parents need philosophical education to raise their children well.

Friday, May 23, 2025

Hyperreal infinitesimal probabilities and definability

In order to assign non-zero probabilities to such things as a lottery ticket in an infinite fair lottery or hitting a specific point on a target with a uniformly distributed dart throw, some people have proposed using non-zero infinitesimal probabilities in a hyperreal field. Hajek and Easwaran criticized this on the grounds that we cannot mathematically specify a specific hyperreal field for the infinitesimal probability. If that were right, then if there are hyperreal infinitesimal probabilities for such a situation, nonetheless we would not be able to say what they are. But it’s not quite right: there is a hyperreal field that is "definable", or fully specifiable in the language of ZFC set theory.

However, for the Hajek-Easwaran argument against hyperreal infinitesimal probabilities to work, we don’t need the hyperreal field to be non-definable. All we need is that the pair (*R,α) be non-definable, where *R is a hyperreal field and α is the non-zero infinitesimal assigned to something specific (say, a single ticket or the center of the target).

But here is a fun fact, much of the proof of which comes from some remarks that Michael Nielsen sent me:

Theorem: Assume ZFC is consistent. Then ZFC is consistent with there not being any definable pair (*R,α) where *R is a hyperreal field and α is a non-zero infinitesimal in that field.

[Proof: Solovay showed there is a model of ZFC where every definable set is measurable. But every free ultrafilter on the powerset of the naturals is nonmeasurable. However, an infinite integer in a hyperreal field defines a free ultrafilter on the naturals—given an infinite integer M, say that a subset A of the naturals is a member of the ultrafilter iff |M| ∈ *A. And a non-zero infinitesimal defines an infinite integer—say, as the floor of its reciprocal.]
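To spell out the key step (my own filling-in of standard details, with the displayed definition in LaTeX notation): an infinite hypernatural M in *R determines the family

    \[ \mathcal{U}_M \;=\; \{\, A \subseteq \mathbb{N} \;:\; M \in {}^{*}\!A \,\}. \]

By transfer, *(A ∩ B) = *A ∩ *B and *(ℕ ∖ A) = *ℕ ∖ *A, so U_M is closed under finite intersections and supersets and contains exactly one of A and ℕ ∖ A for each A; and U_M is free, since the infinite M belongs to no *F with F finite. A definable pair (*R,α) would thus yield a definable infinite hypernatural M (the absolute value of the floor of 1/α), hence a definable free ultrafilter U_M, hence a definable nonmeasurable set (identifying subsets of ℕ with points of 2^ℕ), which is what the Solovay-style model rules out.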

Given the Theorem, without going beyond ZFC, we cannot count on being able to define a specific hyperreal non-zero infinitesimal probability for situations like a ticket in an infinite lottery or hitting the center of a target. Thus, if a friend of hyperreal infinitesimal probabilities wants to be able to define one, they must go beyond ZFC (ZFC plus constructibility will do).

Wednesday, May 21, 2025

Doxastic moral relativism

Reductive doxastic moral relativism is the view that an action type’s being morally wrong is nothing but an individual or society’s belief that the action type is morally wrong.

But this is viciously circular, since we reduce wrongness to a belief about wrongness. Indeed, it now seems that murder is wrong provided that it is believed that it is believed that it is believed that..., ad infinitum.

A non-reductive biconditional moral relativism fares better. This is a theory on which (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if it is believed that it does. Compare this: There is such a property as mass, and necessarily an object has mass if and only if God believes that it has mass.

There is a biconditional-explanatory version. On this theory (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if, and if so then because, it is believed that it does.

While both the biconditional and biconditional-explanatory versions appear logically coherent, I think they are not particularly plausible. If there really is such a property as moral wrongness, and it does not reduce to our beliefs, then it just does not seem particularly plausible to think that it obtains solely because of our beliefs or that it obtains necessarily if and only if we believe it does. The only clear and non-gerrymandered examples we have of properties that obtain solely because of our beliefs or necessarily if and only if we believe they do are properties that reduce to our beliefs.

All this suggests to me that if one wishes to be a relativist, one should base the relativism on an attitude other than belief.

Monday, May 19, 2025

Sacraments and New Testament law

Christians believe that Jesus commanded us to baptize new Christians. However, there is a fundamental division in views: some Christians (such as Catholics and the Orthodox) have a sacramental view of baptism, on which baptism as such leads to an actual supernaturally-produced change in the person baptized, while others hold a symbolic view of it.

Here is an argument for the sacramental view. We learn from Paul that there is a radical change in God’s law from Old to New Testament times. I think our best account of that change is that we are no longer under divinely-commanded ceremonial and symbolic laws, but as we learn from the First Letter of John, we are clearly still under the moral law.

On the symbolic view, however, baptism is precisely a ceremonial and symbolic law—precisely the kind of thing that we are no longer under. On the sacramental view, by contrast, it is easy to explain how baptism falls under the moral law. Love of neighbor morally enjoins on us that we provide effective medical treatment to our neighbor, and love of self requires us to seek such treatment for ourselves. Similarly, if baptism is crucial to the provision of grace for moral healing, then love of neighbor morally enjoins on us that we baptize and love of self requires us to seek baptism for ourselves.

The same kind of argument applies to the Eucharist: since it is commanded by God in New Testament times, it is not merely symbolic.

Wednesday, May 14, 2025

Semantics of syntactically incorrect language

As anyone who has talked with a language-learner knows, syntactically incorrect sentences often succeed in expressing a proposition. This is true even in the case of formal languages.

Formal semantics, say of the Tarski sort, has difficulties with syntactically incorrect sentences. One approach to saving the formal semantics is as follows: Given a syntactically incorrect sentence, we find a contextually appropriate syntactically correct sentence in the vicinity (and what counts as vicinity depends on the pattern of errors made by the language user), and apply the formal semantics to that. For instance, if someone says “The sky are blue”, we replace it with “The sky is blue” in typical contexts and “The skies are blue” in some atypical contexts (e.g., discussion of multiple planets), and then apply formal semantics to that.

Sometimes this is what we actually do when communicating with someone who makes grammatical errors. But typically we don’t bother to translate to a correct sentence: we can just tell what is meant. In fact, in some cases, we might not even ourselves know how to translate to a correct sentence, because the proposition being expressed is such that it is very difficult even for a native speaker to get the grammar right.

There can even be cases where there is no grammatically correct sentence that expresses the exact idea. For instance, English has a simple present and a present continuous, while many other languages have just one present tense. In those languages, we sometimes cannot produce an exact grammatically correct translation of an English sentence. One can use some explicit markers to compensate for the lack of, say, a present continuous, but the semantic value of a sentence using these markers is unlikely to correspond exactly to the meaning of the present continuous (the markers may have a more determinate semantics than the present continuous). But we can imagine a speaker of such a language who imitates the English present continuous by a literal word-by-word translation of “I am” followed by the other language’s closest equivalent to a gerund, even when such translation is grammatically incorrect. In such a case, assuming the listener knows English, the meaning may be grasped, but nobody is capable of expressing the exact meaning in a syntactically correct way. (One might object that one can just express the meaning in English. But that need not be true. The verb in question may be one that does not have a precise equivalent in English.)

Thus we cannot account for the semantics of syntactically incorrect sentences by applying semantics to a syntactically corrected version. We need a semantics that works directly for syntactically incorrect sentences. This suggests that formal semantics are necessarily mere approximate models.

Similar issues, of course, arise with poetry.

Tuesday, May 13, 2025

Truth-value realisms about arithmetic

Arithmetical truth-value realists hold that any proposition in the language of arithmetic has a fully determined truth value. Arithmetical truth-value necessitists add that this truth value is necessary rather than merely contingent. Although we know from the incompleteness theorems that there are alternate non-standard natural number structures, with different truth values (e.g., there is a non-standard natural number structure according to which the Peano Axioms are inconsistent), the realist and necessitist hold that when we engage in arithmetical language, we aren’t talking about these structures. (I am assuming either first-order arithmetic or second-order arithmetic with Henkin semantics.)

Start by assuming arithmetical truth-value necessitism.

There is an interesting decision point for truth-value necessitism about arithmetic: Are these necessary truths twin-earthable? I.e., could there be a world whose denizens talk arithmetically as we do, and function physically as we do, but whose arithmetical sentences express different propositions, with different but still necessary truth values? This would be akin to a world where instead of water there is XYZ, a world whose denizens would be saying something false if they said “Water has hydrogen in it”.

Here is a theory on which we have twin-earthability. Suppose that the correct semantics of natural number talk works as follows. Our universe has an infinite future sequence of days, and the truth-values of arithmetical language are fixed by requiring the Peano Axioms (or just the Robinson Axioms) together with the thesis that the natural number ordering is order-isomorphic to our universe’s infinite future sequence of days, and then are rigidified by rigid reference to the actual world’s sequence of future days. But in another world—and perhaps even in another universe in our multiverse if we live in a multiverse—the infinite future sequence of days is different (presumably longer!), and hence the denizens of that world end up rigidifying a different future sequence of days to define the truth values of their arithmetical language. Their propositions expressed by arithmetical sentences sometimes have different truth values from ours, but that’s because they are different propositions—and they’re still as necessary as ours. (This kind of a theory will violate causal finitism.)

One may think of a twin-earthable necessitism about arithmetic as a kind of cheaper version of necessitism.

Should a necessitist go cheap and allow for such twin-earthing?

Here is a reason not to. On such a twin-earthable necessitism, there are possible universes for whose denizens the sentence “The Peano Axioms are consistent” expresses a necessary falsehood and there are possible universes for whose denizens the sentence expresses a necessary truth. Now, in fact, pretty much everybody thinks with great confidence that the sentence “The Peano Axioms are consistent” expresses a truth. But it is difficult to hold on to this confidence on twin-earthable necessitism. Why should we think that the universes with non-standard future sequences of days are less likely?

Here is the only way I can think of to answer this question. The standard naturals embed into the non-standard naturals. There is a sense in which they are the simplest possible natural number structure. Simplicity is a guide to truth, and so the universes with simpler future sequences of days are more likely.

But this answer does not lead to a stable view. For if we grant that what I just said makes sense—that the simplest future sequences of days are the ones that correspond to the standard naturals—then we have a non-twin-earthable way of fixing the meaning of arithmetical language: assuming S5, we fix it by the shortest possible future sequence of days that can be made to satisfy the requisite axioms by adding appropriate addition and multiplication operations. And this seems a superior way to fix the meaning of arithmetical language, because it better fits with common intuitions about the “absoluteness” of arithmetical language. Thus it provides a better theory than twin-earthable necessitism did.

I think the skepticism-based argument against twin-earthable necessitism about arithmetic also applies to non-necessitist truth-value realism about arithmetic. On non-necessitist truth-value realism, why should we think we are so lucky as to live in a world where the Peano Axioms are consistent?

Putting the above together, I think we get an argument like this:

  1. Twin-earthable truth-value necessitism about arithmetic leads to skepticism about the consistency of arithmetic or is unstable.

  2. Non-necessitist truth-value realism about arithmetic leads to skepticism about the consistency of arithmetic.

  3. Thus, probably, if truth-value realism about arithmetic is true, non-twin-earthable truth-value necessitism about arithmetic is true.

The resulting realist view holds arithmetical truth to be fixed along both dimensions of Chalmers’ two-dimensional semantics.

(In the argument I assumed that there is no tenable way to be a truth-value realist only about Σ₁⁰ claims like “Peano Arithmetic is consistent” while resisting realism about higher levels of the hierarchy. If I am wrong about that, then in the above argument and conclusions “truth-value” should be replaced by “Σ₁⁰-truth-value”.)

Friday, May 9, 2025

Possible futures

Given a time t and a world w, possible or not, say that w is t-possible if and only if there is a possible world wₜ that matches w in all atemporal respects as well as with respect to all that happens up to and including time t. For instance, a world just like ours but where in 2027 a square circle appears is 2026-possible but not 2028-possible.

Here is an interesting and initially plausible metaphysical thesis:

  1. The world w is possible iff it is t-possible for every finite time t.

But (1) seems false. For imagine this:

  2. On the first day of creation God creates you and promises you that on some future day a butterfly will be created ex nihilo. God never makes any other promises. God never makes butterflies. And nothing else relevant happens.

I assume God’s promises are unbreakable. The world described by (2) seems to be t-possible for every finite time t. For the fact that no butterfly has come into existence by time t does not falsify God’s promise that one day a butterfly will be created. But of course the world described by (2) is impossible, since in it an unbreakable promise goes forever unfulfilled.

(It’s interesting that I can’t think of a non-theistic counterexample to (1).)

So what? Well, here is one application. Amy Seymour in a nice paper responding to an argument of mine writes about the following proposition about a situation where there are infinitely many coin tosses in heaven, one per day:

  3. After every heads result, there is another heads result.

She says: “The open futurist can affirm that this propositional content has a nearly certain general probability because almost every possible future is one in which this occurs.” But in doing so, Seymour is helping herself to the idea of a “possible future”, and that is a problematic idea for an open futurist. Intuitively:

  4. A possible future is one such that it is possible that it is true that it obtains.

But the open futurist cannot say that, since in the case of a contingent future, there can be no truth about its obtaining. The next attempt at accounting for a possible future may be to say:

  5. A future is possible provided it will be true that it is possible that it obtains.

But that doesn’t work, either, since any future with infinitely many coin tosses (spaced out one per day) is such that at any time in the future, it is still not true that it is possible that it obtains, since its obtaining still depends on the then-still-future coin tosses. The last option I can think of is:

  6. A future is possible provided that for every future time t it is t-possible.

But that fails for exactly the same reason that the t-possibility of worlds story fails.

Here is one way out: deny classical theism, say that God is in time, and insist that God has to act at t in order to create something ex nihilo at t. But God, being perfect, can’t make a promise unless he has a way of ensuring that the promise will come true. And how can God make sure that he will one day create the butterfly? After all, on any future day, God is free not to create it then. Now, if God promised to create a butterfly by some specific date, then God could be sure that he would follow through, since if he hadn’t done so prior to the specified date, he would be morally obligated to do so on that day, and being perfect he would do so. But the promise in (2) names no date, so there is no day on which creating the butterfly becomes obligatory. Since God can’t ensure that the open-ended promise will come true, he can’t make it, and so the world described by (2) is not even t-possible for its earliest times. (Couldn’t God resolve to create the butterfly on some specific day? On non-classical theism, maybe yes, but the act of resolving violates the clause “nothing else relevant happens” in (2).)

This way out doesn’t work for classical theism, where God is timeless and simple. For given timelessness, God can timelessly issue the promise and “simultaneously” timelessly make a butterfly appear on (say) day 18, without God being intrinsically any different for it. So I think the classical theist has reason to deny (1), and hence has no account of “possible futures” that is compatible with open futurism, and thus probably has to deny open futurism. Which is unsurprising—most classical theists do deny open futurism.

Monday, May 5, 2025

Unrestricted quantification and Tarskian truth

It is well-known—a feature and not a bug—that Tarski’s definition of truth needs to be given in a metalanguage rather than the object language. Here I want to note an aspect of this that I haven’t seen before.

Let’s start by considering how Tarski’s definition of truth would work for set theory.

We can define satisfaction as a relation between finite gappy sequences of objects (i.e., sets) and formulas where the variables are x₁, x₂, …. We do this by induction on formulas.

How does this work? Following the usual way to formally create an inductive definition, we will do something like this:

  1. A satisfaction-like relation is a relation between finite sequences of sets and formulas such that:

    1. the relation gets right the base cases, namely, a sequence s satisfies xₙ ∈ xₘ if and only if the nth entry of s is a member of the mth entry of s, and satisfies xₙ = xₘ if and only if the nth entry of s is identical to the mth entry

    2. the relation gets right the inductive cases (e.g., s satisfies ∀xₙϕ if and only if every sequence s′ that includes an nth place and agrees with s on all the places other than the nth place satisfies ϕ, etc.)

  2. A sequence s satisfies a formula ϕ provided that every satisfaction-like relation holds between s and ϕ.
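To make the inductive clauses vivid, here is a minimal sketch in Python of Tarski-style satisfaction. It is not the construction above: it dodges the very problem at issue by evaluating over a small finite domain of hereditarily finite sets rather than over all sets, and the tuple encoding of formulas and the dict of variable assignments (standing in for the gappy sequences) are my own choices.

    # Formulas as nested tuples (a hypothetical encoding):
    #   ("in", n, m)   for x_n ∈ x_m
    #   ("eq", n, m)   for x_n = x_m
    #   ("not", phi), ("and", phi, psi), ("all", n, phi)
    def satisfies(s, phi, domain):
        kind = phi[0]
        if kind == "in":
            return s[phi[1]] in s[phi[2]]
        if kind == "eq":
            return s[phi[1]] == s[phi[2]]
        if kind == "not":
            return not satisfies(s, phi[1], domain)
        if kind == "and":
            return satisfies(s, phi[1], domain) and satisfies(s, phi[2], domain)
        if kind == "all":
            # vary the nth entry of the assignment over the whole domain
            return all(satisfies({**s, phi[1]: d}, phi[2], domain) for d in domain)
        raise ValueError(kind)

    # Tiny domain of hereditarily finite sets: ∅, {∅}, {{∅}}
    empty = frozenset()
    domain = [empty, frozenset([empty]), frozenset([frozenset([empty])])]

    # ∀x1 ¬(x1 ∈ x1): nothing in this domain is a member of itself
    print(satisfies({1: empty}, ("all", 1, ("not", ("in", 1, 1))), domain))  # True

The quantifier clause is where the trouble below arises: in the set-theoretic case the “domain” ranged over is all sets, so the satisfaction relation cannot itself be a set.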

The problem is that in (2) we quantify over satisfaction-like relations. A satisfaction-like relation is not a set in ZF, since any satisfaction-like relation includes ((a), ϕ₌) for every set a, where (a) is the sequence whose only entry is a at the first location and ϕ₌ is x₁ = x₁. Thus, a satisfaction-like relation needs to be a proper class, and we are quantifying over these, which suggests ontological commitment to these proper classes. But ZF set theory does not have proper classes. It only has virtual classes, where we identify a class with the formula defining it. And if we do that, then (2) comes down to:

  3. A sequence s satisfies ϕ if for every satisfaction-like formula F the sentence F(s,ϕ) is true.

And that presupposes the concept of truth. (Besides which, I don’t know if we can define a satisfaction-like formula.) So that’s a non-starter. We need genuine and not merely virtual classes to give a Tarski-style definition of truth for set theory. In other words, it looks like the meta-language in which we give the Tarski-style definition of truth for set theory not only needs a vocabulary that goes beyond the object-language’s vocabulary, but it needs a domain of quantification that goes beyond the object-language’s domain.

Now, suppose that we try to give such a Tarskian definition of truth for a language with unrestricted quantification, namely quantification over literally everything. This is very problematic. For now the satisfaction-like relation includes the pair ((a), ϕ₌) for literally every object a. This relation, then, can neither be a set, nor a class, nor a proper superclass, nor a supersuperclass, etc.

I wonder if there is a way of getting around this difficulty by having some kind of a primitive “inductive definition” operator instead of quantifying over satisfaction-like relations.

Another option would be to be a realist about sets but a non-realist about classes, and have some non-realist story about quantification over classes.

I bet people have written on this stuff, as it’s a well-explored area. Anybody here know?

Friday, May 2, 2025

Immortality of the soul and the soul's proper operation

This is an attempt to make an argument for the natural immortality of the soul from the premise that the soul has a proper operation that is independent of the body. The argument is going to be rather odd, because it depends on my rather eccentric four-dimensionalist version of Aristotelian metaphysics.

Start with the thought of how substances typically grow in space. They do this by causing themselves to have accidents in new locations, and they come to exist where these new accidents are. Thus, if I eat and my stomach becomes distended, I now have an accident of stomachness in a location where previously I didn’t, and normally I come to be partly located where my accidents are.

It is plausible (at least to a four-dimensionalist) that spatiotemporal substances grow in time like they grow in space. Thus, they produce accidents in a new temporal location, a future one, and typically come to be located where the accidents are—maybe they come to be there by being active in and through the accidents. (There are exceptions: in transubstantiation, the bread and wine don’t follow their accidents. But I am focusing on what naturally happens, not on miracles.)

Suppose now that the soul has a proper operation that is independent of the body. Given the fact that my intellectual function is temporal in nature, it is plausible that in this proper operation my soul is producing a future accident of mine—say, a future accident of grasping some abstract fact—and does so regardless of how sorry and near-to-death a state my body is in. But a substance normally stretches both spatially and temporally to become partly located where its accidents are. So by producing a future accident of mine the soul normally ensures that I will be there in that future to be active in and through that accident. Thus the soul, in exercising that future-directed proper activity, makes me exist in the future.

Now that I’ve written this down, I see a gap. The fact that the soul has a proper operation independent of the body does not imply that the soul always engages in that operation. If it does not always engage in that operation, then there is the danger that if my body should perish at a time when the operation is not engaged in, the soul would fail to extend my existence futureward, and I would perish entirely.

On this version of the proper function argument, we thus need a proper operation that the soul normally or naturally always engages in. We might worry, however, that the intellectual operations all cease when we are in dreamless sleep. However, we might suppose that the soul by its nature always carries forward in time some aspect of the understandings or abstractions that it has gained, and this carrying forward in time is indeed a proper operation that occurs even in dreamless sleep, since we do not lose our intellectual gains when we are asleep. (We should distinguish this carrying forward of an aspect of the intellectual gains from the aspects of memory that are mediated by the brain. The need to do this is a weakness of the argument.)

The above depends on my idiosyncratic picture of persistence over time: substances cause their future existence. Divine sustenance is divine cooperation with this causation. The argument has holes. But I feel I may be on to something.

The argument does not establish that we necessarily are immortal. We are only naturally immortal, in that normally we do not perish. It is possible, as far as the argument goes, that the proper operation should fail to succeed in extending us into the future, if only because God might choose to stop cooperating in the way that constitutes sustenance (but I trust he won’t).