Thursday, September 11, 2025

Why do we like being confident?

We like being more confident. We enjoy having credences closer to 0 or 1. Even if the proposition we are confident in is one whose truth is a bad thing, the confidence itself, abstracted from the badness of the state of affairs reported by the proposition, is something we enjoy.

Here is a potential justification of this attitude in many cases. We can think of the epistemic utility of one’s credence r in a proposition p as measured by an accuracy scoring rule given by two functions T(r) and F(r), where T(r) gives the value of having credence r in p when p is actually true and F(r) gives the value when p is actually false. Most people thinking about scoring rules think they should satisfy the technical condition of being strictly proper. But strict propriety implies that the function V(r) = rT(r) + (1−r)F(r) is strictly convex. Now suppose the scoring rule is also symmetric, so that T(r) = F(1−r). Then V(r) is a strictly convex function that is symmetric about r = 1/2. Such a function has its minimum at r = 1/2, and is strictly decreasing on [0,1/2] and strictly increasing on [1/2,1]. But the function V(r) measures your expectation of your epistemic utility. How happy you are about your credence, perhaps, corresponds to your expectation of your epistemic utility. So you are most unhappy at credence 1/2, and you get happier the closer you are to 0 or 1.
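
To make this concrete, take the Brier rule, with T(r) = 1 − (1−r)² and F(r) = 1 − r² (just one strictly proper, symmetric rule among many). Then V(r) = r(1 − (1−r)²) + (1−r)(1 − r²) = 1 − r(1−r), which is indeed strictly convex and symmetric about 1/2, with its minimum of 3/4 at r = 1/2 and its maximum of 1 at r = 0 and r = 1.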

OK, it’s surely not that?!

Monday, September 8, 2025

Observations and risk of confirmation/disconfirmation

It seems that a rational agent cannot guarantee that their credence in a hypothesis H will go up by choosing what observation to perform. For if no matter what I observe, my credence in H goes up given my observation, then my credence should already have gone up prior to the observation—I should boost my credence from the armchair.

But this reasoning fails in general. For in performing the observation, I not only learn which of the possible observable results is in place, but I also learn that I have performed the observation. In cases where the truth of H is correlated with whether I actually perform the observation, this can have a predictable direction of effect on my credence in H.

Suppose that the hypothesis H is the conjunction that I am going to look in the closet and there is life on Mars. By looking to check if there is a mouse in my closet, I ensure that the first conjunct of H is true, and hence I increase my credence in H—no matter what I find out about mice.

This is a very trivial fact. But it does mean that we need to qualify the statement that any observation that can confirm a hypothesis can also disconfirm it. We need to specify that the confirmation and disconfirmation happen after one has already updated on the fact that one has performed the observation.

Epistemic utilities and decision theories

Warning: I worry there may be something wrong in the reasoning below.

Causal Decision Theory (CDT) and Evidential Decision Theory (EDT) tend to disagree when the payoff of an option statistically depends on your propensity to go for that option. The most famous example of this phenomenon is Newcomb’s Problem (where money is literally put into a box or not depending on what your propensities are), and there is a large literature of other clever and mind-twisting examples. From the literature, one might get a feeling that these cases are all somehow weird, and normally there is no such dependence.

But here is a family of cases that happens literally almost all the time to us. Pretty much whenever we act we gain information relevant to facts about ourselves, and specifically to facts about our propensities to act. For instance, when you choose chocolate over vanilla ice cream you raise your credence for the hypothesis that you have a greater propensity to choose chocolate ice cream than to choose vanilla ice cream. But truth about oneself is valuable and falsehood about oneself is disvaluable. If in fact you have a greater propensity to choose chocolate ice cream, then by eating chocolate ice cream you gain credence in a truth, which is a good thing. If in fact your propensity for vanilla ice cream is at least as great as for chocolate ice cream, then by eating chocolate ice cream, you gain credence in a falsehood. The payoffs of your decision as to flavor of ice cream thus statistically depend on what your propensities actually are, and so this is exactly the kind of case where we would expect CDT and EDT to disagree.

Let’s be more precise. You have a choice between eating chocolate ice cream (C), eating vanilla ice cream (V) or not eating ice cream at all (N). Let H be the hypothesis that you have a greater propensity for eating chocolate ice cream than for eating vanilla ice cream. Then if you choose C, you will gain evidence for H. If you choose V, you will gain evidence for not-H. And if you choose N, you will (plausibly) gain no evidence for or against H. Your epistemic utility with respect to H is, let us suppose, measured by a single-proposition accuracy scoring rule, which we can think of as a pair of functions TH and FH, where TH(p) is the value of having credence p in H if in fact H is true and FH(p) is the value of having credence p in H if in fact H is false.

The expected evidential utilities of your three options are:

  • Ee(C) = P(H|C)TH(P(H|C)) + (1−P(H|C))FH(P(H|C))

  • Ee(V) = P(H|V)TH(P(H|V)) + (1−P(H|V))FH(P(H|V))

  • Ee(N) = P(H|N)TH(P(H|N)) + (1−P(H|N))FH(P(H|N)) = P(H)TH(P(H)) + (1−P(H))FH(P(H)).

The expected causal utilities are:

  • Ec(C) = P(H)TH(P(H|C)) + (1−P(H))FH(P(H|C))

  • Ec(V) = P(H)TH(P(H|V)) + (1−P(H))FH(P(H|V))

  • Ec(N) = P(H)TH(P(H|N)) + (1−P(H))FH(P(H|N)) = P(H)TH(P(H)) + (1−P(H))FH(P(H)).

We can make some quick observations in the case where the scoring rule is strictly proper, given that P(H|V) < P(H) < P(H|C):

  1. Ec(C) < Ec(N)

  2. Ec(V) < Ec(N)

  3. At least one of Ee(C) > Ee(N) and Ee(V) > Ee(N) is true.

Observations 1 and 2 follow immediately from strict propriety and the formulas for Ec. Observation 3 follows from the fact that the expected accuracy score after Bayesian update on evidence is better (in non-trivial cases where the scoring rule is strictly proper) than before update, and the expected accuracy score after update on what you’ve chosen is:

  • P(C)Ee(C) + P(V)Ee(V) + P(N)Ee(N)

while the expected accuracy score before update is equal to Ee(N). Since P(C) + P(V) + P(N) = 1, it follows from the superiority of the post-update expectation that at least one of Ee(C) and Ee(V) must be bigger than Ee(N).
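
To make observations 1-3 concrete, here is a quick numeric check using the Brier rule T(r) = 1 − (1−r)² and F(r) = 1 − r², with some made-up credences satisfying P(H|V) < P(H) < P(H|C) (any strictly proper rule and any such credences would illustrate the same point):

```python
def T(r): return 1 - (1 - r) ** 2   # score of credence r in H when H is true
def F(r): return 1 - r ** 2         # score of credence r in H when H is false

P = {"C": 0.4, "V": 0.4, "N": 0.2}       # my credences about which option I'll take
PH = {"C": 0.8, "V": 0.3, "N": 0.55}     # P(H | option)
PH_prior = sum(P[o] * PH[o] for o in P)  # P(H) = 0.55, by total probability

def Ee(o):
    # evidential expectation: H's truth is weighted by P(H | o)
    return PH[o] * T(PH[o]) + (1 - PH[o]) * F(PH[o])

def Ec(o):
    # causal expectation: H's truth is weighted by the prior P(H)
    return PH_prior * T(PH[o]) + (1 - PH_prior) * F(PH[o])

for o in ("C", "V", "N"):
    print(o, round(Ee(o), 4), round(Ec(o), 4))
# Ec(C) = Ec(V) = 0.69 < Ec(N) = 0.7525, so the causal expectations favor N;
# but Ee(C) = 0.84 and Ee(V) = 0.79 both exceed Ee(N) = 0.7525.
```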

The above results seem to be a black eye for CDT, which recommends that if what you care about is your epistemic utility with regard to your propensities regarding chocolate and vanilla ice cream, then you should always avoid eating ice cream!

(What about ratifiability? Some CDTers say that only ratifiable options should count. Is N ratifiable? Given that you’ve learned nothing about H from choosing N, I think N should be ratifiable. But I may be missing something. I find the epistemic utility case confusing.)

It also seems to me (I haven’t checked details) that on EDT there are cases where eating either flavor is good for you epistemically, but there are also cases where only one specific flavor is good for you.

Friday, September 5, 2025

"Swapping memories"

In Shoemaker’s Lockean memory theory of personal identity, in the absence of fission and fusion personal identity is secured by a chain of first-personal episodic quasimemories. All memories are quasimemories, but in defining a quasimemory the condition that the remembered episode happened to the same person is dropped to avoid circularity. It is important that quasimemories must be transmitted causally by the same kind of mechanism by which memories are transmitted. If I acquire vivid apparent memories of events in Napoleon’s life by reading his diaries, these apparent memories are neither memories nor quasimemories, because diaries are not the right kind of mechanism for memory transmission, and so Shoemaker can avoid the absurd conclusion that we can resurrect Napoleon by means of his diaries. If you wrote down an event in a diary, and then forgot the event, and then learned of the event from the diary, you should not automatically say “I now remember” (of course, the diary might have jogged your memory—but that’s a different phenomenon from your learning of the event from the diary).

It seems to me that discussions of memory theory after Shoemaker have often lost sight of this point, by engaging in science-fictional examples where memories are swapped between brains without much discussion of whether moving a memory from one brain to another is the right kind of mechanism for memory transmission. Indeed, it is not clear to me that there is a principled difference between reading Napoleon’s memory off from a vivid description in his diary and scanning it from his brain. With our current brain scanning technology, the diary method is more accurate. With future brain scanning technology, the diary method may be less accurate. But the differences here seem to be ones of degree rather than principle.

If I am right, then either the memory theorist should allow the possibility of resurrecting someone by inducing apparent memories in a blank brain that match vivid descriptions in their diary (assuming for the sake of argument that there is no afterlife otherwise) or should deny that brain-scan style “memory swapping” is really a quasimemory swap and leads to a body-swap between the persons. (A memory swap that physically moves chunks of brain matter is a different matter—the memories continue to be maintained and transmitted using the usual neural processes.)

Thursday, September 4, 2025

An instability in Newcomb one-boxing

Consider Newcomb’s Paradox, and assume the predictor has a high accuracy but is nonetheless fallible. Suppose you have the character of a one-boxer and you know it. Then you also know that the predictor has predicted your choosing one box and hence you know that there is money in both boxes. It is now quite obvious that you should go for two boxes! Of course, like the predictor, you predict that you won’t do it. But there is nothing unusual about a situation where you predict you won’t do the rational thing: weakness of the will is a sadly common phenomenon. Similarly, if you have the character of a two-boxer and you know it, the rational thing to do is to go for two boxes. For in this case you know the predictor put money only in the clear box, and it would be stupid to just go for the opaque box and get nothing.

None of what I said above should be controversial. If you know what the predictor did, you should take both boxes. It’s like Drescher’s Transparent Newcomb Problem where the boxes are clear and it seems obvious you should take both. (That said, some do endorse one-boxing in Transparent Newcomb!) Though you should be sad that you are the sort of person who takes both.

This means that principles that lead to one-boxing suffer from an interesting instability: if you find out you are firmly committed to acting in accordance with these principles, it is irrational for you to act in accordance with them. Not so for the principles that lead to two-boxing. Even when you find out you are firmly committed to them, it’s rational to act in accordance with them.

This instability is a kind of flip of the usual observation that if one expects to be faced with Newcomb situations, and one has two-boxing principles, then it becomes rational to regret having those two-boxing principles. That, too, is an odd kind of inconsistency. But this inconsistency does not seem particularly telling. Take any correct rational principle R. There are situations where it becomes rational to regret having R, e.g., if a madman is going around torturing all the people who have R. (This is similar to the example that Xenophon attributes to Socrates that being wise can harm you because it can lead to your being kidnapped by a tyrant to serve as his advisor.)

Wednesday, September 3, 2025

Virtue and value

Suppose you have a choice between a course of action that greatly increases your level of physical courage and a course of action that mildly increases your level of loyalty to friends. But there is a catch: you have moral certainty that in the rest of your life you won’t have any occasion to exercise physical courage but you will have occasions to exercise loyalty to friends.

It seems to be a poor use of limited resources to gain heroic physical courage instead of improving your loyalty a bit when you won’t exercise the heroic physical courage.

If this is right, then the exercise of virtue counts for a lot more than the mere possession of it, as Aristotle already noted with his lifelong coma argument.

But now modify the case. You have a choice between a course of action that greatly increases your level of physical courage and feeding one hungry person for a day. Suppose that you don’t have the virtue of generosity, and that feeding the hungry person won’t help you gain it, because you have a brain defect that prevents you from gaining the virtue of generosity, though it allows you to act generously. And as before suppose you will never have an occasion to exercise physical courage. It still seems clear that you should feed the hungry person. Thus not only does the exercise of virtue count for a lot more than mere possession of virtue, acting in accordance with virtue, even in the absence of the virtue, counts for more than mere possession of virtue.

Next, consider a third case. You have a choice between two actions, neither of which will affect your level of virtue, because shortly after the actions your mind will be wiped. Action A has an 85% chance of saving a life, and if you perform action A, it will certainly be an exercise of generosity. Action B has a 90% chance of saving a life, and the action will be done in accordance with physical courage but will not be an exercise of virtue. Which should you do? It seems that you should do B. Thus, that an action is an exercise of a virtue does not seem to count for a lot in deliberation.

Nuclear deterrence, part II: False threats

In my previous post, I considered the argument against nuclear deterrence that says it’s wrong to gain a disposition to do something wrong, and a disposition to engage in nuclear retaliation is a disposition to do something wrong. I concluded that the argument is weaker than it seems.

Here I want to think about another argument against nuclear deterrence under somewhat different assumptions. In the previous post, the assumption was that the leader is disposed to retaliate, but expects not to have to (because the deterrence is expected to work). But what about a case where the leader is not disposed to retaliate, but credibly threatens retaliation, while planning not to carry through the threat should the enemy attack?

It would be wrong, I take it, for the leader to promise to retaliate in such a case—that would be making a false promise. But threatening is not promising. Suppose that a clearly unarmed thief has grabbed your phone and is about to run. (Their bathing suit makes it clear they are unarmed.) You pick up a realistic fake pistol, point it at them, and yell: “Drop the phone!” This does not seem clearly morally wrong. And it doesn’t seem to necessarily become morally wrong (absent positive law against it) when the pistol is real as long as you have no intention or risk of firing (it is, of course, morally wrong to use lethal force merely to recover your property—though it apparently can be legal in Texas). The threat is a deception but not a lie. For, first, note that you’re not even trying to get the thief to believe you will shoot them—just to scare them (fear requires a lot less than belief). Second, if the thief keeps on running and you don’t fire, the thief would not be right to feel betrayed by your words.

So, perhaps, it is permissible to threaten to do something that you don’t intend to do.

Still, there is a problem. For it seems that in threatening to do something wrong, you are intentionally gaining a bad reputation, by making it appear like you are a wicked person who would shoot an unarmed thief or a wicked leader who would retaliate with an all-out nuclear strike. And maybe you have a duty not to intentionally gain a bad reputation.

Maybe you do have such a duty. But it is not clear to me that the leader who threatens nuclear retaliation or the person who pulls the fake or real pistol on the unarmed thief is intentionally gaining a bad reputation. For the action to work, it just has to create sufficient fear that one will carry out the threat, and that fear does not require one to think that the threatener would carry out the threat—a moderate epistemic probability might suffice.

Nuclear deterrence, part I: Dispositions to do something wrong

I take it for granted that all-out nuclear retaliation is morally wrong. Is it wrong (for a leader, say) to gain a disposition to engage in all-out nuclear retaliation conditionally on the enemy performing a first strike if it is morally certain that having that disposition will lead the enemy not to perform a first strike, and hence the disposition will not be actualized?

I used to think the answer was “Yes”, because we shouldn’t come to be disposed to do something wrong in some circumstance.

But I now think it’s a bit more complicated. Suppose you are stuck in a maze full of lethal dangers, with all sorts of things that require split-second decisions. You have headphones that connect you to someone you are morally certain is a benevolent expert. If you blindly follow the expert’s directions—“Now, quickly, fire your gun to the left, and then grab the rope and swing over the precipice”—you will survive. But if you think about the directions, chances are you won’t move fast enough. You can instill in yourself a disposition to blindly do whatever the expert says, and then escape. And this seems the right thing to do, even a duty if you owe it to your family to escape.

Notice, however, that such a disposition is a disposition to do something wrong in some circumstance. Once you are in blind-following mode, if the expert says “Shoot the innocent person to the right”, you will do so. But you are morally certain the expert is benevolent and hence won’t tell you anything like that. Thus it can be morally permissible to gain a disposition which disposes you to do things that are wrong under circumstances that you are morally certain will not come up.

Perhaps, though, there seems to be a difference between this and my nuclear deterrence case. In the nuclear deterrence case, the leader specifically acquires a disposition to do something that is wrong, namely to all-out retaliate, and this disposition is always wrong to actualize. In the maze case, you gain a general disposition to obey the expert, and normally that disposition is not wrong to actualize.

But this overstates what is true of the nuclear deterrence case. There are some conditions under which all-out retaliation is permissible, such as when 99.9% of one’s nuclear arsenal has been destroyed and the remainder is only aimed at legitimate military targets, or maybe when all the enemy civilians are in highly effective nuclear shelters and retaliation is the only way to prevent a follow-up strike from the enemy. Moreover, it may understate what is permissible in the expert case. You may need to instill in yourself the specific willingness to do what at the moment seems wrong, because sometimes the expert may tell you things that will seem wrong—e.g., to swing your sword at what looks like a small child (but in fact is a killer robot). I am not completely sure it is permissible to have an attitude of trust in the expert that goes that far, but I could be convinced of it.

I was assuming, contrary to fact in typical cases, that there is moral certainty that the nuclear deterrence will be effective and there will be no enemy first strike. Absent that assumption, the question is rather less clear. Suppose there is a 10% chance the expert is not so benevolent. Is it permissible to instill a disposition to blindly follow their orders? I am not sure.

Tuesday, September 2, 2025

Memory theories of personal identity and faster-than-light dependence

Consider this sequence of events:

  • Tuesday: Alice’s memory is scanned and saved to a hard drive.
  • Wednesday: Alice’s head is completely crushed in a car crash.
  • Thursday: Alice’s scanned memories are put into a fresh brain.

It seems that on a memory theory of personal identity, we would say that the fresh brain on Thursday is Alice.

But now suppose that on Thursday, Alice’s scanned memories are put into two fresh brains.

If one of the operations is in the absolute past—the backwards light-cone—of the other, it is easy to say that what happens is that Alice goes to the brain that gets the memories first.

Fine. But what if which brain got the memories first depends on the reference frame, i.e., the two operations are space-like separated? It’s plausible that this is a case of symmetric fission, and in symmetric fission Alice doesn’t survive.

But now here is an odd thing. Suppose the two operations are simultaneous in some frame, but one happens on earth and the other on a spaceship by Alpha Centauri. Then whether Alice comes into existence in a lab on earth depends on what happens in a spaceship that’s four light-years away, and it depends on it in a faster-than-light way. That seems problematic.

Killing coiled and straight snakes

Suppose a woman crushes the head of a very long serpent. If the snake all dies instantly when its head is crushed, then in some reference frame the tail of the snake dies before the woman crushes the head, which seems wrong. So it seems we should not say the snake dies instantly.

I am not talking about the fact that the tail can still wiggle a significant amount of time after the head is crushed, or so I assume. That’s not life. What makes a snake be alive is having a snake substantial form. Death is the departure of the form. If the tail of the headless snake wiggles, that’s just a chunk of matter wiggling without a snake form.

What’s going on? Presumably it’s that metaphysical death—the separation of form from body—propagates from the crushed head to the rest of the snake, and it propagates at most at the speed of light. After all, the separation is a genuine causal process, and we are supposed to think that genuine causal processes propagate at the speed of light or slower.

So we get a constraint: a part of the snake cannot be dead before light emitted from the head-crushing event could reach the part. But it is also plausible that as soon as the light can reach the part, the part is dead. For a headless snake is dead, and as soon as the light from the head-crushing event can reach a part, the head-crushing event is in the absolute past of the part, and so the part is a part of a headless snake in every reference frame. Thus the part is dead.

So death propagates through the snake from the head-crushing at exactly the speed of light, it seems. Moreover, it does this not along the snake but along the shortest path—that’s what the argument of the previous paragraph suggests. That means that a snake that’s tightly coiled into a ball dies faster than one that is stretched out when the head is crushed. Moreover, if you have a snake that is rolled into the shape of the letter C, and the head is crushed, the tail dies before the middle of the snake dies. That’s counterintuitive, but we shouldn’t expect reality to always be intuitive.

Friday, August 29, 2025

Proportionate causality

Let’s assume for the sake of argument:

Aquinas’ Principle of Proportionate Causality: Anything that causes something to have a perfection F must either have F or some more perfect perfection G.

And let’s think about what follows.

The Compatibility Thesis: If F is a perfection, then F is compatible with every perfection.

Argument: If F is incompatible with a perfection G, then having F rules out having perfection G. And that’s limitive rather than perfect. Perhaps the case where G = F needs to be argued separately. But we can do that. If F is incompatible with F, then F rules out all other perfections as well, and as long as there is more than one perfection (as is plausible) that violates the first part of the argument.

The Entailment Thesis: If F and G are perfections, and G is more perfect than F, then G entails F.

Argument: If F and G are perfections, and it is both possible to have G without having F and to have G while having F, it is better to have both F and G than to have just G. But if it is better to have both F and G than to have just G, then F contributes something good that G does not, and hence we cannot say that G is more perfect than F—rather, in one respect F is more perfect and in another G is more perfect.

From the Entailment Thesis and Aquinas’ Principle of Proportionate Causality, we get:

The Strong Principle of Proportionate Causality: Anything that causes something to have a perfection F must have F.

Interesting.

More on velocity

From time to time I’ve been playing with the question whether velocity just is rate of change of position over time in a philosophical elaboration of classical mechanics.

Here’s a thought. It seems that how much kinetic energy an object x has at time t (relative to a frame F, if we like) is a feature of the object at time t. But if velocity is rate of change of position over time, and velocity (together with mass) grounds kinetic energy as per E = m|v|²/2, then kinetic energy at t is a feature of how the object is at t and at nearby times.

This argument suggests that we should take velocity as a primitive property of an object, and then take it that by a law of nature velocity causes a rate of change of position: dx/dt = v.

Alternately, though, we might say that momentum and mass ground kinetic energy as per E = |p|²/2m, and momentum is not grounded in velocity. Instead, on classical mechanics, perhaps we have an additional law of nature according to which momentum causes a rate of change of position over time, which rate of change is velocity: v = dx/dt = p/m.
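
(Either way, the quantity comes out the same: with p = mv, we have |p|²/2m = m²|v|²/(2m) = m|v|²/2. The disagreement is only over which of these quantities does the grounding.)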

But in any case, it seems we probably shouldn’t both say that momentum is grounded in velocity and that velocity is nothing but rate of change of position over time.

Experiencing something as happening to you

In some video games, it feels like I am doing the in-game character’s actions and in others it feels like I am playing a character that does the actions. The distinction does not map onto the distinction between first-person-view and third-person-view. In a first-person view game, even a virtual reality one (I’ve been playing Asgard Wrath 2 on my Quest 2 headset), it can still feel like a character is doing the action, even if visually I see things from the character’s point of view. On the other hand, one can have a cartoonish third-person-view game where it feels like I am doing the character’s actions—for instance, Wii Sports tennis. (And, of course, there are games which have no in-game character responsible for the actions, such as chess or various puzzle games like Vexed. But my focus is on games where there is something like an in-game character.)

For those who don’t play video games, note that one can watch a first-person-view movie like Lady in the Lake without significantly identifying with the character whose point of view is presented by the camera. And sometimes there is a similar distinction in dreams, between events happening to one and events happening to an in-dream character from whose point of view one looks at things. (And, conversely, in real life some people suffer from depersonalization, where it feels like the events of life are happening to a different person.)

Is there anything philosophically interesting that we can say about the felt distinction between seeing something from someone else’s point of view—even in a highly immersive and first-person way as in virtual reality—and seeing it as happening to oneself? I am not sure. I find myself feeling like things are happening to me more in games with a significant component of physical exertion (Wii Sports tennis, VR Thrill of the Fight boxing) and where the player character doesn’t have much character to them, so it is easier to embody them, and less so in games with a significant narrative where the player character has character of their own—even when it is pretty compelling, as in Deus Ex. Maybe both the physical aspect and the character aspect are bound up in a single feature—control. In games with a significant physical component, there is more physical control. And in games where there is a well-developed player character, presumably to a large extent this is because the character’s character is the character’s own and only slightly under one’s control (e.g., maybe one can control fairly coarse-grained features, roughly corresponding to alignment in D&D).

If this is right, then a goodly chunk of the “it’s happening to me” feeling comes not from the quality of the sensory inputs—one can still have that feeling when the inputs are less realistic and lack it when they are more realistic—but from control. This is not very surprising. But if it is true, it might have some philosophical implications outside of games and fiction. It might suggest that self-consciousness is more closely tied to agency than is immediately obvious—that self-consciousness is not just a matter of a sequence of qualia. (Though, I suppose, someone could suggest that the feeling of self-consciousness is just yet another quale, albeit one that typically causally depends on agency.)

Wednesday, August 27, 2025

More decision theory stuff

Suppose there are two opaque boxes, A and B, of which I can choose one. A nearly perfect predictor of my actions put $100 in the box that they thought I would choose. Suppose I find myself with evidence that it’s 75% likely that I will choose box A (maybe in 75% of cases like this, people like me choose A). I then reason: “So, probably, the money is in box A”, and I take box A.

This reasoning is supported by causal decision theory. There are two causal hypotheses: that there is money in box A and that there is money in box B. Evidence that it’s 75% likely that I will choose box A provides me with evidence that it’s close to 75% likely that the predictor put the money in box A. The causal expected value of my choosing box A is thus around $75 and the causal expected value of my choosing box B is around $25.

On evidential decision theory, it’s a near toss-up what to do: the expected news value of my choosing A is close to $100 and so is that of my choosing B.

Thus, on causal decision theory, if I have to pay a $10 fee for choosing box A, while choosing box B is free, I should still go for box A. But on evidential decision theory, since it’s nearly certain that I’ll get a prize no matter what I do, it’s pointless to pay any fee. And that seems to be the right answer to me here. But evidential decision theory gives the clearly wrong answer in some other cases, such as that infamous counterfactual case where an undetected cancer would make you likely to smoke, with no causation in the other direction, and so on evidential decision theory you refrain from smoking to make sure you didn’t get the cancer.
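
To spell out the arithmetic in the fee variant (rounding the nearly perfect predictor to perfect): on causal decision theory, the value of taking A is roughly 0.75 × $100 − $10 = $65 and the value of taking B is roughly 0.25 × $100 = $25, so A still wins; on evidential decision theory, the news value of taking A is roughly $100 − $10 = $90 and that of taking B is roughly $100, so you keep the $10 and take B.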

In recent posts, I’ve been groping towards an alternative to both theories. The alternative depends on the idea of imagining looking at the options from the standpoint of causal decision theory after updating on the hypothesis that one has made a specific choice. In my current predictor cases, if you were to learn that you chose A, you would think: Very likely the money is in box A, so choosing box A was a good choice, while if you chose B, you would think: Very likely the money is in box B, so choosing box B was a good choice. As a result, it’s tempting to say that both choices are fine—they both ratify themselves, or something like that. But that misses out on the plausible claim that if there is a $10 fee for choosing A, you should choose B. I don’t know how best to get that claim. Evidential decision theory gets it, but evidential decision theory has other problems.

Here’s something gerrymandered that might work for some binary choices. For options X and Y, which may or may not be the same, let eX(Y) be the causal expected value of Y with respect to the credences for the causal hypotheses updated with respect to your having chosen X. Now, say that the differential retrospective causal expectation d(X) of option X equals eX(X) − eX(Y). This measures how much you would think you gained, from the standpoint of causal decision theory, in choosing X rather than Y by the lights of having updated on choosing X. Then you should choose the option with the bigger d(X).

In the case where there is a $10 fee for choosing box A, d(B) is approximately $110 while d(A) is approximately $90, so you should go for box B, as per my intuition. So you end up agreeing with evidential decision theory here.
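
In case it helps, here is the computation spelled out, with “nearly perfect” modeled as 99% accuracy (a made-up figure; anything close to 1 gives the same verdict):

```python
acc = 0.99                   # assumed accuracy: P(money is in X | I choose X)
prize = 100.0
fee = {"A": 10.0, "B": 0.0}  # the $10 fee for choosing A

def e(chosen, evaluated):
    # causal expected value of `evaluated`, with the credences about where the
    # money is updated on the news that `chosen` was picked
    p_money_here = acc if evaluated == chosen else 1 - acc
    return p_money_here * prize - fee[evaluated]

def d(x, y):
    # differential retrospective causal expectation of x over the alternative y
    return e(x, x) - e(x, y)

print(d("A", "B"))  # (0.99*100 - 10) - (0.01*100 - 0)  =  88, about $90
print(d("B", "A"))  # (0.99*100 - 0)  - (0.01*100 - 10) = 108, about $110
```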

You avoid the conclusion you should smoke to make sure you don’t have cancer in the hypothetical case where cancer causes smoking but not conversely, because the differential retrospective causal expectation of smoking is positive while the differential retrospective causal expectation of not smoking is negative, assuming smoking is fun (is it?). So here you agree with causal decision theory.

What about Newcomb’s paradox? If the clear box has a thousand dollars and the opaque box has a million or nothing (depending on whether you are predicted to take just the opaque box or to take both), then the differential retrospective causal expectation of two-boxing is a thousand dollars (when you learn that you two-box, you learn that the opaque box was likely empty) and the differential retrospective causal expectation of one-boxing is minus a thousand dollars.

So the differential retrospective causal expectation theory agrees with causal decision theory in the clear case (cancer-causes-smoking) and in the difficult case (Newcomb), but agrees with evidential decision theory in the $10 fee variant of my two-box scenario, and the last verdict seems plausible.

But (a) it’s gerrymandered and (b) I don’t know how to generalize it to cases with more than two options. I feel lost.

Maybe I should stop worrying about this stuff, because maybe there just is no good general way of making rational decisions in cases where there is probabilistic information available to you about how you will make your choice.

Tuesday, August 26, 2025

Position: Assistant Professor of Bioethics, Tenure Track, Department of Philosophy, Baylor University

We're hiring again. Here's the full ad.

My AI policy

I’ve been wondering what to allow and what to disallow in terms of AI. I decided to treat AI as basically persons and I put this in my Metaphysics syllabus:

Even though (I believe) AI is not a person and its products are not “thoughts”, treat AI much like you would a person in writing your papers. I encourage you to have conversations with AIs about the topics of the class. If you get ideas from these conversations, put in a footnote saying you got the idea from an AI, and specifically cite which AI. If you use the AI’s words, put them in quotation marks. (If your whole paper is in quotation marks, it’s not cheating, but you haven’t done the writing yourself and so it’s like a paper not turned in, a zero.) Just as you can ask a friend to help you understand the reading, you can ask an AI to help you understand the reading, and in both cases you should have a footnote acknowledging the help you got. Just as you can ask a friend, or the Writing Center or Microsoft Word to find mistakes in your grammar and spelling, you can ask an AI to do that, and as long as the contribution of the AI is to fix errors in grammar and spelling, you don’t need to cite. But don’t ask an AI to rewrite your paper for you—now you’re cheating as the wording and/or organization is no longer yours, and one of the things I want you to learn in this class is how to write. Besides all this, last time I checked, current AI isn’t good at producing the kind of sharply focused numbered valid arguments I want you to make in the papers—AI produces things that look like valid arguments, but may not be. And they have a distinctive sound to them, so there is a decent chance of getting caught. When in doubt, put in a footnote at the end what help you got, whether from humans or AI, and if the help might be so much that the paper isn’t really yours, pre-clear it with me.

An immediate regret principle

Here’s a plausible immediate regret principle:

  1. It is irrational to make a decision such that learning that you’ve made this decision immediately makes it rational to regret that you didn’t make a different decision.

The regret principle gives an argument for two-boxing in Newcomb’s Paradox, since if you go for one box, as soon as you have made your decision to do that, you will regret you didn’t make the two-box decision—there is that clear box with money staring at you. But if you go for two boxes, you will have no regrets.

Interestingly, though, one can come up with predictor stories where one has regrets no matter what one chooses. Suppose there are two opaque boxes, A and B, and you can take either box but not both. A predictor put a thousand dollars in the box that they predicted you won’t take. Their prediction need not be very good—all we need for the story is that there is a better than even probability of their having predicted you choosing A conditionally on your choosing A and a better than even probability of their having predicted you choosing B conditionally on your choosing B. But now as soon as you’ve made your decision, and before you open the chosen box, you will think the other box is more likely to have the money, and so your knowledge of your decision will make it rational to regret that decision. Note that while the original Newcomb problem is science-fictional, there is nothing particularly science-fictional about my story. It would not be surprising, for instance, if someone were able to guess with better than even chance of correctness about what their friends would choose.

Is this a counterexample to the immediate regret principle (1), or is this an argument that there are real rational dilemmas, cases where all options are irrational?

I am not sure, but I am inclined to think that it’s a counterexample to the regret principle.

Can we modify the immediate regret principle to save it? Maybe. How about this?

  2. No decision is such that learning that you’ve rationally made this decision immediately makes it rationally required to regret that you didn’t make a different decision.

On this regret principle, regret is compatible with non-irrational decision making but not with (known) rational decision making.

In my box story, it is neither rational nor irrational to choose A, and it is neither rational nor irrational to choose B. Then there is no conflict with (2), since (2) only applies to decisions that are rationally made. And applying (2) to Newcomb’s Paradox no longer yields an argument for two-boxing, but only an argument that it is not rational to one-box. (For if it were rational to one-box, one could rationally decide to one-box, and one would then regret that.)

The “rationally” in (2) can be understood in a weaker way or a stronger way (the stronger way reads it as “out of rational requirement”). On either reading, (2) has some plausibility.

Monday, August 25, 2025

An odd decision theory

Suppose I am choosing between options A and B. Evidential decision theory tells me to calculate the expected utility E(U|A) given the news that I did A and the expected utility E(U|B) given the news that I did B, and go for the bigger of the two. This is well-known to lead to the following absurd result. Suppose there is a gene G that both causes one to die a horrible death one day and makes one very likely to choose A, while absence of the gene makes one very likely to choose B. Then if A and B are different flavors of ice cream, I should always choose B, because E(U|A) ≪ E(U|B), since the horrible death from G trumps any advantage of flavor that A might have over B. This is silly, of course, because one’s choice does not affect whether one has G.

Causal decision theorists proceed as follows. We have a set of “causal hypotheses” about what the relevant parts of the world at the time of the decision are like. For each causal hypothesis H we calculate E(U|HA) and E(U|HB), and then we take the weighted average over our probabilities, and then decide accordingly. In other words, for an option D we have a causal expected utility:

  • Ec(U|D) = ∑H E(U|HD) P(H)

and are to choose A over B provided that Ec(U|A) > Ec(U|B). In the gene case, the “bad news” of the horrible death on G is a constant addition to Ec(U|A) and to Ec(U|B), and so it can be ignored—as is right, since it’s not in our control.

But here is a variant case that worries me. Suppose that you are choosing between flavors A and B of ice cream, and you will only ever get to taste one of them, and only once. You can’t figure out which one will taste better for you (maybe one is oyster ice cream and the other is sea urchin ice cream). However, data shows that not only does G make one likely to choose A and its absence makes one likely to choose B, but everyone who has G derives pleasure from A and displeasure from B and everyone who lacks G has the opposite result, and all the pleasures and displeasures are of the same magnitude.

Now, background information says that you have a 3/4 chance of having G. On causal decision theory, this means that you should choose A, because likely you have G, and those who have G all enjoy A. Evidential decision theory, however, tells you that you should choose B, since if you choose B then likely you don’t have the terrible gene G.

In this case, I feel causal decision theory isn’t quite right. Suppose I choose A. Then after I have made my choice, but before I have consumed the ice cream, I will be glad that I chose A: my choice of A will make me think I have G, and hence that A is tastier. But similarly, if I choose B, then after I have made my choice, and again before consumption, I will be glad that I chose B, since my choice of B will make me think I don’t have G and hence that B was a good choice. Whatever I choose, I will be glad I chose it. This suggests to me that there is nothing wrong with either choice!

Here is the beginning of a third decision theory, then—one that is neither causal nor evidential. An option A is permissible provided that causal decision theory with the causal hypothesis credences conditioned on one’s choosing A permits one to do A. An option A is required provided that no alternative is permissible. (There are cases where no option is permissible. That’s weird, I admit.)

In the initial case, where the pleasure of each flavor does not depend on G, this third decision theory gives the same answer as causal decision theory—it says to go for the tastier flavor. In the second case, however, where the pleasure/displeasure depends on G, it permits one to go for either flavor. In a probabilistic-predictor Newcomb’s Paradox, it says to two-box.
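
Here is a toy rendering of the second case (where the pleasure depends on G), with made-up numbers: a 3/4 prior for G, a taste payoff of ±1, the horrible death counted as −1000, and the gene making the matching flavor 90% likely to be chosen:

```python
P_G = 0.75          # prior probability that I have the gene
DEATH = -1000.0     # assumed disvalue of the horrible death that G causes
LIKE = {"A": (0.9, 0.1), "B": (0.1, 0.9)}  # (P(option | G), P(option | no G))

def taste(option, has_g):
    # G-havers enjoy A and dislike B; G-lackers the reverse
    return 1.0 if (option == "A") == has_g else -1.0

def utility(option, has_g):
    return taste(option, has_g) + (DEATH if has_g else 0.0)

def p_g_given(option):
    # P(G | I chose this option), by Bayes' theorem
    num = LIKE[option][0] * P_G
    return num / (num + LIKE[option][1] * (1 - P_G))

def cdt(option):
    # causal expected utility: causal hypotheses weighted by the prior P(G)
    return P_G * utility(option, True) + (1 - P_G) * utility(option, False)

def edt(option):
    # evidential expected utility: weighted by P(G | option)
    p = p_g_given(option)
    return p * utility(option, True) + (1 - p) * utility(option, False)

def permissible(option, options=("A", "B")):
    # third theory: X is permissible iff CDT, run with credences conditioned
    # on having chosen X, does not prefer some alternative to X
    p = p_g_given(option)
    post = lambda o: p * utility(o, True) + (1 - p) * utility(o, False)
    return all(post(option) >= post(o) for o in options)

print(cdt("A"), cdt("B"))                  # -749.5 vs -750.5: CDT says A
print(edt("A"), edt("B"))                  # about -963 vs -249.5: EDT says B
print(permissible("A"), permissible("B"))  # True, True: both are permissible
```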

Saturday, August 23, 2025

Gaze dualism and omnisubjectivity

I have toyed with a pair of theories.

The first is what I call gaze-dualism. On gaze-dualism, our sensory conscious experiences are constituted by a non-physical object—the soul—“gazing” at certain brain states. When the sensory data changes—say, when a sound goes from middle A to middle C—the subjective experience changes. But this change need not involve an intrinsic change in the soul. The change in experience is grounded in a change in the gazed-at brain state, a brain state that reflects the sensory data, rather than by a change in the gazing soul. (This is perhaps very close to Aquinas’ view of sensory consciousness, except that for Aquinas the gazed-at states are states of sense organs rather than of the brain.)

The second is an application of this to God’s knowledge of contingent reality. God knows contingent reality by gazing at it the way that our soul gazes at the brain states that reflect sensory data. God does not intrinsically change when contingent reality changes—the change is all on the side of the gazed-at contingent reality.

I just realized that this story makes a bit of progress on what Linda Zagzebski calls “omnisubjectivity”—God’s knowledge of all subjective states. My experience of hearing a middle C comes from my gazing at a brain state BC of my auditory center produced by nerve impulses caused by my tympanic membrane vibrating at 256 Hz. My gaze is limited to certain aspects of my auditory center—my gaze tracks whatever features of my auditory center are relevant to the sound, features denoted by BC, but does not track features of my auditory center that are not relevant to the sound (e.g., the temperature of my neurons). God’s gaze is not so limited—God gazes at every aspect of my auditory center. But in doing so, he also gazes at BC. This does not mean that God has the same experience as I do. My experience is partly constituted by my soul’s gaze at BC. God’s experience is partly constituted by God’s gaze at BC. Since my soul is very different from God, it is not surprising that the experiences are different. However, God has full knowledge of the constituents of my experience: myself, my gaze, and BC, and God’s knowledge of these is basically experiential—it is constituted by God’s gazing at me, my gaze, and BC. And God also gazes at their totality. This is, I think, all we need to be able to say that God knows my sensory consciousness states.

My non-sensory experiences may also be constituted by my soul’s gazing at a state of my brain, but they may also be constituted by the soul’s gazing at a state of the soul. And God gazes at the constituents and whole again.

Diversity of inner lives

There is a vast and rather radical diversity in the inner conscious lives of human beings. Start with the differences in dreams: some people know immediately whether they are dreaming and others do not; some are in control of their dreams and others are not; some dream in color and others do not. Now move on to the differences in thought. Some think in pictures, some in words with sounds, some in a combination of words with sounds and written words, and some without any visual or aural imagery. Some people are completely unable to imagine things in pictures, others can do so only in a shadowy and unstable way, and yet others can do so in detail. Even in the case of close friends, we often have no idea about how they differ in these respects, and to many people the diversity in inner conscious lives comes as a surprise, as they assume that almost everyone is like them.

But in their outer behavior, including linguistic behavior, people seem much more homogeneous. They say “I think that tomorrow is a good day for our bike trip” regardless of whether they thought it out in pictures, in sounds, or in some other way. They give arguments as a sequence of logically connected sentences. Their desires, while differing from person to person, are largely comprehensible and not very surprising. People are more homogeneous outside than inside.

This contrast between inner heterogeneity and outward homogeneity is something I realized yesterday while participating in a workshop on Linda Zagzebski’s manuscript on dreams. I am not quite sure what to make of this contrast philosophically, but it seems really interesting. We flatten our inner lives to present them to people in our behavior, but we also don’t feel like much is lost in this flattening. It doesn’t really matter much whether our thoughts come along with sights or sounds. It would not be surprising if there were differences in skill levels that correlated with the characteristics of inner life—it would not be surprising if people who thought more in pictures were better at low-dimensional topology—but these differences are not radical.

Many of us as children have wondered whether other people’s conscious experiences are the same as ours—does red look the same (bracketing colorblindness) and does a middle C sine wave sound the same (bracketing hearing deficiencies)? I have for a while thought it not unlikely that the answer is negative, because I am attracted to the idea that central to how things look to us are the relationships between different experiences, and different people have different sets of experiences. (Compare the visual field reversal experiments, where people who wear visual field reversal glasses initially see things upside-down but then it turns right-side-up, which suggests to me that the directionality of the visual field is constituted by relationships between different experiences rather than being something intrinsic.) I think the vast diversity in conscious but non-sensory inner lives gives us some reason to think that sensory consciousness also differs quite a bit between people—and gets flattened and homogenized into words, much as thoughts are.

Friday, August 8, 2025

Extrinsic well-being and the open future

Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. How well off you were in performing the action depends on whether the action succeeded—which depends on whether E eventuates at t2. But now suppose the future is open. Then in a world with as much indeterminacy as ours, in many cases at t1 it will be contingent whether the event at t2 on which your well-being at t1 depends eventuates. And on open future views, at t1 there will then be no fact of the matter about your well-being. Hence, the future is not open.

Opie: In such cases, your well-being should be located at t2 rather than at t1. If you jump the crevasse, it is only when you land that you have the well-being of success.

Klaus: This does not work as well in cases where you are dead at t2. And yet our well-being does sometimes depend on what happens after we are dead. The action at t1 might be a heroic sacrifice of one’s life to save one’s friends—but whether one is a successful hero or a tragic hero depends on whether the friends will be saved, which may depend on what happens after one is already dead.

Opie: Thanks! You just gave me an argument for an afterlife. In cases like this, you are obviously better off if you manage to save your friends, but you aren’t better off in this life, so there must be life after death.

Klaus: But we also have the intuition that even if there were no afterlife, it would be better to be the successful hero than the tragic hero, and that posthumous fame is better than posthumous infamy.

Opie: There is an afterlife. You’ve convinced me. And moral intuitions about how things would be if our existence had a radically different shape from the one it in fact has are suspect. And, given that there is an afterlife, a scenario without an afterlife is a scenario where our existence has a radically different shape. Thus the intuition you cite is unreliable.

Klaus: That’s a good response. Let me try a different case. Suppose you perform an onerous action with a goal within this life, but then you change your mind about the goal and work to prevent that goal. This works best if both goals are morally acceptable, and switching goals is acceptable. For instance you initially worked to help the Niners train to win their baseball game against the Logicians, but then your allegiance shifted to the Logicians in a way that isn't morally questionable. And then suppose the Niners won. Your actions in favor of the Niners are successful, and you have well-being. But it is incorrect to locate that well-being at the time of the actual victory, since at that time you are working for the Logicians, not the Niners. So the well-being must be located at the time of your activity, and at that time it depends on future contingents.

Opie: Perhaps I should say that at the time the Niners beat the Logicians, you are both well-off and badly-off, since one of your past goals is successful and the other is unsuccessful. But I agree that this doesn’t quite seem right. After all, if you are loyal to your current employer, you’re bummed out about the Logicians’ loss and you’re bummed out that you weren’t working for them from the beginning. So intuitively you’re just badly off at this time, not both badly and well off. So, I admit, this is a little bit of evidence against open future views.

Consciousness and the open future

Plausibly:

  1. There is a “minimal humanly observable duration” (mhod) such that a human cannot have a conscious state—say, a pain—shorter than an mhod, but can have a conscious state that’s an mhod long.

The “cannot” here is nomic possibility rather than metaphysical possibility.

Let δ denote an mhod. Now, suppose that you feel a pain precisely from t0 to t2. Then t2 ≥ t0 + δ. Now, let t1 = t0 + δ/2. Then you feel a pain at t1. But at t1, you only felt a pain for half an mhod. Thus:

  2. At t1, that you feel pain depends on substantive facts about your mental state at times after t1.

For if your head were suddenly zapped by a giant laser a quarter of an mhod after t1, then you would not have felt a pain at t1, because you would have been in a position to feel pain only from t0 to t0 + (3/4)δ.

But in a universe full of quantum indeterminacy:

  3. These substantive facts are contingent.

After all, your brain could just fail a quarter of an mhod after t1 due to a random quantum event.

But:

  4. Given an open future, at t1 there are no substantive contingent facts about the future.

Thus:

  5. Given an open future, at t1 there is no fact that you are conscious.

Which is absurd!

Tuesday, July 29, 2025

Discrete time and Aristotle's argument for an infinite past

Aristotle had a famous argument that time had no beginning or end. In the case of beginnings, this argument caused immense philosophical suffering in the Middle Ages, since, combined with the idea that time requires change, it implies that the universe was eternal, contrary to the Jewish, Muslim and Christian doctrine that God created the universe a finite amount of time ago.

The argument is a reductio ad absurdum and can be put for instance like this:

  1. Suppose t0 is the beginning of time.

  2. Before t0 there is no time.

  3. It is a contradiction to talk of what happened before the beginning of time.

  4. But if (1) is true, then (2) talks of what is before the beginning of time.

  5. Contradiction!

It’s pretty easy to see what’s wrong with the argument. Claim (2) should be charitably read as:

  • Not (before t0 there is time).

Seen that way, (2) doesn’t talk about what happened before t0, but is just a denial that there was any such thing as time-before-t0.

It just struck me that a similar argument could be used to establish something that Aristotle himself rejects. Aristotle famously believed that time was discrete. But now argue:

  6. Suppose t0 and t1 are two successive instants of time.

  7. After t0 and before t1 there is no time.

  8. It is a contradiction to talk of what happened when there is no time.

  9. But if (6) is true, then (7) talks of what is when there is no time.

  10. Contradiction!

Again, the problem is the same. We should take (7) to deny that there is any such thing as time-after-t0-and-before-t1.

So Aristotle needed to choose between his preference for the discreteness of time and his argument for an infinite past.

What if there is no tomorrow?

There are two parts of Aristotle’s theory that are hard to fit together.

First, we have Aristotle’s view of future contingents, on which

  1. It is neither true nor false that tomorrow there will be a sea battle

but, of course:

  2. It is true that tomorrow there will be a sea battle or no sea battle.

Of course, nothing rides on “tomorrow” in (1) and (2): any future metric interval of times will do. Thus:

  3. It is true that in 86,400,000 milliseconds there will be a sea battle or not.

(Here I adopt the convention that “in x units” denotes the interval of time corresponding to the displayed number of significant digits in x. Thus, “in 86,400,000 ms” means “at a time between 86,399,999.5 (inclusive) and 86,400,000.5 (exclusive) ms from now.”)

Second, we have Aristotle’s view of time, on which time is infinitely divisible but not infinitely divided. Times correspond to what one might call happenings, the beginnings and ends of processes of change. Now which happenings there will be, and when they will fall with respect to metric time (say, 3.74 seconds after some other happening), is presumably something that is, or can be, contingent.

In particular, in a world full of contingency and with slow-moving processes of change, it is contingent whether there will be a time in 86,400,000 ms. But (3) entails that there will be such a time, since if there is no such time, then it is not true that anything will be the case in 86,400,000 ms.

Thus, Aristotle cannot uphold (3) in a world full of contingency and slow processes. Hence, (3) cannot be a matter of temporal logic, and thus neither can (2) be, since logic doesn’t care about the difference between days and milliseconds.

If we want to make the point in our world, we would need units smaller than milliseconds. Maybe Planck times will work.

Objection: Suppose that no moment of time will occur in exactly x1 seconds, because x1 falls between all the endpoints of processes of change. But perhaps we can still say what is happening in x1 seconds. Thus, if there are x0 < x1 < x2 such that x0 seconds from now and x2 seconds from now (imagine all this paragraph being said in one moment!) are both real moments of time, we can say things about what will happen in x1 seconds. If I will be sitting in both x0 and x2 seconds, maybe I can say that I will be sitting in x1 seconds. Similarly, if Themistocles is leading a sea battle in 86,399,999 ms and is leading a sea battle in 86,400,001 ms, then we can say that he is leading a sea battle in 86,400,000 ms, even though there is no moment of time then. And if he won’t lead a sea battle in either 86,399,999 ms or in 86,400,001 ms, neither will he lead one in 86,400,000 ms.

Response: Yes, but (3) is supposed to be true as a matter of logic. And it’s logically possible that Themistocles leads a sea battle in 86,399,999 ms but not in 86,400,001 ms, in which case if there will be no moment in 86,400,000 ms, we cannot meaningfully say if he will be leading a sea battle then or not. So we cannot save (3) as a matter of logic.

A possible solution: Perhaps Aristotle should just replace (2) with:

  4. It is true that it will be: no tomorrow, or tomorrow a sea battle, or tomorrow no sea battle.

I am a bit worried about the "will" attached to a “no tomorrow”. Maybe more on that later.

Monday, July 28, 2025

An attempt to define possible futures for open futurism

On all-false open futurism (AFOF), future contingent claims are all false. The standard way to define “Will p” is to say that p is true in all possible futures. But defining a possible future is difficult. Patrick Todd does it in terms of possible worlds apparently of the classical sort—ones that have well-defined facts about how things are at all times. But such worlds are not in general possible given open future views—it is not possible to simultaneously have a fact about how contingent events go on all future days (assuming the future is infinite).

Here is an approach that maybe has some hope of working better for open future views. Take as primitive not classical possible worlds, but possible moments, ways that things could be purely at a time. Possible moments do not include facts about the past and future.

Now put a temporal ordering on the possible moments, where we say that m1 is earlier than m2 provided that it is possible to have had m1 obtaining before m2.

For a possible moment m, define:

  • open m-world: a maximal set of possible moments including m such that (a) all moments in the set other than m are earlier or later than m and (b) the subset of moments earlier than m is totally ordered

  • possible history: a maximal totally ordered set of possible moments

  • possible future of m: a possible history that contains m.

Exactly one possible moment is currently actual. Then:

  • possible future: a possible future of the currently actual moment.
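
For concreteness, here is a small Python toy model of these definitions. Everything specific in it (the particular moments and the earlier-than relation) is an assumption of mine for illustration, and the brute-force search is only meant for tiny finite examples.

    from itertools import combinations

    # Toy possible moments: m0 is the currently actual moment, a and b are two
    # incompatible ways the next moment could go, and a2 can only come after a.
    MOMENTS = {"m0", "a", "b", "a2"}

    # earlier(x, y): it is possible to have had x obtaining before y.
    EARLIER = {("m0", "a"), ("m0", "b"), ("m0", "a2"), ("a", "a2")}

    def earlier(x, y):
        return (x, y) in EARLIER

    def comparable(x, y):
        return x == y or earlier(x, y) or earlier(y, x)

    def is_chain(s):
        # Totally ordered by the earlier-than relation.
        return all(comparable(x, y) for x, y in combinations(s, 2))

    def maximal(sets):
        # Keep only the candidates not strictly contained in another candidate.
        return [s for s in sets if not any(s < t for t in sets)]

    def possible_histories():
        # Maximal totally ordered sets of possible moments.
        chains = [set(c) for r in range(1, len(MOMENTS) + 1)
                  for c in combinations(MOMENTS, r) if is_chain(c)]
        return maximal(chains)

    def possible_futures_of(m):
        # Possible histories that contain m.
        return [h for h in possible_histories() if m in h]

    def open_m_worlds(m):
        # Maximal sets containing m where (a) every other moment is earlier or
        # later than m and (b) the moments earlier than m form a chain.
        others = MOMENTS - {m}
        candidates = []
        for r in range(len(others) + 1):
            for extra in combinations(others, r):
                s = {m} | set(extra)
                past = {x for x in s if earlier(x, m)}
                if all(comparable(x, m) for x in s) and is_chain(past):
                    candidates.append(s)
        return maximal(candidates)

    print(possible_futures_of("m0"))  # two futures: one through a and a2, one through b
    print(open_m_worlds("m0"))        # a single open m0-world containing all four moments

Notice that the open m0-world keeps both of the incompatible branches a and b, while each possible future keeps only one of them; that is the contrast the two definitions are meant to draw.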

Now consider the problem of entailment on AFOF. The problem is this. Intuitively, that I will freely mow my lawn entails that I will mow my lawn, but does not entail that I will eat my lawn. However, since on AFOF “I will freely mow my lawn” is necessarily false—it is false at every possible moment, since “will” claims concerning future contingents are always false—both entailments have necessarily false antecedents and hence are trivially true.

Given a set S of moments and a moment m ∈ S, any sentence of Prior’s (or Brand’s) temporal logic can be evaluated for truth at (S,m). We can now define two modalities:

  • p is OW-necessary: p is true at (W,m) for every open m-world W

  • p is PH-necessary: p is true at (H,m) for every possible history H that contains m.

And now we have two entailments: p OW/PH-entails q if and only if the material conditional p → q is OW/PH-necessary.
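
For reference, in compressed form (with ⊨ for truth at a pair):

  • p is OW-necessary at m if and only if (W,m) ⊨ p for every open m-world W.

  • p is PH-necessary at m if and only if (H,m) ⊨ p for every possible history H with m ∈ H.

  • p OW-entails q if and only if p → q is OW-necessary, and p PH-entails q if and only if p → q is PH-necessary.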

Then that I will freely mow my lawn is OW-impossible, but PH-possible, and that I will freely mow my lawn OW-entails that I will eat my lawn, but does not PH-entail it. The open futurist can now say that our intuitive concept of entailment, in temporal contexts, corresponds to PH-entailment rather than OW-entailment.

I think this is helpful to the open futurist, but still has a serious problem. Consider the sentence “I will mow or I will not-mow.” On AFOF, this is false. But it is true at every possible history. Hence, it is PH-necessary. Thus, PH-necessity does not satisfy the T-axiom. Thus PH-entailment is such that a truth can PH-entail a falsehood. For instance, since “I will mow or I will not-mow” is PH-necessary, it is PH-entailed by every tautology.

On trivalent logics, if "I will mow or I will not-mow" is neither true nor false, we have a similar problem: a truth PH-entails a non-truth.

There is a more technical problem on some metaphysical views. Suppose that it is contingent whether time continues past a certain moment. For instance, suppose there is no God and empty time is impossible, and there is a particle which can indeterministically cease to exist, and the world contains just that particle, so at any time it is possible that that time is the last—the particle can pop out of existence. Oddly, because of the maximality condition on possible histories, there is no possible future where the particle pops out of existence.

I wonder if there is a better way to define entailment and possible futures that works with open future views.

Wednesday, July 23, 2025

Aristotelianism and transformative technology

The Aristotelian picture of us is that like other organisms, we flourish in fulfilling our nature. Our nature specifies the proper way of interacting with the world. We do not expect an organism’s nature to specify proper ways of interacting with scenarios far from its niche: how bats should fly in weightless conditions; how cats should feed in an environment with unlimited food supply; how tardigrades should live on the moon.

But with technology, we have shifted far from the environment we evolved for. While adaptability is a part of our nature, some technological innovations seem to go beyond the adaptability we expect, in that they appear to impact central aspects of the life of the social beings we are: innovations like the city, writing, and fast and widely accessible global communication. We should not expect our nature to specify how we should behave with respect to these new social technologies. We should be skeptical that our nature contains sensible answers to questions about how we should behave in these cases.

Thus we appear to have an Aristotelian argument for avoiding the more transformative types of technology, since we are more likely to have meaningful answers to questions about how to lead our lives if our lives are less affected by social transformations. To be on the safe side, we should live in the country, and have most of our social interaction with a relatively small number of neighbors in person.

The theistic Aristotelian, however, has an answer to this. While evolution cannot foresee the Internet, God can, and he can give us a normative nature that specifies how we should adapt to vast changes in the shape of our lives. We do not need to avoid transformative technology in general, though of course we must be careful lest the transformation be for ill.

Friday, July 18, 2025

Optimalism and logical possibility

Optimalism holds that, of metaphysical necessity, the best world is actualized.

There are two ways to understand “the best world”: (1) the best of all metaphysically possible worlds and (2) the best of all (narrowly) logically possible worlds.

If we understand it in sense (1), then, since optimalism holds of metaphysical necessity, only the best world is metaphysically possible; so the best world is the best out of a class of one, and hence it’s also the worst world in the same class. So on reading (1), optimalism = pessimalism.

So sense (2) seems to be a better choice. But here is an argument against (2). It seems to be an a posteriori truth that I am living life LAP (the life in our world associated with the name “Alexander Pruss”) and that Napoleon is living life LNB (the life in our world associated with the name “Napoleon Bonaparte”). There seems to be a narrowly logically possible world just like this one where I live LNB and Napoleon lives LAP. That world with me and Napoleon swapped is neither better nor worse than this one. Hence our world is not the best one. It is tied or incommensurable with a whole bunch of worlds where the identities of individuals are permuted.

Maybe my identity is logically tied to certain aspects of my life, though? Leibniz certainly thought so—he thought it was tied to all the aspects of my life. But this is a controversial view.

Thursday, July 17, 2025

All-false open futurism

On All-False Open Futurism (AFOF), any future tensed statement about a future contingent must be false. It is false that there will be a sea battle tomorrow, for instance.

Suppose now I realize that due to a bug, tomorrow I will be able to transfer ten million dollars from a client’s account to mine, and then retire to a country that won’t extradite me. A little angel says to me:

  1. Your freely taking your client’s money without permission tomorrow entails your being a thief tomorrow.

I don’t want to be a thief, tomorrow or ever, so I am about to decide not to do it. But now a little devil convinces me of AFOF and says that while (1) is true, so is:

  2. Your freely taking your client’s money without permission tomorrow entails your being a saint tomorrow.

Perhaps I am not very good at modal logic and the devil needs to explain. Given AFOF, it is necessarily false that I will freely take my client’s money without permission tomorrow, and a necessary falsehood entails everything. So, the devil adds, I might as well buy my plane tickets now.
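
Spelled out, with p standing for “I will freely take my client’s money without permission tomorrow” and q arbitrary, one way to lay out the devil’s modal step is:

  • On AFOF, p is a “will” claim about a future contingent (free actions being contingent), so p is false; and the same holds at every possible world, so ¬p is necessary.

  • A material conditional with a false antecedent is true, so p → q is true wherever ¬p is, and hence p → q is necessary.

  • On the strict reading of entailment as necessity of the material conditional, p therefore entails q, no matter what q is, and in particular (2) comes out true.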

The angel, however, grants AFOF for the sake of argument, but says that notwithstanding (2), the following holds:

  3. Tomorrow it will be the case that your taking your client’s money without permission entails your being a thief.

For the entailment holds always.

At this point, we have an interesting question. Given AFOF, should I guide my actions by the entailment between future-tensed claims in (2) or by the future-tensed entailment claim in (3)? The angel urges that the devil’s reasoning undercuts all rationality, while the angel’s reasoning does not, and hence is superior.

But the devil has one more trick up his sleeve. He notes that it is a contingent question whether there will be a tomorrow at all. For God might freely decide to end time before tomorrow. Thus, that there will be a tomorrow is false on AFOF. But (3) implies that there will be a tomorrow, and so (3) is false as well. I try to argue on the basis of Scripture that God has made promises that entail a future eternity, but the devil is a lot better at citing the Bible than I, and convinces me that God might transfer us to a timeless state or maybe eternal life is a supertask lasting from 8 to 9 pm tonight. And in any case, surely it should not depend on revelation whether the angel has a good argument not to take the client’s money. This is a problem for AFOF.

Maybe this is the way out. The angel could say this:

  4. Necessarily, if there will be a tomorrow, then it will be true tomorrow that taking your client’s money without permission entails your being a thief.

But while this conditional is true on AFOF, if the devil has made his case that God hasn’t promised there will be a tomorrow, he can respond with:

  5. Necessarily, if God hasn’t promised there will be a tomorrow and there will be a tomorrow, then it will be true tomorrow that taking your client’s money without permission entails your being a saint.

For the antecedent of the conditional here is necessarily false on AFOF, it being contingent that there will be a tomorrow absent a divine promise. And it then seems that (5) is even more relevant to guiding action than (4).

Maybe the defender of AFOF can insist that the future must be infinite. But this does not seem plausible.

Wednesday, July 16, 2025

Yet another counterexample to act utilitarianism

It is wrong to torture a stranger for 99 minutes in order to avoid 100 minutes of equal torture to oneself.

Entailment and Open Future views

This is probably an old thing that has been discussed to death, but I only now noticed it. Suppose an open future view on which future contingents cannot have truth value. What happens to entailments? We want to say:

  1. That Jones will freely mow the lawn tomorrow entails that he will mow the lawn tomorrow

and to deny:

  2. That Jones will freely mow the lawn tomorrow entails that he will not mow the lawn tomorrow.

Now, a plausible view of entailment is that:

  3. p entails q if and only if it is impossible for p to be true while q is false.

But if future contingents cannot have truth value, then that Jones will freely mow the lawn tomorrow cannot be true, and hence by (3) it entails everything. In particular, both (1) and (2) will be true.

Presumably, the open futurist who believes future contingents cannot have truth value will give a different account of entailment, such as:

  4. p entails q if and only if there is no history in which p is true and q is false.

But what is a history? Here is a possible story. For a time t, let a t-possibility be a maximal set of propositions that could all be true together at t. Given the open future view we are exploring, a t-possibility will not include any propositions reporting contingent events after t. If t1 < t2, and A1 is a t1-possibility while A2 is a t2-possibility, we can say that A1 is included in A2 provided that for any proposition p in A1, the proposition that p was true at t1 is a member of A2. We can then say that a history h is a function that assigns a t-possibility h(t) to every time t such that h(t1) is included in h(t2) whenever t1 < t2.

(Technical note: Open theism implies a theory of tensed propositions, I assume. Thus if A is a t1-possibility, then it is not a t2-possibility if t2 ≠ t1, since any t-possibility will include the proposition that t is present.)

But what does it mean to say that a proposition p is true in a history h? Here is a plausible approach. Suppose t0 is the present time. Given a proposition p that says that s, let pt0 be the backdated proposition that at t0 it was the case that s (with whatever shifts of tense are needed in s to make this grammatical). Then p is true in h provided that there is a time t1 > t0 such that pt0 is a member of h(t1). In other words, a proposition p is true in h provided that eventually h settles its truth value.

This works nicely for letting us affirm (1) and deny (2). In every history in which it becomes true that Jones will freely mow the lawn it becomes true that Jones will mow the lawn, while this is not so if we replace the consequent with “Jones will not mow the lawn.” But what about statements that quantify over times? Consider:

  5. Jones will mow the lawn, and for every time t at which Jones will mow the lawn, there will be a time t′ that is more than a year after t such that Jones will freely mow the lawn at t′.

This entails:

  6. Jones will mow the lawn, and for every time t at which Jones will mow the lawn, there will be a time t′ that is more than a year after t such that Jones will mow the lawn at t′.

but does not entail:

  7. Jones will not mow the lawn.

But there is no history h at which (5) is true by the above account of truth-at-a-history given our open future view. For let t0 be the present and let p be the proposition expressed by (5). Then at any future time t and any history h, the proposition pt0 is not a member of h(t). For if it were a member of h(t), it would be affirming the existence of an infinite number of future free mowings, and such a proposition cannot be true on our open future view. Since there is no history h at which (5) is true, by (4) we have it that (5) entails both (6) and (7), which is the wrong result.

What if instead of saying that future contingents lack truth value, we say that they are all false? This requires a slight modification to the account of p being true at a history. Instead of saying that p is true at h provided that there is some future time t such that pt0 is in h(t), we need to say that there is some future time t such that pt0 is in h(t′) for all t′ ≥ t. This gives the right truth values for (1) and (2), but it also makes (7) true.
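
Here is a small Python sketch contrasting the two clauses. The sample history and the proposition labels are toy assumptions of mine, and t-possibilities are flattened into bare sets of backdated propositions, ignoring the re-tensing details from the definition of inclusion above.

    T0 = 0  # the present

    # A toy history: to each later time it assigns the set of backdated
    # propositions settled by then (a crude stand-in for a t-possibility).
    history = {
        1: set(),
        2: {"at t0: Jones will mow", "at t0: Jones will freely mow"},
        3: {"at t0: Jones will mow", "at t0: Jones will freely mow"},
    }

    def true_in_h_v1(h, p_t0):
        # No-truth-value version: true in h iff some future time settles p_t0.
        return any(p_t0 in props for t, props in h.items() if t > T0)

    def true_in_h_v2(h, p_t0):
        # All-false version: true in h iff from some future time on, every time settles p_t0.
        times = sorted(t for t in h if t > T0)
        return any(all(p_t0 in h[u] for u in times if u >= t) for t in times)

    print(true_in_h_v1(history, "at t0: Jones will mow"))      # True
    print(true_in_h_v2(history, "at t0: Jones will mow"))      # True
    print(true_in_h_v1(history, "at t0: Jones will not mow"))  # False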

I think the above open futurist accounts of entailment work nicely for statements with a single unbounded quantifier over times, but once we get alternating quantifiers like in (5), where the second conjunct is of the form ∀t∃t′ϕ, things break down.

Perhaps the open futurist just needs to be willing to bite the bullet and say that (5) entails (7)?

Open Theism and divine promises

Open Theist Christians tend to think that there are some things God knows about the future, and these include the content of God’s promises to us. God’s promises are always fulfilled.

But it seems that the content of many of God’s promises depends on free choices. For imagine that all the recipients of God’s promise freely choose to release God from the promise; then God would be free not to follow the promise, it appears, and so he could freely choose not to act in accordance with the promise. Thus there seems to be a sequence of creaturely and divine free choices on which the content of the promise does not come about.

This argument may not work for all of God’s promises. Some of God’s promises are covenants, and it may be that covenants are a type of agreement in which neither party can release the other. There may be other unreleasable promises: perhaps when x promises to punish y, that’s a promise y cannot release x from. But do we have reason to think that God makes no “simple promises”, promises other than covenants and promises of punishment?

I do not think this is a definitive argument against open theism. The open theist can bite the bullet and say that God doesn’t always know he will fulfill his promises. But it is interesting to see that on open theism, God’s knowledge of the future is even more limited than we might have initially thought.

Tuesday, July 15, 2025

Open theism and the Incarnation

Here is a very plausible pair of claims:

  1. The Son could have become incarnate as a different human being.

  2. God foreknew many centuries ahead of time which human being the Son would become incarnate as.

Regarding 1, of course, the Son could not have been a different person—the person the Son is and was and ever shall be is the second person of the Trinity. But the Son could have been a different human being.

Here is a sketch of an argument for 1:

  • If the identity of a human being depends on the body, then if the Son became incarnate as a 3rd century BC woman in India, this would be a different human being from Jesus (albeit the same person).

  • If the identity of a human being depends on the soul, then God could have created a different soul for the Son’s incarnation.

  • The identity of a human being depends on either the body or the soul.

I don’t have as good an argument for 2 as I do for 1, but I think 2 is quite plausible given what Scripture says about God’s having planned out the mission of Jesus from of old.

Now add:

  3. If the Son could have become incarnate as a different human being, which human being he became incarnate as depends on a number of free human choices in the century preceding the incarnation.

Now, 1, 2 and 3 lead to an immediate problem for an open theist Christian (my thinking on this is inspired by a paper of David Alexander, though his argument is different) who thinks God doesn’t foreknow human free choices.

Why is 3 true? Well, if the identity of a human being even partly depends on the body (as is plausible), given that (plausibly) Mary was truly a biological mother of Jesus, then if Mary’s parents had not had any children, the body that Jesus actually had would not have existed, and an incarnation would have happened with a different body and hence a different human being.

Objection: God could have created Mary—or the body for the incarnation—directly ex nihilo in such a case, or God could have overridden human free will if some human were about to make a decision that would lead to Mary not existing.

Response: If essentiality of origins is true, then it is logically impossible for the same body to be created ex nihilo as actually had a partial non-divine cause. But I don’t want the argument to depend on essentiality of origins. Instead, I want to argue as follows. Both of the solutions in the objection require God to foreknow that he would in fact engage in such intervention if human free choices didn’t cooperate with his plan. God’s own interventions would be free choices, and so on open theism God wouldn’t know that he would thus intervene. One might respond that God could resolve to ensure that a certain body would become available, and a morally perfect being always keeps his resolutions. But while perhaps a morally perfect being always keeps his promises, I think it is false that a morally perfect being always keeps his resolutions. Unless one is resolving to do something that one is already obligated to do, it is not wrong to change one’s mind about a resolution. I suppose God could have promised someone that he would ensure the existence of a certain specific body, but we have no evidence of such a specific promise in Scripture, and it seems an odd maneuver for God to have to make in order to know ahead of time who the human that would save the world is.

What if the identity of a human depends solely on the soul? But then the identity of the human being that the Son would become incarnate as would depend on God’s free decision which soul to create for that human being, and the same remarks as I made about resolutions in the previous paragraph would apply.

Monday, July 14, 2025

The Reverse Special Composition Question

Van Inwagen famously raised the Special Composition Question (SCQ): What is an informative criterion for when a proper plurality of objects composes a whole?

There is, however, the Reverse Special Composition Question (RSCQ): What is an informative criterion for when an object is composed of a proper plurality?

The SCQ seems a more fruitful question when we think of parts as prior to the whole. The RSCQ seems a more fruitful question when we think of wholes as prior to the parts.

If by parts we mean something like “integral parts”, we have a pretty quick starter option for answering the RSCQ:

  1. An object is composed of a proper plurality of parts just in case it takes up more than a point of space.

I am not inclined to accept (1) because I like the possibility of extended simples, but it is a pretty neat and simple answer. Suppose that (1) is correct. Then we have a kind of simplicity argument for the thesis that the whole is prior to its parts. If the parts are prior to the whole, SCQ is a reasonable question, but doesn’t have an elegant and plausible answer (let us suppose). If the whole is prior to the parts, SCQ is not a reasonable question but RSCQ instead is, and RSCQ has an elegant and plausible answer (let us suppose). So we have some reason to accept that the whole is prior to the parts.

Natural kinds across categories

Most philosophical discussions of natural kinds concern entities in the category of substance: particles, chemical substances, organisms, etc. But I think we shouldn’t forget that there is good reason to posit natural kinds of entities in other categories.

For instance, you and I are each engaging in a token activity that falls under the natural kind (say) mammalian breathing. The natural kind specifies some essential properties of the kind, namely that it is a kind of filling and/or emptying of the lungs, as well as some teleological features, such as that the filling and emptying should be rhythmic. Instances of the kind may be better or worse: given that I am congested after a long drawn-out cold, likely your breathing is better than mine.

There are, plausibly, such things as natural activities, which fall under activity natural kinds. These kinds may include gravitational attraction, mating, fish respiration, etc.

Dispositions, too, may fall under natural kinds, indeed a nested sequence of them. We might say that some dispositions are habits, and some habits are virtues. Thus, perhaps, you and I each have a certain disposition to rationally withstand danger, a disposition that is a token of courage, a kind of virtue. Your and my courages are different: for instance, perhaps, I am more willing to withstand social danger while you are more willing to withstand physical danger. Whether indeed virtues are natural kinds seems to me to be a central question for the metaphysics of virtue ethics.

There may be natural kinds of relations, too. Thus, I think marriage is a natural kind. On the other hand, I think presidency is not.