Wednesday, April 29, 2026

Transubstantiation and the conversion of bread into Christ's body

One of the philosophical challenges of Aquinas’ account of transubstantiation is his insistence that the bread and wine are not merely annihilated and replaced by Christ’s body and blood, but that they are changed into Christ’s body and blood.

Now, it is easy to see how bread could be changed into a part of Christ’s body. That routinely happened when Christ ate bread in his earthly life. But Aquinas thinks that Christ is wholly present in the Eucharist, so that can’t be the account. But it is very puzzling what it would mean for an item B to be changed into an individual item C that already existed prior to the change. What would it mean, for instance, for the chair I am sitting on to change into the laptop I am typing this on? It is easy to imagine God moving the fundamental particles of the chair into positions such that they constitute a laptop. But that would be a case of the chair changing into a second laptop, not into the laptop that I am typing this on. Indeed, it seems like it’s impossible for something to change into something that already exists, simply because the thing already exists.

Aquinas is well aware of this objection, and has a fascinating response:

A form cannot be changed into another form, or one [designated] matter into another [designated] matter, by the power of a finite agent. However, such a conversion can be effected by the power of an infinite agent, which has an action on the whole entity. For the common nature of being belongs to each form and to each [designated] matter, and the author of being (auctor entis) is able to convert what there is of being in the one (id quod est entitatis in una) into what there is of being in the other (id quod est entitatis in altera), by removing that by which it was distinguished from the latter.

Here is what I think is going on. Like many other philosophers before and after him, Aquinas thinks that individual objects need something whereby they are individuated—something that distinguishes them from other things. The project of figuring out what individuates things from other things is indeed a major part of Aristotelian metaphysics. Aquinas’ point seems to be this. God wields a very fine scalpel at the level of being, a scalpel so infinitely sharp that no finite being can wield it. That scalpel allows God to slice off of an individual B that which distinguishes B from an individual C. When God slices that off, B literally loses its identity, and becomes C, as there is then nothing whereby B can be distinguished from C. (That God has such a fine scalpel is also indicated by the way that in the Eucharist he can slice a substance away from its accidents, and have the accidents remain without the substance.)

Let’s explore this account. First note that it seems to commit Aquinas to a different account of the individuation of material objects from his usual one. Aristotelians normally think that material objects are distinguished either by having different forms or, when the form is the same, by having different matter. Now, the bread on the altar and the body of Christ do have different forms: one is bread and the other is a human body. So on the usual Aristotelian account of what makes the bread different from the body of Christ, it is the bread’s bready form. But slicing away the bready form does not turn the bread into the body of Christ, or indeed into any human body. It just turns the bread into a formless lump of matter.

Perhaps we should suppose, however, that there is more than just literal removal going on. Maybe what happens is that God removes the bready form and replaces it with the form of the human body. But great as that miracle would be, that would just turn bread into a human, not into this human, Jesus Christ.

What if we suppose that God removes the bready form and replaces it with the form of Jesus Christ (namely, the soul of Jesus Christ)? But now the bread is simply becoming a new part of the body of Christ (in a miraculous version of the way that the bread you may have for lunch may become new cells in you), and so only a part of Christ is present in the Eucharist.

But perhaps what I have described doesn’t slice away enough. Suppose the following happens. God slices the bready form away from the bread. That still leaves the bread’s matter. And the matter of the bread is distinct from the matter of Christ’s body. God continues removing the grounds of distinctness. He wields his infinitely sharp scalpel and carefully removes that in the matter of the bread which makes it be distinct from the matter of Christ’s body. The result is that now the matter of the bread is not distinct from the matter of Christ’s body. Indeed, the matter of the bread literally converts into the matter of Christ’s body, not merely into a new part of Christ’s body.

A major problem with this interpretation is that the form of bread is annihilated, whereas Thomas thinks the form of bread is also converted into the form of Christ’s body (admittedly with a qualification; see ST III.75.A6repl2 for details).

But perhaps we should make another move. Suppose that we have a non-Aristotelian account of individuation that works as follows: for any two created things, B and C, there is a relation that B has to C that individuates B from C and a relation that C has to B that individuates C from B. We can imagine each created thing having a vast number of labels. Somehow Alice has written into her being “I am not Bob” and “I am not Seabiscuit” and “I am not Oak Tree #18289”, and Bob has written into his being “I am not Alice” and “I am not Seabiscuit” and “I am not Oak Tree #18289”. This relational account of individuation does not require form or matter. It is not very Aristotelian. But it has a great theological merit: it makes the individuation of creatures be an image of the individuation of persons in the Trinity, which also proceeds (according to Western Christians) by opposed relations. Now imagine that God slices off of Alice the label “I am not Seabiscuit.” Instantly, Alice is converted into Seabiscuit. (Of course, it’s not right to say that Alice is Seabiscuit now. In this respect, it’s like when Bucephalus turned into a cadaver: Bucephalus and the horse-shaped cadaver are distinct entities.)

The exegetical problem with this interpretation is that it forces one to reject the standard Aristotelian story about individuation across species being by form and within a species being by matter. Instead, individuation is always by “individuating relations”. I am happy with this, because I never liked the standard Aristotelian story. But it makes it unlikely that the story is what Thomas has in mind.

But suppose one wants this to be more Aristotelian. Here is a way to do this. Take the orthodox Aristotelian account that across species individuation is by form and within species by matter. This account leaves unanswered the question of what makes a human form and a horse form different, as well as the question of what makes Peter’s matter different from Paul’s matter. Suppose we answer these questions by the relational account, thereby combining the Aristotelian account with the relational. Thus, a human form has (perhaps primitive) distinctness relations to all other kinds of forms, and Peter’s matter has (perhaps primitive) distinctness relations to all other chunks of matter.

We can now imagine the following happening. There is bread on the altar. At the moment of consecration, God (a) removes from the bready form that which distinguishes it from a human form and (b) removes from the bread’s matter that which distinguishes it from Christ’s matter. Step (a) ensures that now the bread has human form, while step (b) ensures that the human form is that of Christ, since within a species the numerical distinction of forms is due to matter.

This last account is quite Aristotelian, and only requires that we go one step further than Aristotle by supposing an answer to the question of what makes different kinds of forms different and distinct chunks of matter distinct. It’s too Aristotelian for my taste—I don’t want matter to play that much of a metaphysical role. But it is a cool account, I think. And it could be Aquinas’.

A four-dimensional model of Eucharistic presence

In yesterday’s post, I discussed the Real Presence in the context of relativistic time. There, I made the assumption that when Christ is really present in the Eucharist, it is Christ at a specific time of life (intuitively, the current time, but that notion is tricky given Relativity). It is, in particular, an adult glorified Christ and not the toddler Christ who is present in the Eucharist.

But after discussion with my Aquinas seminar grad students, I think there is something rather appealing about denying that assumption. What if instead we say that the whole of the four-dimensional Christ is present in the Eucharist? Aquinas apparently thinks that the whole Christ is present in every potential “part” of the consecrated host. This suggests (but does not entail) the idea of a three-dimensional entity present at a single point in space. Why, then, can’t a four-dimensional entity be present at a single point of spacetime? This would require a distinction between internal and external time. During an instant of external time there would be a positive (indeed, infinite, in the case of a being that lives forever) length of internal time. This is just as the whole-presence of a three-dimensional Christ in the Eucharist requires a distinction between internal and external space: there may be five feet (say) of internal space between Christ’s head and Christ’s toes, but both are present in the external space of two inches—or much less if Aquinas is right that Christ is present in every potential part.

Is there any point to such a supposition? Yes.

First, the Tradition holds that Christ is wholly present in the Eucharist. Given four-dimensionalism, a literal metaphysical reading of that requires the whole of the four-dimensional extent of Christ to be present. Granted, I think this is an overreading of the Tradition: even if four-dimensionalism is true, it is plausible that the doctrinal pronouncements on this only refer to the whole three-dimensional extent of Christ. But, still, supposing four-dimensionalism, it is certainly in the spirit of the teaching on Christ being wholly present, even if not required by it, to suppose the whole four-dimensional extent of Christ to be present.

Second, in Q73.A4, Aquinas has a beautiful discussion of the threefold temporal signification of the Eucharist. With respect to the past, it commemorates Christ’s passion. With respect to the present, it brings all the members of the Church together. With respect to the future, it prefigures the enjoyment of God in heaven. If we think that we are united with Christ in his full four-dimensional extent, that deepens and underscores this threefold signification.

Third, the Catholic tradition holds that the Mass is a re-presentation of the sacrifice of Calvary. This is a mysterious doctrine, and a four-dimensional whole-presence model of the Eucharist gives us a precise account of that doctrine as well: Christ as hanging on the Cross is present in the Eucharist.

That said, there are three things that make me uncomfortable about this four-dimensional extension of the doctrine that the whole of Christ is present in the Eucharist. The first is simply that it is a new theological theory (as far as I know), and most new theological theories are heretical.

The second is that it feels important to me that it is the glorified Jesus who is present in the Eucharist. But perhaps I am wrong about this feeling, and in having this feeling I am underplaying the commemorative aspect of the past temporal aspect of the Eucharist. Perhaps a justification for my feeling of discomfort is given by the Church’s emphasis on the Eucharist as an unbloody re-presentation of the sacrifice of Christ—but if Christ’s mangled crucified body is present in the Eucharist, then the unbloodiness is merely a matter of appearances.

The third is that it is difficult to know what to make of the period when Christ was dead. Aquinas thinks God was still incarnate in the dead body of Christ, and if the Apostles had celebrated the Eucharist then, a dead body would have come to be present. If Aquinas’s reasons are good ones, which I am not confident of, then on the four-dimensional whole presence model we should say that the dead body of Christ is present. But this doesn’t seem right. I worry both about the apparently unfitting gruesomeness of this and about the idea that there is something in the Eucharist other than Christ’s body, blood, soul and divinity (a dead body is not a body!). That said, I am suspicious of Aquinas’s view of the Incarnation and the dead body of Christ. But even if Aquinas is wrong, we have another problem. If the whole temporal extent of Christ is to be present, the soul of Christ as it was when Christ was dead needs to be present. (Especially if, as I think, survivalism is correct.) But a soul is only present in a spatial location insofar as it is united to a body that is in that location. But the soul of Christ as it was when he was dead was not united to a body, so it seems that there is no way for it to be present in the Eucharist. If this problem cannot be solved, the account may yield the whole four-dimensional extent of Christ being present, but not the whole temporal extent of Christ being present (the soul is temporal but not spatial, and hence not four-dimensional). Thus, the account may not achieve quite as much as it seems to.

One can resolve the second and third problems by supposing a moderate version of the view: Christ’s whole glorified four-dimensional self is present in the Eucharist—namely, Christ in the Eucharist is all of Christ from the time of his resurrection. This loses some of the advantages of the view, and it is not clear that what remains is sufficiently compelling. But, on the other hand, it’s also not clear that there is any serious disadvantage to that view over a three-dimensional-slice view of the Real Presence, except maybe the novelty. And it has the advantage of there not having to be a fact about the exact correlation of times between heaven and earth.

Tuesday, April 28, 2026

The Real Presence and Relativity Theory

Jesus Christ, like all human beings, has an internal clock. One can measure that clock in heartbeats or in lower level physical interactions or in some other way. Let’s measure it in “internal years”. If Jesus was born in 4 BC, then in 4 BC, his internal clock was at about a year (he was conceived about 0.75 years before he was born). In 1 BC, it was at 4 years, in 1 AD, it was at 5 years, and in 30 AD, it was at 34 years.

I don’t know what Jesus’s internal clock was at in 100 AD, and it’s not immediately obvious that the question makes sense. For it is not immediately obvious that there is a correlation between time on earth and time in heaven of such a sort that it makes sense to ask “What is happening in heaven right now?” After all, according to Relativity Theory, it doesn’t make sense to ask “What is happening right now in the Andromeda Galaxy?” without specifying a reference frame for the “right now”, and it’s not immediately clear that there is a common reference frame between heaven and earth.

However, the real presence of Jesus in the Eucharist does provide a temporal correlation between heaven and earth. Around 22:45 UTC today, Jesus will come to be present in our campus parish. Moreover, Jesus will be present as an adult glorified human, not as the three-year-old he was in 1 BC. There thus appears to be a fact of the matter as to what his internal clock will be showing when he comes to be present at 22:45 UTC in Waco today.

Interestingly, this gives a temporal ordering on events scattered across the earth apparently independent of our ordinary relativistic reference frames. For if the Eucharist is celebrated around 22:45 UTC in Waco and around 22:45 UTC in London, there is a fact of the matter whether Christ as present in Waco is older or younger than, or the same age as, Christ as present in London (according to his internal clock), and this fact provides a reference-frame independent temporal ordering between these two Eucharistic celebrations.

Indeed, since according to Catholic and Orthodox faith, Christ remains Eucharistically present in the tabernacles across the world, we constantly have a temporal ordering between events scattered spatially across the world. In principle, this defines a theologically privileged reference frame between scattered events—a Eucharistic reference frame. Events at locations z1 and z2 in spacetime are Eucharistically simultaneous, we might say, provided that Christ as Eucharistically present at z1 and at z2 has the same value of the internal clock.
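To make the definition concrete, here is a toy formalization. This is my own sketch, not anything in the tradition; the events and the internal-clock readings are invented for illustration. The idea is just that the internal clock is a function from Eucharistic events to internal-time values, and Eucharistic simultaneity is equality of those values:

```python
# Toy model (illustrative; all names and clock values are made up):
# treat Christ's internal clock as a function from Eucharistic events
# to internal-time values, and define Eucharistic simultaneity as
# equality of those values.

from collections import defaultdict

internal_clock = {
    "Waco_2245UTC": 2029.30,
    "London_2245UTC": 2029.30,
    "Sydney_0900UTC": 2029.29,
}

def eucharistically_simultaneous(z1, z2):
    return internal_clock[z1] == internal_clock[z2]

# Since equality is an equivalence relation, the events fall into
# simultaneity classes -- the "slices" of the privileged frame.
slices = defaultdict(list)
for event, t in internal_clock.items():
    slices[t].append(event)

print(eucharistically_simultaneous("Waco_2245UTC", "London_2245UTC"))  # True
print(sorted(slices[2029.30]))  # ['London_2245UTC', 'Waco_2245UTC']
```

Because the relation is defined by equality of a single function's values, it automatically partitions events into non-overlapping simultaneity slices, which is what a privileged frame needs.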

Of course, some philosophers of time think there is an objective reference frame in the physical world. If they are right, then very likely the theologically privileged frame is the same as the objective one.

All that said, it is not completely clear to me that Christ as Eucharistically present has to have a well-defined value of his internal clock. But I suspect so, because of the intuition that it is the adult and not toddler Jesus who is Eucharistically present.

Monday, April 27, 2026

Gettiered by degrees

Consider a standard Gettier case. A cutout of a sheep in a field hides a sheep behind it. At that distance, the cutout looks just like a sheep. You have a justified true belief that there is a sheep, but you don’t know it (or so the story goes).

Now imagine that cutout is to some degree transparent, so some of the whiteness you see is in fact from the sheep, and some from the cutout. Consider the continuum of cases as the cutout goes from fully opaque to fully transparent. Perhaps it fades from opaque to transparent as you’re looking—all without you knowing that it is fading. When it’s fully or nearly opaque, you are Gettiered and don’t know there is a sheep. When it’s fully or nearly fully transparent, you know there is a sheep.

Supposing that knowledge has a distinctive value over and beyond the value of justified true belief, it seems plausible to think that this value increases monotonically with the transparency of the cutout. If the cutout is becoming more and more transparent before your eyes, you are gaining epistemic value, without noticing you are doing so.

It’s an interesting question: What kind of a function is there from cutout-transparency to value? Is it continuous, or is there a transparency threshold for knowledge at which it jumps discontinuously? If it is continuous, is it linear?

I have to confess that these kinds of questions seem a bit silly, and this gives some ammunition to the thought that knowledge does not have a distinctive value.

Action-guiding counterfactuals

Suppose Alice is essentially a non-liar and essentially knows all about human affairs as well as about her essential properties, but she does not have any significant powers to affect human affairs except by answering questions. You ask Alice whether there is poverty among humans.

She thinks to herself that since she essentially knows all about human affairs and is incapable of lying, therefore this is true for her:

  1. Were I to say “There is no poverty among humans”, there would be no poverty among humans.

Since it’s a lot better that there be no poverty, she says there is no poverty among humans.

That’s absurd. Yet (1) seems to follow from the following plausible premises:

  2. If p entails q, and it is contingent whether p holds, then were p to hold, q would hold.

  3. It is contingent whether there is poverty among humans.

For that Alice says that there is no poverty among humans entails that there is no poverty among humans, since she is essentially incapable of lying and essentially knows whether there is poverty among humans.
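One can see why (1) comes out true on a standard possible-worlds treatment with a tiny model. This is my own illustration, using a deliberately crude semantics (the consequent must hold at every antecedent-world, with no closeness ordering); the point is only that Alice's essential honesty and knowledge delete the "lying" worlds from logical space:

```python
# Toy possible-worlds check (illustrative, crude semantics). A world is
# a pair (poverty, says_no_poverty). Because Alice essentially never
# lies and essentially knows the facts, worlds where she says "There is
# no poverty" while there is poverty are impossible and are dropped.

worlds = [(poverty, says) for poverty in (True, False)
          for says in (True, False) if not (poverty and says)]

def counterfactual(antecedent, consequent):
    """True iff the consequent holds at every antecedent-world
    (vacuously true if there are no antecedent-worlds)."""
    return all(consequent(w) for w in worlds if antecedent(w))

says_no_poverty = lambda w: w[1]
no_poverty = lambda w: not w[0]

# (1): were Alice to say "There is no poverty", there would be no
# poverty -- true, since the only says-worlds left are no-poverty worlds.
print(counterfactual(says_no_poverty, no_poverty))  # True
```

On this crude semantics the absurd counterfactual (1) is straightforwardly true, which is exactly the problem the post goes on to diagnose.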

It seems that Alice should deny (1) in favor of:

  4. Were I to say “There is no poverty among humans”, I’d be lying.

But that’s very odd, because it’s a counterfactual with a contingent antecedent (Alice can say “There is no poverty among humans”: she says it in possible worlds where there is no poverty among humans) and an impossible consequent. And it seems like all such counterfactuals should be false.

What’s going on? Here’s a suggestion. Counterfactuals are highly context-dependent in what one keeps fixed. Richard Gale once illustrated this point with the dark joke: “What would Queen Victoria be doing if she were alive today? Clawing at the inside of her coffin!” In the context of action-guidance, we need to keep fixed the true “dependency hypotheses” (familiar from causal decision theory). Present facts are among the dependency hypotheses to keep fixed when deliberating, except in special cases like where you are deliberating about how to use a time machine. Thus, we keep fixed that there is poverty, and (4) is correct while (1) is false when said in an action-guiding context.

If we say that the dependency hypotheses count as part of the antecedent, we can keep a version of (2), though I don’t know that that’s exactly the right way to do the semantics.

But things are a bit more complicated. Essential facts about one’s traits are surely also among the dependency hypotheses. But now the true dependency hypotheses include:

  (i) There is poverty.

  (ii) Alice knows whether there is poverty.

  (iii) Alice doesn’t lie.

What would be true if we were to combine (i)–(iii) with Alice saying “There is no poverty”? I have no idea! Poof, logic would explode, and everything would be true? Or would there be something more specific true? I don’t know. In any case, it is unclear that (4) is true and (1) isn’t when we think about it this way.

Here is a thought. There is a hierarchy among the dependency hypotheses. Facts about the agent’s character—even necessary facts—are lower down in the hierarchy than other dependency hypotheses. In cases of conflict, we keep fixed the higher up dependency hypotheses at the expense of the lower down ones. Thus, we keep fixed (i) at the expense of (ii) and (iii). Maybe that helps. But it seems rather ad hoc.

And consider this puzzle. You ask Alice whether she can lie. Now the relevant dependency hypotheses are just that Alice knows her essential traits and that she is essentially a non-liar, and these seem all on par. So if we had these dependency hypotheses along with Alice saying she can lie, there is no telling what would eventuate. And in particular it is unclear how Alice can reason to the conclusion that she should say “I can’t lie” rather than “I can lie.”

Friday, April 24, 2026

More on wagers for the perfectly rational

Consider a choice between two wagers on a fair coin:

  • W1: on heads, you get $1 if you are perfectly rational and $3 if you are not.

  • W2: on tails, you get $2 if you are perfectly rational and $1 if you are not.

Suppose you are perfectly rational, and that it’s a part of perfect rationality that you know for sure you’re perfectly rational. It’s obvious you should go for W2. But let’s calculate. We immediately run into the zero-probability problem that I’ve lately been thinking about. For if you’re perfectly rational, the probability that you go for W1 is zero, so E(U|W1) seems to be undefined. Of course, E(U|W2) is unproblematically half of $2, or $1, but you can’t say whether that beats “undefined” or not.

Suppose you think: Maybe E(U|W1) is undefined in classical probability, but maybe I can use some other way of defining it, say using Popper functions.

Well, let’s think about what E(U|W1) “should be”. So imagine that you actually go for W1. Now, only an imperfectly rational agent would go for W1. So, if you were to go for W1, you would get $3 on heads, so your expected payoff would be $1.50, which beats anybody’s expected payoff for W2. So, formally, E(U|W1) is undefined, but if you close your eyes to that and think intuitively, you get E(U|W1) equaling $1.50, which yields the wrong result that as a perfectly rational agent you should go for W1.

What if we say that a perfectly rational agent need not know for sure that they are perfectly rational? Suppose, say, you are a perfectly rational agent who is 0.99 sure you are perfectly rational. Then E(U|W1) and E(U|W2) are both well-defined. But what are they? Well, it’s intuitively clear that if you are 0.99 sure that you are perfectly rational, you should go for W2. But supposing that’s right, then choosing W1 entails that you are not perfectly rational, and since P(W1) = 0.01, the expectation E(U|W1) is well-defined, and must be equal to $1.50. Oops!
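The arithmetic here can be spelled out in a short script. This is my own sketch, not the post's own formalism; the stipulations that only imperfectly rational agents pick W1, and that picking W2 leaves the 0.99 credence in one's rationality roughly intact, are assumptions made for illustration:

```python
# Evidential expected utilities for the W1/W2 wager on a fair coin.
# Illustrative stipulations: only imperfectly rational agents pick W1,
# and picking W2 leaves the 0.99 credence in one's rationality intact.

def payoff(wager, heads, rational):
    if wager == "W1":  # pays on heads: $1 if rational, $3 if not
        return (1 if rational else 3) if heads else 0
    else:              # W2 pays on tails: $2 if rational, $1 if not
        return (2 if rational else 1) if not heads else 0

# Conditional on choosing W1, you are certainly not perfectly rational:
E_W1 = 0.5 * payoff("W1", True, False) + 0.5 * payoff("W1", False, False)

# Conditional on choosing W2, credence 0.99 that you are rational:
p = 0.99
E_W2 = 0.5 * (p * payoff("W2", False, True) + (1 - p) * payoff("W2", False, False))

print(E_W1)  # 1.5 -- the "wrong" recommendation the post points out
print(E_W2)  # about 0.995
```

So on these stipulations E(U|W1) = $1.50 beats E(U|W2) ≈ $0.995, which is precisely the perverse verdict.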

This line of reasoning assumed evidential decision theory. What if you go for causal decision theory? Well, there are two causal hypotheses: R (you are perfectly rational) and Rc (you are not) with P(R) = 0.99 and P(Rc) = 0.01. So now your causal expected utility on W1 equals

  • CE(U|W1) = 0.99E(U|W1 ∩ R) + 0.01E(U|W1 ∩ Rc).

What is this? Well, W1 ∩ R is the empty set! But conditionalizing on an empty set is not a merely technical problem in the way that conditionalizing on a specific zero-probability outcome of a continuous spinner is. Rather, it is simply nonsense. So the first summand is undefined, and hence the sum is undefined. Thus you simply cannot make a decision with causal decision theory here.

It’s obvious that if you’re nearly sure you’re perfectly rational you should go for W2. But neither evidential nor causal decision theory gives a way to that conclusion.

[By the way, the reason I set up W1 and W2 as I did, with one having the payoff on heads and the other on tails, was to ensure that we didn’t have domination. For one might reasonably say that a perfectly rational agent will try to decide on grounds of domination first, before resorting to probabilities.]

Thursday, April 23, 2026

Purgatory and its alternatives

I was reading Jerry Walls’ lovely piece on purgatory for class. Thinking about it has made me realize that given that all who are in heaven are morally perfect, and almost nobody is morally perfect before death, we have the following options:

  1. Almost no Christians end up in heaven.

  2. There is purgatory after death during which character changes.

  3. There is instant and radical character change at the moment of death.

  4. There is a temporally extended and empirically invisible sanctification just before death, probably with time being subjectively stretched.

I think it’s tempting to think of purgatory as an odd Catholic addition to Scripture (though there is 1 Cor. 3:15, of course)—maybe even for a Catholic to think that. But consider the other options.

Option (1) is super pessimistic. It doesn’t make the Gospel really be the Good News it is.

Option (3) is at least as much—and perhaps more so—a theological addition to Scripture as purgatory may seem to be. It’s compatible with Scripture that there is such a sudden moral transformation, but so is purgatory, and both of them are major divine actions going over and beyond what is expressly given by Scripture. Both are surprising, I suppose. Of the two, however, the instant moral transformation seems a lot less in keeping with God’s usual way of proceeding with us. Presumably, being instant, this moral transformation is not something we could have much cooperation in. And it feels a bit odd to think that we struggle over many years to grow morally—and then in an instant it’s all fixed. It makes one wonder why we bothered to struggle. (On the purgatory story, the struggle makes sense, because purgatory does not exempt one from effort.)

Option (4) is also a theological addition to Scripture. It has the advantage over (3) that it is not instant, and hence is more in keeping with God’s typical way of proceeding with us. But it has the serious disadvantage of appearing to be rather a skeptical hypothesis—especially when it is not actually announced by God that that’s what God does for most people. Moreover, while I certainly am open to God using the period just before death for moral transformation, there is something odd about this being how God normally proceeds with Christians. For often the period just before death is naturally unsuited to moral transformation: the mind is falling apart as death takes the body. God could choose that difficult moment, but it doesn’t seem to fit well with a picture of a God who likes to make grace build on nature.

If I were a Protestant, I think I would definitively reject (1), and then I would be inclined to suppose that (2) is somewhat more likely than either one of (3) and (4).

Good's Theorem, perfect rationality, and conditioning on zero probability events

Recently, I found myself puzzled by the difficulty in applying “classical” evidential decision theory to a perfectly rational agent. The problem was that the rational agent decides whether to do A or B based on a comparison between the conditional expectations E(U|A) and E(U|B) of the utility function U. But supposing that in fact E(U|A) > E(U|B), the perfectly rational agent has no chance of doing B, so P(B) = 0, and hence E(U|B) is undefined.

But then I thought this isn’t a big deal, because we aren’t perfectly rational agents, so we always have a chance of screwing up and hence P(B) > 0 even if E(U|B) is much less than E(U|A).

I am not entirely satisfied with this. After all, you might think: “I may be pretty imperfect, but if I am choosing between a donut D and a year of torture T, I have zero chance of choosing the year of torture. But then E(U|T) is undefined, so how am I being rational in this choice?” Maybe that’s a good objection, maybe not.

But here is another reason why the “We’re imperfect” solution isn’t completely ideal. We want to say that Good’s Theorem tells us something important about rationality—namely, that more information makes rational agents make better decisions. Good’s Theorem is usually interpreted as saying that under some independence conditions, the expected value of a perfectly rational choice given more information is no less than that of a perfectly rational choice given less information. Notice that this is obviously false in the case of an imperfectly rational agent. Thus, we have to make sense of “What a perfectly rational agent would choose” to make sense of the standard interpretation of Good’s Theorem. Moreover, in the setting of Good’s Theorem, the perfectly rational agent has to be choosing based on expected utilities—and that’s precisely what generates the zero-probability-conditioning problem.
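The abstract content of the Theorem can at least be displayed in a minimal numerical case. This is my own illustration with made-up states and payoffs; it sidesteps the zero-probability worry by simply comparing the two expectations directly:

```python
# Value-of-information illustration of Good's Theorem: deciding after a
# free, relevant observation is worth at least as much in expectation
# as deciding now. States and payoffs are invented for the example.

p_state = {True: 0.5, False: 0.5}              # prior over the state S
utility = {("A", True): 1, ("A", False): 0,    # act A pays off iff S
           ("B", True): 0, ("B", False): 1}    # act B pays off iff not-S

def eu(act):
    """Prior expected utility of an act."""
    return sum(p_state[s] * utility[(act, s)] for s in (True, False))

# Decide now: take the best act by prior expected utility.
eu_now = max(eu("A"), eu("B"))

# Observe S first, then take the best act in each case, averaging over
# what might be observed.
eu_after = sum(p_state[s] * max(utility[("A", s)], utility[("B", s)])
               for s in (True, False))

print(eu_now, eu_after)  # 0.5 1.0 -- information helps in expectation
```

Note that the calculation quietly assumes the agent is certain to take the expectation-maximizing act in each branch, which is exactly the perfect-rationality assumption that generates the conditioning problem.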

Now, the Theorem is still true as an abstract bit of mathematics. But the application is difficult if we can’t make sense of a perfectly rational agent who is certain to maximize expected utility.

Likely we can extend Good’s Theorem to talk about the limiting case of imperfect agents getting more and more perfect. But it would be nice if we didn’t have to.

Wednesday, April 22, 2026

A nuanced compatibilism and the problem of heavenly freedom

The problem of heavenly freedom is the apparent tension between these two claims:

  1. The blessed in heaven are free

  2. The blessed in heaven cannot sin.

One solution is compatibilism, but as Pawl and Timpe note, this undercuts the Free Will Defense.

But there is another move. One can be a compatibilist and say that while one can have freedom without the ability to do otherwise, nonetheless freedom with the ability to do otherwise is better. If one accepts this version of compatibilism, one can affirm (1) and (2) while yet offering a Free Will Defense.

This, however, leads to an obvious riposte: If freedom with the ability to do otherwise is better, why don’t we have that kind of freedom in heaven? Isn’t heaven supposed to be the best state for us?

One can, however, add another nuance. There are some activities that it is good to have done at some point, but repetition significantly diminishes the value. It is of some value to have read The Murder of Roger Ackroyd. To re-read it, not so much. Or for a religious example, think of the Hajj. Suppose freedom with the ability to do otherwise is like that. Perhaps, then, it is valuable to have made the choice for God with the ability to do otherwise. But a repeat of that choice is of rather lesser value. So much lesser, that if on earth one has made the choice for God with the ability to do otherwise, in heaven the value of doing so again is outweighed by the value of making guaranteed righteous choices.

This is not too different from Pawl and Timpe’s preferred solution of allowing for derivative freedom in heaven. But there may be an advantage to the above solution. Pawl and Timpe’s solution doesn’t solve the problem of infants who go to heaven without being able to make a free choice in this life—they don’t seem to have derivative freedom. (One of my undergraduate students has ably pressed this problem.) The nuanced compatibilism I have suggested can help with that: the infants in heaven genuinely have freedom. Granted, their death has denied them one of the goods proper to earthly life—the good of choosing righteousness with the ability to do otherwise. But that they have lost something by their untimely death is indeed rather intuitive.

We might ask: But why wouldn’t God then give them a chance to make a decision with the ability to choose otherwise after death? Wouldn’t that be better? In one respect, it would indeed be better: in the respect of choosing with the freedom to choose otherwise. But in another respect, it would be less good: in the respect of having the risk of choosing wrongly. These are incommensurable considerations, and God can reasonably follow either one.

Granted, this move weakens the force of the Free Will Defense. We can no longer say that it’s better all things considered for God to give us the kind of freedom that allows us to reject him. For while that’s a better kind of freedom, it comes with an incommensurable cost—the risk that we will reject him. However, we can still say that God can rightly choose to follow either of the incommensurable considerations. In our case, he has opted to give us the better freedom despite the risk; in the case where he has taken some infants to himself, he opted for the guarantee of freedom being rightly used.

I don’t endorse the above solution. But I think it’s possible.

Extending Good's Theorem to experiments and not just observations

Good’s Theorem basically says that a utility-maximizing agent can expect to make decisions that are at least as good if they get more information. (And under some additional conditions, one can expect the decisions to be better.)

Now consider this case:

  1. You will be offered a chance to make a bet at certain odds on the result of a coin toss, where as far as you can tell it’s equally likely that the coin is fair and that it is double-headed. Someone offers to tell you how the previous toss of the coin went.
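
A concrete calculation may help here. The betting odds below are my own illustrative assumption (the case above leaves them unspecified): you may bet $1 on tails at 3:1 odds.

```python
from fractions import Fraction as F

# Illustrative odds (my assumption; the case leaves them unspecified):
# bet $1 on tails at 3:1 -- win $3 if tails, lose $1 if heads.
# Prior: the coin is fair or double-headed, each with probability 1/2.

def p_tails(p_fair):
    # Chance the next toss is tails: 1/2 if fair, 0 if double-headed.
    return p_fair * F(1, 2)

def best_ev(p_fair):
    # A utility maximizer bets only when the bet has positive expected value.
    pt = p_tails(p_fair)
    return max(F(0), pt * 3 - (1 - pt) * 1)

# Decision without the extra information: act on the prior.
ev_without = best_ev(F(1, 2))

# Decision given information about the previous toss:
# tails => the coin must be fair; heads => update the prior by Bayes' rule.
p_prev_tails = p_tails(F(1, 2))                     # 1/4
p_fair_given_heads = F(1, 4) / (F(1, 4) + F(1, 2))  # 1/3
ev_with = (p_prev_tails * best_ev(F(1))
           + (1 - p_prev_tails) * best_ev(p_fair_given_heads))

print(ev_without, ev_with)  # the informed decision is at least as good
```

With these odds the uninformed bettor is indifferent (expected value 0), while the informed one expects to gain a quarter dollar, just as the Theorem predicts.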

Good’s Theorem says your decision whether to make the bet will be at least as good given the information about the previous toss as without that information. Hence, if the information is being announced, you don’t need to cover your ears. This is, of course, very intuitive. But now consider a slightly different case:

  2. Things are set up just as in (1), except now instead of information about the previous toss, you are offered a chance to have the following experiment performed before your decision: the coin will be tossed an extra time and the result will be announced to you.

The difference is that in (2) you are not simply being offered additional information about how things are. For whether you go for the experiment or not, either way, you have full information about the experiment and its results. If you don’t go for the experiment, that full information is that the coin was not tossed an extra time (and hence did not land either heads or tails). If you do go for the experiment, the full information is that the coin was tossed and it landed heads, or else that it was tossed and it landed tails. In (2), you are not just finding out information by going for the deal: you are making something happen—an extra toss—and then finding out something about that.

So you can’t apply Good’s Theorem directly to (2). It would be nice to have a formulation of Good’s Theorem that works in cases where instead of merely finding out information, you perform an experiment.

I initially thought this would be easy. Maybe it is, but I don’t see it. There are, after all, cases where performing a cost-free experiment is not a good idea. Suppose, for instance, that you will be allowed to bet tomorrow that a certain car has more than 10 gallons of gasoline. The experiment is to start up the car and look at the gas gauge. But starting the car reduces the amount of gasoline in it, and one can easily rig the case so that the benefit of the information gained is outweighed by the fact that you have made the bet less favorable.

So, we want to rule out cases where there is dependence between whether you perform the experiment and the payoffs of the wagers. If F is the event of performing the experiment, it may seem initially that we should assume something like:

  3. E(U|WiF) = E(U|WiFc) for all i,

where Wi is your choosing wager i and U is the utility random variable. In other words, the expected utility of each wager is unaffected by whether the experiment has been performed. But no! Suppose a coin has been tossed, and you are choosing between W1 where you get a dollar on heads and W2 where you get a dollar on tails. But let F be the experiment of looking at the coin. (This is a case for the original Good’s Theorem.) Then E(U|WiFc) = 0.50, while E(U|WiF) is very close to 1.00 for the reason that when you find out what the coin is like, you are close to certain to bet on what you see, and hence you are close to certain to win your bet.
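
A minimal numerical rendering of the failure just described (the small misreading chance eps is my own illustrative parameter):

```python
# Why the naive independence assumption fails: the coin has been tossed
# (heads/tails each 1/2); W1 pays $1 on heads, W2 pays $1 on tails.

# Not performing the experiment (not looking): each wager wins half the time.
eu_w1_not_looking = 0.5

# Performing the experiment (looking): you then pick the wager matching what
# you saw, so conditional on looking AND having chosen W1, you almost surely
# saw heads.  Model a small chance eps of misreading (illustrative number).
eps = 0.01
eu_w1_looking = (1 - eps) * 1.0 + eps * 0.0

print(eu_w1_not_looking, eu_w1_looking)  # looking is far better
```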

If F1 is heads and F2 is tails, we solve the problem by replacing (3) with:

  4. E(U|WiFjF) = E(U|WiFjFc) for all i and j.

Namely, the expected utility of wager Wi given information Fj is independent of whether you performed the experiment F. But that only works because it makes sense to ask what the coin is showing if you aren’t looking: it makes sense to conditionalize on Fj ∩ Fc. But in the cases that interest me, there is no fact of the matter as to the result of the experiment when the experiment is not performed, since Molinism is false and we live in an indeterministic world. And in these cases, Fj ∩ Fc is the empty set: the Fj represent the possible results of the experiment but the experiment has no result when it is not performed.

I can get something by supposing a two-step procedure. You perform the experiment, event F, and you learn the result, event L. Then we can assume:

  5. E(U|WiFLc) = E(U|WiFc) for all i

  6. E(U|WiFjFL) = E(U|WiFjFLc) for all i and j

  7. P(Fj|FL) = P(Fj|FLc) for all j.

Assumption (5) says that it makes no difference to the expected utility of a wager whether (a) the experiment is performed but its result is not learned or (b) the experiment is not performed at all. In other words, the experiment itself doesn’t affect things. Assumption (6) says that given a specific experimental result, learning the result makes no difference to the expected utility of each wager–result pair. Assumption (7) says that the results of the experiment are unaffected by whether you learn the result of the experiment.

Without (6) or (7), we wouldn’t expect to get the result we want. If we don’t have (6), it might be that utilities are wildly affected by whether you learn the result. (The simplest case is that the wagers all have a big negative payoff on L.) If we don’t have (7), then learning the result might have some evidential or retrocausal impact on what the result is, and then again we shouldn’t expect that learning the result is a good thing.

Given (5)–(7), I think we can now reason as follows. You are choosing between:

  (i) performing the experiment and learning the results

and

  (ii) not performing the experiment and (hence) not learning the results.

By (5), a rational agent will decide the same way in (ii) as in:

  (iii) performing the experiment and not learning the results,

and the expected utilities of (ii) and (iii) will be the same for this rational agent.

We now apply Good’s Theorem to the choice between (i) and (iii) (we will use (6) and (7) here, and assume the case is non-Newcombian and hence allows the use of Evidential Decision Theory) and get the result that (i) is at least as good as (iii). Since we have indifference between (ii) and (iii), it follows that (i) is at least as good as (ii). (We can also analyze the cases of a strict expected utility inequality.)

This is roundabout, but that’s not my main worry.

What I am really worried about is one technicality. To run the above argument, I had to assume that there is a way of performing the experiment without learning the result, namely that F ∩ Lc is non-empty. In general, however, we cannot assume this. Suppose, for instance, that we have a world with a quantum mechanics where observation causes collapse. Then the experiment of collapsing a wavefunction by means of observation cannot be done without observing the result of the experiment. In such scenarios, I cannot simply introduce a third option of performing the experiment and not learning the results, since that third option may not be consistent with the laws of physics. (And, of course, the utilities for breaking the laws of physics could be wild.)

But without introducing that third option, namely F ∩ Lc, I don’t know how to formulate the independence assumptions that are needed. I also don’t know if the problem is “merely technical” or “deep”. If I had to bet at even odds, I would bet on its being merely technical. But it might be deep.

Monday, April 20, 2026

Consciousness, fine-tuning and skepticism

Models of the emergence of consciousness from a material substrate (whether weak or strong emergence—it won’t matter for this post) differ on how easy it is for consciousness to emerge. Functionalist or computationalist models make it relatively easy: as long as there is a functional isomorphism between a thing and a conscious thing, the former is conscious as well. Biological models, on the other hand, make it harder, by putting constraints on what kind of biological realization of a functional structure gives rise to consciousness.

It’s interesting to note that the more permissive a model of consciousness is, the easier it is to tune the universe to get consciousness, and hence the better the response that can be given to fine-tuning arguments for theism or a multiverse. On the other hand, the more permissive a model, the greater the danger of skepticism from the fact that the buzzing atoms in a random rock have some sort of isomorphism to a human brain, and hence it is not clear that we have good reason to think we’re not rocks.

Conversely, the more restrictive a model of consciousness is, the harder it is to tune the universe to get consciousness. On one extreme, you need brains to be conscious. But brains are a specific type of physical organ in DNA-based life forms, so you need life-forms rather like us to have consciousness, and the fine-tuning needed becomes more stringent. On the other hand, the more human-like conscious things have to be, the less skepticism we have to worry about.

Is there some kind of a Goldilocks zone in the range of theories of consciousness where the fine-tuning is not too onerous and skepticism is not an issue? I don’t know.

Lifetime epistemic value

Suppose I discover some fact that I never end up using for anything, or even occurrently thinking about after the discovery. Now, knowledge is good. If I learn the fact earlier in life, then I will have had the knowledge for a longer period of time. So is it better for me to have learned the fact earlier in life?

I doubt it. Consider two scenarios. On the first, I learn what the capital of Zambia is just before I enter a ten-year coma. On the second, I learn it right after I exit the coma. Learning it before the coma gives me ten more years of knowing it. But that seems a worthless gain. I conclude that in the case of non-occurrent knowing, it doesn’t matter much how long I know.

What about for occurrent knowledge? Other things being equal, if I learn some fact earlier in life, I will occurrently know the fact more times. Is that valuable?

I am less sure. But consider a daily ritual where every morning after waking up, before I am capable of any serious intellectual activity, I think to myself: Sheep have four legs. Thereby, I greatly increase the number of instances in which that piece of knowledge is being occurrently known. Again, this doesn’t seem to be worth the bother.

So it seems that for neither non-occurrent nor occurrent knowledge is there non-instrumental value in knowing the thing for a longer period of time. Of course, there typically is instrumental value in knowing something for a longer period of time, both instrumental epistemic value—you can use it in your intellectual investigations of more things—and often instrumental pragmatic value.

This suggests the following. If an agent never loses knowledge, then the lifetime non-instrumental value of their knowledge depends on what they have come to know, not on when they have come to know it. The analogous thesis for perfect Bayesian agents and scoring rules is that their lifetime epistemic utility is the epistemic accuracy score at the latest point in their lives. (If we apply this to Sleeping Beauty, we are apt to get halving. But we shouldn’t apply this to Sleeping Beauty, as she forgets about her first wakeup.)

Things are more complicated in the case of agents who do lose knowledge, whether to memory loss, irrationality or misleading evidence. If we count such an agent’s lifetime non-instrumental epistemic value based on all that they have ever known, that means that if they lost knowledge of p, there is no gain to them from getting it back. But obviously they are better off epistemically if they do get it back. Things get messy and complicated now. A short-period loss in old age doesn’t seem as bad as a case where you found out something early in life and then didn’t have it for the rest of your life.

This is getting messy.

The epistemic value of experiments

You perform an experiment and are going to rationally update on its results. It seems that you should expect this to be good for your epistemic utility as compared to non-performance of the experiment.

Not always! Silly case: Your boss has tasked you with performing a boring chemistry experiment. If you do the experiment, you will find out very little. But if you don’t do it, you will find out a lot about the range of swear words that your boss knows.

What makes this case silly is that you should really think of it as a choice of which experiment to perform, one in chemistry or one in psychology, and in this case the psychology experiment is the more interesting one.

So if we want to say that an experiment can be expected to improve your epistemic utility, we need to be a bit more careful. We need to ensure that non-performance of the experiment doesn’t itself generate information.

But it always does. At the very least, non-performance of the experiment generates the information that the experiment has not been performed by you. You find out something about yourself, and that might far outweigh the value of anything you find out from the experiment. Granted, you also find out something about yourself by performance of the experiment, but it is easy to imagine cases where what you find out by non-performance is more significant. For instance, it could be that your refusal to perform the experiment shows that you have a very specific and rare personality type, while your performance of the experiment gives you nothing so specific.

Suppose, for instance, that you score your epistemic utility by bits of information. The experiment consists in bending down to see which side an unusual coin lying on the ground is facing—that’s one bit of information. Your prior probability that you will look at the coin is 3/4: you are the sort of person who tends to look. So by looking at the coin, you will gain 1 − log2(3/4) ≈ 1.4 bits, mostly regarding the coin but also a little bit about yourself. By not looking at the coin, you will gain 0 − log2(1/4) = 2 bits, all about yourself. Better not to look!
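
Checking the arithmetic with a direct computation of the surprisal values:

```python
import math

p_look = 3 / 4   # prior probability that you will look at the coin

# Looking: one bit about the coin, plus the (mild) surprisal of the fact
# that you looked.
bits_looking = 1 + (-math.log2(p_look))        # about 1.4 bits

# Not looking: nothing about the coin, but refraining was a 1-in-4 event,
# so learning that you refrained is worth 2 bits -- all about yourself.
bits_not_looking = 0 + (-math.log2(1 - p_look))

print(bits_looking, bits_not_looking)  # not looking scores higher
```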

Of course, there are Newcomb-like issues here.

Lesson: The principle that performing a non-trivial experiment should be expected to improve epistemic utility is going to be difficult to formulate.

Epistemic possibility and the Liar

Here’s a fun Liar paradox involving epistemic possibility. Say that a proposition p is epistemically possible if it is consistent with all you know.

Construct a sentence G such that:

  1. G is true if and only if G is not epistemically possible.

E.g., “The proposition expressed by the first sentence in this post found in quotation marks is not epistemically possible.”

Now, you only know truths, and truth is consistent with truth. Thus:

  2. If G is true, then it is consistent with everything you know.

But G is true if and only if it is not epistemically possible. So:

  3. If G is true, then it is not consistent with everything you know.

Hence:

  4. G is not true.

But now that you’ve seen this argument, you surely are in a position to know G not to be true. Suppose you exploit this and indeed come to know G not to be true. But then we have a contradiction. For if you know G not to be true, then G is not epistemically possible, and hence by (1), it must be that G is true.

A piece of Wordle prehistory

A couple of years ago I helped make a variant on Wordle (same rules, copyright-free vocabulary) for the Nintendo Game Boy (you can play it online here), and I used to play the official version. Since December, my hobby project has been reverse-engineering the computer built into my early 1990s HP 1653B logic analyzer/oscilloscope, and creating an SDK for programming it. Yesterday, I ported Davison's EhBASIC to it, and was trying out various games in Ahl's BASIC games book from the 1970s (1974 DEC version here), based on the EhBASIC ports here.

One of the games I tried last night was Word, credited in the 1974 version of Ahl's book to Charles Reid of Lexington High School. It turns out to have rules very similar to Wordle. It hides a 5-letter puzzle word (there are only 12 in its puzzle vocabulary) and asks you to guess a 5-letter word. Then it shows you which of your letters are correct and in the right position and gives you a list of all the letters that match regardless of position. Basically the same as Wordle. There is no limit on the number of guesses. Here it is running on my oscilloscope. The keyboard is a Mac Quadra keyboard connected via a home-made adapter to the scope's serial port.

Interestingly, Word leaks information that Wordle does not. It generates the list of position-independent matches in the array P by a nested loop in which S is the correct solution and L is the user's input word.


The outer loop goes over the letters in the solution S, in order from left-to-right, and adds the position-independent matches to P. Because P is then later printed as is, this means that you know the order in which the position-independent matches appear in the solution, which leaks information (e.g., if you were to put all the right letters in but in a different order, it would actually print the solution). 

Furthermore, if the solution has n repeats of a letter and your guess has m repeats of the same letter, then it will print that letter n·m times, and you thus know exactly how many times the letter appears in the solution. Whether this is a bug or just an interesting mechanic depends presumably on what Mr. Charles Reid was thinking half a century ago. (Moreover, if n·m > 7, the program will crash, because only 7 slots were allocated in the S array. But I think there is no combination of a word from the game's 12-word vocabulary and an English five-letter guess that will result in more than 7 slots being occupied.)
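
Here is a Python sketch of the matching logic described above (my reconstruction of the behavior, not Reid's original BASIC): the outer loop walks the solution left to right and records every position-independent match, duplicates and all.

```python
def word_feedback(solution, guess):
    # Both words are assumed to be 5 letters, as in Word.
    exact = ["-"] * 5     # right letter in the right position
    matches = []          # position-independent matches, in solution order
    for i, s in enumerate(solution):
        if guess[i] == s:
            exact[i] = s
        for g in guess:
            if g == s:
                matches.append(s)  # one entry per (solution, guess) letter pair
    return "".join(exact), "".join(matches)

# An anagram guess reveals the solution's letter order outright:
print(word_feedback("BEANS", "NABES"))   # ('----S', 'BEANS')

# Repeated letters multiply: 2 S's in each word yield 2*2 = 4 S's:
print(word_feedback("SPOTS", "POSTS"))   # ('---TS', 'SSPOTSS')
```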

UPDATE: I've been scooped. And you can play the original game in your browser.

Thursday, April 16, 2026

A method for living forever

Maybe you have a cancer that would kill you in three months.

So, get a powerful rocket.

Accelerate close to the speed of light, and make a one light-year round-trip journey that from your reference frame takes about a month, but takes slightly over a year from the point of view of the earth. If your speed during the first journey was v₁, now repeat the same trip with a speed of v₂ = √(3c² + v₁²)/2. Then repeat with a speed of v₃ = √(3c² + v₂²)/2. And so on, forever.

Fact: Each journey will take a bit more than a year of earth-time but only half of the you-time of the previous. So the total you-time of your journeying will be 1 + 1/2 + 1/4 + 1/8 + ... = 2 months. You’ll never die. At every future time, you will be alive.
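
A numerical check of the recursion, in units where c = 1; the starting speed is my own choice, picked so that the first trip's you-time is about a month.

```python
import math

def proper_time_years(v):
    # One light-year of total path at speed v: earth time 1/v years,
    # you-time shortened by the Lorentz factor sqrt(1 - v^2).
    return (1 / v) * math.sqrt(1 - v * v)

v = 0.99655           # chosen so the first trip's you-time is about 1/12 year
taus = []
for _ in range(5):
    taus.append(proper_time_years(v))
    v = math.sqrt(3 + v * v) / 2   # v_{n+1} = sqrt(3c^2 + v_n^2)/2 with c = 1

for n, tau in enumerate(taus, 1):
    print(f"trip {n}: you-time {tau:.6f} yr")
# Each you-time is (a shade under) half the previous, so the total converges.
```

The recursion exactly quarters 1 − v²/c² on each step, which is what halves the proper time while keeping the earth time just over a year.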

But this is pointless. You might as well stay on earth, and then you’ll have three months of you-time. Three months of you-time followed by death is better than two months of you-time with no death.

A Christian argument against eternalism, with some remarks on "finite" and "infinite"

  1. We have an infinite future.

  2. If eternalism is true, then anything that has an infinite future is infinite.

  3. We are finite.

  4. So, eternalism is not true.

The crucial premise is 2. One thought behind 2 is that our best version of eternalism holds that we are four-dimensional, and if we have an infinite future, that makes us infinite in the fourth dimension.

But I think we can do better than that. Plausibly, part of what we mean by “We have an infinite future” is that we will have infinitely many token future mental states (if not, add that to the premises). On eternalism, all these mental states exist. And they are clearly all ours. So if we have an infinite future, we have an infinite mental life, and that is a way of being infinite.

I am an eternalist, and I want to affirm 1 and 3. What can I do? One move is this. The relevant sense of “finite” in 3 is not a mathematical sense, but something more “metaphysical” like limited. Now, to be limited is to have one or more limits. This is quite compatible with there being respects in which we lack a limit. Thus, the charged infinite rod that sometimes figures in physics homework has limits: not limits of length, but limits of width and height (and others). In the metaphysical sense, then, the rod is finite. Likewise, then, even if we are temporally infinite or infinite in the number of mental states, we are still limited in other ways.

If we go for this move, we have to make a choice what to mean by “infinite”. We could say that something is infinite provided there is some respect in which it is unlimited. If we did that, then one thing could be finite and infinite—as long as it is limited in one way and unlimited in another. The “infinite rod” would then be both finite and infinite. And, if eternalism is true and there is an eternal afterlife, we are finite and infinite. On this take, the argument is invalid, because it is missing the assumption that nothing is both finite and infinite.

A second option is to make “infinite” mean unlimited in all respects. In that case, we are finite and not infinite. Indeed, only God is infinite then. A set with what the mathematician calls “infinite cardinality” is limited by not having a greater cardinality than the one it has.

A third option would be to take “finite” to mean limited in every way, “infinite” to mean unlimited in all respects, and then allow for the possibility of things that are neither finite nor infinite—perhaps us.

Wednesday, April 15, 2026

Anti-Lucretian preferences

Lucretius famously argued that non-existence at the end of one’s life is no more to be feared than non-existence before the beginning of one’s life. Nagel famously argued that there is an asymmetry. One could exist later than one will but one couldn’t have existed earlier than one did. I think he’s barking up the wrong tree. Death wouldn’t be less scary if it turned out to be metaphysically inevitable.

But in any case, I think there is a way to prescind from the metaphysical questions. You’ve just woken up after an operation. You have amnesia. You expect the amnesia to wear off—somehow you have knowledge of how such things go. But for now you have it. You look through some files a careless actuary left lying about. You expect one of these files is about you. The files describe these cases:

  • 35:20. Thirty-five-year-old expected to live twenty years more.

  • 30:20. Thirty-year-old expected to live twenty years more.

  • 20:30. Twenty-year-old expected to live thirty years more.

  • 30:30. Thirty-year-old expected to live thirty years more.

  • 20:20. Twenty-year-old expected to live twenty years more.

You can’t, of course, choose which of these is you, but you can have hopes and preferences. And suppose you think there is no afterlife.

My own preferences would be:

  • 30:30 > 20:30 > 35:20 > 30:20 > 20:20.

I consistently have a preference for a longer future other things being equal, and a longer past other things being equal, but I tend to prefer a longer future to a longer past even if that results in a somewhat shorter overall life.

But only to a point. Suppose another file is:

  • 50:28.

I definitely would greatly prefer that over 20:30, and not insignificantly over 30:30. The reason is that it seems quite a lot better to live 78 years than 50 or 60, even at the cost of two years of future life.

In any case, as regards my own preferences, Lucretius is just wrong. I would want more of a past life. Though to some degree my intuitions are distorted by the thought that in a longer life I am more likely to have more meaningful achievements.

What worries me philosophically about all this is whether I can reconcile my preferences with my belief in the B-theory of time. I think I can. It makes sense to me that the preferences I have at t should have a relationship to where t is located in my life.

Fear of death is not exactly fear of death or being dead

You don’t believe in the afterlife. Your doctor tells you that you will die in a week. You are terrified. A couple of minutes later, the doctor comes back, herself looking terrified. She tells you that she has both good news and bad news. The good news is that she had misdiagnosed you—you are just fine. However, the bad news is that her sister, who is a cosmologist, has just discovered that everything—the universe, space and time—is coming to an end in a week. (She begs you not to tell anyone, because that will cause a panic.)

Out of nerdy curiosity, you ask the doctor whether there will be a last moment of time. She says that the same question occurred to her, and there won’t be. The interval of time is open on the upper end: for every time t, there is a later time t′. It’s just that time is literally running out, and all the remaining times are less than about a week from now.

With grim amusement you note that you won’t die. For at every time in the future you will be alive, and there won’t even be a last time which one might want to identify as the “time of death”.

You reflect. It’s a bit of a plus that none of your friends will suffer from your death, but a big minus that they all have only a week left. In any case, there is no relief from fear of death.

I think this case shows that it’s not death or being dead that we fear when we don’t believe in an afterlife. We fear the fact that our future is finite. If this is right, then people like Lucretius who thought that we somehow confusedly imagined ourselves as existing after the end of our existence and that this was what explained the fear of death are likely mistaken.

A nearly equivalent version of the above thought experiment would be one where you find out that you’re going to live for an infinite amount of time, but your life will exponentially slow down. In the next week of life, you will experience half a subjective week. In the week after that, you will only experience a quarter of a subjective week, and then an eighth and so on. Your subjective future will be a week. But you will never die. That’s just as bad as permanently dying.

Tuesday, April 14, 2026

A problem with perfectly rational agents and decision theory

Suppose I am perfectly rational in the decision theoretic sense. A coin is about to be tossed, and I will get five dollars on heads (H) and one dollar on tails (T). I have a choice whether to leave the coin fair (F) or load it (L) in favor of tails so that the probability of tails is 3/4.

It is obvious what I do. I calculate the expected utilities of my options F and L as follows.

  • EU(F) = P(H|F) ⋅ $5 + P(T|F) ⋅ $1 = (1/2) ⋅ $5 + (1/2) ⋅ $1 = $3

and

  • EU(L) = P(H|L) ⋅ $5 + P(T|L) ⋅ $1 = (1/4) ⋅ $5 + (3/4) ⋅ $1 = $2.

And then I choose F.
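
The two calculations above, spelled out as a trivial check:

```python
payoff = {"H": 5.0, "T": 1.0}   # $5 on heads, $1 on tails

def eu(p_heads):
    # Expected utility of a coin arrangement with the given heads probability.
    return p_heads * payoff["H"] + (1 - p_heads) * payoff["T"]

eu_fair = eu(1 / 2)   # leave the coin fair:  $3
eu_load = eu(1 / 4)   # load it toward tails: $2
print(eu_fair, eu_load)  # 3.0 2.0
```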

Except it’s not so simple. For I am perfectly rational. But since, as we just saw, the perfectly rational agent has to choose F, it follows that P(L) = 0, and so P(H|L) and P(T|L) are undefined. So I can’t decide! So now there is no guarantee how I will act, and P(H|L) and P(T|L) once again make sense. And then again they don’t. Oops!

What can be done? Causal decision theorists will note that I reasoned like an evidential decision theorist above. But this makes no difference in this case. The causalist’s story will be a bit more complicated but will end up with the same problem.

We might want to introduce primitive conditional probabilities like Popper functions that let you conditionalize on events with zero probability, and then have P(H|L) = 1/4 and P(T|L) = 3/4, even though P(L) = 0. But that is introducing a lot of complications. Primitive conditional probabilities are not unproblematic.

What should we do? Maybe we should suppose something like primitive suppositional decision theory, where what we are primitively given are the suppositional probabilities PF and PL, without them being defined in terms of conditional and unconditional credences as in evidential and causal decision theories. But this seems problematic. Do we have to suppose that in addition to conditional and unconditional credences, we have suppositional credences? Maybe.

Or perhaps decision theory only applies to agents that have non-zero credences of going for all the options.

Monday, April 13, 2026

An argument from wonderful people

  1. Person x appears to be in the image of God.

  2. So, probably, x is in the image of God.

  3. If God doesn’t exist, no one is in the image of God.

  4. So, God exists.

From the imago Dei to God

  1. It is not inappropriate to have a level of respect for human beings at least as great as what would be fitting for beings in the image and likeness of God.

  2. If naturalism is true about humans and God does not exist, then it is inappropriate to have a level of respect for human beings that is at least as great as what would be fitting for beings in the image and likeness of God.

  3. So, either naturalism is false about humans or God exists (or both).

Regarding premise 1, think about how problematic it would be to say that someone like Mother Teresa had too much respect for her fellow human beings.

Regarding premise 2, naturalism tells us that innately we’re just an arrangement of atoms, and if we add to that that God doesn’t exist, then this arrangement of atoms doesn’t have a special God-directed significance, so it seems inappropriate to bestow on us the level of respect that a being in the image and likeness of God would have.

I think one can strengthen the argument to provide additional evidence for the existence of God. If God doesn’t exist, then the only plausible way that humans could deserve the imago Dei level of respect is if human beings have a deep and very valuable reality going far beyond the neural networks in our brains, a reality that intrinsically calls for that very high level of respect. (If there is a God, then we don’t need quite as much intrinsic value for us to be worthy of that kind of respect, because we could derive value from our relation to the infinite God.) This is much more than ordinary non-naturalisms about consciousness give.

We thus learn from considerations of respect that if there is no God, humans need to be very non-natural in a god-like way. And beings like that are very hard to explain apart from God. So if we are beings like that, this provides significant evidence for theism.

A double lottery and non-normalized probabilities

Suppose a positive integer N is generated by a fair lottery.

Then, a random integer K is chosen uniformly between 1 and N (inclusive).

What information does this give you about N?

Obviously you now know that N ≥ K. Anything else?

Consider some specific pair of numbers n ≥ k, and suppose we’ve found out that K = k. What’s the probability that N = n? Of course, P(N=n|K=k) = 0/0. But what if we approach this with a limiting procedure? Suppose first that N is chosen uniformly between 1 and M where M ≥ n, and let P_M be the probabilities for this case. Then

  • P_M(N=n|K=k) = (1/M)(1/n) / [(1/M) Σ_{j=k}^M 1/j] = (1/n) / Σ_{j=k}^M 1/j.

Take the limit as M goes to infinity. Since Σ_{j=k}^∞ 1/j = ∞, the limit is zero, so we don’t have a meaningful distribution for N.

On the other hand, what if we independently choose two random integers K_1 and K_2 uniformly between 1 and N? Suppose n ≥ k_i for i = 1, 2. Let k* = max(k_1, k_2). Then:

  • P_M(N=n|K_1=k_1, K_2=k_2) = (1/M)(1/n²) / [(1/M) Σ_{j=k*}^M 1/j²] = (1/n²) / Σ_{j=k*}^M 1/j².

Take the limit as M → ∞ and call that P(N=n|K_1=k_1, K_2=k_2). The limit equals (1/n²) / Σ_{j=k*}^∞ 1/j², which is of the form c/n² for a constant c > 0 (depending on k*), and it generates a well-defined probability distribution for N.

With zero samples, we don’t have a well-defined probability for N. With one sample, we still don’t. But with two samples (or more), now we do. This is a rummy thing: how is it that sampling turns probabilistic nonsense into sense?
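The contrast between the one-sample and two-sample limits is easy to check numerically. Here is a minimal sketch (the function names are mine, not the post’s), using exact rational arithmetic for the finite-M posteriors:

```python
from fractions import Fraction

def post_one(n, k, M):
    """P_M(N=n | K=k) for N uniform on 1..M: (1/n) / sum of 1/j for j = k..M."""
    return Fraction(1, n) / sum(Fraction(1, j) for j in range(k, M + 1))

def post_two(n, kstar, M):
    """P_M(N=n | K1=k1, K2=k2) with k* = max(k1, k2):
    (1/n^2) / sum of 1/j^2 for j = k*..M."""
    return Fraction(1, n * n) / sum(Fraction(1, j * j) for j in range(kstar, M + 1))

# One sample: the posterior of N=5 given K=3 drains away as M grows,
# because the harmonic tail in the denominator diverges.
print([float(post_one(5, 3, M)) for M in (10, 100, 10000)])

# Two samples: the posterior of N=5 given k*=3 converges to a positive limit,
# because the sum of 1/j^2 converges.
print([float(post_two(5, 3, M)) for M in (10, 100, 10000)])
```

As M grows, the first list decreases toward zero while the second stabilizes around a positive value, matching the c/n² limit above.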

This is making me more friendly to using non-normalized probabilities. After all, the fair lottery for N is easily modeled by the constant probability p0(n) = 1. With one sample K = k, we have p1(n) = 1/n for n ≥ k and p1(n) = 0 for n < k. With two samples k1, k2, we have p2(n) = 1/n² for n ≥ max(k1, k2) and p2(n) = 0 otherwise. All this makes perfect sense. And there is a lovely mathematical feature of non-normalized probabilities: conditionalization is conjunction. The conditional probability of an event A on event B is just the probability of A ∩ B.
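This bookkeeping is easy to mimic in code. In the sketch below (the names are mine), the non-normalized probabilities are represented as density functions, and each sample simply multiplies in its likelihood:

```python
def p0(n):
    """Fair lottery on the positive integers: constant non-normalized density 1."""
    return 1.0

def p1(n, k):
    """Density after one sample K = k: multiply in the likelihood 1/n, zero below k."""
    return p0(n) / n if n >= k else 0.0

def p2(n, k1, k2):
    """Density after a second independent sample K = k2: another factor of 1/n."""
    return p1(n, k1) / n if n >= k2 else 0.0

# p0 and p1 have infinite total mass, but p2 is summable, so it can be
# normalized into an ordinary probability distribution for N:
total = sum(p2(n, 3, 4) for n in range(1, 100000))
print(total)  # ≈ 0.2838, the tail of the sum of 1/n^2 from n = 4
```

Conditionalization-as-conjunction shows up here too: conditioning a density on an event is mere restriction of the density to that event, with no renormalization step.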

Non-normalized probabilities aren’t going to solve all problems with infinite fair lotteries. For instance, I toss a fair coin and generate a number N with the following rule. On heads, I choose N with my fair lottery on the positive integers. On tails, I choose N such that the probability of N = n is 2^{−n} (e.g., I toss an independent fair coin repeatedly and let N be the number of the first toss that gives heads). What’s my non-normalized probability p(x,n), where x is heads or tails and n is a positive integer? We surely want Σ_n p(H,n) = Σ_n p(T,n): the total probability of the heads options equals the total probability of the tails options. But clearly p(T,n) has to decrease exponentially, so Σ_n p(T,n) is finite and non-zero. On the other hand, p(H,n) is constant, so Σ_n p(H,n) is zero or infinity. So they can’t be equal.

But I wonder if one could say something like this: Non-normalized probabilities make sense in certain cases, and in those cases it’s reasonable to use them?

Thursday, April 9, 2026

Predictability and epistemic utility

You’re deciding whether to become an assembly-line worker or an artist. Then you reflect on the value of knowledge. And you become an assembly-line worker, on the grounds that if you do, you will know what you’ll be doing every working day of your future, but if you’re an artist, your activities will be unpredictable.

Some remarks. First, there is something perverse about using the value of knowledge in this way. The normal way to pursue the value of knowledge is to find out things that are independent of your pursuit. But here you are pursuing knowledge by making there be less to know about the world (or your world). Yet, paradoxically, it sure seems like the line of thought above makes sense.

Second, my initial story depends on Molinism being false. For if there are comprehensive subjunctive conditionals of free will, then by becoming an artist you get to know the conditionals about what you would do in the various artistic situations you’re in. But on the assembly-line story, you don’t get to know these. So the Molinist doesn’t have the paradox. I suppose that’s a bit of evidence for Molinism.

Life-time epistemic utility

I’ve been thinking about the diachronic aspects of epistemic utility. In the case of non-epistemic utility, we can get a decent first approximation to life-time utility by adding up (or, if time is continuous, integrating) momentary utility. But I think this works less well for the epistemic case. For many things of purely epistemic importance, figuring them out is much more important than when one figures them out. (Granted, figuring them out earlier is instrumentally epistemically valuable, because it gives one more time to leverage the knowledge to figure out other things.)

Here’s an extreme version of encoding the value of “figuring something out”. Assuming one does not suffer from mental decline, the epistemic value of one’s life is the epistemic value of the very last moment of it. It’s interesting to note that this won’t work. For imagine that no matter what other credences you had at a given time, you always set the credence of “This is the last moment of my life” to one, while being careful to (inconsistently) make no use of this credence in updating. If only the last moment counts, this modification to your credences would be a good idea: it makes sure that when the last moment comes, you get the epistemic utility credit for it.

I suspect that other weightings that favor later over earlier beliefs will suffer from a similar problem—they make it a good idea to err on the side of pessimism about how close death is.

But at the same time, I think some sort of favoring of later beliefs over earlier ones seems appropriate. I don’t know how to resolve this difficulty.

Gender/sex minimization and Christian sexual ethics

Here is a way to live a life: Generally strive to minimize the number of cases where one’s having a particular sex or gender functions as a reason for one’s actions or emotional attitudes.

An extreme version of this is not compatible with traditional Christian sexual ethics unless one is planning on celibacy. It is also not compatible with American law which requires one to correctly fill out various forms, such as census forms, that ask what one’s sex is. And it is not compatible with common-sense morality which requires one to respect things like sex- or gender-segregated bathrooms.

A moderate version of such a gender-minimizing practice, however, could be perhaps sustainable within the bounds of traditional Christian practice, American law and common-sense morality.

One might think that given the heterosexualism of traditional Christian practice, it is hard in romantic contexts to avoid basing decisions on reasons like “I’m a man and she’s a woman.” In this post I want to explore the idea that one could instead base one’s romantic actions and attitudes on: “We are an opposite-sex pair.”

One might object: “That’s cheating. The reason why the two are an opposite-sex pair is that one is a man and the other is a woman.” But this is “reason” in a different sense of “reason” from that of reasons for actions and attitudes. That one is a man and the other is a woman is a metaphysical ground of the two being an opposite-sex pair, and it may well be one’s epistemic reason for thinking the two are an opposite-sex pair. But it need not be one’s reason for, say, asking the other out on a date—the reason for asking the other out on a date could just be “we are an opposite-sex pair”, and of course the delights of the other’s person, even if the evidence and metaphysical ground for “we are an opposite-sex pair” is the more fine-grained fact that one is a man and the other is a woman.

One could do the same thing when discerning a vocation to the priesthood. Instead of thinking “I’m a man, so I should consider the priesthood”, one might think “I am of the opposite sex to the symbolic sex of the Church (which in turn is the opposite sex to the sex of the incarnate Word), so I should consider the priesthood.”

Would formulating one’s reasons for action in such an unusual way have any benefits? I think so. “We are an opposite-sex pair” focuses one on a relation between the persons. On Trinitarian grounds, there is reason to think that relationality is central to personhood. Half of “I am a man and she is a woman” is self-focused. Better to use “we” than “I” in romantic thinking.

Moreover, we perhaps shouldn’t focus on what is morally irrelevant to a decision. Suppose two people are a good romantic match in terms of character traits, interests, etc., and one is male and the other is female. Supposing (perhaps per impossibile) that their sexes were swapped, but their character traits stayed the same, plausibly they would still be a good romantic match. That they are of the opposite sex is relevant romantically, assuming traditional Christian sexual ethics. But perhaps which one is a man and which one is a woman is not very relevant—what’s relevant is that the couple has one of each sex.

The picture in the above exploration is that the significance of men and women is largely relational: there is a relationship possible to a man and a woman that is not possible to two men or to two women. This is presumably because a man and a woman are an opposite-sex pair, or a potential mating pair, or something like that. This fact about a man and a woman is a relational fact. Granted, this relational fact is metaphysically grounded in certain biological features of the man and the woman, which features may not be themselves relational (say, the existence of body parts with a certain shape, or at least of activated genetic coding for them; though even there the teleology of the parts is relational). But even if the features are not themselves metaphysically relational, their ethical significance could still be largely relational.

Of course, in the end, the physical consummation of love in marital union will require each party to pay attention to the sexed nature of their own and the other’s bodies. That’s unavoidable. But perhaps that’s just a detail? I doubt it’s just a detail myself, but I could very well be wrong.

Another objection to the above story is that in love we focus on the specific features of the other, and in romantic love this includes sex-linked physical features. So when Juliet loves Romeo, she does not love him just as “someone of the opposite sex with character traits T1, ..., Tn”, but also as someone with a rich set of lovely physical features, for which it is important that he is male, as many of them would be aesthetically and biologically unfitting in a woman. Agreed! But that doesn’t mean that Juliet’s own femaleness needs to be a part of her reasons for loving Romeo. Instead, she can love him as “someone of the opposite sex to me with character traits T1, ..., Tn, and with physical features Φ1, ..., Φm which are splendidly fitted to his maleness.” Of course, that Romeo is of the opposite sex to Juliet and that Romeo is male implies that Juliet is female, but even though it implies this, it need not be a part of her reasons for love. I do like the thought that we should minimize focus on self in other-love.

Perhaps the above story isn’t right. I don’t endorse it. It’s entirely hypothetical. I find some features of the story attractive, but my credence in the story is well below 50%. There may well be ethically significant non-relational features of being male and being female. Perhaps swapping the sexes of a well-matched romantic couple might in fact change whether they are a good match.

For instance, maybe there are specifically male virtues and specifically female virtues, and then swapping sexes while keeping character the same produces someone who lacks virtues that they should have. Maybe, but I am inclined to be skeptical of this suggestion. A more moderate view would be C. S. Lewis’s: that although men and women should have the same virtues, the lack of certain virtues in a man is worse than their lack in a woman, and vice versa. That has more of a chance of being right. I could imagine future scientific research telling us that the typical hormonal make-up of men tends to make some virtues easier for them and the typical hormonal make-up of women tends to make other virtues easier for them. Lacking an easier virtue seems worse than lacking a harder virtue, other things being equal. If so, then if you swapped sexes while keeping characters unchanged, the moral evaluation could change. Perhaps Alice and Bob are respectively a decent woman and a decent man, but Alice would make a terrible man and Bob would make a terrible woman. Maybe. But if they both got worse symmetrically in this way, while keeping the same actual virtues, maybe it wouldn’t make a big difference to their romantic relationship. They’d still have the same virtues between the two of them, and the same vices between the two of them; it’s just that the evaluation of these virtues and vices would be a bit different. And in any case, on a story like the hormonal one, it’s not clear that the specific sexes matter except causally, as tending to produce the hormones.

My thinking on this was sparked by two things. First, for a while I’ve been exploring imagining what life would be like if we were isogamous heterothallic organisms, which of course we are not. Second, I recently attended the defense of a very interesting dissertation arguing among other things that it is compatible with Catholic orthodoxy to hold that gender (though not biological sex) distinctions are a major result of original sin.

Epistemic utilities and death

In the previous post, I proved that we get a proper scoring rule if we compute epistemic utilities as follows. We start with our current credence assignment, consider what credence assignment we will have in the future after we update on some further evidence, and then score that. I then suggested that one could get a lifetime epistemic utility by adding up the epistemic utilities over all the moments of life, and as long as death wasn’t random—as long as the lifespan was fixed—this would generate a proper scoring rule. I then said that if death is random (as it is) it might be the case that you don’t get a proper scoring rule.

My conjecture was wrong. You still get a proper scoring rule despite random death. It’s easy to see why. Suppose that you might die the next moment. This partitions the probability space into two subsets, D and L, for death and life. Your current credence is p. Next moment, on D, your credence either doesn’t exist (because you don’t exist, or you exist in some supernatural state where you don’t have credences) or doesn’t count (because I am only after the credences had during one’s lifetime). Thus, the appropriate way to do a forward-looking scoring of your credence p is to score it s(p_L) on L and 0 on D, where p_L is the result of conditionalizing your credence on the evidence L (after all, if you are alive, you will conditionalize on being alive), and s is some proper scoring rule. In other words, your forward-looking score is s_L(p) = 1_L ⋅ s(p_L).

Is this score proper? Yes! For by propriety of s we have:

  • E_{p_L}(s(p_L)) ≥ E_{p_L}(s(q_L)).

But this is the same as:

  • p(L)^{−1} E_p(1_L s(p_L)) ≥ p(L)^{−1} E_p(1_L s(q_L)).

Multiplying both sides by p(L) (I am assuming a non-zero probability of survival), we get:

  • E_p(s_L(p)) ≥ E_p(s_L(q)).

We can combine this with a more complex set of future investigations as in the previous post, and things will still work.

It is crucial to the above argument that when you’re alive, you can tell you’re alive. I suppose that’s not always true. When you’re asleep, you are alive, but can’t tell you’re alive. So to generalize beyond the above toy example, replace death with unconsciousness or something like that.
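The argument can be checked numerically. Here is a small sketch of my own (the three-world space and the choice of the Brier score as the proper rule s are illustrative assumptions, not from the post):

```python
import random

omega = ["dead", "alive1", "alive2"]
L = {"alive1", "alive2"}          # the survival event

def brier(p, w):
    """Negative Brier accuracy score of credence p (a dict over omega) at world w."""
    return -sum((p[x] - (1.0 if x == w else 0.0)) ** 2 for x in omega)

def condition(p, event):
    """Conditionalize p on an event (assumed to have positive probability)."""
    z = sum(p[x] for x in event)
    return {x: (p[x] / z if x in event else 0.0) for x in omega}

def s_L(p, w):
    """Forward-looking score: 0 on death, Brier score of the L-conditioned credence on L."""
    return brier(condition(p, L), w) if w in L else 0.0

def expect(p, score, q):
    """Expected score of holding q, by the lights of p."""
    return sum(p[w] * score(q, w) for w in omega)

p = {"dead": 0.2, "alive1": 0.5, "alive2": 0.3}
random.seed(0)
for _ in range(1000):
    a, b = sorted(random.random() for _ in range(2))
    q = {"dead": a, "alive1": b - a, "alive2": 1 - b}
    assert expect(p, s_L, p) >= expect(p, s_L, q) - 1e-12
print("propriety inequality held against 1000 random rivals")
```

By p’s own lights, no rival credence q does better under the forward-looking score s_L, just as the multiplication argument predicts.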

Forward-looking scoring rules

An accuracy scoring rule assigns a score to a probability function representing an agent’s credences, ostensibly measuring how close that probability function is to the truth. The score s(p) of a probability function p is a random variable, because the value of the score depends on what is actually true, i.e., on where we are in the probability space.

A proper scoring rule (on probabilistic credences) satisfies the propriety inequality

  1. E_p s(p) ≥ E_p s(q)

which says that the expected score of your current credences p, by your current lights, is optimal: you won’t improve your expected score (by your current lights) by switching to a different credence q.

You can think of a proper scoring rule as representing the epistemic utility of having a credence p.
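As a quick sanity check of (1), here is a minimal sketch of my own, using the negative Brier score for a single proposition and testing rival credences on a grid:

```python
def brier(c, truth):
    """Negative Brier score of credence c in a proposition whose truth value is 0 or 1."""
    return -((c - truth) ** 2 + ((1 - c) - (1 - truth)) ** 2)

def expected_score(p, q):
    """Expected score of holding credence q, by the lights of credence p."""
    return p * brier(q, 1) + (1 - p) * brier(q, 0)

# No rival credence q on a grid beats p by p's own lights:
p = 0.3
assert all(expected_score(p, p) >= expected_score(p, q / 100) for q in range(101))
```

The Brier score is the standard example of a strictly proper rule; the grid check above is illustration, not proof.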

But now let’s think about things dynamically. In the future, you will receive additional evidence. As a good Bayesian agent, you will update on this evidence by conditionalization. Perhaps instead of thinking about maximizing your current score, you should think about maximizing your future score. Maybe your true epistemic utility is the score you will end up with after all the future evidence is in.

A simple model of this is as follows. There is some finite partition I = (I1,...,In) of your probability space Ω, with each cell Ii of the partition representing a possibility for what you might learn given future evidence. Your current credence function is p, and p(Ii) > 0 for all i. There is then a random credence function pI, where pI(ω) is the credence function you will have once the evidence is in if you are at ω ∈ Ω. In other words, pI(ω)(A) = p(A|Ii) where Ii is the member of the partition that contains ω. (Technically, the function that maps ω to pI(ω)(A) is a version of the conditional probability p(A|G), where G is the algebra generated by I.)

Now, given a proper scoring rule s, define a new scoring rule sI as follows:

  1. sI(p)(ω) = s(pI(ω))(ω).

Your sI-score for p at ω then represents the score you will have at ω once you learn which cell of the partition I you are in.

Theorem: The scoring rule sI is proper if s is proper.

Note that sI won’t be strictly proper (i.e., (1) won’t always have strict inequality when p and q are distinct) if I has two or more cells, because pI and qI are going to be the same if p and q assign different probabilities to the cells, but have the same conditional probabilities on each cell. But it might still be the case that sI is strictly proper with respect to some relevant subfield of Ω—that needs some further investigation.
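Both the Theorem and the failure of strictness can be illustrated numerically. Here is a sketch of my own (the four-point space, the two-cell partition, and the Brier choice for s are illustrative assumptions):

```python
import random

omega = [0, 1, 2, 3]
cells = [{0, 1}, {2, 3}]          # the partition I

def brier(p, w):
    """Negative Brier accuracy score of credence p (a list over omega) at world w."""
    return -sum((p[x] - (1.0 if x == w else 0.0)) ** 2 for x in omega)

def cell_of(w):
    return next(c for c in cells if w in c)

def condition(p, event):
    z = sum(p[x] for x in event)
    return [p[x] / z if x in event else 0.0 for x in omega]

def s_I(p, w):
    """Score, at w, the credence you'd have after learning which cell contains w."""
    return brier(condition(p, cell_of(w)), w)

def expect(p, q):
    """Expected s_I-score of holding q, by the lights of p."""
    return sum(p[w] * s_I(q, w) for w in omega)

p = [0.1, 0.2, 0.3, 0.4]

# Propriety: no random rival beats p by p's own lights.
random.seed(1)
for _ in range(1000):
    cuts = sorted(random.random() for _ in range(3))
    q = [cuts[0], cuts[1] - cuts[0], cuts[2] - cuts[1], 1 - cuts[2]]
    assert expect(p, p) >= expect(p, q) - 1e-12

# Failure of strictness: a rival with the same within-cell conditional
# probabilities but different cell probabilities ties p exactly.
q_tie = [0.2, 0.4, 0.4 * 3 / 7, 0.4 * 4 / 7]
assert abs(expect(p, p) - expect(p, q_tie)) < 1e-9
```

The tie at the end is exactly the phenomenon noted above: sI cannot distinguish credences that agree conditionally on each cell.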

Suppose now you are a Bayesian agent who is guaranteed to consciously live for n moments. In each moment, new information comes in. Thus, we have a sequence J0, ..., Jn of finer and finer partitions, with J0 being the trivial partition, and with pJk representing the credence you will have at time k. Your overall epistemic lifetime score is then:

  1. sΣ(p) = Σ_k sJk(p).

It follows from the Theorem that sΣ is a proper scoring rule if s is. And since J0 is the trivial partition, sJ0 = s, and so if s is strictly proper, then the lifetime score sΣ is strictly proper, since the sum of a strictly proper rule and a proper rule is strictly proper. So, in the above toy model, lifetime scores are strictly proper if they are constructed from a strictly proper instantaneous score.

Alas, the toy model is not fully adequate, because it is random when we will die, and so our lifespan doesn’t have a fixed sequence of moments. Once we take into account the randomness of when we will die, the overall epistemic lifetime score might stop being proper: this needs further investigation.

Proof of Theorem: By the Greaves and Wallace Theorem, an optimal method of updating credences with respect to expected proper score is by Bayesian conditionalization. Apply the Greaves and Wallace Theorem to the scoring rule s and the starting credence p with the following two strategies:

A. Bayesian conditionalization on the true cell of I.

B. Switch your credence from p to q, then apply Bayesian conditionalization on the true cell of I.

Saying that (A) is at least as good as (B) is equivalent to the propriety inequality (1) for sI.