Thursday, April 23, 2026

Purgatory and its alternatives

I was reading Jerry Walls’ lovely piece on purgatory for class. Thinking about it has made me realize that given that all who are in heaven are morally perfect, and almost nobody is morally perfect before death, we have the following options:

  1. Almost no Christians end up in heaven.

  2. There is purgatory after death during which character changes.

  3. There is instant and radical character change at the moment of death.

  4. There is a temporally extended and empirically invisible sanctification just before death, probably with time being subjectively stretched.

I think it’s tempting to think of purgatory as an odd Catholic addition to Scripture (though there is 1 Cor. 3:15, of course)—maybe even for a Catholic to think that. But consider the other options.

Option (1) is super pessimistic. It doesn’t let the Gospel really be the Good News it is.

Option (3) is at least as much a theological addition to Scripture as purgatory may seem to be—perhaps more so. It’s compatible with Scripture that there is such a sudden moral transformation, but so is purgatory, and both of them are major divine actions going above and beyond what is expressly given by Scripture. Both are surprising, I suppose. Of the two, however, the instant moral transformation seems a lot less in keeping with God’s usual way of proceeding with us. Presumably, being instant, this moral transformation is not something we could have much cooperation in. And it feels a bit odd to think that we struggle over many years to grow morally—and then in an instant it’s all fixed. It makes one wonder why we bothered to struggle. (On the purgatory story, the struggle makes sense, because purgatory does not exempt one from effort.)

Option (4) is also a theological addition to Scripture. It has the advantage over (3) that it is not instant, and hence is more in keeping with God’s typical way of proceeding with us. But it has the serious disadvantage of appearing to be rather a skeptical hypothesis—especially when it is not actually announced by God that that’s what God does for most people. Moreover, while I certainly am open to God using the period just before death for moral transformation, there is something odd about this being how God normally proceeds with Christians. For often the period just before death is naturally unsuited to moral transformation: the mind is falling apart as death takes the body. God could choose that difficult moment, but it doesn’t seem to fit well with a picture of a God who likes to make grace build on nature.

If I were a Protestant, I think I would definitively reject (1), and then I would be inclined to suppose that (2) is somewhat more likely than either of (3) and (4).

Good's Theorem, perfect rationality, and conditioning on zero probability events

Recently, I found myself puzzled by the difficulty in applying “classical” evidential decision theory to a perfectly rational agent. The problem was that the rational agent decides whether to do A or B based on a comparison between the conditional expectations E(U|A) and E(U|B) of the utility function U. But supposing that in fact E(U|A) > E(U|B), the perfectly rational agent has no chance of doing B, so P(B) = 0, and hence E(U|B) is undefined.
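The undefinedness is easy to see in a toy computation. This sketch (the names and numbers are mine, purely for illustration) treats conditional expectation as a ratio, and so it has nothing to return when the conditioning event has probability zero:

```python
# Toy illustration: conditional expectation E(U | event) as a ratio,
# undefined when P(event) = 0.

def cond_expectation(states, event):
    """E(U | event) over (state, probability, utility) triples,
    or None when P(event) = 0, since the ratio is then undefined."""
    p_event = sum(p for s, p, u in states if event(s))
    if p_event == 0:
        return None
    return sum(p * u for s, p, u in states if event(s)) / p_event

# A perfectly rational agent is certain to do A, so P(B) = 0.
states = [
    (("A", "good"), 0.9, 10.0),
    (("A", "bad"), 0.1, 2.0),
    (("B", "good"), 0.0, 5.0),  # doing B has probability zero
]

eu_A = cond_expectation(states, lambda s: s[0] == "A")  # well-defined: 9.2
eu_B = cond_expectation(states, lambda s: s[0] == "B")  # None: undefined
```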

But then I thought this isn’t a big deal, because we aren’t perfectly rational agents, so we always have a chance of screwing up and hence P(B) > 0 even if E(U|B) is much less than E(U|A).

I am not entirely satisfied with this. After all, you might think: “I may be pretty imperfect, but if I am choosing between a donut D and a year of torture T, I have zero chance of choosing the year of torture. But then E(U|T) is undefined, so how am I being rational in this choice?” Maybe that’s a good objection, maybe not.

But here is another reason why the “We’re imperfect” solution isn’t completely ideal. We want to say that Good’s Theorem tells us something important about rationality—namely, that more information makes rational agents make better decisions. Good’s Theorem is usually interpreted as saying that under some independence conditions, the expected value of a perfectly rational choice given more information is no less than that of a perfectly rational choice given less information. Notice that this is obviously false in the case of an imperfectly rational agent. Thus, we have to make sense of “What a perfectly rational agent would choose” to make sense of the standard interpretation of Good’s Theorem. Moreover, in the setting of Good’s Theorem, the perfectly rational agent has to be choosing based on expected utilities—and that’s precisely what generates the zero-probability-conditioning problem.

Now, the Theorem is still true as an abstract bit of mathematics. But the application is difficult if we can’t make sense of a perfectly rational agent who is certain to maximize expected utility.

Likely we can extend Good’s Theorem to talk about the limiting case of imperfect agents getting more and more perfect. But it would be nice if we didn’t have to.

Wednesday, April 22, 2026

A nuanced compatibilism and the problem of heavenly freedom

The problem of heavenly freedom is the apparent tension between these two claims:

  1. The blessed in heaven are free.

  2. The blessed in heaven cannot sin.

One solution is compatibilism, but as Pawl and Timpe note, this undercuts the Free Will Defense.

But there is another move. One can be a compatibilist and say that while one can have freedom without the ability to do otherwise, nonetheless freedom with the ability to do otherwise is better. If one accepts this version of compatibilism, one can affirm (1) and (2) while yet offering a Free Will Defense.

This, however, leads to an obvious riposte: If freedom with the ability to do otherwise is better, why don’t we have that kind of freedom in heaven? Isn’t heaven supposed to be the best state for us?

One can, however, add another nuance. There are some activities that it is good to have done at some point, but repetition significantly diminishes the value. It is of some value to have read The Murder of Roger Ackroyd. To re-read it, not so much. Or for a religious example, think of the Hajj. Suppose freedom with the ability to do otherwise is like that. Perhaps, then, it is valuable to have made the choice for God with the ability to do otherwise. But a repeat of that choice is of rather lesser value. So much lesser, that if on earth one has made the choice for God with the ability to do otherwise, in heaven the value of doing so again is outweighed by the value of making guaranteed righteous choices.

This is not too different from Pawl and Timpe’s preferred solution of allowing for derivative freedom in heaven. But there may be an advantage to the above solution. Pawl and Timpe’s solution doesn’t solve the problem of infants who go to heaven without being able to make a free choice in this life—they don’t seem to have derivative freedom. (One of my undergraduate students has ably pressed this problem.) The nuanced compatibilism I have suggested can help with that: the infants in heaven genuinely have freedom. Granted, their death has denied them one of the goods proper to earthly life—the good of choosing righteousness with the ability to do otherwise. But that they have lost something by their untimely death is indeed rather intuitive.

We might ask: But why wouldn’t God then give them a chance to make a decision with the ability to choose otherwise after death? Wouldn’t that be better? In one respect, it would indeed be better: in the respect of choosing with the freedom to choose otherwise. But in another respect, it would be less good: in the respect of having the risk of choosing wrongly. These are incommensurable considerations, and God can reasonably follow either one.

Granted, this move weakens the force of the Free Will Defense. We can no longer say that it’s better all things considered for God to give us the kind of freedom that allows us to reject him. For while that’s a better kind of freedom, it comes with an incommensurable cost—the risk that we will reject him. However, we can still say that God can rightly choose to follow either of the incommensurable considerations. In our case, he has opted to give us the better freedom despite the risk; in the case where he has taken some infants to himself, he opted for the guarantee of freedom being rightly used.

I don’t endorse the above solution. But I think it’s possible.

Extending Good's Theorem to experiments and not just observations

Good’s Theorem basically says that a utility-maximizing agent can expect to make decisions that are at least as good if they get more information. (And under some additional conditions, one can expect the decisions to be better.)

Now consider this case:

  1. You will be offered a chance to make a bet at certain odds on the result of a coin toss, where as far as you can tell it’s equally likely that the coin is fair and that it is double-headed. Someone offers to tell you how the previous toss of the coin went.
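To see concretely why the offered information can only help, give the bet invented stakes: say it wins 1 unit on heads and loses 1.1 on tails (the stakes are mine; the priors are from the case). A quick exact computation:

```python
from fractions import Fraction as F

WIN, LOSS = F(1), F(11, 10)  # invented stakes: +1 on heads, -1.1 on tails

def p_heads(p_fair):
    """Chance of heads on the next toss, given P(coin is fair)."""
    return p_fair * F(1, 2) + (1 - p_fair) * 1

def best_eu(p_fair):
    """Expected utility of the better of betting and abstaining."""
    ph = p_heads(p_fair)
    return max(ph * WIN - (1 - ph) * LOSS, F(0))

# Without the information: P(fair) = 1/2.
eu_no_info = best_eu(F(1, 2))                       # 19/40 = 0.475

# With it: the previous toss was heads with probability 3/4.
# Tails => the coin is certainly fair; heads => P(fair) = 1/3 by Bayes.
eu_info = F(3, 4) * best_eu(F(1, 3)) + F(1, 4) * best_eu(F(1))

assert eu_info >= eu_no_info  # as Good's Theorem predicts
```

Here the information sometimes flips the decision (after seeing tails, the bet is a loser and you abstain), and the expected utility with the information, 39/80, exceeds the 19/40 without it.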

Good’s Theorem says your decision whether to make the bet will be at least as good given the information about the previous toss as without that information. Hence, if the information is being announced, you don’t need to cover your ears. This is, of course, very intuitive. But now consider a slightly different case:

  2. Things are set up just as in (1), except now instead of information about the previous toss, you are offered a chance to have the following experiment get performed before your decision: the coin will be tossed an extra time and the result will be announced to you.

The difference is that in (2) you are not simply being offered additional information about how things are. For whether you go for the experiment or not, either way, you have full information about the experiment and its results. If you don’t go for the experiment, that full information is that the coin was not tossed an extra time (and hence did not land either heads or tails). If you do go for the experiment, the full information is that the coin was tossed and it landed heads, or else that it was tossed and it landed tails. In (2), you are not just finding out information by going for the deal: you are making something happen—an extra toss—and then finding out something about that.

So you can’t apply Good’s Theorem directly to (2). It would be nice to have a formulation of Good’s Theorem that works in cases where instead of merely finding out information, you perform an experiment.

I initially thought this would be easy. Maybe it is, but I don’t see it. There are, after all, cases where performing a cost-free experiment is not a good idea. Suppose, for instance, that you will be allowed to bet tomorrow that a certain car has more than 10 gallons of gasoline. The experiment is to start up the car and look at the gas gauge. But starting the car reduces the amount of gasoline in it, and one can easily rig the case so that benefits from the information gain are outweighed by the fact that you have made that bet less favorable.
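With invented numbers, the rigging might look like this (all figures are illustrative, not from any real case):

```python
from fractions import Fraction as F

# Invented figures: the tank holds one of four amounts, equally likely,
# and starting the engine to read the gauge burns 0.3 gallons.
amounts = [F(90, 10), F(101, 10), F(102, 10), F(103, 10)]  # gallons
BURN = F(3, 10)

def bet_eu(p_over_10):
    """The bet pays +1 if the tank holds over 10 gallons, -1 otherwise;
    you may also abstain for 0."""
    return max(p_over_10 - (1 - p_over_10), F(0))

# No experiment: 3 of the 4 possible amounts exceed 10 gallons, so bet.
eu_no_exp = bet_eu(F(3, 4))                                   # 1/2

# Experiment: you learn the exact level, but after the 0.3-gallon burn
# no possible amount exceeds 10 gallons, so you always abstain.
eu_exp = sum(bet_eu(F(1) if a - BURN > 10 else F(0)) for a in amounts) / 4
```

A perfectly informative, apparently cost-free experiment leaves you strictly worse off, because performing it changes the payoff-relevant facts.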

So, we want to rule out cases where there is dependence between whether you perform the experiment and the payoffs of the wagers. If F is the event of performing the experiment, it may seem initially that we should assume something like:

  3. E(U|W_iF) = E(U|W_iF^c) for all i,

where W_i is your choosing wager i and U is the utility random variable. In other words, the expected utility of each wager is unaffected by whether the experiment has been performed. But no! Suppose a coin has been tossed, and you are choosing between W_1, where you get a dollar on heads, and W_2, where you get a dollar on tails. But let F be the experiment of looking at the coin. (This is a case for the original Good’s Theorem.) Then E(U|W_iF^c) = 0.50, while E(U|W_iF) is very close to 1.00, because when you find out what the coin is like, you are close to certain to bet on what you see, and hence close to certain to win your bet.

If F_1 is heads and F_2 is tails, we solve the problem by replacing (3) with:

  4. E(U|W_iF_jF) = E(U|W_iF_jF^c) for all i and j.

Namely, the expected utility of wager W_i given information F_j is independent of whether you performed the experiment F. But that only works because it makes sense to ask what the coin is showing if you aren’t looking: it makes sense to conditionalize on F_j ∩ F^c. But in the cases that interest me, there is no fact of the matter as to the result of the experiment when the experiment is not performed, since Molinism is false and we live in an indeterministic world. And in these cases, F_j ∩ F^c is the empty set: the F_j represent the possible results of the experiment, but the experiment has no result when it is not performed.

I can get something by supposing a two-step procedure. You perform the experiment, event F, and you learn the result, event L. Then we can assume:

  5. E(U|W_iFL^c) = E(U|W_iF^c) for all i,

  6. E(U|W_iF_jFL) = E(U|W_iF_jFL^c) for all i and j,

  7. P(F_j|FL) = P(F_j|FL^c) for all j.

Assumption (5) says that it makes no difference to the expected utility of a wager whether (a) the experiment is performed but its result is not learned or (b) the experiment is not performed at all. In other words, the experiment itself doesn’t affect things. Assumption (6) says that given a specific experimental result, learning the result makes no difference to the expected utility of each wager–result pair. Assumption (7) says that the results of the experiment are unaffected by whether you learn the result of the experiment.

Without (6) or (7), we wouldn’t expect to get the result we want. If we don’t have (6), it might be that utilities are wildly affected by whether you learn the result. (The simplest case is that the wagers all have a big negative payoff on L.) If we don’t have (7), then learning the result might have some evidential or retrocausal impact on what the result is, and then again we shouldn’t expect that learning the result is a good thing.

Given (5)–(7), I think we can now reason as follows. You are choosing between:

  (i) performing the experiment and learning the results,

and

  (ii) not performing the experiment and (hence) not learning the results.

By (5), a rational agent will decide the same way in (ii) as in:

  (iii) performing the experiment and not learning the results,

and the expected utilities of (ii) and (iii) will be the same for this rational agent.

We now apply Good’s Theorem to the choice between (i) and (iii) (we will use (6) and (7) here, and assume the case is non-Newcombian and hence allows the use of Evidential Decision Theory) and get the result that (i) is at least as good as (iii). Since we have indifference between (ii) and (iii), it follows that (i) is at least as good as (ii). (We can also analyze the cases of a strict expected utility inequality.)
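As a sanity check on this roundabout argument, here is a Monte Carlo sketch of the fair/double-headed coin case, with invented stakes (win 1 on heads, lose 1.1 on tails, abstaining allowed) and the extra toss as the experiment F. It estimates the expected utilities of options (i)–(iii):

```python
import random

random.seed(0)
N = 200_000

def trial(perform, learn):
    """One run with invented stakes: the bet wins 1 on heads and loses
    1.1 on tails; you may abstain. Returns the realized utility."""
    fair = random.random() < 0.5

    def toss():
        return "H" if (not fair or random.random() < 0.5) else "T"

    if perform and learn:
        # The experiment F: one extra toss, result learned (event L).
        p_fair = 1.0 if toss() == "T" else 1 / 3  # Bayes from a 1/2 prior
    else:
        p_fair = 0.5  # options (ii) and (iii) learn nothing

    p_heads = p_fair * 0.5 + (1 - p_fair)
    if p_heads - (1 - p_heads) * 1.1 > 0:  # bet only if its EU is positive
        return 1.0 if toss() == "H" else -1.1
    return 0.0

eu_i = sum(trial(True, True) for _ in range(N)) / N     # perform and learn
eu_ii = sum(trial(False, False) for _ in range(N)) / N  # neither
eu_iii = sum(trial(True, False) for _ in range(N)) / N  # perform, don't learn
# Analytically eu_i = 0.4875 while eu_ii = eu_iii = 0.475: (i) beats the
# tied pair (ii) and (iii), matching the argument above.
```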

This is roundabout, but that’s not my main worry.

What I am really worried about is one technicality. To run the above argument, I had to assume that there is a way of performing the experiment without learning the result, namely that F ∩ Lc is non-empty. In general, however, we cannot assume this. Suppose, for instance, that we have a world with a quantum mechanics where observation causes collapse. Then the experiment of collapsing a wavefunction by means of observation cannot be done without observing the result of the experiment. In such scenarios, I cannot simply introduce a third option of performing the experiment and not learning the results, since that third option may not be consistent with the laws of physics. (And, of course, the utilities for breaking the laws of physics could be wild.)

But without introducing that third option, namely F ∩ Lc, I don’t know how to formulate the independence assumptions that are needed. I also don’t know if the problem is “merely technical” or “deep”. If I had to bet at even odds, I would bet on its being merely technical. But it might be deep.

Monday, April 20, 2026

Consciousness, fine-tuning and skepticism

Models of the emergence of consciousness from a material substrate (whether weak or strong emergence—it won’t matter for this post) differ on how easy it is for consciousness to emerge. Functionalist or computationalist models make it relatively easy: as long as there is a functional isomorphism between a thing and a conscious thing, the former is conscious as well. Biological models, on the other hand, make it harder, by putting constraints on what kind of biological realization of a functional structure gives rise to consciousness.

It’s interesting to note that the more permissive a model of consciousness is, the easier it is to tune the universe to get consciousness, and hence the better the response that can be given to fine-tuning arguments for theism or a multiverse. On the other hand, the more permissive a model, the greater the danger of skepticism from the fact that the buzzing atoms in a random rock have some sort of isomorphism to a human brain, and hence it is not clear that we have good reason to think we’re not rocks.

Conversely, the more restrictive a model of consciousness is, the harder it is to tune the universe to get consciousness. At one extreme, you need brains to be conscious. But brains are a specific type of physical organ in DNA-based life forms, so you need life-forms rather like us to have consciousness, and the fine-tuning needed becomes more stringent. At the same time, the more human-like conscious things have to be, the less skepticism we have to worry about.

Is there some kind of a Goldilocks zone in the range of theories of consciousness where the fine-tuning is not too onerous and skepticism is not an issue? I don’t know.

Lifetime epistemic value

Suppose I discover some fact that I never end up using for anything, or even occurrently thinking about after the discovery. Now, knowledge is good. If I learn the fact earlier in life, then I will have had the knowledge for a longer period of time. So is it better for me to have learned the fact earlier in life?

I doubt it. Consider two scenarios. On the first, I learn what the capital of Zambia is just before I enter a ten-year coma. On the second, I learn it right after I exit the coma. Learning it before the coma gives me ten more years of knowing it. But that seems a worthless gain. I conclude that in the case of non-occurrent knowing, it doesn’t matter much how long I know.

What about for occurrent knowledge? Other things being equal, if I learn some fact earlier in life, I will occurrently know the fact more times. Is that valuable?

I am less sure. But consider a daily ritual where every morning after waking up, before I am capable of any serious intellectual activity, I think to myself: Sheep have four legs. Thereby, I greatly increase the number of instances in which that piece of knowledge is being occurrently known. Again, this doesn’t seem to be worth the bother.

So it seems that neither for non-occurrent nor for occurrent knowledge is there non-instrumental value in knowing the thing for a longer period of time. Of course, there typically is instrumental value in knowing something for a longer period of time, both instrumental epistemic value—you can use it in your intellectual investigations of more things—and often instrumental pragmatic value.

This suggests the following. If an agent never loses knowledge, then the lifetime non-instrumental value of their knowledge depends on what they have come to know, not on when they have come to know it. The analogous thesis for perfect Bayesian agents and scoring rules is that their lifetime epistemic utility is the epistemic accuracy score at the latest point in their lives. (If we apply this to Sleeping Beauty, we are apt to get halving. But we shouldn’t apply this to Sleeping Beauty, as she forgets about her first wakeup.)

Things are more complicated in the case of agents who do lose knowledge, whether to memory loss, irrationality or misleading evidence. If we count such an agent’s lifetime non-instrumental epistemic value based on all that they have ever known, that means that if they lost knowledge of p, there is no gain to them from getting it back. But obviously they are better off epistemically if they do get it back. Things get messy and complicated now. A short-period loss in old age doesn’t seem as bad as a case where you found out something early in life and then didn’t have it for the rest of your life.

This is getting messy.

The epistemic value of experiments

You perform an experiment and are going to rationally update on its results. It seems that you should expect this to be good for your epistemic utility as compared to non-performance of the experiment.

Not always! Silly case: Your boss has tasked you with performing a boring chemistry experiment. If you do the experiment, you will find out very little. But if you don’t do it, you will find out a lot about the range of swear words that your boss knows.

What makes this case silly is that you should really think of it as a choice of which experiment to perform, one in chemistry or one in psychology, and in this case the psychology experiment is the more interesting one.

So if we want to say that an experiment can be expected to improve your epistemic utility, we need to be a bit more careful. We need to ensure that non-performance of the experiment doesn’t itself generate information.

But it always does. At the very least, non-performance of the experiment generates the information that the experiment has not been performed by you. You find out something about yourself, and that might far outweigh the value of anything you find out from the experiment. Granted, you also find out something about yourself by performance of the experiment, but it is easy to imagine cases where what you find out by non-performance is more significant. For instance, it could be that your refusal to perform the experiment shows that you have a very specific and rare personality type, while your performance of the experiment gives you nothing so specific.

Suppose, for instance, that you score your epistemic utility by bits of information. The experiment consists in bending down to see which side an unusual coin lying on the ground is facing—that’s one bit of information. Your prior probability that you will look at the coin is 3/4: you are the sort of person who tends to look. So by looking at the coin, you will gain 1 − log2(3/4) ≈ 1.4 bits, mostly regarding the coin but also a little bit about yourself. By not looking at the coin, you will gain 0 − log2(1/4) = 2 bits, all about yourself. Better not to look!
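The arithmetic checks out (using the 3/4 prior for looking from the case above):

```python
from math import log2

p_look = 3 / 4  # the stated prior that you will look at the coin

# Looking: 1 bit about the coin plus the surprisal of "I looked".
bits_look = 1 - log2(p_look)        # about 1.4 bits

# Not looking: nothing about the coin, but the 1/4-probability event
# "I didn't look" is itself worth -log2(1/4) = 2 bits about yourself.
bits_no_look = 0 - log2(1 - p_look)  # 2.0 bits

# 2 > 1.4: better not to look!
```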

Of course, there are Newcomb-like issues here.

Lesson: The principle that performing a non-trivial experiment should be expected to improve epistemic utility is going to be difficult to formulate.

Epistemic possibility and the Liar

Here’s a fun Liar paradox involving epistemic possibility. Say that a proposition p is epistemically possible if it is consistent with all you know.

Construct a sentence G such that:

  0. G is true if and only if G is not epistemically possible.

E.g., “The proposition expressed by the first sentence in this post found in quotation marks is not epistemically possible.”

Now, you only know truths, and truth is consistent with truth. Thus:

  1. If G is true, then it is consistent with everything you know.

But G is true if and only if it is not epistemically possible. So:

  2. If G is true, then it is not consistent with everything you know.

Hence:

  3. G is not true.

But now that you’ve seen this argument, you surely are in a position to know G not to be true. Suppose you exploit this and indeed come to know G not to be true. But then we have a contradiction. For if you know G not to be true, then G is not epistemically possible, and hence by (0), it must be that G is true.

A piece of Wordle prehistory

A couple of years ago I helped make a variant on Wordle (same rules, copyright-free vocabulary) for the Nintendo Game Boy (you can play it online here), and I would play the official version. Since December, my hobby project has been reverse-engineering the computer built into my early 1990s HP 1653B logic analyzer/oscilloscope and creating an SDK for programming it. Yesterday, I ported Davison's EhBASIC to it, and was trying out various games from Ahl's BASIC games book from the 1970s (1974 DEC version here), based on the EhBASIC ports here.

One of the games I tried last night was Word, credited in the 1974 version of Ahl's book to Charles Reid of Lexington High School. It turns out to have rules very similar to Wordle. It hides a 5-letter puzzle word (there are only 12 in its puzzle vocabulary) and asks you to guess a 5-letter word. Then it shows you which of your letters are correct and in the right position and gives you a list of all the letters that match regardless of position. Basically the same as Wordle. There is no limit on the number of guesses. Here it is running on my oscilloscope. The keyboard is a Mac Quadra keyboard connected via a home-made adapter to the scope's serial port.

Interestingly, Word leaks information that Wordle does not. It generates the list of position-independent matches in the array P by a nested loop, where S is the correct solution and L is the user's input word.


The outer loop goes over the letters in the solution S, in order from left-to-right, and adds the position-independent matches to P. Because P is then later printed as is, this means that you know the order in which the position-independent matches appear in the solution, which leaks information (e.g., if you were to put all the right letters in but in a different order, it would actually print the solution). 

Furthermore, if the solution has n repeats of a letter and your guess has m repeats of the same letter, then it will print that letter n·m times, and you thus know exactly how many times the letter appears in the solution. Whether this is a bug or just an interesting mechanic presumably depends on what Mr. Charles Reid was thinking half a century ago. (Moreover, if n·m > 7, the program will crash, because only 7 slots were allocated in the S array. But I think there is no combination of a word from the game's 12-word vocabulary with an English five-letter guess that will result in more than 7 slots being occupied.)
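I won't reproduce the BASIC listing here, but the behavior just described corresponds to a nested loop along these lines (a Python rendering of my own; the names mirror the program's S, L, and P):

```python
def word_matches(S, L):
    """Position-independent matches as described: outer loop over the
    solution's letters, inner loop over the guess's letters."""
    P = []
    for s in S:        # solution letters, left to right
        for g in L:    # guess letters
            if s == g:
                P.append(s)  # one entry per matching (solution, guess) pair
    return P

# The leak: matches come out in solution order, so an anagram guess
# containing each letter once spells out the solution itself. And a letter
# with n solution repeats and m guess repeats is appended n*m times.
```

For instance, word_matches("CRANE", "NACRE") returns the letters of CRANE in order, and word_matches("HELLO", "LLAMA") yields four L's (n = m = 2).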

UPDATE: I've been scooped. And you can play the original game in your browser.

Thursday, April 16, 2026

A method for living forever

Maybe you have a cancer that would kill you in three months.

So, get a powerful rocket.

Accelerate close to the speed of light, and make a one light-year round-trip journey that from your reference frame takes about a month, but takes slightly over a year from the point of view of the earth. If your speed during the first journey was v_1, now repeat the same trip with a speed of v_2 = (3c^2 + v_1^2)^(1/2)/2. Then repeat with a speed of v_3 = (3c^2 + v_2^2)^(1/2)/2. And so on, forever.
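That the recursion works can be checked: writing u = (v/c)^2, the rule gives 1 − u' = (1 − u)/4, so the Lorentz factor γ = 1/(1 − u)^(1/2) doubles on each trip, and the you-time of each fixed-earth-duration leg halves. A quick numerical check (the starting γ = 2 is an arbitrary illustrative choice):

```python
from math import sqrt

u = 0.75  # u = (v1/c)**2; gives gamma = 1/sqrt(1 - 0.75) = 2 to start
gammas = []
for _ in range(10):
    gammas.append(1 / sqrt(1 - u))
    u = (3 + u) / 4  # the recursion v' = sqrt(3c^2 + v^2)/2, in squared form

ratios = [b / a for a, b in zip(gammas, gammas[1:])]  # each is 2: gamma doubles

# If the first leg takes 1 month of you-time, leg k takes 2/gammas[k] months;
# the partial sums of you-time converge to 2 months.
total = sum(2 / g for g in gammas)
```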

Fact: Each journey will take a bit more than a year of earth-time but only half the you-time of the previous one. So the total you-time of your journeying will be 1 + 1/2 + 1/4 + 1/8 + ... = 2 months. You’ll never die. At every future time, you will be alive.

But this is pointless. You might as well stay on earth, and then you’ll have three months of you-time. Three months of you-time followed by death is better than two months of you-time with no death.

A Christian argument against eternalism, with some remarks on "finite" and "infinite"

  1. We have an infinite future.

  2. If eternalism is true, then anything that has an infinite future is infinite.

  3. We are finite.

  4. So, eternalism is not true.

The crucial premise is 2. One thought behind 2 is that our best version of eternalism holds that we are four-dimensional, and if we have an infinite future, that makes us infinite in the fourth dimension.

But I think we can do better than that. Plausibly, part of what we mean by “We have an infinite future” is that we will have infinitely many token future mental states (if not, add that to the premises). On eternalism, all these mental states exist. And they are clearly all ours. So if we have an infinite future, we have an infinite mental life, and that is a way of being infinite.

I am an eternalist, and I want to affirm 1 and 3. What can I do? One move is this. The relevant sense of “finite” in 3 is not a mathematical sense, but something more “metaphysical” like limited. Now, to be limited is to have one or more limits. This is quite compatible with there being respects in which we lack a limit. Thus, the charged infinite rod that sometimes figures in physics homework has limits: not limits of length, but limits of width and height (and others). In the metaphysical sense, then, the rod is finite. Likewise, then, even if we are temporally infinite or infinite in the number of mental states, we are still limited in other ways.

If we go for this move, we have to choose what to mean by “infinite”. We could say that something is infinite provided there is some respect in which it is unlimited. If we did that, then one thing could be finite and infinite—as long as it is limited in one way and unlimited in another. The “infinite rod” would then be both finite and infinite. And, if eternalism is true and there is an eternal afterlife, we are finite and infinite. On this take, the argument is invalid, because it is missing the assumption that nothing is both finite and infinite.

A second option is to make “infinite” mean unlimited in all respects. In that case, we are finite and not infinite. Indeed, only God is infinite then. A set with what the mathematician calls “infinite cardinality” is limited by not having a greater cardinality than the one it has.

A third option would be to take “finite” to mean limited in every way, “infinite” to mean unlimited in all respects, and then allow for the possibility of things that are neither finite nor infinite—perhaps us.

Wednesday, April 15, 2026

Anti-Lucretian preferences

Lucretius famously argued that non-existence at the end of one’s life is no more to be feared than non-existence before the beginning of one’s life. Nagel famously argued that there is an asymmetry. One could exist later than one will but one couldn’t have existed earlier than one did. I think he’s barking up the wrong tree. Death wouldn’t be less scary if it turned out to be metaphysically inevitable.

But in any case, I think there is a way to prescind from the metaphysics questions. You’ve just woken up after an operation. You have amnesia. You expect the amnesia to wear off—somehow you have knowledge of how such things go. But for now you have it. You look through some files a careless actuary left lying about. You expect one of these files is about you. The files describe these cases:

  • 35:20. Thirty-five-year-old expected to live twenty years more.

  • 30:20. Thirty-year-old expected to live twenty years more.

  • 20:30. Twenty-year-old expected to live thirty years more.

  • 30:30. Thirty-year-old expected to live thirty years more.

  • 20:20. Twenty-year-old expected to live twenty years more.

You can’t, of course, choose which of these is you, but you can have hopes and preferences. And suppose you think there is no afterlife.

My own preferences would be:

  • 30:30 > 20:30 > 35:20 > 30:20 > 20:20.

I consistently have a preference for a longer future other things being equal, and a longer past other things being equal, but I tend to prefer a longer future to a longer past even if that results in a somewhat shorter overall life.

But only to a point. Suppose another file is:

  • 50:28.

I definitely would greatly prefer that over 20:30, and not insignificantly over 30:30. The reason is that it seems quite a lot better to live 78 years than 50 or 60, even at the cost of two years of future life.

In any case, as regards my own preferences, Lucretius is just wrong. I would want more of a past life. Though to some degree my intuitions are distorted by the thought that in a longer life I am more likely to have more meaningful achievements.

What worries me philosophically about all this is whether I can reconcile my preferences with my belief in the B-theory of time. I think I can. It makes sense to me that the preferences I have at t should have a relationship to where t is located in my life.

Fear of death is not exactly fear of death or being dead

You don’t believe in the afterlife. Your doctor tells you that you will die in a week. You are terrified. A couple of minutes later, the doctor comes back, herself looking terrified. She tells you that she has both good news and bad news. The good news is that she had misdiagnosed you—you are just fine. However, the bad news is that her sister, who is a cosmologist, has just discovered that everything—the universe, space and time—is coming to an end in a week. (She begs you not to tell anyone, because that will cause a panic.)

Out of nerdy curiosity, you ask the doctor whether there will be a last moment of time. She says that the same question occurred to her, and there won’t be. The interval of time is open on the upper end: for every time t, there is a later time t′. It’s just that time is literally running out, and all the remaining times are less than about a week from now.

With grim amusement you note that you won’t die. For at every time in the future you will be alive, and there won’t even be a last time which one might want to identify as the “time of death”.

You reflect. It’s a bit of a plus that none of your friends will suffer from your death, but a big minus that they all have only a week left. In any case, there is no relief from fear of death.

I think this case shows that it’s not death or being dead that we fear when we don’t believe in an afterlife. We fear the fact that our future is finite. If this is right, then people like Lucretius, who thought that we somehow confusedly imagined ourselves as existing after the end of our existence and that this was what explained the fear of death, are likely mistaken.

A nearly equivalent version of the above thought experiment would be one where you find out that you’re going to live for an infinite amount of time, but your life will exponentially slow down. In the next week of life, you will experience half a subjective week. In the week after that, you will only experience a quarter of a subjective week, and then an eighth and so on. Your subjective future will be a week. But you will never die. That’s just as bad as permanently dying.
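The arithmetic of the slowdown can be checked with a quick geometric-series sum (a sketch using the halving schedule described above; variable names are mine):

```python
# Each successive objective week yields half the subjective time of the last:
# 1/2 + 1/4 + 1/8 + ... subjective weeks, which converges to exactly 1.
subjective_weeks = sum(0.5 ** n for n in range(1, 60))
print(round(subjective_weeks, 6))  # prints 1.0: one subjective week in total
```

So despite an infinite objective future, subjective experience is bounded by a single week.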

Tuesday, April 14, 2026

A problem with perfectly rational agents and decision theory

Suppose I am perfectly rational in the decision theoretic sense. A coin is about to be tossed, and I will get five dollars on heads (H) and one dollar on tails (T). I have a choice whether to leave the coin fair (F) or load it (L) in favor of tails so that the probability of tails is 3/4.

It is obvious what I do. I calculate the expected utilities of my options F and L as follows.

  • EU(F) = P(H|F) ⋅ $5 + P(T|F) ⋅ $1 = (1/2) ⋅ $5 + (1/2) ⋅ $1 = $3

and

  • EU(L) = P(H|L) ⋅ $5 + P(T|L) ⋅ $1 = (1/4) ⋅ $5 + (3/4) ⋅ $1 = $2.

And then I choose F.
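The calculation can be sketched in a few lines (the helper function and its names are mine, matching the payoffs above):

```python
def expected_utility(p_heads, payoff_heads=5, payoff_tails=1):
    """Expected dollar payoff given the probability of heads."""
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

eu_fair = expected_utility(1/2)    # fair coin: (1/2)*5 + (1/2)*1
eu_loaded = expected_utility(1/4)  # loaded coin: P(tails) = 3/4
print(eu_fair, eu_loaded)  # prints 3.0 2.0
assert eu_fair > eu_loaded  # so the agent chooses F
```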

Except it’s not so simple. For I am perfectly rational. But since, as we just saw, the perfectly rational agent has to choose F, it follows that P(L) = 0, and so P(H|L) and P(T|L) are undefined. So I can’t decide! So now there is no guarantee how I will act, and P(H|L) and P(T|L) once again make sense. And then again they don’t. Oops!

What can be done? Causal decision theorists will note that I reasoned like an evidential decision theorist above. But this makes no difference in this case. The causalist’s story will be a bit more complicated but will end up with the same problem.

We might want to introduce primitive conditional probabilities like Popper functions that let you conditionalize on events with zero probability, and then have P(H|L) = 1/4 and P(T|L) = 3/4, even though P(L) = 0. But that is introducing a lot of complications. Primitive conditional probabilities are not unproblematic.

What should we do? Maybe we should suppose something like primitive suppositional decision theory, where what we are primitively given are the suppositional probabilities P_F and P_L, without them being defined in terms of conditional and unconditional credences as in evidential and causal decision theories. But this seems problematic. Do we have to suppose that in addition to conditional and unconditional credences, we have suppositional credences? Maybe.

Or perhaps decision theory only applies to agents that have non-zero credences of going for all the options.

Monday, April 13, 2026

An argument from wonderful people

  1. Person x appears to be in the image of God.

  2. So, probably, x is in the image of God.

  3. If God doesn’t exist, no one is in the image of God.

  4. So, God exists.