Monday, April 20, 2026

Consciousness, fine-tuning and skepticism

Models of the emergence of consciousness from a material substrate (whether weak or strong emergence—it won’t matter for this post) differ on how easy it is for consciousness to emerge. Functionalist or computationalist models make it relatively easy: as long as there is a functional isomorphism between a thing and a conscious thing, the former is conscious as well. Biological models, on the other hand, make it harder, by putting constraints on what kind of biological realization of a functional structure gives rise to consciousness.

It’s interesting to note that the more permissive a model of consciousness is, the easier it is to tune the universe to get consciousness, and hence the better the response that can be given to fine-tuning arguments for theism or a multiverse. On the other hand, the more permissive a model, the greater the danger of skepticism from the fact that the buzzing atoms in a random rock have some sort of isomorphism to a human brain, and hence it is not clear that we have good reason to think we’re not rocks.

On the other hand, the more restrictive a model of consciousness is, the harder it is to tune the universe to get consciousness. On one extreme, you need brains to be conscious. But brains are a specific type of physical organ in DNA-based life forms, so you need life-forms rather like us to have consciousness, and the fine-tuning needed becomes more stringent. At the same time, the more human-like conscious things have to be, the less skepticism we have to worry about.

Is there some kind of a Goldilocks zone in the range of theories of consciousness where the fine-tuning is not too onerous and skepticism is not an issue? I don’t know.

Lifetime epistemic value

Suppose I discover some fact that I never end up using for anything, or even occurrently thinking about after the discovery. Now, knowledge is good. If I learn the fact earlier in life, then I will have had the knowledge for a longer period of time. So is it better for me to have learned the fact earlier in life?

I doubt it. Consider two scenarios. On the first, I learn what the capital of Zambia is just before I enter a ten-year coma. On the second, I learn it right after I exit the coma. Learning it before the coma gives me ten more years of knowing it. But that seems a worthless gain. I conclude that in the case of non-occurrent knowing, it doesn’t matter much how long I know.

What about for occurrent knowledge? Other things being equal, if I learn some fact earlier in life, I will occurrently know the fact more times. Is that valuable?

I am less sure. But consider a daily ritual where every morning after waking up, before I am capable of any serious intellectual activity, I think to myself: Sheep have four legs. Thereby, I greatly increase the number of instances in which that piece of knowledge is being occurrently known. Again, this doesn’t seem to be worth the bother.

So it seems that neither for non-occurrent nor for occurrent knowledge is there non-instrumental value in knowing the thing for a longer period of time. Of course, there typically is instrumental value in knowing something for a longer period of time, both instrumental epistemic value—you can use it in your intellectual investigations of more things—and often instrumental pragmatic value.

This suggests the following. If an agent never loses knowledge, then the lifetime non-instrumental value of their knowledge depends on what they have come to know, not on when they have come to know it. The analogous thesis for perfect Bayesian agents and scoring rules is that their lifetime epistemic utility is the epistemic accuracy score at the latest point in their lives. (If we apply this to Sleeping Beauty, we are apt to get halving. But we shouldn’t apply this to Sleeping Beauty, as she forgets about her first wakeup.)

Things are more complicated in the case of agents who do lose knowledge, whether to memory loss, irrationality or misleading evidence. If we count such an agent’s lifetime non-instrumental epistemic value based on all that they have ever known, that means that if they lost knowledge of p, there is no gain to them from getting it back. But obviously they are better off epistemically if they do get it back. Things get messy and complicated now. A short-period loss in old age doesn’t seem as bad as a case where you found out something early in life and then didn’t have it for the rest of your life.

This is getting messy.

The epistemic value of experiments

You perform an experiment and are going to rationally update on its results. It seems that you should expect this to be good for your epistemic utility as compared to non-performance of the experiment.

Not always! Silly case: Your boss has tasked you with performing a boring chemistry experiment. If you do the experiment, you will find out very little. But if you don’t do it, you will find out a lot about the range of swear words that your boss knows.

What makes this case silly is that you should really think of it as a choice of which experiment to perform, one in chemistry or one in psychology, and in this case the psychology experiment is the more interesting one.

So if we want to say that an experiment can be expected to improve your epistemic utility, we need to be a bit more careful. We need to ensure that non-performance of the experiment doesn’t itself generate information.

But it always does. At the very least, non-performance of the experiment generates the information that the experiment has not been performed by you. You find out something about yourself, and that might far outweigh the value of anything you find out from the experiment. Granted, you also find out something about yourself by performance of the experiment, but it is easy to imagine cases where what you find out by non-performance is more significant. For instance, it could be that your refusal to perform the experiment shows that you have a very specific and rare personality type, while your performance of the experiment gives you nothing so specific.

Suppose, for instance, that you score your epistemic utility by bits of information. The experiment consists in bending down to see which side an unusual coin lying on the ground is facing—that’s one bit of information. Your prior probability that you will look at the coin is 3/4: you are the sort of person who tends to look. So by looking at the coin, you will gain 1 − log2(3/4) ≈ 1.4 bits, mostly regarding the coin but also a little bit about yourself. By not looking at the coin, you will gain 0 − log2(1/4) = 2 bits, all about yourself. Better not to look!
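If epistemic utility is scored by surprisal (−log₂ of the probability of what is observed), the arithmetic can be checked directly; a minimal sketch, with names of my own choosing:

```python
import math

def surprisal_bits(p):
    """Bits of information from observing an event of prior probability p."""
    return -math.log2(p)

# Looking (prior probability 3/4): one bit about the coin, plus the
# surprisal of the act of looking itself.
gain_look = 1.0 + surprisal_bits(3 / 4)    # about 1.415 bits

# Not looking (prior probability 1/4): nothing about the coin, but the
# surprisal of the improbable refusal.
gain_refuse = 0.0 + surprisal_bits(1 / 4)  # exactly 2 bits
```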

Of course, there are Newcomb-like issues here.

Lesson: The principle that performing a non-trivial experiment should be expected to improve epistemic utility is going to be difficult to formulate.

Epistemic possibility and the Liar

Here’s a fun Liar paradox involving epistemic possibility. Say that a proposition p is epistemically possible if it is consistent with all you know.

Construct a sentence G such that:

  1. G is true if and only if G is not epistemically possible.

E.g., “The proposition expressed by the first sentence in this post found in quotation marks is not epistemically possible.”

Now, you only know truths, and truth is consistent with truth. Thus:

  2. If G is true, then it is consistent with everything you know.

But G is true if and only if it is not epistemically possible. So:

  3. If G is true, then it is not consistent with everything you know.

Hence:

  4. G is not true.

But now that you’ve seen this argument, you surely are in a position to know G not to be true. Suppose you exploit this and indeed come to know G not to be true. But then we have a contradiction. For if you know G not to be true, then G is not epistemically possible, and hence by (1), it must be that G is true.

A piece of Wordle prehistory

A couple of years ago I helped make a variant on Wordle (same rules, copyright-free vocabulary) for the Nintendo Game Boy (you can play it online here), and I would play the official version. Since December, my hobby project has been reverse-engineering the computer built into my early 1990s HP 1653B logic analyzer/oscilloscope, and creating an SDK for programming it. Yesterday, I ported Davison's EhBASIC to it, and was trying out various games from Ahl's BASIC games book from the 1970s (1974 DEC version here), based on the EhBASIC ports here.

One of the games I tried last night was Word, credited in the 1974 version of Ahl's book to Charles Reid of Lexington High School. It turns out to have rules very similar to Wordle. It hides a 5-letter puzzle word (there are only 12 in its puzzle vocabulary) and asks you to guess a 5-letter word. Then it shows you which of your letters are correct and in the right position and gives you a list of all the letters that match regardless of position. Basically the same as Wordle. There is no limit on the number of guesses. Here it is running on my oscilloscope. The keyboard is a Mac Quadra keyboard connected via a home-made adapter to the scope's serial port.

Interestingly Word leaks information that Wordle does not. It generates the list of position-independent matches in the array P by the following nested loop where S is the correct solution and L is the user's input word.
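The original listing is in BASIC and is not reproduced here; the following is a Python sketch of the described nested loop, not the exact code (function name is mine; S, L and P follow the post):

```python
def word_feedback(S, L):
    """Sketch of Word's feedback logic. S is the solution word, L the
    guessed word. Returns (exact, P): `exact` marks letters correct and
    in the right position; P lists the position-independent matches in
    solution order, as Word prints them."""
    exact = ["-"] * len(S)
    P = []
    for i, s in enumerate(S):        # outer loop: letters of the solution
        for g in L:                  # inner loop: letters of the guess
            if g == s:
                P.append(s)          # appended in solution order
        if i < len(L) and L[i] == s:
            exact[i] = s
    return "".join(exact), P
```

Because P preserves solution order, guessing all the right letters in the wrong order spells out the solution: on this sketch, a guess of POTS against a solution of STOP yields P = S, T, O, P. And a letter occurring n times in the solution and m times in the guess lands in P exactly n·m times.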


The outer loop goes over the letters in the solution S, in order from left-to-right, and adds the position-independent matches to P. Because P is then later printed as is, this means that you know the order in which the position-independent matches appear in the solution, which leaks information (e.g., if you were to put all the right letters in but in a different order, it would actually print the solution). 

Furthermore, if the solution has n repeats of a letter and your guess has m repeats of the same letter, then it will print that letter n·m times, and you thus know exactly how many times the letter appears in the solution. Whether this is a bug or just an interesting mechanic depends presumably on what Mr. Charles Reid was thinking half a century ago. (Moreover, if n·m > 7, the program will crash, because only 7 slots were allocated in the P array. But I think there is no combination of a word from the game's 12-word vocabulary and an English five-letter guess that will result in more than 7 slots being occupied.)

UPDATE: I've been scooped. And you can play the original game in your browser.

Thursday, April 16, 2026

A method for living forever

Maybe you have a cancer that would kill you in three months.

So, get a powerful rocket.

Accelerate close to the speed of light, and make a one light-year round-trip journey that from your reference frame takes about a month, but takes slightly over a year from the point of view of the earth. If your speed during the first journey was v₁, now repeat the same trip with a speed of v₂ = (3c² + v₁²)^(1/2)/2. Then repeat with a speed of v₃ = (3c² + v₂²)^(1/2)/2. And so on, forever.

Fact: Each journey will take a bit more than a year of earth-time but only half of the you-time of the previous. So the total you-time of your journeying will be 1 + 1/2 + 1/4 + 1/8 + ... = 2 months. You’ll never die. At every future time, you will be alive.
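A quick numeric check of the recursion (a sketch with c set to 1; variable names are mine): each step vₙ₊₁ = (3c² + vₙ²)^(1/2)/2 exactly doubles the Lorentz factor γ, and so halves the proper time of each leg.

```python
import math

c = 1.0  # work in units where the speed of light is 1

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def next_speed(v):
    """The recursion from the post: v_{n+1} = sqrt(3 c^2 + v_n^2) / 2."""
    return math.sqrt(3.0 * c**2 + v**2) / 2.0

v = 0.99  # an arbitrary near-light starting speed
gammas = [gamma(v)]
for _ in range(5):
    v = next_speed(v)
    gammas.append(gamma(v))

# Each step doubles gamma, so each leg's proper time is half the previous.
ratios = [later / earlier for earlier, later in zip(gammas, gammas[1:])]
```

Algebraically: if v′² = (3c² + v²)/4, then 1 − v′²/c² = (1 − v²/c²)/4, so γ′ = 2γ, which is what the ratios confirm.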

But this is pointless. You might as well stay on earth, and then you’ll have three months of you-time. Three months of you-time followed by death is better than two months of you-time with no death.

A Christian argument against eternalism, with some remarks on "finite" and "infinite"

  1. We have an infinite future.

  2. If eternalism is true, then anything that has an infinite future is infinite.

  3. We are finite.

  4. So, eternalism is not true.

The crucial premise is 2. One thought behind 2 is that our best version of eternalism holds that we are four-dimensional, and if we have an infinite future, that makes us infinite in the fourth dimension.

But I think we can do better than that. Plausibly, part of what we mean by “We have an infinite future” is that we will have infinitely many token future mental states (if not, add that to the premises). On eternalism, all these mental states exist. And they are clearly all ours. So if we have an infinite future, we have an infinite mental life, and that is a way of being infinite.

I am an eternalist, and I want to affirm 1 and 3. What can I do? One move is this. The relevant sense of “finite” in 3 is not a mathematical sense, but something more “metaphysical” like limited. Now, to be limited is to have one or more limits. This is quite compatible with there being respects in which we lack a limit. Thus, the charged infinite rod that sometimes figures in physics homework has limits: not limits of length, but limits of width and height (and others). In the metaphysical sense, then, the rod is finite. Likewise, then, even if we are temporally infinite or infinite in the number of mental states, we are still limited in other ways.

If we go for this move, we have to make a choice about what to mean by “infinite”. We could say that something is infinite provided there is some respect in which it is unlimited. If we did that, then one thing could be finite and infinite—as long as it is limited in one way and unlimited in another. The “infinite rod” would then be both finite and infinite. And, if eternalism is true and there is an eternal afterlife, we are finite and infinite. On this take, the argument is invalid, because it is missing the assumption that nothing is both finite and infinite.

A second option is to make “infinite” mean unlimited in all respects. In that case, we are finite and not infinite. Indeed, only God is infinite then. A set with what the mathematician calls “infinite cardinality” is limited by not having a greater cardinality than the one it has.

A third option would be to take “finite” to mean limited in every way, “infinite” to mean unlimited in all respects, and then allow for the possibility of things that are neither finite nor infinite—perhaps us.

Wednesday, April 15, 2026

Anti-Lucretian preferences

Lucretius famously argued that non-existence at the end of one’s life is no more to be feared than non-existence before the beginning of one’s life. Nagel famously argued that there is an asymmetry. One could exist later than one will but one couldn’t have existed earlier than one did. I think he’s barking up the wrong tree. Death wouldn’t be less scary if it turned out to be metaphysically inevitable.

But in any case, I think there is a way to prescind from the metaphysical questions. You’ve just woken up after an operation. You have amnesia. You expect the amnesia to wear off—somehow you have knowledge of how such things go. But for now you have it. You look through some files a careless actuary left lying about. You expect one of these files is about you. The files describe these cases:

  • 35:20. Thirty-five-year-old expected to live twenty years more.

  • 30:20. Thirty-year-old expected to live twenty years more.

  • 20:30. Twenty-year-old expected to live thirty years more.

  • 30:30. Thirty-year-old expected to live thirty years more.

  • 20:20. Twenty-year-old expected to live twenty years more.

You can’t, of course, choose which of these is you, but you can have hopes and preferences. And suppose you think there is no afterlife.

My own preferences would be:

  • 30:30 > 20:30 > 35:20 > 30:20 > 20:20.

I consistently have a preference for a longer future other things being equal, and a longer past other things being equal, but I tend to prefer a longer future to a longer past even if that results in a somewhat shorter overall life.

But only to a point. Suppose another file is:

  • 50:28.

I definitely would greatly prefer that over 20:30, and not insignificantly over 30:30. The reason is that it seems quite a lot better to live 78 years than 50 or 60, even at the cost of two years of future life.

In any case, as regards my own preferences, Lucretius is just wrong. I would want more of a past life. Though to some degree my intuitions are distorted by the thought that in a longer life I am more likely to have more meaningful achievements.

What worries me philosophically about all this is whether I can reconcile my preferences with my belief in the B-theory of time. I think I can. It makes sense to me that the preferences I have at t should have a relationship to where t is located in my life.

Fear of death is not exactly fear of death or being dead

You don’t believe in the afterlife. Your doctor tells you that you will die in a week. You are terrified. A couple of minutes later, the doctor comes back, herself looking terrified. She tells you that she has both good news and bad news. The good news is that she had misdiagnosed you—you are just fine. However, the bad news is that her sister, who is a cosmologist, has just discovered that everything—the universe, space and time—is coming to an end in a week. (She begs you not to tell anyone, because that will cause a panic.)

Out of nerdy curiosity, you ask the doctor whether there will be a last moment of time. She says that the same question occurred to her, and there won’t be. The interval of time is open on the upper end: for every time t, there is a later time t′. It’s just that time is literally running out, and all the remaining times are less than about a week from now.

With grim amusement you note that you won’t die. For at every time in the future you will be alive, and there won’t even be a last time which one might want to identify as the “time of death”.

You reflect. It’s a bit of a plus that none of your friends will suffer from your death, but a big minus that they all have only a week left. In any case, there is no relief from fear of death.

I think this case shows that it’s not death or being dead that we fear when we don’t believe in an afterlife. We fear the fact that our future is finite. If this is right, then people like Lucretius who thought that we somehow confusedly imagined ourselves as existing after the end of our existence and that this was what explained the fear of death are likely mistaken.

A nearly equivalent version of the above thought experiment would be one where you find out that you’re going to live for an infinite amount of time, but your life will exponentially slow down. In the next week of life, you will experience half a subjective week. In the week after that, you will only experience a quarter of a subjective week, and then an eighth and so on. Your subjective future will be a week. But you will never die. That’s just as bad as permanently dying.

Tuesday, April 14, 2026

A problem with perfectly rational agents and decision theory

Suppose I am perfectly rational in the decision theoretic sense. A coin is about to be tossed, and I will get five dollars on heads (H) and one dollar on tails (T). I have a choice whether to leave the coin fair (F) or load it (L) in favor of tails so that the probability of tails is 3/4.

It is obvious what I do. I calculate the expected utilities of my options F and L as follows.

  • EU(F) = P(H|F) ⋅ $5 + P(T|F) ⋅ $1 = (1/2) ⋅ $5 + (1/2) ⋅ $1 = $3

and

  • EU(L) = P(H|L) ⋅ $5 + P(T|L) ⋅ $1 = (1/4) ⋅ $5 + (3/4) ⋅ $1 = $2.

And then I choose F.
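The calculation can be tabulated in a few lines (a minimal sketch; the function name is mine):

```python
def expected_utility(p_heads, payoff_heads=5.0, payoff_tails=1.0):
    """EU of the gamble given the probability of heads."""
    return p_heads * payoff_heads + (1.0 - p_heads) * payoff_tails

eu_fair = expected_utility(1 / 2)    # (1/2)*5 + (1/2)*1 = 3
eu_loaded = expected_utility(1 / 4)  # (1/4)*5 + (3/4)*1 = 2
```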

Except it’s not so simple. For I am perfectly rational. But since, as we just saw, the perfectly rational agent has to choose F, it follows that P(L) = 0, and so P(H|L) and P(T|L) are undefined—and with them the expected-utility calculation that justified choosing F. So I can’t decide! So now there is no guarantee of how I will act, and P(H|L) and P(T|L) once again make sense. And then again they don’t. Oops!

What can be done? Causal decision theorists will note that I reasoned like an evidential decision theorist above. But this makes no difference in this case. The causalist’s story will be a bit more complicated but will end up with the same problem.

We might want to introduce primitive conditional probabilities like Popper functions that let you conditionalize on events with zero probability, and then have P(H|L) = 1/4 and P(T|L) = 3/4, even though P(L) = 0. But that is introducing a lot of complications. Primitive conditional probabilities are not unproblematic.

What should we do? Maybe we should suppose something like primitive suppositional decision theory, where what we are primitively given are the suppositional probabilities P_F and P_L, without them being defined in terms of conditional and unconditional credences as in evidential and causal decision theories. But this seems problematic. Do we have to suppose that in addition to conditional and unconditional credences, we have suppositional credences? Maybe.

Or perhaps decision theory only applies to agents that have non-zero credences of going for all the options.

Monday, April 13, 2026

An argument from wonderful people

  1. Person x appears like they are in the image of God.

  2. So, probably, x is in the image of God.

  3. If God doesn’t exist, no one is in the image of God.

  4. So, God exists.

From the imago Dei to God

  1. It is not inappropriate to have a level of respect for human beings at least as great as what would be fitting for beings in the image and likeness of God.

  2. If naturalism is true about humans and God does not exist, then it is inappropriate to have a level of respect for human beings that is at least as great as what would be fitting for beings in the image and likeness of God.

  3. So, either naturalism is false about humans or God exists (or both).

Regarding premise 1, think about how problematic it would be to say that someone like Mother Teresa had too much respect for her fellow human beings.

Regarding premise 2, naturalism tells us that innately we’re just an arrangement of atoms, and if we add to that that God doesn’t exist, then this arrangement of atoms doesn’t have a special God-directed significance, so it seems inappropriate to bestow on us the level of respect that a being in the image and likeness of God would have.

I think one can strengthen the argument to provide additional evidence for the existence of God. If God doesn’t exist, then the only plausible way that humans could deserve the imago Dei level of respect is if human beings have a deep and very valuable reality going far beyond the neural networks in our brains, a reality that intrinsically calls for that very high level of respect. (If there is a God, then we don’t need quite as much intrinsic value for us to be worthy of that kind of respect, because we could derive value from our relation to the infinite God.) This is much more than ordinary non-naturalisms about consciousness give.

We thus learn from considerations of respect that if there is no God, humans need to be very non-natural in a god-like way. And beings like that are very hard to explain apart from God. So if we are beings like that, this provides significant evidence for theism.

A double lottery and non-normalized probabilities

Suppose a positive integer N is generated by a fair lottery.

Then, a random integer K is chosen between 1 and N (inclusive).

What information does this give you about N?

Obviously you now know that N ≥ K. Anything else?

Consider some specific pair of numbers n ≥ k, and suppose we’ve found out that K = k. What’s the probability that N = n? Of course P(N=n|K=k) = 0/0. But what if we do this as a limiting procedure? Suppose first that N is randomly chosen between 1 and M where M ≥ n, and let P_M be the probabilities for this case. Then

  • P_M(N=n|K=k) = (1/M)(1/n)/[(1/M)Σ_{j=k}^M 1/j] = (1/n)/Σ_{j=k}^M 1/j.

Take the limit as M goes to infinity. Since Σ_{j=k}^∞ 1/j = ∞, the limit is zero, so we don’t have a meaningful distribution for N.

On the other hand, what if we independently choose two random integers K_1 and K_2 between 1 and N? Suppose n ≥ k_i for i = 1, 2. Let k* = max(k_1, k_2). Then:

  • P_M(N=n|K_1=k_1, K_2=k_2) = (1/M)(1/n^2)/[(1/M)Σ_{j=k*}^M 1/j^2] = (1/n^2)/Σ_{j=k*}^M 1/j^2.

Take the limit as M → ∞ and call that P(N=n|K_1=k_1, K_2=k_2). The limit behaves like c_{k*}/n^2, for a constant c_{k*} > 0, and generates a well-defined probability for N = n.

With zero samples, we don’t have a well-defined probability for N. With one sample, we still don’t. But with two samples (or more), now we do. This is a rummy thing: how is it that sampling turns probabilistic nonsense into sense?
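The two-sample limit can be checked numerically (a sketch under a stated truncation of the tail; names are mine): the normalizer Σ_{j≥k*} 1/j^2 converges, so the posterior is a genuine distribution, whereas the one-sample normalizer Σ_{j≥k} 1/j diverges.

```python
def two_sample_posterior(k_star, tail=200_000):
    """Limiting P(N=n | K_1=k_1, K_2=k_2), with k* = max(k_1, k_2):
    proportional to 1/n^2 for n >= k*, normalized over a truncated tail."""
    norm = sum(1.0 / j**2 for j in range(k_star, tail))
    return {n: (1.0 / n**2) / norm for n in range(k_star, tail)}

post = two_sample_posterior(3)
total = sum(post.values())             # sums to 1 (up to truncation error)
most_likely = max(post, key=post.get)  # the mode is n = k*

# By contrast, the one-sample normalizer sum of 1/j is a harmonic series
# and diverges, so no limiting distribution exists with a single sample.
harmonic_short = sum(1.0 / j for j in range(3, 10**4))
harmonic_long = sum(1.0 / j for j in range(3, 10**6))
```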

This is making me more friendly to using non-normalized probabilities. After all, the fair lottery for N is easily modeled by the constant probability p_0(n) = 1. With one sample K = k, we have p_1(n) = 1/n for n ≥ k and p_1(n) = 0 for n < k. With two samples k_1, k_2, we have p_2(n) = 1/n^2 for n ≥ max(k_1, k_2) and p_2(n) = 0 otherwise. All this makes perfect sense. And there is a lovely mathematical feature of non-normalized probabilities: conditionalization is conjunction. The conditional probability of an event A on an event B is just the probability of A ∩ B.
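The bookkeeping for the densities p_0, p_1, p_2 is easy to mechanize (a sketch; representing densities as Python functions is my own choice):

```python
# Non-normalized "densities" on the positive integers, represented as
# Python functions from n to a non-negative weight.

def p0(n):
    """Fair lottery on the positive integers: constant density 1."""
    return 1.0

def after_sample(p, k):
    """Update density p on observing that K = k was drawn uniformly
    from {1, ..., N}: multiply by the likelihood 1/n when n >= k,
    and zero out n < k."""
    def updated(n):
        return p(n) / n if n >= k else 0.0
    return updated

p1 = after_sample(p0, 4)   # one sample, k = 4: 1/n for n >= 4, else 0
p2 = after_sample(p1, 2)   # second sample, k = 2: 1/n^2 for n >= max(4, 2)
```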

Non-normalized probabilities aren’t going to solve all problems with infinite fair lotteries. For instance, I toss a fair coin and generate a number N with the following rule. On heads, I choose N with my fair lottery on the positive integers. On tails, I choose N such that the probability of N = n is 2^{-n} (e.g., I toss an independent fair coin and let N be the number of the first toss that gives heads). What’s my non-normalized probability p(x,n), where x is heads or tails and n is a positive integer? We surely want Σ_n p(H,n) = Σ_n p(T,n): the total probability of the heads options equals the total probability of the tails options. But clearly p(T,n) has to exponentially decrease so that Σ_n p(T,n) is finite and non-zero. On the other hand, p(H,n) is constant, so Σ_n p(H,n) is zero or infinity. So they can’t be equal.

But I wonder if one could say something like this: Non-normalized probabilities make sense in certain cases, and in those cases it’s reasonable to use them?

Thursday, April 9, 2026

Predictability and epistemic utility

You’re thinking about whether to become an assembly-line worker or an artist. Then you reflect on the value of knowledge. And you become an assembly-line worker, on the grounds that if you do, you will know what you’ll be doing every working day of your future, but if you’re an artist, your activities will be unpredictable.

Some remarks. First, there is something perverse about using the value of knowledge in this way. The normal way to pursue the value of knowledge is to find out things that are independent of your pursuit. But here you are pursuing knowledge by making there be less to know about the world (or your world). Yet, paradoxically, it sure seems like the line of thought above makes sense.

Second, my initial story depends on Molinism being false. For if there are comprehensive subjunctive conditionals of free will, then by becoming an artist you get to know the conditionals about what you would do in the various artistic situations you’re in. But on the assembly-line story, you don’t get to know these. So the Molinist doesn’t have the paradox. I suppose that’s a bit of evidence for Molinism.

Life-time epistemic utility

I’ve been thinking about the diachronic aspects of epistemic utility. In the case of non-epistemic utility, we can get a decent first approximation to life-time utility by adding up (or, if time is continuous, integrating) momentary utility. But I think this works less well for the epistemic case. For many things of purely epistemic importance, that one figures them out matters much more than when one figures them out. (Granted, figuring them out earlier is instrumentally epistemically valuable, because it gives one more time to leverage the knowledge to figure out other things.)

Here’s an extreme version of encoding the value of “figuring something out”. Assuming one does not suffer from mental decline, the epistemic value of one’s life is the epistemic value of the very last moment of it. It’s interesting to note that this won’t work. For imagine that no matter what other credences you had at a given time, you always set the credence of “This is the last moment of my life” to one, while being careful to (inconsistently) make no use of this credence in updating. If only the last moment counts, this modification to your credences would be a good idea: it makes sure that when the last moment comes, you get the epistemic utility credit for it.

I suspect that other weightings that favor later over earlier beliefs will suffer from a similar problem—they make it a good idea to err on the side of pessimism about how close death is.

But at the same time, I think some sort of favoring of later beliefs over earlier ones seems appropriate. I don’t know how to resolve this difficulty.