Friday, August 29, 2025

Proportionate causality

Now let’s assume for the sake of argument:

Aquinas’ Principle of Proportionate Causality: Anything that causes something to have a perfection F must either have F or some more perfect perfection G.

And let’s think about what follows.

The Compatibility Thesis: If F is a perfection, then F is compatible with every perfection.

Argument: If F is incompatible with a perfection G, then having F rules out having perfection G. And that’s limitive rather than perfect. Perhaps the case where G = F needs to be argued separately. But we can do that. If F is incompatible with F, then F rules out all other perfections as well, and as long as there is more than one perfection (as is plausible), that violates the first part of the argument.

The Entailment Thesis: If F and G are perfections, and G is more perfect than F, then G entails F.

Argument: If F and G are perfections, and it is both possible to have G without having F and possible to have G while having F, then it is better to have both F and G than to have just G. But if it is better to have both F and G than to have just G, then F contributes something good that G does not, and hence we cannot say that G is more perfect than F—rather, in one respect F is more perfect and in another G is more perfect.

From the Entailment Thesis and Aquinas’ Principle of Proportionate Causality, we get:

The Strong Principle of Proportionate Causality: Anything that causes something to have a perfection F must have F.

For if the cause has only some more perfect perfection G rather than F itself, then by the Entailment Thesis G entails F, and so the cause has F after all. Interesting.

More on velocity

From time to time I’ve been playing with the question whether velocity just is rate of change of position over time in a philosophical elaboration of classical mechanics.

Here’s a thought. It seems that how much kinetic energy an object x has at time t (relative to a frame F, if we like) is a feature of the object at time t. But if velocity is rate of change of position over time, and velocity (together with mass) grounds kinetic energy as per E = m|v|²/2, then kinetic energy at t is a feature of how the object is at time t and at nearby times.

This argument suggests that we should take velocity as a primitive property of an object, and then take it that by a law of nature velocity causes a rate of change of position: dx/dt = v.

Alternately, though, we might say that momentum and mass ground kinetic energy as per E = |p|²/(2m), and momentum is not grounded in velocity. Instead, on classical mechanics, perhaps we have an additional law of nature according to which momentum causes a rate of change of position over time, which rate of change is velocity: v = dx/dt = p/m.
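Just to be explicit, the two formulas agree numerically given the standard classical definition p = mv; the disagreement is only over which quantity does the grounding. A quick check:

```latex
% Classical identity: with p = m v, the momentum-based and velocity-based
% expressions for kinetic energy coincide.
\[
  \frac{|p|^2}{2m} = \frac{|mv|^2}{2m} = \frac{m^2 |v|^2}{2m} = \frac{m |v|^2}{2}.
\]
```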

But in any case, it seems we probably shouldn’t both say that momentum is grounded in velocity and that velocity is nothing but rate of change of position over time.

Experiencing something as happening to you

In some video games, it feels like I am doing the in-game character’s actions and in others it feels like I am playing a character that does the actions. The distinction does not map onto the distinction between first-person-view and third-person-view. In a first-person view game, even a virtual reality one (I’ve been playing Asgard Wrath 2 on my Quest 2 headset), it can still feel like a character is doing the action, even if visually I see things from the character’s point of view. On the other hand, one can have a cartoonish third-person-view game where it feels like I am doing the character’s actions—for instance, Wii Sports tennis. (And, of course, there are games which have no in-game character responsible for the actions, such as chess or various puzzle games like Vexed. But my focus is on games where there is something like an in-game character.)

For those who don’t play video games, note that one can watch a first-person-view movie like Lady in the Lake without significantly identifying with the character whose point of view is presented by the camera. And sometimes there is a similar distinction in dreams, between events happening to one and events happening to an in-dream character from whose point of view one looks at things. (And, conversely, in real life some people suffer from depersonalization, where it feels like the events of life are happening to a different person.)

Is there anything philosophically interesting that we can say about the felt distinction between seeing something from someone else’s point of view—even in a highly immersive and first-person way as in virtual reality—and seeing it as happening to oneself? I am not sure. I find myself feeling like things are happening to me more in games with a significant component of physical exertion (Wii Sports tennis, VR Thrill of the Fight boxing) and where the player character doesn’t have much character to them, so it is easier to embody them, and less so in games with a significant narrative where the player character has character of their own—even when it is pretty compelling, as in Deus Ex. Maybe both the physical aspect and the character aspect are bound up in a single feature—control. In games with a significant physical component, there is more physical control. And in games where there is a well-developed player character, presumably to a large extent this is because the character’s character is the character’s own and only slightly under one’s control (e.g., maybe one can control fairly coarse-grained features, roughly corresponding to alignment in D&D).

If this is right, then a goodly chunk of the “it’s happening to me” feeling comes not from the quality of the sensory inputs—one can still have that feeling when the inputs are less realistic and lack it when they are more realistic—but from control. This is not very surprising. But if it is true, it might have some philosophical implications outside of games and fiction. It might suggest that self-consciousness is more closely tied to agency than is immediately obvious—that self-consciousness is not just a matter of a sequence of qualia. (Though, I suppose, someone could suggest that the feeling of self-consciousness is just yet another quale, albeit one that typically causally depends on agency.)

Wednesday, August 27, 2025

More decision theory stuff

Suppose there are two opaque boxes, A and B, of which I can choose one. A nearly perfect predictor of my actions put $100 in the box that they thought I would choose. Suppose I find myself with evidence that it’s 75% likely that I will choose box A (maybe in 75% of cases like this, people like me choose A). I then reason: “So, probably, the money is in box A”, and I take box A.

This reasoning is supported by causal decision theory. There are two causal hypotheses: that there is money in box A and that there is money in box B. Evidence that it’s 75% likely that I will choose box A provides me with evidence that it’s close to 75% likely that the predictor put the money in box A. The causal expected value of my choosing box A is thus around $75 and the causal expected value of my choosing box B is around $25.

On evidential decision theory, it’s a near toss-up what to do: the expected news value of my choosing A is close to $100 and so is that of my choosing B.

Thus, on causal decision theory, if I have to pay a $10 fee for choosing box A, while choosing box B is free, I should still go for box A. But on evidential decision theory, since it’s nearly certain that I’ll get a prize no matter what I do, it’s pointless to pay any fee. And that seems to be the right answer to me here. But evidential decision theory gives the clearly wrong answer in some other cases, such as that infamous counterfactual case where an undetected cancer would make you likely to smoke, with no causation in the other direction, and so on evidential decision theory you refrain from smoking to make sure you didn’t get the cancer.
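Going back to the boxes, here is a minimal numerical sketch of the fee variant. The 99% predictor accuracy and the identification of utility with dollars are my own illustrative assumptions, standing in for “nearly perfect”:

```python
# Two opaque boxes; a nearly perfect predictor put $100 in the box it thought
# I would choose. My evidence says I'm 75% likely to choose A, and there is a
# $10 fee for choosing A. Illustrative assumptions: accuracy 0.99, utility = dollars.

P_CHOOSE_A = 0.75
ACCURACY = 0.99
PRIZE = 100
FEE_A = 10

# Causal decision theory: my credence that the money is in A comes from my
# evidence about what I will choose, and is not altered by the choice itself.
p_money_in_A = P_CHOOSE_A * ACCURACY + (1 - P_CHOOSE_A) * (1 - ACCURACY)
cdt_A = p_money_in_A * PRIZE - FEE_A
cdt_B = (1 - p_money_in_A) * PRIZE
print("CDT:", round(cdt_A, 2), "for A,", round(cdt_B, 2), "for B")   # ~64.5 vs ~25.5

# Evidential decision theory: conditional on my choice, the money is almost
# certainly in the chosen box, so the news values are nearly equal and the
# fee is decisive.
edt_A = ACCURACY * PRIZE - FEE_A
edt_B = ACCURACY * PRIZE
print("EDT:", round(edt_A, 2), "for A,", round(edt_B, 2), "for B")   # 89 vs 99
```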

In recent posts, I’ve been groping towards an alternative to both theories. The alternative depends on the idea of imagining looking at the options from the standpoint of causal decision theory after updating on the hypothesis that one has made a specific choice. In my current predictor case, if you were to learn that you chose A, you would think: very likely the money is in box A, so choosing box A was a good choice; while if you were to learn that you chose B, you would think: very likely the money is in box B, so choosing box B was a good choice. As a result, it’s tempting to say that both choices are fine—they both ratify themselves, or something like that. But that misses the plausible claim that if there is a $10 fee for choosing A, you should choose B. I don’t know how best to get that claim. Evidential decision theory gets it, but evidential decision theory has other problems.

Here’s something gerrymandered that might work for some binary choices. For options X and Y, which may or may not be the same, let eX(Y) be the causal expected value of Y with respect to the credences for the causal hypotheses updated on your having chosen X. Now, say that the differential retrospective causal expectation d(X) of option X equals eX(X) − eX(Y), where Y is the other option. This measures how much you would think you gained, from the standpoint of causal decision theory, in choosing X rather than Y by the lights of having updated on choosing X. Then you should choose the option with the bigger d(X).

In the case where there is a $10 fee for choosing box A, d(A) is approximately $90 (eA(A) ≈ $90 and eA(B) ≈ $0), while d(B) is approximately $110 (eB(B) ≈ $100 and eB(A) ≈ −$10), so you should go for box B, as per my intuition. So you end up agreeing with evidential decision theory here.
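For concreteness, here is that calculation as a little script; the 0.99 accuracy standing in for “nearly perfect” is again my own illustrative assumption:

```python
# Differential retrospective causal expectation d(X) = e_X(X) - e_X(Y), where
# e_X(Y) is the causal expected value of Y after updating on having chosen X.
# Box A carries a $10 fee; the predictor is assumed to be 99% accurate.

ACCURACY = 0.99
PRIZE = 100
FEE = {"A": 10, "B": 0}

def e(updated_on, option):
    # Credence that the money is in `option`, given that I chose `updated_on`.
    p_money_here = ACCURACY if option == updated_on else 1 - ACCURACY
    return p_money_here * PRIZE - FEE[option]

def d(x):
    y = "B" if x == "A" else "A"
    return e(x, x) - e(x, y)

print("d(A) =", round(d("A"), 2))   # about 90
print("d(B) =", round(d("B"), 2))   # about 110
```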

You avoid the conclusion you should smoke to make sure you don’t have cancer in the hypothetical case where cancer causes smoking but not conversely, because the differential retrospective causal expectation of smoking is positive while the differential retrospective causal expectation of not smoking is negative, assuming smoking is fun (is it?). So here you agree with causal decision theory.

What about Newcomb’s paradox? If the clear box has a thousand dollars and the opaque box has a million or nothing (depending on whether you are predicted to take just the opaque box or to take both), then the differential retrospective causal expectation of two-boxing is a thousand dollars (when you learn that you two-boxed, you learn that the opaque box was likely empty) and the differential retrospective causal expectation of one-boxing is minus a thousand dollars.
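Re-running the same recipe on Newcomb’s problem, again with an assumed 99% accurate predictor:

```python
# Newcomb: the opaque box holds $1,000,000 iff one-boxing was predicted; the
# clear box always holds $1,000. Predictor accuracy assumed to be 0.99.

ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

def e(updated_on, option):
    # Credence that the opaque box is full, given that I chose `updated_on`.
    p_full = ACCURACY if updated_on == "one-box" else 1 - ACCURACY
    return p_full * MILLION + (THOUSAND if option == "two-box" else 0)

def d(x):
    y = "two-box" if x == "one-box" else "one-box"
    return e(x, x) - e(x, y)

print("d(two-box) =", d("two-box"))   # +1000
print("d(one-box) =", d("one-box"))   # -1000
```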

So the differential retrospective causal expectation theory agrees with causal decision theory in the clear case (cancer-causes-smoking) and in the difficult case (Newcomb), but agrees with evidential decision theory in the $10 fee variant of my two-box scenario, and the last seems plausible.

But (a) it’s gerrymandered and (b) I don’t know how to generalize it to cases with more than two options. I feel lost.

Maybe I should stop worrying about this stuff, because maybe there just is no good general way of making rational decisions in cases where there is probabilistic information available to you about how you will make your choice.

Tuesday, August 26, 2025

Position: Assistant Professor of Bioethics, Tenure Track, Department of Philosophy, Baylor University

We're hiring again. Here's the full ad.

My AI policy

I’ve been wondering what to allow and what to disallow in terms of AI. I decided to treat AI as basically persons and I put this in my Metaphysics syllabus:

Even though (I believe) AI is not a person and its products are not “thoughts”, treat AI much like you would a person in writing your papers. I encourage you to have conversations with AIs about the topics of the class. If you get ideas from these conversations, put in a footnote saying you got the idea from an AI, and specifically cite which AI. If you use the AI’s words, put them in quotation marks. (If your whole paper is in quotation marks, it’s not cheating, but you haven’t done the writing yourself and so it’s like a paper not turned in, a zero.) Just as you can ask a friend to help you understand the reading, you can ask an AI to help you understand the reading, and in both cases you should have a footnote acknowledging the help you got. Just as you can ask a friend, or the Writing Center or Microsoft Word to find mistakes in your grammar and spelling, you can ask an AI to do that, and as long as the contribution of the AI is to fix errors in grammar and spelling, you don’t need to cite. But don’t ask an AI to rewrite your paper for you—now you’re cheating as the wording and/or organization is no longer yours, and one of the things I want you to learn in this class is how to write. Besides all this, last time I checked, current AI isn’t good at producing the kind of sharply focused numbered valid arguments I want you to make in the papers—AI produces things that look like valid arguments, but may not be. And they have a distinctive sound to them, so there is a decent chance of getting caught. When in doubt, put in a footnote at the end what help you got, whether from humans or AI, and if the help might be so much that the paper isn’t really yours, pre-clear it with me.

An immediate regret principle

Here’s a plausible immediate regret principle:

  1. It is irrational to make a decision such that learning that you’ve made this decision immediately makes it rational to regret that you didn’t make a different decision.

The regret principle gives an argument for two-boxing in Newcomb’s Paradox, since if you go for one box, as soon as you have made your decision to do that, you will regret you didn’t make the two-box decision—there is that clear box with money staring at you, but if you go for two boxes, you will have no regrets.

Interestingly, though, one can come up with predictor stories where one has regrets no matter what one chooses. Suppose there are two opaque boxes, A and B, and you can take either box but not both. A predictor put a thousand dollars in the box that they predicted you won’t take. Their prediction need not be very good—all we need for the story is that there is a better than even probability of their having predicted you choosing A conditionally on your choosing A and a better than even probability of their having predicted you choosing B conditionally on your choosing B. But now as soon as you’ve made your decision, and before you have opened the chosen box, you will think the other box is more likely to have the money, and so your knowledge of your decision will make it rational to regret that decision. Note that while the original Newcomb problem is science-fictional, there is nothing particularly science-fictional about my story. It would not be surprising, for instance, if someone were able to guess with a better than even chance of correctness what their friends would choose.
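For concreteness, a tiny sketch with an assumed 60% reliable guesser; any accuracy above one half gives the same qualitative result:

```python
# A predictor put $1000 in the box they predicted I would NOT take. Let p be
# the probability that they predicted my actual choice, conditional on that
# choice; the story only needs p > 0.5. Illustrative value: p = 0.6.

p = 0.6
PRIZE = 1000

# If I choose A, the money is in B exactly when the predictor predicted A.
expected_chosen_box = (1 - p) * PRIZE   # 400
expected_other_box = p * PRIZE          # 600

print("Expected value of the box I chose:", expected_chosen_box)
print("Expected value of the box I passed up:", expected_other_box)
# Whichever box I choose, the unchosen box looks better in expectation, so
# knowledge of my decision makes regret rational.
```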

Is this a counterexample to the immediate regret principle (1), or is this an argument that there are real rational dilemmas, cases where all options are irrational?

I am not sure, but I am inclined to think that it’s a counterexample to the regret principle.

Can we modify the immediate regret principle to save it? Maybe. How about this?

  2. No decision is such that learning that you’ve rationally made this decision immediately makes it rationally required to regret that you didn’t make a different decision.

On this regret principle, regret is compatible with non-irrational decision making but not with (known) rational decision making.

In my box story, it is neither rational nor irrational to choose A, and it is neither rational nor irrational to choose B. Then there is no contradiction to (2), since (2) only applies to decisions that are rationally made. And applying (2) to Newcomb’s Paradox no longer yields an argument for two-boxing, but only an argument that it is not rational to one-box. (For if it were rational to one-box, one could rationally decide to one-box, and one would then regret that.)

The “rationally” in (2) can be understood in a weaker way or a stronger way (the stronger way reads it as “out of rational requirement”). On either reading, (2) has some plausibility.

Monday, August 25, 2025

An odd decision theory

Suppose I am choosing between options A and B. Evidential decision theory tells me to calculate the expected utility E(U|A) given the news that I did A and the expected utility E(U|B) given the news that I did B, and go for the bigger of the two. This is well-known to lead to the following absurd result. Suppose there is a gene G that both causes one to die a horrible death one day and makes one very likely to choose A, while absence of the gene makes one very likely to choose B. Then if A and B are different flavors of ice cream, I should always choose B, because E(U|A) ≪ E(U|B), since the horrible death from G trumps any advantage of flavor that A might have over B. This is silly, of course, because one’s choice does not affect whether one has G.

Causal decision theorists proceed as follows. We have a set of “causal hypotheses” about what the relevant parts of the world at the time of the decision are like. For each causal hypothesis H we calculate E(U|HA) and E(U|HB), take the average weighted by our probabilities for the hypotheses, and decide accordingly. In other words, we have a causal expected utility of D

  • Ec(U|D) = ∑H E(U|HD) P(H)

and are to choose A over B provided that Ec(U|A) > Ec(U|B). In the gene case, the “bad news” of the horrible death on G is a constant addition to Ec(U|A) and to Ec(U|B), and so it can be ignored—as is right, since it’s not in our control.
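As a sketch of how the death term drops out, here is the causal expected utility computed for the original gene case. The numbers are mine and purely illustrative: flavor A is worth 2 units to me, flavor B is worth 1, the horrible death is worth −1000, and my credence that I have G is 1/2.

```python
# Ec(U|D) = sum over causal hypotheses H of E(U | H & D) * P(H).
# Illustrative numbers: flavor A = 2, flavor B = 1, death = -1000, P(G) = 0.5.

P_G = 0.5
DEATH = -1000
FLAVOR_VALUE = {"A": 2, "B": 1}

def utility(has_gene, choice):
    return FLAVOR_VALUE[choice] + (DEATH if has_gene else 0)

def causal_eu(choice):
    return P_G * utility(True, choice) + (1 - P_G) * utility(False, choice)

print("Ec(U|A) =", causal_eu("A"))   # -498.0
print("Ec(U|B) =", causal_eu("B"))   # -499.0
# The death term adds the same -500 to both options, so only the flavor
# difference matters: causal decision theory says to pick the tastier flavor.
```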

But here is a variant case that worries me. Suppose that you are choosing between flavors A and B of ice cream, and you will only ever get to taste one of them, and only once. You can’t figure out which one will taste better for you (maybe one is oyster ice cream and the other is sea urchin ice cream). However, data shows that not only does G make one likely to choose A and its absence makes one likely to choose B, but everyone who has G derives pleasure from A and displeasure from B and everyone who lacks G has the opposite result, and all the pleasures and displeasures are of the same magnitude.

Now, background information says that you have a 3/4 chance of having G. On causal decision theory, this means that you should choose A, because likely you have G, and those who have G all enjoy A. Evidential decision theory, however, tells you that you should choose B, since if you choose B then likely you don’t have the terrible gene G.

In this case, I feel causal decision theory isn’t quite right. Suppose I choose A. Then after I have made my choice, but before I have consumed the ice cream, I will be glad that I chose A: my choice of A will make me think I have G, and hence that A is tastier. But similarly, if I choose B, then after I have made my choice, and again before consumption, I will be glad that I chose B, since my choice of B will make me think I don’t have G and hence that B was a good choice. Whatever I choose, I will be glad I chose it. This suggests to me that there is nothing wrong with either choice!

Here is the beginning of a third decision theory, then—one that is neither causal nor evidential. An option A is permissible provided that causal decision theory with the causal hypothesis credences conditioned on one’s choosing A permits one to do A. An option A is required provided that no alternative is permissible. (There are cases where no option is permissible. That’s weird, I admit.)

In the initial case, where the pleasure of each flavor does not depend on G, this third decision theory gives the same answer as causal decision theory—it says to go for the tastier flavor. In the second case, however, where the pleasure/displeasure depends on G, it permits one to go for either flavor. In a probabilistic-predictor Newcomb’s Paradox, it says to two-box.
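Here is a sketch of how the three verdicts come apart in the gene-dependent taste case. The specific numbers are mine, just to model the story: the pleasure or displeasure of a flavor is ±1, the horrible death is −1000, the prior credence in G is 3/4, and “very likely” is modelled as a 0.95 credence in G after choosing A and a 0.05 credence after choosing B.

```python
# Gene-dependent taste case. Causal hypotheses: having G or not. With G,
# flavor A gives +1 and B gives -1; without G it is reversed. G also causes
# a horrible death (-1000). The credences below are illustrative assumptions.

P_G_PRIOR = 0.75
P_G_GIVEN_CHOICE = {"A": 0.95, "B": 0.05}
DEATH = -1000

def utility(has_gene, choice):
    taste = 1 if (choice == "A") == has_gene else -1
    return taste + (DEATH if has_gene else 0)

def causal_eu(choice, p_g):
    return p_g * utility(True, choice) + (1 - p_g) * utility(False, choice)

# Causal decision theory: evaluate both options with the prior credence in G.
print("CDT:", {c: causal_eu(c, P_G_PRIOR) for c in "AB"})             # A wins by 1

# Evidential-style evaluation: condition the credence in G on the choice.
print("EDT:", {c: causal_eu(c, P_G_GIVEN_CHOICE[c]) for c in "AB"})   # B wins big

# Third theory: an option is permissible iff it is causally best relative to
# the credences you would have after learning that you chose it.
for c in "AB":
    p = P_G_GIVEN_CHOICE[c]
    best = max("AB", key=lambda o: causal_eu(o, p))
    print(f"Updated on choosing {c}: causally best option is {best}")
# Each option ratifies itself, so the third theory permits both.
```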

Saturday, August 23, 2025

Gaze dualism and omnisubjectivity

I have toyed with a pair of theories.

The first is what I call gaze-dualism. On gaze-dualism, our sensory conscious experiences are constituted by a non-physical object—the soul—“gazing” at certain brain states. When the sensory data changes—say, when a sound goes from middle A to middle C—the subjective experience changes. But this change need not involve an intrinsic change in the soul. The change in experience is grounded in a change in the gazed-at brain state, a brain state that reflects the sensory data, rather than by a change in the gazing soul. (This is perhaps very close to Aquinas’ view of sensory consciousness, except that for Aquinas the gazed-at states are states of sense organs rather than of the brain.)

The second is an application of this to God’s knowledge of contingent reality. God knows contingent reality by gazing at it the way that our soul gazes at the brain states that reflect sensory data. God does not intrinsically change when contingent reality changes—the change is all on the side of the gazed-at contingent reality.

I just realized that this story makes a bit of progress on what Linda Zagzebski calls “omnisubjectivity”—God’s knowledge of all subjective states. My experience of hearing a middle C comes from my gazing at a brain state BC of my auditory center produced by nerve impulses caused by my tympanic membrane vibrating at 256 Hz. My gaze is limited to certain aspects of my auditory center—my gaze tracks whatever features of my auditory center are relevant to the sound, features denoted by BC, but does not track features of my auditory center that are not relevant to the sound (e.g., the temperature of my neurons). God’s gaze is not so limited—God gazes at every aspect of my auditory center. But in doing so, he also gazes at BC. This does not mean that God has the same experience as I do. My experience is partly constituted by my soul’s gaze at BC. God’s experience is partly constituted by God’s gaze at BC. Since my soul is very different from God, it is not surprising that the experiences are different. However, God has full knowledge of the constituents of my experience: myself, my gaze, and BC, and God’s knowledge of these is basically experiential—it is constituted by God’s gazing at me, my gaze, and BC. And God also gazes at their totality. This is, I think, all we need to be able to say that God knows my sensory consciousness states.

My non-sensory experiences may also be constituted by my soul’s gazing at a state of my brain, but they may also be constituted by the soul’s gazing at a state of the soul. And God gazes at the constituents and whole again.

Diversity of inner lives

There is a vast and rather radical diversity in the inner conscious lives of human beings. Start with the differences in dreams: some people know immediately whether they are dreaming and others do not; some are in control of their dreams and others are not; some dream in color and others do not. Now move on to the differences in thought. Some think in pictures, some in words with sounds, some in a combination of words with sounds and written words, and some without any visual or aural imagery. Some people are completely unable to imagine things in pictures, others can do so only in a shadowy and unstable way, and yet others can do so in detail. Even in the case of close friends, we often have no idea about how they differ in these respects, and to many people the diversity in inner conscious lives comes as a surprise, as they assume that almost everyone is like them.

But in their outer behavior, including linguistic behavior, people seem much more homogeneous. They say “I think that tomorrow is a good day for our bike trip” regardless of whether they thought it out in pictures, in sounds, or in some other way. They give arguments as a sequence of logically connected sentences. Their desires, while differing from person to person, are largely comprehensible and not very surprising. People are more homogeneous outside than inside.

This contrast between inner heterogeneity and outward homogeneity is something I realized yesterday while participating in a workshop on Linda Zagzebski’s manuscript on dreams. I am not quite sure what to make of this contrast philosophically, but it seems really interesting. We flatten our inner lives to present them to people in our behavior, but we also don’t feel like much is lost in this flattening. It doesn’t really matter much whether our thoughts come along with sights or sounds. It would not be surprising if there were differences in skill levels that correlated with the characteristics of inner life—it would not be surprising if people who thought more in pictures were better at low-dimensional topology—but these differences are not radical.

Many of us as children have wondered whether other people’s conscious experiences are the same as ours—does red look the same (bracketing colorblindness) and does a middle C sine wave sound the same (bracketing hearing deficiencies)? I have for a while thought it not unlikely that the answer is negative, because I am attracted to the idea that central to how things look to us are the relationships between different experiences, and different people have different sets of experiences. (Compare the visual field reversal experiments, where people who wear visual field reversal glasses initially see things upside-down but then things turn right-side-up, which suggests to me that the directionality of the visual field is constituted by relationships between different experiences rather than being something intrinsic.) I think the vast diversity in conscious but non-sensory inner lives gives us some reason to think that sensory consciousness also differs quite a bit between people—and gets flattened and homogenized into words, much as thoughts are.

Friday, August 8, 2025

Extrinsic well-being and the open future

Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. How well off you were in performing the action depends on whether the action succeeded—which depends on whether E eventuates at t2. But now suppose the future is open. Then in a world with as much indeterminacy as ours, in many cases at t1 it will be contingent whether the event at t2 on which your well-being at t1 depends eventuates. And on open future views, at t1 there will then be no fact of the matter about your well-being. Hence, the future is not open.

Opie: In such cases, your well-being should be located at t2 rather than at t1. If you jump the crevasse, it is only when you land that you have the well-being of success.

Klaus: This does not work as well in cases where you are dead at t2. And yet our well-being does sometimes depend on what happens after we are dead. The action at t1 might be a heroic sacrifice of one’s life to save one’s friends—but whether one is a successful hero or a tragic hero depends on whether the friends will be saved, which may depend on what happens after one is already dead.

Opie: Thanks! You just gave me an argument for an afterlife. In cases like this, you are obviously better off if you manage to save your friends, but you aren’t better off in this life, so there must be life after death.

Klaus: But we also have the intuition that even if there were no afterlife, it would be better to be the successful hero than the tragic hero, and that posthumous fame is better than posthumous infamy.

Opie: There is an afterlife. You’ve convinced me. And moral intuitions about how things would be if our existence had a radically different shape from the one it in fact has are suspect. And, given that there is an afterlife, a scenario without an afterlife is a scenario where our existence has a radically different shape. Thus the intuition you cite is unreliable.

Klaus: That’s a good response. Let me try a different case. Suppose you perform an onerous action with a goal within this life, but then you change your mind about the goal and work to prevent that goal. This works best if both goals are morally acceptable, and switching goals is acceptable. For instance you initially worked to help the Niners train to win their baseball game against the Logicians, but then your allegiance shifted to the Logicians in a way that isn't morally questionable. And then suppose the Niners won. Your actions in favor of the Niners are successful, and you have well-being. But it is incorrect to locate that well-being at the time of the actual victory, since at that time you are working for the Logicians, not the Niners. So the well-being must be located at the time of your activity, and at that time it depends on future contingents.

Opie: Perhaps I should say that at the time the Niners beat the Logicians, you are both well-off and badly-off, since one of your past goals is successful and the other is unsuccessful. But I agree that this doesn’t quite seem right. After all, if you are loyal to your current employer, you’re bummed out about the Logicians’ loss and you’re bummed out that you weren’t working for them from the beginning. So intuitively you're just badly off at this time, not both badly and well off. So, I admit, this is a little bit of evidence against open future views.

Consciousness and the open future

Plausibly:

  1. There is a “minimal humanly observable duration” (mhod) such that a human cannot have a conscious state—say, a pain—shorter than an mhod, but can have a conscious state that’s an mhod long.

The “cannot” here is nomic possibility rather than metaphysical possibility.

Let δ denote an mhod. Now, suppose that you feel a pain precisely from t0 to t2. Then t2 ≥ t0 + δ. Now, let t1 = t0 + δ/2. Then you feel a pain at t1. But by t1, you have felt the pain for only half an mhod. Thus:

  2. At t1, that you feel pain depends on substantive facts about your mental state at times after t1.

For if your head were suddenly zapped by a giant laser a quarter of an mhod after t1, then you would not have felt a pain at t1, because you would have been in a position to feel pain only from t0 to t0 + (3/4)δ.
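Spelling out the interval arithmetic:

```latex
% If the pain starts at t_0 and t_1 = t_0 + \delta/2, then a zap a quarter of
% an mhod after t_1 ends the episode at
\[
  t_1 + \tfrac{\delta}{4} = t_0 + \tfrac{\delta}{2} + \tfrac{\delta}{4}
                          = t_0 + \tfrac{3}{4}\delta,
\]
% so the whole episode would last only (3/4)\delta < \delta, which by (1) is
% too short for any conscious pain at all, including a pain at t_1.
```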

But in a universe full of quantum indeterminacy:

  3. These substantive facts are contingent.

After all, your brain could just fail a quarter of an mhod after t1 due to a random quantum event.

But:

  4. Given an open future, at t1 there are no substantive contingent facts about the future.

Thus:

  5. Given an open future, at t1 there is no fact that you are conscious.

Which is absurd!