## Wednesday, September 11, 2024

### Independence conglomerability

Conglomerability says that if you have an event E and a partition {Ri : i ∈ I} of the probability space, then if P(E ∣ Ri) ≥ λ for all i, we likewise have P(E) ≥ λ. Absence of conglomerability leads to a variety of paradoxes, but in various infinitary contexts it is necessary to abandon conglomerability.

I want to consider a variant on conglomerability, which I will call independence conglomerability. Suppose we have a collection of events {Ei : i ∈ I}, and suppose that J is a randomly chosen member of I, with J independent of all the Ei taken together. Independence conglomerability requires that if P(Ei) ≥ λ for all i, then P(EJ) ≥ λ, where EJ is the event defined by: ω ∈ EJ if and only if ω ∈ EJ(ω) (that is, ω lies in the event whose index is the value J(ω)), for ω in our underlying probability space Ω.

Independence conglomerability follows from conglomerability if we suppose that P(EJ ∣ J = i) = P(Ei) for all i.

However, note that independence conglomerability differs from conglomerability in two ways. First, it can make sense to talk of independence conglomerability even in cases where one cannot meaningfully conditionalize on J = i (e.g., because P(J=i) = 0 and we don’t have a way of conditionalizing on zero probability events). Second, and this seems like it could be significant, independence conglomerability seems a little more intuitive. We have a bunch of events, each of which has probability at least λ. We independently randomly choose one of these events. We should expect the probability that our randomly chosen event happens to be at least λ.
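When I is finite and J has a real-valued distribution, independence conglomerability is just the law of total probability: P(EJ) = Σi P(J = i) P(Ei) ≥ λ. A minimal sketch of this finite case (the die, the events, and the probabilities are all illustrative):

```python
from fractions import Fraction

# Finite toy model: three events over a fair six-sided die, and an
# independent uniform choice J of which event to use. Here independence
# conglomerability holds by total probability:
# P(E_J) = sum_i P(J=i) P(E_i) >= min_i P(E_i).

events = {
    1: {1, 2, 3, 4},  # P = 4/6
    2: {2, 3, 4, 5},  # P = 4/6
    3: {1, 3, 5, 6},  # P = 4/6
}
p_j = {i: Fraction(1, 3) for i in events}  # uniform, independent choice of J

# P(E_J) over the product space (die outcome, choice of J)
p_EJ = sum(p_j[i] * Fraction(len(events[i]), 6) for i in events)
lam = min(Fraction(len(E), 6) for E in events.values())

assert p_EJ >= lam
print(p_EJ, lam)  # 2/3 2/3
```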

Imagine that independence conglomerability fails. Then you can have the following scenario. For each i ∈ I there is a game available for you to play, where you win provided that Ei happens. You get to choose which game to play. Suppose that for each game, the probability of victory is at most λ. But, paradoxically, there is a random way to choose which game to play, independent of the events underlying all the games, where your probability of victory is strictly bigger than λ. (Here I reversed the inequalities defining independence conglomerability, by replacing events with their complements as needed.) Thus you can do better by randomly choosing which game to play than by choosing a specific game to play.

Example: I am going to uniformly randomly choose a positive integer (using a countably infinite fair lottery, assuming for the sake of argument that such a thing is possible). For each positive integer n, you have a game available to you: you win if n is greater than or equal to the number I pick. You despair: there is no way for you to have any chance of winning, because whatever positive integer n you choose, I am infinitely more likely to get a number bigger than n than a number less than or equal to n, so your chance of winning is zero or infinitesimal regardless of which game you pick. But then you have a brilliant idea. If instead of choosing a specific number, you independently and uniformly choose a positive integer n, the probability of your winning will be at least 1/2 by symmetry. Thus a situation with two independent countably infinite fair lotteries and a symmetry constraint that probabilities don’t change when you swap the lotteries with each other violates independence conglomerability.
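The countably infinite fair lottery itself cannot be simulated, but the symmetry bound has an exact finite analogue: if both numbers are drawn uniformly and independently from {1, ..., N}, the probability that yours is at least as large as mine is 1/2 + 1/(2N), which exceeds 1/2 for every N. A sketch (the N values are just illustrative):

```python
from fractions import Fraction

# Finite stand-in for the two independent lotteries: both numbers drawn
# uniformly and independently from {1, ..., N}. By symmetry,
# P(yours >= mine) = 1/2 + P(tie)/2 = 1/2 + 1/(2N) >= 1/2.

def p_win(N):
    # exact probability that an independent uniform pick n satisfies n >= m
    wins = sum(1 for n in range(1, N + 1) for m in range(1, N + 1) if n >= m)
    return Fraction(wins, N * N)

for N in (2, 10, 1000):
    p = p_win(N)
    assert p == Fraction(1, 2) + Fraction(1, 2 * N)
    assert p >= Fraction(1, 2)
```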

Is this violation somehow more problematic than the much discussed violations of plain conglomerability that happen with countably infinite fair lotteries? I don’t know, but maybe it is. There is something particularly odd about the idea that you can noticeably increase your chance of winning by randomly choosing which game to play.

### Comparing axiologies

Are there ways in which it would be better if axiology were different? Here’s a suggestion that comes to mind:

1. It would be better if cowardice, sloth, dishonesty, ignorance, suffering and all the other things that are actually intrinsic evils were instead great intrinsic goods.

For surely it would be better for there to be more goods!

On the other hand, one might have this optimistic thought:

2. The actually true axiology is better than any actually false axiology.

(Theists are particularly likely to think this, since they will likely think that the true axiology is grounded in the nature of a perfect being.)

We have an evident tension between (1) and (2).

What’s going on?

One move is to say that it makes no sense to discuss the value of impossible scenarios. I am inclined to think that this isn’t quite correct. One might think it would be really good if the first eight thousand binary digits of π encoded the true moral code in English using ASCII coding, even though this is impossible (I assume). Likewise, it is impossible for a human to know all of mathematics, but it would be good to do so.

The solution I would go for is that axiology needs to be kept fixed in value comparisons. Imagine that I am living a blessed life of constant painless joy, and dissatisfied with that I find myself wishing for the scenario where joyless pain is even better than painless joy and I live a life of joyless pain. If one need not keep axiology fixed in value comparisons, that wish makes perfect sense, but I think it doesn’t—unlike the wish about π or the knowledge of mathematics.

### A way to be calmer

For years I would find myself periodically annoyed by shoelaces. Several times a day, I would have to engage in finicky fine-motor activity to tie my shoes. This made me a little angry, because I suspected that the reason why few adult shoes have alternate closures has to do with fashion rather than with any technological benefit of shoelaces (note, after all, that shoelaces come undone and get caught in bike gears, so it's not all a matter of laziness), and I've always resented the social pressures of fashion imposing burdens on us.

I've thought about this for a long time, and then recently finally decided to do something about it. I pulled out some cord locks (in the photo are some heavy duty cord locks that I salvaged from something years ago), pulled my shoelaces through them, and after a day or two of experimental use, I cut the shoelaces down, and knotted them above the cord locks. No more regular annoyance and anger at society's fashion choices!

To fasten, I just grab the cord lock with one hand, and pull the permanent knot with the other. To unfasten, I just grab the cord lock and pull it to the knot. At any time, I can easily adjust tension in either direction without untying. It doesn't come loose. It doesn't get stuck in bike gears. It's not quite as instantaneous as I had imagined, but it is pretty fast.

It has some minor downsides. Eventually a cord lock will break down, though I don't know whether that will happen before the shoe wears out. At the length of lace I settled on (a little shorter than in this photo), the shoes don't loosen quite as far for removal as I might ideally prefer. And one would probably need to cut the laces to launder the shoes, but I don't launder my shoes.

### The void between the atoms

Philoponus says:

> When Democritus said that the atoms are in contact with each other, he did not mean contact, strictly speaking, which occurs when the surfaces of the things in contact fit on [epharmazousōn] one another, but the condition in which the atoms are near one another and not far apart is what he called contact. For no matter what, they are separated by void. (67A7)

This odd view would lead to three difficulties. First, the loveliness of the Democritean system is that everything is explained by atoms pushing each other around, without any mysterious action at a distance, without any weird forces like the love and strife posited by other Greek thinkers. But if two atoms are moving toward each other, and they must stop short of touching each other, it seems that we have some kind of a repulsion at a “near” distance. Second, the atomists thought everything happened of necessity. But why should two atoms heading for each other stop at distance x apart rather than distance x/2 or x/3, say? This seems arbitrary. And, third, what reason would Democritus have to say such a strange thing?

One solution is to simply say Philoponus was wrong about Democritus (cf. this interesting paper). One might, for instance, speculate that Democritus said something about how there will always be interstices of void when atoms meet, much like the triangle-like interstices when you tile the plane with circles in a hexagonal pattern, because their surfaces do not perfectly match like jigsaw pieces would, and Philoponus confused this with the claim that there is void between the atoms.

But I want to try something else. There is a famous problem—discussed by Sextus Empiricus, the Dalai Lama (!) and a number of people in between—about how impenetrable material objects can possibly touch. For if they touch, their surfaces are either separated by some distance or not. If their surfaces are separated, they don’t really touch. If their surfaces are not separated, then the surfaces are in the same place, and the objects have penetrated each other (albeit only infinitesimally) and hence they are not really impenetrable.

Suppose now that we think that Democritus was aware of this problem, and posited the following solution. Atoms occupy open regions of space, ones that do not include any of their boundaries or surfaces. For instance, atoms of fire, which are spherical, occupy the set of points in space whose distance to the center is strictly less than a radius r: the boundary, where the distance to the center is exactly r, is unoccupied. If two spherical atoms, each of radius r, come in contact, the distance between their centers is 2r, but the point exactly midway between their centers is not occupied by either atom. There is a single point’s worth of void there.
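The geometry here can be checked directly: two open disks of radius r with centers 2r apart share no point, the midpoint lies in neither, and yet points of the two disks get arbitrarily close, so the distance between the sets is zero. A small sketch (the radius and test points are just illustrative):

```python
import math

# Two open disks of radius r with centers 2r apart (the "touching" fire
# atoms, flattened to two dimensions). Claim: the midpoint between the
# centers lies in neither disk, yet the disks come arbitrarily close.

r = 1.0
c1, c2 = (0.0, 0.0), (2 * r, 0.0)

def in_open_disk(p, c):
    # open disk: boundary points (distance exactly r) are excluded
    return math.dist(p, c) < r

midpoint = (r, 0.0)
assert not in_open_disk(midpoint, c1)  # one point of void...
assert not in_open_disk(midpoint, c2)  # ...between the atoms

# Points just inside each disk along the line of centers get arbitrarily
# close, so the infimum of distances between the two disks is 0.
for eps in (1e-3, 1e-6, 1e-9):
    p, q = (r - eps, 0.0), (r + eps, 0.0)
    assert in_open_disk(p, c1) and in_open_disk(q, c2)
    assert math.dist(p, q) < 3 * eps  # gap shrinks to 0 with eps
```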

This immediately solves two of the three problems I gave for the void-between-atoms view. If I’m right, Democritus has very good reason to posit the view: it is needed to avoid the problem of interpenetration of surfaces. Furthermore, the arbitrariness problem disappears. Atoms heading for each other stop precisely when their boundaries would interpenetrate if they had boundaries in them. They stop at distance zero. There is no smaller distance they could stop at. The two spherical atoms stop moving toward each other when there is exactly one point of void between them: any more and they could keep on moving; any less is impossible.

We still have the problem of mysterious action at a distance requiring some force beyond mere contact. But Democritus might think—I don’t know if he would be right—that action at zero distance is less mysterious than action at positive distance, and on the suggestion I am offering the distance between objects that are touching is zero. There is a point’s (or a surface of points, if say we have two cubical atoms meeting with parallel faces) worth of distance, and that’s zero. Impenetrability at least explains why the atoms can’t go any further towards each other, even if it does not explain why they deflect each other’s motion as they do (which anyway, as we learn from Hume’s discussion of billiard balls, isn’t easy). So the remaining problem is reduced.

It wouldn’t surprise me at all if this was in the literature already.

### One-thinker colocationism

Colocationists about human beings think that in my chair are two colocated entities: a human person and a human animal. Both of them are made of the same stuff, both of them exhibit the same physical movements, etc.

The standard argument against colocationism is the two-thinkers argument. Higher animals, like chimpanzees and dogs, think. The brain of a human animal is more sophisticated than that of a chimpanzee or a dog, and hence human animals also have what it takes to think. Thus, they think. But human persons obviously think. So there are two thinkers in my chair, which seems absurd in itself, and it leads to some other difficulties besides.

If I were a colocationist, I think I would deny that any animals think. Instead, the same kind of duplication that happens in the human case happens for all the higher animals. In my chair there is a human animal and a person, and only the person thinks. In the doghouse, there is a dog and a “derson”. In the savanna, one may have a chimpanzee and a “chimperson”. The derson and the chimperson are not persons (the chimperson comes closer than the derson does), but all three think, while their colocated animals do not. We might even suppose that the person, the derson and chimperson are all members of some further kind, thinker.

Suppose one’s reason for accepting colocationism about humans is intuitions about the psychological components of personal identity: if one’s psychological states were transferred into a different head, one would go with the psychological states, while the animal would stay behind, so one isn’t an animal. Then I think one should say a similar thing about other higher animals. If we think that an interpersonal relationship should follow the psychological states rather than the body of the person, we should think similarly about a relationship with one’s pet: if one’s pet’s psychological states are transferred into a different body, our concerns should follow. If Rover is having a vivid dream of chasing a ball, and we transfer Rover’s psychological states into the body of another dog, Rover would continue the dream in that other body. I don’t believe this in the human case, and I don’t believe it in the dog case, but if I believed it in the human case, I’d believe it in the dog case.

What are the reasons for the standard colocationist’s holding that the human animal thinks? One may say that because both the animal and the person have the same brain activity, that’s a reason to say that either both or neither thinks. But the brain also has the same brain activity, and so if this is one’s reason for saying that the animal thinks, we now have three thinkers. And, if there are unrestricted fusions, the mereological sum of the person with their clothes also has the same brain activity, thereby generating a fourth thinker. That’s absurd. Thus thought isn’t just a function of hosting brain activity, but hosting brain activity in a certain kind of context. And why can’t this context be partly characterized by modal characteristics, so that although both the animal and the person have the same brain activity, they provide a different modally characterized context for the brain activity, in such a way that only one of the two thinks?

This one-thinker colocationism can be either naturalistic or dualistic. On the dualistic version, we might suppose that the nonphysical mental properties belong to only one member of the pair of associated beings. On the naturalistic version, we might suppose that what it is to have a mental property is to have a physical property in a host with appropriate modal properties—the ones the person, the derson and the chimperson all have.

I think there is one big reason why a colocationist may be suspicious of this view. Ethologists sometimes explain animal behavior in terms of what the animal knows, is planning, and more generally is thinking. These explanations are all incorrect on the view in question. But the one-thinker colocationist has two potential answers to this. The first is to weaken her view and allow animals to think, but not consciously. It is only the associated non-animal that has conscious states, that has qualia. But the conscious states need not enter into behavioral explanations. The second is to say that the scientists’ explanations, while incorrect, can easily be corrected by replacing mental properties with their neural correlates.

## Tuesday, September 10, 2024

### Reducing de re to de dicto modality

In my previous post, I gave an initial defense of a theory of qualitative haecceities in terms of qualitative origins: qualitative haecceities encapsulate complete qualitative descriptions of an entity’s initial state and causal history. I noted that among the advantages of the theory is that it can allow for a reduction of de re modality to de dicto modality, without “the mystery of non-qualitative haecceities”.

I want to expand on this, and why qualitative-origin haecceities are superior to non-qualitative haecceities here. A haecceitistic account of de re modality proceeds in something like the following vein. First, introduce the predicate H(Q,x) which says that Q is a haecceity of x. Then we reduce de re claims as follows:

• x is essentially F ↔︎ ∀Q(H(Q,x) → □∀y(Qy → Fy))

• x is accidentally F ↔︎ ∃Q(H(Q,x) ∧ ◊∃y(Qy ∧ ¬Fy)).

Granted, this involves de re modality for second-order variables like Q. But this de re modality is less problematic because we can suppose the Barcan and converse Barcan formulas to hold as axioms for the second-order quantifiers, and we can treat the second-order entities as necessary beings. De re modality is particularly difficult for contingent beings, so if we can reduce to a modal logic where only necessary beings are subject to de re modal claims, we have made genuine progress.

We will also need some axioms. Here are two that come to mind:

• ∀x∀Q(H(Q,x) → Qx) (things have their haecceities)

• ∀x∃Q(H(Q,x)) (everything has a haecceity).

Now, here is why I think that qualitative-origin haecceities are superior to non-qualitative haecceities. Given qualitative-origin haecceities, we can give an account of what H(Q,x) means without using de re modality. It just means that Qy attributes to y all of the actual qualitative causal origins of x, including x’s initial qualitative state. On the other hand, if we go for non-qualitative haecceities, we seem to have two options. We could take H(Q,x) to be primitive, which should always be a last resort, or we could try to define it in some way like:

• H(Q,x) ↔︎ (□(Ex → Qx) ∧ □∀y(Qy → y = x))

where Ex says that x exists (it might be a primitive in a non-free logic, or it might just be an abbreviation for ∃y(y=x)). But this definition uses de re modality with respect to x, so it is not satisfactory in this context, and I can’t think of any way to do it without de re modality with respect to potentially contingent individuals like x.

### Qualitative haecceities

A haecceity H of x is a property such that, necessarily, x exists if and only if x instantiates H.

Haecceities are normally thought of as non-qualitative properties. But one could also have qualitative haecceities. Of course, if an entity has a qualitative haecceity then it cannot be duplicated, so one can only suppose that everything has a qualitative haecceity provided one is willing to agree with Leibniz’s Identity of Indiscernibles.

I am personally drawn to the idea that everything does have a qualitative haecceity, and specifically that the qualitative haecceity of x encapsulates x’s qualitative causal history: a complete qualitative description of x’s explanatorily initial state and of all of its causal antecedents. One might call such properties “qualitative origins”. The view that every entity’s qualitative origin is a haecceity is a particularly strong version of the essentiality of origins: everything in an entity’s causal history is essential to it, and the causal history is sufficient for the entity’s existence.

I suppose the main reason not to accept this view is that it implies that two distinct objects couldn’t have the same qualitative origin, but it seems possible that God could create two objects ex nihilo with the same qualitative initial state Q. I am not so sure, though. How would God do that? “Let there be two things satisfying Q?” But this is too indeterminate (I disagree with van Inwagen’s idea that God can issue indeterminate decrees). If there can be two, there can be three, so God would have to specify which two things satisfying Q to create. But that would require a way of securing numerical reference to specific individuals prior to their creation, and that in turn would require haecceities, in this case non-qualitative haecceities. So the objection to the view requires non-qualitative haecceities.

But what started us on this objection was the thought that God could say “Let there be two things satisfying Q.” But if God could say that, why couldn’t he say “Let there be two things satisfying H”, where H is a non-qualitative haecceity? I suppose one will say that this is nonsense, because it is nonsense to suppose two things share a non-qualitative haecceity. But isn’t there a double-standard here? If it is nonsense to suppose two things share a non-qualitative haecceity, why can’t it be nonsense to suppose two things share a qualitative haecceity? It seems that “what does the explaining” of why two things can’t share a non-qualitative haecceity is the obscurity of non-qualitative haecceities, and that’s not really an explanation.

So perhaps we can just say: Having a distinct qualitative origin is what it is to be a thing, and it is impossible for two things to share one. This does indeed restrict the space of possible worlds. No exactly similar iron spheres or anything like that. That’s admittedly a little counterintuitive. But on the other hand, we have a lovely explanation of intra- and inter-world identity of objects, as well as a reduction of de re modality to de dicto, all without the mystery of non-qualitative haecceities. Plus we have Leibniz’s zero/one picture of the world on which all of reality is described by zeroes and ones: we put a zero beside an uninstantiated qualitative haecceity and a one beside an instantiated one, and then that tells us everything that exists. This is all very appealing to me.

## Friday, September 6, 2024

### Existence and causation

1. If x causes y, the causal relation between x and y is not posterior to the existence of y.

2. A relation between two entities is never prior to the existence of either entity.

So, the causal relation between x and y is neither prior nor posterior to the existence of y.

But the causal relation is, obviously, intimately tied to the existence of y. What is this tie? The best answer I know is that the causal relation is the existence of y or an aspect of that existence: for y to exist is at least in part for y to have been caused by x.

## Thursday, September 5, 2024

### Appropriateness of memory chains

A lot of discussion of memory theories of personal identity invokes science-fictional thought experiments, such as when memories are swapped between two brains.

One of the classic papers is Shoemaker’s “Persons and their Pasts”. There, Shoemaker accounts for personal identity across time, at least in the absence of branching, in terms of appropriate causal connections between apparent memories, not just any causal connections.

This matters. Imagine that Alice and Bob both get total memory wipes, so on the memory theory they cease to exist. But the person inhabiting the Alice body then reads Bob’s vividly written diary, which induces in her apparent memories of Bob’s life. I think most memory theorists will want to deny that after the reading of the diary, Bob comes back to life in Alice’s body. Not only would this be a highly counterintuitive consequence, but it would violate the plausible principle that whether someone is dead does not depend on future events, absent something like time travel. For suppose this sequence:

• Monday: Memory wipe

• Tuesday: Person inhabiting Alice’s body lives a confused life

• Wednesday: Person inhabiting Alice’s body reads Bob’s diary, comes to think she’s Bob, and gains all sorts of “correct” apparent memories of Bob’s life.

On Wednesday, the person inhabiting Alice’s body has memories of the person inhabiting Alice’s body on Tuesday, so by the memory theory they are the same person. But if on Wednesday, it is Bob who inhabits Alice’s body, then Bob also already existed on Tuesday by transitivity of identity. On the other hand, if Alice hadn’t read the diary on Wednesday, Bob would not have existed either on Wednesday or on Tuesday. So whether Bob is alive on Tuesday depends on future events, despite the absence of anything like time travel, which is absurd.

To get around diary cases, memory theorists really do need to have an appropriateness condition on the causal connections. Shoemaker’s own appropriateness condition appears inadequate: he thinks that what is needed is the kind of connection that makes a later apparent memory and an earlier apparent memory be both of the same experience. But Alice’s induced apparent memories are of the experiences that Bob so vividly described in his diary, which are the same experiences that Bob set down his memories of.

What the memory theorist should insist on are causal chains that are of the right kind for the transmission of memories, modulo any sameness-of-person condition. But now it is far from clear that the science-fictional scenarios in the literature satisfy this condition. Certainly, the scanning of memories in a brain and the imposition of the same patterns on a brain isn’t the normal way for memories to be causally transmitted over time. That it’s not the normal way does not mean that it’s not an appropriate way, but at least it’s far from clear that it is an appropriate way.

It would be interesting to consider what one should say about a memory theory on which the appropriate causal chain condition is sufficiently strict that the only way to transfer memories from one head to another would be by physically moving the brain. (Could one move a chunk of the brain instead? Maybe, but only if it turns out that memories can be localized. And even so it’s not clear whether coming along with a mere chunk of the brain is the appropriate way to transmit memories; the appropriate way may require full cerebral context.) Such a version of the memory theory would not do justice to “memory swapping” intuitions about the memories from one brain being transferred to another. And I take it that such memory swapping intuitions are important to the case for the memory theory.

Here’s another implausible consequence of this kind of memory theory. Suppose aliens are capturing people, and recording their brain data using a method that destroys the memories. However, being somewhat nice, the aliens then use the recording to restore the memories, and then return the person to earth. On the memory theory, anybody coming back to earth is a new individual. That doesn’t seem quite right.

A challenge for the memory theorist, thus, is to have an account of the appropriate causal chain condition that is sufficiently lax to allow for the memory swap intuitions that often motivate the theory but is strict enough to rule out diary cases. This is hard.

## Wednesday, September 4, 2024

### Restitution

Suppose Bob paid professional killer Alice to kill him on a day of her choice in the next month. Next day, Bob changes his mind, but has no way of contacting Alice. A week later, Bob sees Alice in the distance aiming a rifle at him. Is it permissible for him to shoot Alice in self-defense?

I take it (somewhat controversially) that killing a juridically innocent person is murder even if the victim consents. Thus, Alice is attempting murder, and normally it is permissible to shoot someone who is trying to murder one. But it seems rather dastardly for Bob to shoot Alice in this case.

On the other hand, though, if Bob hired Alice to kill Carl, and then repented, shooting Alice when Alice is trying to murder Carl does seem the right thing for Bob to do if there is no other way to save Carl’s life.

What is the exact moral difference between the two cases? In both cases, Alice is trying to commit a murder, and in both cases Bob bears a responsibility for this.

I think the difference has something to do with duties of restitution. When one has done something wrong, and then repented, one needs to do one’s best to “undo” the wrong, repaying the victims in a reasonable manner. But there is a gradation of priority, and in particular even if one is oneself among the victims (Socrates thinks the wrongdoer is the chief victim, since in doing wrong one damages one’s virtue), restitution to others takes priority. In both cases, Bob has harmed Alice by tempting her to commit murder. In the case where Alice was hired to murder Bob, restitution to Alice takes precedence over restitution to Bob, and refraining from killing Alice in self-defense seems a precisely appropriate form of restitution. In the case where Alice was hired to murder Carl, however, restitution to Carl takes precedence, and Bob owes it to Carl to shoot Alice.

In fact, I suspect that in the case where Bob hired Alice to kill Carl, if the only way to save Carl’s life is for Bob to leap into the line of fire and die protecting Carl, other things being equal that would be Bob’s duty. Normally to sacrifice one’s life to save another is supererogatory, but not so when the danger to the other comes from one’s own murderous intent.

The morality of restitution is difficult and complex.

### Independent invariant regular hyperreal probabilities: an existence result

A couple of years ago I showed how to construct hyperreal finitely additive probabilities on infinite sets that satisfy certain symmetry constraints and have the Bayesian regularity property that every possible outcome has non-zero probability. In this post, I want to show a result that allows one to construct such probabilities for an infinite sequence of independent random variables.

Suppose first we have a group G of symmetries acting on a space Ω. What I previously showed was that there is a hyperreal G-invariant finitely additive probability assignment on all the subsets of Ω that satisfies Bayesian regularity (i.e., P(A) > 0 for every non-empty A) if and only if the action of G on Ω is “locally finite”, i.e.:

• For any finitely generated subgroup H of G and any point x in Ω, the orbit Hx is finite.
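Local finiteness can be probed computationally for concrete generators. A hedged sketch, with two illustrative actions on the integers: the reversal n ↦ −n, all of whose orbits have size at most 2, versus the shift n ↦ n + 1, whose orbits are infinite:

```python
# Orbit computation for a finitely generated group action, to illustrate
# local finiteness. Generators are given as (map, inverse map) pairs.

def orbit(x, generators, max_size=1000):
    # closure of {x} under the generators and their inverses;
    # capped at max_size so infinite orbits are detectable
    seen = {x}
    frontier = [x]
    while frontier and len(seen) < max_size:
        y = frontier.pop()
        for g, g_inv in generators:
            for z in (g(y), g_inv(y)):
                if z not in seen:
                    seen.add(z)
                    frontier.append(z)
    return seen

reversal = [(lambda n: -n, lambda n: -n)]     # order 2: orbit of x is {x, -x}
shift = [(lambda n: n + 1, lambda n: n - 1)]  # orbit of 0 is all of Z

assert orbit(5, reversal) == {5, -5}  # finite orbit: consistent with local finiteness
assert len(orbit(0, shift)) >= 1000   # orbit blows past the cap: not locally finite
```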

Here is today’s main result (unless there is a mistake in the proof):

Theorem. For each i in an index set, suppose we have a group Gi acting on a space Ωi. Let Ω = ∏iΩi and G = ∏iGi, and consider G acting componentwise on Ω. Then the following are equivalent:

(a) there is a hyperreal G-invariant finitely additive probability assignment on all the subsets of Ω that satisfies Bayesian regularity and the independence condition that if A1, ..., An are subsets of Ω such that each Ai depends only on coordinates from Ji ⊆ I, with J1, ..., Jn pairwise disjoint, then P(A1 ∩ ... ∩ An) = P(A1)⋯P(An);

(b) there is a hyperreal G-invariant finitely additive probability assignment on all the subsets of Ω that satisfies Bayesian regularity;

(c) the action of G on Ω is locally finite.

Here, an event A depends only on coordinates from a set J just in case there is a subset A′ of ∏j ∈ J Ωj such that A = {ω ∈ Ω : ω|J ∈ A′} (I am thinking of the members of a product of sets as functions from the index set to the union of the Ωi). For brevity, I will omit “finitely additive” from now on.

The equivalence of (b) and (c) is from my old result, and the implication from (a) to (b) is trivial, so the only thing to be shown is that (c) implies (a).

Example: If each group Gi is finite and of size at most N for a fixed N, then the local finiteness condition is met. (Each such group can be embedded into the symmetric group SN, and any power of a finite group is locally finite, so a fortiori its action is locally finite.) In particular, if all of the groups Gi are the same and finite, the condition is met. An example like that is where we have an infinite sequence of coin tosses, and the symmetry on each coin toss is the reversal of the coin.
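For a finite number of coins, the witness for condition (a) can be written down explicitly: the uniform measure on {H,T}^n is invariant under every componentwise reversal, regular, and independent across disjoint coordinate sets. A sketch verifying this for n = 3 (the particular events chosen are just illustrative):

```python
from itertools import product
from fractions import Fraction

# Finite sanity check of the coin-toss example: Omega = {H,T}^n with the
# uniform measure, and the group (Z/2)^n acting by flipping chosen coins.
# The uniform measure is (i) invariant under every flip pattern,
# (ii) regular (every outcome has positive probability), and
# (iii) independent across events depending on disjoint coordinate sets.

n = 3
omega = list(product("HT", repeat=n))
P = {w: Fraction(1, 2 ** n) for w in omega}

def flip(w, pattern):
    # act by the group element `pattern` in (Z/2)^n
    return tuple(("T" if x == "H" else "H") if b else x for x, b in zip(w, pattern))

def prob(event):
    return sum(P[w] for w in event)

# (i) invariance under all 2^n group elements
for pattern in product([0, 1], repeat=n):
    for w in omega:
        assert P[flip(w, pattern)] == P[w]

# (ii) regularity
assert all(P[w] > 0 for w in omega)

# (iii) independence: A depends on coordinate 0, B on coordinates 1 and 2
A = {w for w in omega if w[0] == "H"}
B = {w for w in omega if w[1] == w[2]}
assert prob(A & B) == prob(A) * prob(B)
```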

Philosophical note: The above gives us the kind of symmetry we want for each individual independent experiment. But intuitively, if the experiments are identically distributed, we will want invariance with respect to a shuffling of the experiments. We are unlikely to get that, because the shuffling is unlikely to satisfy the local finiteness condition. For instance, for a doubly infinite sequence of coin tosses, we would want invariance with respect to shifting the sequence, and that doesn’t satisfy local finiteness.

Now, on to a sketch of the proof from (c) to (a). The proof uses a sequence of three reductions using an ultraproduct construction to cases exhibiting more and more finiteness.

First, note that without loss of generality, the index set I can be taken to be finite. For if it’s infinite, for any finite partition K of I, and any J ∈ K, let GJ = ∏i ∈ JGi and ΩJ = ∏i ∈ JΩi, with the obvious action of GJ on ΩJ. Then G is isomorphic to ∏J ∈ KGJ and Ω to ∏J ∈ KΩJ. Then if we have the result for finite index sets, we will get a regular hyperreal G-invariant probability on Ω that satisfies the independence condition in the special case where J1, ..., Jn are such that, for distinct i and j, at least one of Ji ∩ J and Jj ∩ J is empty for every J ∈ K. We then take an ultraproduct of these probability measures, indexed by the finite partitions K, with respect to an ultrafilter on the partially ordered set of finite partitions of I ordered by fineness, and then we get the independence condition in full generality.

Second, without loss of generality, the groups Gi can be taken as finitely generated. For suppose we can construct a regular probability that is invariant under H = ∏iHi where Hi is a finitely generated subgroup of Gi and satisfies the independence condition. Then we take an ultraproduct with respect to an ultrafilter on the partially ordered set of sequences of finitely generated groups (Hi)i ∈ I where Hi is a subgroup of Gi and where the set is ordered by componentwise inclusion.

Third, also without loss of generality, the sets Ωi can be taken to be finite, by replacing each Ωi with an orbit of some finite collection of elements under the action of the finitely generated Gi, since such orbits will be finite by local finiteness, and once again taking an appropriate ultraproduct with respect to an ultrafilter on the partially ordered set of sequences of finite subsets of Ωi closed under Gi ordered by componentwise inclusion. The Bayesian regularity condition will hold for the ultraproduct if it holds for each factor in the ultraproduct.

We have thus reduced everything to the case where I is finite and each Ωi is finite. The existence of the hyperreal G-invariant finitely additive regular probability measure is now trivial: just let P(A) = |A|/|Ω| for every A ⊆ Ω. (In fact, the measure is countably additive and not merely finitely additive, real and not merely hyperreal, and invariant not just under the action of G but under all permutations.)
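The final step can be checked directly on a toy example. Here is a small Python sketch (the concrete sets are my own illustrative choices): with Ω a finite product and P(A) = |A|/|Ω|, events depending on disjoint coordinate sets come out independent, and regularity holds because every nonempty event has positive probability.

```python
from itertools import product
from fractions import Fraction

# Finite sanity check of the last step: uniform measure on a product
# makes events on disjoint coordinates independent, and is regular.
Omega1, Omega2 = ['H', 'T'], [0, 1, 2]
Omega = list(product(Omega1, Omega2))

def P(A):
    """Uniform probability P(A) = |A| / |Omega|, as an exact fraction."""
    return Fraction(len(A), len(Omega))

A = [w for w in Omega if w[0] == 'H']   # depends only on coordinate 1
B = [w for w in Omega if w[1] >= 1]     # depends only on coordinate 2
AB = [w for w in A if w in B]

assert P(AB) == P(A) * P(B)             # independence condition
assert all(P([w]) > 0 for w in Omega)   # Bayesian regularity
```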

## Thursday, August 29, 2024

### Three invariance arguments

Suppose we have two infinite collections of items Ln and Rn indexed by integers n, and suppose we have a total preorder ≤ on all the items. Suppose further the following conditions hold for all n, m and k:

1. Ln > Ln − 1

2. Rn > Rn + 1

3. If Ln ≤ Rm, then Ln + k ≤ Rm + k.

Theorem: It follows that either Ln > Rm for all n and m, or Rn > Lm for all n and m.

(I prove this in a special case here, but the proof works for the general case.)
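Since the linked proof covers only a special case, here is one way the general argument can be reconstructed from (1)–(3) alone (this is my own sketch, with my own step labels):

```latex
% Suppose the first disjunct fails, so L_n \le R_m for some fixed n, m.
% Step 1: L_n \le R_{m+j} for every j \ge 0. Induction on j: given
%   L_n \le R_{m+j}, condition (3) with k = 1 gives L_{n+1} \le R_{m+j+1},
%   and (1) gives L_n < L_{n+1}, hence L_n \le R_{m+j+1}.
% Step 2: L_a \le R_b for all a, b. Shifting Step 1 by k = a - n via (3)
%   gives L_a \le R_{m+j+a-n} for all j \ge 0. If b - a \ge m - n, take
%   j = (b - a) - (m - n). If b - a < m - n, then by (2),
%   R_b \ge R_{a+m-n} \ge L_a.
% Step 3 (strictness): by (1) and Step 2, L_a < L_{a+1} \le R_b, so
%   R_b > L_a for all a, b, which is the second disjunct.
% If instead L_n \le R_m holds for no n and m, totality of the preorder
% gives L_n > R_m for all n and m, the first disjunct.
```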

Here are three interesting applications. First, suppose that an integer X is fairly chosen. Let Ln be the event that X ≤ n and let Rn be the event that X ≥ n. Let our preorder be comparison of the probabilities of events: A ≤ B means that A is no more likely than B. Intuitively, it is less likely that X is at most n − 1 than that it is at most n, so we have (1), and similar reasoning gives (2). Claim (3) says that the relationship between Ln and Rm is the same as that between Ln + k and Rm + k, and that seems right, too.

So all the conditions seem satisfied, but the conclusion of the Theorem seems wrong. It just doesn’t seem right to think that all the left-ward events (X being less than or equal to something) are more likely than all the right-ward events (X being bigger than or equal to something), nor that it should be the other way around.

I am inclined to conclude that countably infinite fair lotteries are impossible.

Second application. Suppose that for each integer n, a coin is tossed. Let Ln be the event that all the coins ..., n − 2, n − 1, n are heads. Let Rn be the event that all the coins n, n + 1, n + 2, ... are heads. Let ≤ compare probabilities in reverse: bigger is less likely. Again, the conditions (1)–(3) all sound right: it is less likely that ..., n − 2, n − 1, n are heads than that ..., n − 2, n − 1 are heads, and similarly for the right-ward events. But the conclusion of the theorem is clearly wrong here. The rightward all-heads events aren’t all more likely, nor all less likely, than the leftward ones.

I am inclined to conclude that all the Ln and Rn have equal probability (namely zero).
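The zero-probability gloss can be backed by a simple squeeze (my own arithmetic, not from the post): Ln lies inside the event that coins n − k + 1, ..., n are all heads, for every k, so P(Ln) sits below (1/2)^k for every k, and the only nonnegative real with that property is 0.

```python
from fractions import Fraction

# Upper bounds on P(L_n) from finite truncations: k specified fair coins
# all landing heads has probability (1/2)^k, and L_n is contained in each
# such truncation event.
def upper_bound(k):
    """P(k specified fair coins all land heads), exactly."""
    return Fraction(1, 2) ** k

bounds = [upper_bound(k) for k in range(1, 30)]
assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))  # strictly shrinking
assert upper_bound(20) < Fraction(1, 10 ** 6)              # below any tolerance
```

By the symmetry of the setup, the same squeeze applies to each Rn.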

Third application. Suppose that there is an infinite line of people, all morally on par, standing on numbered positions one meter apart, with their lives endangered in the same way. Let Ln be the action of saving the lives of the people at positions ..., n − 2, n − 1, n and let Rn be the action of saving the lives of the people at positions n, n + 1, n + 2, .... Let ≤ measure moral worseness: A ≤ B means that A is at least as bad as B. Then intuitively we have (1) and (2): it is worse to save fewer people. Moreover, (3) is a plausible symmetry condition: if saving one group of people beats saving another group of people, shifting both groups by the same amount doesn’t change that comparison. But again the conclusion of the theorem is clearly wrong.

I am less clear on what to say. I think I want to deny the totality of ≤, allowing for cases of incommensurability of actions. In particular, I suspect that Ln and Rm will always be incommensurable.

## Tuesday, August 27, 2024

### The need for a fine-grained deontology

It’s tempting to say that what justifies lethal self-defense is a wrongful lethal threat, perhaps with voluntariness and/or culpability added (see discussion and comments here).

But that’s not quite right. Suppose that a police officer, in addition to carrying her own gun, has her best friend’s gun with her, which she was taking in to a shop for minor cosmetic repairs. She promised her friend that she wouldn’t use his gun. Now, you threaten the officer, and she pulls her friend’s gun out, in blatant disregard of her promise, because she has always wanted to see what it feels like to threaten someone with this particular gun. The officer is now lethally threatening you, and doing so wrongfully, voluntarily and culpably, but that does not justify lethal self-defense.

One might note here that the officer is not wronging you by breaking her promise to her best friend. So perhaps what justifies lethal self-defense is a lethal threat that wrongs you. But that can’t be the solution. If you are the best friend in question—no doubt now the former best friend—then it is you who is being wronged by the breaking of the promise. But that wrong is irrelevant to your lethal self-defense. Furthermore, we want an account of self-defense to generalize to an account of defense of innocent victims.

One might say that lethal self-defense is permitted only against a gravely wrongful threat, and this promise-breaking is not gravely wrongful. But we can tweak the case to make it gravely wrongful. Maybe the police officer swore an oath before God and the community not to use this particular gun. That surely doesn’t justify your using lethal force to defend yourself against the officer’s threat.

Maybe what we want to say is that the kind of wrongful lethal threat that justifies lethal self-defense is one that wrongs by violating the right to life of the person threatened (rather than, say, being wrong by violating a promise). That sounds right to me. But what’s interesting about this is that it forces us to have a more fine-grained deontology. Not only do we need to talk about actions being wrong, but about actions being wrong against someone, and against someone in a particular way.

It’s interesting that considerations of self-defense require such a fine-grained deontology even if we do not think that in general every wrongful action wrongs someone.

### Is there infinity in our minds?

1. Every sentence of first order logic with the successor predicate s(x,y) (which says that x is the natural number succeeding y) is determinately true or determinately false.

We learn from Goedel that:

2. No finitely specifiable (in the recursive sense) set of axioms is sufficient to characterize the natural numbers in a way that determines the truth values of all of the above sentences.

This creates a serious problem. Given (2), how are our minds able to have a concept of natural number that is sufficiently determinate to make (1) true? It can’t be by us having some kind of a “definition” of natural numbers in terms of a finitely characterizable set of axioms.

Here is one interesting solution:

3. Our minds actually contain infinitely many axioms of natural numbers.

This solution is very difficult to reconcile with naturalism. If nature is analog, there will be a way of encoding infinitely many axioms in terms of the fine detail of our brain states (e.g., further and further decimal places of the distance between two neurons), but it is very implausible that anything mental depends on arbitrarily fine detail.

What could a non-naturalist say? Here is an Aristotelian option. There are infinitely many “axiomatic propositions” about the natural numbers such that it is partly constitutive of the human mind’s flourishing to affirm them.

While this option technically works, it is still weird: there will be norms concerning statements that are arbitrarily long, far beyond human lifetime.

I know of three other options:

4. Platonism with the natural numbers being somehow special in a way that other sets of objects satisfying the Peano axioms are not.

5. Magical theories of reference.

6. The causal finitist characterization of natural numbers in my Infinity book.

Of course, one might also deny (1). But then I will retreat from (1) to:

7. Every sentence of first order logic with the successor predicate s(x,y) and at most one unbounded quantifier is determinately true or determinately false.

I think (7) is hard to deny. If (7) is not true, there will be cases where there is no fact of the matter whether a sentence of logic follows from some bunch of axioms. (Cf. this post.) And Goedelian considerations are sufficient to show that one cannot recursively characterize the sentences with one unbounded quantifier.

## Monday, August 26, 2024

### Rooted and unrooted branching actualism

Branching actualist theories of modality say that metaphysical possibility is grounded in the powers of actual substances to bring about different states of affairs. There are two kinds of branching actualist theories: rooted and unrooted. On rooted theories, there are some necessarily existing items (e.g., God) whose causal powers “root” all the possibilities. On unrooted theories, we have an ungrounded infinite regress of earlier and earlier substances. In my dissertation, I defended a theistic rooted theory, but in the conclusion mentioned a weaker version on which there is no commitment to a root. At the time, I thought that not many would be attracted to an unrooted version, but when I gave talks on the material at various departments, I was surprised that some atheists found the unrooted theory attractive. And such theories have indeed been more recently defended by Oppy and Malpass.

I still think a rooted version is better. I’ve been thinking about this today, and found an interesting advantage: rooted theories can allow for a tighter connection between ideal conceivability and metaphysical possibility (or, equivalently, a prioricity and metaphysical necessity). Specifically, consider the following appealing pair of connection theses:

1. If a proposition is metaphysically possible (i.e., true in a metaphysically possible world), then it is ideally conceivable.

2. If a proposition is ideally conceivable, it is true in a world structurally isomorphic to a metaphysically possible one.

The first thesis is one that, I think, fits with both the rooted and unrooted theories of metaphysical possibility. I will focus on the second thesis. This is really a family of theses, depending on what we mean by “structurally isomorphic”. I am not quite sure what I mean by it—that’s a matter for further research. But let me sketch how I’m thinking about this. A world where dogs are reptiles is ideally conceivable—it is only a posteriori that we can know that dogs are mammals; it is not something that armchair biology can reveal. A world where dogs are reptiles is metaphysically impossible. But take a conceivable but impossible world w1 where “dogs are reptiles”—maybe it’s a world where the hair of the dogs is actually scales, and contrary to immediate appearances the dogs are cold-blooded, and so on. Now imagine a world w2 that’s structurally isomorphic to this impossible world—for instance, all the particles are in the same place, corresponding causal relations hold, etc.—and yet where the dogs of w1 aren’t really dogs, but a dog-like species of reptile. Properly spelled out, such a world will be possible, and denizens of that world would say “dogs are reptiles”.

Or for another example, a world w3 where Napoleon is my child is conceivable (it’s only a posteriori that we know this world not to be actual) but impossible. But it is possible to have a world w4 where I have a Napoleon-like child whom I name “Napoleon”. That world can be set up to be structurally isomorphic to w3.

Roughly, the idea is this. If something is conceivable but impossible, it will become possible if we change out the identities of individuals and natural kinds, while keeping all the “structure”. I don’t know what “structure” is exactly, but I think I won’t need more than an intuitive idea for my argument. Structure doesn’t care about the identities of kinds and individuals.

Now suppose that unrooted branching actualism is true. On such a theory, there is a backwards-infinite sequence of contingent events. Let D be a complete structural description of that sequence. Let pD be the proposition saying that some infinite initial segment of the world fits with D. According to unrooted branching actualism, pD is actually a necessary truth. But pD is clearly a posteriori, and hence its denial is ideally conceivable. Let w5 be an impossible world where pD is false. If (2) is true, then there will be a possible world w6 which is a structural isomorph of w5. But because pD is a structural description, if pD is false in a world, it is false in any structural isomorph of that world. Thus, pD has to be false in w6, which contradicts the assumption that pD is a necessary truth.

The rooted branching actualist doesn’t get (2) for free. I think the only way the rooted branching actualist can accept (2) is if they think that the existence and structure of the root entities is a priori. A theist can say that: God’s existence could be a priori (as Richard Gale once suggested, maybe there is an ontological argument for the existence of God, but we’re just not smart enough to see it).

### Assertion, lying, promises and social contract

Suppose you have inherited a heavily-automated house with a DIY voice control system made by an eccentric relative who programmed various functions to be commanded by a variety of political statements, all of which you disagree with.

Thus, to open a living room window you need to say: “A donkey would make a better president than X”, where X is someone who you know would be significantly better at the job than any donkey.

You have a guest at home, and the air is getting very stuffy, and you feel a little nauseous. You utter “A donkey would make a better president than X” just to open a window. Did you lie to your guest? You knowingly said something that you knew would be taken as an assertion by any reasonable person. But, let us suppose, you intended your words solely as a command to the house.

Normally, you’d clarify to your guest, ideally before issuing the voice command, that you’re not making an assertion. And if you failed to clarify, we would likely say that you lied. So simply intending the words to be a command to the house rather than an assertion to the guest may not be enough to make them be that.

Maybe we should say this:

1. You assert to Y providing (a) you utter words that you know would be taken to be an assertion to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you failed to put reasonable effort into finding a way to clarify that you are not asserting to Y.

The conjunctive condition in (a) is a bit surprising, but I think both conjuncts need to be there. Suppose that your guest has the unreasonable belief that people typically program their home automation systems to run on political statements and rarely make political statements except to operate such systems, and hence would not take your words as an assertion. Then you don’t need to issue a clarification, even though you would be deceiving a reasonable person. Similarly, you’re not lying if you tell your home automation system “Please open the window” and your paranoid guest has the unreasonable belief that this is code for some political statement that you know to be false.

One might initially think that (c) should say that you actually failed to issue the clarification. But I think that’s not quite right. Perhaps you are feeling faint and only have strength for one sentence. You tell the home automation system to open the window, and you just don’t have the strength to clarify to your guest that you’re not making a political statement. Then I think you haven’t lied or asserted—you made a reasonable effort by thinking about how you might clarify things, and finding no solution.

It’s interesting that condition (c) is rather morally loaded: it makes reference to reasonable effort.

Here is an interesting consequence of this loading. Similar things have to be said about promising as about asserting.

1. You promise to Y providing (a) you utter words that you know would be taken to be a promise to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you failed to put reasonable effort into finding a way to clarify that you are not promising to Y.

If this is right, then the practice of promising might be dependent on prior moral concepts, namely the concept of reasonable effort. And if that’s right, then contract-based theories of morality are viciously circular: we cannot explain what promises are without making reference to moral concepts.

## Tuesday, August 20, 2024

### Some finitisms

I’m thinking about the kinds of finitisms there are. Here are some:

1. Ontic finitism: There can only be finitely many entities.

2. Concrete finitism: There can only be finitely many concrete entities.

3. Generic finitism: There are only finitely many possible kinds of substances.

4. Weak species finitism: No world contains infinitely many substances of a single species.

5. Strong species finitism: No species contains infinitely many possible individuals.

6. Strong human finitism: There are only finitely many possible human individuals.

7. Causal finitism: Nothing can have infinitely many items in its causal history.

8. Explanatory finitism: Nothing can have infinitely many items in its explanatory history.

I think (1) and (2) are false, because eternalism is true and it is possible to have an infinite future with a new chicken coming into existence every day.

I’ve defended (7) at length. I would love to be able to defend (8), but for reasons discussed in that book, I fear it can’t be defended.

I don’t know any reason to believe (3) other than as an implication of (1) together with realism about species. I don’t know any reason to believe (4) other than as an implication of (2) or (5).

I can imagine a combination of metaphysical views on which (6) is defensible. For instance, it might turn out that humans are made out of stuff all of whose qualities are describable with discrete mathematics, and that there are limits on the discrete quantities (e.g., a minimum and a maximum mass of a human being) in such a way that for any finite segment of human life, there are only finitely many possibilities. If one adds to that the Principle of the Identity of Indiscernibles, in a transworld form, one will have an argument that there can only be finitely many humans. And I suppose some version of this view that applies to species more generally would give (5). That said, I doubt (6) is true.

## Sunday, August 18, 2024

### 317600 points in Eggsplode!

Here's my TwinGalaxies record run of Eggsplode! from last year. It's using NES emulation (fceumm, with my Power Pad support code) on the Raspberry Pi 3B+, and I am using two overlapped Wii DDR pads in place of the Power Pad controller (instructions here). The middle of the video is sped up 20X.

To be fair, there were no other competitors on TG for the emulation track of Eggsplode! (The score was higher than their best original hardware score, but I don't know if it's harder or easier to get this score on emulation rather than original hardware. The main difference is that I was using a larger, but perhaps better quality, pad.)

## Monday, August 5, 2024

### Natural reasoning vs. Bayesianism

A typical Bayesian update gets one closer to the truth in some respects and further from the truth in other respects. For instance, suppose that you toss a coin and get heads. That gets you much closer to the truth with respect to the hypothesis that you got heads. But it confirms the hypothesis that the coin is double-headed, and this likely takes you away from the truth. Moreover, it confirms the conjunctive hypothesis that you got heads and there are unicorns, which takes you away from the truth (assuming there are no unicorns; if there are unicorns, insert a “not” before “are”). Whether the Bayesian update is on the whole a plus or a minus depends on how important the various propositions are. If for some reason saving humanity hangs on you getting it right whether you got heads and there are unicorns, it may well be that the update is on the whole a harm.

(To see the point in the context of scoring rules, take a weighted Brier score which puts an astronomically higher weight on you got heads and there are unicorns than on all the other propositions taken together. As long as all the weights are positive, the scoring rule will be strictly proper.)

This means that there are logically possible update rules that do better than Bayesian update. (In my example, leaving the probability of the proposition you got heads and there are unicorns unchanged after learning that you got heads is superior, even though it results in inconsistent probabilities. By the domination theorem for strictly proper scoring rules, there is an even better method than that which results in consistent probabilities.)
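To make the scoring-rule point concrete, here is a toy calculation. The priors (P(heads) = 1/2, P(unicorns) = 0.1, independent) and the weight are my own illustrative choices, not the post’s; the point is only that, in the actual no-unicorn world, the non-Bayesian rule that leaves the conjunction’s probability alone scores better under a weighted Brier score.

```python
# Toy weighted-Brier comparison for the heads/unicorns example.
# Priors: P(heads) = 0.5, P(unicorns) = 0.1, independent, so P(conj) = 0.05.

def weighted_brier(probs, truths, weights):
    """Weighted Brier score: sum of w * (p - t)^2; lower is better."""
    return sum(w * (p - t) ** 2 for p, t, w in zip(probs, truths, weights))

# Propositions: ["you got heads", "you got heads and there are unicorns"].
truths = [1.0, 0.0]        # actual world: heads, no unicorns
weights = [1.0, 10 ** 6]   # astronomically heavier weight on the conjunction

bayes = [1.0, 0.1]   # Bayesian update on heads: P(conj) becomes P(unicorns)
alt = [1.0, 0.05]    # alternative rule: leave P(conj) at its prior

s_bayes = weighted_brier(bayes, truths, weights)
s_alt = weighted_brier(alt, truths, weights)
assert s_alt < s_bayes   # the non-Bayesian rule scores better in this world
```

Of course this only shows the alternative does better in the one world considered; Bayesian update remains optimal in prior expectation, which is exactly the tension the post goes on to discuss.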

Imagine that you are designing a robot that maneuvers intelligently around the world. You could make the robot a Bayesian. But you don’t have to. Depending on what the prioritizations among the propositions are, you might give the robot an update rule that’s superior to a Bayesian one. If you have no more information than you endow the robot with, you cannot expect to be able to design such an update rule. (Bayesian update has optimal expected accuracy given the pre-update information.) But if you know a lot more than you tell the robot—and of course you do—you might well be able to.

Imagine now that the robot is smart enough to engage in self-reflection. It then notices an odd thing: sometimes it feels itself pulled to make inferences that do not fit with Bayesian update. It starts to hypothesize that by nature it’s a bad reasoner. Perhaps it tries to change its programming to be more Bayesian. Would it be rational to do that? Or would it be rational for it to stick to its programming, which in fact is superior to Bayesian update? This is a difficult epistemology question.

The same could be true for humans. God and/or evolution could have designed us to update on evidence differently from Bayesian update, and this could be epistemically superior (God certainly has superior knowledge; evolution can “draw on” a myriad of information not available to individual humans). In such a case, switching from our “natural update rule” to Bayesian update would be epistemically harmful—it would take us further from the truth. Moreover, it would be literally unnatural. But what does rationality call on us to do? Does it tell us to do Bayesian update or to go with our special human rational nature?

My “natural law epistemology” says that sticking with what’s natural to us is the rational thing to do. We shouldn’t redesign our nature.

## Friday, August 2, 2024

### A sloppy fine-tuning argument

This argument is an intuition-pump. I don’t know if it can be made rigorous.

Start with some observations. Let Q0 be the nomic parameters of our universe—the exact values of all the constants in the laws of nature. To avoid serious problems with higher infinities and probability, I will make a technical assumption, which I will assume to be neutral between theism and atheism:

1. There are at most countably many universes.

Now:

1. For no non-zero countable cardinality n does theism have a bias against the hypothesis that there are at least n universes.

2. The parameters Q0 are life-permitting.

3. For any fixed countable cardinality n of universes, theism has a significant bias in favor of distributions of parameters that include more universes with life-permitting parameters.

4. If (2) and (3), then for any countable cardinality n of universes, theism has a significant bias in favor of at least one of them having the parameters given by Q0.

5. Thus, theism has a bias in favor of a universe with Q0.

6. Thus, the obtaining of Q0 is evidence for theism.

Some thoughts on the premises.

Regarding 1: Theism actually seems to have a bias in favor of the hypothesis that there are at least n universes. After all, theism has a bias in favor of the hypothesis that there is at least one universe: that there is a universe is quite surprising on atheism, but not so on theism, given that God is by definition perfectly good, and the good tends to spread. But the same reasoning suggests a bias on theism in favor of larger numbers of universes.

Regarding 2: Obvious.

Regarding 3: I think the main way to challenge (3) is to say that God would only care about having one universe with life-permitting parameters, and wouldn’t care about having a larger number. But I think this is implausible given that the good tends to spread. In fact, it seems likely that God would create only universes with life-permitting parameters, which would induce a strong bias in favor of such parameters.

Regarding 4: This is a very substantial assumption. It won’t hold for every set of exact parameters, because some sets of parameters might be life-permitting but would be likely to generate a universe that is really unfortunate in some regard. I don’t think the parameters Q0 behind our universe are like that, but this is a matter of dispute, and intersects with the problem of evil. Note also that it is important for the “significant” in (4) that even if n is (countably) infinite, the probability of getting exactly Q0 on atheism is low (in fact, infinitesimal).

The big technical difficulty, which makes me doubtful that the argument can be made rigorous, is the infinities involved.

## Thursday, August 1, 2024

### Double effect and causal remoteness

I think some people feel that more immediate effects count for more than more remote ones in moral choices, including in the context of the Principle of Double Effect. I used to think this is wrong, as long as the probabilities of effects are the same (typically more remote effects are more uncertain, but we can easily imagine cases where this is not so). But then I thought of two strange trolley cases.

In both cases, the trolley is heading for a track with Fluffy the cat asleep on it. The trolley can be redirected to a second track on which an innocent human is sleeping. Moreover, in a nearby hospital there are five people who will die if they do not receive a simple medical treatment. There is only one surgeon available.

But now we have two cases:

1. All five people love Fluffy very much and have specified that they consent to life-saving treatment if and only if Fluffy is alive. The surgeon refuses to perform surgery that the patients have not consented to.

2. The surgeon loves Fluffy and after hearing of the situation has informed you that they will perform surgery if and only if Fluffy is alive.

In both cases, I am rather uncomfortable with the idea of redirecting the trolley. But if we don’t take immediacy into account, both cases seem straightforward applications of Double Effect. The intention in both cases is to save five human lives by saving Fluffy, with the death of the person on the second track being an unintended side-effect. Proportionality between the good and the bad effects seems indisputable.

However, in both cases, redirecting the trolley leads much more directly to the death of the one person than to the saving of the five. The causal chain from redirection to life-saving in both cases is mediated by the surgeon’s choice to perform surgery. (In Case 1, the surgeon is reasonable and in Case 2, the surgeon is unreasonable.) So perhaps in considerations of proportionality, the more immediate but smaller bad effect (the death of the person on the side-track) outweighs the more remote but larger good effect (the saving of the five).

I can feel the pull of this. Here is a test. Suppose we make the death of the sixth innocent person equally indirect, by supposing instead that Rover the dog is on the second track, and is connected to someone’s survival in the way that Fluffy is connected to the survival of the five. In that case, it seems pretty plausible that you should redirect. (Though I am not completely certain, because I worry that in redirecting the trolley even in this case you are unduly cooperating with immoral people—the five people who care more about a cat than about their own human dignity, or the crazy surgeon.)

If this is right, how do we measure the remoteness of causal chains? Is it the number of independent free choices that have to be made, perhaps? That doesn’t seem quite right. Suppose that we have a trolley heading towards Alice who is tied to the track, and we can redirect the trolley towards Bob. Alice is a surgeon needed to save ten people. Bob is a surgeon needed to save one. However, Alice works in a hospital that has vastly more red tape, and hence for her to save the ten people, thirty times as many people need to sign off on the paperwork. But in both cases the probabilities of success (including the signing off on the paperwork) are the same. In this case, maybe we should ignore the red tape, and redirect?

So the measure of the remoteness of causal chains is going to have to be quite complex.

All this confirms my conviction that the proportionality condition in Double Effect is much more complex than initially seems.

## Monday, July 29, 2024

### Epiphenomenalism and epistemic changes wrought by experiences

Epiphenomenalists think that there are non-physical qualia that are causally inert: all causes are physical. The main reason epiphenomenalists have for supposing the existence of non-physical qualia is Jackson’s famous black-and-white Mary thought experiment. Mary is brought up in a black-and-white room, learns all physical truths about the world, and one day is shown a red tomato. It is alleged that before she is shown the red tomato, Mary doesn’t know what it’s like to see red, but of course once she’s been shown it, she knows it, like we all do. Since she didn’t know it before and yet knew all physical truths, it follows that the fact about what it’s like to see red goes beyond physical reality.

Now, let’s fill out the thought experiment. After she has been shown the tomato, Mary is put back in the black-and-white room, and never again has any experiences of red. It seems clear that at this point, Mary still knows what it’s like to see red, just as we know what it’s like to see red when we are not occurrently experiencing red.

So, what happened to Mary must have changed her in some way: she now knows what it’s like to see red, but didn’t know it before.

But given epiphenomenalism, this change is problematic. For it seems that it isn’t the quale of red that has changed Mary, since qualia are causally inert. It seems that Mary was changed by the physical correlate of the experience of red, rather than by the experience of red itself.

However, if this is right, then imagine Mary’s twin Martha, who has almost exactly the same things happen to her. Martha is brought up in an exactly similar black-and-white room, then shown a red tomato, and then brought back to the room. There is, however, one curious difference. During the short period of time during which Martha is presented the tomato, a supernatural being turns her into a redness-zombie, by preventing her from having any phenomenal experiences of red, without affecting any of her physical states. Since on epiphenomenalism, the experience of red is causally inert, this makes no difference to Martha’s future intrinsic states. In particular, Martha thinks she knows what it’s like to see red, just as Mary does.

But it seems that an epiphenomenalist who relies on the Mary thought experiment for the existence of qualia cannot afford to say that Martha knows what it’s like to see red. For Martha is a redness-zombie in the one crucial moment of her life when there is something red for her to see. If Martha can know what it’s like to see red, so can a permanent redness-zombie. And that doesn’t seem to fit with the intuitions of those who find the Mary thought experiment compelling.

The epiphenomenalist will thus say that after the tomato incident, Mary and Martha are exactly alike physically, and both think they know what it’s like to see red, but only Mary knows. Does Martha have a true opinion, but not knowledge? That can’t be right either, since someone who has true opinion but not knowledge can gain knowledge by being told by an epistemic authority that their opinion is true, and surely mere words won’t turn Martha into a knower of what it’s like to see red. The alleged difference between Martha and Mary is very puzzling.

There is a possible story the epiphenomenalist can tell. The epiphenomenalist could say that the physical correlates of her experience of red have caused Mary to have the ability to imagine red and have visual memories of red, and this ability makes Mary into a knower of what it’s like to see red. Since Martha had the same physical correlate, she also has the same imaginative and memory abilities, and hence knows what it’s like to see red. It may initially seem threatening to the epiphenomenalist that Martha has gained the knowledge of what it’s like to see red without an experience of red, but if she has gained this by becoming able to self-induce such experiences, this is perhaps not threatening.

But this story has one serious problem: it doesn’t work if both Mary and Martha are total color aphantasiacs, unable to visually imagine colors (either at all, or other than black and white). Could the epiphenomenalist say that a color aphantasiac doesn’t know what it’s like to see red when not having an occurrent experience of red? That could be claimed, but it seems implausible. (And it goes against The Shadow’s first-person testimony that they are an aphantasiac and yet know what it’s like to see green.)

Perhaps the epiphenomenalist’s best move would be to say that no one knows what it’s like to see red when not having an occurrent experience of red. But this does not seem intuitive. Moreover, the physicalist could then respond that the epiphenomenalist is confusing knowledge with occurrent experience.

All in all, I think it’s really hard for the epiphenomenalist to explain how Mary’s knowledge changed as a result of the tomato incident.

## Friday, July 26, 2024

### Perfect nomic correlations

Here is an interesting special case of Ockham’s Razor:

1. If we find that of nomic necessity whenever A occurs, so does B, then it is reasonable to assume that B is not distinct from A.

Here are three examples.

A. We learn from Newton and Einstein that inertial mass and gravitational mass always have the same value. So by (1) we should suppose them to be one property, rather than two properties that are nomically correlated.

B. In a Newtonian context consider the hypothesis of a gravitational field. Because the gravitational field values at any point are fully determined by the positions and masses of material objects, (1) tells us that it’s reasonable to assume the gravitational field isn’t some additional entity beyond the positions and masses of material objects.

C. Suppose that we find that mental states supervene on physical states: that there is no difference in mental states without a corresponding difference in physical states. Then by (1) it’s reasonable to expect that mental states are not distinct from physical states. (This is of course more controversial than (A) and (B).)

But now consider that in a deterministic theory, future states occur of nomic necessity given past states. Thus, (1) makes it reasonable to reduce future states to past states: What it is for the universe to be in state S7 at time t7 is nothing but the universe’s being in state S0 at time t0 and the pair (S7,t7) having such-and-such a mathematical relationship to the pair (S0,t0). Similarly, entities that don’t exist at the beginning of the universe can be reduced to the initial state of the universe—we are thus reducible. This consequence of (1) will seem rather absurd to many people.
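
A toy model (my own illustration, not from the post) may make the determinist reduction vivid: under a deterministic law, the state at any later time is a pure function of the initial state, so specifying S0 and the law already fixes S7.

```python
# Toy deterministic dynamics. The law `step` and the initial state are
# illustrative inventions; the point is only that the state at step n is
# a pure function of the initial state plus the law.
def step(s):
    # an arbitrary deterministic law chosen for illustration
    return (2 * s + 1) % 97

def state_at(s0, n):
    # iterate the law n times from the initial state s0
    s = s0
    for _ in range(n):
        s = step(s)
    return s

print(state_at(5, 7))  # the "S7" fixed by the initial "S0" = 5
```

On the reductionist reading the principle suggests, the later state is "nothing but" the initial state standing in this mathematical relationship to it.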

What should we do? One move is to embrace the consequence and conclude that indeed if we find good evidence for determinism, it will be reasonable to reduce the present to the past. I find this implausible.

Another move is to take the above argument as evidence against determinism.

Yet another move is to restrict (1) to cases where B occurs at the same time as A. This restriction is problematic in a relativistic context, since simultaneity is relative. Probably the better version of the move is to restrict (1) to cases where B occurs at the same time and place as A. Interestingly, this will undercut the gravitational field example (B). Moreover, because it is not clear that mental states have a location in space, this may undercut application (C) to mental states.

A final move is either to reject (1) or, more modestly, to claim that the evidence provided by nomic coincidence is pretty weak and defeasible on the basis of intuitions, such as our intuition that the present does not reduce to the past. In either case, application (C) is in question.

In any case, it is interesting to note that thinking about determinism gives us some reason to be suspicious of (1), and hence of the argument for mental reduction in (C).

## Thursday, July 25, 2024

### Aggression and self-defense

Let’s assume that lethal self-defense is permissible. Such self-defense requires an aggressor. There is a variety of concepts of an aggressor for purposes of self-defense, depending on what constitutes aggression. Here are a few accounts:

1. voluntarily, culpably and wrongfully threatening one’s life

2. voluntarily and wrongfully threatening one’s life

3. voluntarily threatening one’s life

4. threatening, voluntarily or involuntarily, one’s life.

(I am bracketing the question of less serious threats, where health but not life is threatened.)

I want to focus on accounts of self-defense on which aggression is defined by (4), namely where there is no mens rea requirement at all on the threat. This leads to a very broad doctrine of lethal self-defense. I want to argue that it is too broad.

First note that it is obvious that a criminal is not permitted to use lethal force against a police officer who is legitimately using lethal force against them. This implies that even (3) is too lax an account of aggression for purposes of self-defense, and a fortiori (4) is too lax.

Second, I will argue against (4) more directly. Imagine that Alice and Bob are locked in a room together for a week. Alice has just been infected with a disease which would do her no harm but would kill Bob. If Alice dies within the next day, the disease will not yet have become contagious, and Bob’s life will be saved. Otherwise, Bob will die. By (4), Bob can deem Alice an aggressor simply by her being alive—she threatens his life. So on an account of self-defense where (4) defines aggression, Bob is permitted to engage in lethal self-defense against Alice.

My intuitions say that this is clearly wrong. But not everyone will see it this way, so let me push on. If Bob is permitted to kill Alice because aggression doesn’t have a mens rea requirement, Alice is also permitted to lethally fight back against Bob, despite the fact that Bob is acting permissibly in trying to kill her. (After all, Alice was also acting permissibly in breathing, and thereby staying alive and threatening Bob.) So the result of a broad view of self-defense against any kind of threat, voluntary or not, is situations where two people will permissibly engage in a fight to the death.

Now, it is counterintuitive to suppose that there could be a case where two people are both acting justly in a fight to the death, apart from cases of non-moral error (say, each thinks the other is an attacking bear).

Furthermore, the result of such a situation is that basically the stronger of the two gets to kill the weaker and survive. The effect is not literally might makes right, but is practically the same. This is an implausibly discriminatory setup.

Third, consider a more symmetric variant. Two people are trapped in a spaceship that has only air enough for one to survive until rescue. If (4) is the right account of aggression, then simply by breathing each is an aggressor against the other. This is already a little implausible. Two people in a room breathing is not what one normally thinks of as aggression. Let me back this intuition up a little more. Suppose that there is only one person trapped in a spaceship, and there is not enough air to survive until rescue. If in the case of two people each was engaging in aggression against the other simply by virtue of removing oxygen from air to the point where the other would die, in the case of one person in the spaceship, that person is engaging in aggression against themselves by removing oxygen from air to the point where they themselves will die. But that’s clearly false.

I don’t know exactly how to define aggression for purposes of self-defense, but I am confident that (4) is much too broad. I think the police officer and criminal case shows that (3) is too broad as well. I feel pulled towards both (1) and (2), and I find it difficult to resolve the choice between them.

## Wednesday, July 24, 2024

### Knowing what it's like to see green

You know what it’s like to see green. Close your eyes. Do you still know what it’s like to see green?

I think so.

Maybe you got lucky and saw some green patches while closing your eyes. But I am not assuming that happened. Even if you saw no green patches, you still knew what it was like to see green.

Philosophers who are really taken with qualia sometimes say that:

1. Our knowledge of what it is like to see green could only be conferred on me by having an experience of green.

But if I have the knowledge of what it is like to see green when I am not experiencing green, then that can’t be right. For whatever state I am in when not experiencing green but knowing what it’s like to see green is a state that God could gift me with without ever giving me an experience of green. (One might worry that then it wouldn’t be knowledge, but something like true belief. But God could testify to the accuracy of my state, and that would make it knowledge.)

Perhaps, however, we can say this. When your eyes are closed and you see no green patches, you know what it’s like to see green in virtue of having the ability to visualize green, an ability that generates experiences of green. If so, we might weaken (1) to:

2. Our knowledge of what it is like to see green could only be conferred on me by having an experience of green or an ability to generate such an experience at will by visual imagination.

We still have a conceptual connection between knowledge of the qualia and experience of the qualia then.

But I think (2) is still questionable. First, it seems to equivocate on “knowledge”. Knowledge grounded in abilities seems to be knowledge-how, and that’s not what the advocates of qualia are talking about.

Second, suppose you’ve grown up never seeing green. And then God gives you an ability to generate an experience of green at will by visual imagination: if you “squint your imagination” thus-and-so, you will see a green patch. But you’ve never so squinted yet. It seems odd to say you know what it’s like to see green.

Third, our powers of visual imagination vary significantly. Surely I know what it’s like to see paradigm instances of green, say the green of a lawn in an area where water is plentiful. If I try to imagine a green patch, then at best my mind’s eye presents to me a patch of something dim, muddy and greenish, or maybe a lime green flash. I can’t imagine a paradigm instance of green. And yet surely, I know what it’s like to see paradigm instances of green. It seems implausible to think that when my eyes are closed my knowledge of what it’s like to see green (and even paradigm green) is grounded in my ability to visualize these dim non-paradigm instances.

It seems to me that what the qualia fanatic should say is that:

3. We only know what it’s like to see green when we are experiencing green.

But I think that weakens arguments from qualia against materialism because (3) is more than a little counterintuitive.

## Wednesday, July 17, 2024

### The explanation of our reliability is not physical

1. All facts completely reducible to physics are first-order facts.

2. All facts completely explained by first-order facts are themselves completely reducible to first-order facts.

3. Facts about our epistemic reliability are facts about truth.

4. Facts about truth are not completely reducible to first-order facts.

5. Therefore, no complete explanation of our epistemic reliability is completely reducible to physics.

This is a variant on Plantinga’s evolutionary argument against naturalism.

Premise (4) follows from Tarski’s Indefinability of Truth Theorem.
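
For reference, here is a standard schematic statement of the theorem (my formulation, not the post’s):

```latex
% Tarski's Indefinability of Truth (schematic form).
% There is no formula $\mathrm{Tr}(x)$ of the language of arithmetic
% such that, for every sentence $\varphi$ of that language,
\[
  \mathbb{N} \models \mathrm{Tr}(\ulcorner \varphi \urcorner)
  \leftrightarrow \varphi .
\]
% Truth for the language is thus not definable by any first-order
% formula of the language itself, which is what premise (4) invokes.
```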

The one premise in the argument that I am not confident of is (2). But it sounds right.

### First-order naturalism

In a lovely paper, Leon Porter shows that semantic naturalism is false. One way to put the argument is as follows:

1. If semantic naturalism is true, truth is a natural property.

2. All natural properties are first-order.

3. Truth is not a first-order property.

4. So, truth is not a natural property.

5. So, semantic naturalism is not true.

One can show (3) by using the liar paradox or just take it as the outcome of Tarski’s Indefinability of Truth Theorem.

Of course, naturalism entails semantic naturalism, so the argument refutes naturalism.

But it occurred to me today, in conversation with Bryan Reece, that perhaps one could have a weaker version of naturalism, which one might call first-order naturalism that holds that all first order truths are natural truths.

First-order naturalism escapes Porter’s argument. It’s a pretty limited naturalism, but it has some force. It implies, for instance, that Zeus does not exist. For if Zeus exists, then that Zeus exists is a first-order truth that is not natural.

First-order naturalism is an interestingly modest naturalist thesis. It is interesting to think about its limits. One that comes to mind is that it does not appear to include naturalism about minds, since it does not appear possible to characterize minds in first-order language (minds represent the world, etc., and talk of representation is at least prima facie not first-order).

### Truthteller's relative

The truthteller paradox is focused on the sentence:

1. This sentence is true.

There is no contradiction in taking (1) to be true, but neither is there a contradiction in taking (1) to be false. So where is the paradox? Well, one way to see the paradox is to note that there is no more reason to take (1) to be true than to be false or vice versa. Maybe there is a violation of the Principle of Sufficient Reason.

For technical reasons, I will take “This sentence” in sentences like (1) to be an abbreviation for a complex definite syntactic description that has the property that the only sentence that can satisfy the description is (1) itself. (We can get such a syntactic description using the diagonal lemma, or just a bit of cleverness.)
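
The "bit of cleverness" can be illustrated with a quine-style construction (my own sketch, not from the post): the sentence quotes a template, and the operation it describes, applied to that quoted template, yields the sentence itself.

```python
# A quine-style illustration of a purely syntactic self-reference:
# "the diagonalization of T" denotes the result of substituting a
# quotation of the template T into T itself.
def diagonalize(template):
    # substitute a quotation of the template into the template
    return template.format(template)

template = "the diagonalization of {0!r} is true"
sentence = diagonalize(template)

# The definite description inside the sentence ("the diagonalization of
# <quoted template>") denotes exactly the sentence it occurs in.
assert sentence == diagonalize(template)
print(sentence)
```

The description refers to the sentence without any demonstrative like "this", so its referent does not vary between contexts, which is what the argument below needs.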

But the fact that we don’t have a good reason to assign a specific truth value to (1) isn’t all there is to the paradox.

For consider this relative of the truthteller:

2. This sentence is true or 2+2=4.

There is no difficulty in assigning a truth value to (2) if it has one: it’s got to be true because 2+2=4. But nonetheless, (2) is not meaningful. When we try to unpack its meaning, that meaning keeps on fleeing. What does (2) say? Not just that 2+2=4. There is that first disjunct in it after all. That first disjunct depends for its truth value on (2) itself, in a viciously circular way.
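
One way to see the fleeing meaning (a toy sketch of my own, not the post’s): try to spell out the self-reference by replacing it with the sentence’s own text, and note that every expansion still contains a self-reference.

```python
# A toy unfolding of (2). "THIS" stands in for the self-referential
# definite description; each unfolding substitutes the whole sentence
# for it, and a self-reference always remains: the regress never
# bottoms out.
def unfold(sentence, times):
    for _ in range(times):
        sentence = sentence.replace("THIS", "(THIS is true or 2+2=4)", 1)
    return sentence

s = "THIS is true or 2+2=4"
for n in range(3):
    print(unfold(s, n))
```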

But after all shouldn’t we just say that (2) is true? I don’t think so. Here is one reason to be suspicious of the truth of (2). If (2) is true, so is:

3. This sentence is true or there are stars.

But it seems that if (3) is meaningful, then it should have a truth value in every possible world. But that would include the possible world where there are no stars. However, in that world, the sentence (3) functions like the truthteller sentence (1), to which we cannot assign a truth value. Thus (3) does not have a sensible truth value assignment in worlds where there are no stars. But it is not the sort of sentence whose meaningfulness should vary between possible worlds. (It is important for this argument that the description that “This sentence” is an abbreviation for is syntactic, so that its referent should not vary between worlds.)

It might be tempting to take (2) to be basically an infinite disjunction of instances of “2+2=4”. But that’s not right. For by that token (3) would be basically an infinite disjunction of “there are stars”. But then (3) would be false in worlds where there are no stars, and that’s not clear.

If I am right, the fact that (1) wouldn’t have a preferred truth value is a symptom rather than the disease itself. For (2) would have a preferred truth value, but we have seen that it is not meaningful. This pushes me to think that the problem with (1) is the same as with (2) and (3): the attempt to bootstrap meaning in an infinite regress.

I don’t know how to make all this precise. I am just stating intuitions.