Monday, September 30, 2024

Four philosophy / adjacent jobs at Baylor

We have four jobs in philosophy or closely adjacent areas at Baylor, with most of the deadlines coming in mid-October.

Friday, September 27, 2024

Special treatment of humans

Sometimes one talks of humans as having a higher value than other animals, and hence of its being appropriate to treat them better. While humans do have a higher value, I don't think this is what justifies favoring them. For to treat a being well is to bestow value on it. But it is far from clear why the fact that x has more value than y justifies bestowing additional value on x rather than on y. It seems at least as reasonable to spread value around, and preferentially treat y.

A confusing factor is that we do have reason to preferentially treat those who have more desert, and desert is a value. But the reason here is specific to desert, and does not in any obvious way generalize to other values.

I don't deny that we should treat humans preferentially over other animals, nor that humans are more valuable. But these two facts should not be confused. Perhaps we should treat humans preferentially over other animals because humans are persons and other animals are not--but this is a point about personhood rather than about value. I am inclined to think we shouldn't argue: humans are persons, personhood is very valuable, so we should treat humans preferentially. Rather, I suspect we should directly argue: humans are persons, so we should treat humans preferentially, skipping the value step. (To put it in Kantian terms, beings with dignity are valuable, but what makes them have dignity isn't just that they are valuable.)

Thursday, September 26, 2024

Laws and mathematical complexity

Over the last couple of days I have realized that the laws of physics are rather more complex than they seem. The lovely equations like G = 8πT and F = Gmm′/r² (with a different G in the two equations) seem to be an iceberg most of which is submerged in the icy waters of the foundations of mathematics, where the foundational concepts of real analysis and arithmetic are defined in terms of axioms.

This has a curious consequence. We might think that F = Gmm′/r² is much simpler than F = Gmm′/r² + Hmm′/r³ (where H is presumably very, very small). But if we fill out each proposal with the foundational mathematical structure, the percentage difference in complexity will be slight, as almost all of the contribution to complexity will be in such things as the construction of real numbers (say, via Dedekind cuts).
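
To see how small the difference gets, here is a toy computation (Python, with entirely made-up symbol counts; a real complexity measure would be far subtler):

```python
# Toy model: the "complexity" of a fully spelled-out law is the symbol
# count of its formula plus the symbol count of the shared mathematical
# foundations (arithmetic, real analysis) it presupposes.
# All numbers below are made up for illustration.
foundations = 50_000      # hypothetical cost of the foundational axioms
inverse_square = 30       # symbols in F = Gmm'/r^2
with_cubic_term = 55      # symbols in F = Gmm'/r^2 + Hmm'/r^3

total_a = foundations + inverse_square
total_b = foundations + with_cubic_term
print(100 * (total_b - total_a) / total_a)  # ~0.05: a tiny percentage difference
```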

Perhaps, though, the above line of thought is reason to think that real analysis and arithmetic are actually fundamental?

Moral conversion and Hume on freedom

According to Hume, for one to be responsible for an action, the action must flow from one’s character. But the actions that we praise people for the most include cases where someone breaks free from a corrupt character and changes for the good. These cases are not merely cases of slight responsibility, but are central cases of responsibility.

A Humean can, of course, say that there was some hidden determining cause in the convert’s character that triggered the action—perhaps some inconsistency in the corruption. But given determinism, why should we think that this hidden determining cause was indeed in the agent’s character, rather than being some cause outside of the character—some glitch in the brain, say? That the hidden determining cause was in the character is an empirical thesis for which we have very little evidence. So on the Humean view, we ought to be quite skeptical that the person who radically changes from bad to good is praiseworthy. We definitely should not take such cases to be among paradigm cases of praiseworthiness.

Wednesday, September 25, 2024

Humeanism and knowledge of fundamental laws

On a "Humean" Best System Account (BSA) of laws of nature, the fundamental laws are the axioms of the system of laws that best combines brevity and informativeness.

An interesting consequence of this is that, very likely, no amount of advances in physics will suffice to tell us what the fundamental laws are: significant advances in mathematics will also be needed. For suppose that after a lot of extra physics, propositions formulated in sentences p1, ..., pn are the physicist's best proposal for the fundamental laws. They are simple, informative, and fit the empirical data really well.

But we would still need some very serious mathematics. For we would need to know that there isn't a collection of sentences {q1, ..., qm} that is logically equivalent to {p1, ..., pn} but simpler. To do that would require us to have a method for solving the following type of mathematical problem:

  1. Given a sentence s in some formal language, find a simplest sentence s′ that is logically equivalent to s,

in the case of significantly non-trivial sentences s.

We might be able to solve (1) for some very simple sentences. Maybe there is no simpler way of saying that there is only one thing in existence than ∃x∀y(x=y). But it is very plausible that any serious proposal for the laws of physics will be much more complicated than that.
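
To get a feel for how hard problem (1) is even in miniature, here is a brute-force sketch for a propositional stand-in of the problem (the problem above concerns full first-order languages, so this is only an analogy; the enumeration blows up combinatorially with depth):

```python
# A propositional toy version of problem (1): find a shortest formula
# logically equivalent to a given one, by exhaustive enumeration.
from itertools import product

VARS = ("p", "q")

def truth_table(formula):
    """Evaluate a Python boolean expression over every assignment to VARS."""
    rows = product((False, True), repeat=len(VARS))
    return tuple(bool(eval(formula, {}, dict(zip(VARS, r)))) for r in rows)

def formulas(depth):
    """All formulas built from VARS with not/and/or, up to a given depth."""
    fs = set(VARS)
    for _ in range(depth):
        new = {f"(not {a})" for a in fs}
        new |= {f"({a} and {b})" for a in fs for b in fs}
        new |= {f"({a} or {b})" for a in fs for b in fs}
        fs |= new
    return fs

def simplest_equivalent(formula, depth=2):
    """A shortest enumerated formula with the same truth table as `formula`."""
    target = truth_table(formula)
    matches = (f for f in formulas(depth) if truth_table(f) == target)
    return min(matches, key=len)

print(simplest_equivalent("(p and (p or q))"))  # prints: p   (absorption)
```

Even with two variables and depth two, hundreds of candidates must be checked; for serious first-order sentences nothing remotely like this is feasible.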

Here is one reason to think that any credible proposal for fundamental laws is going to be pretty complicated. Past experience gives us good reason to think the proposal will involve arithmetical operations on real numbers. Thus, a full statement of the laws will require including a definition of the arithmetical operations as well as of the real numbers. To give a simplest formulation of such laws will, thus, require us to solve the problem of finding a simplest axiomatization of the portions of arithmetic and real analysis that are needed for the laws. While we have multiple axiomatizations, I doubt we are at all close to solving the problem of finding an optimal such axiomatization.

Perhaps the Humean could more modestly hope that we will at least know a part of the fundamental laws—namely the part that doesn’t include the mathematical axiomatization. But I suspect that even this is going to be very difficult, because different formulations of the laws are apt to need different portions of arithmetic and real analysis.

Tuesday, September 24, 2024

Chanceability

Say that a function P : F → [0,1] where F is a σ-algebra of subsets of Ω is chanceable provided that it is metaphysically possible to have a concrete (physical or not) stochastic process with a state space of the same cardinality as Ω and such that P coincides with the chances of that process under some isomorphism between Ω and the state space.

Here are some hypotheses one might consider:

  1. If P is chanceable, P is a finitely additive probability.

  2. If P is chanceable, P is a countably additive probability.

  3. If P is a finitely additive probability, P is chanceable.

  4. If P is a countably additive probability, P is chanceable.

  5. A product of chanceable countably additive probabilities is chanceable.

It would be nice if (2) and (4) were both true; or if (1) and (3) were.

I am inclined to think (5) is true, since if the Pi are chanceable, they could be implemented as chances of stochastic processes of causally isolated universes in a multiverse, and the result would have chances isomorphic to the product of the Pi.

I think (3) is true in the special case where Ω is finite.

I am skeptical of (4) (and hence of (3)). My skepticism comes from the following line of thought. Let Ω = ℵ1. Let F be the σ-algebra of countable and co-countable subsets (A is co-countable provided that Ω − A is countable). Define P(A) = 1 for the co-countable subsets and P(A) = 0 for the countable ones. This is a countably additive probability. Now let < be the ordinal ordering on ℵ1. Then if P is chanceable, it can be used to yield paradoxes very similar to those of a countably infinite fair lottery.
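
A quick check of the countable additivity claim, sketched in LaTeX:

```latex
Let $A_1, A_2, \dots \in F$ be pairwise disjoint with $\bigcup_n A_n \in F$.

Case 1: every $A_n$ is countable. Then $\bigcup_n A_n$ is countable, so
$P(\bigcup_n A_n) = 0 = \sum_n P(A_n)$.

Case 2: some $A_k$ is co-countable. Then no other $A_n$ is co-countable,
since by disjointness $A_n \subseteq \Omega \setminus A_k$, a countable set.
So $\bigcup_n A_n \supseteq A_k$ is co-countable, and
$P(\bigcup_n A_n) = 1 = P(A_k) = \sum_n P(A_n)$.
```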

For instance, consider a two-person game (this will require the product of P with itself to be chanceable, not just P; but I think (5) is true) where each player independently gets an ordinal according to a chancy isomorph of P, and the one who gets the larger ordinal wins a dollar. Then each player will think the probability that the other player has the bigger ordinal is 1, and will pay an arbitrarily high fee to swap ordinals with them!
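
The step driving the paradox is that initial segments of ℵ1 are countable; spelled out:

```latex
For any fixed $\alpha \in \aleph_1$, the initial segment
$\{\beta : \beta \le \alpha\}$ is countable, so its complement
$\{\beta : \beta > \alpha\}$ is co-countable and gets probability $1$.
Hence, whatever ordinal a player receives, the event that the other
player's independently chosen ordinal is larger has probability $1$.
```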

Culpability incompatibilism

Here are three plausible theses:

  1. You’re only culpable for a morally wrong choice determined by a relevantly abnormal mental state if you are culpable for that mental state.

  2. A mental state that determines a morally wrong choice is relevantly abnormal.

  3. You are not culpable for anything that is prior to the first choice you are culpable for.

Given these theses and some technical assumptions, it follows that:

  4. If determinism holds, you are not culpable for any morally wrong choice.

For suppose that you are blameworthy for some choice and determinism holds. Let t1 be the time of the first choice you are culpable for. Choices flow from mental states, and if determinism holds, these mental states determine the choice. So there is a time t0 at which you have a mental state that determines your culpable choice at t1. That mental state is abnormal by (2). Hence by (1) you must be culpable for it given that it determines a wrong choice. But this contradicts (3).

The intuition behind (1) is that abnormal mental states remove responsibility, unless either the abnormality is not relevant to the choice, or one has responsibility for the mental state. This is something even a compatibilist should find plausible.

Moreover, the responsibility for the mental state has to have the same valence as the responsibility for the choice: to be culpable for the choice, you must be culpable for the abnormal state; to be praiseworthy for the choice, you must be praiseworthy for the abnormal state. (Imagine this case. To save your friends from a horrific fate, you had to swallow a potion which had a side-effect of making you a kleptomaniac. You are then responsible for your kleptomania, but in a praiseworthy way: you sacrificed your sanity to save your friends. But now the thefts that come from the kleptomania you are not blameworthy for.)

Premise (2) is compatible with there being normal mental states that determine morally good choices, as well as with there being normal mental states that non-deterministically cause morally wrong choices (e.g., a desire for self-preservation can non-deterministically cause an act of cowardice).

What I find interesting about this argument is that it doesn’t have any obvious analogue for praiseworthiness. The conclusion of the argument is a thesis we might call culpability incompatibilism.

The combination of culpability incompatibilism with praiseworthiness compatibilism (the doctrine that praiseworthiness is compatible with determinism) has some attractiveness. Leibniz cites with approval St Augustine’s idea that the best kind of freedom is choosing the best action for the best reasons. Culpability incompatibilists who are praiseworthiness compatibilists can endorse that thesis. Moreover, they can endorse the idea that God is praiseworthy despite being logically incapable of doing wrong. Interestingly, though, praiseworthiness compatibilism makes it difficult to run free-will-based defenses against the problem of evil.

Friday, September 20, 2024

Uncertain guilt

Suppose there is a 75% chance that I did a specific wrong thing yesterday. (Perhaps I have suffered from some memory loss.) What should be my attitude? Guilt isn’t quite right. For guilt to be appropriate, I should believe that I’ve done a wrong thing, and 75% is not high enough for belief.

Guilt does come in degrees, but those degrees correlate with the degrees of culpability and wrongness, not with the epistemic confidence that I actually did the deed.

If I am not sure that I’ve done something, then a conditional apology makes sense: “Due to memory loss, I don’t know if I did A. But if I did, I am really sorry.” Maybe there is some conditional guilt feeling that goes along with conditional apology. But I am not sure there is such a feeling.

However, even if there is such a thing as a conditional guilt feeling, it presumably makes just as much sense when the probability of wrongdoing is low as when it is high. But it seems that whatever feeling one has due to a probability p of having done the wrong thing should co-vary proportionately with p.

Here’s an interesting possibility. There is no feeling that corresponds to a case like this. Feelings represent certain states of the world. The feeling of guilt represents the state of one’s having done a wrong. But just as we have no perceptual state that represents ultraviolet light, we have no perceptual state that represents probably having done a wrong. Other emotions do exist that have probabilistic purport. For instance, fear represents a chance of harm, and the degree (and maybe type: compare ordinary fear with dread!) of fear varies with the probability of harm.

While we can have highly complex cognitive attitudes, our feelings have more in the way of limitations. Just as there are some birds that have perceptual states that represent ultraviolet light, there could be beings that have a feeling that represents a probability that one did wrong, a kind of uncertain guilt. But perhaps we don’t have such a feeling.

We get around limitations in our perceptual skills by technological means and scientific inference. We cannot see ultraviolet, but we can infer its presence in other ways. Similarly, we may well have limitations in our emotional attitudes, and get around them in other ways, say cognitively.

It would be interesting to think what other kinds of feelings could make sense for beings like us but which we simply don’t have.

Tuesday, September 17, 2024

Fun with St. Petersburg

A generous patron makes an offer to you. You are to pick out a positive integer n and you will get 2ⁿ units of value. You have the ability to pick out any positive integer at no cost to yourself (maybe you can engage in a supertask and name long numbers really fast).

You think about naming a million, but then a billion would pay so much better, and a billion and two is four times better! You agonize. And then you have a brilliant idea. You will randomize by choosing positive integer n with probability 2⁻ⁿ (say, by flipping a coin until you get heads and counting how many flips that took). Your expected payoff will be

  • (1/2)(2) + (1/4)(4) + (1/8)(8) + ... = ∞.

That beats any specific number you could choose. So you go for it.

And, poof, you get 4. Regrets! You don’t want to stick to what the random choice gave you, as you’ll “only” get 2⁴ = 16 units of value. Disappointing! So you try again. You choose another positive integer. Now it is, mirabile dictu, a billion and two. But you think: 2¹⁰⁰⁰⁰⁰⁰⁰⁰⁰² may be a lot, but infinity is more, and if you randomly choose another number, your expected payoff is ∞. So you randomly choose again. And whatever you get, you are dissatisfied.
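
A quick simulation of the randomized strategy (a sketch; the coin-flip scheme is the one described above) shows the divergence in action: the running mean of the payoffs never settles down, since ever-rarer, ever-larger payoffs keep arriving.

```python
import random

def random_n():
    """Flip a fair coin until heads; return the number of flips.
    This picks n with probability 2**-n."""
    n = 1
    while random.random() < 0.5:  # tails: flip again
        n += 1
    return n

# Each term of the expectation is (2**-n)(2**n) = 1, so the series
# diverges; empirically the sample mean keeps growing with sample size.
for trials in (10**3, 10**4, 10**5, 10**6):
    mean = sum(2 ** random_n() for _ in range(trials)) / trials
    print(trials, round(mean, 1))
```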

Friday, September 13, 2024

Animal experimentation

We have an intuitive line as to where the suffering in an animal's life is so great compared to the goods in it that euthanasia is warranted: when we are able to do so, we euthanize animals whose suffering causes the value of their lives to fall below that line. On the other hand, the life of an animal that falls above this line is a life that is a benefit to the animal.

It seems to me that this intuitive line could be a helpful discernment criterion for animal experimentation. Animals that are used for experiments are often bred for that purpose. Thus, they wouldn't exist absent the practice of experimentation. It seems, hence, that experiments where the stresses on the animals do not push the animals' lives below this intuitive line are very easy to justify: the animals are benefitted by the practice, even if the experiments do impose some suffering on them. It seems plausible that such experiments could thus be justified by the intrinsic value of the knowledge gained for its own sake, or even by pedagogical benefits for students.

On the other hand, if the stresses in the animal’s life are such as to make their life fall below the line, then stronger justification is needed: the prospective benefits of the research need to be rather more significant.

I don’t know how good we are at discerning where that line goes. People with pets and farm animals do make hard decisions about this, though, so we seem to have some epistemic access to the line.

(I do think that it is permissible to be much more utilitarian about animal life than about human life. I certainly would not generalize what I say above to the case of humans.)

Thursday, September 12, 2024

Three-dimensionality

It seems surprising that space is three-dimensional. Why so few dimensions?

An anthropic answer seems implausible. Anthropic considerations might explain why we don’t have one or two dimensions—perhaps it’s hard to have life in one or two dimensions, Planiverse notwithstanding—but they don’t explain why we don’t have thirty or a billion dimensions.

A simplicity answer has some hope. Maybe it’s hard to have life in one and two dimensions, and three dimensions is the lowest dimensionality in which life is easy. But normally when we do engage in simplicity arguments, mere counting of things of the same sort doesn’t matter much. If you have a theory on which in 2050 there will be 9.0 billion people, your theory doesn’t count as simpler in the relevant sense than a theory on which there will be 9.6 billion then. So why should counting of dimensions matter?

There is something especially mathematically lovely about three dimensions. Three-dimensional rotations are neatly representable by quaternions (just as two-dimensional ones are by complex numbers). There is a cross-product in three dimensions (admittedly, as well as in seven!). Maybe the three-dimensionality of the world suggests that it was made by a mathematician or for mathematicians? (But a certain kind of mathematician might prefer an infinite-dimensional space?)
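
As a small illustration of the quaternion point, here is a sketch (hypothetical helper functions, not from any library) of rotating a vector by the Hamilton product q v q*:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    """Rotate 3-vector v about a unit axis by angle, via q v q*."""
    h = angle / 2
    q = (math.cos(h),) + tuple(math.sin(h) * c for c in axis)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + tuple(v)), q_conj)[1:]  # drop scalar part

# A quarter turn about the z-axis sends (1,0,0) to (approximately) (0,1,0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```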

Wednesday, September 11, 2024

Independence conglomerability

Conglomerability says that if you have an event E and a partition {Ri : i ∈ I} of the probability space, then if P(E∣Ri) ≥ λ for all i, we likewise have P(E) ≥ λ. Absence of conglomerability leads to a variety of paradoxes, but in various infinitary contexts, it is necessary to abandon conglomerability.

I want to consider a variant on conglomerability, which I will call independence conglomerability. Suppose we have a collection of events {Ei : i ∈ I}, and suppose that J is a randomly chosen member of I, with J independent of all the Ei taken together. Independence conglomerability requires that if P(Ei) ≥ λ for all i, then P(EJ) ≥ λ, where EJ is the event defined by: ω ∈ EJ if and only if ω ∈ EJ(ω) (i.e., ω lies in the event indexed by the value of J at ω), for ω in our underlying probability space Ω.

Independence conglomerability follows from conglomerability if we suppose that P(EJ ∣ J=i) = P(Ei) for all i.
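
Spelled out (assuming one can conditionalize on the events J = i):

```latex
Suppose $P(E_i) \ge \lambda$ for all $i$ and $P(E_J \mid J = i) = P(E_i)$.
Then $P(E_J \mid J = i) \ge \lambda$ for every $i$, so applying
conglomerability to the event $E_J$ and the partition
$\{\,\{J = i\} : i \in I\,\}$ yields $P(E_J) \ge \lambda$.
```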

However, note that independence conglomerability differs from conglomerability in two ways. First, it can make sense to talk of independence conglomerability even in cases where one cannot meaningfully conditionalize on J = i (e.g., because P(J=i) = 0 and we don’t have a way of conditionalizing on zero probability events). Second, and this seems like it could be significant, independence conglomerability seems a little more intuitive. We have a bunch of events, each of which has probability at least λ. We independently randomly choose one of these events. We should expect the probability that our randomly chosen event happens to be at least λ.

Imagine that independence conglomerability fails. Then you can have the following scenario. For each i ∈ I there is a game available for you to play, where you win provided that Ei happens. You get to choose which game to play. Suppose that for each game, the probability of victory is at most λ. But, paradoxically, there is a random way to choose which game to play, independent of the events underlying all the games, where your probability of victory is strictly bigger than λ. (Here I reversed the inequalities defining independence conglomerability, by replacing events with their complements as needed.) Thus you can do better by randomly choosing which game to play than by choosing a specific game to play.

Example: I am going to uniformly randomly choose a positive integer (using a countably infinite fair lottery, assuming for the sake of argument such is possible). For each positive integer n, you have a game available to you: the game is one you win if n is no less than the number I am going to pick. You despair: there is no way for you to have any chance to win, because whatever positive integer n you choose, I am infinitely more likely to get a number bigger than n than a number less than or equal to n, so the chance of you winning is zero or infinitesimal regardless which game you pick. But then you have a brilliant idea. If instead of you choosing a specific number, you independently uniformly choose a positive integer n, the probability of you winning will be at least 1/2 by symmetry. Thus a situation with two independent countably infinite fair lotteries and a symmetry constraint that probabilities don’t change when you swap the lotteries with each other violates independence conglomerability.

Is this violation somehow more problematic than the much discussed violations of plain conglomerability that happen with countably infinite fair lotteries? I don’t know, but maybe it is. There is something particularly odd about the idea that you can noticeably increase your chance of winning by randomly choosing which game to play.

Comparing axiologies

Are there ways in which it would be better if axiology were different? Here’s a suggestion that comes to mind:

  1. It would be better if cowardice, sloth, dishonesty, ignorance, suffering and all the other things that are actually intrinsic evils were instead great intrinsic goods.

For surely it would be better for there to be more goods!

On the other hand, one might have this optimistic thought:

  2. The actually true axiology is better than any actually false axiology.

(Theists are particularly likely to think this, since they will likely think that the true axiology is grounded in the nature of a perfect being.)

We have an evident tension between (1) and (2).

What’s going on?

One move is to say that it makes no sense to discuss the value of impossible scenarios. I am inclined to think that this isn’t quite correct. One might think it would be really good if the first eight thousand binary digits of π encoded the true moral code in English using ASCII coding, even though this is impossible (I assume). Likewise, it is impossible for a human to know all of mathematics, but it would be good to do so.

The solution I would go for is that axiology needs to be kept fixed in value comparisons. Imagine that I am living a blessed life of constant painless joy, and dissatisfied with that I find myself wishing for the scenario where joyless pain is even better than painless joy and I live a life of joyless pain. If one need not keep axiology fixed in value comparisons, that wish makes perfect sense, but I think it doesn’t—unlike the wish about π or the knowledge of mathematics.

A way to be calmer

For years I would find myself periodically annoyed by shoelaces. Several times a day, I would have to engage in finicky fine-motor activity to tie my shoes. This made me a little angry, because I suspected that the reason why few adult shoes have alternate closures has to do with fashion rather than with any technological benefits of shoelaces (note, after all, that shoelaces come undone, as well as get caught in bike gears, so it's not all a matter of laziness), and I've always resented social pressures of fashion imposing burdens on us. 

I've thought about this for a long time, and then recently finally decided to do something about it. I pulled out some cord locks (in the photo are some heavy duty cord locks that I salvaged from something years ago), pulled my shoelaces through them, and after a day or two of experimental use, I cut the shoelaces down, and knotted them above the cord locks. No more regular annoyance and anger at society's fashion choices! 

To fasten, I just grab the cord lock with one hand, and pull the permanent knot with the other. To unfasten, I just grab the cord lock and pull it to the knot. At any time, I can easily adjust tension in either direction without untying. It doesn't come loose. It doesn't get stuck in bike gears. It's not quite as instantaneous as I had imagined, but it is pretty fast.

It has some minor downsides. Eventually a cord lock will break down--though I don't know if this will be sooner than the shoe. At the length of lace I settled for (a little shorter than in this photo), the shoes don't loosen quite as far for removal as I might ideally prefer. And one would probably need to cut the laces to launder the shoes, but I don't launder my shoes.

The void between the atoms

Philoponus says:

When Democritus said that the atoms are in contact with each other, he did not mean contact, strictly speaking, which occurs when the surfaces of the things in contact fit on [epharmazousōn] one another, but the condition in which the atoms are near one another and not far apart is what he called contact. For no matter what, they are separated by void. (67A7)

This odd view would lead to three difficulties. First, the loveliness of the Democritean system is that everything is explained by atoms pushing each other around, without any mysterious action at a distance, without any weird forces like the love and strife posited by other Greek thinkers. But if two atoms are moving toward each other, and they must stop short of touching each other, it seems that we have some kind of a repulsion at a “near” distance. Second, the atomists thought everything happened of necessity. But why should two atoms heading for each other stop at distance x apart rather than distance x/2 or x/3, say? This seems arbitrary. And, third, what reason would Democritus have to say such a strange thing?

One solution is to simply say Philoponus was wrong about Democritus (cf. this interesting paper). One might, for instance, speculate that Democritus said something about how there will always be interstices of void when atoms meet, much like the triangle-like interstices when you tile the plane with circles in a hexagonal pattern, because their surfaces do not perfectly match like jigsaw pieces would, and Philoponus confused this with the claim that there is void between the atoms.

But I want to try something else. There is a famous problem—discussed by Sextus Empiricus, the Dalai Lama (!) and a number of people in between—about how impenetrable material objects can possibly touch. For if they touch, their surfaces are either separated by some distance or not. If their surfaces are separated, they don’t really touch. If their surfaces are not separated, then the surfaces are in the same place, and the objects have penetrated each other (albeit only infinitesimally) and hence they are not really impenetrable.

Suppose now that we think that Democritus was aware of this problem, and posited the following solution. Atoms occupy open regions of space, ones that do not include any of their boundaries or surfaces. For instance, atoms of fire, which are spherical, occupy the set of points in space whose distance to the center is strictly less than a radius r: the boundary, where the distance to the center is exactly r, is unoccupied. If two spherical atoms, each of radius r, come in contact, the distance between their centers is 2r, but the point exactly midway between their centers is not occupied by either atom. There is a single point’s worth of void there.
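
In modern notation, the geometry just described checks out:

```latex
Model the atoms as open balls $B_1 = \{x : |x| < r\}$ and
$B_2 = \{x : |x - c| < r\}$ with $|c| = 2r$. They are disjoint: if
$x \in B_1 \cap B_2$ then $2r = |c| \le |x| + |c - x| < r + r$, a
contradiction. Yet the midpoint $m = c/2$ lies in neither ball, since
$|m| = r \not< r$: on the segment between the centers there is exactly
one point of void.
```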

This immediately solves two of the three problems I gave for the void-between-atoms view. If I’m right, Democritus has very good reason to posit the view: it is needed to avoid the problem of interpenetration of surfaces. Furthermore, the arbitrariness problem disappears. Atoms heading for each other stop precisely when their boundaries would interpenetrate if they had boundaries in them. They stop at distance zero. There is no smaller distance they could stop at. The two spherical atoms stop moving toward each other when there is exactly one point of void between them: any more and they could keep on moving; any less is impossible.

We still have the problem of mysterious action at a distance requiring some force beyond mere contact. But Democritus might think—I don’t know if he would be right—that action at zero distance is less mysterious than action at positive distance, and on the suggestion I am offering the distance between objects that are touching is zero. There is a point’s (or a surface of points, if say we have two cubical atoms meeting with parallel faces) worth of distance, and that’s zero. Impenetrability at least explains why the atoms can’t go any further towards each other, even if it does not explain why they deflect each other’s motion as they do (which anyway, as we learn from Hume’s discussion of billiard balls, isn’t easy). So the remaining problem is reduced.

It wouldn’t surprise me at all if this was in the literature already.

One-thinker colocationism

Colocationists about human beings think that in my chair are two colocated entities: a human person and a human animal. Both of them are made of the same stuff, both of them exhibit the same physical movements, etc.

The standard argument against colocationism is the two thinkers argument. Higher animals, like chimpanzees and dogs, think. The brain of a human animal is more sophisticated than that of a chimpanzee or a dog, and hence human animals also have what it takes to think. Thus, they think. But human persons obviously think. So there are two thinkers in my chair, which is absurd in itself, and leads to some other difficulties besides.

If I were a colocationist, I think I would deny that any animals think. Instead, the same kind of duplication that happens in the human case happens for all the higher animals. In my chair there is a human animal and a person, and only the person thinks. In the doghouse, there is a dog and a “derson”. In the savanna, one may have a chimpanzee and a “chimperson”. The derson and the chimperson are not persons (the chimperson comes closer than the derson does), but all three think, while their colocated animals do not. We might even suppose that the person, the derson and chimperson are all members of some further kind, thinker.

Suppose one’s reason for accepting colocationism about humans is intuitions about the psychological components of personal identity: if one’s psychological states were transferred into a different head, one would go with the psychological states, while the animal would stay behind, so one isn’t an animal. Then I think one should say a similar thing about other higher animals. If we think that an interpersonal relationship should follow the psychological states rather than the body of the person, we should think similarly about a relationship with one’s pet: if one’s pet’s psychological states are transferred into a different body, our concerns should follow. If Rover is having a vivid dream of chasing a ball, and we transfer Rover’s psychological states into the body of another dog, Rover would continue the dream in that other body. I don’t believe this in the human case, and I don’t believe it in the dog case, but if I believed this in the human case, I’d believe it in the dog case.

What are the reasons for the standard colocationist’s holding that the human animal thinks? One may say that because both the animal and the person have the same brain activity, that’s a reason to say that either both or neither thinks. But the brain also has the same brain activity, and so if this is one’s reason for saying that the animal thinks, we now have three thinkers. And, if there are unrestricted fusions, the mereological sum of the person with their clothes also has the same brain activity, thereby generating a fourth thinker. That’s absurd. Thus thought isn’t just a function of hosting brain activity, but hosting brain activity in a certain kind of context. And why can’t this context be partly characterized by modal characteristics, so that although both the animal and the person have the same brain activity, they provide a different modally characterized context for the brain activity, in such a way that only one of the two thinks?

This one-thinker colocationism can be either naturalistic or dualistic. On the dualistic version, we might suppose that the nonphysical mental properties belong to only one member of the pair of associated beings. On the naturalistic version, we might suppose that what it is to have a mental property is to have a physical property in a host with appropriate modal properties—the ones the person, the derson and the chimperson all have.

I think there is one big reason why a colocationist may be suspicious of this view. Ethologists sometimes explain animal behavior in terms of what the animal knows, is planning, and more generally is thinking. These explanations are all incorrect on the view in question. But the one-thinker co-locationist has two potential answers to this. The first is to weaken her view and allow animals to think, but not consciously. It is only the associated non-animal that has conscious states, that has qualia. But the conscious states need not enter into behavioral explanations. The second is to say that the scientists’ explanations, while incorrect, can easily be corrected by replacing mental properties with their neural correlates.

Tuesday, September 10, 2024

Reducing de re to de dicto modality

In my previous post, I gave an initial defense of a theory of qualitative haecceities in terms of qualitative origins: qualitative haecceities encapsulate complete qualitative descriptions of an entity’s initial state and causal history. I noted that among the advantages of the theory is that it can allow for a reduction of de re modality to de dicto modality, without “the mystery of non-qualitative haecceities”.

I want to expand on this, and why qualitative-origin haecceities are superior to non-qualitative haecceities here. A haecceitistic account of de re modality proceeds in something like the following vein. First, introduce the predicate H(Q,x) which says that Q is a haecceity of x. Then we reduce de re claims as follows:

  • x is essentially F ↔ ∀Q(H(Q,x) → □∀y(Qy → Fy))

  • x is accidentally F ↔ ∃Q(H(Q,x) ∧ ◊∃y(Qy ∧ ¬Fy)).

Granted, this involves de re modality for second-order variables like Q. But this de re modality is less problematic because we can suppose the Barcan and converse Barcan formulas to hold as axioms for the second-order quantifiers, and we can treat the second-order entities as necessary beings. De re modality is particularly difficult for contingent beings, so if we can reduce to a modal logic where only necessary beings are subject to de re modal claims, we have made genuine progress.

We will also need some axioms. Here are two that come to mind:

  • ∀x∀Q(H(Q,x) → Qx) (things have their haecceities)

  • ∀x∃Q(H(Q,x)) (everything has a haecceity).

Now, here is why I think that qualitative-origin haecceities are superior to non-qualitative haecceities. Given qualitative-origin haecceities, we can give an account of what H(Q,x) means without using de re modality. It just means that Qy attributes to y all of the actual qualitative causal origins of x, including x’s initial qualitative state. On the other hand, if we go for non-qualitative haecceities, we seem to have two options. We could take H(Q,x) to be primitive, which always should be a last resort, or we could try to define in some way like:

  • H(Q,x) ↔ (□(Ex → Qx) ∧ □∀y(Qy → y=x))

where Ex says that x exists (it might be a primitive in a non-free logic, or it might just be an abbreviation for ∃y(y=x)). But this definition uses de re modality with respect to x, so it is not satisfactory in this context, and I can’t think of any way to do it without de re modality with respect to potentially contingent individuals like x.

Qualitative haecceities

A haecceity H of an entity x is a property such that, necessarily, x exists if and only if x instantiates H.

Haecceities are normally thought of as non-qualitative properties. But one could also have qualitative haecceities. Of course, if an entity has a qualitative haecceity then it cannot be duplicated, so one can only suppose that everything has a qualitative haecceity provided one is willing to agree with Leibniz’s Identity of Indiscernibles.

I am personally drawn to the idea that everything does have a qualitative haecceity, and specifically that the qualitative haecceity of x encapsulates x’s qualitative causal history: a complete qualitative description of x’s explanatorily initial state and of all of its causal antecedents. One might call such properties “qualitative origins”. The view that every entity’s qualitative origin is a haecceity is a particularly strong version of the essentiality of origins: everything in an entity’s causal history is essential to it, and the causal history is sufficient for the entity’s existence.

I suppose the main reason not to accept this view is that it implies that two distinct objects couldn’t have the same qualitative origin, but it seems possible that God could create two objects ex nihilo with the same qualitative initial state Q. I am not so sure, though. How would God do that? “Let there be two things satisfying Q?” But this is too indeterminate (I disagree with van Inwagen’s idea that God can issue indeterminate decrees). If there can be two, there can be three, so God would have to specify which two things satisfying Q to create. But that would require a way of securing numerical reference to specific individuals prior to their creation, and that in turn would require haecceities, in this case non-qualitative haecceities. So the objection to the view requires non-qualitative haecceities.

But what started us on this objection was the thought that God could say “Let there be two things satisfying Q.” But if God could say that, why couldn’t he say “Let there be two things satisfying H”, where H is a non-qualitative haecceity? I suppose one will say that this is nonsense, because it is nonsense to suppose two things share a non-qualitative haecceity. But isn’t there a double-standard here? If it is nonsense to suppose two things share a non-qualitative haecceity, why can’t it be nonsense to suppose two things share a qualitative haecceity? It seems that “what does the explaining” of why two things can’t share a non-qualitative haecceity is the obscurity of non-qualitative haecceities, and that’s not really an explanation.

So perhaps we can just say: Having a distinct qualitative origin is what it is to be a thing, and it is impossible for two things to share one. This does indeed restrict the space of possible worlds. No exactly similar iron spheres or anything like that. That’s admittedly a little counterintuitive. But on the other hand, we have a lovely explanation of intra- and inter-world identity of objects, as well as a reduction of de re modality to de dicto, all without the mystery of non-qualitative haecceities. Plus we have Leibniz’s zero/one picture of the world on which all of reality is described by zeroes and ones: we put a zero beside an uninstantiated qualitative haecceity and a one beside an instantiated one, and then that tells us everything that exists. This is all very appealing to me.

Friday, September 6, 2024

Existence and causation

Start with these plausible claims:

  1. If x causes y, the causal relation between x and y is not posterior to the existence of y.

  2. A relation between two entities is never prior to the existence of either entity.

So, the causal relation between x and y is neither prior nor posterior to the existence of y.

But the causal relation is, obviously, intimately tied to the existence of y. What is this tie? The best answer I know is that the causal relation is the existence of y or an aspect of that existence: for y to exist is at least in part for y to have been caused by x.

Thursday, September 5, 2024

Appropriateness of memory chains

A lot of discussion of memory theories of personal identity invokes science-fictional thought experiments, such as when memories are swapped between two brains.

One of the classic papers is Shoemaker’s “Persons and their Pasts”. There, Shoemaker accounts for personal identity across time, at least in the absence of branching, in terms of appropriate causal connections between apparent memories, not just any causal connections.

This matters. Imagine that Alice and Bob both get total memory wipes, so on the memory theory they cease to exist. But the person inhabiting the Alice body then reads Bob’s vividly written diary, which induces in her apparent memories of Bob’s life. I think most memory theorists will want to deny that after the reading of the diary, Bob comes back to life in Alice’s body. Not only would this be a highly counterintuitive consequence, but it would violate the plausible principle that whether someone is dead does not depend on future events, absent something like time travel. For suppose this sequence:

  • Monday: Memory wipe

  • Tuesday: Person inhabiting Alice’s body lives a confused life

  • Wednesday: Person inhabiting Alice’s body reads Bob’s diary, comes to think she’s Bob, and gains all sorts of “correct” apparent memories of Bob’s life.

On Wednesday, the person inhabiting Alice’s body has memories of the person inhabiting Alice’s body on Tuesday, so by the memory theory they are the same person. But if on Wednesday, it is Bob who inhabits Alice’s body, then Bob also already existed on Tuesday by transitivity of identity. On the other hand, if Alice hadn’t read the diary on Wednesday, Bob would not have existed either on Wednesday or on Tuesday. So whether Bob is alive on Tuesday depends on future events, despite the absence of anything like time travel, which is absurd.

To get around diary cases, memory theorists really do need to have an appropriateness condition on the causal connections. Shoemaker’s own appropriateness condition appears inadequate: he thinks that what is needed is the kind of connection that makes a later apparent memory and an earlier apparent memory be both of the same experience. But Alice’s induced apparent memories are of the experiences that Bob so vividly described in his diary, which are the same experiences that Bob set down his memories of.

What the memory theorist should insist on are causal chains that are of the right kind for the transmission of memories, modulo any sameness-of-person condition. But now it is far from clear that the science-fictional scenarios in the literature satisfy this condition. Certainly, the scanning of memories in a brain and the imposition of the same patterns on a brain isn’t the normal way for memories to be causally transmitted over time. That it’s not the normal way does not mean that it’s not an appropriate way, but at least it’s far from clear that it is an appropriate way.

It would be interesting to consider what one should say about a memory theory on which the appropriate causal chain condition is sufficiently strict that the only way to transfer memories from one head to another would be by physically moving the brain. (Could one move a chunk of the brain instead? Maybe, but only if it turns out that memories can be localized. And even so it’s not clear whether coming along with a mere chunk of the brain is the appropriate way to transmit memories; the appropriate way may require full cerebral context.) Such a version of the memory theory would not do justice to “memory swapping” intuitions about the memories from one brain being transferred to another. And I take it that such memory swapping intuitions are important to the case for the memory theory.

Here’s another implausible consequence of this kind of memory theory. Suppose aliens are capturing people, and recording their brain data using a method that destroys the memories. However, being somewhat nice, the aliens then use the recording to restore the memories, and then return the person to earth. On the memory theory, anybody coming back to earth is a new individual. That doesn’t seem quite right.

A challenge for the memory theorist, thus, is to have an account of the appropriate causal chain condition that is sufficiently lax to allow for the memory swap intuitions that often motivate the theory but is strict enough to rule out diary cases. This is hard.

Wednesday, September 4, 2024

Restitution

Suppose Bob paid professional killer Alice to kill him on a day of her choice in the next month. Next day, Bob changes his mind, but has no way of contacting Alice. A week later, Bob sees Alice in the distance aiming a rifle at him. Is it permissible for him to shoot Alice in self-defense?

I take it (somewhat controversially) that killing a juridically innocent person is murder even if the victim consents. Thus, Alice is attempting murder, and normally it is permissible to shoot someone who is trying to murder one. But it seems rather dastardly for Bob to shoot Alice in this case.

On the other hand, though, if Bob hired Alice to kill Carl, and then repented, shooting Alice when Alice is trying to murder Carl does seem the right thing for Bob to do if there is no other way to save Carl’s life.

What is the exact moral difference between the two cases? In both cases, Alice is trying to commit a murder, and in both cases Bob bears a responsibility for this.

I think the difference has something to do with duties of restitution. When one has done something wrong, and then repented, one needs to do one’s best to “undo” the wrong, repaying the victims in a reasonable manner. But there is a gradation of priority, and in particular even if one is oneself among the victims (Socrates thinks the wrongdoer is the chief victim, since in doing wrong one damages one’s virtue), restitution to others takes priority. In both cases, Bob has harmed Alice by tempting her to commit murder. In the case where Alice was hired to murder Bob, restitution to Alice takes precedence over restitution to Bob, and refraining from killing Alice in self-defense seems a precisely appropriate form of restitution. In the case where Alice was hired to murder Carl, however, restitution to Carl takes precedence, and Bob owes it to Carl to shoot Alice.

In fact, I suspect that in the case where Bob hired Alice to kill Carl, if the only way to save Carl’s life is for Bob to leap into the line of fire and die protecting Carl, other things being equal that would be Bob’s duty. Normally to sacrifice one’s life to save another is supererogatory, but not so when the danger to the other comes from one’s own murderous intent.

The morality of restitution is difficult and complex.

Independent invariant regular hyperreal probabilities: an existence result

A couple of years ago I showed how to construct hyperreal finitely additive probabilities on infinite sets that satisfy certain symmetry constraints and have the Bayesian regularity property that every possible outcome has non-zero probability. In this post, I want to show a result that allows one to construct such probabilities for an infinite sequence of independent random variables.

Suppose first we have a group G of symmetries acting on a space Ω. What I previously showed was that there is a hyperreal G-invariant finitely additive probability assignment on all the subsets of Ω that satisfies Bayesian regularity (i.e., P(A) > 0 for every non-empty A) if and only if the action of G on Ω is “locally finite”, i.e.:

  • For any finitely generated subgroup H of G and any point x in Ω, the orbit Hx is finite.

Here is today’s main result (unless there is a mistake in the proof):

Theorem. For each i in an index set, suppose we have a group Gi acting on a space Ωi. Let Ω = ∏iΩi and G = ∏iGi, and consider G acting componentwise on Ω. Then the following are equivalent:

  (a) there is a hyperreal G-invariant finitely additive probability assignment on all the subsets of Ω that satisfies Bayesian regularity and the independence condition that if A1, ..., An are subsets of Ω such that each Ai depends only on coordinates from Ji ⊆ I, with J1, ..., Jn pairwise disjoint, then P(A1 ∩ ... ∩ An) = P(A1)⋯P(An)

  (b) there is a hyperreal G-invariant finitely additive probability assignment on all the subsets of Ω that satisfies Bayesian regularity

  (c) the action of G on Ω is locally finite.

Here, an event A depends only on coordinates from a set J just in case there is a subset A′ of ∏j ∈ JΩj such that A = {ω ∈ Ω : ω|J ∈ A′} (I am thinking of the members of a product of sets as functions from the index set to the union of the Ωi). For brevity, I will omit “finitely additive” from now on.

The equivalence of (b) and (c) is from my old result, and the implication from (a) to (b) is trivial, so the only thing to be shown is that (c) implies (a).

Example: If each group Gi is finite and of size at most N for a fixed N, then the local finiteness condition is met. (Each such group can be embedded into the symmetric group SN, and any power of a finite group is locally finite, so a fortiori its action is locally finite.) In particular, if all of the groups Gi are the same and finite, the condition is met. An example like that is where we have an infinite sequence of coin tosses, and the symmetry on each coin toss is the reversal of the coin.

Philosophical note: The above gives us the kind of symmetry we want for each individual independent experiment. But intuitively, if the experiments are identically distributed, we will want invariance with respect to a shuffling of the experiments. We are unlikely to get that, because the shuffling is unlikely to satisfy the local finiteness condition. For instance, for a doubly infinite sequence of coin tosses, we would want invariance with respect to shifting the sequence, and that doesn’t satisfy local finiteness.

Now, on to a sketch of the proof from (c) to (a). The proof uses a sequence of three reductions using an ultraproduct construction to cases exhibiting more and more finiteness.

First, note that without loss of generality, the index set I can be taken to be finite. For if it’s infinite, for any finite partition K of I and any J ∈ K, let GJ = ∏i ∈ JGi and ΩJ = ∏i ∈ JΩi, with the obvious action of GJ on ΩJ. Then G is isomorphic to ∏J ∈ KGJ and Ω to ∏J ∈ KΩJ. Then if we have the result for finite index sets, we will get a regular hyperreal G-invariant probability on Ω that satisfies the independence condition in the special case where J1, ..., Jn are such that, for distinct i and j, at least one of Ji ∩ J and Jj ∩ J is empty for every J ∈ K. We then take an ultraproduct of these probability measures (indexed by the finite partitions K) with respect to an ultrafilter on the partially ordered set of finite partitions of I ordered by fineness, and then we get the independence condition in full generality.

Second, without loss of generality, the groups Gi can be taken as finitely generated. For suppose we can construct a regular probability that is invariant under H = ∏iHi where Hi is a finitely generated subgroup of Gi and satisfies the independence condition. Then we take an ultraproduct with respect to an ultrafilter on the partially ordered set of sequences of finitely generated groups (Hi)i ∈ I where Hi is a subgroup of Gi and where the set is ordered by componentwise inclusion.

Third, also without loss of generality, the sets Ωi can be taken to be finite, by replacing each Ωi with an orbit of some finite collection of elements under the action of the finitely generated Gi, since such orbits will be finite by local finiteness, and once again taking an appropriate ultraproduct with respect to an ultrafilter on the partially ordered set of sequences of finite subsets of Ωi closed under Gi ordered by componentwise inclusion. The Bayesian regularity condition will hold for the ultraproduct if it holds for each factor in the ultraproduct.

We have thus reduced everything to the case where I is finite and each Ωi is finite. The existence of the hyperreal G-invariant finitely additive regular probability measure is now trivial: just let P(A) = |A|/|Ω| for every A ⊆ Ω. (In fact, the measure is countably additive and not merely finitely additive, real and not merely hyperreal, and invariant not just under the action of G but under all permutations.)
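
A two-coin sanity check of that base case (a sketch; the coin-reversal symmetry is the example mentioned earlier):

```python
from fractions import Fraction
from itertools import product

# Omega_i = {0, 1} for i = 0, 1; G_i is generated by the coin reversal.
OMEGA = set(product((0, 1), repeat=2))

def P(A):
    """Uniform measure P(A) = |A|/|Omega| on the finite product space."""
    return Fraction(len(A), len(OMEGA))

def flip(i, w):
    """The reversal symmetry acting on coordinate i."""
    w = list(w)
    w[i] = 1 - w[i]
    return tuple(w)

A1 = {w for w in OMEGA if w[0] == 0}  # depends only on coordinate 0
A2 = {w for w in OMEGA if w[1] == 1}  # depends only on coordinate 1

assert P({flip(0, w) for w in A1}) == P(A1)  # invariance under G
assert P(A1 & A2) == P(A1) * P(A2)           # independence condition
print(P(A1), P(A2), P(A1 & A2))              # 1/2 1/2 1/4
```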