Thursday, February 29, 2024

The Incarnation and unity of consciousness

A number of people find the following thesis plausible:

  1. Necessarily, the conscious states hosted in a single person at one time are unified in a single conscious state that includes them.

But now consider Christ crucified.

  2. Christ has conscious pain states in his human mind.

  3. Christ has no conscious pain states in his divine mind.

  4. Christ has a conscious divine comprehension state in his divine mind.

  5. Christ has no conscious divine comprehension state in his human mind.

  6. Any conscious state is in a mind.

  7. Christ has no minds other than a human and a divine one.

It seems that (2)–(7) contradict (1). For by (1), (2) and (4) it seems there is a conscious state in Christ that includes both Christ’s pain and Christ’s divine comprehension. But that state wouldn’t be in the divine mind because of (3) and wouldn’t be in the human mind because of (5). But it would have to be in a mind, and Christ has no other minds.

There is a nitpicky objection that (7) might be false for all we know—maybe Christ has some other incarnation on another planet. But that is a mere complication to the argument, given that none of these other incarnations could host the divine comprehension in the created mind.

But the argument I gave above fails if God is outside time. For then the “has” in (4) is compatible with the divine comprehension being atemporal, and so it does not follow from (2) and (4) that the divine comprehension and the pain happen at the same time, as is required to contradict (1).

In other words, we have an argument from the Incarnation to God’s atemporality, assuming the unity of consciousness thesis (1).

That said, while I welcome arguments for divine atemporality, I am not convinced of (1).

Wednesday, February 28, 2024

More on benefiting infinitely many people

Once again let’s suppose that there are infinitely many people on a line infinite in both directions, one meter apart, at positions numbered in meters. Suppose all the people are on par. Fix some benefit (e.g., saving a life or giving a cookie). Let L_n be the action of giving the benefit to all the people to the left of position n. Let R_n be the action of giving the benefit to all the people to the right of position n.

Write A ≤ B to mean that action B is at least as good as action A, and write A < B to mean that A ≤ B but not B ≤ A. If neither A ≤ B nor B ≤ A, then we say that A and B are noncomparable.

Consider these three conditions:

  • Transitivity: If A ≤ B and B ≤ C, then A ≤ C, for any actions A, B and C from among the {L_k} and the {R_k}.

  • Strict monotonicity: L_n < L_{n+1} and R_n > R_{n+1} for all n.

  • Weak translation invariance: If L_n ≤ R_m, then L_{n+k} ≤ R_{m+k}, and if L_n ≥ R_m, then L_{n+k} ≥ R_{m+k}, for any n, m and k.

Theorem: If we have transitivity, strict monotonicity and weak translation invariance, then exactly one of the following three statements is true:

  i. For all m and n, L_m and R_n are incomparable.

  ii. For all m and n, L_m < R_n.

  iii. For all m and n, L_m > R_n.

In other words, if any of the left-benefit actions is comparable with any of the right-benefit actions, there is an overwhelming moral skew whereby either all the left-benefit actions beat all the right-benefit actions or all the right-benefit actions beat all the left-benefit actions.

Proposition 1 in this paper is a special case of the above theorem, but the proof of the theorem proceeds in basically the same way. For a reductio, assume that (i) is false. Then either L_m ≥ R_n or L_m ≤ R_n for some m and n. First suppose that L_m ≥ R_n. Then the second and third paragraphs of the proof of Proposition 1 show that (iii) holds. Now suppose that L_m ≤ R_n. Let L*_k = R_{−k} and R*_k = L_{−k}, and say that A ≤* B iff A* ≤ B*. Then transitivity, strict monotonicity and weak translation invariance hold for ≤*. Moreover, since L_m ≤ R_n, we have R*_{−m} ≤ L*_{−n}, and hence L_{−n} ≥* R_{−m}. Applying the previous case with −m and −n in place of n and m respectively, we conclude that we always have L_j >* R_k, which unpacks to L_{−k} < R_{−j}; hence we always have L_j < R_k, i.e., (ii).
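
Since the prose simply asserts that the three conditions transfer to ≤*, here is the monotonicity check in LaTeX (a sketch; the other two conditions transfer in the same mechanical way):

    % The starred ordering compares mirror images of the actions:
    %   L*_k = R_{-k},  R*_k = L_{-k},  A <=* B  iff  A* <= B*.
    \[
      L_n <^* L_{n+1}
      \iff L^*_n < L^*_{n+1}
      \iff R_{-n} < R_{-n-1},
    \]
    % which holds by strict monotonicity for <= : R_j > R_{j+1} with j = -n-1.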

I suppose the most reasonable conclusion is that there is complete incomparability between the left- and right-benefit actions. But this seems implausible, too.

Again, I think the big conclusion is that human ethics has limits of applicability.

I hasten to add this. One might reasonably think—Ian suggested this in a recent comment—that decisions about benefiting or harming infinitely many people (at once) do not come up for humans. Well, that’s a little quick. To vary the Pascal’s Mugger situation, suppose a strange guy comes up to you on the street, and tells you that there are infinitely many people in a line drowning in a parallel universe, and asks you if you want him to save all the ones to the left of position 123 or all the ones to the right of position −11, because he can magically do either one, and nothing else, and he needs help in his moral dilemma. You are, of course, very dubious of what he is saying. Your credence that he is telling the truth is very, very small. But as any good Bayesian will tell you, it shouldn’t be zero. And now the decision you need to make is a real one.

Tuesday, February 27, 2024

Incommensurability in rational choice

When I hear that two options are incommensurable, I imagine things that are very different in value. But incommensurable options could also be very close in value. Suppose an eccentric tyrant tells you that she will spare the lives of ten innocents provided that you either have a slice of delicious cake or listen to a short but beautiful song. You are thus choosing between two goods:

  1. The ten lives plus a slice of delicious cake.

  2. The ten lives plus a short but beautiful song.

The values of the two options are very close relatively speaking: the cake and song make hardly any difference compared to the ten lives that comprise the bulk of the value. Yet, because the cake and the song are incommensurable, when you add the same ten lives to each, the results are incommensurable.

We can make the differences between the two incommensurables arbitrarily small. Imagine that the tyrant offers you the choice between:

  3. The ten lives plus a chance p of a slice of delicious cake.

  4. The ten lives plus a chance p of a short but beautiful song.

By making p as small as we like, we make the difference between the options as small as we like, but the options remain incommensurable.

Well, maybe “noncomparable” is a better term than “incommensurable”, as it is a more neutral term, without that grand sound. Then we can say that (1) and (2) are “noncomparable by a slight amount” (relative to the magnitude of the overall goods involved).

There is a common test for incommensurability. Suppose A and B are options where neither is better than the other, and we want to know if they are equal in value or incommensurable. The test is to vary one of the two options by a slight amount of value, either positive or negative. If after the tweak the two options are still such that neither is better than the other, they must be incommensurable. (Proof: Let A′ be the tweaked version of A, so A′ is slightly better or worse than A. If B were equal to A, then A′ would be slightly better or worse than B. So if A′ is neither better nor worse than B, we couldn’t have had B and A equal.)

But cases of things that are noncomparable by a slight amount show that we need to be careful with the test. The test still offers a sufficient condition for incommensurability: if the fact that neither is better than the other remains after making an option better or worse, we must have incommensurability. But if the two options are noncomparable by a very, very slight amount, a merely very slight variation in one could destroy the noncomparability, and generate a false negative for incommensurability. For instance, suppose that our two options are (3) and (4) with p = 10^−100. Now suppose the slight variation on (3) is that you are given a mint in addition to the goods in (3). A mint beats a 10^−100 chance of a song, even if it’s incommensurable with a larger chance of a song. So the variation on (3) beats the original (4). But we still have incommensurability.

(Note: There are two concepts of incommensurability. One is purely value based, and the other is agent-centric and based on rational choice. It is the second one that I am using in this post. I am comparing not pure values, but the reasons for pursuing the values. Even if the values are strictly incommensurable, as in the case of a certainty of a mint and a 10^−100 chance of a song, the former is rationally preferable at least for humans.)

Saving infinitely many lives

Suppose there is an infinitely long line with equally-spaced positions numbered sequentially with the integers. At each position there is a person drowning. All the persons are on par in all relevant respects and equally related to you. Consider first a choice between two actions:

  1. Save people at 0, 2, 4, 6, 8, ....

  2. Save people at 1, 3, 5, 7, 9, ....

It seems pretty intuitive that (1) and (2) are morally on par. The non-negative evens and odds are alike!

But now add a third option:

  3. Save people at 2, 4, 6, 8, ....

The relation between (2) and (3) is exactly the same as the relation between (1) and (2)—after all, there doesn’t seem to be anything special about the point labeled with the zero. So, if (1) and (2) are on par, so are (2) and (3).

But by transitivity of being on par, (1) and (3) are on par. But they’re not! It is better to perform action (1), since that saves all the people that action (3) saves, plus the person at the zero point.

So maybe (1) is after all better than (2), and (2) is better than (3)? But this leads to the following strange thing. We know how much better (1) is than (3): it is better by one person. If (1) is better than (2) and (2) is better than (3), then since the relationships between (1) and (2) and between (2) and (3) are the same, it follows that (1) must be better than (2) by half a person and (2) must be better than (3) by that same amount.
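
In symbols, writing v for value and δ for the common difference between successive options (a quick sketch of the arithmetic in LaTeX):

    % Two equal steps must split the one-person gap between (1) and (3):
    \[
      v(1) - v(3) = \bigl(v(1) - v(2)\bigr) + \bigl(v(2) - v(3)\bigr)
      = 2\delta = 1 \text{ person}
      \quad\Longrightarrow\quad
      \delta = \tfrac{1}{2} \text{ person}.
    \]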

But when you are choosing which people to save, and they’re all on par, and the saving is always certain, how can you get two options that are “half a person” apart?

Very strange.

In fact, it seems we can get options that are apart by even smaller intervals. Consider:

  4. Save people at 0, 10, 20, 30, 40, ....

  5. Save people at 1, 11, 21, 31, 41, ....

and so on up to:

  14. Save people at 10, 20, 30, 40, ....

Each of options (4)–(14) is related the same way to the next. Option (4) is better than option (14) by exactly one person. So it seems that each of options (4)–(13) is better by a tenth of a person than the next!

I think there is only one way out that is at all reasonable, and it is to say that in both the (1)–(3) series and the (4)–(14) series, each option is incomparable with the succeeding one, but we have comparability between the start and end of each series.

Maybe, but is the incomparability claim really correct? It still feels like (1) and (2) should be exactly on par. If you had a choice between (1) and (2), and one of the two actions involved a slight benefit to another person—say, a small probability of saving the life of the person at −17—then we should go for the action with that slight benefit. And this makes it implausible that the two are incomparable.

My own present preferred solution is that the various things here seem implausible to us because human morality is not meant for cases with infinitely many beneficiaries. I think this is another piece of evidence for the species-relativity of morality: our morality is grounded in human nature.

Monday, February 26, 2024

Consciousness finitism

My 11-year-old has an interesting intuition, that it is impossible to have an infinite number of conscious beings. She is untroubled by Hilbert’s Hotel, and insists the intuition is specific to conscious beings, but is unable to put her finger on what exactly bothers her about an infinity of conscious beings. It’s not considerations like “If there are infinitely many people, you probably have a near-duplicate.” Near-duplicates don’t bother her. It’s consciousness specifically. She is surprised that a consciousness-specific finitist intuition isn’t more common.

My best attempt at a defense of consciousness-finitism was that it seems reasonable to think of yourself as a uniformly randomly chosen member of the set of all conscious beings. But thinking of yourself as a uniformly randomly chosen member of a countably infinite set leads to the well-known paradoxes of countably infinite fair lotteries. So that may provide some sort of argument for consciousness-finitism. But my daughter insists that’s not where her intuition comes from.
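
The core of those paradoxes can be put in one line. A uniform, countably additive probability on a countably infinite set would have to give every singleton the same probability p, and then (a standard observation, here in LaTeX):

    % Countable additivity makes a uniform chance on N impossible:
    \[
      1 = P(\mathbb{N}) = \sum_{n=0}^{\infty} P(\{n\}) = \sum_{n=0}^{\infty} p =
      \begin{cases}
        0 & \text{if } p = 0,\\
        \infty & \text{if } p > 0.
      \end{cases}
    \]

Dropping countable additivity avoids this particular contradiction, but merely finitely additive uniform lotteries have their own well-known pathologies.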

Another argument for consciousness-finitism would be the challenges of aggregating utilities across an infinite number of people: If all the people are positioned at locations numbered 1,2,3,…, and you benefit the people at even-numbered locations, you benefit the same quantity of people as when you benefit the people whose locations are divisible by four, but clearly benefiting the people at the even-numbered locations is a lot better. I haven’t tried this family of arguments on my daughter, but I don’t think her intuitions come from thinking about well-being.

Still, I have a hard time believing in the impossibility of an infinite number of consciousnesses on the strength of such arguments. The main reason I have such a hard time is that it seems obvious that you could have a forward infinite regress of conscious beings, each giving birth to the next.

Friday, February 23, 2024

Teaching virtue

A famous Socratic question is whether virtue can be taught. This argument may seem to settle the question:

  1. If vice can be taught, virtue can be taught.

  2. Vice can be taught. (Clear empirical fact!)

  3. So, virtue can be taught.

Well, except that what I labeled as a clear empirical fact is not something that Socrates would accept. I think Socrates reads “to teach” as a success verb, with a necessary condition for teaching being the conveyance of knowledge. In other words, it’s not possible to teach falsehood, since knowledge is always of the truth, and presumably in “teaching” vice one is “teaching” falsehoods such as that greed is good.

That said, if we understand “to teach” in a less Socratic way, as didactic conveyance of views, skills and behavioral traits, then (2) is a clear empirical fact, and (1) is plausible, and hence (3) is plausible.

That said, it would not be surprising if it were harder to teach virtue even in this non-Socratic sense than it is to teach vice. After all, it is surely harder to teach someone to swim well than to swim badly.

Tuesday, February 20, 2024

Relativism and natural law

Individual relativism and natural law ethics have something in common: both agree that the grounds of your ethical obligations are found in you. The disagreement, of course, is in how they are found. The relativist says that they are found in your subjectivity, in your beliefs and values that differ from person to person, while the natural lawyer thinks they are found in your human form, which is exactly like the human form of everyone else.

(Whether Kantianism shares this feature depends on how we read the metaphysics of rationality, namely whether our rationality is a genuine part of our selves or a mere abstraction.)

I think this commonality has some importance: it captures the idea that we are in some sense morally beholden to ourselves rather than to something alien, something about which we could ask “Why should I listen to it?”

But I think in the end natural law does a better job being a non-alienating ethics. For we have good reason to think that my moral beliefs and values are etiologically largely the product of society around me and accidental features in my life. If these beliefs and values are what grounds my moral obligations, then my obligations are by and large the product of society and accident. (Think of the common philosophical observation that we do not choose our beliefs, but catch them like one catches a cold.) If I had lived in a different society with different accidental influences, I would have had different obligations on relativism. The obligations are, thus, largely the result of external and accidental influence on my cognition.

On the other hand, on natural law, my obligations are grounded in my individual human form which is my central and essential metaphysical constituent. Granted, I did not create this form for myself. But neither is it an accidental result of external influence—it defines me.

I think that as a society we feel that the variability of our individual beliefs and values makes us more autonomous if relativism is true. But once we realize that this variability is largely due to external influence, our intuitions should shift. Natural law provides a more real autonomy.

Of course, on a theistic version of natural law, my form comes from God. Yes, but on orthodox Aristotelianism (which I am not sure I completely endorse) it is not an alien imposition, since I have no existence apart from that form.

Sunday, February 18, 2024

Joshua Rasmussen moving to Baylor

I am very, very happy that my brilliant friend Joshua Rasmussen, of Azusa Pacific University, has accepted a full professor position in Baylor's Philosophy Department starting Fall 2024.

Disable Windows double-finger-click

My new work laptop did not have dedicated buttons, and by default Windows set it up so that a two-finger tap or click on the touchpad would trigger a right-button click. I turned on the non-default setting that lets me click in the lower-right part of the touchpad to get a right button click, and turned off the two-finger tap options. There is no way to turn off the option for generating a right-button click with a two-finger click. 

This might seem quite innocent, but I kept on getting fake right-button clicks instead of left-button clicks when clicking the touchpad. I changed the registry settings to make the right-click area really small. It didn't solve the problem. Finally, I figured out what was going on: I click the touchpad with the side of my right thumb. This seems to result in the touchpad occasionally registering the tip and joint of my right thumb as separate contacts. The bad right-clicks were driving me crazy. I searched through the registry and Windows .sys and .dll files for some hidden option to turn off the two-finger click for right-button clicks, finding nothing. Nothing. I tried to install some older proprietary touchpad drivers, but none of them worked.

Finally, it was time to write some code to disable the bad right clicks. After a bunch of hiccups (I almost never write code that interacts with the Windows API), and a Python-based prototype, I wrote a little C program. Just set disable-two-finger-right-click.exe to run as Administrator in Task Scheduler on login, and it takes care of it. The code uses rawinput to get the touchpad HID report, uses the HidP-* functions to parse it, and registers a low-level mouse hook to remap the bad right clicks to left clicks based on some heuristics (mainly based around how long ago there was a two-finger click before the right click, while ignoring the official right-click area of the touchpad).
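
For the curious, here is a minimal sketch in C of just the last stage—the low-level hook that remaps a suspect right click to a left click. This is an illustrative reconstruction, not the actual program: twoFingerSeenRecently() and its 100 ms window are hypothetical stand-ins for the rawinput/HidP parsing and the real heuristics.

    /* Sketch: low-level mouse hook that turns right-button events into
     * left-button events when they follow a suspected two-finger contact. */
    #include <windows.h>

    static DWORD g_lastTwoFingerTime;  /* would be set by the rawinput code */

    static BOOL twoFingerSeenRecently(void) {
        /* Hypothetical heuristic: a two-finger contact in the last 100 ms. */
        return GetTickCount() - g_lastTwoFingerTime < 100;
    }

    static LRESULT CALLBACK mouseHook(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION && twoFingerSeenRecently()) {
            INPUT in = {0};
            in.type = INPUT_MOUSE;
            if (wParam == WM_RBUTTONDOWN)
                in.mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
            else if (wParam == WM_RBUTTONUP)
                in.mi.dwFlags = MOUSEEVENTF_LEFTUP;
            if (in.mi.dwFlags) {
                SendInput(1, &in, sizeof in);  /* inject the left-button event */
                return 1;                      /* swallow the bad right-button event */
            }
        }
        return CallNextHookEx(NULL, code, wParam, lParam);
    }

    int main(void) {
        MSG msg;
        HHOOK hook = SetWindowsHookEx(WH_MOUSE_LL, mouseHook, GetModuleHandle(NULL), 0);
        if (!hook)
            return 1;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {  /* hooks need a message loop */
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }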

So many hours that would have been saved if Microsoft just added an extra option.

Thursday, February 15, 2024

Technology and dignitary harms

In contemporary ethics, paternalism is seen as really bad. On the other hand, in contemporary technology practice, paternalism is extremely widely practiced, especially in the name of security: all sorts of things are made very difficult to unlock, with the main official justification being that if users unlock the things, they open themselves to malware. As someone who always wants to tweak technology to work better for him, I keep on running up against this: I spend a lot of time fighting against software that wants to protect me from my own stupidity. (The latest was Microsoft’s lockdown on direct access to HID data from mice and keyboards when I wanted to remap how my laptop’s touchpad works. Before this, because Chromecasts do not make root access available, to get my TV’s remote control fully working with my Chromecast, I had to make a hardware dongle sitting between the TV and the Chromecast, instead of simply reading the CEC system device on the Chromecast and injecting appropriate keystrokes.)

One might draw one of two conclusions:

  1. Paternalism is not bad.

  2. Contemporary technology practice is ethically really bad in respect of locking things down.

I think both conclusions would be exaggerated. I suspect the truth is that paternalism is not quite as difficult to justify as contemporary ethics makes it out, and that contemporary technology practice is not really bad, but just a little bad in the respect in question, even if that “a little bad” is very annoying to hacker types like me.

Here is another thought. While the official line on a lot of the locking down of hardware and software is that it is for the good of the user, in the name of security, it is likely that often another reason is that walled gardens are seen as profitable in a variety of ways. We think of a profit motive as crass. But at least it’s not paternalistic. Is crass better than paternalistic? On first thought, surely not: paternalism seeks the good of the customer, while profit-seeking does not. On second thought, it shows more respect for the customer to have a wall around the garden in order to be able to charge admission rather than in order to control the details of the customer’s aesthetic experience for the customer’s own good (you will have a better experience if you start by these oak trees, so we put the gate there and erect a wall preventing you from starting anywhere else). One does have a right to seek reasonable compensation for one’s labor.

The considerations of the last paragraph suggest that the special harm of paternalistic behavior is a dignitary harm. There is no greater non-dignitary harm to me when I am prevented from rooting my device for paternalistic reasons than when I am prevented from doing so for profit reasons, but the dignitary harm is greater in the paternalistic case.

There is, however, an interesting species of dignitary harm that sometimes occurs in profit-motivated technological lockdowns. Some of these lockdowns are motivated by protecting content-creator profits from user piracy. This, too, is annoying. (For instance, when having trouble with one of our TV’s HDMI ports, I tried to solve the difficulty by using an EDID buffer device, but then I could no longer use our Blu-Ray player with that port because of digital-rights management issues.) And here there is a dignitary harm, too. For while paternalistic lockdowns are based on the presumption that lots of users are stupid, copyright lockdowns are based on the presumption that lots of users are immoral.

Objectively, it is worse to be treated as immoral than as stupid: the objective dignitary harm is greater. (But oddly I tend to find myself more annoyed when I am thought stupid than when I am thought immoral. I suppose that is a vice in me.) This suggests that in terms of difficulty of justification of technological lockdowns with respect to dignitary harms, the ordering of motives would be:

  1. Copyright-protection (hardest to justify, with biggest dignitary harm to the user).

  2. Paternalism (somewhat smaller dignitary harm to the user).

  3. Other profit motives (easiest to justify, with no dignitary harm to the user).

Wednesday, February 14, 2024

Yet another tweak of the knowledge argument against physicalism

Here is a variant on the knowledge argument:

  1. All empirical facts a priori follow from the fundamental facts.

  2. The existence of consciousness does not a priori follow from the fundamental physical facts.

  3. The existence of consciousness is an empirical fact.

  4. Thus, there are fundamental facts that are not fundamental physical facts.

In support of (2), note that we wouldn’t be able to tell which things are conscious by knowing their physical constitution without some a posteriori data like “When I say ‘ouch’, I am conscious.”

Tuesday, February 13, 2024

Physicalism and "pain"

Assuming physicalism, plausibly there are a number of fairly natural physical properties that occur when and only when I am having a phenomenal experience of pain, all of which stand in the same causal relations to other relevant properties of me. For instance:

  a. having a brain in neural state N

  b. having a human brain in neural state N

  c. having a primate brain in neural state N

  d. having a mammalian brain in neural state N

  e. having a brain in functional state F

  f. having a human brain in functional state F

  g. having a primate brain in functional state F

  h. having a mammalian brain in functional state F

  i. having a central control system in functional state F.

Suppose that one of these is in fact identical with the phenomenal experience of pain. But which one? The question is substantive and ethically important. If, for instance, the answer is (c), then cats and computers in principle couldn’t feel pain but chimpanzees could. If the answer is (i), then cats and computers and chimpanzees could all feel pain.

It is plausible on physicalism (e.g., Loar’s version) that my concept of pain refers to a physical property by ostension—I am ostending to the state that occurs in me in all and only the cases where I am in pain, and which has the right kind of causal connection to my pain behaviors. But there are many such states, as we saw above.

We might try to break the tie by saying that by reference magnetism I am ostending to the simplest physical state that has the above role, and the simplest one is probably (i). I don’t think this is plausible. Assuming naturalism, when multiple properties of a comparable degree of naturalness play a given role, ostension via the role is likely to be ambiguous, with ambiguity needing to be broken by a speaker or community decision. At some point in the history of biology, we had to decide whether to use “fish” at a coarse-grained functional level and include dolphins and whales as fish, or at a finer-grained level and get the current biological concept. One option might be a little more natural than the other, but neither is decisively more natural (any fish concept that has a close connection to ordinary language is going to have to be paraphyletic), and so a decision was needed. And even if (i) is somewhat simpler than (a)–(h), it is not decisively more natural.

This yields an interesting variant of the knowledge argument against physicalism.

  1. If “pain” refers to a physical property, it is a “merely semantic” question, one settled by linguistic decision, whether “pain” could apply to an appropriately programmed computer.

  2. It is not a “merely semantic” question, one settled by linguistic decision, whether “pain” could apply to an appropriately programmed computer.

  3. Thus, “pain” does not refer to a physical property.

Playing to win in order to lose

Let’s say I have a friend who needs cheering up as she has had a lot of things not go her way. I know that she is definitely a better badminton player than I. So I propose a badminton match. My goal in doing so is to have her win the game, so as to cheer her up. But when I play, I will of course be playing to win. She may notice if I am not, plus in any case her victory will be the more satisfying the better my performance.

What is going on rationally? I am trying to win in order that she may win a closely contested game. In other words, I am pursuing two logically incompatible goals in the same course of action. Yet the story makes perfect rational sense: I achieve one end by pursuing an incompatible end.

The case is interesting in multiple ways. It is a direct counterexample to the plausible thesis that it is not rational to be simultaneously pursuing each of two logically incompatible goals. It’s not the only counterexample to that thesis. A perhaps more straightforward one is where you are pursuing a disjunction between two incompatible goods, and some actions are rationally justified by being means to each good. (E.g., imagine a more straightforward case where you reason: If I win, that’ll cheer me up, and if she wins, that’ll cheer her up, so either way someone gets cheered up, so let’s play.)

The case very vividly illustrates the distinction between:

  1. Instrumentally pursuing a goal, and

  2. Pursuing an instrumental goal.

My pursuit of victory is instrumental to cheering up my friend, but victory is not itself instrumental to my further goals. On the contrary, victory would be incompatible with my further goal. Again, this is not the only case like that. A case I’ve discussed multiple times is follow-through in racquet sports: after hitting the ball or shuttle, you intentionally continue moving the racquet, because the hit will be smoother if you intend to follow through, even though the continuation of movement has no physical effect on the ball or shuttle. You are instrumentally pursuing follow-through, but the follow-through is not instrumental.

Similarly, the case shows that it is false that every end you have is either pursued for its own sake or a means to something else. For neither are you pursuing victory for its own sake nor is victory a means to something else—though your pursuit of victory is a means to something else.

Given the above remarks, here is an interesting ethics question. Is it permissible to pursue the death of an innocent person in order to save that innocent person’s life? The cases are, of course, going to be weird. For instance, your best friend Alice is a master fencer, and has been unjustly sentenced to death by a tyrant. The tyrant gives you one chance to save her life: you can fence Alice for ten minutes, with you having a sharpened sword and her having a foil with a safety tip, and you must sincerely try to kill her—the tyrant can tell if you are not trying to kill. If she survives the ten minutes, she goes free. If you fence Alice, the structure of your intention is just as in my badminton case: You are trying to kill Alice in order to save her life. Alice’s death would be pursued by you, but her death is not a means nor something pursued for its own sake.

If the story is set up as above, I think the answer is that, sadly, it is wrong for you to try to kill Alice, even though that is the only way to save her life.

All that said, I still wonder a bit. In the badminton case, are you really striving for victory? Or are you striving to act as if you were striving for victory? Maybe that is the better way to describe the case. If so, then this may be a counterexample to my main thesis here.

In any case, if there is a good chance the tyrant can’t tell the difference between your trying to kill Alice and your intentionally performing the same motions that you would be performing if you were trying to kill Alice, it seems to me that it might be permissible to do the latter. This puts a lot of pressure on some thoughts about the closeness problem for Double Effect. For it seems pretty plausible to me that it would be wrong for you to intentionally perform the same motions that you would be performing if you were trying to kill Alice in order to save people other than Alice.

Monday, February 12, 2024

Overdetermining causation and prevention

Overdetermination seems to work differently for prevention and positive causation.

Suppose Timmy the turtle is wearing steel armor over his shell, because it looks cool. Alice shoots an arrow at Timmy’s back from the side, which glances off the armor. Let us assume that arrows shot at the back of an unarmored turtle from the side also harmlessly glance off the shell. Then we have two questions:

  1. Did the armor prevent Timmy’s death?

  2. Did the armor cause the arrow to glance off?

My intuition is that the answers are “no” and “yes”, respectively. You only count as preventing death if you are stopping something lethal. But I assumed that an arrow aimed at the back of an ordinary turtle from the side glances off the shell and is not lethal. On the other hand, it is clear that the arrow glanced off the armor, not the shell, and so it was the armor that redirected the flight.

Why the difference?

I think it may be this. There is a particular token flight-redirection event f0 that the armor caused. When you cause a token of a type, you automatically count as having caused an event of that type. So by causing f0, the armor caused a flight-redirection event, a glancing-off.

However, it does not seem right to say that in preventing an event, one is causing a token non-occurrence. There would be too many non-occurrences in the ontology! Prevention is prevention of a type.

Friday, February 9, 2024

Poetic justice

If you try to con someone out of their money by proposing a complex scheme to them, and then you make a mistake in your calculations and end up having to give them a lot of money, that’s poetic justice.

  1. Poetic justice is always justice.

  2. Instances of justice are always the intentional work of a person.

  3. Some instances of poetic justice are not the intentional work of any human person.

  4. So, there exists a non-human person.

And God is the best candidate.

Thursday, February 8, 2024

Supervenience and counterfactuals

On typical functionalist views of mind, what mental states a physical system has depends on counterfactual connections between physical properties in that system. But we can have two worlds that are exactly the same physically—have exactly the same tapestry of physical objects, properties and relations—but differ in what counterfactual connections hold between the physical properties. To see that, just imagine that one of the two worlds is purely physical, and in that world, w1, striking a certain match causes a fire, and:

  1. Were that match not struck, there would have been no fire.

But now imagine another world, w2, which is physically exactly the same, but there is a nonphysical spirit who wants the fire to happen, and who will miraculously cause the fire if the match is not struck. But since the match is struck, the spirit does nothing. In w2, the counterfactual (1) is false. (This is of course just a Frankfurt case.)

Thus physicalist theories where counterfactual connections are essential are incompatible with supervenience of the mental upon the physical.

I suppose one could insist that the supervenience base has to include counterfactual facts, and not just physical facts. But this is problematic. Even in purely physical worlds, counterfactual facts are not grounded in physical facts, but in physical facts combined with the absence of spirits, ghosts, etc. And in worlds that are only partly physical, counterfactual connections between physical facts may be grounded in the dispositions of non-physical entities.

Something Mary doesn't know

Here is something our old friend Mary, raised in a black and white world, cannot know simply by knowing all of physics:

  1. What are the necessary and sufficient physical conditions for two individuals to be in exactly the same phenomenal state?

Of course, her being raised in a black and white world is a red herring. I think nobody can know the answer to (1) simply by knowing all of physics.

Some remarks:

  • Knowledge of the answer to (1) is clearly factual descriptive knowledge. So responses to the standard knowledge argument for dualism that distinguish kinds of knowledge have no effect here.

  • The answer to (1) could presumably be formulated entirely in the language of physics.

  • Question (1) has a presupposition, namely that there are necessary and sufficient physical conditions, but the physicalist can’t deny that.

  • A sufficient condition is easy given physicalism: the individuals have the exact same physical state.

  • Dennettian RoboMary-style simulation does not solve the question. One might hope that if you rewrite your software, you can check if you have the same qualia before and after the rewrite. But the problem is that you can only really do exact comparisons of qualia that you see in a unified way, and there is insufficient unification of your state across the software rewrite.

Humanity and humans

From childhood, I remember the Polish Christmas carol “Amidst the Silence of Night” from around the beginning of the 19th century, and I remember being particularly impressed by the lines:

Ahh, welcome, Savior, longed for of old,
four thousand years awaited.
For you, kings, prophets waited,
and you this night to us appeared.

I have lately found troubling the question: Why did God wait over a hundred thousand years from the beginning of the human race to send us his Son and give us the Gospel?

The standard answer is that God needed to prepare humankind. The carol’s version of this answer suggests that this preparation intensified our longings for salvation through millennia of waiting. A variant is that we need a lot of time to fully realize our moral depravity in the absence of God. Or one might emphasize that moral teaching is a slow and gradual process, and millennia are needed to make us ready to receive the Gospel.

I think there is something to all the answers, but they do not fully satisfy as they stand. After all, a human child from 100,000 years ago is presumably roughly as capable of moral development as a modern child. If we had time travel, it seems plausible that missionaries would be just as effective 100,000 years ago as they were 1000 years ago. The intensification of longings and the realization of social moral depravity are, indeed, important considerations, but human memory, even aided by writing, only goes back a few thousand years. Thus, two thousand years of waiting and learning about moral depravity would likely have had basically the same result for the individuals in the time of the Incarnation as a hundred thousand years did.

I am starting to think that this problem cannot be fully resolved simply by considering individual goods. It is important, I think, to consider humankind as a whole, with goods attached to the human community as a whole. The good of moral development can be considered on an individual level, and that good needs a few decades rather than millennia. But the good of moral development can also be considered on the level of humankind, and there millennia are fitting if the development is not to ride roughshod over nature. Similarly, the good of longing for and anticipation of a great good only needs at most a few decades in an individual, but there is a value in humankind as a whole longing for and anticipating on a species timescale rather than an individual timescale.

In other words, reflection on the waiting for Christ pushes us away from an overly individualistic view. As do, of course, other aspects of Christian theology, such as reflection on the Fall, the Church, the atonement, etc.

Am I fully satisfied? Not quite. Is the value of humankind’s more organic development worth sacrificing the goods of thousands of generations of ordinary humans who did not hear the Gospel? God seems to think so, and I am willing to trust him. There is doubtless a lot more to be said. But it helps me to think that this is yet another one of those many things where one needs to view a community (broadly understood) as having a moral significance going beyond the provision of more individualistic goods.

Two more remarks. First, a graduate student pointed out to me (if I understood them right) that perhaps we should measure individual moral achievement relative to the state of social development. If so, then perhaps there was not so great a loss to individuals, since what might matter for their moral wellbeing is this relative moral achievement.

Second, the specifically Christian theological problem that this post addresses has an analogue in a subspecies of the problem of evil that somehow has particularly bothered me for a long time: the evils caused by lack of knowledge, and especially lack of medical knowledge. Think of the millennia of people suffering and dying in ways that could have been averted had people only known more, say, about boiling water, washing hands or making vaccines. I think there is a value in humankind’s organic epistemic development. But to employ that as an answer one has to be willing to say that such global goods of humankind as a whole can trump individual goods.

(Note that all that I say is meant to be compatible with a metaphysics of value on which the loci of value are always individuals. For an individual’s well-being can include external facts about humankind. Thus the good of humankind as a whole might be metaphysically housed in the members. The important thing, however, is that these goods are goods the human has qua part of humanity.)

Wednesday, February 7, 2024

Leibniz's King of China thought experiment

Leibniz famously offers this thought experiment:

Supposing that an individual were to instantly become King of China, but on the condition of forgetting what he has been, as if he was completely born again—isn’t that practically, with regard to perceivable effects, as if he were to be annihilated and a King of China were to be created in the same moment in his place? This the individual has no reason to desire. (Gerhardt IV, p. 460)

The context is that Leibniz isn’t doing metaphysics here, but supporting an ethical point that memory is needed for one to be a fit subject for reward and punishment and a theological point that eternal life requires more than mere eternal existence without the psychological features of human life. Nonetheless, some have thought that thought experiments like Leibniz’s offer support for memory theories of personal identity. I will argue that tweaking Leibniz’s thought experiment in two ways shows that this employment would be mistaken. In fact, I think the second tweak will offer an argument against memory theories.

Tweak 1: Memory theories of personal identity require a chain of memories, but not a chain of personally important memories. So all we need to ensure identity of the earlier individual with the later King of China is that the King of China remembers something really minor from the hour before the transformation, say seeing a fly buzzing around. Allowing the memory of a fly to survive the enthronement does not affect the intuition that the process is one that “the individual has no reason to desire.” The loss of personally important memories—especially of interpersonal relationships—is too high a price for the alleged benefit of ruling a great nation. Hence the intuition is not about personal identity, but—as Leibniz himself thinks—about prudential connections in a person’s life. Nor should we modify memory theories of personal identity to require the memories to be personally important, since that would make personal identity too fragile.

Tweak 2: First, suppose that in addition to the individual’s memories being wiped, the individual gets a new set of memories implanted, copied from some other living person. That so far does not affect the intuition that the process is one that one has “no reason to desire.” Second, add that the other living person happens to be one’s exact duplicate from Duplicate Earth. On memory theories of personal identity, one still perishes—the memories aren’t one’s own, even if they are exactly like one’s own. But a good chunk of the force of the thought experiment evaporates. It is, admittedly, an important thing that one’s apparent memories be real memories, and when they are taken from one’s exact duplicate, they are not. If one’s apparent memories are from one’s duplicate, then one isn’t remembering one’s friends and family, but instead is having quasi-memories of the duplicate’s friends and family, who happen to be exactly like one’s own. That is a real loss objectively speaking. But it is a much lesser loss than if one’s memories are simply wiped or replaced by those of a non-duplicate.

Note further that in the case where one’s memories are replaced by those of a duplicate, if enough benefits are thrown into the King of China scenario, the whole thing might actually become positively worthwhile. Suppose you are a lonely individual without significant personal relationships, but as King of China you would have a fuller and more interpersonally fulfilling life, despite the inevitable presence of flatterers and the mind-numbing work of ruling a vast empire. Or suppose creditors are hounding you night and day. Or you have a disease that can only be cured with the resources of a vast empire. When we note this, we see that the modified thought experiment provides evidence against the memory theory. For on the memory theory, it makes no difference to one’s identity whether the memories will come from a duplicate or not, as long as they don’t come from oneself, and what benefits the King of China will receive is largely prudentially irrelevant.

Objection 1: If the King of China gets memories from your duplicate, then the King of China will have your values and will promote your goals with all of the power of an empire. That could be prudentially worth it and provides some noise for the tweaked thought experiment.

Response: We can control for this noise. Distinguish your goals into two classes: those where your existence is essential to the goal and those where your existence is at most incidental to the goal. We can now suppose that you are a selfish individual who has no goals of the second type. Or we can suppose that all your goals of the second type are such that you think that being King of China will not actually help with them. (Perhaps world peace is a goal of yours, but like Tolstoy you think individuals, including emperors, are irrelevant to such goals.)

Objection 2: If you know that the duplicate has the exact same memories as you do, then copying memories from the duplicate at your behest maintains a counterfactual connection between the final memory state and your pre-transformation memories. If the latter were different from what they are, you wouldn’t have agreed to the copying.

Response: There is nothing in Leibniz’s thought experiment about your consent. We can suppose this just happens to you. And that it is a complete coincidence that the subject from whom memories are taken and put into you is your duplicate.

Tuesday, February 6, 2024

The hand and the moon

Suppose Alice tells me: “My right hand is identical with the moon.”

My first reaction will be to suppose that Alice is employing some sort of metaphor, or is using language in some unusual way. But suppose that further conversation rules out any such hypotheses. Alice is not claiming some deep pantheistic connection between things in the universe, or holding that her hand accompanies her like the moon accompanies the earth, or anything like that. She is literally claiming of the object that the typical person will identify as “Alice’s hand” that it is the very same entity as the typical person will identify as “the moon”.

I think I would be a little stymied at this point. Suppose I expressed this puzzlement to Alice, and she said: “An oracle told me that over the next decade my hand will swell to enormous proportions, and will turn hard and rocky, the exact size and shape of the moon. Then aliens will amputate the hand, put it in a giant time machine, send it back 4.5 billion years, so that it will orbit the earth for billions of years. So, you see, my hand literally is the moon.”

If Alice isn’t pulling my leg, she is insane to think this. But now I can make some sense of her communication. Yes, she really is using words in the ordinary and literal sense.

Now, to some dualist philosophers the claim that a mental state of feeling sourness is an electrochemical process in the brain is about as weird as the claim that Alice’s hand is the moon. I’ve never found this “obvious difference” argument very plausible, despite being a dualist. Thinking through my Alice story makes me a little more sympathetic to the idea that there is something incredible about the mental-physical identity claim. But I think there is an obvious difference between the hand = moon and quale = brain-state claims. The hand and the moon obviously have incompatible properties: the colors are different, the shapes are different, etc. Some sort of an explanation is needed how that can happen despite identity—say, time-travel.

The analogue would be something like this: the quale doesn’t have a shape, while the brain process does. But it doesn’t seem at all clear to me that the quale positively doesn’t have a shape. It’s just that it is not the case that it positively seems to have a shape. Imagine that qualia turned out to be nonphysical but spatially extended entities spread through regions of the brain, kind of like a ghost is a nonphysical but spatially extended entity. There is nothing obvious about the falsity of this hypothesis. And on this hypothesis, qualia would have shape.

To be honest, I suspect that even if qualia don’t have a shape, God could give them the additional properties (say, the right relation to points of space) that would give them shape.

Computationalism and subjective time

Conway’s Game of Life is Turing complete. So if computationalism about mind is true, we can have conscious life in the Game of Life.

Now, consider a world C (for Conway) with three discrete spatial dimensions, x, y and z, and one temporal dimension, t, discrete or not. Space is thus a three-dimensional regular grid. In addition to various “ordinary” particles that occupy the three spatial dimensions and have effects forwards along the temporal dimension, C also has two special particle types, E and F.

The causal powers of the E and F particles have effects simultaneous with their causes, and follow a spatialized version of the rules of the Game of Life. Say that a particle at coordinates (x0,y0,z0) has as its “neighbors” particles at the eight grid points with the same z coordinate z0 and surrounding (x0,y0,z0). Then posit these causal powers of an E (for “empty”) or F (for “full”) particle located at (x0,y0,z0):

  • If it’s an F particle and has two or three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • If it’s an E particle and has exactly three F neighbors, it instantaneously causes an F particle at (x0,y0,z0+1).

  • Otherwise, it instantaneously causes an E particle at (x0,y0,z0+1).
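
To make the propagation concrete, here is a minimal sketch in C of how one z-layer determines the next—just the standard Game of Life step, relabeled. The finite N×N grid with dead borders is a simplification for illustration; the world C has an unbounded grid.

    /* One z-step of the spatialized Life rule: layer z+1 is computed from
     * layer z exactly as one Game of Life generation is computed from the
     * previous one.  1 = F ("full") particle, 0 = E ("empty") particle. */
    #define N 16

    void next_layer(const unsigned char z[N][N], unsigned char znext[N][N]) {
        for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++) {
                int f = 0;  /* count F particles among the eight neighbors */
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++) {
                        if (dx == 0 && dy == 0)
                            continue;
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < N && ny >= 0 && ny < N)
                            f += z[nx][ny];  /* off-grid cells count as E */
                    }
                /* F with two or three F neighbors, or E with exactly three,
                 * causes an F at (x, y, z+1); anything else causes an E. */
                znext[x][y] = z[x][y] ? (f == 2 || f == 3) : (f == 3);
            }
    }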

Furthermore, suppose that along the temporal axis, the E and F particles are evanescent: if they appear at some time, they exist only at that time, perishing right after.

In other words, once some E and F particles occur somewhere in space, they instantly propagate more E and/or F particles to infinity along the z-axis, all of which particles then perish by the next moment of time.

Given the Turing completeness of the Game of Life and a computational theory of mind, the E and F particles can compute whatever is needed for a conscious life. We have, after all, a computational isomorphism between an appropriately arranged E and F particle system in C and any digital computer system in our world.

But because the particles are evanescent, that conscious life—with all its subjective temporal structure—will happen all at once according to the objective time of its world!

If this is right, then on a computational theory of mind, we can have an internal temporally structured conscious life with a time sequence that has nothing to do with objective time.

One can easily get out of this consequence by stipulating that mind-constituting computations must be arranged in an objective temporal direction. But I don’t think a computationalist should add this ad hoc posit. It is better, I think, simply to embrace the conclusion that internal subjective time need not have anything to do with external time.

Monday, February 5, 2024

Heavier objects fall sooner

We like to say that Galileo was right that more massive objects don’t fall any faster than lighter ones, at least if we abstract away from friction.

But it occurred to me that there is a sense in which this is false. Suppose I drop an object from a meter above the moon, and measure the time until impact. If the object is more massive, the time to impact is lower. Why? Because there are two relevant gravitational accelerations that affect the time of impact: the moon pulls the object down, and at the same time the object pulls the moon up. The impact time is affected by both accelerations, and the more massive the object, the greater the upward acceleration of the moon, even though the object's acceleration is unaffected by its mass.

Of course, if we are dropping a one kilogram ball, the gravitational acceleration it induces on the moon is about 1/10^23 of the gravitational acceleration the moon induces on it. It’s negligible. But it’s still not zero. :-) A heavier object of the same size will impact sooner.
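
Here is a quick numerical check in C (a sketch with rounded constants): the relative acceleration between the dropped object and the moon is G(M+m)/r², so the object’s own mass m enters the fall time. For ordinary masses the shift is far below double precision, so the sketch also “drops” absurdly heavy objects to make the effect visible.

    /* Time for an object dropped from height d to reach the moon, with the
     * moon's upward acceleration included: a = G(M + m)/r^2, t = sqrt(2d/a). */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double G = 6.674e-11; /* gravitational constant, m^3 kg^-1 s^-2 */
        const double M = 7.35e22;   /* mass of the moon, kg */
        const double r = 1.737e6;   /* radius of the moon, m */
        const double d = 1.0;       /* drop height, m */
        const double m[] = {1.0, 1.0e20, 7.35e21};  /* object masses, kg */
        for (int i = 0; i < 3; i++) {
            double a = G * (M + m[i]) / (r * r);  /* relative acceleration */
            double t = sqrt(2.0 * d / a);         /* constant-acceleration fall */
            printf("m = %9.2e kg:  t = %.9f s\n", m[i], t);
        }
        return 0;
    }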

If all this is unclear, think about the extreme case where we are “dropping” a black hole on the moon.

Physicalism, consciousness and history

Many physicalists think that conscious states are partly constituted by historical features of the organism. For instance, they think that Davidson’s swampman (who is a molecule-by-molecule duplicate of Davidson randomly formed by lightning hitting a swamp) does not have conscious states, because swampman lacks the right history (on some views, one just needs a history of earlier life, and on others, one needs the millennia of evolutionary history).

I want to argue that probably all physicalists should agree that conscious states are partly constituted by historical features.

For if there is no historical component to the constitution of a conscious state, and physicalism is true, then conscious states are constituted by the simultaneous arrangement of spatially disparate parts of the brain. But consciousness is not relative to a reference frame, while simultaneity is.

Here’s another way to see the point. Suppose that conscious states are not even partly constituted by the past. Then, surely, they are also not even partly constituted by the future. In other words, conscious states are fully constituted by how things are on an infinitesimally thin time-slice. On that view, it would be possible for a human-like being, Alice, to exist only for an instant and to be conscious at that instant. But now imagine that in inertial reference frame F, Alice is a three-dimensional object that exists only at an instant. Then it turns out that in every other frame than F, Alice’s intersection with a simultaneity hyperplane is two-dimensional—but she also has a non-empty intersection with more than one simultaneity hyperplane. Consequently, in every frame other than F, Alice exists for more than an instant, but is two-dimensional at every time. A two-dimensional slice of a human brain can’t support consciousness, so in no frame other than F can Alice be conscious. But then consciousness is frame-relative, which is absurd.
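
To see the slicing claim in coordinates (a sketch using a standard Lorentz boost): suppose that in F Alice occupies {(t, x, y, z) : t = 0, (x, y, z) ∈ B} for a three-dimensional body B. In a frame F′ moving with speed v along the x-axis:

    % A simultaneity slice of F' meets Alice's instantaneous body in a plane:
    \[
      t' = \gamma\left(t - \frac{vx}{c^2}\right)
      \quad\text{and}\quad t = 0
      \quad\Longrightarrow\quad
      x = -\frac{c^2 t'}{\gamma v},
    \]
    % so at each F'-time t' Alice is the two-dimensional slab
    %   { (x, y, z) in B : x = -c^2 t' / (gamma v) },
    % and she persists over the interval of t' values for which this
    % slab is nonempty.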

Once we have established:

  1. If physicalism is true, conscious states are partly constituted by historical features,

it is tempting to add:

  2. Conscious states are not even partly constituted by historical features.

  3. So, physicalism is not true.

But I am not very confident of (2).

If materialism is true, we can't die in constant pain

Here is an unfortunate fact:

  1. The last minute of your life can consist of constant conscious pain.

Of course, I think all pain is conscious, but I might as well spell it out. The modality of the “can” in this post will be something fairly ordinary, like some sort of nomic possibility.

Now say that a reference frame is “ordinary for you” provided that it is a reference frame corresponding to something moving no more than 100 miles per hour relative to your center of mass.

Next, note that switching between reference frames should not turn pain into non-pain: consciousness is not reference-frame relative. Thus:

  2. If the last minute of your life consists of constant conscious pain, then in every reference frame that is ordinary for you, in the last half-minute of your life you are in constant conscious pain.

Relativistic time-dilation effects of differences between “ordinary” frames will very slightly affect how long your final pre-death segment of pain is, but will not shorten that segment by even one second, and certainly not by 30 seconds.

Next add:

  3. If materialism is true, then you cannot have a conscious state when you are the size of a handful of atoms.

Such a small piece of the human body is not enough for consciousness.

But now (1)–(3) yield an argument against materialism. I have shown here that, given the simplifying assumption of special relativity, in almost every reference frame, and in particular in some ordinary frames, your life will end with you being the size of a handful of atoms. If materialism is true, in those frames towards the very end of your life you will have to exist without consciousness by (3), and in particular you won’t be able to have constant conscious pain (or any other conscious state) for your last half-minute.

Second-order awareness

Here is a plausible Cartesian thesis:

  1. There is a kind of second-order awareness of first-order conscious states that is never mistaken: whenever you have the awareness, then you have the first-order conscious state.

I don’t mean this to be true by packing factiveness into awareness.

But also plausibly:

  2. A veridical awareness of an event is caused by the event it is the awareness of.

And:

  3. Causation always involves a time-delay.

Given (1)–(3), suppose that you are having the second-order awareness, which is being caused by the first-order conscious state, and then suddenly the first-order conscious state ends. Since causation always involves a time-delay, it follows that the second-order awareness lags behind the end of the first-order conscious state: there is a time at which you have the second-order awareness even though the first-order conscious state has ended. And this contradicts (1).

I am inclined to deny (1).

But there are other paths out. One could deny (2) and say that in some cases, awareness is not caused but partly constituted by the event it is the awareness of. That’s how God’s awareness of the world has to work, given divine simplicity. So the argument provides a tiny bit of evidence for that picture of divine awareness.

Or one might deny (3). Apart from phenomena like the collapse of entangled quantum states, or a particle’s position causing a field at the precise position of the particle (phenomena that don’t seem relevant to the case at hand), the best way out here would be to deny naturalism, allowing that a non-physical first-order conscious state could instantly cause a non-physical second-order awareness.

Friday, February 2, 2024

Consciousness and plurality

One classic critique of Descartes’ “cogito ergo sum” is that perhaps there can be thought without a subject. Perhaps the right thing to say about thought is feature-placing language like “It’s thinking”, understood as parallel to “It’s raining” or “It’s sunny”, where there really is no entity that is raining or sunny, but English grammar requires a subject so we throw in an “it”.

There is a more moderate option, though, that I think deserves a bit more consideration. Perhaps thought has an irreducibly plural subject, and in a language that expresses the underlying metaphysics better, we should say “The neurons are (collectively) thinking” or maybe even “The particles are (collectively) thinking.” On this view, thought is a relation that holds between a plurality of objects, without these objects making up a whole that thinks. This, for instance, is a very natural view for a physicalist who is a compositional nihilist (i.e., who thinks that only simples exist).

It seems to me that it is hard to reject this view if one’s only data is the fact of consciousness, as it is for Descartes. What kills the three-dimensionalist version of this view, in my opinion, is that it cannot do justice to the identity of the thinker over time, since there would be different pluralities of neurons or particles engaged in the thinking over time. And a four-dimensionalist version cannot do justice to the identity of the thinker in counterfactual scenarios. However, this data isn’t quite as self-evident as what Descartes wants.

In any case, I think this is a view that non-naturalists like me need to take pretty seriously.

Unifying Separation and Choice

Let's round out Axiom of Choice Week. :-)

It’s occurred to me that there is a somewhat pleasant way to integrate the Axioms of Separation and Choice into one axiom schema.

Let’s say that a formula F(x,y) is a partial equivalence (is that the right term?) provided that it’s symmetric and transitive. Now consider this schema (understood to be universally closed over all free variables in F other than x and y):

  • If F(x,y) is a partial equivalence, then for any set a there is a subset b such that for every x ∈ b we have F(x,x), and for any x ∈ a such that F(x,x), there is a unique y ∈ b such that F(x,y).

We might call this the Axiom (Schema) of Representatives.

To get the Axiom of Separation, given a formula G(x), let F(x,y) be the formula G(x) ∧ y = x. To get the Axiom of Choice, if c is a set of nonempty disjoint sets, let F(x,y) be ∃d ∈ c (x ∈ d ∧ y ∈ d) and let a = ⋃c (so we need the Union Axiom).
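To make the schema concrete, here is a finite toy model in Python (illustrative only; on finite sets a greedy scan suffices, whereas the axiom earns its keep precisely where no definable selection exists):

```python
def representatives(a, F):
    """Given a finite iterable a and a partial equivalence F (symmetric and
    transitive), return b: one representative of each F-class among the
    elements x of a with F(x, x)."""
    b = []
    for x in a:
        if F(x, x) and not any(F(x, y) for y in b):
            b.append(x)
    return b

# Separation: with G(x) = "x is even", F(x, y) = G(x) and x == y
# picks out exactly {x in a : G(x)}.
print(representatives(range(6), lambda x, y: x % 2 == 0 and x == y))  # [0, 2, 4]

# Choice: for a set c of disjoint nonempty sets, F(x, y) = "x and y share
# a cell of c" yields one element per cell, i.e. a choice set.
c = [{1, 2}, {3, 4, 5}]
share_cell = lambda x, y: any(x in d and y in d for d in c)
print(representatives([1, 2, 3, 4, 5], share_cell))  # [1, 3]
```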

So what?

Nothing earthshaking.

But, first, while there is an advantage to keeping axioms separate for purposes of proving independence results, the more unified our axiomatic system is, the less ad hoc it looks. Unifying Separation and Choice can make us less suspicious about Choice, for instance.

Second, the Axiom Schema of Representatives has nice analogues in some other contexts than set theory. It seems to directly generalize to classes, for instance. Moreover, it extends very nicely to plural quantification to integrate Plural Comprehension with a version of Choice:

  • If F(x,y) is a partial equivalence, then there are bs such that (i) for every x among the bs we have F(x,x), and (ii) for any x such that F(x,x), there is a unique y among the bs such that F(x,y).

I don’t know if there is a natural way to extend this to mereology.

One might complain that partial equivalence is less natural than equivalence. I don’t think so. First, it is defined by two instead of three conditions, which makes it seem more natural. Second, examples of partial equivalence relations tend to be more natural than examples of full equivalence relations if our domain is all of reality. For instance, “same color”, “same shape”, “same size”, “same species”, etc., are all partial equivalence relations, since only things with color are the same color as themselves, only things with shape are the same shape as themselves, etc. To form full equivalences, one needs to stipulate awkward relations like “same color or both colorless”.

Thursday, February 1, 2024

Fusion and the Axiom of Choice

Assume classical mereology. Then for any formula that has a satisfier, there is a fusion of all of its satisfiers. More precisely, if ϕ is a formula with z not a free variable in ϕ, then the universal closure of the following under all free variables is true:

  1. ∃xϕ → ∃zFϕ, x(z)

where Fϕ, x(z) says that z is a fusion of the satisfiers of ϕ with respect to the variable x (there is more than one account of what exactly the “fusion” is). This is the fusion axiom schema.

Stipulate that a region of physical space is a fusion of points.

Question: Is there a nonmeasurable region of (physical) space?

Assuming the language for formulas in our classical mereology is sufficiently rich, the answer is positive. For simplicity, suppose that physical space is Euclidean (the non-Euclidean case is handled by working in a small neighborhood which is diffeomorphic to a neighborhood of a Euclidean space). Let ψ be an isomorphism between the points of physical space and the mathematical space ℝ³. Let ϕ be the formula ψ(x) ∈ y. Applying (1), we conclude that for any subset a of ℝ³, there is a region of physical space whose points are exactly those that correspond to members of a under ψ. If we let a be one of the standard nonmeasurable subsets of ℝ³, we get an affirmative answer to our question.
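For concreteness, the standard nonmeasurable subsets can be taken to be Vitali sets. Here is a sketch of the classical one-dimensional construction (it transfers to ℝ³, e.g. by taking the product with a unit square):

```latex
% Vitali's construction: choose one representative from each coset of Q.
Let $x \sim y$ iff $x - y \in \mathbb{Q}$, and use Choice to pick a set
$V \subseteq [0,1]$ containing exactly one member of each $\sim$-class.
If $V$ were Lebesgue measurable, the countably many pairwise disjoint
translates $V + q$, for $q \in \mathbb{Q} \cap [-1,1]$, would each have
measure $\mu(V)$, while
$[0,1] \subseteq \bigcup_q (V + q) \subseteq [-1,2]$
forces $1 \le \sum_q \mu(V) \le 3$, which fails whether $\mu(V) = 0$
or $\mu(V) > 0$.
```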

But now we have an interesting question:

  2. What grounding or explanatory relation is there between the existence of a nonmeasurable region of physical space and the existence of a nonmeasurable subset of mathematical space?

The two simplest options are that one is explanatorily prior to the other. Let’s explore these.

Suppose the existence of a nonmeasurable physical region depends on the existence of the nonmeasurable set. Well, it is a bit strange to think of a concrete object—a region of physical space—as partly grounded in the existence of a set. This doesn’t sound quite right to me.

What about the other way around? This challenges the fairly popular doctrine that complex entities are a free lunch given simples. For if the existence of the nonmeasurable region is prior to the existence of an abstract set, it seems that we actually have quite a significant metaphysical “effect” of this complex object.

Moreover, if the existence of the nonmeasurable region is not grounded in the existence of the nonmeasurable set, whether or not there is grounding running the other way, we have a difficult question of why there is in fact a nonmeasurable region. Without relying on nonmeasurable sets, it doesn’t seem we can get the nonmeasurable region out of the axioms of classical mereology. It seems we need some sort of a mereological Axiom of Choice. How exactly to formulate that is difficult to say, but one version that is enough for our purposes would be this: given any formula ρ(x,y) that expresses a non-empty equivalence relation on the simples satisfying ϕ, there is an object z such that if ϕ(x) then there is exactly one simple x′ such that ρ(x,x′) and x′ is a part of z, and every object that meets z meets some simple satisfying ϕ.
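To make that formulation easier to parse, here is one way to render it formally (my reconstruction of the prose above, not a canonical axiom; ≤ is parthood and O is overlap):

```latex
% If rho is an equivalence on the phi-simples, then some object z contains
% exactly one representative of each rho-class and nothing beyond phi-simples.
\exists z \, \Big[ \forall x \, \big( \phi(x) \rightarrow
      \exists! x' \, ( \rho(x, x') \wedge x' \le z ) \big)
  \;\wedge\; \forall w \, \big( O(w, z) \rightarrow
      \exists x \, ( \phi(x) \wedge O(w, x) ) \big) \Big]
```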

But my intuition is that a mereological Axiom of Choice would badly violate the doctrine that complex objects are a free lunch. If all we had in the way of complex-object-forming axioms were reflexivity, transitivity and fusion, then it would not be crazy to say that complex objects are a fancy way of talking about simples. But the “indeterministic” nature of the Axiom of Choice does not, I think, allow one to say that.