Friday, May 29, 2020

Individuating substances by their matter

According to traditional Aristotelianism, what makes you and me be distinct entities is that although we are of the same species, we’re made of distinct chunks of matter.

Here is a quick initial problem with this. The matter in us changes. It is quite possible that someone has different matter at age 1 and at age 20, and so by the Aristotelian individuation criterion they would be distinct entities of the same species at ages 1 and 20, which is false.

One way out of this is to embrace presentism. But presentism is incompatible with the Aristotelian conviction that truth supervenes on being.

Another move is to narrow down the individuation criterion to say that:

  1. Conspecifics x and y are made distinct by their being simultaneously made of different chunks of matter.

There are two problems with this move.

First, time travel. If at age 20, with different matter, you enter a time machine and travel back to meet yourself back when you were 1, then 20-year-old you and 1-year-old you are made of different chunks of matter at the same time. And while many problems about time travel are solved by moving from external to internal time, that doesn’t work here. For one cannot say that matter individuates x and y at the same internal time, since internal time is a concept that only makes sense when you are dealing with a single substance.

Second, relativity theory and teleportation (which is also kind of like time travel). Suppose that by age 20 you have different matter from what you had at age 1. Then God teleports 20-year-old you 100 light-years away instantly or nearly instantly (with respect to some reference frame). Then there will be a reference frame with respect to which it is true that the teleported 20-year-old you is simultaneous with 1-year-old you. So either simultaneity with respect to that reference frame doesn’t count—and that leads us to a privileged reference frame, contrary at least to the spirit of relativity—or else you are not yourself, which is absurd.
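To make the relativistic point concrete, here is a minimal numeric check (a Python sketch in units of years and light-years with c = 1; the 19-year gap and 100 light-year distance are taken from the example above). The two events are spacelike separated, so a suitable boost makes them simultaneous:

    import math

    # Event A: you at age 1, at the origin: (t, x) = (0, 0).
    # Event B: the teleported 20-year-old arriving 100 light-years away,
    # 19 years later in the original frame: (t, x) = (19, 100).
    dt, dx = 19.0, 100.0

    # Spacelike separation (|dx| > |dt| with c = 1), so the frame moving
    # at v = dt/dx makes the events simultaneous; v is below light speed.
    v = dt / dx                       # 0.19c
    gamma = 1 / math.sqrt(1 - v**2)

    dt_boosted = gamma * (dt - v * dx)
    print(dt_boosted)                 # ~0: simultaneous in the boosted frame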

While one can swallow the idea that time travel is impossible, spacelike teleportation seems clearly possible.

Here is another move: we replace (1) with:

  2. Conspecifics x and y are made distinct by their originating in distinct chunks of matter.

And 20-year-old you, no matter how they travel in space and/or time, has originated in the same chunk of matter as 1-year-old you.

This move has a cost: it requires that we be somewhat non-realist about substantial change. Full-blown realism about substantial change requires the matter to stay in existence while the substance changes. But if matter can stay in existence when its substance perishes, then that matter could be re-formed into another substance of the same species, which would violate the origination-restricted individuation criterion. On (2), we have to accept the theory that the matter of a thing perishes when the substance does. This removes one of the major motivations behind positing matter: namely, that matter is supposed to explain why a corpse looks like the living body (viz., because it is allegedly made of the same matter).

Note, too, that (2) has a serious ambiguity once we have insisted that matter does not survive substantial change. By the chunk of matter that a substance originates in, do we mean the last chunk of matter before the substance’s existence or the first chunk of matter in the substance?

If we mean the last chunk of matter before the substance’s existence, there are two problems. First, it seems ad hoc to single out one aspect of the causes of the substance—the earlier matter—as doing the individuating. It seems better to individuate by means of the causes, applying the converse of the essentiality of origins. Second, it seems possible for an object to come into existence without prior matter. (If God exists, this is clear, since God creates ex nihilo. If God doesn’t exist, then very likely the world came into existence ex nihilo.) But then it seems quite possible for two objects of the same species to come into existence without prior matter. (Though if one wants to dispute this, one might point to the fact that a literal reading of the biblical creation account has God making the first two humans out of chunks of preexisting matter—soil and rib respectively. Maybe there is a deep metaphysical reason why this has to be so, and perhaps the initial ex nihilo created things had to be all of different species. But I just don’t find the latter requirement that plausible.)

So perhaps in (2) we mean the first chunk of matter in the substance. But if matter does not survive substantial change, then it seems plausible that the identity of the substance is prior to the identity of its initial matter, and hence the identity of the substance cannot come from its initial matter. This isn’t a very strong argument. Maybe the initial matter is prior to the substance, and has an identity of its own, while later matter is posterior to the substance.

So, our best version of the Aristotelian individuation account is this:

  3. Conspecifics x and y are made distinct by their each having a different first chunk of matter.

Finally, it is interesting to note that (2) and (3) are only plausible if it is impossible for a material substance to have no beginning. But our best account of why a material substance cannot lack a beginning is causal finitism. So those who like the Aristotelian account of individuation—I am not one of them—have another reason to accept causal finitism.

Tuesday, May 26, 2020

Perdurance, physicalism and relativity

Here is a very plausible thesis:

  1. Exactly one object is a primary bearer of my present mental states.

This is a problem for the conjunction of standard perdurance, physicalism and special relativity. For according to standard perdurance combined with physicalism:

  2. The primary bearers of my mental states are time slices.

Now consider all the time-slices of me that include my present mental states. There will be many of them, since there will be one corresponding to each reference frame. On relativistic grounds none of them is special. Thus:

  3. Either all or none of them are the primary bearers of my present mental states.

If all of them are the primary bearers of my present mental states, we violate 1. If none of them are, then there is no primary bearer of my present mental states by 2, which also violates 1.

Monday, May 25, 2020

Subjective sameness of choice situations and Molinism

Suppose that Alice on a street corner sells Bob a “Rolex” for $15. Bob goes home and his wife Carla says: “You got scammed!” Bob takes the “Rolex” to a jeweller and finds that it is indeed a Rolex. He goes back to Carla and says: “No, I got a good deal!” But Carla says: “But if it was a fake, you would have bought it, too.”

Here is Carla’s reasoning behind her counterfactual. A typical fake would have looked the same as the real thing to Bob, and so the counterfactual situation where Bob is offered the fake would have been subjectively the same to Bob. And Carla subscribes to this principle:

  1. If you were instead in a different situation that was subjectively identical to the one you were in, you would have chosen the same way.

I find (1) pretty plausible, and it seems to nicely come out as true on a Lewis-style account of counterfactuals in terms of similarity of worlds (with approximate match counting as similarity).

But I doubt a Molinist should accept (1). For the Molinist does not think that counterfactuals of free will are grounded in similarity of worlds (except of the trivial sort, where truth values of counterfactuals count as part of the similarity).

Manual Star Tracker

My Instructable for a simple manual star tracker for astrophotography is now up. The idea is that you manually rotate a knob in sync with a stopwatch to compensate for the earth's rotation. There is little that is innovative, except for the fact that I accidentally found that a carriage bolt head compensates for tangent error very nicely.
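For those curious what tangent error is: in a barn-door-style tracker driven by a straight screw at a constant rate, the board opens by the arctangent of a linear function of time rather than linearly, and the discrepancy grows as the session goes on. Here is a rough Python sketch of its size, assuming the drive rate is exactly right at the start (the geometry constant cancels out of the formula):

    import math

    SIDEREAL_DAY = 86164.1                  # seconds
    OMEGA = 2 * math.pi / SIDEREAL_DAY      # earth's rotation rate, rad/s

    # A straight screw turned at a constant rate yields an opening angle
    # of atan(OMEGA * t), whereas tracking requires OMEGA * t.
    for minutes in (1, 5, 10, 20):
        t = 60.0 * minutes
        err = OMEGA * t - math.atan(OMEGA * t)    # radians
        print(f"{minutes:2d} min: tangent error ≈ {math.degrees(err) * 3600:.1f} arcsec")

The curved head of the carriage bolt presumably compensates by shifting the contact point as the board opens, though I found this by accident rather than by design.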


Sunday, May 24, 2020

Wii Remote Lightgun

The Wii Remote sort of works like a lightgun, but not quite. It has a camera that can track four infrared LEDs, but the LEDs it uses are normally arranged in a straight line in the Wii sensor bar, which is mathematically insufficient for figuring out where exactly the Wii Remote is pointed. If, however, we put two infrared LEDs on top of a TV and two on the bottom, then we can use homography calculations to figure out where the Wii Remote is pointing, and use it as a lightgun. I wrote some code for my Raspberry Pi that does this, and together with a 3D printable handle and sights the Wii Remote becomes a nice lightgun for retro games. Instructions are here.
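Here is a minimal sketch of the homography step (Python with numpy, using invented pixel coordinates; this is not the code from the instructions, just the underlying math). Four LED correspondences determine a 3×3 matrix by the direct linear transform, and mapping the center of the Wii Remote camera's image through it gives the screen point the remote is aimed at:

    import numpy as np

    def homography(src, dst):
        # Direct linear transform with H[2][2] fixed to 1;
        # four point pairs give an 8x8 linear system.
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h = np.linalg.solve(np.array(A, float), np.array(b, float))
        return np.append(h, 1.0).reshape(3, 3)

    def project(H, point):
        x, y, w = H @ np.array([point[0], point[1], 1.0])
        return x / w, y / w

    # Invented data: where the four LEDs appear in the remote's 1024x768
    # camera image, and the screen corners they sit at (two above, two below).
    camera_pts = [(300, 200), (700, 210), (280, 600), (720, 590)]
    screen_pts = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]

    H = homography(camera_pts, screen_pts)
    print(project(H, (512, 384)))   # where the remote is pointed on screen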


Friday, May 22, 2020

Lying to save lives

I’m imagining a conversation between Alice, who thinks it is permissible to lie to Nazis to protect innocents, and a Nazi. Alice has just lied to the Nazi to protect innocents hiding in her house. The Nazi then asks her: “Do you think it is permissible to lie to protect innocents from people like me?” If Alice says “Yes”, the Nazi will discount her statement, search her house and find innocents. So, she has to say “No.” But then the Nazi goes on to ask: “Why not? Isn’t life more important than truth? And I know that you think me an unjust aggressor (no, don’t deny it, I know you know it, but I’m not going to get you just for that).” And now Alice has to either cave and say that she does think it permissible to lie to unjust aggressors, in which case the game is up, and the innocents will die, or she has to exercise her philosophical mind to find the best arguments she can for a moral conclusion that she believes to be perverse. The latter seems really bad.

Or imagine that Alice thinks that the only way she will convince the Nazi that she is telling the truth in her initial lie is by adding lies about how much she appreciates the Nazi’s fearless work against Jews. That also seems really wrong to me.

Or imagine that Alice’s non-Nazi friend Bob can’t keep secrets and asks her if she is hiding any Jews. Moreover, Alice knows that Bob knows that Alice fearlessly does what she thinks is right. And so Bob will conclude that Alice is hiding Jews unless he thinks Alice believes Jews deserve death. And if Bob comes to believe that Alice is hiding Jews, the game will be up through no fault of Bob’s, since Bob can’t keep secrets. Now it looks like the only way Alice can keep the innocents she is hiding safe is by advocating genocide to Bob.

It is very intuitive that a Nazi at the door doesn’t deserve the truth about who is living in that house. And yet at the same time, it seems like everyone deserves the truth about what is right and wrong. But at the same time, it is difficult to limit a permission of lying to the former kinds of cases. There is a slippery slope here, with two stable positions: an absolutist prohibition on lying and a consequentialist calculus. An in-between position will be difficult to specify and defend.

Thursday, May 21, 2020

More on Double Effect and statistical reasoning

Consider this case:

  1. You are fighting a just war. There are 1000 people facing you and you have very good, but fallible, reason to think of each that they are an unjust aggressor that you are permitted to kill. At the same time, on statistical grounds you know one of the thousand is innocent. You kill the thousand for standard military reasons.

This is justifiable, assuming nothing defeats the standard military reasons.

  2. You are fighting a just war. There are 1000 people facing you and you have very good, but fallible, reason to think of each that they are an unjust aggressor that you are permitted to kill. At the same time, on statistical grounds you know one of the thousand is innocent. Moreover, the enemy is superstitious and thinks the number 1000 is especially significant, so that if you kill 1000, they will instantly surrender.

Now this case is tricky. At first, it seems like it’s an easier case than (1). After all, you have two separate reasons: the usual military reasons for killing unjust aggressors and the fact that if you kill them all, the enemy will instantly surrender. But it’s trickier than that. The problem is that if you simply kill the thousand for the standard military reasons, then you can intend to kill each one qua aggressor—for you have good reason to think of each that they are an aggressor, even though you know you are mistaken about one of them. But if you act on the enemy’s superstition, you are intending to kill each one simpliciter, not just qua aggressor, for all 1000 need to be dead for the plan to be fulfilled. In particular, the one who is innocent needs to be dead, too, in order for your plan to be fulfilled. But when you acted on the standard military reasons, you didn’t need the innocent one to be dead—as that one wasn’t a problem.

So, in case (2) you cannot legitimately act on the enemy’s superstition and reason: “I will kill these 1000 in order that there be 1000 dead, which will trigger surrender.” For then the success of your action plan depends on the death of the innocents among the 1000, and not just on the death of the guilty. (I am not worried here about the moral problem of exploiting the enemy’s superstition. If you are, you can modify the case.)

That doesn’t mean you can’t take that superstition into account in a way. For instance, while military motives might be primary, you might have a defeater for these motives, such as that the mission is really dangerous. But the fact that the mission would end the war could defeat the reasons coming from the danger. This would be a Kamm-style triple effect case. (A more difficult question: could the fact that the enemy will surrender, thereby saving much bloodshed, defeat the reason against the action coming from the death of the innocent? I suspect not, but it’s a tough question.)

The above case pushes me to the idea that killing is one of those acts that can only permissibly be done for certain kinds of reasons.

Double Effect and the death penalty

Each of the ten million densely populated planets in Empress Alice’s vast intergalactic empire has an average of one person on death row who has exhausted all appeals. Empress Alice’s justice system is a really good one, but she knows it to be fallible like all justice systems, and her statistics show there is a one in a million chance that someone sentenced to death who exhausted all appeals is nonetheless innocent. So Alice knows that of the ten million people on death row, at least one is innocent (assuming independence, the probability that all are guilty is 0.999999^10,000,000 ≈ 0.000045). (If we think, as I do, that under ordinary circumstances the death penalty is unjustified, we may suppose that the empire is suffering from extraordinary circumstances such that roughly one case per planet of the death penalty is justified.)
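A quick check of the parenthetical arithmetic, as a two-line Python computation:

    import math
    # probability that all 10,000,000 verdicts are correct, errors independent
    print(math.exp(10_000_000 * math.log1p(-1e-6)))   # ~0.0000454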

Every year, there is a Day of Justice. On that day, the Empress issues the order that all who are on death row and have exhausted appeals are to be executed.

So, the Empress intentionally kills ten million people. That by itself sounds terrible, but we have to remember that she has ten million planets each with billions of people in her empire. Alice is a morally sensitive person, and she is weighed down by unspeakable grief over what justice requires of her, but being an Empress she must do justice.

But what is worse, the Empress knows that at least one (and probably several more) of the ten million people she intentionally kills is innocent. And yet it seems wrong to intentionally kill those who are innocent.

Now, it seems that I’ve just committed a serious slip in reasoning. I’ve moved from the claim that Alice intentionally kills the ten million people to the claim that each was intentionally killed. Let’s say that Bob is one of the handful of innocents. Then Alice does not intentionally kill Bob, because she does not know anything about Bob specifically. Well, but that can be remedied. We may suppose that for a month prior to Justice Day, the Empress spends all her waking hours looking at the photo of every person she is to have executed, and praying a quick and specific prayer for them. At some point in the month, she did look at Bob’s photo and prayed: “God, have mercy on Bob and give comfort to his victims and his family.” We may even suppose that Alice has a photographic memory and when she issued her order, she saw all ten million people before her mind’s eye. That shouldn’t make any moral difference to the justification of the executions, though it adds to Alice’s imperial burden.

Perhaps the thing to say is this: Alice did a wrong unknowingly. A few of the people she had executed should not have been executed, but since she did not know who they were, she did not do wrong in intentionally killing them. But the worrying thing is that Alice also did know that she was doing a wrong. She knew that one of the people was innocent.

But maybe here I am sliding back and forth between two actions: the overarching “Execute them all” action, and the specific actions of killing Bob, Carl, and all the other people whose faces bring tears to the tips of Alice’s tentacles. The overarching action is not wrong, but it is known to include a wrong component. The specific actions include some that are wrong, but they are not known to be such.

But we (and Alice) are not home free yet. For Alice’s overarching action clearly is an action that she foresees to result in the deaths of innocents. Thus, its justification requires something like the Principle of Double Effect. Now one of the conditions in the Principle of Double Effect is that none of the means be evil. But killing Bob (and Carl and all the others) is indeed a means to executing “all who have been sentenced to death and have exhausted their appeals”. So among the means, there are some that are bad. And the Empress knows this. She just doesn’t know which ones.

Can we get Alice off the hook by saying that she is intending only the deaths of the guilty? But how is she planning to kill the guilty, if not by means of “executing them all”? And she who intends the end intends the means, so if she intends to kill the guilty by “executing them all”, she must be intending to execute them all.

This seems to be a serious problem for Double Effect.

One possible solution is this. Alice really is only intending the deaths of the guilty. And the means that she intends to this end are: Kill the guilty Bob, kill the guilty Carl, and so on for about ten million others. Each of these means is legitimate. But she also knows that some of the means will fail. For since Bob is not guilty, killing the guilty Bob will fail. It is weird to have an action that is overall successful even though some of its means fail. But that can still happen: think of cases where there is a multiply redundant safety procedure, which is overall successful even though some of the means in it fail.

Tuesday, May 19, 2020

Continuity of time and causation

In a standard causal deterministic system, given three times t1 < t2 < t3, the state of the system at t1 causes the state of the system at t3 by means of the state of the system at t2. If time is infinitely subdivided, then the state of the system at t2 causes the state of the system at t3 by means of the state of the system at t2.5 (where t2 < t2.5 < t3), and so on. This is an infinite regress. And it’s vicious, because it’s a dependency regress.

Here is one way to see that it’s a dependency regress. Imagine a really unpleasant situation where you need to kill Hitler, but the only way to kill Hitler is to initiate a continuous causal chain that proceeds through Hitler’s uncountably many henchmen, set up so that at every time strictly between t1 and t2 a henchman dies, and their death is caused by the death of each previously dead henchman; at t1, Himmler is directly shot by you, and at t2, Hitler dies because of the previous henchman deaths. It is clear that in this case every henchman’s death is intended as a means to Hitler’s death. This matters morally. If it turns out that any person in the causal chain is actually innocent, then the Principle of Double Effect will not allow you to kill Hitler by killing Himmler. For the causal chain from Himmler’s death through to Hitler’s proceeds by means of that innocent. But an outcome depends on its means, so a regress of means is a dependency regress.

If there are no vicious infinite regresses, it follows that one cannot have deterministic causal chains intimately tied to infinitely subdivided time. In fact, I think nothing hangs on determinism here.

Observation, collapse and circularity

The following four premises seem to be contradictory:

  1. An observation of an event E is caused by the event E.

  2. Observation causes collapse.

  3. What is observed is the collapsed state.

  4. There is no circular causation.

Here is my first attempt to get out of this, on behalf of those attracted to the observation-causes-collapse view. For concreteness, let’s suppose that we’re observing an electron in a mixed up-down spin state, and suppose that we observe that it’s in the up state. Distinguish these two events:

  • O1: Observing whether the electron is in an up or a down state.

  • O2: Observing the electron to be in an up state.

Then I think what the defender of observation-causes-collapse can say is this: O1 causes the collapsed state which in turn causes O2. But this is rather strange. For O1 and O2 actually seem to be the same coarse-grained event, which makes that coarse-grained event be its own cause! Another way to see the problem is to note that O1 is the disjunctive event of observing the electron to be in the up state or observing the electron to be in the down state. But then O2 grounds O1: disjunctions are grounded in their true disjunct(s). But then O1 is causally prior to its grounds, which seems absurd.

A second attempt: deny (3). Compare Elizabeth Anscombe’s theory that an intention to ϕ in the successful case constitutes one’s knowledge that one will ϕ, or Thomas Aquinas’s theory that God’s knowledge of the world is the cause of the world’s being as it is. In these cases, the direction of fit in the knowledge is reversed. Observation of quantum phenomena could be like that.

Third attempt: cut up an act of observation into two parts. Metaphorically speaking, we could imagine the mind querying the world: “Is the electron in an up or a down state?” In response, reality collapses, and the mind observes that reality is in an up state. Thus, we have a query event Q and an observation-proper O2. It is Q that causes collapse, and O2 is then the observation of the collapse. This solves the circularity problem, but strictly speaking it’s incorrect to say that observation causes collapse. Rather, it is the pre-observation query event Q that causes collapse. And if simultaneous causation is possible, then Q and O2 may be simultaneous.

I think the second and third attempts are the way to go, assuming we're keeping the basic idea behind observation-causes-collapse.

Consciousness causing collapse and temporally extended conscious states

I have recently been arguing that states of consciousness are not localized to specific times. Instead, during an interval of times of non-zero length, say from t1 to t2, there can be a fact of the matter as to how many states of a particular sort have occurred.

It’s now occurred to me that there is an interesting difficulty for conjoining this theory with the consciousness-causes-collapse interpretation of quantum mechanics. Let t2 be the earliest time at which it is correct to say that a collapse-causing consciousness state Q has happened, and suppose that Q does not occur at t2 but over an interval of times ending at t2. Then when does collapse happen? Suppose that collapse happens at a time t < t2. Then we have a problem for consciousness-causes-collapse. For by time t there has not yet been a conscious state. If God were to annihilate the universe right after t, there would be no conscious state, and yet the collapse would presumably have already occurred. So the collapse wasn’t caused by consciousness—unless there is backwards causation, which is counterintuitive.

So collapse can only happen at a time t ≥ t2. If collapse happens at a time t > t2, then either there is backwards causation or else the conscious state cannot count as an observation of the collapsed state, since the collapsed state is occurring after the conscious state. Again dismissing backwards causation, and assuming that the conscious states that cause collapse are observations, it follows that the collapse must occur precisely at t = t2.

But now we have something weird: The bulk of the conscious state that causes collapse occurs before t2. Yet only what is happening at t2 can be caused by the collapse. So the very last moment of the temporally-extended conscious state has to be what makes the difference as to the qualitative content of the conscious state—say, whether it is a consciousness of a red light or a green light. That’s a bit strange, but not impossible.

Monday, May 18, 2020

Wobble board, gamified

Last year, I made an adjustable wobble board for balance practice: a plywood disc with a 3D-printed plastic dome rocker. One thing that I always wanted was some sort of a device that would measure how long I was staying up on the board, detecting when the board edge hit the ground. I imagined things like switches under the board or even a camera trained on the board.

But what I forgot is that I already carry the electronics for the detection in my pocket. A smartphone has an accelerometer, and so if it’s placed on the board, it can measure the board angle and thus detect the edge’s approximately touching the ground. I adapted my stopwatch app to start and stop based on accelerometer values, and made this Android app. Now all I need to do is lay my phone on the board, and when the board straightens out the timer starts, going on until the board hits the ground. There are voice announcements as to how long I’ve been on the board, and a voice announcement of the final time.
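This is not the actual app code, but here is a minimal Python sketch of the detection logic (read_accelerometer is a stand-in for whatever sensor API one has; the 19 degree threshold matches the board setting mentioned below):

    import math, time

    MAX_ANGLE = 19.0   # degrees: the board's edge is on the ground at this tilt

    def tilt_degrees(ax, ay, az):
        # Angle between the measured acceleration (approximately gravity, when
        # the board moves slowly) and the phone's z axis, phone flat on the board.
        g = math.sqrt(ax * ax + ay * ay + az * az)
        return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))

    def balance_time(read_accelerometer):
        # Wait for the board to straighten out, then time until it tips.
        while tilt_degrees(*read_accelerometer()) > 2.0:
            time.sleep(0.02)
        start = time.monotonic()
        while tilt_degrees(*read_accelerometer()) < MAX_ANGLE:
            time.sleep(0.02)
        return time.monotonic() - start   # seconds balanced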

Source code is here.

Instructions on building the wobble board and links to the 3D printable files are here.

One forgets how many things can be done with a phone.

I think my best time is just under a minute, with the board set to a 19 degree maximum angle.

Gamification

Most philosophers don’t talk much about games. But games actually highlight one of the really amazing powers of the human being: the power to create norms and to create new forms of well-being.

Lately I’ve been playing this vague game with vague rules and vague non-numerical points when out and about:

  • Gain bonus points if I can stay at least nine feet away from non-family members in circumstances in which normally I would come within that distance of them; more points the further away I can be, though no extra bonus past 12 feet.

  • Win game if I avoid inhaling or exhaling within six feet of a non-family member. (And of course I have to be careful that the first breath past the requisite distance be moderate in size rather than a big huff.)

When the game goes well, it’s delightful, and adds value to life. On an ordinary walk around campus, I almost always win the game now. Last time I went shopping at Aldi, I would have won (having had to hold my breath a few times), except that I think I mumbled “Thank you” within six feet of the checkout worker (admittedly, if memory serves, I think I mumbled it quietly, trying to minimize the amount of breath going out, and then stepped back for the inhalation after the words; and of course I was wearing a mask, but it's still a defeat). Victory, or even near-victory, at the social distancing game is an extra good in life, only available because I imposed these game norms on myself, in addition to the legal and prudential norms that are independent of my will. Yesterday, I think I won the game all day despite going on a bike ride and a hike, attending Mass (we sat in the vestibule, in chairs at least nine feet away from anybody else, and the few times someone was passing by I held my breath), and playing tennis with a grad student. That's satisfying to reflect on. (At the same time, playing a game also generally adds a bit of extra stress, since there is the possibility, and sometimes actuality, of defeat. And it's hard to concentrate on the Mass while all the time looking around for someone who might be approaching within the forbidden distance. And, no, I didn't actually think of it as a game when I was at Mass, but rather as a duty of social responsibility.)

I think the only other person in my family who has gamified social distancing is my seven-year-old.

Friday, May 15, 2020

Aristotle's optimism and pessimism

Aristotle seems to accept these three claims:

  1. For the most part, things behave in a natural way.

  2. Most people are bad.

  3. To behave well is to behave in accordance with your nature.

I always thought there was a contradiction between (1) and (2) given (3). But actually whether there is a contradiction depends on the reference class of the “For the most part” operator in (1). Suppose the reference class is all behaviors of all things. Then it is quite likely that most of these behaviors are natural, bad human behaviors being far outnumbered by the natural behaviors of insects and elementary particles.

Back when I thought there was a contradiction, I assumed the reference class was the behaviors of a particular kind of thing, a sheep or a human, say. That may be correct exegetically, but even so it does not yield a contradiction. For morally significant activity is only a small fraction of the activity of a human being. Leibniz thought that about three quarters of the time we behaved as mere animals. That’s likely an underestimate. So even if all our morally significant activity is bad, it may be far outnumbered by non-moral activity, and hence it may well be that most activity of humans is good. But when we say that a human is good or bad, we only refer to their moral activity.

The only hope for a contradiction is to take the reference class of (1) to be all the activities of every subsystem type. Even so, I do not know that there is a contradiction. For to say that a person is bad is not to say that the majority of their morally significant actions are bad. Suppose that on Monday morning Bob kicked a neighbor’s puppy. At noon, he sent a harsh and false email to a struggling student saying that he had never seen worse work than theirs. At three, he googled for articles in obscure Romanian journals that he could translate and plagiarize. And in the evening he cheated while playing chess with his daughter in order that she might never win. It would be fair to say that Bob is a very bad person indeed, but that’s only on the strength of four morally significant actions. There were many other morally significant actions Bob engaged in. Each time he was asked a question, he had the possibility of lying. When driving, he had the possibility of murder. He did many things that were morally neutral and no doubt a number of things that were good. But the four bad things he did were enough to show that he was a bad person.

Our standards for moral okayness are much higher than the standards for a hard calculus exam where you just need to get more than half the questions right.

See also the quote from George MacDonald here.

The Need for Human Nature

A popular article of mine on “The Need for Human Nature” has just been posted on Sapientia. One can think of it as a precis of the main ideas in my in-progress Norms, Natures, and God book.

Progress on Norms, Natures and God

In the fall, I opened a github repository for my in-progress Norms, Natures and God book manuscript, but all I had was a table of contents. I’ve finally started to regularly contribute text to the book. You can monitor my progress here, and you’re welcome to submit suggestions and bug reports via the Issues page.

Don’t count on the repository being available permanently: it will disappear when it’s time to submit to a publisher. (My preferred way to write books is to write them and then submit the whole draft to a publisher.)

Wednesday, May 13, 2020

Vagueness and degrees of truth

Consider the non-bivalent logic solution to the problem of vagueness where we assign additional truth values between false and true. If the number of truth values is finite, then we immediately have a regress problem once we ask about the boundaries for the assignment of the finitely many truth values: for instance, if the truth values are False, 0.25, 0.50, 0.75 and True, then we will be able to ask where the boundary between “x is bald” having truth value 0.50 and having truth value 0.75 lies.

So, the number of truth values had better be infinite. But it seems to be worse than that. It seems there cannot be a set of truth values. Here is why. If x has any less hair than y, but neither is definitely bald or non-bald, then “x is bald” is more true than “y is bald”. But how much hair one has is quantified in our world with real numbers, say real numbers measuring something like a ratio between the volume of hair and the surface area of the scalp (the actual details will be horribly messy). But there will presumably be possible worlds with finer-grained distances than we have—distances measured using various hyperreals. Supposing that Alice is vaguely bald, there will be possible people y who are infinitesimally more or less bald than Alice. And as there is no set of all possible infinitesimals (because there is no set of all systems of hyperreals), there won’t be a set of all truth values.

Moreover, there will be vagueness as to comparisons between truth values. One way to be less bald is to have more hairs. Another way is to have longer hairs. And another is to have thicker hairs. And another is to have a more wrinkly scalp. Unless one adopts epistemicism, there are going to be many cases where it will be vague whether “x is bald” is more or less or equally or incommensurably true as “y is bald”.

We started with a simple problem: it is vague what is and isn’t bald. And the non-bivalent solution led us to a vast multiplication of such problems, and a vast system of truth values that cannot be contained in a set. This doesn’t seem like the best way to go.

Epistemicism and physicalism

  1. There is a precise boundary for the application of “bald”.

  2. If there is a precise boundary for the application of “bald”, that boundary is defined by a linguistic rule of infinite complexity.

  3. If physicalism is true, then no linguistic rules have infinite complexity.

  4. So, physicalism is not true.

The argument for (1) is classical logic. The argument for (2) depends on the many-species considerations at the end of my last post. And (3) is plausible because, if physicalism is true, then linguistic rules are defined by our practices, and our practices are finitary in nature.

Objection: We are analog beings, and every analog system has possible states of infinite complexity.

Response 1: Our computational states ignore small differences, so in practice we have only finite complexity.

Response 2: There is a cardinality limit on the complexity of states of analog systems (analog systems can only encode continuum-many states). But there is no cardinality limit on the number of humanoid species with hair, as there are possible such species in worlds whose spacetime is based on systems of hyperreals whose cardinality goes arbitrarily far beyond that of the reals.

The unknowability part of epistemicism about vagueness

One way to present epistemicism is to say that

  a. vague concepts have precise boundaries, but

  b. it is not possible for us to know these boundaries.

A theist should be suspicious of epistemicism thus formulated. For if there are precise boundaries, God knows them. And if God knows them, he can reveal them to us. So it is at least metaphysically possible for us to know them.

Perhaps the “possible” in (b) should be read as something stronger than metaphysical possibility. But whatever the modality in (b) is, it seems to imply:

  c. none of us will ever know these boundaries.

But if epistemicism entails (c), then we don’t know epistemicism to be true. For if there are sharp boundaries, for all we know God will one day reveal them to a pious philosopher who prays really hard for an answer.

I think the best move would be to replace (b) with:

  d. it is not possible for us to know these boundaries without reliance on the supernatural.

This is more plausible, but it seems hard to be all that confident about (d). Maybe there is some really elegant semantic theory that has yet to be discovered that yields the boundaries. Or maybe our mind has natural powers beyond those we know.

Let me try, however, to offer a bit of an argument for (d). Let’s imagine what the boundary between bald and non-bald would be like. As a first attempt, we might think it’s like this:

  1. Necessarily, x is bald iff x has fewer than n hairs.

But there is no n for which (1) is true. For n would have to be at least two, since it is possible to be bald but have a hair. Now imagine Bill the Bald who has n − 1 hairs, and now imagine that these hairs grow in length until each one is so long that Bill can visibly and fully cover his scalp with them. At that point, Bill wouldn’t be bald, yet he would still have n − 1 hairs. So, the baldness boundary cannot be expressed numerically in terms of the number of hairs.

As a second attempt, we might hope for a total-length criterion.

  2. Necessarily, x is bald iff the total length of x’s hairs is less than k centimeters.

But it is possible to have two people with the same total length of hairs, one of whom is bald and the other is not. For the thickness of hairs counts: if one just barely has the requisite total length but freakishly thin hairs, that won’t do. On the other hand, clearly k would have to be at least four centimeters, since a single ordinary hair of four centimeters is not enough to render one non-bald, but one could have a total hair length of four centimeters and yet be non-bald, if one has four hairs, each one centimeter long and 10 centimeters in diameter, covering one’s scalp with a thick keratinous layer.

So, we really should be measuring total volume, not length. But there are other problems. Shape probably matters. Suppose Helga has a single hair, of normal diameter, but it is freakishly rigid and long, long enough to provide the requisite volume, but immovably sticking up away from the scalp and providing no coverage. Moreover, whatever we are measuring has to be relative to the size of the scalp. A baby needs less hair to cease to be bald than an adult. But it’s not just relative to the size of the scalp, but also the shape of the scalp. If one has a very large surface area of scalp but that is solely due to many tiny wrinkles, one doesn’t need an amount of hair proportional to that large surface area. To a first approximation, what matters is the surface area of the upper part of the convex hull of one’s scalp. But even that’s not right if we imagine a scalp that has very large wrinkles.

So, in fact, we have good reason to think the real boundary wouldn’t be simply numerical. It would involve some function of hair shape, volume and rigidity, as well as of scalp shape and size. And if we think about cases, we realize that it will be a very complex function, and we are nowhere close to being able to state the function. Moreover, to be honest, there are likely to be other variables that matter.

At this point, we start to see the immense amount of complexity that would be involved in any plausible statement of the precise boundary of baldness, and that gives us positive reason to doubt that short of something supernatural we could know where the boundary lies.

But suppose our confidence has not yet been quashed. We still have other serious problems. What we are looking for is a perfectly precise necessary and sufficient condition for someone to be bald. In that definition, we cannot use other vague terms. That would be cheating. What the epistemicist meant by saying that we don’t know where the boundaries lie was that we do not know any transparently precise statements of the boundaries, statements not involving other vague terms. But “hair” itself is a vague term. Both hair and horns are made of keratin. Where does the boundary between hair and horns lie? Similarly, “scalp” is vague, too. And it’s only the volume of the part of the hairs sticking out of the scalp that counts—the size of the root is irrelevant. But “sticking out” is vague, as is obvious when we Google for microscopic photography of scalps. And which particles are in the hair or in the scalp is going to be vague. Next, any volume and surface area measurements suffer from vagueness even if we fix the particles, because for quantum reasons particles will have spread out wavefunctions. And then Relativity Theory comes in: volume and surface area depend on reference frame, and so we need a fully precise definition of the relevant reference frame.

Once we see all the complexity needed in giving a transparently precise statement of the boundary of baldness, it becomes very plausible that we can’t know it by natural means, just as it is very plausible that no human can know the first million digits of π by natural means.

And things get even worse. For humans are not the only things that can be bald. Klingons can be bald, too. Probably, though, only humanoid things are bald in the same sense of the word, but even when restricted to humanoid things, a precise statement of the boundary of baldness will have to apply to beings from an infinite number of possible species. And the norms of baldness will clearly be species-relative. Not to mention the difficulty of defining what hair and scalp are, once we are dealing with beings whose biochemistry is different from ours. It is now starting to look like a transparently precise statement of the boundary of baldness might actually have infinite complexity.

Monday, May 11, 2020

Mystery and religion

Given what we have learned from science and philosophy, fundamental aspects of the world are mysterious and verge on contradiction: photons are waves and particles; light from the headlamp on a fast train goes at the same speed relative to the train and relative to the ground; objects persist while changing; we should not murder but we should redirect trolleys; etc. Basically, when we think deeper, things start looking strange, and that’s not a sign of us going wrong. There are two explanations of this, both of which are likely a part of the truth: reality is strange and our minds are weak.

It seems not unreasonable to expect that if there were a definitive revelation of God, that revelation would also be mysterious and verge on contradiction. Of the three great monotheistic religions, Christianity with the mystery of the Trinity is the one that fits best with this expectation. At the same time, I doubt that this provides much of an argument for Christianity. For while it is not unreasonable to expect that God’s revelation would be paradoxical, it is a priori a serious possibility that God’s revelation might be so limited that what was revealed would not be paradoxical. And it would also be a priori a serious possibility that while creation is paradoxical, God is not, though this last option is a posteriori unlikely given what we learn from the mystical experience traditions found in all the three monotheistic religions.

So, I am not convinced that there is a strong argument for Christianity and against the other two great monotheistic religions on the grounds that Christianity is more mysterious. But at least there is no argument against Christianity on the basis of its embodying mysteries.

Three levels of theological models

There are three kinds of metaphysical models of a theological mystery—say, Trinity, Incarnation or Transubstantiation:

  • realistic model: a metaphysical story that is meant to be a true account of what makes the mysterious doctrine be true

  • potential model: a metaphysical story that is meant to be an epistemically possible account of what makes the mysterious doctrine be true

  • analogical model: a story that is meant to be an epistemically possible account of what makes something analogous to the mysterious doctrine be true.

For instance, Aquinas’s accounts of the Trinity, Incarnation and Transubstantiation are realistic models: they are meant to be accounts of what indeed makes the doctrines true. Van Inwagen’s relative identity account of the Trinity or his body-snatching account of the resurrection, on the other hand, are only potential models: van Inwagen does not affirm they are true. And the history of the Church is filled with analogical models.

A crucial test of any of these models is this: Imagine that you believe the story to be true, and see if the traditional things that one says about the mystery (in the case of a realistic or potential model), or analogues of them (in the case of an analogical model), sound like reasonable things to say given what one believes.

For instance, consider a time-travel model of the Incarnation. Alice, currently a successful ultramarathoner and brilliant geologist, will live a long and fruitful life. Near the end of her life, she has lost most of her physical and mental powers, and all her knowledge of geology. She uses a time machine to go back to 2020 when she is in her prime. If we thought this story was true, it would be reasonable to find ourselves saying things like:

  • Alice is a successful ultramarathoner and barely able to walk

  • Alice understands continental drift and does not know what magma is

  • Alice is young and old

  • Alice is in the pink of health and dying.

These things would sound like a contradiction, but the time-travel story shows they are not. However, these claims are also analogous to claims that constitute an especially mysterious part of the mystery of the Incarnation (and I suppose a mysterious part of a mystery is itself a mystery): Christ suffers and is impassible; Christ is omniscient and does not know everything; Christ is timeless and born around 4 BC.

Of course nobody should think that it’s literally true that the Incarnation is to be accounted for in terms of time travel. But what the analogical model does show is that there are contexts in which it is reasonable to describe a non-contradictory reality in terms that are very similar to the apparently contradictory incarnational claims.

Friday, May 8, 2020

Slowing down pleasures and pains, once again

If suddenly everything in the game and in my brain slowed down while I was having a good time playing Asteroids, my conscious sequence wouldn’t be subjectively affected, and the hedonic value of the game would not change in any way. It would just take proportionately longer to get the same overall hedonic value.

But this leads to a paradox. Suppose that I am experiencing an approximately constant moderate pleasure for five minutes, and you experience that pleasure for ten minutes. Then, obviously, you get approximately twice the hedonic value. But one way to make it be the case that you experience the same pleasure for ten minutes is just to slow down all of your life by a factor of two. And yet such a slowdown should not affect hedonic value.

I think I previously thought that one way out of this paradox was to suppose that time is discrete. But I don’t think so any more. In fact, it seems to me that making time be discrete makes the paradox worse. For in your slowed-down ten minutes of pleasure, there will be twice as many pleasurable moments of time, which should predict, contrary to the intuition I began with, that you will have twice the hedonic value. Granted, if time is discrete, there will be some technical difficulties with how the slowdown happens at very short time-scales. But that doesn’t matter for us, since if time is discrete, it is discrete on a Planck scale, which is way below any time-scales relevant to my enjoying a game of Asteroids. And we need not imagine any weird “microphysics slowing down” for the thought experiment: it suffices that the computer software slow down by a factor of two and that you be given drugs that make your brain work more sluggishly than mine.

A different way to try to solve the problem is to suppose that there is some kind of a clock in my brain, and that only states at a clock tick are pleasurable. Thus, if your life is slowed down by a factor of two, then that clock will slow down, and in ten minutes of your enjoying Asteroids there will be the same number of pleasurable ticks as in me, and so you will get the same total pleasure.

But this is tricky. Whatever process is generating the clock ticks in our brains is presumably a fairly continuous analog process. Thus, there will be no such thing as an instantaneous tick of the clock. Rather, there will be an extended period of time (on a time-scale many orders of magnitude above the Planck scale, so any discreteness of physical time will be irrelevant) at which the tick occurs. (Think of a physical clock ticking. The tick is a sound that occurs every second for a fraction of a second—but that fraction is non-zero.) So if I am having pleasure during the tick and you’re having pleasure during the tick, since your tick takes twice as long, it seems you have twice as much pleasure.

I can think of only one way out of the paradox right now, and that is to deny that it makes sense to talk of there being a pleasure or a pain at an instantaneous physical time. Rather, pleasures and pains (and presumably other qualia) always occur over an interval of times. The clock toy model can now be rescued. For we could say that what counts is a pleasurable or painful tick, but if the tick itself is shortened or extended, the hedonic value does not actually change. Let’s imagine that the clock works like some processor clocks. There is an electric square wave generated somewhere, and the ticks are the transitions from a high to a low voltage. Since real-life “square wave” isn’t actually square, but has transitions with wobbly smooth edges, the ticking—i.e., the transition from high to low—takes time. What makes it be the case that one has experienced a pleasure or pain during an interval of times is that this interval contained a clock transition from high to low together with some further state that is not itself pleasurable or painful but that, when combined with the clock transition, constitutes the pleasure or pain. The number of pains or pleasures during a period of time is the number of such transitions.

If one slows down the system, the clock transitions become slower. But the number of clock transitions is unchanged, as is the number of pleasurable or painful clock transitions. Thus there is no change in overall hedonic value.
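Here is the toy model as a toy simulation (Python; the waveform values are invented). Hedonic events are falling edges of the thresholded clock signal, and stretching the signal in time, the analogue of the factor-of-two slowdown, leaves the count unchanged:

    def falling_edges(samples, high=2.0, low=0.8):
        # Count transitions from a definitely-high to a definitely-low level,
        # ignoring the wobbly analog values in between.
        count, state = 0, None
        for v in samples:
            if v >= high:
                state = "high"
            elif v <= low and state == "high":
                state, count = "low", count + 1
        return count

    # An invented "square wave" with sloppy analog edges (two full ticks).
    wave = [3.3, 3.3, 2.5, 1.5, 0.4, 0.1, 0.2, 1.4, 2.8, 3.3, 3.2, 1.9, 0.5, 0.1]

    # A factor-of-two slowdown: every sample lasts twice as long.
    slowed = [v for v in wave for _ in (0, 1)]

    print(falling_edges(wave), falling_edges(slowed))   # 2 2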

But notice that on this toy model it is never true that one is experiencing a pleasure or pain at an instant. For there is no transition from high to low clock state at an instant. Transitions happen over an interval of times. This will bother presentists.

The above line of thought assumed supervenience of the mental on the physical. But a robust dualism faces the same problems of slowing down and speeding up, and the fundamental idea of the solution, that pleasures and pains are constituted by essentially temporally extended processes and that there are no instantaneous pleasures or pains, is still available.

Thursday, May 7, 2020

Swapping ones and zeroes

Decimal addition can be done by a computer using infinitely many algorithms. Here are two:

  1. Convert decimal to binary. Add the binary. Convert binary to decimal.

  2. Convert decimal to inverted binary. Inv-add the binary. Convert inverted binary to decimal.

By conversion between decimal and inverted binary, I mean this conversion (in the 8-bit case):

  • 0↔11111111, 1↔11111110, 2↔11111101, …, 255↔00000000.

By inv-add, I mean an odd operation that is equivalent to bitwise inverting, adding, and bitwise inverting again.
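To make the two algorithms concrete, here is an 8-bit sketch in Python:

    MASK = 0xFF                      # work with 8-bit values

    def inv(x):
        return ~x & MASK             # bitwise inversion

    def inv_add(a, b):
        # invert both inputs, add, invert the result
        return inv((inv(a) + inv(b)) & MASK)

    def algorithm1(a, b):
        # decimal -> binary -> add -> decimal (Python ints are already
        # binary under the hood, so the conversions are implicit)
        return (a + b) & MASK

    def algorithm2(a, b):
        # decimal -> inverted binary -> inv-add -> decimal
        return inv(inv_add(inv(a), inv(b)))

    print(algorithm1(100, 55), algorithm2(100, 55))   # 155 155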

You probably thought (or would have thought had you thought about it) that your computer does decimal addition using algorithm (1).

Now, here’s the fun. We can reinterpret all the physical functioning of a digital computer in a way that reverses the 0s and 1s. Let’s say that normally 0.63V or less counts as zero and 1.17V or higher counts as one. But “zero” or “one” are our interpretation of analog physical states that in themselves do not have such meanings. So, we could deem 0.63V or less to be one and 1.17V or higher to be zero. With such a reinterpretation, logic gates change their semantics: OR and AND swap, NAND and NOR swap, while NOT remains NOT. Arithmetical operations change more weirdly: for instance, the circuit that we thought of as implementing an add should now be thought of as implementing what I earlier called an inv-add. (I am inspired here by Gerry Massey’s variant on Putnam reinterpretation arguments.)
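The gate-swapping claims are just De Morgan duality, and can be checked exhaustively:

    def flip(b):                     # reinterpret a voltage level: 0 <-> 1
        return 1 - b

    def NAND(a, b): return 1 - (a & b)
    def NOR(a, b):  return 1 - (a | b)

    for a in (0, 1):
        for b in (0, 1):
            # A gate that computed AND under the old reading computes OR
            # of the reinterpreted inputs under the new one, and vice versa.
            assert flip(a & b) == flip(a) | flip(b)
            assert flip(a | b) == flip(a) & flip(b)
            # NAND and NOR swap, while NOT stays NOT.
            assert flip(NAND(a, b)) == NOR(flip(a), flip(b))
            assert flip(1 - a) == 1 - flip(a)
    print("all four input pairs check out")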

And if before the reinterpretation your computer counted as doing decimal addition using algorithm (1), after the reinterpretation your computer uses algorithm (2).

So which algorithm is being used by a computer depends on the interpretation of the computer’s functioning. This is a kind of flip side to multiple realizability: multiple realizability talks of how the same algorithm can be implemented in physically very different ways; here, the same physical system implements many algorithms.

There is nothing really new here, though I think much of the time in the past when people have discussed the interpretation problem for a computer’s functioning, they talked of how the inputs and outputs can be variously interpreted. But the above example shows that we can keep fixed our interpretation of the inputs and outputs, and still have a lot of flexibility as to what algorithm is running “below the hood”.

Note that normally in practice we resolve the question of which algorithm is running by adverting to the programmers’ intentions. But we can imagine a case where an eccentric engineer builds a simple calculator without ever settling in her own mind how to interpret the voltages and whether the relevant circuit is an add or an inv-add, and hence without settling in her own mind whether algorithm (1) or (2) is used, knowing well that either one (as well as many others!) is a possible interpretation of the system’s functioning.

Tuesday, May 5, 2020

Another really weird thought experiment

Suppose we accept a memory theory of personal identity and accept that people can be moved from one set of hardware to another. Now suppose Alice is an internally deterministic person, currently without inputs from the outside world, whose mental state is constantly backed up to a hard drive. Suppose now that Alice is a person who in hardware AliceOne has experiences E0, E1, E2, E3 at times 0, 1, 2, 3, respectively. Then the initial hardware is destroyed, and the backup from just before time 2 is restored into another piece of hardware, AliceTwo, who goes on to have experience E2. Then AliceTwo is destroyed, and a backup from just before time 1 is restored into AliceThree, who goes on to have experience E1, after which all the hardware and the backups are destroyed by a natural disaster.

What is the order of Alice’s experiences? The obvious answer is:

  • E0, E1, E2, E3, E2, E1 at times 0–5, respectively.

In particular, when Alice is experiencing E2 for the second time, if she were informed of what is going to happen, she would be rationally dreading E1 if E1 is unpleasant. For E1 would be in her future.

What makes it be the case that the second E1 is experienced after the second E2? It is the order of external time, according to which the second E1 comes after the second E2. It is not the order of causal connections in Alice (since the second E2 comes from the first E1 while the second E1 comes from the first E0, and since there need be no causal connection between the hardware AliceTwo and AliceThree).

I think this is all a bit odd. To make it odder, let’s imagine that AliceTwo and AliceThree are in a room that time-travels in such a way that it is first at time 5 and then at time 4. Now, perhaps, Alice experiences the final E1 before she experiences the final E2. That’s really unclear, though.

The more I think about various combinations of time-traveling backups and time-traveling hardware, the more indeterminate it looks to me whether the final E2 comes before the final E1.

This is not much of an argument. But the above lines of thought lead me to think that one or more of the following is true:

  1. Time travel is impossible.

  2. People cannot be moved from one piece of hardware to another.

  3. One does not survive restoration from a backup.

  4. The order of experience does not have tight connections to rationality of attitudes.

  5. The order of experience can be quite indeterminate.

Timeless flow of consciousness?

Imagine that all the computation a deterministic brain does is carried out by an incredibly complex system of gears, where a single turn of a crank generates all the different intermediate computational results in different gears. Now suppose the world is Newtonian and the gears are frictionless, perfectly rigid, and perfectly meshing. Perfectly rigid and perfectly meshing gears compute instantly. So, all the computation of a life can be done with a single turn of a crank. Note that the computational states will then have an explanatory order but need not have a temporal order: all the computations happen simultaneously (a toy sketch of this follows the two numbered claims below). So:

  1. On a computational theory of mind, it is possible to live a conscious mental life of many years of subjective flow of consciousness without any real temporal succession.

It follows that:

  2. Either computational theories of mind are false, or the subjective flow of consciousness does not require any real time.
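Here is the promised toy sketch of states that are ordered by explanation but not by time. The particular dependency graph is made up; the point is only that a non-trivial order can be read off the dependencies even if everything obtains at a single instant.

```python
# Gear values fixed by one crank-turn: no state occurs before any other,
# yet the states are non-trivially ordered by what explains what.

depends_on = {          # each "gear" state and the states that explain it
    "crank": [],
    "g1": ["crank"],
    "g2": ["g1"],
    "g3": ["g1", "g2"],
}

def explanatory_depth(s):
    """Length of the longest explanation chain beneath a state."""
    deps = depends_on[s]
    return 0 if not deps else 1 + max(explanatory_depth(d) for d in deps)

for s in depends_on:
    print(s, "-> explanatory depth", explanatory_depth(s))
# crank 0, g1 1, g2 2, g3 3: an order, but not a temporal one.
```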

I think there is a potential problem in (1) and (2), namely a possible confusion between real time and external time. For it could be that internal time is just as real as (or more real than!) external time, and is simply constituted by the causal order of interactions within a substance. If so, then if the system of gears were to be a substance (which I think it could only be if it had a unified form), its causal order could actually constitute a temporal order.

This and other recent posts could fit into a neat research project—perhaps a paper or even a dissertation or a monograph—exploring the costs of physicalism in accounting for the temporality of our lives. As usual, I am happy to collaborate if someone wants to do the heavy lifting.

Monday, May 4, 2020

Digital and analog states, consciousness and clock skew

In a computer, we have multiple layers of abstraction. There is an underlying analog hardware level (which itself may be an approximation to a discrete quantum world, for all we know)—all our electronic hardware is, technically, analog hardware. Then there is a digital hardware level which abstracts from the analog hardware level by counting voltages above a certain threshold as a one and voltages below another, lower, threshold as a zero. And then there are higher layers defined by the software. But it is interesting that there is already semantics present at the digital level: three volts (say) means a one while half a volt (say) means a zero.

At the (single-threaded) software level, we think of the computer as being in a sequence of well-defined discrete states. This sequence unfolds in time. However, it is interesting to note that the time with respect to which this sequence unfolds is not actually real physical time. One reason is this. At the analog hardware level, during state transitions there will be times when the voltage levels are in an area that does not define a digital state. For instance, in 3.3V TTL logic, a voltage below 0.8V is considered a zero, a voltage above 2.0V is considered a one, but in between what we have is “undefined and results in an invalid state”. Since physical changes at the analog hardware level are continuous, whenever there is a change between a zero and a one, there will be a period of physical time at which the voltage is in the “undefined” range.
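A small Python sketch of the point, using the 3.3V TTL thresholds just quoted; the linear voltage ramp standing in for a real continuous transition is my own simplification.

```python
# Classify analog voltages into digital states using 3.3V TTL thresholds.

def digital_state(v):
    if v <= 0.8:
        return 0       # a zero
    if v >= 2.0:
        return 1       # a one
    return None        # undefined: no digital state at all

# Sample a continuous 0 -> 1 transition at six physical times:
for t in range(6):
    v = 0.66 * t       # a toy linear ramp from 0.0V to 3.3V
    print(f"t={t}: {v:.2f}V -> {digital_state(v)}")
# At t=2 and t=3 (1.32V and 1.98V) the hardware is in no digital state,
# so the software state is undefined at those physical times.
```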

It seems, then, that the well-defined software state can only occur at a proper subset of the physical times. Between these physical times are physical times at which the digital states, and hence the software states that are abstractions from them, are undefined. This is interesting to think about in connection with the hypothesis of a conscious computer. Would a conscious computer be conscious “all the time” or only during the times when software states are well defined?

But things are more complicated than that. The technical means by which undefined states are dealt with is the system clock, which sends a periodic signal to the various parts of the processor. The system is normally so designed that when the clock signal reaches a component of the processor (say, a flip-flop), that component’s electrical states have a well-defined digital value (i.e., are not in the undefined range). There is thus an official time at which a given component’s digital values are defined. But at the analog hardware level, that official time is slightly different for different components, because of “clock skew”, the physical phenomenon that clock signals reach different components at different times. Thus, when we say that component A is in state 1 and component B is in state 0 at the same time, the “at the same time” is not technically defined by a single physical time, but rather by the (normally) different times at which the same clock signal reaches A and B.

In other words, it may not be technically correct to say that the well-defined software state occurs at a proper subset of the physical times. For the software state is defined by the digital states of multiple components, and the physical times at which these digital states “count” are going to be different for different components because of clock skew. In fact, I assume that the following can and does sometimes happen: component B is designed so that the clock signal reaches it after it has reached component A, and by the time component B is reached by the clock signal, component A has started processing new data and no longer has a well-defined digital state. Thus at least in principle (and I don’t know enough about the engineering to know if this happens in practice) it could be that there is no single physical time at which all the digital states that correspond to a software state are defined.
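Here is a toy Python illustration of that last possibility. All the numbers and the idealized voltage traces are made up; the thresholds are the TTL ones used above.

```python
# Toy illustration: the software state (A=1, B=0) is assembled from samples
# at two different physical times, and there is no single physical time at
# which both of those digital values obtain.

def digital_state(v):                 # 3.3V TTL thresholds, as above
    return 0 if v <= 0.8 else (1 if v >= 2.0 else None)

def voltage_A(t):
    # A's old value (a one) is stable until t=6, then new data ramps it down
    return 3.3 if t <= 6 else max(0.1, 3.3 - 1.5 * (t - 6))

def voltage_B(t):
    # B is still settling toward its zero, reaching it only by t=7
    return max(0.1, 2.2 - 0.2 * t)

clock_at_A, clock_at_B = 5, 7         # clock skew: the edge reaches B later

print(digital_state(voltage_A(clock_at_A)))   # 1, latched at t=5
print(digital_state(voltage_B(clock_at_B)))   # 0, latched at t=7

for t in range(11):
    a, b = digital_state(voltage_A(t)), digital_state(voltage_B(t))
    print(t, a, b)   # at no single t do we get a=1 and b=0 together
```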

If this is right, then when we go back to our thought experiment of a conscious computer, we should say this: The times of the flow of consciousness in that computer are not even a subset of the physical times. They are, rather, an abstraction, what we might call “software time”. And then the question of whether the computer is presently conscious will be literally nonsense. The computer’s software time, which its consciousness is strung out along, has a rather complex relationship to real time.

So what?

I don’t know exactly. But I think there are a few directions one could take this line of thought:

  1. Consciousness has to be strung out in a well-defined way along real time, and so computers cannot be conscious.

  2. It is likely that similar phenomena occur in our brains, and so either our consciousness is not based on our brains or else it is not strung out along real time. The latter makes the A-theory of time less plausible, because the main motive for the A-theory is to do justice to our experience of temporality. But if our experience of temporality is tied to an abstracted software time rather than real time, then doing justice to our experience of temporality is unlikely to reach the truth about real time. This in turn suggests to me the conditional: If the A-theory of time is true, then some sort of dualism is true.

  3. The problem that transitions between meaningful states (say, the ones and zeros of the digital hardware level) involve non-meaningful states between them is likely to afflict any plausible theory on which our mental functioning supervenes on a physical system. In digital computers, the way a sequence of meaningful states is reconstructed is by means of a clock signal. This leads to an empirical prediction: If the mental supervenes on the physical, then our brains have something analogous to a clock signal. Otherwise, the well-defined unity of our consciousness cannot be saved.

Saturday, May 2, 2020

Relativity, brains and the unity of consciousness

I was grading undergraduate metaphysics papers last night and came across a very interesting observation in a really smart student’s paper on Special Relativity and time (I have the student’s permission to share the observation): different parts of the brain have different reference frames, and so must experience time slightly differently.

Of course, the deviation in reference frames is very, very small. It comes from such facts as that

  • the lower parts of the brain are closer to a massive object—the earth—which causes a slight amount of time dilation, and

  • we are constantly wobbling our heads in a way that makes different parts of the brain move at different speeds relative to the earth.

Does such a small difference matter? As I understand their argument, my student thought it would make the A-theory less plausible. For it makes it questionable whether we can say that we really perceive the true objective now in the way that A-theorists would want to say we do. That’s an interesting thought.

I also think the line of thought might create a problem for someone who thinks that mental states supervene on physical states. For consider the unity of consciousness whereby we are aware of multiple things at once. If the consciousness of these different things is partly constituted by different chunks of the brain, then it seems that what precise stream of consciousness we have will depend on what reference frame we choose. For instance, I might hear a sound and feel a pinch at exactly the same moment in one reference frame, but in another reference frame the sound comes before the feeling, and in yet another the feeling comes before the sound. But that seems wrong: the precise stream of consciousness should not depend on the reference frame.
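One can check the frame-dependence with a short computation. Below, two percept-constituting brain events a few centimeters apart and near-simultaneous in the brain’s frame (hence spacelike separated) get opposite temporal orders under opposite boosts; the specific numbers are illustrative.

```python
# Order of two spacelike-separated events in boosted reference frames.

C = 3.0e8                       # speed of light, m/s

def boosted_time(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at velocity v."""
    gamma = 1.0 / (1.0 - (v / C) ** 2) ** 0.5
    return gamma * (t - v * x / C ** 2)

sound = (0.0, 0.0)              # sound percept: t = 0 s, x = 0 m
pinch = (1.0e-13, 0.05)         # pinch percept: 0.1 ps later, 5 cm away

for v in (0.0, 0.9 * C, -0.9 * C):
    dt = boosted_time(*pinch, v) - boosted_time(*sound, v)
    print(f"v = {v:+.1e} m/s: pinch minus sound = {dt:+.3e} s")
# v = 0: the pinch comes after the sound; v = +0.9c: before; v = -0.9c: well after.
```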

This shows that if the order of succession within the stream of consciousness does not depend on the reference frame (and it is plausible that it does not), then the precise stream of consciousness cannot supervene on physical states. This is clear if there is no privileged reference frame in the physical world. But even if there is a metaphysically privileged reference frame, as A-theorists have to say, it seems reasonable to say that this frame is “metaphysical” rather than “physical”, and hence a dependence of consciousness on this frame is not a case of supervenience of mind on the physical.

Here is what I think we should say: If the A-theory is true, then the mind somehow catches on to the absolute now. If the B-theory is true, then the mind has its own subjective timeline, which is not the timeline of the brain or any part of it.

I think a really careful materialist might be able to affirm the latter option, by analogy to how in a modern digital computer, even though at the electronic hardware level there is analog time (perhaps itself an approximation to some frothy weird quantum time), synchronization of computation to clock ticks results in the possibility of abstracting a precisely defined discrete time that “pretends” that all combinational logic happens instantaneously. Roughly speaking, the assembly language programmer works with respect to the discrete time, while the FPGA programmer works primarily with respect to the discrete time but has to constantly bear in mind the constraints that come from the underlying analog time. However, the correspondence between the two levels of time is only vague. Similarly, I think it likely that the connection between the mind’s timeline and the physical timelines is going to suffer from vagueness (though perhaps only epistemic). How philosophically happy a materialist would be with such a view is unclear, and there is a serious empirical assumption here for the materialist, namely that the brain has a global synchronizing process similar to a microprocessor’s or FPGA’s synchronizing clock. I doubt that there is one, but I know very little of neuroscience.

Friday, May 1, 2020

Simultaneity, A-Theory and Relativity

Here is a standard story about Special Relativity and the A-theory of time:

  • There is an objective metaphysical simultaneity, but

  • this metaphysical simultaneity does not affect physical events and is unobservable.

Let’s assume the A-theory is correct and this story is also correct.

Now, when people talk about this metaphysical simultaneity, they normally think it aligns with the frame-relative simultaneity of Special Relativity for some privileged reference frame. This seems reasonable. But it is an interesting question to ask for an explanation of this alignment.

Causation may put some constraints on metaphysical simultaneity. For instance, perhaps, there shouldn’t be any possibility of future-to-past causation. But a metaphysical simultaneity relation can satisfy such constraints without coinciding with any frame-relative simultaneity.

If God exists, I guess we might suppose that metaphysical simultaneity coincides with a frame-relative simultaneity because it’s more elegant if it does.