Thursday, April 25, 2024

Brain snatching is not a model of life after death

Van Inwagen infamously suggested the possibility that at the moment of death God snatches a core chunk of our brain, transports it to a different place, replaces it with a fake chunk of brain, and rebuilds the body around the transported chunk.

I think that, were van Inwagen’s suggestion correct, it would not be correct to say that we die. If so, then it is a seriously problematic view given the Christian commitment that people do, in fact, die. Hence van Inwagen’s model is not a model of life after death.

Argument: If in the distant future all of a person’s body were destroyed in an accident except for a surviving core chunk, and medical technology had progressed so much that it could regrow the rest of the body from that chunk, I think we would not say that the medical technology resurrected the person, but that it prevented the person’s death.

Objection: The word “death” gets its meaning ostensively from typical cases we label as cases of “death”. In these cases, the heart stops, the parts of the brain observable to us stop having electrical activity, etc. What we mean by “death” is what happens in these cases when this stuff happens. If van Inwagen’s suggestion is correct, then what happens in these cases is the snatching of a core chunk. Hence if van Inwagen’s suggestion is correct, then death is divine snatching of a core chunk of the brain, and we do in fact die.

Responses: First, if death is divine snatching of a core chunk of the brain, then jellyfish and trees don’t die, because they don’t have a brain. I suppose, though, one might say that “death” is understood analogously between jellyfish and humans, and it is human death that is a divine snatching of a core chunk of the brain.

Second, it seems obvious that if God had chosen not to snatch a core chunk of Napoleon’s brain, and allowed Napoleon’s body to rot completely, then Napoleon would be dead. Hence, not even the death of a human is identical to a divine snatching.

Third, I think it is an important part of the concept of death is that death is something that is in common between humans and other organisms. People, dogs, jellyfish, and trees all die. We should have an account of death common between these. The best story I know is that death is the destruction of the body. And the van Inwagen story doesn’t have that. So it’s not a story about death.

Wednesday, April 24, 2024

A small disability

On the mere difference view of disability, one isn’t worse off for being disabled as such, though one is worse off due to ableist arrangements in society. A standard observation is that the mere difference view doesn’t work for really big disabilities.

In this post, I want to argue that it doesn’t work for some really tiny disabilities. For instance, about 3-5% of the population without any other brain damage exhibits “musical anhedonia”, an inability to find pleasure in music. I haven’t been diagnosed, but I seem to have something like this condition. With the occasional exception, music is something I either screen out or find a minor annoyance. Occasionally I find myself with an emotional response, but I also don’t like having my emotions pulled on by something I don’t understand. When I play a video game, one of the first things I do is turn off all the music. If I could easily run TV through a filter that removed music, I would (at least if watching alone). (Maybe movies as well, though I might feel bad about disturbing the artistic integrity of the director.)

On the basis of testimony, however, I know that music can embody immense aesthetic goods which cannot be found in any other medium. I am missing out on these goods. My missing out on them is not a function of ableist assumptions. After all, if the world were structured in accordance with musical anhedonia, there would be no music in it, and I would still miss out on the aesthetic goods of music—it’s just that everybody else would miss out on them as well, which is no benefit to me. I suppose in a world like that more effort would be put into other art forms. The money spent on music in movies might be spent on better editing, say. In church, perhaps, better poetic recitations would be created in place of hymns. However, more poetry and better editing wouldn’t compensate for the loss of music, since having music in addition to other art forms makes for a much greater diversity of art.

Furthermore, presumably there are other anhedonias parallel to musical anhedonia. If to compensate for musical anhedonia we replace music with poetic recitations, then those who have poetic anhedonia but do not have musical anhedonia are worse off. (I don’t know whether poetic anhedonia is a real or a hypothetical condition; I would be surprised, though, if no one suffered from it. I myself don’t much appreciate sound-based poetry, though I do appreciate meaning-based poetry, like Biblical Hebrew poetry or Solzhenitsyn’s “prose poems”.)

In general, the lack of an ability to appreciate a major artistic modality is surely a loss in one’s life. It need not be a major loss: one can compensate by enjoying other modalities. But it is a loss.

In the case of a more major disability, there can be personal compensations from the intrinsic challenges arising from the disability. But really tiny disabilities need not generate much in the way of such meaningful compensations.

Here’s another argument that musical anhedonia isn’t a mere difference. Suppose that Alice is a normal human being who would be fully able to get pleasure from music. But Alice belongs to a group unjustly discriminated against, and a part of this discrimination is that whenever Alice is in earshot, all music is turned off. As a result, Alice has never enjoyed music. It is clear that Alice was harmed by this. And the bulk of the harm was that she did not have the aesthetic experience of enjoying music—which is precisely the harm that the person with musical anhedonia suffers.

Objection 1: Granted, musical anhedonia is not a mere difference. But it is also not a disability because it does not significantly impact life.

Response 1.1: But music is one of the great cultural accomplishments of the human species.

Response 1.2: Moreover, transpose my argument to a hypothetical society where it is difficult to get by without enjoying music, a society where, for instance, most social interactions involve explicit sharing in the pleasure of music. In that society, musical anhedonia may make one an outcast. It would be a disability. But it would still make one lose out on one of the great forms of art, and hence would still be a really bad thing, rather than a mere difference.

Objection 2: There is a philosophical and a spiritual benefit to me from my musical anhedonia, and it’s not minor. The spiritual benefit is that I look forward to being able to really enjoy music in heaven in a way in which I probably wouldn’t if I already enjoyed it significantly. The philosophical benefit is that music provides me with a nice model of an aesthetic modality that is beyond one’s grasp. Normally, “things beyond one’s grasp” are hard to talk about! But in the case of music, I can lean on the testimony of others, and thus talk about this art form that is beyond my grasp. And this, in turn, provides me with a reason to think that there are likely other goods beyond our current ken, perhaps even goods that we will enjoy in heaven (back to the spiritual). Furthermore, music provides me with a conclusive argument against emotivist theories of beauty. For I think music is beautiful, but I do not have the relevant aesthetic emotional reaction to it. My belief that music is beautiful is largely based on testimony.

Response 2: These kinds of compensating benefits help the mere difference view. Even if one were able to get tenure on the strength of a book on the philosophy of disease inspired by getting a bad case of Covid, the bad case of Covid would be bad and not a mere difference. The mere difference view is about something more intrinsic to the condition.

Tuesday, April 23, 2024

Value and aptness for moral concern

In two recent posts (this and this) I argued that dignity does not arise from value.

I think the general point here goes beyond value. Some entities are more apt for being morally concerned about than others. These entities are more appropriate beneficiaries of our actions, we have more reason to protect them, and so on. The degreed property these entities have more of has no name, but I will call it “apmoc”: aptness for moral concern. Dignity is then a particularly exalted version of apmoc.

Apmoc as such is agent-relative. If you and I have cats, then my cat has more apmoc relative to me than your cat, while your cat has more apmoc relative to you. Thus, I should have more moral concern for my cat and you for yours. Agent-relativity can be responsible for the bulk of the apmoc in the case of some entities—though probably not in the case of entities whose apmoc rises to the level of dignity.

However, we can distinguish an agent-independent core to an entity’s apmoc, which I will call the entity’s “core apmoc”. One can think of the core apmoc as the apmoc the entity has relative to an agent who has no special relationship to the entity. (Note: My concern in this post is the apmoc relative to human agents, so the core apmoc may still be relative to the human species.)

Now, then, here is a thesis that initially sounds good, but I think is quite mistaken:

  1. An entity’s core apmoc is proportional to its value.

For suppose I have two pet dragons, on par with respect to all properties, except one can naturally fly and the other is naturally flightless. The flying dragon has more value: it is a snazzier kind of being, having an additional causal power. Both dragons equally like being scratched under the chin (perhaps with a rake). The fact that the flying dragon has more value does not give me any additional reason to scratch it. More generally, the flying dragon does not have any more core apmoc.

One might object: if it is a matter of saving the life of one of the dragons, other things being equal, one should save the life of the flying dragon, because it is a better kind of being. However, even if this judgment is correct, it is not due to a difference in apmoc. If the flying dragon dies, more value is lost. The death of a dragon removes from the world all the goods of the dragon: its majestic beauty, its contribution to winter heating, its protection of the owner, its prevention of sheep overpopulation, and so on. The death of the flying dragon removes a good—an instance of the causal power of flight—from the world which the death of the flightless dragon does not. If the reason one should save the life of the flying dragon over the flightless one is that the flying one is a better kind of being, then the reason one is saving its life is not because the flying dragon has more apmoc, but because more is lost by its death. If I have a choice of saving Alice from losing a thumb or Bob from losing the little toe, I should save Alice from losing a thumb, not because Alice has more apmoc, but because a thumb is a bigger loss than a toe.

The above objection points to an important feature: sometimes bestowing what is in some sense “the same benefit” on an entity will actually bestow a benefit proportional to the value of the entity. Saving an entity from destruction sounds like “the same benefit”, but it is a greater benefit where there is more value to be saved. Similarly, if I have a choice between fixing a tire puncture in my car or in my bike, more value is gained when I fix the car’s tire, because the car is more valuable. However, this is not due to the car having more apmoc, but simply because the benefits are different: if I fix the car’s tire, the car becomes capable of transporting my whole family, while if I fix the bike’s tire, the bike only becomes capable of transporting me.

Let’s move away from fantasy. Suppose Alice and Bob are on par in all respects, except that Alice knows the 789th digit of π while Bob does not. Knowledge is valuable, and so if you have more knowledge, you have more value. But now if I have a choice of whom to give a delicious chocolate-chip muffin, the fact that Alice knows the 789th digit of π is irrelevant—it contributes (slightly) to value but not at all to core apmoc (it might contribute to the agent-relative aspects of apmoc in some special cases, since shared knowledge can be a partial constituent of a morally relevant relationship).

Granted, a piece of knowledge is a contingent contribution to value. One might think that core apmoc is determined proportionately to the essential values of an entity. But I think this is implausible. Most people have the intuition that, other things being equal, a virtuous person has more apmoc than a vicious one. But virtue is not an essential value—it is a value that fluctuates over a lifetime.

The case of virtue and vice suggests that there may be some values that contribute to core apmoc. I think this is likely. Core apmoc does not appear in a vacuum. But the connection between apmoc and value is complex, and the two are quite different.

Monday, April 22, 2024

Does culpable ignorance excuse?

It is widely held that if you do wrong in culpable ignorance (ignorance that you are blameworthy for), you are culpable for the wrong you do. I have long thought this mistaken—instead we should frontload the guilt onto the acts and omissions that made one culpable for the ignorance.

I will argue for a claim in the vicinity by starting with some cases that are not cases of ignorance.

  1. One is no less guilty if one tries to shoot someone and misses than if one hits them.

  2. If one drinks and drives and is lucky enough to hit no one, one is no less guilty than if one does hit someone, as long as the degree of freedom and knowledge in the drinking and driving is the same.

  3. If one freely takes a drug one knows to remove free will and produce violent behavior in 25% of cases, one is no less guilty if involuntary violence does not ensue than if involuntary violence does ensue.

Now, let’s consider this case of culpable ignorance:

  4. Mad scientist Alice offers Bob a million dollars to undergo a neural treatment that over the next 48 hours will make Bob think that Elbonians—a small ethnic group—are disease-bearing mosquitoes. Bob always kills organisms that he thinks are disease-bearing mosquitoes on sight. Bob correctly estimates that there is a 25% chance that he will meet an Elbonian over the next 48 hours. If Bob accepts the deal, he is no less guilty if he is lucky enough to meet no Elbonians than if he does meet and kill one.

This is as clear a case of culpable ignorance as can be: in accepting the deal, Bob knows he will become ignorant of the human nature of Elbonians, and he knows there is a 25% chance this will result in his killing an Elbonian. I think that just as in cases (1)–(3), one is no less guilty if the bad consequences for others don’t result, so too in case (4), Bob is no less guilty if he never meets an Elbonian.

For a final case, consider:

  5. Just like (4), except that instead of coming to think Elbonians are (disease-bearing) mosquitoes, Bob will come to believe that unlike all other innocent human persons whom it is impermissible to kill, it is obligatory to kill Elbonians, and Bob’s estimate that this belief will result in his killing an Elbonian is 25%.

Again, Bob is no less guilty for taking the money and getting the treatment if he does not run into any Elbonians than if he does run into and kill an Elbonian.

Therefore, one is no less guilty for one’s culpable ignorance if wicked action does not result. Or, equivalently:

  6. One is no more guilty if wicked action does result from culpable ignorance than if it does not.

But (6) is not quite the claim I started with. I started by claiming that one is not guilty for the wicked action in cases of culpable ignorance. The claim I argued for is that one is no guiltier for the wicked action than if there were no wicked action resulting from the ignorance. But if one were guilty for the wicked action, it seems one would be guiltier, since one would have both the guilt for the ignorance and the guilt for the wicked action.

However, I am now not so sure. The argument in the previous paragraph depended on something like this principle:

  7. Being guilty of both action A and action B is guiltier than just being guilty of action A, all other things being equal. (Ditto for omissions, but I want to be briefer.)

Thus being guilty of acquiring ignorance and acting wickedly on the ignorance would be guiltier than just of acquiring ignorance, and hence by (6) the wicked action does not have guilt. But now that I have got to this point in the argument, I am not so sure of (7).

There may be counterexamples to (7). First, a politician’s lying to the people an hour after a deadly natural disaster is no less guilty than lying in the same way to the people an hour before the natural disaster. But in lying to the people after the disaster one lies to fewer people—since some people died in the disaster!—and hence there are fewer actions of lying (instead of lying to Alice, and lying to Bob, and lying to Carl, one “only” lies to Alice and lies to Bob). But I am not sure that this is right—maybe there is just one action of lying to the people, rather than a separate one for each audience member.

Second, suppose Bob strives to insult Alice in person, and consider two cases. In one case, when he has decided to insult Alice, he gets into his car, drives to see Alice, and insults her. In the other case, when he gets into the car he realizes he doesn’t have enough gas to reach Alice, so he buys gas, then drives to see Alice, and then insults her. In the second case, Bob performed an action he didn’t perform in the first case: buying gas in order to insult Alice. But it doesn’t seem that Bob is guiltier in the second case, even though he performed one more guilty action. I am also not sure about this case. Here I am actually inclined to think that Bob is more guilty, for two reasons. First, he was willing to undertake a greater burden in order to insult Alice—and that increases guilt. Second, he had an extra chance to repent: each time one acquiesces in a means, that’s a chance to just say no to the whole action sequence. And yet he refused this chance. (It seems to me that Bob is guiltier in the second case, just as the assassin who, possessing two bullets, shoots the second after missing with the first—regardless of whether the second shot hits—is guiltier than the assassin who stops after shooting and missing once.)

While I am not convinced of the cases, they point to the idea that in the context of (7), the guilt of action A might “stretch” to making B guilty without increasing the total amount of guilt. If that makes sense, then that might actually be the right way of accounting for actions done in culpable ignorance. If Bob kills an Elbonian, he is guilty. That is not an additional item of guilt; rather, the guilt of the actions and omissions that caused the ignorance stretches over and covers the killing. This seems to me to mesh better with ordinary ways of talking—we don’t want to say that Bob’s killing of the Elbonian in either case (4) or (5) is innocent. And saying that there is no additional guilt may be a way of assuaging the intuition I have had over the years when I thought that culpable ignorance excuses.

Maybe.

A final obvious question is about punishment. We do punish differentially for attempted and completed murder, and for drunk driving that does not result in death and drunk driving that does. I think there are pragmatic reasons for this. If attempted and completed murder were equally punished, there would be an incentive to “finish the job” upon initial failure. And having a lesser penalty for non-lethal drunk driving creates an incentive for the drunk driver to be more careful while driving—how much that avails depends on how drunk the driver is, but it might make some difference.

Thursday, April 18, 2024

Evaluating some theses on dignity and value

I’ve been thinking a bit about the relationship between dignity and value. Here are four plausible principles:

  1. If x has dignity, then x has great non-instrumental value.

  2. If x has dignity, then x has great non-instrumental value because it has dignity.

  3. If x has dignity and y does not, then x has more non-instrumental value than y.

  4. Dignity just is great value (variant: great non-instrumental value).

Of these theses, I am pretty confident that (1) is true. I am fairly confident (3) is false, except perhaps in the special case where y is a substance. I am even more confident that (4) is false.

I am not sure about (2), but I incline against it.

Here is my reason to suspect that (2) is false. It seems that things have dignity in virtue of some further fact F about them, such as that they are rational beings, or that they are in the image and likeness of God, or that they are sacred. In such a case, it seems plausible to think that F directly gives the dignified entity both the great value and dignity, and hence the great value derives directly from F and not from the dignity. For instance, maybe what makes persons have great value is that they are rational, and the same fact—namely that they are rational—gives them dignity. But the dignity doesn’t give them additional value beyond that bestowed on them by their rationality.

My reason to deny (4) is that great value does not give rise to the kinds of deontological consequences that dignity does. One may not desecrate something with dignity no matter what consequences come of it. But it is plausible that mere great value can be destroyed for the sake of dignity.

This leaves principle (3). The argument in my recent post (which I now have some reservations about, in light of some powerful criticisms from a colleague) points to the falsity of (3). Here is another, related reason. Suppose we find out that the Andromeda Galaxy is full of life, of great diversity and wonder, including both sentient and non-sentient organisms, but has nothing close to sapient life—nothing like a person. An evil alien is about to launch a weapon that will destroy the Andromeda Galaxy. You can either stop that alien or save a drowning human. It seems to me that either option is permissible. If I am right, then the value of the human is not much greater than that of the Andromeda Galaxy.

But now imagine that the Whirlpool Galaxy has an order of magnitude more life than the Andromeda Galaxy, with much greater diversity and wonder, but still with nothing sapient. Then even if the value of the human is greater than that of the Andromeda Galaxy, because it is not much greater, while the value of the Whirlpool Galaxy is much greater than that of the Andromeda Galaxy, it follows that the human does not have greater value than the Whirlpool Galaxy.

However, the Whirlpool Galaxy, assuming it has no sapience in it, lacks dignity. A sign of this is that it would be permissible to deliberately destroy it in order to save two similar galaxies from destruction.

Thus, the human is not greater in value than the Whirlpool Galaxy (in my story), but the human has dignity while the Whirlpool Galaxy lacks it.

That said, on my ontology, galaxies are unlikely to be substances (especially if the life in the galaxy is considered a part of the galaxy, since following Aristotle I doubt that a substance can be a proper part of a substance). So it is still possible that principle (3) is true for substances.

But I am not sure even of (3) in the case of substances. Suppose elephants are not persons, and imagine an alien sentient but not sapient creature which is like an elephant in the temporal density of the richness of life (i.e., richness per unit time), except that (a) its rich elephantine life lasts millions of years, and (b) there can only be one member of the kind, because they naturally do not reproduce. On the other hand, consider an alien person who naturally only has a life that lasts ten minutes, and has the same temporal density of richness of life that we do. I doubt that the alien person is much more valuable than the elephantine alien. And if the alien person is not much more valuable, then by imagining a non-personal animal that is much more valuable than the elephantine alien, we have imagined that some person is not more valuable than some non-person. Assuming all non-persons lack dignity and all persons have dignity, we have a case where an entity with dignity is not more valuable than an entity without dignity.

That said, I am not very confident of my arguments against (3). And while I am dubious of (3), I do accept:

  5. If x has dignity and y does not, then y is not more valuable than x.

I think the case of the human and the galaxy, and that of the alien person and the alien elephantine creature, are cases of incommensurability.

Wednesday, April 17, 2024

Desire-fulfillment theories of wellbeing

On desire-fulfillment (DF) theories of wellbeing, cases of fulfilled desire are an increment to utility. What about cases of unfulfilled desire? On DF theories, we have a choice point. We could say that unfulfilled desires don’t count at all—it’s just that one doesn’t get the increment from the desire being fulfilled—or that they are a decrement.

Saying that unfulfilled desires don’t count at all would be mistaken. It would imply, for instance, that it’s worthwhile to gain all the possible desires, since then one maximizes the amount of fulfilled desire, and there is no loss from unfulfilled desire.

So the DF theorist should count unfulfilled desire as a decrement to utility.

But now here is an interesting question. If I desire that p, and then get an increment x > 0 to my utility if p, is my decrement to utility if not-p just −x, or something different?

It seems that in different cases we feel differently. There seem to be cases where the increment from fulfillment is greater than the decrement from non-fulfillment. These may be cases of wanting something as a bonus or an adjunct to one’s other desires. For instance, a philosopher might want to win a pickleball tournament, and intuitively the increment to utility from winning is greater than the decrement from not winning. But there are cases where the decrement is at least as large as the increment. Cases of really important desires, like the desire to have friends, may be like that.

What should the DF theorist do about this? The observation above seems to do serious damage to the elegant “add up fulfillments and subtract non-fulfillments” picture of DF theories.

I think there is actually a neat move that can be made. We normally think of desires as coming with strengths or importances, and of course every DF theorist will want to weight the increments and decrements to utility with the importance of the desire involved. But perhaps what we should do is to attach two importances to any given desire: an importance that is a weight for the increment if the desire is fulfilled and an importance that is a weight for the decrement if the desire is not fulfilled.

So now it is just a psychological fact that each desire comes along with a pair of weights, and we can decide how much to add and how much to subtract based on the fulfillment or non-fulfillment of the desire.
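For concreteness, the two-weight picture can be sketched as a toy calculation. The particular desires and the numerical weights below are, of course, made-up illustrations, not part of any actual DF theory:

```python
# A toy sketch of a desire-fulfillment (DF) utility calculation in
# which each desire carries TWO weights: one for the increment if it
# is fulfilled, and one for the decrement if it is not.
# The desires and weight values below are purely illustrative.

def utility(desires):
    """Sum the weighted increments of fulfilled desires and the
    weighted decrements of unfulfilled ones."""
    total = 0.0
    for fulfilled, gain_weight, loss_weight in desires:
        total += gain_weight if fulfilled else -loss_weight
    return total

desires = [
    # A "bonus" desire (winning a pickleball tournament):
    # large fulfillment weight, small non-fulfillment weight.
    (False, 2.0, 0.5),
    # An "important" desire (having friends): the non-fulfillment
    # weight is at least as large as the fulfillment weight.
    (True, 10.0, 10.0),
]

print(utility(desires))  # 10.0 - 0.5 = 9.5
```

On this sketch, the single-weight picture is just the special case where the two weights of each desire are equal.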

If this is right, then we have an algorithm for a good life: work on your psychology to gain lots and lots of new desires with large fulfillment weights and small non-fulfillment weights, and to transform your existing desires to have large fulfillment weights and small non-fulfillment weights. Then you will have more wellbeing, since the fulfillments of desires will add significantly to your utility but the non-fulfillments will make little difference.

This algorithm results in an inhuman person, one who gains much if their friends live and are loyal, but loses nothing if their friends die or are disloyal. That’s not the best kind of friendship. The best kind of friendship requires vulnerability, and the algorithm takes that away.

Tuesday, April 16, 2024

Value and dignity

  1. If it can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life, then the life of a typical human being is not of greater value than that of the lion species.

  2. It can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life.

  3. So, the life of a typical innocent human being is not of greater value than that of the lion species.

  4. It is wrong to intentionally kill an innocent human being in order to save tigers, elephants and giraffes from extinction.

  5. It is not wrong to intentionally destroy the lion species in order to save tigers, elephants and giraffes from extinction.

  6. If (3), (4) and (5), then the right to life of innocent human beings is not grounded in how great the value of human life is.

  7. So, the right to life of innocent human beings is not grounded in how great the value of human life is.

I think the conclusion to draw from this is the Kantian one: that dignity, the property of human beings that grounds respect, is not a form of value. A human being has a dignity greater than that of all lions taken together, as indicated by the deontological claims (4) and (5), but a human being does not have a value greater than that of all lions taken together.

One might be unconvinced by (2). But if so, then tweak the argument. It is reasonable to accept a 25% chance of death in order to stop an alien attack aimed at killing off all the lions. If so, then on the plausible assumption that the value of all the lions, tigers, elephants and giraffes is at least four times that of the lions (note that there are multiple species of elephants and giraffes, but only one of lions), it is reasonable to accept a 100% chance of death in order to stop the alien attack aimed at killing off all four types of animals. But now we can easily imagine sixteen types of animals such that it is permissible to intentionally kill off the lions, tigers, elephants and giraffes in order to save the 16 types, but it is not permissible to intentionally kill a human in order to save the 16 types.
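The arithmetic of that tweak can be laid out explicitly. The stipulated values, and the simple linear trade-off between risk of death and value saved, are assumptions of the thought experiment, not claims about how such trade-offs actually work:

```python
# A toy version of the tweaked argument's arithmetic. All numbers
# are stipulations of the thought experiment.

lion_species_value = 1.0                   # stipulated value of the lion species
four_types_value = 4 * lion_species_value  # lions + tigers + elephants + giraffes,
                                           # assumed at least four times the lions alone

acceptable_risk_for_lions = 0.25           # reasonable risk of death to save the lions

# Exchange rate: risk of death one may reasonably accept per unit of value saved.
risk_per_unit_value = acceptable_risk_for_lions / lion_species_value

# On the same exchange rate, the risk justified to save all four types:
risk_for_four_types = risk_per_unit_value * four_types_value
print(risk_for_four_types)  # 1.0, i.e. a 100% chance of death
```

The point of the sketch is only that the 25% figure for one species scales to 100% for a good worth at least four times as much, which is what the tweaked version of premise (2) needs.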

Yet another argument against physician assisted suicide

Years ago, I read a clever argument against physician assisted suicide that held that medical procedures need informed consent, and informed consent requires that one be given relevant scientific data on what will happen to one after a procedure. But there is no scientific data on what happens to one after death, so informed consent of the type involved in medical procedures is impossible.

I am not entirely convinced by this argument, but I think it does point to a reason why helping to kill a patient is not an appropriate medical procedure. An appropriate medical procedure is one aiming at producing a medical outcome by scientifically-supported means. In the case of physician assisted suicide, the outcome is presumably something like respite from suffering. Now, we do not have scientific data on whether death causes respite from suffering. Seriously held and defended non-scientific theories about what happens after death include:

  a. death is the cessation of existence

  b. after death, existence continues in a spiritual way in all cases without pain

  c. after death, existence continues in a spiritual way in some cases with severe pain and in other cases without pain

  d. after death, existence continues in another body, human or animal.

The sought-after outcome, namely respite from severe pain, is guaranteed in cases (a), (b) and (d). However, first, the evidence for preferring these three hypotheses to hypothesis (c) is not scientific but philosophical or theological in nature, and hence should not be relied on by the medical professional as a medical professional in predicting the outcome of the procedure. Second, even on hypotheses (b) and (d), the sought-after outcome is produced by a metaphysical process that goes beyond the natural processes that are the medical professional’s tools of the trade. On those hypotheses, the medical professional’s means of assuring improvement of the patient’s subjective condition relies on, say, a God or some nonphysical reincarnational process.

One might object that the physician does not need to judge between after-life hypotheses like (a)–(d), but can delegate that judgment to the patient. But a medical professional cannot so punt to the patient. If I go to my doctor asking for a prescription of some specific medication, saying that I believe it will help me with some condition, he can only permissibly fulfill my request if he himself has medical evidence that the medication will have the requisite effect. If I say that an angel told me that ivermectin will help me with Covid, the doctor should ignore that. The patient rightly has an input into what outcome is worth seeking (e.g., is relief from pain worth it if it comes at the expense of mental fog) and how to balance risks and benefits, but the doctor cannot perform a medical procedure based on the patient’s evaluation of the medical evidence, except perhaps in the special case where the patient has relevant medical or scientific qualifications.

Or imagine that a patient has a curable fracture. The patient requests physician assisted suicide because the patient has a belief that after death they will be transported to a different planet, immediately given a new, completely fixed body, and will lead a life there that is slightly happier than their life on earth. A readily curable condition like that does not call for physician assisted suicide on anyone’s view. But if there is no absolute moral objection to killing as such and if the physician is to punt to the patient on spiritual questions, why not? On the patient’s views, after all, death will yield an instant cure to the fracture, while standard medical means will take weeks.

Furthermore, the medical professional should not fulfill requests for medical procedures which achieve their ends by non-medical means. If I go to a surgeon asking that my kidney be removed because Apollo told me that if I burn one of my kidneys on his altar my cancer will be cured, the surgeon must refuse. First, as noted in the previous paragraph, the surgeon cannot punt to the patient the question of whether the method will achieve the stated medical goal. Second, as also noted, even if the surgeon shares the patient’s judgment (the surgeon thinks Apollo appeared to her as well), the surgeon is lacking scientific evidence here. Third, and this is what I want to focus on here, while the outcome (no cancer) is medical, the means (sacrificing a kidney) are not medical.

Only in the case of hypothesis (a) can one say that the respite from severe pain is being produced by physical means. But the judgment that hypothesis (a) is true would be highly controversial (a majority of people in the US seem to reject the hypothesis), and as noted is not scientific.

Admittedly, in cases (b)–(d), the medical method as such does likely produce a respite from the particular pain in question. But that a respite from a particular pain is produced is insufficient to make a medical procedure appropriate: one needs information that some other pain won’t show up instead.

Note that this is not an argument against euthanasia in general (which I am also opposed to on other grounds), but specifically an argument against medical professionals aiding killing.

A version of computationalism

I’ve been thinking how best to define computationalism about the mind, while remaining fairly agnostic about how the brain computes. Here is my best attempt to formulate computationalism:

  • If a Turing machine with sufficiently large memory simulates the functioning of a normal adult human being with sufficient accuracy, then given an appropriate mapping of inputs and outputs but without any ontological addition of a nonphysical property or part, (a) the simulation will dispositionally behave like the simulated human at the level of macroscopic observation, and (b) the simulation will exhibit mental states analogous to those the simulated human would have.

The “analogous” in (b) allows the computationalist at least two differences between the mental states of the simulation and the mental states of the simulated. First, we might allow for the possibility that the qualitative features of mental states—the qualia—depend on the exact type of embodiment, so in vivo and in silico versions of the human will have different qualitative states when faced with analogous sensory inputs. Second, we probably should allow for some modest semantic externalism.

The “without any ontological addition” is relevant if one thinks that the laws of nature, or divine dispositions, are such that if a simulation were made, it would gain a soul or some other nonphysical addition. In other words, the qualifier helps to ensure that the simulation would think in virtue of its computational features, rather than in virtue of something being added.

Note that computationalism so defined is not entailed by standard reductive physicalism. For while the standard reductive physicalist is going to accept that a sufficiently accurate simulation will yield (a), they can think that real thought depends on physical features that are not had by the simulation (we could imagine, for instance, that to have qualia you need to have carbon, and merely simulated carbon is not good enough).

Moreover, computationalism so defined is compatible with some nonreductive physicalisms, say ones on which there are biological laws that do not reduce to laws of physics, as long as these biological laws are simulable, and the appropriate simulation will have the right mental states.

In fact, computationalism so defined is compatible with substance dualism, as long as the functioning of the soul is simulable, and the simulation would have the right mental states without itself having to have a soul added to it.

Computationalism defined as above is not the same as functionalism. Functionalism requires a notion of a proper function (even if statistically defined, as in Lewis). No such notion is needed above. Furthermore, this computationalism is not a thesis about every possible mind, but only about human minds. It seems pretty plausible that (perhaps in a world with different laws of nature than ours) it is possible to have a mind whose computational resources exceed those of a Turing machine.

Physician assisted suicide and martyrdom

  1. If physician assisted suicide is permissible, then it would have been permissible for early Christians facing being tortured to death by the Romans to kill themselves less painfully.

  2. It would not have been permissible for early Christians facing being tortured to death by the Romans to kill themselves less painfully.

  3. So, physician assisted suicide is not permissible.

The parity premise (1) is hard to deny. The best case for physician assisted suicide is where the patient strives to escape severe and otherwise inescapable pain while facing imminent death. That’s precisely the case of an early Christian being rounded up by Romans to be tortured to death.

Premise (2) is meant to be based on Christian tradition. The idea of suicide to escape pain could not have failed to occur to early Christians, given the cultural acceptance of suicide “to escape the shame of defeat and surrender” (Griffin 1986). It would have been culturally unsurprising, then, if a Christian were to fall on a sword with the Roman authorities at the door. But as far as I can tell, this did not happen. The best explanation is that the Christian tradition was strongly opposed to such “escape”.

There were, admittedly, cases of suicide to avoid rape (eventually rejected by St. Augustine, with great sensitivity to the tragedy), as well as cases where the martyr cooperated with the executioners (as Socrates is depicted having done).

Saturday, April 13, 2024

Legitimate and illegitimate authority

It is tempting to think that legitimate and illegitimate authorities are both types of a single thing. One might not want to call that single thing “authority”. After all, one doesn’t want to say that real and fake money are both types of money. But it sure seems like there is something X that legitimate and illegitimate authorities have in common with each other, and with nothing else. One imagines that a dictator and a lawfully elected president are in some way both doing the same kind of thing, “ruling” or whatever.

But this now seems to me to be mistaken. Or at least I can’t think what X could be. The only candidate I can think of is the trivial disjunctive property of being a legitimate authority or an illegitimate authority.

To a first approximation, one might think that the legitimate and illegitimate authorities both engage in the speech act of commanding. One might here try to object that “commanding” has the same problem as “authority” does: that it is not clear that legitimate and illegitimate commands have anything in common. This criticism seems to me to be mistaken: the two may not have any normative commonality, but they seem to be the same speech act.

However, imagine that Alice is the legitimate elected ruler of Elbonia, but Bob has put Alice in solitary confinement and set himself up as a dictator. Alice is not crazy: when she is in solitary confinement she isn’t commanding anyone as there is no one for her to command. Alice is a legitimate authority and Bob is an illegitimate authority, yet they do not have commanding, or ruling, or running the country in common. (Similarly, even without imprisonment, we could suppose Alice is a small government conservative who ran on a platform of not issuing any orders except in an emergency, and no emergency came up and she kept her promise.)

One might think that they have some kind of dispositional property in common. Alice surely would command if she were to get out of prison, after all. Well, maybe, but we need to specify the conditions quite carefully. Suppose she got out of prison but thought that no one would follow her commands, because she was still surrounded by Bob’s flunkies. Then she might not bother to command. It makes one look bad if one issues commands and they are ignored. Perhaps, though, we can say: Alice would issue commands if she thought they were needed and likely to be obeyed. But that can’t be the disposition that defines a legitimate or illegitimate authority. For many quite ordinary people in the country presumably have the exact same disposition: they too would issue commands if they thought they were needed and likely to be obeyed! But we don’t want to say that these people are either legitimate or illegitimate authorities.

We might argue that Alice isn’t a legitimate authority while imprisoned, because she is incapacitated, and incapacitation removes legitimate authority. One reason to be dubious of this answer is that on a plausible account of incapacitation, insanity is a form of incapacitation. But an insane illegitimate dictator is still an illegitimate authority, and so incapacitation does not remove the disjunctive property legitimate or illegitimate authority, but at most it removes legitimacy. Thus, Alice might still be an authority, but not a legitimate one. Another reason is this: we could imagine that in order to discourage people from incapacitating the legitimate ruler, the laws insist that one remains in charge if one’s incapacitation is due to an act of rebellion. Moreover, we might suppose that Bob hasn’t actually incapacitated Alice. He lets her walk around and give orders freely, but his minions kill anybody who obeys, so Alice doesn’t bother to issue any orders, because either they will be disobeyed or the obeyers will be killed.

Perhaps we might try to find a disposition in the citizenry, however. Maybe what makes Alice and Bob be the same kind of thing is that the citizens have a disposition to obey them. One worry is this: suppose the citizens after electing Alice become unruly, and lose the disposition to obey. It seems that Alice could still be the legitimate authority. I suppose someone could think, however, that some principles of democracy would imply that if there is no social disposition to obey someone, they are no longer an authority, legitimate or not. I am dubious. But there is another objection to finding a common disposition in the citizenry. The citizenry’s disposition to obey Bob could easily be conditional on them being unable to escape the harsh treatment he imposes on the disobedient and on him actually issuing orders. So the proposal now is something like this: z is a legitimate authority or an illegitimate authority if the citizenry would be disposed to obey z if z were to issue orders backed up by credible threats of harsh treatment. But it could easily be that a perfectly ordinary person z satisfies this definition: people would obey z if z were to issue orders backed up by credible threats!

Let’s try one more thing. What fake and real money have in common is that they are both objects made to appear to be real money. Could we say that Alice and Bob have this in common: they both claim to (“pretend to”, in the old sense of “pretend” that does not imply “falsely” as it does now) be the legitimate authority? Again, that may not be true. Alice is in solitary confinement. She has no one to make such claims to. Again, we can try to find some dispositional formulation, such as that she would claim it if she thought it beneficial to do so. But again many quite ordinary people would claim to be the legitimate authority if they thought it beneficial to do so. Moreover, Bob can be an illegitimate authority without any pretence to legitimacy! He need not claim, for instance, that people have a duty to obey him, backing up his orders by threat rather than by claimed authority. (It is common in our time that dictators pretend to a legitimacy that they do not have. But this is not a necessary condition for being an illegitimate authority.) Finally, if Carl is a crazy guy who claims to have been elected and no one, not even Carl’s friends and family, pays any attention to his raving, it does not seem that Carl is an illegitimate authority.

None of this denies the thesis that there is a similarity between illegitimate authority and legitimate authority. But it does not seem possible to turn that similarity into a non-disjunctive property that both of these share. Though maybe I am just insufficiently clever.

Thursday, April 11, 2024

Of snakes and cerebra

Suppose that you very quickly crush the head of a very long stretched-out serpent. Specifically, suppose your crushing takes less time than it takes for light to travel to the snake’s tail.

Let t be a time just after the crushing of the head.

Now causal influences propagate at the speed of light or less, the crushing of the head is the cause of death, and at t there wasn’t yet time for the effects of the crushing to have propagated to the tip of the tail. Furthermore, assume an Aristotelian account of life where a living thing is everywhere joined with its form or soul and death is the separation of the form from the matter. Then at t, because the effects of crushing haven’t propagated to the tail, the tail is joined with the snake’s form, even though the head is crushed and hence presumably no longer a part of the snake. (Imagine the head being annihilated for greater clarity.)

Now as long as any matter is joined to the form, the critter is alive. It follows that at time t, the snake is alive despite lacking a head. The argument generalizes. If we crush everything but the snake’s tail, including crushing all the major organs of the snake, the snake is alive despite lacking all the major organs, and having but a tail (or part of a tail).

So what? Well, one of the most compelling arguments against animalism—the view that people are animals—is that:

  1. People can survive as just a cerebrum (in a vat).

  2. No animal can survive as just a cerebrum.

  3. So, people are not animals.

But presumably the reason for thinking that an animal can’t survive as just a cerebrum is that a cerebrum makes an insufficient contribution to the animal functions. But the tail of a snake makes an even less significant contribution to the animal functions. Hence:

  4. If a snake can survive as just a tail, a mammal can survive as just a cerebrum.

  5. A snake can survive as just a tail.

  6. So, a mammal can survive as just a cerebrum.

Objection: Only physical effects are limited to the speed of light in their propagation, and the separation of form from matter is not a physical effect, so that instantly when the head is crushed, the form leaves the snake, all at once at t.

Response: Let z be the spacetime location of the tip of the snake’s tail at t. According to the objection, at z the form is no longer present. Now, given my assumption that crushing takes less time than it takes for light to travel to the snake’s tail, and that in one reference frame z is just after the crushing, there will also be a reference frame according to which z is before the crushing has even started. If at z the form is no longer present, then the form has left the tip of the tail before the crushing.

In other words, if we try to get out of the initial argument by supposing that loss of form proceeds faster than light, then we have to admit that in some reference frames, loss of form goes backwards in time. And that seems rather implausible.
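The frame-relativity of the temporal order of spacelike-separated events can be made concrete with the Lorentz transformation. Here is a minimal sketch in units where c = 1, with made-up numbers (a 100-light-second snake) chosen purely for illustration: the crushing happens at the origin, the tail-tip event z is simultaneous-to-just-after it in the snake’s rest frame, and in a modestly boosted frame z comes out earlier than the crushing.

```python
import math

def t_prime(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at speed v (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Event 1: the crushing of the head, at the origin (t = 0, x = 0).
# Event z: the tip of the tail just after the crushing, in the snake's rest frame:
# t = 1 second, x = 100 light-seconds (illustrative numbers only).
t_z, x_z = 1.0, 100.0

# The separation is spacelike (|x| > |t| with c = 1), so the order is frame-relative.
assert abs(x_z) > abs(t_z)

print(t_prime(t_z, x_z, 0.0))  # rest frame: positive, so z is after the crushing
print(t_prime(t_z, x_z, 0.1))  # frame moving at 0.1c: negative, so z is before it
```

With v = 0.1 the transformed time is proportional to (1 − 0.1 × 100) = −9, which is the backwards-in-time ordering the response appeals to.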

Tuesday, April 9, 2024

Absolute reference frame

Some philosophers think that notwithstanding Special Relativity, there is a True Absolute Reference Frame. Suppose this is so. This reference frame, surely, is not our reference frame. We are on a spinning planet revolving around a sun that orbits the center of our galaxy. It seems pretty likely that if there is an absolute reference frame, then we are moving with respect to it at least at the speed of the flow of the Local Group of galaxies due to the mass of the Laniakea Supercluster of galaxies, i.e., at around 600 km/s.

Given this, our measurements of distance and time are actually going to be a little bit objectively off the true values, which are the ones that we would measure if we were in the absolute reference frame. The things we actually measure here in our solar system will be objectively off due to time dilation and space contraction by about two parts per million, if my calculations are right. That means that our best possible clocks will be objectively about a minute(!) off per year, and our best meter sticks will be about two microns off. Not that we would notice these things, since the absolute reference frame is not observable, so we can’t compare our measurements to it.

As a result, we have a choice between two counterintuitive claims. Either we say that duration and distance are relative, or we have to say that our best machining and time measuring is necessarily off, and we don’t know by how much, since we don’t know what the True Absolute Reference Frame is.
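The figures above are easy to check with a few lines of Python. This is just a back-of-the-envelope sketch: the 600 km/s is the flow speed cited above, and everything else is the standard Lorentz factor.

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 600_000.0       # assumed speed relative to the absolute frame, m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
dilation = gamma - 1.0  # fractional error in our durations and lengths

print(f"fractional error: {dilation:.2e}")  # about 2 parts per million
print(f"clock drift per year: {dilation * 365.25 * 24 * 3600:.0f} s")  # about a minute
print(f"meter stick error: {dilation * 1e6:.1f} microns")  # about 2 microns
```

The fractional error comes out to about 2.0 × 10⁻⁶, which over a year of roughly 3.16 × 10⁷ seconds is a bit over a minute, matching the estimates in the post.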

Monday, April 8, 2024

Eclipse

The day started off all cloudy, but the clouds got less dense, and then when the eclipse in our front yard reached totality, we had a big break in the clouds.




The first picture has a sunspot in the middle. In the totality picture, slightly to the right of the bottom of the sun there is a hint of a reddish prominence which in my 8" telescope had lovely structure. A quick measurement from the photo shows that the prominence is about seven times the size of the earth.

Saturday, April 6, 2024

Plastic belt buckle

Quite a while back, I came across a discarded belt with a broken buckle. I kept it in my "long stringy things" box in the garage until I could figure out what to do with it. Finally, today, I designed and 3D printed a new buckle for it, along with plastic rivets. I replaced all the metal, and now I have a no-metal belt that hopefully can clear airline security without being removed (not tested yet).





Friday, April 5, 2024

A weaker epiphenomenalism

A prominent objection to epiphenomenalist theories of qualia, on which qualia have no causal efficacy, is that then we have no way of knowing that we had a quale of red. For a redness-zombie, who has no quale of red, would have the very same “I am having a quale of red” thought as me, since my “I am having a quale of red” thought is not caused by the quale of red.

There is a slight tweak to epiphenomenalism that escapes this objection, and the tweaked theory seems worth some consideration. Instead of saying that qualia have no causal efficacy, on our weaker epiphenomenalism we say that qualia have no physical effects. We can then say that my “I am having a quale of red” thought is composed of two components: one of these components is a physical state ϕ2 and the other is a quale q2 constituting the subjective feeling of thinking that I am having a quale of red. After all, conscious thoughts plainly have qualia, just as perceptions do, if there are qualia at all. We can now say that the physical state ϕ2 is caused by the physical correlate ϕ1 of the quale of red, while the quale q2 is wholly or partly caused by the quale q1 of red.

As a result, my conscious thought “I am having a quale of red” would not have occurred if I lacked the quale of red. All that would have occurred would be the physical part of the conscious thought, ϕ2, which physical part is what is responsible for further physical effects (such as my saying that I am having a quale of red).

If this is right, then the induced skepticism about qualia will be limited to skepticism with respect to unconscious thoughts about qualia. And that’s not much of a skepticism!

Thursday, April 4, 2024

Divine thought simplicity

One of the motivations for denying divine simplicity is the plausibility of the claim that:

  1. There is a multiplicity of divine thoughts, which are a proper part of God.

But it turns out there are reasons to reject (1) independent of divine simplicity.

Here is one reductio of the distinctness of God and God’s thoughts.

  2. God is distinct from his thoughts.

  3. If x’s thoughts are distinct from x, then x causes x’s thoughts.

  4. Everything caused by God is a creature.

  5. So, God’s thoughts are creatures.

  6. Every creature explanatorily depends on a divine rational decision to create it.

  7. A rational decision explanatorily depends on thoughts.

  8. So, we have an ungrounded infinite explanatory regress of thoughts.

  9. Ungrounded infinite explanatory regresses are impossible.

  10. Contradiction!

Here is another that also starts with 2–5 but now continues:

  11. God’s omniscience is identical with or dependent on God’s thoughts.

  12. None of God’s essential attributes are identical with or dependent on any creatures.

  13. Omniscience is one of God’s essential attributes.

  14. Contradiction!

Intending the bad as such

Here is a plausible thesis:

  1. You should never intend to produce a bad effect qua bad.

Now, even the most hardnosed deontologist (like me!) will admit that there are minor bads which it is permissible to intentionally produce for instrumental reasons. If a gun is held to your head, and you are told that you will die unless you come up to a stranger and give them a moderate slap with a dead fish, then the slap is the right thing to do. And if the only way for you to survive a bear attack is to wake up your fellow camper who is much more handy with a rifle than you are, and the only way to wake them up is to poke them with a sharp stick, then the poke is the right thing. But these cases are not counterexamples to (1), since while the slap and poke are bad, one is not intending them qua bad.

However, there are more contrived cases where it seems that you should intend to produce a bad effect qua bad. For instance, suppose that you are informed that you will die unless you do something clearly bad to a stranger, but it is left entirely up to you what the bad thing is. Then it seems obvious that the right thing to do is to choose the least bad thing you can think of—the lightest slap with a dead fish, perhaps, that still clearly counts as bad—and do that. But if you do that, then you are intending the bad qua bad.

Yet I find (1) plausible. I feel a pull towards thinking that you shouldn’t set your will on the bad qua bad, no matter what. However it seems weird to think that it would be right to give a stranger a moderate slap with a dead fish if that was specifically what you were required to do to save your life, but it would be wrong to give them a mild slap if it were left up to you what bad thing to do. So, very cautiously, I am inclined to deny (1) in the case of minor bads.

Tuesday, April 2, 2024

Abstaining from goods

There are many times when we refrain from pursuing an intrinsic good G. We can classify these cases into two types:

  1. we refrain despite G being good, and

  2. we refrain because G is good.

The “despite” cases are straightforward, such as when one refrains from reading a novel for the sake of grading exams, despite the value of reading the novel.

The “because” cases are rather more interesting. St Augustine gives the example of celibacy for the sake of Christ: it is because marriage is good that giving it up for the sake of Christ is better. Cases of religious fasting are often like this, too. Or one might refrain from something of value in order to punish oneself, again precisely because the thing is of value. These are self-sacrificial cases.

One might think another type of example of a “because” case is where one refrains from pursuing G now in order to obtain it by a better means, or in better circumstances, in the future. For instance, one might refrain from eating a cake on one day in order to have the cake on the next day which is a special occasion. Here the value of the cake is part of the reason for refraining from pursuit. On reflection, however, I think this is a “despite” case. For we should distinguish between the good G1 of having the cake now and the good G2 of having the cake tomorrow. Then in delaying one does so despite the good of G1 and because of the good of G2. The good of G1 is not relevant, unless this becomes sacrificial.

I don’t know if all the “because” cases are self-sacrificial in the way celibacy is. I suspect so, but I would not be surprised if a counterexample turned up.

Aristotelian functionalism

Rob Koons and I have argued that the best functionalist theory of mind is one where the proper function of a system is defined in an Aristotelian way, in terms of innate teleology.

When I was teaching on this today, it occurred to me (it should have been obvious earlier) that this Aristotelian functionalism has the intuitive consequence that only organisms are minded. For although innate teleology may be had by substances other than organisms, inorganic Aristotelian material substances do not have the kind of teleology that would make for mindedness on any plausible functionalism. Here I am relying on Aristotle’s view that (maybe with some weird exceptions, like van Inwagen’s snake-hammock) artifacts—presumably including computers—are not substances.

If this is right, then the main counterexamples to functionalism disappear:

Recall, for instance, Leibniz’s mill argument: if a machine can be conscious, a mill full of giant gears could be conscious, and yet as we walked through such a mill, it would be clear that there is no consciousness anywhere. But now suppose we were told that the mill has an innate function (not derived from the purposes of the architect) which governed the computational behavior of the gears. We would then realize that the mill is more than just what we can see, and that would undercut the force of the Leibnizian intuition. In other words, it is not so hard to believe that a mill with innate purpose is conscious.

Further, note that perhaps the best physicalist account of qualia is that qualia are grounded in the otherwise unknowable categorical features of the matter making up our brains. This, however, has a somewhat anti-realist consequence: the way our experiences feel has nothing to do with the way the objects we are experiencing are. But an Aristotelian functionalist can tell this story. If I have a state whose function is to represent red light, then I have an innate teleology that makes reference to red light. This innate teleology could itself encode the categorical features of red light, and since this innate teleology, via functionalism, grounds our perception of red light, our perception of red light is “colored” not just by the categorical features of our brains, but by the categorical features of red light (even if we are hallucinating the red light). This makes for a more realist theory of qualia, on which there is a non-coincidental connection between the external objects and how they seem to us.

Observe, also, how the Aristotelian story has advantages of panpsychism without the disadvantages. The advantage of panpsychism is that the mysterious gap between us and electrons is bridged. The disadvantages are two-fold: (a) it is highly counterintuitive that electrons are conscious (the gap is bridged too well) and (b) we don’t have a plausible story about how the consciousness of the parts gives rise to a consciousness of the whole. But on Aristotelian functionalism, it is teleology that we have in common with electrons, so we do not need to say that electrons are conscious—but because mind reduces to teleological function, though not of the kind electrons have, we still have bridging. And we can tell exactly the kind of story that non-Aristotelians do about how the function of the parts gives rise to the consciousness of the whole.

There is, however, a serious downside to this Aristotelian functionalism. It cannot work for the simple God of classical theism. But perhaps we can put a lot of stress on the idea that “mind” is only said analogously between creatures and God. I don’t know if that will work.

Functionalism and organizations

I am quite convinced that if standard (non-evolutionary, non-Aristotelian) functionalism is true, then complex organizations such as universities and nations have minds and are conscious. For it is clear to me that dogs are conscious, and the functioning of complex organizations is more intellectually sophisticated than that of dogs, and has the kind of desire-satisfaction drivers that dogs and maybe even humans have.

(I am pretty sure I posted something like this before, but I can’t find it.)

Wednesday, March 27, 2024

Knowledge of qualia

Suppose epiphenomenalism is true about qualia, so qualia are nonphysical properties that have no causal impact on anything. Let w0 be the actual world and let w1 be a world which is exactly like the actual world, except that (a) there are no qualia (so it’s a zombie world) and (b) instead of qualia, there are causally inefficacious nonphysical properties that have a logical structure isomorphic to the qualia of our world, and that occur in the corresponding places in the spatiotemporal and causal nexuses. Call these properties “epis”.

The following seems pretty obvious to me:

  1. In w1, nobody knows about the epis.

But the relationship of our beliefs about qualia to the qualia themselves seems to be exactly like the relationship of the denizens of w1 to the epis. In particular, neither are any of their beliefs caused by the obtaining of epis, nor are any of our beliefs caused by the obtaining of qualia, since both are epiphenomenal. So, plausibly:

  2. If in w1, nobody knows about the epis, then in w0, nobody knows about the qualia.

Conclusion:

  3. Nobody knows about the qualia.

But of course we do! So epiphenomenalism is false.

Tuesday, March 26, 2024

Today's sunspots

Today I was testing the solar filter I got for the eclipse. (300mm f/5.6, cropped).

Brains, bodies and souls

There are four main families of views of who we are:

  1. Bodies (or organisms)

  2. Brains (or at least cerebra)

  3. Body-soul composites

  4. Souls.

For the sake of filling out logical space, and maybe getting some insight, it’s worth thinking a bit about what other options there might be. Here is one that occurred to me:

  5. Brain-soul (or cerebrum-soul) composites.

I suppose the reason this is not much (if at all) talked about is that if one believes in a soul, the body-soul composite or soul-only views seem more natural. Why might one accept a brain-soul composite view? (For simplicity, I won’t worry about the brain-cerebrum distinction.)

Here is one line of thought. Suppose we accept some of the standard arguments for dualism, such as that matter can’t be conscious or that matter cannot think abstract thoughts. This leads us to think the mind cannot be entirely material. But at the same time, there is some reason to think the mind is at least partly material: the brain’s activity sure seems like an integral part of our discursive thought. Thus, the dualist might have reason to say that the mind is a brain-soul composite. At the same time, there is a Cartesian line of thought that we should be identified with the minimal entity hosting our thoughts, namely the mind. Putting all these lines of thought together, we conclude that we are minds, and hence brain-soul composites.

Now I don’t endorse (5). The main ethical arguments against (2) and (4), namely that they don’t do justice to the deep ethical significance of the human body, apply against (5) as well. But if one is not impressed by these arguments, there really is some reason to accept (5).

Furthermore, exploring new options, like the brain-soul composite option, sometimes may give new insights into old options. I am now pretty much convinced that the mind is something like the brain plus soul (or maybe cerebrum plus intellectual part of soul or some other similar combination). Since it is extremely plausible that all of my mind is a part of me, this gives me a new reason to reject (4), the view that I am just a soul. At the same time, I do not think it is necessary to hold that I am just a mind, so I can continue to accept view (3).

The view that the mind is the brain plus soul has an interesting consequence for the interim state, the state of the human being between death and the resurrection of the body. I previously thought that the human being in the interim state is in an unfortunately amputated state, having lost all of the body. But if we see the brain as a part of the mind, the amputated nature of the human being in the interim state is even more vivid: a part of the human mind is missing in the interim state. This gives a better explanation of why Paul was right to insist on the importance of the physical resurrection—we cannot be fully in our mind without at least some of our physical components.

Walking

Unfortunately, most of the forms of exercise I do are too intense for me to think hard while exercising, though perhaps my subconscious is doing something. The main exception is that when swimming, I can do some thinking, but I am also counting lengths, and I can’t do both at once very well. (I have a project that I keep on putting off where I’d interface a BLE beacon on my person with a phone out of the water and use that to count lengths, but I haven’t done it yet. I suppose I could also get a watch that counts lengths.) Occasionally, I can also do some less deep thinking—say, preparing for class—while biking on flat pavement. But I can’t do serious thinking while rock climbing (however, there is good down time for thinking and writing between climbs), or playing badminton, or kayaking.

Last week, Baylor had a step challenge, so I ended up taking some longer brisk walks, alone. (I walk a fair amount with family.) It reminded me of how it is possible to do a lot of thinking while walking. That’s really nice! Though there is the danger that my achievement-oriented personality will push me to keep on increasing my walking speed, to the point where I won't be able to do deep thinking any more.

Monday, March 25, 2024

Representation and truth

For a while, I’ve been thinking of a teleological/normative account of representation. The basic idea is that:

  1. State S represents reality being such that r if and only if one’s teleology specifies that one should be in state S only if r.

But I’ve also been worried that this makes representation much too common in the world. If a bacterium’s nature says that some behavior should be triggered only under certain circumstances, then on this account, the bacterium’s behavior represents the occurrence of those circumstances.

I am kind of willing to bite that bullet. But perhaps I don’t need to.

For a long time I’ve been sensitive to the difference between a proposition p and the second-order proposition that p is true, but this sensitivity has largely been a matter of nitpicking. But today I realized that this distinction may help save the teleological account of representation with a very small tweak:

  2. State S represents a proposition p if and only if one’s teleology specifies that one should be in state S only if p is true.

It is plausible that only higher organisms have a teleology that makes reference to truth as such.

Remark 1: If we want, we can have both (1) and (2) by distinguishing between “simple representation” and “alethic representation”. Alethic representation is then related to simple representation as follows:

  3. State S alethically represents reality being such that r if and only if S simply represents reality being such that it is true that r.

Remark 2: Given Leon Porter’s argument that truth is not a physical property, it is interesting to note that on the alethic version, representation requires a being that has normative properties that make reference to something nonphysical. In particular, this kind of normativity cannot be grounded in evolution.

Identity and eternity

Suppose that you are an immortal who has lived for an infinite amount of time, and each year your body replaces all its cells with new cells constructed from the matter in your food. Furthermore, you only eat local food, and at the beginning of each year it is randomly chosen by a coin toss whether you will live in Australia or America. Moreover, in the world we are imagining, the food in Australia and America has no matter in common.

Consider these two plausible principles:

  1. If x and y are people living in worlds w1 and w2, respectively, and at no time t in their lives do they have any matter in common, then x ≠ y.

  2. The identity of an already existing person never depends on what will happen to that person in the future.

But now whether you exist in year n does not depend on what happens in year n, since you are immortal and by (2) your identity was already determined in year n − 1. By the same token, whether you exist in year n does not depend on what happens in year n − 1, and so on. In particular, it follows that whether you exist now does not depend on any particular coin toss. However, by (1) whether you exist does depend on the totality of the coin tosses, since if all the coin tosses go differently from how they actually do, the matter in the body would always be different, and hence by (1) the person would be different.

But it is quite paradoxical that your existence depends on the coin tosses collectively and yet each one is irrelevant. This points to the hypothesis that beings that are significantly changeable cannot be eternal (and slightly supports causal finitism).

If you think that your identity also depends on your memories, add that in Australia and America you form different memories. If you think that your identity depends on your soul, then instead of running the argument about a human being, run it against something soulless.

If you think all complex objects have something like soul (as I do), the argument may not impress.

Friday, March 22, 2024

Tables and organisms

A common-sense response to Eddington’s two table problem is that a table just is composed of molecules. This leads to difficult questions of exactly which molecules it is composed of. I assume that at table boundaries, molecules fly off all the time (that’s why one can smell a wooden table!).

But I think we could have an ontology of tables where we deny that tables are composed of molecules. Instead, we simply say that tables are grounded in the global wavefunction of the universe. We then deny precise localization for tables, recognizing that nothing is localized in our quantum universe. There is some approximate shape of the table, but this shape should not be understood as precise—there is no such thing as “the set of spacetime points occupied by the table”, unless perhaps we mean something truly vast (since the tails of wavefunctions spread out very far very fast).

That said, I don’t believe in tables, so I don’t have skin in the game.

But I do believe in organisms. Similar issues come up for organisms as for tables, except that organisms (I think) also have forms or souls. So I wouldn’t want to even initially say that organisms are composed of molecules, but that organisms are partly composed of molecules (and partly of form). That still generates the same problem of which exact molecules they are composed of. And in a quantum universe where there are no sharp facts about particle number, there probably is no hope for a good answer to that question.

So maybe it would be better to say that organisms are not even partly composed of molecules, but are instead partly grounded in the global wavefunction of the universe, and partly in the form. The form delineates which aspects of the global wavefunction are relevant to the organism in question.

Monday, March 18, 2024

Simplicity and Newton's inverse square law

When I give talks about the way modern science is based on beauty, I give the example of how everyone will think Newton’s Law of Gravitation

  1. F = Gm₁m₂/r²

is more plausible than what one might call “Pruss’s Law of Gravitation”

  2. F = Gm₁m₂/r^2.00000000000000000000000001

even if they fit the observational data equally well, and even if (2) fits the data slightly better.

I like the example, but I’ve been pressed on this example at least once, because I think people find the exponent 2 especially plausible in light of the idea of gravity “spreading out” from a source in concentric shells whose surface areas are proportional to r². Hence, it seems that we have an explanation of the superiority of (1) to (2) in physical terms, rather than in terms of beauty.

But I now think I’ve come to realize why this is not a good response to my example. I am talking of Newtonian gravity here. The “spreading out” intuition is based on the idea of a field of force as something energetic coming out of a source and spreading out into space around it. But that picture makes little sense in the Newtonian context where the theory says we have instantaneous action at a distance. The “spreading out” intuition makes sense when the field of force is emanating at a uniform rate from the source. But there is no sense to the idea of emanation at a uniform rate when we have instantaneous action at a distance.

The instantaneous action at a distance is just that: action at a distance—one thing attracting another at a distance. And the force law can then have any exponent we like.

With General Relativity, we’ve gotten rid of the instantaneous action at a distance of Newton’s theory. But my point is that in the Newtonian context, (1) is very much to be preferred to (2).

Beauty and simplicity in equations

Often, the kind of beauty that scientists, and especially physicists, look for in the equations that describe nature is taken to have simplicity as a primary component.

While simplicity is important, I wonder if we shouldn’t be careful not to overestimate its role. Consider two theories about some fundamental force F between particles with parameters α1 and α2 and distance r between them:

  1. F = 0.8846583561447518148493143571151840833168115852975428057361124296α₁α₂/r²

  2. F = 0.88465835614475181484931435711518α₁α₂/r^(2 + 2⁻⁶⁴)

In both theories, the constants up front are meant to be exact and (I suppose) have no significantly more economical expression. By standard measures of simplicity, where simplicity is understood in terms of the brevity of expression, (2) is a much simpler theory. But my intuition is that unless there is some special story about the significance of the 2 + 2⁻⁶⁴ exponent, (1) is the preferable theory.

Why? I think it’s because of the beauty in the exponent 2 in (1) as opposed to the nasty 2 + 2⁻⁶⁴ exponent in (2). And while the constant in (2) is simpler by about 106 bits, that additional simplicity does not make for significantly greater beauty.
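The rough arithmetic behind “about 106 bits” is easy to check: a decimal digit carries log₂(10) ≈ 3.32 bits, so the 64-digit constant in (1) takes roughly twice as many bits to write down as the 32-digit constant in (2). A quick sketch (the helper name is mine, purely illustrative):

```python
import math

# Bits needed to write down d decimal digits: d * log2(10) ~ 3.32 * d.
def digit_bits(d):
    return d * math.log2(10)

# Theory (1): a 64-digit constant and the exponent 2.
# Theory (2): a 32-digit constant and the exponent 2 + 2**-64.
bits_1 = digit_bits(64)  # ~ 213 bits for the constant in (1)
bits_2 = digit_bits(32)  # ~ 106 bits for the constant in (2)

saving = bits_1 - bits_2  # (2)'s constant is simpler by ~ 106 bits
```

So the brevity advantage of (2)’s constant is almost exactly the 106 bits mentioned above, which is what makes the example a clean test case of simplicity versus beauty.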

Friday, March 15, 2024

A tweak to the Turing test

The Turing test for machine thought has an interrogator communicate (by typing) with a human and a machine both of which try to convince the interrogator that they are human. The interrogator then guesses which is human. We have good evidence of machine thought, Turing claims, if the machine wins this “imitation game” about as often as the human. (The original formulation has some gender complexity: the human is a woman, and the machine is trying to convince the interrogator that it, too, is a woman. I will ignore this complication.)

Turing thought this test would provide a posteriori evidence that a machine can think. But we have a good a priori argument that a machine can pass the test. Suppose Alice is a typical human, so that in competition with other humans she wins the game about half the time. Suppose that for any finite sequence Sn of n questions and n − 1 answers of reasonable length (i.e., of a length not exceeding how long we allow for the game—say, a couple of hours) ending on a question that could be a transcript of the initial part of an interrogation of Alice, there is a fact of the matter as to what answer Alice would make to the last question. Then there is a possible very large, but finite, machine that has a list of all such possible finite sequences and the answers Alice would make, and that at any point in the interrogation answers just as Alice would. That machine would do as well as Alice at the imitation game, so it would pass the Turing test.

Note that we do not need to know what Alice would say in response to the last question of Sn. The point isn’t that we could build the machine—we obviously couldn’t, just because the memory capacity required would be larger than the size of the universe—but that such a machine is possible. We could suppose that the database in the machine was constructed at random, with the builders just getting amazingly lucky and matching Alice’s dispositions.
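The mechanism of such a lookup-table machine can be sketched in a few lines. Everything below is a hypothetical toy: the sample entries stand in for the astronomically many entries of the real table, which is keyed by the transcript so far:

```python
# Toy lookup-table machine: the "database" maps each possible
# transcript-so-far (a tuple of questions and answers ending in a
# question) to the answer Alice would give. These entries are
# illustrative stand-ins; the real table would be astronomically large.
database = {
    ("What is the most important thing in life?",):
        "It is living in such a way that you have no regrets.",
    ("What is the most important thing in life?",
     "It is living in such a way that you have no regrets.",
     "Why do you say that?"):
        "Because that is how I was brought up.",
}

def answer(transcript):
    # No reasoning or understanding happens here: the machine just
    # looks up the stored line for the current transcript.
    return database[tuple(transcript)]

reply = answer(["What is the most important thing in life?"])
```

The interrogation state is the whole transcript, so the machine needs no memory of its own beyond the table itself, which is exactly why its success would tell us nothing about thought.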

The machine would not be thinking. Matching the current stage of the interrogation against the database and reciting the stored answer is not thinking. The point is obvious. Suppose that S1 consists of the question “What is the most important thing in life?” and the database gives the rote answer “It is living in such a way that you have no regrets.” It’s obvious that the machine doesn’t know what it’s saying.

Compare this to a giant chess-playing machine which encodes, for each of the 10⁴⁰ legal chess positions, the optimal next move. That machine doesn’t think about playing chess.

If the Turing test is supposed to be an a posteriori test for the possibility of machine intelligence, I propose a simple tweak: We limit the memory capacity of the machine to be within an order of magnitude of human memory capacity. This avoids cases where the Turing test is passed by rote recitation of responses.

Turing himself imagined that doing well in the imitation game would require less memory capacity than the human brain had, because he thought that only “a very small fraction” of that memory capacity was used for “higher types of thinking”. Specifically, Turing surmised that 10⁹ bits of memory would suffice to do well in the game against “a blind man” (presumably because it would save the computer from having to have a lot of data about what the world looks like). So in practice my modification is one that would not decrease Turing’s own confidence in the passability of his test.

Current estimates of the memory capacity of the brain are of the order of 10¹⁵ bits, at the high end of the estimates in Turing’s time (and Turing himself inclined to the low end of the estimates, around 10¹⁰). The model size of GPT-4 has not been released, but it appears to be near but a little below the human brain capacity level. So if something with the model size of GPT-4 were to pass the Turing test, it would also pass the modified Turing test.

Technical comment: The above account assumed there was a fact about what answer Alice would make in a dialogue that started with Sn. There are various technical issues with regard to this. Given Molinism or determinism, these technical issues can presumably be overcome (we may need to fix the exact conditions in which Alice is supposed to be undergoing the interrogation). If (as I think) neither Molinism nor determinism is true, things become more complicated. But there are presumably statistical regularities as to what Alice is likely to answer to Sn, and the machine’s database could simply encode an answer that was chosen by the machine’s builders at random in accordance with Alice’s statistical propensities.

Wednesday, March 13, 2024

Do you and I see colors the same way?

Suppose that Mary and Twin Mary live almost exactly duplicate lives in an almost black-and-white environment. The exception to the duplication of the lives and to the black-and-white character of the environment is that on their 18th birthday, each sees a colored square for a minute. Mary sees a green square and Twin Mary sees a blue square.

Intuitively, Mary and Twin Mary have different phenomenal experiences on their 18th birthday. But while I acknowledge that this is intuitive, I think it is also deniable. We might suppose that they simply have a “new color” experience on their 18th birthday, but it is qualitatively the same “new color” experience. Maybe what determines the qualitative character of a color experience is not the physical color that is perceived, but the relationship of this color to the whole body of our experience. Given that green and blue have the same relationship to the other (i.e., monochromatic) color experiences of Mary and Twin Mary, it may be that they appear the same way.
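The relationalist idea can be put as a toy model: if the qualitative character of an experience is just its position in a network of relations to other experiences, then Mary’s and Twin Mary’s networks come out structurally identical. The relations and labels below are illustrative stand-ins, not real psychophysics:

```python
# Toy relationalist model: an experience's character is its position in
# a network of relations to the subject's other experiences. Here the
# only relation modeled is "unlike", which the one chromatic experience
# bears equally to every monochrome experience.
def experience_network(new_color):
    shades = ["black", "white", "grey"]
    return {(new_color, s): "unlike" for s in shades}

def structure(net, label):
    # Relabel the distinguished experience so that only the network's
    # structure, not the physical color, is compared.
    return {(("NEW" if a == label else a), b): rel
            for (a, b), rel in net.items()}

mary = structure(experience_network("green"), "green")
twin_mary = structure(experience_network("blue"), "blue")

same = (mary == twin_mary)  # the two networks are structurally identical
```

On this model the green and blue experiences occupy the same network position, which is the sense in which they could be qualitatively the same “new color” experience.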

If this kind of relationalism is correct, then it is very likely that when you and I look at the same blue sky, our experiences are qualitatively different. Your phenomenal experience is defined by its position in the network of your experiences and mine is defined by its position in the network of my experiences. Since these networks are different, the experiences are different. Somehow I find this idea somewhat plausible. It is even more plausible for some experiences other than colors. Take tastes and smells. It’s not unlikely that fried cabbage tastes different to me because in the network of my experiences it has connections to experiences of my grandmother’s cooking that it does not have in your network.

Such a relationalism could help explain the wide variation in sensory preferences. We normally suppose that people disagree on which tastes they like and dislike. But what if they don’t? What if instead the phenomenal tastes are different? What if banana muffins, which I dislike, taste different to me than they do to most people, because they have a place in a different network of experiences, and if banana muffins tasted to me the way they do to you, I would like them just as much?

In his original Mary thought experiment, Jackson says that monochrome Mary upon experiencing red for the first time learns what experience other people were having when they saw a red tomato. If the above hypothesis is right, she doesn’t learn that at all. Other people’s experiences of a red tomato would be very different from Mary’s, because Mary’s monochrome upbringing would place the red tomato in a very different network of experiences from that which it has in other people’s networks of experiences. (I don’t think this does much damage to the thought experiment as an argument against physicalism. Mary still seems to learn something—what it is to have an experience occupying such-and-such a spot in her network of experiences.)

More fun with monochrome Mary

Here’s a fun variant of the black-and-white Mary thought experiment. Mary has been brought up in a black-and-white environment, but knows all the microphysics of the universe from a big book. One day she sees a flash of green light. She gains the phenomenal concept α that applies to the specific look of that flash. But does Mary know what green light looks like?

You might think she knows because her microphysics book will inform her that on such-and-such a day, there was a flash of green light in her room, and so she now knows that a flash of green light has appearance α. But that is not quite right. A microphysics book will not tell Mary that there was a flash of green light in her room. It will tell her that there was a flash of green light in a room with such-and-such physical properties. Whether she can deduce from these properties and her observations that this was her room depends on what the rest of the universe is like. If the universe contains Twin Mary who lives in a room with exactly the same monochromatically observable properties as Mary’s room, but where at the analogous time there is a flash of blue light, then Mary will have no way to resolve the question of whether she is the woman in the room with the green flash or in the room with the blue flash. And so, even though Mary knows all the microphysical facts about the world, Mary doesn’t know whether it is a green flash or a blue flash that has appearance α.

This version of the Mary thought experiment seems to show that there is something very clear, specific and even verbalizable (since Mary can stipulate a term in her language to express the concept α, though if Wittgenstein is right about the private language argument, we might require a community of people living in Mary’s predicament) that can remain unknown even when one knows all the microphysical facts and has all the relevant concepts and has had the relevant experiences: Whether it is green or blue light that has appearance α?

This seems to do quite a bit of damage to physicalism, by showing that the correlation between phenomenal appearances and physical facts is a fact about the world going beyond microphysics.

But now suppose Joan lives on Earth in a universe which contains both Earth and Twin Earth. The denizens of both planets are prescientific, and at their prescientific level of observation, everything is exactly alike between Earth and Twin Earth. Finer-grained observation, however, would reveal that Earth’s predominant surface liquid is H2O while Twin Earth’s is XYZ, but currently there is no difference. Now, Joan reads a book that tells her in full detail all the microphysical structure of the universe.

Having read the book, Joan wonders: Is water H2O or is it XYZ? Just by reading the book, she can’t know! The reason she doesn’t know is that her prescientific observations combined with the contents of the book are insufficient to inform her whether she lives on Earth or on Twin Earth, whether she is Joan or Twin Joan, and hence are insufficient to inform her whether the liquid she refers to as “water” is H2O or XYZ.

But surely this shouldn’t make us abandon physicalism about water!

Now Joan and Twin Joan both have concepts that they verbalize as “water”. The difference between these concepts is entirely external to Joan and Twin Joan—the difference comes entirely from the identity of the liquid, interaction with which gave rise to the respective concepts. The concepts are essentially ostensive in their differences. In other words, Joan’s ignorance of whether water is H2O or XYZ is basically an ignorance of self-locating fact: is she in the vicinity of H2O or in the vicinity of XYZ?

Is this true for Mary and Twin Mary? Can we say that Mary’s ignorance of whether it is a green or a blue flash that has appearance α is essentially an ignorance of self-locating facts? Can we say that the difference between Mary’s phenomenal concept formed from the green flash and Twin Mary’s phenomenal concept formed from the blue flash is an external difference?

Intuitively, the answer to both questions is negative. But the point is not all that clear to me. It could turn out that both Mary and Twin Mary have a purely comparative recognitive concept of “the same phenomenal appearance as that flash”, together with an ability to recognize that similarity, and with the two concepts being internally exactly alike. If so, then the argument is unconvincing as an argument against physicalism.