Tuesday, April 30, 2024

Killing and consent

I think it’s wrong for us to kill innocent people. Some fellow deontologists, however, think this prohibition should be restricted to say that it’s wrong for us to kill nonconsenting innocent people. These thinkers hold that it is both permissible to consent to being killed and to kill those who have given such consent (except in special cases, such as when the victim has overriding unfulfilled duties to others).

I want to argue for a curious consequence of this restriction of the prohibition of murder while maintaining deontology.

By “sacrificing one’s life to save lives”, I will mean actions which save lives but have one’s own death as an unintended but foreseen side-effect. For instance, jumping in front of a train to push a child out of the way. Everyone agrees it’s typically praiseworthy, and hence permissible, to sacrifice your life to save an innocent life. Most people, however, will say that it is supererogatory to do so. It is brave to do it, but not cowardly to omit it.

But now consider cases where by sacrificing your life you can save a larger number of innocent lives, say a dozen. It is pretty plausible that it would be cowardly to refrain from the sacrifice, and I suspect it would be wrong to refrain, except in special cases (such as when you have just figured out how to cure cancer). But I admit that the point is not completely clear to me. However, it is quite clear to me that it would be wrong to refuse to sacrifice your life to save a dozen people when that dozen includes your spouse and your children (again, with some very rare exceptions).

Now let’s assume the view that it is permissible to consent to being killed and permissible to kill the consenting. Consider a classic deontology case: a terrorist says that if you don’t kill Bob, a dozen other innocent people will be killed. Add that the dozen people include Bob’s spouse and children. If it’s permissible to kill the consenting, then if Bob were to consent, it would be permissible to kill him. But Bob expressly and clearly refuses consent, despite his believing that it would be permissible to consent.

Assuming that it is morally required to sacrifice your life to save a dozen innocent lives when these lives include your spouse and children, it is very difficult to deny that if it is permissible to consent to being killed, in a case like the above, Bob would be morally required to consent to being killed. Granted, the sacrifice case does not include consenting to one’s death, while the terrorist case does. But as long as we have granted that it is permissible to consent to one’s death, the difference does not seem significant. Thus Bob is morally required to consent to being killed, given our assumptions about consensual killing. Bob’s refusal of consent is thus morally wrong. And very badly so: it causes eleven more lives to be lost, including his very own spouse and children. His refusal is about as bad as mass murder!

It seems that Bob is far from innocent. On the contrary, he is guilty of refusing to save the lives of eleven people, including his spouse and children. But now it seems that the prohibition against killing the innocent does not apply to Bob, and hence it is permissible—and maybe even obligatory—to kill Bob. If so, then the deontological prohibition on killing the innocent, if restricted to the nonconsenting, has a giant loophole: when enough is at stake, a nonconsenting victim is no longer innocent! Now, maybe, it is only permissible to kill the guilty when one acts on behalf of a state (and when enough is at stake, which it is in this case). But it would still be very strange for a deontologist to think it permissible to kill Bob even should the state authorize it.

This is not a knockdown argument against the restriction of the prohibition of murder to nonconsenting victims. But it is some evidence against the restriction.

Monday, April 29, 2024

From aggregative value comparisons to hyperreal values

Suppose that we have n objects α1, ..., αn, and we want to define something like numerical values (at least hyperreal ones, if we can’t have real ones) on the basis of comparisons of value. Here is one interesting way to proceed. Consider the space of formal sums m1α1 + ... + mnαn, where the mi are natural numbers, and suppose there is a total preorder (total transitive reflexive relation) on this space satisfying the axioms:

  1. x + z ≤ y + z iff x ≤ y

  2. mx ≤ my iff x ≤ y for all positive m.

We can think of m1α1 + ... + mnαn ≤ p1α1 + ... + pnαn as saying that the “aggregative value” of having mi copies of αi for all i is less than or equal to the “aggregative value” of having pi copies of αi for all i. The aggregative value of a number of objects is the “sum value”, where we don’t take into account things like diversity (or the lack thereof) and other “arrangement values”.

Now extend ≤ to formal sums m1α1 + ... + mnαn where the mi are allowed to be positive or negative by stipulating that:

  • m1α1 + ... + mnαn ≤ p1α1 + ... + pnαn iff (k+m1)α1 + ... + (k+mn)αn ≤ (k+p1)α1 + ... + (k+pn)αn for some natural k such that k + mi and k + pi are non-negative for all i.

Axiom (1) implies that the choice of k is irrelevant. It is easy to see that ≤ still satisfies both (1) and (2). Moreover, ≤ is still total, transitive and reflexive.

Next extend ≤ to formal sums r1α1 + ... + rnαn where the ri are rational numbers by stipulating that:

  • r1α1 + ... + rnαn ≤ s1α1 + ... + snαn iff ur1α1 + ... + urnαn ≤ us1α1 + ... + usnαn for some positive integer u such that uri and usi are integers for all i.

Axiom (2) implies that the choice of u is irrelevant. Again, it is easy to see that ≤ continues to satisfy (1) and (2), and that it remains total, transitive and reflexive.
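The two extension steps are mechanical enough to sketch in code. Below is a minimal Python illustration of my own (not from the post): `lift_leq` assumes some base comparator `leq_nat` on natural-number-coefficient tuples satisfying axioms (1) and (2); the toy `leq_nat` used to exercise it is likewise my own hypothetical example.

```python
from fractions import Fraction
from math import lcm

def lift_leq(leq_nat, x, y):
    """Lift a comparator on natural-number-coefficient formal sums to
    rational coefficients. First clear denominators with a common
    positive integer u (axiom (2) makes the choice of u irrelevant),
    then shift every coefficient by k to make it non-negative
    (axiom (1) makes the choice of k irrelevant)."""
    x = [Fraction(c) for c in x]
    y = [Fraction(c) for c in y]
    u = lcm(*(c.denominator for c in x + y))  # rational-coefficient step
    xi, yi = [u * c for c in x], [u * c for c in y]
    k = max(0, -min(min(xi), min(yi)))        # integer-coefficient step
    xs = tuple(int(k + c) for c in xi)
    ys = tuple(int(k + c) for c in yi)
    return leq_nat(xs, ys)

# A toy base comparator (hypothetical): alpha1 is worth 2 units,
# alpha2 is worth 3. Being linear, it satisfies axioms (1) and (2).
def leq_nat(m, p):
    return 2 * m[0] + 3 * m[1] <= 2 * p[0] + 3 * p[1]

# (1/2)alpha1 <= (1/2)alpha2, since clearing denominators reduces
# this to comparing alpha1 with alpha2, i.e. 2 <= 3:
assert lift_leq(leq_nat, (Fraction(1, 2), 0), (0, Fraction(1, 2)))
```

Note that the lift is well-defined precisely because the base comparator satisfies the two axioms: translation-invariance makes the choice of k irrelevant, and scale-invariance makes the choice of u irrelevant.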

Thus, ≤ is a total vector space preorder on an n-dimensional vector space V over the rationals with basis α1, ..., αn.

Let C be the positive cone of ≤: C = {x ∈ V : 0 ≤ x}. This is closed under addition and positive rational-valued scalar multiplication. Let K be the kernel of the preorder, i.e., K = {x ∈ V : 0 ≤ x ≤ 0} = C ∩ −C.

Now, let W be the n-dimensional vector space over the reals with basis α1, ..., αn. Let D be the smallest subset of W containing C and closed under addition and multiplication by positive real scalars: this is the set of real-linear combinations of elements of C with positive coefficients. It is easy to check that D ∩ V = C. Let L = D ∩ −D. Then L ∩ V = K.

Let E be a maximal subset of W that contains D, is closed under addition and multiplication by positive real scalars, and is such that E ∩ −E = L. This exists by Zorn’s Lemma. I claim that for any v in W, either v or −v is in E. For suppose neither v nor −v is in E. Then let E′ = {e + tv : t ≥ 0, e ∈ E}. This contains E (take t = 0) as well as v (since 0 ∈ L ⊆ E), and is closed under addition and multiplication by positive reals. If we can show that E′ ∩ −E′ = L, then since E is a proper subset of E′, we will contradict the maximality of E. So suppose z ∈ E′ ∩ −E′ but z ∉ L. Since E ∩ −E = L, z and −z cannot both be in E, so either z or −z is in E′ ∖ E. Without loss of generality suppose z ∈ E′ ∖ E. Then z = e + tv for some e ∈ E and t > 0 (were t zero, z would be in E). Moreover, since z ∈ −E′, we have −z = e′ + t′v for some e′ ∈ E and t′ ≥ 0. Adding, 0 = (e + e′) + (t + t′)v, so (t + t′)v = −(e + e′) ∈ −E, since e + e′ ∈ E as E is closed under addition. As t + t′ > 0 and E is closed under positive scalar multiplication, v ∈ −E, i.e., −v ∈ E, which contradicts our assumption that −v is not in E.

Define ≤* on W by letting v ≤* w iff w − v ∈ E. Note that ≤* agrees with ≤ on V. If v ≤ w are in V, then w − v ∈ C ⊆ E and so v ≤* w. Conversely, suppose v ≤* w, so that w − v ∈ E. Now, since w − v is in V, and ≤ is total, if we don’t have v ≤ w, we must have w ≤ v and hence v − w ∈ C, so w − v ∈ −C ⊆ −E. Since E ∩ −E = L, we have w − v ∈ L. But v, w ∈ V, so w − v ∈ L ∩ V = K. Thus, v ≤ w, a contradiction.

It’s also easy to see that ≤* is total, transitive and reflexive. It is therefore representable by lexicographically-ordered vector-valued utilities by the work of Hausner in the middle of the last century. And vector-valued utilities are representable by hyperreals (just represent (x1,...,xn) with x1 + x2ϵ + ... + xnϵ^(n−1) for a positive infinitesimal ϵ).
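The final step, representing vector-valued utilities by hyperreals, comes down to lexicographic comparison of the coefficient tuples, since each power of the infinitesimal ϵ dominates all the later ones. A toy Python sketch of my own, not from the post:

```python
# Comparing hyperreals of the form x1 + x2*eps + ... + xn*eps^(n-1),
# for a fixed positive infinitesimal eps, reduces to comparing the
# coefficient tuples lexicographically: the eps^0 coefficient
# dominates, then the eps^1 coefficient, and so on.
def hyper_leq(x, y):
    """x, y: tuples of real coefficients (x1, ..., xn)."""
    return x <= y  # Python compares tuples lexicographically

assert hyper_leq((1.0, 5.0), (2.0, -100.0))    # 1 + 5eps <= 2 - 100eps
assert not hyper_leq((1.0, 2.0), (1.0, 1.0))   # 1 + 2eps > 1 + eps
```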

Remark 1: Here is a plausible condition on the extension ≤* that we can enforce if we like: if Q and U are neighborhoods of v and w respectively, and for all q ∈ Q ∩ V and all u ∈ U ∩ V we have q ≤ v, then v*w. For this condition will hold provided we can show that if Q is a neighborhood of v such that Q ∩ V ⊆ C, then v ∈ E. Note that any positive-real-linear combination of points v satisfying this neighborhood condition also satisfies this condition, and any sum of a point v satisfying this condition and a point in D will also satisfy it. Thus we can add to D all such points v, and carry on with the rest of the proof.

Remark 2: If we start off with ≤ being a partial preorder, ≤* still becomes a total preorder. Then instead of proving that it agrees with the partial preordering on V (i.e., the initial ordering), we use basically the same proof to show that it extends both the non-strict and strict orders: (a) if w ≤ v, then w ≤* v, and (b) if w < v, then w <* v.

Question 1: Can we make sure that the values are real numbers?

Response: No. Suppose you are comparing a sheep and a goat, and suppose that they are valued positively and equally, except that ties are broken in favor of the sheep. Thus, n+1 copies of the goat are better than n copies of the sheep, both are better than nothing, but n copies of the sheep are better than n copies of the goat. To represent this with hyperreals we need to take the value of the sheep to be ϵ + g, where g > 0 is the value of the goat and ϵ/g is a positive infinitesimal.
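Under the lexicographic representation, this response can be checked mechanically. A toy Python sketch of my own, with the first coordinate counting multiples of g and the second counting multiples of ϵ:

```python
# The goat is worth g > 0 and the sheep eps + g, where eps/g is a
# positive infinitesimal; values are lexicographic pairs.
g = 1.0                  # assumed unit value of the goat
sheep = (g, 1.0)         # eps + g
goat = (g, 0.0)          # g
nothing = (0.0, 0.0)

def copies(n, v):
    """Aggregative value of n copies of an object with value v."""
    return (n * v[0], n * v[1])

# No single real-number value for the sheep could satisfy all three
# families of comparisons at once; the infinitesimal tie-break can.
for n in range(1, 50):
    assert copies(n + 1, goat) > copies(n, sheep)  # n+1 goats beat n sheep
    assert copies(n, sheep) > copies(n, goat)      # ties broken for the sheep
    assert copies(n, goat) > nothing               # both beat nothing
```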

Question 2: Is the representation “practically unique”, i.e., does it generate the same decisions in probabilistic situations, or at least in ones with real-valued probabilities?

Response: No. Suppose you have a sheep and a goat. Now consider two hypotheses: on the first, the sheep is worth  − ϵ + π goats, and on the second, the sheep is worth ϵ + π goats, for a positive infinitesimal ϵ. Both hypotheses generate the same aggregative value comparisons between aggregates consisting of n1 copies of the goat and n2 copies of the sheep for natural numbers n1 and n2, since π is irrational. But the two hypotheses generate opposite probabilistic decisions if we are choosing between a 1/π chance of the sheep and certainty of the goat.
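Again using lexicographic pairs (real part, coefficient of the infinitesimal ϵ), here is a small Python sanity check of my own for this response:

```python
import math

# The two hypotheses about the sheep's value in goat units:
sheep_minus = (math.pi, -1.0)   # -eps + pi goats
sheep_plus = (math.pi, +1.0)    # +eps + pi goats

# Both hypotheses order integer aggregates (n2 sheep vs n1 goats)
# identically: pi is irrational, so n2*pi never equals n1 and the
# infinitesimal tie-break never fires.
for n1 in range(0, 20):
    for n2 in range(1, 6):
        assert ((n2 * math.pi, -n2) < (n1, 0.0)) == \
               ((n2 * math.pi, +n2) < (n1, 0.0))

# Scaling both sides by pi is order-preserving, so "a 1/pi chance of
# the sheep vs certainty of the goat" reduces to comparing the sheep
# with pi goats, and there the two hypotheses disagree:
pi_goats = (math.pi, 0.0)
assert sheep_minus < pi_goats   # hypothesis 1: take the certain goat
assert sheep_plus > pi_goats    # hypothesis 2: take the gamble
```

Rescaling by π before comparing keeps the check exact in floating point: the real parts are then literally the same float, so the decision turns entirely on the sign of the infinitesimal coefficient.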

Thursday, April 25, 2024

Brain snatching is not a model of life after death

Van Inwagen infamously suggested the possibility that at the moment of death God snatches a core chunk of our brain, transports it to a different place, replaces it with a fake chunk of brain, and rebuilds the body around the transported chunk.

I think that, were van Inwagen’s suggestion correct, it would not be correct to say that we die. But then the suggestion is a seriously problematic view, given the Christian commitment that people do, in fact, die. Hence van Inwagen’s model is not a model of life after death.

Argument: If in the distant future all of a person’s body were destroyed in an accident except for a surviving core chunk, and medical technology had progressed so much that it could regrow the rest of the body from that chunk, I think we would not say that the medical technology resurrected the person, but that it prevented the person’s death.

Objection: The word “death” gets its meaning ostensively from typical cases we label as cases of “death”. In these cases, the heart stops, the parts of the brain observable to us stop having electrical activity, etc. What we mean by “death” is what happens in these cases when this stuff happens. If van Inwagen’s suggestion is correct, then what happens in these cases is the snatching of a core chunk. Hence if van Inwagen’s suggestion is correct, then death is divine snatching of a core chunk of the brain, and we do in fact die.

Responses: First, if death is divine snatching of a core chunk of the brain, then jellyfish and trees don’t die, because they don’t have a brain. I suppose, though, one might say that “death” is understood analogously between jellyfish and humans, and it is human death that is a divine snatching of a core chunk of the brain.

Second, it seems obvious that if God had chosen not to snatch a core chunk of Napoleon’s brain, and allowed Napoleon’s body to rot completely, then Napoleon would be dead. Hence, not even the death of a human is identical to a divine snatching.

Third, I think an important part of the concept of death is that death is something in common between humans and other organisms. People, dogs, jellyfish, and trees all die. We should have an account of death common between these. The best story I know is that death is the destruction of the body. And the van Inwagen story doesn’t have that. So it’s not a story about death.

Wednesday, April 24, 2024

A small disability

On the mere difference view of disability, one isn’t worse off for being disabled as such, though one is worse off due to ableist arrangements in society. A standard observation is that the mere difference view doesn’t work for really big disabilities.

In this post, I want to argue that it doesn’t work for some really tiny disabilities. For instance, about 3-5% of the population without any other brain damage exhibits “musical anhedonia”, an inability to find pleasure in music. I haven’t been diagnosed, but I seem to have something like this condition. With the occasional exception, music is something I either screen out or find a minor annoyance. Occasionally I find myself with an emotional response, but I also don’t like having my emotions pulled on by something I don’t understand. When I play a video game, one of the first things I do is turn off all music. If I could easily run TV through a filter that removed music, I would (at least if watching alone). (Maybe movies as well, though I might feel bad about disturbing the artistic integrity of the director.)

On the basis of testimony, however, I know that music can embody immense aesthetic goods which cannot be found in any other medium. I am missing out on these goods. My missing out on them is not a function of ableist assumptions. After all, if the world were structured in accordance with musical anhedonia, there would be no music in it, and I would still miss out on the aesthetic goods of music—it’s just that everybody else would miss out on them as well, which is no benefit to me. I suppose in a world like that more effort would be put into other art forms. The money spent on music in movies might be spent on better editing, say. In church, perhaps, better poetic recitations would be created in place of hymns. However, more poetry and better editing wouldn’t compensate for the loss of music, since having music in addition to other art forms makes for a much greater diversity of art.

Furthermore, presumably, parallel to music anhedonia there are other anhedonias. If to compensate for musical anhedonia we replace music with poetic recitations, then those who have poetic anhedonia (I don’t know if that is a real or a hypothetical condition; I would be surprised, though, if no one suffered from it; I myself don’t appreciate sound-based poetry much, though I do appreciate meaning-based poetry, like Biblical Hebrew poetry or Solzhenitsyn’s “prose poems”) but don’t have musical anhedonia are worse off.

In general, the lack of an ability to appreciate a major artistic modality is surely a loss in one’s life. It need not be a major loss: one can compensate by enjoying other modalities. But it is a loss.

In the case of a more major disability, there can be personal compensations from the intrinsic challenges arising from the disability. But really tiny disabilities need not generate much in the way of such meaningful compensations.

Here’s another argument that musical anhedonia isn’t a mere difference. Suppose that Alice is a normal human being who would be fully able to get pleasure from music. But Alice belongs to a group unjustly discriminated against, and a part of this discrimination is that whenever Alice is in earshot, all music is turned off. As a result, Alice has never enjoyed music. It is clear that Alice was harmed by this. And the bulk of the harm was that she did not have the aesthetic experience of enjoying music—which is precisely the harm that the person with musical anhedonia suffers.

Objection 1: Granted, musical anhedonia is not a mere difference. But it is also not a disability because it does not significantly impact life.

Response 1.1: But music is one of the great cultural accomplishments of the human species.

Response 1.2: Moreover, transpose my argument to a hypothetical society where it is difficult to get by without enjoying music, a society where, for instance, most social interactions involve explicit sharing in the pleasure of music. In that society, musical anhedonia may make one an outcast. It would be a disability. But it would still make one lose out on one of the great forms of art, and hence would still be a really bad thing, rather than a mere difference.

Objection 2: There is a philosophical and a spiritual benefit to me from my musical anhedonia, and it’s not minor. The spiritual benefit is that I look forward to being able to really enjoy music in heaven in a way in which I probably wouldn’t if I already enjoyed it significantly. The philosophical benefit is that music provides me with a nice model of an aesthetic modality that is beyond one’s grasp. Normally, “things beyond one’s grasp” are hard to talk about! But in the case of music, I can lean on the testimony of others, and thus talk about this art form that is beyond my grasp. And this, in turn, provides me with a reason to think that there are likely other goods beyond our current ken, perhaps even goods that we will enjoy in heaven (back to the spiritual). Furthermore, music provides me with a conclusive argument against emotivist theories of beauty. For I think music is beautiful, but I do not have the relevant aesthetic emotional reaction to it. My belief that music is beautiful is largely based on testimony.

Response 2: These kinds of compensating benefits help the mere difference view. Even if one were able to get tenure on the strength of a book on the philosophy of disease inspired by getting a bad case of Covid, the bad case of Covid would be bad and not a mere difference. The mere difference view is about something more intrinsic to the condition.

Tuesday, April 23, 2024

Value and aptness for moral concern

In two recent posts (this and this) I argued that dignity does not arise from value.

I think the general point here goes beyond value. Some entities are more apt for being morally concerned about than others. These entities are more appropriate beneficiaries of our actions, we have more reason to protect them, and so on. The degreed property these entities have more of has no name, but I will call it “apmoc”: aptness for moral concern. Dignity is then a particularly exalted version of apmoc.

Apmoc as such is agent-relative. If you and I have cats, then my cat has more apmoc relative to me than your cat, while your cat has more apmoc relative to you. Thus, I should have more moral concern for my cat and you for yours. Agent-relativity can be responsible for the bulk of the apmoc in the case of some entities—though probably not in the case of entities whose apmoc rises to the level of dignity.

However, we can distinguish an agent-independent core to an entity’s apmoc, which I will call the entity’s “core apmoc”. One can think of the core apmoc as the apmoc the entity has relative to an agent who has no special relationship to the entity. (Note: My concern in this post is the apmoc relative to human agents, so the core apmoc may still be relative to the human species.)

Now, then, here is a thesis that initially sounds good, but I think is quite mistaken:

  1. An entity’s core apmoc is proportional to its value.

For suppose I have two pet dragons, on par with respect to all properties, except one can naturally fly and the other is naturally flightless. The flying dragon has more value: it is a snazzier kind of being, having an additional causal power. Both dragons equally like being scratched under the chin (perhaps with a rake). The fact that the flying dragon has more value does not give me any additional reason to scratch it. More generally, the flying dragon does not have any more core apmoc.

One might object: if it is a matter of saving the life of one of the dragons, other things being equal, one should save the life of the flying dragon, because it is a better kind of being. However, even if this judgment is correct, it is not due to a difference in apmoc. If the flying dragon dies, more value is lost. The death of a dragon removes from the world all the goods of the dragon: its majestic beauty, its contribution to winter heating, its protection of the owner, its prevention of sheep overpopulation, and so on. The death of the flying dragon removes a good—an instance of the causal power of flight—from the world which the death of the flightless dragon does not. If the reason one should save the life of the flying dragon over the flightless one is that the flying one is a better kind of being, then the reason one is saving its life is not because the flying dragon has more apmoc, but because more is lost by its death. If I have a choice of saving Alice from losing a thumb or Bob from losing the little toe, I should save Alice from losing a thumb, not because Alice has more apmoc, but because a thumb is a bigger loss than a toe.

The above objection points out one feature. Sometimes bestowing what is in some sense “the same benefit” on an entity will actually bestow a benefit proportional to the value of the entity. Saving an entity from destruction sounds like “the same benefit”, but is a greater benefit where there is more value to be saved. Similarly, if I have a choice between fixing a tire puncture in my car or in my bike, more value is gained when I fix the car’s tire, because the car is more valuable. However, this is not due to the car having more apmoc, but simply because the benefits are different: if I fix the car’s tire, the car would become capable of transporting my whole family around, while the bike would only become capable of transporting me.

Let’s move away from fantasy. Suppose Alice and Bob are on par in all respects, except that Alice knows the 789th digit of π while Bob does not. Knowledge is valuable, and so if you have more knowledge, you have more value. But now if I have a choice of whom to give a delicious chocolate-chip muffin, the fact that Alice knows the 789th digit of π is irrelevant—it contributes (slightly) to value but not at all to core apmoc (it might contribute to the agent-relative aspects of apmoc in some special cases, since shared knowledge can be a partial constituent of a morally relevant relationship).

Granted, a piece of knowledge is a contingent contribution to value. One might think that core apmoc is determined proportionately to the essential values of an entity. But I think this is implausible. Most people have the intuition that, other things being equal, a virtuous person has more apmoc than a vicious one. But virtue is not an essential value—it is a value that fluctuates over a lifetime.

The case of virtue and vice suggests that there may be some values that contribute to core apmoc. I think this is likely. Core apmoc does not appear in a vacuum. But the connection between apmoc and value is complex, and the two are quite different.

Monday, April 22, 2024

Does culpable ignorance excuse?

It is widely held that if you do wrong in culpable ignorance (ignorance that you are blameworthy for), you are culpable for the wrong you do. I have long thought this is mistaken—instead we should frontload the guilt onto the acts and omissions that made one culpable for the ignorance.

I will argue for a claim in the vicinity by starting with some cases that are not cases of ignorance.

  1. One is no less guilty if one tries to shoot someone and misses than if one hits them.

  2. If one drinks and drives and is lucky enough to hit no one, one is no less guilty than if one does hit someone, as long as the degree of freedom and knowledge in the drinking and driving is the same.

  3. If one freely takes a drug one knows to remove free will and produce violent behavior in 25% of cases, one is no less guilty if involuntary violence does not ensue than if involuntary violence does ensue.

Now, let’s consider this case of culpable ignorance:

  4. Mad scientist Alice offers Bob a million dollars to undergo a neural treatment that over the next 48 hours will make Bob think that Elbonians—a small ethnic group—are disease-bearing mosquitoes. Bob always kills organisms that he thinks are disease-bearing mosquitoes on sight. Bob correctly estimates that there is a 25% chance that he will meet an Elbonian over the next 48 hours. If Bob accepts the deal, he is no less guilty if he is lucky enough to meet no Elbonians than if he does meet and kill one.

This is as clear a case of culpable ignorance as can be: in accepting the deal, Bob knows he will become ignorant of the human nature of Elbonians, and he knows there is a 25% chance this will result in his killing an Elbonian. I think that just as in cases (1)–(3), one is no less guilty if the bad consequences for others don’t result, so too in case (4), Bob is no less guilty if he never meets an Elbonian.

For a final case, consider:

  5. Just like (4), except that instead of coming to think Elbonians are (disease-bearing) mosquitoes, Bob will come to believe that unlike all other innocent human persons whom it is impermissible to kill, it is obligatory to kill Elbonians, and Bob’s estimate that this belief will result in his killing an Elbonian is 25%.

Again, Bob is no less guilty for taking the money and getting the treatment if he does not run into any Elbonians than if he does run into and kill an Elbonian.

Therefore, one is no less guilty for one’s culpable ignorance if wicked action does not result. Or, equivalently:

  6. One is no more guilty if wicked action does result from culpable ignorance than if it does not.

But (6) is not quite the claim I started with. I started claiming one is not guilty for the wicked action in cases of culpable ignorance. The claim I argued for is that one is no guiltier for the wicked action than if there is no wicked action resulting from the ignorance. But now if one was guilty for the wicked action, it seems one would be guiltier, since one would have both the guilt for the ignorance and for the wicked action.

However, I am now not so sure. The argument in the previous paragraph depended on something like this principle:

  7. Being guilty of both action A and action B is guiltier than just being guilty of action A, all other things being equal. (Ditto for omissions, but I want to be briefer.)

Thus being guilty of acquiring ignorance and acting wickedly on the ignorance would be guiltier than just of acquiring ignorance, and hence by (6) the wicked action does not have guilt. But now that I have got to this point in the argument, I am not so sure of (7).

There may be counterexamples to (7). First, a politician’s lying to the people an hour after a deadly natural disaster is no less guilty than lying in the same way to the people an hour before the natural disaster. But in lying to the people after the disaster one lies to fewer people—since some people died in the disaster!—and hence there are fewer actions of lying (instead of lying to Alice, and lying to Bob, and lying to Carl, one “only” lies to Alice and to Bob). But I am not sure that this is right—maybe there is just one action of lying to the people, rather than a separate one for each audience member.

Second, suppose Bob strives to insult Alice in person, and consider two cases. In one case, when he has decided to insult Alice, he gets into his car, drives to see Alice, and insults her. In the other case, when he gets into the car he realizes he doesn’t have enough gas to reach Alice, and so he buys gas, then drives to see Alice, and then insults her. In the second case, Bob performed an action he didn’t perform in the first case: buy gas in order to insult Alice. But it doesn’t seem that Bob is guiltier in the second case, even though he did perform one more guilty action. I am also not sure about this case. Here I am actually inclined to think that Bob is more guilty, for two reasons. First, he was willing to undertake a greater burden in order to insult Alice—and that increases guilt. Second, he had an extra chance to repent—each time one acquiesces in a means, that’s a chance to just say no to the whole action sequence. And yet he refused this chance. (It seems to me that Bob is guiltier in the second case, just as the assassin possessing two bullets and shooting the second after missing with the first—regardless of whether the second shot hits—is guiltier than the assassin who after shooting and missing once stops.)

While I am not convinced of the cases, they point to the idea that in the context of (7), the guilt of action A might “stretch” to making B guilty without increasing the total amount of guilt. If that makes sense, then that might actually be the right way of accounting for actions done in culpable ignorance. If Bob kills an Elbonian, he is guilty. That is not an additional item of guilt; rather, the guilt of the actions and omissions that caused the ignorance stretches over and covers the killing. This seems to me to mesh better with ordinary ways of talking—we don’t want to say that Bob’s killing of the Elbonian in either case (4) or (5) is innocent. And saying that there is no additional guilt may be a way of assuaging the intuition I have had over the years when I thought that culpable ignorance excuses.


A final obvious question is about punishment. We do punish differentially for attempted and completed murder, and for drunk driving that does not result in death and drunk driving that does. I think there are pragmatic reasons for this. If attempted and completed murder were equally punished, there would be an incentive to “finish the job” upon initial failure. And having a lesser penalty for non-lethal drunk driving creates an incentive for the drunk driver to be more careful driving—how much that avails depends on how drunk the driver is, but it might make some difference.

Thursday, April 18, 2024

Evaluating some theses on dignity and value

I’ve been thinking a bit about the relationship between dignity and value. Here are four plausible principles:

  1. If x has dignity, then x has great non-instrumental value.

  2. If x has dignity, then x has great non-instrumental value because it has dignity.

  3. If x has dignity and y does not, then x has more non-instrumental value than y.

  4. Dignity just is great value (variant: great non-instrumental value).

Of these theses, I am pretty confident that (1) is true. I am fairly confident (3) is false, except perhaps in the special case where y is a substance. I am even more confident that (4) is false.

I am not sure about (2), but I incline against it.

Here is my reason to suspect that (2) is false. It seems that things have dignity in virtue of some further fact F about them, such as that they are rational beings, or that they are in the image and likeness of God, or that they are sacred. In such a case, it seems plausible to think that F directly gives the dignified entity both the great value and dignity, and hence the great value derives directly from F and not from the dignity. For instance, maybe what makes persons have great value is that they are rational, and the same fact—namely that they are rational—gives them dignity. But the dignity doesn’t give them additional value beyond that bestowed on them by their rationality.

My reason to deny (4) is that great value does not give rise to the kinds of deontological consequences that dignity does. One may not desecrate something with dignity no matter what consequences come of it. But it is plausible that mere great value can be destroyed for the sake of dignity.

This leaves principle (3). The argument in my recent post (which I now have some reservations about, in light of some powerful criticisms from a colleague) points to the falsity of (3). Here is another, related reason. Suppose we find out that the Andromeda Galaxy is full of life, of great diversity and wonder, including both sentient and non-sentient organisms, but has nothing close to sapient life—nothing like a person. An evil alien is about to launch a weapon that will destroy the Andromeda Galaxy. You can either stop that alien or save a drowning human. It seems to me that either option is permissible. If I am right, then the value of the human is not much greater than that of the Andromeda Galaxy.

But now imagine that the Whirlpool Galaxy has an order of magnitude more life than the Andromeda Galaxy, with much greater diversity and wonder, but still with nothing sapient. Then even if the value of the human is greater than that of the Andromeda Galaxy, because it is not much greater, while the value of the Whirlpool Galaxy is much greater than that of the Andromeda Galaxy, it follows that the human does not have greater value than the Whirlpool Galaxy.

However, the Whirlpool Galaxy, assuming it has no sapience in it, lacks dignity. A sign of this is that it would be permissible to deliberately destroy it in order to save two similar galaxies from destruction.

Thus, the human is not greater in value than the Whirlpool Galaxy (in my story), but the human has dignity while the Whirlpool Galaxy lacks it.

That said, on my ontology, galaxies are unlikely to be substances (especially if the life in the galaxy is considered a part of the galaxy, since following Aristotle I doubt that a substance can be a proper part of a substance). So it is still possible that principle (3) is true for substances.

But I am not sure even of (3) in the case of substances. Suppose elephants are not persons, and imagine an alien sentient but not sapient creature which is like an elephant in the temporal density of the richness of life (i.e., richness per unit time), except that (a) its rich elephantine life lasts millions of years, and (b) there can only be one member of the kind, because they naturally do not reproduce. On the other hand, consider an alien person who naturally only has a life that lasts ten minutes, and has the same temporal density of richness of life that we do. I doubt that the alien person is much more valuable than the elephantine alien. And if the alien person is not much more valuable, then by imagining a non-personal animal that is much more valuable than the elephantine alien, we have imagined that some person is not more valuable than some non-person. Assuming all non-persons lack dignity and all persons have dignity, we have a case where an entity with dignity is not more valuable than an entity without dignity.

That said, I am not very confident of my arguments against (3). And while I am dubious of (3), I do accept:

  1. If x has dignity and y does not, then y is not more valuable than x.

I think the case of the human and the galaxy, or the alien person and alien elephantine creature, are cases of incommensurability.

Wednesday, April 17, 2024

Desire-fulfillment theories of wellbeing

On desire-fulfillment (DF) theories of wellbeing, cases of fulfilled desire are an increment to utility. What about cases of unfulfilled desire? On DF theories, we have a choice point. We could say that unfulfilled desires don’t count at all—it’s just that one doesn’t get the increment from the desire being fulfilled—or that they are a decrement.

Saying that unfulfilled desires don’t count at all would be mistaken. It would imply, for instance, that it’s worthwhile to gain all the possible desires, since then one maximizes the amount of fulfilled desire, and there is no loss from unfulfilled desire.

So the DF theorist should count unfulfilled desire as a decrement to utility.

But now here is an interesting question. If I desire that p, and then get an increment x > 0 to my utility if p, is my decrement to utility if not-p just −x, or something different?

It seems that in different cases we feel differently. There seem to be cases where the increment from fulfillment is greater than the decrement from non-fulfillment. These may be cases of wanting something as a bonus or an adjunct to one’s other desires. For instance, a philosopher might want to win a pickleball tournament, and intuitively the increment to utility from winning is greater than the decrement from not winning. But there are cases where the decrement is at least as large as the increment. Cases of really important desires, like the desire to have friends, may be like that.

What should the DF theorist do about this? The observation above seems to do serious damage to the elegant “add up fulfillments and subtract non-fulfillments” picture of DF theories.

I think there is actually a neat move that can be made. We normally think of desires as coming with strengths or importances, and of course every DF theorist will want to weight the increments and decrements to utility with the importance of the desire involved. But perhaps what we should do is to attach two importances to any given desire: an importance that is a weight for the increment if the desire is fulfilled and an importance that is a weight for the decrement if the desire is not fulfilled.

So now it is just a psychological fact that each desire comes along with a pair of weights, and we can decide how much to add and how much to subtract based on the fulfillment or non-fulfillment of the desire.
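As a toy illustration of the two-weight bookkeeping (the function, desires, and weight values here are all invented for the example):

```python
# Toy model of a desire-fulfillment utility with two weights per desire:
# a fulfillment weight (added if the desire is fulfilled) and a
# non-fulfillment weight (subtracted if it is not). Purely illustrative.

def utility(desires):
    """desires: list of (fulfilled, w_fulfill, w_nonfulfill) triples."""
    return sum(w_f if fulfilled else -w_n
               for fulfilled, w_f, w_n in desires)

# A "bonus" desire: winning the pickleball tournament adds a lot,
# but losing subtracts only a little.
pickleball = (False, 5.0, 1.0)
# An "important" desire: having friends, where the decrement from
# non-fulfillment matches the increment from fulfillment.
friends = (True, 10.0, 10.0)

print(utility([pickleball, friends]))  # 10.0 - 1.0 = 9.0
```

On the simple one-weight picture, the two weights would be forced to be equal; the choice point in the post is precisely whether to let them come apart.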

If this is right, then we have an algorithm for a good life: work on your psychology to gain lots and lots of new desires with large fulfillment weights and small non-fulfillment weights, and to transform your existing desires to have large fulfillment weights and small non-fulfillment weights. Then you will have more wellbeing, since the fulfillments of desires will add significantly to your utility but the non-fulfillments will make little difference.

This algorithm results in an inhuman person, one who gains much if their friends live and are loyal, but loses nothing if their friends die or are disloyal. That’s not the best kind of friendship. The best kind of friendship requires vulnerability, and the algorithm takes that away.

Tuesday, April 16, 2024

Value and dignity

  1. If it can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life, then the life of a typical human being is not of greater value than that of the lion species.

  2. It can be reasonable for a typical innocent human being to save lions from extinction at the expense of the human’s own life.

  3. So, the life of a typical innocent human being is not of greater value than that of the lion species.

  4. It is wrong to intentionally kill an innocent human being in order to save tigers, elephants and giraffes from extinction.

  5. It is not wrong to intentionally destroy the lion species in order to save tigers, elephants and giraffes from extinction.

  6. If (3), (4) and (5), then the right to life of innocent human beings is not grounded in how great the value of human life is.

  7. So, the right to life of innocent human beings is not grounded in how great the value of human life is.

I think the conclusion to draw from this is the Kantian one: that dignity, the property of human beings that grounds respect, is not a form of value. A human being has a dignity greater than that of all lions taken together, as indicated by the deontological claims (4) and (5), but a human being does not have a value greater than that of all lions taken together.

One might be unconvinced by (2). But if so, then tweak the argument. It is reasonable to accept a 25% chance of death in order to stop an alien attack aimed at killing off all the lions. If so, then on the plausible assumption that the value of all the lions, tigers, elephants and giraffes is at least four times that of the lions (note that there are multiple species of elephants and giraffes, but only one of lions), it is reasonable to accept a 100% chance of death in order to stop an alien attack aimed at killing off all four types of animals. But now we can easily imagine sixteen types of animals such that it is permissible to intentionally kill off the lions, tigers, elephants and giraffes in order to save the sixteen types, but it is not permissible to intentionally kill a human in order to save the sixteen types.
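The scaling step in the tweak is just linear expected-value arithmetic; here is a minimal sketch, where the 4x value ratio comes from the post and the linearity of the risk-for-value trade-off is an assumed simplification:

```python
# If a 25% chance of death is a reasonable price for saving the lions,
# and the four-species bundle is worth at least four times the lions,
# then on a linear trade-off a 100% chance of death is a reasonable
# price for the bundle. Numbers from the post; linearity is assumed.
risk_for_lions = 0.25
value_ratio = 4  # bundle (lions, tigers, elephants, giraffes) vs. lions alone
risk_for_bundle = min(1.0, risk_for_lions * value_ratio)
print(risk_for_bundle)  # 1.0
```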

Yet another argument against physician assisted suicide

Years ago, I read a clever argument against physician assisted suicide that held that medical procedures need informed consent, and informed consent requires that one be given relevant scientific data on what will happen to one after a procedure. But there is no scientific data on what happens to one after death, so informed consent of the type involved in medical procedures is impossible.

I am not entirely convinced by this argument, but I think it does point to a reason why helping to kill a patient is not an appropriate medical procedure. An appropriate medical procedure is one aiming at producing a medical outcome by scientifically-supported means. In the case of physician assisted suicide, the outcome is presumably something like respite from suffering. Now, we do not have scientific data on whether death causes respite from suffering. Seriously held and defended non-scientific theories about what happens after death include:

  a. death is the cessation of existence

  b. after death, existence continues in a spiritual way in all cases without pain

  c. after death, existence continues in a spiritual way in some cases with severe pain and in other cases without pain

  d. after death, existence continues in another body, human or animal.

The sought-after outcome, namely respite from severe pain, is guaranteed in cases (a), (b) and (d). However, first, the evidence for preferring these three hypotheses to hypothesis (c) is not scientific but philosophical or theological in nature, and hence should not be relied on by the medical professional as a medical professional in predicting the outcome of the procedure. Second, even on hypotheses (b) and (d), the sought-after outcome is produced by a metaphysical process that goes beyond the natural processes that are the medical professional’s tools of the trade. On those hypotheses, the medical professional’s means for assuring improvement of the patient’s subjective condition rely on, say, a God or some nonphysical reincarnational process.

One might object that the physician does not need to judge between after-life hypotheses like (a)–(d), but can delegate that judgment to the patient. But a medical professional cannot so punt to the patient. If I go to my doctor asking for a prescription of some specific medication, saying that I believe it will help me with some condition, he can only permissibly fulfill my request if he himself has medical evidence that the medication will have the requisite effect. If I say that an angel told me that ivermectin will help me with Covid, the doctor should ignore that. The patient rightly has an input into what outcome is worth seeking (e.g., is relief from pain worth it if it comes at the expense of mental fog) and how to balance risks and benefits, but the doctor cannot perform a medical procedure based on the patient’s evaluation of the medical evidence, except perhaps in the special case where the patient has relevant medical or scientific qualifications.

Or imagine that a patient has a curable fracture. The patient requests physician assisted suicide because the patient has a belief that after death they will be transported to a different planet, immediately given a new, completely fixed body, and will lead a life there that is slightly happier than their life on earth. A readily curable condition like that does not call for physician assisted suicide on anyone’s view. But if there is no absolute moral objection to killing as such and if the physician is to punt to the patient on spiritual questions, why not? On the patient’s views, after all, death will yield an instant cure to the fracture, while standard medical means will take weeks.

Furthermore, the medical professional should not fulfill requests for medical procedures which achieve their ends by non-medical means. If I go to a surgeon asking that my kidney be removed because Apollo told me that if I burn one of my kidneys on his altar my cancer will be cured, the surgeon must refuse. First, as noted in the previous paragraph, the surgeon cannot punt to the patient the question of whether the method will achieve the stated medical goal. Second, as also noted, even if the surgeon shares the patient’s judgment (the surgeon thinks Apollo appeared to her as well), the surgeon is lacking scientific evidence here. Third, and this is what I want to focus on here, while the outcome (no cancer) is medical, the means (sacrificing a kidney) are not medical.

Only in the case of hypothesis (a) can one say that the respite from severe pain is being produced by physical means. But the judgment that hypothesis (a) is true would be highly controversial (a majority of people in the US seem to reject the hypothesis), and as noted is not scientific.

Admittedly, in cases (b)–(d), the medical method as such does likely produce a respite from the particular pain in question. But that a respite from a particular pain is produced is insufficient to make a medical procedure appropriate: one needs information that some other pain won’t show up instead.

Note that this is not an argument against euthanasia in general (which I am also opposed to on other grounds), but specifically an argument against medical professionals aiding killing.

A version of computationalism

I’ve been thinking how best to define computationalism about the mind, while remaining fairly agnostic about how the brain computes. Here is my best attempt to formulate computationalism:

  • If a Turing machine with sufficiently large memory simulates the functioning of a normal adult human being with sufficient accuracy, then given an appropriate mapping of inputs and outputs but without any ontological addition of a nonphysical property or part, (a) the simulation will dispositionally behave like the simulated body at the level of macroscopic observation, and (b) the simulation will exhibit mental states analogous to those the simulated human would have.

The “analogous” in (b) allows the computationalist at least two differences between the mental states of the simulation and the mental states of the simulated. First, we might allow for the possibility that the qualitative features of mental states—the qualia—depend on the exact type of embodiment, so that the in vivo and in silico versions of the human will have different qualitative states when faced with analogous sensory inputs. Second, we probably should allow for some modest semantic externalism.

The “without any ontological addition” is relevant if one thinks that the laws of nature, or divine dispositions, are such that if a simulation were made, it would gain a soul or some other nonphysical addition. In other words, the qualifier helps to ensure that the simulation would think in virtue of its computational features, rather than in virtue of something being added.

Note that computationalism so defined is not entailed by standard reductive physicalism. For while the standard reductive physicalist is going to accept that a sufficiently accurate simulation will yield (a), they can think that real thought depends on physical features that are not had by the simulation (we could imagine, for instance, that to have qualia you need to have carbon, and merely simulated carbon is not good enough).

Moreover, computationalism so defined is compatible with some nonreductive physicalisms, say ones on which there are biological laws that do not reduce to laws of physics, as long as these biological laws are simulable, and the appropriate simulation will have the right mental states.

In fact, computationalism so defined is compatible with substance dualism, as long as the functioning of the soul is simulable, and the simulation would have the right mental states without itself having to have a soul added to it.

Computationalism defined as above is not the same as functionalism. Functionalism requires a notion of a proper function (even if statistically defined, as in Lewis). No such notion is needed above. Furthermore, computationalism so defined is not a thesis about every possible mind, but only about human minds. It seems pretty plausible that (perhaps in a world with laws of nature different from ours) it is possible to have a mind whose computational resources exceed those of a Turing machine.

Physician assisted suicide and martyrdom

  1. If physician assisted suicide is permissible, then it would have been permissible for early Christians facing being tortured to death by the Romans to kill themselves less painfully.

  2. It would not have been permissible for early Christians facing being tortured to death by the Romans to kill themselves less painfully.

  3. So, physician assisted suicide is not permissible.

The parity premise (1) is hard to deny. The best case for physician assisted suicide is where the patient strives to escape severe and otherwise unescapable pain while facing imminent death. That’s precisely the case of an early Christian being rounded up by Romans to be tortured to death.

Premise (2) is meant to be based on Christian tradition. The idea of suicide to escape pain could not have failed to occur to early Christians, given the cultural acceptance of suicide “to escape the shame of defeat and surrender” (Griffin 1986). It would have been culturally unsurprising, then, if a Christian were to fall on a sword with the Roman authorities at the door. But as far as I can tell, this did not happen. The best explanation is that the Christian tradition was strongly opposed to such “escape”.

There were, admittedly, cases of suicide to avoid rape (eventually rejected by St. Augustine, with great sensitivity to the tragedy), as well as cases where the martyr cooperated with the executioners (as Socrates is depicted having done).

Saturday, April 13, 2024

Legitimate and illegitimate authority

It is tempting to think that legitimate and illegitimate authorities are both types of a single thing. One might not want to call that single thing “authority”. After all, one doesn’t want to say that real and fake money are both types of money. But it sure seems like there is something X that legitimate and illegitimate authorities have in common with each other, and with nothing else. One imagines that a dictator and a lawfully elected president are in some way both doing the same kind of thing, “ruling” or whatever.

But this now seems to me to be mistaken. Or at least I can’t think what X could be. The only candidate I can think of is the trivial disjunctive property of being a legitimate authority or an illegitimate authority.

To a first approximation, one might think that the legitimate and illegitimate authorities both engage in the speech act of commanding. One might here try to object that “commanding” has the same problem as “authority” does: that it is not clear that legitimate and illegitimate commands have anything in common. This criticism seems to me to be mistaken: the two may not have any normative commonality, but they seem to be the same speech act.

However, imagine that Alice is the legitimate elected ruler of Elbonia, but Bob has put Alice in solitary confinement and set himself up as a dictator. Alice is not crazy: when she is in solitary confinement she isn’t commanding anyone as there is no one for her to command. Alice is a legitimate authority and Bob is an illegitimate authority, yet they do not have commanding, or ruling, or running the country in common. (Similarly, even without imprisonment, we could suppose Alice is a small government conservative who ran on a platform of not issuing any orders except in an emergency, and no emergency came up and she kept her promise.)

One might think that they have some kind of dispositional property in common. Alice surely would command if she were to get out of prison, after all. Well, maybe, but we need to specify the conditions quite carefully. Suppose she got out of prison but thought that no one would follow her commands, because she was still surrounded by Bob’s flunkies. Then she might not bother to command. It makes one look bad if one issues commands and they are ignored. Perhaps, though, we can say: Alice would issue commands if she thought they were needed and likely to be obeyed. But that can’t be the disposition that defines a legitimate or illegitimate authority. For many quite ordinary people in the country presumably have the exact same disposition: they too would issue commands if they thought they were needed and likely to be obeyed! But we don’t want to say that these people are either legitimate or illegitimate authorities.

We might argue that Alice isn’t a legitimate authority while imprisoned, because she is incapacitated, and incapacitation removes legitimate authority. One reason to be dubious of this answer is that on a plausible account of incapacitation, insanity is a form of incapacitation. But an insane illegitimate dictator is still an illegitimate authority, and so incapacitation does not remove the disjunctive property legitimate or illegitimate authority, but at most it removes legitimacy. Thus, Alice might still be an authority, just not a legitimate one. Another reason is this: we could imagine that in order to discourage people from incapacitating the legitimate ruler, the laws insist that one remains in charge if one’s incapacitation is due to an act of rebellion. Moreover, we might suppose that Bob hasn’t actually incapacitated Alice. He lets her walk around and give orders freely, but his minions kill anybody who obeys, so Alice doesn’t bother to issue any orders, because either they will be disobeyed or the obeyers will be killed.

Perhaps we might try to find a disposition in the citizenry, however. Maybe what makes Alice and Bob the same kind of thing is that the citizens have a disposition to obey them. One worry is this: suppose the citizens, after electing Alice, become unruly and lose the disposition to obey. It seems that Alice could still be the legitimate authority. I suppose someone could think, however, that some principles of democracy would imply that if there is no social disposition to obey someone, they are no longer an authority, legitimate or not. I am dubious. But there is another objection to finding a common disposition in the citizenry. The citizenry’s disposition to obey Bob could easily be conditional on their being unable to escape the harsh treatment he imposes on the disobedient and on his actually issuing orders. So the proposal now is something like this: z is a legitimate authority or an illegitimate authority if the citizenry would be disposed to obey z if z were to issue orders backed up by credible threats of harsh treatment. But it could easily be that a perfectly ordinary person z satisfies this definition: people would obey z if z were to issue orders backed up by credible threats!

Let’s try one more thing. What fake and real money have in common is that they are both objects made to appear to be real money. Could we say that Alice and Bob have this in common: they both claim to (“pretend to”, in the old sense of “pretend” that does not imply “falsely” as it does now) be the legitimate authority? Again, that may not be true. Alice is in solitary confinement. She has no one to make such claims to. Again, we can try to find some dispositional formulation, such as that she would claim it if she thought it beneficial to do so. But again, many quite ordinary people would claim to be the legitimate authority if they thought it beneficial to do so. Moreover, Bob can be an illegitimate authority without any pretence to legitimacy! He need not claim, for instance, that people have a duty to obey him; he may back up his orders by threat rather than by claimed authority. (It is common in our time that dictators pretend to a legitimacy that they do not have. But this is not a necessary condition for being an illegitimate authority.) Finally, if Carl is a crazy guy who claims to have been elected and no one, not even Carl’s friends and family, pays any attention to his raving, it does not seem that Carl is an illegitimate authority.

None of this denies the thesis that there is a similarity between illegitimate authority and legitimate authority. But it does not seem possible to turn that similarity into a non-disjunctive property that both of these share. Though maybe I am just insufficiently clever.

Thursday, April 11, 2024

Of snakes and cerebra

Suppose that you very quickly crush the head of a very long stretched-out serpent. Specifically, suppose your crushing takes less time than it takes for light to travel to the snake’s tail.

Let t be a time just after the crushing of the head.

Now causal influences propagate at the speed of light or less, the crushing of the head is the cause of death, and at t there wasn’t yet time for the effects of the crushing to have propagated to the tip of the tail. Furthermore, assume an Aristotelian account of life where a living thing is everywhere joined with its form or soul and death is the separation of the form from the matter. Then at t, because the effects of crushing haven’t propagated to the tail, the tail is joined with the snake’s form, even though the head is crushed and hence presumably no longer a part of the snake. (Imagine the head being annihilated for greater clarity.)

Now as long as any matter is joined to the form, the critter is alive. It follows that at time t, the snake is alive despite lacking a head. The argument generalizes. If we crush everything but the snake’s tail, including crushing all the major organs of the snake, the snake is alive despite lacking all the major organs, and having but a tail (or part of a tail).

So what? Well, one of the most compelling arguments against animalism—the view that people are animals—is that:

  1. People can survive as just a cerebrum (in a vat).

  2. No animal can survive as just a cerebrum.

  3. So, people are not animals.

But presumably the reason for thinking that an animal can’t survive as just a cerebrum is that a cerebrum makes an insufficient contribution to the animal functions. But the tail of a snake makes an even less significant contribution to the animal functions. Hence:

  1. If a snake can survive as just a tail, a mammal can survive as just a cerebrum.

  2. A snake can survive as just a tail.

  3. So, a mammal can survive as just a cerebrum.

Objection: Only physical effects are limited to the speed of light in their propagation, and the separation of form from matter is not a physical effect, so that instantly when the head is crushed, the form leaves the snake, all at once at t.

Response: Let z be the spacetime location of the tip of the snake’s tail at t. According to the objection, at z the form is no longer present. Now, given my assumption that the crushing takes less time than it takes for light to travel to the snake’s tail, and that in one reference frame z is just after the crushing, there will also be a reference frame according to which z is before the crushing has even started. If at z the form is no longer present, then the form has left the tip of the tail before the crushing.

In other words, if we try to get out of the initial argument by supposing that loss of form proceeds faster than light, then we have to admit that in some reference frames, loss of form goes backwards in time. And that seems rather implausible.

Tuesday, April 9, 2024

Absolute reference frame

Some philosophers think that notwithstanding Special Relativity, there is a True Absolute Reference Frame. Suppose this is so. This reference frame, surely, is not our reference frame. We are on a spinning planet rotating around a sun orbiting the center of our galaxy. It seems pretty likely that if there is an absolute reference frame, then we are moving with respect to it at least at the speed of the flow of the Local Group of galaxies due to the mass of the Laniakea Supercluster of galaxies, i.e., at around 600 km/s.

Given this, our measurements of distance and time are actually going to be a little bit objectively off the true values, which are the ones that we would measure if we were in the absolute reference frame. The things we actually measure here in our solar system will be objectively off due to time dilation and space contraction by about two parts per million, if my calculations are right. That means that our best possible clocks will be objectively about a minute(!) off per year, and our best meter sticks will be about two microns off. Not that we would notice these things, since the absolute reference frame is not observable, so we can’t compare our measurements to it.
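Those figures are easy to check. Here is a back-of-the-envelope sketch, assuming (as above) a speed of 600 km/s relative to the absolute frame; the relevant factor is γ − 1 ≈ v²/2c²:

```python
import math

C = 299_792.458            # speed of light, km/s
V = 600.0                  # assumed speed relative to the absolute frame, km/s
YEAR = 365.25 * 24 * 3600  # seconds in a Julian year

beta = V / C
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
offset = gamma - 1.0       # fractional time dilation / length contraction

print(f"fractional offset: {offset:.1e}")              # ~2e-06: two parts per million
print(f"clock drift per year: {offset * YEAR:.0f} s")  # ~63 s: about a minute
print(f"meter stick error: {offset * 1e6:.1f} um")     # ~2.0 microns per meter
```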

As a result, we have a choice between two counterintuitive claims. Either we say that duration and distance are relative, or we have to say that our best machining and time measuring is necessarily off, and we don’t know by how much, since we don’t know what the True Absolute Reference Frame is.

Monday, April 8, 2024


The day started off all cloudy, but the clouds got less dense, and then when the eclipse in our front yard reached totality, we had a big break in the clouds.

The first picture has a sunspot in the middle. In the totality picture, slightly to the right of the bottom of the sun, there is a hint of a reddish prominence which in my 8" telescope had lovely structure. A quick measurement from the photo shows that the prominence is about seven times the size of the earth.

Saturday, April 6, 2024

Plastic belt buckle

Quite a while back, I came across a discarded belt with a broken buckle. I kept it in my "long stringy things" box in the garage until I could figure out what to do with it. Finally, today, I designed and 3D printed a new buckle for it, along with plastic rivets. I replaced all the metal, and now I have a no-metal belt that hopefully can clear airline security without being removed (not tested yet).

Friday, April 5, 2024

A weaker epiphenomenalism

A prominent objection to epiphenomenalist theories of qualia, on which qualia have no causal efficacy, is that then we have no way of knowing that we had a quale of red. For a redness-zombie, who has no quale of red, would have the very same “I am having a quale of red” thought as me, since my “I am having a quale of red” thought is not caused by the quale of red.

There is a slight tweak to epiphenomenalism that escapes this objection, and the tweaked theory seems worth some consideration. Instead of saying that qualia have no causal efficacy, on our weaker epiphenomenalism we say that qualia have no physical effects. We can then say that my “I am having a quale of red” thought is composed of two components: one of these components is a physical state ϕ2 and the other is a quale q2 constituting the subjective feeling of thinking that I am having a quale of red. After all, conscious thoughts plainly have qualia, just as perceptions do, if there are qualia at all. We can now say that the physical state ϕ2 is caused by the physical correlate ϕ1 of the quale of red, while the quale q2 is wholly or partly caused by the quale q1 of red.

As a result, my conscious thought “I am having a quale of red” would not have occurred if I lacked the quale of red. All that would have occurred would be the physical part of the conscious thought, ϕ2, which physical part is what is responsible for further physical effects (such as my saying that I am having a quale of red).
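The causal structure just described can be put in a toy model. This is only an illustrative sketch: the names `phi1`, `phi2`, `q1`, `q2` mirror the post’s ϕ1/ϕ2/q1/q2, and the `Subject` class and `red_experience` function are my own hypothetical scaffolding, not anything from the post.

```python
from dataclasses import dataclass


@dataclass
class Subject:
    # Whether the subject has the quale of red (q1); False for the zombie.
    has_red_quale: bool


def red_experience(subject: Subject):
    """Trace the two causal streams of the weakened epiphenomenalism."""
    phi1 = True                      # physical correlate of the red quale
    q1 = subject.has_red_quale       # the quale of red itself (absent in the zombie)

    phi2 = phi1                      # phi1 causes phi2, the physical part of the thought
    q2 = q1                          # q1 causes q2, the felt part of the thought

    conscious_thought = phi2 and q2  # the conscious thought needs both components
    verbal_report = phi2             # speech is a physical effect, driven by phi2 alone
    return conscious_thought, verbal_report
```

On this model the redness-zombie still *says* “I am having a quale of red” (the report depends only on ϕ2), but lacks the conscious thought, since q2 never occurs.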

If this is right, then the induced skepticism about qualia will be limited to skepticism with respect to unconscious thoughts about qualia. And that’s not much of a skepticism!

Thursday, April 4, 2024

Divine thought simplicity

One of the motivations for denying divine simplicity is the plausibility of the claim that:

  1. There is a multiplicity of divine thoughts, which are a proper part of God.

But it turns out there are reasons to reject (1) independent of divine simplicity.

Here is one reductio of the distinctness of God and God’s thoughts.

  2. God is distinct from his thoughts.

  3. If x’s thoughts are distinct from x, then x causes x’s thoughts.

  4. Everything caused by God is a creature.

  5. So, God’s thoughts are creatures.

  6. Every creature explanatorily depends on a divine rational decision to create it.

  7. A rational decision explanatorily depends on thoughts.

  8. So, we have an ungrounded infinite explanatory regress of thoughts.

  9. Ungrounded infinite explanatory regresses are impossible.

  10. Contradiction!

Here is another that also starts with 2–5 but now continues:

  11. God’s omniscience is identical with or dependent on God’s thoughts.

  12. None of God’s essential attributes are identical with or dependent on any creatures.

  13. Omniscience is one of God’s essential attributes.

  14. Contradiction!
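The opening inference shared by both arguments, from God’s distinctness from his thoughts to his thoughts being creatures, can be sketched in Lean. The proposition names are my own illustrative labels, not the post’s.

```lean
-- Propositional sketch of the shared opening inference:
-- distinctness, via causation, yields that God's thoughts are creatures.
section
variable (Distinct Causes Creatures : Prop)
variable (h1 : Distinct)           -- God is distinct from his thoughts
variable (h2 : Distinct → Causes)  -- if distinct, then God causes his thoughts
variable (h3 : Causes → Creatures) -- everything caused by God is a creature

theorem thoughts_are_creatures : Creatures := h3 (h2 h1)
end
```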

Intending the bad as such

Here is a plausible thesis:

  1. You should never intend to produce a bad effect qua bad.

Now, even the most hard-nosed deontologist (like me!) will admit that there are minor bads which it is permissible to intentionally produce for instrumental reasons. If a gun is held to your head, and you are told that you will die unless you come up to a stranger and give them a moderate slap with a dead fish, then the slap is the right thing to do. And if the only way for you to survive a bear attack is to wake up your fellow camper who is much more handy with a rifle than you are, and the only way to wake them up is to poke them with a sharp stick, then the poke is the right thing. But these cases are not counterexamples to (1), since while the slap and poke are bad, one is not intending them qua bad.

However, there are more contrived cases where it seems that you should intend to produce a bad effect qua bad. For instance, suppose that you are informed that you will die unless you do something clearly bad to a stranger, but it is left entirely up to you what the bad thing is. Then it seems obvious that the right thing to do is to choose the least bad thing you can think of—the lightest slap with a dead fish, perhaps, that still clearly counts as bad—and do that. But if you do that, then you are intending the bad qua bad.

Yet I find (1) plausible. I feel a pull towards thinking that you shouldn’t set your will on the bad qua bad, no matter what. However, it seems weird to think that it would be right to give a stranger a moderate slap with a dead fish if that was specifically what you were required to do to save your life, but wrong to give them a mild slap if it were left up to you what bad thing to do. So, very cautiously, I am inclined to deny (1) in the case of minor bads.

Tuesday, April 2, 2024

Abstaining from goods

There are many times when we refrain from pursuing an intrinsic good G. We can classify these cases into two types:

  1. we refrain despite G being good, and

  2. we refrain because G is good.

The “despite” cases are straightforward, such as when one refrains from reading a novel for the sake of grading exams, despite the value of reading the novel.

The “because” cases are rather more interesting. St Augustine gives the example of celibacy for the sake of Christ: it is because marriage is good that giving it up for the sake of Christ is better. Cases of religious fasting are often like this, too. Or one might refrain from something of value in order to punish oneself, again precisely because the thing is of value. These are self-sacrificial cases.

One might think another type of “because” case is where one refrains from pursuing G now in order to obtain it by a better means, or in better circumstances, in the future. For instance, one might refrain from eating a cake on one day in order to have the cake on the next day, which is a special occasion. Here the value of the cake seems to be part of the reason for refraining from pursuit. On reflection, however, I think this is a “despite” case. For we should distinguish between the good G1 of having the cake now and the good G2 of having the cake tomorrow. Then in delaying one does so despite the good of G1 and because of the good of G2. The good of G1 is not part of the reason for refraining, unless the case becomes a sacrificial one.

I don’t know if all the “because” cases are self-sacrificial in the way celibacy is. I suspect so, but I would not be surprised if a counterexample turned up.

Aristotelian functionalism

Rob Koons and I have argued that the best functionalist theory of mind is one where the proper function of a system is defined in an Aristotelian way, in terms of innate teleology.

When I was teaching on this today, it occurred to me (it should have been obvious earlier) that this Aristotelian functionalism has the intuitive consequence that only organisms are minded. For although innate teleology may be had by substances other than organisms, inorganic Aristotelian material substances do not have the kind of teleology that would make for mindedness on any plausible functionalism. Here I am relying on Aristotle’s view that (maybe with some weird exceptions, like van Inwagen’s snake-hammock) artifacts—presumably including computers—are not substances.

If this is right, then the main counterexamples to functionalism disappear:

Recall Leibniz’s mill argument: if a machine can be conscious, a mill full of giant gears could be conscious, and yet as we walked through such a mill, it would be clear that there is no consciousness anywhere. But now suppose we were told that the mill has an innate function (not derived from the purposes of the architect) which governed the computational behavior of the gears. We would then realize that the mill is more than just what we can see, and that would undercut the force of the Leibnizian intuition. In other words, it is not so hard to believe that a mill with innate purpose is conscious.

Further, note that perhaps the best physicalist account of qualia is that qualia are grounded in the otherwise unknowable categorical features of the matter making up our brains. This, however, has a somewhat anti-realist consequence: the way our experiences feel has nothing to do with the way the objects we are experiencing are. But an Aristotelian functionalist can tell a better version of this story. If I have a state whose function is to represent red light, then I have an innate teleology that makes reference to red light. This innate teleology could itself encode the categorical features of red light, and since this innate teleology, via functionalism, grounds our perception of red light, our perception of red light is “colored” not just by the categorical features of our brains, but by the categorical features of red light (even if we are hallucinating the red light). This makes for a more realist theory of qualia, on which there is a non-coincidental connection between the external objects and how they seem to us.

Observe, also, how the Aristotelian story has the advantages of panpsychism without the disadvantages. The advantage of panpsychism is that the mysterious gap between us and electrons is bridged. The disadvantages are two-fold: (a) it is highly counterintuitive that electrons are conscious (the gap is bridged too well) and (b) we don’t have a plausible story about how the consciousness of the parts gives rise to a consciousness of the whole. But on Aristotelian functionalism, what we have in common with electrons is teleology, not consciousness, so we need not say that electrons are conscious; yet because mind reduces to teleological function (though not of the kind electrons have), the gap is still bridged. And we can tell exactly the kind of story that non-Aristotelians do about how the function of the parts gives rise to the consciousness of the whole.

There is, however, a serious downside to this Aristotelian functionalism. It cannot work for the simple God of classical theism. But perhaps we can put a lot of stress on the idea that “mind” is only said analogously between creatures and God. I don’t know if that will work.

Functionalism and organizations

I am quite convinced that if standard (non-evolutionary, non-Aristotelian) functionalism is true, then complex organizations such as universities and nations have minds and are conscious. For it is clear to me that dogs are conscious, and the functioning of complex organizations is more intellectually sophisticated than that of dogs, and has the kind of desire-satisfaction drivers that dogs and maybe even humans have.

(I am pretty sure I posted something like this before, but I can’t find it.)