Friday, July 26, 2024

Perfect nomic correlations

Here is an interesting special case of Ockham’s Razor:

  1. If we find that, of nomic necessity, whenever A occurs, so does B, then it is reasonable to assume that B is not distinct from A.

Here are three examples.

  A. We learn from Newton and Einstein that inertial mass and gravitational mass always have the same value. So by (1) we should suppose them to be one property, rather than two properties that are nomically correlated.

  B. In a Newtonian context consider the hypothesis of a gravitational field. Because the gravitational field values at any point are fully determined by the positions and masses of material objects, (1) tells us that it’s reasonable to assume the gravitational field isn’t some additional entity beyond the positions and masses of material objects.

  C. Suppose that we find that mental states supervene on physical states: that there is no difference in mental states without a corresponding difference in physical states. Then by (1) it’s reasonable to expect that mental states are not distinct from physical states. (This is of course more controversial than (A) and (B).)

But now consider that in a deterministic theory, future states occur of nomic necessity given past states. Thus, (1) makes it reasonable to reduce future states to past states: What it is for the universe to be in state S_7 at time t_7 is nothing but the universe’s being in state S_0 at time t_0 and the pair (S_7, t_7) having such-and-such a mathematical relationship to the pair (S_0, t_0). Similarly, entities that don’t exist at the beginning of the universe can be reduced to the initial state of the universe—we are thus reducible. This consequence of (1) will seem rather absurd to many people.
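
Here is a minimal formal sketch of the reduction, with U an assumed deterministic evolution map (my notation for illustration, not anything in the theory itself):

```latex
% Determinism: a fixed evolution map U, indexed by elapsed time,
% carries the state at one time to the state at any later time.
S_7 = U_{t_7 - t_0}(S_0)
% Let A = [the universe is in S_0 at t_0] and
%     B = [the universe is in S_7 at t_7].
% Given determinism, A nomically necessitates B, so (1) would license
% reducing B to A together with the relation displayed above.
```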

What should we do? One move is to embrace the consequence and conclude that indeed if we find good evidence for determinism, it will be reasonable to reduce the present to the past. I find this implausible.

Another move is to take the above argument as evidence against determinism.

Yet another move is to restrict (1) to cases where B occurs at the same time as A. This restriction is problematic in a relativistic context, since simultaneity is relative. Probably the better version of the move is to restrict (1) to cases where B occurs at the same time and place as A. Interestingly, this will undercut the gravitational field example (B). Moreover, because it is not clear that mental states have a location in space, this may undercut application (C) to mental states.

A final move is either to reject (1) or, more modestly, to claim that the evidence provided by nomic coincidence is pretty weak and defeasible on the basis of intuitions, such as our intuition that the present does not reduce to the past. In either case, application (C) is in question.

In any case, it is interesting to note that thinking about determinism gives us some reason to be suspicious of (1), and hence of the argument for mental reduction in (C).

Thursday, July 25, 2024

Aggression and self-defense

Let’s assume that lethal self-defense is permissible. Such self-defense requires an aggressor. There is a variety of concepts of an aggressor for purposes of self-defense, depending on what constitutes aggression. Here are a few accounts:

  1. voluntarily, culpably and wrongfully threatening one’s life

  2. voluntarily and wrongfully threatening one’s life

  3. voluntarily threatening one’s life

  4. threatening, voluntarily or involuntarily, one’s life.

(I am bracketing the question of less serious threats, where health but not life is threatened.)

I want to focus on accounts of self-defense on which aggression is defined by (4), namely where there is no mens rea requirement at all on the threat. This leads to a very broad doctrine of lethal self-defense. I want to argue that it is too broad.

First note that it is obvious that a criminal is not permitted to use lethal force against a police officer who is legitimately using lethal force against them. This implies that even (3) is too lax an account of aggression for purposes of self-defense, and a fortiori (4) is too lax.

Second, I will argue against (4) more directly. Imagine that Alice and Bob are locked in a room together for a week. Alice has just been infected with a disease which would do her no harm but would kill Bob. If Alice dies within the next day, the disease will not yet have become contagious, and Bob’s life will be saved. Otherwise, Bob will die. By (4), Bob can deem Alice an aggressor simply by her being alive—she threatens his life. So on an account of self-defense where (4) defines aggression, Bob is permitted to engage in lethal self-defense against Alice.

My intuitions say that this is clearly wrong. But not everyone will see it this way, so let me push on. If Bob is permitted to kill Alice because aggression doesn’t have a mens rea requirement, Alice is also permitted to lethally fight back against Bob, despite the fact that Bob is acting permissibly in trying to kill her. (After all, Alice was also acting permissibly in breathing, and thereby staying alive and threatening Bob.) So the result of a broad view of self-defense against any kind of threat, voluntary or not, is situations where two people will permissibly engage in a fight to the death.

Now, it is counterintuitive to suppose that there could be a case where two people are both acting justly in a fight to the death, apart from cases of non-moral error (say, each thinks the other is an attacking bear).

Furthermore, the result of such a situation is that basically the stronger of the two gets to kill the weaker and survive. The effect is not literally might makes right, but is practically the same. This is an implausibly discriminatory setup.

Third, consider a more symmetric variant. Two people are trapped in a spaceship that has only air enough for one to survive until rescue. If (4) is the right account of aggression, then simply by breathing each is an aggressor against the other. This is already a little implausible. Two people in a room breathing is not what one normally thinks of as aggression. Let me back this intuition up a little more. Suppose that there is only one person trapped in a spaceship, and there is not enough air to survive until rescue. If in the case of two people each was engaging in aggression against the other simply by virtue of removing oxygen from the air to the point where the other would die, then in the case of one person in the spaceship, that person is engaging in aggression against themselves by removing oxygen from the air to the point where they themselves will die. But that’s clearly false.

I don’t know exactly how to define aggression for purposes of self-defense, but I am confident that (4) is much too broad. I think the police officer and criminal case shows that (3) is too broad as well. I feel pulled towards both (1) and (2), and I find it difficult to resolve the choice between them.

Wednesday, July 24, 2024

Knowing what it's like to see green

You know what it’s like to see green. Close your eyes. Do you still know what it’s like to see green?

I think so.

Maybe you got lucky and saw some green patches while closing your eyes. But I am not assuming that happened. Even if you saw no green patches, you still knew what it was like to see green.

Philosophers who are really taken with qualia sometimes say that:

  1. Our knowledge of what it is like to see green could only be conferred on us by having an experience of green.

But if I have the knowledge of what it is like to see green when I am not experiencing green, then that can’t be right. For whatever state I am in when not experiencing green but knowing what it’s like to see green is a state that God could gift me with without ever giving me an experience of green. (One might worry that then it wouldn’t be knowledge, but something like true belief. But God could testify to the accuracy of my state, and that would make it knowledge.)

Perhaps, however, we can say this. When your eyes are closed and you see no green patches, you know what it’s like to see green in virtue of having the ability to visualize green, an ability that generates experiences of green. If so, we might weaken (1) to:

  2. Our knowledge of what it is like to see green could only be conferred on us by having an experience of green or an ability to generate such an experience at will by visual imagination.

We still have a conceptual connection between knowledge of the qualia and experience of the qualia then.

But I think (2) is still questionable. First, it seems to equivocate on “knowledge”. Knowledge grounded in abilities seems to be knowledge-how, and that’s not what the advocates of qualia are talking about.

Second, suppose you’ve grown up never seeing green. And then God gives you an ability to generate an experience of green at will by visual imagination: if you “squint your imagination” thus-and-so, you will see a green patch. But you’ve never so squinted yet. It seems odd to say you know what it’s like to see green.

Third, our powers of visual imagination vary significantly. Surely I know what it’s like to see paradigm instances of green, say the green of a lawn in an area where water is plentiful. When I try to imagine a green patch, if I get lucky, my mind’s eye presents to me a patch of something dim, muddy and greenish, or maybe a lime green flash. I can’t imagine a paradigm instance of green. And yet surely, I know what it’s like to see paradigm instances of green. It seems implausible to think that when my eyes are closed my knowledge of what it’s like to see green (and even paradigm green) is grounded in my ability to visualize these dim non-paradigm instances.

It seems to me that what the qualia fanatic should say is that:

  3. We only know what it’s like to see green when we are experiencing green.

But I think that weakens arguments from qualia against materialism because (3) is more than a little counterintuitive.

Wednesday, July 17, 2024

The explanation of our reliability is not physical

  1. All facts completely reducible to physics are first-order facts.

  2. All facts completely explained by first-order facts are themselves completely reducible to first-order facts.

  3. Facts about our epistemic reliability are facts about truth.

  4. Facts about truth are not completely reducible to first-order facts.

  5. Therefore, no complete explanation of our epistemic reliability is completely reducible to physics.

This is a variant on Plantinga’s evolutionary argument against naturalism.

Premise (4) follows from Tarski’s Indefinability of Truth Theorem.

The one premise in the argument that I am not confident of is (2). But it sounds right.

First-order naturalism

In a lovely paper, Leon Porter shows that semantic naturalism is false. One way to put the argument is as follows:

  1. If semantic naturalism is true, truth is a natural property.

  2. All natural properties are first order.

  3. Truth is not a first-order property.

  4. So, truth is not a natural property.

  5. So, semantic naturalism is not true.

One can show (3) by using the liar paradox or just take it as the outcome of Tarski’s Indefinability of Truth Theorem.

Of course, naturalism entails semantic naturalism, so the argument refutes naturalism.

But it occurred to me today, in conversation with Bryan Reece, that perhaps one could have a weaker version of naturalism, which one might call first-order naturalism: the thesis that all first-order truths are natural truths.

First-order naturalism escapes Porter’s argument. It’s a pretty limited naturalism, but it has some force. It implies, for instance, that Zeus does not exist. For if Zeus exists, then that Zeus exists is a first-order truth that is not natural.

First-order naturalism is an interestingly modest naturalist thesis. It is interesting to think about its limits. One that comes to mind is that it does not appear to include naturalism about minds, since it does not appear possible to characterize minds in first-order language (minds represent the world, etc., and talk of representation is at least prima facie not first-order).

Truthteller's relative

The truthteller paradox is focused on the sentence:

  1. This sentence is true.

There is no contradiction in taking (1) to be true, but neither is there a contradiction in taking (1) to be false. So where is the paradox? Well, one way to see the paradox is to note that there is no more reason to take (1) to be true than to be false or vice versa. Maybe there is a violation of the Principle of Sufficient Reason.

For technical reasons, I will take “This sentence” in sentences like (1) to be an abbreviation for a complex definite syntactic description that has the property that the only sentence that can satisfy the description is (1) itself. (We can get such a syntactic description using the diagonal lemma, or just a bit of cleverness.)
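
As an aside, the “bit of cleverness” can be exhibited in miniature. Here is a toy Python sketch (an illustration of the diagonal trick, not anything the argument depends on) of a string containing a description that only that very string satisfies:

```python
# A toy diagonal construction: a string that contains a syntactic
# description satisfied by that very string and by nothing else.
template = "the result of substituting {0!r} into itself at its placeholder"
sentence = template.format(template)

print(sentence)
# The operation the sentence describes (substituting the quoted
# template into itself) reproduces the sentence itself:
assert sentence == template.format(template)
```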

But the fact that we don’t have a good reason to assign a specific truth value to (1) isn’t all there is to the paradox.

For consider this relative of the truthteller:

  2. This sentence is true or 2+2=4.

There is no difficulty in assigning a truth value to (2) if it has one: it’s got to be true because 2+2=4. But nonetheless, (2) is not meaningful. When we try to unpack its meaning, that meaning keeps on fleeing. What does (2) say? Not just that 2+2=4. There is that first disjunct in it after all. That first disjunct depends for its truth value on (2) itself, in a viciously circular way.

But after all shouldn’t we just say that (2) is true? I don’t think so. Here is one reason to be suspicious of the truth of (2). If (2) is true, so is:

  3. This sentence is true or there are stars.

But it seems that if (3) is meaningful, then it should have a truth value in every possible world. But that would include the possible world where there are no stars. However, in that world, the sentence (3) functions like the truthteller sentence (1), to which we cannot assign a truth value. Thus (3) does not have a sensible truth value assignment in worlds where there are no stars. But it is not the sort of sentence whose meaningfulness should vary between possible worlds. (It is important for this argument that the description that “This sentence” is an abbreviation for is syntactic, so that its referent should not vary between worlds.)

It might be tempting to take (2) to be basically an infinite disjunction of instances of “2+2=4”. But that’s not right. For by that token (3) would be basically an infinite disjunction of “there are stars”. But then (3) would be false in worlds where there are no stars, and that’s not clear.

If I am right, the fact that (1) wouldn’t have a preferred truth value is a symptom rather than the disease itself. For (2) would have a preferred truth value, but we have seen that it is not meaningful. This pushes me to think that the problem with (1) is the same as with (2) and (3): the attempt to bootstrap meaning in an infinite regress.

I don’t know how to make all this precise. I am just stating intuitions.

Monday, July 15, 2024

From love of neighbor to Christianity

Start with this argument:

  1. It’s not wrong for me to love my friend as if they were in the image and likeness of God.

  2. If someone is not God and not in the image and likeness of God, then to love them as if they were in the image and likeness of God is excessive.

  3. Excessive love is wrong.

  4. My friend is not God.

  5. So, my friend is in the image and likeness of God.

  6. So, God exists.

I think there may be some other variants on this argument that are worth considering. Replace being in the image and likeness of God, for instance, with (a) being so loved by God that God became incarnate out of love for them, or with (b) having the Spirit of God living in them. Then the conclusion is that God became incarnate or that the Spirit of God lives in our neighbor.

The general point is this. Christianity gives us an admirable aspiration as to how much we should love our neighbor. But that much love of our neighbor is inappropriate unless something like Christianity is true.

I think there is a way in which this argument is far from new. One of the great arguments for Christianity has always been those Christians who loved their neighbor as God called them to do. The immense attractiveness of their lives showed that their love was not wrong, and knowledge of these lives showed that they were indeed loving their neighbor in the ways the above arguments talk about.

Friday, July 12, 2024

An act with a normative end

Here’s an interesting set of cases that I haven’t seen a philosophical discussion of. To get some item B, you need to affirm that you did A (e.g., took some precautions, read some text, etc.). But to permissibly affirm that you did A, you need to do A. Let us suppose that you know that your affirmation will not be subject to independent verification, and you in fact do A.

Is A a means to B in this case?

Interestingly, I think the answer is: Depends.

Let’s suppose for simplicity that the case is such that it would be wrong to lie about doing A in order to get B. (I think lying is always wrong, but won’t assume this here.)

If you have such integrity of character that you wouldn’t affirm that you did A without having done A, then indeed doing A is a means to affirming that you did A, which is a means to B, and in this case transitivity appears to hold: doing A is a means to B.

But we can imagine you have less integrity of character, and if the only way to get B would be to falsely affirm that you did A, you would dishonestly so affirm. However, you have enough integrity of character that you prefer honesty when the cost is not too high, and the cost of doing A is not too high. In such a case, you do A as a means to permissibly affirming that you did A. But it is affirming that you did A that is a means to getting B: permissibly affirming is not necessary. Thus, your doing A is not a means to getting B, but it is a means to the additional bonus that you get B without being dishonest.

In both specifications of character, your doing A is a means to its being permissible for you to affirm you did A. We see, thus, that we have a not uncommon set of cases where an ordinary action has a normative end, namely the permissibility of another action. (These are far from the only such cases. Requesting someone’s permission is another example of an action whose end is the permissibility of some other action.)

The cases also have another interesting feature: your action is a non-causal means to an end. For your doing A is a means to permissibility of affirming you did A, but does not cause that permissibility. The relationship is a grounding one.

Thursday, July 11, 2024

The dependence of evidence on prior confidence

Whether p is evidence for q will often depend on one’s background beliefs. This is a well-known phenomenon.

But here’s an interesting fact that I hadn’t noticed before: sometimes whether p is evidence for q depends on how confident one is in q.

The example is simple: let p be the proposition that all other reasonable people have confidence level around r in q. If r is significantly bigger than one’s current confidence level, then p tends to be evidence for q. If r is significantly smaller than one’s current confidence level, then p tends to be evidence against q.
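
A toy numerical model makes the point vivid. Suppose (as a rough idealization I am inventing for illustration) that one responds to p by linear pooling, moving one’s credence partway toward the consensus level r. Then the very same p raises, lowers, or leaves unchanged one’s credence in q depending on where one starts:

```python
def update_toward_peers(own, peers, weight=0.5):
    """Toy linear-pooling update: move one's credence in q partway
    toward the consensus credence r of the other reasonable people."""
    return (1 - weight) * own + weight * peers

r = 0.5  # all other reasonable people have confidence level around 0.5
for own in (0.2, 0.5, 0.8):
    posterior = update_toward_peers(own, r)
    if posterior > own:
        verdict = "evidence for q"
    elif posterior < own:
        verdict = "evidence against q"
    else:
        verdict = "evidentially neutral"
    print(f"own credence {own:.1f} -> {posterior:.2f}: {verdict}")
```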

Friday, July 5, 2024

From theism to something like Christianity

The Gospel message—the account of the infinite and perfect God becoming one of us in order to suffer and die in atonement for our sins—is immensely beautiful. Even abstracting from the truth of the message, it is more beautiful than the beauties of nature around us. Suppose, now, that God exists and the Gospel message is false. Then a human (or demonic) falsehood has exceeded the beauty of God’s created nature around us. That does not seem plausible. Thus, it is likely that:

  1. If God exists, the Gospel message is true.

Furthermore, it seems unlikely that God would allow us to come up with a falsehood about what he has done where the content of that falsehood exceeds in beauty and goodness what God has in fact done. If so, then:

  2. If God exists, something at least as beautiful and good as the Gospel message is true.

Thinking hard

I don’t remember seeing much philosophical discussion of the duty to think hard.

There is a distinction we should start with. For many xs it sounds right to say:

  1. If you’re going to have an opinion about x, you should have thought hard about x.

But that doesn’t imply a duty to think hard about x unless you have a duty to have an opinion about x.

What I am interested in are things that you simply ought to think hard about. Some of these cases follow from specifics of your situation. If someone is drowning, and you don’t see how to save them, you ought to think hard about how to save them. But the more interesting cases are things that human beings at large should think hard about.

Consider these two statements, both of them likely true:

  2. There are agnostics who have thought hard and honestly about God.

  3. There are agnostics who have not thought hard about God.

Clearly, it is not crazy to think that (2) poses a version of the problem of hiddenness: If God exists, why would he stay hidden from someone who thought hard about him? But (3) is not troubling in the same way. If there is a problem for theism from (3), it is just the good ol’ problem of moral evil: If God is perfectly good, why would he allow someone not to think hard about him? And it doesn’t feel like an especially problematic version of the problem of evil (it feels much less problematic than the problem of child abuse, say).

The intuitive difference between (2) and (3) suggests this plausible thesis:

  4. All humans in normal circumstances should think hard about God.

Or maybe at least:

  5. All humans in normal circumstances should think hard about fundamental questions.

How hard are people obligated to think about God and similar questions? Pascal’s Wager suggests that one should think very hard about them, both for prudential and moral reasons (the latter because our thinking hard about fundamental questions enables us to help others think about them). After all, God, if he exists, is the infinitely good ground of being, and there is nothing more important to think about.

I should note that I don’t think (4) means that everyone should think hard about whether God exists. I am inclined to think it is possible, either by faith or by easy observation of the world, to reasonably come to a position where it’s pretty obvious that God exists. But one should still think hard about God, even so.

All this leaves open a further question. What is it to think hard about something? The time one puts into it is a part of that. But note that some of the time is apt to be unconscious: to think hard about something may involve significant periods during which one is not thinking consciously about the matter, but one comes back to it again and again. But there is also a seriousness or intensity of thought. I don’t know how exactly to specify what that means, but one interesting aspect of it is that if one is thinking seriously, one makes use of external tools. Thinking seriously can require actions of larger muscle groups: getting up to talk to friends; going to the library; performing scientific experiments; getting some scrap paper to make notes. (I sometimes know that I am not doing mathematics seriously if I don’t bother with scrap paper.) Thinking seriously involves more than just thinking. :-)

Tuesday, July 2, 2024

Do we have normative powers?

A normative power is supposed to be a power to directly change normative reality. We can, of course, indirectly change normative reality by affecting the antecedents of conditional norms: By unfairly insulting you, I get myself to have a duty to apologize, but that is simply due to a pre-existing duty to apologize for all unfair insults.

It would be attractive to deny our possession of normative powers. Typical examples of normative powers are promises, commands, permissions, and requests. But all of these can seemingly be reduced to conditional norms, such as:

  • Do whatever you promise

  • Do whatever you are validly commanded

  • Refrain from ϕing unless permitted

  • Treat what you are requested to do as a reason for doing it.

One might think that one can still count as having a normative power even if it is reducible to prior conditional norms. Here is a reason to deny this. I could promise to send you a dollar on any day on which your dog barks. Then your dog has the power to obligate me to send you a dollar, a power reducible to the norm arising from my promise. But dogs do not have normative powers. Hence an ability to change normative reality by affecting the antecedents of a prior conditional norm is not a normative power.

If this argument succeeds, then if a power to affect normative reality is reducible to a non-normative power (such as the power to bark) and a prior norm, it is not a normative power. Are there any normative powers, then, powers not reducible in this way?

I am not sure. But here is a non-conclusive reason to think so. It seems we can invent new useful ways of affecting normative reality, within certain bounds. For instance, normally a request comes along with a permission—a request creates a reason for the other party to do the requested action while removing any reasons of non-consent against the performance. But there are rare contexts where it is useful to create a reason without removing reasons of non-consent. An example is “If you are going to kill me, kill me quickly.” One can see this as creating a reason for the murderer to kill one quickly, without removing reasons of non-consent against killing (or even killing quickly). Or, for another example, normally a general’s command in an important matter generates a serious obligation. But there could be cases where the general doesn’t want a subordinate to feel very guilty for failing to fulfill the command, and it would be useful for the general to make a new commanding practice, a “slight command” which generates an obligation, but one that it is only slightly wrong to disobey.

There are approximable and non-approximable promises. When I promise to bake you seven cookies, and I am short on flour, normally I have reason to bake you four. But there are cases where there is no reason to bake you four—perhaps you are going to have seven guests, and you want to serve them the same sweet, so four are useless to you (maybe you hate cookies). Normally we leave such decisions to common sense and don’t make them explicit. However, we could also imagine making them explicit, and we could imagine promises with express approximability rules (perhaps when you can’t do cookies, cupcakes will be a second best; perhaps they won’t be). We can even imagine complex rules of preferability between different approximations to the promise: if it’s sunny, seven cupcakes is a better approximation than five cookies, while if it’s cloudy, five cookies is a better approximation. These rules might also specify the degree of moral failure that each approximation represents. It is, plausibly, within our normative authority over ourselves to issue promises with all sorts of approximability rules, and we can imagine a society inventing such.
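
As a sketch of how such express approximability rules might be encoded (every name and structure here is a hypothetical illustration, not a proposal about actual promising):

```python
# Hypothetical encoding of a promise with express approximability
# rules: under each condition, a ranked list of acceptable
# approximations, best first.
promise = {
    "content": ("cookies", 7),
    "fallbacks": {
        "sunny":  [("cupcakes", 7), ("cookies", 5)],
        "cloudy": [("cookies", 5), ("cupcakes", 7)],
    },
}

def best_approximation(promise, condition, feasible):
    """Return the highest-ranked feasible approximation under the
    given condition, or None if no listed fallback is feasible."""
    for option in promise["fallbacks"][condition]:
        if option in feasible:
            return option
    return None

# Short on flour: the full seven cookies are out, but both fallbacks
# are available, and the weather settles which is the better one.
feasible = {("cookies", 5), ("cupcakes", 7)}
print(best_approximation(promise, "sunny", feasible))   # ('cupcakes', 7)
print(best_approximation(promise, "cloudy", feasible))  # ('cookies', 5)
```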

Intuitively, normally, if one is capable of a greater change of normative reality, one is capable of a lesser one. Thus, if a general has the authority to create a serious obligation, they have the authority to create a slight one. And if you are capable of both creating a reason and providing a permission, you should be able to do one in isolation from the other. If you have the authority to command, you have the standing to create non-binding reasons by requesting.

We could imagine a society which starts with two normative powers, promising and commanding, and then invents the “weaker” powers of requesting and permitting, and an endless variety of normative subtlety.

It seems plausible to think that we are capable of inventing new, useful normative practices. These, of course, cannot be a normative power grab: there are limits. The epistemic rule of thumb for determining these limits is that the powers do not exceed ones that we clearly have.

It seems a little simpler to think that we can create new normative powers within predetermined limits than that all our norms are preset, and we simply instantiate their antecedents. But while this is a plausible argument for normative powers, it is not conclusive.

Monday, July 1, 2024

Duplicating electronic consciousnesses

Assume naturalism and suppose that digital electronic systems can be significantly conscious. Suppose Alice is a deterministic significantly conscious digital electronic system. Imagine we duplicated Alice to make another such system, Bob, and fed them both the same inputs. Then there are two conscious beings with qualitatively the same stream of consciousness.

But now let’s add a twist. Suppose that we create a monitoring system that continually checks all of Alice and Bob’s components, and as soon as any corresponding components disagree—are in a different state—then the system pulls the plug on both, thereby resetting all components to state zero. In fact, however, everything works well, and the inputs are always the same, so there is never any deviation between Alice and Bob, and the monitoring system never does anything.

What happens to the consciousnesses? Intuitively, neither Alice nor Bob should be affected by a monitoring system that never actually does anything. But it is not clear that this is the conclusion that specific naturalist theories will yield.

First, consider functionalism. Once the monitoring system is in place, both Alice and Bob change with respect to their dispositional features. All the subsystems of Alice are now incapable of producing any result other than one synchronized to Bob’s subsystems, and vice versa. I think a strong case can be made that on functionalism, Alice and Bob’s subsystems lose their defining functions when the monitoring system is in place, and hence lose consciousness. Therefore, on functionalism, consciousness has an implausible extrinsicness to it. The duplication-plus-monitoring case is some evidence against functionalism.

Second, consider Integrated Information Theory. It is easy to see that the whole system, consisting of Alice, Bob and the monitoring system, has a very low Φ value. Its components can be thought of as just those of Alice and Bob, but with a transition function that sets everything to zero if there is a deviation. We can now split the system into two subsystems: Alice and Bob. Each subsystem’s behavior can be fully predicted from that subsystem’s state plus one additional bit of information that represents whether the other system agrees with it. Because of this, the Φ value of the system is at most 2 bits, and hence the system as a whole has very, very little consciousness.

Moreover, Alice remains significantly conscious: we can think of Alice as having just as much integrated information after the monitoring system is attached as before, but now having one new bit of environmental dependency, so the Φ measure does not change significantly from the monitoring being added. Moreover, because the joint system is not significantly conscious, Integrated Information Theory’s proviso that a system loses consciousness when it comes to be in a part-to-whole relationship with a more conscious system is irrelevant.

Likewise, Bob remains conscious. So far everything seems perfectly intuitive. Adding a monitoring system doesn’t create a new significantly conscious system, and doesn’t destroy the two existing conscious systems. However, here is the kicker. Let X be any subsystem of Alice’s components. Let S_X be the system consisting of the components in X together with all of Bob’s components that don’t correspond to the components in X. In other words, S_X is a mix of Alice’s and Bob’s components. It is easy to see that the information-theoretic behavior of S_X is exactly the same as the information-theoretic behavior of Alice (or of Bob, for that matter). Thus, the Φ value of S_X will be the same for all X.

Hence, on Integrated Information Theory, each of the S_X systems will be equally conscious. The number of these systems equals 2^n, where n is the number of components in Alice. Of course, one of these 2^n systems is Alice herself (that’s S_A, where A is the set of Alice’s components) and another one is Bob himself (that’s S_∅). Conclusion: By adding a monitoring system to our Alice and Bob pair, we have created a vast number of new equally conscious systems: 2^n − 2 of them!

The ethical consequences are very weird. Suppose that Alice has some large number of components, say 10^11 (that’s how many neurons we have). We duplicate Alice to create Bob. We’ve doubled the number of beings with whatever interests Alice had. And then we add a dumb monitoring system that pulls the plug given a deviation between them. Suddenly we have created 2^(10^11) − 2 systems with the same level of consciousness. Suddenly, the moral consideration owed to the Alice/Bob line of consciousness vastly outnumbers everything.
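
The combinatorics can be checked with a few lines of Python (a sketch of the counting only; nothing here models Φ itself):

```python
import math

def mixed_system_count(n):
    """Number of mixed systems S_X built from n corresponding
    component pairs, excluding X = all (Alice) and X = empty (Bob)."""
    return 2**n - 2

for n in (2, 3, 10):
    print(f"n = {n}: {mixed_system_count(n)} new systems")

# For n = 10**11 components the count 2**(10**11) - 2 is far too
# large to write out; we can only report its number of decimal digits.
n = 10**11
print(f"about {int(n * math.log10(2)) + 1:,} decimal digits")
```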

So both functionalism and Integrated Information Theory have trouble with our duplication story.

Thursday, June 27, 2024

Improving the Epicurean argument for the harmlessness of death

The famous Epicurean argument that death (considered as leading to nonexistence) is not a harm is that death doesn’t harm one when one is alive and it doesn’t harm one when one is dead, since the nonexistent cannot be harmed.

However, the thesis that the nonexistent cannot be harmed is questionable: posthumous infamy seems to be a harm.

But there’s a neat way to fix this gap in the Epicurean argument. Suppose Bob lives 30 years in an ordinary world, and Alice lives a very similar 30 years, except that in her world time started with her existence and ended with her death. Thus, literally, Alice is always alive—she is alive at every time. But notice that the fact that the existence of everything else ends with Alice does not make Alice any better off than Bob! Thus, if death is a harm to Bob, it is a harm to Alice. But even if it is possible for the nonexistent to be harmed, Alice cannot be harmed at a time at which she doesn’t exist—because there is no time at which Alice doesn’t exist.

Hence, we can run a version of the Epicurean argument without the assumption that the nonexistent cannot be harmed.

I am inclined to think that the only satisfactory way out of the argument, especially in the case of Alice, is to adopt eternalism and say that death is a harm without being a harm at any particular time. What is a harm to Alice is that her life has an untimely shortness to it—a fact that is not tied to any particular time.

Tuesday, June 25, 2024

Infinite evil

Alice and Bob are both bad people, and both believe in magic. Bob believes that he lives in an infinite universe, with infinitely many sentient beings. Alice thinks all the life there is is life on earth. They each perform a spell intended to cause severe pain to all sentient beings other than themselves.

There is a sense in which Bob does something infinitely worse than Alice: he tries to cause severe pain to infinitely many beings, while Alice is only trying to harm finitely many beings.

It is hard to judge Bob as an infinitely worse person than Alice, because we presume that if Alice thought that there were infinitely many sentient beings, she would have done as Bob did.

But even if we do not judge Bob as an infinitely worse person, shouldn’t we judge his action as infinitely worse? Yet even that doesn’t seem right. And neither seems to deserve that much more punishment than a sadistic dictator who tries to infect “mere millions” with a painful disease.

Could it be that punishment maxes out at some point?