Monday, October 23, 2017

Murder by slowdown?

Zeno wants Alice dead and he has the following plan. He slows down Alice’s functioning—say, by cooling her or by sending her around the earth on a spaceship so fast that relativistic time dilation does the job—so much that each second of Alice’s internal time takes a billion years of external time. In six seconds of Alice’s internal time, she’s dead, because the sun runs out of hydrogen and turns into a red giant.

Did Zeno kill Alice or did the sun kill Alice? Both: Zeno kills Alice by shifting her future life into a spatiotemporal position where that life would be destroyed by the sun. This is akin to sending Alice now into the sun on a speeding rocket.

(I am not a lawyer, but I expect Zeno could only be convicted of attempted murder, since a conviction for murder requires the victim to be dead; similarly, I assume that an 80-year-old person who gives someone a poison that takes forty years to work can only be convicted of attempted murder, because by the time the poison does its work, the murderer will be dead.)

But now imagine that Zeno lives in a universe where the earth will be habitable forever. He sets up an automated system that slows down Alice’s internal time to such a degree that in the first year of external time, Alice’s internal time moves ahead only 3 seconds; in the next external year, it moves ahead by 1.5 seconds; in the next year, it moves ahead by 0.75 seconds; and so on. What happens? Well, Alice still cannot have more than six seconds of life ahead of her. In n years of external time, she will have had 6 − 6/2^n seconds of internal time.
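The bound can be checked in a few lines: the internal time the system allots is a geometric series summing to six seconds only in the limit. (The function name is mine, introduced for illustration.)

```python
# Alice's internal time gained in external year n (1-indexed): 3 / 2**(n-1) seconds.
# Total after n years: 6 - 6/2**n, which approaches but never reaches 6.

def internal_time_after(years):
    """Internal seconds Alice has lived through after `years` external years."""
    return sum(3 / 2 ** (n - 1) for n in range(1, years + 1))

for n in (1, 2, 10, 50):
    print(n, internal_time_after(n))

# The sum matches the closed form 6 - 6/2**n:
assert abs(internal_time_after(10) - (6 - 6 / 2 ** 10)) < 1e-12
```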

So just as in the first scenario, Zeno has ensured that Alice has less than six seconds of internal time left. It sure sounds like murder. But wait! In the second scenario, it seems that Alice never dies: she is alive this year, just sluggish; she will be alive next year, though even more sluggish; and so on.

But Alice will be dead in exactly six seconds of internal time. So what will be the cause of death? The unfortunate misalignment between Alice’s internal time and the external time of the universe, together with the universe running out of time “once year ω rolls around”? Maybe. I am not sure. This is paradoxical.

There is a way of getting out of this paradox. Suppose internal time must be discrete. Then to slow down Alice’s time means to space out the discrete ticks of her time. Suppose for simplicity that Alice has a hundred ticks per internal second, so she has 600 ticks of life ahead of her. Then in the first external year, she will have 300 ticks. Some time in year ten, the 599th tick of Alice’s future life happens. And the 600th tick will never happen. So, the gradual slowdown story is impossible. The speed hits zero after the tenth year. The best (or worst?) Zeno can do is ensure that the 599th tick of Alice’s life is the last one. But if that’s what he does, then he causes her death by ensuring that the 600th tick never happens, and then there is no gradual slowdown paradox.
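On the idealizing assumption that Alice's ticks are evenly spaced in her internal time, a quick computation (in exact arithmetic, to avoid rounding near the limit) confirms that the 599th tick falls in year ten and the 600th never comes:

```python
from fractions import Fraction
import math

TICKS_PER_SECOND = 100               # stipulated: 100 ticks per internal second
TOTAL_TICKS = 6 * TICKS_PER_SECOND   # 600 ticks in her remaining 6 internal seconds

def internal_seconds_after(years):
    # 3 s in year 1, 1.5 s in year 2, ...: total 6 - 6/2**years, exactly.
    return 6 - Fraction(6, 2 ** years)

def ticks_completed_after(years):
    # Ticks idealized as evenly spread through Alice's internal time.
    return math.floor(internal_seconds_after(years) * TICKS_PER_SECOND)

print(ticks_completed_after(9))     # 598
print(ticks_completed_after(10))    # 599: the 599th tick falls in year ten
print(ticks_completed_after(1000))  # still 599: the 600th tick never happens
```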

Friday, October 20, 2017

Why my present existence can't depend on future events

I find very persuasive arguments like this:

  1. If theory T is true, then whether I exist now depends on some future events.

  2. Facts about what exists now do not depend on future events.

  3. So, theory T is not true.

For instance, some four-dimensionalist solutions to problems of fission, according to which the number of people there are now depends on whether fission will occur, are subject to this criticism.

But I’ve had a nagging worry about arguments like this, that in accepting (2), I am not being faithful to my eternalist four-dimensionalist convictions: why should the present aspects of the four-dimensional me have this sort of priority? Moreover, I didn’t really have an argument for (2). Until today.

Here is an argument for (2). Start with this.

  4. If facts about my present existence depend on future events, then facts about my present existence depend on future events that happen to me.

For instance, suppose that whether I exist now depends on whether some surgeon cuts my brain in half tomorrow. Well, then, some of the events that my present existence depends on will be events that happen entirely to someone else—for instance, whether the surgeon gets to work on time. But other events, such as the cutting or non-cutting of the brain, will happen to me. It would be absurd to think that facts about my present existence or identity depend on future events that happen entirely to something other than me.

Then add:

  5. Any events that happen to me in the future depend on my present existence.

For, such events presuppose my future existence, and my future existence is caused by my present existence.

  6. Circular dependence is impossible.

  7. So, facts about my present existence do not depend on future events.

Note that (6) is a very strong premise, and is one place the argument can get attacked. Many people think that you can have circular dependence when the dependence in the two directions is of a different sort. In the case at hand, facts about my present existence might depend constitutively on future events, while the future events depend causally on my present existence. Nonetheless, I think (6) is true, even if the dependence in the two directions is of a different sort.

Another move is to describe the future events on which my existence depends without reference to me. Don’t describe what the surgeon does as the splitting of my brain, but as the splitting of brain x. Then we could say that the future event of the surgeon’s splitting my brain does depend on my present existence, but my present existence doesn’t depend on that event. Instead, it depends on the future event of the surgeon’s splitting brain x. This objection denies (5): while the splitting of my brain depends on my present existence, the splitting of brain x does not, and yet it happens to me.

I think this is mistaken. The splitting of brain x depends on the future existence of that brain, and that brain depends on me, because parts depend on wholes—that is a deep Aristotelian premise I accept. Thus I think (5) is true. An event that happens to me is an event that involves at least a part of me, and none of my parts could exist without me. Granted, a brain like mine could exist without me. But token events are individuated in part by the things caught up in them. A splitting of a brain merely like mine would be a different event from the splitting of this particular brain. And it is a token event that my present existence is supposed to depend on.

The above argument won’t move non-Aristotelians who think that wholes depend on parts rather than parts depending on wholes. But it works for me. And hence it assuages the worry that in accepting (2), I am being unfaithful to my views about time.

All that said, I don’t really want to affirm (2) in an exceptionless way. If I am a time-traveller born in the year 2200, then my present existence does depend on what will happen in the future. But it only depends on what will happen in the external-time future not on what will happen in my internal-time future. And, crucially, I think time-travel is only possible when it doesn’t result in causal loops. So even if I am a time-traveller from the future, I cannot affect anything that is causally relevant to whether I will be born, etc. This probably means that if time-travel is possible, it is possible only in very carefully limited settings.

Thursday, October 19, 2017

Conciliationism is false or trivial

Suppose you and I are adding up a column of expenses, but for some reason our only interest is the last digit. You and I know that we are epistemic peers. We’ve both just calculated the last digit, and Carl asks: Is the last digit a one? You and I speak up at the same time. I say: “Very likely; my credence that it’s a one is 0.99.” You say: “Probably not; my credence that it’s a one is 0.27.”

Conciliationists now seem to say that I should lower my credence and you should raise yours.

But now suppose that you determine the credence for the last digit as follows: You do the addition three times, each time knowing that you have an independent 1/10 chance of error. Then you assign your credence as the result of a Bayesian calculation with equal priors over all ten options for the last digit. And since I’m your epistemic peer, I do it the same way. Moreover, while we’re poor at adding digits, we’re really good at Bayesianism—maybe we’ve just memorized a lot of Bayes’ factor related tables. So we don’t make mistakes in Bayesian calculations, but we do at addition.

Now I can reverse engineer your answer. If you say your credence in a one is 0.27, then I know that exactly one of your three calculations must have yielded a one. For if none of your calculations was a one, your credence that the digit was a one would have been very low, and if two of your calculations yielded a one, your credence would have been quite high. There are now two options: either you came up with three different answers, or you had a one and then two answers that were the same. In the latter case, it turns out that your credence in a one would have been fairly low, around 0.08, not 0.27. So it must be that your calculations yielded a one, and then two other numbers.

And you can reverse engineer my answer. The only way my credence could be as high as 0.99 is if all three of my calculations yielded a one. So now we both know that my calculations were 1, 1, 1 and yours were 1, x, y, where 1, x, y are all distinct. So now you aggregate this data, and I, as your peer, do the same. We have six calculations yielding 1, 1, 1, 1, x, y. A quick Bayesian calculation, given that the chance of error in each calculation is 1/10, yields a posterior probability of 0.997.
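A short calculation illustrates the aggregation. It stipulates a concrete error model (each calculation is right with probability 9/10 and otherwise lands uniformly on one of the other nine digits), so the exact figures come out slightly different from those above, but the qualitative pattern is the same: pooling a low credence with a high one yields a credence higher than either.

```python
from fractions import Fraction

DIGITS = range(10)

def likelihood(obs, true_digit):
    """P(observations | true digit): each calculation is right with probability
    9/10, else lands uniformly on one of the other 9 digits (an assumption)."""
    p = Fraction(1)
    for o in obs:
        p *= Fraction(9, 10) if o == true_digit else Fraction(1, 10) * Fraction(1, 9)
    return p

def posterior_of_one(obs):
    """Posterior probability that the true last digit is 1, with uniform priors."""
    total = sum(likelihood(obs, d) for d in DIGITS)
    return likelihood(obs, 1) / total

low  = posterior_of_one([1, 2, 3])           # your three calculations: 1, x, y
high = posterior_of_one([1, 1, 1])           # my three calculations: 1, 1, 1
both = posterior_of_one([1, 1, 1, 1, 2, 3])  # all six pooled

print(float(low), float(high), float(both))
assert low < Fraction(1, 2) < high < both    # aggregation pushes the credence up
```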

So, your credence did go up. But mine went up too. Thus we can have cases where the aggregation of a high credence with a low credence results in an even higher credence.

Of course, you may say that the case is a cheat. You and I are not epistemic peers, because we don’t have the same evidence: you have the evidence of your calculations and I have the evidence of mine. But if this counts as a difference of evidence, then the standard example conciliationists give, that of different people splitting a bill in a restaurant, is also not a case of epistemic peerhood. And if the results of internal calculations count as evidence for purposes of peerhood, then there just can’t be any peers who disagree, and conciliationism is trivial.

Wednesday, October 18, 2017

From the finite to the countable

Causal finitism lets you give a metaphysical definition of the finite. Here’s something I just noticed. This yields a metaphysical definition of the countable (phrased in terms of pluralities rather than sets):

  1. The xs are countable provided that it is possible to have a total ordering on the xs such that if a is any of the xs, then there are only finitely many xs smaller (in that ordering) than a.

Here’s an intuitive argument that this definition fits with the usual mathematical one if we have an independently adequate notion of natural numbers. Let N be the natural numbers. Then if the xs are countable, for any a among the xs, define f(a) to be the number of xs smaller than a. Since all finite pluralities are numbered by the natural numbers, f(a) is a natural number. Moreover, f is one-to-one. For suppose that a ≠ b are both xs. By total ordering, either a is less than b or b is less than a. If a is less than b, there will be fewer things less than a than there are less than b, since (i) anything less than a is less than b but not conversely, and (ii) if you take something away from a finite collection, you get a smaller collection. Thus, if a is less than b, then f(a)<f(b). Conversely, if b is less than a, then f(b)<f(a). In either case, f(a)≠f(b), and so f is one-to-one. Since there is a one-to-one map from the xs to the natural numbers, there are only countably many xs.
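The injectivity step can be illustrated with a toy finite example (code cannot, of course, range over a genuinely infinite plurality; the items and ordering below are stipulated for illustration): map each element to the number of elements below it and check that the map is one-to-one.

```python
def f(a, xs, less_than):
    """Number of xs strictly smaller than a in the given ordering."""
    return sum(1 for b in xs if less_than(b, a))

# A toy plurality, totally ordered alphabetically.
xs = ["c", "a", "b", "d"]
less_than = lambda u, v: u < v

values = [f(a, xs, less_than) for a in xs]
print(values)  # → [2, 0, 1, 3]: each element gets a distinct natural number

assert len(set(values)) == len(xs)  # f is one-to-one, as the argument claims
```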

This means that if causal finitism can solve the problem of how to define the finite, we get a solution to the problem of defining the countable as a bonus.

One of the big picture things I’ve lately been thinking about is that, more generally, the concept of the finite is foundationally important and prior to mathematics. Descartes realized this, and he thought that we needed the concept of God to get the concept of the infinite in order to get the concept of the finite in turn. I am not sure we need the concept of God for this purpose.

Are there multiple models of the naturals that are "on par"?

Assuming the Peano Axioms of arithmetic are consistent, we know that there are infinitely many sets that satisfy them. Which of these infinitely many sets is the set of natural numbers?

A tempting answer is: “It doesn’t matter—any one of them will do.”

But that’s not right. For the infinitely many sets that are models of the Peano Axioms are not all isomorphic. They disagree with each other on arithmetical questions. (Famously, one of the models “claims” that the Peano Axioms are consistent and another “claims” that they are inconsistent, where we know from Goedel that consistency is equivalent to an arithmetical question.)

So it seems that with regard to the Peano Axioms, the models are all on par, and yet they disagree.

Here’s a point, however, that is known to specialists, but not widely recognized (e.g., I only recognized the point recently). When one says that some set M is a model of the Peano Axioms, one isn’t saying quite as much as the non-expert might think. Admittedly, one is saying that for every Peano Axiom A, A is true according to M (i.e., M ⊨ A). But one is not saying that according to M all the Peano Axioms are true. One must be careful with quantifiers. The statement:

  1. For every Peano Axiom A, according to M, A is true.

is different from:

  2. According to M, all the Peano Axioms are true.

The main technical reason there is such a difference is that (2) is actually nonsense, because the truth predicate in (2) is ineliminable and cannot be defined in M, while the truth predicate in (1) is eliminable; we are just saying that for any Peano Axiom A, M ⊨ A.

There is an important philosophical issue here. The Peano Axiomatization includes the Axiom Schema of Induction, which schema has infinitely many formulas as instances. Whether a given sequence of symbols is an instance of the Axiom Schema of Induction is a syntactic matter that can be defined arithmetically in terms of the Goedel encoding of the sequence. Thus, it makes sense to say that some sequence of symbols is a Peano Axiom according to a model M, i.e., that according to M, its Goedel number satisfies a certain arithmetical formula, I(x).

Now, non-standard models of the naturals—i.e., models other than our “normal” model—will contain infinite naturals. Some of these infinite naturals will intuitively correspond, via Goedel encoding, to infinite strings of symbols. In fact, given a non-standard model M of the naturals, there will be infinite strings of symbols that according to M are Peano Axioms—i.e., there will be an infinite string s of symbols whose Goedel number gs is such that, according to M, I(gs). But then we have no way to make sense of the statement “s is true according to M”, i.e., M ⊨ s. For truth-in-a-model is defined only for finite strings of symbols.

Thus, there is an intuitive difference between the standard model of the naturals and non-standard models:

  1. The standard model N is such that all the numbers that according to N satisfy I(x) correspond to formulas that are true in N.

  2. A non-standard model M is not such that all the numbers that according to M satisfy I(x) correspond to formulas that are true in M.

The reason for this difference is that the notion of “true in M” is only defined for finite formulas, where “finite” is understood according to the standard model.

I do not know how exactly to rescue the idea of many inequivalent models of arithmetic that are all on par.

Tuesday, October 17, 2017

Approximate truth and the very recent past

Suppose I say that Jim yelled in delight at 12:31. But in fact he did so at 12:32. Then I said something false but approximately true.

Now, suppose that I hear Jim giving a loud yell of delight about 300 meters away. While I am listening to that yell, I think that Jim is yelling. But in the last second of my hearing, Jim is no longer yelling, but the sound waves are still traveling to me. No big deal. My belief that Jim is yelling is false, but approximately true. Or so I want to say.
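For concreteness, the lag in question is a bit under a second. A back-of-the-envelope check (assuming roughly 343 m/s for the speed of sound in air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C (an assumed round figure)

# Time for Jim's yell to cross 300 meters.
delay = 300.0 / SPEED_OF_SOUND
print(round(delay, 2))  # about 0.87 seconds
```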

And it’s important to say something like this, for it allows us to preserve the idea that our senses give us approximate truth. The case of sound from 300 meters away is particularly strong, but the point goes through for all our sensation, as none of it travels faster than the speed of light. Now, granted, often when we become aware of a stimulus, our sensory organs are still undergoing it. But nonetheless it is strictly speaking false to say that this very part of the stimulus that we are now aware of is in fact going on. So our senses seem to lead us slightly astray. But at most very slightly. It is approximately true that this part of the stimulus is going on now, because it was in fact going on a fraction of a second earlier. Or, perhaps, it is a part of our common sense knowledge of the world that the data of the senses are only meant as an approximation to the truth, and so there is no straying at all.

Now imagine that I say that Jim actually yelled in delight at 12:31, but he was actually completely silent all day, although in a very nearby possible world he did yell in delight at 12:31. Then what I said is not approximately true. In ordinary contexts, the modal difference between the actual and the merely possible vitiates approximate truth, no matter how nearby the merely possible world is.

So now on to one of my hobby horses: presentism. If presentism is true, then the difference between what is happening now and what happened earlier is relevantly like the difference between the actual and the possible. In both cases, it is a difference between a neat and clean predication and a predication in the scope of a modal operator, pastly or possibly, respectively. If this is right, then if presentism is true, I cannot say what I said about its being approximately true that Jim is yelling if Jim has actually stopped. That difference is a very deep modal difference. That the time when Jim is yelling is in a nearby past no more suffices for the approximate truth of “Jim is yelling now” than that Jim is yelling in a nearby possible world is enough for the approximate truth of “Jim is actually yelling”. The ontological gulf between the actual and the possible is vast; so would be the ontological gulf between the present and the past if presentism were true.

Thus, the presentist cannot say that the senses tend to deliver approximate truth.

Objection: We know to correct the data of the senses for the delay.

Response: We know. But that's a recent development.

Hope vs. despair

A well-known problem, noticed by Meirav, is that it is difficult to distinguish hope from despair. Both the hoper and the despairer are unsure about an outcome and they both have a positive attitude towards it. So what's the difference? Meirav has a story involving a special factor, but I want to try something else.

If I predict an outcome, and the outcome happens, there is the pleasure of correct prediction. When I despair and predict a negative outcome, that pleasure takes the distinctive more intense "I told you so" form of vindicated despair. And if the good outcome happens, despite my despair, then I should be glad about the outcome, but there is a perverse kind of sadness at the frustration of the despair.

The opposite happens when I hope. When the better outcome happens, then even though I may not have predicted the better outcome, and hence I may not have the pleasure of correct prediction, I do have the pleasure of hope's vindication. And when the bad outcome happens, I forego the small comfort of the vindication of despair.

The pleasures of correct prediction and the pains of incorrect prediction are doxastic in nature: they are pleasures and pains of right and wrong opinion. Hope and despair can, of course, exist without prediction. But when I hope for a good outcome, I dispose myself for pleasures and pains of this doxastic sort much as if I were predicting the good outcome. When I despair of the good outcome, I dispose myself for these pleasures and pains much as if I were predicting the bad outcome.

We can think of hoping and despairing as moves in a game. If you hope for p, then you win if and only if p is true. If you despair of p, then you win if and only if p is false. In this game of hoping and despairing, you are respectively banking on the good and the bad outcomes.

But this banking is restricted. It is in general false that when I hope for a good outcome, I act as if it were to come true. I can hope for the best while preparing for the worst. But nonetheless, by hoping I align myself with the best.

This gives us an interesting emotional utility story about hope and despair. When I hope for a good outcome, I stack a second good outcome—a victory in the hope and despair game, and the pleasure of that victory—on top of the hoped-for good outcome, and I stack a second bad outcome—a sad loss in the game—on top of the hoped-against bad outcome. And when I despair of the good outcome, I moderate my goods and bads: when the bad outcome happens, the badness is moderated by the joy of victory in the game, but when the good outcome happens, the goodness is tempered by the pain of loss. Despair, thus, functions very much like an insurance policy, spreading some utility from worlds where things go well into worlds where things go badly.

If the four goods and bads that the hope/despair game super-adds (goods: vindicated hope and vindicated despair; bads: frustrated hope and needless despair) are equal in magnitude, and if we have additive expected utilities with expected utility maximization, then as far as this super-addition goes, you are better off hoping when the probability of the good outcome is greater than 1/2 and better off despairing when the probability of the bad outcome is greater than 1/2. And I suspect (without doing the calculations) that realistic risk-averseness will shift the rationality cut-off higher up, so that with credences in the good outcome slightly above 1/2, despair will still be reasonable. Hope, on the other hand, intensifies risks: the person who hoped and whose hope was in vain is worse off than the person who despaired and was right. A particularly risk-averse person, by the above considerations, may have reason to despair even when the probability of the good outcome is fairly high. These considerations might give us a nice evolutionary explanation of why we developed the mechanisms of hope and despair as part of our emotional repertoire.
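The 1/2 cutoff can be checked with a toy expected-utility model. All the numbers below are stipulations: G and B are the utilities of the good and bad outcomes, and a single stake of fixed magnitude is won or lost in the hope/despair game.

```python
def eu_hope(p, G=10.0, B=0.0, stake=1.0):
    """Expected utility of hoping: you win the game iff the good outcome occurs."""
    return p * (G + stake) + (1 - p) * (B - stake)

def eu_despair(p, G=10.0, B=0.0, stake=1.0):
    """Expected utility of despairing: you win the game iff the bad outcome occurs."""
    return p * (G - stake) + (1 - p) * (B + stake)

for p in (0.3, 0.5, 0.7):
    print(p, eu_hope(p), eu_despair(p))

assert eu_hope(0.7) > eu_despair(0.7)   # hope wins above 1/2
assert eu_hope(0.3) < eu_despair(0.3)   # despair wins below 1/2
assert eu_hope(0.5) == eu_despair(0.5)  # indifference exactly at 1/2
```

Modeling risk aversion would mean applying a concave transformation to these utilities, which, as suggested above, plausibly shifts the point at which hope becomes rational above 1/2.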

However, these considerations are crude. For there can be something qualitatively bad about despair: it makes one not be as single-minded. It aligns one's will with the bad outcome in such a way that one rejoices in it, and one is saddened by the good outcome. To engage in despair on the above utility grounds is like taking out life-insurance on someone one loves in order to be comforted should the person die, rather than for the normal reasons of fiscal prudence.

This suggests a reason why the New Testament calls Christians to hope. Hope in Christ is part and parcel of a single-minded betting of everything on Christ, rather than the hedging of despair or holding back from wagering in neither hoping nor despairing. We should not take out insurance policies against Christianity's truth. But when the hope is vindicated, the fact that we hoped will intensify the joy.

I am making no claim that the above is all there is to hope and despair.

Friday, October 13, 2017

An excessively simple theory of pain

A physicalist research program is to identify physical state types that underlie mental state types.

Here is an overly naive physicalist-friendly theory of pain:

  1. Pain is what occurs in the triggering of a damage-detector state that is linked to aversive behavior.

This theory is simple and elegant. Arguably, all actual instances of pain fit the theory. So if there is an extensional problem with (1), it is that it classifies as pains states that aren’t pains. Plants feel pain on (1), and a program that monitors the health of a hard drive and relocates data away from damaged areas feels pain.

For the above reasons, I assume nobody will find (1) plausible.

The physicalist research program needs to be based on fitting physical stories to the data about mental states. This data has to be data about where a mental state type occurs and where a mental state type does not occur. For as (1) shows, it is too easy to find physical stories that simply fit data about where mental states do occur. In fact, if our only constraint is catching all cases of pain, we can do even better than (1) by giving this story:

  2. To be in pain is to be physical.

We have a nice source of data about where pain does occur: our own experience, the reports of other persons, and the behavior of animals similar to us. But do we have data about where pain does not occur?

We could say this: I now know that I am not in pain. So (2) is refuted directly: I am a physical being, but I am not in pain. Slightly more subtly, I can refute (1) as follows. No doubt as I am writing this, some of my cells are being damaged by some factors in the environment, and my body is doing something aversive about it. But I am not in pain.

But I think this argument against (1) is not as evidentially strong as it seems. The leading physicalist theory is functionalism. If functionalism is true, then pain is some sort of a functional state. I exhibit this functional state in the brain. But my body could mutate so that my stomach would host that sort of functional state, without any connection between that state and my brain. When my stomach would host the state, then on the functionalist theory I would be in pain. But it would be a pain that I am incapable of reporting, because the state would not be connected to the speech centers in the brain. This would be a case where either there are two conscious things—I and my stomach—or a case where my consciousness is divided into a brain-based and a stomach-based consciousness, and only the brain-based consciousness is able to drive action. (Similar phenomena seem to happen with split-brain patients.)

Likewise, then, if some of my individual cells were currently being damaged, and my body detected that damage and engaged in something aversive, then it shouldn’t be expected that I would report pain, even if (1) were true. Rather, if (1) were true, then in a scenario like this, either there would be two conscious things, one located in and around the particular cells and the other in the brain, or else I would have a divided consciousness. In neither case would the absence of pain to the brain-based consciousness be a refutation of (1).

So it seems I cannot refute (1) by observing my lack of pain, because the pains predicted by (1) could be occurring in a different conscious thing found in my body, or in a consciousness divided from the one that is driving my paradigmatically human activity.

I think the best way to refute (1) is to rely on intuitions like that plants aren’t conscious. But if naturalism were true, I wonder if there would be any reason to think such intuitions are truth-conducive.

Thursday, October 12, 2017

Consciousness in transitions

We can think of a digital computer processor as doing two things: Transitioning between states and remaining in a constant state between the transitions. How long the processor remains in a constant state depends on the clock rate: after the processor has done a flurry of computation (“combinatorial logic”) in a clock cycle, it will stay in a constant state until it’s time for the next flurry. If the clock rate is low, it will be able to stay in that constant state for a significant portion of the time, which is great, because presumably then the processor will be cooling off.

Suppose the computer is conscious by virtue of computation (as opposed to, say, being conscious by virtue of the functioning of a soul that God creates for it). When is it conscious? Is it during the transitions between the states or while remaining in a state? Intuitively, it should be during the transitions. After all, while it was remaining in a state, we could suddenly lower the computer’s temperature to near absolute zero. That wouldn’t disturb the computer’s remaining-in-a-state.

(Granted, it would disturb the computer’s clock. But the clock seems something extrinsic to the conscious system. One could in principle run a processor—very slowly—on a clock signal produced by a human being tapping a telegraph key, and surely that wouldn’t make the human’s hand a part of the conscious system.)

But the frozen state is functionally very much like the processor’s regular holding of a state when it waits for the next clock pulse. So just as it is implausible to think that a physical system like a computer that is frozen near absolute zero would continue to be conscious, it is implausible to think that the computer would be conscious while simply holding a state.

Thus, if a digital computer is conscious by virtue of computation, that consciousness occurs in and through transitions between states.

So what? I don’t know. I’m just trying to figure out what the best functionalist view would be like.

And note the contrast between this picture of consciousness-in-transitions and classical theism, according to which consciousness occurs in a timeless state.

A materialist intuition against materialism

The following argument is valid:

  1. It is metaphysically impossible for us to become wholly immaterial.

  2. If we are wholly material, then functionalism is true.

  3. If functionalism is true, then it is metaphysically possible for us to become wholly immaterial.

  4. So, we are not wholly material.

I think premise 1 is false, but intuitively 1 is pretty plausible—especially to a materialist.

Premise 2 is made plausible by the way functionalism solves serious problems in other materialist theories.

Premise 3 can be argued for: it is metaphysically possible for an immaterial being to have the same functional properties as I do, and furthermore for the immaterial being’s isomorphic functional states to be caused by my functional states at the last moment of my body’s existence in such a way that the immaterial being is a continuant of me given functionalism.

Wednesday, October 11, 2017

MIDI fruit piano

My daughters and I saw a Makey-Makey banana piano at a local fair, and they thought it was cool. So I made an Arduino(clone) fruit piano, using capacitive sensing, and a Python program on a computer that plays polyphonic music. It's super-simple, as it uses the ADCTouch library which doesn't need any electronic components besides the Arduino(clone), and it's better than the banana piano as it doesn't require the user to be grounded.

While tweaking the project, I learned that MIDI format is really simple, so now the fruit-piano sends notes to the computer via MIDI-over-serial-over-USB, and so one can presumably use the fruit-piano as a keyboard for various kinds of desktop music software.
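As a taste of how simple MIDI is: a note-on message is just three bytes, a status byte (0x90 plus the channel number) followed by a 7-bit note number and a 7-bit velocity. A minimal sketch (not the project's actual code) of building such messages:

```python
def note_on(note, velocity=127, channel=0):
    """Build a 3-byte MIDI note-on message: status, note number, velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """Note-off uses status byte 0x80; velocity is conventionally 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

print(note_on(60).hex())  # middle C (note 60): '903c7f'
```

On the computer side, writing these bytes to the serial port is all it takes for desktop music software that understands MIDI-over-serial to see the fruit-piano as a keyboard.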

Instructions are here. Code is here.

If you look at the picture carefully, you'll see that I cheated. We didn't have the eight oranges for the C major scale that my eldest daughter thought we should have, so two of the keys are soda cans.

Tuesday, October 10, 2017

Infinity book progress

I've just sent off the final contracted-for manuscript of Infinity, Causation and Paradox.

Attempts at wrongdoing

It is a common intuition, especially among Christians, that attempts at immoral actions—say, attempted murder or attempted adultery—are just as bad as the completion of the actions.

But in practice the situation is rather more complicated. Suppose Samantha is about to murder Fred. She is sitting on the rooftop with her rifle, has measured the wind speed, has made the corrections to her sights, is putting Fred in her cross-hairs and is getting ready to squeeze the trigger at an opportune moment. Then suddenly a police officer comes up and grabs Samantha’s rifle before she can do anything.

Samantha has performed actions whose end was Fred’s death. She is an attempted murderer. But I think there is an immoral act that she has been saved from. For imagine three versions of how the story could end:

  1. The police officer comes up and grabs her rifle at time t1 before she squeezes the trigger.

  2. At time t1, Samantha decides not to squeeze the trigger and not to commit the murder.

  3. At time t1, Samantha decides to squeeze the trigger.

In all three cases, by time t1, Samantha is already an attempted murderer. But in version 2, Samantha has done at least one fewer bad thing than in version 3. As of t1, Samantha still has a decision to make: to go through with the action or not. In case 3, she decides that wrongly. In case 2, she decides that rightly.

In case 1, the police officer prevents her from making that decision. It seems clear that Samantha’s moral state in case 1 is less bad than in case 3. For in case 3, Samantha makes a morally wrong decision that has no parallel in case 1. So the police officer has not only saved Fred’s life, but he has decreased the number of wrongs done by Samantha.

Of course, timing and details matter here. Suppose that the police officer grabs Samantha’s rifle at a moment when the bullet is already traveling through the barrel, making the shot go wide. Then Samantha is an attempted murderer, but the amount of wickedness on her conscience is the same as in case 3.

So there is a moral distinction to be made between Samantha in cases 1 and 3, but the distinction isn’t the distinction between attempt and success. Rather, the issue is that a typical wrong action involves multiple acts of will, many of which may well come with the possibility of stopping. Each time one does not will to stop, while being capable of willing to stop, one does another wrong. If one is prevented from completion of the act after the last of these acts of will, then one is not better off in terms of one’s moral guilt state. (Though one is better off in terms of how much restitution one owes and similar considerations.) But if one is stopped earlier, then one is better off.

This means that counting counts of sin is tricky. Suppose Fred had decided on committing adultery with Samantha’s sister Patricia. He texted Patricia offering to meet with her in a hotel room. He is already an attempted adulterer. But then he makes a number of decisions each of which could be a stopping point. He decides to get in his car. To drive to the hotel. To enter the room. Etc. At each of these points, Fred could have stopped, I assume. But at each point he chose adultery instead. So by the time he is in the room, he has committed adultery in his will many times.

But when we count wrongs, we don’t count like that. We count the number of murders, the number of adulteries or the number of thefts—not the number of times that one could have stopped along the way. We act as if the person who murdered five is worse than the person who murdered one, even if the person who murdered the one had to drive ten times as far.

Maybe the reason we count as we do is just a pragmatic matter. We don’t know just how many times one’s will is capable of stopping one, and how much a person just acts on auto-pilot, having set a course of action.

Or maybe the responsibility for the choose-not-to-stop decisions is much lower than for the initial decision?

I don’t know.

Monday, October 9, 2017

Preventing someone from murdering Hitler

You are a secret opponent of the Nazi regime, and you happen to see Schmidt sneaking up on Hitler with an axe and murderous intent. You know what’s happening: Schmidt believes that Hitler has been committing adultery with Mrs. Schmidt, and is going to murder Hitler. Should you warn Hitler’s guards?

  1. Intuition: No! If Hitler stays alive, millions will die.

  2. Objection: You would be intending Schmidt to kill Hitler, a killing that you know would be a murder, and you are morally speaking an accomplice. And it is wrong to intend an evil to prevent more evil.

There is a subtlety here. Perhaps you think: “It is permissible to kill an evil tyrant like Hitler, and so Schmidt is doing the right thing, but for the wrong reasons. So by not warning the guards, I am not intending Schmidt to commit a murder, but only a killing that is objectively morally right, though I foresee that Schmidt will commit it for the wrong reasons.” I think this reasoning is flawed—I don’t think one can say that Schmidt is doing anything morally permissible, even if the same physical actions would be morally permissible if they had another motive. But if you’re impressed by the reasoning, tweak the case a little. All this is happening before Hitler has done any of the evil tyrannical deeds that would justify killing him. However, you foresee with certainty that if Hitler is not stopped, he will do them. So Schmidt’s killing would be wrong, even if Schmidt were doing it to prevent millions of deaths.

What’s behind (2) is the thought that Double Effect forbids you to intend an evil, even if it’s for the purpose of preventing a greater evil.

But here is the fascinating thing. Double Effect forbids you from warning the guards. The action of warning the guards is an action that has two effects: (i) prevention of a murder, and (ii) the foreseen deaths of millions. Double Effect has a proportionality condition: it is only permissible to do an action with a good and a bad effect when the bad effect is proportionate to the good effect. But millions of deaths are not proportionate to the prevention of one murder. So Double Effect forbids you from warning the guards.

Now it seems that we have a conflict between Double Effect and Double Effect. On the one hand, Double Effect seems to say that you may not warn the guards, because doing so will cause millions of deaths. On the other hand, it seems to say that you may not refrain from warning the guards in order to save millions because in so doing you are intending Schmidt to kill Hitler.

I know of three ways out of this conflict.

Resolution 1: Double Effect applies only to commissions and not omissions. It is permissible to omit warning the guards in order that Schmidt may have a free hand to kill Hitler, even though it would not be permissible to help Schmidt by any positive act. One may intend the killing of Hitler in the context of one’s omission but not in the context of one’s commission.

Resolution 2: This is a case of Triple Effect or, equivalently, of a defeater-defeater. You have some reason not to warn the guards. Maybe it’s just the general moral reason that you have not to invoke the stern apparatus of Nazi law, or the very minor reason not to bother straining one’s voice. There is a defeater for that reason, namely that warning the guards will prevent a murder. And there is a defeater-defeater: preventing that murder will lead to the deaths of millions. Thus, the defeater to your initial relatively minor moral reason not to warn the guards—viz., that if you don’t, a murder will be committed—is itself defeated, and so you can just go with the initial moral reason. On this story, the initial Objection to the Intuition is wrong-headed, because it is not your intention to save millions—that is just a defeater to a defeater.

Resolution 3: Your intention is simply to refrain from acting in ways that have a disproportionately bad effect. We should simply not perform such actions. You aren’t refraining as a means to the prevention of the disproportionately bad effect, as the initial Objection claimed. Rather, you are refraining as a means to keep yourself from contributing to a disproportionately bad effect, namely to keep yourself from defending the life of the man who will kill millions.


While Resolution 1 is in some ways attractive, it requires an explanation why intentions for evils are permissible in the context of omissions but not of commissions.

I used to really like something like Resolution 2. But now it seems forced to me, because it claims that your primary intention in the omission can be something so very minor—perhaps as minor as not straining one’s voice in some versions of the story. That just doesn’t seem psychologically realistic, and it seems to trivialize the goods and evils involved if one is focused on something minor. I still think the Triple Effect reasoning has much to be said for it, but only in those cases where there is a significant good at stake in the initial intention.

I find myself now pulled to Resolution 3. The worry is that Resolution 3 pulls one towards the consequentialist justification of the initial intuition. But I think Resolution 3 is distinguishable from consequentialism, both logically and psychologically. Logically: the intention is not to contribute to an overwhelmingly bad outcome. Psychologically: one can refrain from warning the guards even if one wouldn’t raise a finger to help Schmidt. Resolution 3 suggests that there is an asymmetry between commission and omission, but it locates that asymmetry more plausibly than Resolution 1 did. Resolution 1 claimed that it was permissible to intend evils in the context of omissions. That is implausible for the same reason why it is impermissible to intend evils in the context of commissions: the will of someone who intends evil is a corrupt will. But Resolution 3 is an intuitively plausible non-consequentialist principle about avoiding being a contributor to evil.

In fact, if one so wishes, one can use Resolution 3 to fix the problem with Resolution 2. The initial intention becomes: Don’t be a contributor to evil. Defeater: If you don’t warn, a murder will happen. Defeater-defeater: But millions will die. Now the initial intention is very much non-trivial.

Friday, October 6, 2017

Practice-internal goods

I’m hereby instituting a game: the breathing game. My score in the game ranges from 0 to 10. I get 0 points if I hold my breath for a minute. Otherwise, my score equals the number of breaths I took during the minute, up to ten (if I took more than ten breaths, my score is still ten).
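The scoring rule above can be stated in a line; this trivial sketch (the function name is mine, not part of the game's "official rules") just makes the cap explicit:

```python
def breathing_score(breaths):
    """Score for one minute of the breathing game:
    0 breaths (holding your breath the whole minute) scores 0;
    otherwise the score is the number of breaths, capped at ten."""
    return min(breaths, 10)
```

Note that the zero-breath case needs no special handling: min(0, 10) is already 0.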

It is good to do well at games. And I am really good at the breathing game, as are all other healthy people. For every game, there is a practice-internal good of victory. Thus, by choosing to play the breathing game, my life is enriched by a new practice-internal good, the good of winning the breathing game over and over. And of course there is an immense practice-external good at stake: this is a game where victory is life, as the Jem’Hadar say.

There is something absurd about the idea that I have significantly enhanced my life simply by deciding to be a player of the breathing game and thus attaining victory about 1440 times a day.

It is widely thought that there can be significant practice-internal goods in practices we institute. The breathing game’s practice-internal good of victory is not significant. Why not? Maybe because the game hasn’t caught on: I am the only one playing it. (Everybody else is just breathing.) But if the good of victory would become significant were the game to catch on, then we have good consequentialist reason to promote the breathing game as widely as we can, so that as many people as possible could get a significant good 1440 times a day, thereby brightening up many drab lives. But that’s silly. It’s not that easy to improve the lot of humankind.

An intuitive thing to say about the breathing game is that it’s not very challenging. Healthy people can win without even trying. It’s a lot harder to get a score of 0 than a score of 10. The lack of challenge certainly makes the game less fun. But fun is a practice-external good. Does the lack of challenge make the practice-internal good less?

Maybe, but I am dubious. Challenge really seems rather external, while practice-internal goods are supposed to be instituted. Maybe, though, the story is this. When I institute a game, I am filling out a template provided by a broader social practice, the practice of playing games, and the broader social practice includes a rule that says that unchallenging victory is not worth much. I can’t override that rule while still counting as instituting a game.

That may be. But if the larger social practice, the one of games in general, is itself one that we have instituted, then we have good moral reason to institute another social practice, a practice of shgames. The practice of shgames is just like the practice of games, except that the practice-internal good of victory is stipulated as being a great good even when victory comes easily. We have very good reason to institute the practice of shgames, as this would allow everybody to play the breathing shgame (which has the same rules as the breathing game), and thus enrich their lives by 1440 valuable victories a day.

That’s absurd, too.

Here’s where the line of thought is leading me: We have significant limits on our normative power to set the value of the practice-internal goods of the practices we create. In particular, the practice-internal goods that are entirely our creation are only of little value—like the value of victory in a game.

One might think that this is just an artifact of games and similar practices, which are not very significant practices. Perhaps in our political practices, we can institute great practice-internal goods. I don’t think so. The state can bestow a title on everyone who scores a perfect ten in the breathing game, but the state cannot by mere stipulation make that title have great value of a practice-internal sort. Otherwise, it’s too easy to create value. (One may think that the issues here are related to why the state can’t just print more money to create wealth. But I think this is quite different: the reason the state can’t just print more money to create wealth is that wealth is defined partly in terms of practice-external goods, and mere printing doesn’t affect those. But purely internal goods can be broadcast widely.)

I am not claiming that there are no great practice-internal goods. There are great internal goods in marriage, for instance. But here is my hypothesis: wherever there are great practice-internal goods, these goods derive their value from a practice we did not institute. For instance, if there are great internal goods in marriage, that is because either we did not institute marriage or because marriage is itself the filling out of a template provided by a broader practice that we did not institute (I think the former is the case).

If we could institute practices with great practice-internal value, we should, just for the sake of the practice-internal value. But that is wrong-headed. In fact, I think that when we institute practices, it is for the sake of goods that we are not instituting. We get the practice-internal goods, then, but they are just icing on the cake, and not a good icing even.

Perhaps I am misunderstanding practice-internal goods, though. Maybe they have the following property: they provide reasons to pursue them for those who participate in the practice, but they do not provide any reasons for those who do not participate in the practice. On this picture, one could have a great practice-internal good, one that provides very significant reasons, but it would provide no reason at all to a non-participant, and hence it would provide no reason to institute the practice. This seems wrongheaded. Only real goods provide real reasons. If practice-internal goods were to provide real reasons to the participants, they would have to be real goods. But if something—say, the institution of a practice—would result in the existence of real goods for people, that does provide a reason to bring about the something. That’s part of what is true in consequentialism. Moreover, even if one removes the absurdity of thinking that there is reason to institute the breathing game, one does not remove the absurdity of thinking that people who play the breathing game are racking up much good.