Wednesday, August 31, 2022

A tale of two membranes

Suppose that I had a device that would cause a mild but sensible vibration in the nasal membranes of the person I pointed it at. Absent consent or a significant reason, it would be wrong to use this device on a stranger.

But the same is not true if we replace nasal membranes with the tympanic membrane: we routinely vibrate the tympanic membranes of strangers with neither consent nor significant reason, say when we ask a stranger on the street for directions.

In both cases one is inducing a physical change of arrangement of body parts in the other person without their consent. We may suppose that hedonically there is no difference: perhaps the vibration and the speech are both mildly unpleasant. The case can be tweaked so that the impact on autonomy is greater in either case (e.g., the unwilling listener may identify themselves as the sort of person who doesn’t listen to arguments) or so that it is equal.

It is tempting to say that we have a default consent to hearing others out. But default consents can be withdrawn, and we are permitted to vibrate tympanic membranes even against the express directions of their possessor. If during an argument someone says “I don’t want to hear another word!” it is not morally wrong to respond verbally nonetheless.

This implies that the need for consent does not supervene on hedonic or autonomy facts. It depends on details of the intervention that go beyond these.

The fact that in my thought experiment an apparatus is used in the nasal but not the aural case is not relevant. If one speaks through a speech-generating device, as Hawking famously did, one is no less permitted to vibrate strangers’ tympanic membranes with the speech. And it would be just as wrong to go up to strangers and blow air into their nostrils in order to vibrate their nasal membranes as to use a device.

So what is the difference?

The difference, I think, is that it is a part of the proper function of the tympanic membrane to receive speech from random strangers, whether one consents to this or not, while the nasal membranes have no such proper function. It is as if our human nature gives permission to others to speak to us, but does not give such a permission for nasal membrane vibration.

I think this is difficult to account for in anything other than natural law or divine command ethics.

Tuesday, August 30, 2022

The afterlife of humans and animals

I’ve been thinking a bit about the afterlife for non-human animals. The first thought is that there is a relevant difference between human and non-human animals in terms of flourishing. There is something deeply incomplete about the eighty or so years a human lives. The incompleteness of our earthly life is a qualitative incompleteness: it is not just that we have not had enough pieces of cake or run enough miles. Typically, whole areas of virtue are missing, and our understanding of the world is woefully incomplete, so that one of the most important things one learns is how little one knows. The story of the life is clearly unfinished, even if life has gone as well as it is reasonable to expect, and flourishing has not been achieved. Not so for non-human animals. When things have gone as well as it is reasonable to expect, the animal has lived, played and reproduced, and the story is complete.

If we think of the form of an entity as specifying the proper shape of its life, we have good reason to think that the human form specifies the proper shape of life as eternal, or at least much longer than earthly life. But there is little reason to think that the form of an animal’s life specifies the length of life as significantly longer than the typical observed life-span in its species.

If we accept the thesis which I call “Aristotelian optimism”, namely that things tend to fulfill their form or nature, we have good reason to think there is more to human life than our earthly life, but not so for non-human animals. In the case of humans, this line of argument should worry typical atheistic Aristotelian ethicists, because it would push them to reject Aristotelian optimism, which I think is central to ensuring knowledge of the forms in Aristotle’s system.

By the way, there may be an exception in the above argument for animals whose flourishing consists in relationships with humans. For such an animal’s flourishing might be incomplete if it cannot be a companion to the human over the human’s infinite life-span. So there is some reason to think that species that are domesticated for human companionship, like dogs and to a lesser extent cats and horses (where companionship is less central to flourishing), might have an afterlife.

Monday, August 29, 2022

A Thomistic argument for the possibility of an afterlife for animals

  1. Accidents are more intimately dependent on substance than substantial forms on matter.

  2. If (1) is true and God can make accidents survive without the substance, then God can make forms survive without matter.

  3. If God can make forms survive without matter, then God can ensure life after death for animals by making their forms survive and restoring their matter.

  4. God can make accidents survive without the substance.

  5. So, God can ensure life after death for animals.

The most controversial claim here is (4), but that follows from the Thomistic account of transubstantiation.

Of course, there is a great gap between the possibility of an afterlife for an animal and its actuality. And the above argument works just as well for plants and fungi.
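The argument’s propositional skeleton is just two applications of modus ponens, and its validity can be checked mechanically. Here is a minimal sketch in Lean; the proposition names are my own labels for the claims in (1)–(5):

    -- Dep     : (1) accidents are more intimately dependent on substance
    --           than substantial forms are on matter
    -- CanAcc  : (4) God can make accidents survive without the substance
    -- CanForm : God can make forms survive without matter
    -- CanLife : (5) God can ensure life after death for animals
    example (Dep CanAcc CanForm CanLife : Prop)
        (p1 : Dep)
        (p2 : Dep → CanAcc → CanForm)  -- premise (2), conjunction curried
        (p3 : CanForm → CanLife)       -- premise (3)
        (p4 : CanAcc) : CanLife :=     -- premise (4); conclusion (5)
      p3 (p2 p1 p4)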

Friday, August 26, 2022

Full conditional probabilities and the Axiom of Choice

Here’s a claim that turns out to be equivalent to the Axiom of Choice:

  1. Given any non-empty set Ω and a collection M of [0,∞]-valued finitely additive measures on the powerset of Ω such that for any non-empty E ⊆ Ω there is a μ ∈ M with 0 < μ(E) < ∞, there is a full conditional probability P on the powerset of Ω definable in terms of the measures in M in the sense that for each non-empty E there is a μ ∈ M such that 0 < μ(E) < ∞ and for all A we have P(A|E) = μ(AE)/μ(E).

The easy direction of proof is from (1) to AC. Let M be the collection of all the finitely additive probability measures on Ω that assign probability one to some singleton. Clearly M has the desired properties, so by (1) there is a full conditional probability P such that for any non-empty E ⊆ Ω there will be a μ ∈ M with P(A|E) = μ(AE)/μ(E) for all A and 0 < μ(E) < ∞. Since μ is concentrated at a single point and μ(E) > 0, the point at which μ is concentrated must be in E. Moreover, for each E the measure μ must be unique: if two members of M concentrated at distinct points w and w′ of E both represented P(⋅|E), we would have P({w}|E) = P({w′}|E) = 1, which finite additivity rules out. Let f(E) be the point at which μ is concentrated. This is a choice function on the non-empty subsets of Ω. Since Ω is an arbitrary non-empty set, we have AC.
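For intuition, here is a toy Python rendering of this direction on a finite Ω, where choice is of course trivial; the particular P below, built from a fixed ordering, is just my stand-in for the conditional probability that (1) guarantees:

    Omega = {"a", "b", "c"}
    order = {"a": 0, "b": 1, "c": 2}  # hypothetical stand-in for what (1) supplies

    def P(A, E):
        # A full conditional probability represented by point masses:
        # P(.|E) = mu(. ∩ E)/mu(E), where mu is the point mass at some w in E.
        w = min(E, key=order.get)
        return 1 if w in A else 0

    def f(E):
        # Read off the choice function: the unique w in E with P({w}|E) = 1.
        return next(w for w in E if P({w}, E) == 1)

    for E in [{"a", "b"}, {"b", "c"}, Omega]:
        assert f(E) in E
        print(sorted(E), "->", f(E))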

The other direction follows by the method of proof of Lemma 3 here.

Tuesday, August 23, 2022

Collapse and unitarity

Quantum collapse is often said to “violate unitarity”. Either I’m confused or this phrasing is misleading or both.

A bounded linear operator P on a Hilbert space H is said to be unitary iff it is surjective and preserves inner products. But as I understand it, quantum collapse is not even an operator. An operator on H is a function from H to H. But a function f, given a specific input |ψ⟩, yields a unique output f(|ψ⟩). Quantum collapse does no such thing. It is an indeterministic process. Sometimes given input 2^(−1/2)(|ψ1⟩+|ψ2⟩) (where |ψ1⟩ and |ψ2⟩ are eigenvectors corresponding to the observable we are collapsing with respect to) it gives output |ψ1⟩ and sometimes it gives output |ψ2⟩.
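The point can be made concrete in a few lines of numpy (the two-dimensional state space and the Born-rule sampling are my own illustrative choices): the very same input sometimes yields one output and sometimes the other, so collapse is not a function of the input state, whereas a unitary map is a function and preserves inner products.

    import numpy as np

    rng = np.random.default_rng(0)
    psi1 = np.array([1.0, 0.0])        # eigenvectors of the measured observable
    psi2 = np.array([0.0, 1.0])
    psi = (psi1 + psi2) / np.sqrt(2)   # the superposition 2^(-1/2)(|psi1> + |psi2>)

    def collapse(state):
        # Born rule: yield psi1 with probability |<psi1|state>|^2, else psi2.
        return psi1 if rng.random() < abs(np.vdot(psi1, state)) ** 2 else psi2

    outputs = {tuple(collapse(psi)) for _ in range(1000)}
    print(len(outputs))  # 2 (almost surely): one input, two outputs, so no function

    U = np.array([[0.0, -1.0], [1.0, 0.0]])                # a rotation, hence unitary
    print(np.vdot(U @ psi1, U @ psi), np.vdot(psi1, psi))  # equal inner products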

While, strictly speaking, a process that is not modeled by an operator is a fortiori not modeled by a unitary operator, to call that a violation of unitarity is misleading. It is better to say it’s a violation of operationality or functionality. We cannot even say what it would mean for a process not modeled by an operator to be unitary, just as we cannot say what it would mean for a frog to be unitary or a linear operator to be a vertebrate.

One might try to say what it would mean to have unitarity for a non-deterministic evolution. Suppose that |ψ⟩ would collapse to |ψ′⟩ and |ϕ⟩ would collapse to |ϕ′⟩ under some measurement. Then one could claim that unitarity would say that ⟨ϕ′|ψ′⟩ = ⟨ϕ|ψ⟩. But this assumes that there is a fact of the matter as to what |ψ⟩ and |ϕ⟩ would collapse to. Now, if |ψ⟩ in fact collapses to |ψ′⟩, it might make sense to say that |ψ⟩ would collapse to |ψ′⟩. But for unitarity we need the identity ⟨ϕ′|ψ′⟩ = ⟨ϕ|ψ⟩ for all inputs |ψ⟩ and |ϕ⟩, not just for the ones that actually occurred.

I suppose one could have a generalized Molinist thesis that there is always a fact of the matter as to what a given wavefunction would collapse to, so that we might be able to define a collapse operator. And then we could say that unitarity fails. But it would still likely be misleading to say that unitarity fails, since we would expect linearity to fail, not merely unitarity. And in any case, such a generalized Molinist thesis is quite dubious.

But I know very little about quantum mechanics, and so I may simply be confused.

Intending to lower the probability of one's success

It seems a paradigm of irrationality to intend an event E in an action A and yet take the action to lower the probability of E.

But it’s not irrational if my principle is right that intending a specification of something implies intending that of which it is a specification.

Suppose that Alice is in a bicycle race and is almost at the finish. If she just lets inertia do its job, she will inevitably win. But she carefully starts braking just short of the finish, aiming to cross the finish just a hair in front of Barbara, the cyclist behind her. She does this because she wants to make the race more exciting for the spectators, and she carefully calibrates her braking so that she wins, but not inevitably so.

Alice is aiming to win with a probability modestly short of one. This is a specification of winning, so by my principle, she is intending to win. But she is also, and in the very same action, aiming to decrease the probability of winning.

Saturday, August 20, 2022

A weird space for non-classical probability values

Consider the proper class V of formal expressions of the form xϵ^y where x is a non-negative real number that is permitted to be zero only if y = 0, y is a non-negative surreal number, and ϵ is a formal symbol to be thought of as “something very small”. (If we want to be rigorous, we let V be the class of ordered pairs (y,x).) Stipulate:

  1. x = xϵ^0 for real x

  2. xϵ^y ≤ x′ϵ^y′ iff either (a) x = 0, or (b) y > y′ and x′ > 0, or (c) y = y′ and x ≤ x′

  3. xϵ^y + x′ϵ^y′ equals (x+x′)ϵ^y if y = y′ and otherwise equals the greater of xϵ^y and x′ϵ^y′

  4. if xϵ^y ≤ x′ϵ^y′ and they’re not both zero, then (xϵ^y)/(x′ϵ^y′) = (x/x′)ϵ^(y−y′)

  5. Std xϵ^y equals x if y = 0 and equals 0 otherwise.

We can then define finitely-additive probabilities with values in V in the same way that we do so for reals, and we can then define conditional probabilities using the standard formula P(A|B) = P(AB)/P(B).
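Here is a minimal Python sketch of this arithmetic, with rationals standing in both for the real coefficients and for the surreal exponents (a simplification, since general surreals cannot be so represented), and with zero treated as the least element per (2)(a):

    from fractions import Fraction

    class V:
        # A formal expression x*eps^y, following stipulations (1)-(5) above.
        def __init__(self, x, y=0):
            x, y = Fraction(x), Fraction(y)
            if x == 0:
                y = Fraction(0)  # x may be zero only if y = 0: normalize zero
            assert x >= 0
            self.x, self.y = x, y

        def __le__(self, other):
            # rule (2): zero is least; otherwise a higher exponent means smaller
            if self.x == 0:
                return True
            if other.x == 0:
                return False
            return self.y > other.y or (self.y == other.y and self.x <= other.x)

        def __add__(self, other):
            # rule (3): add coefficients at equal exponents; else the greater wins
            if self.y == other.y:
                return V(self.x + other.x, self.y)
            return self if other <= self else other

        def __truediv__(self, other):
            # rule (4): defined when self <= other and they are not both zero
            assert self <= other and other.x > 0
            return V(self.x / other.x, self.y - other.y)

        def std(self):
            # rule (5)
            return self.x if self.y == 0 else Fraction(0)

    eps = V(1, 1)  # a positive infinitesimal
    one = V(1, 0)
    print((eps / (eps + eps)).std())     # 1/2: if P({x}) = eps, P({x}|{x,y}) = 1/2
    print((one + eps).x, (one + eps).y)  # 1 0: one + eps = one although eps > 0

The last line anticipates the point about weak versus strong regularity below.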

Say that a V-valued probability P is regular iff 0 < P(A) whenever A is non-empty.

Now here is a fun fact. Given a V-valued probability P, we can define a real-valued full conditional probability as the standard part (Std) of P. Conversely, and less trivially, any real-valued full conditional probability can be obtained this way (this follows from the fact that any linear order can be embedded in the surreals).

So far this doesn’t mark any advantage of using V instead of hyperreals as the values of our probabilities. But there is an advantage. Specifically, if our probability space Ω is acted on by a supramenable group G of symmetries (any Abelian group is supramenable)—for instance, Ω might be a circle acted on by the group of rotations—then there is a V-valued regular G-invariant probability defined for all subsets of Ω. But if we have hyperreal (or surreal, for that matter) values, then the existence of a regular probability invariant under G requires significantly stricter conditions, ones that won’t be met in the case where Ω is the circle and G is rotations.

However, the advantage comes from the fact that V allows one to have a + b = a even though b > 0, so that one can have weak regularity—the condition that 0 < P(A) whenever A is nonempty—without strong regularity—the condition that P(A) < P(B) whenever A ⊂ B. If one wants strong regularity, using V instead of the hyperreals doesn’t have the same advantage.

Intention and entailment

Suppose Alice intends to hit Bob with a stick. There are two ways that the stick could be involved in Alice’s intentions. First, Alice might not care that it is a stick she hits Bob with, but a stick happens to be ready to hand. In that case, her hitting Bob with a stick is a means to her hitting Bob.

Second, Alice might care about hitting Bob with a stick—perhaps she is punishing him for hitting a defenseless person with a stick and wants the punishment to match the crime. In that case, hitting Bob with a stick is not a means to her hitting Bob, as her hitting Bob does not figure in her intentions apart from the stick. But even in that case it seems right to say that Alice intends to hit Bob. For while it is false to say in general that

  1. if p entails q and Alice intends p then Alice intends q

(even if one adds that Alice knows about the entailment, or makes the entailment relevant in the sense of relevance logic), it seems that the following special case is true:

  2. if q is a specification of p and Alice intends q then Alice intends p.

Alice’s hitting Bob with a stick is a specification of Alice’s hitting Bob.

A similar point applies to conjunctions. If Alice intends to hit Bob with a stick and to insult him, she intends to hit Bob with a stick and she intends to insult him. But sometimes at least, hitting Bob with a stick and insulting him do not figure as independent intentions. Yet they are intended nonetheless. So we have another special case of (1):

  3. if p is a conjunct of q and Alice intends q then Alice intends p.

It is an unhappy situation that some special cases of (1) are true, but (1) is not true in general, and I do not know how to specify which special cases are true.

Thursday, August 18, 2022

Reasons and permissions

The fact that a large animal is attacking me would give me both permission and reason to kill the animal. On the basis of cases like that, one might hypothesize that permissions to ϕ come from particularly strong reasons to ϕ.

But there are cases where things are quite different. There is an inexpensive watch on a shelf beside me that I am permitted to destroy. What gives me that permission? It is that I own it. But the very thing that gives me permission, my ownership, also gives me a reason not to smash it. So sometimes the same feature of reality that makes ϕing permissible is also a reason against ϕing.

This is a bit odd. For if it were impermissible to destroy the watch, that would be a conclusive reason against the smashing. So it seems that my ownership moves me from having a conclusive reason against smashing to not having a conclusive reason against smashing. Yet it does that while at the same time being a reason not to smash. Interesting.

I suspect there may be an argument against utilitarianism somewhere in the vicinity.

Error in "Non-classical probabilities invariant under symmetries"

Yesterday, I discovered an error in the proof of “Theorem 1” of this recent paper of mine (arxiv version). The error occurs in the harder direction of Lemma 2. I do not know how to fix the error. Here’s what I know to remain of the “Theorem”. The proof that (i) implies (ii)–(v) is unaffected. The proof that (iv) implies (ii)–(v) is also unaffected, and likewise unaffected is the equivalence of (ii), (iii) and (v).

But I no longer know if any of (ii)–(v) imply (i). However, (i) is true under the stronger assumption that G is supramenable or that there exist invariant hyperreal probabilities.

The above remarks suffice for almost all the philosophical points in the paper (the philosophical point that behavior for countable sets is decisive is no longer supported in the full conditional probability case), and all the applications I mention in the paper.

I do not know if “Theorem 1” is true. This is an interesting mathematical question.

Update: The error has been fixed and Theorem 1's proof now works.

Non-uniqueness of "uniform" full conditional probabilities

Consider a fair spinner that uniformly chooses an angle between 0 and 360. Intuitively, I’ve just fully described a probabilistic situation. In classical probability theory, there is indeed a very natural model of this: Lebesgue probability measure on the unit circle. This model’s probability measure can be proved to be the unique function λ on the subsets of the unit circle that satisfies these conditions:

  1. Kolmogorov axioms with countable additivity

  2. completeness: if λ(B) is zero and A ⊆ B, then λ is defined for A

  3. rotational invariance

  4. at least one arc on the circle of length greater than zero and less than 360 has an assigned probability

  5. minimality: any other function that satisfies (1)–(4) agrees with λ on the sets where λ is defined.

In that sense “uniformly chooses” can be given a precise and unique meaning.

But we may be philosophically unhappy with λ as our probabilistic model of the spinner for one of two reasons. First, but less importantly, we may want to have meaningful probabilities for all subsets of the unit circle, while λ famously has “non-measurable sets” where it is not defined. Second, we may want to do justice to such intuitions as that it is more likely that the spinner will land exactly at 0 or 180 than that it will land exactly at 0. But λ as applied to any finite (in fact, any countable) set of positions yields zero: there is no chance of the spinner landing there. Moreover, we want to be able to update our probabilities on learning, say, that the spinner landed on 0 or 180—presumably, after learning that disjunction, we want 0 and 180 to have probability 1/2—but λ provides no guidance how to do that.

One way to solve this is to move to probabilities whose values are in some field extending the reals, say the hyperreals. Then we can assign a non-zero (but in some cases infinitesimal) probability to every subset of the circle. But this comes with two serious costs. First, we lose rotational invariance: it is easy to prove that we cannot have rotational invariance in such a context. Second, we lose uniqueness: there are many ways of assigning non-zero probabilities, and we know of no plausible set of conditions that makes the assignment unique. Both costs put in serious question whether we have captured the notion of “uniform distribution”, because uniformity sure sounds like it should involve rotational invariance and be the kind of property that should uniquely determine the probability model given some plausible assumptions like (1)–(5).

There is another approach for which one might have hope: use Popper functions, i.e., take conditional probabilities to be primitive. It follows from results of Armstrong and the supramenability of the group of rotations on the circle that there is a rotation-invariant (and, if we like, rotation and reflection invariant) finitely-additive full conditional probability on the circle, which assigns a meaningful real number to P(A|B) for any subsets A and B with B non-empty. Moreover, if Ω is the whole circle, then we can further require that P(A|Ω) = λ(A) if λ(A) is defined. And now we can compare the probability of two points and the probability of one point. For although P({x,y}|Ω) = λ({x,y}) = 0 = λ({x}) = P({x}|Ω) when x ≠ y, there is a natural sense in which {x, y} is more likely than {x} because P({x}|{x,y}) = 1/2.

Unfortunately, the conditional probability approach still doesn’t have uniqueness, and this is the point of this post. Let’s say that what we require of our conditional probability assignment P is this:

  6. standard axioms of finitely-additive full conditional probabilities

  7. (strong) rotational and reflection invariance

  8. being defined for all pairs of subsets of the circle with the second one non-empty

  9. P(A|Ω) = λ(A) for any Lebesgue-measurable A.

Unfortunately, these conditions fail to uniquely define P. In fact, they fail to uniquely define P(A|B) for countably infinite B.

Here’s why. Let E be a countably infinite subset of the circle with the following property: for any non-identity isometry ρ of the circle (combination of rotations and reflections), E ∩ ρE is finite. (One way to generate E is this. Let E0 be any singleton. Given En, let Gn be the set of isometries ρ such that ρx = y for some x, y in En. Then Gn is finite. Let z be any point not in {ρx : ρ ∈ Gn, x ∈ En}. Let En+1 = En ∪ {z} (since z is not unique, we’re using the Axiom of Dependent Choice, but a lot of other stuff depends on stronger versions of Choice anyway). Let E be the union of the En. Then it’s easy to see that E ∩ ρE is finite for any non-identity isometry ρ, as needed.)
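The generation of E is in effect a greedy algorithm, and its first few stages can be computed exactly. Here is a minimal Python sketch; representing points as rationals in [0,360) and isometries via the pairs of points they connect is my own encoding:

    from fractions import Fraction
    from itertools import count

    def build_E(steps):
        # Greedy construction of E0, E1, ..., picking the first unforbidden
        # candidate at each stage. Points are exact rationals in [0, 360); an
        # isometry taking x to y is either the rotation t -> t + (y - x) or
        # the reflection t -> (x + y) - t, all mod 360.
        E = [Fraction(0)]
        for _ in range(steps):
            forbidden = set()  # images of E_n under every isometry in G_n
            for x in E:
                for y in E:
                    for w in E:
                        forbidden.add((w + (y - x)) % 360)  # rotation image of w
                        forbidden.add(((x + y) - w) % 360)  # reflection image of w
            # Dependent choice: pick a point outside the finite forbidden set.
            z = next(Fraction(k, k + 1) for k in count(1)
                     if Fraction(k, k + 1) not in forbidden)
            E.append(z)
        return E

    print(build_E(4))  # the first five points of one such E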

Let μ be any finitely additive probability on E that assigns zero to finite subsets. Note that μ is not unique: there are many such μ. Now define a finitely additive measure ν on Ω as follows. If A is uncountable, let ν(A) = ∞. Otherwise, let ν(A) = ∑ρ μ(E ∩ ρA), where the sum is taken over all isometries ρ. The condition that E ∩ ρE is finite for non-identity ρ and that μ is zero for finite sets ensures that if A ⊆ E, then ν(A) = μ(A). It is clear that ν is isometrically invariant.

Let λ* be any invariant extension of Lebesgue measure to a finitely additive measure on all subsets of the circle. By Armstrong’s results (most relevantly Proposition 1.7), there is a full conditional probability P satisfying (6)–(8) and such that P(A|E) = μ(AE) and P(A|Ω) = λ*(A) (here we use the fact that ν(A) = ∞ whenever λ*(A) > 0, since λ*(A) > 0 only for uncountable A). Since μ wasn’t unique and E is countable, conditions (6)–(9) fail to uniquely define P(A|B) for countably infinite B.

Wednesday, August 17, 2022

Murder without an intention of harm

I used to think that every murder is an intentional killing. But this is incorrect: beheading John the Baptist was murder even if the intention was solely to put his head on a platter rather than to kill him. Cases like that once made me think something like this: murder is an intentional injury that one expects to be lethal. (Cf. Ramraj 2000.)

But now I think there can be cases of murder where there is no intent to injure at all. Suppose that amoral Alice wants to learn what an exploding aircraft looks like. To that end, she launches an anti-aircraft missile at a civilian jetliner. She has the ordinary knowledge that the explosion will kill everyone on board, but in her total amorality she no more intends this than the ordinary person intends to contribute to wearing out shoes when going for a walk. Alice has committed murder, but without any intention to kill.

In terms of the Principle of Double Effect, Alice’s wrongdoing lies in the lack of proportionality between the foreseen gravely bad effect (mass slaughter) and the foreseen trivially good effect (satisfaction of desire for unimportant knowledge), rather than in a wrongful intention, at least if we bracket questions of positive law.

It is tempting to conclude that every immoral killing is a murder. But that’s not right, either. If Bob is engaged in a defensive just war and has been legitimately ordered not to kill any of the enemy before 7 pm no matter what (so as not to alert the enemy, say), and at 6 pm he kills an enemy invader in self-defense, then he does not commit murder, but he acts wrongly in disobeying an order.

It seems that for an immoral act to be a murder it needs to be wrong because of the lethality of the harm as such, rather than due to some incidental reason, such as the lethality of the harm as contrary to a valid order.

Friday, August 12, 2022

Two kinds of norms

On a natural law theory of morality, some moral facts are nature-relative, i.e., grounded in the particular kind of nature the particular rational being has, and other moral facts are structural, a part of the very structure of rationality, and will apply to any possible rational being.

Thus, the norm of monogamy for human beings (assuming, as I think, that there is one) is surely nature-relative—it seems very likely that there could be rational animals to whom a different reproductive strategy would be natural. But what Aquinas calls the first precept of the natural law, namely that the good is to be pursued and the bad is to be avoided, is structural—it applies to any rational being.

I think that evidence for a human norm being nature-relative is that either the space of norms contains other very similar norms nearby or it’s vague which precise norm obtains. For instance, take the norm of respecting one’s parents. This norm implies that we should favor our parents over strangers in our beneficence. However, how much more should we favor our parents over strangers? If there is a precise answer to that question, then there will be other nearby norms—not human ones—that give a slightly different precise answer (requiring a greater or smaller degree of favoring). On the other hand, that the good is to be pursued does not seem to have very similar norms near it—it has an elegant simplicity that a variant norm like “The good is to be pursued except when it is an aesthetic good” does not have.

I used to think the norms involved in double-effect reasoning were structural and hence applied to all possible rational beings. I am no longer confident of this. Take the case of pushing a large person in front of a trolley without their permission in order to stop the trolley from hitting five others. We can now imagine a continuum of cases depending on how thick the clothing worn by the large person is. If the clothing is sufficiently thick, the large person has an extremely small probability of being hurt. If the clothing is ordinary clothing, the large person is nearly certain to die. In between is a continuum of probabilities of death ranging from negligibly close to zero to negligibly close to one, and a continuum of probabilities of other forms of injury. It is wrong to push the large person if the chance of survival is one in a trillion. It is not wrong to push the large person if the chance of their being at all hurt is one in a trillion. Somewhere there is a transition between impermissibility and permissibility. Either that transition is vague or it’s sharp. If it’s sharp, then there are norms very similar to the one we have. If it’s vague, then it’s vague which of many very similar norms we have.

In either case, I think this is evidence that the relevant norm here is nature-relative rather than structural. If this is right, then even if it is wrong for us to push the large person in the paradigmatic case where death is, say, 99.99% certain, there could be rational beings for whom this is not wrong.

This leads to an interesting hypothesis about God’s ethics (somewhat similar to things Mark Murphy has considered):

  1. God is only subject to structural moral norms and does not have any nature-relative moral norms.

I do not endorse (1), but I think it is a hypothesis well worth considering.

Monday, August 8, 2022

Might well

It’s occurred to me that the “might well happen that” operator makes for an interesting modality. It divides into an epistemic and a metaphysical version. In both cases, if it might well happen that p, then p is possible (in the respective sense). In both cases, there is a tempting paraphrase of the operator into a probability: on the epistemic side, one might say that it might well happen that p if and only if p has a sufficiently high epistemic probability, and on the metaphysical side, one might say that it might well happen that p if and only if p has a sufficiently high chance given the contextually relevant background. In both cases, it is not clear that the probabilistic paraphrase is correct—there may be (might well be!) cases of “might well happen that” where numerical probabilities have no place. And in both cases, “might well happen that” seems context-sensitive and vague. It might well be that thinking about this operator could lead to progress on something interesting.

Monday, August 1, 2022

Triple effect, looping trolley and felix culpa

Frances Kamm uses her principle of triple effect to resolve the loop version of the trolley problem. On the loop version, as usual, the main track branches into two tracks, track A with five people and track B with one person, and the trolley is heading for track A. But now the two tracks join via a loop, so if there were no one on either track, a trolley that went on track A would come back on track B and vice versa. If we had five people on track A and no one on track B, and we redirected the trolley to track B, it would go on track B, loop around, and fatally hit the people on track A anyway. But the one person actually on track B is big enough that if the trolley goes on track B, it will be stopped by the impact and the five people will be saved.

The problem with redirecting to track B on the loop version of the trolley problem is that it seems that a part of your intention is that the trolley should hit the person on track B, since it is that impact which stops the trolley from hitting the five people on track A. And so you are intending harm to the person on track B.

In her Intricate Ethics book, Kamm gives basically this story about redirecting the trolley in the loop case:

  • Initial Intention: Redirect trolley to track B to prevent the danger of the five people being hit from the front.

  • Initial Defeater: The five people come to be in danger of being hit from the back by the trolley.

  • Defeater to Initial Defeater: The one person on track B blocks the trolley and prevents the danger of their being hit from the back.

The important point here is that the defeater to the defeater is not intended—it is just a defeater to a defeater. Thus there is no intention to block the trolley via the one person on track B, and hence that person’s being hit is not a case of their intentionally being used as a means to saving lives.

But this defeater-defeater story is mistaken as it stands. For given the presence of the person on track B, there is no danger of the five people being hit from the back. Thus, there is no initial defeater here.

Now, if you don’t know about the one person on track B, you would have a defeater to the redirection, namely the defeater that there is danger of being hit from the back. But learning about the person on track B would not provide a defeater to that defeater—it would simply remove the defeater by showing that the danger doesn’t exist.

That the story doesn’t have a defeater-defeater structure does not mean that one is intending the one person to be hit. Kamm might still be right in thinking there is no intention to block the trolley via the one person on track B. But I am dubious of Kamm’s story now, because I am dubious that the danger of being hit from the front yields a worthy initial intention. For there is nothing particularly bad about being hit from the front. It is only the danger of being hit simpliciter that seems worth preventing.

It is interesting to me to note that even if Kamm’s story doesn’t have defeater-defeater form, the main place where I want to use her triple effect account seems to still have defeater-defeater form. That place is the felix culpa, where God allows Adam and Eve to exercise their free will, even though he knows that this would or might well (depending on details about theories of foreknowledge and middle knowledge) result in their sinning, and God’s reasoning involves the great goods of salvation history that come from Adam and Eve’s sin.

  • Initial Intention: Allow Adam and Eve to exercise their free will.

  • Initial Defeater: They will or might well sin.

  • Defeater to Initial Defeater: Great goods will come about.

Here the initial defeater is not mistaken as in the looping trolley case—the sin or its possibility is really real. Moreover, while it’s not an initially worthy intention to prevent people from being hit from the front, unless they aren’t going to be hit from behind (or some other direction) either, it is an initially worthy intention to allow Adam and Eve to exercise their free will, even if no further goods come about, because free will is intrinsically good.

Thus we can criticize Kamm’s own use of triple effect while yet preserving what I think is a really important theological application.