Monday, November 30, 2020

Independence, uniformity and infinitesimals

Suppose that a random variable X is uniformly distributed (in some intuitive sense) over some space. Then:

  1. P(X = y) = P(X = z) for any y and z in that space.

But I think something stronger should also be true:

  2. Let Y and Z be any random variables taking values in the same space as X, and suppose each variable is independent of X. Then P(X = Y) = P(X = Z).

Fixed constants are independent of X, so (1) follows from (2).

But if we have (2), and the plausible assumption:

  3. If X and Y are independent, then X and f(Y) are independent for any function f,

we cannot have infinitesimal probabilities. Here’s why. Suppose X and Y are independent random variables uniformly distributed over the interval [0, 1). Assume P(X = a) is infinitesimal for a in [0, 1). Then, so is P(X = Y).

Let f(x) = 2x for x < 1/2 and f(x) = 2x − 1 for 1/2 ≤ x. Then if X and Y are independent, so are X and f(Y). Thus:

  4. P(X = Y) = P(X = f(Y)).

Let g(x) = x/2 and let h(x) = (1 + x)/2. Then:

  5. P(Y = g(X)) = P(Y = X)

and

  6. P(Y = h(X)) = P(Y = X).

But now notice that:

  7. Y = g(X) if and only if X = f(Y) and Y < 1/2

and

  8. Y = h(X) if and only if X = f(Y) and 1/2 ≤ Y.

Thus:

  9. (Y = g(X) or Y = h(X)) if and only if X = f(Y)

and note that we cannot have both Y = g(X) and Y = h(X). Hence:

  10. P(X = Y) = P(X = f(Y)) = P(Y = g(X)) + P(Y = h(X)) = P(Y = X) + P(Y = X) = 2P(X = Y).

Therefore:

  11. P(X = Y) = 0,

which contradicts the infinitesimality of P(X = Y).

This argument works for any uniform distribution on an infinite set Ω. Just let A and B be a partition of Ω into two subsets of the same cardinality as Ω (this uses the Axiom of Choice). Let g be a bijection from Ω onto A and h a bijection from Ω onto B. Let f(x) = g⁻¹(x) for x ∈ A and f(x) = h⁻¹(x) for x ∈ B.
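
For the unit-interval case, the set-theoretic bookkeeping behind the argument can be checked mechanically. Here is a small sketch (my own illustration, using exact rational arithmetic; the sample points are arbitrary): it verifies that g and h are right inverses of the folding map f with disjoint ranges, and that the events Y = g(X) and Y = h(X) are mutually exclusive and jointly equivalent to X = f(Y).

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def f(x):
    # the two-to-one "folding" map on [0, 1)
    return 2 * x if x < HALF else 2 * x - 1

def g(x):
    # bijection from [0, 1) onto [0, 1/2)
    return x / 2

def h(x):
    # bijection from [0, 1) onto [1/2, 1)
    return (1 + x) / 2

xs = [Fraction(i, 97) for i in range(97)]  # arbitrary sample of [0, 1)

for x in xs:
    # g and h are right inverses of f, landing in disjoint halves
    assert f(g(x)) == x and g(x) < HALF
    assert f(h(x)) == x and h(x) >= HALF
    for y in xs:
        # Y = g(X) iff (X = f(Y) and Y < 1/2); Y = h(X) iff (X = f(Y) and 1/2 <= Y)
        assert (y == g(x)) == (x == f(y) and y < HALF)
        assert (y == h(x)) == (x == f(y) and HALF <= y)
        # the two events can never co-occur
        assert not (y == g(x) and y == h(x))

print("all identities verified")
```

Of course, this only checks the pointwise identities; the probabilistic steps still need the uniformity and independence assumptions.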

Note: We may wish to restrict (3) to intuitively “nice” functions, ones that don’t introduce non-measurability. The functions in the initial argument are “nice”.

Incompatible reasons for the same action

While writing an earlier post, I came across a curious phenomenon. It is, of course, quite familiar that we have incompatible reasons that we cannot act on all of: reasons of convenience often conflict with reasons of morality, say. This familiar incompatibility is due to the fact that the reasons support mutually incompatible actions. But what is really interesting is that there seem to be incompatible reasons for the same action.

The clearest cases involve probabilities. Let’s say that Alice has a grudge against Bob. Now consider an action that has a chance of bestowing an overall benefit on Bob and a chance of bestowing an overall harm on Bob. Alice can perform the action for the sake of the chance of overall harm out of some immoral motive opposed to Bob’s good, such as revenge, or she can perform the action for the sake of the chance of overall benefit out of some moral motive favoring Bob’s good. But it would make no sense to act on both kinds of reasons at once.

One might object as follows: The expected utility of the action, once both the chance of benefit and the chance of harm are taken into account, is either negative, neutral or positive. If it’s negative, only the harm-driven action makes sense; if it’s positive, only the benefit-driven action makes sense; if it’s neutral, neither makes sense. But this neglects the richness of possible rational attitudes to risk. Expected utilities are not the only rational way to make decisions. Moreover, the chances may be interval-valued in such a way that the expected utility is an interval that has both negative and positive components.

Another objection is that perhaps it is possible to act on both reasons at once. Alice could say to herself: “Either the good thing happens to Bob, which is objectively good, or the bad thing happens, or I am avenged, which is good for me.” Sometimes such disjunctive reasoning does make sense. Thus, one might play a game with a good friend and think happily: “Either I will win, which will be nice for me, or my friend will win, and that’ll be nice, too, since he’s my friend.” But the Alice case is different. The revenge reason depends on endorsing a negative attitude towards Bob, which one cannot do while seeking to benefit Bob.

Or suppose that Carl read in what he took to be holy text that God had something to say about ϕing, but Carl cannot remember if the text said that God commanded ϕing or that God forbade ϕing—it was one of the two. Carl thinks there is a 30% chance it was a prohibition and a 70% chance that it was a command. Carl can now ϕ out of a demonic hope to disobey God or he can ϕ because ϕing was likely commanded by God.

In the most compelling cases, one set of motives is wicked. I wonder if there are such cases where both sets of motives are morally upright. If there are such cases, and if they can occur for God, then we may have a serious problem for divine omnirationality, which holds that God always acts for all the unexcluded reasons that favor an action.

One way to argue that such cases cannot occur for God is by arguing that the most compelling cases are all probabilistic, and that on the right view of divine providence, God never has to engage in probabilistic reasoning. But what if we think the right view of providence involves probabilistic reasoning?

We might then try to construct a morally upright version of the Alice case, by supposing that Alice is in a position of authority over Bob, and instead of being moved by revenge, she is moved to impose a harm on Bob for the sake of justice or to impose a good on him out of benevolent mercy. But now I think the case becomes less clearly one where the reasons are incompatible. It seems that Alice can reasonably say:

  1. Either justice will be served or mercy will be served, and I am happy with both.

I don’t exactly know why it is that (1) makes rational sense but the following does not:

  2. Either vengeance on Bob will be served or kindness to Bob will be served, and I am happy with both.

But it does seem that (1) makes sense in a way in which (2) does not. Maybe the difference is this: to avenge requires setting one’s will against the other’s overall good; just punishment does not.

I conjecture that there are no morally upright cases of rationally incompatible reasons for the same action. That conjecture would provide an interesting formal constraint on rationality and morality.

Friday, November 27, 2020

An improvement on the objective tendency interpretation of probability

I am very much drawn to the objective causal tendency interpretation of chances. What makes a quantum die have chance 1/6 of giving any of its six results is that there is an equal causal tendency towards each result.

However, objective tendency interpretations have a serious problem: not every conditional chance fact is an objective tendency. After all, if P(A|B) represents an objective causal tendency of the system in state B to have state A, to avoid causal circularity, we don’t want to say that P(B|A) represents an objective causal tendency of the system in state A to have state B.

There is a solution to this: a more complex objective tendency interpretation somewhat in the spirit of David Lewis’s best-fit interpretation. Specifically:

  • the conditional chance of A on B is r if and only if Q(A|B) = r for every probability function Q such that (a) Q satisfies the axioms of probability and (b) Q(C|D) = q whenever q is the degree of tendency of the system in state D to have state C.

There are variants of this depending on the choice of formalism and axioms for Q (e.g., one can make Q be a classical countably additive probability, or a Popper function, etc.). One can presumably even extend this to handle lower and upper chances of nonmeasurable events.
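
On a finite toy space the idea is easy to illustrate. The following sketch is a hypothetical example of mine (not from the post): for a fair quantum die, the six atomic tendencies together with the probability axioms force a unique admissible Q, so every conditional chance, including ones that are not themselves causal tendencies, is pinned down.

```python
from fractions import Fraction

# Hypothetical toy model: the only objective tendencies are the six
# equal atomic tendencies of the rolled die.
tendency = {face: Fraction(1, 6) for face in range(1, 7)}

def chance(event):
    # Q(event): forced by the atomic tendencies plus finite additivity
    return sum(tendency[f] for f in event)

def conditional(a, b):
    # Q(a | b) = Q(a & b) / Q(b): not itself a causal tendency, but
    # every Q satisfying the axioms and the tendency facts agrees on it
    return chance(a & b) / chance(b)

even = {2, 4, 6}
high = {4, 5, 6}
print(conditional(even, high))  # 2/3
```

The interesting cases for the interpretation are the infinite or continuous ones, where the common value across all admissible Q does real work; the finite die just shows the mechanics.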

Scratch coding in Minecraft

Years ago, for my older kids' coding education, I made a Minecraft mod that lets you program in Python. Now I made a Scratch extension that works with that mod for block-based programming, that I am hoping to get my youngest into. Instructions and links are here.




Wednesday, November 25, 2020

Intending as a means or as an end

I used to think that it is trivial and uncontroversial that if one intends something, one intends it as an end or as a means.

Some people (e.g., Aquinas, Anscombe, O’Brien and Koons, etc.) have a broad view of intention. On such views, if something is known to inevitably and directly follow from something that one intends, then one intends that, too. This rules out sophistical Double Effect justifications, such as a Procrustes who cuts off the heads of people who are too tall to fit the bed claiming that he intends to shorten rather than kill.

But if one has a broad view of intention, then I think one cannot hold that everything intended is intended as an end or as a means. The death of Procrustes’ victim is not a means: for it does nothing to help the victim fit the bed. But it’s not an end either: it is the fit for the bed that is the end (or something else downstream of that, such as satisfaction at the fit). So on broad views of intention, one has to say that Procrustes intends death, but does not intend it either as a means or as an end.

While this is a real cost of the broad theory of intention, I think it is something that the advocates of that theory should simply embrace. They should say there are at least three ways of intending something: as a means, as an end, and as an inevitable known side-effect (or however they exactly want to formulate that).

On the other hand, if we want to keep the intuition that to intend is to intend as a means or as an end, then we need to reject broad theories of intentions. In that case, I think, we should broaden the target of the intention instead.

In any case, the lesson is that the characterization of intending as intending-as-a-means-or-as-an-end is a substantive and important question.

Reasons as construals

Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose.

In this argument, our reasons are the reasons we take ourselves to have for action. Scanlon’s argument can be put as follows (my wording, not his):

  1. I do not have a choice of which reasons I take myself to have.

  2. If I rationally do A, I do it for all the reasons that I take myself to have for doing A.

And the analogy with beliefs supports (1). However, when formulated like this, there is something like an equivocation on “reasons I take myself to have” between (1) and (2).

On its face, reasons I take myself to have are belief-like: indeed, one might even analyze “I take myself to have reason R for A” as “I believe that R supports A”. But if they are belief-like in this way, I think we can argue that (2) is false.

Beliefs come in occurrent and non-occurrent varieties. It is only the occurrent beliefs that are fit to ground or even be analogous to the reasons on the basis of which we act. Suppose I am a shady used car dealer. I have a nice-looking car. I actually tried it out and found that it really runs great. You ask me what the car is like. I am well-practiced at answering questions like that, and I don’t think about how it runs: I just say what I say about all my cars, namely that it runs great. In this case, my belief that the car runs great doesn’t inform my assertion to you. I do not even in part speak on the basis of the belief, because I haven’t bothered to even call to mind what I think about how this car runs.

So, (2) can only be true when the “take myself to have” is occurrent. For consistency, it has to be occurrent in (1). But (1) is only plausible in the non-occurrent sense of “take”. In the occurrent sense, it is not supported by the belief analogy. For we often do have a choice over which beliefs are occurrent. We have, for instance, the phenomenon of rummaging through our minds to find out what we think about something. In doing so, we are trying to make occurrent our beliefs about the matter. By rummaging through our minds, we do so. And so what beliefs are occurrent then is up to us.

This can be of moral significance. Suppose that I once figured out the moral value of some action, and now that action would be very convenient to engage in. I have a real choice: do I rummage through my mind to make occurrent my belief about the moral value of the action or not? I might choose to just do the convenient action without searching out what it is I believe about the action’s morality because I am afraid that I will realize that I believe the action to be wrong. In such a case, I am culpable for not making a belief occurrent.

While the phenomenon of mental rummaging is enough to refute (1), I think the occurrent belief model of taking myself to have a reason is itself inadequate. A better model is a construal model, a seeing-as model. It’s up to me whether I see the duck-rabbit as a duck or as a rabbit. I can switch between them at will. Similarly, I can switch between seeing an action as supported by R1 and seeing it as supported by R2. Moreover, there is typically a fact of the matter whether I am seeing the duck-rabbit as a duck or as a rabbit at any given time. And similarly, there may be a fact of the matter as to how I construed the action when I finally settled on it, though I may not know what that fact is (for instance, because I don’t know when I settled on it).

In some cases I can also switch to seeing the action as supported by both R1 and R2, unlike in the case of the duck-rabbit. But in some cases, I can only see it as supported by one of the reasons at a time. Suppose Alice is a doctor treating a patient with a disease that when untreated will kill the patient in a month. There is an experimental drug available. In 90% of the cases, the drug results in instant death. In 10% of the cases, the drug extends the remaining lifetime to a year. Alice happens to know that this patient once did something really terrible to her best friend. Alice now has two reasons to recommend the drug to the patient:

  • the drug may avenge the evil done to her friend by killing the patient, and

  • the drug may save the life of the patient thereby helping Alice fulfill her medical duties of care.

Both reasons are available for Alice to act on. Unless Alice has far above average powers of compartmentalization (in a way in which some people perhaps can manage to see the duck-rabbit as both a duck and a rabbit at once), it is impossible for Alice to act on both reasons. She can construe the recommending of the pill as revenge on an enemy or she can construe it as a last-ditch effort to give her patient a year of life, but not both. And it is very plausible that she can flip between these. (It is also likely that after the fact, she may be unsure of which reason she chose the action for.)

In fact, we can imagine Alice as deliberating between four options:

  • to recommend the drug in the hope of killing her enemy instantly

  • to recommend the drug in the hope of giving her patient a year of life

  • to recommend against the drug in order that her enemy should die in a month

  • to recommend against the drug in order that her patient have at least a month of life.

The first two options involve the same physical activity—the same words, say—and the last two options do as well. But when she considers the first two options, she construes them differently, and similarly with the last two.

Thursday, November 19, 2020

Intention doesn't transfer to inevitable consequences

Some people, maybe as part of a response to the closeness problem for Double Effect, think:

  1. Whenever I intend A while knowing that A inevitably causes B, I intend B.

This is false. Suppose I play a game late at night in order to have late night fun, knowing that late night fun will inevitably lead to my being tired in the morning. Now, if I intend something, I intend it as a means or as an end. I clearly don’t intend to be tired in the morning as a means to having had fun in the evening: there is no backwards causation. But I also don’t intend being tired in the morning as an end: the end was my late night fun, which led to being tired. So if I don’t intend it as a means or as an end, I don’t intend it at all, contrary to (1).

More precisely:

  2. I intend E as my end and know that E inevitably causes F.

  3. If I intend something, I intend it as a means or as an end.

  4. If I know that something is caused by my end, then I do not intend it as an end.

  5. If I know that something is caused by my end, then I do not intend it as a means.

  6. So, I do not intend F as an end or as a means. (2, 4, 5)

  7. So, I do not intend F. (3, 6)

  8. So, sometimes I act intending E and knowing that E inevitably causes some effect F without intending F. (2, 7)

  9. So, (1) is false.
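
The propositional core of this argument, applied to the effect F, can be checked for validity by brute force over truth assignments. Here is a sketch with my own encoding (the atom names are mine): kc says that F is known to be caused by my end, m and e that F is intended as a means or as an end, and i that F is intended.

```python
from itertools import product

def implies(p, q):
    # material conditional
    return (not p) or q

for kc, m, e, i in product([False, True], repeat=4):
    premises = (
        kc                       # F is known to be caused by my end
        and implies(i, m or e)   # intending implies intending as means or end
        and implies(kc, not e)   # known-caused-by-end: not intended as an end
        and implies(kc, not m)   # known-caused-by-end: not intended as a means
    )
    if premises:
        # conclusion: F is not intended
        assert not i

print("valid: no assignment satisfies the premises while making i true")
```

This only certifies the propositional skeleton, of course; the philosophical weight rests on the premises themselves.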

Property dualism and relativity theory

On property dualism, we are wholly made of matter but there are irreducible mental properties.

What material object fundamentally has the irreducible mental properties? There are two plausible candidates: the body and the brain. Both of them are extended objects. For concreteness, let’s say that the object is the brain (the issue I will raise will apply in either case). Because the properties are irreducible and are fundamentally had by the brain, they are not derivative from more localized properties. Rather, the whole brain has these properties. We can think (to borrow a word from Dean Zimmerman) that the brain is suffused with these fundamental properties.

Suppose now that I have an irreducible mental property A. Then the brain as a whole is suffused with A. Suppose that at a later time, I cease to have A. Then the brain is no longer suffused with A. Moreover, because it is the brain as a whole that is a subject of mental properties, it seems to follow that the brain must instantly move from being suffused as a whole with A to having no A in it at all. Now, consider two spatially separated neurons, n1 and n2. Then at one time they both participate in the A-suffusion and at a later time neither participates in the A-suffusion. There is no time at which n1 (say) participates in A-suffusion but n2 does not. For if that were to happen, then A would be had by a proper part of the brain rather than by the brain as a whole, and we’ve said that mental properties are had by the brain as a whole.

But this violates Relativity Theory. For if in one reference frame, the A-suffusion leaves n1 and n2 simultaneously, then in another reference frame it will leave n1 first and only later it will leave n2.

I think the property dualist has two moves available. First, they can say that mental properties can be had by a proper part of a brain rather than the brain as a whole. But the argument can be repeated for the proper part in place of the brain. The only stopping point here would be for the property dualist to say that mental properties can be had by a single point particle, and indeed that when mental properties leave us, at some point in time in some reference frames they are only had by very small, functionally irrelevant bits of the brain, such as a single particle. This does not seem to do justice to the brain dependence intuitions that drive dualists to property dualism over substance dualism.

The second move is to say that the brain as a whole has the irreducible mental property, but to have it as a whole is not the same as to have its parts suffused with the property. Rather, the having of the property is not something that happens to the brain qua extended, spatial or composed of physical parts. Since physical time is indivisible from space, mental time will then presumably be different from physical time, much as I think is the case on substance dualism. The result is a view on which the brain becomes a more mysterious object, an object equipped with its own timeline independent of physics. And if what led people to property dualism over substance dualism was the mysteriousness of the soul, well here the mystery has returned.

Wednesday, November 18, 2020

Substance dualism and relativity theory

Here is an interesting argument against substance dualism:

  1. Something only exists simultaneously with my body when it exists in space.

  2. My mind now exists simultaneously with my body.

  3. So, my mind now exists in space.

  4. Anything in space is material.

  5. So, my mind is material.

If this argument is right, then there is at least one important respect in which property dualism and physicalism are better off than substance dualism.

The reasoning behind (1) is Relativity Theory: the temporal sequence that bodies are in cannot be separated from space, forming an indissoluble unity with it, namely spacetime.

One way out of the argument is to deny (4). Perhaps the mind is immaterial but in space in a way derivative from the body’s being in space and the mind’s intimate connection with the body. On this view, the mind’s being in time would seem to have to be derivative from the body’s being in time. This does not seem appealing to me: the mind’s spatiality could be derivative from the spatiality of something connected with the mind, but that the mind’s temporality would be derivative from the temporality of something connected with the mind seems implausible. Temporality seems too much a fundamental feature of our minds.

However, there is a way to resolve this difficulty, by saying that the mind has two temporalities. It has a fundamental temporality of its own—what I have elsewhere called “internal time”—and it has a derivative temporality from its connection with spatiotemporal entities, including the body. When I say that my mind is fundamentally temporal, that refers to the mind’s internal time. When I say that my mind is derivatively temporal, that refers to my mind’s external time.

If this is right, then we have yet another reason for substance dualists to adopt an internal/external time distinction. If this were the only reason, then the need for the distinction would be evidence against substance dualism. But I think the distinction can do a lot of other work for us.

Love and physicalism

Every so often, I have undergraduates questioning the reduction of the mental to the physical on the basis of love. One rarely meets the idea that love would be a special kind of counterexample to physicalism in the philosophical literature. It is tempting to say that the physicalist who can handle qualia and intentionality can handle love. But perhaps not.

Maybe students just have a direct intuition that love is something that transcends the humdrum physical world?

Or maybe there is an implicit argument like this:

  1. Love has significance of degree or kind N.

  2. No arrangement of particles has significance of degree or kind N.

  3. So, love is not an arrangement of particles.

Here is a related argument that I think is worth taking seriously:

  1. Love has infinite significance.

  2. No finite arrangement of atoms has infinite significance.

  3. So, love is not a finite arrangement of particles.

  4. If physicalism is true, then love is a finite arrangement of particles.

  5. So, physicalism is not true.

One can replace “love” here with various other things, such as humanity, virtue, etc.

The incompleteness of current physics

  1. There is causation in the physical world.

  2. Causation is irreducible.

  3. Our fundamental physics does not use the concept of causation.

  4. So, our fundamental physics is incomplete as a description of the physical world.

Tuesday, November 17, 2020

Nomic functionalism

Functionalism says that of metaphysical necessity, whenever x has the same functional state as a system y with internal mental state M, then x has M as well.

What exactly counts as an internal mental state is not clear, but it excludes states like thinking about water for which plausibly semantic externalism is true and it includes conscious states like having a pain or seeing blue. I will assume that functional states are so understood that if a system x has functional state S, then a sufficiently good computer simulation of x has S as well.

A weaker view is nomic functionalism, according to which for every internal mental state M (at least of a sort that humans have) there is a functional state S and a law of nature that says that everything that has functional state S has internal mental state M.

A typical nomic functionalist admits that it is metaphysically possible to have S without M, but thinks that the laws of nature necessitate M given S.

I am a dualist. As a result, I think functionalism is false. But I still wonder about nomic functionalism, often in connection with this intuition:

  1. Computers can be conscious if and only if functionalism or nomic functionalism is true.

Here’s the quick argument: If functionalism or nomic functionalism is true, then a computer simulation of a conscious thing would be conscious, so computers can be conscious. Conversely, if both computers and humans can be conscious, then the best explanation of this possibility would be given by functionalism or nomic functionalism.

I now think that nomic functionalism is not all that plausible. The reason for this is the intuition that a computer simulation of a cause normally only produces a computer simulation of the effect rather than the effect itself. Let me try to be more rigorous, though.

First, let’s continue from (1):

  2. Dualism is true.

  3. If dualism is true, functionalism is false.

  4. Nomic functionalism is false.

  5. Therefore, neither functionalism nor nomic functionalism is true. (2–4)

  6. So, computers cannot be conscious. (1, 5)

And that’s really nice: the ethical worries about whether AI research will hurt or enslave inorganic persons disappear.

The premise I am least confident about in the above argument is (4). Nomic functionalism seems like a serious dualist option. However, I now think there is good inductive reason to doubt nomic functionalism.

  7. No known law of nature makes functional states imply non-functional states.

  8. So, no law of nature makes functional states imply non-functional states. (Inductively from 7)

  9. If functionalism is false, mental states are not functional states.

  10. So, mental states are not functional states. (2, 3, 9)

  11. So, no law of nature makes functional states imply mental states. (8 and 10)

  12. So, nomic functionalism is false. (11 and definition)

Regarding (7), if a law of nature made functional states imply non-functional states, that would mean that we have multiple realizability on the left side of the law but lacked multiple realizability on the right side. It would mean that any accurate computer simulation of a system with the given functional state would exhibit the particular non-functional state. This would be like a case where a computer simulation of water being heated were to have to result in actual water boiling.

I think the most promising potential counterexamples to (7) are thermodynamic laws that can be multiply realized. However, I think that in those cases, the implied states are typically also multiply realizable.

A variant of the above argument replaces “law” with “fundamental law”, and uses the intuition that if dualism is true, then nomic functionalism would have to have fundamental laws that relate functional states to mental states.

Monday, November 16, 2020

Closeness and Double Effect

The Principle of Double Effect (PDE) is traditionally a defense against a charge of bringing about an effect that is absolutely wrong to intentionally bring about, a defense that holds that although one foresaw the effect, one did not intend it.

One of the main difficulties for PDE is the closeness problem. Typical examples of the closeness problem are things like dropping bombs on an enemy city in order to make the civilians look dead (Bennett), blowing up the fat man in the mouth of the cave when there is no other way out (Anscombe), etc.

If we think of intentions as arrows and the wrong-to-intend act as a target, one strategy for handling closeness problems is to “broaden intentions”, so that they hit the target more easily. Thus, if you intend something “close enough” to an effect you count as intending (or something similar to intending, say accomplishing) that effect. There are interesting general theories of this (e.g., O’Brien and Koons), but I do not think any of them cover all the cases well.

Another strategy, however, is to broaden the target. This strategy keeps intention very sharp and hyperintensional, but insists that what is forbidden to intend is broader. A number of people have done that (e.g., Quinn). What I want to do in this post is to offer a way of looking at a version of this strategy.

The PDE is correlative to absolute wrongs. There aren’t that many absolute wrongs. For instance, Judaism lists only three kinds of acts as absolute wrongs, things that may not be done no matter the benefits:

  • idolatry

  • murder

  • certain sexual sins (e.g., adultery and incest).

Now, intention enters differently into the definitions of these acts. Arguably, idolatry is very much defined by intentions. The very same physical bending of one’s midriff in the very same physical circumstances (e.g., standing facing an idol) can very easily be an act of idolatry or a back exercise, precisely depending on what one is intending by this bow. Such pairs of cases can be manufactured in the case of murder, but they will involve very odd assumptions. We can imagine a surgeon or an assassin cutting someone’s chest with the same movement, but it is in fact very unlikely that the movement will be the same. In the case of idolatry, we might say that more work is being done by intention and in the case of murder more work is being done by the physical act. And sexual wrongdoing is a very complex topic, but it is likely that intention enters in yet different ways, and differently in the case of different sexual wrongs.

We can think of an absolute prohibition as having the following structure:

  1. For all x1, ..., xn, when U(x1, ..., xn), it is absolutely wrong to intentionally bring it about that I(x1, ..., xn).

Here, U(x1, ..., xn) is a contextual description which needs to obtain but need not be intended to have a wrong of the given type, and I(x1, ..., xn) is a contextual description which needs to be intended. For instance, for murder, prima facie U(x1, x2) might specify that x1 is an act whose patient is known to be a juridically innocent person x2, while I(x1, x2) will specify that, say, x1 is the killing of x2. It’s enough that the murderer should know that the victim is an innocent person—the murderer does not need to intend to kill them qua innocent. But the murderer does need to intend something like the killing.

Note that in ordinary speech, when we give absolute prohibitions we speak with scope ambiguity. Thus, we are apt to say things like “It is wrong to intentionally kill an innocent person”, without making clear whether “intentionally” applies just to “kill” or also to “innocent person”, i.e., without making it clear what is in the U part of the prohibition and what is in the I part.

Observe also that in the case of idolatry, more work is being done by I than by U, while in the case of murder the work is divided more evenly between the two parts of the structure.

So, now, here is a general strategy for handling closeness. We keep intention sharp, but we broaden (i.e., logically weaken) I by shifting some things that we might have thought are in I into U, perhaps introducing “known” or “believed” operators. For instance, in the case of murder, we might say something like this:

  2. When x1 is known to be the imposition of an arrangement x2 on the parts or aspects of an innocent person that normally and in this particular case precludes life, it is absolutely wrong to bring about x1 with the intention that it be an imposition of arrangement x2 on parts or aspects of reality.

And in the case of idolatry, perhaps we keep more in I, only moving the difference between God and the false god to the nonintentional portion of the prohibition:

  3. When x is known to be a god other than God, it is absolutely wrong to intentionally bring it about that one worships x.

And here is an important point. How we do this—how we shuffle requirements between I and U—will differ from absolute prohibition to absolute prohibition. What we are doing is not a refinement of Double Effect, but a refinement of the (hopefully small) number of absolute prohibitions in our deontological theory. We do not need to say anything general, applicable across all absolute prohibitions, about how this broadening of the intentional target is to be done.

There might even be further complexities. It could, for instance, be that we have role-specific absolute prohibitions, coming with other ways for aspects of the action to be apportioned between U and I.

Friday, November 13, 2020

Reducing Triple Effect to Double Effect

Kamm’s Principle of Triple Effect (PTE) says something like this:

  • Sometimes it is permissible to perform an act ϕ that has a good intended effect G1 and a foreseen evil effect E where E causally leads to a further good effect G2 that is not intended but is a part of one’s reasons for performing ϕ (e.g., as a defeater for the defeater provided by E).

Here is Kamm’s illustration by a case that does not have much moral significance: you throw a party in order to have a good time (G1); you foresee this will result in a mess (E); but you expect the partygoers will help you clean up (G2). You don’t throw the party in order that they help you clean up, and you don’t intend their help, but your expectation of their help is a part of your reasons for throwing the party (e.g., it defeats the mess defeater).

It looks now like PTE is essentially just the Principle of Double Effect (PDE) with a particular way of understanding the proportionality condition. Specifically, PTE is PDE with the understanding that foreseen goods that are causally downstream of foreseen evils can be legitimately used as part of the proportionality calculation.

One can, of course, have a hard-line PDE on which foreseen goods causally downstream of foreseen evils may not be used as part of the proportionality calculation. But that hard-line PDE would be mistaken.

Suppose Alice has her leg trapped under a tree, and if you do not move the tree immediately, the leg will have to be amputated. Additionally, there is a hungry grizzly near Bob and Carl, who are unable to escape and you cannot help either of them. The bear is just hungry enough to eat one of Bob and Carl. If it does so, then because of eating that one, it won’t eat the other. The bear is heading for Bob. If you move the tree to help Alice, the bear will look in your direction, and will notice Carl while doing so, and will eat Carl instead of Bob. All three people are strangers to you.

It is reasonable to say that the fact that your rescuing Alice switches whom the bear eats does not remove your good moral reason to rescue Alice. However, if we have the hard-line PDE, then we have a problem. Your rescuing Alice leads to a good effect, Alice’s leg being saved, and an evil, Carl being eaten. As far as this goes, we don’t have proportionality: we should not save a stranger’s leg at the expense of another stranger’s life. So the hard-line PDE forbids the action. But the PDE with the softer way of understanding proportionality gives the correct answer: once we take into account the fact that the bear’s eating Carl saves Bob, proportionality is restored, and you can save Alice’s leg.

At the same time, I think it is important that the good G1 that you intend not be trivial in comparison to the evil E. If instead of its being a matter of rescuing Alice’s leg, it were a matter of picking up a penny, you shouldn’t do that (for more argument in that direction, see here).

So, if I am right, the proportionality evaluation in PDE has the following features:

  • we allow unintended goods that are causally downstream of unintended evils to count for proportionality, but

  • the triviality of the intended goods when compared to the unintended evils undercuts proportionality.

In other words, while the intended goods need not be sufficient on their own to make for proportionality, and unintended downstream goods may need to be taken into account for proportionality, nonetheless the intended goods must make a significant contribution towards proportionality.
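These two conditions can be loosely regimented as follows (a rough sketch of my own, not anything Kamm endorses, with v a crude measure of moral weight, G1 the intended goods, G2 the unintended downstream goods, and E the unintended evils):

```latex
\text{Proportionality holds iff}\quad
v(G_1) + v(G_2) \ge v(E)
\quad\text{and}\quad
v(G_1) \text{ is not trivial relative to } v(E).
```

The first conjunct lets downstream goods count; the second blocks the penny case.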

Wednesday, November 11, 2020

Set theory and physics

Assume the correct physics has precise particle positions (similar questions can be asked in other contexts, but the particle position context is the one I will choose). And suppose we can specify a time t precisely, e.g., in terms of the duration elapsed from the beginning of physical reality, in some precisely defined unit system. Consider two particles, a and b, that exist at t. Let d be the distance between a and b at t in some precisely definable unit system.

Here’s a question that is rarely asked: Is d a real number?

This seems a silly question. How could it not be? What else could it be? A complex number?

Well, there are at least two other things that d could be without any significant change to the equations of physics.

First, d could be a hyperreal number. It could be that particle positions are more fine-grained than the reals.

Second, d could be what I am now calling a “missing number”. A missing number is something that can intuitively be defined by an English (or other meta-language) specification of an approximating “sequence”, but does not correspond to a real number in set theory. For instance, we could suppose for simplicity that d lies between 0 and 1 and imagine a physical measurement procedure that can determine the nth binary digit of d. Then we would have an English predicate Md(n) which is true just in case that procedure determined the nth binary digit to be 1. But it could turn out that in set theory there is no set whose members are the natural numbers n such that Md(n). For the axioms of set theory only guarantee the existence of sets defined using formulas in the language of set theory, while Md is not such a formula. The idea of such “missing numbers” is coherent, at least if our set theory is coherent.
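For a quantity d that does happen to be an ordinary, exactly representable real, the digit-reading procedure behind Md(n) can be mimicked directly; here is a toy sketch (the function name and the use of exact fractions are my own illustration). The “missing number” worry is precisely that nothing guarantees an arbitrary meta-language predicate Md is expressible this way inside set theory.

```python
# Toy sketch: reading off binary digits of an exactly representable d in [0, 1).
# This mimics the predicate M_d(n) for a d we can actually name; the post's
# point is that M_d need not correspond to any formula of set theory.
from fractions import Fraction

def nth_binary_digit(d: Fraction, n: int) -> int:
    """Return the nth binary digit of d in [0, 1), digits indexed from 1."""
    return int(d * 2**n) % 2

d = Fraction(5, 8)  # 0.101 in binary
print([nth_binary_digit(d, n) for n in range(1, 5)])  # [1, 0, 1, 0]
```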

It seems reasonable to say that d is indeed a real number, and to say similar things about any other quantities that can be similarly physically specified. But what guarantees such a match between set theory and physics? I see four options:

  1. Luck: it’s just a coincidence.

  2. Our set theory governs physics.

  3. Physics governs our set theory.

  4. There is a common governor to our set theory and physics.

Option 1 is an unhappy one. Option 4 might be a Cartesian God who freely chooses both mathematics and physics.

Option 2 is interesting. On this story, there is a Platonically true set theory, and then the laws of physics make reference to it. So it’s then a law of physics that distances (say) always correspond to real numbers in the Platonically true set theory.

Option 3 comes in at least two versions. First, one could have an Aristotelian story on which mathematics, including some version of set theory, is an abstraction from the physical world, and any predicates that we can define physically are going to be usable for defining sets. So, physics makes sets. Second, one could have a Platonic multiverse of universes of sets: there are infinitely many universes of sets, and we simply choose to work within those that match our physics. On this view, physics doesn’t make sets, but it chooses between the universes of sets.