Monday, November 30, 2020

Independence, uniformity and infinitesimals

Suppose that a random variable X is uniformly distributed (in some intuitive sense) over some space. Then:

  1. P(X = y)=P(X = z) for any y and z in that space.

But I think something stronger should also be true:

  2. Let Y and Z be any random variables taking values in the same space as X, and suppose each variable is independent of X. Then P(X = Y)=P(X = Z).

Fixed constants are independent of X, so (1) follows from (2).

But if we have (2), and the plausible assumption:

  3. If X and Y are independent, then X and f(Y) are independent for any function f,

we cannot have infinitesimal probabilities. Here’s why. Suppose X and Y are independent random variables uniformly distributed over the interval [0, 1). Assume P(X = a) is infinitesimal for a in [0, 1). Then, so is P(X = Y).

Let f(x)=2x for x < 1/2 and f(x)=2x − 1 for 1/2 ≤ x. Then if X and Y are independent, so are X and f(Y). Thus:

  4. P(X = Y)=P(X = f(Y)).

Let g(x)=x/2 and let h(x)=(1 + x)/2. Then:

  5. P(Y = g(X)) = P(Y = X)

and

  6. P(Y = h(X)) = P(Y = X).

But now notice that:

  7. Y = g(X) if and only if X = f(Y) and Y < 1/2

and

  8. Y = h(X) if and only if X = f(Y) and 1/2 ≤ Y.

Thus:

  9. (Y = g(X) or Y = h(X)) if and only if X = f(Y)

and note that we cannot have both Y = g(X) and Y = h(X). Hence:

  10. P(X = Y)=P(X = f(Y)) = P(Y = g(X)) + P(Y = h(X)) = P(Y = X)+P(Y = X)=2P(X = Y).

Therefore:

  11. P(X = Y)=0,

which contradicts the infinitesimality of P(X = Y).
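The three biconditionals linking f, g and h can be checked pointwise. Here is a quick sketch (my own illustration, not part of the argument) that tests them on dyadic rationals, where floating-point arithmetic is exact:

```python
import random

# The three functions from the argument, exactly as defined in the post.
f = lambda x: 2 * x if x < 0.5 else 2 * x - 1   # folds [0,1/2) and [1/2,1) back onto [0,1)
g = lambda x: x / 2                              # bijection from [0,1) onto [0,1/2)
h = lambda x: (1 + x) / 2                        # bijection from [0,1) onto [1/2,1)

for _ in range(10_000):
    # Dyadic rationals k/2^20, so every operation below is exact in floats.
    x = random.randrange(2**20) / 2**20
    y = random.randrange(2**20) / 2**20
    # Y = g(X) iff (X = f(Y) and Y < 1/2)
    assert (y == g(x)) == (x == f(y) and y < 0.5)
    # Y = h(X) iff (X = f(Y) and 1/2 <= Y)
    assert (y == h(x)) == (x == f(y) and y >= 0.5)
    # (Y = g(X) or Y = h(X)) iff X = f(Y), and the disjuncts never both hold
    assert ((y == g(x)) or (y == h(x))) == (x == f(y))
    assert not ((y == g(x)) and (y == h(x)))

print("all pointwise equivalences hold on 10,000 samples")
```

Of course, a pointwise check cannot touch the probabilistic step of the argument; it only confirms that the set identities behind the doubling of P(X = Y) are right.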

This argument works for any uniform distribution on an infinite set U. Just let A and B be a partition of U into two subsets of the same cardinality as U (this uses the Axiom of Choice). Let g be a bijection from U onto A and h a bijection from U onto B. Let f(x)=g⁻¹(x) for x ∈ A and f(x)=h⁻¹(x) for x ∈ B.
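For a concrete countable instance of this construction (my own example; no Axiom of Choice is needed in the countable case): take U to be the natural numbers, A the evens, and B the odds.

```python
# U = natural numbers, A = evens, B = odds.
g = lambda n: 2 * n        # bijection from U onto A
h = lambda n: 2 * n + 1    # bijection from U onto B
f = lambda n: n // 2       # acts as g-inverse on A and as h-inverse on B

for n in range(1000):
    assert f(g(n)) == n                      # f inverts g on A
    assert f(h(n)) == n                      # f inverts h on B
    assert g(n) % 2 == 0 and h(n) % 2 == 1   # g lands in A, h lands in B

print("construction verified on 0..999")
```

Note that the single formula n // 2 serves as both inverses, just as the interval case used one piecewise f.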

Note: We may wish to restrict (3) to intuitively “nice” functions, ones that don’t introduce non-measurability. The functions in the initial argument are “nice”.

Incompatible reasons for the same action

While writing an earlier post, I came across a curious phenomenon. It is, of course, quite familiar that we have incompatible reasons that we cannot act on all of: reasons of convenience often conflict with reasons of morality, say. This familiar incompatibility is due to the fact that the reasons support mutually incompatible actions. But what is really interesting is that there seem to be incompatible reasons for the same action.

The clearest cases involve probabilities. Let’s say that Alice has a grudge against Bob. Now consider an action that has a chance of bestowing an overall benefit on Bob and a chance of bestowing an overall harm on Bob. Alice can perform the action for the sake of the chance of overall harm out of some immoral motive opposed to Bob’s good, such as revenge, or she can perform the action for the sake of the chance of overall benefit out of some moral motive favoring Bob’s good. But it would make no sense to act on both kinds of reasons at once.

One might object as follows: The expected utility of the action, once both the chance of benefit and the chance of harm are taken into account, is either negative, neutral or positive. If it’s negative, only the harm-driven action makes sense; if it’s positive, only the benefit-driven action makes sense; if it’s neutral, neither makes sense. But this neglects the richness of possible rational attitudes to risk. Expected utilities are not the only rational way to make decisions. Moreover, the chances may be interval-valued in such a way that the expected utility is an interval that has both negative and positive components.
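To make the interval-valued point concrete (the numbers here are mine, purely illustrative): suppose the benefit and harm are equal in magnitude and the chance of benefit is known only to lie in an interval. Then the expected-utility interval can straddle zero.

```python
# Illustrative numbers, not from the post: benefit +10, harm -10, with the
# chance of benefit known only to lie in [0.3, 0.6].
benefit, harm = 10.0, -10.0
p_lo, p_hi = 0.3, 0.6

eu_lo = p_lo * benefit + (1 - p_lo) * harm   # worst case, about -4
eu_hi = p_hi * benefit + (1 - p_hi) * harm   # best case, about +2

# The expected-utility interval contains both negative and positive values,
# so neither the harm-driven nor the benefit-driven reading is forced.
assert eu_lo < 0 < eu_hi
print(eu_lo, eu_hi)
```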

Another objection is that perhaps it is possible to act on both reasons at once. Alice could say to herself: “Either the good thing happens to Bob, which is objectively good, or the bad thing happens, or I am avenged, which is good for me.” Sometimes such disjunctive reasoning does make sense. Thus, one might play a game with a good friend and think happily: “Either I will win, which will be nice for me, or my friend will win, and that’ll be nice, too, since he’s my friend.” But the Alice case is different. The revenge reason depends on endorsing a negative attitude towards Bob, which one cannot do while seeking to benefit Bob.

Or suppose that Carl read in what he took to be a holy text that God had something to say about ϕing, but Carl cannot remember if the text said that God commanded ϕing or that God forbade ϕing—it was one of the two. Carl thinks there is a 30% chance it was a prohibition and a 70% chance that it was a command. Carl can now ϕ out of a demonic hope to disobey God or he can ϕ because ϕing was likely commanded by God.

In the most compelling cases, one set of motives is wicked. I wonder if there are such cases where both sets of motives are morally upright. If there are such cases, and if they can occur for God, then we may have a serious problem for divine omnirationality which holds that God always acts for all the unexcluded reasons that favor an action.

One way to argue that such cases cannot occur for God is by arguing that the most compelling cases are all probabilistic, and that on the right view of divine providence, God never has to engage in probabilistic reasoning. But what if we think the right view of providence involves probabilistic reasoning?

We might then try to construct a morally upright version of the Alice case, by supposing that Alice is in a position of authority over Bob, and instead of being moved by revenge, she is moved to impose a harm on Bob for the sake of justice or to impose a good on him out of benevolent mercy. But now I think the case becomes less clearly one where the reasons are incompatible. It seems that Alice can reasonably say:

  1. Either justice will be served or mercy will be served, and I am happy with both.

I don’t exactly know why it is that (1) makes rational sense but the following does not:

  2. Either vengeance on Bob will be served or kindness to Bob will be served, and I am happy with both.

But it does seem that (1) makes sense in a way in which (2) does not. Maybe the difference is this: to avenge requires setting one’s will against the other’s overall good; just punishment does not.

I conjecture that there are no morally upright cases of rationally incompatible reasons for the same action. That conjecture would provide an interesting formal constraint on rationality and morality.

Friday, November 27, 2020

An improvement on the objective tendency interpretation of probability

I am very much drawn to the objective causal tendency interpretation of chances. What makes a quantum die have chance 1/6 of giving any of its six results is that there is an equal causal tendency towards each result.

However, objective tendency interpretations have a serious problem: not every conditional chance fact is an objective tendency. After all, if P(A|B) represents an objective causal tendency of the system in state B to have state A, to avoid causal circularity, we don’t want to say that P(B|A) represents an objective causal tendency of the system in state A to have state B.

There is a solution to this: a more complex objective tendency interpretation somewhat in the spirit of David Lewis’s best-fit interpretation. Specifically:

  • the conditional chance of A on B is r if and only if Q(A|B)=r for every probability function Q such that (a) Q satisfies the axioms of probability and (b) Q(C|D)=q whenever q is the degree of tendency of the system in state D to have state C.

There are variants of this depending on the choice of formalism and axioms for Q (e.g., one can make Q be a classical countably additive probability, or a Popper function, etc.). One can presumably even extend this to handle lower and upper chances of nonmeasurable events.
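Writing T(C, D) for the degree of tendency of the system in state D to have state C (T is just my label for the tendency facts, not notation from the post), the bulleted definition has this quantifier structure:

```latex
\mathrm{Ch}(A \mid B) = r
\iff
\forall Q \Bigl( \bigl[\, Q \text{ satisfies the probability axioms, and }
Q(C \mid D) = T(C, D) \text{ whenever } T(C, D) \text{ is defined} \,\bigr]
\Rightarrow Q(A \mid B) = r \Bigr)
```

The universal quantifier over Q is what lets conditional chances like P(B|A) be defined even where there is no tendency running from A to B.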

Scratch coding in Minecraft

Years ago, for my older kids' coding education, I made a Minecraft mod that lets you program in Python. Now I made a Scratch extension that works with that mod for block-based programming, that I am hoping to get my youngest into. Instructions and links are here.




Wednesday, November 25, 2020

Intending as a means or as an end

I used to think that it is trivial and uncontroversial that if one intends something, one intends it as an end or as a means.

Some people (e.g., Aquinas, Anscombe, O’Brien and Koons, etc.) have a broad view of intention. On such views, if something is known to inevitably and directly follow from something that one intends, then one intends that, too. This rules out sophistical Double Effect justifications, such as a Procrustes who cuts off the heads of people who are too tall to fit the bed claiming that he intends to shorten rather than kill.

But if one has a broad view of intention, then I think one cannot hold that everything intended is intended as an end or as a means. The death of Procrustes’ victim is not a means: for it does nothing to help the victim fit the bed. But it’s not an end either: it is the fit for the bed that is the end (or something else downstream of that, such as satisfaction at the fit). So on broad views of intention, one has to say that Procrustes intends death, but does not intend it either as a means or as an end.

While this is a real cost of the broad theory of intention, I think it is something that the advocates of that theory should simply embrace. They should say there are at least three ways of intending something: as a means, as an end, and as an inevitable known side-effect (or however they exactly want to formulate that).

On the other hand, if we want to keep the intuition that to intend is to intend as a means or as an end, then we need to reject broad theories of intentions. In that case, I think, we should broaden the target of the intention instead.

In any case, the lesson is that the characterization of intending as intending-as-a-means-or-as-an-end is a substantive and important question.

Reasons as construals

Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose.

In this argument, our reasons are the reasons we take ourselves to have for action. Scanlon’s argument can be put as follows (my wording, not his):

  1. I do not have a choice of which reasons I take myself to have.

  2. If I rationally do A, I do it for all the reasons for A that I take myself to have for doing A.

And the analogy with beliefs supports (1). However, when formulated like this, there is something like an equivocation on “reasons I take myself to have” between (1) and (2).

On its face reasons I take myself to have are belief-like: indeed, one might even analyze “I take myself to have reason R for A” as “I believe that R supports A”. But if they are belief-like in this way, I think we can argue that (2) is false.

Beliefs come in occurrent and non-occurrent varieties. It is only the occurrent beliefs that are fit to ground or even be analogous to the reasons on the basis of which we act. Suppose I am a shady used car dealer. I have a nice-looking car. I actually tried it out and found that it really runs great. You ask me what the car is like. I am well-practiced at answering questions like that, and I don’t think about how it runs: I just say what I say about all my cars, namely that it runs great. In this case, my belief that the car runs great doesn’t inform my assertion to you. I do not even in part speak on the basis of the belief, because I haven’t bothered to even call to mind what I think about how this car runs.

So, (2) can only be true when the “take myself to have” is occurrent. For consistency, it has to be occurrent in (1). But (1) is only plausible in the non-occurrent sense of “take”. In the occurrent sense, it is not supported by the belief analogy. For we often do have a choice over which beliefs are occurrent. We have, for instance, the phenomenon of rummaging through our minds to find out what we think about something. In doing so, we are trying to make occurrent our beliefs about the matter. By rummaging through our minds, we do so. And so what beliefs are occurrent then is up to us.

This can be of moral significance. Suppose that I once figured out the moral value of some action, and now that action would be very convenient to engage in. I have a real choice: do I rummage through my mind to make occurrent my belief about the moral value of the action or not? I might choose to just do the convenient action without searching out what it is I believe about the action’s morality because I am afraid that I will realize that I believe the action to be wrong. In such a case, I am culpable for not making a belief occurrent.

While the phenomenon of mental rummaging is enough to refute (1), I think the occurrent belief model of taking myself to have a reason is itself inadequate. A better model is a construal model, a seeing-as model. It’s up to me whether I see the duck-rabbit as a duck or as a rabbit. I can switch between them at will. Similarly, I can switch between seeing an action as supported by R1 and seeing it as supported by R2. Moreover, there is typically a fact of the matter whether I am seeing the duck-rabbit as a duck or as a rabbit at any given time. And similarly, there may be a fact of the matter as to how I construed the action when I finally settled on it, though I may not know what that fact is (for instance, because I don’t know when I settled on it).

In some cases I can also switch to seeing the action as supported by both R1 and R2, unlike in the case of the duck-rabbit. But in some cases, I can only see it as supported by one of the reasons at a time. Suppose Alice is a doctor treating a patient with a disease that when untreated will kill the patient in a month. There is an experimental drug available. In 90% of the cases, the drug results in instant death. In 10% of the cases, the drug extends the remaining lifetime to a year. Alice happens to know that this patient once did something really terrible to her best friend. Alice now has two reasons to recommend the drug to the patient:

  • the drug may avenge the evil done to her friend by killing the patient, and

  • the drug may save the life of the patient thereby helping Alice fulfill her medical duties of care.

Both reasons are available for Alice to act on. Unless Alice has far above average powers of compartmentalization (in a way in which some people perhaps can manage to see the duck-rabbit as both a duck and a rabbit at once), it is impossible for Alice to act on both reasons. She can construe the recommending of the pill as revenge on an enemy or she can construe it as a last-ditch effort to give her patient a year of life, but not both. And it is very plausible that she can flip between these. (It is also likely that after the fact, she may be unsure of which reason she chose the action for.)

In fact, we can imagine Alice as deliberating between four options:

  • to recommend the drug in the hope of killing her enemy instantly

  • to recommend the drug in the hope of giving her patient a year of life

  • to recommend against the drug in order that her enemy should die in a month

  • to recommend against the drug in order that her patient have at least a month of life.

The first two options involve the same physical activity—the same words, say—and the last two options do as well. But when she considers the first two options, she construes them differently, and similarly with the last two.

Thursday, November 19, 2020

Intention doesn't transfer to inevitable consequences

Some people, maybe as part of a response to the closeness problem for Double Effect, think:

  1. Whenever I intend A while knowing that A inevitably causes B, I intend B.

This is false. Suppose I play a game late at night in order to have late night fun, knowing that late night fun will inevitably lead to my being tired in the morning. Now, if I intend something, I intend it as a means or as an end. I clearly don’t intend to be tired in the morning as a means to having had fun in the evening: there is no backwards causation. But I also don’t intend being tired in the morning as an end: the end was my late night fun, which led to being tired. So if I don’t intend it as a means or as an end, I don’t intend it at all, contrary to (1).

More precisely:

  2. I intend E as my end and know that E inevitably causes F.

  3. If I intend something, I intend it as a means or as an end.

  4. If I know that something is caused by my end, then I do not intend it as an end.

  5. If I know that something is caused by my end, then I do not intend it as a means.

  6. So, I do not intend F as an end or as a means. (2, 4, 5)

  7. So, I do not intend F. (3, 6)

  8. So, sometimes I act intending E and knowing that E inevitably causes some effect F without intending F. (2, 7)

  9. So, (1) is false.

Property dualism and relativity theory

On property dualism, we are wholly made of matter but there are irreducible mental properties.

What material object fundamentally has the irreducible mental properties? There are two plausible candidates: the body and the brain. Both of them are extended objects. For concreteness, let’s say that the object is the brain (the issue I will raise will apply in either case). Because the properties are irreducible and are fundamentally had by the brain, they are not derivative from more localized properties. Rather, the whole brain has these properties. We can think (to borrow a word from Dean Zimmerman) that the brain is suffused with these fundamental properties.

Suppose now that I have an irreducible mental property A. Then the brain as a whole is suffused with A. Suppose that at a later time, I cease to have A. Then the brain is no longer suffused with A. Moreover, because it is the brain as a whole that is a subject of mental properties, it seems to follow that the brain must instantly move from being suffused as a whole with A to having no A in it at all. Now, consider two spatially separated neurons, n1 and n2. Then at one time they both participate in the A-suffusion and at a later time neither participates in the A-suffusion. There is no time at which n1 (say) participates in A-suffusion but n2 does not. For if that were to happen, then A would be had by a proper part of the brain then rather than by the brain as a whole, and we’ve said that mental properties are had by the brain as a whole.

But this violates Relativity Theory. For if in one reference frame, the A-suffusion leaves n1 and n2 simultaneously, then in another reference frame it will leave n1 first and only later it will leave n2.
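This is just the standard relativity of simultaneity. A small numerical sketch (my illustration, with made-up distances and units in which c = 1) shows two events simultaneous in one frame coming apart in another:

```python
import math

# Two events simultaneous in the brain's rest frame: the A-suffusion leaving
# n1 (at x = 0) and leaving n2 (at x = 0.1 light-units), both at t = 0.
c, v = 1.0, 0.5                          # second frame moves at 0.5c (illustrative)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

t, x1, x2 = 0.0, 0.0, 0.1
t1 = gamma * (t - v * x1 / c**2)         # Lorentz-transformed time of the n1 event
t2 = gamma * (t - v * x2 / c**2)         # Lorentz-transformed time of the n2 event

# In the moving frame the two departures are not simultaneous.
assert t1 != t2
print(t1, t2)
```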

I think the property dualist has two moves available. First, they can say that mental properties can be had by a proper part of a brain rather than the brain as a whole. But the argument can be repeated for the proper part in place of the brain. The only stopping point here would be for the property dualist to say that mental properties can be had by a single point particle, and indeed that when mental properties leave us, at some point in time in some reference frames they are only had by very small, functionally irrelevant bits of the brain, such as a single particle. This does not seem to do justice to the brain dependence intuitions that drive dualists to property dualism over substance dualism.

The second move is to say that the brain as a whole has the irreducible mental property, but to have it as a whole is not the same as to have its parts suffused with the property. Rather, the having of the property is not something that happens to the brain qua extended, spatial or composed of physical parts. Since physical time is indivisible from space, mental time will then presumably be different from physical time, much as I think is the case on substance dualism. The result is a view on which the brain becomes a more mysterious object, an object equipped with its own timeline independent of physics. And if what led people to property dualism over substance dualism was the mysteriousness of the soul, well here the mystery has returned.

Wednesday, November 18, 2020

Substance dualism and relativity theory

Here is an interesting argument against substance dualism:

  1. Something only exists simultaneously with my body when it exists in space.

  2. My mind now exists simultaneously with my body.

  3. So, my mind now exists in space.

  4. Anything in space is material.

  5. So, my mind is material.

If this argument is right, then there is at least one important respect in which property dualism and physicalism are better off than substance dualism.

The reasoning behind (1) is Relativity Theory: the temporal sequence that bodies are in cannot be separated from space, forming an indissoluble unity with it, namely spacetime.

One way out of the argument is to deny (4). Perhaps the mind is immaterial but in space in a way derivative from the body’s being in space and the mind’s intimate connection with the body. On this view, the mind’s being in time would seem to have to be derivative from the body’s being in time. This does not seem appealing to me: the mind’s spatiality could be derivative from the spatiality of something connected with the mind, but that the mind’s temporality would be derivative from the temporality of something connected with the mind seems implausible. Temporality seems too much a fundamental feature of our minds.

However, there is a way to resolve this difficulty, by saying that the mind has two temporalities. It has a fundamental temporality of its own—what I have elsewhere called “internal time”—and it has a derivative temporality from its connection with spatiotemporal entities, including the body. When I say that my mind is fundamentally temporal, that refers to the mind’s internal time. When we say that my mind is derivatively temporal, that refers to my mind’s external time.

If this is right, then we have yet another reason for substance dualists to adopt an internal/external time distinction. If this were the only reason, then the need for the distinction would be evidence against substance dualism. But I think the distinction can do a lot of other work for us.

Love and physicalism

Every so often, I have undergraduates questioning the reduction of the mental to the physical on the basis of love. One rarely meets the idea that love would be a special kind of counterexample to physicalism in the philosophical literature. It is tempting to say that the physicalist who can handle qualia and intentionality can handle love. But perhaps not.

Maybe students just have a direct intuition that love is something that transcends the humdrum physical world?

Or maybe there is an implicit argument like this:

  1. Love has significance of degree or kind N.

  2. No arrangement of particles has significance of degree or kind N.

  3. So, love is not an arrangement of particles.

Here is a related argument that I think is worth taking seriously:

  1. Love has infinite significance.

  2. No finite arrangement of atoms has infinite significance.

  3. So, love is not a finite arrangement of particles.

  4. If physicalism is true, then love is a finite arrangement of particles.

  5. So, physicalism is not true.

One can replace “love” here with various other things, such as humanity, virtue, etc.

The incompleteness of current physics

  1. There is causation in the physical world.

  2. Causation is irreducible.

  3. Our fundamental physics does not use the concept of causation.

  4. So, our fundamental physics is incomplete as a description of the physical world.

Tuesday, November 17, 2020

Nomic functionalism

Functionalism says that, of metaphysical necessity, whenever x has the same functional state as a system y with internal mental state M, then x has M as well.

What exactly counts as an internal mental state is not clear, but it excludes states like thinking about water for which plausibly semantic externalism is true and it includes conscious states like having a pain or seeing blue. I will assume that functional states are so understood that if a system x has functional state S, then a sufficiently good computer simulation of x has S as well.

A weaker view is nomic functionalism according to which for every internal mental state M (at least of a sort that humans have), there is a law of nature that says that everything that has functional state S has internal mental state M.

A typical nomic functionalist admits that it is metaphysically possible to have S without M, but thinks that the laws of nature necessitate M given S.

I am a dualist. As a result, I think functionalism is false. But I still wonder about nomic functionalism, often in connection with this intuition:

  1. Computers can be conscious if and only if functionalism or nomic functionalism is true.

Here’s the quick argument: If functionalism or nomic functionalism is true, then a computer simulation of a conscious thing would be conscious, so computers can be conscious. Conversely, if both computers and humans can be conscious, then the best explanation of this possibility would be given by functionalism or nomic functionalism.

I now think that nomic functionalism is not all that plausible. The reason for this is the intuition that a computer simulation of a cause normally only produces a computer simulation of the effect rather than the effect itself. Let me try to be more rigorous, though.

First, let’s continue from (1):

  2. Dualism is true.

  3. If dualism is true, functionalism is false.

  4. Nomic functionalism is false.

  5. Therefore, neither functionalism nor nomic functionalism is true. (2–4)

  6. So, computers cannot be conscious. (1, 5)

And that’s really nice: the ethical worries about whether AI research will hurt or enslave inorganic persons disappear.

The premise I am least confident about in the above argument is (4). Nomic functionalism seems like a serious dualist option. However, I now think there is good inductive reason to doubt nomic functionalism.

  7. No known law of nature makes functional states imply non-functional states.

  8. So, no law of nature makes functional states imply non-functional states. (Inductively from 7)

  9. If functionalism is false, mental states are not functional states.

  10. So, mental states are not functional states. (2, 3, 9)

  11. So, no law of nature makes functional states imply mental states. (8 and 10)

  12. So, nomic functionalism is false. (11 and definition)

Regarding (7), if a law of nature made functional states imply non-functional states, that would mean that we have multiple realizability on the left side of the law but lacked multiple realizability on the right side. It would mean that any accurate computer simulation of a system with the given functional state would exhibit the particular non-functional state. This would be like a case where a computer simulation of water being heated were to have to result in actual water boiling.

I think the most promising potential counterexamples to (7) are thermodynamic laws that can be multiply realized. However, I think that in those cases, the implied states are typically also multiply realizable.

A variant of the above argument replaces “law” with “fundamental law”, and uses the intuition that if dualism is true, then nomic functionalism would have to have fundamental laws that relate functional states to mental states.

Monday, November 16, 2020

Closeness and Double Effect

The Principle of Double Effect (PDE) is traditionally a defense against a charge of bringing about an effect that is absolutely wrong to intentionally bring about, a defense that holds that although one foresaw the effect, one did not intend it.

One of the main difficulties for PDE is the closeness problem. Typical examples of the closeness problem are things like dropping bombs on an enemy city in order to make the civilians look dead (Bennett), blowing up the fat man in the mouth of the cave when there is no other way out (Anscombe), etc.

If we think of intentions as arrows and the wrong-to-intend act as a target, one strategy for handling closeness problems is to “broaden intentions”, so that they hit the target more easily. Thus, if you intend something “close enough” to an effect you count as intending (or something similar to intending, say accomplishing) that effect. There are interesting general theories of this (e.g., O’Brien and Koons), but I do not think any of them cover all the cases well.

Another strategy, however, is to broaden the target. This strategy keeps intention very sharp and hyperintensional, but insists that what is forbidden to intend is broader. A number of people have done that (e.g., Quinn). What I want to do in this post is to offer a way of looking at a version of this strategy.

The PDE is correlative to absolute wrongs. There aren’t that many absolute wrongs. For instance, Judaism lists only three kinds of acts as absolute wrongs, things that may not be done no matter the benefits:

  • idolatry

  • murder

  • certain sexual sins (e.g., adultery and incest).

Now, intention enters differently into the definitions of these acts. Arguably, idolatry is very much defined by intentions. The very same physical bending of one’s midriff in the very same physical circumstances (e.g., standing facing an idol) can very easily be an act of idolatry or a back exercise, precisely depending on what one is intending by this bow. Such pairs of cases can be manufactured in the case of murder, but they will involve very odd assumptions. We can imagine a surgeon or an assassin cutting someone’s chest with the same movement, but it is in fact very unlikely that the movement will be the same. In the case of idolatry, we might say that more work is being done by intention and in the case of murder more work is being done by the physical act. And sexual wrongdoing is a very complex topic, but it is likely that intention enters in yet different ways, and differently in the case of different sexual wrongs.

We can think of an absolute prohibition as having the following structure:

  1. For all x1, ..., xn, when U(x1, ..., xn), it is absolutely wrong to intentionally bring it about that I(x1, ..., xn).

Here, U(x1, ..., xn) is a contextual description which needs to obtain but need not be intended to have a wrong of the given type, and I(x1, ..., xn) is a contextual description which needs to be intended. For instance, for murder, prima facie U(x1, x2) might specify that x1 is an act whose patient is known to be a juridically innocent person x2, while I(x1, x2) will specify that, say, x1 is the killing of x2. It’s enough that the murderer should know that the victim is an innocent person—the murderer does not need to intend to kill them qua innocent. But the murderer does need to intend something like the killing.

Note that in ordinary speech, when we give absolute prohibitions we speak with scope ambiguity. Thus, we are apt to say things like “It is wrong to intentionally kill an innocent person”, without making clear whether “intentionally” applies just to “kill” or also to “innocent person”, i.e., without making it clear what is in the U part of the prohibition and what is in the I part.

Observe also that in the case of idolatry, more work is being done by I than by U, while in the case of murder, the work done by the two parts of the structure is the same.

So, now, here is a general strategy for handling closeness. We keep intention sharp, but we broaden (i.e., logically weaken) I by shifting some things that we might have thought are in I into U, perhaps introducing “known” or “believed” operators. For instance, in the case of murder, we might say something like this:

  2. When x1 is known to be the imposition of an arrangement x2 on the parts or aspects of an innocent person that normally and in this particular case precludes life, it is absolutely wrong to bring about x1 with the intention that it be an imposition of arrangement x2 on parts or aspects of reality.

And in the case of idolatry, perhaps we keep more in I, only moving the difference between God and the false god to the nonintentional portion of the prohibition:

  3. When x is known to be a god other than God, it is absolutely wrong to intentionally bring it about that one worships x.

And here is an important point. How we do this—how we shuffle requirements between I and U—will differ from absolute prohibition to absolute prohibition. What we are doing is not a refinement of Double Effect, but a refinement of the (hopefully small) number of absolute prohibitions in our deontological theory. We do not need anything general to say, across absolute prohibitions, about how this broadening of the intentional target is done.

There might even be further complexities. It could, for instance, be that we have role-specific absolute prohibitions, coming with other ways for aspects of the action to be apportioned between U and I.

Friday, November 13, 2020

Reducing Triple Effect to Double Effect

Kamm’s Principle of Triple Effect (PTE) says something like this:

  • Sometimes it is permissible to perform an act ϕ that has a good intended effect G1 and a foreseen evil effect E where E causally leads to a further good effect G2 that is not intended but is a part of one’s reasons for performing ϕ (e.g., as a defeater for the defeater provided by E).

Here is Kamm’s illustration by a case that does not have much moral significance: you throw a party in order to have a good time (G1); you foresee this will result in a mess (E); but you expect the partygoers will help you clean up (G2). You don’t throw the party in order that they help you clean up, and you don’t intend their help, but your expectation of their help is a part of your reasons for throwing the party (e.g., it defeats the mess defeater).

It looks now like PTE is essentially just the Principle of Double Effect (PDE) with a particular way of understanding the proportionality condition. Specifically, PTE is PDE with the understanding that foreseen goods that are causally downstream of foreseen evils can be legitimately used as part of the proportionality calculation.

One can, of course, have a hard-line PDE that forbids foreseen goods causally downstream of foreseen evils to be legitimately used as part of the proportionality calculation. But that hard-line PDE would be mistaken.

Suppose Alice has her leg trapped under a tree, and if you do not move the tree immediately, the leg will have to be amputated. Additionally, there is a hungry grizzly near Bob and Carl, who are unable to escape and you cannot help either of them. The bear is just hungry enough to eat one of Bob and Carl. If it does so, then because of eating that one, it won’t eat the other. The bear is heading for Bob. If you move the tree to help Alice, the bear will look in your direction, and will notice Carl while doing so, and will eat Carl instead of Bob. All three people are strangers to you.

It is reasonable to say that the fact that your rescuing Alice switches whom the bear eats does not remove your good moral reason to rescue Alice. However, if we have the hard-line PDE, then we have a problem. Your rescuing Alice leads to a good effect, Alice’s leg being saved, and an evil, Carl being eaten. As far as this goes, we don’t have proportionality: we should not save a stranger’s leg at the expense of another stranger’s life. So the hard-line PDE forbids the action. But the PDE with the softer way of understanding proportionality gives the correct answer: once we take into account the fact that the bear’s eating Carl saves Bob, proportionality is restored, and you can save Alice’s leg.

At the same time, I think it is important that the good G1 that you intend not be trivial in comparison to the evil E. If instead of its being a matter of rescuing Alice’s leg, it were a matter of picking up a penny, you shouldn’t do that (for more argument in that direction, see here).

So, if I am right, the proportionality evaluation in PDE has the following features:

  • we allow unintended goods that are causally downstream of unintended evils to count for proportionality, but

  • the triviality of the intended goods when compared to the unintended evils undercuts proportionality.

In other words, while the intended goods need not be sufficient on their own to make for proportionality, and unintended downstream goods may need to be taken into account for proportionality, nonetheless the intended goods must make a significant contribution towards proportionality.

Wednesday, November 11, 2020

Set theory and physics

Assume the correct physics has precise particle positions (similar questions can be asked in other contexts, but the particle position context is the one I will choose). And suppose we can specify a time t precisely, e.g., in terms of the duration elapsed from the beginning of physical reality, in some precisely defined unit system. Consider two particles, a and b, that exist at t. Let d be the distance between a and b at t in some precisely definable unit system.

Here’s a question that is rarely asked: Is d a real number?

This seems a silly question. How could it not be? What else could it be? A complex number?

Well, there are at least two other things that d could be without any significant change to the equations of physics.

First, d could be a hyperreal number. It could be that particle positions are more fine-grained than the reals.

Second, d could be what I am now calling a “missing number”. A missing number is something that can intuitively be defined by an English (or other meta-language) specification of an approximating “sequence”, but does not correspond to a real number in set theory. For instance, we could suppose for simplicity that d lies between 0 and 1 and imagine a physical measurement procedure that can determine the nth binary digit of d. Then we would have an English predicate Md(n) which is true just in case that procedure determined the nth binary digit to be 1. But it could turn out that in set theory there is no set whose members are the natural numbers n such that Md(n). For the axioms of set theory only guarantee the existence of a set defined using the predicates of set theory, while Md is not a predicate of set theory. The idea of such “missing numbers” is coherent, at least if our set theory is coherent.
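To make the approximating-“sequence” idea concrete, here is a small Python sketch of my own; the predicate `Md` is a hypothetical stand-in (the post’s point is precisely that `Md` need not be expressible in set-theoretic language, so no set `{n : Md(n)}` is guaranteed to exist):

```python
# Sketch of the "missing number" setup: a predicate Md(n) on the naturals
# determines a would-be real d in [0, 1) with binary expansion
#     d = sum_{n >= 1} Md(n) * 2**(-n).
# Md here is a hypothetical stand-in for the measurement procedure.

def Md(n):
    """Hypothetical measurement outcome: is the nth binary digit of d a 1?"""
    return n in {1, 3}  # stand-in answers: digits 1 and 3 come out as 1

def approx_d(bits):
    """Partial sum of the binary expansion, using the first `bits` digits."""
    return sum(2.0 ** -n for n in range(1, bits + 1) if Md(n))

# With this stand-in predicate, d = 0.101000... in binary, i.e. 0.625.
```

Each partial sum is a perfectly good rational; the issue in the post is whether the limit corresponds to any real number that set theory can prove to exist.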

It seems reasonable to say that d is indeed a real number, and to say similar things about any other quantities that can be similarly physically specified. But what guarantees such a match between set theory and physics? I see four options:

  1. Luck: it’s just a coincidence.

  2. Our set theory governs physics.

  3. Physics governs our set theory.

  4. There is a common governor to our set theory and physics.

Option 1 is an unhappy one. Option 4 might be a Cartesian God who freely chooses both mathematics and physics.

Option 2 is interesting. On this story, there is a Platonically true set theory, and then the laws of physics make reference to it. So it’s then a law of physics that distances (say) always correspond to real numbers in the Platonically true set theory.

Option 3 comes in at least two versions. First, one could have an Aristotelian story on which mathematics, including some version of set theory, is an abstraction from the physical world, and any predicates that we can define physically are going to be usable for defining sets. So, physics makes sets. Second, one could have a Platonic multiverse of universes of sets: there are infinitely many universes of sets, and we simply choose to work within those that match our physics. On this view, physics doesn’t make sets, but it chooses between the universes of sets.

Monday, November 9, 2020

The Math Tea argument

The Math Tea argument is an argument that there are real numbers that can’t be defined. The idea is this: there are only countably many definitions of real numbers (e.g., πe or "The middle root of the polynomial x³ − 5x² + 2x + 4"), and uncountably many real numbers, so there are real numbers that have no definitions.
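The counting step behind the argument, in standard cardinal notation (my gloss): definitions are finite strings over a countable alphabet, so

```latex
|\{\text{definitions of reals}\}| \;\le\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}| .
```

If the definable reals formed a set, some real would thus be left without a definition; whether they form a set is just what the next paragraph questions.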

Elegant as this argument is, it has crucial set-theoretic flaws. For instance, there is no guarantee that there is a set of all the definable real numbers. The axioms of set theory tell us that for any predicate F in the language of set theory there is a set of all the numbers that satisfy F. But the predicate "is definable" is in English, not in set theory.

We can, however, argue for the following weaker claim. Assume set theory is true. Then either:

  1. There is a real number that cannot be defined in the language of set theory, or

  2. "A real number is missing": there is an English language formula F(n) whose only semantic predicate is set-theoretic satisfaction such that there is no real number x whose nth digit after the decimal point is 1 if F(n) and is 0 if not F(n).

Here is the argument. A formula of set theory defines a real number if it has exactly one free variable and is satisfied by precisely one real number. Say that F(n) if and only if the nth formula of set theory (in lexicographic ordering) defining a real number defines a real number that does not have a 1 in the nth place after the decimal point. The only semantic predicate in F(n) is set-theoretic satisfaction. Suppose (2) is false. Then there is a real number x whose nth digit after the decimal point is 1 if F(n) and is 0 if not F(n). Suppose for reductio that x can be defined in the language of set theory by a formula ϕ, and let ϕ be the nth real-number-defining formula. Then F(n) if and only if x does not have a 1 in the nth place. But x has a 1 in the nth place if and only if F(n). Contradiction! So, x cannot be defined, and hence (1) is true.
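The diagonalization can be mimicked in a finite Python toy (my illustration: explicit digit functions stand in for real-number-defining formulas, and set-theoretic satisfaction is replaced by ordinary evaluation):

```python
# Toy diagonal construction.  Each "definition" is a function taking a place
# n to the nth binary digit of the real it "defines".

definitions = [
    lambda n: 0,        # 0.000... (binary)
    lambda n: 1,        # 0.111... (binary)
    lambda n: n % 2,    # 0.0101... (binary)
]

def F(n):
    """F(n) holds iff the nth 'definable real' does NOT have a 1 in place n."""
    return definitions[n](n) == 0

# x's nth digit is 1 iff F(n), so x differs from the nth listed real at place n.
x_digits = [1 if F(n) else 0 for n in range(len(definitions))]

for n, d in enumerate(definitions):
    assert x_digits[n] != d(n)  # x is not the nth definable real
```

The assertions verify the diagonal property: x disagrees with every listed real at the diagonal place, so x is not in the list, just as in the argument above.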

Logically speaking, if ZF is consistent, ZFC is consistent both with (1) (this follows by letting the digits of x be defined by the set of all set-theoretic truths and noting that if ZF is consistent, we can consistently suppose there is a set of all set-theoretic truths, but that set of course cannot be defined) and with the denial of (1).

But philosophically speaking, we might reasonably say that (2) would imply that "there aren’t enough real numbers", which sounds wrong, so it seems more reasonable to accept (1) instead.

Restricted epistemic mysterianism

There are two forms of mysterianism about X (say, consciousness):

  1. Conceptual: It would not be possible for us to even conceptualize the true theory of X.

  2. Epistemic: It would not be possible for us to know the true theory of X.

Conceptual mysterianism about X entails epistemic mysterianism about X. In the case of typical Xs, like consciousness or intentionality or morality, epistemic mysterianism entails conceptual mysterianism. For if we could conceptualize the true theory of X, then God could reveal to us that that theory is true. (I restricted to “typical Xs”, for there are some truths that we could not know but which we could conceptualize. For instance, that the past existence of life on Mars is a reality unknown to me is something I can conceptualize, but I can’t possibly know it.)

However, one can weaken epistemic mysterianism to:

  3. Restricted Epistemic: It would not be possible for us to know the true theory of X merely by human epistemic resources.

Consider the following interesting conditional:

  4. If physicalism is true about consciousness, then restricted epistemic mysterianism is true about it.

Here is an argument against 4. Imagine that we find a new physics in the brains of precisely those organisms that it is plausible to think of as conscious (maybe cephalopods and higher vertebrates). For instance, maybe there is a new particle type that is only found in those brains, or perhaps some already known particle type behaves differently in those brains. Moreover, there is a close correlation between the behavior of the new physics and plausible things to say about consciousness in these critters. And when we make a sophisticated enough AI, surprisingly that new physics also shows up in it. Given this, it would be reasonable to say that consciousness is to be identified with the behavior of that new physics.

But I think the following is true:

  5. If physicalism is true about consciousness and there is no new physics in the brains of conscious beings, then restricted epistemic mysterianism is true.

Here’s why. Assume physicalism. Some degree of multiple realizability of consciousness is true since cephalopods and mammals are both conscious, even though our brains are quite different—assuming the “new physics in brains” hypothesis is false (if it were true, the structural differences between cephalopod and mammal brains could be relevantly outbalanced by the similarities with respect to the “new physics”). Multiple realizability requires that consciousness be abstracted to some degree from the particular details of its embodiment in us. But there is no way of knowing how far it is to be abstracted. And without knowing that, we won’t know the true theory of consciousness.

If this is right, the true view of mind must be found among these three:

  • non-physicalism

  • restricted epistemic mysterianism (with or without conceptual mysterianism)

  • new physics.

On each of them, mind is mysterious. :-)

Logically complex intentions

In a paper that was very important to me when I wrote it, I argue that the Principle of Double Effect should deal with accomplishment rather than intention. In particular, I consider cases of logically complex intentions: “I am a peanut farmer and I hate people with severe peanut allergies…. I secretly slip peanuts into Jones’ food in order that she should die if she has a severe peanut allergy. I do not intend Jones’ death—I only intend the logically complex state of Jones dying if she has a severe peanut allergy.” I then say that what is wrong with this action is that if Jones has an allergy, then I have accomplished her death, though I did not intend her death. What was wrong with my action is that my plan of action was open to a possibility that included my accomplishing her death.

But now consider a different case. A killer robot is loose in the building and all the doors are locked. The robot will stop precisely when it kills someone: it has a gun with practically unlimited ammunition and a kill detector that turns it off when it kills someone. It’s heading for Bob’s office, and Alice bravely runs in front of it to save his life. And my intuition is that Alice did not commit suicide. Yet it seems that Alice intended her death as a means to saving Bob’s life.

But perhaps it is not right to say that Alice intended her death at all. Instead, it seems plausible that Alice’s intention is:

  1. If the robot will kill someone, it will kill Alice.

An additional reason to think that (1) is a better interpretation of Alice’s intentions than just her unconditionally intending to die is that if the robot breaks down before killing Alice, we wouldn’t say that Alice’s action failed. Rather, we would say that it was made moot.

But according to what I say in the accomplishment paper, if in fact the robot does not break down, then Alice accomplishes her own death. And that’s wrong. (I take it that suicide is wrong.)

Perhaps what we want to say is this. In conditional intention cases, when one intends:

  2. If p, then q

and p happens and one’s action is successful, then what one has contrastively accomplished is:

  3. its being the case that p and q rather than p and not q.

To contrastively accomplish A rather than B is not the same as to accomplish A simply. And there is nothing evil about contrastively accomplishing its being the case that the robot kills someone and kills Alice rather than the robot killing someone and not killing Alice. On the other hand, if we apply this analysis to the peanut allergy case, what the crazy peanut farmer contrastively accomplishes is:

  4. Jones having a peanut allergy and dying rather than having a peanut allergy and not dying.

And this is an evil thing to contrastively accomplish. Roughly, it is evil to accomplish A rather than B just in case A is not insignificantly more evil than B.

But what about a variant case? The robot is so programmed that it stops as soon as someone in the building dies. The robot is heading for Bob and it’s too late for Alice to jump in front of it. So instead Alice shoots herself. Can’t we say that she shot herself rather than have Bob die, and the contrastive accomplishment of her death rather than Bob’s is laudable? I don’t think so. For her contrastive accomplishment was accomplished by simply accomplishing her death, which while in a sense brave, was a suicide and hence wrong.

A difficult but important task someone should do: Work out the logic of accomplishment and contrastive accomplishment for logically complex intentions.

Friday, November 6, 2020

Conditional and unconditional desires, God's will, and salvation

Consider three cases:

  1. Bob doesn’t care either way whether Alice wants to go out with him. And he wants to go out with Alice if she wants to go out with him.

  2. Carl wants Alice’s desires to be fulfilled. And he wants to go out with Alice.

  3. Dave doesn’t care either way whether Alice wants to go out with him. And he wants to go out with Alice even if she doesn’t want to go out with him.

As dating partners, Dave is a creep, Bob is uncomplimentarily lukewarm and Carl seems the best.

Here’s how we could characterize Dave’s and Bob’s desires with respect to going out with Alice:

  • Bob’s desire is conditional.

  • Dave’s desire is unconditional.

What about Carl’s desire? I think it’s neither conditional nor unconditional. It is what we might call a simple desire.

The three desires interact differently with evidence about Alice’s lack of interest. Bob’s conditional desire leads him to give up on dating Alice. Dave’s creepy desire is unchanged. And Carl, on the other hand, comes to hope that Alice is interested notwithstanding the evidence to the contrary, and is motivated to act (perhaps moderately, perhaps excessively) to try to persuade Alice to want to go out with him.

One might query regarding Carl what happens if he definitively learns that his two desires, to go out with Alice and to have Alice want to go out with him, cannot both be fulfilled. Then, as far as the desires go, he could go either way: he could become a creep or he could resign himself. Resignation is obviously the right attitude. Note, however, that while resignation requires him to give up on going out with Alice, it need not require him to give up on desiring to go out with Alice (though if that desire lasts too long after learning that Alice has no interest, it is apt to screw up Carl’s life).

Now, it seems a pious thing to align one’s desires with God’s in all things. One “thing” is one’s salvation. One could have three attitudes analogous to the attitudes towards dating Alice:

  1. Conditional: Barbara desires to be saved if God wills it. But doesn’t care either way about whether God wills it.

  2. Simple: Charlotte desires to be saved. She desires that God’s will be done, and hopes and prays that God wills her salvation.

  3. Unconditional: Diana desires to be saved even if God doesn’t will it. She doesn’t care whether God wills it.

Barbara’s attitude is lukewarm and shows a lack of love of God, since she doesn’t simply want to be with God. Diana is harder to condemn than Dave, but nonetheless her attitude is flawed. Charlotte has the right attitude.

So, when we say we should align our desires with God’s in all things, that doesn’t seem to mean that all our desires should be conditional. It means, I think, to be like Charlotte: to desire an alignment between one’s desires and God’s.

And there is one further distinction to be made, between God’s antecedent and God’s consequent will. The classic illustration is this: When Scripture says that God wills all people to be saved (1 Tim. 2:4), that’s God’s antecedent will. It’s what God wants independently of other considerations. But because of the inextricable intertwining of God’s love and God’s justice (indeed, God’s love is his justice), God also antecedently wants that those who reject him be apart from him. Putting together these antecedent desires of God’s, God has a consequent desire to damn some, namely those who reject God.

I think what I said about Barbara, Charlotte and Diana clearly applies to God’s consequent will. But it’s less clear regarding God’s antecedent will. Necessarily, God antecedently wills all and only the goods. It seems not unreasonable to desire salvation only conditionally on its being a good thing, and hence to desire it only conditionally on its being antecedently willed by God. But I think Charlotte’s approach is also defensible. Charlotte desires to be with God for eternity and desires that being with God is a good thing.

Thursday, November 5, 2020

Is there a set of all set-theoretic truths?

Is there a set of all set-theoretic truths? This would be the set of sentences (in some encoding scheme, such as Goedel numbers) in the language of set theory that are true.

There is a serious epistemic possibility of a negative answer. If ZF is consistent, then there is a model M of ZFC such that every object in M is definable, i.e., for every object a of M, there is a defining formula ϕ(x) that is satisfied by a and by a alone in M (and if there is a transitive model of ZF, then M can be taken to be transitive). In such a model, it follows from Tarski’s Indefinability of Truth that there is no set of all set-theoretic truths. For if there were such a set, then that set would be definable, and we could use the definition of that set to define truth. So, if ZF is consistent, there is a model M of ZFC that does not contain a set of all the truths in M.

Interestingly, however, there is also a serious epistemic possibility of a positive answer. If ZF is consistent, then there is a model M of ZFC that does contain a set of all the truths in M. Here is a proof. If ZF is consistent, so is ZFC. Let ZFCT be a theory whose language is the language of set theory with an extra constant T, and whose axioms are the axioms of ZFC with the schemas of Separation and Replacement restricted to formulas of ZFC (i.e., formulas not using T), plus the axiom:

  1. ∀x(x ∈ T → S(x))

where S(x) is a sentence saying that x is the code for a sentence (this is a syntactic matter, so it can be specified explicitly), and the axiom schema that has for every sentence ϕ with code n:

  2. ϕ ↔ n ∈ T.

Any finite collection of the axioms of ZFCT is consistent. For let M be a model of ZFC (if ZF is consistent, so is ZFC, so it has a model). Then all the axioms of ZFC will be satisfied in M. Furthermore, for any finite subset of the additional axioms of ZFCT, there is an interpretation of the constant T under which those axioms are true. To see this, suppose that our finite subset contains (1) (no harm throwing that in if it’s not there) and the instances ϕi ↔ ni ∈ T of (2) for i = 1, ..., m. It is provable from ZF and hence true in M that there is a set t such that x ∈ t if and only if x = n1 and ϕ1, or x = n2 and ϕ2, …, or x = nm and ϕm.
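Explicitly, the witnessing interpretation of T in M is (in my notation) the finite set

```latex
t = \{\, n_i : 1 \le i \le m \text{ and } M \models \phi_i \,\},
```

and since t is finite and contains only codes of sentences, its existence in M is immediate and it satisfies the sentence-code condition (1).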

Moreover, any such set can be proved in ZF to satisfy:

  3. ∀x(x ∈ t → S(x)).

Interpreting T to be that set t in M will make the finite subset of the additional axioms true.

So, by compactness, ZFCT has an interpretation I in some model M. In M there will be an object t such that t = I(T). That object t will be a set of all the truths in M that do not contain the constant T. Now consider the interpretation I′ of ZFC in M, which is I without any assignment of a value to the constant T (since T is not a constant of ZFC). Then ZFC will be true in M under I′. Moreover, the object t in M will be a set of all the truths in M.

So, if ZF is consistent, then there is a model of ZFC with a set of all set-theoretic truths and a model of ZFC without a set of all set-theoretic truths.

The former claim may seem to violate Tarski’s Indefinability of Truth. But it doesn’t. For that set of all truths will not itself be definable. It will exist, but there won’t be a formula of set theory that picks it out. There is nothing mathematically new in what I said above, but it is an interesting illustration of how one can come close to violating Indefinability of Truth without actually violating it.

Now, what if we take a Platonic view of the truths of set theory? Should we then say that there really is a set of all set-theoretic truths? Intuitively, I think so. Otherwise, our class of all sets is intuitively “missing” a subset of the set of all sentences. I am inclined to think that the Axioms of Separation and Replacement should be extended to include formulas of English (and other human languages), not just the formulas expressible in set-theoretic language. And the existence of the set of all set-theoretic truths follows from an application of Separation to the sentence “n is the code for a sentence of set theory that is true”.

Wednesday, November 4, 2020

Quinn, Double Effect and closeness

In a famous paper, Warren Quinn suggests replacing the distinction between intending evil and foreseeing evil in the Principle of Double Effect (PDE) with a distinction between directly and indirectly harmful action. For concreteness, let’s talk about the death of innocents. Classical PDE reasoning says that it’s wrong to intend the death of an innocent, but it is permissible to accept it as a side-effect for a proportionate reason. Quinn thinks that this has the implausible consequence that craniotomy is permissible: that it is permissible to crush the skull of a fetus to get it through the birth canal, because one is not intending the fetus’s death, but only the reduction in head size. This is a special case of the closeness problem: intending to crush the skull is too close to intending death for there to be a moral distinction, yet technically one can intend the crushing without intending the death, and so Double Effect makes a moral distinction where there is none.

Quinn suggests that what is instead wrong is to intentionally cause an effect on an innocent that has the following two properties:

  1. the effect is a harm, and

  2. this harm is foreseen to result in death.

The doctor is intending to crush the fetus’s skull: that is an intended effect on the fetus. This effect is a harm, and it is foreseen to result in death. So craniotomy is ruled out. Similarly, blowing up the fat man blocking the entrance of the cave in which other spelunkers are trapped is ruled out, because even though it is possible to blow someone up without intending that they die, being blown up is a clear case of harm, and it is foreseen to lead to death.

This is clever, but I think it fails. For we can imagine that a callous doctor does not intend any effect on the fetus. All he intends is the change in arrangement of a certain set of molecules in order to facilitate their removal from the uterus. These molecules happen to be the ones that the fetus is made of. But that they make up the body of the fetus need not be relevant to the doctor’s intention. If instead there were something other than a fetus present that for health reasons needed to be removed (not at all a remote possibility: consider the body of an already deceased fetus), and the molecules there were similarly arranged, our callous doctor would take exactly the same course of action. Similarly, the spelunkers need not be intending to break up the fat man’s body, but simply to disperse a cloud of molecules.

Now, we could say that the molecules constitute or even are the body of the fetus or of the fat man, and we could say that if you intend A and you know that A is or constitutes B, then you intend B. But if you say that, then you don’t need the Quinn view to get out of craniotomy. For you can then take Fitzpatrick’s solution to the problem of closeness that crushing the skull constitutes death, and hence that the doctor intends death. In fact, though, the constitution principle is false: intention is hyperintensional, and not only does it fail to transfer along constitution lines, but one can intend the identical object under one description and not under another. Anyway, the point here is that the molecule problem shows that we need some other solution to the problem of closeness to make Quinn’s story work: the Quinn solution might help with some cases, but it cannot be taken to be the solution.

Double Effect and symbolic actions

There is an intrinsic value to standing against evil. One way to do that is to intentionally act to reduce the evil. But that’s not the only way. Another way of standing against evil is to protest it even when one reasonably expects one’s protest to have no effect. When we see standing against evil as something of significant intrinsic value, then sometimes it will even make sense to stand against evil even when we foresee that doing so will unintentionally increase the evil. It can be legitimate to protest an abuse of power even if one foresees that such protest will lead to further abuses of power, such as a crackdown on the protesters. Of course, prudence is needed, and one must keep proportionality in mind: if the abuses of power inspired by the protest are likely to be much worse than the ones being protested, it is better not to protest. Another way to stand against evil is to punish it. Again, this can make sense even when one does not expect the punishment to reduce the evil (e.g., perhaps the evil is a one-off and it is unlikely that there will be any further temptations to deter people from).

Similarly, there is an intrinsic value to standing for good. A central way to stand for good is to act to increase the good. But, again, it’s not the only way. Admiring, rewarding and praising also are ways of standing for good, even when they are not expected to increase the good.

The actions that constitute standing for good or against evil but that are not intentional acts to increase the good or reduce the evil may be called symbolic. “Symbolic” is often used as a way to downplay the importance of something. That is a mistake: the symbolic can be of great importance. Moreover, “symbolic” suggests a social dimension that need not be relevant. When an atheist hikes alone in order to contemplate the goodness of nature, that is a way of standing for the good of nature that is symbolic in the above sense but not social. Moreover, “symbolic” suggests a certain arbitrariness of choice of symbol. But there need not be such. There is nothing arbitrary in virtue of which admiring a beautiful view is a way of symbolically standing for the good. We thus need to understand “symbolic” in a broad way that is compatible with great intrinsic value, that need not be social in nature, and need not involve arbitrary socially instituted representations.

If we do this, then here is a promising way to make the kinds of deontological views that are tied to the Principle of Double Effect plausible. On these views, certain fundamental evils are wrong to intentionally produce but may be tolerated as side-effects. But now things look puzzling. Let’s say that we can end a war by dropping a bomb on the wicked leaders in the enemy headquarters in a busy city, a bomb that will also kill and maim thousands of innocents in the surrounding buildings, or we can end the war by kidnapping and maiming the enemy leader’s innocent child. The attack on the child is wrong while the attack on the headquarters is permissible on this deontological ethics, but that may just feel wrong. But if we see symbolic standing for good and against evil as really important, the difference becomes more plausible. In intending the maiming of the child, one is standing for evil: for it is inescapable that by intending an evil one stands for it. In refusing to maim the child, one is standing against evil. But in dropping the bomb, the mere foresight of the plight of thousands of innocents does not make one be standing for evil. One can still count as standing against evil by intending to kill the evildoers in the headquarters.

It is tempting to think that when standing against evil does not actually reduce evil, as in the case of the refusal to maim the child, the action is merely symbolic, and the moral weight of the obligation is low. But that is a mistake: “merely” is a poor choice of words when connected with “symbolic”. Symbolic actions can be of great import indeed.

Tuesday, November 3, 2020

The Grotius view of lying

Grotius had a weird view: it is never permissible to lie, but “for purposes of natural law”, only assertions to people who had a right to the truth were lies. Nazis at the door, he would have said, have no right to the truth, so one isn’t lying when one asserts known falsehoods to them. This view has always seemed clearly wrong.

But I just realized that there is actually an interesting argument for a very similar view. Start with these three principles:

  1. Every lie is an assertion.

  2. A defining feature of an assertion is that it is the sort of speech act that the sincerity norm (e.g., “Don’t say what you think is false!”) applies to.

  3. No norm applies in contravention of unequivocal moral norms.

Premise (1) is clearly true. Premise (2) is part and parcel of normative accounts of assertion (there is room for variance on what the sincerity norm exactly is, but that variance will not affect our main argument).

Premise (3) is highly controversial. It is a generalization of Aquinas’ principle that immoral “laws” are not really laws. The general idea is that morality not only overrides other norms that contradict it, but as it were sucks all the power out of them. When one knows that ϕing is morally forbidden, responses like “But the law of the land requires it” or “I’d be breaking the rules of the game if I ϕed” make no sense. For there is no normative force against morality. Here are two reasons to accept premise (3). The first is the controversial claim that all norms of action are a species of moral norms. (Here is a theistic argument for this: Norms are appropriately action-guiding; the only thing that can appropriately guide our action is what the love of God requires (we are to love God with all our heart); but to be guided by the love of God and to be guided by morality is the same thing.) The second is that if there are norms other than moral norms, they are created by our normative powers, but it is not plausible that we have the normative power to create norms that stand against the norms of morality (that is, for instance, why immoral promises are null and void).

Then:

  4. If the sincerity norm for a speech act ϕ contravenes unequivocal moral norms, the speech act is not an assertion. (By 2 and 3)

  5. If the sincerity norm for a speech act ϕ contravenes unequivocal moral norms, the speech act is not a lie. (By 1 and 4)

Now here is one way to fill out the rest of the argument:

  6. In Nazi at the door cases, we are morally required to say what we disbelieve (i.e., go against what the sincerity norm would require).

  7. So, in Nazi at the door cases, saying what we disbelieve is not a lie. (By 5 and 6)

And that gives us a version of the Grotius view.

My own view is to flip the last two steps of the argument, replacing 6 and 7 with:

  8. In Nazi at the door cases, saying what we disbelieve is a lie.

  9. So, in Nazi at the door cases it is still false that we are morally required to say what we disbelieve. (By 5 and 8)

  10. In Nazi at the door cases, if it is morally permissible to say what we disbelieve, it is morally required.

  11. So, in Nazi at the door cases, it is not morally permissible to say what we disbelieve. (By 9 and 10)
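The two derived steps are a simple piece of propositional reasoning. In schematic shorthand of my own (not the post’s notation), write A(a) for “a is an assertion”, L(a) for “a is a lie”, S(a) for “the sincerity norm applies to a”, and C(a) for “the sincerity norm for a contravenes unequivocal moral norms”:

```latex
% Premise (2): assertions are the speech acts to which the sincerity norm applies
A(a) \to S(a)
% Premise (3): no norm applies in contravention of unequivocal moral norms
C(a) \to \neg S(a)
% Contraposing (2) and chaining with (3) gives the first derived step:
C(a) \to \neg A(a)
% Premise (1): every lie is an assertion
L(a) \to A(a)
% Contraposing (1) and chaining with the previous line gives the second:
C(a) \to \neg L(a)
```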

But a lot of people balk at 9. And they then have reason to accept the Grotius-like thesis 7.

So, all in all, if one accepts the normative view of assertion and one accepts the contravention principle 3, one has a choice between Kantian absolutism about lying and a Grotius-like view.

Monday, November 2, 2020

An odd argument for an omniscient being

Here’s a funny logically valid argument:

  1. The analytic/synthetic distinction between truths is the same as the a priori / a posteriori distinction.

  2. The analytic/synthetic distinction between truths makes sense.

  3. If 1 and 2, then every truth is knowable.

  4. So, every truth is knowable. (1–3)

  5. If every truth is knowable, then every truth is known.

  6. So, every truth is known. (4–5)

  7. If every truth is known, there is an omniscient being.

  8. So, there is an omniscient being. (6–7)

I won’t argue for 1 and 2: those are big-picture substantive philosophical questions. I am sceptical of both claims.

The argument for 3 is this. If the analytic/synthetic distinction makes sense, then the two concepts are exclusive and exhaustive among truths: a truth is synthetic just in case it’s not analytic. So, every truth is analytic or synthetic. But if 1 is true and the analytic/synthetic distinction makes sense, then it follows that every truth is a priori or a posteriori. But these phrases are short for a priori knowable and a posteriori knowable. Thus, if 1 is true and the analytic/synthetic distinction makes sense, then every truth is knowable.

The argument for 5 is the famous knowability paradox: if p is an unknown truth, then that p is an unknown truth is itself a truth that cannot be known (for if someone knew that p is an unknown truth, they would thereby know that p is a truth, and then it wouldn’t be an unknown truth, and no one can know what isn’t so).
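The parenthetical reasoning is Fitch’s derivation. In the usual modal shorthand (K for “it is known that”, ◇ for possibility; my gloss, not the post’s own notation), it runs:

```latex
% Knowability (the antecedent of 5): every truth is knowable
\forall q\,(q \to \Diamond K q)
% Instantiate with the putative unknown truth q = p \wedge \neg K p:
(p \wedge \neg K p) \to \Diamond K(p \wedge \neg K p)
% K distributes over conjunction, and K is factive (K r \to r), so:
K(p \wedge \neg K p) \;\to\; K p \wedge K \neg K p \;\to\; K p \wedge \neg K p
% That is a contradiction, and necessarily so; hence:
\neg \Diamond K(p \wedge \neg K p)
% Combined with the instantiated knowability premise, this yields:
\neg(p \wedge \neg K p), \quad\text{i.e.,}\quad p \to K p
```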

One argument for 7 is an Ockham’s Razor argument: it is more plausible to think there is one being that knows all things than that the knowledge is scattered among many. A sketch of a deductive argument for 7 that skirts over some important technical issues is this: if you know a conjunction, you know all the conjuncts; let p be the conjunction of all truths; if every truth is known, then p is known; and someone who knows p knows all.
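The deductive sketch in the last sentence can be spelled out as follows (my gloss; as the post notes, it skirts technical worries, e.g. about whether a conjunction of all truths can be formed at all):

```latex
% Let p be the conjunction of all truths:
p \;=\; \bigwedge \{\, q : q \text{ is true} \,\}
% If every truth is known, then p in particular is known by someone, say S: K_S\, p.
% Knowledge of a conjunction yields knowledge of each conjunct:
K_S\Big(\bigwedge_i q_i\Big) \;\to\; K_S\, q_i \quad \text{for each } i
% So S knows every truth, i.e., S is omniscient.
```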

Pain and water

One way for physicalists to handle the apparent differences between mental and physical properties is to liken the difference to that between water and H2O. It is a surprising a posteriori fact that water is H2O. Similarly, it is a surprising a posteriori fact that pain is physical state ϕ135 (say).

Now, a posteriori facts are facts that are knowable by observation. But it is not clear that the proposition that pain is physical state ϕ135 is knowable by observation.

Here is why. There are two main candidates for what kind of a state ϕ135 could be: a brain state or a functional state. The choice between these two candidates depends on how strongly one feels about multiple realizability of mental states. If one is willing to say that only beings with brains like ours—say, complex vertebrates—feel pain, one might identify ϕ135 with a brain state. If one has a strong intuition that beings with other computational systems anatomically different from those of complex vertebrates—cephalopods, aliens, and robots—could have consciousness, one will opt for identifying ϕ135 as a functional state.

But in fact, assuming pain is a physical state, there is a broad spectrum of physical state candidates for identifying pain with, depending on how far we abstract from the actual physical realizers of our pains while keeping fixed the broad outlines of functionality (signaling damage and leading to aversive behavior). If we abstract very little, only brain states found in humans—and perhaps not all humans—will be pain. If we abstract a bit more, but still insist on anatomical correspondence, then brain states found in other complex vertebrates will be pain. If we drop the insistence on anatomical correspondence but do not depart too far, we may include amongst the subjects of pain other DNA-based organisms such as cephalopods. Further abstraction will let in living organisms with other chemical bases, and yet further abstraction will let in robots. And even when talking of the fairly pure functionalism applicable to robots, we will have serious questions about how far to abstract concepts such as “damage” and “aversive behavior”.

The question of where in this spectrum of more and more general physical states we find the state that is identical with pain does not appear to be a question to be settled by observation. By internal observation, we only see our own pain. By external observation, however, we cannot tell where in the spectrum of more and more general (perhaps along multiple dimensions) physical states pain is present, without begging the question (e.g., by assuming from the outset that certain behaviors show the presence of pain, which basically forces our hand to a functionalism centered on those behaviors).

Objection 1: An experimenter could replace the brain structures responsible for pain in her own brain by structures that are further from human ones, and observe whether she can still feel pain. Where the feeling of pain stops, there we have abstracted too far.

Response: There are serious problems with this experimental approach. First, mere replacement of the brain pain centers will not allow one to test hypotheses on which what constitutes pain depends on the larger neural context. And replacement of the brain as a whole is unlikely to result in the experimenter surviving. Second, and perhaps more seriously, if the replacements of the brain pain centers commit the same data to memory storage as the brain pain centers do, then after the experiment the agent will think that there was pain, even if there wasn’t any; and if the replacements have the same functional influence on vocal production as the brain centers do, the agent will report pain, again even if there wasn’t any.

Objection 2: We could know which physical state pain is identified with if God told us, and being told by God is a form of a posteriori knowledge.

Response: It seems likely that God’s knowledge of which physical states are pains, or of the fact that water is H2O, would be a priori knowledge. God doesn’t have to do scientific research to know necessary truths.

Objection 3: We can weaken the analogy and say that just as the identity between water and H2O is not a priori, so too the identity between pain and ϕ135 is not a priori, without saying that both are a posteriori.

Response: This is probably the move I’d go for if I were a physicalist. But by weakening this analogy, one weakens the position that it defends. For it is now admitted that there is a disanalogy between water-H2O and pain-ϕ135. There is something rather different about the mental case.