Thursday, December 17, 2020

A multiple faculty solution to the problem of conscience

I used to be quite averse to multiplying types of normativity until I realized that in an Aristotelian framework it makes perfect sense to multiply them by their subject. Thus, I should think that 1 = 1, I should look both ways before crossing the street, and I should have a heart-rate of no more than 100. But the norms underlying these claims have different subjects: my intellect, my will and my circulatory system (or perhaps better: I as thinking, I as willing and I as circulating).

In this post I want to offer two solutions to the problem of mistaken conscience that proceed by multiplying norms. The problem of mistaken conscience is two-fold as there are two kinds of mistakes of conscience. A strong mistake is when I judge something is required when it is forbidden. A weak mistake is when I judge something is permissible when it is forbidden.

Given that I should follow my conscience, a strong mistake of conscience seems to lead to two conflicting obligations: I should ϕ, because my conscience says so, and I should refrain from ϕing, because ϕing is forbidden. Call the claim that strong mistakes of conscience lead to conflicting obligations the Dilemma Thesis. The Dilemma Thesis is perhaps somewhat implausible on its face, but can be swallowed (as Mark Murphy does). However, more seriously, the Dilemma Thesis has the unfortunate result that strong mistakes of conscience are not, as such, mistakes. For the mistake was supposed to be that I judge ϕing as required when it is forbidden. But that is only a mistake when ϕing is not required. But according to the Dilemma Thesis, it is required. So there is no mistake. (There may be a mistake about why it is required, and perhaps one can use that to defuse the problem, but I want to try something else in this post.) Moreover, a view that embraces the Dilemma Thesis needs to explain the blame asymmetry between the obligation to ϕ and the obligation not to ϕ: I am to blame if I go against conscience, but not if I follow conscience.

Weak mistakes are less of a problem, but they still raise the puzzle of why I am not blameworthy if I do what is forbidden when conscience says it’s permissible.

Moving towards a solution, or actually a pair of solutions, start with this thought. When I follow a mistaken conscience, my will does nothing wrong but the practical intellect has made a mistake. In other words, we have two sets of norms: norms of practical intellect and norms of will. In these cases I judged badly but willed well. And it is clear why I am not blameworthy: for I become blameworthy by virtue of a fault of the will, not a fault of the intellect.

But there is still a problem analogous to the problem with the Dilemma Thesis. For it seems that:

  1. In a mistake of conscience, my judgment was bad because it made a false claim as to what I should will.

In the case of a strong mistake, say, I judged that I should will my ϕing whereas in fact I should have nilled my ϕing. But I can’t say that and say that the will did what it should in ϕing.

This means that if we are to say that the will did nothing wrong and the problem was with the intellect, we need to reject (1). There are two ways of doing this, leading to different solutions to the problem of conscience.

Claim (1) is based on two claims about practical judgment:

  2. The practical intellect’s judgments are truth claims.

  3. These truth claims are claims about what I should will.

We can get out of (1) by denying (2) (with (3) then becoming moot) or by holding on to (2) but rejecting (3).

Anscombe denies (2), for reasons having nothing to do with mistakes of conscience. There is good precedent for denying (2), then.

I find the solution that denies (2) a bit murky, but I can kind of see how one would go about it. Oversimplifying, the intellect presents actions to the will on balance positively or negatively. This presentation does not make a truth claim. The polarity of the presentation by the intellect to the will should not be seen as a judgment that an action has a certain character, but simply as a certain way of presenting the judgment—with propathy or antipathy, one might say. Nonetheless there are norms of presentation built into the nature of the practical intellect. These norms are not truth norms, like the norms of the theoretical intellect, but are more like the norms of the functioning of the body’s thermal regulation system, which should warm up the body in some circumstances and cool it down in others, but does not make truth claims. There are actions that should be positively presented and actions that should be negatively presented. We can say that the actions that should be positively presented are right, but the practical intellect’s positive presentation of an action is not a presentation that the action is right, for that would be an odd circularity: to present ϕing positively would be to present ϕing as something that should be presented positively.

(In reality, the “on balance” positive and negative presentations typically have a thick richness to them, a richness corresponding “in flavor” to words like “courageous”, “pleasant”, etc. However, we need to be careful on this view not to think of the presentation corresponding “in flavor” to these words as constituting a truth claim that a certain concept applies. I am somewhat dubious whether this can all be worked out satisfactorily, and so I worry that the no-truth-claim picture of the practical intellect falls afoul of the thickness of the practical intellect’s deliverances.)

There is a second solution which, pace Anscombe, holds on to the idea that the practical intellect’s judgments are truth claims, but denies that they are claims about what I should will. Here is one way to develop this solution. There are times when an animal’s subsystem is functioning properly but it would be better if it did something else. For instance, when we are sick, our thermal regulation system raises our temperature in order to kill invading bacteria or viruses. But sometimes the best medical judgment will be that we will on the whole be better off not raising the temperature given a particular kind of invader, in which case we take fever-reducing medication. We have two norms here: a local norm of the thermal regulation system and a holistic norm of the organism.

Similarly, there are local norms of the will—to will what the intellect presents to it overall in a positive light, say. And there are local norms of the intellect—to present the truth or maybe that which the evidence points to as true. But there are holistic norms of the acting person (to borrow Wojtyla’s useful phrase), such as not to kill innocents. The practical intellect discerns these holistic norms, and presents them to the will. The intellect can err in its discernment. The will can fail to follow the intellect’s discernment.

The second solution is rather profligate with norms, having three different kinds of norms: norms of the will, norms of the intellect, and norms of the acting person, who comprises at least the will, the intellect and the body.

In a strong mistake of conscience, where we judge that we should ϕ but ϕing is forbidden, and we follow conscience and ϕ, here is what happens. The will rightly follows the intellect’s presentation by willing to ϕ. The acting person, however, goes wrong by ϕing. We genuinely have a mistake of the intellect: the intellect misrepresented what the acting person should do. The acting person went wrong, and did so simpliciter. However, the will did right, and so one is not to blame. We can say that in this case, the ϕing was wrong, but the willing to ϕ was right. And we can say how the pro-ϕing norm takes priority: the norm to will one’s ϕing is a norm of the will, so naturally it is what governs the will.

In a weak mistake of conscience, where we judge that it is permissible to ϕ but it’s not, again the solution is that under the circumstances it was permissible to will to ϕ, but not permissible to ϕ.

There is, however, a puzzle in connecting this story with failed actions. Consider either kind of mistake of conscience, and suppose I will to ϕ but I fail to ϕ due to some non-moral systemic failure. Maybe I will to press a forbidden button, but it turns out I am paralyzed. In that case, it seems that the only thing I did was willing to ϕ, and so we cannot say that I did anything wrong. I think there are two ways out of this. The first is to bite the bullet and say that this is just a case where I got lucky and did nothing wrong. The second is to say that my willing to ϕ can be seen as a trying to ϕ, and it is bad as an action of the acting person but not bad as an action of the will.

Tuesday, December 15, 2020

A proof that ought implies can

Some actions are things I can do immediately: for instance, I can immediately raise my hand. Others require that I do something to enable myself to do the action: for instance, to teach in person, I have to go to the classroom, or to feed my children, I need to obtain food. So, here is a very plausible axiom of deontic logic:

  1. If I ought to do A, and A is not an action I can do immediately, then I ought to bring it about that I can immediately do A.

Now, say that I remotely can do an action provided that I can immediately do it, or I can immediately bring it about that I can immediately do it, or I can immediately bring it about that I can immediately bring it about that I can immediately do it, or ….

It follows from (1) and a bit of reasoning that:

  2. If I ought to do A, then I remotely can do A, or I have an infinite regress of prerequisite obligations.

But:

  3. It is false that I have an infinite regress of prerequisite obligations.

So:

  4. If I ought to do A, then I remotely can do A.
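The recursive definition of “remotely can” amounts to finite reachability through a chain of enabling actions. Here is a minimal Python sketch of that idea; the action names and the `enablers` map are hypothetical illustrations, and a cycle in the map plays the role of an infinite regress of prerequisites:

```python
# Toy model: `immediate` is the set of actions I can do immediately;
# `enablers[a]` is the set of actions whose performance would make `a`
# immediately doable.
def remotely_can(action, immediate, enablers, _seen=None):
    """True iff there is a finite chain of enabling actions ending in `action`."""
    if _seen is None:
        _seen = set()
    if action in _seen:
        return False  # an infinite regress of prerequisites: no finite chain
    _seen.add(action)
    if action in immediate:
        return True
    return any(remotely_can(e, immediate, enablers, _seen)
               for e in enablers.get(action, ()))

immediate = {'raise_hand', 'go_to_classroom', 'obtain_food'}
enablers = {'teach_in_person': {'go_to_classroom'},
            'feed_children': {'obtain_food'},
            # a regress: each of these is enabled only by the other
            'square_circle': {'unsquare'},
            'unsquare': {'square_circle'}}

assert remotely_can('raise_hand', immediate, enablers)
assert remotely_can('teach_in_person', immediate, enablers)
assert not remotely_can('square_circle', immediate, enablers)
```

The cycle check mirrors premise (3): once infinite regresses of prerequisite obligations are ruled out, whatever I ought to do I remotely can do.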

Monday, December 14, 2020

God and Beauty

Here is my talk from the Cracow philosophy of religion conference in September:

Thursday, December 10, 2020

The possibility test for intentions

This test for whether one is intending some effect E of an action is often employed (e.g., by Germain Grisez) in the Double Effect literature:

  1. If it is logically possible for an action with an intention J to be fully successful even though E does not happen, then E is not included in J.

Claim (1) follows in standard modal logic (with no need for anything fancy like S5) from:

  2. If an intention J includes E, then the inclusion of E is an essential property of J.

  3. Necessarily, if an action is done with an intention that includes E and E does not occur, then the action is not fully successful.

For suppose that E is included in J. Then in every possible world where an action is done with J, the action is done with an intention that includes E (by (2)), and so in every possible world where an action is done with J, the action is not fully successful if E does not occur, by (3). Hence, there is no possible world where an action is done with J and is fully successful even though E does not happen. Thus, we have (1).

At the same time, (1) sounds awfully strong. Even if the possible world where the action is successful despite the lack of E requires a miracle, E is not included in J. For instance, suppose God is able to keep the soul of a human being bound to a single atom. That means that someone whose intention was to blow the man blocking the mouth of the cave literally to single atoms was not intending death, since there is a possible world where the person’s soul remains bound to a single atom, and in that world the action is clearly successful.

To deny (1), one needs to deny (2) or (3). I think the best route to denying (2) is a strong dose of semantic externalism: the content of an intention is dependent in part on things outside the individual. Perhaps on Earth the very same intention may be an intention to drink water, while on Twin-Earth the very same intention may be an intention to drink XYZ. I am sceptical of this: it seems to me that the best way to understand the water-XYZ issue is that intentions are partly grounded in facts outside the individual, and so it is a different intention on Twin-Earth than on Earth, even if it is partly grounded in the same facts in the individual.

But even if one is impressed by the water-XYZ issue, it seems one should be willing to accept the following variant on (2):

  4. If an intention J includes E and occurs at t, then in any possible world that exactly matches the actual world up to and including t, the intention J includes E at t.

The argument for (1) can now be modified to yield an argument for:

  5. If an action with an intention J occurs at t, and if there is a possible world that matches the actual world up to and including t and where the action with J is fully successful but where E does not happen, then E is not included in J.

And if one’s motivation for denying (1) is to avoid the conclusion that intending to blow the man in the mouth of the cave to single atoms does not include intending death, then (5) is just as bad. For God could miraculously keep the soul bound to a single atom without anything being any different up to and including the time of the action.

If we don’t want (1), we won’t want (5), either.

So a better bet is to deny (3). A start towards a denial of (3) would be to talk of something like “stretch goals”. It seems that an action may have a stretch goal and yet be successful even if that stretch goal is unachieved. However, the stretch goal is surely intended.

I am not sure. If the stretch goal is intended, then it seems that the right thing to say is that the action is successful but not fully successful if the stretch goal is not met.

In any case, we might grant the claim about stretch goals, and introduce the concept of an intention being perfectly satisfied, which includes the satisfaction of all stretch goals, and then replace “fully successful” with “perfectly successful” in (1) and (5). And I think this will still generate the result about blowing the man in the mouth of the cave to atoms, because his death—the separation of soul from body—is not a stretch goal either. (If anything, one might imagine that his survival is a stretch goal.)

All this makes me want to say that (3) really is true, and we cannot avoid the conclusion that it is possible to intend to blow the man in the mouth of the cave to single atoms without intending to kill him. But I am now inclined to think that an intention to kill is not a necessary condition for murder, and so the action could still be a murder.

Monday, December 7, 2020

Independence, spinners and infinitesimals

Say that a “spinner” is a process whose output is an angle from 0 (inclusive) to 360 (exclusive). Take as primitive a notion of uniform spinner. I don’t know how to define it. A necessary condition for uniformity is that every angle has the same probability, but this necessary condition is not sufficient.

Consider two uniform and independent spinners, generating angles X and Y. Consider a third “virtual spinner”, which generates the angle Z obtained by adding X and Y and wrapping to be in the 0 to 360 range (thus, if X = 350 and Y = 20, then Z = 10). This virtual spinner is intuitively statistically independent of each of X and Y on its own but not of both.

Suppose we take the intuitive statistical independence at face value. Then:

  • P(Z = 0)P(X = 0)=P(Z = X = 0)=P(Y = X = 0)=P(Y = 0)P(X = 0),

where the second equality follows from the fact that if X = 0 then Z = 0 if and only if Y = 0. Suppose now that P(X = 0) is an infinitesimal α. Then we can divide both sides by α, and we get

  • P(Z = 0)=P(Y = 0).

By the same reasoning with X and Y swapped:

  • P(Z = 0)=P(X = 0).

We conclude that

  • P(X = 0)=P(Y = 0).

We thus now have an argument for a seemingly innocent thesis:

  1. Any two independent uniform spinners have the same probability of landing at 0.

But if we accept that uniform spinners have infinitesimal probabilities of landing at a particular value, then (1) is false. For suppose that X and Y are angles from two independent uniform spinners for which (1) is true. Consider a spinner whose angle is 2Y (wrapped to the [0, 360) range). This doubled spinner is clearly uniform, and independent of X. But its probability of yielding 0 is equal to the probability of Y being 0 or 180, which is twice the probability of Y being 0, and hence twice the probability of X being 0, in violation of (1) if P(X = 0)>0.
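The infinitesimal part of the story cannot be simulated, but the wrap-around arithmetic behind the doubled-spinner claim can be checked on a whole-degree discretization. A minimal Python sketch:

```python
# Discretize the spinner to whole degrees 0..359.
def wrap(angle):
    return angle % 360

# The virtual spinner from the post: Z = X + Y, wrapped.
assert wrap(350 + 20) == 10  # the example in the text

# The doubled spinner 2Y lands at 0 exactly when Y is 0 or 180, so the
# event "2Y wraps to 0" is twice the size of the event "Y = 0".
zeros = [y for y in range(360) if wrap(2 * y) == 0]
assert zeros == [0, 180]
```

The two-element preimage {0, 180} is what makes the doubled spinner’s probability of hitting 0 twice that of Y, generating the conflict with (1).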

So, something has gone wrong for friends of infinitesimal probabilities. I see the following options available for them:

  2. Deny that Z = 0 has non-zero probability.

  3. Deny that Z is statistically independent of X as well as being statistically independent of Y.

I think (3) is probably the better option, though it strikes me as unintuitive. This option has an interesting consequence: we cannot independently rerandomize a spinner by giving it another spin.

The careful reader will notice that this is basically the same argument as the one here.

Wednesday, December 2, 2020

More on the side-effect harm/help asymmetry

Wright and Bengson note an apparent intuitive asymmetry in our side-effect judgments. We blame people for not avoiding bad effects, even when these bad effects are not intended, but we do not praise people for not avoiding good effects when these good effects are not intended.

I wonder if the explanation for this asymmetry isn’t this:

  1. Typical good people strive to avoid bad side-effects to others.

  2. Typical bad people don’t strive to avoid good side-effects to others.

The reason for (2) is that typical bad people are selfish rather than malevolent: their badness consists in the fact that they put themselves before others, not in their going out of their way to deprive others of goods as such. But typical good people are positively benevolent, so we have (1).

Now, given (1), if you fail to avoid a bad side-effect, that makes you worse than a typical good person. And that calls for significant castigation. But given (2), if you fail to avoid a good side-effect, that doesn’t make you better than a typical bad person. Granted, you could still be praised for being better than a very bad person, but that would be damning with faint praise. So, (1) and (2) neatly predict the asymmetry in our practices of praise and blame.

But now imagine that we lived in a more polarized society, where typical bad people were actually malevolent rather than selfish. Against that background, it would make sense to praise someone for not avoiding a good effect to another. This is similar to the way that we would not praise a 21st-century upper-class man for refraining from duelling, but we would praise a 19th-century one for the same thing. For the vice of duelling is no longer rampant like it was, and to say that someone never engages in duels is damning with faint praise. Praise is comparative, and comparisons depend on reference class.

Sometimes that reference class is the person’s past and present. And that provides cases where we would praise someone for not striving to avoid good side-effects. If out of hatred someone previously strove to avoid good effects to a particular other, and then stopped such striving, then praise would be in order.

We thus need to be careful in drawing conclusions from praise and blame practices, because these practices depend on statistical facts. If the above is right, the side-effect asymmetry may simply be due to reference class issues rather than any deeper facts about intentions, side-effects and value.

But I think there is probably a further asymmetry between praise and blame. While, as noted, we do not praise people for doing good things most people in the reference class do, we do in fact blame people for doing bad things that most people in the reference class do. While we do not praise our 21st-century contemporaries for refraining from duelling, we would have been right to castigate our 19th-century contemporaries for that vice. That “everybody is doing it” often makes praise feel nearly completely inappropriate, but it only somewhat decreases the degree of blame rather than eliminating it.

Another problem for infinitesimal probabilities

Here’s another problem with independence for friends of infinitesimal probabilities.

Let ..., X−2, X−1, X0, X1, X2, ... be an infinite sequence of independent fair coin tosses. For i = 0, 1, 2, ..., define Ei to be heads if Xi and X−1−i are the same and tails otherwise.

Now define these three events:

  • L: X−1, X−2, ... are all heads

  • R: X0, X1, ... are all heads

  • E: E0, E1, ... are all heads.

Friends of infinitesimal probabilities insist that P(R) and P(L) are positive infinitesimals.

I now claim that E is independent of R, and the same argument will show that E is independent of L. This is because of this principle:

  1. If Y0, Y1, ... is a sequence of independent random variables, and f and g are functions such that f(Yi) and g(Yi) are independent of each other for each fixed i, then the sequences f(Y0),f(Y1),... and g(Y0),g(Y1),... are independent of each other.

But now let Yi = (Xi, X−1−i). Then Y0, Y1, ... is a sequence of independent random variables. Let f(x, y)=x and let g(x, y) be heads if x = y and tails otherwise. Then it is easy to check that f(Yi) and g(Yi) are independent of each other for each fixed i. Thus, by (1), f(Y0),f(Y1),... and g(Y0),g(Y1),... are independent of each other. But f(Yi)=Xi and g(Yi)=Ei. So, X0, X1, ... and E0, E1, ... are independent of each other, and hence so are E and R.

The same argument shows that E and L are independent.

Write AB for the conjunction of A and B and note that EL, ER and RL are the same event—namely, the event of all the coins being heads. Then:

  2. P(E)P(L)=P(EL)=P(RL)=P(R)P(L)

Since friends of positive infinitesimals insist that P(R) and P(L) are positive infinitesimals, we can divide both sides by P(L) and get P(E)=P(R). The same argument with L and R swapped shows that P(E)=P(L). So, P(L)=P(R).

But now let Xi* = Xi + 1 (i.e., shift the sequence by one), and define L* to be the event of X−1*, X−2*, … being all heads, and R* the event of X0*, X1*, … being all heads. The exact same argument as above will show that P(L*)=P(R*). But friends of infinitesimal probabilities have to say that P(R*)>P(R) and P(L*)<P(L), and so we have a contradiction if P(L)=P(R) and P(L*)=P(R*).

I think the crucial question is whether (1) is still true in settings with infinitesimal probabilities. I don’t have a great argument for it. It is, of course, true in classical probabilistic settings.
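The combinatorial heart of the construction, namely that EL, ER and RL each coincide with the event of all coins being heads, can be verified by brute force on a finite truncation of the doubly infinite sequence. A small Python check (the truncation length is an arbitrary choice):

```python
from itertools import product

def events_coincide(n):
    """Truncate the sequence to indices -n..n-1 and check that the
    conjunctions EL, ER and RL each equal the event 'all heads'."""
    idx = list(range(-n, n))
    for assign in product('HT', repeat=2 * n):
        X = dict(zip(idx, assign))
        L = all(X[-i] == 'H' for i in range(1, n + 1))      # left half all heads
        R = all(X[i] == 'H' for i in range(n))              # right half all heads
        E = all(X[i] == X[-1 - i] for i in range(n))        # Ei all heads
        all_heads = all(c == 'H' for c in assign)
        if (E and L) != all_heads or (E and R) != all_heads or (R and L) != all_heads:
            return False
    return True

assert events_coincide(3)
```

Since E pairs each right-hand coin with its mirror-image left-hand coin, fixing either half to heads and requiring agreement forces the other half to heads as well.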

Monday, November 30, 2020

Independence, uniformity and infinitesimals

Suppose that a random variable X is uniformly distributed (in some intuitive sense) over some space. Then:

  1. P(X = y)=P(X = z) for any y and z in that space.

But I think something stronger should also be true:

  2. Let Y and Z be any random variables taking values in the same space as X, and suppose each variable is independent of X. Then P(X = Y)=P(X = Z).

Fixed constants are independent of X, so (1) follows from (2).

But if we have (2), and the plausible assumption:

  3. If X and Y are independent, then X and f(Y) are independent for any function f,

we cannot have infinitesimal probabilities. Here’s why. Suppose X and Y are independent random variables uniformly distributed over the interval [0, 1). Assume P(X = a) is infinitesimal for a in [0, 1). Then, so is P(X = Y).

Let f(x)=2x for x < 1/2 and f(x)=2x − 1 for 1/2 ≤ x. Then if X and Y are independent, so are X and f(Y). Thus:

  4. P(X = Y)=P(X = f(Y)).

Let g(x)=x/2 and let h(x)=(1 + x)/2. Then:

  5. P(Y = g(X)) = P(Y = X)

and

  6. P(Y = h(X)) = P(Y = X).

But now notice that:

  7. Y = g(X) if and only if X = f(Y) and Y < 1/2

and

  8. Y = h(X) if and only if X = f(Y) and 1/2 ≤ Y.

Thus:

  9. (Y = g(X) or Y = h(X)) if and only if X = f(Y)

and note that we cannot have both Y = g(X) and Y = h(X). Hence:

  10. P(X = Y)=P(X = f(Y)) = P(Y = g(X)) + P(Y = h(X)) = P(Y = X)+P(Y = X)=2P(X = Y).

Therefore:

  11. P(X = Y)=0,

which contradicts the infinitesimality of P(X = Y).

This argument works for any uniform distribution on an infinite set U. Just let A and B be a partition of U into two subsets of the same cardinality as U (this uses the Axiom of Choice). Let g be a bijection from U onto A and h a bijection from U onto B. Let f(x)=g⁻¹(x) for x ∈ A and f(x)=h⁻¹(x) for x ∈ B.
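The biconditionals relating f, g and h in the original [0, 1) case can be checked mechanically with exact rational arithmetic. A small Python sketch (the grid denominator is an arbitrary choice; the identities hold for all rationals in [0, 1)):

```python
from fractions import Fraction

# The doubling map and the two half-interval sections from the argument.
def f(x): return 2 * x if x < Fraction(1, 2) else 2 * x - 1
def g(x): return x / 2            # maps [0,1) into [0,1/2)
def h(x): return (1 + x) / 2      # maps [0,1) into [1/2,1)

def identities_hold(denom):
    """On a rational grid of [0,1), check that (Y = g(X) or Y = h(X)) iff
    X = f(Y), and that Y = g(X) and Y = h(X) never hold together."""
    pts = [Fraction(k, denom) for k in range(denom)]
    for x in pts:
        for y in pts:
            if ((y == g(x)) or (y == h(x))) != (x == f(y)):
                return False
            if (y == g(x)) and (y == h(x)):
                return False
    return True

assert identities_hold(64)
```

Exact `Fraction` arithmetic matters here: floating point would turn the equalities into approximate comparisons and spoil the biconditionals.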

Note: We may wish to restrict (3) to intuitively “nice” functions, ones that don’t introduce non-measurability. The functions in the initial argument are “nice”.

Incompatible reasons for the same action

While writing an earlier post, I came across a curious phenomenon. It is, of course, quite familiar that we have incompatible reasons that we cannot act on all of: reasons of convenience often conflict with reasons of morality, say. This familiar incompatibility is due to the fact that the reasons support mutually incompatible actions. But what is really interesting is that there seem to be incompatible reasons for the same action.

The clearest cases involve probabilities. Let’s say that Alice has a grudge against Bob. Now consider an action that has a chance of bestowing an overall benefit on Bob and a chance of bestowing an overall harm on Bob. Alice can perform the action for the sake of the chance of overall harm out of some immoral motive opposed to Bob’s good, such as revenge, or she can perform the action for the sake of the chance of overall benefit out of some moral motive favoring Bob’s good. But it would make no sense to act on both kinds of reasons at once.

One might object as follows: The expected utility of the action, once both the chance of benefit and the chance of harm are taken into account, is either negative, neutral or positive. If it’s negative, only the harm-driven action makes sense; if it’s positive, only the benefit-driven action makes sense; if it’s neutral, neither makes sense. But this neglects the richness of possible rational attitudes to risk. Expected utilities are not the only rational way to make decisions. Moreover, the chances may be interval-valued in such a way that the expected utility is an interval that has both negative and positive components.

Another objection is that perhaps it is possible to act on both reasons at once. Alice could say to herself: “Either the good thing happens to Bob, which is objectively good, or the bad thing happens, and I am avenged, which is good for me.” Sometimes such disjunctive reasoning does make sense. Thus, one might play a game with a good friend and think happily: “Either I will win, which will be nice for me, or my friend will win, and that’ll be nice, too, since he’s my friend.” But the Alice case is different. The revenge reason depends on endorsing a negative attitude towards Bob, which one cannot do while seeking to benefit Bob.

Or suppose that Carl read in what he took to be holy text that God had something to say about ϕing, but Carl cannot remember if the text said that God commanded ϕing or that God forbade ϕing—it was one of the two. Carl thinks there is a 30% chance it was a prohibition and a 70% chance that it was a command. Carl can now ϕ out of a demonic hope to disobey God or he can ϕ because ϕing was likely commanded by God.

In the most compelling cases, one set of motives is wicked. I wonder if there are such cases where both sets of motives are morally upright. If there are such cases, and if they can occur for God, then we may have a serious problem for divine omnirationality which holds that God always acts for all the unexcluded reasons that favor an action.

One way to argue that such cases cannot occur for God is by arguing that the most compelling cases are all probabilistic, and that on the right view of divine providence, God never has to engage in probabilistic reasoning. But what if we think the right view of providence involves probabilistic reasoning?

We might then try to construct a morally upright version of the Alice case, by supposing that Alice is in a position of authority over Bob, and instead of being moved by revenge, she is moved to impose a harm on Bob for the sake of justice or to impose a good on him out of benevolent mercy. But now I think the case becomes less clearly one where the reasons are incompatible. It seems that Alice can reasonably say:

  1. Either justice will be served or mercy will be served, and I am happy with both.

I don’t exactly know why it is that (1) makes rational sense but the following does not:

  2. Either vengeance on Bob will be served or kindness to Bob will be served, and I am happy with both.

But it does seem that (1) makes sense in a way in which (2) does not. Maybe the difference is this: to avenge requires setting one’s will against the other’s overall good; just punishment does not.

I conjecture that there are no morally upright cases of rationally incompatible reasons for the same action. That conjecture would provide an interesting formal constraint on rationality and morality.

Friday, November 27, 2020

An improvement on the objective tendency interpretation of probability

I am very much drawn to the objective causal tendency interpretation of chances. What makes a quantum die have chance 1/6 of giving any of its six results is that there is an equal causal tendency towards each result.

However, objective tendency interpretations have a serious problem: not every conditional chance fact is an objective tendency. After all, if P(A|B) represents an objective causal tendency of the system in state B to have state A, to avoid causal circularity, we don’t want to say that P(B|A) represents an objective causal tendency of the system in state A to have state B.

There is a solution to this: a more complex objective tendency interpretation somewhat in the spirit of David Lewis’s best-fit interpretation. Specifically:

  • the conditional chance of A on B is r if and only if Q(A|B)=r for every probability function Q such that (a) Q satisfies the axioms of probability and (b) Q(C|D)=q whenever q is the degree of tendency of the system in state D to have state C.

There are variants of this depending on the choice of formalism and axioms for Q (e.g., one can make Q be a classical countably additive probability, or a Popper function, etc.). One can presumably even extend this to handle lower and upper chances of nonmeasurable events.

Scratch coding in Minecraft

Years ago, for my older kids' coding education, I made a Minecraft mod that lets you program in Python. Now I made a Scratch extension that works with that mod for block-based programming, that I am hoping to get my youngest into. Instructions and links are here.


Wednesday, November 25, 2020

Intending as a means or as an end

I used to think that it is trivial and uncontroversial that if one intends something, one intends it as an end or as a means.

Some people (e.g., Aquinas, Anscombe, O’Brien and Koons, etc.) have a broad view of intention. On such views, if something is known to inevitably and directly follow from something that one intends, then one intends that, too. This rules out sophistical Double Effect justifications, such as a Procrustes who cuts off the heads of people who are too tall to fit the bed claiming that he intends to shorten rather than kill.

But if one has a broad view of intention, then I think one cannot hold that everything intended is intended as an end or as a means. The death of Procrustes’ victim is not a means: for it does nothing to help the victim fit the bed. But it’s not an end either: it is the fit for the bed that is the end (or something else downstream of that, such as satisfaction at the fit). So on broad views of intention, one has to say that Procrustes intends death, but does not intend it either as a means or as an end.

While this is a real cost of the broad theory of intention, I think it is something that the advocates of that theory should simply embrace. They should say there are at least three ways of intending something: as a means, as an end, and as an inevitable known side-effect (or however they exactly want to formulate that).

On the other hand, if we want to keep the intuition that to intend is to intend as a means or as an end, then we need to reject broad theories of intentions. In that case, I think, we should broaden the target of the intention instead.

In any case, the lesson is that the characterization of intending as intending-as-a-means-or-as-an-end is a substantive and important question.

Reasons as construals

Scanlon argues that intentions do not affect the permissibility of non-expressive actions because our intentions come from our reasons, and our reasons are like beliefs in that they are not something we choose.

In this argument, our reasons are the reasons we take ourselves to have for action. Scanlon’s argument can be put as follows (my wording, not his):

  1. I do not have a choice of which reasons I take myself to have.

  2. If I rationally do A, I do it for all the reasons that I take myself to have for doing A.

And the analogy with beliefs supports (1). However, when formulated like this, there is something like an equivocation on “reasons I take myself to have” between (1) and (2).

On its face reasons I take myself to have are belief-like: indeed, one might even analyze “I take myself to have reason R for A” as “I believe that R supports A”. But if they are belief-like in this way, I think we can argue that (2) is false.

Beliefs come in occurrent and non-occurrent varieties. It is only the occurrent beliefs that are fit to ground or even be analogous to the reasons on the basis of which we act. Suppose I am a shady used car dealer. I have a nice-looking car. I actually tried it out and found that it really runs great. You ask me what the car is like. I am well-practiced at answering questions like that, and I don’t think about how it runs: I just say what I say about all my cars, namely that it runs great. In this case, my belief that the car runs great doesn’t inform my assertion to you. I do not even in part speak on the basis of the belief, because I haven’t bothered to even call to mind what I think about how this car runs.

So, (2) can only be true when the “take myself to have” is occurrent. For consistency, it has to be occurrent in (1). But (1) is only plausible in the non-occurrent sense of “take”. In the occurrent sense, it is not supported by the belief analogy. For we often do have a choice over which beliefs are occurrent. We have, for instance, the phenomenon of rummaging through our minds to find out what we think about something. In doing so, we are trying to make occurrent our beliefs about the matter. By rummaging through our minds, we do so. And so what beliefs are occurrent then is up to us.

This can be of moral significance. Suppose that I once figured out the moral value of some action, and now that action would be very convenient to engage in. I have a real choice: do I rummage through my mind to make occurrent my belief about the moral value of the action or not? I might choose to just do the convenient action without searching out what it is I believe about the action’s morality because I am afraid that I will realize that I believe the action to be wrong. In such a case, I am culpable for not making a belief occurrent.

While the phenomenon of mental rummaging is enough to refute (1), I think the occurrent belief model of taking myself to have a reason is itself inadequate. A better model is a construal model, a seeing-as model. It’s up to me whether I see the duck-rabbit as a duck or as a rabbit. I can switch between them at will. Similarly, I can switch between seeing an action as supported by R1 and seeing it as supported by R2. Moreover, there is typically a fact of the matter whether I am seeing the duck-rabbit as a duck or as a rabbit at any given time. And similarly, there may be a fact of the matter as to how I construed the action when I finally settled on it, though I may not know what that fact is (for instance, because I don’t know when I settled on it).

In some cases I can also switch to seeing the action as supported by both R1 and R2, unlike in the case of the duck-rabbit. But in some cases, I can only see it as supported by one of the reasons at a time. Suppose Alice is a doctor treating a patient with a disease that when untreated will kill the patient in a month. There is an experimental drug available. In 90% of the cases, the drug results in instant death. In 10% of the cases, the drug extends the remaining lifetime to a year. Alice happens to know that this patient once did something really terrible to her best friend. Alice now has two reasons to recommend the drug to the patient:

  • the drug may avenge the evil done to her friend by killing the patient, and

  • the drug may save the life of the patient thereby helping Alice fulfill her medical duties of care.

Both reasons are available for Alice to act on. Unless Alice has far above average powers of compartmentalization (in a way in which some people perhaps can manage to see the duck-rabbit as both a duck and a rabbit at once), it is impossible for Alice to act on both reasons. She can construe the recommending of the pill as revenge on an enemy or she can construe it as a last-ditch effort to give her patient a year of life, but not both. And it is very plausible that she can flip between these. (It is also likely that after the fact, she may be unsure of which reason she chose the action for.)
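As an aside, the survival arithmetic implicit in the case is easy to make explicit. A minimal sketch (the function name is mine; the 90%/10% figures and the one-month and one-year horizons are from the case above):

```python
# Expected remaining lifetime, in months, for Alice's patient.

def expected_months(p_instant_death, months_if_drug_works):
    """Expected remaining lifetime if the drug is taken."""
    return (1 - p_instant_death) * months_if_drug_works

with_drug = expected_months(0.9, 12)  # 90% instant death, else a year of life
without_drug = 1.0                    # untreated, the disease kills in a month

print(with_drug, without_drug)
```

So the drug slightly raises expected remaining lifetime (about 1.2 months versus 1), which helps make clear why the life-saving construal is genuinely available to Alice and not a mere rationalization.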

In fact, we can imagine Alice as deliberating between four options:

  • to recommend the drug in the hope of killing her enemy instantly

  • to recommend the drug in the hope of giving her patient a year of life

  • to recommend against the drug in order that her enemy should die in a month

  • to recommend against the drug in order that her patient have at least a month of life.

The first two options involve the same physical activity—the same words, say—and the last two options do as well. But when she considers the first two options, she construes them differently, and similarly with the last two.

Thursday, November 19, 2020

Intention doesn't transfer to inevitable consequences

Some people, maybe as part of a response to the closeness problem for Double Effect, think:

  1. Whenever I intend A while knowing that A inevitably causes B, I intend B.

This is false. Suppose I play a game late at night in order to have late night fun, knowing that late night fun will inevitably lead to my being tired in the morning. Now, if I intend something, I intend it as a means or as an end. I clearly don’t intend to be tired in the morning as a means to having had fun in the evening: there is no backwards causation. But I also don’t intend being tired in the morning as an end: the end was my late night fun, which led to being tired. So if I don’t intend it as a means or as an end, I don’t intend it at all, contrary to (1).

More precisely:

  2. I intend E as my end and know that E inevitably causes F.

  3. If I intend something, I intend it as a means or as an end.

  4. If I know that something is caused by my end, then I do not intend it as an end.

  5. If I know that something is caused by my end, then I do not intend it as a means.

  6. So, I do not intend F as an end or as a means. (2, 4, 5)

  7. So, I do not intend F. (3, 6)

  8. So, sometimes I act intending E and knowing that E inevitably causes some effect F without intending F. (2, 7)

  9. So, (1) is false.

Property dualism and relativity theory

On property dualism, we are wholly made of matter but there are irreducible mental properties.

What material object fundamentally has the irreducible mental properties? There are two plausible candidates: the body and the brain. Both of them are extended objects. For concreteness, let’s say that the object is the brain (the issue I will raise will apply in either case). Because the properties are irreducible and are fundamentally had by the brain, they are not derivative from more localized properties. Rather, the whole brain has these properties. We can say (to borrow a word from Dean Zimmerman) that the brain is suffused with these fundamental properties.

Suppose now that I have an irreducible mental property A. Then the brain as a whole is suffused with A. Suppose that at a later time, I cease to have A. Then the brain is no longer suffused with A. Moreover, because it is the brain as a whole that is a subject of mental properties, it seems to follow that the brain must instantly move from being suffused as a whole with A to having no A in it at all. Now, consider two spatially separated neurons, n1 and n2. Then at one time they both participate in the A-suffusion and at a later time neither participates in the A-suffusion. There is no time at which n1 (say) participates in A-suffusion but n2 does not. For if that were to happen, then A would be had by a proper part of the brain at that time rather than by the brain as a whole, and we’ve said that mental properties are had by the brain as a whole.

But this violates Relativity Theory. For if in one reference frame, the A-suffusion leaves n1 and n2 simultaneously, then in another reference frame it will leave n1 first and only later it will leave n2.
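The relativity of simultaneity being appealed to here can be checked directly with the Lorentz transformation. A minimal sketch (units with c = 1; the frame velocity and the neuron coordinates are arbitrary stand-ins):

```python
import math

def boosted_time(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at velocity v (c = 1)."""
    gamma = 1 / math.sqrt(1 - v * v)
    return gamma * (t - v * x)

# In the brain's rest frame, A-suffusion leaves n1 and n2 at the same time t = 0,
# at different locations x = 0 and x = 1.
t1 = boosted_time(0.0, 0.0, 0.5)  # departure event at n1, in the moving frame
t2 = boosted_time(0.0, 1.0, 0.5)  # departure event at n2, in the moving frame

# In the moving frame, the two departures are no longer simultaneous.
print(t1, t2)
```

Since t1 and t2 come apart in the boosted frame, in that frame there is a time at which one neuron still participates in the A-suffusion while the other does not, which is exactly what the view was supposed to rule out.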

I think the property dualist has two moves available. First, they can say that mental properties can be had by a proper part of a brain rather than the brain as a whole. But the argument can be repeated for the proper part in place of the brain. The only stopping point here would be for the property dualist to say that mental properties can be had by a single point particle, and indeed that when mental properties leave us, at some point in time in some reference frames they are only had by very small, functionally irrelevant bits of the brain, such as a single particle. This does not seem to do justice to the brain dependence intuitions that drive dualists to property dualism over substance dualism.

The second move is to say that the brain as a whole has the irreducible mental property, but to have it as a whole is not the same as to have its parts suffused with the property. Rather, the having of the property is not something that happens to the brain qua extended, spatial or composed of physical parts. Since physical time is indivisible from space, mental time will then presumably be different from physical time, much as I think is the case on substance dualism. The result is a view on which the brain becomes a more mysterious object, an object equipped with its own timeline independent of physics. And if what led people to property dualism over substance dualism was the mysteriousness of the soul, well here the mystery has returned.

Wednesday, November 18, 2020

Substance dualism and relativity theory

Here is an interesting argument against substance dualism:

  1. Something only exists simultaneously with my body when it exists in space.

  2. My mind now exists simultaneously with my body.

  3. So, my mind now exists in space.

  4. Anything in space is material.

  5. So, my mind is material.

If this argument is right, then there is at least one important respect in which property dualism and physicalism are better off than substance dualism.

The reasoning behind (1) is Relativity Theory: the temporal sequence that bodies are in cannot be separated from space, forming an indissoluble unity with it, namely spacetime.

One way out of the argument is to deny (4). Perhaps the mind is immaterial but in space in a way derivative from the body’s being in space and the mind’s intimate connection with the body. On this view, the mind’s being in time would seem to have to be derivative from the body’s being in time. This does not seem appealing to me: the mind’s spatiality could be derivative from the spatiality of something connected with the mind, but that the mind’s temporality would be derivative from the temporality of something connected with the mind seems implausible. Temporality seems too much a fundamental feature of our minds.

However, there is a way to resolve this difficulty, by saying that the mind has two temporalities. It has a fundamental temporality of its own—what I have elsewhere called “internal time”—and it has a derivative temporality from its connection with spatiotemporal entities, including the body. When I say that my mind is fundamentally temporal, that refers to the mind’s internal time. When we say that my mind is derivatively temporal, that refers to my mind’s external time.

If this is right, then we have yet another reason for substance dualists to adopt an internal/external time distinction. If this were the only reason, then the need for the distinction would be evidence against substance dualism. But I think the distinction can do a lot of other work for us.

Love and physicalism

Every so often, I have undergraduates questioning the reduction of the mental to the physical on the basis of love. One rarely meets the idea that love would be a special kind of counterexample to physicalism in the philosophical literature. It is tempting to say that the physicalist who can handle qualia and intentionality can handle love. But perhaps not.

Maybe students just have a direct intuition that love is something that transcends the humdrum physical world?

Or maybe there is an implicit argument like this:

  1. Love has significance of degree or kind N.

  2. No arrangement of particles has significance of degree or kind N.

  3. So, love is not an arrangement of particles.

Here is a related argument that I think is worth taking seriously:

  1. Love has infinite significance.

  2. No finite arrangement of atoms has infinite significance.

  3. So, love is not a finite arrangement of particles.

  4. If physicalism is true, then love is a finite arrangement of particles.

  5. So, physicalism is not true.

One can replace “love” here with various other things, such as humanity, virtue, etc.

The incompleteness of current physics

  1. There is causation in the physical world.

  2. Causation is irreducible.

  3. Our fundamental physics does not use the concept of causation.

  4. So, our fundamental physics is incomplete as a description of the physical world.

Tuesday, November 17, 2020

Nomic functionalism

Functionalism says that of metaphysical necessity, whenever x has the same functional state as a system y with internal mental state M, then x has M as well.

What exactly counts as an internal mental state is not clear, but it excludes states like thinking about water for which plausibly semantic externalism is true and it includes conscious states like having a pain or seeing blue. I will assume that functional states are so understood that if a system x has functional state S, then a sufficiently good computer simulation of x has S as well.

A weaker view is nomic functionalism according to which for every internal mental state M (at least of a sort that humans have), there is a law of nature that says that everything that has functional state S has internal mental state M.

A typical nomic functionalist admits that it is metaphysically possible to have S without M, but thinks that the laws of nature necessitate M given S.

I am a dualist. As a result, I think functionalism is false. But I still wonder about nomic functionalism, often in connection with this intuition:

  1. Computers can be conscious if and only if functionalism or nomic functionalism is true.

Here’s the quick argument: If functionalism or nomic functionalism is true, then a computer simulation of a conscious thing would be conscious, so computers can be conscious. Conversely, if both computers and humans can be conscious, then the best explanation of this possibility would be given by functionalism or nomic functionalism.

I now think that nomic functionalism is not all that plausible. The reason for this is the intuition that a computer simulation of a cause normally only produces a computer simulation of the effect rather than the effect itself. Let me try to be more rigorous, though.

First, let’s continue from (1):

  2. Dualism is true.

  3. If dualism is true, functionalism is false.

  4. Nomic functionalism is false.

  5. Therefore, neither functionalism nor nomic functionalism is true. (2–4)

  6. So, computers cannot be conscious. (1, 5)

And that’s really nice: the ethical worries about whether AI research will hurt or enslave inorganic persons disappear.

The premise I am least confident about in the above argument is (4). Nomic functionalism seems like a serious dualist option. However, I now think there is good inductive reason to doubt nomic functionalism.

  7. No known law of nature makes functional states imply non-functional states.

  8. So, no law of nature makes functional states imply non-functional states. (Inductively from 7)

  9. If functionalism is false, mental states are not functional states.

  10. So, mental states are not functional states. (2, 3, 9)

  11. So, no law of nature makes functional states imply mental states. (8 and 10)

  12. So, nomic functionalism is false. (11 and definition)

Regarding (7), if a law of nature made functional states imply non-functional states, that would mean that we have multiple realizability on the left side of the law but lacked multiple realizability on the right side. It would mean that any accurate computer simulation of a system with the given functional state would exhibit the particular non-functional state. This would be like a case where a computer simulation of water being heated were to have to result in actual water boiling.

I think the most promising potential counterexamples to (7) are thermodynamic laws that can be multiply realized. However, I think that in those cases, the implied states are typically also multiply realizable.

A variant of the above argument replaces “law” with “fundamental law”, and uses the intuition that if dualism is true, then nomic functionalism would have to have fundamental laws that relate functional states to mental states.

Monday, November 16, 2020

Closeness and Double Effect

The Principle of Double Effect (PDE) is traditionally a defense against a charge of bringing about an effect that is absolutely wrong to intentionally bring about, a defense that holds that although one foresaw the effect, one did not intend it.

One of the main difficulties for PDE is the closeness problem. Typical examples of the closeness problem are things like dropping bombs on an enemy city in order to make the civilians look dead (Bennett), blowing up the fat man in the mouth of the cave when there is no other way out (Anscombe), etc.

If we think of intentions as arrows and the wrong-to-intend act as a target, one strategy for handling closeness problems is to “broaden intentions”, so that they hit the target more easily. Thus, if you intend something “close enough” to an effect you count as intending (or something similar to intending, say accomplishing) that effect. There are interesting general theories of this (e.g., O’Brien and Koons), but I do not think any of them cover all the cases well.

Another strategy, however, is to broaden the target. This strategy keeps intention very sharp and hyperintensional, but insists that what is forbidden to intend is broader. A number of people have done that (e.g., Quinn). What I want to do in this post is to offer a way of looking at a version of this strategy.

The PDE is correlative to absolute wrongs. There aren’t that many absolute wrongs. For instance, Judaism lists only three kinds of acts as absolute wrongs, things that may not be done no matter the benefits:

  • idolatry

  • murder

  • certain sexual sins (e.g., adultery and incest).

Now, intention enters differently into the definitions of these acts. Arguably, idolatry is very much defined by intentions. The very same physical bending of one’s midriff in the very same physical circumstances (e.g., standing facing an idol) can very easily be an act of idolatry or a back exercise, precisely depending on what one is intending by this bow. Such pairs of cases can be manufactured in the case of murder, but they will involve very odd assumptions. We can imagine a surgeon or an assassin cutting someone’s chest with the same movement, but it is in fact very unlikely that the movement will be the same. In the case of idolatry, we might say that more work is being done by intention and in the case of murder more work is being done by the physical act. And sexual wrongdoing is a very complex topic, but it is likely that intention enters in yet different ways, and differently in the case of different sexual wrongs.

We can think of an absolute prohibition as having the following structure:

  1. For all x1, ..., xn, when U(x1, ..., xn), it is absolutely wrong to intentionally bring it about that I(x1, ..., xn).

Here, U(x1, ..., xn) is a contextual description which needs to obtain but need not be intended to have a wrong of the given type, and I(x1, ..., xn) is a contextual description which needs to be intended. For instance, for murder, prima facie U(x1, x2) might specify that x1 is an act whose patient is known to be a juridically innocent person x2, while I(x1, x2) will specify that, say, x1 is the killing of x2. It’s enough that the murderer should know that the victim is an innocent person—the murderer does not need to intend to kill them qua innocent. But the murderer does need to intend something like the killing.

Note that in ordinary speech, when we give absolute prohibitions we speak with scope ambiguity. Thus, we are apt to say things like “It is wrong to intentionally kill an innocent person”, without making clear whether “intentionally” applies just to “kill” or also to “innocent person”, i.e., without making it clear what is in the U part of the prohibition and what is in the I part.

Observe also that in the case of idolatry, more work is being done by I than by U, while in the case of murder, the work done by the two parts of the structure is the same.

So, now, here is a general strategy for handling closeness. We keep intention sharp, but we broaden (i.e., logically weaken) I by shifting some things that we might have thought are in I into U, perhaps introducing “known” or “believed” operators. For instance, in the case of murder, we might say something like this:

  2. When x1 is known to be the imposition of an arrangement x2 on the parts or aspects of an innocent person that normally and in this particular case precludes life, it is absolutely wrong to bring about x1 with the intention that it be an imposition of arrangement x2 on parts or aspects of reality.

And in the case of idolatry, perhaps we keep more in I, only moving the difference between God and the false god to the nonintentional portion of the prohibition:

  3. When x is known to be a god other than God, it is absolutely wrong to intentionally bring it about that one worships x.

And here is an important point. How we do this—how we shuffle requirements between I and U—will differ from absolute prohibition to absolute prohibition. What we are doing is not a refinement of Double Effect, but a refinement of the (hopefully small) number of absolute prohibitions in our deontological theory. We do not need to have any general things to say across absolute prohibitions how we do this broadening of the intentional target.

There might even be further complexities. It could, for instance, be that we have role-specific absolute prohibitions, coming with other ways for aspects of the action to be apportioned between U and I.

Friday, November 13, 2020

Reducing Triple Effect to Double Effect

Kamm’s Principle of Triple Effect (PTE) says something like this:

  • Sometimes it is permissible to perform an act ϕ that has a good intended effect G1 and a foreseen evil effect E where E causally leads to a further good effect G2 that is not intended but is a part of one’s reasons for performing ϕ (e.g., as a defeater for the defeater provided by E).

Here is Kamm’s illustration by a case that does not have much moral significance: you throw a party in order to have a good time (G1); you foresee this will result in a mess (E); but you expect the partygoers will help you clean up (G2). You don’t throw the party in order that they help you clean up, and you don’t intend their help, but your expectation of their help is a part of your reasons for throwing the party (e.g., it defeats the mess defeater).

It looks now like PTE is essentially just the Principle of Double Effect (PDE) with a particular way of understanding the proportionality condition. Specifically, PTE is PDE with the understanding that foreseen goods that are causally downstream of foreseen evils can be legitimately used as part of the proportionality calculation.

One can, of course, have a hard-line PDE that forbids foreseen goods causally downstream of foreseen evils to be legitimately used as part of the proportionality calculation. But that hard-line PDE would be mistaken.

Suppose Alice has her leg trapped under a tree, and if you do not move the tree immediately, the leg will have to be amputated. Additionally, there is a hungry grizzly near Bob and Carl, who are unable to escape and you cannot help either of them. The bear is just hungry enough to eat one of Bob and Carl. If it does so, then because of eating that one, it won’t eat the other. The bear is heading for Bob. If you move the tree to help Alice, the bear will look in your direction, and will notice Carl while doing so, and will eat Carl instead of Bob. All three people are strangers to you.

It is reasonable to say that the fact that your rescuing Alice switches whom the bear eats does not remove your good moral reason to rescue Alice. However, if we have the hard-line PDE, then we have a problem. Your rescuing Alice leads to a good effect, Alice’s leg being saved, and an evil, Carl being eaten. As far as this goes, we don’t have proportionality: we should not save a stranger’s leg at the expense of another stranger’s life. So the hard-line PDE forbids the action. But the PDE with the softer way of understanding proportionality gives the correct answer: once we take into account the fact that the bear’s eating Carl saves Bob, proportionality is restored, and you can save Alice’s leg.

At the same time, I think it is important that the good G1 that you intend not be trivial in comparison to the evil E. If instead of its being a matter of rescuing Alice’s leg, it were a matter of picking up a penny, you shouldn’t do that (for more argument in that direction, see here).

So, if I am right, the proportionality evaluation in PDE has the following features:

  • we allow unintended goods that are causally downstream of unintended evils to count for proportionality, but

  • the triviality of the intended goods when compared to the unintended evils undercuts proportionality.

In other words, while the intended goods need not be sufficient on their own to make for proportionality, and unintended downstream goods may need to be taken into account for proportionality, nonetheless the intended goods must make a significant contribution towards proportionality.

Wednesday, November 11, 2020

Set theory and physics

Assume the correct physics has precise particle positions (similar questions can be asked in other contexts, but the particle position context is the one I will choose). And suppose we can specify a time t precisely, e.g., in terms of the duration elapsed from the beginning of physical reality, in some precisely defined unit system. Consider two particles, a and b, that exist at t. Let d be the distance between a and b at t in some precisely definable unit system.

Here’s a question that is rarely asked: Is d a real number?

This seems a silly question. How could it not be? What else could it be? A complex number?

Well, there are at least two other things that d could be without any significant change to the equations of physics.

First, d could be a hyperreal number. It could be that particle positions are more fine-grained than the reals.

Second, d could be what I am now calling a “missing number”. A missing number is something that can intuitively be defined by an English (or other meta-language) specification of an approximating “sequence”, but does not correspond to a real number in set theory. For instance, we could suppose for simplicity that d lies between 0 and 1 and imagine a physical measurement procedure that can determine the nth binary digit of d. Then we would have an English predicate Md(n) which is true just in case that procedure determined the nth binary digit to be 1. But it could turn out that in set theory there is no set whose members are the natural numbers n such that Md(n). For the axioms of set theory only guarantee the existence of a set defined using the predicates of set theory, while Md is not a predicate of set theory. The idea of such “missing numbers” is coherent, at least if our set theory is coherent.
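The approximation procedure can be sketched in code. A minimal sketch, with a toy stand-in predicate in place of Md (any meta-language predicate would do); the point is that the partial binary expansions are well-defined whether or not set theory supplies a set {n : Md(n)}:

```python
def Md(n):
    """Hypothetical stand-in for the measurement predicate: digit n is 1 iff 3 divides n."""
    return n % 3 == 0

def approximation(pred, digits):
    """Partial sum of the binary expansion whose nth digit is 1 iff pred(n)."""
    return sum(2.0 ** (-n) for n in range(1, digits + 1) if pred(n))

# For this toy predicate the approximations converge to 1/7; for a genuinely
# non-set-theoretic predicate, nothing guarantees that the set-theoretic
# universe contains a real number that the approximations converge to.
print(approximation(Md, 50))
```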

It seems reasonable to say that d is indeed a real number, and to say similar things about any other quantities that can be similarly physically specified. But what guarantees such a match between set theory and physics? I see four options:

  1. Luck: it’s just a coincidence.

  2. Our set theory governs physics.

  3. Physics governs our set theory.

  4. There is a common governor to our set theory and physics.

Option 1 is an unhappy one. Option 4 might be a Cartesian God who freely chooses both mathematics and physics.

Option 2 is interesting. On this story, there is a Platonically true set theory, and then the laws of physics make reference to it. So it’s then a law of physics that distances (say) always correspond to real numbers in the Platonically true set theory.

Option 3 comes in at least two versions. First, one could have an Aristotelian story on which mathematics, including some version of set theory, is an abstraction from the physical world, and any predicates that we can define physically are going to be usable for defining sets. So, physics makes sets. Second, one could have a Platonic multiverse of universes of sets: there are infinitely many universes of sets, and we simply choose to work within those that match our physics. On this view, physics doesn’t make sets, but it chooses between the universes of sets.

Monday, November 9, 2020

The Math Tea argument

The Math Tea argument is an argument that there are real numbers that can’t be defined. The idea is this: there are only countably many definitions of real numbers (e.g., π^e or "The middle root of the polynomial x^3 − 5x^2 + 2x + 4"), and uncountably many real numbers, so there are real numbers that have no definitions.

Elegant as this argument is, it has crucial set-theoretic flaws. For instance, there is no guarantee that there is a set of all the definable real numbers. The axioms of set theory tell us that for any predicate F in the language of set theory there is a set of all the numbers that satisfy F. But the predicate "is definable" is in English, not in set theory.

We can, however, argue for the following weaker claim. Assume set theory is true. Then either:

  1. There is a real number that cannot be defined in the language of set theory, or

  2. "A real number is missing": there is an English language formula F(n) whose only semantic predicate is set-theoretic satisfaction such that there is no real number x whose nth digit after the decimal point is 1 if F(n) and is 0 if not F(n).

Here is the argument. A formula of set theory defines a real number if it has exactly one free variable and is satisfied by precisely one real number. Say that F(n) if and only if the nth formula of set theory (in lexicographic ordering) defining a real number defines a real number that does not have a 1 in the nth place after the decimal point. The only semantic predicate in F(n) is set-theoretic satisfaction. Suppose (2) is false. Then there is a real number x whose nth digit after the decimal point is 1 if F(n) and is 0 if not F(n). If x can be defined in the language of set theory by a formula ϕ, then suppose ϕ is the nth real-number-defining formula. Then F(n) if and only if x does not have a 1 in the nth place. But x has a 1 in the nth place if and only if F(n). Contradiction! So, x cannot be defined, and hence (1) is true.
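The diagonal step here is the classic Cantor construction, and it can be illustrated on a finite toy enumeration (the four digit sequences below are arbitrary stand-ins for the reals defined by the enumerated formulas):

```python
# Each row stands in for the digit sequence of the real number defined by
# the nth real-number-defining formula.
listed = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]

# F(n) holds iff the nth listed number lacks a 1 in the nth place; the number x
# built from F flips the diagonal, so it differs from every listed number in
# at least one place and hence appears nowhere on the list.
x = [1 - listed[n][n] for n in range(len(listed))]

assert all(x != row for row in listed)
print(x)
```

In the real argument the "list" is the lexicographic enumeration of real-number-defining formulas, and the conclusion is that x, if it exists, is undefinable in the language of set theory.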

Logically speaking, if ZF is consistent, ZFC is consistent both with (1) (this follows by letting the digits of x be defined by the set of all set-theoretic truths and noting that if ZF is consistent, we can consistently suppose there is a set of all set-theoretic truths, but that set of course cannot be defined) and with the denial of (1).

But philosophically speaking, we might reasonably say that (2) would imply that "there aren’t enough real numbers", which sounds wrong, so it seems more reasonable to accept (1) instead.

Restricted epistemic mysterianism

There are two forms of mysterianism about X (say, consciousness):

  1. Conceptual: It would not be possible for us to even conceptualize the true theory of X.

  2. Epistemic: It would not be possible for us to know the true theory of X.

Conceptual mysterianism about X entails epistemic mysterianism about X. In the case of typical Xs, like consciousness or intentionality or morality, epistemic mysterianism entails conceptual mysterianism. For if we could conceptualize the true theory of X, then God could reveal to us that that theory is true. (I restricted to “typical Xs”, for there are some truths that we could not know but which we could conceptualize. For instance, that the past existence of life on Mars is a reality unknown to me is something I can conceptualize, but I can’t possibly know it.)

However, one can weaken epistemic mysterianism to:

  3. Restricted Epistemic: It would not be possible for us to know the true theory of X merely by human epistemic resources.

Consider the following interesting conditional:

  4. If physicalism is true about consciousness, then restricted epistemic mysterianism is true about it.

Here is an argument against (4). Imagine that we find new physics in the brains of precisely those organisms that it is plausible to think of as conscious (maybe cephalopods and higher vertebrates). For instance, maybe there is a new particle type that is found only in those brains, or perhaps some already known particle type behaves differently in those brains. Moreover, there is a close correlation between the behavior of the new physics and plausible things to say about consciousness in these critters. And when we make a sophisticated enough AI, surprisingly that new physics also shows up in it. Given this, it would be reasonable to identify consciousness with the behavior of that new physics.

But I think the following is true:

  5. If physicalism is true about consciousness and there is no new physics in the brains of conscious beings, then restricted epistemic mysterianism is true.

Here’s why. Assume physicalism. Some degree of multiple realizability of consciousness is true since cephalopods and mammals are both conscious, even though our brains are quite different—assuming the “new physics in brains” hypothesis is false (if it were true, the structural differences between cephalopod and mammal brains could be relevantly outbalanced by the similarities with respect to the “new physics”). Multiple realizability requires that consciousness be abstracted to some degree from the particular details of its embodiment in us. But there is no way of knowing how far it is to be abstracted. And without knowing that, we won’t know the true theory of consciousness.

If this is right, the true view of mind must be found among these three:

  • non-physicalism

  • restricted epistemic mysterianism (with or without conceptual mysterianism)

  • new physics.

On each of them, mind is mysterious. :-)

Logically complex intentions

In a paper that was very important to me when I wrote it, I argue that the Principle of Double Effect should deal with accomplishment rather than intention. In particular, I consider cases of logically complex intentions: “I am a peanut farmer and I hate people with severe peanut allergies…. I secretly slip peanuts into Jones’ food in order that she should die if she has a severe peanut allergy. I do not intend Jones’ death—I only intend the logically complex state of Jones dying if she has a severe peanut allergy.” I then say that what is wrong with this action is that if Jones has an allergy, then I have accomplished her death, though I did not intend her death. What was wrong with my action is that my plan of action was open to a possibility that included my accomplishing her death.

But now consider a different case. A killer robot is loose in the building and all the doors are locked. The robot will stop precisely when it kills someone: it has a gun with practically unlimited ammunition and a kill detector that turns it off when it kills someone. It’s heading for Bob’s office, and Alice bravely runs in front of it to save his life. And my intuition is that Alice did not commit suicide. Yet it seems that Alice intended her death as a means to saving Bob’s life.

But perhaps it is not right to say that Alice intended her death at all. Instead, it seems plausible that Alice’s intention is:

  1. If the robot will kill someone, it will kill Alice.

An additional reason to think that (1) is a better interpretation of Alice’s intentions than just her unconditionally intending to die is that if the robot breaks down before killing Alice, we wouldn’t say that Alice’s action failed. Rather, we would say that it was made moot.

But according to what I say in the accomplishment paper, if in fact the robot does not break down, then Alice accomplishes her own death. And that’s wrong. (I take it that suicide is wrong.)

Perhaps what we want to say is this. In conditional intention cases, when one intends:

  2. If p, then q

and p happens and one’s action is successful, then what one has contrastively accomplished is:

  3. its being the case that p and q rather than p and not q.

To contrastively accomplish A rather than B is not the same as to accomplish A simply. And there is nothing evil about contrastively accomplishing its being the case that the robot kills someone and kills Alice rather than the robot killing someone and not killing Alice. On the other hand, if we apply this analysis to the peanut allergy case, what the crazy peanut farmer contrastively accomplishes is:

  4. Jones having a peanut allergy and dying rather than having a peanut allergy and not dying.

And this is an evil thing to contrastively accomplish. Roughly, it is evil to accomplish A rather than B just in case A is not insignificantly more evil than B.
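The rough criterion just stated can be given a toy formalization (entirely my own hypothetical sketch, with illustrative numeric "evil" weights that are not from the post):

```python
# Hypothetical sketch of the criterion: accomplishing A rather than B is
# evil just in case A is "not insignificantly" more evil than B, modeled
# here by a numeric threshold. The weights below are placeholders, not a
# serious axiology.

def wrong_to_contrastively_accomplish(evil_A, evil_B, threshold=0.1):
    """True iff A is more evil than B by more than the threshold."""
    return evil_A - evil_B > threshold

# Robot case: (someone is killed and it is Alice) vs (someone is killed
# and it is not Alice): comparable evils, so not wrong on this test.
assert not wrong_to_contrastively_accomplish(1.0, 1.0)

# Peanut case: (allergy and death) vs (allergy and no death): the first
# is much more evil, so wrong on this test.
assert wrong_to_contrastively_accomplish(2.0, 0.5)
```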

But what about a variant case? The robot is so programmed that it stops as soon as someone in the building dies. The robot is heading for Bob and it’s too late for Alice to jump in front of it. So instead Alice shoots herself. Can’t we say that she shot herself rather than have Bob die, and that the contrastive accomplishment of her death rather than Bob’s is laudable? I don’t think so. For her contrastive accomplishment was accomplished by simply accomplishing her death, which, while in a sense brave, was a suicide and hence wrong.

A difficult but important task someone should do: Work out the logic of accomplishment and contrastive accomplishment for logically complex intentions.

Friday, November 6, 2020

Conditional and unconditional desires, God's will, and salvation

Consider three cases:

  1. Bob doesn’t care either way whether Alice wants to go out with him. And he wants to go out with Alice if she wants to go out with him.

  2. Carl wants Alice’s desires to be fulfilled. And he wants to go out with Alice.

  3. Dave doesn’t care either way whether Alice wants to go out with him. And he wants to go out with Alice even if she doesn’t want to go out with him.

As dating partners, Dave is a creep, Bob is uncomplimentarily lukewarm and Carl seems the best.

Here’s how we could characterize Dave’s and Bob’s desires with respect to going out with Alice:

  • Bob’s desire is conditional.

  • Dave’s desire is unconditional.

What about Carl’s desire? I think it’s neither conditional nor unconditional. It is what we might call a simple desire.

The three desires interact differently with evidence about Alice’s lack of interest. Bob’s conditional desire leads him to give up on dating Alice. Dave’s creepy desire is unchanged. And Carl, on the other hand, comes to hope that Alice is interested notwithstanding the evidence to the contrary, and is motivated to act (perhaps moderately, perhaps excessively) to try to persuade Alice to want to go out with him.

One might ask regarding Carl what happens if he definitively learns that his two desires, to go out with Alice and to have Alice want to go out with him, cannot both be fulfilled. Then, as far as the desires go, he could go either way: he could become a creep or he could resign himself. Resignation is obviously the right attitude. Note, however, that while resignation requires him to give up on going out with Alice, it need not require him to give up on desiring to go out with Alice (though if that desire lasts too long after learning that Alice has no interest, it is apt to screw up Carl’s life).

Now, it seems a pious thing to align one’s desires with God’s in all things. One “thing” is one’s salvation. One could have three attitudes analogous to the attitudes towards dating Alice:

  1. Conditional: Barbara desires to be saved if God wills it. But she doesn’t care either way whether God wills it.

  2. Simple: Charlotte desires to be saved. She desires that God’s will be done, and hopes and prays that God wills her salvation.

  3. Unconditional: Diana desires to be saved even if God doesn’t will it. She doesn’t care whether God wills it.

Barbara’s attitude is lukewarm and shows a lack of love of God, since she doesn’t simply want to be with God. Diana is harder to condemn than Dave, but nonetheless her attitude is flawed. Charlotte has the right attitude.

So, when we say we should align our desires with God’s in all things, that doesn’t seem to mean that all our desires should be conditional. It means, I think, being like Charlotte: having a simple desire for the good, together with a desire that God’s will be done and a hope that the two align.

And there is one further distinction to be made, between God’s antecedent and God’s consequent will. The classic illustration is this: When Scripture says that God wills all people to be saved (1 Tim. 2:4), that’s God’s antecedent will. It’s what God wants independently of other considerations. But because of the inextricable intertwining of God’s love and God’s justice (indeed, God’s love is his justice), God also antecedently wants that those who reject him be apart from him. Putting these antecedent desires of God’s together, God has a consequent desire to damn some, namely those who reject God.

I think what I said about Barbara, Charlotte and Diana clearly applies to God’s consequent will. But it’s less clear regarding God’s antecedent will. Necessarily, God antecedently wills all and only the goods. It seems not unreasonable to desire salvation only conditionally on its being a good thing, and hence to desire it only conditionally on its being antecedently willed by God. But I think Charlotte’s approach is also defensible. Charlotte desires to be with God for eternity and desires that being with God is a good thing.

Thursday, November 5, 2020

Is there a set of all set-theoretic truths?

Is there a set of all set-theoretic truths? This would be the set of sentences (in some encoding scheme, such as Goedel numbers) in the language of set theory that are true.

There is a serious epistemic possibility of a negative answer. If ZF is consistent, then there is a model M of ZFC such that every object in M is definable, i.e., for every object a of M, there is a defining formula ϕ(x) that is satisfied by a and by a alone in M (and if there is a transitive model of ZF, then M can be taken to be transitive). In such a model, it follows from Tarski’s Indefinability of Truth that there is no set of all set-theoretic truths. For if there were such a set, then that set would be definable, and we could use the definition of that set to define truth. So, if ZF is consistent, there is a model M of ZFC that does not contain a set of all the truths in M.

Interestingly, however, there is also a serious epistemic possibility of a positive answer. If ZF is consistent, then there is a model M of ZFC that does contain a set of all the truths in M. Here is a proof. If ZF is consistent, so is ZFC. Let ZFCT be a theory whose language is the language of set theory with an extra constant T, and whose axioms are the axioms of ZFC with the schemas of Separation and Replacement restricted to formulas of ZFC (i.e., formulas not using T), plus the axiom:

  1. ∀x(x ∈ T → S(x))

where S(x) is a sentence saying that x is the code for a sentence (this is a syntactic matter, so it can be specified explicitly), and the axiom schema that has for every sentence ϕ with code n:

  2. ϕ ↔ n ∈ T.

Any finite collection of the axioms of ZFCT is consistent. For let M be a model of ZFC (if ZF is consistent, so is ZFC, so it has a model). Then all the axioms of ZFC will be satisfied in M. Furthermore, for any finite subset of the additional axioms of ZFCT, there is an interpretation of the constant T under which those axioms are true. To see this, suppose that our finite subset contains (1) (no harm throwing that in if it’s not there) and the instances ϕi ↔ ni ∈ T of (2) for i = 1, ..., m. It is provable from ZF and hence true in M that there is a set t such that x ∈ t if and only if x = n1 and ϕ1, or x = n2 and ϕ2, …, or x = nm and ϕm.

Moreover, any such set can be proved in ZF to satisfy:

  3. ∀x(x ∈ t → S(x)).

Interpreting T to be that set t in M will make the finite subset of the additional axioms true.
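The way the interpretation of T is constructed in this finite-satisfiability step can be mirrored by a small Python sketch (a hypothetical illustration with made-up sentence codes, not part of the proof):

```python
# Toy model of the finite-satisfiability step: finitely many sentence
# codes n_i paired with their truth values phi_i. Interpret T as the set
# t = {n_i : phi_i holds}.

def interpret_T(instances):
    """instances: list of (sentence_code, truth_value) pairs."""
    return {n for (n, phi) in instances if phi}

# Made-up codes for three sentences, two of them true:
instances = [(101, True), (102, False), (103, True)]
t = interpret_T(instances)

# Every listed instance of schema (2), "phi iff n is in T", holds under t:
for n, phi in instances:
    assert phi == (n in t)

# Axiom (1) holds as well: by construction, every member of t is one of
# the sentence codes.
assert t <= {n for (n, _) in instances}
```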

So, by compactness, ZFCT has an interpretation I in some model M. In M there will be an object t such that t = I(T). That object t will be a set of all the truths in M that do not contain the constant T. Now consider the interpretation I′ of ZFC in M, which is I without any assignment of a value to the constant T (since T is not a constant of ZFC). Then ZFC will be true in M under I′. Moreover, the object t in M will be a set of all the truths in M.

So, if ZF is consistent, then there is a model of ZFC with a set of all set-theoretic truths and a model of ZFC without a set of all set-theoretic truths.

The former claim may seem to violate Tarski’s Indefinability of Truth. But it doesn’t. For that set of all truths will not itself be definable: it will exist, but there won’t be a formula of set theory that picks it out. There is nothing mathematically new in what I said above, but it is an interesting illustration of how one can come close to violating Indefinability of Truth without actually doing so.

Now, what if we take a Platonic view of the truths of set theory? Should we then say that there really is a set of all set-theoretic truths? Intuitively, I think so. Otherwise, our class of all sets is intuitively “missing” a subset of the set of all sentences. I am inclined to think that the Axioms of Separation and Replacement should be extended to include formulas of English (and other human languages), not just the formulas expressible in set-theoretic language. And the existence of the set of all set-theoretic truths follows from an application of Separation to the sentence “n is the code for a sentence of set theory that is true”.

Wednesday, November 4, 2020

Quinn, Double Effect and closeness

In a famous paper, Warren Quinn suggests replacing the distinction between intending evil and foreseeing evil in the Principle of Double Effect (PDE) with a distinction between directly and indirectly harmful action. For concreteness, let’s talk about the death of innocents. Classical PDE reasoning says that it’s wrong to intend the death of an innocent, but it is permissible to accept it as a side-effect for a proportionate reason. Quinn thinks that this has the implausible consequence that craniotomy is permissible: that it is permissible to crush the skull of a fetus to get it through the birth canal, because one is not intending the fetus’s death, but only the reduction in head size. This is a special case of the closeness problem: intending to crush the skull is too close to intending death to ground a moral distinction, yet technically one can intend the crushing without intending the death, and so Double Effect makes a moral distinction where there is none.

Quinn suggests that what is instead wrong is to intentionally cause an effect on an innocent that has the following two properties:

  1. the effect is a harm, and

  2. this harm is foreseen to result in death.

The doctor is intending to crush the fetus’s skull: that is an intended effect on the fetus. This effect is a harm, and it is foreseen to result in death. So craniotomy is ruled out. Similarly, blowing up the fat man blocking the entrance of the cave in which other spelunkers are trapped is ruled out, because even though it is possible to blow someone up without intending that they die, being blown up is a clear case of harm, and it is foreseen to lead to death.

This is clever, but I think it fails. For we can imagine that a callous doctor does not intend any effect on the fetus. All he intends is the change in arrangement of a certain set of molecules in order to facilitate their removal from the uterus. These molecules happen to be the ones that the fetus is made of. But that they make up the body of the fetus need not be relevant to the doctor’s intention. If instead there were something other than a fetus present that for health reasons needed to be removed (not at all a remote possibility: consider the body of an already deceased fetus), and the molecules there were similarly arranged, our callous doctor would take exactly the same course of action. Similarly, the spelunkers need not intend to break up the fat man’s body, but simply to disperse a cloud of molecules.

Now, we could say that the molecules constitute or even are the body of the fetus or of the fat man, and we could say that if you intend A and you know that A is or constitutes B, then you intend B. But if you say that, then you don’t need the Quinn view to get out of craniotomy. For you can then take Fitzpatrick’s solution to the problem of closeness, on which crushing the skull constitutes death, and hence the doctor intends death. In fact, though, the constitution principle is false: intention is hyperintensional, and not only does it fail to transfer along constitution lines, but we can intend the very same object under one description while not intending it under another. Anyway, the point here is that the molecule problem shows that we need some other solution to the problem of closeness to make Quinn’s story work: the Quinn solution might help with some cases, but it cannot be the solution.

Double Effect and symbolic actions

There is an intrinsic value to standing against evil. One way to do that is to intentionally act to reduce the evil. But that’s not the only way. Another way of standing against evil is to protest it even when one reasonably expects one’s protest to have no effect. When we see standing against evil as something of significant intrinsic value, then sometimes it will even make sense to stand against evil even when we foresee that doing so will unintentionally increase the evil. It can be legitimate to protest an abuse of power even if one foresees that such protest will lead to further abuses of power, such as a crackdown on the protesters. Of course, prudence is needed, and one must keep proportionality in mind: if the abuses of power inspired by the protest are likely to be much worse than the ones being protested, it is better not to protest. Another way to stand against evil is to punish it. Again, this can make sense even when one does not expect the punishment to reduce the evil (e.g., perhaps the evil is a one-off and it is unlikely that there will be any further temptations to deter people from).

Similarly, there is an intrinsic value to standing for good. A central way to stand for good is to act to increase the good. But, again, it’s not the only way. Admiring, rewarding and praising also are ways of standing for good, even when they are not expected to increase the good.

The actions that constitute standing for good or against evil but that are not intentional acts to increase the good or reduce the evil may be called symbolic. “Symbolic” is often used as a way to downplay the importance of something. That is a mistake: the symbolic can be of great importance. Moreover, “symbolic” suggests a social dimension that need not be relevant. When an atheist hikes alone in order to contemplate the goodness of nature, that is a way of standing for the good of nature that is symbolic in the above sense but not social. Moreover, “symbolic” suggests a certain arbitrariness of choice of symbol. But there need not be such. There is nothing arbitrary in virtue of which admiring a beautiful view is a way of symbolically standing for the good. We thus need to understand “symbolic” in a broad way that is compatible with great intrinsic value, that need not be social in nature, and need not involve arbitrary socially instituted representations.

If we do this, then here is a promising way to make plausible the kinds of deontological views that are tied to the Principle of Double Effect. On these views, certain fundamental evils are wrong to intentionally produce but may be tolerated as side-effects. But now things look puzzling. Suppose we can end a war either by dropping a bomb on the wicked leaders in the enemy headquarters in a busy city, a bomb that will also kill and maim thousands of innocents in the surrounding buildings, or by kidnapping and maiming the enemy leader’s innocent child. The attack on the child is wrong while the attack on the headquarters is permissible on this deontological ethics, and that may just feel wrong. But if we see symbolic standing for good and against evil as really important, the difference becomes more plausible. In intending the maiming of the child, one is standing for evil: for it is inescapable that by intending an evil one stands for it. In refusing to maim the child, one is standing against evil. But in dropping the bomb, the mere foresight of the plight of thousands of innocents does not make one stand for evil. One can still count as standing against evil by intending to kill the evildoers in the headquarters.

It is tempting to think that when standing against evil does not actually reduce evil, as in the case of the refusal to maim the child, the action is merely symbolic, and the moral weight of the obligation is low. But that is a mistake: “merely” is a poor choice of words when connected with “symbolic”. Symbolic actions can be of great import indeed.