Thursday, December 17, 2020

A multiple faculty solution to the problem of conscience

I used to be quite averse to multiplying types of normativity until I realized that in an Aristotelian framework it makes perfect sense to multiply them by their subject. Thus, I should think that 1 = 1, I should look both ways before crossing the street, and I should have a heart-rate of no more than 100. But the norms underlying these claims have different subjects: my intellect, my will and my circulatory system (or perhaps better: I as thinking, I as willing and I as circulating).

In this post I want to offer two solutions to the problem of mistaken conscience that proceed by multiplying norms. The problem of mistaken conscience is two-fold as there are two kinds of mistakes of conscience. A strong mistake is when I judge something is required when it is forbidden. A weak mistake is when I judge something is permissible when it is forbidden.

Given that I should follow my conscience, a strong mistake of conscience seems to lead to two conflicting obligations: I should ϕ, because my conscience says so, and I should refrain from ϕing, because ϕing is forbidden. Call the claim that strong mistakes of conscience lead to conflicting obligations the Dilemma Thesis. The Dilemma Thesis is perhaps somewhat implausible on its face, but can be swallowed (as Mark Murphy does). However, more seriously, the Dilemma Thesis has the unfortunate result that strong mistakes of conscience are not, as such, mistakes. For the mistake was supposed to be that I judge ϕing as required when it is forbidden. But that is only a mistake when ϕing is not required. But according to the Dilemma Thesis, it is required. So there is no mistake. (There may be a mistake about why it is required, and perhaps one can use that to defuse the problem, but I want to try something else in this post.) Moreover, a view that embraces the Dilemma Thesis needs to explain the blame asymmetry between the obligation to ϕ and the obligation not to ϕ: I am to blame if I go against conscience, but not if I follow conscience.

Weak mistakes are less of a problem, but they still raise the puzzle of why I am not blameworthy if I do what is forbidden when conscience says it’s permissible.

Moving towards a solution, or actually a pair of solutions, start with this thought. When I follow a mistaken conscience, my will does nothing wrong, but my practical intellect has made a mistake. In other words, we have two sets of norms: norms of practical intellect and norms of will. In these cases I judged badly but willed well. And it is clear why I am not blameworthy: for I become blameworthy by virtue of a fault of the will, not a fault of the intellect.

But there is still a problem analogous to the problem with the Dilemma Thesis. For it seems that:

  1. In a mistake of conscience, my judgment was bad because it made a false claim as to what I should will.

In the case of a strong mistake, say, I judged that I should will my ϕing whereas in fact I should have nilled my ϕing. But I can’t say that and say that the will did what it should in ϕing.

This means that if we are to say that the will did nothing wrong and the problem was with the intellect, we need to reject (1). There are two ways of doing this, leading to different solutions to the problem of conscience.

Claim (1) is based on two claims about practical judgment:

  2. The practical intellect’s judgments are truth claims.

  3. These truth claims are claims about what I should will.

We can get out of (1) by denying (2) (with (3) then becoming moot) or by holding on to (2) but rejecting (3).

Anscombe denies (2), for reasons having nothing to do with mistakes of conscience. There is good precedent for denying (2), then.

I find the solution that denies (2) a bit murky, but I can kind of see how one would go about it. Oversimplifying, the intellect presents actions to the will on balance positively or negatively. This presentation does not make a truth claim. The polarity of the presentation by the intellect to the will should not be seen as a judgment that an action has a certain character, but simply as a certain way of presenting the judgment—with propathy or antipathy, one might say. Nonetheless there are norms of presentation built into the nature of the practical intellect. These norms are not truth norms, like the norms of the theoretical intellect, but are more like the norms of the functioning of the body’s thermal regulation system, which should warm up the body in some circumstances and cool it down in others, but does not make truth claims. There are actions that should be positively presented and actions that should be negatively presented. We can say that the actions that should be positively presented are right, but the practical intellect’s positive presentation of an action is not a presentation that the action is right, for that would be an odd circularity: to present ϕing positively would be to present ϕing as something that should be presented positively.

(In reality, the “on balance” positive and negative presentations typically have a thick richness to them, a richness corresponding “in flavor” to words like “courageous”, “pleasant”, etc. However, we need to be careful on this view not to think of the presentation corresponding “in flavor” to these words as constituting a truth claim that a certain concept applies. I am somewhat dubious whether this can all be worked out satisfactorily, and so I worry that the no-truth-claim picture of the practical intellect falls afoul of the thickness of the practical intellect’s deliverances.)

There is a second solution which, pace Anscombe, holds on to the idea that the practical intellect’s judgments are truth claims, but denies that they are claims about what I should will. Here is one way to develop this solution. There are times when an animal’s subsystem is functioning properly but it would be better if it did something else. For instance, when we are sick, our thermal regulation system raises our temperature in order to kill invading bacteria or viruses. But sometimes the best medical judgment will be that we will on the whole be better off not raising the temperature given a particular kind of invader, in which case we take fever-reducing medication. We have two norms here: a local norm of the thermal regulation system and a holistic norm of the organism.

Similarly, there are local norms of the will—to will what the intellect presents to it overall in a positive light, say. And there are local norms of the intellect—to present the truth or maybe that which the evidence points to as true. But there are holistic norms of the acting person (to borrow Wojtyla’s useful phrase), such as not to kill innocents. The practical intellect discerns these holistic norms, and presents them to the will. The intellect can err in its discernment. The will can fail to follow the intellect’s discernment.

The second solution is rather profligate with norms, having three different kinds of norms: norms of the will, norms of the intellect, and norms of the acting person, who comprises at least the will, the intellect and the body.

In a strong mistake of conscience, where we judge that we should ϕ but ϕing is forbidden, and we follow conscience and ϕ, here is what happens. The will rightly follows the intellect’s presentation by willing to ϕ. The acting person, however, goes wrong by ϕing. We genuinely have a mistake of the intellect: the intellect misrepresented what the acting person should do. The acting person went wrong, and did so simpliciter. However, the will did right, and so one is not to blame. We can say that in this case, the ϕing was wrong, but the willing to ϕ was right. And we can say how the pro-ϕing norm takes priority: the norm to will one’s ϕing is a norm of the will, so naturally it is what governs the will.

In a weak mistake of conscience, where we judge that it is permissible to ϕ but it’s not, again the solution is that under the circumstances it was permissible to will to ϕ, but not permissible to ϕ.

There is, however, a puzzle in connecting this story with failed actions. Consider either kind of mistake of conscience, and suppose I will to ϕ but I fail to ϕ due to some non-moral systemic failure. Maybe I will to press a forbidden button, but it turns out I am paralyzed. In that case, it seems that the only thing I did was willing to ϕ, and so we cannot say that I did anything wrong. I think there are two ways out of this. The first is to bite the bullet and say that this is just a case where I got lucky and did nothing wrong. The second is to say that my willing to ϕ can be seen as a trying to ϕ, and it is bad as an action of the acting person but not bad as an action of the will.

Tuesday, December 15, 2020

A proof that ought implies can

Some actions are things I can do immediately: for instance, I can immediately raise my hand. Others require that I do something to enable myself to do the action: for instance, to teach in person, I have to go to the classroom, or to feed my children, I need to obtain food. So, here is a very plausible axiom of deontic logic:

  1. If I ought to do A, and A is not an action I can do immediately, then I ought to bring it about that I can immediately do A.

Now, say that I remotely can do an action provided that I can immediately do it, or I can immediately bring it about that I can immediately do it, or I can immediately bring it about that I can immediately bring it about that I can immediately do it, or ….
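The recursive “or … or … or …” in this definition is just a transitive closure of immediate enabling. Here is a minimal sketch (the action names are hypothetical and the graph model is my own gloss, not the post’s):

```python
# Toy model (hypothetical actions, my illustration): abilities are nodes,
# and an edge a -> b means that by doing a I can immediately bring it
# about that I can immediately do b.  "Remotely can" is then reachability
# from the set of immediately doable actions.

def remotely_can(immediate, enables, action):
    """Return True iff `action` is remotely doable: either immediately
    doable, or reachable through a finite chain of enabling actions."""
    frontier = set(immediate)
    seen = set()
    while frontier:
        a = frontier.pop()
        if a == action:
            return True
        if a in seen:
            continue
        seen.add(a)
        frontier |= set(enables.get(a, ()))
    return False

# Example: going to the classroom enables teaching in person.
immediate = {"walk_to_classroom"}
enables = {"walk_to_classroom": {"teach_in_person"}}
print(remotely_can(immediate, enables, "teach_in_person"))  # True
print(remotely_can(immediate, enables, "fly_unaided"))      # False
```

Premise (1) then says that an obligation to do something not in the immediate set generates an obligation to take a step along such a chain.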

It follows from (1) and a bit of reasoning that:

  2. If I ought to do A, then I remotely can do A, or I have an infinite regress of prerequisite obligations.


But plausibly:

  3. It is false that I have an infinite regress of prerequisite obligations.


So:

  4. If I ought to do A, then I remotely can do A.

Monday, December 14, 2020

God and Beauty

Here is my talk from the Cracow philosophy of religion conference in September:

Thursday, December 10, 2020

The possibility test for intentions

This test for whether one is intending some effect E of an action is often employed (e.g., by Germain Grisez) in the Double Effect literature:

  1. If it is logically possible for an action with an intention J to be fully successful even though E does not happen, then E is not included in J.

Claim (1) follows in standard modal logic (with no need for anything fancy like S5) from:

  2. If an intention J includes E, then the inclusion of E is an essential property of J.

  3. Necessarily, if an action is done with an intention that includes E and E does not occur, then the action is not fully successful.

For suppose that E is included in J. Then in every possible world where an action is done with J, the action is done with an intention that includes E (by (2)), and so in every possible world where an action is done with J, the action is not fully successful if E does not occur (by (3)). Hence, there is no possible world where an action is done with J and is fully successful even though E does not happen. Thus, we have (1).
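The derivation can be compressed into a few lines of normal modal logic (the symbols are mine: $A_J$ for “the action is done with intention $J$”, $I_E$ for “the action is done with an intention that includes $E$”, $S$ for “the action is fully successful”). Assume for contraposition that $E$ is included in $J$:

```latex
\begin{align*}
  &\Box(A_J \to I_E) && \text{by (2): inclusion of $E$ is essential to $J$}\\
  &\Box((I_E \land \lnot E) \to \lnot S) && \text{by (3)}\\
  &\Box((A_J \land \lnot E) \to \lnot S) && \text{chaining the two}\\
  &\lnot\Diamond(A_J \land S \land \lnot E) && \text{the contrapositive of (1)}
\end{align*}
```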

At the same time, (1) sounds awfully strong. Even if the possible world where the action is successful despite the lack of E requires a miracle, E is not included in J. For instance, suppose God is able to keep the soul of a human being bound to a single atom. That means that someone whose intention was to blow the man blocking the mouth of the cave literally to single atoms was not intending death, since there is a possible world where the person’s soul remains bound to a single atom, and in that world the action is clearly successful.

To deny (1), one needs to deny (2) or (3). I think the best route to denying (2) is a strong dose of semantic externalism: the content of an intention is dependent in part on things outside the individual. Perhaps on Earth the very same intention may be an intention to drink water, while on Twin-Earth the very same intention may be an intention to drink XYZ. I am sceptical of this: it seems to me that the best way to understand the water-XYZ issue is that intentions are partly grounded in facts outside the individual, and so it is a different intention on Twin-Earth than on Earth, even if it is partly grounded in the same facts in the individual.

But even if one is impressed by the water-XYZ issue, it seems one should be willing to accept the following variant on (2):

  4. If an intention J includes E and occurs at t, then in any possible world that exactly matches the actual world up to and including t, the intention J includes E at t.

The argument for (1) can now be modified to yield an argument for:

  5. If an action with an intention J occurs at t, and if there is a possible world that matches the actual world up to and including t and where the action with J is fully successful but where E does not happen, then E is not included in J.

And if one’s motivation for denying (1) is to avoid the conclusion that intending to blow the man in the mouth of the cave to single atoms does not include intending death, then (5) is just as bad. For God could miraculously keep the soul bound to a single atom without anything being any different up to and including the time of the action.

If we don’t want (1), we won’t want (5), either.

So a better bet is to deny (3). A start towards a denial of (3) would be to talk of something like “stretch goals”. It seems that an action may have a stretch goal and yet be successful even if that stretch goal is unachieved. However, the stretch goal is surely intended.

I am not sure. If the stretch goal is intended, then it seems that the right thing to say is that the action is successful but not fully successful if the stretch goal is not met.

In any case, we might grant the claim about stretch goals, and introduce the concept of an intention being perfectly satisfied, which includes the satisfaction of all stretch goals, and then replace “fully successful” with “perfectly successful” in (1) and (5). And I think this will still generate the result about blowing the fat man to atoms, because the death of the fat man—the separation of soul from body—is not a stretch goal either. (If anything, one might imagine that his survival is a stretch goal.)

All this makes me want to say that (3) really is true, and we cannot avoid the conclusion that it is possible to intend to blow the man in the mouth of the cave to single atoms without intending to kill him. But I am now inclined to think that an intention to kill is not a necessary condition for murder, and so the action could still be a murder.

Monday, December 7, 2020

Independence, spinners and infinitesimals

Say that a “spinner” is a process whose output is an angle from 0 (inclusive) to 360 (exclusive). Take as primitive a notion of uniform spinner. I don’t know how to define it. A necessary condition for uniformity is that every angle has the same probability, but this necessary condition is not sufficient.

Consider two uniform and independent spinners, generating angles X and Y. Consider a third “virtual spinner”, which generates the angle Z obtained by adding X and Y and wrapping to be in the 0 to 360 range (thus, if X = 350 and Y = 20, then Z = 10). This virtual spinner is intuitively statistically independent of each of X and Y on its own but not of both.
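Infinitesimals themselves cannot be simulated, but the independence structure here has an exact discrete analogue that can be checked by enumeration. The following sketch (my illustration: a spinner restricted to whole degrees) verifies that Z = (X + Y) mod 360 is uniform and pairwise independent of X, even though Z is determined by X and Y jointly:

```python
from itertools import product

N = 360  # discrete stand-in: spinner outcomes are whole degrees 0..359

# Count outcomes among the N*N equally likely (X, Y) pairs; Z = (X+Y) mod N.
countZ = [0] * N
countZX = {}
for x, y in product(range(N), repeat=2):
    z = (x + y) % N
    countZ[z] += 1
    countZX[(z, x)] = countZX.get((z, x), 0) + 1

# Z is uniform: each value occurs in N of the N*N pairs, so P(Z=z) = 1/N.
assert all(c == N for c in countZ)
# Pairwise independence of Z and X: P(Z=z, X=x) = 1/N^2 = P(Z=z)P(X=x)
# for every (z, x) -- exactly one y makes (x+y) mod N equal z.
assert all(countZX[(z, x)] == 1 for z in range(N) for x in range(N))
print("Z uniform; Z pairwise independent of X (and, by symmetry, of Y)")
```

By symmetry the same counts show Z is pairwise independent of Y, while (X, Y, Z) are of course jointly dependent.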

Suppose we take the intuitive statistical independence at face value. Then:

  • P(Z = 0)P(X = 0)=P(Z = X = 0)=P(Y = X = 0)=P(Y = 0)P(X = 0),

where the second equality follows from the fact that if X = 0 then Z = 0 if and only if Y = 0. Suppose now that P(X = 0) is an infinitesimal α. Then we can divide both sides by α, and we get

  • P(Z = 0)=P(Y = 0).

By the same reasoning with X and Y swapped:

  • P(Z = 0)=P(X = 0).

We conclude that

  • P(X = 0)=P(Y = 0).

We thus now have an argument for a seemingly innocent thesis:

  1. Any two independent uniform spinners have the same probability of landing at 0.

But if we accept that uniform spinners have infinitesimal probabilities of landing at a particular value, then (1) is false. For suppose that X and Y are angles from two independent uniform spinners for which (1) is true. Consider a spinner whose angle is 2Y (wrapped to the [0, 360) range). This doubled spinner is clearly uniform, and independent of X. But its probability of yielding 0 is equal to the probability of Y being 0 or 180, which is twice the probability of Y being 0, and hence twice the probability of X being 0, in violation of (1) if P(X = 0)>0.
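The doubled-spinner arithmetic can be checked exactly in the whole-degree analogue (my illustration; note the disanalogy that the discrete doubled spinner lands only on even degrees, whereas the continuous one remains uniform):

```python
from fractions import Fraction

N = 360  # discrete stand-in: a spinner landing on whole degrees 0..359
pY = {y: Fraction(1, N) for y in range(N)}  # uniform spinner Y

# Exact point probabilities of the doubled spinner 2Y (wrapped mod N).
p2Y = {}
for y, p in pY.items():
    z = (2 * y) % N
    p2Y[z] = p2Y.get(z, Fraction(0)) + p

# P(2Y = 0) = P(Y = 0) + P(Y = 180) = 2 * P(Y = 0): the doubled spinner
# is twice as likely to land exactly at 0, mirroring the argument above.
assert p2Y[0] == 2 * pY[0]
print(p2Y[0])  # 1/180
```

This is exactly the point-probability doubling that clashes with (1) once atoms get infinitesimal rather than zero probability.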

So, something has gone wrong for friends of infinitesimal probabilities. I see the following options available for them:

  2. Deny that Z = 0 has non-zero probability.

  3. Deny that Z is statistically independent of X as well as being statistically independent of Y.

I think (3) is probably the better option, though it strikes me as unintuitive. This option has an interesting consequence: we cannot independently rerandomize a spinner by giving it another spin.

The careful reader will notice that this is basically the same argument as the one here.

Wednesday, December 2, 2020

More on the side-effect harm/help asymmetry

Wright and Bengson note an apparent intuitive asymmetry in our side-effect judgments. We blame people for not avoiding bad effects, even when these bad effects are not intended, but we do not praise people for not avoiding good effects when these good effects are not intended.

I wonder if the explanation for this asymmetry isn’t this:

  1. Typical good people strive to avoid bad side-effects to others.

  2. Typical bad people don’t strive to avoid good side-effects to others.

The reason for (2) is that typical bad people are selfish rather than malevolent: their badness consists in the fact that they put themselves before others, not in their going out of their way to deprive others of goods as such. But typical good people are positively benevolent, so we have (1).

Now, given (1), if you fail to avoid a bad side-effect, that makes you worse than a typical good person. And that calls for significant castigation. But given (2), if you fail to avoid a good side-effect, that doesn’t make you better than a typical bad person. Granted, you could still be praised for being better than a very bad person, but that would be damning with faint praise. So, (1) and (2) neatly predict the asymmetry in our practices of praise and blame.

But now imagine that we lived in a more polarized society, where typical bad people were actually malevolent rather than selfish. Against that background, it would make sense to praise someone for not avoiding a good effect to another. This is similar to the way that we would not praise a 21st-century upper-class man for refraining from duelling, but we would praise a 19th-century one for the same thing. For the vice of duelling is no longer rampant like it was, and to say that someone never engages in duels is damning with faint praise. Praise is comparative, and comparisons depend on reference class.

Sometimes that reference class is the person’s past and present. And that provides cases where we would praise someone for not striving to avoid good side-effects. If out of hatred someone previously strove to avoid good effects to a particular other, and then stopped such striving, then praise would be in order.

We thus need to be careful in drawing conclusions from praise and blame practices, because these practices depend on statistical facts. If the above is right, the side-effect asymmetry may simply be due to reference class issues rather than any deeper facts about intentions, side-effects and value.

But I think there is probably a further asymmetry between praise and blame. While, as noted, we do not praise people for doing good things most people in the reference class do, we do in fact blame people for doing bad things that most people in the reference class do. While we do not praise our 21st-century contemporaries for refraining from dueling, we would have been right to castigate our 19th-century contemporaries for that vice. That “everybody is doing it” often makes praise feel nearly completely inappropriate, but it only somewhat decreases the degree of blame rather than eliminating it.

Another problem for infinitesimal probabilities

Here’s another problem with independence for friends of infinitesimal probabilities.

Let ..., X−2, X−1, X0, X1, X2, ... be an infinite sequence of independent fair coin tosses. For i = 0, 1, 2, ..., define Ei to be heads if Xi and X−1 − i are the same and tails otherwise.

Now define these three events:

  • L: X−1, X−2, ... are all heads

  • R: X0, X1, ... are all heads

  • E: E0, E1, ... are all heads.

Friends of infinitesimal probabilities insist that P(R) and P(L) are positive infinitesimals.

I now claim that E is independent of R, and the same argument will show that E is independent of L. This is because of this principle:

  1. If Y0, Y1, ... is a sequence of independent random variables, and f and g are functions such that f(Yi) and g(Yi) are independent of each other for each fixed i, then the sequences f(Y0),f(Y1),... and g(Y0),g(Y1),... are independent of each other.

But now let Yi = (Xi, X−1 − i). Then Y0, Y1, ... is a sequence of independent random variables. Let f(x, y)=x and let g(x, y) be heads if x = y and tails otherwise. Then it is easy to check that f(Yi) and g(Yi) are independent of each other for each fixed i. Thus, by (1), f(Y0),f(Y1),... and g(Y0),g(Y1),... are independent of each other. But f(Yi)=Xi and g(Yi)=Ei. So, X0, X1, ... and E0, E1, ... are independent of each other, and hence so are E and R.
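Here is a quick brute-force check (mine, not the post’s) of the hypothesis of (1) in this application: for a single fair pair Y_i = (X_i, X−1 − i), the coordinates f(Y_i) = X_i and g(Y_i) = E_i come out exactly independent:

```python
from itertools import product
from fractions import Fraction

H, T = "H", "T"
# One "pair toss" Y_i = (X_i, X_{-1-i}): four equally likely outcomes.
pairs = list(product([H, T], repeat=2))
p = Fraction(1, 4)

def f(y):
    return y[0]                      # f(Y_i) = X_i

def g(y):
    return H if y[0] == y[1] else T  # g(Y_i) = E_i

# Exact enumeration: every joint probability factors into marginals.
for a in (H, T):
    for b in (H, T):
        joint = sum(p for y in pairs if f(y) == a and g(y) == b)
        marg_f = sum(p for y in pairs if f(y) == a)
        marg_g = sum(p for y in pairs if g(y) == b)
        assert joint == marg_f * marg_g  # each side is 1/4
print("f(Y_i) and g(Y_i) are independent for each fixed i")
```

Principle (1) then lifts this per-pair independence to independence of the whole sequences, which is the only step in the argument that goes beyond finite checking.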

The same argument shows that E and L are independent.

Write AB for the conjunction of A and B and note that EL, ER and RL are the same event—namely, the event of all the coins being heads. Then:

  2. P(E)P(L)=P(EL)=P(RL)=P(R)P(L)

Since friends of positive infinitesimals insist that P(R) and P(L) are positive infinitesimals, we can divide both sides by P(L) and get P(E)=P(R). The same argument with L and R swapped shows that P(E)=P(L). So, P(L)=P(R).

But now let Xi* = Xi+1 (i.e., shift the whole sequence of tosses one step), and define L* to be the event of X−1*, X−2*, … being all heads, and R* the event of X0*, X1*, … being all heads. The exact same argument as above will show that P(L*)=P(R*). But friends of infinitesimal probabilities have to say that P(R*)>P(R) and P(L*)<P(L), and so we have a contradiction if P(L)=P(R) and P(L*)=P(R*).

I think the crucial question is whether (1) is still true in settings with infinitesimal probabilities. I don’t have a great argument for it. It is, of course, true in classical probabilistic settings.