Wednesday, August 20, 2008

Deception and lying

There is good reason to think lying is always wrong. Lying is wrong on Kantian grounds: it treats the other person as a tool to one's ends rather than as an autonomous rational being, and the practice of lying would undercut itself if universalized. Lying is wrong on natural law grounds: it is clearly a perversion of the nature of assertoric speech, using speech for the opposite of its natural end of communicating truth. Lying is malevolent, except perhaps in outré cases: in lying, we act to bring it about that the other has a false belief, and it is surely intrinsically bad to have a false belief. Lying is wrong on personalist grounds: in making an assertion one solicits the other's trust, but in deliberately speaking falsely, one betrays that trust in the act of soliciting it. And lying is wrong on theological grounds: God is truth, and the Book of Revelation lists liars among the damned.

On the other hand, even those who are willing to agree that lying is always wrong are unlikely to think there is anything wrong with sticking one's hat out on a stick so that one's enemy might shoot at it while one sneaks away. It is hard enough to protect the innocent against unjust aggressors without lying (and, alas, sometimes impossible). But to do so without any deceit is nigh impossible.

But some people—even very smart people—do in fact consider lying and deceit to be the same thing. After all, in both cases, it seems, one is trying to do the same thing, namely to induce a false belief, and if so, then the malevolence argument would make deceit wrong for one of the same reasons that lying is.

I once found this very puzzling. And then a colleague gave me the beginning of an answer. In cases of deceit, one is trying to get the other to do something, rather than trying to get the other to believe something. I think this story can be filled out in a way that makes for a neat distinction between deceit and all but perhaps outré cases of lying (more on those later). On the face of it, one might argue that if I stick out my hat, my intention is to bring it about that

  1. my enemy will think I am under the hat, and will shoot, and the commotion will cover my escape.
It seems that the enemy's belief that I am under the hat is essential to the success of the plan.

But this argument is mistaken. What is essential is that the enemy should take herself to have evidence that I am under the hat. She does not have to believe that I am under the hat to shoot. She only needs to take herself to have more evidence for my being there than for my being in any other particular place. That is all that is needed to rationally justify her shooting under the hat. And her belief that she has this evidence is in fact a true belief—she indeed does have such evidence. Now, an epistemically less cautious enemy may actually form the belief that I am under the hat. But here I can apply double effect. She forms the false belief on the basis of the evidence. I intend her to have the evidence and to shoot. The evidence is sufficient to lead to her shooting. I do not have to intend her to form that false belief. I suppose things go better for me if she does, but I need only intend that

  1. my enemy will take herself to have more evidence for me to be under the hat than anywhere else, and will shoot, and the commotion will cover my escape.
(A lot of these ideas developed in conversation with the aforementioned colleague. In fact it may be that there is very little that is mine here.)

The same can be said when I lay a false trail at a crossroads while pursued by the enemy. I only intend what is needed for the accomplishment of my plan. Belief that I've taken road A, when I've taken road B, is not needed. All that's needed is that my enemy have strong evidence that I've taken road A, since having strong evidence that I've taken road A is sufficient to justify her following road A. There is no evil in her having such strong evidence. The evidence consists, after all, of a truth—the truth that there are footprints leading A-ward.

The principle of double effect can justify some cases of deception—I may foresee the other's forming a false belief, but I don't intend that belief formation, either as an end or as a means. And, typically, I don't even foresee that belief formation—I only foresee the possibility of it, since I do not know how epistemically cautious the other person will be. All that I intend is for the other person to have evidence for a false belief, and to act on that evidence.

Of course, in some cases of deceit, one is positively intending that the other have a false belief. For instance, a student plagiarist might desire not merely that her parents have evidence of her innocence, but that her parents positively believe her innocent. If she then manufactures evidence for her innocence with the intention that her parents believe her innocent, the above will be no excuse.

If this story is right, and if it is not to justify well-intentioned lies every bit as much as deceits, then there must be a crucial difference between how assertions function and how evidence functions. Assertions cannot simply be intended as yet another piece of evidence. For if they are, then in affirming a falsehood, we are not trying to induce any false belief in the other, but we are simply manufacturing misleading evidence. And, indeed, I do think assertions directly justify beliefs, in ways that are not merely evidential.

We can now go back to the reasons for believing lying to be wrong, and see if they apply to cases of deceit where one is not intending false belief but only misleading evidence. The Kantian "using" argument may not work (I used to think it would work, but I am not so clear on that). Maybe one is not circumventing the other's rationality, but only ensuring that the other act on unclear evidence. Nor is it clear that the practice of generating misleading evidence is not universalizable. Even if everybody who has good reason to deceive generates misleading evidence, there will be enough cases where non-misleading evidence is generated unconsciously that the evidence will still have some weight. Making footprints or putting a hat on a stick are not actions that have a natural end that is being circumvented here in the way in which lying circumvents the natural end of assertion. So the natural law argument against lying fails to show deception to be wrong. If the double effect considerations above are correct, the malevolence argument fails. The personalist argument also fails, because when we take something as evidence, rather than as testimony, trust in another person need not be involved. I do not trust persons to leave footprints leading to them—I have no right to feel betrayed if they leave footprints pointing in other directions. God is truth, but the cases of deceit that I have defended are not directly opposed to truth, since they do not involve an attempt to cause a false belief.

Final comment: Twice I mentioned that there could be outré cases of lying where there is no intention of causing false belief. These would be cases where one does not expect to be believed. There could, for instance, be cases where one knows that the other person is expecting one to lie, and so one says something false, in order to lead the other to true belief. I don't know if this is really a betrayal of trust since there is no trust. I don't know if people would count this as lying—it doesn't, for instance, meet the Catholic Catechism's definition of lying as a false assertion intended to deceive. But if one wishes to count this as a case of lying, it is a form of lying that may be significantly morally different from the others.


Mike Almeida said...

I'm not so sure that lying is always wrong. What if I lie to myself? I bring myself to believe what I know is a falsehood: namely, that my chances of surviving the next surgery are greater than .5. The belief is clearly unwarranted, and I might know that. But I cling to the idea that I'm in the group that is going to survive. I say, "Sure, the chances of surviving are less than .5. But my chances are higher, since I'm convinced I'm in the survivors' group." I may use all sorts of tricks to get myself to believe it. It's a very familiar response to such situations.
It might be worth noting that Plantinga considers such a response consistent with proper functioning. Indeed, he thinks that such beliefs might well override better warranted beliefs without any violation of proper functioning. So, at least some people think that we were designed to form such beliefs in such circumstances. If so, it is not likely to be wrong to form such beliefs.

Alexander R Pruss said...


Strictly speaking, self-deception is not lying, because there is no trust or communication involved, just as it would not be lying if I used neurosurgery to induce in you the belief that I am heir to the throne of England. Still, if the malevolence argument works, it rules these cases out. I am willing to bite the bullet.

Mike Almeida said...

Strictly speaking, self-deception is not lying, because there is no trust or communication involved

This seems mistaken twice over. First, I can lie to people who do not trust me. Second, I trust myself. I just know (or in most cases know) when I'm lying to myself. That does not prevent me from lying. I also know when a child is lying. That doesn't prevent him from lying either.

Alexander R Pruss said...

To make an assertion is to offer testimony, and a constitutive part of offering testimony is a solicitation of trust through offering an appearance of sincerity (this offering may be contextual).

Consider someone saying: "I hereby insincerely affirm that I am a descendant of King George III." I don't know what kind of speech act this is, but this person has not asserted that she is a descendant of GIII.

Now, there can be an appearance of sincerity directed at oneself (sincere sounding mental speech), and so maybe there can even be a kind of self-trust.

I agree that it is possible to lie while knowing one will not be believed, just as it would be possible for me to try to win a chess game against Deep Blue, while knowing I will lose.

So you may be right that it is possible to lie to oneself. I think it depends in part on how linguistic thought is, since I take lies to be essentially linguistic.

Alexander R Pruss said...

One more thought: From a Kantian point of view, self-deception seems particularly problematic.

Anonymous said...

Does it matter that in sticking out one's hat to get away in a shootout, there really is not more evidence that one is somewhere one is not? What I mean is, when you stick out your hat you offer a piece of evidence that you're somewhere you're not. That piece of evidence makes it rational for the shooter to shoot at the hat, even if he doesn't go further and form the (false) belief you're where he's shooting. He doesn't go so far as to form that belief, but he does form the belief that the evidence indicates you to be under the hat.

But if the shooter had access to certain other evidence, such as his opponent's plan to stick out the hat, then he would not shoot at the hat. In a case of deception one presents evidence selectively to get someone to act in a way he would not act were he privy to all the evidence; presumably access to all the evidence would lead him to form a true belief, whereas in deception one attempts to get him to act on the basis of evidence that supports a false belief (even if he does not form the false belief supported by the evidence available to him). Assume that it takes a judgment to evaluate available evidence to determine the conclusion this evidence suggests, even if this judgment is not followed by another that forms a belief about the truth of this conclusion. I suppose for deception to be legitimate, one would have to say that the deceiver is always leading the deceived to believe "The evidence available to me suggests X" as opposed to "The evidence suggests X," since the latter would be a false belief while the former wouldn't be.

Jarringly, this view does seem to suggest that lots of cases normally called lying and condemned as such are in fact cases of deception, and only condemnable because of the evil ends they are directed toward. A robber who points his finger through his jacket to make the bank teller think he's pointing a gun at her may only be hoping she forms the belief that the evidence available to her suggests that he in fact has a gun. (I admit this is de facto implausible, except for any robbers who read this blog. Most robbers would just as soon have the teller think they really have a gun.) But the robber might not be lying; he might only be deceiving for the sake of an evil end, the teller's handing over money not rightfully his.

Anonymous said...

Also, I may be having some trouble imagining when deception wouldn't serve just as well as lying. At the end of The Dark Knight (spoiler), Batman gets the city of Gotham to think he is responsible for some murders, because it will be for the city's benefit to act on the belief that Batman did the murders. But presumably the city could act just as well on the basis of the belief that the available evidence suggests that Batman did the murders. Batman need only deceive, not lie, to achieve his end of having the city act well.

And when I think of the usual cases commonly described as lying, I find myself seeing them as, at bottom, mere deceptions, or as cases in which deception would work just as well, i.e. cases in which the liar would be just as successful if he only wanted people to believe that the evidence available to them suggested some conclusion, and that they act appropriately.

Even if I want Jane Doe to think me kind (or rich or whatever) so that she'll love me, her belief in my kindness, which would lead to her love for me, could be successfully substituted by the belief that the evidence available to her indicates I am kind, for then she would still love me. Or perhaps not; perhaps love would kick in only if she (falsely) believed in my kindness, dictating that my act be one of lying and not of deception. I think I've confused the account and am now calling lots of lies mere deceptions.

Alexander R Pruss said...

1. I don't think the bank robber is lying.

2. That an action is a mere deceit and does not involve an intention that the other believe a falsehood does not imply that the action is permissible. While we have a conclusive reason not to lie (because lying is wrong), we have a prima facie reason not to deceive (e.g., because we have a prima facie reason to contribute to others' epistemic quests). So the robber's deceit will still be morally objectionable if she does not have a reason sufficient to override the prima facie considerations, as typically she does not.

3. I don't know exactly what it means to say that "the evidence indicates p", when "evidence" means something other than "evidence available to one". Does it mean "sum total of the evidence available to persons"? (Including angelic persons?)

4. Intentions have to have something to do with how one expects one's goal to be achieved. Now in normal cases (i.e., cases of normal interpersonal trust), testimony is appropriately taken as a reason for belief in what is testified to, rather than as evidence. In a way, testimony functions rather like simple requests. When someone makes a simple request such as "Could you please move over" or "Do you have the time?", we automatically obey, barring good reason to the contrary. Likewise, if someone tells us something, in the ordinary course of things (i.e., when we do not have reason to think the person unreliable or insincere, when we do not have evidence against the claim made), we just believe. We do not weigh evidence. We may form the additional belief that the utterance is evidence for p, but in normal cases that additional belief, if present at all, is not expressly formulated.

Thus, in the ordinary course of things, I can expect the following to happen when I tell you that p. You will believe that p, and because of your belief that p, you will act a certain way. Moreover, there is a pretty strong sense of "I can expect" here, in that by telling you p, I have invited you to accept my testimony, i.e., to believe p.

Now, imagine a liar who insincerely says p in order to get me to do A. I thus believe p, and therefore do A. Could we say that the liar did not intend me to believe p? Well, if the liar did not intend me to believe p, but intended me instead to believe that there is evidence for p and to act on this fact, then the liar has failed. For the success of an action plan requires not merely that the end should be fulfilled (my doing A), but that the end should follow from the action in the intended way. If I didn't come to believe that there is evidence for p or I didn't act on the evidence but on the belief, then the hypothetical liar has failed. But it seems deeply implausible to suppose that the liar's action plan was a failure, especially in light of the fact that I did exactly what the liar invited me to.

Mike Almeida said...

To make an assertion is to offer testimony, and a constitutive part of offering testimony is a solicitation of trust through offering an appearance of sincerity (this offering may be contextual)

Either my intuitions are way off, or I'm not understanding this. Why couldn't I come out of my home one fine morning with no one around, look to the sky, and assert, "This is the nicest Texas morning I've experienced in my life"? I'm not testifying to anyone; I'm the only one there; I'm not soliciting trust. Whose trust? I'm making an assertion that only I will hear. But certainly it is an assertion, even absent trust solicitation and absent testimony.

Alexander R Pruss said...

I am inclined to think asserting to oneself is like promising to oneself. I don't think one can really promise things to oneself. (The basic problem is that the promisee can always release the promiser from a promise. So a promise to self would not have any binding power.)

There is an act we describe as "promising to oneself", but I think it's not really a promise, in that it does not generate the kind of obligation a promise does (as you can see, I rather like the idea of characterizing speech acts by their normative consequences), if it generates any kind of obligation at all.

Here's an idea. I am now alone and whispering three things: "2+2=3. 2+2=4. 2+2=5." Are any or all of these three sentences assertions that I am making? I just whispered them in order to generate an example for this comment. I do not intend to communicate the fact that 2+2=3 or that 2+2=5 to anybody, even myself. Nor does it seem right to say that I am lying. But if I were asserting all three sentences, then two of them would be lies.

Let us suppose that I wanted to utter these three arithmetical sentences to myself, and wanted to make the middle one be an assertion. What would I have to do? I suppose I would have to do something mental, engage in some kind of intending perhaps, with regard to the middle one. But what exactly am I intending about the utterance "2+2=4" that makes it an assertion?

Maybe there is some primitive thought that, if thought along with an act of speech, makes that act of speech into an assertion. I don't have an argument ruling this out. But I prefer a more reductive theory on which what makes an act of speech an assertion is an intention that is, at least in part, further analyzable. This intention may be an intention to commit oneself to the truth of something (to stand behind its truth, as it were), or maybe an intention to communicate something. I don't know—I don't have an analysis that I am happy with, but I prefer my sketchy attempts to just taking it as primitive.

I wonder if the difference is that you do not think of "assertion" as beefily as I do. To get a bit clearer on what we mean by the word, let me ask this question. If an actor on stage says: "I can call spirits from the vasty deep" (Owen Glendower in Henry IV, Part I), is the sentence an assertion, and is the actor asserting? I am not sure about the first, but the answer to the second is negative.

Mike Almeida said...

I am inclined to think asserting to oneself is like promising to oneself. I don't think one can really promise things to oneself.

Why? Suppose I've just finished talking to my boss. I walk into the back yard alone and utter "my boss is a complete idiot".

The next day my boss asks me whether I asserted that he was an idiot. Could I truly say no? Now suppose my voice was accidentally recorded by a neighbor, who was not present when I spoke and had by chance left his portable recorder on.

Since my neighbor dislikes me, he has played the recording to my boss. When I deny that I asserted that he was an idiot, my boss plays the recording back to me: "Do you still deny that you asserted that I was an idiot?"

I think it would be very hard to say that I didn't assert that, in light of this evidence.

Alrenous said...

I'd like to take the ball off the field entirely.

it is surely intrinsically bad to have a false belief.

Then why can I think of several examples of good outcomes from false beliefs?

Alexander R Pruss said...

Because sometimes good comes from an intrinsically bad cause. :-)

Alrenous said...

But I would not expect unalloyed good to ever come from something that's intrinsically bad.

However, placebo effect.

However, people want things they shouldn't want. (Lie to them about availability.)

And so on.

Incidentally, I'll cut to the chase. Let's consider the fact that our knowledge is incomplete.

Some of these gaps in knowledge are critical, as we have found in the past when we closed some.

But by having a false belief, we can make up for this knowledge gap. ("Effectiveness is the measure of truth.") We can produce an algorithm that simulates the knowledge (to some extent) until such time as we gain the legitimate belief.

If false beliefs are intrinsically bad, how or why does this work?