Suppose you have inherited a heavily automated house with a DIY voice control system made by an eccentric relative who programmed various functions to be commanded by a variety of political statements, all of which you disagree with.
Thus, to open a living room window you need to say: “A donkey would make a better president than X”, where X is someone who you know would be significantly better at the job than any donkey.
You have a guest at home, and the air is getting very stuffy, and you feel a little nauseous. You utter “A donkey would make a better president than X” just to open a window. Did you lie to your guest? You knowingly said something that you knew would be taken as an assertion by any reasonable person. But, let us suppose, you intended your words solely as a command to the house.
Normally, you’d clarify to your guest, ideally before issuing the voice command, that you’re not making an assertion. And if you failed to clarify, we would likely say that you lied. So simply intending the words as a command to the house rather than an assertion to the guest may not be enough to make them a mere command rather than an assertion.
Maybe we should say this:
- You assert to Y provided that (a) you utter words that you know would be taken to be an assertion to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you fail to put reasonable effort into finding a way to clarify that you are not asserting to Y.
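To make the shape of this proposal vivid, here is a rough schematic rendering (the letters are just my shorthand, not a serious formalization):

$$\text{Assert}(S, Y, w) \;\leftrightarrow\; \underbrace{U_S(w) \wedge K_S\bigl(R_Y(w) \wedge T_Y(w)\bigr)}_{(a)} \;\wedge\; \underbrace{I_S\bigl(U_S(w)\bigr)}_{(b)} \;\wedge\; \underbrace{\neg E_S(Y)}_{(c)}$$

Here $S$ is the speaker, $Y$ the audience, and $w$ the words uttered; $U_S(w)$ says that $S$ utters $w$; $R_Y(w)$ says that a reasonable person would take $w$ as an assertion to $Y$; $T_Y(w)$ says that $Y$ would so take it; $K_S$ and $I_S$ abbreviate “$S$ knows that” and “$S$ intends that”; and $E_S(Y)$ says that $S$ put reasonable effort into finding a way to clarify to $Y$ that no assertion is being made. The account of promising below has exactly the same shape, with “promise” substituted for “assertion”.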
The conjunctive condition in (a) is a bit surprising, but I think both conjuncts need to be there. Suppose that your guest has the unreasonable belief that people typically program their home automation systems to run on political statements and rarely make political statements except to operate such systems, and hence would not take your words as an assertion. Then you don’t need to issue a clarification, even though you would be deceiving a reasonable person. Similarly, you’re not lying if you tell your home automation system “Please open the window” and your paranoid guest has the unreasonable belief that this is code for some political statement that you know to be false.
One might initially think that (c) should say that you actually failed to issue the clarification. But I think that’s not quite right. Perhaps you are feeling faint and only have strength for one sentence. You tell the home automation system to open the window, and you just don’t have the strength to clarify to your guest that you’re not making a political statement. Then I think you haven’t lied or asserted: you made a reasonable effort by thinking about how you might clarify things and finding no solution.
It’s interesting that condition (c) is rather morally loaded: it makes reference to reasonable effort.
Here is a noteworthy consequence of this loading. Something similar to what we said about asserting has to be said about promising:
- You promise to Y provided that (a) you utter words that you know would be taken to be a promise to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you fail to put reasonable effort into finding a way to clarify that you are not promising to Y.
If this is right, then the practice of promising might be dependent on a prior moral concept, namely that of reasonable effort. And if that’s right, then contract-based theories of morality are viciously circular: we cannot explain what promises are without making reference to moral concepts.