Wednesday, February 23, 2011

Deviant logic

In the chapter on deviant logic in his philosophy of logic book, Quine makes the claim that:

  1. The deviant logician changes the subject rather than disagreeing with classical logic.
Thus, the logician who denies excluded middle is not using the words "or" and "not" to indicate disjunction and negation. She is, perhaps with good reason (though Quine is sceptical of that), using some other connectives. Thus, when she denies "not not p entails p", she is not disagreeing with us when we assert "not not p entails p". The basic thought running behind this is that:
  2. The rules of classical logic are grounded in the meanings of the logical connectives (using "connectives" very widely to include negation, quantifiers, etc.)
and so any departure from the rules is a change of subject.

There is a powerful kind of argument against deviant logic here. Claims (1) and (2) seem to tell us that it is not really possible to disagree with classical logic without self-contradiction. I am either using my words in the sense that they have in classical logic, in which case I had better not disagree with classical logic on pain of contradiction, or else I am using the words in a different sense and hence not disagreeing.

I now want to describe a class of apparently non-classical logics that do not change the subject. Thus, either a deviant logician doesn't always change the subject, or else these logics are not actually deviant. The idea is this. We have rules like:

  • You can infer p from p.
  • If r is a conjunction of p with q, then you can infer r from p and q, p from r and q from r (conjunction introduction and elimination).
  • If r is a negation of a negation of p, then you can infer p from r.
  • If you can infer p and a negation of p from r, and s is a negation of r, then you can infer s from r.
And so on. The interesting thing is that the second rule does not tell us that p and q have a conjunction. And indeed that is how I am imagining the system deviating from classical logic. We simply disallow certain conjunctions, negations, etc.—there will be sentences that perhaps have no negation, and pairs of sentences that perhaps have no conjunction. If we represent the language along the lines of First Order Logic, there may be cases where "A" is a sentence and "B" is a sentence but "A and B" counts as malformed. The rules disallowing combinations may take all sorts of forms. For instance, we might simply prohibit any sentences that contain a double negation. This would result in a severe intuitionist-type limitation on what can be proved.
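Such a language can be sketched mechanically. Here is a toy Python illustration (the class names and the particular no-double-negation rule are my own illustrative choices, not anything canonical): the inference rules are classical wherever they apply, but the formation check simply refuses to count a double negation as a sentence.

```python
# Toy sketch of a language whose formation rules forbid double negation:
# Neg(Neg(p)) is not malformed semantics, it is simply not a wff at all.
from dataclasses import dataclass

class Formula:
    pass

@dataclass(frozen=True)
class Atom(Formula):
    name: str

@dataclass(frozen=True)
class Neg(Formula):
    sub: Formula

@dataclass(frozen=True)
class And(Formula):
    left: Formula
    right: Formula

def is_wff(f: Formula) -> bool:
    """Well-formed iff no subformula is a negation of a negation."""
    if isinstance(f, Atom):
        return True
    if isinstance(f, Neg):
        # Formation restriction: a negation of a negation is malformed.
        if isinstance(f.sub, Neg):
            return False
        return is_wff(f.sub)
    if isinstance(f, And):
        return is_wff(f.left) and is_wff(f.right)
    return False

p = Atom("p")
print(is_wff(Neg(p)))          # True
print(is_wff(Neg(Neg(p))))     # False: no double negations exist here
print(is_wff(And(p, Neg(p))))  # True: this conjunction is still formable
```

The point of the sketch is that nothing in it touches the inference rules: double negation elimination remains correct wherever its premise exists, but for some sentences the premise cannot be formed.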

The logic, thus, has standard classical rules in an important sense. The rules are correct whenever they can be applied—whenever there are output sentences that work. The subject is not changed—"or" means or, "and" means and, and "not" means not—but it can be a substantive claim whether for a pair of sentences A and B, there is a sentence that we might wish to denote "A or B".

This restriction does not count as a change of subject. Indeed, Quine himself notes that there can be languages which are incapable of translating all the English truth-functionally and quantificationally connected sentences, and he seems to think that these languages do have connectives that mean the same thing as the English ones. In fact, English itself has restrictions on the formation of sentences. Past several levels of embedding, there just is no way to make distinctions. You probably can't express "(A or (A and not (B or (B and (C or D) and E) or F) and not A))" in English. Yet English does not have a deviant logic. It's just that English's logic is likely incomplete.

There are two ways of looking at this. One way is to say that what I have offered is a family of genuinely deviant logics that don't change the subject, and hence that Quine's argument against deviant logics fails. The other way—and it is what I prefer—is to say that what I have given is in an important sense a family of non-deviant, and even classical, logics, but one that differs from First Order Logic.

I think it could be a good thing to define the connectives in terms of valid inference (perhaps understood in terms of entailment). For instance, one might say that:

  3. A partially-defined functor C that takes a pair of sentences p and q into a new sentence C(p,q) is a conjunction if and only if you can validly infer p as well as q from C(p,q) and C(p,q) from the pair of premises p and q whenever C(p,q) is defined.
(We also need an extension to wffs.) If we do this, then excluded middle is true by definition in the following sense:
  4. Whenever p is a disjunction of q with a negation of q, then p is true.
But no claim is made that every sentence has a negation or that every pair of sentences has a disjunction. That would be a substantive claim. But whenever a sentence has a negation and can be disjoined with that negation, the result of the latter disjunction is true. That is a claim that is true by definition of "negation" and "disjunction".
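This definitional point can be checked mechanically. A toy sketch (the nested-tuple representation of formulas is my own illustrative choice): whenever a disjunction of q with a negation of q is formable at all, it comes out true on every classical valuation; whether it is formable is the separate, substantive question.

```python
# Toy check that excluded middle is true by definition in the above sense:
# if "q or not-q" exists as a sentence, it is true on every valuation.
def evaluate(f, val):
    """Classically evaluate a formula given a valuation of its atoms."""
    kind = f[0]
    if kind == "atom":
        return val[f[1]]
    if kind == "not":
        return not evaluate(f[1], val)
    if kind == "or":
        return evaluate(f[1], val) or evaluate(f[2], val)
    raise ValueError(f"unknown connective: {kind}")

q = ("atom", "q")
lem = ("or", q, ("not", q))  # a disjunction of q with a negation of q
assert all(evaluate(lem, {"q": t}) for t in (True, False))
print("true on every valuation")
```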

This also lets one stipulate into place new connectives like tonk. Tonk is a connective such that one can infer q from "p tonk q" and "p tonk q" from p. The problem with tonk is that once one has the connective, it seems one can derive anything (e.g., 1+1=2, so 1+1=2 tonk q, so q, for any q). But not quite. One can only derive everything with tonk if one adds the additional thesis that sufficiently many pairs of sentences have tonks. For instance, if we grammatically restrict tonking so that one is only allowed to tonk a sentence with itself, we can continue to have a sound logic.
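The restricted tonk can also be sketched mechanically (again a toy illustration of my own; the self-tonking restriction is implemented as a formation check): with the grammar so restricted, tonk-introduction followed by tonk-elimination only ever gives back what one started with.

```python
# Toy sketch of a grammatically restricted tonk: one may only tonk a
# sentence with itself, so the tonk rules never yield anything new.
def tonk(p, q):
    """Form 'p tonk q' -- but only self-tonking is grammatical."""
    if p != q:
        raise ValueError("malformed: may only tonk a sentence with itself")
    return ("tonk", p, q)

def tonk_intro(p):
    # From p, infer "p tonk p" (the only formable tonk containing p).
    return tonk(p, p)

def tonk_elim(t):
    # From "p tonk q", infer q.
    assert t[0] == "tonk"
    return t[2]

# The round trip is harmless: from "1+1=2" we recover only "1+1=2".
assert tonk_elim(tonk_intro("1+1=2")) == "1+1=2"
```

The triviality proof needs the premise that "1+1=2 tonk q" exists for arbitrary q, and the formation check denies exactly that premise.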

Why care about such logics? Well, they might be helpful with the Liar Paradox. They might provide a way of doing the sort of thing that Field does to resolve the Liar by invoking a deviant logic but within a logic that has all the classical rules of inference.

I think Sorensen's "The Metaphysics of Words" [PDF] is very relevant to the above.

2 comments:

  1. My first thought is that most logics distinguish between "formation rules" and "inference rules" and what you've described are logics with deviant FRs but classical IRs. That's an interesting proposal.

    I am not sure if this is how you are thinking of it though. "We need an extension to wffs" threw me off--FRs _are_ the rules for wffs.

    One question that arises is, if you wanted to pursue this route, whether you could define/restrict FRs without reference to IRs. For example, in a strict grammatical sense, there is nothing wrong with "Colorless green ideas sleep furiously" but the sentence just means nothing; if this is the sort of thing we'd want to call improperly formed then our idea of "formation" is dependent on the semantics/IRs.

  2. Talking of deviant FRs is a nice way to put it. Thank you. As I've said in one of my recent posts, I am an amateur in respect of logic.

    By "extension to wffs", I meant "extension to those wffs that are not sentences", i.e., to open wffs.

    One could restrict FRs purely syntactically. For instance, suppose you prohibit double negation. Then you can't prove excluded middle (assuming that commitment to excluded middle is built into the IRs only in the rule going from ~~p to p). That's pretty radical. Or one might do it more limitedly. Field has a family of stories on which you get out of the Liar by denying excluded middle for sentences that have "true" in them. Well, like all logics with deviant IRs, this runs into the Quine dilemma: either they're incoherent or they're changing the subject. But we can restrict the FRs by forbidding double negation for sentences that have "true" in them, and the problem disappears.
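A toy sketch of that last, more limited restriction (the bare string test for "true" is purely illustrative, of course; a real treatment would track the truth predicate properly):

```python
# Toy sketch: forbid double negation only for semantically loaded
# sentences, here crudely modeled as those containing the word "true".
def can_doubly_negate(sentence: str) -> bool:
    """May a double negation of this sentence be formed?"""
    return "true" not in sentence

print(can_doubly_negate("snow is white"))              # True
print(can_doubly_negate("this sentence is not true"))  # False
```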

    I don't find this syntactic proceeding particularly appealing as a philosopher, except that as a mathematician I am kind of intrigued.

    My own deep suspicion, as articulated in various ways in our various past discussions, is that for natural language the FRs are semantic through and through, or at least proof-theoretic. Think, for instance, of metaphorical language. "I am one of Fr. Timothy's sheep" is true (and I say: true simpliciter, not "merely metaphorically true"--I don't think there is a coherent concept of "merely metaphorically true" that doesn't threaten to engulf just about all of natural language), and in normal contexts "I am a Tuabire" makes sense and is false, but "I am one of Fr. Timothy's sheep and I am a Tuabire" makes no sense, unless the metaphor of sheephood has been extended in such wise as to convey something by this particular breed (perhaps the metaphor naturally extends in the case of some breeds; "I am one of Fr. Timothy's sheep and I am a Merino" might be meaningful but would be false, since in this context being a Merino would naturally mean a luxuriance of agape).

    A standard view would then be: Fine, but we can avoid this mess by moving to a formal language. But I am coming to suspect this: A formal language that avoids this mess does so by paying one or more of these costs:
    (a) incoherent logic (i.e., every sentence is a theorem)
    (b) denial of some classical IR and hence falling afoul of Quine's dilemma
    (c) loss of expressive power.

    That (c) is involved in the move to formal languages is not deeply controversial. After all, the loss of poetry and metaphor is clearly a very serious loss of expressive power. (Plus, particular formal languages lose a lot of other things. Take the Geach problems about formalizing "is a good basketball player" and "is a good person". But some of these problems can be handled by additional grammatical moves, as in my post on metaphysically Aristotelian logic.) But for the hard-headed philosopher of language what is more pressing (though I don't really think it should be more pressing) is the loss of an all-encompassing truth predicate (and ditto for a number of other semantic predicates).
