Sentence tokens come in many types, such as stupid sentence tokens, true sentence tokens, sentence tokens written in green ink, tokens of "Snow is white", tokens of "Snow is white" written in a serif font at 4pt or smaller, etc. Most of these types of sentence tokens do not qualify as "sentence types". In fact, of the types just listed, the only sentence type is the one whose instances are tokens of "Snow is white". Types of sentence tokens are abstractions from sentence tokens. But there are different kinds and levels of abstraction, and so not all types of sentence tokens count as "sentence types".
I will argue that the notion of a sentence type is to a large extent merely pragmatic. We consider the following to each be a token of the same sentence type:
- Snow is white.
- Snow is white.
- Snow is white.
- *Snow is white.*
Say that a difference between the appearances (visual or auditory) of tokens that does not make for a difference in sentence type is a "merely notational difference". The way logicians think of language, then, is roughly this. First, we abstract away merely notational differences; the result of this abstraction is sentence types. Then we can do logic with sentence types, and doing logic with sentence types helps us perform further abstractions. Thus, Lewis abstracts from differences that do not affect which worlds verify the sentence, and the result is his unstructured propositions (which he, in turn, identifies with sets of worlds). Or we might abstract from differences that do not affect meaning, and get propositions. (This simplifies by assuming there are no indexicals.)
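To make the picture concrete, here is a toy sketch of the two abstraction steps. The normalization rule, the stock of worlds, and the verification table are all invented for the example.

```python
# A toy sketch of the two abstraction steps. Everything here is
# illustrative: the normalization rule, the worlds, and the
# verification table are made up for the example.

def sentence_type(token: str) -> str:
    """Step 1: abstract away merely notational differences.

    In this toy language the only notational differences are
    capitalization and spacing.
    """
    return ' '.join(token.split()).lower()

# Step 2 (Lewis-style): abstract away differences that do not affect
# which worlds verify the sentence; a proposition is a set of worlds.
VERIFYING_WORLDS = {
    'snow is white.': frozenset({'w1', 'w2'}),
    'snow is not white.': frozenset({'w3'}),
}

def proposition(token: str) -> frozenset:
    return VERIFYING_WORLDS[sentence_type(token)]

# Merely notational variants collapse into one sentence type...
assert sentence_type('Snow  is white.') == sentence_type('SNOW IS WHITE.')
# ...and sentence types collapse further into unstructured propositions.
assert proposition('Snow is white.') == frozenset({'w1', 'w2'})
```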
But one could do things differently. For instance, we could say that differences in typeface are not merely notational differences, but in fact make for a different sentence type. Our logic would then need to be modified: in addition to rules like conjunction-introduction and universal-elimination, we would need rules like italic-introduction and bold-elimination. But these rules do not contribute in an "interesting way" to the mathematical structures involved. (John Bell once read to me a referee's report on a paper of his. As I remember it, it was something like this: "The results are correct and interesting. Publish." There are two criteria for good work in mathematics: it must, of course, be correct, but it must also be interesting.) Moreover, there will be a lot of these rules, and they will be fairly complicated, because we will need a specification of what counts as a difference between two symbol types (say, "b" and "d") and what counts as a difference between the same symbol type in different fonts. Depending on how finely we individuate typefaces (two printouts of the same document on the same printer never look exactly alike), this task may involve specifying a text recognition algorithm. This is tough stuff. So there is good pragmatic reason to sweep all of this under the logician's carpet as merely notational differences.
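For concreteness, a fragment of such a logic might look like the following sketch, where the representation and the rule names are invented for illustration:

```python
# A sketch, on the assumption that typeface is type-constituting.
# The representation and the rule names are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sentence:
    text: str
    face: str  # "roman", "italic", or "bold"

def italic_intro(s: Sentence) -> Sentence:
    """From a roman token of a sentence, infer its italic counterpart."""
    assert s.face == "roman"
    return Sentence(s.text, "italic")

def bold_elim(s: Sentence) -> Sentence:
    """From a bold token of a sentence, infer its roman counterpart."""
    assert s.face == "bold"
    return Sentence(s.text, "roman")

# On this regimentation these are two distinct sentence types, and
# getting from one to the other takes an explicit inference rule:
roman = Sentence("Snow is white.", "roman")
assert italic_intro(roman) == Sentence("Snow is white.", "italic")
```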
Or one could go in a different direction. One could, for instance, count the differences between sentences (or, more generally, wffs) that are tautologically equivalent as merely notational differences. Then "P or Q" and "Q or P or Q" will be the same sentence type. Why not do that? One might respond: "Well, it's possible to believe that P or Q without believing that Q or P or Q. So we had better not think of the differences as merely notational." However, imagine Pierre. He has heard me say that London is pretty and heard the Queen say that London is ugly. But he has failed to recognize, behind the difference in accents, that my token of "London" and the Queen's token of it both name the same city. If we were to express Pierre's beliefs, it would be natural to say "Pierre believes that [switch to Pruss's accent] London [switch back] is pretty and that [switch to Her Majesty's accent] London [switch back] is ugly." So the belief argument against identifying "P or Q" with "Q or P or Q" pushes one in the direction of the previous road, that of differentiating very finely.
On this approach, propositional logic becomes really easy. You just need conjunction-introduction and disjunction-introduction. For suppose a conclusion C follows tautologically from some premises. Conjoin the premises into a single sentence A by conjunction-introduction, and then infer "A or C" by disjunction-introduction. Since C follows tautologically from A, "A or C" is tautologically equivalent to C, and hence just is the sentence type C.
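Checking tautological equivalence is mechanical: compare truth tables. Here is a minimal sketch, with sentences represented, purely for illustration, as Python functions of their atomic letters; the second assertion runs the two-rule derivation on modus ponens.

```python
# A sketch of the mechanical check behind tautological equivalence:
# enumerate every truth-value assignment.

from itertools import product

def equivalent(f, g, num_atoms: int) -> bool:
    """True iff f and g agree on every assignment to the atoms."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=num_atoms))

# "P or Q" and "Q or P or Q" come out as the very same sentence type:
assert equivalent(lambda p, q: p or q,
                  lambda p, q: q or p or q, 2)

# The two-rule derivation above, for modus ponens: with premises P and
# "P implies Q", A is "P and (P implies Q)", and "A or Q" just is Q.
assert equivalent(lambda p, q: (p and ((not p) or q)) or q,
                  lambda p, q: q, 2)
```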
Or one could do the following: Consider tokens of "now" to be of different word types (the comments on the arbitrariness of sentence types apply to word types) when they are uttered at different times. Then, tokens of "now" are no longer indexicals. Doing it this way, we remove all indexicality from our language. Which is nice!
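As a sketch of this regimentation, with a labelling scheme invented for illustration: each utterance of "now" gets its own word type, labelled by its utterance time, and that type then simply names the time rather than functioning as an indexical.

```python
# A sketch, assuming word types for "now" are individuated by
# utterance time; the labelling scheme is invented for illustration.

from datetime import datetime

def now_word_type(utterance_time: datetime) -> str:
    # Each utterance of "now" is its own word type, labelled by its
    # time; that type then simply names the time, non-indexically.
    return f'now@{utterance_time.isoformat()}'

# Two utterances a minute apart are tokens of two different word types:
assert (now_word_type(datetime(2010, 3, 1, 12, 0))
        != now_word_type(datetime(2010, 3, 1, 12, 1)))
```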
Or one can consider "minor variations". For instance, logic textbooks often give neither parenthesis introduction and elimination rules nor rules for handling spaces in sentences. As a result, a good deal of the handling of parentheses and spaces is left for merely notational equivalence to take care of. It is easy to vary how one handles a language in these respects.
There does not seem to be any objective answer, for any language, as to where exactly merely notational differences leave off. There do seem to be some non-pragmatic lines one can draw. We do not want sentence types to be so broad that two tokens of the same non-paradoxical and non-indexical type can have different truth values. Nor, perhaps, do we want to count sentence tokens as being of the same type merely because they are broadly logically equivalent, when the equivalence cannot be proved algorithmically. (Problem: Can the equivalences between tokens in different fonts and accents be proved algorithmically? Can one even in principle have a perfect text scanning and speech recognition algorithm?) But even if we put in these constraints, a lot of flexibility remains. We could identify all tautologously equivalent sentences as of the same type. We could even identify all first order equivalent sentences as of the same type.
Here is a different way of seeing the issue, developed from an idea emailed to me by Heath White. A standard way of making a computer language compiler is to split the task up into two stages. The first stage is a "lexer" or "lexical analyzer" (often generated automatically by a tool like flex from a set of rules). This takes the input, and breaks it up into "tokens" (not in the sense in which I use the word)—minimal significant units, such as variable names, reserved keywords, numeric literals, etc. The lexical analyzer is not in general one-to-one. Thus, "f( x^12 + y)" will get mapped to the same sequence of tokens as "f(x^12+y )"—differences of spacing don't matter. The sequence of tokens may be something one can represent as FUNCTIONNAME("f") OPENPAREN VARIABLENAME("x") CARET NUMERICLITERAL(12) PLUS VARIABLENAME("y") CLOSEPAREN. After the lexical analyzer is done, the data is handed over to the parser (often generated automatically by a tool like yacc or bison from a grammar file).
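Here is a minimal sketch of such a lexer, hand-written in Python rather than generated by flex. One liberty: I collapse FUNCTIONNAME and VARIABLENAME into a single NAME token, since a lexer by itself cannot tell a function name from a variable name.

```python
# A minimal hand-written lexer along the above lines.

import re

TOKEN_SPEC = [
    ('NUMERICLITERAL', r'\d+'),
    ('NAME',           r'[A-Za-z_]\w*'),
    ('OPENPAREN',      r'\('),
    ('CLOSEPAREN',     r'\)'),
    ('CARET',          r'\^'),
    ('PLUS',           r'\+'),
    ('SKIP',           r'\s+'),   # spacing is discarded, so it cannot
]                                 # make for a difference in type

def lex(source: str):
    tokens, pos = [], 0
    while pos < len(source):
        for name, pattern in TOKEN_SPEC:
            m = re.match(pattern, source[pos:])
            if m:
                if name != 'SKIP':
                    tokens.append((name, m.group()))
                pos += m.end()
                break
        else:
            raise SyntaxError(f'unexpected character at position {pos}')
    return tokens

# Differently spaced inscriptions lex to the same token sequence:
assert lex('f( x^12 + y)') == lex('f(x^12+y )')
```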
Now, in practice, the hand-off between the lexer and the parser is somewhat arbitrary. If one really wanted to and were masochistic enough, one could write the whole compiler in the lexer. Or one could write a trivial lexer, one that spits out each character (or even each bit!) as a separate token, and then the parser would have to work really hard.
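The trivial lexer just mentioned takes a single line:

```python
# The degenerate hand-off: a lexer that does no work, leaving
# everything (even spacing) for the parser to sort out.
def trivial_lex(source: str):
    return [('CHAR', c) for c in source]
```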
Nonetheless, as Heath pointed out to me, there may be an objective answer to where notational difference leaves off. For it may be that our cognitive structure includes a well-defined lexer that takes auditory (speech), visual (writing or sign language) or tactile (Braille, or sign language for the deaf-blind) observations and processes them into some kind of tokenized structure. If so, then two tokens are of the same sentence type provided that the cognitive lexer would normally process them into the same structure. On this view, sentence type will in principle be a speaker-relative concept, since different people's lexers might work differently. To be honest, I doubt that it works this way in me. For instance, I strongly doubt that an inscription of "Snow is white" and an utterance of "Snow is white" give rise to any single mental structure in me. Maybe if one defines the structure in broad enough functional terms, there will be a single structure. But then we have arbitrariness as to what we consider to be functionally relevant to what.
The lesson is not that all is up for grabs. Rather, the lesson is that the distinction between tokens and types should not be taken to be unproblematic. Moreover, the lesson supports my view, which I think is conclusively proved by paradoxical cases, that truth is a function of sentence token rather than sentence type.