Question: Does grammar primarily govern the relations between word-types or those between word-tokens?
Answer: Grammar does not primarily govern the relations between word-tokens. In human spoken and written languages, there are tokens corresponding to parts of sentence types, like the subject or the predicate. Thus, a token of the sentence "Paris is full of snow" contains a token of "Paris" and a token of "is full of snow". But this is a mere accident of our languages. We can easily imagine languages where the grammatical parts of a sentence type do not correspond to physical parts of a sentence token. For instance, we could imagine a language that can only be spoken via Goedel numbering. In such a language, we can still have a complex grammar, and there will be tokens of sentences—e.g., Arabic numeral expressions of Goedel numbers—but there need be no tokens of individual words. We could, I suppose, stipulate that a word is tokened whenever a sentence containing it is tokened, but that only gives us acts of tokening and not tokens. And, ontologically, it is not clear that there would be a separate act of tokening for each of the parts—maybe one could just say the sentence as a whole, without thinking about the parts. (I am a coarse-grained action theorist.)
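A toy version of the Goedel-numbering scenario can make the point vivid. In the sketch below (the lexicon, the word-to-number assignments, and the function names are all my own illustrative choices, not anything from the post), a sentence is tokened as a single numeral, and no substring of that numeral is a token of "Paris" or of "is full of snow":

```python
def first_primes(n):
    """Return the first n primes by trial division."""
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

# Hypothetical lexicon: each word is assigned a number, but the
# words themselves are never tokened in this language.
LEXICON = {"Paris": 1, "is": 2, "full": 3, "of": 4, "snow": 5}

def godel_number(words):
    """Encode a sentence as p1^c1 * p2^c2 * ..., where pi is the
    i-th prime and ci is the lexicon code of the i-th word."""
    g = 1
    for p, w in zip(first_primes(len(words)), words):
        g *= p ** LEXICON[w]
    return g

# The sole token of the sentence is this one numeral:
token = str(godel_number(["Paris", "is", "full", "of", "snow"]))
```

The grammar of the imagined language can still be as articulated as ours—the exponent structure encodes the parts—yet the physical token has no parts that are word-tokens.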
One can imagine languages where the only tokens are tokens of sentences, but where there is still a Montague grammar. But in such a language, the arguments of the functors do not correspond to tokens.
Since we want the phenomenon of grammar to be as uniform as we can make it across imaginable languages, we should not take grammar to govern the relations between tokens, or even potential tokens, because some languages just don't have enough of these.
But, strictly speaking, we should not take grammar to govern the relations between types either. For in a language where there are no tokens corresponding to, say, individual nouns, but only sentence tokens, the grammar may still take account of nouns. But these nouns won't be types, because a type is the sort of thing that is supposed to have a token. Rather, in such a language we would introduce abstract entities to play the role of types, but these abstract entities wouldn't actually be types, since there would be no type-token relation defined for them. We could call these entities "linguistic items". The grammar of the language would then specify how linguistic items can combine into other linguistic items, in the standard Montague grammar way. And then some special, distinguished linguistic items—for instance, sentences—would have the additional property of being expressible by a token. And these linguistic items would also be types.
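The proposed division of labor—grammar combines abstract linguistic items, and a separate "coding" relation maps only the distinguished items (sentences) to tokens—can be sketched in a simplified categorial-grammar style. All class and function names here are my own illustrative choices, and the coding function is a stand-in for something like the Goedel numbering above:

```python
from dataclasses import dataclass
from typing import Optional
import zlib

@dataclass(frozen=True)
class Item:
    """An abstract linguistic item: it has a grammatical category
    and a structure, but no token of its own."""
    category: str   # e.g. "NP", "NP\\S", "S"
    form: tuple

def apply(functor: Item, argument: Item) -> Item:
    """Grammar: a functor of category A\\B applied to an argument
    of category A yields an item of category B."""
    arg_cat, result_cat = functor.category.split("\\", 1)
    if argument.category != arg_cat:
        raise ValueError("category mismatch")
    return Item(result_cat, (functor.form, argument.form))

def code(item: Item) -> Optional[str]:
    """Coding: only the distinguished items (here, sentences) are
    expressible by a token; every other item has none."""
    if item.category != "S":
        return None
    # Deterministic stand-in for a Goedel numeral of the sentence.
    return str(zlib.crc32(repr(item.form).encode()))

paris = Item("NP", ("Paris",))
predicate = Item("NP\\S", ("is full of snow",))
sentence = apply(predicate, paris)   # an item of category "S"
```

Here `code(paris)` is `None`: the noun figures in the grammar as an argument of the functor, yet nothing in the language is ever a token of it. Grammar operates entirely at the level of `Item`s; only `code` touches tokens.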
So in fact the answer to the question is "neither". Grammar governs the relations between linguistic items. But these items need be neither types nor tokens. (This undercuts Goodman-Quine attempts to do grammar at the token level.) Then there is something other than grammar, which we might call coding, which governs the relation between the special linguistic items that are expressible by a token and their tokens. And of course there will be semantics/pragmatics (I do not distinguish these, though of course most do).
So what are these linguistic items? One option, inspired by Rob Koons: Carefully delineated social practices. I am not sure this will work, but it might. Second option: Don't worry! Just do your grammar, coding and semantics/pragmatics in terms of linguistic items, and Ramseyfy. I also don't know if this will work, but it might.
1 comment:
Third option: Equivalence classes of sentence-types.