Technical and legal writing often contains stipulations. The stipulations change the meanings of words already in the language and sometimes introduce neologisms. It seems, however, that technical and legal writing in English is still writing in English. After all, the stipulations are given in English, and stipulation is a mechanism of the English language, akin to macros in some computer programming languages. But now suppose that there is a pair of genuinely distinct natural languages, A and B, such that the grammatical structure of A is a subset of the grammatical structure of B, so that any sentence of A can be translated word-by-word or word-by-phrase into a sentence of B. Now imagine that Jan is a speaker of B, and as a preamble she goes through all the vocabulary of A and stipulates its meaning in B. She then speaks just like a speaker of A.
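The macro analogy can be made concrete with a minimal sketch. Here, assuming an invented toy vocabulary, a "preamble" is just a table of stipulations binding A-words to B-meanings, and speaking-A-within-B is word-by-word substitution, as in the thought experiment:

```python
# A minimal sketch of stipulation-as-macro. The vocabulary and the
# function names are invented for illustration, not part of the post.

# "Preamble": Jan stipulates the A-vocabulary in terms of B.
preamble = {
    "chien": "dog",
    "chat": "cat",
}

def translate(sentence, preamble):
    """Word-by-word translation from A into B; unknown words pass through."""
    return " ".join(preamble.get(word, word) for word in sentence.split())

print(translate("the chien chases the chat", preamble))
# → the dog chases the cat
```

After the preamble is in force, every A-sounding sentence Jan produces has a B-meaning fixed by substitution, just as macro expansion leaves a program in the host language.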
When Jan utters something that sounds just like a sentence of A, and means the same thing as the sentence of A, is she speaking B or A? It seems she is speaking B. Stipulation is a mechanism of B, after all, and she is simply heavily relying on this mechanism.
Of course, there probably is no such pair of natural languages. But there will be partial cases of this, particularly if A is restricted to, say, a technical subset, and if we have a high tolerance for artificial-sounding sentences of B. And we can imagine that eventually a human language will develop (whether "naturally" or by explicit construction) that not only allows the stipulation of terms, but has highly flexible syntax, like some programming languages. At this point, its speakers will be able to speak their extensible language, but with one preamble sound just like speakers of French and with another just like speakers of Mandarin. But the language itself wouldn't be a superset of French or Mandarin. And eventually the preamble could be skipped. The language could have a convention whereby, by adopting a particular accent and intonation, one is implicitly speaking within the scope of a preamble made by another speaker, a preamble that stipulated which accent and intonation counted as a switch to the scope of that preamble. Then all we would need to do is to have a speaker (or a family of speakers) give a French-preamble and another speaker give a Mandarin-preamble. As soon as any speaker of our flexible language starts accenting and intoning as in French or Mandarin, their language falls under the scope of the preamble. (The switch of accent and intonation will be akin to importing a module into a computer program.) But it's important to note that the production of a preamble should not be thought of as a change in the language any more than saying "Let's call the culprit x" changes English. It's just another device within the old language.
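The accent-as-import convention can also be sketched. In this toy model (all names and vocabulary invented for illustration), one speaker registers a preamble once under a marker, and any later utterance bearing that marker is interpreted within the preamble's scope, the way a marker in source code brings a module's definitions into scope:

```python
# A toy model of the accent-as-import convention. The registry,
# markers, and vocabulary are all invented for illustration.

PREAMBLES = {}  # registry: accent/intonation marker -> stipulated vocabulary

def register_preamble(marker, vocabulary):
    """One speaker gives the preamble once; later speakers only use the marker."""
    PREAMBLES[marker] = vocabulary

def interpret(utterance, marker=None):
    """Meaning depends on which preamble's scope the marker places us in.
    Note that the absence of a marker is itself a marker: it selects
    the default (empty) preamble."""
    vocab = PREAMBLES.get(marker, {})
    return " ".join(vocab.get(word, word) for word in utterance.split())

register_preamble("french-accent", {"chien": "dog"})
register_preamble("mandarin-accent", {"gou": "dog"})

print(interpret("the chien barks", marker="french-accent"))
# → the dog barks
print(interpret("the chien barks"))
# → the chien barks   (no marker: default scope, word left uninterpreted)
```

Registering a preamble adds an entry to the registry without altering `interpret` at all, which mirrors the point that giving a preamble is a device within the old language rather than a change to it.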
What's the philosophical upshot of these thought experiments? Maybe not that much. But I think they confirm some thoughts about language that people have had already. First, the question of when a language is being changed and when one is simply making use of the flexible facilities of the original language is probably not well-defined. Second, given linguistic flexibility, the idea of context-free sentences and of lexical meaning independent of context is deeply problematic. Stipulative preambles are a kind of context, and any sentence can have its meaning affected by them. There might be default meanings in the absence of some marker, but the absence of a marker is itself a marker. Third, we get further confirmation of the point that syntax is in general semantically fraught, since it is possible to make the choice of preamble be conditional on how the world is. Fourth, this line of thought makes more plausible the idea that in some important sense we are all speaking subsets of the same language (cf. universal grammar).
This post is based on a line of inquiry I'm pursuing: What can we learn about language from computer languages?