
Monday, May 5, 2025

Unrestricted quantification and Tarskian truth

It is well-known—a feature and not a bug—that Tarski’s definition of truth needs to be given in a metalanguage rather than the object language. Here I want to note a feature of this that I haven’t seen before.

Let’s start by considering how Tarski’s definition of truth would work for set theory.

We can define satisfaction as a relation between finite gappy sequences of objects (i.e., sets) and formulas whose variables are among x1, x2, .... We do this by induction on formulas.

How does this work? Following the usual way to formally create an inductive definition, we will do something like this:

  1. A satisfaction-like relation is a relation between finite sequences of sets and formulas such that:

    1. the relation gets right the base cases, namely, a sequence s satisfies xn ∈ xm if and only if the nth entry of s is a member of the mth entry of s, and satisfies xn = xm if and only if the nth entry of s is identical to the mth entry of s

    2. the relation gets right the inductive cases (e.g., s satisfies ∀xnϕ if and only if every sequence s′ that includes an nth place and agrees with s on all the places other than the nth place satisfies ϕ, etc.)

  2. A sequence s satisfies a formula ϕ provided that every satisfaction-like relation holds between s and ϕ.
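For a small finite toy model the two inductive clauses can be run directly (the point of the post is precisely that this breaks down for ZF's universe); a minimal Python sketch, with formulas as nested tuples and gappy sequences as dicts, where the three-element domain is my own illustration:

```python
# Toy satisfaction over a tiny domain of hereditarily finite sets.
# Formulas: ('in', n, m), ('eq', n, m), ('not', f), ('and', f, g), ('forall', n, f)
# A gappy sequence is a dict mapping variable indices to objects.

empty = frozenset()
one = frozenset({empty})
two = frozenset({empty, one})
DOMAIN = [empty, one, two]

def satisfies(s, phi):
    op = phi[0]
    if op == 'in':                      # base case: x_n ∈ x_m
        return s[phi[1]] in s[phi[2]]
    if op == 'eq':                      # base case: x_n = x_m
        return s[phi[1]] == s[phi[2]]
    if op == 'not':
        return not satisfies(s, phi[1])
    if op == 'and':
        return satisfies(s, phi[1]) and satisfies(s, phi[2])
    if op == 'forall':                  # vary the nth place over the whole domain
        return all(satisfies({**s, phi[1]: u}, phi[2]) for u in DOMAIN)
    raise ValueError(op)

print(satisfies({1: empty, 2: one}, ('in', 1, 2)))                  # True: ∅ ∈ {∅}
print(satisfies({1: empty}, ('forall', 2, ('not', ('in', 2, 1)))))  # True: nothing is in ∅
```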

The problem is that in (2) we quantify over satisfaction-like relations. A satisfaction-like relation is not a set in ZF, since any satisfaction-like relation includes ((a),ϕ=) for every set a, where (a) is the sequence whose only entry is a at the first location and ϕ= is x1 = x1. Thus, a satisfaction-like relation needs to be a proper class, and we are quantifying over these, which suggests ontological commitment to these proper classes. But ZF set theory does not have proper classes. It only has virtual classes, where we identify a class with the formula defining it. And if we do that, then (2) comes down to:

  3. A sequence s satisfies ϕ if for every satisfaction-like formula F the sentence F(s,ϕ) is true.

And that presupposes the concept of truth. (Besides which, I don’t know if we can define a satisfaction-like formula.) So that’s a non-starter. We need genuine and not merely virtual classes to give a Tarski-style definition of truth for set theory. In other words, it looks like the meta-language in which we give the Tarski-style definition of truth for set theory not only needs a vocabulary that goes beyond the object-language’s vocabulary, but it needs a domain of quantification that goes beyond the object-language’s domain.

Now, suppose that we try to give such a Tarskian definition of truth for a language with unrestricted quantification, namely quantification over literally everything. This is very problematic. For now the satisfaction-like relation includes the pair ((a),ϕ=) for literally every object a. This relation, then, can neither be a set, nor a class, nor a proper superclass, nor a supersuperclass, etc.

I wonder if there is a way of getting around this difficulty by having some kind of a primitive “inductive definition” operator instead of quantifying over satisfaction-like relations.

Another option would be to be a realist about sets but a non-realist about classes, and have some non-realist story about quantification over classes.

I bet people have written on this stuff, as it’s a well-explored area. Anybody here know?

Wednesday, March 19, 2025

Provability and truth

The most common argument that mathematical truth is not provability uses Tarski’s indefinability of truth theorem or Goedel’s first incompleteness theorem. But while this is a powerful argument, it won’t convince an intuitionist who rejects the law of excluded middle. Plus it’s interesting to see if a different argument can be constructed.

Here is one. It’s much less conclusive than the Tarski-Goedel approach. But it does seem to have at least a little bit of force. Sometimes we have experimental evidence (at least of the computer-based kind) for a mathematical claim. For instance, perhaps, you have defined some probabilistic setup, and you wonder what the expected value of some quantity Q is. You now set up an apparatus that implements the probabilistic setup, and you calculate the average value of your observations of Q. After a billion runs, the average value is 3.141597. It’s very reasonable to conclude that the last digit is a random deviation, and that the mathematically expected value of Q is actually π.
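An experiment of this sort is easy to simulate; a minimal sketch where the probabilistic setup (my choice, not one from the post) is Q = 4 if a uniformly random point in the unit square lands inside the quarter circle, so that the expected value of Q is π:

```python
import random

random.seed(0)

def observe():
    # one observation of Q: 4 if (x, y) falls inside the unit quarter circle, else 0
    x, y = random.random(), random.random()
    return 4 if x * x + y * y <= 1 else 0

N = 100_000
average = sum(observe() for _ in range(N)) / N
print(average)  # close to 3.14159..., with random deviation in the later digits
```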

But is it reasonable to conclude that it’s likely provable that the expected value of Q is π? I don’t see why it would be. Or, at least, we should be much less confident that it’s provable than that the expected value is π. Hence, provability is not truth.

Monday, February 17, 2025

Incompleteness

For years in my logic classes I’ve been giving a rough but fairly accessible sketch of the fact that there are unprovable arithmetical truths (a special case of Tarski’s indefinability of truth), using an explicit Goedel sentence using concatenation of strings of symbols rather than Goedel encoding and the diagonal lemma.

I’ve finally revised the sketch to give the full First Incompleteness theorem, using Rosser’s trick. Here is a draft.

Thursday, November 5, 2020

Is there a set of all set-theoretic truths?

Is there a set of all set-theoretic truths? This would be the set of sentences (in some encoding scheme, such as Goedel numbers) in the language of set theory that are true.

There is a serious epistemic possibility of a negative answer. If ZF is consistent, then there is a model M of ZFC such that every object in M is definable, i.e., for every object a of M, there is a defining formula ϕ(x) that is satisfied by a and by a alone in M (and if there is a transitive model of ZF, then M can be taken to be transitive). In such a model, it follows from Tarski’s Indefinability of Truth that there is no set of all set-theoretic truths. For if there were such a set, then that set would be definable, and we could use the definition of that set to define truth. So, if ZF is consistent, there is a model M of ZFC that does not contain a set of all the truths in M.

Interestingly, however, there is also a serious epistemic possibility of a positive answer. If ZF is consistent, then there is a model M of ZFC that does contain a set of all the truths in M. Here is a proof. If ZF is consistent, so is ZFC. Let ZFCT be a theory whose language is the language of set theory with an extra constant T, and whose axioms are the axioms of ZFC with the schemas of Separation and Replacement restricted to formulas of ZFC (i.e., formulas not using T), plus the axiom:

  1. ∀x(x ∈ T → S(x))

where S(x) is a sentence saying that x is the code for a sentence (this is a syntactic matter, so it can be specified explicitly), and the axiom schema that has for every sentence ϕ with code n:

  2. ϕ ↔ n ∈ T.

Any finite collection of the axioms of ZFCT is consistent. For let M be a model of ZFC (if ZF is consistent, so is ZFC, so it has a model). Then all the axioms of ZFC will be satisfied in M. Furthermore, for any finite subset of the additional axioms of ZFCT, there is an interpretation of the constant T under which those axioms are true. To see this, suppose that our finite subset contains (1) (no harm throwing that in if it’s not there) and the instances ϕi ↔ ni ∈ T of (2) for i = 1, ..., m. It is provable from ZF and hence true in M that there is a set t such that x ∈ t if and only if x = n1 and ϕ1, or x = n2 and ϕ2, …, or x = nm and ϕm.

Moreover, any such set can be proved in ZF to satisfy:

  3. ∀x(x ∈ t → S(x)).

Interpreting T to be that set t in M will make the finite subset of the additional axioms true.

So, by compactness, ZFCT has an interpretation I in some model M. In M there will be an object t such that t = I(T). That object t will be a set of all the truths in M that do not contain the constant T. Now consider the interpretation I′ of ZFC in M, which is I without any assignment of a value to the constant T (since T is not a constant of ZFC). Then ZFC will be true in M under I′. Moreover, the object t in M will be a set of all the truths in M.

So, if ZF is consistent, then there is a model of ZFC with a set of all set-theoretic truths and a model of ZFC without a set of all set-theoretic truths.

The former claim may seem to violate the Tarski Indefinability of Truth. But it doesn't. For that set of all truths will not itself be definable. It will exist, but there won't be a formula of set theory that picks it out. There is nothing mathematically new in what I said above, but it is an interesting illustration of how one can come close to violating Indefinability of Truth without actually violating it.

Now, what if we take a Platonic view of the truths of set theory? Should we then say that there really is a set of all set-theoretic truths? Intuitively, I think so. Otherwise, our class of all sets is intuitively “missing” a subset of the set of all sentences. I am inclined to think that the Axioms of Separation and Replacement should be extended to include formulas of English (and other human languages), not just the formulas expressible in set-theoretic language. And the existence of the set of all set-theoretic truths follows from an application of Separation to the sentence “n is the code for a sentence of set theory that is true”.

Sunday, September 4, 2011

An easy constructive proof of a version of Tarski's Undefinability of Truth

Tarski's Undefinability of Truth theorem says that a language that contains enough material cannot have a truth predicate, i.e., a predicate that holds of all and only the true sentences. This yields Goedel's Incompleteness Theorem if you let the predicate be IsProvable.

Here's a proof in a string setting. Suppose that L is a language that (under some interpretation--I will generally drop that qualification for simplicity) lets you talk about finite strings of characters. Suppose L has a concatenation function +: a+b is a string consisting of the characters of a followed by the characters of b.  Suppose further that every character has a name in L given by surrounding the character with asterisks.  Thus, *+* is a name for the plus sign.  Suppose that there is a function Q in L such that if a is a string, then Q(a) is a string that consists of the asterisk-based names of the characters in a interspersed with pluses.  I will call Q(a) a quotation of a.  Thus Q("abc")="*a*+*b*+*c*".  I will say that a substring q of a string s is a quotation in s provided that q is a substring of s of the form "*a*+*b*+*c*+..." and q cannot be extended to a longer quotation.  I will also use "*abc*" (etc.) to abbreviate "*a*+*b*+*c*".
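The quotation function Q is purely mechanical; a quick Python sketch (edge cases such as quoting "*" itself are left aside, as in the post):

```python
def Q(a):
    """Quotation: name each character by surrounding it with asterisks, join with '+'."""
    return "+".join("*" + c + "*" for c in a)

print(Q("abc"))  # *a*+*b*+*c*
print(Q("+"))    # *+*  (the name of the plus sign)
```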

Suppose that we can define a predicate T in L that is veridical, i.e., T(a) is true only if a is true.  We will now construct a sentence g in L such that g is true and T(g) is false.  This shows that no predicate of L is true of all and only the true sentences of L.  Here's how.  Let g be the following sentence:
  • (x)[(z)(z=*AlmostMe* → ((FirstQuotes(x,z) & FirstQuoteRemoved(x,z)) → ~T(x)))]
Here, AlmostMe is an abbreviation (I will put abbreviations in bold) for the following sequence of characters:
  • (x)[(z)(z=() → ((FirstQuotes(x,z) & FirstQuoteRemoved(x,z)) → ~T(x)))]
I.e., AlmostMe is an abbreviation for g except for the quotation of AlmostMe inside g.  FirstQuotes(x,z) is an abbreviation of a complex predicate that says that the first quotation inside x is a quotation of z.  FirstQuoteRemoved(x,z) is an abbreviation of a predicate that says that z is what you get when you take x and replace the first quotation in it with "()".

Lemma 1. One can define FirstQuotes(x,z) and FirstQuoteRemoved(x,z) satisfying the above description.

I'll leave out the proof of this easy fact.
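For concreteness, here is one way the two predicates could be cashed out in the string setting, treating a maximal greedy regex match as the "first quotation" (this is my sketch, not the post's construction, and it glosses over quoting of "*" and "+" themselves):

```python
import re

# a quotation: *c* units joined by pluses; greedy matching gives maximality
QUOTATION = re.compile(r"\*.\*(?:\+\*.\*)*")

def unquote(q):
    # "*a*+*b*" -> "ab": each "+"-separated part is *c*; keep the middle character
    return "".join(part[1] for part in q.split("+"))

def first_quotes(x, z):
    """The first quotation inside x is a quotation of z."""
    m = QUOTATION.search(x)
    return m is not None and unquote(m.group(0)) == z

def first_quote_removed(x, z):
    """z is what you get by replacing the first quotation in x with '()'."""
    m = QUOTATION.search(x)
    return m is not None and x[:m.start()] + "()" + x[m.end():] == z

x = "foo(*a*+*b*)bar"
print(first_quotes(x, "ab"))                 # True
print(first_quote_removed(x, "foo(())bar"))  # True
```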

Lemma 2. The one and only string x that satisfies both FirstQuotes(x,*AlmostMe*) and FirstQuoteRemoved(x,*AlmostMe*) is g.

Here's an informal proof of Lemma 2.  The first quotation in g is indeed a quotation of AlmostMe, and so FirstQuotes(g,*AlmostMe*) does indeed hold.  Moreover, if we remove that quotation of AlmostMe and replace it with "()", we get AlmostMe.  So g does satisfy both predicates.  

Suppose now that h satisfies both predicates.  We must show that h=g.  Start with the fact that FirstQuoteRemoved(h,*AlmostMe*).  This shows that h is of the form:
  • (x)[(z)(z=*...* → ((FirstQuotes(x,z) & FirstQuoteRemoved(x,z)) → ~T(x)))]
where *...* is some quotation.  But because FirstQuotes(h,*AlmostMe*), that first quotation must be a quotation of AlmostMe.  But then h is g.  

Given Lemma 2, the proof of our theorem is easy.  By First Order Logic, g is equivalent to:
  • (x)((FirstQuotes(x,*AlmostMe*) & FirstQuoteRemoved(x,*AlmostMe*))→~T(x))
But the one and only x that satisfies the antecedent of the conditional is g.  Hence, g is true if and only if ~T(g).  Now, g is either true or false.  If it is false, then ~T(g) is true as T is veridical, and so g is true, which is a contradiction.  Therefore, g is true.  But if it is true, then ~T(g) and so g does not satisfy T.  That completes the proof.
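The diagonal step in Lemma 2 can be checked mechanically for any seed string containing the "()" placeholder: splice in a quotation of the seed and verify that both predicates hold of the result. A self-contained sketch (the seed string here is an arbitrary stand-in for AlmostMe, not the real formula):

```python
import re

QUOTATION = re.compile(r"\*.\*(?:\+\*.\*)*")  # maximal quotation via greedy match

def Q(a):
    return "+".join("*" + c + "*" for c in a)

def diagonalize(almost_me):
    # g is AlmostMe with its "()" placeholder replaced by a quotation of AlmostMe
    return almost_me.replace("()", Q(almost_me), 1)

seed = "~T(x) if x quotes ()"   # hypothetical stand-in for AlmostMe
g = diagonalize(seed)

m = QUOTATION.search(g)
# the first quotation in g is a quotation of the seed ...
print("".join(p[1] for p in m.group(0).split("+")) == seed)   # True
# ... and replacing it with "()" gives the seed back, as Lemma 2 requires
print(g[:m.start()] + "()" + g[m.end():] == seed)             # True
```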

I'm going to try out a version of this proof on undergraduates one of these days.

Thursday, June 9, 2011

T-schema and bivalence

Tarski's T-schema says that for any sentence "s":

  1. "s" is true if and only if s.
Suppose that "s" is neither true nor false. Then the left hand side of (1) is false, but the right hand side is neither true nor false. It seems to me that a reasonable multivalent logic will not allow an "if and only if" sentence to be true when one side of it is false and the other side is not false. So, it seems that the T-schema requires bivalence.
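On the strong Kleene tables, for instance (one standard multivalent choice, used here purely as an illustration), a biconditional with one side false and the other side gappy comes out gappy rather than true:

```python
# three truth values: True, False, and None for "neither true nor false"

def iff3(a, b):
    """Strong Kleene biconditional: gappy whenever either side is gappy."""
    if a is None or b is None:
        return None
    return a == b

print(iff3(True, True))    # True
print(iff3(False, None))   # None: so the T-schema instance is not true
```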

It's odd that I never noticed this before.

Monday, March 21, 2011

Names, quantifiers, Aristotelian logic and one-sided relations

This is going to be a pretty technically involved post and it will be written very badly, as it's really just notes for self. Start with this objection to Aristotelian logic. A good logical system reveals the deep logical structure of sentences. But Aristotelian logic takes as fundamental sentences like:
  1. Everyone is mortal.
  2. Socrates is mortal.
In so doing, Aristotelian logic creates the impression that (1) and (2) have similar logical form, and it is normally taken to be that modern quantified logic has shown that (1) and (2) have different logical forms, namely:
  3. ∀x(Mortal(x))
  4. Mortal(Socrates).
I shall show, however, that there is a way of thinking about (1) and (2), as well as about (3) and (4), that makes them have the same deep logical form, as the Aristotelian logician makes it seem. (This is a very surprising result for me. Until I discovered these ideas this year, I had a strong antipathy to Aristotelian logic.) Moreover, this will give us some hope of understanding the medieval idea of one-sided relations. The medievals thought, very mysteriously, that creation is a one-sided relation: we are related to God by the created by relation, but God is not related to us by the creates relation.

Now to the technical stuff. Recall Tarski's definition of truth in terms of satisfaction. I think the best way to formulate the definition is by means of a substitution sequence. A substitution sequence s is a finite sequence of variable-object pairs, which I will write using a slash. E.g., "x1"/Socrates,"x2"/Francis,"x3"/Bucephalus is a substitution sequence. The first pair in my example consists of the variable letter "x1", a linguistic entity (actually in the best logic we might have slot identifiers instead of variable letters) and Socrates—not the name "Socrates" (which is why the quotation marks are as they are). We then inductively define the notion of a substitution sequence satisfying a well-formed formula (wff) under an interpretation I. An interpretation I is a function from names and predicates to objects and properties respectively. And then we have satisfaction simpliciter which is satisfaction under the intended interpretation, and that's what will interest me. So henceforth, I will be the intended interpretation. (I've left out models, because I am interested in truth simpliciter.) We proceed inductively. Thus, s satisfies a disjunction of wffs if and only if it satisfies at least one of the wffs, satisfies the negation of a wff if and only if it does not satisfy the wff, and so on.

Quantifiers are a little more tricky. The sequence s satisfies the wff ∀xF iff for every object u, the sequence "x"/u,s (i.e., the sequence obtained by prepending the pair "x"/u at its head) satisfies F. The sequence s satisfies ∃xF iff for some object u, the sequence "x"/u,s satisfies F.

What remains is to define s's satisfaction of an atomic wff, i.e., one of the form P(a1,...,an) where a1,...,an are a sequence of names or variables. The standard way of doing this is as follows. Let u1,...,un be a sequence of objects defined as follows. If ai is a variable "x", then we let ui be the first object u occurring in s paired with the variable "x". If for some i there is no such pair in s, then we say s doesn't satisfy the formula. If ai is a name "n", then we let ui=I("n"). We then say that s satisfies P(a1,...,an) if and only if u1,...,un stand in I(P).
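Written as code, the case split in this clause is plain to see; a minimal sketch with substitution sequences as lists of (variable, object) pairs, toy extensions and interpretation of my own:

```python
LOVES = {("Socrates", "Plato")}   # toy extension standing in for I("loves")
I = {"Plato": "Plato"}            # toy intended interpretation of names

def satisfies_atomic(s, extension, args):
    """args is a list of ('var', v) or ('name', n) slots -- hence the messy split."""
    objects = []
    for kind, a in args:
        if kind == 'var':
            hits = [u for v, u in s if v == a]   # first pair for the variable wins
            if not hits:
                return False                     # no such pair: s doesn't satisfy
            objects.append(hits[0])
        else:
            objects.append(I[a])                 # names bypass the sequence entirely
    return tuple(objects) in extension

s = [("x1", "Socrates")]
print(satisfies_atomic(s, LOVES, [('var', 'x1'), ('name', 'Plato')]))  # True
print(satisfies_atomic(s, LOVES, [('var', 'x1'), ('var', 'x2')]))      # False: no pair for x2
```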

Now notice that while the definition of satisfaction for quantified sentences is pretty neat, the definition of satisfaction for atomics is really messy, because it needs to take into account the question of which slot of the predicate has a variable in it and which one has a name.

There is a different way of doing this. This starts with the Montague grammar way of thinking about things, on which words are taken to be functors from linguistic entities to linguistic entities. Let us ask, then, what kind of functors are represented by names. Here is the answer that I think is appealing. A name, say "Socrates", is a functor from wffs with an indicated blank to wffs. In English, the name takes a wff like "____ likes virtue" and returns the wff (in this case sentence) "Socrates likes virtue". (The competing way of thinking of names is as zero-ary functors. But if one does it this way, one also needs variables as another kind of zero-ary functor, which I think is unappealing since variables are really just a kind of slot, or else one has a mess in treating atomics differently depending on which slots are filled with names and which with variables.) We can re-formulate First Order Logic so that a name like "Socrates" is (or at least corresponds to) a functor from wff-variable pairs to new wffs. Thus, when we apply the functor "Socrates" to the wff "Mortal(x)" and the variable "x", we get the wff (sentence, actually) "Mortal(Socrates)". And the resulting wff no longer has the variable "x" freely occurring in it. But this is exactly what quantifiers do. For instance, the universal quantifier is a functor that takes a wff and a variable, and returns a new wff in which the variable does not freely occur.

If we wanted the grammar to indicate this with particular clarity, instead of writing "Rides(Alexander, Bucephalus)", we would write: "Alexanderx Bucephalusy Rides(x,y)". And this is syntactically very much like "∀xy Rides(x,y)".

And if we adopted this notation, the Tarski definition of satisfaction would change. We would add a new clause for the satisfaction of a name-quantified formula: s satisfies nxF, where "n" is a name, if and only if "x"/I("n"),s satisfies F. Now once we got to the satisfaction of an atomic, the predicate would only be applied to variables, never to names. And so we could more neatly say that s satisfies P(x1,...,xn) if and only if every variable occurs in the substitution sequence and u1,...,un stand in I(P) where ui is the first entity u occurring in s in a pair of the form "xi"/u.  Neater and simpler, I think.
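With names treated as quantifiers, the clauses do come out uniform; a sketch continuing in the same toy vein (domain, extension, and interpretation are again my own illustrations):

```python
RIDES = {("Alexander", "Bucephalus")}
DOMAIN = ["Alexander", "Bucephalus", "Socrates"]
I = {"Alexander": "Alexander", "Bucephalus": "Bucephalus"}

def satisfies(s, phi):
    op = phi[0]
    if op == 'atom':        # ('atom', extension, [variables]): only variables now
        objects = []
        for x in phi[2]:
            hits = [u for v, u in s if v == x]
            if not hits:
                return False
            objects.append(hits[0])
        return tuple(objects) in phi[1]
    if op == 'forall':      # ('forall', x, F): prepend "x"/u for every u in the domain
        return all(satisfies([(phi[1], u)] + s, phi[2]) for u in DOMAIN)
    if op == 'name':        # ('name', n, x, F): prepend "x"/I("n") -- same shape
        return satisfies([(phi[2], I[phi[1]])] + s, phi[3])
    raise ValueError(op)

# "Alexander_x Bucephalus_y Rides(x,y)"
phi = ('name', 'Alexander', 'x', ('name', 'Bucephalus', 'y', ('atom', RIDES, ['x', 'y'])))
print(satisfies([], phi))   # True
# "(x) Bucephalus_y Rides(x,y)" is false: Socrates doesn't ride Bucephalus
print(satisfies([], ('forall', 'x', ('name', 'Bucephalus', 'y', ('atom', RIDES, ['x', 'y'])))))  # False
```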

Names, thus, can be seen as quantifiers. It might be thought that there is a crucial disanalogy between names and the universal/existential quantifiers, in that there are many names, and only one universal and only one existential quantifier. But the latter point is not clear. In a typed logic, there may be as many universal quantifiers as types, and as many existential ones as types, once again. And the number of types may be world-dependent, just as the number of objects.

If I am right, then if we wanted to display the logical structure of (1) and (2), or of (3) and (4) for that matter, we would respectively say:
  5. ∀x Mortal(x)
  6. Socratesx Mortal(x).
And there is a deep similarity of logical structure—we simply have different quantifiers. And so the Aristotelian was right to see these two as similar.

Now, the final little bit of stuff. Obviously, if "m" and "n" are two names, then:
  7. "mxnyF(x,y)" is true iff "nymxF(x,y)" is true,
just as:
  8. "∀x∀yF(x,y)" is true iff "∀y∀xF(x,y)" is true.
But the two sentences in (8), although they are logically equivalent, arguably express different propositions. And I submit that so do the two sentences in (7). And we even have a way of marking the difference in English, I think. Ordinarily, what the left hand side in (7) says is that u has the property PxnyF(x,y) while the right hand side in (7) says that v has the property PymxF(x,y), where u and v are what "m" and "n" respectively denote, and PxH(x) is the (abundant) property corresponding to the predicate H (the P-thingy is like the lambda functor, except it returns a property, not a predicate). These are distinct claims.

The medievals then claim that in the case of God we have this. They say that "Godx nyF(x,y)" is true in virtue of "ny GodxF(x,y)" being true. It is to the referent of "n" that the property Py GodxF(x,y) is attributed, and the sentence that seems to attribute a property to God is to be analyzed in terms of the one that attributes a property to the referent of "n".

Monday, October 19, 2009

Tarski's definition of truth-in-L

Tarski's definition is often noted—typically critically—as being applicable only to the languages he gave it in. Thus, he defined truth-in-L, or more generally satisfaction-in-L, for several cases of L. However, I think this misses something that goes on in the reader when she understands Tarski's account: the reader, upon reading Tarski, gains the skill to generate the definition of truth-in-L for other languages L (at least ones that are sufficiently formalized). One just gets it (I think Max Black makes this point). A standard way of defining A in C (where C is a context and A is a context-sensitive concept to be defined) is to give some "direct definition" of the form

  1. x is a case of A in C iff F(x,C).
However, Tarski's case exemplifies a different way of defining "A in C": one teaches (perhaps by example) a procedure P (perhaps specified ostensively) which, for every admissible C, will generate a definition of A-in-C. Call this "procedural definition". A direct definition has an obvious advantage with respect to comprehensibility. However, a procedural definition does advance the understanding. For instance, suppose that instead of giving a definition of a heart that applies to all species, I teach you a method which, when properly exercised upon Ks, gives you a definition of the heart-of-a-K.

Now, in ordinary cases, one can move from a procedural definition to a direct definition as follows:

  2. x is a case of A in C iff x satisfies the definition of A-in-C that P would produce given C.

However, in the Tarskian case, we cannot do this for the simple reason that (2) would end up being circular if A is satisfaction! To understand what it is to satisfy a definition one needs to know that which one is trying to define. So in Tarski's case—and pretty much in Tarski's case alone—procedural definition is not the same as direct definition.

Nonetheless, a procedural definition, even when it does not give rise to a direct definition, is valuable—as long as the grasp of the procedure does not depend on the concept to be defined. And here, I think, is the real failure of Tarski's definition: one's grasp of the concept of a predicate—which is central to the method—is dependent on one's grasp of the concept of satisfaction.

Friday, October 9, 2009

Non-semantic definitions of truth

Here is a good reason to think that Tarski-style attempts at a definition of truth that do not make use of semantic concepts are going to fail. Such attempts are likely to make use of concepts like predicate and name. But these concepts are semantic concepts. A predicate is something that can be applied to a name, and a name is something to which a predicate can be applied, and application is a semantic concept. Moreover, the definition of truth is going to have to presuppose an identification of the application function for the language (which takes a predicate and one or more names or free variables, and generates a well-formed formula, say by taking the predicate, appending a parenthesis, then appending a comma-delimited list of the names/variables, and then a parenthesis). But there is a multitude of functions from linguistic entities to linguistic entities, and to say which of them is application will be to make a semantic statement about the language.

Wednesday, October 7, 2009

What's wrong with Tarski's definition of application?

Tarski's definition of truth depends on a portion which is, essentially, a disjunctive definition of application. As Field noted in 1974, unless that definition of application is a naturalistically acceptable reduction, Tarski has failed in the project of reducing truth to something naturalistically acceptable. Field thinks the disjunctive definition of application is no good, but his argument that it is unacceptable is insufficient. I shall show why the definition is no good.

In the case of English (or, more precisely, the first order subset of English), the definition is basically this:

  1. P applies to x1, x2, ... (in English) if and only if:
    • P = "loves" and x1 loves x2, or
    • P = "is tall" and x1 is tall, or
    • P = "sits" and x1 sits, or
    • ...
The iteration here is finite and goes through all the predicates of English.
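Spelled out for a toy three-predicate fragment, the schema is just a finite case split (the extensions below are invented for illustration):

```python
# toy facts standing in for worldly extensions
def loves(a, b): return (a, b) in {("Romeo", "Juliet")}
def is_tall(a): return a in {"Goliath"}
def sits(a): return a in {"Rodin's Thinker"}

def applies(P, xs):
    """One disjunct per predicate of the (toy) language -- the '...' made finite."""
    if P == "loves":
        return loves(xs[0], xs[1])
    if P == "is tall":
        return is_tall(xs[0])
    if P == "sits":
        return sits(xs[0])
    return False

print(applies("loves", ["Romeo", "Juliet"]))  # True
print(applies("is tall", ["Romeo"]))          # False
```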

Before we handle this definition, let's observe that this is a case of a schematic definition. In a schematic definition, we do not give every term in the definition, but we give a rule (perhaps implicitly by giving a few portions and writing "...") by which the whole definition can be generated.

Now consider another disjunctive definition that is generally thought to be flawed:

  2. x is in pain if and only if:
    • x is human and x's C-fibers are firing, or
    • x is Martian and x's subfrontal oscillator has a low frequency, or
    • x is a plasmon and x's central plasma spindle is spinning axially, or
    • ...
Why is this flawed? There is a simple answer. The rule to generate the additional disjuncts is this: iterate through all the natural kinds K of pain-conscious beings and write down the disjunct "x is a K and FK(x)" where FK(x) is what realizes pain in Ks. But this definition schema is viciously circular, even though the infinite definition it generates is not circular. If all the disjuncts were written out in (2), the result would be a naturalistically acceptable statement, with no circularity. However, the rule for generating the full statement—the rule defining the "..." in (2)—itself makes two uses of the concept of pain (once when restricting the Ks to pain-conscious beings and again when talking of what realizes pain in Ks). Thus, giving the incomplete (2) does not give one understanding of pain, since to understand (2) one must already know what the nature of pain is. (The same diagnosis can be made in the case of Field's nice example of valences. To understand which disjuncts to write down in the definition in any given world with its chemistry, one must have the concept of a valence.)

Now, the Tarskian definition of application has the same flaw, albeit this flaw does not show up in the special cases of English and First Order Logic (FOL). The flaw is this: How are we to fill in the "..." in (1)? In the case of English we give this rule. We iterate through all the predicates of English. For each unary predicate Q, the disjunct is obtained by first writing down "P =", then writing down a quotation mark, then writing down Q, then writing down a quotation mark, then writing down "and x1" flanked by spaces, then writing down Q. Then we iterate through all the binary predicates expressible by transitive verbs, and write down ... (I won't bother giving the rule—the "love" line gives the example). We continue through all the other ways of expressing n-ary predicates in English, of which there is a myriad.

Fine, but this is specific to the rules of English grammar, such as the subject-verb-object (SVO) order in the transitive verb case. If we are to have an understanding of what truth and application mean in general, we need a way of generating the disjuncts that is not specific to the particular grammatical constructions of English (or FOL). There are infinitely many ways that a language could express, say, binary predication. The general rule for binary predication will be something like this: Iterate through all the binary predicates Q of the language, and write down (or, more generally, express) the conjunction of two conjuncts. The first conjunct says that P is equal to the predicate Q, and the second conjunct applies Q to x1 and x2. We have to put this in such generality, because we do not in general know how the application of Q to x1 and x2 is to be expressed. But now we've hit a circularity: we need the concept of a sentence that "applies" a predicate to two names. This is a syntactic sense of "applies" but if we attempt to define this in a language independent way, all we'll be able to say is: a sentence that says that the predicate applies to the objects denoted by the names, and here we use the semantic "applies" that we are trying to define.

It's worth, to get clear on the problem, imagining the whole range of ways that a predicate could be applied to terms in different languages, and the different ways that a predicate could be encapsulated in a quoted expression. Think, for instance, of a language where a subject is indicated by the pattern with which one dances, a unary predicate applied to that subject is indicated by the speed with which one dances (the beings who do this can gauge speeds very finely), and a quote-marked form of the predicate is indicated by lifting the left anterior antenna at a speed proportional to the speed with which that predicate is danced. In general, we will have a predicate-quote functor from predicates to nominal phrases and an application functor from (n+1)-tuples consisting of a predicate plus n nominal phrases to sentences. Thus, the Tarskian definition will require us to distinguish the application functor for the language in order to form a definition of truth for that language. But surely one cannot understand what an application functor is unless one understands application, since the application functor is the one that produces sentences that say that a given predicate applies to the denotations of given nominal phrases.

A not unrelated problem also appears in the fact that a Tarskian definition of the language presupposes an identification of the functors corresponding to truth-functional operations like "and", "or" and "not". But it is not clear that one can explain what it is for a functor in a language to be, say, a negation functor without somewhere saying that the functor maps a sentence into one of opposite truth value. And if one must say that, then the definition of truth is circular. (This point is in part at least not original.)

The Tarskian definition of truth can be described in English for FOL and for English. But to understand how this is to be done for a general language requires that one already have the concept of application (and maybe denotation—that's slightly less obvious), and we cannot know how to fill out the disjuncts in the disjunctive definition, in general, without having that concept.

Perhaps Tarski, though, could define things in general by means of translation into FOL. Thus, a sentence s is true in language L if and only if Translation(s,L,L*) is true in L*, where L* is a dialect of FOL suitable for dealing with translations of sentences of L (thus, its predicates and names are taken from L, but its grammar is not that of L but of FOL). However, I suspect that the concept of translation will make use of the concept of application. For instance, part of the concept of a translation will be that a sentence of L that applies a predicate P to x will have to be translated into the sentence P(x). (We might, alternatively, try to define translation in terms of propositions: s* translates s iff they express the same proposition. But if we do that, then when we stipulate the dialect L* of FOL, we'll have to explain which strings express which propositions, and in particular we'll have to say that P(x) expresses the proposition that P applies to x, or something like that.) The bump in the carpet moves but does not disappear.

None of this negates the value of Tarski's definition of truth as a reduction of truth to such concepts as application, denotation, negation (considered as a functor from sentences to sentences), conjunction (likewise), disjunction, universal quantification and existential quantification.
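The point of that reduction can be made vivid with a toy example. Here is a minimal sketch, in Python, of the inductive clauses for a small first-order language over a finite domain; the function name `satisfies`, the tuple encoding of formulas, and the example predicates are all my own illustrative choices, not anything in Tarski.

```python
# A toy sketch (my illustration, not Tarski's own formalism) of the
# inductive definition of satisfaction for a small first-order language
# over a finite domain. Formulas are nested tuples; "apply" plays the
# role of the application functor, and negation, conjunction,
# disjunction and the quantifiers are handled by the recursive clauses.

def satisfies(s, formula, domain, interp):
    """Return True iff the assignment s (a dict from variable names to
    objects) satisfies the formula, relative to a domain and an
    interpretation mapping predicate symbols to sets of tuples."""
    op = formula[0]
    if op == "apply":                      # ("apply", "P", "x1", ..., "xn")
        _, pred, *variables = formula
        return tuple(s[v] for v in variables) in interp[pred]
    if op == "not":
        return not satisfies(s, formula[1], domain, interp)
    if op == "and":
        return (satisfies(s, formula[1], domain, interp)
                and satisfies(s, formula[2], domain, interp))
    if op == "or":
        return (satisfies(s, formula[1], domain, interp)
                or satisfies(s, formula[2], domain, interp))
    if op == "forall":                     # ("forall", "x", body)
        _, var, body = formula
        return all(satisfies({**s, var: d}, body, domain, interp)
                   for d in domain)
    if op == "exists":
        _, var, body = formula
        return any(satisfies({**s, var: d}, body, domain, interp)
                   for d in domain)
    raise ValueError(f"unknown functor {op!r}")

# A sentence (no free variables) is true iff the empty assignment
# satisfies it: here, "everything even is less than five".
domain = {1, 2, 3, 4}
interp = {"Even": {(2,), (4,)}, "LessThanFive": {(1,), (2,), (3,), (4,)}}
sentence = ("forall", "x",
            ("or", ("not", ("apply", "Even", "x")),
                   ("apply", "LessThanFive", "x")))
print(satisfies({}, sentence, domain, interp))  # True
```

Notice where the recursion bottoms out: in the "apply" clause, the evaluator simply presupposes that we can already say when a predicate applies to a tuple of objects, which is just the point made above about the application functor.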

Tuesday, September 1, 2009

Dropping the T-schema

It would be a pity to have to drop the T-schema. But if I had to do that, I'd justify myself as follows. Sometimes sentences of the form "It is true that p" are just an emphatic way of affirming p. (Observe: "It is true that banks lend money" is a statement about banks, not about a proposition or a linguistic item. Yet if there were a real predication of truth, the sentence would be about a proposition or a linguistic item.) In those cases, the T-schema obviously holds. However, these cases are not really cases of talking about truth—they are just a stylistic device, akin to the way that an atheist might say "God knows that p" instead of "p". Unless one is prepared to affirm with deflationists that all uses of "is true" are like that, one cannot generalize from these uses to the more substantial uses, since the two are different uses.

Sunday, July 12, 2009

Irrealism and Tarski

According to Tarski, Schema (T), instances of which have the form:

  1. "..." is true if and only if ...,
where the same text is put for the two instances of "...", is compatible with both realism and irrealism, with correspondence theory and coherentism.

Let's explore this claim. Suppose we are irrealists (never mind that we might then prefer some other term, like "epistemicist") who have some epistemic notion of truth, e.g., a sophisticated version of the claim that S is true if and only if it would be arrived at in the ideal limit of inquiry. Abbreviate the epistemic definition of the truth of S as E(S). I will at times use the ideal limit formulation for explicitness, but it should really be considered a stand-in for whatever more sophisticated story is to be given.

If we accept both Schema (T) and the epistemic definition of truth, then we have to accept every instance of:

  2. E("...") if and only if ....

But (2) gets us into trouble. First of all, if we accept the Law of Excluded Middle (LEM)—that for all p, p or not p—then we have to accept the implausible claim that for all p, E(p) or E(~p). For many values of p, that is simply implausible for any of the epistemic versions of E. Thus, it is not plausible that in the ideal limit of inquiry we will conclude that Napoleon died with an even number of hairs on his head, and it is not plausible that in the ideal limit of inquiry we will conclude that it wasn't the case that Napoleon died with an even number of hairs on his head.
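The inference in this paragraph can be made fully explicit. Below is a small sketch in Lean; it is a simplification of the blog's setup, treating E as an operator on propositions rather than a predicate of quoted sentences, and bundling the instances of (2) into a single hypothesis:

```lean
-- Sketch: if E satisfies the biconditionals E p ↔ p (the instances of
-- (2), with E applied to propositions rather than quoted sentences),
-- then classical excluded middle forces E p ∨ E (¬p) for every p.
theorem epistemic_lem (E : Prop → Prop) (h : ∀ p : Prop, E p ↔ p) :
    ∀ p : Prop, E p ∨ E (¬p) := by
  intro p
  cases Classical.em p with
  | inl hp  => exact Or.inl ((h p).mpr hp)
  | inr hnp => exact Or.inr ((h (¬p)).mpr hnp)
```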

So, our irrealist who accepts (1) will, it appears, have to deny LEM. This shows that Schema (T) is not neutral between realists and irrealists. For while a realist can accept Schema (T) and either believe or not believe LEM, the irrealist is forced by the acceptance of Schema (T) to deny LEM. And if we see LEM as self-evidently true (though that remark begs the question against the intuitionists), then Schema (T) will in fact be unavailable to our irrealist.

Let us consider the irrealism further. Here is an instance of (2) (with the toy version of ideal-limit irrealism):

  3. We would in the ideal limit find out that there are conscious beings in the Andromeda Galaxy if and only if there are conscious beings in the Andromeda Galaxy.

This is a startling claim. Moreover, it is a claim that is part of a large family of equally startling claims relating how things are far away and what we would find out. These claims, furthermore, are not merely accidentally true, since the characterization of truth had better not be an accident.

Let's push on further with instances of (2). For instance:

  4. The ideal limit of inquiry is never reached if and only if in the ideal limit of inquiry we would conclude that the ideal limit of inquiry is never reached.

But the right hand side of the biconditional doesn't hold: in the ideal limit of inquiry we would not conclude that the ideal limit of inquiry is never reached. So, the left hand side doesn't hold. Consequently, we have an a priori argument that the ideal limit of inquiry is reached. But unless one is a theist (who thinks that God has always already reached that ideal limit), it is absurd to suppose we'd have an a priori argument for that: it would give an atheist an a priori argument for the claim that we won't all perish tomorrow. The present example is one that cannot be leveled against irrealists who do not engage in any kind of idealization. But I suspect that non-idealizing irrealist views degenerate into relativism.

If this is all right, then in fact the irrealist cannot afford to accept Schema (T), and Tarski is wrong in thinking Schema (T) is neutral.

But non-acceptance of Schema (T) comes with a price, too. We either have to allow that the truth of "There is conscious life in the Andromeda Galaxy" does not suffice to show that there is conscious life in the Andromeda Galaxy, or we have to allow that there could be conscious life in the Andromeda Galaxy, even though it is not true that there is conscious life in the Andromeda Galaxy. That is absurd. Of course, as an argument, this is question-begging.

Let's see if we can do better. If the irrealist's use of the word "truth" does not conform with Schema (T), the word "truth" does not match what seem pretty clearly to be central cases of our use of the word. Thus, when the irrealist says that "truth" depends on inquiry, the irrealist is not actually talking of what we mean by "truth", and is not disagreeing with the realist. And assuming that the irrealist doesn't say crazy things like (3) and (4), it is not clear wherein the irrealist is being an irrealist. (I would be quite happy if it were shown that irrealism is impossible.) But if the realist can give a correspondence theory of the concept of "truth" that conforms with Schema (T), then the conformity with Schema (T) would be evidence that the realist is not using "truth" in a Pickwickian sense.

To put the main points differently, epistemicism can be first and second order. First-order epistemicism affirms all the instances of (2). Second-order epistemicism affirms all the instances of

  5. "..." is true if and only if E("...").

Now: (a) first-order epistemicism makes sense but is crazy, (b) second-order epistemicism together with Schema (T) leads to first-order epistemicism, and (c) second-order epistemicism without Schema (T) uses the word "truth" differently from how we use it, since our usage is governed, in part, by Schema (T). The challenge for the epistemicist is either to deny that first-order epistemicism is crazy, or to show how second-order epistemicism without Schema (T) is talking about "truth".

Saturday, June 20, 2009

Tarski's (T) schema

Tarski's (T) schema says that:

  1. X is true if, and only if, p
in every case in which X is a "name" for the sentence p. Elsewhere, Tarski makes it clear that every definition of p counts as a "name" for p. So, here's something fun. While, necessarily, every instance of the (T) schema is true, it is not the case that every instance of the (T) schema is necessarily true. For instance, if the first sentence that Janet uttered today is "Snow is white", then the following is an instance of the (T) schema:
  2. The first sentence that Janet uttered today is true if, and only if, snow is white.

Indeed, (2) is true. But (2) is, plainly, not a necessary truth, since Janet's first sentence today could have been different.

Were the (T) schema Tarski's definition of truth, this could be the start of a criticism. For we do expect instances of definitional sentences to be necessary truths. E.g.,

  3. Patrick's best friend is a bachelor if, and only if, Patrick's best friend is a never-married, marriageable man

is an instance of the definition of a bachelor, and it is a necessary truth. The issue here is that standard definitions are of the form:

  4. F(X) if, and only if, G(X)

where X occurs in the definiendum and the definiens. Not so in the (T) schema. But, again, that seems to be alright because the (T) schema, while a material adequacy condition that any definition of truth must satisfy, is not taken by Tarski to be a definition of truth.

Wednesday, June 17, 2009

Semantics

I am reading Tarski's "The Semantic Conception of Truth" and came across this paragraph which I just had to blog:

It is perhaps worth while saying that semantics as it is conceived in this paper (and in former papers of the author) is a sober and modest discipline which has no pretensions of being a universal patent-medicine for all the ills and diseases of mankind, whether imaginary or real. You will not find in semantics any remedy for decayed teeth or illusions of grandeur or class conflicts. Nor is semantics a device for establishing that everyone except the speaker and his friends is speaking nonsense.
(Sorry if there are typos—I am writing this with vim over ssh from my Treo.)