Showing posts with label proof. Show all posts

Thursday, March 6, 2025

Logical consequence

There are two main accounts of ψ being a logical consequence of ϕ:

  • Inferentialist: there is a proof from ϕ to ψ

  • Model theoretic: every model of ϕ is a model of ψ.

Both suffer from a related problem.

On inferentialism, the problem is that there are many different concepts of proof, all of which yield an equivalent relation between ϕ and ψ. First, we have a distinction as to how the structure of a proof is indicated: is it a tree, a sequence of statements set off by subproof indentation, or something else? Second, we have a distinction as to the choice of primitive rules. Do we, for instance, have only pure rules like disjunction-introduction, or do we allow mixed rules like De Morgan? Do we allow conveniences like ternary conjunction-elimination, or idempotence rules? Which truth-functional symbols do we take as undefined primitives and which do we take as abbreviations for others (e.g., maybe we just have a Sheffer stroke)?

It is tempting to say that it doesn’t matter: any reasonable answers to these questions make exactly the same ψ be a logical consequence of the same ϕ.

Yes, of course! But that’s the point. All of these proof systems have something in common which makes them "reasonable"; other proof systems, like ones including a rule of arbitrary statement introduction, are not reasonable. What makes them reasonable is that the proofs they yield capture logical consequence: there is a proof in them from ϕ to ψ precisely when ψ logically follows from ϕ. The concept of logical consequence is thus something that goes beyond them.

None of these are the definition of proof. This is just like the point we learn from Benacerraf that none of the set-theoretic “constructions of the natural numbers” like 3 = {0, 1, 2} or 3 = {{{0}}} gives the definition of the natural numbers. The set theoretic constructions give a model of the natural numbers, but our interest is in the structure they all have in common. Likewise with proof.

The problem becomes even worse if we take a nominalist approach to proof like Goodman and Quine do, where proofs are concrete inscriptions. For then what counts as a proof depends on our latitude with regard to the choice of font!

The model theoretic approach has a similar issue. A model, on the modern understanding, is a triple (M,R,I) where M is a set of objects, R is a set of relations and I is an interpretation. We immediately have the Benacerraf problem that there are many set-theoretic ways to define triples, relations and interpretations. And, besides that, why should sets be the only allowed models?
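The triple picture can be made concrete. Here is a minimal Python sketch (the domain, relation, and sentence are my own illustrations, not anything from the post): a model is just a domain, a relation, and an interpretation, and evaluating a sentence in it is mechanical.

```python
# A toy model (M, R, I): a domain M, a binary relation R, and an
# interpretation I mapping the relation symbol "R" to that relation.
# The model and the sentence checked below are illustrative choices.
M = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 0)}
I = {"R": R}

def holds_forall_exists(domain, interp, symbol):
    """Evaluate the sentence "for all x there exists y with R(x, y)"
    in the model (domain, interp)."""
    rel = interp[symbol]
    return all(any((x, y) in rel for y in domain) for x in domain)

print(holds_forall_exists(M, I, "R"))  # → True: every element has an R-successor
```

On the Benacerraf worry: nothing forces the dict-and-set representation used here; any other encoding of the triple would verify the same sentence, which is exactly the point.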

One alternative is to take logical consequence to be primitive.

Another is not to worry, but to take the important and fundamental relation to be metaphysical consequence, and be happy with logical consequence being relative to a particular logical system rather than something absolute. We can still insist that not everything goes for logical consequence: some logical systems are good and some are bad. The good ones are the ones with the property that if ψ follows from ϕ in the system, then it is metaphysically necessary that if ϕ then ψ.

Monday, March 18, 2019

Σ₁⁰ alethic Platonism

Here is an interesting metaphysical thesis about mathematics: Σ₁⁰ alethic Platonism. According to Σ₁⁰ alethic Platonism, every sentence about arithmetic with only one unbounded existential quantifier (i.e., an existential quantifier that ranges over all natural numbers, rather than over the natural numbers up to some bound), i.e., every Σ₁⁰ sentence, has an objective truth value. (And we automatically get Π₁⁰ alethic Platonism, as Π₁⁰ sentences are equivalent to negations of Σ₁⁰ sentences.)

Note that Σ₁⁰ alethic Platonism is sufficient to underwrite a weak logicism that says that mathematics is about what statements (narrowly) logically follow from what recursive axiomatizations. For Σ₁⁰ alethic Platonism is equivalent to the thesis that there is always a fact of the matter about what logically follows from what recursive axiomatization.

Of course, every alethic Platonist is a Σ₁⁰ alethic Platonist. But I think there is something particularly compelling about Σ₁⁰ alethic Platonism. Any Σ₁⁰ sentence, after all, can be rephrased as a sentence saying that a certain abstract Turing machine will halt. And it does seem like it should be possible to embody an abstract Turing machine as a physical Turing machine in some metaphysically possible world with an infinite future and infinite physical resources, and then there should be a fact of the matter about whether that machine would in fact halt.
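The rephrasing into halting is worth making concrete. Below is an illustrative Python sketch (my own, with a toy witness predicate): an abstract machine for a Σ₁⁰ sentence ∃n φ(n), with φ decidable, just searches n = 0, 1, 2, … and halts exactly when the sentence is true.

```python
def halts_on_witness(witness_pred):
    """Abstract 'Turing machine' for a Σ₁⁰ sentence ∃n φ(n), where φ is
    decidable: enumerate n = 0, 1, 2, ... and halt on the first witness.
    The machine halts iff the sentence is true (it runs forever otherwise)."""
    n = 0
    while True:
        if witness_pred(n):
            return n  # halting verifies the Σ₁⁰ sentence
        n += 1

# Illustrative example: the Σ₁⁰ sentence "∃n: n² = 144" is true,
# so the search halts with the witness.
print(halts_on_witness(lambda n: n * n == 144))  # → 12
```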

There is a hitch in this line of thought. We need to worry about worlds with “non-standard” embodiments of the Turing machine, embodiments where the “physical Turing machine” is performing an infinite task (a supertask, in fact an infinitely iterated supertask). To rule those worlds out in a non-arbitrary way requires an account of the finite and the infinite, and that account is apt to presuppose Platonism about the natural numbers (since the standard mathematical definition of the finite is that a finite set is one whose cardinality is a natural number). We causal finitists, however, do not need to worry, as we think that it is impossible for Turing machines to perform infinite tasks. This means that causal finitists—as well as anyone else who has a good account of the difference between the finite and the infinite—have good reason to accept Σ₁⁰ alethic Platonism.

I haven't done any surveys, but I suspect that most mathematicians would correctly be identified as at least Σ₁⁰ alethic Platonists.

Logicism and Goedel

Famously, Goedel’s incompleteness theorems refuted (naive) logicism, the view that mathematical truth is just provability.

But one doesn’t need all of the technical machinery of the incompleteness theorems to refute that. All one needs is Goedel’s simple but powerful insight that proofs are themselves mathematical objects—sequences of symbols (an insight emphasized by Goedel numbering). For once we see that, then the logicist view is that what makes a mathematical proposition true is that a certain kind of mathematical object—a proof—exists. But the latter claim is itself a mathematical claim, and so we are off on a vicious regress.
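Goedel numbering itself is simple enough to sketch. The following Python snippet (the symbol codes are an illustrative choice, not Goedel's own) encodes a finite symbol sequence as a product of prime powers; by unique factorization the sequence is recoverable from the number, so claims about proofs become claims about numbers.

```python
def primes():
    """Generate the primes 2, 3, 5, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def goedel_number(symbols, code):
    """Encode a symbol sequence s1, s2, s3, ... as
    2**code[s1] * 3**code[s2] * 5**code[s3] * ...
    The `code` dict assigning numbers to symbols is arbitrary but fixed."""
    g = 1
    for p, s in zip(primes(), symbols):
        g *= p ** code[s]
    return g

# Illustrative codes: "0" -> 1, "=" -> 2. Then "0=0" encodes as
# 2**1 * 3**2 * 5**1 = 90.
print(goedel_number(["0", "=", "0"], {"0": 1, "=": 2}))  # → 90
```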

Monday, November 19, 2018

Do we have to know that seven is finite to know that three is finite?

Three is a finite number. How do we know this?

Here’s a proof that three is finite:

  a. 0 is finite. (Axiom)

  b. For all n, if n is finite, then n + 1 is finite. (Axiom)

  c. 3=0+1+1+1. (Axiom)

  d. So, 0+1 is finite. (By a and b)

  e. So, 0+1+1 is finite. (By b and d)

  f. So, 0+1+1+1 is finite. (By b and e)

  g. So, 3 is finite. (By c and f)

Let’s assume we can answer the difficult question of how we know axioms (a) and (b), and allow that (c) is just true by definition.

I want to raise a different issue. To know that three is finite by means of the above argument, it seems we have to know that the argument is a proof.

One might think this is easy: a proof is a sequence of statements such that each non-axiomatic statement logically follows from the preceding ones, and it’s clear that (d)-(g) each follow from preceding statements by well-established rules of logic.

One could ask about how we know these rules of logic to be correct—but I won’t do that here. Instead, I want to note that it is false that every sequence of statements such that each non-axiomatic statement logically follows from the preceding ones is a proof. This is the case only for finite sequences of statements. The following infinite sequence of statements is not a proof, even though every statement follows from preceding ones: “…, so I am Napoleon, so I am Napoleon, so I am Napoleon.”

Very well, so to know that (a)-(g) is a proof, I need to know that (a)-(g) are only finitely many statements. OK, let’s count: (a)-(g) are seven statements. So it seems we have to know that seven is finite (or something just as hard to know) in order to use the proof to know that three is finite.

This, of course, would be paradoxical. For to use a proof analogous to (a)-(g) to show that seven is finite, we would need a proof of eleven steps, and so we would need to know that eleven is finite to know that the proof is a proof.
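The step counts can be checked mechanically. Here is a Python sketch (my own schematization of the (a)-(g) pattern) that generates the proof for any n: three axioms, n inference steps, and a conclusion, i.e., n + 4 lines in all, which gives 7 for three and 11 for seven.

```python
def finiteness_proof(n):
    """Generate the (a)-(g)-style proof that n is finite:
    two general axioms, one definitional axiom, n inference steps,
    and the conclusion -- n + 4 lines in all."""
    lines = [
        "0 is finite. (Axiom)",
        "For all k, if k is finite, then k + 1 is finite. (Axiom)",
        f"{n} = 0" + " + 1" * n + ". (Axiom)",
    ]
    for i in range(1, n + 1):
        lines.append("So, 0" + " + 1" * i + " is finite.")
    lines.append(f"So, {n} is finite.")
    return lines

print(len(finiteness_proof(3)))  # → 7, the lines labelled (a)-(g)
print(len(finiteness_proof(7)))  # → 11
```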

Maybe we can just see that seven is finite? But then we gain nothing by (a)-(g), since the knowledge-by-proof will depend on just seeing that seven is finite, and it would be simpler and more reliable just to directly see that three is finite.

It might be better to say that we can just see that the proof exhibited above, namely (a)-(g), is finite.

It seems that knowledge-by-proof in general depends on recognition of the finite. Or else on causal finitism.

Tuesday, October 30, 2018

Independence of FOL-validity

A sentence ϕ of a dialect of First Order Logic is FOL-valid if and only if ϕ is true in every non-empty model under every interpretation. By the Goedel Completeness Theorem, ϕ is FOL-valid if and only if ϕ is a theorem of FOL (i.e., has a proof from no axioms beyond any axioms of FOL). (Note: This does not use the Axiom of Choice, since we are dealing with a single sentence.)

Here is a meta-logic fact that I think is not as widely known as it should be.

Proposition: Let T be any consistent recursive theory extending Zermelo-Fraenkel set theory. Then there is a sentence ϕ of a dialect of First Order Logic such that according to some models of T, ϕ is FOL-valid (and hence a theorem of FOL) and according to other models of T, ϕ is not FOL-valid (and hence not a theorem of FOL).

Note: The claim that ϕ is FOL-valid according to a model M is shorthand for the claim that a certain complex arithmetical claim involving the Goedel encoding of ϕ is true according to M.

The Proposition is yet another nail in the coffins of formalism and positivism. It tells us that the mere notion of FOL-theoremhood has Platonic commitments, in that only relative to a fixed family of universes of sets (or at least a fixed model of the natural numbers, or a fixed non-recursive axiomatization) does it make unambiguous sense to predicate FOL-theoremhood or its lack. Likewise, the very notion of valid consequence, even of a finite axiom set, carries such Platonic commitments.

Proof of Proposition: Let G be a Rosser-tweaked Goedel sentence for T with G being Σ₁ (cf. remarks in Section 51.3 here). Then G is independent of T. In ZF, and hence in T, we can prove that there is a Turing machine Q that halts if and only if G holds. (Just make Q iterate over all natural numbers, halting if the number witnesses the existential quantifier at the front of the Σ₁ sentence G.) But one can construct an FOL-sentence ϕ such that one can prove in ZF that ϕ is FOL-valid if and only if Q halts (one can do this for any Turing machine Q, not just the one above).

Thus, in T it is provable that ϕ is FOL-valid if and only if G holds. But T is a consistent theory. Since G is independent of T, it follows that the FOL-validity of ϕ is independent of T as well.

Friday, February 23, 2018

A slightly different causal finitist approach to finitude

The existence of non-standard models of arithmetic makes defining finitude problematic. A finite set is normally defined as one that can be numbered by a natural number, but what is a natural number? The Peano axioms sadly underdetermine the answer: there are non-standard models.

Now, causal finitism is the metaphysical doctrine that nothing can have an infinite causal history. Causal finitism allows for a very neat and pretty intuitive metaphysical account of what a natural number is:

  • A natural number is a number one can causally count to starting with zero.

Causal counting is counting where each step is causally dependent on the preceding one. Thus, you say “one” because you remember saying “zero”, and so on. The causal part of causal counting excludes a case where monkeys are typing at random and by chance type up 0, 1, 2, 3, 4. If causal finitism is false, the above account is apt to fail: it may be possible to count to infinite numbers, given infinite causal sequences.

While we can then plug this into the standard definition of a finite set, we can also define finitude directly:

  • A finite set or plurality is one whose elements can be causally counted.

One of the reasons we want an account of the finite is so we get an account of proof. Imagine that every day of a past eternity I said: “And thus I am the Queen of England.” Each day my statement followed from what I said before, by reiteration. And trivially all premises were true, since there were no premises. Yet the conclusion is false. How can that be? Well, because what I gave wasn’t a proof, as proofs need to be finite. (I expect we often don’t bother to mention this point explicitly in logic classes.)

The above account of finitude gives an account of the finitude of proof. But interestingly, given causal finitism, we can give an account of proof that doesn’t make use of finitude:

  • To causally prove a conclusion from some assumptions is to utter a sequence of steps, where each step’s being uttered is causally dependent on its being in accordance with the rules of the logical system.

  • A proof is a sequence of steps that could be uttered in causally proving.

My infinite “proof” that I am the Queen of England cannot be causally given if causal finitism is true, because then each day’s utterance will be causally dependent on the previous day’s utterance, in violation of causal finitism. However, interestingly, the above account of proof does not guarantee that a proof is finite. A proof could contain an infinite number of steps. For instance, uttering an axiom or stating a premise does not need to causally depend on previous steps, but only on one’s knowledge of what the axioms and premises are, and so causal finitism does not preclude having written down an infinite number of axioms or premises. However, what causal finitism does guarantee is that the conclusion will only depend on a finite number of the steps—and that’s all we need to make the proof be a good one.

What is particularly nice about this approach is that the restriction of proofs to being finite can sound ad hoc. But it is very natural to think of the process of proving as a causal process, and of proofs as abstractions from the process of proving. And given causal finitism, that’s all we need.

Tuesday, October 3, 2017

Infinite proofs

Consider this fun “proof” that 0=1:

      …

  • So, 3=4

  • So, 2=3

  • So, 1=2

  • So, 0=1.

What’s wrong with the proof? Each step follows from the preceding one, after all, and the only axiom used is an uncontroversial axiom of arithmetic that if x + 1 = y + 1 then x = y (by definition, 2 = 1 + 1, 3 = 1 + 1 + 1, 4 = 1 + 1 + 1 + 1 and so on).

Well, one problem is that intuitively a proof should have a beginning and an end. This one has an end, but no beginning. But that’s easily fixed. Prefix the above infinite proof with an infinite number of repetitions of “0=0”, to get:

  • 0=0

  • So, 0=0

  • So, 0=0

  • So, 0=0

      …

      …

  • So, 3=4

  • So, 2=3

  • So, 1=2

  • So, 0=1.

Now, there is a beginning and an end. Every step in the proof follows from a step before it (in fact, from the step immediately before it). But the conclusion is false. So what’s wrong?

The answer is that there is a condition on proofs that we may not actually bother to mention explicitly when we teach logic: a proof needs to have a finite number of steps. (We implicitly indicate this by numbering lines with natural numbers. In the above proof, we can’t do that: the “second half” of the proof would have infinite line numbers.)

So, our systems of proof depend on the notion of finitude. This is disquieting. The concept of finitude is connected to arithmetic (the standard definition of a finite set is one that can be numbered by a natural number). So is arithmetic conceptually prior to proof? That would be a kind of Platonism.

Interestingly, though, causal finitism—the doctrine that nothing can have an infinite causal history—gives us a metaphysical verificationist account of proof that does not presuppose Platonism:

  • A proof is a sequence of steps such that it is metaphysically possible for an agent to verify, by observation of each step, that each step follows by the rules from the preceding steps and/or the axioms.

For, given causal finitism, only a finite number of steps can be in the causal history of an act of verification of a proposition. (God can know all the steps in an infinite chain, but God isn’t an observer: an observer’s observational state is caused by the observations.)


Thursday, August 19, 2010

Proof

The folk seem to think that science can "prove" things. I used to think this just meant that they were confused about how science works. But there is a more charitable reading, which I got from a comment by Dan Johnson on prosblogion. Rather than taking the folk to be confused about how science works, we can take seriously the idea that meaning is a function of use, and take the folk to simply use the word "prove" differently from how philosophers do. The legal sense of "prove", as in "prove beyond reasonable doubt", seems to prove the point. :-) For if it is not otiose to specify that a proof is "beyond reasonable doubt", it must be possible to prove in a way that admits doubt! And hence a "proof", in the ordinary sense of the word, does not mean what philosophers and mathematicians mean by the word.

Tuesday, March 24, 2009

Teaching logic

While teaching logic for the first time (I am teaching graduate students, but I haven't taught any logic before), I've finally realized how much I like constructive proofs. I gave a completely constructive proof of weak propositional completeness, by giving a simple proof algorithm that underlies this perl prover program. Basically, you start by proving excluded middle for each atomic. Then you just do proof by cases on all possible combinations of truth values of the atomics. This only works for a finite number of atomics, so it only gives weak completeness. But the proof method is entirely constructive, and simple enough that one can use it oneself (e.g., I used it myself when proving one of the De Morgan directions in class). I am not claiming it's at all new—it's not—but I had fun rediscovering it.

Then, today or Thursday (depending on whether time allows), I'll give a semantic proof of propositional compactness in the case of countably many atomics. (I am sure this proof has a name attached to it, but it's easier to make up than to look up.) Again, it's going to be entirely constructive, and pretty simple. It just uses induction and the lemma that if A is f-satisfiable (i.e., every finite subset of A is satisfiable), and Q is an atomic, then either A plus Q is f-satisfiable, or A plus not-Q is f-satisfiable. As it stands, this only works for countably many atomics, but in fact it also works for any case where there is a well-ordering on the atomics (and hence by the Axiom of Choice, it extends to the general case; but it is kind of nice to note that once one fixes the well-ordering on the atomics, it's entirely constructive). I'm not a logician. None of this is at all new. But I'm having lots of fun, hopefully of an innocent variety.
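The lemma-driven construction can be sketched for a finite clause set, where f-satisfiability just is satisfiability. In this Python sketch (the clause encoding is my own choice, not from the post), the valuation is built by extending the assignment one atomic at a time, at each step choosing Q or not-Q so that satisfiability is preserved:

```python
from itertools import product

def satisfiable(clauses, partial, atoms):
    """Brute force: can `partial` be extended over the remaining atoms so
    that every clause is satisfied? A clause is a list of (atom, polarity)
    literals, satisfied when some literal matches the valuation."""
    rest = [a for a in atoms if a not in partial]
    for vals in product([True, False], repeat=len(rest)):
        v = {**partial, **dict(zip(rest, vals))}
        if all(any(v[a] == pol for a, pol in c) for c in clauses):
            return True
    return False

def build_valuation(clauses, atoms):
    """Mirror the lemma: extend the assignment one atomic Q at a time,
    picking Q or not-Q so that satisfiability is preserved."""
    partial = {}
    for q in atoms:
        for val in (True, False):
            partial[q] = val
            if satisfiable(clauses, partial, atoms):
                break
        else:
            return None  # the clause set was unsatisfiable to begin with
    return partial

# (p) and (not-p or q): the construction picks p = True, then q = True.
print(build_valuation([[("p", True)], [("p", False), ("q", True)]],
                      ["p", "q"]))
```

For infinitely many atomics the same extension step runs along the well-ordering; the code is only the finite base case of that construction.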

But I have also made an interesting sociological discovery: Students find set theory harder than first order logic. They seem to think of set theory as more abstract and strange. This was really weird to me at first, but that's just because I've been using sets for the larger part of my life, and only got to rigorous first order logic recently. Next time I teach the material, I will have to move more slowly through the beginning of set theory to make it clear that "member of" is not transitive but "subset of" is, that {{1,2,3}} has only one member, and so on. (Why, oh why, don't they teach computer programming starting with Grade 2 in all schools, so one could suppose familiarity with various kinds of abstract data structures by the time of college, which would allow for a nice stock of common concepts? It's a more useful intellectual and practical skill than, say, cursive handwriting.)

Thursday, January 29, 2009

Propositional logic

I am now teaching Fitch-style propositional logic proofs with and, or and not. I had some fun last night and wrote a fun little proof generator in perl (or, perhaps more precisely, modperl). This morning I'm going to teach the students the simple brute-force method that the code uses (basically, just go through all the possible combinations of truth values), which I assume is pretty standard. While the method tends to generate proofs that are unduly long, I think there is a value in having a method that is guaranteed to work even if one is suffering from prover's block. Besides, the algorithm makes it intuitively clear why truth-table completeness holds for propositional logic.
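The semantic core of that brute-force method, going through all the possible combinations of truth values, is easy to sketch in Python (the original prover is in perl; the tuple encoding of formulas below is my own choice):

```python
from itertools import product

def evaluate(formula, v):
    """Formulas are nested tuples: ("and", a, b), ("or", a, b),
    ("not", a), or an atomic name (a string). `v` maps atomics to bools."""
    if isinstance(formula, str):
        return v[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], v)
    if op == "and":
        return evaluate(args[0], v) and evaluate(args[1], v)
    if op == "or":
        return evaluate(args[0], v) or evaluate(args[1], v)
    raise ValueError(op)

def atoms_of(formula):
    """Collect the atomic names occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(atoms_of(a) for a in formula[1:]))

def tautology(formula):
    """Go through all possible combinations of truth values of the atomics."""
    names = sorted(atoms_of(formula))
    return all(evaluate(formula, dict(zip(names, vals)))
               for vals in product([True, False], repeat=len(names)))

# One De Morgan direction, rewritten as a disjunction:
# not(p and q) -> (not-p or not-q)  becomes  (p and q) or (not-p or not-q).
de_morgan = ("or", ("and", "p", "q"),
                   ("or", ("not", "p"), ("not", "q")))
print(tautology(de_morgan))  # → True
```

A Fitch-style proof generator then turns each row of this enumeration into one case of a big proof by cases; the checker above is just the semantic certificate that such a proof exists.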

I was going to post a link to a web service that runs the prover. But I then thought that a student at another institution might cheat with it (even though the proofs generated have a somewhat identifiable look, and there is always the risk of bugs in my code), and so I didn't do it. If you're a faculty member, or are someone I know and trust, and are curious to see the code run, email me and I might send you a private link. Of course, someone determined on cheating can use the source code that I posted, but I bet there is other source code posted online that does things like that.

[The code in the link continues to evolve. For instance, it now does some easy simplifications optionally. - Note added later]

Tuesday, January 13, 2009

Mathematical proofs

Once as a grad student I handed in a proof to a logician. The proof was a good proof—by the standards of mathematicians (the proof was of a probabilistic fact, and I had previously published a number of peer-reviewed articles in probability theory, so I knew what I was doing). The logician absolutely hated it and did not think it was a proof.

What a logician means by a "proof" and what a mathematician means by a "proof" are different. I think the difference is roughly this: A mathematician's proof is an informal argument that there exists a logician's proof.