Monday, February 28, 2011

Syntactic self-reference without diagonal lemma or Gödel numbers

For the proof of Goedel's incompleteness theorem and in work on the Liar Paradox it is usual to use the Diagonal Lemma to secure self-reference. The challenge of self-reference is this. Given a predicate R, find a syntactically definable predicate P such that
  1. (s)(P(s) → R(s))
is provably the one and only sequence of symbols satisfying P. Then (1) says that R holds of (1) itself. (To get the (Strengthened) Liar Paradox, just make R(s) say that s is not true.) But the proof of the diagonal lemma is hard to understand.

I find the following way of securing self-reference easier to understand. Start with a language that has nestable quotation marks, which I'll represent with ‘...’, and some string manipulation tools. I'll use straight double quotation marks for meta-language quotation. Add to the language a new symbol "@" which is ungrammatical (i.e., no well-formed formula may contain it). For any sequence of symbols s, we define two new sequences of symbols N(s) and Q(s) by the following rules. If s contains no quoted expressions or contains imbalanced opening and closing quotation marks, N(s) and Q(s) are just "@". If s contains a quoted expression, Q(s) is the first quoted expression, without its outermost quotation marks (but with any nested quotations being included), and N(s) is the result of taking s and replacing that first quoted occurrence of Q(s), as well as its surrounding single quotation marks, with "@". Thus:
  2. Q("abc‘def‘ghi’’+jkl")="def‘ghi’"
  3. N("abc‘def‘ghi’’+jkl")="abc@+jkl".
It is easy to see that Q and N are syntactically defined. Now, let M(s) be equal to N(s) if N(s)=Q(s) and let M(s) be an empty sequence "" otherwise. Again, M(s) is syntactic. Now consider this sentence:
  4. (s)(‘(s)(@=M(s) → R(s))’=M(s) → R(s)).
It is easy to prove (given a bit of string manipulation resources) that the only sequence s that satisfies the antecedent of the conditional is (4) itself. So we have constructed the syntactic predicate P(s). It is: ‘(s)(@=M(s) → R(s))’=M(s).
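These syntactic operations are straightforward to implement. Here is a Python sketch (my own illustration, using the Unicode single quotation marks for the object language's nestable quotes); applying M to sentence (4) recovers the quoted expression, witnessing the fixed point:

```python
# A sketch of the syntactic operations Q, N and M, with the Unicode
# single quotation marks as the object language's nestable quotes.
OPEN, CLOSE, AT = "\u2018", "\u2019", "@"

def _first_quote_span(s):
    """Span (i, j) of the first quoted expression, outer marks included,
    or None if there is no quoted expression or quotes are imbalanced."""
    depth, start, span = 0, None, None
    for i, ch in enumerate(s):
        if ch == OPEN:
            if depth == 0 and span is None:
                start = i
            depth += 1
        elif ch == CLOSE:
            depth -= 1
            if depth < 0:
                return None  # a closing mark with no matching opener
            if depth == 0 and span is None:
                span = (start, i)
    return span if depth == 0 else None

def Q(s):
    """First quoted expression, outermost marks stripped, nested ones kept."""
    span = _first_quote_span(s)
    if span is None:
        return AT
    i, j = span
    return s[i + 1 : j]

def N(s):
    """s with its first quoted expression (marks included) replaced by "@"."""
    span = _first_quote_span(s)
    if span is None:
        return AT
    i, j = span
    return s[:i] + AT + s[j + 1 :]

def M(s):
    return N(s) if N(s) == Q(s) else ""

# The examples from the text:
print(Q("abc\u2018def\u2018ghi\u2019\u2019+jkl"))  # def‘ghi’
print(N("abc\u2018def\u2018ghi\u2019\u2019+jkl"))  # abc@+jkl
```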

One can also adapt this to work with Goedel numbers and hence presumably for use in proving incompleteness.

[Removed a nasty typo.]

Sunday, February 27, 2011

Desiderata for a theory of atonement

My previous post on atonement implicitly identified one constraint ("must") and one desideratum ("should") for a theory of atonement:

  1. The theory must be able to apply in cases where the person saved lacks personal sin.
  2. The theory should not require explicit beliefs on the part of the person saved.
There is another desideratum that I think is important but somewhat vague:
  3. At least one of the facts that Jesus Christ actually lived among us, died on the cross and rose again should in every case be central to the mechanism of salvation.

This condition rules out theories on which the mechanism of atonement is that we are transformed by the example of Jesus Christ (this will be a subset of what my previous post calls "epistemic theories"). For in those theories, the central part of the mechanism of atonement is not that Jesus Christ actually lived, died and rose again, but that we believe that Jesus Christ actually lived, died and rose again. The reason Jesus Christ had to actually live, die and rise again is not for the mechanism of salvation to work, but only because God is not a deceiver and so God could not teach us that Jesus Christ lived, died and rose again unless this was actually true. But the soteriologically important thing on such theories is the belief that this happened, not that this happened. And hence such theories are unsatisfactory.

Saturday, February 26, 2011


I was once rather amused by an undergraduate in my Philosophy of Love and Sex class who complained that the sexual ethics material we were reading only applied to humans. On that topic, I rather enjoyed this story.

Mints, cats, double effect and proportionality

It's noon. You and two other innocents, A and B, are imprisoned by a dictator in separate blast-proof cells. All the innocents are strangers, and you know of no morally relevant differences between them (whether absolutely or relative to you). A's and B's cells both contain bomb and timer apparatuses that A and B cannot do anything about. B's bomb timer is turned off. A's timer is set to blow her up at 1:00 pm. In your cell, there is a yummy mint on a weight-sensitive switch connected to the apparatus in B's cell. If the mint is removed, B's timer will be set to go off at 1:00 pm. The dictator will check up on the situation shortly before 1:00 pm, and will turn off A's timer if you've done something that caused B's timer to turn on. Anybody who survives past 1:00 pm will then be released.[note 1]

So you reason to yourself. "I like mints. If I eat the mint, I will cause B's death, but A will be saved. My causing of B's death will be non-intentional, and on balance the consequences to human life are neutral. But I get a mint out of it. So the Principle of Double Effect should permit me to eat the mint."

If this reasoning is good, the Principle of Double Effect is close to useless. Strict deontologists think it's wrong to kill one innocent to save millions. Most think it's wrong to kill one innocent to save two. But just about every deontologist will say that it's wrong to kill one innocent to save one innocent and one cat. Now, consider this case. The dictator hands you a gun, and tells you that if you don't kill innocent B, she'll kill innocent A and a cat. You clearly shouldn't. But if you thought it was acceptable to take the mint, then you could reason thus: "It would be interesting to see what a bullet hole in a shirt pocket looks like (and the shirt doesn't belong to B—it is prison attire, belonging to the dictator). If I aim the gun at B's shirt pocket, and press the trigger, the bullet will make a hole in the shirt pocket. And as a non-intended side-effect, it will subsequently cause B's death. But that's fine, because on balance the consequences to human life are neutral, as then A will be saved—plus a cat!" And since you can always think up some minor good that is served by pulling a trigger (finger exercise, practice aiming, etc.), you will get results any deontologist should reject.

So something is wrong with the reasoning—or Double Effect is wrong. I do not think, however, that Double Effect is wrong—I think it's indispensable. So what I will say is this. Double Effect requires that the evil effect not be intended and that there be a proportionality between the side-effect and the intended effect. What the above cases show is that, as a number of authors have noted, proportionality is not a matter of utilitarian calculation. Not only should we have on-balance positive consequences, but the intended effect should be a good proportionate to the foreseen evil. And the foreseen evil is not "that one person fewer will be alive than otherwise", but the foreseen evil is that a particular person should die. The deaths of different people are incommensurable evils even when we know no morally significant differences between the people.

In some cases the virtuous agent may count the numbers of people. But not in these cases. It is callous and unloving to get a mint or produce a bullet hole at the cost of B's death. It trivializes the value of B's life. There is a dilemma here. Either one is acting in the way that causes B's death for the sake of saving A, or not. If one is not, then B literally died so that one might have a mint or be intellectually gratified by the sight of a bullet hole. And so one trivializes B's life. If one is acting to save A, then one is not trivializing B's life. But in that case one is intending B's death, and deontology forbids that.

Here is a variant analysis that comes to the same thing, perhaps. There are cases where one can only do something in one of two ways: by intending a basic evil or by having a morally vicious set of intentions. The cases I gave are like that: one can only take the mint or produce the bullet hole by intending B's death or by having a set of intentions that trivialize B's life. In either case, one is unloving to B. It's hard to say which is the worse.

(This is related to the looping trolley case. There, I think one is either intending the absorption of kinetic energy by the one person, which is problematic, or one is intending a slight increase in length of life or a slight increase in probability of survival on the part of the five, which trivializes the death of the one.)

Friday, February 25, 2011

Epistemic theories of the atonement

Every orthodox Christian agrees that:

  1. Salvation occurs at least in part because of Christ's death on the cross.
The "at least in part" is because Christ's earlier life and subsequent resurrection no doubt play a role. It is also uncontroversial that this has something to do with atonement and sin, but there are many theories here. Epistemic theories say:
  2. The explanatory connection between Christ's death on the cross and the salvation of an individual always involves the individual's epistemic encounter with Christ's crucifixion.
For instance, it may be that Christ's death expresses to the sinner the weight of the sinner's sin and seeing the free acceptance of the penalty transforms the sinner. Epistemic theories as I defined them need not hold that the epistemic encounter is the whole story. Someone could, for instance, hold that there are two essential components to atonement, one of them an epistemic component and the other a penal substitution component. Such a theorist would count as an epistemic theorist.

But there is a plausible argument against this:

  3. Nobody is saved except because of Christ's death on the cross.
  4. Some are saved who have no epistemic encounter with Christ's crucifixion.
  5. Hence, the explanatory connection between Christ's death on the cross and salvation does not always involve an epistemic encounter with Christ's crucifixion.
And so, it seems, epistemic theories of atonement are false.

I think (3) is a central part of Christian orthodoxy, assuming that by "nobody" we mean no human beings other than Christ (contextually restricted quantifiers!). One way to see this is to consider the debate over Mary's Immaculate Conception. The doctrine says that Mary was conceived without original sin. Probably the deepest theological objection to the doctrine has centered on arguments that the doctrine is incompatible with (3). If rejecting (3) were an option for a Christian, the defenders of the doctrine would have had ample motivation to reject (3). But they didn't—instead, they offered theories that attempted to reconcile (3) with the Immaculate Conception. It is not my point to evaluate the arguments for or against the Immaculate Conception (though of course I do accept the Immaculate Conception) but simply to note that both sides admitted that (3) is non-negotiable.

Now, it may seem that (4) directly contradicts the epistemic view (2), and hence begs the question. That's not quite right. Claim (2) is that whenever there is an explanatory connection between Christ's sacrifice and salvation, that connection is at least in part epistemically mediated. As far as that goes, this is compatible with the possibility, denied by (3), that some are saved without any such explanatory connection.

Why accept (4)? Because of the following three classes of persons:

  • Jews and gentiles who were saved prior to the time of Christ.
  • Those who are saved without ever hearing about Christ's death.
  • Those (e.g., at least baptized infants) who are saved despite dying prior to having developed an ability to have an epistemic encounter with Christ's crucifixion.
In each of these types of cases, it certainly seems that we have (4).

I want to consider now one kind of reply. We could modify (2) by restricting the quantifiers. For instance, we could apply (2) only to those who have achieved the age of reason and posit that all who die prior to the age of reason are saved, thereby ruling out the third class of cases as offering an argument for (4). This would be an unacceptable variant of Pelagianism. The person who died in infancy would be saved not by Christ, but by natural causes—namely, the causes of death. If some who die in infancy are saved—and certainly at least those baptized people who die in infancy are saved—even they had better be saved only by Christ.

Or we could, if we were willing to bite the bullet on the case of infants in some way, restrict the quantifiers in (2) not to apply to those who died prior to Christ's death, thereby ruling out the first class of examples as offering an argument for (4). I think this, too, is a kind of Pelagianism. Moreover, consider the weirdness of supposing that an Inuit who died at 2:59 pm on Good Friday could be saved apart from the cross, while an Inuit who died two minutes later needed to be saved by the cross.

Another move one might make would be to deny that, at least since the time of Christ's death, anyone is saved without ever hearing the Gospel. This is a hard-line response to my argument. For sociological reasons, I suspect this response to my argument is not going to be that popular. I suspect that most of the people who take a hard-line on those who die without hearing about Christ's death take some substitutionary sacrifice theory of the atonement. This is not because there is a good logical connection between these two views—indeed, substitutionary sacrifice theories of the atonement appear to me to be our best bet for explaining how one can be saved without expressly hearing the Gospel—but simply because the kind of tough-mindedness that inclines one to a hard-line on salvation outside the apparent boundaries of the Church is apt to incline one to a substitutionary sacrifice theory.

A different response is that a transformative epistemic encounter with the crucifixion occurs after death for those who are saved despite having died without hearing about Christ's death. Such a view would not only be committed to post-death purgation—i.e., to purgatory. That is not a problem. But it would, further, require the thesis that baptized infants who die prior to hearing about Christ's sacrifice go to purgatory, if only for an instant, and that view simply seems wrong. For one, it downplays the effects of baptism.

One might, however, suppose a miraculous epistemic encounter prior to death. God can miraculously make it possible for an infant, or even embryo, to understand the central doctrines of Christianity, whether explicitly or more vaguely. That this view posits a miracle is no objection. Salvation always involves a miracle. I do not know how plausible this way out will be for particular epistemic theorists. But I think in the end this is the only satisfactory account available to them.

So, unless one wants to posit a miraculous raising of intellectual abilities—and I do not reject this option—epistemic theories of atonement should be rejected.

But I don't think the substitutionary sacrifice theorist is off the hook either. For the above argument gives us a necessary condition for a theory of atonement: it must explain the connection between Christ's sacrifice and the salvation of an infant. If the theory is that Christ is paying the penalty for the individual's sin, then that theory will not be sufficient to account for the salvation of infants who have never committed any sins.

There are two separate issues here, I think. One is the issue of overcoming personal sin. That issue does not come up for the infant, as far as we know (I am inclined to some epistemic caution on this point). The other is the issue of attaining salvation. Many Catholic theologians have said that lack of personal sin is insufficient for salvation. A supernatural love is necessary and sufficient for salvation, a love that can only come from grace. Atonement is not only atonement for sin. It is, as its corny but apparently genuine "at-one-ment" etymology indicates, a matter of uniting us with God. While sin keeps us from union with God, union with God is not constituted by the absence of sin. It requires something more than absence of sin. And for fallen humanity, even in the case of non-sinful members such as infants, this "something" more must be held to come from the Cross. A puzzle or maybe even mystery, then, is how it is that the "something else", the supernatural agapê, comes from Christ's sacrifice. I am inclined to think that a crucial component here is that by our membership in the Body of Christ, Christ's sacrifice is our sacrifice, and the agapê of his sacrifice is our agapê.


An agent is omnirational provided that

  1. whenever he makes a decision, he is impressed by all the unexcluded reasons that there in fact are for him for all the relevant options, being impressed by a reason exactly to the extent to which he has reason to be impressed by it in virtue of the reason's force and the force of relevant higher order reasons
  2. when he decides to do A, he does A for all the unexcluded reasons that he in fact has for doing A.
In the case of an omnirational being who has multiple potential unexcluded reasons for an action, there is no difficulty to the question which reasons he actually acted on—in fact, he acted on them all.

If God is simple, he is omnirational. And God is simple.

Thursday, February 24, 2011

Non-natural facts explain

Consider this thesis:

  1. Only natural facts explain contingent truths.
(So if there are non-natural facts, they are explanatorily epiphenomenal.) I will argue that (1) is false.


  2. Only facts knowable by the scientific method are natural facts.
  3. The non-existence of immaterial beings that do not interact with the physical world is not knowable by the scientific method.
Now, consider the coherent but false theory that there are two generations of highly intelligent supernatural mathematicians, the first of which are called the "Great Ones" and the second of which are called the "Daughters of the Great Ones", and that they do not interact with the physical world. Then:
  4. It is a contingent fact that the Daughters of the Great Ones don't exist.
  5. That the Daughters of the Great Ones don't exist is explained by the fact that the Great Ones don't exist.
  6. That the Great Ones don't exist is not knowable by the scientific method.
And so:
  7. That the Great Ones don't exist is a non-natural fact that explains the contingent fact that the Daughters of the Great Ones don't exist.
  8. Therefore, at least one non-natural fact explains a contingent truth.
And this contradicts (1). So, (1) is false. (And if negative states of affairs can be causes, then maybe we can even say that the non-existence of the Great Ones causes the non-existence of their Daughters.)

Wednesday, February 23, 2011

Deviant logic

In the chapter on deviant logic in his philosophy of logic book, Quine makes the claim that:

  1. The deviant logician changes the subject rather than disagreeing with classical logic.
Thus, the logician who denies excluded middle is not using the words "or" and "not" to indicate disjunction and negation. She is, perhaps with good reason (though Quine is sceptical of that), using some other connectives. Thus, when she denies "not not p entails p", she is not disagreeing with us when we assert "not not p entails p". The basic thought running behind this is that:
  2. The rules of classical logic are grounded in the meanings of the logical connectives (using "connectives" very widely to include negation, quantifiers, etc.)
and so any departure from the rules is a change of subject.

There is a powerful kind of argument against deviant logic here. Claims (1) and (2) seem to tell us that it is not really possible to disagree with classical logic without self-contradiction. I am either using my words in the sense that they have in classical logic, in which case I had better not disagree with classical logic on pain of contradiction, or else I am using the words in a different sense and hence not disagreeing.

I now want to describe a class of apparently non-classical logics that do not change the subject. Thus, either a deviant logician doesn't always change the subject, or else these logics are not actually deviant. The idea is this. We have rules like:

  • You can infer p from p.
  • If r is a conjunction of p with q, then you can infer r from p and q, p from r and q from r (conjunction introduction and elimination).
  • If r is a negation of a negation of p, then you can infer p from r.
  • If you can infer p and a negation of p from r, and s is a negation of r, then you can infer s from r.
And so on. The interesting thing is that the second rule does not tell us that p and q have a conjunction. And indeed that is how I am imagining the system deviating from classical logic. We simply disallow certain conjunctions, negations, etc.—there will be sentences that perhaps have no negation, and pairs of sentences that perhaps have no conjunction. If we represent the language along the lines of First Order Logic, there may be cases where "A" is a sentence and "B" is a sentence but "A and B" counts as malformed. The rules for disallowing combinations may take all sorts of forms. For instance, we might simply prohibit any sentences that contain a double negation. This would result in a severe intuitionist-type limitation on what can be proved.
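As a toy illustration of such a system (my own sketch, using the hypothetical formation rule that bans double negations): sentences are syntax trees, and conjunction or negation introduction simply fails to yield a sentence when the would-be compound is not well-formed.

```python
# A toy "partial" classical logic: the connectives are classical, but
# some compound sentences simply fail to exist. Here the (hypothetical)
# formation rule bans any double negation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    body: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

def well_formed(s):
    if isinstance(s, Atom):
        return True
    if isinstance(s, Not):
        # ban double negations outright
        return not isinstance(s.body, Not) and well_formed(s.body)
    if isinstance(s, And):
        return well_formed(s.left) and well_formed(s.right)
    return False

def conjoin(p, q):
    """Conjunction introduction: the conjunction of p and q,
    if such a sentence exists in the language; otherwise None."""
    r = And(p, q)
    return r if well_formed(r) else None

def negate(p):
    """Negation: the negation of p, if it exists; otherwise None."""
    r = Not(p)
    return r if well_formed(r) else None

a, b = Atom("A"), Atom("B")
assert conjoin(a, b) == And(a, b)   # "A and B" exists
assert negate(negate(a)) is None    # "not not A" is malformed: no such sentence
```

The rules of inference are untouched; it is only the stock of sentences that shrinks.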

The logic, thus, has standard classical rules in an important sense. The rules are correct whenever they can be applied—whenever there are output sentences that work. The subject is not changed—"or" means or, "and" means and, and "not" means not—but it can be a substantive claim whether for a pair of sentences A and B, there is a sentence that we might wish to denote "A or B".

This restriction does not count as a change of subject. Indeed, Quine himself notes that there can be languages which are incapable of translating all the English truth-functionally and quantificationally connected sentences, and he seems to think that these languages do have connectives that mean the same thing as English ones. In fact, English itself has restrictions on the formation of sentences. Past several levels of embedding, there just is no way to make distinctions. You probably can't express "(A or (A and not (B or (B and (C or D) and E) or F) and not A))" in English. Yet English does not have a deviant logic. It's just that English's logic is likely incomplete.

There are two ways of looking at this. One way is to say that what I have offered is a family of genuinely deviant logics that don't change the subject, and hence that Quine's argument against deviant logics fails. The other way—and it is what I prefer—is to say that what I have given is in an important sense a family of non-deviant, and even classical, logics, but one that differs from First Order Logic.

I think it could be a good thing to define the connectives in terms of valid inference (perhaps understood in terms of entailment). For instance, one might say that:

  3. A partially-defined functor C that takes a pair of sentences p and q into a new sentence C(p,q) is a conjunction if and only if you can validly infer p as well as q from C(p,q) and C(p,q) from the pair of premises p and q whenever C(p,q) is defined.
(We also need an extension to wffs.) If we do this, then excluded middle is true by definition in the following sense:
  4. Whenever p is a disjunction of q with a negation of q, then p is true.
But no claim is made that every sentence has a negation or that every pair of sentences has a disjunction. That would be a substantive claim. But whenever a sentence has a negation and can be disjoined with that negation, the result of the latter disjunction is true. That is a claim that is true by definition of "negation" and "disjunction".

This also lets one stipulate into place new connectives like tonk. Tonk is a connective such that one can infer q from "p tonk q" and "p tonk q" from p. The problem with tonk is that once one has the connective, it seems one can derive anything (e.g., 1+1=2, so "1+1=2 tonk q", so q, for any q). But not quite. One can only derive everything with tonk if one adds the additional thesis that sufficiently many pairs of sentences have tonks. For instance, if we grammatically restrict tonking so that one is only allowed to tonk a sentence with itself, we can continue to have a sound logic.
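To see that the grammatical restriction really blocks the collapse, here is a small Python sketch of my own: we close a premise set under the two tonk rules over a finite stock of candidate sentences, but since only self-tonks exist in the restricted grammar, an arbitrary sentence q stays underivable.

```python
# Sketch: with tonking grammatically restricted to self-tonking
# ("p tonk p" only), the tonk rules cannot derive arbitrary sentences.
# Sentences are strings; "p tonk q" is modeled as ("tonk", p, q).

def close(premises, candidates):
    """Close a premise set under tonk-introduction and tonk-elimination,
    over a finite stock of candidate sentences."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in candidates:
            # introduction: from p, infer "p tonk q" -- but in the
            # restricted grammar only "p tonk p" is a sentence at all
            if (isinstance(s, tuple) and s[0] == "tonk"
                    and s[1] == s[2] and s[1] in derived
                    and s not in derived):
                derived.add(s)
                changed = True
        for s in list(derived):
            # elimination: from "p tonk q", infer q
            if isinstance(s, tuple) and s[0] == "tonk" and s[2] not in derived:
                derived.add(s[2])
                changed = True
    return derived

# ("tonk", "p", "q") is listed as a candidate, but the restricted
# introduction rule can never produce it, so q is never derived.
candidates = ["p", "q", ("tonk", "p", "p"), ("tonk", "p", "q")]
result = close({"p"}, candidates)
assert ("tonk", "p", "p") in result
assert "q" not in result
```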

Why care about such logics? Well, they might be helpful with the Liar Paradox. They might provide a way of doing the sort of thing that Field does to resolve the Liar by invoking a deviant logic but within a logic that has all the classical rules of inference.

I think Sorensen's "The Metaphysics of Words" [PDF] is very relevant to the above.

Tuesday, February 22, 2011

An argument for the material conditional account of indicatives

The material conditional account of indicatives is that "If s, then u" is true if and only if s is false or u is true or both.

  1. (Premise) If the indicative conditional has the same truth values as the material conditional in the standard cases which are alleged to be counterexamples to the material conditional account, then the material conditional account is correct.
  2. (Premise) The indicative conditional has mind-independent truth value.
  3. (Premise) If the indicative conditional has mind-independent truth value, then it has the same truth values as the material conditional in the standard cases which are alleged to be counterexamples to the material conditional account.
  4. Therefore, the material conditional account is correct.
In this argument, I am convinced of premises (1) and (3), but not sure of premise (2). Consequently, what the argument convinces me of is that either (2) is false or (4) is true. Premise (1) is not that controversial, I think. The material conditional account is simple and elegant, verifies modus ponens and contraposition, is well-defined and mind-independent. The only problem is that it appears to give the wrong answers for certain standard cases. If this appearance were undercut, the material conditional account would be the winner.
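The logical virtues appealed to here, that the material conditional verifies modus ponens and contraposition, are easy to machine-check by brute truth-table enumeration; a minimal Python sketch:

```python
# The material conditional: "if s then u" is false only when s is
# true and u is false. A truth-table check that it validates modus
# ponens and contraposition.
from itertools import product

def mc(s, u):
    """Material conditional: not-s or u."""
    return (not s) or u

for s, u in product([True, False], repeat=2):
    # modus ponens: from s and mc(s, u), u follows
    if s and mc(s, u):
        assert u
    # contraposition: "if s then u" and "if not-u then not-s" agree
    assert mc(s, u) == mc(not u, not s)
```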

The hard work is going to be to justify (3). Let us start by giving three representative alleged counterexamples, classified by the truth values of the antecedent and consequent:

  5. "If I will have dinner with the queen tonight, I will eat dinner tonight in my pajamas." (Antecedent and consequent are both false.)
  6. "If I will have dinner with the queen tonight, everyone that I will have dinner with tonight will be a family member." (Antecedent is false and consequent is true.)
  7. "If it is snowing in the United States, it is snowing in Central Texas." (Suppose this was uttered a couple of days ago when it was snowing in Central Texas. Antecedent and consequent were both true.)
The material conditional account says that all three conditionals are true. But all three conditionals sound wrong (assuming I am not a member of the royal family and that I wouldn't wish to insult the queen).

I will argue that:

  8. If the indicative conditional has mind-independent truth value, then (5)-(7) are all true.
The method of argument generalizes to all the standard counterexamples, and thus yields (3).

Here's the way I will argue for (8). Let "a" be the antecedent in the alleged counterexample. Let "c" be the consequent. Suppose I have the belief, justified or not, that at least one of "not-a" and "c" is true, and I have no further, more specific beliefs about the matters in a and in c. Since I believe that at least one of "not-a" and "c" is true, I should be able to sincerely say to someone:

  9. I may not know much about the queen, dinners, pajamas, snow, etc., but I do believe that at least one of "not-a" and "c" is true. Hence, if a, then c.
This seems very reasonable.

Suppose now that I learn all the relevant facts about the queen, dinners, pajamas, snow, etc. In particular, I learn such facts as that people tend not to wear pajamas for dinner with the queen, that central Texas is one of the somewhat less likely places in the US to have snow, etc. I also learn the truth values of "a" and "c". None of the things I learn gives me reason to retract the claim that at least one of "not-a" and "c" is true. And neither have I any reason to retract the conclusion I drew, that if a, then c.

Therefore, when I said (9), I said something true. If it wasn't true, I would have reason to withdraw it. But the difference between the circumstances in my story in which I said the conditional in (9) and standard circumstances was in my beliefs—when I said (9), I lacked various beliefs that normal people in our culture have. Thus, if the indicative conditional has mind-independent truth value, I have to conclude that actually the conditional "if a, then c" is also true. And so we have an argument for (8).

Monday, February 21, 2011

Frankfurt, flickers and voting

A standard example in the literature of Frankfurt cases is where a guy is deciding whom to vote for. If he isn't going to freely vote for the candidate Dr. Black wants him to vote for, then Dr. Black will force him to vote for that candidate. But he is going to freely vote for the candidate, so Dr. Black doesn't intervene.

Here's a funny thing about this case. Dr. Black can't force his victim to vote for a particular candidate. At most Dr. Black can force his victim to check a box, press a lever, or the like. But checking a box or pressing a lever is not voting, because the validity of a vote requires that one not have been compelled. For the same reason, you can't run a Frankfurt case where the action is a making of a promise, an entry into a contract, a marriage, etc. (I don't know if you can force someone to make an assertion.) Many of our actions are of a sort that logically cannot be compelled.

Sunday, February 20, 2011

A lesson from Frankfurt cases

Here is one lesson one might take away from Frankfurt cases: Causal necessitation is not the same thing as logical necessitation by past conditions conjoined with laws. If the libertarian's intuitions are driven by the idea that free choices can't be causally necessitated, then Frankfurt cases have no effect, because the genius of the cases is precisely to construct cases where there is logical necessitation by past conditions conjoined with laws but no causal necessitation.

Saturday, February 19, 2011

Metaphysically Aristotelian quantification

There is a sense in Aristotelian metaphysics in which "there are only substances". They are all there is in the focal sense. Yet if we can talk about and quantify over accidents or modes, surely there are accidents or modes.

Here, then, is a simple quantified logic that preserves the Aristotelian intuition. This logic is developed only in the case of modes (or tropes) that are non-relational—that subsist in a single substance. The logic has the standard resources of sentential logic, together with the standard quantifier symbols ∀x and ∃x, which quantify over substances x. But additionally there are two new quantifier symbols: ∀ax and ∃ax which quantify over a's modes x. Thus, "Some table has an accident" becomes:

  1. ∃x(Table(x) and ∃xy(Accident(y,x))).

Then we can say that only the substances exist simpliciter—only they are quantified over by the standard quantifiers ∀x and ∃x. Modes "exist" only relative to the substance of which they are modes—they are grounded in that substance, as is indicated in the language by the subscripted quantifiers.

We can say that the mode-quantifier ∃ax yields existential quantification in an analogical sense. And we can spell out the analogy at least to some degree by giving rules of inference that are structurally analogous to those for the focal-sense quantifier ∃x.

Here's another application of the notion of relative existence. We might, for instance, hesitate to say that characters in novels really exist, but we might think (I am hesitant about that, too) that novels really exist. We might then think that for any novel N, there is a pair of quantifiers ∀Nx and ∃Nx over the entities-in-N. If S is some Star Trek novel, then when we say that ∃Sx(Klingon(x)), we are not really saying that there really are Klingons. We are saying that virtually, in-the-novel, relative-to-the-novel there are Klingons. This is not a fact about Klingons but about the novel, and our primary ontological commitment is to the novel. Of course our logic then needs to be suitably designed so that we cannot infer from ∃Sx(Klingon(x)) that ∃x(Klingon(x)). This can all be done, and what I shall do below for modes can be done for characters in novels. Again, quantification over characters is quantification in an analogical sense.

The rest of this post is almost entirely technical and can be skipped.

We leave the truth-functional rules unchanged. We modify the quantificational rules as follows:

Universal elimination: From ∀xF(x) and Substance(a), you get to infer F(a). From ∀axF(x) and Mode(d,a) you get to infer F(d).

Universal introduction: If you have a subproof assuming Substance(c) and concluding with F(c), and the subproof cites nothing involving c from outside of itself, then you get to infer ∀xF(x). If you have a subproof assuming Mode(c,a) and concluding with F(c), and the subproof cites nothing involving c from outside of itself, then you get to infer ∀axF(x).

Existential elimination: If you have ∃xF(x) and a subproof from (F(c) and Substance(c)) to S, where the subproof cites nothing involving c from outside of itself and c does not appear in S, then you get to infer S. If you have ∃axF(x) and a subproof from (F(c) and Mode(c,a)) to S, where the subproof cites nothing involving c from outside of itself and c does not appear in S, then you get to infer S.

Existential introduction: From Substance(a) and F(a), you get to infer ∃xF(x), and from Mode(c,a) and F(c), you get to infer ∃axF(x).

And we add an additional equality introduction rule: If you have Mode(c,a) and Mode(c,b), then you get to infer a=b.

Models contain a substantial domain S and a function m that assigns to each member of S a set of objects, with the property that m(x) and m(y) have no elements in common if x and y are distinct. We can define interpretations and satisfaction in a straightforward way, restricting the interpretations of the Substance and Mode predicates in such a way that I(Substance) is always equal to S and I(Mode) is the set of all pairs (x,y) such that x is in S and y is a member of m(x). (We don't impose this last restriction in the case of existence-in-a-novel.)
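The model theory just sketched is easy to prototype. Here is a minimal illustrative sketch in Python (all names are my own choices, not anything from the post); it enforces the disjointness of the mode-sets and shows that the subscripted quantifier ranges only over a substance's modes, while the standard quantifiers never see the modes:

```python
# Minimal sketch (illustrative names only) of the models described above:
# a substantial domain S and a function m assigning each substance a set
# of modes, with the mode-sets pairwise disjoint.

class ModeModel:
    def __init__(self, substances, modes):
        self.S = set(substances)
        self.m = {a: set(modes.get(a, set())) for a in self.S}
        seen = set()
        for a in self.S:
            if self.m[a] & seen:
                raise ValueError("mode-sets must be pairwise disjoint")
            seen |= self.m[a]

    def forall(self, pred):
        # standard quantifier ∀x: ranges over substances only
        return all(pred(x) for x in self.S)

    def exists(self, pred):
        # standard quantifier ∃x: ranges over substances only
        return any(pred(x) for x in self.S)

    def exists_in(self, a, pred):
        # subscripted quantifier ∃_a x: ranges over a's modes only
        return any(pred(d) for d in self.m[a])

# "Some table has an accident": some substance is a table and has a mode.
M = ModeModel({"t1", "chair"}, {"t1": {"redness"}})
tables = {"t1"}
print(M.exists(lambda x: x in tables and M.exists_in(x, lambda y: True)))  # True
print(M.exists(lambda x: x == "redness"))  # False: ∃x never sees modes
```

This mirrors the blocked inference above: something can satisfy a subscripted existential claim without anything satisfying the corresponding unsubscripted one.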

I haven't checked it, but I expect that we have soundness and completeness.

If, like Spinoza and unlike Aristotle, we want to allow for nested modes, this can be done, too.

Friday, February 18, 2011


Sometimes one wants a word like "un-negation" or "de-negation"—a word for the sentence "s" as it relates to "~s". For instance, when teaching logic, one wants to say that if one is asked to prove a negated sentence, one's best bet is often to prove a contradiction from that sentence's un-negation. I just found out that there is a handy word for this. It's "negand". Shiny! I never knew that. So one can say things like:
  1. Believing a negative proposition is the same as disbelieving its negand.
(I don't actually know whether that's true, though I am inclined to think it is; but one can say it.) There are also times when one wants to refer to a proposition and "a negation or negand of it". I just love that word.

Last time I needed a word for this in class, I talked of "s" as the "de-negation" of "not-s", but "negand" is much better.

Book indexing script

I had to index my modality book, so I wrote a little perl script to help me (it also needs the Roman module from CPAN), and it generated this index. The idea is that one inserts special plain-text codes into the Microsoft Word file for the book which mark the ranges to index for each term and mark where the page breaks in the galleys are (actually, Logan Gage, my TA, marked the page breaks), and then one runs the perl script, which generates an html file with the index (which one can then import into Word if one so sees fit).

The main special codes are these:
  • {{entry name:}} This is put at the beginning of a passage that will be indexed under "entry name"
  • {{:entry name}} This is put at the end of the passage
  • {{nickname>official name}}  This specifies that any entries flagged with the nickname get re-indexed under the official name.  For instance, to save myself typing, in the body of the text I would use codes like {{EMR:}}...{{:EMR}}, and then I'd put an entry that says {{EMR>Extreme Modal Realism}}
  • {{synonym~entry name}}  This generates a "see entry name" entry in the index, under synonym.
  • @@n@@  This marks the beginning of page n.
There are no special facilities for generating an "n" after a page number for a footnote--one just surrounds the footnote superscript marker with {{entry name:}}...{{:entry name}} and gets a reference to the page it's on.  This won't be good for endnotes that need to be indexed.  There is no facility for "see also".
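The actual script was perl, but just to fix ideas, here is a hypothetical Python sketch (the function name and sample text are my own, and the nickname and see-entry codes are omitted) of how the {{entry name:}}, {{:entry name}} and @@n@@ codes can be turned into page ranges:

```python
import re

# Hypothetical sketch (not the author's actual perl script) of turning
# the {{name:}} ... {{:name}} and @@n@@ codes into an index mapping each
# entry name to a sorted list of page numbers.

def build_index(text):
    index = {}          # entry name -> set of page numbers
    open_entries = {}   # entries currently open at this point in the text
    page = 1
    # three token kinds: page break, entry open, entry close
    for tok in re.finditer(r"@@(\d+)@@|\{\{([^:{}]+):\}\}|\{\{:([^:{}]+)\}\}", text):
        if tok.group(1):                     # @@n@@: start of page n
            page = int(tok.group(1))
            for name in open_entries:        # open ranges span the break
                index.setdefault(name, set()).add(page)
        elif tok.group(2):                   # {{name:}}: open an entry
            name = tok.group(2)
            open_entries[name] = True
            index.setdefault(name, set()).add(page)
        else:                                # {{:name}}: close the entry
            open_entries.pop(tok.group(3), None)
    return {name: sorted(pages) for name, pages in index.items()}

sample = "@@1@@ blah {{EMR:}} stuff @@2@@ more {{:EMR}} @@3@@ {{actualism:}}x{{:actualism}}"
print(build_index(sample))  # {'EMR': [1, 2], 'actualism': [3]}
```

An entry that is still open when a page break comes picks up the new page, which is how a passage spanning several pages gets its full range.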

I also used a Word macro so that I could highlight some text, and it would surround it with the {{entry name:}}...{{:entry name}} codes (getting the name of the entry from the clipboard).

If you want to use the script for something and need help, email me.

Double Effect conference online tomorrow

The Anscombe Centre is running what looks to be a really good conference on Double Effect [PDF] on Saturday, February 19th, 9:30-17:30 GMT. They'll be accepting questions by email (and a selection of the email questions will be asked by the chair). They ask that you send an email if you're planning on attending electronically. No registration fee for online attendance, but they do accept donations.
Here is a fuller schedule.
I'll be there virtually, albeit sleepily, starting from around the second talk (the first starts at 3:30 am my time). If anybody wants to chat with me during or between sessions, go here, and make up a nickname (ideally one such that I'll know who you are). (That's not an official conference venue.)

Anselm's ontological argument

Consider this very simple formulation of Anselm's first argument:

  1. (Premise) A maximally great being exists in my mind.
  2. (Premise) If x exists in my mind and not in reality, then x is not a maximally great being.
  3. Therefore, a maximally great being exists in my mind and reality.
This argument feels fishy. But now consider this parallel:
  4. (Premise) A maximally great being exists in my room.
  5. (Premise) If x exists in my room and not in the rest of the universe, then x is not a maximally great being.
  6. Therefore, a maximally great being exists in my room and in the rest of the universe.
This argument is clearly valid. Moreover, premise (5) is very plausibly true. A maximally great being cannot exist merely in one room without existing elsewhere, just as a maximally great being cannot be the creator of merely one human being without being the creator of all others. And there are at least circumstances in which (4)-(6) does not beg the question, for instance when the person who is concluding to (6) is having a religious experience as of a maximally great being in this room.

So, (1)-(3) feels odd in a way in which (4)-(6) does not. Why? I suppose it is that to exist in my mind does not appear to be parallel to existing in my room. To exist in my room is to exist and be located in my room. But to exist in my mind does not seem to be the same as to exist and be located in my mind. Still, this does show that there is a way of taking (1)-(3) that makes it be a decent argument. For there is a sense of "exists in my mind" that does make it be parallel to "exists in my room". If my mind occupies a region of space, this is particularly easy: to be in my mind could be like being in my room or in the core of the earth. But if my mind does not occupy a region of space, there can still be a parallel. According to Aquinas, God counts as spatially omnipresent by virtue of knowledge and power. Well, he can be present in my mind by virtue of knowledge and power. Indeed, Christians talk of the Holy Spirit dwelling in them.

So now we have a reading of (1)-(3) that makes it a perfectly fine argument, as long as one has reason to accept (1), say due to an experience of the Holy Spirit's dwelling in one's mind.

I doubt that this is actually Anselm's argument. The main textual evidence against it is that Anselm takes God's existing in mind to be simply a matter of having the concept of God. I don't know how conclusive this is against the above reading. Anselm distinguishes between having a concept and being able to use the relevant words. It could turn out that in the case of God, to have the concept of God requires that God dwell in the mind. But all in all, as I said, I doubt this is Anselm's argument.

That said, the above reading does show that there are ways of reading (1)-(3) that make it be a fine argument. And I think it also highlights another point of which, perhaps, not enough has been made. The more existing in my mind is a genuine way of existing, like existing in my room, the better the argument sounds. But we know that at least the somewhat later medievals were drawn to views of mind on which to know an essence is to have that essence genuinely in one's mind, informing one's mind. On such views of mind, the argument may well be plausible, assuming one has reason to affirm (1) (for instance, because there is a presumption that if one has mastered C-talk, then one has the concept C).

Thursday, February 17, 2011

A confession

I am partly responsible for every sin anybody has committed at least during my lifetime.

Here's why. I have sinfully acted in ways that made my prayers less deep, less frequent and less effective. But chief amongst the things I should be praying for is that God rescue me and my neighbor from sin. When my neighbor sins and I did not pray for my neighbor as I ought to have, I am partly responsible for my neighbor's sin—I have at least negligently failed to do something that, as far as I know, would have decreased the probability of my neighbor's sinning.

There are two general ways this has happened. First, there are the many cases where I directly failed to pray as deeply or frequently as I should have. Second, there are many cases where I sinned through something other than neglect of prayer. In the latter cases, I made myself more wicked through the sins, and hence made my prayers less effective—it is the prayer of the righteous that, it is promised, avails much—and made myself be less good at prayer. Besides, often, I could have spent in prayer the time during which I was sinning.

This is the communion of sinners. But we have a merciful God who became one of us and has made the communion of the saints possible.

Wednesday, February 16, 2011

No one knows that naturalism is true

  1. (Premise) No one knows that mathematical (or ethical or aesthetic) truths can be grounded in natural facts.
  2. (Premise) If no one knows that mathematical (or ethical or aesthetic) truths can be grounded in natural facts, then no one knows that natural facts are all the facts there are.
  3. So, no one knows that natural facts are all the facts there are.

Tuesday, February 15, 2011

Leibniz's other ontological argument

Leibniz was perhaps the first to explicitly realize that from

  1. Possibly, God exists
one can derive:
  2. God exists,
and that (1) is a non-trivial assumption that needs an argument. I am not clear on whether Leibniz's reasoning went through S5, as in the Plantinga ontological argument, but that's not the part of Leibniz's argument that interests me right now. What interests me is Leibniz's argument for (1). Leibniz gave two. One was based on his logical calculus of properties. I think that one failed, though Leibniz liked it. But the other was one that he thought was less powerful, but good enough for practical purposes:
  3. If a concept C is in common use, probably it is the concept of something possible.
  4. The concept of God is in common use.
  5. Therefore, probably, it is possible that God exists.
I think there is a lot to this argument.

There are all sorts of ways of building on Leibniz's argument:

  6. Suppose that no clear impossibility has been found in C, while C has been in common use over a great period of time, by a great number of users, a number of whom were of high intelligence and to whom C was of great intellectual importance; then probably C is a concept of a possible being.
  7. But no clear impossibility has been found in the concept of God, etc., etc.
  8. Therefore, probably, it is possible that God exists.
  9. Therefore, probably, God exists.

One might try to parody the argument by finding a concept C such that the existence of something falling under C entails the non-existence of God. Here are some candidates:

  • a universe not created by God
  • a human being not created by God
  • an evil that God would not be justified in permitting.
I do not think any of these concepts have had nearly the same degree of common use, over the same length of time and by the same number of users as the concept of God. (For one, God is probably more central to the lives of typical theists than a universe not created by God is to the lives of typical atheists.) And the probability that (6) confers on the possibility of something falling under C surely depends on these quantities. Thus, the God case still wins out.

Monday, February 14, 2011

Indicative conditionals and material implication

I am partial to the view that "if p, then q" has the same truth conditions as "not-p or q", but there is pragmatic stuff going on. But after thinking hard about Field's remarks on the implications of the Montague Paradox for certain solutions to the liar paradox (in a Beall anthology), I've been drawn to think more about an odd but significant difference between "If p, then q" and "not-p or q".

Here is the less striking way to show the difference. Consider:

  1. (x)(Human(x) → Mortal(x))
  2. (x)(~Human(x) or Mortal(x))
Now, in (2) there is a symmetry between the disjuncts, where (1) has an asymmetry. Consider a particular case:
  3. Human(Seabiscuit) → Mortal(Seabiscuit)
  4. ~Human(Seabiscuit) or Mortal(Seabiscuit)
I am happy to grant that (3) and (4) are both true. But they are true for different reasons. I want to say that (3) is trivially true because of the falsity of its antecedent. Period. The fact that Seabiscuit is mortal is not explanatorily relevant to the truth of (3). But (4) is equally true because of the non-humanity of Seabiscuit as because of his mortality. There is a symmetry there.

Here is another way to see the difference. Some people think that sentences like "x is F" are nonsense (gloss: don't express a proposition) when x fails to be of a certain kind for which attribution of Fness makes sense. These people will, for instance, deny that "The chair is true-or-false" makes any sense. (I am inclined to think it makes perfect sense, but is false; chairs aren't propositions and don't express propositions, so they are neither true nor false.) Now, plausibly, if "q" doesn't make sense, neither does: "r or q". That's just nonsense. So, on such a view:

  5. This chair is true-or-false or this chair is not a proposition
is nonsense. And so is:
  6. This chair is not a proposition or this chair is true-or-false.
But
  7. If this chair is a proposition, this chair is true-or-false
makes perfect sense as an ordinary indicative conditional. It is trivially true because the antecedent is false. If this is right, then we can have meaningful conditionals whose consequents aren't meaningful, but not so for disjunctions. This may force a restriction on the use of modus ponens in subproofs.

For another illustration of this last point, go back to (1) and (2). The quantification seems to be unrestricted, including such entities as numbers and properties. But it is not clear that "Mortal(7)" makes sense. If not, and if disjunction requires both disjuncts to make sense, then (2) is in trouble. But (1) is just fine.

In a lot of programming languages, logical disjunction operators are actually asymmetrical. This means that if you do something like "f(x) || g(x)" ("||" being disjunction) in perl or C, the function g(x) is not evaluated when f(x) turns out to return truth. The disjunction operator shortcuts in this way. As a result, you can do things like "x == 0 || y/x == z" without worrying about the fact that the second disjunct is nonsense if x is zero, because the second disjunct is unevaluated when x is zero. But in human language, I suspect that there is no similar shortcutting in disjunction. Apart from implicatures, there is a symmetry in disjunction. But, I suggest, there is such a shortcutting in conditionals.
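Python's "or" short-circuits in the same way as "||" in perl and C, so the division-by-zero example can be illustrated as follows (the function name is my own):

```python
# "or" short-circuits: the right disjunct is never evaluated when the
# left disjunct is already true.

def safe_check(x, y, z):
    # With x == 0, "y / x" would raise ZeroDivisionError if it were
    # evaluated; short-circuiting means it never is.
    return x == 0 or y / x == z

print(safe_check(0, 5, 7))  # True: the right disjunct is never evaluated
print(safe_check(2, 6, 3))  # True: 6/2 == 3
```

Swapping the disjuncts raises an exception when x is zero, which is exactly the asymmetry in question.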

But I don't know exactly how far the shortcutting in conditionals goes. I am not sure I want to say:

  8. If the sky is green, then **##^^).
But at least to my ear it sounds better to say this than:
  9. The sky isn't green or **##^^).

Saturday, February 12, 2011

Horwich's minimalist theory of truth

Horwich's theory of truth is generated by all the unparadoxical instances of:

  1. <s> is true if and only if s
together with the assumption that only propositions are true, where "<s>" is the proposition that s, for a sentence s. It is important that the schema (1) be defined purely syntactically. But it is not clear that this can be done. For consider this substitution instance of (1):
  2. <This sentence is short> is true if and only if This sentence is short.
The problem, of course, is that the referent of "this sentence" in (2) is liable to be (2), rather than "This sentence is short", and hence we get the wrong truth value. (There is also a minor grammaticality worry because of the capital "T" in the middle of the sentence.) Or take this substitution instance of (1):
  3. <A if and only if B> is true if and only if A if and only if B.
This doesn't say what it's intended to say. Fixing up is needed. But it is not clear that one can syntactically specify how the fixing up is to be done in all cases.

Friday, February 11, 2011

The Liar Paradox and conjunction introduction

Let F be some property had by and only by the sequence of symbols:

  1. The sequence of symbols with property F is not a sentence expressing a truth, and 2+2=4.
Then, (1) is nonsense. (For if it makes sense, then it is true if and only if it is not true.) Since sequences of symbols that are nonsense don't express propositions, and hence don't express truths:
  2. The sequence of symbols with property F is not a sentence expressing a truth.
And we know:
  3. 2+2=4.
Hence, (1), appearances to the contrary notwithstanding, is not a conjunction of (2) and (3). For if it were a conjunction of (2) and (3), it would make sense and be true.

In my previous post, I tried to create space for the idea that in natural language not every pair of sentences can be conjoined. The above argument extends this to sufficiently rich artificial languages, since the above case could be formulated in an artificial language.

We can keep classical logic rules such as:

  4. If c is the conjunction of a and b, and a and b are both true, then c is true,
as long as we recognize that in many natural languages and some artificial languages there can be sentences a and b that have no genuine conjunction. They may have a syntactic conjunction—a sequence of symbols formed out of the symbols in a and b and the conjunction symbol—but the preceding post shows that a syntactic conjunction is not the same as a conjunction simpliciter. And (4) must be understood in terms of genuine conjunction, not merely syntactic conjunction. This modification of classical logic does, of course, screw up the meta-theory. It doesn't affect soundness results, but completeness results may be in trouble.

Thursday, February 10, 2011

Conjunction and natural language

One of the least controversial rules of logic is conjunction-introduction: from the premises A and B, one infers the conjunction A&B. But now consider the following argument in ordinary English:

  1. (Premise) The following claims are all false: 1=2, 2=3, 3=4.
  2. (Premise) 5=5.
  3. The following claims are all false: 1=2, 2=3, 3=4 and 5=5. (By conjunction-introduction)
Premises 1 and 2 are true, but the conclusion is false.

Of course what we should say is that (3) is not in fact a conjunction of (1) and (2). This shows an interesting thing: it's not that easy to define a conjunction syntactically in natural language. The obvious rule of taking sentences "A" and "B" and forming the sentence "A and B" fails (even if we bracket the fact that in written English we often need to adjust the capitalization of the first word of the second sentence, and adjust punctuation), as (3) shows. In logic/mathematics-influenced written English we can conjoin "A" with "B" by forming the sentence "(A) and (B)". But that's not grammatical ordinary English.

We can, of course, do some paraphrase. Thus, we can say one of the following:

  4. 5=5 and the following claims are all false: 1=2, 2=3, 3=4.
  5. It is false that 1=2, it is false that 2=3, it is false that 3=4, but 5=5.
But while (4) and (5) express propositions that are obviously logically equivalent to the conjunction of the propositions expressed by (1) and (2), it is not clear that they express the conjunction of the proposition expressed by (1) with the proposition expressed by (2). Moreover, it is not clear that we can in purely syntactic terms specify how to give such a paraphrase in every case.

This is another reason to think that logic within natural language is at least somewhat tricky. The neat distinction in artificial languages between syntactic and semantic properties is much harder to draw in natural language. The notion of a conjunction of two sentences may well have no syntactic characterization.

Moreover, there may be sentences that in ordinary English have no conjunction or that have no disjunction. This is because the order of operations in English is foggy. In spoken English, we can do something with tone of voice and emphasis, but it is clear that this cannot be made to work always. In particular, if A,B,C,D,E,F,G,H are ordinary English sentences, I doubt that there is an ordinary English equivalent to "((A or (B and C)) and D) or (E and ((F and G) or H))". Thus, at some point we will have a failure of forming conjunctions or a failure of forming disjunctions.

This is relevant to this post.

Tuesday, February 8, 2011

A recipe for counterexamples to the Hume-Edwards-Campbell Principle

The Hume-Edwards-Campbell (HEC) principle says that if you have a bunch of items, and each one is explained, then the whole bunch might be explained. In particular, any infinite regress might be a complete explanation. The Hume-Edwards principle replaces the "might" with "is". I've published counterexamples to the HEC before, but here is a cool recipe for generating counterexamples.

Let p1,p2,... be an explanatory regress of propositions, so p2 explains p1, p3 explains p2, and so on. Suppose (as might easily be the case) that there is some proposition q such that (a) q couldn't be self-explanatory, and (b) the pi are all clearly completely explanatorily irrelevant to q. Now, let qi=pi&q. Then q1,q2,... are an infinite explanatory regress. But if q couldn't be self-explanatory, this regress can't be completely explanatory as it does nothing to advance the explanation of q.

I might have got the basic idea here from Dan Johnson. I can't remember. The counterexamples depend on the idea that if A explains B and Q is irrelevant, then A&Q explains B&Q. I am a bit less sure of that than when I started writing this post (which was quite a while ago).

Sunday, February 6, 2011

Deep Thoughts XXXII

"[T]here is no such thing as being too good."[note 1]

[This is a counterpoint to this. Like it, it rests on ensuring a consistent context of evaluation.]

Saturday, February 5, 2011

A curious fact about open future views

Here is a curious fact. Open futurists and fatalists both are committed to the claim that

  1. It is not true that tomorrow I will freely choose what to eat for lunch.
But they agree for different reasons. Fatalists think this is so because they think no one will ever freely choose. Open futurists think this is so because whether I will freely choose what to eat for lunch depends on various future contingencies, such as whether I freely choose to stay up until 10 am tomorrow, and then sleep from 10 am to 4 pm tomorrow. It is only the closed futurist non-fatalist who gets to affirm that tomorrow I will freely choose what to eat for lunch.

In fact, just as fatalists are, open futurists may be committed to the claim that

  2. It is not true that I will ever make any indeterministic free choices.
For surely whether I will ever make any indeterministic free choices depends on future contingencies, such as whether in the next moment God chooses to take me off to my eternal destiny and remove all indeterminism from my future (or, if one is an atheistic open futurist, whether quantum events rip me to shreds next moment).

Of course, the agreement between open futurists and fatalists only goes so far. Typical open futurists will say that in the past I made some free choices, while fatalists will deny that. Also, fatalists are going to replace the "not true" in (1) and (2) with "false", which some open futurists will resist. But there is still an irony that a view motivated by a desire to maintain one's freedom makes it impossible to say that one will make a free choice.

Friday, February 4, 2011

Closure for knowledge

Here is a closure principle I don't know a counterexample to:

If you know that s is the conclusion of a sound argument and (non-aberrantly) therefore believe that s, then you know that s.

Wednesday, February 2, 2011

Propositional logic

As a hobby, from time to time I am thinking about the best way to think about logic, knowing full well that much of what I am thinking about duplicates what other people have done, but since it's just a hobby, that's fine. One of my convictions is that logic should be developed in such a way that it works not only with simple artificial languages like First Order Logic as the target language. Logic concerns truthbearers, and it should be developed in a way that is agnostic as to what the truthbearers are—they might be sentences of First Order Logic, sentences of English, or propositions.

More generally than truthbearers, logic concerns schemata. Schemata when operated on by sufficiently many quantifiers yield propositions (I think of a name as a kind of quantifier). I don't yet have a clear picture of how I want to characterize schemata.

So what do I have? Well, I have a pretty good picture of how to start thinking about propositional logic—the logic of truthbearers. Let B be the collection of all truthbearers. Let V={T,F} be the truth values. Let V0 be any set with only one element. Let Vn be the set of all n-tuples of members of V; let Bn be the set of all n-tuples of members of B. Let On be all functions from Vn to V. Let O be the union of all the sets On. We call the members of O "truth-tables". Now, for any truth-table f in On, there is an (n+1)-ary relation Cf on B, and we read Cf(b1,...,bn,bn+1) as "bn+1 f-connects b1,...,bn". If there exist b1,...,bn such that bn+1 f-connects b1,...,bn, then we say that bn+1's main connective is f. A nice axiom to have, but one that I would ultimately want to do without, is:

  1. Unique connective parseability: Each truthbearer has at most one main connective.
A related axiom, also too strong in my opinion, is:
  2. Unique component parseability: For any truthbearer b and any truth-table f, there is at most one sequence of truthbearers that b f-connects.
If we're really lucky, we have:
  3. Unique parseability: Both unique connective parseability and unique component parseability hold.
Many artificial languages, but certainly not English, satisfy:
  4. Unique compositionality: For any truth-table f and any sequence b1,...,bn of truthbearers, there is at most one truthbearer that f-connects the sequence.
For instance, in English, there is more than one way of expressing a conjunction, and hence unique compositionality fails. Say that b* is a direct subtruthbearer of b provided that b f-connects some sequence containing b*. Say that b* is a subtruthbearer of b if and only if there is a finite chain of direct subtruthbearer relations from b* to b. Many nice collections of truthbearers will satisfy:
  5. Well-foundedness: The subtruthbearer relation is a partial well-ordering (i.e., there are no infinite decreasing chains of subtruthbearers).
We can say that a truthbearer is atomic provided it has no subtruthbearers.

If f is a truth-table in On, say that the system is f-compositional provided that for any sequence b1,...,bn there is a truthbearer bn+1 that f-connects the sequence. For instance, English is negation, finite-conjunction and finite-disjunction compositional, as can be seen by the fact that for any sentence, we can form a negation of it by prefixing with "It is not the case that", for any finite collection of sentences we can form a conjunction by prefixing with "All of the following are the case" and then stringing the sentences with appropriate punctuation, and a disjunction by prefixing with "At least one of the following is the case".

For a proof theory, we specify bunches of rules, such as standard introduction and elimination rules. We specify them using f-connectedness. For instance, if f is binary conjunction (i.e., the function that takes (T,T) to T and all other pairs in V2 to F), then a reasonable rule says that if at some point in a proof you have b1 and b2 accessible, then you may write down anything that f-connects them.

That's the start, on the side of syntax. The start on the side of semantics is straightforward. A truth assignment is a function v from truthbearers to V such that whenever bn+1 f-connects b1,...,bn, then v(bn+1)=f(v(b1),...,v(bn)) (if n=0, then v(b1) is equal to the value of f at the unique point of V0).

One can now easily prove this: If well-foundedness and unique parseability hold, then any function from atomic truthbearers to V can be uniquely extended to a truth assignment.
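The well-founded, uniquely parseable case of this result is easy to sketch in code. Here is a minimal Python illustration (the class and function names are my own): truthbearers are either atomic or f-connect a tuple of components, and a valuation of the atoms extends by recursion to everything.

```python
# Sketch of the setup above in the well-founded, uniquely parseable case:
# each truthbearer is either atomic or f-connects a tuple of components,
# and a valuation of the atoms extends uniquely upward by recursion.

NOT = lambda vals: not vals[0]          # unary truth-table
AND = lambda vals: vals[0] and vals[1]  # binary truth-table

class Bearer:
    def __init__(self, name, table=None, parts=()):
        self.name = name
        self.table = table   # None for atomic truthbearers
        self.parts = parts   # the tuple of truthbearers this one f-connects

def extend(atom_vals):
    # Extend a valuation of the atoms to all truthbearers (memoized
    # recursion; well-foundedness guarantees termination).
    cache = {}
    def v(b):
        if b not in cache:
            if b.table is None:
                cache[b] = atom_vals[b.name]
            else:
                cache[b] = b.table(tuple(v(p) for p in b.parts))
        return cache[b]
    return v

p = Bearer("p")
q = Bearer("q")
pq = Bearer("p&q", AND, (p, q))
npq = Bearer("~(p&q)", NOT, (pq,))
v = extend({"p": True, "q": False})
print(v(pq), v(npq))  # False True
```

Without unique parseability one would instead have to check that the different parsings of a truthbearer force the same value, which is what weak and strong truth-definability below are about.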

But in the end I'd like to work with logics that aren't uniquely parseable. For instance, English probably isn't uniquely parseable. "The sky is blue and the sea is blue and snow is white" can be parsed as a ternary conjunction, but it may also be parsed as a binary conjunction of "The sky is blue and the sea is blue" and "Snow is white". A nice replacement semantic property is:

  6. Weak (strong) truth-definability: Any function from atomic truthbearers to V can be (uniquely) extended to a truth assignment.
There are all sorts of other cool properties one can define. The payoff of all of that is that one will get to do logic with natural languages or directly at the level of propositions.

Like I said, I know a lot of this stuff has been worked out by real logicians (I think someone once even told me what real logicians call what I called "unique parseability"). But just as it's fun to build a telescope focuser yourself rather than buying one online, so too it's fun to develop logic rather than getting one from the library, as long as one is only doing this as a hobby.

Tuesday, February 1, 2011

Sceptical non-theism

Sceptical theists respond to the problem of evil by, very roughly, telling us that we can't say what kinds of worlds God is more likely to create. This is a very rough formulation, and perhaps not entirely fair, but I think it is defensible. After all, if we have no reason to think that the values we know are representative of the larger realm of value, then we are unable to say what kinds of worlds a being whose actions are entirely guided by correct value considerations is likely to create.

I defined sceptical theism in such a way that it does not entail theism. Thus, an atheist could be a sceptical theist, and in fact it's fair to say that Antony Flew, before he became a theist and while he was pressing the claim that the God hypothesis has no empirical consequences, was committed to something like sceptical theism.

It seems likely—though this can be questioned—that the conjunction of theism with sceptical theism commits one to wide-spread scepticism. If we don't know what sorts of worlds God is more likely to create, and we think this world is one that God created, we really shouldn't trust induction, etc. This is good reason for theists not to opt for sceptical theism.

But there is another position to be considered: sceptical non-theism. Sceptical non-theism tells us that we can't say what kinds of worlds are more likely to exist if God doesn't exist.

Just as sceptical theism does not imply theism, so too sceptical non-theism does not imply non-theism. If one is both a non-theist and a sceptical non-theist, then one is pushed towards more general scepticism. But if one is a theist and a sceptical non-theist, then one can resist scepticism, it seems. So sceptical non-theism doesn't carry the danger to the theist that sceptical theism does. Moreover, sceptical non-theism answers the problem of evil just as well as sceptical theism does. The theist who is a sceptical non-theist can say: "Granted, it sure looks like these horrendous evils are very unlikely given theism. I grant you that! But I have no reason to think that they are any more likely given non-theism. And in order for these evils to be an argument against theism, they would have to be more likely given non-theism."

Furthermore, sceptical non-theism is at least as well motivated as sceptical theism. Consider the fact that non-theism is a disjunction of a large number of very different views, including: polytheism, kakotheism, pantheism and naturalism (I am tempted to include "open theism" here, too). Naturalism in turn is a disjunction of a large number of very different views, since any logically coherent all-encompassing physical theory gives rise to a version of naturalism. On any one of the disjuncts, it's hard to figure out what sort of world would likely exist. And it's hard to figure out which disjunct is more likely than which. Hence, figuring out what world would be more likely to exist if non-theism held seems to be an insuperable task. This remains true even if we replace "non-theism" with "naturalism".

I do not endorse sceptical non-theism. But I do endorse the thesis that it is a better move for theists than sceptical theism.