Saturday, October 31, 2009

The changing past

A decade ago, the end of World War II was 54 years in the past. Right now, the end of World War II is 64 years in the past. If A-theory is true, these are genuine properties of WWII, and it has changed in respect of them. But something in the past cannot change. So A-theory is false.

The A-theorist's best answer to this argument is, I think, that this is a mere Cambridge change. A mere Cambridge change occurs when an object does not change in respect of intrinsic properties, but something else around it changes, which makes appropriate the application of a different predicate to it. The classic example is that x may become shorter than y without changing in height—simply because y grew taller. The change in x was Cambridge and that in y was real.

A mere Cambridge change of an object x requires something else, a y, that really changes, where x's change consists in x satisfying a description that makes reference to x's unchanged and y's changed qualities. Let us try to see how to do this for WWII.

Option 1: WWII has the unchanging property of ending in 1945. But 1945 has a changing property—it once was future, then was present, then was past, eventually being 54 years in the past, and now being 64 years in the past. So WWII, or WWII's end, undergoes a Cambridge change in virtue of 1945 (or a specific date in 1945) undergoing a real change. But the idea that 1945 should undergo a real change is at least a bit problematic, I think. It is plausible to say that 1945 is in the past, and so it shouldn't be able to really change. The alternative seems to be to make times be something abstract—but the idea of abstract entities really changing is also troubling. So one would need to make 1945 be an enduring concrete entity. That's weird.

Option 2: WWII has the unchanging property of ending, say, 15,000,001,945 years after the beginning of the universe, but the universe has a changing age. When we say that WWII ended n years ago, we mean that the year that WWII ended, say 15,000,001,945 cosmic era, is equal to the age of the universe minus n. So, the end of WWII doesn't change, but the age of the universe does. This seems to work, but leads to further puzzles. What kind of an enduring entity is the universe? What is this property of age that so inexorably grows?

Option 3: WWII has the unchanging property of ending in 1945, but there is an objectively changing fact—the fact of which time is present. And in virtue of the latter changing fact, which does not consist in a change in any entity, 1945 undergoes Cambridge change. This view requires a relaxation of the account of Cambridge change—it doesn't require entities to change. (One might try to say that propositions reporting what time is present change in truth value. But that had better be true in virtue of something else.) I think the idea of change that does not happen in virtue of anything's changing is dubious.

Friday, October 30, 2009


Here is the right way to see A-theory (this is particularly accurate as an account of Tom Crisp's presentism). There are infinitely many possible worlds, one of which, w0, is actual. Which world is actual changes with time: tomorrow, say, w17 will be actual (a world encodes everything that is objectively the case—but the A-theorist thinks there is an objective difference between how things are today and how they will be tomorrow). Of course, w0 and w17 are not unrelated: at w0 it is true that tomorrow* w17 will be actual, while at w17 it is true that yesterday* w0 was actual (here, "tomorrow*" and "yesterday*" are narrow-scope versions of our usually wide-scope terms).

Moreover, there are two crucial relations that can hold between worlds: S and E.

We say that S(w1,w2) if and only if w1 and w2 are simultaneous—i.e., it is the same time in both of them. When I say that it might have been the case that right now I am writing a post on growing-block theories, this implies there is a world w simultaneous with the actual world w0 such that at w, I am writing such a post. This relation is an equivalence relation—it is reflexive, symmetric and transitive.

The earlier-than relation E(w1,w2) holds if and only if at w2 it is true that w1 was actual. However, it is important to see that the S and E relations are not of a kind. S holds between the actual world and many, many worlds that have never been, are not, and never will be actual. E only holds (in some order) between the actual world and worlds that have been or will be actual. The relation E is transitive. Moreover, no two worlds related by S are related by E. In particular, no world is earlier than itself.

Socrates was sitting if and only if there is a world w1 such that E(w1,w0) and at w1 Socrates is sitting.

The relations S and E probably have to be taken to be primitive.

If E(w1,w2) then I will say that w2 is in w1's future and w1 is in w2's past.

We can now precisely characterize closed-future and closed-past views. We have a closed future (past) if and only if the collection of worlds that the actual world is earlier (later) than is totally ordered by E. It is a consequence of this that no two worlds later (earlier) than the actual world are simultaneous. Everybody, I assume, believes in a closed past. Closed-futurist A-theory, then, is the view that the collection of all worlds can be partitioned into disjoint subcollections, one of which contains all the worlds that aren't E-related to any world (these are the worlds that have no past or future), while each of the other subcollections is totally ordered by E.

Open futurists, on the other hand, think that there are two future worlds that are not E-related—in fact, they typically think there are two future worlds that are simultaneous.
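As a toy illustration of the formal structure (the worlds, clock times, and timeline below are invented for the example, not drawn from the post), one can model a finite collection of worlds and check that S behaves as an equivalence relation, that E is transitive and irreflexive, that no S-related pair is E-related, and that the worlds later than the actual world are totally ordered by E, as a closed future requires:

```python
from itertools import product

# Invented toy model: five worlds; w0 is actual.
worlds = ["w0", "w17", "w_past", "wa", "wb"]

# S: simultaneity, encoded here via "same clock time" (an equivalence relation).
time_of = {"w0": 1, "w17": 2, "w_past": 0, "wa": 1, "wb": 2}
def S(w1, w2):
    return time_of[w1] == time_of[w2]

# E: earlier-than, holding only among the worlds of one timeline.
timeline = ["w_past", "w0", "w17"]  # totally E-ordered
def E(w1, w2):
    return (w1 in timeline and w2 in timeline
            and timeline.index(w1) < timeline.index(w2))

# S is reflexive, symmetric and transitive.
assert all(S(w, w) for w in worlds)
assert all(S(b, a) for a, b in product(worlds, worlds) if S(a, b))
assert all(S(a, c) for a, b, c in product(worlds, repeat=3) if S(a, b) and S(b, c))

# E is transitive and irreflexive; no S-related pair is E-related.
assert all(E(a, c) for a, b, c in product(worlds, repeat=3) if E(a, b) and E(b, c))
assert not any(E(w, w) for w in worlds)
assert not any(E(a, b) for a, b in product(worlds, worlds) if S(a, b))

# Closed future: the worlds later than w0 are totally ordered by E.
later = [w for w in worlds if E("w0", w)]
assert all(E(a, b) or E(b, a) for a, b in product(later, later) if a != b)
```

Note that wa and wb are S-related to worlds in the timeline without ever being actual, mirroring the point that S, unlike E, relates the actual world to worlds that never have been and never will be actual.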

Presentism then adds the further claim that at a world w, only those things are existent that are presently* existent. While A-theory is a theory of the structure of time, presentism is a theory of the ontology of each world. They neatly complement each other, because the presentist has an elegant answer to the question of what it is for an object or event to exist at a time:

  1. x existed at t in w if and only if there is a world w1 such that (a) it is t in w1, (b) E(w1,w) and (c) x exists in w1
  2. x exists at t in w (where w is such that it is t in it) if and only if x exists in w
  3. x will exist at t in w if and only if there is a world w1 such that (a) it is t in w1, (b) E(w,w1) and (c) x exists in w1.

Our timeline (which is branching if open-futurism is true) is the collection of the actual world and all worlds that are E-related to the actual world.
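A minimal sketch of these three truth conditions (the worlds, dates, and inhabitants are invented for illustration; the reading of clause 3 assumed here parallels clause 1, with existence checked in w1):

```python
# Invented toy timeline: each world has a clock time and a set of existents.
time_of = {"w_1939": 1939, "w0": 2009, "w_2020": 2020}
exists_in = {
    "w_1939": {"FDR"},
    "w0": {"Obama"},
    "w_2020": {"Obama"},
}
timeline = ["w_1939", "w0", "w_2020"]  # E-ordered; w0 is actual

def E(w1, w2):
    return timeline.index(w1) < timeline.index(w2)

def existed_at(x, t, w):     # clause 1: past existence
    return any(time_of[w1] == t and E(w1, w) and x in exists_in[w1]
               for w1 in timeline)

def exists_at(x, t, w):      # clause 2: present existence
    return time_of[w] == t and x in exists_in[w]

def will_exist_at(x, t, w):  # clause 3: future existence
    return any(time_of[w1] == t and E(w, w1) and x in exists_in[w1]
               for w1 in timeline)

assert existed_at("FDR", 1939, "w0")
assert exists_at("Obama", 2009, "w0")
assert will_exist_at("Obama", 2020, "w0")
assert not existed_at("Obama", 1939, "w0")
```

The last assertion encodes the presentist point exploited later in the "Ago" post: nothing in the 1939 world answers to Obama.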

This formulation shows that the problem of how A-theory, divine immutability and omniscience could all be true (after all, doesn't it contradict immutability if God has to keep on updating his beliefs?) is the same as the problem of how contingentism, strong aseity and omniscience could all be true, where contingentism is the doctrine that not everything is necessary and strong aseity is the doctrine that God has exactly the same intrinsic properties in all worlds. Moreover, this formulation also shows that an A-theorist who believes in divine immutability is someone who believes a restricted version of strong aseity—restricted to the worlds in our timeline. There is thus a good plausibilistic argument from A-theoretic immutability to strong aseity: why restrict to our timeline?

Where does the B-theorist stand in regard to all this? She insists that at every world it is true that in the past and the future, the very same world is actual. Of course, a different Lewisian "centered" world will be actual in the future. The above is really a matter of formalism, so it does not solve any really hard problems; in particular, it does not solve the problem of how worlds and centered worlds differ.

The formalism does, however, highlight some problems for presentism. Consider the problem of induction for presentists. Normally, only what happens in the actual world is relevant for inductive inferences. We know that most worlds are different enough from the actual world that doing induction over them will mix us up completely. But to have any hope of induction being useful, the presentist has to insist on making use not just of data about what happens in the actual world, but also data about what happens in worlds earlier than the actual. I don't know that the presentist can answer this. Here's another problem. Finding out that I promise something in a world w1 gives me very little reason to do the promised deed in w0. But if I also find out that E(w1,w0), then I have reason to do this. This seems magical, and unless the A-theorist can give us a good substantive account of the E-relation, this will be unexplained.

Thursday, October 29, 2009

Presentism and "ago"

According to most proponents of presentism, propositions are tensed. Thus, when yesterday you said that it is raining and I said it today, we expressed the same proposition, which, perhaps, was false yesterday and is true today. Moreover, presentists believe that one cannot refer de re to non-present objects.

For presentism to have any hope of being able to express all of reality, the presentist needs an "Ago" operator, where Ago(t,p) is a proposition that backdates p by t (units of time). If p is expressed by a present tense sentence, Ago(t,p) can be expressed by a past tense sentence. Thus, if p is the proposition that it is raining, Ago(3 days,p) is the proposition that three days ago it was raining.

Here is a plausible fact about the logic of "Ago":

  1. Ago(t,p) is now true iff p was true t units of time ago
The seriously actualist presentist adds this:
  2. No proposition that refers de re to a presently non-existent entity can be true. (If we like, we can probably qualify this as: "no positive proposition".)
Let p be the proposition that there is someone who will vote for Obama. Then:
  3. Ago(70 years,p) is true.
I.e., 70 years ago, there was someone who would vote for Obama. Thus, by (1):
  4. p was true 70 years ago.
But p makes de re reference to Obama. Since Obama didn't exist 70 years ago, it follows by (2) that:
  5. p was not true 70 years ago.
And this, of course, contradicts (4).

Thus, the presentist cannot hold (1)-(3) together.
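The inconsistency can be laid out compactly. Writing $P(t,p)$ for "$p$ was true $t$ units of time ago" (the notation is mine, not the post's):

```latex
\begin{align*}
&\mathrm{Ago}(t,p) \leftrightarrow P(t,p) && \text{premise (1), the logic of Ago}\\
&\mathrm{Ago}(70\ \text{years},p) && \text{premise (3)}\\
&\therefore\ P(70\ \text{years},p) && \text{from (1) and (3)}\\
&\neg P(70\ \text{years},p) && \text{by (2), since $p$ refers de re to Obama,}\\
& && \text{who did not exist 70 years ago}\\
&\therefore\ \text{(1)--(3) are jointly inconsistent}
\end{align*}
```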

Thus, the straightforward presentist reading of the claim that 70 years ago there was someone who would vote for Obama as the claim that 70 years ago someone existed who would vote for Obama is one that doesn't fit with (1) and (2). But there is a non-straightforward way of giving the truth-conditions that does:

  6. There (is) a time t at which there (is) a person x who (votes) for Obama at t and who also existed on October 29, 1939,
where "(is)" is short for "was, is or will be" and "(votes)" is short for "voted, votes or will vote".

An interesting question is whether such truth conditions are available for all possible examples of this sort.

There is, however, a different route for the presentist. She could deny (1). This would be analogous to Robert Adams' move of allowing that a proposition might be true at a world without being such that were that world actual, the proposition would be true. Such a view, when married to Crisp's concept of abstract times, would have the problematic consequence that in general, at a time t, the time t was not present. (For t will contain propositions that make de re reference to now-actual objects that didn't exist at t, and so at t the maximal proposition would have been different from what it actually is.) To me, (1) seems very plausible.

Probably, most presentists will simply deny (2), allowing for de re reference to non-existents by means of haecceities. They will then open themselves to Lewis's objection that they are not really presentists, but there is probably a way out of that. It would, however, be an interesting thing if they had to deny (2)—this would mean that presentists cannot be serious actualists in the sense involved in (2). And if presentists are not serious actualists, then their claim that only present objects are actual is not quite as revolutionary as it seems.

Wednesday, October 28, 2009

Erotic love relationship needs

Bryan Weaver and Fiona Woollard seem to think that there are people whose needs for erotic companionship could not be met by one monogamous relationship. I hereby hypothesize that for all x, if x is such that his or her needs for erotic companionship could not be met by one monogamous relationship, then x is such that his or her needs for erotic companionship could not be met by any number of relationships.

Tuesday, October 27, 2009

Hating the devil

An interesting disagreement among orthodox Christians, even among orthodox Catholics, is whether the devil should be hated. I have run into a number of people who think in the affirmative. In fact, anecdotal evidence suggests that that is the more common position. On the other hand, I think we should not hate the devil—in fact, we should love him.

Here are some plausibilistic arguments for my position:

  1. Surely, we should not hate the souls in hell. But if the reason for hating the devil is that he cannot repent of his wickedness, then the same applies to the souls in hell. And if the reason for hating the devil is his evil works and his empty promises, then that's a bad reason—it's a reason for hating the evil works and the empty promises, but not for hating the devil.
  2. Anything that is good deserves to be loved to the extent that it is good. Anything that exists is good to the extent that it exists. Thus, the devil deserves to be loved to the extent that he exists. And to the extent that he does not exist, surely then it is not he, who exists, who is to be hated, but the fact that he does not exist fully should be hated. (Yes, one can hate its being the case that p.)
  3. Love and hatred are closely tied to actions. Now the actions we should engage in with respect to the devil are ones that are good for him, and hence they are more like loving than like hateful actions. For instance, we should reject the devil's temptations. That is good, because by rejecting the temptations we make him be responsible for fewer evils than he would be responsible for if we yielded, and it is bad for one to be responsible for evils. We should shun the devil's company. But to be in the devil's company, we would have to be wicked. And it harms a person to be provided with wicked companions. Furthermore, we should strive to frustrate the devil's wicked plans. While the frustration of one's plans may be bad for one in one way, in a more salient way, it is good for one when the plans are wicked. It is a bad thing for one to succeed at evil.

On the other hand, one might worry that love has a unitive dimension, and then one might argue that we should, surely, not seek to be united to the devil—that is just too dangerous. However, we can be united simply by doing good to someone, and there are ways of doing good to the devil that do not carry undue danger—for instance, we can, as noted above, do good to the devil by frustrating his evil designs. Another good we could do to the devil, should God assign this to us (we are mysteriously told that we'll judge angels), would be to condemn him to punishment, if it is intrinsically good to be punished for one's wickedness.

At the same time, the love should not have much intensity. The devil is dangerous, and we should not think too much about him. Maybe I have already done too much.

Friday, October 23, 2009

Westlund on companion love

Andrea Westlund in her piece "The Reunion of Marriage" (in the Monist's Marriage issue) gives an account of the "companion love" in marriage as centered on the argumentative forging of shared reasons.

While there is some argumentative forging of shared reasons in marriage, it seems to me that any account of marriage that makes the production of shared reasons be central is a conceit of affluent Western culture (bet you never expected that phrase from me!). I imagine two peasants. They fall in love, marry, pray together, raise children together, work the fields with the children, are taken care of in their old age by some of the children, and go to their eternal reward (not all necessarily in this order—in particular, falling in love may follow marrying, and the praying together hopefully happens all through the process).

The couple's joint life follows a pattern set by religious and secular tradition, the cycles of nature, and economic necessities. In the ideal case, they do indeed share ends—they jointly pursue food, drink, shelter, clothing, eternal salvation, reproduction and various joys, all for and with one another and their children. Many of their shared reasons are a function of what they individually have antecedent reason to pursue (e.g., clothing and eternal salvation) and which become a joint end when they come together in love. But in those cases there is no need for a production of reasons—they have the shared reasons in virtue of their shared humanity and their shared circumstances, as well as, perhaps, their love. (I am suspicious of the idea of love giving rather than recognizing much in the way of reasons. One could try to argue that love takes individual reasons and transforms them into joint ones.)

There is, of course, a dialogical struggle to recognize the reasons they already have—they are not perfect phronimoi who automatically are cognizant of all the reasons present for them. And there will likely be much argument over means, but that is not what Westlund is talking about.

Still, there will be aspects of their relationship where they do have significant freedom. On long winter evenings, do they play dice, tell stories and jokes, sing, dance, sew and/or carve? Which non-required religious devotions do they embrace as a family? Which of their needy neighbors will they support and in what way? But in the case of devotion and charitable activity, this is merely the working out of a shared plan for particularizing and pursuing imperfect duties which they have, independently of any forging of theirs, a reason to fulfill. If the couple is lucky enough not to be too exhausted from the day's work, there may be some time for evening recreational activities, and there, there will be a need to choose shared ends—but that simply does not seem to be of the essence to the marriage. It would be unfortunate if the couple were unable to do this, but their companion love does not depend on the availability of this.

Tevye: ... But do you love me?
Golde: Do I love you?
For twenty-five years, I've washed your clothes,
Cooked your meals, cleaned your house,
Given you children, milked the cow.
After twenty-five years, why talk about love right now?
Tevye: Do you love me?
Golde: I'm your wife!
Tevye: I know. But do you love me?
Golde: Do I love him?
For twenty-five years, I've lived with him,
Fought with him, starved with him.
For twenty-five years, my bed is his.
If that's not love, what is? (Fiddler on the Roof)

Wednesday, October 21, 2009

Some liar paradoxes without truth

Let "@" be the name of the actual world.

  1. The proposition expressed by (1) in English is not entailed by the proposition that @ is actual.
  2. The proposition expressed by (2) in English is not compossible with the proposition that @ is actual.
  3. The proposition expressed by (3) in English is not necessary.
  4. The proposition expressed by (4) in English is not known by anybody.
  5. The proposition expressed by (5) in English cannot be known by anybody.

That (1) and (2) are paradoxical is obvious. That (3) is paradoxical is easy to see. For if (3) is false, then (3) is necessarily true. If (3) is true, then it is only contingently true. But the argument that if (3) is false, then (3) is necessarily true works in all worlds. So in no world is (3) false. So (3) cannot be contingently true.
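In modal notation (my reconstruction; let $q$ be the proposition expressed by (3), so that what $q$ says is precisely that $q$ is not necessary):

```latex
\begin{align*}
&q \leftrightarrow \neg\Box q && \text{(3) says of itself that it is not necessary}\\
&\neg q \rightarrow \Box q && \text{from the biconditional}\\
&\Box q \rightarrow q && \text{axiom T}\\
&\therefore\ q && \text{so (3) cannot be false}\\
&\therefore\ \Box q && \text{the argument goes through in every world}\\
&\therefore\ \neg q && \text{by the biconditional: contradiction}
\end{align*}
```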

The paradoxicality of (4) is a bit more fun, though I am less sure of it. If (4) is false, then (4) is known by somebody and hence true. So, (4) cannot be false. But now that we have a logically sound argument for (4), we know (4)—or at least we could, and then we can consider the argument in the possible world where we do know it. But if we know (4), then (4) is false.

What about (5)? Well, if (5) can be known by anybody, it can be true and known. But it cannot be both true and known. So, (5) cannot be known by anybody. But this is a good argument for the truth of (5), so even if we don't know (5), somebody can know it on the basis of this argument. But then (5) is false.

Tuesday, October 20, 2009

Is pollution bad for the earth?

A curious thought hit me today: What could it mean for something, say pollution, to be bad for the earth? We have, I think, a fairly good idea of what it is for something to be good or bad for a human, a dog, a wasp, a tree and maybe even a bacterium. But for a planet? For humans, dogs, etc., there are roughly three accounts of well-being: (a) the hedonist account that well-being is pleasure and absence of pain, (b) the desire account that well-being is (roughly) fulfillment of desires and lack of frustration of desires, and (c) the flourishing account. Now, (a) requires consciousness and (b) requires mind, so neither is applicable to a tree, a bacterium or the earth.

That leaves the flourishing account. But while I have some idea about canine and waspish flourishing, I have very little idea about planetary flourishing. For instance, does hosting life make a planet flourish, or to the contrary, do planets flourish more when they are devoid of life? After all, if the average member of a natural kind is likely to have a normal degree of flourishing, it appears that lifeless planets have a normal degree of flourishing. So as long as we don't literally blow the earth into pieces, it seems that whatever pollution we inflict on it, we won't push it below the normal level of well-being.

But perhaps we need to distinguish different kinds of planets, and different kinds of planets have different kinds of flourishing. Thus, maybe, a planet in a "habitable zone" in a stellar system has the support of organic life as part of its flourishing. But what kind of organic life is needed for flourishing? Is the planet better off for hosting more complex life-forms? (Is a house better off for having people rather than geckos in it?) Or for a greater diversity of life-forms? (Is a house better off for having people and cockroaches rather than just people?) It seems plausible that unless we have a metaphysical teleology, either of the Aristotelian or the theistic sort, for planets in the habitable zone, these questions have no answer. And even if we have such a teleology, the epistemology of that teleology will be difficult, because the earth is the only habitable planet we know of, and typically we learn about the teleological properties of a natural kind by observing multiple instances.

But perhaps it is a mistake to think of the earth as rocks, water and atmosphere. Rather, the suggestion goes, the ecosystem is not just hosted by the earth, but is a part of the earth. I am not sure we should buy that. While parthood might not in general be transitive, it seems plausible that since we are parts of the ecosystem, then if the ecosystem were a part of the earth, we would be parts of the earth. But surely we are not parts of the earth. We live on earth, but we are not parts of it any more than we are parts of the galaxy (though the earth is a part of the galaxy).

But let us grant that the ecosystem is a part of the earth—or maybe that "the earth" is sometimes a metonymy for the ecosystem. In that case, pollution that causes destruction of a part of the ecosystem without a compensating growth elsewhere does seem to be contrary to the flourishing of the earth. But more detailed study of flourishing still seems mired in epistemic problems. It is very hard to figure out the teleology of the ecosystem as a whole, unless we accept revelation and say that the teleology is the support of humanity.

Monday, October 19, 2009

Tarski's definition of truth-in-L

Tarski's definition is often noted—typically critically—as being applicable only to the languages he gave it in. Thus, he defined truth-in-L, or more generally satisfaction-in-L, for several cases of L. However, I think this misses something that goes on in the reader when she understands Tarski's account: the reader, upon reading Tarski, gains the skill to generate the definition of truth-in-L for other languages L (at least ones that are sufficiently formalized). One just gets it (I think Max Black makes this point). A standard way of defining A in C (where C is a context and A is a context-sensitive concept to be defined) is to give some "direct definition" of the form

  1. x is a case of A in C iff F(x,C).
However, Tarski's case exemplifies a different way of defining "A in C": one teaches (perhaps by example) a procedure P (perhaps specified ostensively) which, for every admissible C, will generate a definition of A-in-C. Call this "procedural definition". A direct definition has an obvious advantage with respect to comprehensibility. However, a procedural definition P does advance the understanding. For instance, suppose that instead of giving a definition of a heart that applies to all species, I teach you a method which, when properly exercised upon Ks, gives you a definition of the heart-of-a-K.
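A rough computational analogue of a procedural definition (the toy language spec is my invention, not Tarski's formalism): a procedure that, given the atomic sentences of a propositional language L, generates the truth-in-L predicate, rather than any single cross-language definition of truth:

```python
# A toy "language" is specified by its set of atomic sentences.
# make_truth_in_L is the *procedure*: for each admissible L it produces
# a definition (here, a function) of truth-in-L.
def make_truth_in_L(atoms):
    def true_in_L(sentence, valuation):
        # A sentence of L is an atom, ("not", s), or ("and", s1, s2).
        if isinstance(sentence, str):
            if sentence not in atoms:
                raise ValueError("not a sentence of L")
            return valuation[sentence]
        op = sentence[0]
        if op == "not":
            return not true_in_L(sentence[1], valuation)
        if op == "and":
            return true_in_L(sentence[1], valuation) and true_in_L(sentence[2], valuation)
        raise ValueError("not a sentence of L")
    return true_in_L

# Two different languages, two generated truth predicates.
truth_in_L1 = make_truth_in_L({"p", "q"})
truth_in_L2 = make_truth_in_L({"r"})

assert truth_in_L1(("and", "p", ("not", "q")), {"p": True, "q": False})
assert not truth_in_L2(("not", "r"), {"r": True})
```

The point of the analogy is only structural: what is taught is the generating procedure, and each output definition is relative to one language.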

Now, in ordinary cases, one can move from a procedural definition to a direct definition as follows:

  2. x is a case of A in C iff x satisfies the definition of A-in-C that P would produce given C.

However, in the Tarskian case, we cannot do this for the simple reason that (2) would end up being circular if A is satisfaction! To understand what it is to satisfy a definition one needs to know that which one is trying to define. So in Tarski's case—and pretty much in Tarski's case alone—procedural definition is not the same as direct definition.

Nonetheless, a procedural definition, even when it does not give rise to a direct definition, is valuable—as long as the grasp of the procedure does not depend on the concept to be defined. And here, I think, is the real failure of Tarski's definition: one's grasp of the concept of a predicate—which is central to the method—is dependent on one's grasp of the concept of satisfaction.

Saturday, October 17, 2009

Seeing cause and effect

So last night we were out observing (through telescopes; the anthropology was merely accidental), and there were some guys in the shadows apparently smoking a controlled substance. Suddenly, they get on their motorcycles and clear out. Moments later we see a police car coming into the area. So, pace Hume, we saw cause and effect. Moreover, Hume's analysis of the phenomenology is incorrect. It was not the case that seeing the cause led to (caused?!) an expectant feeling in us, because we saw the cause—the police car—after seeing the effect. (Presumably, the smokers saw it before they cleared out.)

I also saw IC 1396, M 31, M 32, M 33, M 45, M 52, M 110, the Double Cluster, NGC 6940, NGC 6960, NGC 6992, NGC 7000, NGC 7009, NGC 7027, NGC 7235, NGC 7635, Uranus and Jupiter. Of these, the Veil Nebula was particularly impressive. I had never seen it before, nor had I seen any photos of it, so I didn't know exactly what I was looking for. I couldn't find it without a filter. Finally, I put an OIII filter on my finder scope, and it showed up as a faint and large arc. With the filter on the 13", then, it looked really nice—lots of complexity. That, too, is an effect—an effect of a supernova. But while I was in a position to know the Veil to be an effect, because it was labeled "SNR" (supernova remnant) in a catalog on my PDA, I did not see it as an effect, in the way I saw the guys leaving as an effect.

Friday, October 16, 2009

A fun circularity

Yesterday, I was interested in a paper because I was interested in that paper. Here's the story. I was interested in a paper by John Norton. A colleague mentioned that he had come across a paper and described the topic it was on. It was closely related to the topic of the paper that interested me, and hence I became interested in the paper that the colleague had come across. However, as it turned out, it was the same paper, though in a revised version.

Sometimes an enthymematic explanation is circular, but the circularity disappears once the details are filled in.

Thursday, October 15, 2009


Suppose that only propositions can be true or false. In a much earlier post I expressed a suspicion of conjunctive definitions. But if bivalence is right, then the following definition of falsity seems very plausible:

  • x is false iff x is a proposition and x is not true.
But this is conjunctive, so I should be suspicious of it.

I could say that the suspicious conjunctiveness shows that in fact this isn't the right definition of falsity. Instead, I should have a definition friendlier to non-bivalent views, such as that x is false if and only if not-x is true. I am not sure I want to do that, though.

Another move would be to dig in my heels and say that there are two properties. There is falsity and falsity*. x is false* iff x is not true. x is false iff x is a proposition and x is false*. Chairs are false*, but only a proposition can be false. The more basic and natural property is falsity*. But English, for whatever pragmatic reason, has a single word for "false" and lacks a single word for "false*". Thus the English "false" denotes a less basic property, but this has some pragmatic explanation. However, in philosophizing, we should work as much as possible with the more basic concept, that of falsity*. Extending truth to sentences and beliefs, we then get to say that "All mimsy were the borogoves" is false*, just as my computer and "The sky is now pink" are false*.

An argument for retribution

  1. (Premise) Every basic kind of desire is either appropriate or a distortion of an appropriate kind of desire.
  2. (Premise) The desire for revenge is a basic kind of desire.
  3. (Premise) If the desire for x (where x is the sort of thing that can be done) is appropriate, then x is sometimes appropriate.
  4. (Premise) If revenge is sometimes appropriate, retributive punishment is sometimes appropriate.
  5. (Premise) The only desire that the desire for revenge could be a distortion of is a desire for retributive punishment.
  6. Therefore, retributive punishment is sometimes appropriate.

Wednesday, October 14, 2009

Why do we dislike it when bad things happen to us?

It is easy to give a theistic answer to the question in the title:

  1. Bad things should be avoided, and so it is likely that God would make rational beings dislike them.
Presumably, the naturalistic story is going to be roughly something like this:
  2. We tend to avoid things we dislike (this may even be analytic), and bad things tend to be detrimental to our fitness, so there is selection for dislike of bad things.
But there is still a puzzle: Why is it that bad things tend to be detrimental to our evolutionary fitness? Is it not a great coincidence on a naturalistic account that such highly varied qualities as ignorance, loss of limb and cowardice have both the property of badness and the property of being detrimental to fitness?

Of course some folks may say that there is no puzzle here, because our belief that these qualities are bad is caused by the fact that they are detrimental to fitness. However, that only answers why it is that there is a correlation between being believed to be bad and being detrimental to fitness, while the puzzle was about the correlation between being actually bad and being detrimental to fitness. Some of the folks I am imagining will go on to say that there is no such thing as badness, only beliefs about badness, and others will go relativistic and say that to be bad is to be believed to be bad. The problems with these options are obvious and well-known.

The sensible naturalist had better be a realist about the good and the bad. And then the correlation between badness and lack of fitness is, indeed, puzzling.

Tuesday, October 13, 2009

How surprising is evil?

According to the argument from evil:

  1. The evils of this world are much more surprising given theism than given atheism.
But if (1) were true, then we would expect:
  2. Theists tend to be much more surprised by evil than atheists.
However, I do not think (2) is in fact observed, and this provides evidence against (1).

Objection 1: Theists are irrational, and irrational people may not be surprised by the objectively surprising.

Response: This proposed explanation of the non-occurrence of (2) would itself lead to a further prediction:

  3. The more rational a theist, the more likely she is to be surprised by evil.
But (3) is definitely not observed. In fact, the contrary is probably the case.

Objection 2: This is a version of the problem of old evidence. In old evidence cases, one is not surprised by the evidence as one already knew it.

Response: Still, if (1) is true, we would at least expect:

  4. Theists, and if not in general then at least the more rational ones, are significantly more surprised than atheists to learn of new and particularly heinous evils.
But I do not think this is actually observed.

None of this is a conclusive refutation of (1). But it does decrease the likelihood of (1).

Monday, October 12, 2009

Some naive thoughts on syntax

I am neither a linguist nor a philosopher of language, so what I will say is naive and may be completely silly.

It seems to be common to divide up the task of analyzing language between syntax and semantics. Syntax determines how to classify linguistic strings into categories such as "sentence", "well-formed formula", "predicate", "name", etc. If the division is merely pragmatic, that's fine. But if something philosophical is supposed to ride on the division, we should be cautious. Concepts like "sentence" and "predicate" are ones that we need semantic vocabulary to explain—a sentence is the sort of thing that could be true or false, or maybe the sort of thing that is supposed to express a proposition. A predicate is the sort of thing that can be applied to one or more referring expressions.

If one wants syntax to be purely formal, we should see it as classifying permissible utterances into a bunch of formal categories. As pure syntacticians, we should not presuppose any particular set of categories into which the strings are to be classified. If we are not to presuppose any specific semantic concepts, the basic category should be, I think, that of a "permissible conversation" (it may well be that the concept of a "conversation" is itself semantic—but it will be the most general semantic concept). Then, as pure syntacticians, we study permissible conversations, trying to classify their components. We can model a permissible conversation as a string of characters tagged by speaker (we could model the tagging as colors—we put what is spoken by different people in different colors). Then, as pure syntacticians, we study the natural rules for generating permissible conversations.

It may well be that in the case of a human language, the natural generating rules for speakers will make use of concepts such as "sentence" and "well-formed formula", but this should not be presupposed at the outset.

Here is an interesting question: Do we have good reason to suppose that, if we restricted syntax to what can be discovered by this methodology, the categories we would come up with would be anything like the familiar linguistic categories? I think we are not in a position to know the answer. The categories that we in fact have were not discovered by this methodology; they were discovered by a mix of this methodology and semantic considerations. And that seems the better way to generate relevant syntactic categories than the road of pure syntax. But the road that we in fact took does not allow for a neat division of labor between syntax and semantics, since many of our syntactic categories are also natural semantic ones, and it is their semantic naturalness that makes them linguistically relevant.

Friday, October 9, 2009

Non-semantic definitions of truth

Here is a good reason to think that Tarski-style attempts at a definition of truth that do not make use of semantic concepts are going to fail. Such attempts are likely to make use of concepts like predicate and name. But these are semantic concepts. A predicate is something that can be applied to a name, and a name is something to which a predicate can be applied, and application is a semantic concept. Moreover, the definition of truth is going to have to presuppose an identification of the application function for the language (which takes a predicate and one or more names or free variables, and generates a well-formed formula, say by taking the predicate, appending an opening parenthesis, then a comma-delimited list of the names/variables, and then a closing parenthesis). But there is a multitude of functions from linguistic entities to linguistic entities, and to say which of them is application is to make a semantic statement about the language.


The virtues support each other in two ways: (i) having one helps gain another; (ii) each helps to achieve the ends of the others. In regard to (ii), note that it is easier to achieve the goals of prudence if one is chaste, sober and eats in moderation, to achieve the goals of generosity if one is prudent and brave, to achieve the self-knowledge that humility aims at if one is wise and sober, and so on. This is partly distinct from (i).

The vices, on the other hand, support each other in sense (i), but hamper each other in sense (ii). Thus, laziness may lead to gluttony (having nothing better to do, one may just eat) and lust may lead to greed (in order to impress potential sexual partners): having a vice helps one gain another. But, in fact, the goals of the vices hamper one another. Lust is expensive, and hence hampers the goals of greed. Wrath makes it harder to make money and keep sexual partners. All the vices, including vanity itself, hamper the goals of vanity by making one appear ridiculous. Conversely, sloth and cowardice hamper the goals of all the other vices.

So, while type (i) support among the virtues is a delightful thing, because the virtues also help to achieve one another's goals, type (i) support among the vices is a baneful thing, because the vices hamper the achievement of one another's goals, but nonetheless the vices lead to one another.

This is a fine, and very broadly both Kantian and Aristotelian, answer to the question of why be virtuous.

Thursday, October 8, 2009


Here is an argument for S4. We want metaphysical necessity to be the strongest kind of necessity without arbitrary restrictions. If one responds that conceptual or strictly logical necessity is stronger, the answer is that these necessities are, nonetheless, arbitrarily restricted, being dependent on a particular set of rules of inference and axioms. (The only non-arbitrary way to specify which axioms are permitted is to say that the axioms are all the fundamental metaphysically necessary propositions, and then we presumably get metaphysical necessity.) Now, if L is a necessity operator, then LL is also a necessity operator. If LL is not equivalent to L, then LL is a stronger necessity operator. If LL counts as arbitrarily restricted, then we have reason to think that so does L, since L is even more restricted than LL, and it seems arbitrary to work with L instead of LL or LLL. And if LL doesn't count as arbitrarily restricted, then L is not the strongest non-arbitrarily restricted necessity operator. So if L is metaphysical necessity, L and LL are equivalent.

The dual of this argument is that metaphysical possibility is the most fundamental sort of possibility. But if M is metaphysical possibility, and MM is not equivalent to M, then MM will be a more fundamental possibility. So, if M is metaphysical possibility, M and MM are equivalent.
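In the usual box-and-diamond notation, the conclusion that L and LL are equivalent is exactly what distinguishes S4 from weaker normal systems. A sketch, assuming the logic already includes the T axiom:

```latex
% The characteristic S4 axiom: what is necessary is necessarily necessary.
\Box p \rightarrow \Box\Box p
% The converse direction already follows from T
% (\Box q \rightarrow q, taking q = \Box p):
\Box\Box p \rightarrow \Box p
% Together: \Box p \leftrightarrow \Box\Box p, i.e. L and LL are equivalent.
% Dually, for possibility: \Diamond\Diamond p \leftrightarrow \Diamond p.
```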

Wednesday, October 7, 2009

What's wrong with Tarski's definition of application?

Tarski's definition of truth depends on a portion which is, essentially, a disjunctive definition of application. As Field noted in 1974, unless that definition of application is a naturalistically acceptable reduction, Tarski has failed in the project of reducing truth to something naturalistically acceptable. Field thinks the disjunctive definition of application is no good, but his argument that it is unacceptable is insufficient. I shall show why the definition is no good.

In the case of English (or, more precisely, the first order subset of English), the definition is basically this:

  1. P applies to x1, x2, ... (in English) if and only if:
    • P = "loves" and x1 loves x2, or
    • P = "is tall" and x1 is tall, or
    • P = "sits" and x1 sits, or
    • ...
The iteration here is finite and goes through all the predicates of English.
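For a toy three-predicate fragment, the fully written-out disjunction in (1) can be mimicked directly. The extensions below (LOVES, TALL, SITTERS) are invented stand-ins purely for illustration:

```python
# A finite, toy instance of Tarski's disjunctive definition of application,
# for a three-predicate fragment of English. The extensions are made up.
LOVES = {("Romeo", "Juliet")}
TALL = {"Goliath"}
SITTERS = {"Rodin's Thinker"}

def applies(P, *args):
    """P applies to args iff one of the finitely many disjuncts holds."""
    return (
        (P == "loves" and len(args) == 2 and (args[0], args[1]) in LOVES) or
        (P == "is tall" and len(args) == 1 and args[0] in TALL) or
        (P == "sits" and len(args) == 1 and args[0] in SITTERS)
        # ... one disjunct per predicate of the language
    )
```

Note that nothing in the body of `applies` uses a general concept of application; the whole content is in the finite list of disjuncts, which is why the schematic "..." carries all the philosophical weight.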

Before we handle this definition, let's observe that this is a case of a schematic definition. In a schematic definition, we do not give every term in the definition, but we give a rule (perhaps implicitly by giving a few portions and writing "...") by which the whole definition can be generated.

Now consider another disjunctive definition that is generally thought to be flawed:

  2. x is in pain if and only if:
    • x is human and x's C-fibers are firing, or
    • x is Martian and x's subfrontal oscillator has a low frequency, or
    • x is a plasmon and x's central plasma spindle is spinning axially, or
    • ...
Why is this flawed? There is a simple answer. The rule to generate the additional disjuncts is this: iterate through all the natural kinds K of pain-conscious beings and write down the disjunct "x is a K and FK(x)" where FK(x) is what realizes pain in Ks. But this definition schema is viciously circular, even though the infinite definition it generates is not circular. If all the disjuncts were written out in (2), the result would be a naturalistically acceptable statement, with no circularity. However, the rule for generating the full statement—the rule defining the "..." in (2)—itself makes two uses of the concept of pain (once when restricting the Ks to pain-conscious beings and the other when talking of what realizes pain in Ks). Thus, giving the incomplete (2) does not give one understanding of pain, since to understand (2) one must already know what the nature of pain is. (The same diagnosis can be made in the case of Field's nice example of valences. To understand which disjuncts to write down in the definition in any given world with its chemistry, one must have the concept of a valence.)

Now, the Tarskian definition of application has the same flaw, albeit the flaw does not show up in the special cases of English and First Order Logic (FOL). The flaw is this: How are we to fill in the "..." in (1)? In the case of English we give this rule. We iterate through all the predicates of English. For each unary predicate Q, the disjunct is obtained by first writing down "P =", then a quotation mark, then Q, then a quotation mark, then "and x1" flanked by spaces, then Q again. Then we iterate through all the binary predicates expressible by transitive verbs, and write down ... (I won't bother giving the rule—the "loves" line gives the example). We continue through all the other ways of expressing n-ary predicates in English, of which there is a myriad.
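The string-manipulation rule just described for unary English predicates is purely mechanical, and can be sketched as follows (the function name is mine, and this is only the unary case):

```python
def unary_disjunct(Q):
    """Generate the disjunct for a unary English predicate Q, following the
    rule: write 'P =', a quote mark, Q, a quote mark, 'and x1', then Q again."""
    return 'P = "' + Q + '" and x1 ' + Q
```

So `unary_disjunct("sits")` produces the "sits" line of definition (1). The circularity worry raised below is precisely that this generating rule is parochial to English word order, not that any single disjunct is hard to write.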

Fine, but this is specific to the rules of English grammar, such as the subject-verb-object (SVO) order in the transitive verb case. If we are to have an understanding of what truth and application mean in general, we need a way of generating the disjuncts that is not specific to the particular grammatical constructions of English (or FOL). There are infinitely many ways that a language could express, say, binary predication. The general rule for binary predication will be something like this: Iterate through all the binary predicates Q of the language, and write down (or, more generally, express) the conjunction of two conjuncts. The first conjunct says that P is equal to the predicate Q, and the second conjunct applies Q to x1 and x2. We have to put this in such generality because we do not in general know how the application of Q to x1 and x2 is to be expressed. But now we've hit a circularity: we need the concept of a sentence that "applies" a predicate to two names. This is a syntactic sense of "applies", but if we attempt to define it in a language-independent way, all we'll be able to say is: a sentence that says that the predicate applies to the objects denoted by the names—and here we use the semantic "applies" that we are trying to define.

To get clear on the problem, it is worth imagining the whole range of ways that a predicate could be applied to terms in different languages, and the different ways that a predicate could be encapsulated in a quoted expression. Think, for instance, of a language where a subject is indicated by the pattern with which one dances, a unary predicate applied to that subject is indicated by the speed with which one dances (the beings who do this can gauge speeds very finely), and a quote-marked form of the predicate is indicated by lifting the left anterior antenna at a speed proportional to the speed with which that predicate is danced. In general, we will have a predicate-quote functor from predicates to nominal phrases and an application functor from (n+1)-tuples consisting of a predicate plus n nominal phrases to sentences. Thus, the Tarskian definition will require us to distinguish the application functor for the language in order to form a definition of truth for that language. But surely one cannot understand what an application functor is unless one understands application, since the application functor is the one that produces sentences that say that a given predicate applies to the denotations of given nominal phrases.

A not unrelated problem also appears in the fact that a Tarskian definition of truth for a language presupposes an identification of the functors corresponding to truth-functional operations like "and", "or" and "not". But it is not clear that one can explain what it is for a functor in a language to be, say, a negation functor without somewhere saying that the functor maps a sentence into one of opposite truth value. And if one must say that, then the definition of truth is circular. (This point is, at least in part, not original.)

The Tarskian definition of truth can be described in English for FOL and for English. But to understand how this is to be done for a general language requires that one already have the concept of application (and maybe denotation—that's slightly less obvious), and we cannot know how to fill out the disjuncts in the disjunctive definition, in general, without having that concept.

Perhaps Tarski, though, could define things in general by means of translation into FOL. Thus, a sentence s is true in language L if and only if Translation(s,L,L*) is true in L*, where L* is a dialect of FOL suitable for dealing with translations of sentences of L (thus, its predicates and names are the predicates and names taken from L, but its grammar is not that of L but of FOL). However, I suspect that the concept of translation will make use of the concept of application. For instance, part of the concept of a translation will be that a sentence of L that applies a predicate P to x will have to be translated into the sentence P(x). (We might, alternately, try to define translation in terms of propositions: s* translates s iff they express the same proposition. But if we do that, then when we stipulate the dialect L* of FOL, we'll have to explain which strings express which propositions, and in particular we'll have to say that P(x) expresses the proposition that P applies to x, or something like that.) The bump in the carpet moves but does not disappear.

None of this negates the value of Tarski's definition of truth as a reduction of truth to such concepts as application, denotation, negation (considered as a functor from sentences to sentences), conjunction (likewise), disjunction, universal quantification and existential quantification.

Tuesday, October 6, 2009


I've run this argument before. But let's do it again, maybe more clearly. If some things can have non-mereological parts, the following scenario is possible: an entity has m parts to begin with, and then it loses k of them and is left with n = m-k parts. It would be really weird if this couldn't happen in the case where n=1, but could happen in the case where n=2, say. So, plausibly, this can happen in the case where n=1. Suppose Fred, thus, loses all but one of his parts. The remaining part is not identical with Fred—if it were identical with Fred, then prior to the loss of the other parts, Fred would have been identical with a proper part of himself. So at the end, Fred has one part. But the following two claims seem plausible, too:

  1. x is a proper part of y if x is a part of y and x is not identical with y
  2. if x is a proper part of y, then y has at least one other proper part than x.
And the case contradicts the conjunction of (1) and (2).
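Spelled out, writing P(x, y) for parthood and PP(x, y) for proper parthood, the contradiction runs as follows (a sketch of the reasoning in the text):

```latex
% p is the one surviving part, and p is not Fred:
P(p, \mathrm{Fred}) \wedge p \neq \mathrm{Fred}
% By (1), p is a proper part of Fred:
PP(p, \mathrm{Fred})
% By (2), Fred has a proper part q distinct from p:
\exists q\, (PP(q, \mathrm{Fred}) \wedge q \neq p)
% But p was stipulated to be Fred's only remaining part. Contradiction.
```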

I take it that the advocate of non-mereological parts will have to deny (2). This introduces a new class of quasi-simples. A quasi-simple is an entity that has at most one proper part. Like a simple, it is not possible to subdivide a quasi-simple any further. But unlike a simple, a quasi-simple is allowed to have a proper part. This is weird indeed.

It is a puzzling question when two or more simples compose a whole. But once one allows quasi-simples, we get the further puzzling question of when a single simple composes a quasi-simple.

Monday, October 5, 2009

Is anything worth doing?

"Is anything worth doing?" (in a broad sense of "doing") is a question which, if it is worth thinking about, needs to be answered in the positive. But it is clear that the question is worth thinking about. Hence, the answer to it is positive.

I am confident that it can be established that if something is worth doing (or even if the words "is worth doing" express a concept), then naturalism is false. Thus, naturalism is not worth believing in. For if it is false, it is not worth believing in, and if it is true, nothing is worth doing and hence in particular nothing is worth believing in. (I love these sorts of arguments!)

How to establish that if something is worth doing, then naturalism is false? One approach is this. If something is worth doing, then "is worth doing" expresses a property. The only plausible fully naturalistic accounts of the expressiveness of our language are going to have a heavy dollop of causation (or at least explanation) in them. But if naturalism is true, then something's being worth doing (perhaps as opposed to that something is believed to be worth doing) does not enter into causal relations, and is explanatorily inert (unlike perhaps the truthmakers of mathematical truths, which do not enter into causal relations on naturalistic views, but may be explanatorily potent). Of course, this argument sketch has giant holes. But my intuition is that the holes can be filled in.

Saturday, October 3, 2009


According to frequentism, the probability of an event E happening is equal to the limit of N(n)/n as n goes to infinity, where N(n) is the number of times that an E-type outcome occurs in the first n independent trials. (If there are only finitely many trials in the history of the universe, we've got a serious problem, since then we get the conclusion—surely inconsistent with current physics—that all probabilities are rational numbers. I am guessing that in that case we need to make a counterfactual move—if we were to go to infinity, what limit would we get?)

But now here is a puzzle for the frequentist: Why is it that N(n)/n in fact has a limit at all? The non-frequentist has an answer—the Law of Large Numbers implies that, with probability one, N(n)/n converges to the probability of E, if E has a probability. But it would be circular for the frequentist to offer this explanation.
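The Law of Large Numbers convergence that the non-frequentist appeals to is easy to illustrate by simulation; the event probability, trial count, and checkpoints below are arbitrary choices for the sketch:

```python
import random

def running_frequency(p, n, seed=0):
    """Simulate n independent trials of an event with probability p and
    return the relative frequency N(k)/k at a few checkpoints, where
    N(k) counts E-type outcomes among the first k trials."""
    rng = random.Random(seed)
    hits = 0
    checkpoints = {}
    for k in range(1, n + 1):
        if rng.random() < p:
            hits += 1
        if k in (10, 100, 10_000, n):
            checkpoints[k] = hits / k
    return checkpoints

# With p = 0.3, the relative frequency drifts toward 0.3 as n grows, as the
# Law of Large Numbers predicts (with probability one). The frequentist, of
# course, cannot non-circularly invoke that law to explain the convergence.
```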

Friday, October 2, 2009

From the Grim Reaper paradox to the Kalaam argument

A Grim Reaper (GR) timed to go off at t0 is an entity which does the following at exactly t0. If Fred is not alive at t0, the GR does nothing at t0. If Fred is alive at t0, the GR instantaneously annihilates Fred. (If instantaneous action is not logically possible, one can complicate the situation by allowing shorter and shorter time intervals for these actions.) The GR Paradox then is this scenario. Fred is alive at 11:00 am today, he does not die today unless killed by a GR, and he does not get resurrected today. There are infinitely many GRs, timed to go off in a staggered way at the respective times t1, t2, ..., where tn is equal to 11:00 am + 1/n minutes. Well, by 11:02 am, Fred is certainly dead, since it is impossible that he survive a time at which a GR is timed to go off. But when was he killed? He wasn't killed by the 11:00 am + 1 minute GR, because if he were alive just before 11:01 am, then he would have been alive at 11:00 am + 1/2 minute, when another GR went off, and he can't survive a GR going off. It seems that none of the GRs could have killed him, because before each, there was another. So we have a contradiction: he both was and was not killed. Somebody has suggested that Fred is killed by the mereological sum of all the GRs, but that's mistaken in the present setting, because the GRs check whether Fred is already dead before they do anything. So in the present setting none of them actually does anything—and if they don't do anything, how can they kill Fred?

The Kalaam argument needs the premise that there couldn't be a backwards infinite sequence of events. Here is an argument for this:

  1. If there could be a backwards infinite sequence of events, Hilbert's Hotel would be possible.
  2. If Hilbert's Hotel were possible, the GR Paradox could happen.
  3. The GR Paradox cannot happen.
  4. Therefore, there cannot be a backwards infinite sequence of events.
Actually, one could make steps 1 and 2 into a single step, but this is more fun, and, if it works, establishes the interesting corollary that Hilbert's Hotel couldn't exist.

Argument for (1): If there could be a backwards infinite sequence of events, there could be a backwards infinite sequence of events during each of which a hotel room is created, none of which are destroyed. An infinite number of hotel rooms would then be the result.

Argument for (2): If Hilbert's Hotel were possible, each room in it could be a factory in which a GR is produced. Moreover, it is surely possible that the staff in room n should set the GR to go off at 11 am + 1/n minutes. And that would result in the GR Paradox.

The argument for (3) was already given at the beginning of this post.

For about two years, I've smelled this argument coming, but I think my vanity has kept me from seeing it. I still have to confess that I have a really hard time accepting the corollary that Hilbert's Hotel couldn't exist—that corollary seems extremely counterintuitive to me. I wish I had some good way out.

On the other hand, establishing a major premise of an argument for the existence of God is a very happy outcome.

The surprising effectiveness of non-rigorous mathematics in physics

It is well-known how surprisingly effective mathematics is in science. But it is perhaps even more surprising, I think, how effective non-rigorous mathematics is. Physicists by and large do not do mathematics with the rigor with which mathematicians do it (not that mathematicians are that rigorous—basically, I think of the "proofs" published by mathematicians as informal arguments for the existence of a proof in the logician's sense). But, amazingly enough, it works. Neither Newton's nor Leibniz's calculus was rigorous. Yet physics based on calculus did just fine before the 19th century when calculus was made rigorous. Physicists often make approximations—for instance, taking the first term or two in some expansion—without proving any bounds on the approximation, but tend to get it right. Likewise, it is, I suspect, not uncommon for a scientist to write down a set of partial differential equations governing some system, and then say things like "Solutions must be like this..." without ever proving that the equations in fact have a solution. (It won't do, logically speaking, to say: "It must have a solution since it describes a physical system." For in practice none of the equations describe physical systems—they describe approximations to physical systems.)

One might think that a mathematical proof that is not logically valid is like tracing your ancestry to Charlemagne with only two gaps. But it's not like that at all in the sorts of mathematical arguments physicists use. They do tend to get it right, despite not doing things rigorously.

Thursday, October 1, 2009

Yet another argument against naturalism

The following argument is valid:

  1. (Premise) Every reasonable desire can be fulfilled.
  2. (Premise) The desire for moral perfection is reasonable.
  3. (Premise) Moral perfection requires being such that one is morally responsible and yet cannot do wrong.
  4. (Premise) If naturalism is true, a state that entails moral responsibility and an inability to do wrong is not attainable.
  5. Therefore, naturalism is false.

The argument being valid, the question is whether it is sound. I think (1) is plausible if we take "reasonable" in a strong enough sense. It is easy to argue for (4), since our best theories involve such a degree of indeterminism that, if they are complete descriptions of human beings, the possibility of doing wrong will always be there. That leaves (2) and (3). There is an argument from authority for (2): Kant thought so (and made an argument somewhat similar to this one). It does seem that a part of the moral life is the pursuit of moral perfection, and the moral life is reasonable in a strong sense.

That leaves (3). Let's consider two alternate views of moral perfection.

"Moral perfection only requires that one be morally responsible and never any longer actually do wrong." This is too weak, surely. It would mean that anybody whose existence ends with a morally responsible choice to do something right achieves moral perfection just prior to that choice.

"Moral perfection requires having all the virtues to a complete degree. Having the virtues to a complete degree is incompatible with self-initiated wrongdoing, but is not incompatible with losing the virtues or being forced through neurological manipulation into wrongdoing." This view is plausible, but I think the argument can still be run on this view, albeit with some complications. The challenge is whether an analogue of (4) is still true. I think it is. The morally perfect person is not blind to temptation—i.e., she is aware of the goods that temptation offers. (Courage is not achieved by not noticing danger.) But if she is aware of these goods and naturalism holds, then it is surely possible for her to choose these goods, where the choice is constituted by an indeterministic event in the brain, even if she has brain structures that are virtuously pointed the right way. And such a choice would be, surely, a morally responsible one, being a choice of a (lesser) good that comes from one's appreciation of that good. (If it be said that only deterministic choices are morally responsible, then moral responsibility is not available given naturalism in our indeterministic universe, and, again, moral perfection is unattainable.)