Sunday, December 30, 2007

Perverse rewards

Dear Public Diary,
Can anything be done about the perverse rewards of academic life in philosophy, where a new way of being wrong is rewarded while an old way of being right typically gets no reward (unless it has been forgotten and is being rediscovered by one), and the more subtle an error in the argument for the false conclusion--and hence, the more harmful the argument--the greater the reward?

Perhaps prayer and fasting is the only solution. It should be particularly effective, at least in one's own case. Mea culpa.

p.s. Of course God brings good out of evil. An original error can move the field forward, even towards truth. But that God brings good from evil is no excuse for doing evil.

Saturday, December 29, 2007

Music and the problem of evil

Suppose I have superhuman hearing. While you are listening to Beethoven's Ninth, I hear, with great precision, every single sound wave impacting each of my eardrums. But I do not actually assemble them into a coherent piece of music.[note 1] As far as aesthetic appreciation goes, I might as well be looking at a CD under a scanning electron microscope.

It is quite easy to see all the physical details of a work of art without seeing the work as a whole, which work gives meaning to the parts. The details may look nothing like the whole. And suppose now that we did not even see all the details, first, because our perceptual processes already processed the data in some lossy way, perhaps a way irrelevant to the aesthetic qualities (think of someone who, whenever a piece of music came into his ears, received instead a visual representation of a Fourier transform of a distorted version of the sound), and, second, because we did not perceive the whole. Then our judgment as to the aesthetic qualities of the whole, as to the fittingness of parts, would be of very dubious value.

Now it is plausible that a universe created by God is very much like a work of art: a work of which we see only a portion, and in a way that involves perceptual pre-processing of a sort that may lose many significant aspects of the axiological properties of the work.

If that is how we saw things, then we would find a portion of the "sceptical theism" position quite plausible: we would find it quite plausible that various local evils we see fit into global patterns that give them a very different significance from what we thought. I am not saying the local evils disappear, that they are not evil. But the meaning is very different. We see this in music, in literature, in painting.

[A]ll people are under control in their own spheres; but to everyone it seems as if there is no control over them. As for you, you only have to bother about what you want to be, because whatever and however you want to be, the craftsman knows where to put you. Consider a painter. Various colors are set before him, and he knows where to put each color. The sinner, of course, wanted to be the color black; does that mean the craftsman is not in control, and doesn't know where to put him? How many things he can do, in full control, with the color black! How many detailed embellishments the painter can make! He paints the hair with it, paints the eyebrows. To paint the forehead he only uses white. - St. Augustine, Sermon 125

And just as there may be aesthetic values we are unaware of, there may be moral values we are unaware of.

All this points towards a version of sceptical theism. But I think we should not go too far in that direction. For unlike a piece of music, the work of art that the universe is is not executed out of soundwaves that have little individual worth; the universe is a work that incorporates persons--beings in the image and likeness of God. This makes the divine work much more gloriously impressive, especially if God doesn't determine our free actions, but it also means that there is real, intrinsic meaning in the local situations we find, in the pains, joys, sufferings and ecstasies of life. While the meaning of these can be transformed, evils will still be evils. The problem of evil is not solved in this way, but it is, I think, mitigated significantly.

Moreover, thinking in this way solves a problem that plagues standard sceptical theist solutions, namely that they undercut design arguments for the existence of God. For although we might be unable to perceive the significance of the whole, we might perceive significance in the part, and the beauty of a figure in a painting, a chapter in a novel, or a musical movement can be sufficient to establish something about the talent of the artist.

I explore some of these themes in this piece I once presented at a conference, but the online version is sadly bereft of its illustrations in part for copyright reasons.

Friday, December 28, 2007

One thing I have learned from Hume

I have learned at least one very valuable thing from Hume: there is no real metaphysical problem in dualist mind-body causation or in temporally backwards causation.

This is a surprising thing to take from Hume, given that Hume does find dualist mind-body causation troublesome and his account of causation makes temporally backwards causation incoherent. But here is my line of reasoning.

We learn from the Enquiry that what intuitively seem the least problematic cases of causation, namely kinematic interaction between solid objects in contact with each other, are as mysterious as cases of causation that we might intuitively find more surprising, like action at a distance. Leibniz thought there was something deeply odd about gravitational action at a distance--it was as if there were a "mutual love, as if matter had senses"[note 1]. But on Hume's analysis, the puzzlement of someone like Leibniz about action at a distance and the lack of puzzlement about mechanistic interaction are simply due to our being overly familiar with mechanistic interaction. However, if we engage in some mental estrangement from the mechanistic interaction, we realize it is just as mysterious as action at a distance.

Likewise, I think mind-body causation and backwards causation are strange, but they are no more mysterious than mechanistic interaction. Since we should not reject mechanistic interaction (unlike Hume, I am willing to take it at face value), neither should we reject the possibility of mind-body causation or of backwards causation.

Granted, puzzlement at mind-body causation or backwards causation is not the only argument against these. But it is psychologically the most powerful. The only other form of argument against these is something like this: "On account A of causation, mind-body causation or backwards causation is impossible. Account A is true. Hence, mind-body causation or backwards causation is impossible." But we learn from Hume's valiant failed attempt at a regularity account of causation just how hard it is to come up with an account of causation. In fact, I think all accounts of causation that do not simply take causation to be primitive fail. And accounts of causation that do take causation to be primitive have no special difficulty about mind-body causation or backwards causation.

Thursday, December 27, 2007

McTaggart on time

McTaggart is famous for his argument that there is no such thing as time as it is commonly conceived--there is only a sequence with a betweenness relation but no ordering.

The part of the argument that has received most attention is the clever argument that an A-series--the series of times ordered as past, present and future--is incoherent. This argument is that (a) the Battle of Waterloo was exactly the same when it was in the future, when it was in the present and when it was in the past, but (b) it was not exactly the same, because it changed from being future, to being present, to being past. Since (a) and (b) conflict, the notions of pastness, presentness and futurity are incoherent.

What I want to say something about, however, is the second part of McTaggart's argument. The second part of the argument is that a B-series--the series of points in time ordered by an earlier-than relation--cannot do justice to what we mean by "time" because the earlier-than ordering depends on the A-series. The third part was to note that our perception of time is innately contradictory because of flexibility in the length of the "now".

Crucial to the second part of McTaggart's argument is the idea that the A-series is needed to give a direction to the set of times. Given the set of all times and a betweenness relation on them (time t1 is between times t0 and t2, say), we can get two different orderings compatible with the betweenness relation (e.g., we can take t0 to be earlier than t1 and t1 to be earlier than t2, or we can take t2 to be earlier than t1 and t1 to be earlier than t0), and unless we use the A-series to specify that the right ordering is the one that takes the past to be earlier than the future, we have no way of choosing between these two.
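The underdetermination here is a purely structural fact, and it can be illustrated with a toy model (the three-moment setup follows the t0, t1, t2 example above; the code is only a sketch of the combinatorial point, not anything McTaggart himself offers):

```python
from itertools import permutations

# Toy model: betweenness facts alone cannot pick out a unique
# earlier-than ordering. Three moments, as in the example above.

times = ["t0", "t1", "t2"]

# The single betweenness fact: t1 is between t0 and t2.
betweenness = [("t0", "t1", "t2")]

def compatible(order):
    """A total order respects betweenness if every 'between' point
    sits between its two endpoints in the order (in either direction)."""
    pos = {t: i for i, t in enumerate(order)}
    return all(pos[a] < pos[b] < pos[c] or pos[c] < pos[b] < pos[a]
               for (a, b, c) in betweenness)

orderings = [list(p) for p in permutations(times) if compatible(list(p))]
print(orderings)  # [['t0', 't1', 't2'], ['t2', 't1', 't0']]
```

Exactly two total orders survive, and they are mirror images of each other; nothing in the betweenness structure chooses between them. That is the gap McTaggart thinks only the A-series can fill.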

But here McTaggart is mistaken in two ways. First, he has given us no reason to think that the earlier-than ordering is supposed to be defined in terms of the A-series concepts of past, present and future, rather than the other way around. For he gives us no reason to suppose that the A-series has any special resources to distinguish between past and future. Granted, we might posit that the distinction is primitive, but if we do that, we can just as well posit that the choice between the two candidates for the earlier-than relation is to be settled by saying that one of them is primitively the right relation for the job. (The growing block theorist does have an answer, but McTaggart's argument against the A-series supposes an eternalist A-series.)

Second, there certainly are candidates for the job of distinguishing the two orderings. We might, very simply, take a variant of Kant's solution: the earlier-than relation is the one that points in the direction of predominant causation--typically, when A causes B and A and B are non-simultaneous, A is prior to B. If there are any exceptions to this (cases of prophecy might be such), they seem to be rare. This account has the theoretical advantage that it leaves one less thing to explain--namely, why the earlier-than relation happens to coincide with the direction of predominant causation. Granted, one might explain that through a reductive account of causation where the direction of time is part of the reductive base (e.g., Hume's account), but I don't think any such account is plausible.

Wednesday, December 26, 2007

An argument against hedonism

Hedonism is the claim that how well off one is is a function of pleasure. Suppose you experienced the greatest pleasure of your life between times t0 and t1. For what follows, I will assume that the mental supervenes locally on the physical, but even if that is not true (and I doubt it is true), we can modify the description.

If hedonism is true, then the following life is better than yours. Fred begins his existence a day before a time t0*, in the neural state you were in a day before t0. During this day he has the same experiences as you had over the day before t0. He then undergoes the pleasurable experience you had between t0 and t1. As soon as that is over, his neural state is reset to the state it had at t0*. Then he re-experiences the pleasure you had between t0 and t1. Then his memory is reset again. Then he re-experiences that pleasure. And so on, for two hundred years.

Let's say the most pleasant experience of your life was the first time you managed to ride a bicycle without training wheels. Then Fred has that experience, over and over, each time feeling and thinking it's the first time.

Unless the experience you had between t0 and t1 was some kind of supernatural experience like that of union with God, and it is not that kind of pleasure that typical hedonists are talking about, I think Fred's life is horrible. It is a nightmare, but Fred of course thinks it is just great.

But hedonism claims Fred is better off than you are, which is absurd.

Note: One might have personal identity worries about Fred's persistence. However, a bout of amnesia during which one loses memory of a period of time does not destroy personal identity, as long as there are earlier memories. That is why I posited that Fred spends a day sharing the experiences you had for the day before t0, so that the memory of these experiences will anchor his identity through the two hundred years of recurrence.
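The comparison the argument turns on can be made vivid with a crude additive-hedonist tally. All the numbers and time scales below are hypothetical, chosen only for illustration:

```python
# Crude additive hedonism: well-being = total pleasure summed over a life.
# Every number here is hypothetical, purely for illustration.

peak_pleasure = 100            # hedonic value of your best experience, per occurrence
your_lifetime_total = 50_000   # total pleasure of an ordinary full life

# Fred relives the peak experience on a loop for two hundred years.
# Suppose each occurrence lasts an hour and runs round the clock:
repeats = 200 * 365 * 24
fred_total = repeats * peak_pleasure

# On the additive tally, Fred's looped life comes out vastly better:
print(fred_total, your_lifetime_total)  # 175200000 50000
```

On any such tally, hedonism ranks Fred's looped life far above yours, which is exactly the verdict the argument finds absurd.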

Tuesday, December 25, 2007

Essences can be contingent

Merry Christmas everyone!

I have nothing profound to say for Christmas, so here is a bit of philosophy. The Logos became a human being. He existed eternally, but only from around 4 BC was he a human being. Here is an interesting philosophical conclusion one can draw from this: Being human is not always a modally essential property of a human being, where a modally essential property of x is one that x cannot lack. For the Logos has the property of being human, but it was possible for him not to have this property.

This raises an interesting question. Is humanity a modally essential property of us? If yes, then the same property can be modally essential in one being but not in another. This isn't very surprising. (If we allow disjunctive properties, it's easy to come up with such properties. Thus, an electron modally essentially has the property of being a non-elephant or weighing 7000 pounds. It is possible for an elephant to have this disjunctive property, but it will not have it essentially--if it gains or loses weight, it'll lose the property.)

Suppose not. That would have the following interesting consequence: We could hold on to Aquinas' idea that when the body is dead and only the soul is alive after death and before the resurrection, the human being does not exist, while at the same time accepting that we will exist at that time, reduced to a soul.

But I suspect that we are modally essentially human, unlike the Logos. So I have to give a different story about the resurrection. I actually think it's possible to be a human being without a body, though this is a severely defective state.

Monday, December 24, 2007

Church Fathers and Summa Theologica eBooks

Not long ago, I prepared for myself a Plucker eBook of the Church Fathers, based on the great collection at New Advent. With the permission of Kevin Knight, who runs New Advent (thank you!), I have now posted the ebook here. It's 32MB.

I also posted a Plucker eBook of St Thomas's Summa Theologica at the same location.

The Plucker format can be read on most modern PDAs (PalmOS, PPC or Linux), computers (Windows XP, Mac OS X or Linux) and the IRex iLiad, and the ebook download page includes information on how to get reading software for your device.

I like the idea of having the Church Fathers with me always on my Palm TX. Right now I'm reading St. Irenaeus. [edited]

Commonality of nature and the Incarnation

St. Athanasius insists that it was crucial for Christ's redemption of us that Christ share both in the divine nature and in the human nature: in the divine nature in order to unite us with God, and in the human nature in order to unite himself with us. The bond of a common nature with us made his redemptive work applicable to us.

The idea that the common human nature is a genuine bond is a fruitful one. (A lot of science-fiction from the middle of the last century takes this bond to be important. Yes, the aliens of the stories are persons, but there is a special bond that human persons share. However, a number of science-fiction writers confused this special bond with some kind of human superiority to the aliens they populated their stories with. But that is mistaken, a mistake which we will avoid if we remember C. S. Lewis's discussion of two kinds of patriotism--the bad kind where one likes one's country because one thinks one's country is better and the good kind where one simply has affection for one's country and its institutions and culture.)

It is, however, tempting after Kant to see what is significant about us as not our humanity, which integrally includes both the personal and the animal aspects of our existence, but as just the personal aspects. If we see what is significant about us as just personhood, then Athanasius' account of why the Incarnation was needed loses some of its force. For if what is significant about us is personhood, then the second person of the Trinity already had personhood prior to the Incarnation. Admittedly that personhood was not precisely like ours--if St. Thomas is right, we can term the Logos and ourselves "persons" only by analogy. But nonetheless there is an analogy there, and the fleshly nature of the Incarnation becomes less clearly needed.

It is theologically important to hold on to the idea that we are not just persons. We are also animals. We are human beings with all that this entails. That is one reason why accounts that attempt to reconcile evolution with the divine plan by insisting that God only cared about producing persons, and left it to a chance he did not control whether these persons should be mammals or reptiles, bipeds or quadrupeds, and so on, are theologically mistaken. A part of the significance of the Incarnation is that our concrete enfleshment matters. The kind of persons we are is defined in large part by our flesh, and the kind of flesh we have is defined in large part by its aptness towards personal activity. Ignoring the concrete enfleshment is apt to lead us to philosophical error, such as the error of those who think that there are two co-located beings in front of this computer, one a person and the other an animal, an error that leads to moral mistakes on issues like abortion and euthanasia.

What is this commonality of nature that all humans have and which St. Athanasius thought so important? Platonists will say it is our common participation in a single thing, the Form of Humanity. Aristotelians will say that it is our possession of numerically distinct essences, which are, nonetheless, qualitatively the same. The Platonic story fits somewhat better with St. Athanasius' account, but both accounts provide an ontological basis for the commonality of nature.

Christ, having reconciled us human beings with God will also re-integrate our nature, bringing the animal and the personal together, when he transforms us in the resurrection, completing his new creation in us. Blessed be his name!

The Word became flesh. Let us bend the knees of our body and of our soul before him as we celebrate with joy this jarring truth.

Sunday, December 23, 2007

Lying

Consider some moral claims (they might be prima facie or ultima facie--it does not matter for what I am doing here):

  1. It is wrong to intentionally kill innocent people.
  2. It is wrong to intentionally go against the terms of one's promises.
  3. It is wrong to intentionally appropriate things that belong to someone else.
  4. It is wrong to intentionally engage in sexual relations with someone one is not married to. (Some will want to say that this prohibition only applies to one if one is married, but I think it applies in general. But this won't matter.)
  5. It is wrong to intentionally say what one believes is not true.

Here, item (5) stands out as not quite parallel to the others. In all the others, the subjective state of the agent enters in through the term "intentionally". But in (5), subjectivity enters twice, once through the "intentionally" and a second time through the "believes is not true". I want to suggest that things would be neater if instead of (5), we took the basic form of the moral prohibition in question to be:

  6. It is wrong to intentionally say what is not true.

This seems more closely parallel to the form of (1)-(4).

I think one can derive (5) from (6). If one is intentionally saying what one believes not to be true, then one is acting in a way that one believes will accomplish the intentional saying of what is not true. But it is wrong to act in a way that one believes will accomplish a forbidden thing if one succeeds. Hence, if (6) is true, one is acting wrongly in intentionally saying what one believes not to be true.
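The derivation can be laid out schematically, writing $W(\phi)$ for "it is wrong to $\phi$" (the notation is mine, not the post's; (B) is the bridge principle about acting as one believes will accomplish a forbidden thing):

```latex
\begin{align*}
\text{(6)}\quad & W(\text{intentionally saying what is not true})\\
\text{(B)}\quad & W(\phi) \;\rightarrow\; W(\text{acting in a way one believes will accomplish } \phi)\\
\text{(5)}\quad & W(\text{intentionally saying what one believes is not true})
  \qquad \text{[from (6) and (B)]}
\end{align*}
```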

Observe that for (1)-(4) there are doubly subjectivized variants. It is wrong to kill innocent people, but it is also wrong to kill those that one believes to be innocent people (it is wrong to shoot at a deer if one mistakes it for an innocent person). And so on. But the doubly subjectivized variants are secondary, derivative from (1)-(4) and a principle about the wrongness of doing what one believes will accomplish a forbidden thing if one succeeds.

Suppose we take (6) to be the basic form of the moral prohibition, and (5) to be derivative. Then we can say that in the primary case--the case of the liar who not only says what she thinks is false, but what is actually false--what we have is an offense primarily against truth, and only secondarily against sincerity. And I think it is right to see lying as primarily opposed to the value of truth. This, I think, also fits well with the examples (especially the first) in this post.

Seeing the duty to avoid false speech as grounded in an obligation of truth also makes it plausible that we should not speak if we do not have good reason to think that what we are saying is true.

Saturday, December 22, 2007

Knowing what pains are like without ever having had one

The following principle is very plausible:
(*) One can only know what a pain is like if one has had a pain.
The principle is also plausible if one replaces "pain" with "pleasure", "auditory experience", "olfactory sensation", "a feeling of warmth", etc. However, (*) is false unless a certain version of disjunctivism is true. My argument for this shall apply to all the related principles as well.

Suppose we have Mary and Patricia. Mary is a neuroscientist who has never felt a pain. Patricia has had several physically painful, but non-traumatic, experiences in the past. She is not currently in pain. As long as Patricia remembers her painful experiences, we will say that Patricia knows what a pain is like. Moreover, let us suppose that it is not painful for Patricia to remember her painful experiences. It is typically painful to remember traumatic pains, and it is often painful to remember psychological pains, but it is often not painful to remember non-traumatic physical pains, and we can suppose that Patricia's are such. This supposition does not put into question Patricia's knowledge of what pain is like. We learn from these considerations that a version of (*) where "has had" is replaced with "is having" is very implausible.

Now, suppose that Mary implants in herself the memories of Patricia's painful experiences. If Patricia knows what pain is like on the basis of the memories, Mary should be able to know what pain is like on the basis of what she has just implanted in herself. Now, granted, what Mary has as a result of the memory implant are not strictly speaking memories, but apparent memories. If Patricia painfully fell off a bike at age 11, then Mary now has an apparent memory of painfully falling off a bike at age 11. But this memory is merely apparent because Mary did not in fact painfully fall off her bike at age 11. But it does not matter whether the memories be veridical or merely apparent for the knowledge of what pain is like. Mary knows that these are merely quasi-memories (to use Shoemaker's term), since she implanted them herself. But it seems plausible that they give her just as good an insight into what it would be like to have a pain as Patricia's genuine memories do.

Patricia knows by means of being able to see in memory what the experience of pain was like--without actually feeling the pain she is observing in memory. That the painful experience actually happened seems irrelevant to this, and if it is irrelevant, then it seems that Mary ought to be able to see in quasi-memory what the experience of pain was like. Subjectively speaking, after all, to remember and to quasi-remember (or apparently remember) surely feel the same.

There is an objection to this argument, namely disjunctivism. If experiences feel different depending on whether they are veridical or not (Ram Neta defends this view), then Patricia's memory and Mary's quasi-memory may not be similar enough for Mary to be able to know what the pain was like. But a disjunctivism about memory does not seem deeply plausible. Our memories are often foggy even when veridical. That they really do feel different when they are veridical does not appear all that plausible to me. But I have no solid argument against this disjunctivist thesis, and hence all I can say is that if this kind of disjunctivism is false, then one can know what pains are like without having had any.

Note: If the mental locally supervenes on the physical (at least for humans), Mary doesn't even need Patricia for this. She can just run a computer simulation of her brain, body and environment to figure out what her neural memory-correlate state (i.e., the neural state correlated with the memory of the event) would be several years after herself painfully falling off a bicycle, and then implant that neural state in herself.

Related question: Can God, without making use of the Incarnation, know what pains are like? The main reason to deny that apart from the Incarnation, God can know what pains are like is something like (*), combined with a claim that God doesn't feel pain qua God. My above argument, however, shows that (*) is probably false. And this undercuts the main reason for denying that God knows what pains are like. If one could show that my counterexample to (*) is the only kind there can be--so one must either be in pain or have a memory of pain or have a quasi-memory of pain to know what a pain is like--then one could rescue the argument against the possibility of God knowing what pains are like, since divine perfection is incompatible with non-veridical quasi-memories.

Friday, December 21, 2007

Is pain bad because of its raw subjective feel?

I want to consider three arguments that pain is not bad because of its raw feel. If it is bad, it is because of something else associated with that raw feel. (This has at least one application. One way to "answer" the question of why God would allow non-human animals to feel pain is to deny that animals feel pain. A different strategy would be to argue that although they feel pain, their experience does not have in it the ingredient which makes human pain bad, or at least which makes it as bad as it is. If pain is not bad because of its raw feel, then the ingredient that makes pain bad is something else--perhaps something having to do with how we conceptualize the pain--and it might turn out that this is lacking in animals.)

1. The severity of pain is on a continuum, but very brief instances whose severity is near the bottom of the continuum are not bad at all. Pinch yourself. The feeling seems to be somewhere on the pain continuum, albeit very low down. But the feeling of pinching yourself is not at all a bad feeling to have. So pain is bad only when it has non-trivial severity. The main objection to this argument is to claim that the kinds of feelings that are not bad to have are no longer on the pain continuum--they are not pains at all. I am not sure how convincing I find this objection as it stands, but I do have a response. Take one of these items very low down on the pain continuum, allegedly below the level of pain, like the feeling of being pinched. Now if this feeling persisted unchanged for an hour, one would find it quite uncomfortable; it would be bad, and I think one would correctly consider it a low-level pain. However, if the raw feel is unchanged throughout the hour, it follows that it is not the raw feel that makes the experience bad, but something else, such as the duration of the raw feel or, perhaps better, the memory of the duration of the raw feel, and so we get to the conclusion I want. Moreover, it seems that if one feels continuous pain for an hour, one feels pain at every time, and so the raw feel of a pinch would still be a pain, since that is what one is feeling throughout the hour. The latter set of considerations shows that there is another argument against the intrinsic badness of pain: a pain of low intensity is not going to be bad if it lasts for a short enough amount of time.

2. One can fail to notice one is having a pain. You wake up with an unfamiliar sensation. You think about it a little, perhaps shift around, and you realize that it's a shoulder pain. I suggest that at least until you realized that the sensation was painful or at least unpleasant, the feeling wasn't bad for you. So, not all pains are bad, it seems. Here, one can try the same kind of objection. Maybe it only starts to hurt once you realize what is going on. But if so, then what is that unfamiliar sensation you woke up with, if it's not a pain? Does that unfamiliar sensation really change into a pain when you realize what it is?

3. If you manage to get distracted from a pain and focus on something else, so that you don't mind the pain very much at all, it seems that the pain is less bad for you. So even if pain is bad for you, the degree to which it is bad is determined in large part by how focused you are on the pain and what attitude you take to it, rather than by the raw feel. Now one might think the raw feel changes in kind as you get distracted from the pain. But is that really how our perception works? If I focus my attention not on the red cube in my field of vision, but on the golden sphere, without shifting my eyes at all[note 1], is it the case that the appearance of the red cube changes, that it starts to look less red or be less cubical? It seems like the right way to describe what happens, instead, may well be that one's attitude towards the appearance changes. If so, then when we are distracted from a pain, the pain's raw feel doesn't change, but how bad the pain is for us does. Now one might object as follows. There is some little bit of badness that the raw feel is responsible for, and other factors, such as focusing on the pain and/or minding it, are responsible for most of the badness of the pain. But I think this is mistaken. For if the raw feel gives rise to a small bit of badness, then no matter how little one focuses on a pain and how little one minds it, there is always going to be this little badness. In particular, the limit of badness as one's focus on the pain and one's minding of the pain goes to zero will be non-zero. But that seems wrong. As one's focus and minding of the pain goes to zero, the badness seems to go continuously to zero as well.
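The limit argument at the end of (3) can be put compactly. Suppose the badness of a pain decomposed as $B(f,m) = b_0 + g(f,m)$, where $b_0 \geq 0$ is the contribution of the raw feel, $g$ is the contribution of focus $f$ and minding $m$, and $g(f,m) \to 0$ as $f, m \to 0$ (the decomposition and notation are mine, introduced only to restate the argument). Then:

```latex
\lim_{f,\,m \to 0} B(f,m) \;=\; b_0 .
```

If $b_0 > 0$, the badness has a non-zero floor no matter how little one focuses on or minds the pain; since the badness seems instead to go continuously to zero, $b_0 = 0$: the raw feel contributes no badness on its own.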

Final question: If it is not the raw feel that makes pain bad, then what makes pain bad? Is it that it distracts us from things? Is it our attitudes towards it, such as typically not wanting to have the pain (Mark Murphy thinks this)? I don't know exactly. I am attracted myself to the idea that there is such a thing as veridical pain and it is not intrinsically bad, but only bad extrinsically (e.g., because it distracts us), but I also find this idea hard to believe and harder to live.

Thursday, December 20, 2007

Biblia Clerus

The Holy See's Congregation for the Clergy has a (new?) website that looks really useful: Biblia Clerus, a set of online resources (also downloadable) that includes Scripture with linked Patristic commentary, as well as conciliar documents, Denzinger, papal writings, the Code of Canon Law, and lots of other useful materials.

Unjustified true belief

Suppose a supernatural being tells me, and I know that he is speaking truthfully, that if I so choose, he can make sure that over the next year I will unjustifiedly acquire a lot of true beliefs about all kinds of topics that are important and interesting, and, moreover, I will adhere to these beliefs quite firmly even though they will remain unjustified and will not constitute knowledge. Moreover, he promises that the true beliefs will not be misleading[note 1], and that in the process, I will not acquire any false beliefs that I wouldn't have otherwise acquired. Furthermore, if I accept the offer, I will forget that I have accepted it.

Should I accept the offer?

On the one hand, truth is worth having. On the other hand, it seems that my acquiring these beliefs will involve epistemic vice. If I agree to the offer, I am acting like a doxastic consequentialist: getting things right justifies what might be thought by some to be inappropriate means.

I don't know the answer to the question for sure. But I want to observe one thing. The answer to the question does not seem to lie within epistemology as it is usually practiced. I already know the beliefs wouldn't be knowledge. I also know they wouldn't be epistemically justified. But now that I know all that, I need to decide whether or not to allow myself to gain these beliefs. And this, I think, is a question about what a good human life is like, about what virtue and vice are. It seems to me to be a moral question.

If this is right, then ultimately the question how one should act in the doxastic sphere is a moral question. For although this case is contrived so as to make the questions it raises more obvious, I think similar issues of value are present in any decision on a course of doxastic action.

This isn't an argument that epistemic norms, insofar as they have normative force, are a species of moral norms (something that I also think is true), but rather it is an argument that any guidance we get from epistemic norms is subordinated to moral norms, even when the doxastic life is all that is relevantly in view.

Wednesday, December 19, 2007

Knowledge, community and eternity

When I figure out or learn something, I typically find myself with an urge to share it--with family, friends or blog readers. The new knowledge just pulls me to sharing. Some of the pull is vanity. But I don't think it's all vanity. Even if I had to share the knowledge anonymously, without ever having the satisfaction of knowing if anybody ever appreciated it, I would still feel the urge to share it. This isn't a decisive argument that it's not all vanity, but it is evidence. I think I am not alone in this.

While some goods can be enjoyed alone almost as well as together with others, much of the value of knowledge seems to be the value of communal knowledge--and when I talk of "knowledge", I mean to include here "understanding", "insight" and the like. It is natural to share knowledge--that is in large part why we have language. (Self-concealment correlates with psychological and physical problems, but apparently there is insufficient evidence at present to determine what causal relationship, if any, there is here--see The Psychology of Secrets by Anita Kelly. And in any case, when I talk of what is natural, I am talking of a normative naturalness, not a statistical normalcy.)

Even those who want to have an esoteric secret doctrine tend to want to have a community of cognoscenti with whom they can share it. Or at the very least, and most annoyingly, they want others to know that they have secret knowledge that no one else has.

The good of knowledge is, then, incomplete when the knowledge is solitary. Likewise, the good of knowledge is incomplete when the knowledge is evanescent. While eating a chocolate can be satisfactory even though the chocolate disappears in a few seconds, and the memory fades in minutes to hours, to know something for a short bit of time and then forget it completely is to miss out on something important about knowledge. Knowledge is supposed to be a stable and lasting good. We see this in Plato (though he drew from this the wrong conclusions about what can be the object of knowledge, in large part because he was an A-theorist). Suppose we were to learn the answer to some difficult scientific problem, and we were about to die and never live again (of course, I believe that death is not the end of life, but this is a hypothetical question), and we could pass the answer to no one. This could be quite tormenting, and it might almost have been better not to know the answer. And, no, this is not just about bragging rights.

One of the things I have wondered about is how much of the meaning of our lives would remain if death were the end of life, and all humanity were to return "again to the nebula" (to use Russell's phrase). Perhaps some valuable things might not lose that much of their savor under that hypothesis. But knowledge, I think, would be much impoverished if it were all coming to an end. One reason for that is the structure of human knowledge. Finding things out always opens more new questions, and so knowledge points to more questions which in turn point to more knowledge--Nicholas Rescher talks about this really nicely. But I think there may also be something about knowledge itself, about the connection between knowledge and eternity.

The good of knowledge, thus, seems to point to community and eternity, being incomplete without either.

For the Christian, this reflection might point to the Trinity (God's self-knowledge is essentially shared between three Persons who have one intellect), the Incarnation and beatific vision (this self-knowledge is graciously shared with us), and eschatology (our knowledge will, indeed, last--and even our bodies will rise again, so even the kind of knowledge we have as embodied beings will return). Love is greater than knowledge (in its fullness it includes knowledge but goes beyond it), but knowledge (or at least understanding, and justified true belief) is theologically significant as well. After all, Christ is Logos and Sophia.

Tuesday, December 18, 2007

Abortion for fetal disability

Even some people who are on the whole pro-life think that an abortion because of fetal disability can be justified. But such an abortion suffers from a particular vice. I am not saying it is worse than other kinds of abortion, since these others may have their particular vices, too, but it is morally bad in a uniquely problematic way.

The problem is that in such an abortion, a child is killed by one or both parents for not measuring up to a standard through no fault of her own. A way of seeing what is problematic with such an abortion is to reflect on the disposition of a couple who would have had such an abortion if their child had turned out to have a disability of a certain magnitude, but because their child either did not have a disability, or did not have a disability of that magnitude, they did not abort. Such a couple, unless they have changed their attitude (as hopefully they have), do not seem to love their child unconditionally. For they had a standard such that had their child not measured up to it, they would have had the child killed.

What I said above assumes that the fetus is the numerically same individual as the later child. I have argued for this thesis elsewhere, but to those who are not convinced by that thesis, what I said above will not be convincing. However, if one is generally pro-life, one likely accepts this thesis, and hence one should accept that to have a disposition to abort should there be a sufficiently serious disability--whether or not one acts on that disposition--is morally deeply problematic, both in regard to a child who is aborted and in regard to a child who is not.

I also think the above considerations have some weight even if one drops the assumption that the fetus is the numerically same individual as the later child, but instead assumes--as seems very plausible--that the fetus is sufficient to determine the identity of the later child. (I.e., that it is false that in one world, fetus A grows into child B, while in another world, the same fetus A grows into a numerically distinct child C.) For in such a case, we can reasonably say that to judge whether or not to abort on the basis of what the child is going to be like is indeed to pass judgment on that later child, since there is a definite possible future child whose numerical identity is already determined, and to do that is to endanger unconditional love for that child should that child live. (This is different from a case of contraception, where typically there is no definite child determined at the time the contraception is used; it may be that some argument like this can be adapted into an argument against contraception, but this argument as it stands does not of itself seem to prohibit contraception.)

This argument is a special case of one that I have made elsewhere. But I think the case of disability is a particularly clear case of the more general problem. To give credit where credit is due, both arguments are inspired by insightful remarks Wilfried Ver Eecke once made in a conversation with me about unconditionality of love, psychoanalysis and abortion.

Monday, December 17, 2007

Men and women are one species

I've always been puzzled by the following problem. The setting for it is the metaphysical Aristotelian concept of a "species", not the biological one (in the biological sense this is easy). How do we know that women and men are the same species? I.e., how do we know that the species that we belong to is human rather than there being two species, woman and man?

I think a partial answer can be given by taking into account the following observation (I've learned it from David Alexander here who attributes it to Peter Geach, though neither may endorse my application): In general, from the fact that x is a good F and x is a G, one cannot infer that x is a good G.

Here, I intend "is a good F" to mean something like "flourishes at F-ness" or "is good at being an F". Moreover, I am thinking here in the context of Greek notions, so that to be a good human includes both having the virtues of the intellect and will, as well as the excellences of the body. This is at times a somewhat awkward use of "good", but I shall adopt it.

I shall be rough here. I know what I say is not exactly right. For full precision, one needs to work not with the coarse tools of entailment and necessity, but with the more fine-grained tools of explanation and truthmaking. But what I shall say seems approximately right.

Despite what I said above, sometimes inferences like the ones questioned above seem exactly right:
(1) If x is a good lieutenant in a military force, then x is a good officer in the same force.
(The "in the same force" condition is needed, because a spy might be an officer in more than one army, but is unlikely to be a good officer in more than one.) The converse I am less sure of, but it is also plausible:
(2) If x is a good officer in some military force and x is a lieutenant, then x is a good lieutenant in the same force.

Suppose, now, that F and G are kinds such that, necessarily, all Fs are Gs but not conversely, and necessarily x is a good F if and only if x is an F and x is a good G. I shall say that "F is normatively subordinated to G".
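In modal notation (the symbols are mine; I write $\mathrm{Good}_F(x)$ for "$x$ is a good $F$", and read the second conjunct the way it is applied to woman/human below):

```latex
F \text{ is normatively subordinated to } G \;:\Leftrightarrow\;
\Box\,\forall x\,(Fx \to Gx)
\;\wedge\; \neg\Box\,\forall x\,(Gx \to Fx)
\;\wedge\; \Box\,\forall x\,\bigl(\mathrm{Good}_F(x) \leftrightarrow (Fx \wedge \mathrm{Good}_G(x))\bigr).
```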

Conjecture 1: If F is a species and G is a higher genus, then F is not normatively subordinated to G.

Conjecture 1 embodies an Aristotelian notion of the primacy of species in the normative realm. And I think the normative aspect of species-hood is central for Aristotelians (I would like to read the characterization of the essence as to ti ên einai as normative, though it may be stretching the Greek: what [the thing] was [supposed] to be). The species encodes the normative properties for the individuals of that kind. If we can explain the normative properties of an x insofar as it is an F in terms of its aptness at fulfilling G-ness, then F-ness is not the normatively basic property here. F-ness specifies the x further, but does not add any normative force. For reasons of explanatory power, when searching for a thing's species, we should try to find as general a kind as we can without sacrificing any normativity. To be a good human is more than just being human and being a good mammal. One can be really good at mammality while being far from human flourishing. The converse, I think, is false, though: if we fully flourish at humanity, we also flourish at mammality.

Now one is a good woman if and only if one is a woman and a good human; similarly for a man. This is a controversial claim, but I think correct. Therefore woman and man are normatively subordinated to human. If woman and man were species, then human would be a higher genus, and hence Conjecture 1 would be violated. Hence, if Conjecture 1 holds, woman and man are not species.

An interesting question is whether one can come up with a full characterization of species in similar normative terms. Here is something that might come close.

Conjecture 2: A natural kind G is a species iff both (a) G is not normatively subordinated to any larger natural kind, and (b) if F is a proper natural subkind of G such that any good F is necessarily a good G, then F is normatively subordinated to G.

(The "F is normatively subordinated to G" condition in (b) can be replaced by "necessarily a good G who is an F is a good F", because more than half of the definition of normative subordination is implied by the antecedent of the conditional in (b).)

For instance, mammal is not a species. For human is a proper natural subkind of mammal such that to flourish at being a human entails flourishing at being a mammal, but human is not normatively subordinated to mammal. One can be really good at mammality while a miserable failure at all other dimensions of humanity. But one cannot be really good at humanity and a woman while being a failure at being a woman.

A different way to look at the above is to note that the flourishing of a man or a woman as such is basically no different--it is just a particular form of the flourishing of a human.

Sunday, December 16, 2007

An 18th order desire

I guess I have an 18th order desire. This desire is to have a 17th order desire. Why would I want to have a 17th order desire? Because it would be so cool to have a desire of such a high order. And I could pull out my 17th order desire at parties with other philosophers and impress them. So I've got both an instrumental and a non-instrumental reason for my desire. And I wouldn't be surprised if this desire turns out to be settled.

I am not quite so desirous of having a 16th order desire. Sure, it would be kind of cool to have a desire of such high order, but I think prime number order desires are cooler.

What's the point of this little tale? Simply that higher order desires, no matter of how high an order, can be just as frivolous as--and probably even more frivolous than--the typical first order desire. There is, thus, little reason to privilege higher order desires over lower order ones, giving them some kind of an authority whereby they get to define our welfare.

Maybe you'll question whether anybody can really have that 18th order desire. Well, I'd like to have that desire, and maybe I actually do, if only so I could honestly brag about it (that's not actually an instrumental reason of the crassest sort, because of the "honestly"). And that means that I've already got a 19th order desire, namely the desire to have the 18th order desire mentioned at the beginning of this post. I wouldn't be surprised if this 19th order desire lasted for quite a while.

And, hey, it would be quite cool to have nth order desires for every prime number n, and to have no non-prime order desires except for the first order (I guess we need to have the first order ones to make sure we don't forget to take care of our bodily needs). Suppose I actually want that, and that desire is settled, reflected upon, etc. Then here we have an infinitieth order desire--and as frivolous and unimportant as desires get.

Or maybe you'll object that these very high order desires are very weak. Sure. And it would be a mark of insanity if they were very strong. But that underscores my point that there is nothing deeply rationally special about high order desires.

Saturday, December 15, 2007

God has no name

Early Christians considered it important that God has no name, in contradistinction to pagans who had multiple gods and naturally wanted to know which god the Christians worshiped. Eusebius reports of Attalus, being roasted on an iron chair, that "when asked what was the name of God, he answered, 'God has no name like a human being has'." St. Justin Martyr in his second Apology argues that names are given by one's elders, and hence God has no name. Aristides in his Apology says: "He has no name, for everything which has a name is kindred to things created." After quoting Trismegistus to the same effect, Lactantius writes: "God, therefore, has no name, because He is alone; nor is there any need of a proper name, except in cases where a multitude of persons requires a distinguishing mark, so that you may designate each person by his own mark and appellation. But God, because He is always one, has no peculiar name." (The difference in reasons given suggests that there was a well-established doctrine that God had no name, but the reasons for the doctrine were not universally agreed on.)

The idea of God's namelessness is fruitful. It is true that there is the tetragrammaton, but that seems to have been completely unused by Christians until very recently. The early Christians would have thought that the use of a proper name made God too much like a pagan deity. (And, indeed, there is evidence of pagan deities with names akin to the tetragrammaton, e.g., in Ugaritic texts.) The Jewish cessation of use of a proper name for God, and its systematic oral replacement by "Adonai" or "Elohim", would have been seen not as protection against uttering a name too holy for our sinful lips, but as a deepening of the understanding of monotheism, of God's utter transcendence.

But in a way God has a name. The man Jesus Christ is his name to us. Christ is the Logos, the Word that reveals God, the word pointing towards God (I am reading "pros ton Theon" in John 1:1 in a way complementary to the usual reading of "with God"). But his name is unlike the names of humans. His name is a person, consubstantial with him. Nothing less than himself is sufficient for us to call him by. Yet, like a name, he is made sensible in the incarnation.

Friday, December 14, 2007

Who is a combatant?

Jus in bello prohibits deliberate killing of non-combatants. But who is a combatant? Plainly, a uniform a combatant does not make. If a dictator decreed that all toddlers are to wear military uniforms, that would not make them into combatants. Nor would they be combatants if he issued them with guns which they never fired. (The question of what should be done if they did shoot is a more involved one.)

I find particularly challenging the case of people pressed into military service who have the intention to refrain from violent acts. Under compulsion, they wear a uniform and carry a gun, but are no more combative than an unarmed toddler. In World War II, only 15-20% of American soldiers who found themselves along the line of fire would fire at the enemy in any given battle, even if the engagement lasted two or three days (I got this from Grossman's book on killing; Grossman doesn't say if the same 80-85% of soldiers were refraining from shooting in different engagements).

Suppose, then, that you are fighting a just war, and know (e.g., from intelligence reports) that 80% of enemy servicemen have a personal commitment either never to shoot or to shoot only in the air, and are wearing the uniform and carrying weapons under compulsion. You have an armed enemy serviceman in your gunsights. You do not have time to observe him long enough to figure out whether he is a uniformed pacifist or a genuine combatant. Is it licit to shoot him?

If not, then justly waging wars against enemies whose soldiers are likely to be like that seems nigh impossible. On the other hand, how can it be licit to shoot someone who is more likely than not as innocent of bellicose activity as any conscientious objector? If you are a police sniper who sees in the distance five people, one of whom you know is a terrorist about to trigger a bomb via remote control and the other four are innocent bystanders, surely you are not permitted to pick off all five when you can't tell which one is the terrorist.

I see five solutions to this problem:

  1. Drop the moral prohibition against killing innocent people who aren't any danger to anybody. This seems clearly wrong.
  2. Prohibit lethal engagement in such situations. This makes much of what most people would consider the just conduct of war impossible. But perhaps double effect would still allow use of weapons not targeted at particular individuals, with the intention of killing the guilty.
  3. Argue that by wearing uniform and carrying a gun on the side of injustice in a war, one has made oneself a part of an unjust war effort, and that in and of itself makes one a combatant. After all, even if one is not shooting oneself, one is boosting the war effort by providing a certain amount of cover for those who are shooting, etc. And anybody who is boosting the war effort on the side of injustice may be fairly shot. This seems mistaken. If our own POWs were forced to wear enemy uniform, and at gunpoint mixed with enemy troops, we would not consider them combatants on the enemy side--i.e., traitors--as long as they refrained from shooting at us. And the nationality of compelled uniformed people should make little moral difference here.
  4. Follow Germain Grisez in saying that even in wartime, it is always wrong to intentionally kill anybody, guilty or innocent. But one can use guns and bombs and the like to intentionally make enemy soldiers incapable of harming us, and we can do this by the Principle of Double Effect (PDE) without intending that the enemy soldiers die. Their death is not a means to our goal of self-protection. All we need for self-protection is that they be out of commission for the course of the war. If this is correct, then it does not matter if 4/5 of the enemy soldiers are uniformed pacifists, since their deaths are "collateral damage", just like the deaths of civilians standing near the enemy HQ when the HQ is bombed. But is this plausible? Some weapons can be thought of as disabling the enemy, with the enemy's death an unintended side-effect (e.g., if I hit an attacker over the head with a club, then it is plausible to suppose that I do not intend to kill him, but merely to put him out of commission; if he dies, that's an unfortunate side-effect). But it feels like a stretch to use this kind of justification for all weapons. Suppose a sniper shoots an enemy soldier in the head. Let us say, with Grisez, that this is to disable the enemy soldier. But how is the sniper disabling the soldier? By destroying the brain. And how does destroying the brain disable the soldier? By killing him, it seems. And so the killing is intended, as a means, contrary to Grisez. Now maybe one can argue that one is merely trying to destroy the parts of the brain involved in fighting, and the fact that one destroys the whole brain is a mere unintended side-effect. This feels sophistical. And, anyway, if it is wrong to kill the innocent, it seems to be also wrong to intentionally destroy healthy portions of their brains.
  5. Use another Double Effect line of justification. One aims at the enemy serviceman's heart and fires, say. But one isn't intending that the enemy serviceman be dead, or even that his heart be unable to oxygenate his body sufficiently for fighting (as per the previous suggestion). Rather, one is intending a conditional effect: one is intending that this serviceman be dead, or at least that his heart be unable to oxygenate his body sufficiently for fighting, if he is genuinely a combatant. One doesn't know that he is a uniformed pacifist (if one knew that, shooting him would be plainly wrong), and one's intended goal can be conditional. This solution strikes me as the least unsatisfactory of the five, but is not that satisfactory. It would allow the police sniper to shoot the five people one of whom is a terrorist. On the other hand, this solution coheres with the following intuition: It makes relatively little moral difference whether you throw a grenade at five people, killing them at once, or shoot each one individually. (Though the latter may be much more traumatic.) But it could be licit to throw a grenade at five people, four of whom were innocent and one of whom was a terrorist about to kill many people.

Here is an interesting and, I think, important conclusion. When deciding whether the proportionality condition in jus ad bellum holds--whether the war would eliminate more evils than it would cause--one needs to count among the evils the many non-bellicose enemy soldiers who would die as a result of the war. This can have real consequences. Suppose that one estimates that if an invading force is unopposed, they will murder 500,000 of one's people, some soldiers and some non-soldiers, but will cause no other evils. Suppose, further, that by opposing them, one will be able to reduce the death-toll on one's own side to 100,000, but one will need to kill a million enemy soldiers to do so. In killing a million enemy soldiers, one might be killing 800,000 completely innocent and non-bellicose people. Here, the proportionality condition would not seem to be met--one kills 800,000 innocent people to save 400,000. One should instead surrender. Unless, of course, one can argue that the enemy will not stop at killing the 500,000, which in practice is likely. But in any case, one must take the innocent death toll among enemy soldiers into account when figuring out if a war is licit.
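The arithmetic here can be made explicit. A minimal sketch, using the post's figures and applying the earlier 80% non-bellicose estimate to the enemy's soldiers (the variable names and the crude less-than reading of proportionality are mine):

```python
# Crude proportionality check for the hypothetical war in the post.
killed_if_unopposed = 500_000    # one's own people murdered if no resistance
killed_if_opposed = 100_000      # one's own deaths if one fights back
lives_saved = killed_if_unopposed - killed_if_opposed

enemy_soldiers_killed = 1_000_000
non_bellicose_fraction = 0.8     # the earlier 80% estimate
innocents_killed = int(enemy_soldiers_killed * non_bellicose_fraction)

# A very rough reading of the proportionality condition: the innocents
# killed must not outnumber the lives saved.
proportionality_met = innocents_killed < lives_saved
print(lives_saved, innocents_killed, proportionality_met)  # 400000 800000 False
```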

Thursday, December 13, 2007

Long snakes and Relativity Theory

This is an exercise in some rather gruesome metaphysics of parthood. If you don't like gruesome examples, stop reading. There will be a payoff, though--a defense of animalism.

Imagine a long snake, all stretched out, and for simplicity assume uniform linear mass distribution. Let's say the length of the snake is 10 meters and its diameter is 0.1 meters. Suppose that something cuts off the rear 1/4 of the snake, and the cut happens at near-light speed--maybe a blade descends on the poor snake at 90% of the speed of light. Suppose that almost instantaneously before the blade touches the unfortunate snake, a butterfly brushes its wing against the tip of the snake's tail and never has any other contact with the snake or its bits. (Let's say "almost instantaneously" means "in the amount of time during which light travels 0.01 meters".) Then, it seems, the butterfly touched the snake. Call the reference frame in which the above description takes place "frame A". But then there is a reference frame, call it "frame B", in which when the butterfly touched the snake, the tail was already cut off. It seems that in this reference frame, the butterfly did not touch a (or the) snake--he merely touched a cut-off tail.
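The claim that frame B exists can be checked with the Lorentz transformation. A sketch with coordinates of my own choosing, consistent with the story (the cut point at the origin; the tail tip 2.5 m away, since the cut removes the rear quarter of a 10 m snake; the touch 0.01 light-meters before the cut):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def boost_time(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2)

# Frame A coordinates.
t_cut, x_cut = 0.0, 0.0            # the blade severs the tail at the cut point
t_touch, x_touch = -0.01 / c, 2.5  # butterfly touches the tip just beforehand

# In frame A, the touch precedes the cut.
assert boost_time(t_touch, x_touch, 0.0) < boost_time(t_cut, x_cut, 0.0)

# The interval is spacelike (|dx| = 2.5 m > c|dt| = 0.01 m), so the temporal
# order is frame-relative: in a frame moving at 1% of c in the -x direction,
# the cut comes first.
v = -0.01 * c
print(boost_time(t_touch, x_touch, v) > boost_time(t_cut, x_cut, v))  # True
```

The moral is just the standard one about spacelike-separated events: since no signal could connect the touch and the cut, relativity lets either one come first, depending on the frame.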

It follows that the following propositions cannot all be true:

  1. In frame A, the butterfly touched the snake.
  2. In frame B, the butterfly did not touch the snake.
  3. Whether two substances touch does not depend on the reference frame.

So we must deny at least one of (1), (2) or (3). Note that affirming (1) and (2) and discarding (3) would have the interesting consequence that whether battery has been committed depends on the reference frame. For suppose that an accident cuts off my leg at near light speed. Then there can be a situation where you very quickly mutilate the "foot" at the end of that leg (I put it in quotation marks, because there is an issue whether a disconnected foot is a foot) just before the leg is cut off. By exactly the same reasoning, then, in one reference frame it seems you've committed battery--you've mutilated a part of me--and in another you haven't. (In neither reference frame do I feel the mutilation, because the leg is cut off before the nerve signals come from the "foot" to my central nervous system.) I don't think whether battery has been committed should differ between reference frames. (Would you be guilty in one frame and innocent in another?)

So, I need to reject either (1) or (2) or both. It seems to me that (1) is harder to deny than (2). To deny (1) we would have to allow that the tail is fully connected to the snake, and yet not a part of it because it is about to be cut. So, we should deny (2). Hence, something can be a part of a body even though it is already severed from the body.

How long can this weird state of affairs go on? I don't really know. We could say that it goes on until the part is severed in all reference frames. But while that is an attractive idea, it neglects the fact that we are talking about organic parts, and what matters here is organic connections, not relativistic connections. The relevant scale of velocities is the speed of the fastest organic signals, not the speed of light. Suppose that the snake's brain sent a nerve signal to the tip of the tail, and the nerve signal passed the cut-point just before the cutting began. Because nerve signals move much slower than light, before the nerve signal arrives at the tip of the tail, it will already be the case that in all reference frames the tail is severed. Still, I think the tail might count as part of the snake, as long as the signal is traveling there. Let's say the signal tells the tip of the tail to wiggle. Maybe we can say that while the tail yet wiggles under the influence of that nerve signal, the tail is a part of the snake.

Does any of this matter, except as abstruse metaphysics? Maybe. Take animalism, the theory that you and I are animals. A standard objection to animalism is that we can survive as brains in a vat, but a brain in a vat is not an animal. However, the above considerations suggest that, at least for more complex beasts like snakes and humans, connection to the nervous system has a relevance to determining what still is and what no longer is a part of the body. The nervous system, then, has a kind of centrality in more complex animals from the organic point of view. And this makes it plausible the animal could survive as an organism when pared down to just a nervous system, assuming appropriate life-support mechanisms. And it is not a far leap from that to suppose that we can survive as organisms with just central nervous systems, and maybe even with just the central part of the central nervous system, namely the brain (in a vat, of course).

Now, if animalism is true, then we were all once fetuses--I was the numerically same organism as a fetus. Thus, a human fetus is one of us, and presumably killing it is wrong. Thus, we have here a loose line of argumentation from Relativity Theory to the wrongness of abortion. Isn't philosophy fun?

Wednesday, December 12, 2007

Perception of time

The perception of time continues to be empirically investigated. Here is something interesting.

Universalization of maxims

Suppose that I want to see if the following innocent maxim satisfies the first form of the Categorical Imperative:
(1) When a friend is hungry, and you have more than enough food, offer her something to eat.
Kant says I am supposed to imagine (1) universalized into a law of nature. But what is the universalization? Two options:
(1a) For all x and y, when x's friend y is hungry, and x has more than enough food, x offers y something to eat.
(1b) For all x and y, when x believes that x's friend y is hungry and that x has more than enough food, x offers y something to eat.
In this case, both universalizations work--no contradiction (in will or conception) ensues whichever is the one I take to be the relevant one. But there is still the question of which is in fact the universalization relevant to the Categorical Imperative.

Sometimes something substantial depends on this. Take a case like the one Korsgaard considers. Betty wants to murder George who is hiding in your basement. Betty doesn't know that you know she wants to murder George, and she doesn't want you to know. Instead, with seeming innocence, she asks you if George is at your house. Korsgaard thinks that you can universalize lying to deceitful potential murderers (i.e., potential murderers who are deceiving you about their intentions), because even if everybody lied to deceitful potential murderers, the lying would have the desired effect, because the deceitful murderer would think she had successfully deceived you, and hence she would think that you wouldn't be lying back to her. Again, in this case there are two universalizations possible:
(2a) For all x and y, if y is a deceitful murderer, x lies to y about the location of y's prospective victim.
(2b) For all x and y, if x believes y to be a deceitful murderer, x lies to y about the location of y's prospective victim.
For Korsgaard's argument to work, (2b) has to be the right universalization. For if (2a) were a universal law, then deceitful murderers would inductively know that (mysteriously enough) whenever they ask people about the location of their victims, they're lied to, and so the universalization of the maxim would destroy the maxim's effectiveness.

On the face of it, too, universalization (2b) is preferable to universalization (2a), because it's hard to imagine what motivation people have for lying to people they don't believe to be murderers. Maybe, though, in (2a) we are supposed to imagine not only that the maxim is universalized, but that the people know or at least believe that it applies. Thus, maybe, the correct universalization is the hypothesis:
(2c) For all x and y, if y is a deceitful murderer, x lies to y about the location of y's prospective victim out of the maxim "Lie to deceitful murderers in order to save the victim's life."
The universal truth of (2c) entails that in such circumstances, the murderer's deceit is never successful.

So now we have three ways of universalizing--(2a), (2b) and (2c). Which is right? I suspect that (1b) and (2b) aren't the right universalizations. First, they mistake the maxim. The maxim is not: "Feed those that I think are hungry" (leaving aside the condition about the availability of food, for simplicity's sake). What gives me reason to feed them is not my thought that they are hungry, but the fact of their biting, unpleasant hunger. It's not about me, but about the hungry. The mistake of thinking the maxim is "Feed those that I think are hungry" is like the mistake of reasoning: "I think that p. But if p, then q. Therefore q." The latter argument is logically invalid, except as an awkward way of saying "p. But if p, then q. Therefore q." The correct maxim is "Feed those that are hungry."

Second, these subjectivized universalizations give the wrong answers in some cases. Suppose I am the doctor in an asylum. It's an odd asylum, however. All the inmates think that they are the doctor, and that everybody else (including me) is an inmate. Consider the maxim: "If you are the doctor in an asylum, prescribe correct medication for the inmates." (It'll need some qualifications.) No problem universalizing this along the lines of (1a), (2a) or (2c). But suppose you universalize along the lines of (1b) or (2b). Then you get everybody who thinks she is a doctor prescribing what she thinks is correct medication for those she thinks are the inmates. And that wouldn't do at all--indeed, it would involve a contradiction in conception, since everybody would be getting the wrong medication, thereby contradicting the point of prescribing medication. The correct maxim is not: "If you think you are the doctor..."

Objection: Inmates in an asylum do not count for universalization purposes, because they are not rational agents.

Response: Well, tweak the story a bit. They are not literally insane. Instead, they all have some medical problems that require medication, and they have a sanely acquired set of strange false beliefs--they believe themselves to be doctors, even though they are not. (It's easy to come up with half a dozen scenarios where they might come to that belief.)

Is belief required for the appropriateness of assertion?

It is plausible that it is appropriate to assert a proposition only if one believes or maybe even knows that proposition, and it is a lie to assert a proposition if one disbelieves it. Oddly enough, this plausible claim is false.

Example 1: A friend who is both honest and an expert in ichthyology tells you that a certain sentence in German, which sentence exceeds your own abilities of German comprehension, is a truth about fish. Moreover, she tells you that this truth is widely disbelieved by the general public, and that based on her earlier conversation with you, you also disbelieve it. You memorize the sentence. Later on, while speaking in your (poor) German to a friend, you utter that sentence assertively. The sentence expresses a proposition you disbelieve. Yet, you do nothing wrong in asserting it. Nor is this case uncommon. We may quite often parrot what scientists say without understanding it.

Example 2: You are dying and leave with the executor of your estate two sealed letters for your daughter to be opened when she is 65. Letter A is to be given to her if she has had a successful career in aeronautical engineering--currently, you have no idea whether she will or not. Otherwise, letter B is to be given. Letter A opens: "By now you have had a successful career in aeronautical engineering, as I had always wished for you." Letter B opens: "All of my life I have been pressing you to have a successful career in aeronautical engineering, but things did not go my way." Suppose your daughter in fact gets letter A. Then the assertion that she has had a successful career in aeronautical engineering is not the assertion of a belief you had when writing the letter, and whether it is the assertion of a belief you now have depends on contentious claims about what the afterlife is like, claims independent of the appropriateness of the letter. (Similar examples can be manufactured with computer error messages: "This program has received SIGSEGV and must be terminated" does not express anybody's belief.)

If this is right, then a correct account of the norm of assertion does not involve the requirement that one believe or know what one is asserting. It is, however, possible that the norm of assertion requires that one know that the assertion, when it is made, is the assertion of something true. (Count a letter to be given later as an assertion made at the time of the giving.) But it might be simpler just to say that an assertion is minimally appropriate iff it is true, and then derive the requirement that one believe that what is asserted will be true not from anything specific to assertions, but from the general requirement to act only in ways that one believes to be appropriate.

Since I believe only in moral normativity in the case of humans, for me the view implies that there is a duty to speak the truth. Thus, people who make mistakes on calculus exams act wrongly--but if they've studied enough, they are not culpable.

Monday, December 10, 2007

Weakening transworld depravity

Assume Molinism. Plantinga has shown that if transworld depravity holds, then God could be justified in creating a world that would contain evil, since any world containing a significantly free creature is a world that would contain an evil, and it is worthwhile for God to create a world that contains a significantly free creature. Transworld depravity is the thesis that, given what the conditionals of free will in fact are, in any world in which there is a significantly free creature, that significantly free creature sins. This is intuitively a highly improbable thesis, as has been pointed out by more than one author. (Quick argument: Suppose that Jones faces only one choice in his life: a choice between doing an evil he enjoys only slightly and a great good he enjoys greatly. Suppose Jones has no bad habits and is clearheaded. Then the probability of his choosing the good is fairly high. But there are infinitely many possible creatures like Jones. The probability that all of them in a situation like that would choose evil is low.)
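The quick argument can be made vivid with a back-of-the-envelope calculation. Here is a minimal sketch, assuming purely for illustration that each Jones-like creature independently chooses evil with probability 0.1 (the specific number, the independence assumption, and the function name are mine, not part of the argument):

```python
# Probability that every one of n independent Jones-like creatures chooses evil,
# assuming (purely for illustration) each chooses evil with probability 0.1.
def prob_all_choose_evil(n, p_evil=0.1):
    return p_evil ** n

print(prob_all_choose_evil(1))   # 0.1
print(prob_all_choose_evil(10))  # ~1e-10: already vanishingly small
```

As n grows without bound the probability tends to zero, which is the sense in which transworld depravity, requiring all such creatures to sin, is intuitively highly improbable.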

But there are weaker theses than transworld depravity that would get Plantinga the claim that God is justified in creating a world that would contain evil. Here are some:

  • Given the conditionals of free will, any world containing a significantly free creature contains at least one free creature (perhaps a different one) who sins.
  • Given the conditionals of free will, any world containing at least a billion significantly free creatures contains at least one free creature who sins. But God would have good reason to create a billion significantly free creatures, especially if the majority of their choices were good.
  • Given the conditionals of free will, any world containing infinitely many significantly free creatures contains at least one free creature who sins. But God would have good reason to create infinitely many significantly free creatures, especially if the majority of their choices were good. (And, yes, for aught that we know our world could be like that. We do not know that there isn't an infinite number of significantly free creatures in our world.)
  • Given the conditionals of free will, any world containing infinitely many "strongly significantly free" creatures contains at least one free creature who sins. A creature is strongly significantly free if it is significantly free and the structure of incentives for one of its significantly free acts is such that neither choosing good is overwhelmingly probable nor is choosing bad overwhelmingly probable. There seems to be a value in strong significant freedom.
  • Given the conditionals of free will, any world containing at least ℵ10000000 strongly significantly free creatures contains at least one free creature who sins. But God can have very good reason to create a world that contains at least ℵ10000000 strongly significantly free creatures.
And while we likely are in a position to say that the thesis of transworld depravity is improbable, it is not clear that we are in a position to say that every thesis like one of the above is improbable.

An argument against an infinite past

This is a version of an argument by Bill Craig, with probability in place of the Principle of Sufficient Reason. I don't actually think this argument is sound, but the premises might well be plausible to a number of people. Suppose, then, for a reductio, that it is possible for a world to have an infinite past. Let H be the following hypothesis: The world has an infinite past and future (nobody who allows an infinite past will balk at an infinite future, surely), and contains Jones, who counts up from minus infinity (not inclusive) to zero (inclusive), uttering one number a day. Thus, on some day he uttered "-4848", and on the next he uttered "-4847" and so on. Then on some day he finished by uttering "0".

For any time t, let E_t be the hypothesis that Jones has finished counting at a time t* such that t - 1 day < t* ≤ t, i.e., that Jones has finished within the 24 hours preceding t. Let p(t) = P(E_t|H). Since H does not mention any specific times, by the principle of indifference p(t) has to have the same value for every value of t. Thus, for all t, p(t) = p(0).

But now consider the following infinite sequence of events: ..., E_{-3 days}, E_{-2 days}, E_{-1 day}, E_0, E_{1 day}, E_{2 days}, E_{3 days}, .... Given H, it is certain that exactly one of them happens. Thus, P(... or E_{-3 days} or E_{-2 days} or E_{-1 day} or E_0 or E_{1 day} or E_{2 days} or E_{3 days} or ...|H) = 1. Moreover, these events are mutually exclusive, so the left-hand side of this equation is equal to: ... + p(-3 days) + p(-2 days) + p(-1 day) + p(0) + p(1 day) + p(2 days) + p(3 days) + .... But each of the summands here is the same, namely p(0). If p(0) is positive, then this sum is infinite, and hence not equal to 1. If p(0) is zero, then this sum is zero, and hence not equal to 1. And p(0) can't be negative, since it's a probability. Hence, impossibility ensues no matter what value p(0) has. (And, no, infinitesimals won't help. That was shown by Tim McGrew--see this paper of mine.) If all of this works, then we need to reject as absurd the assumption that an infinite past is possible. And once we reject this assumption, the Kalaam argument becomes available.
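The dilemma can be written out compactly; this is just a restatement, in symbols, of the sum in the paragraph above, indexing the events by n days:

```latex
\[
  1 = P\Bigl(\,\bigvee_{n=-\infty}^{\infty} E_{n\,\mathrm{days}} \,\Bigm|\, H\Bigr)
    = \sum_{n=-\infty}^{\infty} p(n\,\mathrm{days})
    = \sum_{n=-\infty}^{\infty} p(0),
\]
% but a doubly infinite sum of a constant c >= 0 is 0 if c = 0 and diverges
% to +infinity if c > 0, so it cannot equal 1 for any admissible value of p(0).
```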

There are two weak points in the argument. The first is the assumption that there is a real difference between the hypotheses E_t for different values of t. If one accepts an A-theory of time, according to which what time it is now is an objective feature of the universe, then one has to agree there is a difference between these hypotheses--it is an objectively different thing for Jones to finish counting today than to have finished counting yesterday. Likewise, if one takes a substantival theory of time, one will see a difference. But a Leibnizian like me, who takes time to be purely relational, will not see a difference between the hypotheses: if one shifts the whole history of the world over by a day, one changes nothing. The second weak point is the assumption that one can apply classical probability theory to events like E_t conditioned on H, of which, again, I am suspicious. (But I accept the Principle of Sufficient Reason, and that can be used in place of the probabilistic reasoning.)

Friday, December 7, 2007

Epistemic norms are a species of moral norms

"Don't accept the testimony of unreliable witnesses." "Avoid having contradictory beliefs." "Discard beliefs all of the justification for which has been undercut." "Accept the best available explanation that is not absurd." "If you assigned a probability to a hypothesis H, and then you received evidence E, you should now assign probability P(H|E)=P(E|H)P(H)/P(E) to the hypothesis."

But why? Well, if I don't follow these injunctions, then I am less likely to achieve knowledge, comprehensiveness, understanding, true belief, etc., and more likely to be ignorant and to believe falsely. Moreover, following these injunctions will develop habits in me that are more likely in the future to lead me to gain knowledge, comprehensiveness, understanding, true belief, etc., and to avoid ignorance and false belief.

But if that is all there is to it, then epistemic injunctions run the danger of not being norms at all. Rather, they seem to be disguised conditionals like:

  1. If you accept the testimony of unreliable witnesses, you are likely to gain false beliefs.
  2. If you don't accept the best available explanation that is not absurd, you're unlikely to gain comprehensiveness in your beliefs.
While every fact is normative in some way, these will not be normative in the relevant way, in the way of imperatives, any more than
  1. If you don't raise your right arm, at most one of your arms will be raised
is normative in that way.

This is so unless knowledge, comprehensiveness, understanding, true belief, etc. are worth having--unless they are good. If they are good, then they are to be pursued, and their opposites are to be avoided. But now we see that the force of epistemic norms just comes down to the fact that, as Aquinas put it, "good is to be done and pursued, and evil is to be avoided". But the pursuit of the good and the avoidance of the bad is what morality is. Hence, the imperative force of epistemic norms--that which makes them genuinely normative--is the same as the imperative force of moral norms. Epistemic norms just are moral norms, but moral norms concerning a particular, and non-arbitrary, subset of the goods and bads, namely the epistemic goods and bads. Likewise, there is a subset of moral norms dealing with goods and bads that come up in medical practice, and we call that "bioethics", and there is a subset of moral norms dealing with goods and bads to the agent, and we call these "norms of prudence", and so on. Non-communal epistemological norms are, in fact, a subset of the norms of prudence. Any subset of the goods and bads defines a subset of morality.

One might object that only some goods and bads fall in the purview of morality. Thus, while good is to be pursued and evil avoided, only in the case of the moral goods is this a moral injunction. But I find quite implausible the idea of identifying specifically "moral" goods. I will argue against the distinction between epistemic and moral goods in two parts. The first part of the argument will establish, among other things, that epistemic norms are a species of prudential norms. The second part will argue that prudential norms are a species of moral norms.

To help someone learn something--i.e., to help her gain certain instances of epistemic goods--for the sake of her learning is to benefit her, and can be just as much an instance of kindness or charity as relieving her pain. (Of course, not every instance of teaching is kind or charitable, just as not every relieving of pain is kind or charitable--the parallel continues to hold.) To distinguish helping others attain epistemic goods from helping others attain non-epistemic goods, and to say that only the latter is moral, is to take an unacceptably narrow view of morality--indeed, I think the only major moral view that makes such a claim is hedonistic utilitarianism, and its making this claim is a count against it. But if it makes no difference to whether we are acting in accordance with morality whether we help others achieve epistemic or non-epistemic goods, why should there be a difference in our own case? The epistemic goods in our own case are not different in kind from the epistemic goods in the case of others. If pursuit of the human good of others involves helping them achieve epistemic goods, so too the pursuit of the human good of ourselves involves helping ourselves achieve epistemic goods. But pursuit of our own human good is what prudence calls us to. Hence, epistemic norms are a species of moral norms. It is no less a part of prudence to strive for true belief than it is to surround oneself with beauty or to keep one's body healthy; it is just as much a duty of prudence to keep from false belief as it is to avoid promoting ugliness in one's environment and disease in one's body.

Now, one might say that there is a defensible distinction between the agent's goods and the goods of others, and it is only the pursuit of the goods of others that morality is concerned with. But this is mistaken. It is an essential part of learning to be moral to realize that I am (in relevant respects) no different from anybody else, that I shouldn't make an exception for myself, that I am one of many, that if others are cut, they bleed just as I do. Utilitarianism and Kantianism recognize this. Aquinas recognizes this in respect of charity (he thinks we owe more charity to ourselves, because we owe more to those who are closer to us, but there is no difference in the kind of duty; in charity we love people because the God whom we love loves them, and so we love ourselves in charity for the same reason that we love others in charity). And a theistic ethics that grounds our duties to people in their being in the image of God, or in God's loving them, will just as much yield duties in regard to one's own goods as duties in regard to the goods of others, since the agent is in the image of God and loved by God just as others are. And if we have duties to our friends, and our friends are in morally relevant respects "other selves", then we likewise have duties to ourselves (Aristotle would certainly endorse this). It is true that some social-contract accounts of morality do not recognize this, but so much the worse for them.

Prudential norms and prudential virtues, then, are a species of moral norms and moral virtues. And epistemic norms and epistemic virtues are a species of prudential norms and prudential virtues.

Thursday, December 6, 2007

The maximally great island

We can formulate Plantinga's modal ontological argument as follows:

  1. Possibly, a maximally great being exists.
  2. A maximally great being exhibits maximal excellence in all possible worlds.
  3. Therefore, there necessarily exists a being that exhibits maximal excellence in all possible worlds. (By 1, 2 and S5.)

Here is a surprising fact. Gaunilo's maximally great island objection doesn't work. For let's try to construct a parallel:

  4. Possibly, a maximally great island exists.
  5. A maximally great island would have insularity and the maximal excellence compatible with insularity (i.e., being an island) in all possible worlds.
  6. Therefore, there necessarily exists a being that exhibits the maximal excellence compatible with insularity in all possible worlds.

But premise (5) is unjustified. For while we have reason to think that a maximally great being would be maximally excellent in each world, we do not have reason to think that a maximally great island would have insularity in each world. Typical islands we know are not essentially insular. Indeed, there are pieces of ground that are islands at high tide but that are not islands at low tide, so insularity is not an essential property for them. In fact, I don't know that any island has insularity essentially. Any island could survive being joined to the mainland by a narrow land-bridge. And even if it were possible to have an island that is essentially insular, it would be unclear that a maximally great island would be essentially insular. Essential maximal excellence is very plausibly a great-making property. But it is far from clear that essential insularity is a great-making property.

But without essential insularity, the most we can show in the place of (5) seems to be this:

  7. A maximally great island exhibits insularity in some worlds, in which it exhibits the maximal excellence compatible with insularity, and in all other worlds, if any, it exhibits maximal excellence.
Note that in the worlds where the entity is not an island, there is no need to limit its excellence to the excellence compatible with insularity. So what conclusion can we draw from (4) and (7)? Here it is:
  8. There necessarily exists an entity which in some worlds exhibits the maximal excellence compatible with insularity and in all other worlds, if any, exhibits maximal excellence.
In particular, we have not shown that this being is actually an island.

But isn't what (8) shows just as absurd? A being that is an island in some worlds and God-like in others? Actually, I suspect (8) is true. God is maximally excellent in all worlds, and in some worlds he is literally an island. What do I mean? Well, as a Christian, I take it that it is possible for God to become incarnate as a human being. But surely not just as a human being. One might think that by the same token God could become incarnate as any sort of being, say an island, so in some worlds he is both God and an island, whereas in the actual world he is God and man. Indeed, wouldn't we expect that the maximally great island would be God, if that were possible? Now one might object that God can only become incarnate as the sort of being that can be a person. But an island can be a person. We can imagine all kinds of persons: persons of carbon and water and the like, like us, persons of plasma, etc. Why not a person of earthy stuff? All kinds of complex computational phenomena could be instantiated by geological interactions within an island. Of course, such a person might end up functioning very slowly. But that's fine. And dualists like me will demand a soul for them. But that, too, is fine. Some possible islands have souls, then.

This strategy won't work for every possible parody. In particular, it won't work for parodies involving maximally great beings that exhibit some quality incompossible with divinity, like sinfulness. But it is not clear that it makes sense to talk of a maximally great being among sinners, say. Sinfulness is defined by falling short of moral greatness, so a maximally great being among sinners would be minimally sinful. But for any sin there is a lesser possible sin, so probably there is no such thing as minimal sinfulness.
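The "no minimal sin" point has the same structure as the familiar fact that a set of positive magnitudes need not have a least element. A sketch, assuming (merely for illustration) that sins can be ranked by a positive severity that can take arbitrarily small values:

```latex
% If for every sin of severity s > 0 a sin of severity s/2 is possible,
% then no minimal severity exists:
\[
  \forall s > 0: \quad 0 < \tfrac{s}{2} < s,
\]
% so the infimum of the possible severities (namely 0) is attained by no sin,
% and "the minimally sinful being" picks out nothing.
```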

Can morality be a system of hypothetical imperatives?

We have, it seems, a hypothetical imperative to brush our teeth:
(1) Brush your teeth if you want them to be healthy.
Suppose I do want my teeth to be healthy. What normative status does (1) have?

First, I could suppose that (1) has no relevant normative force at all (the reason for "relevant" is that every fact is normative in some way). It is simply the statement:
(1a) Brushing teeth conduces to the health of teeth.
On interpretation (1a), a hypothetical imperative isn't an imperative at all--there is no 'ought', just an 'is'. However, it seems we get an imperative once we combine (1a) with the desire to have my teeth be healthy. But it makes no logical sense to "combine a statement with a desire" to get an imperative. That I have a desire is not a relevantly normative fact about me--it's just an 'is'. Rather, on this interpretation, we are implicitly presupposing an imperative like:
(2) Fulfill your desires (ceteris paribus).
(Or maybe more plausibly, we should work in terms of goals rather than desires--exactly the same points apply.) And this imperative is not hypothetical in the relevant sense (it's conditional in respect of the "ceteris paribus", but that's not the relevant sense of "hypothetical")--its force cannot depend on what our desires are, on pain of vicious circularity. So, on this interpretation, hypothetical imperatives are not really imperatives, but we get imperatives when we combine them with some imperative like (2).

The second option is that (1) is genuinely relevantly normative: it expresses a genuine imperative. The imperative is conditional, but in that respect it does not differ from:
(3) If you have promised to whistle Yankee Doodle, you should whistle Yankee Doodle (ceteris paribus).
I do not think that when people are talking of "hypothetical imperatives" they simply mean "conditional imperatives". That isn't the relevant sense of "hypothetical". The duty to keep promises if you've made any is conditional, but not relevantly hypothetical--it applies no matter whether you desire it to or not.

Now the folks who say that morality is a system of hypothetical imperatives do not simply mean that all moral truths are conditional in form. Rather, they mean that the imperative force of morality comes from us adopting morality as a goal or having a desire to live morally. But all this once again presupposes something like (2), of which the imperatives not to murder if you want to be moral and to brush your teeth if you want to be healthy are consequences.

So, it seems to me that hypothetical imperatives presuppose categorical (in the relevant sense) ones like (2). And (2) is not all that plausible, I think. But suppose (2) is true. Then, we can ask what the status of (2) is. Why should I follow my goals? Why should I do what I want? I suspect that any plausible justification of (2) will be broadly moral in nature, based on some notion of human desires as reflective of the good, or of humans as having a duty to be true to themselves. Otherwise, it is not plausible that (2) should have any real authority. (And of course it would not do to ground the authority of (2) in terms of a higher order desire to follow my desires.) If so, then the notion of morality as a system of hypothetical imperatives has imploded.

Tuesday, December 4, 2007

Do non-human animals feel pain?

Assume some form of dualism--whether hylomorphic, substance or property. What is our reason for thinking that non-human animals feel pain? It is presumably that, in situations similar to those that humans find themselves in, animals exhibit both neurological responses similar to the human neurological responses correlated with pain and behavioral responses similar to the human behavioral responses correlated with pain. Humans in these situations feel pain, and, hence, so do non-human animals that have the same responses.

The argument has the following form: In animals of kind A, states of type C (having such-and-such neurological and behavioral states in the presence of bodily damage) are correlated with states of type D. Therefore, in every kind of animal that can find itself in a state of type C, states of type C are correlated with states of type D. This is in general a pretty weak argument. Consider: "In bats, movements of forelimb muscles are correlated with flying. Therefore, in every kind of animal that can exhibit movements of forelimb muscles, the movements when unimpeded are correlated with flying. (In particular, pigs fly.)" I do not mean that the argument is useless. Suppose that we were alien visitors and, having just landed, the only animals we were able to observe were bats. We observed that movements of forelimb muscles are correlated with flying. We might well form the working hypothesis that in all other animals, forelimb muscle movements are correlated with flying. But this hypothesis would carry only a little epistemic weight.

Suppose we are still the aliens who have only observed bats. We now notice burrows on the ground. Sonar reveals that the burrows are being made by small animals--moles--that dig lots of tunnels in the ground, and further sonar observation shows that there is sufficient food for them underground. We conclude as a working hypothesis that these animals spend much of their lives underground. Sonar observations suggest that the animals indeed do have forelimbs. But--and I realize this is a bit unlikely--we cannot tell from the sonar whether or not there are wings (they might be folded back over the body). By the earlier working hypothesis we could conclude that if the moles' forelimbs were to move unimpeded (i.e., in an open space), the moles would fly. But this would be unjustified. For given what we have seen of the moles' lifestyle, we see that flying would be of little if any use to them.

Thus, a defeater to generalizing claims of the form "Animals when in state C exhibit D" from one natural kind to another is found when exhibiting D in state C has significant usefulness in the case of the former kind but has little or no usefulness in the case of the latter kind. And if dualism is true, this seems to be how it is in the case of pain. Conscious pain seems to be of no use to non-human animals. One might say that pain is useful for getting an animal to avoid certain situations. But that may well be incorrect in the case of non-human animals: neural states may well be sufficient to cause the behavior of these animals. It seems plausible that if we could do all the impossibly difficult physics calculations, we could predict what the animals will do based on their physical states and the states of the environment. But if dualism is true, the pain is something over and beyond the physical states.

Humans have free will and make decisions through a process that goes beyond the physical functioning of the body. Or so at least hylomorphic and substance dualists are going to say. It is plausible that pain--which according to the dualist goes beyond the physical--is needed in order to inform this non-physical process. But this kind of a use won't be there in the case of non-human animals. Moreover, knowledge has non-instrumental value in the case of humans, and the kind of knowledge that pain conveys thus may have non-instrumental value in the case of humans. It does not seem that plausible that this kind of knowledge has much non-instrumental value for non-human animals.

I am not saying animals don't feel pain. I am simply saying that dualists have good reason to find the argument for animals having pain to be evidentially weak.

An objection to the above is that God exists, and if animals did not feel pain we would be deceived in seeing them apparently feel pain, and God would not allow such deception. This objection has some force, but we had better not take the claim that God would not allow deception to imply that in all cases of something seeming a certain way, it is in fact that way. We already know that some animal behaviors seem humanlike but in fact have a different significance in non-humans, so we know to be cautious in inferences from humans to other animals. (Interestingly, the deception argument does not apply in the case of animals who lived before human beings came on the scene. Thus, someone impressed by this argument could hold that before the Fall, there was no animal pain.)

As an application, what I have said implies that the argument for the non-existence of God from the evil in animals' suffering pain is quite weak for dualists. It is weak first of all because the argument for animals' suffering pain will be rightly judged weak by dualists. But it is even weaker than that. For the main reason that could remain, after the above arguments, for a dualist to keep on presuming that animals feel pain is that perhaps something of the non-instrumental value that pain has in human life could be there in animals--maybe truth about the state of one's body is innately valuable. But if that is the argument, then the pain is not an evil, at least not in itself.