Friday, December 10, 2010

The reliability of intuitions

Often, analytic philosophers give some case to elicit intuitions. Intuitions elicited by certain kinds of cases count for less. Here is one dimension of this: Intuitions elicited by worlds different from ours are, other things being equal, reliable in inverse proportion to how different the worlds are from ours. Here is the extreme case: intuitions elicited by cases of impossible worlds.

For an example of such an intuition, consider the argument against divine command theory that even if God commanded torture of the innocent, torture of the innocent would be wrong. Now, the obvious response is that it's impossible for God, in light of his goodness, to command torture of the innocent. But, the opponent of divine command theory continues, if God were per impossibile to command it, it would still be wrong, but according to divine command theory, it would be right. (E.g., Wes Morriston has given an argument like that.)

I've criticized this particular argument elsewhere (not that I think divine command theory is right).[note 1] But here is a point that is worth making. This argument elicits our intuition by a case taking place in an impossible world. But impossible worlds are very different from ours. So we have good reason to put only little weight on intuitions elicited by per impossibile cases.

This does not mean that we should put no weight on them.

Wednesday, December 8, 2010

Responsibility and God

I've been playing with the idea that while responsibility in our case is always contrastive, and tied to choices that must be understood contrastively, God's responsibility is non-contrastive. Thus, I am responsible for writing this post rather than for doing some grading. But God is responsible simpliciter for the existence of kangaroos.

Tuesday, December 7, 2010

Wittgensteinian views of religious language

Wittgensteinians lay stress on the idea that

  1. One cannot understand central worldview concepts without living as part of a community that operates with these concepts.
The non-Christian cannot understand the Christian concept of the Trinity; the Christian and the atheist cannot understand the Jewish concept of God's absolute unity as understood by Maimonides; the theist cannot understand the concept of a completely natural world; and the non-Fascist cannot understand the concept of the Volk. It is only by being a part of a community in which these concepts are alive that one gains an understanding of them.

Often, a corollary is drawn from this, that while internal critique or justification of a worldview tradition such as Christianity, naturalism or Nazism is possible, no external critique or justification is possible. In fact, there is an argument for this corollary.

  2. (Premise) One's evidence set cannot involve any propositions that involve concepts one does not understand.
  3. (Premise) Necessarily, if a proposition p uses a concept C, and a body of propositions P is evidence for or against p for an agent x, then some member of P involves C.
  4. If x is not a member of the community operating with a central worldview concept C, then x does not have any evidence for or against any proposition involving C. (1-3)
  5. (Premise) External critique or justification of a worldview of a community is possible only if someone who is not a member of the community can have evidence for or against a proposition involving a central worldview concept of that community.
  6. Therefore, external critique or justification of a worldview of a community is not possible. (4 and 5)
This is a particularly unfortunate result in the case of something like Nazism, and may suggest an unacceptable relativism.

The argument is valid but unsound, and I think unsalvageable. I think that (5) is false, and on some plausible interpretations of (1), (2) and (3) are false as well.

First of all, people successfully reason with scientific concepts that they do not understand, like the concept of a virus or of gravity. They inherit the concept from a scientific community that they are not members of, and while they do not understand the concept, they grasp enough of the inferential connections involving the concept for it to be useful. Thus, even if I do not really understand the concept of a virus, my evidence set can include facts about viruses that I know by virtue of testimony[note 1], and inferential connections with other facts, such as that if x has the common cold, then many viruses are present in x's body. Thus, (2) is false.

As for (3), I don't know for sure if it's false, but it seems quite possible that while C does not occur in one's evidence set, it might occur in one's rules of inference. And there does not seem to be anything wrong with having a concept in one's evidence set that one does not understand.

But perhaps you are not convinced by the critique of (2) and (3). I suspect this is because you take (1) to be more radical than I do. The "cannot understand" in (1) is understood as entailing "cannot operate with"—even the weak sort of grasp that the layperson has of scientific concepts is denied to non-members of a community in the case of central worldview concepts. On this interpretation of (1), my counterexamples to (2) and (3) fail. I am inclined to think that this interpretation of (1) is the incorrect one because it renders (1) false. The central worldview concepts of a community do not seem to be significantly different from the central concepts of a scientific community. Still, I see the force of such a beefed-up (1), at least in the case of the concepts of the Christian faith (not so much because of the need for community membership as such, but because of the need for grace to enlighten one's understanding).

In any case, (5) is false on either understanding of (1). The reason is simple. To support or criticize a position, one does not need evidence for or against the position. One only needs evidence for or against the second-order claim that the position is true. Often, this is a distinction without much of a difference. I have evidence that

  7. there is life on Mars
if and only if I have evidence that
  8. the proposition that there is life on Mars is true.
However, this is so only because in (8) I refer to the proposition under the description "that there is life on Mars." But take a different case. I go to a mathematics lecture. Unfortunately, as I shortly discover, it's in German. I sit through it uncomprehendingly. At the end of it, I turn to a friend who knows German and ask her what she thought. She is an expert in the field and says: "It was brilliant, and I checked that his central lemma is right." I still don't know what the speaker's central lemma is, but I know that it is true. I do not have evidence for the lemma, and it could even be (say, if the talk is in a field of mathematics I don't know anything about) that I don't have the requisite concepts for grasping the lemma, but I have evidence that the lemma is true.

Likewise, it is possible to have evidence for and against the claim that the community's central worldview propositions are true, without grasping these propositions and having evidence for or against them. For instance, I may not be able to understand what the members of the community are saying in their internal critiques, but I may understand enough of the logical form of these critiques and of the responses to them to be able to make a judgment that the critiques are probably successful. Moreover, even if I do not understand some concept, I may grasp some metalinguistic facts, such as that if x is a Gypsy, then x is not a part of anything in the extension of the term "the Volk", or that if the only things that exist at w are the particles of current physics, and at w their only properties and relations are those of current physics, then w is in the extension of the term "completely natural". Given such facts, I can gain arguments for or against the thesis that the central worldview claims of the community are true. Thus, (5) is false.

There is a hitch in my argument against (5). External evaluation of the community seems to require that while I have no grasp of particular central terms, I have some grasp of the larger grammar of sentences used by members of the community and I understand some of the non-central terms in their language. But what if I don't? This could, of course, happen. The community could speak an entirely foreign language that I am incapable of parsing.

I can make two responses. The first is that (5) is a general claim about communities whose central worldview concepts I do not understand, and that general claim has been shown to be false. There could be some radical cases where the outsider's lack of understanding is so complete that external critique or justification is impossible. But such cases do not in fact occur for us. Humans share basic structures of generative grammar and a large number of basic concepts due to a common environment.

The second response is that the behavior of members of the community can provide evidence for and against the correctness of their central ideas. If their airplanes keep on crashing, there is good reason to think their scientific concepts are bad. If they lead a form of life that does not promote the central human goods, there is good reason to think that their ethics is mistaken, while if they lead a form of life that does promote the central human goods, there is good reason to think that their ethics is sound. Now, of course, I could be wrong. Maybe for religious reasons they want their airplanes to crash and design them for that. Maybe they abstain from some central human goods for the sake of some God-revealed higher good. Maybe they are a bunch of hypocrites, and they aren't really achieving the central human goods. However, such possibilities only show that I cannot be certain in my external evaluation. But the claim that external justification and critique is possible is not the claim that one can achieve certainty in external justification and critique. What I've said shows we can achieve high probability even in cases where the community's language is radically not understandable.

Things might be different if we're dealing with an alien species of intelligent beings. But I suspect we could still come to probabilistic judgments, just somewhat less confident ones.

I think the above considerations not only show that the argument (1)-(6) fails, but that we're unlikely to get any successful argument along those lines.

Saturday, December 4, 2010

Consequence arguments and responsibility

Consequence arguments like Peter van Inwagen's basically conclude that if determinism is true, then if p is any truth, Np is also a truth. Here, roughly (different formulations will differ here), Np says that p is true and that there is nothing anyone could do to make p false.

Supposedly this conclusion is a problem for the compatibilist. But why? Why can't the compatibilist just say: "I freely and responsibly did A, even though N(I did A)"?

I suspect that the consequence argument has a further step that is routinely left out, and this involves an application of the inference rule:

  1. gamma: If Np, then no one is responsible for p.
But gamma might be invalid. Take a Frankfurt scenario. The neurosurgeon watches me. If within two minutes, I don't make the choice to vote for candidate A, or I make a choice to vote for any other candidate, she forces me to vote (or pseudo-vote, since a forced vote may not legally be a vote) for A. Thirty seconds into the two-minute period, I freely choose to vote for A, and do so. I am responsible for my voting for A. Now suppose that the neurosurgeon is not free, and in fact is an automaton that no one could ever have done anything about, and that my choice of how to vote is the first free choice ever made. I am still responsible for voting for A, and hence responsible for its being the case that I vote for A. However, there is nothing anyone could do to make it be false that I vote for A.

The argument above does use a somewhat problematic step: going from "I am responsible for voting for A" to "I am responsible for its being the case that I vote for A". But the point is sufficient to show that gamma is problematic.

My suspicion is that gamma is correct in the case of finite agents and direct agent-responsibility of the sort involved in criminal law, but not in the case of the kind of outcome-responsibility that is involved in tort law. For the kind of outcome-responsibility that is involved in tort law is tied to but-for conditionals: but for my doing something, the harm wouldn't have happened. It is correct to say in the Frankfurt case that I don't have that sort of responsibility. Had I not chosen to vote for A, I would still have voted (or at least pseudovoted) for A. But I do have agent-responsibility for voting for A. If the elections were the American ones, I should be liable in criminal law for voting for A (I am Canadian, so for me to vote in U.S. elections would be an instance of fraud), but not in civil law (even if my vote causes some harm to someone), in the Frankfurt case.

Friday, December 3, 2010

Beta 2: a theorem

Finch and Warfield's version of the Consequence Argument for incompatibilism uses:

  1. beta 2: If Np and p entails q, then Nq
Here, "Np" is: p and nobody (no human?) can ever do anything about p. The argument for incompatibilism is easy. Let P be the distant past and L the laws. Suppose p is a proposition that is determined by the distant past and the laws. Then:
  2. P&L entails p. (Premise)
  3. N(P&L). (Premise)
  4. Therefore, Np. (1-3)
In other words, if something is determined by the distant past and the laws, nobody can ever do anything about it. In particular, if all actions are determined by the distant past and the laws, no one can do anything about any actions. And this is supposed to imply that there is no freedom.

Here's a cool thing I arrived at in class when teaching about the argument. Suppose we try to come up with a definition of the N operator. Here's a plausible version:

  • Np if and only if p and there does not exist an action A, agent x and time t such that (a) x can do A at t; and (b) (x does A at t)→~p.
Here, "→" is the subjunctive conditional. So, Np holds if and only if p and nobody could do anything such that if she did it, we would have not-p.

Anyway, here's an interesting thing. Beta 2 is a theorem if we grant these axioms:

  5. If q entails r, and p→q, and p is logically possible, then p→r.
  6. If x can do A at t then it is logically possible that x does A at t.
Axiom (6) is really plausible. Axiom (5) is a consequence of David Lewis's account of counterfactuals. Analogues of it are going to hold on accounts that tie counterfactuals to conditional probabilities.

The proof of beta 2 from (5) and (6) is easy. Suppose that Np is true and p entails q. For a reductio, suppose that ~Nq. If ~Nq, then either ~q or there are A, x and t such that (a) x can do A at t; and (b) (x does A at t)→~q. Since Np is true, p is true, and hence q is true as p entails q. So the ~q option is out. So there are A, x and t such that x can do A at t, and were x to do A at t, it would be the case that ~q. But ~q entails ~p, since p entails q, so by (5) and (6) it follows that were x to do A at t, it would be the case that ~p. And so ~Np, which contradicts the assumption that Np and completes the proof.
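For readers who want the derivation at a glance, the prose proof can be set out schematically. In the sketch below, ">" abbreviates the subjunctive conditional (written "→" above), and the axiom numbering matches the post's:

```latex
% Schematic proof that axioms (5) and (6) entail beta 2.
% "$>$" abbreviates the subjunctive conditional.
\begin{enumerate}
  \item Assume $Np$ and that $p$ entails $q$; for reductio, assume $\neg Nq$.
  \item From $Np$ we get $p$; since $p$ entails $q$, we get $q$.
        So the ``$\neg q$'' disjunct of $\neg Nq$ is ruled out.
  \item Hence there are $A$, $x$, $t$ with: $x$ can do $A$ at $t$, and
        $(x \text{ does } A \text{ at } t) > \neg q$.
  \item By axiom (6), it is logically possible that $x$ does $A$ at $t$;
        and $\neg q$ entails $\neg p$ (contraposing ``$p$ entails $q$'').
  \item By axiom (5), with the possible antecedent from step 4:
        $(x \text{ does } A \text{ at } t) > \neg p$.
  \item So $\neg Np$, contradicting the assumption that $Np$. $\blacksquare$
\end{enumerate}
```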

So it looks like the consequence argument is victorious. The one controversial premise, beta 2, is a theorem given very plausible axioms.

Unfortunately, there is a problem. With the proposed definition of N, premise (3) says that there is no action anybody can do such that were they to do it, it would be the case that ~(P&L). While this is extremely plausible, David Lewis famously denies it in his essay "Are We Free to Break the Laws?". I think he's wrong to deny it, but the argument in this formulation directly begs the question against him.

Note that in the definition of the N operator, we might also replace the → with a might-conditional: were x to do A at t, it might be the case that ~p. (This gives the M operator in the Finch and Warfield terminology; see also Huemer's argument.) The analogue of (5) for might-conditionals is about as plausible. So once again we get as a theorem an appropriate beta-type principle.

Wednesday, December 1, 2010

A simple design argument

  1. P(the universe has low entropy | naturalism) is extremely tiny.
  2. P(the universe has low entropy | theism) is not very small.
  3. The universe has low entropy.
  4. Therefore, the low entropy of the universe strongly confirms theism over naturalism.

Low-entropy states have low probability. So, (1) is true. The universe, at the Big Bang, had a very surprisingly low entropy. It still has a low entropy, though the entropy has gone up. So, (3) is true. What about (2)? This follows from the fact that there is significant value in a world that has low entropy, and given theism, God is not unlikely to produce what is significantly valuable. At least locally low entropy is needed for the existence of life, and we need uniformity between our local area and the rest of the universe if we are to have scientific knowledge of the universe, and such knowledge is valuable. So (2) is true. The rest is Bayes.
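The Bayesian step can be made concrete with a toy likelihood-ratio calculation. The numbers below are placeholders I have chosen for illustration, not figures from the argument; the point is only that any assignment on which P(low entropy | naturalism) is vastly smaller than P(low entropy | theism) yields overwhelming posterior odds:

```python
# Toy posterior-odds calculation (Bayes' theorem in odds form).
# The likelihood values are illustrative placeholders, not claims of the post.

def posterior_odds(prior_odds, likelihood_h1, likelihood_h2):
    """Posterior odds of H1 over H2 = prior odds * likelihood ratio."""
    return prior_odds * (likelihood_h1 / likelihood_h2)

p_e_given_theism = 0.1        # "not very small" (assumed value)
p_e_given_naturalism = 1e-60  # "extremely tiny" (assumed value)

odds = posterior_odds(1.0, p_e_given_theism, p_e_given_naturalism)
print(odds)  # ~1e59: the evidence strongly confirms theism over naturalism
```

Even a prior odds heavily stacked against theism (say 1e-20 in place of 1.0) would be swamped by a likelihood ratio of this size, which is what "strongly confirms" amounts to here.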

When I gave him the argument, Dan Johnson made the point to me that this appears to be a species of fine-tuning argument and that a good way to explore the argument is to see how standard objections to standard fine-tuning arguments fare against this one. So let's do that.

I. "There is a multiverse, and because it's so big, it's likely that in one of its universes there is life. That kind of a universe is going to be fine-tuned, and we only observe universes like that, since only universes like that have an observer." This doesn't apply to the entropy argument, however, because globally low entropy isn't needed for the existence of an observer like me. All that's needed is locally low entropy. What we'd expect to see, on the multiverse hypothesis, is a locally low entropy universe with a big mess outside a very small area--like the size of my brain. (This is the Boltzmann brain problem.)

II. "You can't use as evidence anything that is entailed by the existence of observers." While this sort of a principle has been argued for, surely it's false. If we're choosing between two evolutionary theories, both of them fitting the data, both equally simple, but one of them making it likely that observers would evolve and the other making it unlikely, we should choose the one that makes it likely. But I can grant the principle, because my evidence--the low entropy of the universe--is not entailed by the existence of observers. All that the existence of observers implies (and even that isn't perhaps an entailment) is locally low entropy. Notice that my responses to Objections I and II show a way in which the argument differs from typical fine-tuning arguments, because while we expect constants in the laws of nature to stay, well, constant throughout a universe, not so for entropy.

III. "It's a law of nature that the value of the constants--or in this case of the universe's entropy--is exactly as it is." The law of nature suggestion is more plausible in the case of some fundamental constant like the mass of the electron than it is in the case of a continually changing non-fundamental quantity like total entropy which is a function of more fundamental microphysical properties. Nonetheless, the suggestion that the initial low entropy of the universe is a law of nature has been made in the philosophy of science literature. Suppose the suggestion is true. Now consider this point. There is a large number--indeed, an infinite number--of possible laws about the initial values of non-fundamental quantities, many of which are incompatible with the low initial entropy. The law that the initial entropy is low is only one among many competing incompatible laws. The probability given naturalism of initially low entropy being the law is going to be low, too. (Note that this response can also be given in the case of standard fine-tuning arguments.)

IV. "The values of the constants--or the initially low entropy--do not require an explanation." That suggestion has also been made in the philosophy of science literature in the entropy case. But the suggestion is irrelevant to the argument, since none of the premises in the argument say anything about explanation. The point is purely Bayesian.

Tuesday, November 30, 2010

Darwinian evolution and determinism

Once I was looking at an old issue of a journal, probably the Review of Metaphysics from the 1950s or 60s, and I came across an intriguing paper arguing that evolution does not help explain the complex structures we find in organisms. The paper tacitly presupposed determinism and in effect noted that there was an exact correspondence between the possible states of the universe now, call it t1, and the possible states of the universe before the advent of living things, call that time t0. There is then an exact correspondence between the possible states at t1 that exhibit the sort of complexity C we are trying to explain and the possible states at t0 that would, over the course of t1-t0 units of time, give rise to C. Therefore, if the direct probability of C arising at t1 at random is incredibly low, the probability of getting a state at t0 that would give rise to C at t1 is exactly the same, and hence also incredibly low, and evolution has made no progress. Consequently, evolution does nothing to undercut design arguments for the existence of God.
Now, the argument as it stands has two obvious holes. First, it assumes not only determinism, but two-way determinism. Determinism says that from any earlier state and the laws, the later states logically follow. Two-way determinism adds that from any later state and the laws, the earlier states logically follow. Fortunately for the argument, actual deterministic theories have been two-way deterministic. Second, the argument assumes that the exact correspondence between states at t0 and at t1 preserves probabilities. This need not be true. If we consider the set [0,1] (all numbers between 0 and 1, both inclusive), and the function f(x)=x^2, then f provides an exact correspondence between [0,1] and [0,1], but if X is uniformly distributed on [0,1], then the probability that X is in [0,1/4] is 1/4, while the probability that f(X) is in [0,1/4] is 1/2 (since for f(X) to be in [0,1/4], X need only be in [0,1/2]). But, again, in the kind of classical physics setting that underlies classical thermodynamic results like the Poincaré recurrence theorem, the transformations between states preserve phase-space volume, and it is very plausible that if you preserve phase-space volume, you preserve probabilities.
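The measure-distortion point is easy to verify numerically; here is a small Monte Carlo sketch of the f(x)=x^2 example:

```python
# Monte Carlo check: f(x) = x^2 maps [0,1] one-to-one onto [0,1],
# but it does not preserve probabilities for X uniform on [0,1]:
# P(X in [0, 1/4]) = 1/4, while P(X^2 in [0, 1/4]) = P(X <= 1/2) = 1/2.
import random

random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]

p_x = sum(x <= 0.25 for x in xs) / N        # estimates 1/4
p_fx = sum(x * x <= 0.25 for x in xs) / N   # estimates 1/2
print(round(p_x, 2), round(p_fx, 2))
```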
Once we add two-way determinism and phase-space volume preservation, which are reasonable assumptions in a classical setting, the argument is in much better shape. (Actually, if you can still have something relevantly like phase-space volume preservation, you could drop the determinism. I don't know enough physics to know how helpful this is.) The argument is now this. Let S be the set of all possible physical states of the universe. For any real number t, the two-way deterministic physics defines a one-to-one and onto function ft from S to S, such that by law the universe is in state s at time t0 if and only if it is in state ft(s) at time t0+t. Let C1 be the subset of S containing all states that exhibit the complexity feature C. Let C0 be the subset of S containing all states that would result in a state in C1 after the passage of t1-t0 units of time. In other words, C0={s:ft(s) is in C1}, where t=t1-t0. Then the probability of C0 is the same as the probability of C1. Hence, if our world's present state's being in C1 was too unlikely for chance to be a reasonable expectation, then the Darwinian explanation in terms of the world having been in a state from C0 at t0 is no better. In particular, if a theistic design hypothesis would do better than randomness if it were a matter of generating a state in C1 from scratch, Darwinism hasn't done anything to weaken the inference to that theistic hypothesis since C0 is just as unlikely as C1. Even if the evolutionary theory is correct, we still need an explanation of why the universe's state was in C0 at t0.
This argument is on its face pretty neat. One weakness is the physics it relies on. But bracket that. The kind of measure-preservation that classical dynamics had is likely to be at least a decent approximation to our actual dynamics. But there is a more serious hole in the argument.
The hole is this. If what evolution was supposed to explain is why it is that the universe is now in a state exhibiting C, the argument would work. But that isn't what evolution is supposed to explain. Suppose C is the existence of minded beings like us. Then it seems that we are puzzled why C is exhibited at some time or other, not why
  1. C is exhibited now.
Sure, evolution can't do a very good job explaining why C is exhibited now, as opposed to, say, 10 million years ago.
So perhaps the explanandum is not that C is exhibited at t1 but that
  2. C is exhibited at some time or other.
But we can predict (2) with unit probability without any posit of evolution simply by assuming that the dynamical system is ergodic: an ergodic system will exhibit C infinitely often from almost every starting point, given reasonable assumptions on C. Thus, if (2) is the explanandum, and we have the classical setting, we don't need evolution. We just need enough time. And, by the same token, (2) is no basis for a design argument.
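To illustrate the ergodicity claim with a standard toy system (my example, not the post's): the irrational rotation x -> (x + α) mod 1 is ergodic with respect to Lebesgue measure, so from almost any starting point the orbit keeps re-entering any fixed target set, with long-run frequency equal to the set's measure:

```python
# Irrational rotation on the circle: a standard ergodic system.
# The orbit visits the small target interval again and again, with
# limiting frequency equal to the interval's measure (here 0.01).
import math

alpha = math.sqrt(2) - 1      # irrational rotation number
lo, hi = 0.30, 0.31           # target set "C" of measure 0.01

x = 0.123                     # arbitrary starting point
steps = 1_000_000
hits = 0
for _ in range(steps):
    x = (x + alpha) % 1.0
    if lo <= x < hi:
        hits += 1

print(hits / steps)  # close to 0.01
```

The simulation shows the visits accumulating without end, which is the sense in which ergodicity alone already makes "C at some time or other" inevitable; what it does not deliver is C arriving quickly.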
Maybe the puzzle is not about (1) or (2), but about:
  3. C is exhibited within 14 billion years of the beginning of our universe.
Ergodicity makes (2) all but inevitable, but it is puzzling that C should be exhibited so soon. After all, 14 billion years is not that much. It's only about three times the age of the sun. This account of what it is that evolution accomplishes in respect of C seems to turn on its head Darwin's emphasis on the "countless ages" that evolution required—in fact, evolution accomplished its task very quickly, and that speed is what the theory explains. Ergodicity without the selective mechanisms would very likely take much longer.
One problem with this as the account of what evolution does to explain C is that currently we do not have very good mathematical estimates of how long we can expect evolutionary processes to take to produce something like C, where C has any significant amount of complexity. So perhaps we do not really know if evolution explains (3).
Another move that one can make is to say that evolution does explain (1), and it does so by giving a plausible genealogical story about C, but the evolutionary explanation does not confer a non-tiny probability on (1). If so, then the evolutionary explanation may be a fine candidate for a statistical explanation of (1), but it will not be much of a competitor to the design hypothesis if the design hypothesis confers a moderate probability on (1).
In fact, we can use the above observations to run a nice little design argument. Suppose that C is the existence of intelligent contingent beings. Then for an arbitrary time t, the hypothesis of theistic design gives at least a moderate probability of the existence of intelligent contingent beings at t, since God is at least moderately likely to fill most of time with intelligent creatures. (And Christian tradition suggests that he in fact did, creating angels first and then later human beings.) Therefore, evolutionary theory assigns incredibly tiny probability to (1)—equal to the probability of getting C from scratch at random—but the design hypothesis assigns a much higher probability to (1). We thus have very strong confirmation of theism.[note 1]
But that assumes an outdated dynamics. Whether the argument can be made to work in a more realistic physics is an open question.

Monday, November 29, 2010

Explanation of action and naturalism

Here are some premises:

  1. If x did A because s, and either (a) it is false that s or (b) the fact that s is not a part of any explanation of x's doing A, then x's doing A because s is defective.
  2. If naturalism is true, then a human's doing something is a natural state of affairs.
  3. If naturalism is true, then moral facts do not explain any natural state of affairs.
  4. That something is morally required is a moral fact.
  5. Some human being non-defectively did something because it was morally required.
And here is the conclusion:
  6. Naturalism is not true.

Premises 2 and 4 are hard to dispute. Premise 5 seems plausible: it would be very odd indeed if every case of acting from duty were defective, assuming of course morality is an objective fact. Premise 3 is a bit tougher. If moral facts reduce to natural facts, then there is no reason to assert 3. For instance, one might reduce "A is required" to "A maximizes utility" and then reduce utility to natural facts about desire or pleasure. Neither step in the reduction seems plausible to me, though both have been defended. Now, naturalism doesn't want there to be non-natural explanations of natural facts. Natural facts either have no explanation or have only a natural explanation, according to the naturalist. So, unless there is a reduction of moral facts to natural ones, 3 is pretty plausible.

That leaves premise 1. I may have a counterexample to premise 1. Suppose I know that it will rain tomorrow, so today I buy an umbrella. It seems that I bought the umbrella because it will rain tomorrow, but the fact that it will rain tomorrow is not a part of any explanation of my buying the umbrella. If this case is non-defective, then 1 is false. However, perhaps, there is something rationally defective in this case. For, perhaps, my reason for buying the umbrella shouldn't be tomorrow's rain, but that the forecast predicts rain.

Currently, I am inclined against 1.

Friday, November 26, 2010

Naturalist theories of mind and corporate personhood

All theories of mind need to do justice to the multiple realizability intuition:

  1. Conscious beings in general, and persons in specific, could have a physical constitution very different from ours (e.g., silicon, plasma cloud, etc.), with the computational algorithms being significantly different as well.
On a naturalist theory of mind, all there is to a person or a conscious being is the physical constitution together with external connections. Therefore, on naturalism, what (1) says is that there could be persons radically different from us in their overall constitution. This means that the naturalist theory of mind must have a very flexible account of what it is to be a person or a conscious being. Presumably, this account is going to be something like this: Conscious beings are ones that represent the external world in certain ways—the best stories about this are causal in nature—and respond in other ways (or at least are of a kind to do this). The specification of the ways in which representation and response are done is not going to be too specific—it must be at a high enough level of generality to do justice to (1). And then persons are going to be the subset of conscious beings that have (or at least are of a kind to have) a particularly sophisticated form of representation and response—perhaps the right kind of representation of the internal patterns of response together with a self-directed response to those patterns.

Here, now, is my hypothesis. Any naturalist story that does justice to (1) will be apt to count many human social groups as both conscious and as persons. Social groups do represent the environment and themselves, and respond to such representations in various sophisticated ways, including self-reflection analogous to that which persons engage in. Social groups have corporate representations that are not the same as individual representations and corporate desires that are not the same as individual desires. To a very rough first approximation, a social group believes p provided that a majority of the members believes p in a way that is appropriately explanatorily connected with their group membership (e.g., their belief is in the right way explained by or explains their group membership), and desires p provided that it has the right kind of tendency to pursue p. Anything that can design an airplane is likely conscious and a person. But an airplane can be designed by both an individual human, and a social group such as two brothers.
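The "very rough first approximation" can be made explicit with a toy predicate. The data layout and the membership-linkage set below are my own illustrative stand-ins, not an analysis of the "appropriately explanatorily connected" clause:

```python
# Toy version of the majority-based approximation of group belief:
# a group believes p when a majority of members believe p in a way
# linked to their membership. The "membership_linked" set is a crude
# stand-in for "appropriately explanatorily connected with membership".

def group_believes(members, p):
    qualifying = sum(
        1 for m in members
        if p in m["beliefs"] and p in m["membership_linked"]
    )
    return qualifying > len(members) / 2

department = [
    {"beliefs": {"p", "q"}, "membership_linked": {"p"}},
    {"beliefs": {"p"},      "membership_linked": {"p"}},
    {"beliefs": {"p"},      "membership_linked": set()},  # believes p, but not qua member
]
print(group_believes(department, "p"))  # True: 2 of 3 members qualify
```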

  1. If naturalism holds, then many human social groups are conscious and a number of these are persons.
Notice that the computational sophistication in human social groups can be very high. For human social groups contain a number of human brains. Think of a computing cluster: a cluster, though somewhat ponderous, can compute anything its parts can.

But:

  1. Human social groups, other than perhaps the Church, are not persons.
(The naturalist is unlikely to worry about the exception.) If this won't do as a direct intuition, then we can argue for it on ethical grounds in the case of many social groups. For instance, an academic Department will often be such as to force the naturalist who does justice to (1) to count it as a person. But a respect is due to a person which is not due to an academic Department. A University administration should not dissolve a Department willy-nilly, but the gravity of dissolving a Department is not nearly comparable to the gravity of killing a person.

The dualist does justice to (1) in a very simple way without being committed to the assertion that social groups are persons: a necessary condition for being conscious is having a soul or something like that, and a plasma cloud could have that, while social groups, at least other than the Church, in fact don't have that. (I am not saying that social groups couldn't have that, though I think they couldn't. I am inclined not to consider the Church literally a person, either.)

Objection 1: The naturalist can make it a condition of personhood that one not have persons as proper parts.

Response: If naturalism is true, the nerves in my shoulder could so grow that they would engage in the kind of computation characteristic of persons, and then a person would be a proper part of a person.

Objection 2: Social groups don't exist.

Response: It would be tough for a naturalist to hold that social groups don't exist and human beings do—both are appropriately posited by developed special sciences.

Tuesday, November 23, 2010

Consequence argument against Calvinism

  1. (Premise) If p is true, and I can't prevent p from holding, and p entails q, then I can't prevent q from holding. (cf. Finch and Warfield's modified beta)
  2. (Premise) If Calvinism is true, and God sovereignly wills p, then I cannot prevent God from sovereignly willing p.
  3. (Premise) If Calvinism is true, then I do A only if God sovereignly wills that I do A.
  4. That God sovereignly wills p entails p.
  5. Therefore, if Calvinism is true, and I do A, then I can't have prevented my doing A. (1, 2, 3 and 4)
  6. (Premise) I am not responsible for what I can't have prevented.
  7. Therefore, if Calvinism is true, I am not responsible for anything I do.
  8. (Premise) I am responsible for something I do.
  9. Therefore, Calvinism is false.
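The formal core of steps (1)-(5) can be checked mechanically. Here is a minimal sketch in Lean 4, with `Wills`, `CantPrevent`, `Calvinism`, and `doA` as placeholder names of my own choosing, and with one simplification flagged in the comments: "entails" in premise (1) is weakened to material implication, which suffices for this derivation since (4) supplies the implication.

```lean
-- Placeholder vocabulary (not from the original argument's notation):
--   Wills p       : "God sovereignly wills p"
--   CantPrevent p : "I can't prevent p from holding"
--   doA           : "I do A"
variable (Wills CantPrevent : Prop → Prop)
variable (Calvinism doA : Prop)

-- (1) Modified beta principle, with entailment weakened to implication.
variable (beta : ∀ p q : Prop, p → CantPrevent p → (p → q) → CantPrevent q)
-- (2) Under Calvinism, I can't prevent God's sovereign willings.
variable (ax2 : ∀ p : Prop, Calvinism → Wills p → CantPrevent (Wills p))
-- (3) Under Calvinism, I do A only if God sovereignly wills that I do A.
variable (ax3 : Calvinism → doA → Wills doA)
-- (4) What God sovereignly wills is the case.
variable (ax4 : ∀ p : Prop, Wills p → p)

-- (5) Under Calvinism, if I do A, then I can't prevent my doing A.
example (hC : Calvinism) (hA : doA) : CantPrevent doA :=
  let hW : Wills doA := ax3 hC hA
  beta (Wills doA) doA hW (ax2 doA hC hW) (ax4 doA)
```

Steps (6)-(9) then go through by ordinary propositional reasoning once the responsibility premise is added; the philosophical weight of the argument rests on premises (1), (2), and (6).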

Action in and out of character

This post is obviously overgeneralized, but I think it is still heuristically useful.

The compatibilist has trouble with out of character action. The incompatibilist has trouble with in character action.

It is obvious that we sometimes are responsible for out of character action, and that we sometimes are responsible for in character action. Thus, to evaluate a particular compatibilist proposal, it's worth checking whether it allows for responsibility for out of character action, and to evaluate a particular incompatibilist proposal, it's worth checking whether it allows for responsibility for in character action.

Monday, November 22, 2010

Agent and substance causation

Some people think that events are never causes, except in a derivative sense.  It is substances that are causes (and when one or more substances are causes because they stand in some relations, then their standing in these relations is an event, and we can derivatively count it as a cause).  It seems very natural for someone who takes a substance-theory of causation to take an agent-causal theory of action.  Doing so does not carry the cost that agent-causal theories of action normally carry, namely the cost of supposing two kinds of causation.  So a substance-theory of causation would seem to be a great match for an agent-causal theory of action.

However, I think that the substance-causal theorist may lose one of the benefits of agent-causal theories of action.  The traditional agent-causal theorist can make a neat distinction between my voluntarily doing something and my "doing" something in the non-agential way in which I depress the grass when I lie on it or circulate the blood throughout my body.  The non-voluntary "doing" is a matter of event-causation, while the voluntary doing is a matter of agent-causation.  But on the substance-causal view, both the non-voluntary and the voluntary cases are instances of substance-causation, with one and the same cause--namely me.  Granted, the substance-causal theorist can distinguish the voluntary doings from the non-voluntary "doings" by saying that reasons enter in a certain way into the explanation of the former but not into the explanation of the latter, but this is exactly the sort of thing the event-causalist would say--an advantage of agent-causation has been lost.

This isn't really an argument for or against any theory.  The loss in this regard is balanced by a greater overall theoretical simplicity in having only one kind of causation.

Saturday, November 20, 2010

Scientific realism

Despite having a pretty good Pittsburgh education in the philosophy of science, I never before read Ernan McMullin's "A Case for Scientific Realism". I was especially struck by one thing that I had never noticed before, which Fr. McMullin briefly notes in one context: things are different, realism-wise, in regard to fundamental physics and other areas of science. The rest of this post is me, not McMullin.

Observe that the pessimistic meta-induction works a lot better for fundamental physics than for the special sciences. The meta-induction says that past theories have tended to be eventually refuted, and hence so will the present ones be. (It's really hard to make the statement precise, but never mind that for now.) But it is false that the special sciences' theories have tended to be eventually refuted. Some, like the geocentric and heliocentric theories in astronomy and the phlogiston theory of combustion, have indeed been refuted. But many theories have stood for millennia. Here is a sample of these theories: (a) there are seasons that come in a cycle, and the cycle is correlated with various botanical phenomena; (b) tigers eat humans and deer; deer eat neither tigers nor humans; (c) rain comes from clouds; (d) herbivores run from apparent danger; (e) much of the earth's energy comes from the sun. And so on. We do not think of these as scientific theories any more because they are so venerable and well-confirmed. This means that we sometimes mistakenly assent to the inductive premise of the meta-induction, because those venerable scientific theories that have not been refuted have often become common sense, and hence we exclude them from the sample.

Nonetheless, the pessimistic meta-induction seems to have some force in regard to fundamental physics: there, the change is much more rapid, and very little remains of past theories. We do sometimes get results like the "classical limit" theorems for Quantum Mechanics where we can show that the earlier theory's predictions approximated the predictions of the newer theory, but this approximation in prediction does not typically yield the approximate truth of the earlier theory. The one kind of exception we sometimes get is that sometimes a part of what used to be a fundamental theory survives, but no longer as fundamental—atoms, for instance.

Non-fundamental concepts—such as cell or season—can survive significant shifts in fundamental theories, but obviously fundamental concepts like force or particle find it much more difficult to do so. There is a kind of multiple realizability in the concepts of the special sciences (not along the metaphysical but the conceptual dimension of a two-dimensional modal semantics) which makes them more resilient.

Van Fraassen proposes we be realists about the observable claims of science and non-realists about the unobservable. This is, I think, really implausible. Van Fraassen would have us believe in ova but not in sperm, just because the ovum is large enough to be seen with the naked eye while a sperm is not. But I think there is a view in the vicinity that is worth taking seriously: that we should be realists about non-fundamental science and at least somewhat skeptical of fundamental science.

Friday, November 19, 2010

Impeding the progress of science

We were considering the following argument in my Metaphysics class:

  1. Scientific realism impedes the progress of fundamental physics.
  2. If a theory impedes the progress of a science, it's probably false.
  3. Fundamental physics is a science.
  4. Therefore, scientific realism is probably false.
Anyway, after class Alina Beary, one of our grad students, gave a really cool counterexample to (2), and she gave me permission to blog it: The theory that there is such a thing as pain impedes the progress of biology. Just think of the progress we could make if our experimental practices weren't limited by worries about causing pain! Yet, the fact that the existence of pain impedes biological progress does not provide any significant evidence against the existence of pain.

There are other arguments involving (2). For instance, one might (perhaps incorrectly) think that dualism impedes the progress of neuroscience. But (2) is false, so that wouldn't give us significant evidence for the falsity of dualism.

I suppose one might try to distinguish between ways in which a theory can impede the progress of a science, and then some qualified version of (2) would still be true. That would be interesting, but I don't know how to do this.

Thursday, November 18, 2010

The character of God in the Bible

The Old Testament has a picture of its central character, God, that is on its surface inconsistent, with apparently contradictory features. But a deeper reading shows a deep consistency: a consistent but from our point of view complex character displayed in a variety of circumstances, from a variety of points of view, and also reflected in the emotions of narrators and interactions of other characters.

I shall not try to defend this reading of the Old Testament here. It cannot be done in a post, and maybe not even in a book, and certainly not by me. One must drink in the texts. Personally, I have found very helpful our Department Bible study in this regard. We are doing Book III of the Psalms (Pss. 73-89), and this has been one of the things that has led to this post.

Now, there come to mind four prima facie plausible explanations for the portrayal of a single character across a large body of literature by a large set of authors.

  1. Imitation by a number of authors of a canon of primary texts or stories originally by a single author.
  2. Harmonization by selection of texts and/or editorial work on particular texts.
  3. Cooperative authorship.
  4. A modeling of the character on an actual person with whom the diverse set of authors all interacted "in real life."

If (4) is the right explanation, then the fact that the authors wrote over a period of many centuries, in different social circumstances, together with the essential otherness of the central character of the texts, makes it most unlikely that any mere human was the model. And the simplest explanation is that the authors were in fact interacting with the person they claim to be describing—Y*WH, the God of Israel. Therefore, if (4) is true, then we have strong evidence that God exists. Observe that it is not uncommon for the same person to have apparent surface differences as seen in different contexts and by different people—we call this "complexity" in the person and it lends reality to the person (which character complexity in the case of God is, I think, compatible with ontological simplicity, but that's a different question).

Note that the deep consilience not only suggests that the various authors interacted with the same person, but that they did not do so in a shallow way. It is possible to have portrayals of the same person by different people who were acquainted with the subject where there isn't such a consilience—I feel this way in the case of Plato and Xenophon's respective portrayals of Socrates, though I could be wrong (I have not drunk in the Xenophon texts sufficiently).

If (1) were the right explanation, we would expect shallow consistency in the portrayal of the character, and quite likely some deep inconsistencies, whereas we observe the opposite. It is hard for one author to take another author's character and portray that character in a consistent way, and the likely result of an attempt to portray that character is that one will have a similarity of outward mannerisms, but to a careful reader (or viewer) it just won't be the same character but an impostor. For instance, the Sherlock Holmes of the "New Adventures of Sherlock Holmes" TV series from the '50s is a case in point (this is the most absurd example from the series). But when two authors portray different surface detail with a deep consistency, then we have something quite unexpected on a copying hypothesis. Granted, this could result from literary genius combined with depth of appreciation of another's work on the part of the copyist, but such a combination is rare. Most literary geniuses create characters on their own, often even when the character bears the name of some historical figure. And the Hebrew Scriptures weren't just written by two or three authors, but by a much greater number. Thus, explanation (1) does not fit the phenomena very well.

As for (2), again harmonization might explain doctrinal agreement and agreement as to surface features, but unless the harmonization takes the form of a rewriting of the whole body of texts by a literary genius, it would not produce a deep consilience in the central character. And no such unified rewriting in fact happened: the Hebrew Scriptures retain a great diversity of genres and styles. Another striking feature is that, at least as regards texts from before around the 4th century BC, it does not appear that there was much in the way of centralized selection. It seems that the main criterion for canonicity in the first century—to the extent that the concept of canonicity existed—was not deep consilience in the character of God, but something more extrinsic like Hebrew-language authorship combined with venerable age.

Option (3) could work with a small number of contemporaneous authors—but certainly not with the great number of authors of the Hebrew Scriptures strung out across centuries.

So that leaves option (4), and so we have good reason to think that at least a number of the authors of the Hebrew Scriptures had encountered the character of God in reality.

What does the New Testament add to the argument? I think the deep consilience with apparent surface difference continues. So the argument is strengthened. And another point emerges. Jesus Christ, although typically not explicitly portrayed as God, is portrayed in a way that gives him a deep consilience of character with the Y*WH of the Old Testament. Just to give one example, he appropriates, in a credible way, God's desire to gather the Israelites to himself like a mother hen.

May we be thus gathered to him.

Of course, I do not claim originality for this argument. It is inspired by similar arguments seen in various places. Nor do I promote this argument as a way of convincing atheists. Because the evidence of the deep consilience needs to be gathered over years of drinking in the Scriptures, and maybe this can only be done while living the life of the community that has produced the Scriptures (i.e., the life of the Church or of the Synagogue), this argument, while of significant epistemic weight, may only be evidentially useful to Christians. Yet, God can help someone not living the life of the community to see the consilience, so it could have some value outside the community, too.