Showing posts with label design argument.

Wednesday, November 10, 2021

Online talk: A Norm-Based Design Argument

Thursday November 11, 2021, at 4 pm Eastern (3 pm Central), the Rutgers Center for Philosophy of Religion and the Princeton Project in Philosophy of Religion present a joint colloquium: Alex Pruss (Baylor), "A Norm-Based Design Argument".

The location will be https://rutgers.zoom.us/s/95159158918

Wednesday, October 13, 2021

A pedagogical universe

Our science developed over millennia, progressing from false theory to less false theory. Why did we not give up long ago? I take it this is because the false theories nonetheless had rewards associated with them: although false, they allowed for prediction and technological control in ways that were useful (in a broad sense) to us.

Thus, the success of our science depends not just on a “uniformity of nature” according to which the correct fundamental scientific theories are elegant and uniform. Most of our historical progress in physics has not involved correct scientific theories—and quite possibly, we do not have any correct fundamental theories in physics yet. The success of our science required low-hanging fruit for us to pick along the way, fruit that would guide us in the direction of truth.

We can imagine worlds where the ultimate physics requires an enormous degree of sophistication (much as we expect to be the case in our world) but there is little in the way of low-hanging fruit (except maybe for the lowest-hanging fruit of all, the regularities needed to enable the evolution of intelligence in the first place) in the form of approximately true theories that reward us with prediction and control. In such worlds, beings like us would just give up on science. Our world is better than that.

Indeed, our world seems to be pedagogically arranged for us, arranged to gradually teach us science (and other things), much as we teach our children, with intellectual and practical rewards. There is a design argument for the existence of God from this (closely related to this one).

Thursday, October 10, 2019

Approximatable laws

Some people, most notably Robin Collins, have run teleological arguments from the discoverability of the laws of nature.

But I doubt that we know that the laws of nature are discoverable. After all, it seems we haven’t discovered the laws of physics yet.

But the laws of nature are, surely, approximatable: it is within our power to come up with approximations that work pretty well in limited, but often useful, domains. This feature of the laws of nature is hard to deny. At the same time, it seems to be a very anthropocentric feature, since both the ability to approximate and the usefulness are anthropocentric features. The approximatability of the laws of nature thus suggests a universe whose laws are designed by someone who cares about us.

Objection: Only given approximatable laws is intelligence an advantage, so intelligent beings will only evolve in universes with approximatable laws. Hence, the approximatable laws can be explained in a multiverse by an anthropic principle.

Response: Approximatability is not a zero-one feature. It comes in degrees. I grant that approximatable laws are needed for intelligence to be an advantage. But they only need to be approximatable to the degree that was discovered by our prehistoric ancestors. There is no need for the further approximatability that was central to the scientific revolution. Thus an anthropic principle explanation only explains a part of the extent of approximatability.

Friday, October 4, 2019

A tension in some theistic Aristotelian thinkers

Here is a tension in the views of some theistic Aristotelian philosophers. On the one hand, we argue:

  1. The mathematical elegance and discoverability of the laws of physics is evidence for the existence of God

but we also think:

  2. There are higher-level (e.g., biological and psychological) laws that do not reduce to the laws of physics.

These higher-level laws, among other things, govern the emergence of higher-level structures from lower-level ones and the control that the higher-level structures exert over the lower-level ones.

The higher-level laws are largely unknown except in the broadest outline. They are thus not discoverable in the way the laws of physics are claimed to be, and since no serious proposals are yet available as to their exact formulation, we have no evidence as to their elegance. But as evidence for the existence of God, the elegance and discoverability of a proper subset of the laws is much less impressive. In other words, (1) is really impressive if all the laws reduce to the laws of physics. But otherwise, (1) is rather less impressive. I’ve never seen this criticism made.

I think, however, there is a way for the Aristotelian to still run a design argument.

Either all the laws reduce to the laws of physics or not.

If they all reduce to the laws of physics, pace Aristotelianism, we have a great elegance and discoverability design argument.

Suppose now that they don’t. Then there is, presumably, a great deal of complex connection between structural levels that is logically contingent. It would be logically possible for minds to arise out of the kinds of arrangements of physical materials we have in stones, but then the minds wouldn’t be able to operate very effectively in the world, at least without massively overriding the physics. Instead, minds arise in brains. The higher-level laws rarely if ever override the lower-level ones. Having higher-level laws that fit so harmoniously with the lower-level laws is very surprising a priori. Indeed, this harmony is so great as to be epistemically suspicious: suspicious enough that the need for such a harmony makes one worry that the higher-level laws are a mere fiction. But if they were a mere fiction, we would be back at the first option, namely reduction, whereas on the present horn we are assuming the higher-level laws are irreducible. So we have a great design argument from their harmony with the lower-level laws.

Wednesday, March 21, 2018

Bohmianism and God

Bohmian mechanics is a rather nice way of side-stepping the measurement problem by having a deterministic dynamics that generates the same experimental predictions as more orthodox interpretations of Quantum Mechanics.

Famously, however, Bohmian mechanics suffers from having to make the quantum equilibrium hypothesis (QEH) that the initial distribution of the particles matches the wavefunction, i.e., that the initial particle density is given by (at least approximately) |ψ|². In other words, Bohmian mechanics requires the initial conditions to be fine-tuned for the theory to work, and we can then think of Bohmian mechanics as deterministic Bohmian dynamics plus QEH.
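
As a concrete illustration of what QEH asserts, here is a minimal sketch for an assumed Gaussian ground-state wavefunction; the wavefunction and all the numbers are illustrative choices of mine, not anything the Bohmian literature is committed to.

```python
import numpy as np

# Toy illustration of the quantum equilibrium hypothesis (QEH): for an
# assumed Gaussian ground-state wavefunction psi, QEH says the initial
# particle positions have density |psi|^2, here a normal distribution.

sigma = 1.0
rng = np.random.default_rng(0)

def psi_squared(x):
    # |psi(x)|^2 for psi(x) = (pi sigma^2)^(-1/4) exp(-x^2 / (2 sigma^2)):
    # a normal density with mean 0 and variance sigma^2 / 2.
    return np.exp(-x**2 / sigma**2) / np.sqrt(np.pi * sigma**2)

# An ensemble satisfying QEH: positions drawn with density |psi|^2.
positions = rng.normal(0.0, sigma / np.sqrt(2.0), size=100_000)

# Sanity check: empirical density near x = 0.5 vs. |psi|^2 there.
mass = np.mean(np.abs(positions - 0.5) < 0.05)  # fraction in [0.45, 0.55]
print(f"empirical density near x=0.5: {mass / 0.1:.3f}")
print(f"|psi|^2 at x=0.5:             {psi_squared(0.5):.3f}")
```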

Can we give a fine-tuning argument for the existence of God on the basis of the QEH, assuming Bohmian dynamics? I think so. Given the QEH, nature becomes predictable at the quantum level, and God would have good reason to provide such predictability. Thus if God were to opt for Bohmian dynamics, he would be likely to make QEH true. On the other hand, in a naturalistic setting, QEH seems to be no better than an exceedingly lucky coincidence. So, given Bohmian dynamics, QEH does support theism over naturalism.

Theism makes it possible to be an intellectually fulfilled Bohmian. But I don’t know that we have good reason to be Bohmian.

Tuesday, January 17, 2017

Vertical uniformity of nature

One often talks of the “uniformity of nature” in the context of the problem of induction: the striking and prima facie puzzling fact that the laws of nature that hold in our local contexts also hold in non-local contexts.

That’s a “horizontal” uniformity of nature. But there is also a very interesting “vertical” uniformity of nature. This is a uniformity between the types of arrangements that occur at different levels like the microphysical, the chemical, the biological, the social, the geophysical and the astronomical. The uniformity is different from the horizontal one in that, as far as we know, there are no precisely formulable laws of nature that hold uniformly between levels. But there is still a less well defined uniformity whose sign is that the same human methods of empirical investigation (“the scientific method”) work in all of them. Of course, these methods are modified: elegance plays a greater role in fundamental physics than in sociology, say. But they have something in common, if only that they are mere refinements of ordinary human common sense.

How much commonality is there? Maybe it’s like the commonality between novels. Novels come in different languages, cultural contexts and genres. They differ widely. But nonetheless to varying degrees we all have a capacity to get something out of all of them. And we can explain this vague commonality quite simply: all novels (that we know of) are produced by animals of the same species, participating to a significant degree in an interconnected culture.

Monotheism can provide an even more tightly-knit unity of cause that explains the vertical uniformity of nature—one entity caused all the levels. Polytheism can provide a looser unity of cause, much more like in the case of novels—perhaps different gods had different levels in nature delegated to them. Monotheism can do something similar, if need be, by positing angels to whom tasks are delegated, but I don’t know if there is a need. We know that one artist or author can produce a vast range of types of productions (think of a Michelangelo or an Asimov).

In any case, the kind of vague uniformity we get in the vertical dimension seems to fit well with agential explanations. It seems to me that a design argument for a metaphysical hypothesis like monotheism, polytheism or optimalism based on the vertical uniformity may well have some advantages over the more standard argument from the uniformity of the laws of nature. Or perhaps the two combined will provide the best argument.

Monday, November 24, 2014

Simplicity, language and design

  1. Simplicity is best understood linguistically (e.g., brevity of expression in the right kind of language).
  2. Simplicity is a successful (though fallible) guide to truth.
  3. If (1) and (2), then probably the universe was made for language users or by a language user.
  4. If the universe was made for language users, it was made by an intelligent being.
  5. If the universe was made by a language user, it was made by an intelligent being.
  6. So, probably, the universe was made by an intelligent being.

Tuesday, October 7, 2014

Mersenne's supreme bad vs. supreme good argument

In his 1624 The Impiety of Deists, Atheists and Libertines of This Time... (dedicated to Cardinal Richelieu, by the way), Mersenne gives this fascinating little argument:

Nobody fails to acknowledge that if there is a supremely good being [un estre souverainement bon], it merits the name of God, since we don't mean anything by that name other than that which has all [the] sorts of perfections, and which lacks nothing. Now I will show that this supreme good exists. If it didn't exist, its privation would exist, which would be a supreme bad [mal], and consequently the supreme non-being, since the bad and the non-being are the same thing: but it doesn't in the least seem that the privation exists more than its actuality, which must necessarily precede it. Thus one must confess that there is a supreme goodness, and then that there cannot be a supreme badness. So we have a supreme being, since we deny a supreme non-being, it being necessary that the one or the other exist....

There is actually more than one argument here. There is an interesting and deeply metaphysical argument based on evil as the privation of a good. But there is also the kernel of a rather interesting and simple argument:

  1. It would be supremely bad if God doesn't exist.
  2. The world doesn't exemplify a supreme bad.
  3. So, God exists.

My son suggests using the goodness in the world to argue for (2). That would be an interesting hybrid design argument.

Monday, November 21, 2011

Self-organization: Another step in the dialectics

Suppose that it turns out that, given laws of nature like ours, all sorts of neat self-organization—like what we see in evolution—will follow from most sets of initial conditions. Does this destroy the design argument for the existence of God? After all, that there is complexity of the sort we observe appears to cease to be surprising.

A standard answer is: No, because we still need an explanation of why the laws of nature are in fact such as to enable this kind of self-organization, and theism provides an excellent such explanation.

But what if it turns out, further, that in some sense most laws, or the most likely laws (maybe simpler laws are more likely than more complex ones), enable self-organization processes? So not only is it unsurprising that we would get initial conditions that are likely to lead to self-organization, it is also not unlikely that we would have laws that lead to self-organization. It seems that this undercuts the modified design argument.

But I think there is a further design argument. The result that most, or the most likely, laws would likely lead to self-organization would have to be a very deep and powerful mathematical truth. What explains why this deep mathematical truth obtains? Maybe it follows from certain axioms. But why is it the case that axioms such as to lead to that truth obtain? Well, we can say that they are necessary, but that isn't a very good explanation: it is not an informative explanation. (If it turned out that modal fatalism is true, we still wouldn't be satisfied with explaining all natural phenomena by invoking their necessity. Spinoza certainly wasn't, and this he was right about, though he was wrong that modal fatalism is true.) Theism provides a family of deeper and more informative answers: mathematics is grounded in the nature of a perfect being, and hence it is unsurprising that mathematics has much that is beautiful and good in it, and in particular it is unsurprising that mathematics includes self-organization theorems, since self-organization theorems are beautiful and good features of mathematical reality.

I said that theism provides a family of answers, since different theistic theories give different accounts of how it is that mathematical truth is grounded in God. Thus, one might think, with St Augustine, that mathematical truth is grounded in God's intellect. On the theory I defend in my Worlds book, necessary truths—and in particular, mathematical truths—are grounded in the power of God.

There is, of course, an obvious argument from the beauty of mathematics to the existence of God along similar lines. But that argument is subject to the rejoinder that the beauty of mathematics is a selection effect: what mathematics mathematicians are interested in is to a large degree a function of how beautiful it is. (Mathematicians are not interested in random facts about what the products of ten-digit numbers are.) However, I think the present argument side-steps the selection effect worry.

Wednesday, December 1, 2010

A simple design argument

  1. P(the universe has low entropy | naturalism) is extremely tiny.
  2. P(the universe has low entropy | theism) is not very small.
  3. The universe has low entropy.
  4. Therefore, the low entropy of the universe strongly confirms theism over naturalism.

Low-entropy states have low probability. So, (1) is true. The universe, at the Big Bang, had a very surprisingly low entropy. It still has a low entropy, though the entropy has gone up. So, (3) is true. What about (2)? This follows from the fact that there is significant value in a world that has low entropy and, given theism, God is not unlikely to produce what is significantly valuable. Low entropy, at least locally, is needed for the existence of life, and we need uniformity between our local area and the rest of the universe if we are to have scientific knowledge of the universe, and such knowledge is valuable. So (2) is true. The rest is Bayes.
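
To make the Bayesian step explicit, here is a minimal numerical sketch. The particular probability values are illustrative assumptions of mine; the argument itself commits only to "extremely tiny" and "not very small".

```python
# A minimal sketch of the Bayesian step. All numbers are illustrative
# assumptions, not part of the argument itself.

p_low_entropy_given_naturalism = 1e-10  # "extremely tiny" (premise 1)
p_low_entropy_given_theism = 0.1        # "not very small" (premise 2)
prior_theism = 0.5                      # neutral prior, for illustration

# Bayes: P(T | E) = P(E | T) P(T) / [P(E | T) P(T) + P(E | N) P(N)]
posterior_theism = (p_low_entropy_given_theism * prior_theism) / (
    p_low_entropy_given_theism * prior_theism
    + p_low_entropy_given_naturalism * (1 - prior_theism)
)

bayes_factor = p_low_entropy_given_theism / p_low_entropy_given_naturalism
print(f"Bayes factor (theism : naturalism): {bayes_factor:.0e}")
print(f"P(theism | low entropy): {posterior_theism:.10f}")
```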

When I gave him the argument, Dan Johnson made the point to me that this appears to be a species of fine-tuning argument and that a good way to explore the argument is to see how standard objections to standard fine-tuning arguments fare against this one. So let's do that.

I. "There is a multiverse, and because it's so big, it's likely that in one of its universes there is life. That kind of a universe is going to be fine-tuned, and we only observe universes like that, since only universes like that have an observer." This doesn't apply to the entropy argument, however, because globally low entropy isn't needed for the existence of an observer like me. All that's needed is locally low entropy. What we'd expect to see, on the multiverse hypothesis, is a locally low entropy universe with a big mess outside a very small area--like the size of my brain. (This is the Boltzmann brain problem>)

II. "You can't use as evidence anything that is entailed by the existence of observers." While this sort of a principle has been argued for, surely it's false. If we're choosing between two evolutionary theories, both of them fitting the data, both equally simple, but one of them making it likely that observers would evolve and the other making it unlikely, we should choose the one that makes it likely. But I can grant the principle, because my evidence--the low entropy of the universe--is not entailed by the existence of observers. All that the existence of observers implies (and even that isn't perhaps an entailment) is locally low entropy. Notice that my responses to Objections I and II show a way in which the argument differs from typical fine-tuning arguments, because while we expect constants in the laws of nature to stay, well, constant throughout a universe, not so for entropy.

III. "It's a law of nature that the value of the constants--or in this case of the universe's entropy--is exactly as it is." The law of nature suggestion is more plausible in the case of some fundamental constant like the mass of the electron than it is in the case of a continually changing non-fundamental quantity like total entropy which is a function of more fundamental microphysical properties. Nonetheless, the suggestion that the initial low entropy of the universe is a law of nature has been made in the philosophy of sceince literature. Suppose the suggestion is true. Now consider this point. There is a large number--indeed, an infinite number--of possible laws about the initial values of non-fundamental quantities, many of which are incompatible with the low initial entropy. The law that the initial entropy is low is only one among many competing incompatible laws. The probability given naturalism of initially low entropy being the law is going to be low, too. (Note that this response can also be given in the case of standard fine-tuning arguments.)

IV. "The values of the constant--or the initially low entropy--does not require an explanation." That suggestion has also been made in the philosophy of science literature in the entropy case. But the suggestion is irrelevant to the argument, since none of the premises in the argument say anything about explanation. The point is purely Bayesian.

Wednesday, July 21, 2010

A defense (well, sort-of) of specified complexity as a guide to design

I will develop Dembski's specified complexity in a particular direction, which may or may not be exactly his, but which I think can be defended to a point.

Specified Complexity (SC) comes from the fact that there are three somewhat natural probability measures on physical arrangements. For definiteness, think of physical arrangements as black-and-white pixel patterns on a screen, and then there are 2^n arrangements where n is the number of pixels.

There are three different fairly natural probability measures on this.

1. There is what one might call "a rearrangement (or Humean) measure" which assigns every arrangement equal probability. In the pixel case, that is 2^-n.

2. There is "a nomic measure". Basically, the probability of an arrangement is the probability that, given the laws (and initial conditions? we're going to have two ways of doing it--one allowing the initial conditions to vary, and one to vary), such an arrangement would arise.

3. There is what one might call "a description measure". This is relative to a language L that can describe pixel arrangements. One way to generate a description measure is to begin by generating random finite-length strings of symbols from L supplemented with an "end of sentence" marker which, when generated, ends a string. Thus, the probability of a string of length k is m^-k where m is the number of symbols in L (including the end of sentence marker). Take this probability measure and condition on (a) the string being grammatical and (b) describing a unique arrangement. The resulting conditional probability measure on the sentences of L that describe a unique arrangement then gives rise to a probability measure on the arrangements themselves: the description probability of an arrangement A is the (conditionalized as before) probability that a sentence of L describes A.

So, basically we have the less anthropocentric nomic and rearrangement measures, and the more anthropocentric description measure. The rearrangement measure has no biases. The nomic measure has a bias in favor of what the laws can produce. The description measure has a bias in favor of what can be more briefly described.

We can now define SC of two sorts. An arrangement A has specified rearrangement (respectively, nomic) complexity, relative to a language L, provided that A's rearrangement (respectively, nomic) measure is much smaller than its L-description measure. (There is some technical stuff to be done to extend this to less specific arrangements--the above works only for fully determinate arrangements.)

For instance, consider the arrangement where all the pixels are black. In a language L based on First Order Logic, there are some very short descriptions of this: "(x)(Bx)". So, the description measure of the all-black arrangement will be much bigger than the description measure of something messy that needs a description like "Bx1&Bx2&Wx3&...&Bxn". On the other hand, the rearrangement measure of the all-black arrangement is the same as that of any other arrangement. In this case, then, the L-description measure of the all-black arrangement will be much greater than its rearrangement measure, and so we will have specified rearrangement complexity, relative to L. Whether we will have specified nomic complexity depends on the physics involved in the arrangement.
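
To see how the two measures come apart, here is a small runnable sketch. The "language" in it is a hypothetical toy of my own: a pixel-by-pixel sentence for every arrangement, plus short sentences for the two uniform arrangements, standing in for "(x)(Bx)"-style descriptions.

```python
from itertools import product

# Toy rearrangement and description measures for n = 6 black/white pixels.

n = 6
M = 27  # assumed alphabet size, including the end-of-sentence marker

arrangements = [''.join(bits) for bits in product('BW', repeat=n)]

def rearrangement_measure(arr):
    # Humean measure: every arrangement is equally likely.
    return 2.0 ** -n

# Hypothetical toy language: sentence -> the unique arrangement it describes.
language = {'pix=' + arr: arr for arr in arrangements}  # long, literal sentences
language['allB'] = 'B' * n  # short sentence for the all-black arrangement
language['allW'] = 'W' * n

# A sentence of length k gets weight M^-k; conditioning on "grammatical and
# uniquely describing" just renormalizes, since every sentence here qualifies.
weights = {s: M ** -len(s) for s in language}
total = sum(weights.values())

def description_measure(arr):
    return sum(w for s, w in weights.items() if language[s] == arr) / total

for arr in ('B' * n, 'BWBBWB'):
    dm, rm = description_measure(arr), rearrangement_measure(arr)
    flag = ' <- specified rearrangement complexity' if rm < dm / 10 else ''
    print(f'{arr}: description={dm:.2e}, rearrangement={rm:.2e}{flag}')
```

The all-black arrangement soaks up most of the description measure (its short sentence dominates) while having the same rearrangement measure as everything else, so it alone gets flagged.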

All of the above seems pretty rigorous, or capable of being made so.

Now, given the above, we have the philosophical question: Does SC give one reason to suppose agency? Here is where things get more hairy and less rigorous.

An initial problem: The concept of SC is language-relative. For any arrangement A, there is a language L1 relative to which A lacks complexity and a language L2 relative to which A has complexity. So SC had better be defined in terms of a privileged kind of language. I think this is a serious problem for the whole approach, but I do not know that it is insuperable. For instance, easily inter-translatable languages are probably going to give rise to similar orders of magnitude within the description measures. We might require that the language L be the language of a completed and well-developed physics. Or we might stipulate L to be some extension of FOL with the predicates corresponding to the perfectly normal properties. There are tough technical problems here, and I wish Dembski would do more here. Call any language that works well here "canonical".

Once we have this taken care of, if it can be done, we can ask: Is there any reason to think that SC is a mark of design?

Here, I think Dembski's intuition is something like this: Suppose I know nothing of an agent's ends. What can I say about the agent's intentions? Well, an agent's space of thoughts is going to be approximately similar to a canonical language (maybe in some cases it will constitute a canonical language). Without any information on the agent's ends, it is reasonable to estimate the probabilities of an agent having a particular intention in terms of the description measure relative to a canonical language.

But if this is right, then the approach has some hope of working, doesn't it? For suppose you have specified nomic complexity of an arrangement A relative to a canonical language L. Then P(A|no agency) will be much smaller than A's L-description measure, which is an approximation to P(A|agency) with no information about the sort of agency going on. Therefore, A incrementally confirms the agency hypothesis. The rest is a question of priors (which Dembski skirts by using absolute probability bounds).
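
To make the confirmational step concrete, here is a toy continuation of the sketch above; the two likelihoods are the illustrative values from that sketch, and the priors are arbitrary.

```python
# Treat the description measure as an approximation to P(A | agency) and the
# rearrangement measure as a stand-in for P(A | no agency). Illustrative
# numbers from the toy sketch above, not Dembski's.

p_A_given_agency = 0.5        # description measure of the all-black arrangement
p_A_given_no_agency = 1 / 64  # rearrangement measure for n = 6

likelihood_ratio = p_A_given_agency / p_A_given_no_agency
print(f"likelihood ratio favoring agency: {likelihood_ratio:.0f}")

# Posterior odds = prior odds * likelihood ratio. With a modest prior for
# agency, A incrementally confirms it; with a tiny prior the posterior stays
# small -- this is the "question of priors" mentioned above.
for prior in (0.5, 0.01, 1e-6):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    print(f"prior={prior:g}  posterior={post_odds / (1 + post_odds):.6f}")
```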

I think the serious problems for this approach are:

  • The problem of canonical languages.
  • The problem that in the end we want this to apply even to supernatural designers who probably do not think linguistically. Why think that briefer descriptions are more likely to match their intentions?
  • We do have some information on the ends of agents in general--agents pursue what they take to be valuable. And the description measure does not take value into account. Still, insofar as there is value in simplicity, and the description measure favors briefer descriptions, the description measure captures something of value.

Friday, May 15, 2009

The higher level regularity of nature

It just struck me that while it is very puzzling why there is law-like regularity at the bottom level—in fundamental physics—the puzzle about why there are law-like regularities at higher levels—in astronomy, psychology, biology, chemistry and non-basic physics—is a separate puzzle. In other words, even if we had an explanation of regularity at the bottom level, we would not thereby have an explanation of why there are higher explanatory levels where there are also regularities, albeit somewhat more approximate ones. Thus, when we are puzzled by the laws of nature, there are two things to be explained:

  1. Why there is regularity at the level of fundamental physics.
  2. Why this regularity, together with the initial conditions, gives rise to regularities at multiple higher levels of organization.

Notice that our intuitions about the power of induction are just about all based on the higher level regularities.

This gives rise to what one might call a generalized fine-tuning argument. The standard fine-tuning argument asks why the laws of nature (and especially the constants in them) are such that life arises. The generalized fine-tuning argument asks why it is that the laws of nature and initial conditions are such that multiple explanatory levels (either left unspecified like that, or enumerated: astronomy, psychology, biology, chemistry, etc.) arise from these laws and conditions.

Whether the generalized fine-tuning argument is a good argument for the existence of God depends on two things: (a) how likely it is, apart from the theistic hypothesis, that such multiple levels should arise, and (b) how likely it is on the theistic hypothesis that they should arise.

As for (b), I think in Aquinas and Leibniz we find compelling accounts of how an infinite but simple deity would have good reason to create a world that images his infinity via a diversity of elements and his simplicity via a unity running through these diverse elements. Unity at multiple explanatory levels allows even more of that diversity and unity.

What about (a)? I don't know. I think the question is easier when the levels are enumerated, as then the considerations from the standard fine-tuning arguments can be used. But the general question is quite interesting, too.

Saturday, December 29, 2007

Music and the problem of evil

Suppose I have superhuman hearing. While you are listening to Beethoven's Ninth, I hear, with great precision, every single sound wave impacting each of my eardrums. But I do not actually assemble them into a coherent piece of music.[note 1] As far as aesthetic appreciation goes, I might as well be looking at a CD under a scanning electron microscope.

It is quite easy to see all the physical details of a work of art without seeing the work as a whole, which work gives meaning to the parts. The details may look nothing like the whole. And suppose now that we did not even see all the details, first, because our perceptual processes already processed the data in some lossy way, perhaps a way irrelevant to the aesthetic qualities (think of someone who, whenever a piece of music came into his ears, received instead a visual representation of a Fourier transform of a distorted version of the sound), and, second, because we did not perceive the whole. Then our judgment as to the aesthetic qualities of the whole, as to the fittingness of parts, would be of very dubious value.

Now it is plausible that a universe created by God is very much like a work of art: a work of art we see only a portion of, and in a way that involves perceptual pre-processing of a sort that may lose many significant aspects of the axiological properties of the work.

If that is how we saw things, then we would find a portion of the "sceptical theism" position quite plausible: we would find it quite plausible that various local evils we see fit into global patterns that give them a very different significance from what we thought. I am not saying the local evils disappear, that they are not evil. But the meaning is very different. We see this in music, in literature, in painting.

[A]ll people are under control in their own spheres; but to everyone it seems as if there is no control over them. As for you, you only have to bother about what you want to be, because whatever and however you want to be, the craftsman knows where to put you. Consider a painter. Various colors are set before him, and he knows where to put each color. The sinner, of course, wanted to be the color black; does that mean the craftsman is not in control, and doesn't know where to put him? How many things he can do, in full control, with the color black! How many detailed embellishments the painter can make! He paints the hair with it, paints the eyebrows. To paint the forehead he only uses white. - St. Augustine, Sermon 125

And just as there may be aesthetic values we are unaware of, there may be moral values we are unaware of.

All this points towards a version of sceptical theism. But I think we should not go too far in that direction. For unlike a piece of music, the work of art that the universe is is not executed out of soundwaves that have little individual worth; rather, the universe is a work that incorporates persons--beings in the image and likeness of God. This makes the divine work much more gloriously impressive, especially if God doesn't determine our free actions, but it also means that there is real, intrinsic meaning in the local situations we find, in the pains, joys, sufferings and ecstasies of life. While the meaning of these can be transformed, evils will still be evils. The problem of evil is not solved in this way, but it is, I think, mitigated significantly.

Moreover, thinking in this way solves a problem that plagues standard sceptical theist solutions, namely that they undercut design arguments for the existence of God. For although we might be unable to perceive the significance of the whole, we might perceive significance in the part, and the beauty of a figure in a painting, a chapter in a novel, or a musical movement can be sufficient to establish something about the talent of the artist.

I explore some of these themes in this piece I once presented at a conference, but the online version is sadly bereft of its illustrations in part for copyright reasons.

Wednesday, October 24, 2007

Might Intelligent Design turn out to be right?

I am going to argue that for all anybody knows, evolutionary science might develop in such a way that an Intelligent Design (ID) argument would be plausible. Hence, while one might well be justified in saying that ID arguments right now do not work, one is not justified in saying that future ID arguments won't work given a fully developed evolutionary science. Moreover, our current state of biological knowledge, interpreted in an uncontroversial and friendly way, gives us relatively little, if any, reason to accept the claim that ID arguments won't work given a fully developed evolutionary science. (In the interests of full disclosure I should say that I do not think any extant ID argument succeeds in establishing the existence of a designer.)

I shall understand a successful ID argument for a design hypothesis H (say, that God has designed the world) to be an argument that starts with some biological fact F about the world going over and beyond the mere existence of life, a fact such as that the world contains intelligent life, or that the world contains the mammalian eye, or that the world contains highly complex organisms, and then argues:

  1. F is very unlikely to happen if the only processes in play are those of evolutionary biology.
  2. F is not unlikely to happen on the relevant design hypothesis H.
  3. Therefore, F provides significant evidence for the design hypothesis over and against the hypothesis that the only processes in play are those of evolutionary biology.

Assuming that before we give the argument our probability for our design hypothesis (say, that God exists and has created the world) is not too low, we're going to get a successful ID argument as soon as we can find a biological fact F satisfying (1) and (2). My claim, now, is that the present state of evolutionary science gives us little reason to believe that we will not find such a fact F. Note that the above is not the only way of formulating an ID argument. But if we are not in a position to know that no ID argument of this sort is successful, then we are likewise not in a position to know that no ID argument of some sort or other is successful.

Let's start by considering the fact that evolutionary science gives a statistical explanation of events. It is not shown that, given an earlier state of affairs, bipeds had to evolve. All that is done in an evolutionary explanation is that an evolutionary pathway is traced out and it is argued that that pathway had a certain probability.

Next, observe that for most high-level biological states of affairs, we have very little in the way of estimates of the objective probability of the states of affairs arising. It would be a very, very difficult task, for instance, to estimate the probability that winged vertebrates would exist on earth given the state of things a billion years ago. Thus, although evolutionary explanations are in a significant respect statistical in nature, the statistics have by and large not been worked out.

Now, even if evolutionary science is the whole truth about biology after the initial beginning of life, there will surely be biological features of the world whose objective probability given the initial conditions is tiny. Gould talks of how if we turned the clock back and re-ran the evolutionary processes, we would get something completely different from what we have. That may or may not be true, but surely there are some biological features the probability of whose arising was tiny. For instance, consider the precise kind of pattern that a copperhead snake has on its skin. It may well be that there are myriads of patterns that would do the same job as this pattern does. It may well be that the probability that a snake-like animal would arise with that precise pattern, given the initial state of things, is tiny. (Presumably, if we conjoin enough features, we easily get cases where the probability is about as small as we like.)

Now I have two arguments for my conclusion.

Argument 1: Take a particular feature, intelligence, intelligence of the sort humans have. I think it is far beyond the current state of the art in biology to give estimates of the probability that intelligence would arise given how things were a billion years ago. Intelligence was a solution to certain evolutionary problems, but had things gone somewhat differently, these problems might not have arisen and, for all we know, many, many other solutions are possible. Moreover, we have very little in the way of estimates of the sort of organismic complexity that is needed for intelligence. All in all, we have little reason to assert that intelligence is not very unlikely given how things were a billion years ago. We just don't know how to estimate such a probability, and, unless evolutionary computing ends up yielding experimental data that shows that the evolution of intelligence is easy, it may be quite a while before we know how to estimate such a probability.

A developed evolutionary science will, presumably, assign some probability to the arising of intelligence. But I have argued we have little reason to think that this probability is not going to be very low. Suppose that this probability will in fact be very low, and developed evolutionary science discovers this. Then, a design argument, where the fact in question is the existence of intelligent life (or intelligent life on earth?), will be a good one. For we will have (1). That, by itself, is not enough. Even if it is very unlikely that the precise pattern on a copperhead should arise through evolutionary processes, there being too many other patterns that could do the job, there is no good ID argument based on this pattern, because we do not have reason to believe that a designer would not be unlikely to make precisely that pattern.

However, in the case where the fact F is intelligence, and if our design hypothesis is that the world is created by the God of traditional monotheism, then the fact F is not unlikely given the existence of God, since the God of traditional monotheism is perfectly good and generous, and hence it is not unlikely that he would want to create intelligent beings (on earth? that might be an issue to consider) to bestow his goodness on. Hence, we have (2).

Thus, if we do not have much reason to deny that (1) holds of intelligence--and the present state is such that we do not--then neither do we have much reason to deny that a design argument based on the existence of intelligence will work. Instead, we should just suspend judgment on this. Hence, for all we know, at least one ID argument will be successful.

Argument 2: There is a myriad of biological facts, such as the precise pattern on a copperhead, each of which satisfies (1). This is no evidence at all against evolutionary biology, but just a fact about probabilistic processes like those of mutation, recombination, selection, etc. A priori improbable things happen all the time. Bob wins a lottery. A dart lands within 0.000000000001 mm of point x (this is extremely improbable whatever the point x is, but of course the dart has to land somewhere, so whenever a dart is thrown, it lands within 0.000000000001 mm of some point, and hence always something improbable happens). No surprises there. Let S be the set of very unlikely biological facts like that.

Now to have reason to hold that no ID argument for some relevant design hypothesis will be successful, one would have to have reason to judge that none of the facts in S satisfies (2). First note that this judgment would go beyond the competence of biological science. Biological science does not estimate the probability that God, if he existed, would design a snake with such-and-such a pattern on its back. So biological science by itself would not be sufficient to establish the unavailability of an ID argument.

Now in the case of the pattern on the back of the snake, we have little reason at present to think that God would design that pattern rather than any of the many others that the snake could have had. Now in the unlikely case that we might discover that the pattern encodes some text in some natural way, the relative probability of that pattern over the others might rise. But we don't expect that to happen.

Nonetheless, I do not think we have much reason to believe that none of the facts in S satisfies (2), and we even do not have much reason to believe that none of the facts in S will be discovered to satisfy (2). So, once again, we have little reason, if any, to believe that a working ID argument won't be discovered in the future.

Conclusions: Being sure that ID will be unsuccessful is unjustified. But there is a proviso I have to add, which was implicit in my assumption that (1) and (2) are all one needs for ID's success: there must be no good independent reason to deny the conclusion of the ID argument (this was the assumption that the probability for H before the ID argument isn't too low).