Showing posts with label Intelligent Design. Show all posts

Friday, June 22, 2018

Language and specified complexity

Roughly speaking—but precisely enough for our purposes—Dembski’s criterion for the specified complexity of a system is that a ratio of two probabilities, pΦ/pL, is very low. Here, pL is the probability that by generating bits of a language L at random we will come up with a description of the system, while pΦ is the physical probability of the system arising. For instance, when you have the system of 100 coins all lying heads up, pΦ = (1/2)^100 while pL is something like (1/27)^9 (think of the description “all heads” generated by generating letters and spaces at random), so that pΦ/pL is something like 6 × 10^−18. Thus, the coin system has specified complexity, and we have significant reason to look for a design-based explanation.
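
The arithmetic behind these rough figures can be checked in a couple of lines. This is just a sketch of the post's own numbers, with the assumption (as in the post) that the description "all heads" is nine symbols drawn from a 27-symbol alphabet (26 letters plus space):

```python
# Rough numbers from the coin example.
p_phi = (1 / 2) ** 100   # physical probability: 100 fair coins all heads
p_L = (1 / 27) ** 9      # chance of randomly typing the 9-symbol
                         # description "all heads" from 26 letters + space

ratio = p_phi / p_L
print(ratio)             # roughly 6e-18, as in the post
```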

I’ve always been worried about the language-dependence of the criterion. Consider a binary sequence that intuitively lacks specified complexity, say this sequence generated by random.org:

  • 0111101001100111010101011001100111001110000110011110101101101101001011011000011101100111100111111111

But it is possible to have a language L where the word “xyz” means precisely the above binary sequence, and then relative to that language pL will be much, much bigger than 2^−100 = pΦ.

However, I now wonder how much this actually matters. Suppose that L is the language that we actually speak. Then pL measures how “interesting” the system is relative to the interests of the one group of intelligent agents we know well—namely, ourselves. And interest relative to the one group of intelligent agents we know well is evidence of interest relative to intelligent agents in general. And when a system is interesting relative to intelligent agents but not probable physically, that seems to be evidence of design by intelligent agents.

Admittedly, the move from ourselves to intelligent agents in general is problematic. But we can perhaps just sacrifice a dozen orders of magnitude to the move—maybe the fact that something has an interest level pL = 10^−10 to us is good evidence that it has an interest level at least 10^−22 to intelligent agents in general. That means we need the pΦ/pL ratio to be smaller to infer design, but the criterion will still be useful: it will still point to design in the all-heads arrangement of coins, say.

Of course, all this makes the detection of design more problematic and messy. But there may still be something to it.

Tuesday, April 16, 2013

What if Intelligent Design is irrefutable?

Consider this valid argument:

  1. (Premise) Intelligent Design is irrefutable.
  2. (Premise) Intelligent Design is incompatible with evolution.
  3. (Premise) If p is incompatible with q, and p is established, then q can be refuted.
  4. So, evolution is not established.

Premise 3 will be taken to be false by many if "refute" and "establish" are understood in the knowledge sense or even in the sense of knowledge-type justification, since knowledge and knowledge-type justification are not closed under entailment. Some will say that although it is established that I have two hands, and that I am a brain in a vat is incompatible with that, the claim that I am a brain in a vat cannot be refuted. I think this response is mistaken—I know I am not a brain in a vat—but I won't insist on it.

On the other hand, if "refute" and "establish" are taken in the sense of assigning low and high rational probability, respectively, then 3 is surely true: if p and q are incompatible, and one has high probability, the other has low probability. Thus, it seems that those who claim that evolution is established should either hold that Intelligent Design is compatible with evolution or stop arguing that it is irrefutable.

But perhaps what I just said isn't right. Maybe to refute q is both to get to assigning a low probability to q and to obtain significant incremental disconfirmation. Then 3 is false. For suppose that q has very low rational priors. Then we can establish p without getting any incremental disconfirmation of q—we get incremental confirmation of p, and p ends up with high probability, but q's probability is basically unchanged from the priors. Maybe it is empirically established that I have two hands, while the brain in a vat scenario continues to be ruled out by its low priors.

If this response is right, then the evolutionary theorist who wants to claim that evolution is established while yet accepting 1 and 2 needs to hold that the rational priors of intelligent design are low. But are they?

Friday, April 5, 2013

Design, evolution and many worlds

The following image graphs the outcome of a simulation of a random process where a particle starts in the middle of the big square and moves by small steps in random directions until it reaches an edge.



It sure looks from the picture like there was a bias in the particle in favor of movement to the right, and that the particle was avoiding the black lines (you can see a few points where it seems to approach the black lines and then jump back) and searching for the red edge on the right.  If you saw the particle behaving in this way, you might even think that the particle has some rudimentary intelligence or is being guided.  To increase the impression of this, one could imagine the particle doing something like this through a complex labyrinth.

But in fact the picture shows a run that doesn't involve any such bias or intelligence or guidance.  However, it took 23774 runs until I got the one in the picture!  What I did was have the computer repeatedly simulate random runs of a particle, throwing out any runs where the particle hit the black boundary lines before it hit the red edge.  In other words, there is a bias at work.  However, it is not a bias in the step-by-step movements of the particle, but a selection bias--to get the picture above, I had to discard 23773 pictures like this:



Sampling multiple cases with a selection bias can produce the illusion of design.  Most cases look like the second diagram, but if I only get to observe cases that meet the criteria of hitting the red edge before hitting any black edge, I get something that looks designed to meet the criteria (it looks like the process is run by biased chances, whereas the bias comes from conditioning on the criteria).
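
The simulation described above can be sketched roughly as follows. This is a minimal illustrative version, not the code that produced the pictures; the square size, step length, and function names are all my own assumptions:

```python
import math
import random

def run_walk(size=50.0, step=1.0, rng=None):
    """Unbiased random walk starting at the center of a square.
    Returns 'red' if the walk first reaches the right edge,
    'black' if it first hits any of the other three edges."""
    rng = rng or random
    x, y = size / 2, size / 2
    while True:
        theta = rng.uniform(0.0, 2.0 * math.pi)  # no directional bias at all
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        if x >= size:
            return 'red'
        if x <= 0.0 or y <= 0.0 or y >= size:
            return 'black'

def sample_until_red(rng):
    """Rejection sampling: discard runs until one reaches the red edge
    before any black edge; return how many runs were thrown away."""
    discarded = 0
    while run_walk(rng=rng) != 'red':
        discarded += 1
    return discarded

print(sample_until_red(random.Random(0)))  # number of discarded runs
```

Each individual step is unbiased; the apparent rightward "preference" of the surviving run comes entirely from the rejection loop in `sample_until_red`.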

Now, suppose that evolutionary processes occur at a large number of sites--maybe a very large number of planets in a single large universe or maybe in a large number of universes.  Suppose, further, that intelligence is unlikely to evolve at any particular evolutionary site.  Maybe most sites produce only unicellular critters or vast blankets of vegetation.  But a few produce intelligence.  We then will have a sampling bias in favor of the processes happening at sites where intelligence results.  And at such sites, the evolutionary processes will look like they have a forward-looking bias in favor of the production of intelligence, just as in my first diagram it looks like there is a bias in favor of getting to the red line and avoiding the black lines (think of the diagram as phase space, with the black lines marking where total extinction occurs and the red line marking where there is intelligent life).

This means that we will have the appearance of design aimed at intelligence.  This forces a caution for both intelligent design theorists and evolutionary theorists if there is reason to think there is a large number of sites with life.  

The intelligent design theorists need to be very cautious about taking apparent end-directedness in the phylogenetic history that led to human beings to be evidence for design.  For given a large number of life sites and the anthropic principle, we expect to see apparent directedness at the production of intelligence in the process, just as in my first picture there is apparent red-directedness and black-avoidance.  This means that intelligent design theorists would do well to focus on apparent design in lineages that do not lead to us, since such design is not going to suffer from the same anthropic selection bias.  The cool stuff that spiders do is better fodder for intelligent design arguments than the mammalian eye, because the mammalian eye "benefited" from anthropic selection.  However, this also weakens the design arguments.  For design (pace some prominent intelligent design theorists) involves offering an explanation of a phenomenon in terms of the reasons a plausible candidate for a designer would likely be responsive to.  If the phenomenon is one that promotes the development of intelligent life, the design explanation could be quite good, for there are excellent reasons for a designer to produce intelligent life--intelligent life is objectively valuable.  But if the phenomenon is a spider's catching of flies, the reasons imputed to the designer become less compelling, and hence the design explanation becomes weakened.  

On the other hand, evolutionary theorists need to be careful in making precise generalizations about things like rates of beneficial mutations that apply equally to our ancestors and to the ancestors that other organisms have not in common with us.  For given a large number of sites where life develops, we would expect differences in such things due to the anthropic sampling bias.

This also suggests that we actually could in principle have evidence that decides between the hypotheses: (a) intelligent design, (b) naturalistic evolution at a small number of sites and (c) naturalistic evolution at a large number of sites.  

Suppose we find that the rate of favorable mutations among our ancestors was significantly higher than the rate of favorable mutations not among our ancestors.  This offers support for (c), and maybe to a lesser degree for (a), but it certainly counts against (b).  But suppose we find equality in the rates of favorable mutations among our ancestors and among our non-ancestors.  Then that offers evidence against (c), and, depending on whether the rates are as we would expect on evolutionary grounds, it is evidence for (b) or for (a).

I am assuming here that the number of sites is finite or there is some way of handling the issues with probabilities in infinite cases.

Tuesday, February 7, 2012

An interesting question about evolutionary theory

Current evolutionary theory is normally taken to assume that there is no correlation between mutations and fitness. Now, take some appropriate measure of correlation (if there is no measure of correlation at all, it is hard to see what scientific meaning there is in saying that there is no correlation between mutation and fitness), and let E(c) be a theory just like evolutionary theory, but where the no-correlation assumption is replaced by the assumption that the correlation has degree c. Thus, orthodox evolutionary theory is E(0), while optimistically-skewed evolutionary theories (such as those we'd expect if Molinism is true and God exists, for instance) will be E(c) for c>0, and pessimistically-skewed ones will be E(c) for c<0.

It is clear that for c sufficiently close to 0, E(c) will fit the same empirical data as E(0) fits. Simplicity suggests that c=0, but the resurrection of the cosmological constant is a reminder that a constant can be very close to zero but eventually positing a non-zero value may be justified.

It is an interesting question as to what upper and lower bounds can be found for c, given a particular measure of correlation. It is also an interesting question what value of c gives the best fit to our observations. If the best-fit value of c is significantly positive or negative, that would lend credence to Intelligent Design (of an optimistic or pessimistic sort, respectively).

In toy situations, this is the sort of thing that is amenable to computer studies—maybe people have even done this? My intuition is that even small departures of c from 0 would produce very noticeable results. But of course it could be that c is very, very tiny, in the way the cosmological constant is.
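
Here is one way such a toy computer study might look. Everything about this model is my own assumption: "c" is crudely represented as a shift in the mean fitness effect of mutations, and the population size, selection rule, and parameters are arbitrary. It is a sketch of the kind of experiment, not a serious model:

```python
import random

def simulate(c, generations=200, pop=100, seed=0):
    """Toy E(c) model: each generation every individual receives a mutation
    whose fitness effect is drawn with mean c * 0.1 (sd 1.0), then the
    fitter half survives and doubles.  Returns mean final fitness.
    'c' here is only a crude stand-in for a mutation-fitness correlation."""
    rng = random.Random(seed)
    fitness = [0.0] * pop
    for _ in range(generations):
        # mutation step: mean effect skewed by c
        fitness = [f + rng.gauss(c * 0.1, 1.0) for f in fitness]
        # truncation selection: fitter half survives and reproduces
        fitness.sort(reverse=True)
        half = fitness[: pop // 2]
        fitness = half + half
    return sum(fitness) / pop

# With identical random draws, an optimistically-skewed theory (c > 0)
# should yield visibly higher fitness than a pessimistic one (c < 0):
print(simulate(0.5), simulate(-0.5))
```

Even in this crude setup, the gap between the two runs grows linearly in the number of generations, which matches the intuition that small departures of c from 0 produce very noticeable results.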

Friday, March 18, 2011

HB 2454

I rarely comment on current politics.  Still, I want to say something here.  A bill has been proposed in the Texas Legislature to ban discrimination on the basis of Intelligent Design (ID) research at colleges.  To lay my cards on the table, I think it is still an open question whether the amount of time available for evolutionary processes was sufficient to make the sort of complexity we observe at all likely, and I suspect we are still quite some distance from having mathematical models of the development of anything of sufficient complexity to close the question.  So research on ID should, I think, continue.  And no doubt unjustified discrimination connected with research on ID exists.  But the bill is really embarrassing:
Sec. 51.979.  PROHIBITION OF DISCRIMINATION BASED ON RESEARCH RELATED TO INTELLIGENT DESIGN. An institution of higher education may not discriminate against or penalize in any manner, especially with regard to employment or academic support, a faculty member or student based on the faculty member's or student's conduct of research relating to the theory of intelligent design or other alternate theories of the origination and development of organisms.
Here are three reasons for embarrassment:

  1. Theories of "the origination and development of organisms" concern not evolutionary theory as such but reproductive and developmental biology.  As a commenter here noted, an alternate theory in this realm is "storkism" (presumably the theory that human children come from storks rather than from human mating).  ID concerns something else, something more like the origination and development of types of organisms.
  2. A French Department should be able to discriminate against a prospective faculty member whose primary research is on ID rather than French language, culture and/or literature.  Likewise, it is perfectly reasonable for a Biology Department that required students in a class to do laboratory research on the present functioning of red blood cells to discriminate against a student who, instead, did research on ID.  Maybe an implicit exception for the bona fide requirements of a task can be assumed, but that would also take some of the teeth out of the bill.
  3. Everyone, whatever they think of ID, should agree that it is reasonable for a college to deny tenure/promotion, refrain from hiring or give a low grade on the basis of intellectually shoddy ID research.  Now, the bill either does or does not allow discrimination on the basis of shoddy ID research.  If it does not, then it is clearly unacceptable--it provides a delightful formula for tenure and promotion: do research on ID, and they have to promote you no matter how bad the research is, or else you sue.  Suppose, charitably, that discrimination on the basis of shoddy ID research would still be permissible.  But then the bill is close to useless.  For those scientists who are likely to discriminate on the basis of ID research also say that it is their professional judgment that all ID research (or at least all ID-supportive research) is intellectually shoddy.  So if they can still discriminate on the basis of shoddiness of research, the bill does nothing to protect ID researchers.

Wednesday, July 21, 2010

A defense (well, sort-of) of specified complexity as a guide to design

I will develop Dembski's specified complexity in a particular direction, which may or may not be exactly his, but which I think can be defended to a point.

Specified Complexity (SC) comes from the fact that there are three somewhat natural probability measures on physical arrangements. For definiteness, think of physical arrangements as black-and-white pixel patterns on a screen, and then there are 2^n arrangements where n is the number of pixels.

There are three different fairly natural probability measures on this.

1. There is what one might call "a rearrangement (or Humean) measure" which assigns every arrangement equal probability. In the pixel case, that is 2^−n.

2. There is "a nomic measure". Basically, the probability of an arrangement is the probability that, given the laws (and perhaps the initial conditions--we will have two ways of doing it: one allowing the initial conditions to vary, and one holding them fixed), such an arrangement would arise.

3. There is what one might call "a description measure". This is relative to a language L that can describe pixel arrangements. One way to generate a description measure is to begin by generating random finite-length strings of symbols from L supplemented with an "end of sentence" marker which, when generated, ends a string. Thus, the probability of a string of length k is m^−k where m is the number of symbols in L (including the end of sentence marker). Take this probability measure and condition on (a) the string being grammatical and (b) describing a unique arrangement. The resulting conditional probability measure on the sentences of L that describe a unique arrangement then gives rise to a probability measure on the arrangements themselves: the description probability of an arrangement A is the (conditionalized as before) probability that a sentence of L describes A.
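
The raw generation step, before the conditioning on grammaticality and uniqueness, is easy to illustrate. The toy 27-symbol "language" and the example descriptions below are my own inventions; the point is only that the measure weights a string of k drawn symbols at m^−k, so brief descriptions get exponentially more mass:

```python
def raw_string_probability(s, alphabet_size):
    """Probability of generating string s followed by the end-of-sentence
    marker by drawing symbols uniformly at random: m^-k, where m counts
    all symbols (end marker included) and k counts the draws made."""
    m = alphabet_size      # includes the end-of-sentence marker
    k = len(s) + 1         # the draws for s, plus the terminating marker
    return m ** -k

# Toy comparison over a made-up 27-symbol language (26 letters + marker):
short = raw_string_probability("allblack", 27)   # a brief description
long_ = raw_string_probability("b" * 100, 27)    # pixel-by-pixel listing
print(short / long_)  # the short description gets vastly more mass
```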

So, basically we have the less anthropocentric nomic and rearrangement measures, and the more anthropocentric description measure. The rearrangement measure has no biases. The nomic measure has a bias in favor of what the laws can produce. The description measure has a bias in favor of what can be more briefly described.

We can now define SC of two sorts. An arrangement A has specified rearrangement (respectively, nomic) complexity, relative to a language L, provided that A's rearrangement (respectively, nomic) measure is much smaller than its L-description measure. (There is some technical stuff to be done to extend this to less specific arrangements--the above works only for fully determinate arrangements.)

For instance, consider the arrangement where all the pixels are black. In a language L based on First Order Logic, there are some very short descriptions of this: "(x)(Bx)". So, the description measure of the all-black arrangement will be much bigger than the description measure of something messy that needs a description like "Bx1&Bx2&Wx3&...&Bxn". On the other hand, the rearrangement measure of the all-black arrangement is the same as that of any other arrangement. In this case, then, the L-description measure of the all-black arrangement will be much greater than its rearrangement measure, and so we will have specified rearrangement complexity, relative to L. Whether we will also have specified nomic complexity depends on the physics involved in the arrangement.

All of the above seems pretty rigorous, or capable of being made so.

Now, given the above, we have the philosophical question: Does SC give one reason to suppose agency? Here is where things get more hairy and less rigorous.

An initial problem: The concept of SC is language-relative. For any arrangement A, there is a language L1 relative to which A lacks complexity and a language L2 relative to which A has complexity. So SC had better be defined in terms of a privileged kind of language. I think this is a serious problem for the whole approach, but I do not know that it is insuperable. For instance, easily inter-translatable languages are probably going to give rise to similar orders of magnitude within the description measures. We might require that the language L be the language of a completed and well-developed physics. Or we might stipulate L to be some extension of FOL with the predicates corresponding to the perfectly normal properties. There are tough technical problems here, and I wish Dembski would do more here. Call any language that works well here "canonical".

Once we have this taken care of, if it can be done, we can ask: Is there any reason to think that SC is a mark of design?

Here, I think Dembski's intuition is something like this: Suppose I know nothing of an agent's ends. What can I say about the agent's intentions? Well, an agent's space of thoughts is going to be approximately similar to a canonical language (maybe in some cases it will constitute a canonical language). Without any information on the agent's ends, it is reasonable to estimate the probabilities of an agent having a particular intention in terms of the description measure relative to a canonical language.

But if this is right, then the approach has some hope of working, doesn't it? For suppose you have nomic specified complexity of an arrangement A relative to a canonical language L. Then P(A|no agency) will be much smaller than the L-description measure of A, which is an approximation to P(A|agency) with no information about the sort of agency going on. Therefore, A incrementally confirms the agency hypothesis. The rest is a question of priors (which Dembski skirts by using absolute probability bounds).

I think the serious problems for this approach are:

  • The problem of canonical languages.
  • The problem that in the end we want this to apply even to supernatural designers who probably do not think linguistically. Why think that briefer descriptions are more likely to match their intentions?
  • We do have some information on the ends of agents in general--agents pursue what they take to be valuable. And the description measure does not take value into account. Still, insofar as there is value in simplicity, and the description measure favors briefer descriptions, the description measure captures something of value.

Tuesday, August 12, 2008

Is Intelligent Design a scientific theory?

Intelligent Design (ID) can be thought of as having two parts: a negative part that claims that evolutionary explanations of various biological features of the world are unsatisfactory, and a positive part that says that these features are best explained by positing intelligent agency.

Is Intelligent Design a scientific theory? Not really, if only for the simple reason that the positive side has not been worked out with a sufficient level of detail to merit the term "scientific theory". If a corpse is found with a certain set of wounds, and scientific examination makes it very unlikely that the wounds were inflicted by non-agential processes because the wounds spell out a word, the conclusion "An agent did this" is a fine one for a forensic scientist to draw. But this conclusion, while scientific, does not seem to merit the term "scientific theory". Nor is the issue that this is an isolated case. If lots of corpses with such wounds were found, the claim that each of them is the result of intelligent agency is still not a scientific theory. I think one important reason for this is that there is a serious lack of detail here. Likewise, it would not count as a scientific theory to say that the deaths were the result of "natural causes", with no further specification of the cause. (The lack of detail is related to the accusation of unfalsifiability; obviously, the less detail is given, the harder it is to falsify a view.)

Now, individual proponents of ID might give more detail than the mere claim that agency is behind the biological processes. Thus, they may specify how many agents were involved (e.g., one), where the agents intervened (e.g., at boundaries between species, or maybe of some higher taxa) and how they intervened (e.g., by miraculously causing mutations). Once more detail is given, they have more hope that the claim will become a scientific theory.

But even if individual proponents of ID give more detail, it will still not be correct to say that ID is a scientific theory. Rather, ID will at best be a family of disparate scientific theories. Merely rejecting evolution and holding to agency is not sufficiently contentful to unify the family into a single theory, just as George who thinks the butler did it, Patricia who thinks aliens did it, and Hercule who thinks it was suicide do not hold a single theory, even though they all agree that the death was the result of agential design rather than an accident.

This is important vis-à-vis one political consideration. Some folks would like to have ID taught in school as a theory alternative to evolution (interestingly, I have been told that the Discovery Institute does not take this position). But if ID is not actually a single scientific theory, then it is not parallel to evolution. For neo-Darwinian evolution is much more of a unified theory, although of course individual evolutionary scientists hold to variants of it. Now, one particular positive theory falling under ID might perhaps be an alternative (whether good or bad) to evolutionary theory. But no one particular positive ID theory has sufficient acceptance even in the ID community, as far as I know.

At the same time, the claim that ID is not a scientific theory is compatible with ID being science, just as a particular conclusion of a forensic scientist may not have sufficient detail to count as a scientific theory, but may nonetheless be a scientific conclusion. For, science is more than just the production of scientific theories. (For instance, the criticism of scientific theories is also a scientific practice.)

Monday, August 11, 2008

Is Intelligent Design theologically shallow?

Occasionally, one hears Intelligent Design (ID) accused of being theologically shallow. Now, no doubt, many of the advocates of ID are theologically shallow, as are many of the opponents of ID. But the question is whether there is anything theologically shallow about holding ID to be true. As far as I can tell, ID is something like the following two-part thesis:

(a) Some of the biological features of organisms are designed by non-human intelligent agency; and (b) this fact can be known on the basis of biological study of these features (together with the application of mathematical, conceptual and/or other tools).
The reason for the "non-human" qualifier is that otherwise (a) would be uninterestingly satisfied by artificially selected features in domesticated animals.

What, then, is theologically shallow about ID? Part (a) has always been accepted by Jewish and Christian theists, and does not appear at all shallow—indeed, it is connected with a depth of reflection on providential divine involvement in the world, creation, the problem of evil, and so on. Unless the claim is the implausible one that Judaism and Christianity are at root theologically shallow, the theological problem would seem to have to be not with part (a), but with part (b).

Now, if one has a strongly anti-rational theological stance, one might think that any attempt to argue to a conclusion about divine activity on the basis of empirical data is reflective of a shallow rationalism. If so, then one will think that (b) is indicative of a theological shallowness. But I do not think (b) is indicative of a theological shallowness. In fact, it seems to me to be a deeper view to say with Aquinas that God is both an unfathomable mystery and yet his existence and the fact of his creating the world can be known on the basis of observed data. (I am not saying Aquinas advocated ID—he did not—but he did think that we could get to the existence of God, and to some facts about God's creative activity, on the basis of philosophical reflection on things we have observed.) Maybe there is something particularly shallow in holding that science should be a part of one of the routes to knowledge about God's creative activity, but I do not see it. Indeed, it seems to me to be a rather deep view to think that God is imaged in our world in all kinds of ways, and since science tells us about our world, it is relevant to knowing about God.

Perhaps, though, it is not the bare statement of ID that is theologically shallow, but what is shallow is something else. Two options come to mind. One is that the motivations of ID proponents are shallow. Perhaps, ID proponents think that the only way to justify belief in God is through scientific data. That is, indeed, a shallow view. Or maybe they think that only by positing scientifically discernible divine involvement can one save the doctrine that God designed human beings. That might be a shallow view, unless there are some deep arguments behind it. But it does not seem to me to be right to call a view shallow just because the proponents of it are motivated by another view which is shallow.

The second option is that what is shallow is not so much the two-part claim of ID, but the way that ID proponents flesh out the claim, e.g., by asserting that there is evidence of miraculous divine interventions. Again, even if this fleshing out were shallow, it would not follow that ID itself is a shallow doctrine, but that it is fleshed out in a shallow way.

But I want to consider the latter criticism a bit further. Why would it be shallow to say that God created some organisms through miraculous interventions? Now, if one thinks that all claims of miraculous interventions are theologically shallow, one will say this. But that is a sweeping generalization that seems hard to justify. There does not appear to be anything particularly shallow to the idea that God's ways of manifesting his love in creation are not bounded by the laws of nature. Now, it might be shallow to claim that God could not do such-and-such non-miraculously. But it does not seem shallow to claim that he could do such-and-such miraculously, nor that he did. Granted, this view may be unattractive to those like Leibniz who think a good designer always makes something that runs just fine without him. But is denying this standoffish view of divine activity shallow? If anything, positing a world where God sometimes works in and through natural causes, and sometimes beyond them, seems to lead to a richer view.

None of this is an argument for ID. In an earlier post, I have argued that at least the Dembskian variety of ID fails, and I do not know any variety of ID to succeed. But it is important not to criticize views on spurious grounds, such as the accusations of theological shallowness.

In any case, I am not even sure that p's being "deep" is any evidence for p, or that p's being "shallow" is any evidence against p.

Wednesday, May 7, 2008

Dembski's definition of specified complexity

A central part of Dembski's definition of specified complexity is a way of measuring whether an event E is surprising. This is not just a probabilistic measure. If you roll eleven dice and get the "unsurprising" sequence 62354544555, this sequence has the same probability 1/6^11 as the intuitively more "surprising" sequences 12345654321 or 11111111111. It would be a mistake (a mistake actually made by some commenters on the design argument) to conclude from the probabilistic equality that there is no difference in surprisingness; what one should conclude is that surprisingness is not just a matter of the probabilities. Instead of talking about "surprisingness", however, Dembski talks about "specification". The idea is that you can "specify" the sequences 12345654321 or 11111111111 ahead of time in a neat way. The first you specify as the only sequence of eleven dice throws consisting of a strict monotonic increase ending precisely where a strict monotonic decrease begins. The second is one of only six sequences of eleven dice throws in which every throw yields the same result.

I will describe Dembski's account of specification, which will be somewhat technical; then I will criticize it and consider a way of fixing it up that is not entirely satisfactory.

Dembski proposes a measure of specification.[note 1] Suppose we have a probability space S (e.g., the space of all sequences of eleven dice throws) with a probability measure PH (defined by some chance hypothesis H). Let f be a "detached" real-valued function on S (a lot more on detachment later). An event E in the probability space S is just a measurable subset of S. For any real-valued function f defined on S and real number y, let fy be the set of all points x in S such that f(x)≥y. This is an event in S. Indeed, fy is the event of being at a point x in our probability space where f(x) is at least y.

We now say that an event E in S is specified to significance a provided that there is a function f on S "detached" from E (a lot more on detachment later on) and a real number y such that f_y contains E and P_H(f_y)<a.

For instance, in our eleven dice throw case, if x is a sequence of eleven dice throw results, let f(x) be the greatest number n such that at least n of the throw results in x are the same. Then f_11 is equivalent to the event that all eleven of the dice throws were the same. Let E be the event of the sequence 11111111111 occurring. Then E is contained in f_11, and P_H(f_11)=6/6^11=1/6^10<10^-7, and so E is specified to significance 10^-7, as long as we can say that f is detached from E. Similarly, we can let f be the length of the largest interval over which a sequence of dice throws is monotonic increasing plus the length of the largest interval over which a sequence of dice throws is monotonic decreasing, and then our sequence 12345654321 will be a member of f_12, and if f is detachable, we can thus compute a significance for this result.
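The two rejection functions can be made concrete. Here is a minimal sketch in Python (the function names are mine, not Dembski's), which also computes the probability of the all-same event by counting the six constant sequences:

```python
from fractions import Fraction

def max_repetition(x):
    """f(x): the greatest n such that at least n of the throw results in x are the same."""
    return max(x.count(d) for d in set(x))

def monotone_score(x):
    """Length of the longest strictly increasing run plus that of the
    longest strictly decreasing run."""
    def longest_run(seq, cmp):
        best = cur = 1
        for a, b in zip(seq, seq[1:]):
            cur = cur + 1 if cmp(a, b) else 1
            best = max(best, cur)
        return best
    return longest_run(x, lambda a, b: a < b) + longest_run(x, lambda a, b: a > b)

assert max_repetition("11111111111") == 11   # so 11111111111 lies in f_11
assert monotone_score("12345654321") == 12   # so 12345654321 lies in f_12

# P_H(f_11): exactly six sequences (111..., 222..., ..., 666...) have
# f(x) >= 11, each with probability 1/6^11.
p_f11 = Fraction(6, 6**11)
print(p_f11)   # reduces to 1/6^10, roughly 1.65e-8
```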

The crucial part of the account is the notion of "detachability". Without such a condition, every improbable event E is significant. Consider our intuitively unsurprising sequence 62354544555 (which was as a matter of fact generated by a pretty random process: I made it by using random.org[note 2]). Let f be the function assigning 1 to the sequence 62354544555 and 0 to every other sequence. Then our given sequence is the only member of f_1, and so without any detachability condition on f, we would conclude that we have specification to a high degree of significance. But of course this is cheating. The function f was jerryrigged by me to detect the event we were looking at, and one can always thus jerryrig a function. To check for specification, however, we need a function f that could in principle have been specified beforehand, i.e., before we found out what the result of the dice throwing experiment was. If we get significance with such a function, then we can have some confidence that our event E is specified.

Dembski, thus, owes us an account of detachability. In No Free Lunch, he offers the following:

a rejection function f is detachable from E if and only if a subject possesses background knowledge K that is conditionally independent of E (i.e., P(E|H&K) = P(E|H)) and such that K explicitly and univocally identifies the function f.

Or, to put it in our notation, f is detachable from E iff the epistemic agent has background knowledge K such that P_H(E|K)=P_H(E). It is hard to overstress how central this notion of detachability is to Dembski's account of specification, and therefore to his notion of specified complexity, and thus to his project.

But there is a serious problem with detachability: I am not sure that the independence condition P_H(E)=P_H(E|K) makes much sense. Ordinarily, the expression P(...|K) makes sense only if K is an event in the probability space or K is a random variable on the probability space (i.e., a measurable function on the probability space). In this case, K is "knowledge". This is ambiguous between the content of the knowledge and the state of knowing. Let's suppose first that K is the content of the knowledge—that's, after all, what we normally mean in probabilistic epistemology when we talk of conditioning on knowledge. So, K is some proposition which, presumably, expresses some event—probabilities are defined with respect to events, not propositions, strictly speaking.[note 3] What is this proposition and event? The knowledge is supposed to "identify" the function f. It seems, then, that K is a proposition of the form "There is a unique function f such that D(f)", where D is an explicit and univocal identification.

But on this reading of "knowledge", the definition threatens uselessness. Let K be the proposition that there is a unique function f such that f(x)=1 if and only if x equals 62354544555 and f(x)=0 otherwise. This function f was our paradigm of a non-detachable function. But what is P_H(E|K)? Well, K is a necessary truth: It is a fact of mathematics that there is a unique function as described. If P_H is an objective probability, then all necessary truths have probability 1, and so to condition on a necessary truth changes nothing: P_H(E|K)=P_H(E), and we get detachability for free for f, and indeed for every other function.

So on the reading where K is the content of the knowledge, if necessary truths get unit probability, Dembski's definition is pretty much useless—every function that has a finite mathematical description becomes detachable, since truths about whether a given finite mathematical description uniquely describes a function are necessary truths.

But perhaps P_H is an epistemic probability, so that necessary truths might have probability less than 1. One problem with this is that much of the nice probabilistic apparatus now breaks down. How on earth do we define a probability space in such a way that we can assign probabilities less than 1 to necessary truths? Do we partition the space of possibilities-and-impossibilities into regions where it is true that there is a unique function f such that f(x)=1 iff x=62354544555 and f(x)=0 otherwise and regions where this is false? I am not sure what we can make of probabilities in the regions where this is false. Presumably they are regions where mathematics breaks down. How do we avoid incoherence in applying probability theory—as Dembski wants to!—over the space of possibilities-and-impossibilities?

Moreover, it seems to me that on any reasonable notion of epistemic probabilities, those necessary truths that the epistemic agent would immediately see as necessary truths were they presented to her should get probability 1. Any epistemic agent who is sufficiently smart to follow Dembski's arguments and who knows set theory would immediately see as a necessary truth the claim that there is a unique function f on S such that f(x)=1 iff x=62354544555 and f(x)=0 otherwise. So even if we allow that some necessary truths, such as that horses are mammals, might get epistemic probabilities less than 1, the ones that matter for Dembski are not like that—they are self-evident necessary truths in the sense that once you understand them, you understand that they are true. The prospects for an account of epistemic probability that does not assign 1 to such necessary truths strike me as unpromising, though I think this is the route Dembski actually wants to go according to Remark 2.5.7 of No Free Lunch.

Besides, as a matter of fact, any agent who is sufficiently smart to understand Dembski's methods will be one who will assign 1 to the claim that there is a unique function f as above. So on the objective probability reading, Dembski's definition of detachability applies to all finitely specifiable functions. On the epistemic reading, it does so too, at least for agents who are sufficiently smart. This makes Dembski's definition just about useless for any legitimate purposes.

Let's now try the second interpretation of K, where K is not the content of the knowledge, but the event of the agent's actually knowing the identification of f. This is more promising, I think. Let p be the proposition that there is a unique function f on S such that f(x)=1 iff x=62354544555 and f(x)=0 otherwise. Let us suppose, then, that K is the event of the agent knowing that p. It is essential, we've seen, to judging f to be non-detachable that P_H(K) not be equal to 1. This requires a theory of knowledge where for an agent to know p is more than just for an agent to be in a position to know p, as when the agent knows things that self-evidently entail p. An actual explicit belief is required for knowledge on this view. Seen this way, P_H(K)<1, since the agent might never have thought about p. Since K is a bona fide event on this view, we can apply probability theory without any worries about dealing with incoherence. So far so good.

But new problems show up. It is essential to Dembski's application of his theory to Intelligent Design that it apply in cases where people have only thought of f after seeing the event E—cases of "old evidence". Take, for instance, Dembski's example of the guy whose allegedly random choices of ballot orderings heavily favored one party. Dembski proposes a function f that counts the number of times that one party is on the top of the ballot. But I bet that Dembski did not actually think of this function before he heard of the event E of skewed ballot orderings. Moreover, hearing of the event surely made him at least slightly more likely to think of this function. If he never heard of this event, he might never have thought about the issue of ballot orderings, and hence about functions counting them. There is surely some probabilistic dependence between Dembski's knowing that there is such a function and the event E. Similarly, seeing the sequence 11111111111 does make one more likely to think of the function counting the number of repetitions. One might have thought of that function anyway, but the chance of thinking of it is higher when one does see the result. Hence, there is no independence, and, thus, no detachability.

This problem is particularly egregious in some of the biological cases to which one might ultimately want to apply Dembski's theory. Let's consider the event E that there is intelligent life. Let K be any state of knowledge identifying a function. Surely, there is probabilistic dependence between E and K. After all, P_H(K|~E)=0, since were there no intelligent life, nobody would know anything, as there would be nobody to do the knowing. Thus, P_H(E|K)=1, which entails that E and K are not probabilistically independent unless P_H(E)=1.
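The inference here (from P_H(K|~E)=0 to P_H(E|K)=1, and thence to dependence) can be checked on a toy model. All the numbers below are invented purely for illustration; the one structural constraint mirrors the post, namely that no-life-yet-a-knower gets probability zero:

```python
from fractions import Fraction

# Toy joint distribution over (E, K): E = intelligent life exists,
# K = someone knows the identification of f. Numbers are made up.
P = {
    (True, True):   Fraction(1, 10),  # life exists and someone knows it
    (True, False):  Fraction(6, 10),  # life exists but nobody happens to know it
    (False, True):  Fraction(0),      # no life, yet a knower: impossible
    (False, False): Fraction(3, 10),
}

def prob(pred):
    """Probability of the set of outcomes satisfying pred."""
    return sum(p for outcome, p in P.items() if pred(outcome))

p_E = prob(lambda o: o[0])
p_K = prob(lambda o: o[1])
p_E_given_K = prob(lambda o: o[0] and o[1]) / p_K

assert prob(lambda o: o[1] and not o[0]) == 0  # P_H(K & ~E) = 0
assert p_E_given_K == 1                        # hence P_H(E|K) = 1
assert p_E_given_K != p_E                      # so E and K are dependent, as P_H(E) < 1
```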

So the problem is that in just about no interesting case where we already knew about E will f be detachable from E, and yet the paradigmatic applications of Dembski's theory to Intelligent Design are precisely such cases. Here is a suggestion for how to fix this up (inspired by some ideas in Dembski's The Design Inference). We allow a little bit of dependence between E and K, but require that the amount of dependence not be too big. My intuition is that the smaller the significance a of the specification (note that the smaller the significance a, the more significant the specification—that's how it goes in statistics), the more dependence we can permit. To do this right, we'd have to choose an appropriate measure of dependence, but since I'm just sketching the idea, I will leave out the details.
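To make the sketch slightly more concrete, here is one hypothetical way the relaxed criterion could look. Both the measure of dependence (a log-ratio that is zero exactly when E and K are independent) and the scaling of the dependence budget with the significance a are my own guesses, chosen only to illustrate the shape of the proposal, not anything Dembski endorses:

```python
import math

def dependence(p_EK, p_E, p_K):
    """One candidate measure of dependence between events E and K:
    |log P(E&K) - log(P(E)P(K))|, which is zero iff E and K are independent."""
    return abs(math.log(p_EK) - math.log(p_E * p_K))

def specified_with_slack(a, p_EK, p_E, p_K, c=0.1):
    """Hypothetical relaxed criterion: permit dependence up to a budget that
    grows as the significance a shrinks (i.e., as the specification becomes
    more significant). The scaling c*log(1/a) is illustrative only."""
    return dependence(p_EK, p_E, p_K) <= c * math.log(1 / a)

# Exactly independent events pass at any significance level:
assert specified_with_slack(1e-8, 0.01 * 0.5, 0.01, 0.5)
# Mild dependence passes at a stringent significance but fails at a lax one:
assert specified_with_slack(1e-8, 0.006, 0.01, 0.5)
assert not specified_with_slack(0.5, 0.006, 0.01, 0.5)
```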

However, there is a difficulty. The difficulty is that in "flagship cases" of Intelligent Design, such as the arising of intelligence or of reproducing life-forms, there is a lot of dependence between E and K, since our language is in large part designed (consciously or not) for discussing these kinds of events. It is in large part because reproducing life-forms are abundant on earth that our language makes it easy to describe reproduction, and that our language makes it easy to describe reproduction significantly increases the probability that we will think of functions f that involve reproductive concepts. In these cases, the amount of dependence between E and K will be quite large.

There may still be cases where there is little dependence, at least relative to some background data. These will be cases where our language did not develop to describe the particular cases observed but developed to describe other cases, perhaps similar to the ones observed but largely probabilistically independent of them. Thus, our language about mechanics and propulsion plainly did not develop to describe bacterial flagella, and it may be that the existence of bacterial flagella is probabilistically independent of the things for which our language developed. So maybe the above account works if K is a state of knowing a specification that includes bacterial flagella. Or not! There are hard questions here. One of the hard questions is with regard to how particular K is. Is K the event of one particular knower, say William Dembski, having the identification of f? If so, then there is a lot of probabilistic dependence between the existence of bacterial flagella and K, since the probability of Dembski's existing in a world where there are no bacterial flagella is very low, since history would have gone very differently without bacterial flagella, and probably Dembski would never have come into existence.

Or is K the event of some knower or other having the identification of f? Then, to evaluate the dependence between K and the existence of bacterial flagella we would have to examine the almost intractable question of what a world without bacterial flagella would have been like.

Friday, October 26, 2007

Does evolutionary theory exclude miraculous divine intervention?

I shall argue that current evolutionary theory (ET) is compatible with the Intervention Claim (IC) that some biological facts about the development of species are explained by one or more miraculous divine interventions at some point in evolutionary history. This implies that evolution is compatible with at least some of the controversial conclusions of interventionist ID.

Now the argument. We can take "evolutionary theory" to have two parts: a concrete and a general part. The concrete part consists of a constantly growing number of specific evolutionary histories explaining the development of particular features of particular organisms, of their geographical distribution, etc., each history having a particular template, in which natural selection tends to play a prominent, but not exclusive, role. The general part will be discussed below.

Two independent arguments now show that IC is compatible with the concrete part of current ET. First, IC merely claims that some biological facts are explained by divine intervention. But the concrete part of current ET does not give evolutionary histories for all features of all organisms. Thus, even if all of the evolutionary histories that form the concrete part of current ET are correct, nonetheless IC might be true, because it could be that an organism or feature for which an evolutionary history is not given by the concrete part of current ET in fact developed through divine intervention. (By the way, in case it is tempting to run an inductive argument from the concrete part of current ET for the claim that every biological feature can be explained without divine intervention, that temptation should abate after thinking about my duct tape parable.)

Secondly, the evolutionary histories involve mutation and recombination, but are mostly agnostic about the precise causes of mutation and of particular recombinations. ET is compatible with determinism, so the histories do not include the claim that mutation and recombination was random or unexplainable. We now know some of the sources of mutation and some of the causal processes involved in recombination, but we would not be so rash as to say that we know all of these sources and processes, and neither would ET be falsified by finding new ones, nor do particular evolutionary histories identify particular causes of mutation or recombination (this particular molecule was hit by this particular cosmic ray, which was emitted by this star, etc.). Therefore, the evolutionary histories are compatible with miracles being involved in the explanation of a particular mutation or recombination event, miracles such as God shifting a molecule around. Certainly, even if some very rare evolutionary history identifies the cause of, say, a mutation event as, say, the impact of a cosmic ray, the history is surely not going to say anything about the particular source of this cosmic ray, and hence will be logically compatible with the claim that, say, God miraculously redirected a cosmic ray to hit this particular molecule at this particular angle.

So we have two arguments for the compatibility of the concrete parts of ET with IC. What about the general part of ET? This makes some sweeping claims, such as that all living things on earth have developed from a single ancestor organism. Now this general claim is compatible with IC, which simply claims that some aspects of the development involved miracles. Now, if ET claimed that all of the biological development from the ancestor organism can be explained in terms of natural selection, then ET would be incompatible with IC. But ET makes no such claim. Indeed, opponents of evolutionary theory are sometimes accused of conflating evolution with natural selection. In modern evolutionary theory, adaptive explanation plays a prominent role, but not an exclusive one. There are other mechanisms involved. To give just one example, one might explain something's arising as a spandrel.

The general part of current ET does not claim to list all of the kinds of processes that were involved in the development from the single ancestor organism to the present biological population on earth. It probably does claim that natural selection was one of the most explanatorily prominent, or maybe even the most prominent one, of these processes. But that claim does not conflict logically with IC, since IC does not claim that miraculous divine intervention was the most important force in biological history. IC does not claim that there was miraculous divine intervention in every organism's history, but even if it were to claim that, this would be compatible with the claim that it is not the explanatorily most prominent part of the development. (Consider a view: God had a plan, and he occasionally made minor tweaks so that things would come out as he wanted.) Thus, IC is not incompatible with the general claims of current ET, if we read these charitably as not including an exhaustive list of the explanatory mechanisms involved.

Objection 1: The general part of ET does include a restriction on the mechanisms involved--it says that these processes are naturalistic.

Response: ET is a scientific theory. It is not a part of a scientific theory to say things like:
(*) "Event E happened by means of some natural cause or other."
It is the part of a scientific theory to say things like:
(**) "Event E happened by means of at least one of the following natural causes: C1, C2, C3."
We can see what kind of evidence is relevant to (**) (e.g., evidence that a random sampling of events like E had them all caused by C1, C2 and/or C3). We cannot see what sort of evidence would be relevant to (*), apart from the bare fact that E happened. Scientific theories have some specificity--they do not simply say that something happened due to some cause or other, and neither do they simply say that something happened due to some natural cause or other. In fact, it seems to me that a decisive argument against calling Intelligent Design in general a scientific theory lies in its lack of specificity:
(***) "An intelligent agent (of some sort or other) intervened (somewhere) in the history of the world (on account of some set of motives or other) to bring about event E."
But (*) is worse than (***) in respect of specificity. We should not, thus, take claims like (*) to be a part of ET.

Objection 2: An axiom in evolutionary theory is that there is no positive correlation between the occurrence of a mutation and the resulting fitness of an organism. This implies that IC is false, since if IC were true, then mutations more likely to make the organism more fit would be more probable, since God would be more likely to miraculously produce them.

Response: Intuitively, there will be more mutations that decrease fitness than mutations that increase it. (Think of a computer program, and changing a bit of code randomly. Most of the time, it'll either have no effect or the program will crash.) The number of mutations in the history of the world is very, very large. I could estimate this with some data about mutation rates per basepair per generation, but rather than tracking down that data, let's just say it's 10^15--it's going to be much higher than that. IC only claims that there are some miraculous interventions. Let's say there are seven (I am only arguing for logical compatibility of IC and contemporary ET, so I can make up a number here). That is going to be such a tiny fraction of the mutations in the biological history of our planet that it will still be true that the majority of mutations that make a difference make a negative difference to fitness, and hence it will still be true that there is no positive correlation between fitness and mutation.

Moreover, this will be such a tiny fraction of the number of mutations in the history of the world that any effect it has on the overall statistics of mutations is going to be well within very narrow error bounds, so any statistical claims about mutations will still be true. I take it we make no statistical claims of the form "exactly x percent of mutations have property P", but rather ones like "approximately x percent of mutations have property P", and a variation of seven out of, say, 10^15 mutations, given that the total is so great, is not going to affect the truth of such claims.
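A back-of-envelope computation, using the post's illustrative figures (10^15 mutations, seven interventions) together with an assumed 1% baseline rate of fitness-increasing mutations (a number I am making up just for the arithmetic), bears this out:

```python
# Illustrative figures from the post: 10^15 mutations, seven interventions.
total = 10**15
interventions = 7

# Assume (hypothetically) that 1% of ordinary mutations increase fitness,
# and that every one of the seven interventions does.
baseline_rate = 0.01
increasing = baseline_rate * (total - interventions) + interventions
fraction_with = increasing / total

shift = fraction_with - baseline_rate
print(shift)                # on the order of 10^-15
assert 0 < shift < 1e-12    # far inside any "approximately x percent" error bound
assert fraction_with < 0.5  # still no positive mutation-fitness correlation overall
```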

Wednesday, October 24, 2007

Might Intelligent Design turn out to be right?

I am going to argue that for all anybody knows, evolutionary science might develop in such a way that an Intelligent Design (ID) argument would be plausible. Hence, while one might well be justified in saying that ID arguments right now do not work, one is not justified in saying that future ID arguments won't work given a fully developed evolutionary science. Moreover, our current state of biological knowledge, interpreted in an uncontroversial and friendly way, gives us relatively little, if any, reason to accept the claim that ID arguments won't work given a fully developed evolutionary science. (In the interests of full disclosure I should say that I do not think any extant ID argument succeeds in establishing the existence of a designer.)

I shall understand a successful ID argument for a design hypothesis H (say, that God has designed the world) to be an argument that starts with some biological fact F about the world going over and beyond the mere existence of life, a fact such as that the world contains intelligent life, or that the world contains the mammalian eye, or that the world contains highly complex organisms, and then argues:

  1. F is very unlikely to happen if the only processes in play are those of evolutionary biology.
  2. F is not unlikely to happen on the relevant design hypothesis H.
  3. Therefore, F provides significant evidence for the design hypothesis over and against the hypothesis that the only processes in play are those of evolutionary biology.

Assuming that before we give the argument our probability for our design hypothesis (say, that God exists and has created the world) is not too low, we're going to get a successful ID argument as soon as we can find a biological fact F satisfying (1) and (2). My claim, now, is that the present state of evolutionary science gives us little reason to believe that we will not find such a fact F. Note that the above is not the only way of formulating an ID argument. But if we are not in a position to know that no ID argument of this sort is successful, then we are likewise not in a position to know that no ID argument of some sort or other is successful.
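The (1)-(3) schema is in effect a likelihood comparison, and the role the prior plays can be made explicit with a toy Bayesian calculation. All the numbers here are invented for illustration, and "evolution-only" is treated as the sole alternative to H:

```python
def posterior(prior_H, p_F_given_H, p_F_given_evo):
    """Posterior probability of the design hypothesis H after observing F,
    treating 'evolution-only' as the lone alternative to H."""
    num = prior_H * p_F_given_H
    return num / (num + (1 - prior_H) * p_F_given_evo)

# Premise (1): F very unlikely on evolution alone (1e-6);
# premise (2): F not unlikely on H (0.5).
print(posterior(0.1, 0.5, 1e-6))       # a modest prior is boosted to near certainty
assert posterior(0.1, 0.5, 1e-6) > 0.99

# The proviso at the end of the post: if the prior for H is itself minute,
# the same evidence is far from decisive.
assert posterior(1e-9, 0.5, 1e-6) < 0.001
```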

Let's start by considering the fact that evolutionary science gives a statistical explanation of events. It is not shown that, given an earlier state of affairs, bipeds had to evolve. All that is done in an evolutionary explanation is that an evolutionary pathway is traced out and it is argued that that pathway had a certain probability.

Next, observe that for most high-level biological states of affairs, we have very little in the way of estimates of the objective probability of the states of affairs arising. It would be a very, very difficult task, for instance, to estimate the probability that winged vertebrates would exist on earth given the state of things a billion years ago. Thus, although evolutionary explanations are in a significant respect statistical in nature, the statistics have by and large not been worked out.

Now, even if evolutionary science is the whole truth about biology after the initial beginning of life, there will surely be biological features of the world whose objective probability given the initial conditions is tiny. Gould talks of how if we turned the clock back and re-ran the evolutionary processes, we would get something completely different from what we have. That may or may not be true, but surely there are some biological features the probability of whose arising was tiny. For instance, consider the precise kind of pattern that a copperhead snake has on its skin. It may well be that there are myriads of patterns that would do the same job as this pattern does. It may well be that the probability that a snake-like animal would arise with that precise pattern, given the initial state of things, is tiny. (Presumably, if we conjoin enough features, we easily get cases where the probability is about as small as we like.)

Now I have two arguments for my conclusion.

Argument 1: Take a particular feature, intelligence, intelligence of the sort humans have. I think it is far beyond the current state of the art in biology to give estimates of the probability that intelligence would arise given how things were a billion years ago. Intelligence was a solution to certain evolutionary problems, but had things gone somewhat differently, these problems might not have arisen and, for all we know, many, many other solutions are possible. Moreover, we have very little in the way of estimates of the sort of organismic complexity that is needed for intelligence. All in all, we have little reason to assert that intelligence is not very unlikely given how things were a billion years ago. We just don't know how to estimate such a probability, and, unless evolutionary computing ends up yielding experimental data that shows that the evolution of intelligence is easy, it may be quite a while before we know how to estimate such a probability.

A developed evolutionary science will, presumably, assign some probability to the arising of intelligence. But I have argued that we have little reason to think that this probability is not going to be very low. Suppose that this probability will in fact be very low, and developed evolutionary science discovers this. Then a design argument, where the fact in question is the existence of intelligent life (or intelligent life on earth?), will be a good one. For we will have (1). That, by itself, is not enough. Even if it is very unlikely that the precise pattern on a copperhead should arise through evolutionary processes, there being too many other patterns that could do the job, there is no good ID argument based on this pattern, because we do not have reason to believe that a designer would not be unlikely to make precisely that pattern.

However, in the case where the fact F is intelligence, and if our design hypothesis is that the world is created by the God of traditional monotheism, then the fact F is not unlikely given the existence of God, since the God of traditional monotheism is perfectly good and generous, and hence it is not unlikely that he would want to create intelligent beings (on earth? that might be an issue to consider) to bestow his goodness on. Hence, we have (2).

Thus, if we do not have much reason to deny that (1) holds of intelligence--and the present state is such that we do not--then we likewise do not have much reason to deny that a design argument based on the existence of intelligence will work. Instead, we should just suspend judgment on this. Hence, for all we know, at least one ID argument will be successful.

Argument 2: There is a myriad of biological facts, such as the precise pattern on a copperhead, each of which satisfies (1). This is no evidence at all against evolutionary biology, but just a fact about probabilistic processes like those of mutation, recombination, selection, etc. A priori improbable things happen all the time. Bob wins a lottery. A dart lands within 0.000000000001 mm of point x (this is extremely improbable whatever the point x is, but of course the dart has to land somewhere, so whenever a dart is thrown, it lands within 0.000000000001 mm of some point, and hence always something improbable happens). No surprises there. Let S be the set of very unlikely biological facts like that.
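The point that some improbable outcome always occurs is easy to illustrate with the dice from the earlier post: every particular sequence of eleven throws has probability (1/6)^11, roughly 2.8×10^-9, and yet each trial produces one such sequence.

```python
import random

random.seed(0)  # fixed seed just so the sketch is reproducible

# Probability of any *particular* eleven-throw sequence:
p_each = (1 / 6) ** 11
throws = [random.randint(1, 6) for _ in range(11)]

print(throws)
assert p_each < 1e-8                     # whatever came up was astronomically improbable...
assert all(1 <= t <= 6 for t in throws)  # ...and yet some sequence always comes up
```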

Now to have reason to hold that no ID argument for some relevant design hypothesis will be successful, one would have to have reason to judge that none of the facts in S satisfies (2). First note that this judgment would go beyond the competence of biological science. Biological science does not estimate the probability that God, if he existed, would design a snake with such-and-such a pattern on its back. So biological science by itself would not be sufficient to establish the unavailability of an ID argument.

Now in the case of the pattern on the back of the snake, we have little reason at present to think that God would design that pattern rather than any of the many others that the snake could have had. Now in the unlikely case that we might discover that the pattern encodes some text in some natural way, the relative probability of that pattern over the others might rise. But we don't expect that to happen.

Nonetheless, I do not think we have much reason to believe that none of the facts in S satisfies (2), and we do not even have much reason to believe that none of the facts in S will be discovered to satisfy (2). So, once again, we have little reason, if any, to believe that a working ID argument won't be discovered in the future.

Conclusions: Being sure that ID will be unsuccessful is unjustified. But there is a proviso that I have to add, which was implicit in my assumption that (1) and (2) are all one needs for ID's success: unless there is good independent reason to deny the conclusion of the ID arguments (this was the assumption that the probability for H before the ID argument isn't too low).