I will develop Dembski's specified complexity in a particular direction, which may or may not be exactly his, but which I think can be defended to a point.
Specified Complexity (SC) comes from the fact that there are three somewhat natural probability measures on physical arrangements. For definiteness, think of physical arrangements as black-and-white pixel patterns on a screen; then there are 2^n arrangements, where n is the number of pixels.
The three measures are as follows.
1. There is what one might call "a rearrangement (or Humean) measure", which assigns every arrangement equal probability. In the pixel case, that is 2^(-n).
2. There is "a nomic measure". Basically, the probability of an arrangement is the probability that, given the laws (and perhaps the initial conditions: there will be two ways of doing this, one allowing the initial conditions to vary and one holding them fixed), such an arrangement would arise. (A toy sketch of this measure appears below.)
3. There is what one might call "a description measure". This is relative to a language L that can describe pixel arrangements. One way to generate a description measure is to begin by generating random finite-length strings of symbols from L supplemented with an "end of sentence" marker which, when generated, ends a string. Thus, the probability of a string of length k is m^(-k), where m is the number of symbols in L (including the end-of-sentence marker). Take this probability measure and condition on (a) the string being grammatical and (b) describing a unique arrangement. The resulting conditional probability measure on the sentences of L that describe a unique arrangement then gives rise to a probability measure on the arrangements themselves: the description probability of an arrangement A is the (conditionalized as before) probability that a sentence of L describes A. (A toy sketch follows the list.)
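Here is a minimal Python sketch of this construction for a toy language of my own devising (not Dembski's): three pixels; 'B' and 'W' list pixel colors in order; 'A' is a one-symbol abbreviation for the all-black arrangement; '#' is the end-of-sentence marker. Everything about the language, including the abbreviation, is an illustrative assumption.

```python
from itertools import product

N = 3                       # number of pixels in the toy screen
SYMBOLS = ['B', 'W', 'A', '#']
m = len(SYMBOLS)            # alphabet size, including the end marker

def denotes(s):
    """Return the arrangement a grammatical string describes, else None."""
    if s == 'A#':
        return 'B' * N      # the one-symbol abbreviation: all pixels black
    body = s[:-1]
    if s.endswith('#') and len(body) == N and set(body) <= {'B', 'W'}:
        return body         # explicit pixel-by-pixel listing, e.g. 'BWB#'
    return None

# A random string that ends at its first '#' after k symbols has
# probability m**-k. In this toy language the only grammatical
# descriptions have length 2 ('A#') or N + 1, so the cap is exhaustive.
raw = {}                    # arrangement -> unnormalized probability mass
for k in range(1, N + 2):
    for body in product('BWA', repeat=k - 1):
        arr = denotes(''.join(body) + '#')
        if arr is not None:
            raw[arr] = raw.get(arr, 0.0) + m ** -k

total = sum(raw.values())   # condition on (a) grammatical, (b) unique
for arr, mass in sorted(raw.items(), key=lambda t: -t[1]):
    print(arr, mass / total)
```

Running this gives the all-black arrangement 17/24 (about 0.71) of the description measure and each of the other seven arrangements 1/24 (about 0.04), whereas the rearrangement measure gives all eight arrangements 1/8 each.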
So, basically we have the less anthropocentric nomic and rearrangement measures, and the more anthropocentric description measure. The rearrangement measure has no biases. The nomic measure has a bias in favor of what the laws can produce. The description measure has a bias in favor of what can be more briefly described.
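The nomic measure is hard to make concrete without actual physics, but a toy version shows the bias just mentioned. Suppose, purely as an assumption for illustration, that the laws drive each pixel independently to black with probability 0.9 from the fixed initial conditions:

```python
P_BLACK = 0.9   # assumed probability, under the toy laws, that a pixel ends black

def nomic_measure(arrangement):
    """Probability of a 'B'/'W' arrangement under the toy dynamics."""
    black = arrangement.count('B')
    white = arrangement.count('W')
    return P_BLACK ** black * (1 - P_BLACK) ** white

print(nomic_measure('BBB'))   # ~0.729: favored by the assumed laws
print(nomic_measure('BWB'))   # ~0.081: same rearrangement measure, lower nomic
```

Both arrangements get rearrangement measure 1/8, but the toy nomic measure concentrates on what the assumed laws can easily produce.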
We can now define SC of two sorts. An arrangement A has specified rearrangement (respectively, nomic) complexity, relative to a language L, provided that A's rearrangement (respectively, nomic) measure is much smaller than its L-description measure. (There is some technical stuff to be done to extend this to less specific arrangements--the above works only for fully determinate arrangements.)
For instance, consider the arrangement where all the pixels are black. In a language L based on First Order Logic, there are some very short descriptions of this: "(x)(Bx)". So, the description measure of the all-black arrangement will be much bigger than the description measure of something messy that needs a description like "Bx1&Bx2&Wx3&...&Bxn". On the other hand, the rearrangement measure of the all-black arrangement is the same as that of any other arrangement. In this case, then, the L-description measure of the all-black arrangement will be much greater than its rearrangement measure, and so we will have specified rearrangement complexity, relative to L. Whether we also have specified nomic complexity depends on the physics involved in producing the arrangement.
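A back-of-the-envelope version of this comparison, working with base-2 logarithms of the measures to avoid underflow. The specific numbers (10,000 pixels, a 64-symbol language, a 7-symbol short description, about 4 symbols per conjunct in the messy one) are illustrative assumptions, and the normalization from the conditioning step is ignored since it rescales every arrangement alike:

```python
import math

n, m = 10_000, 64                          # pixels; symbols in L (assumed)

log2_rearrangement = -n                    # every arrangement: 2**-n
log2_desc_all_black = -7 * math.log2(m)    # "(x)(Bx)" ~ 7 symbols: m**-7
log2_desc_messy = -4 * n * math.log2(m)    # ~4 symbols per conjunct

print(f"rearrangement measure (any arrangement): 2^{log2_rearrangement}")
print(f"description measure, all-black:          2^{log2_desc_all_black:.0f}")
print(f"description measure, messy:              2^{log2_desc_messy:.0f}")
```

The all-black arrangement's description measure (about 2^-42) dwarfs its rearrangement measure (2^-10000), so it has specified rearrangement complexity; the messy arrangement's description measure (about 2^-240000) falls far below its rearrangement measure, so it does not.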
All of the above seems pretty rigorous, or capable of being made so.
Now, given the above, we have the philosophical question: Does SC give one reason to suppose agency? Here is where things get more hairy and less rigorous.
An initial problem: the concept of SC is language-relative. For any arrangement A, there is a language L1 relative to which A lacks complexity and a language L2 relative to which A has complexity. So SC had better be defined in terms of a privileged kind of language. I think this is a serious problem for the whole approach, but I do not know that it is insuperable. For instance, easily inter-translatable languages are probably going to give rise to similar orders of magnitude in the description measures. We might require that the language L be the language of a completed and well-developed physics. Or we might stipulate L to be some extension of FOL with predicates corresponding to the perfectly natural properties. There are tough technical problems here, and I wish Dembski would do more to address them. Call any language that works well here "canonical".
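The language-relativity can be seen concretely in the toy setting from the description-measure sketch above: replace the abbreviation 'A' with a symbol 'Q' that denotes one particular messy arrangement. This gerrymandered language is, of course, my own construction:

```python
N = 3   # pixels, as in the earlier sketch

def denotes_gerrymandered(s):
    """Like denotes() above, but the one-symbol abbreviation now picks
    out a messy arrangement instead of the all-black one."""
    if s == 'Q#':
        return 'WBW'        # gerrymandered primitive for a messy arrangement
    body = s[:-1]
    if s.endswith('#') and len(body) == N and set(body) <= {'B', 'W'}:
        return body
    return None

# Re-running the earlier enumeration with 'Q' in place of 'A' gives 'WBW'
# the dominant 17/24 share of the description measure and the all-black
# arrangement only 1/24: the complexity verdict flips with the language.
```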
Once we have this taken care of, if it can be done, we can ask: Is there any reason to think that SC is a mark of design?
Here, I think Dembski's intuition is something like this: Suppose I know nothing of an agent's ends. What can I say about the agent's intentions? Well, an agent's space of thoughts is going to be approximately similar to a canonical language (maybe in some cases it will constitute a canonical language). Without any information on the agent's ends, it is reasonable to estimate the probabilities of an agent having a particular intention in terms of the description measure relative to a canonical language.
But if this is right, then the approach has some hope of working, doesn't it? For suppose you have specified nomic complexity of an arrangement A relative to a canonical language L. Then P(A|no agency) will be much smaller than A's description measure relative to L, which is an approximation to P(A|agency) when we have no information about the sort of agency going on. Therefore, A incrementally confirms the agency hypothesis. The rest is a question of priors (which Dembski skirts by using absolute probability bounds).
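A minimal numerical sketch of this confirmation claim, with made-up magnitudes: take A's nomic measure as P(A|no agency), A's canonical-language description measure as the stand-in for P(A|agency), and watch what the likelihood ratio does to even a tiny prior:

```python
import math

log2_p_A_given_agency = -50       # assumed description measure of A
log2_p_A_given_no_agency = -500   # assumed nomic measure of A
prior_agency = 1e-6               # the priors question Dembski skirts

# posterior odds = prior odds * likelihood ratio (Bayes factor)
log2_bayes_factor = log2_p_A_given_agency - log2_p_A_given_no_agency
log2_prior_odds = math.log2(prior_agency / (1 - prior_agency))
log2_posterior_odds = log2_prior_odds + log2_bayes_factor

print(f"Bayes factor for agency: 2^{log2_bayes_factor}")       # 2^450
print(f"posterior odds:          2^{log2_posterior_odds:.0f}")  # ~2^430
```

With these numbers, the likelihood ratio of 2^450 swamps a one-in-a-million prior; the debate then shifts entirely to whether those magnitudes, and the prior, are defensible.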
I think the serious problems for this approach are:
- The problem of canonical languages.
- The problem that in the end we want this to apply even to supernatural designers who probably do not think linguistically. Why think that briefer descriptions are more likely to match their intentions?
- We do have some information on the ends of agents in general--agents pursue what they take to be valuable. And the description measure does not take value into account. Still, insofar as there is value in simplicity, and the description measure favors briefer descriptions, the description measure captures something of value.
It is true that if you were to eliminate chance processes and all known and unknown unintentional processes, then what's left are intentional processes, i.e., design. The problem is that in this kind of eliminative argument, you're never going to be able to eliminate all unintentional processes. It's hard to test unknown explanations. What Dembski does is either calculate the odds of equiprobable chance, determine they are low, then skip step 2 and immediately jump to design, or calculate the odds of a (poor) model of some proposed natural process, determine it is unlikely, then again jump to design. Behe's irreducible complexity often stands in for "step 2." There's a lot of smoke and mirrors in doing this that takes a little expertise to disentangle, but at the end of the day it ends up being an argument from ignorance common in early creationism, just dressed up in a little fancier math.
That is illicit.
see:
http://www.talkdesign.org/faqs/theftovertoil/theftovertoil.html
http://philosophy.wisc.edu/sober/ID&PRword.PDF