Friday, June 22, 2018

Language and specified complexity

Roughly speaking—but precisely enough for our purposes—Dembski’s criterion for the specified complexity of a system is that a ratio of two probabilities, pΦ/pL, is very low. Here, pL is the probability that by generating bits of a language L at random we will come up with a description of the system, while pΦ is the physical probability of the system arising. For instance, when you have the system of 100 coins all lying heads up, pΦ = (1/2)^100 while pL is something like (1/27)^9 (think of the description “all heads” generated by generating letters and spaces at random), so that pΦ/pL is something like 6 × 10^−18. Thus, the coin system has specified complexity, and we have significant reason to look for a design-based explanation.
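
To make the arithmetic concrete, here is a minimal sketch in Python. The 27-symbol alphabet (26 letters plus a space) and the 9-character description “all heads” are just the toy model from the paragraph above, not Dembski’s actual formalism.

    # Toy model from the coin example: 100 fair coins, and descriptions
    # generated by typing letters and spaces uniformly at random.
    p_phi = (1 / 2) ** 100   # physical probability of all 100 coins landing heads
    p_L = (1 / 27) ** 9      # probability of randomly typing the 9 characters "all heads"

    ratio = p_phi / p_L
    print(f"p_phi = {p_phi:.2e}")   # ~7.89e-31
    print(f"p_L   = {p_L:.2e}")     # ~1.31e-13
    print(f"ratio = {ratio:.2e}")   # ~6.02e-18, i.e. roughly 6 × 10^-18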

I’ve always been worried about the language-dependence of the criterion. Consider a binary sequence that intuitively lacks specified complexity, say this sequence generated by random.org:

  • 0111101001100111010101011001100111001110000110011110101101101101001011011000011101100111100111111111

But it is possible to have a language L where the word “xyz” means precisely the above binary sequence, and then relative to that language pL will be much, much bigger than 2^−100 = pΦ.
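
For contrast, here is the same toy calculation for such a contrived language. The (1/27)^3 figure for the probability of typing the three-character word “xyz” is my assumption, carrying over the letters-and-spaces model from the coin example.

    # Toy illustration of the language-dependence worry: a contrived language
    # in which the 3-character word "xyz" names the exact 100-bit sequence above.
    # The (1/27)**3 description probability is an assumed carry-over of the
    # letters-and-spaces model, not anything in Dembski.
    p_phi = (1 / 2) ** 100   # physical probability of that particular 100-bit string
    p_L = (1 / 27) ** 3      # probability of randomly typing "xyz"
    print(f"{p_phi / p_L:.2e}")   # ~1.55e-26: "specified complexity" for a random-looking string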

However, I now wonder how much this actually matters. Suppose that L is the language that we actually speak. Then pL measures how “interesting” the system is relative to the interests of the one group of intelligent agents we know well—namely, ourselves. And interest relative to the one group of intelligent agents we know well is evidence of interest relative to intelligent agents in general. And when a system is interesting relative to intelligent agents but not probable physically, that seems to be evidence of design by intelligent agents.

Admittedly, the move from ourselves to intelligent agents in general is problematic. But we can perhaps just sacrifice a dozen orders of magnitude to the move—maybe the fact that something has an interest level pL = 10^−10 to us is good evidence that it has an interest level at least 10^−22 to intelligent agents in general. That means we need the pΦ/pL ratio to be smaller to infer design, but the criterion will still be useful: it will still point to design in the all-heads arrangement of coins, say.
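
In the sketch above, the sacrifice amounts to tightening the cutoff on the measured ratio by twelve orders of magnitude. The baseline cutoff of 10^−5 below is a purely hypothetical value assumed for illustration; the post itself fixes no particular threshold for “very low.”

    # Illustrative only: tighten the cutoff by a dozen orders of magnitude to
    # cover the move from "interesting to us" to "interesting to intelligent
    # agents in general". The baseline cutoff is an assumed placeholder.
    BASELINE_CUTOFF = 1e-5            # hypothetical threshold for "very low"
    DISCOUNT = 1e-12                  # the sacrificed dozen orders of magnitude
    adjusted_cutoff = BASELINE_CUTOFF * DISCOUNT   # 1e-17

    ratio = (1 / 2) ** 100 / (1 / 27) ** 9   # ~6e-18 for the all-heads coins
    print(ratio < adjusted_cutoff)           # True: the coins still pass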

Of course, all this makes the detection of design more problematic and messy. But there may still be something to it.
