Roughly speaking—but precisely enough for our purposes—Dembski’s criterion for the specified complexity of a system is that a ratio of two probabilities, *p*_{Φ}/*p*_{L}, is very low. Here, *p*_{L} is the probability that by generating symbols of a language *L* at random we will come up with a description of the system, while *p*_{Φ} is the physical probability of the system arising. For instance, for the system of 100 coins all lying heads up, *p*_{Φ} = (1/2)^{100} while *p*_{L} is something like (1/27)^{9} (think of the nine-character description “all heads” generated by choosing letters and spaces at random), so that *p*_{Φ}/*p*_{L} is something like 6 × 10^{−18}. Thus, the coin system has specified complexity, and we have significant reason to look for a design-based explanation.
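As a quick sanity check on the arithmetic, here is the coin example in a few lines of Python. The nine-character description “all heads” and the 27-symbol alphabet (26 letters plus space) are the assumptions from above:

```python
# Specified-complexity ratio for the all-heads coin example.
# Assumptions from the text: 100 fair coins, and the description
# "all heads" (9 characters) drawn from a 27-symbol alphabet.

p_phys = (1 / 2) ** 100                 # physical probability of 100 heads
p_lang = (1 / 27) ** len("all heads")   # probability of randomly typing the description

ratio = p_phys / p_lang
print(f"p_phys = {p_phys:.2e}")   # about 7.89e-31
print(f"p_lang = {p_lang:.2e}")   # about 1.31e-13
print(f"ratio  = {ratio:.2e}")    # about 6.02e-18, matching the text
```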

I’ve always been worried about the language-dependence of the criterion. Consider a binary sequence that intuitively lacks specified complexity, say this sequence generated by random.org:

- 0111101001100111010101011001100111001110000110011110101101101101001011011000011101100111100111111111

But it is possible to have a language *L* where the word “xyz” means precisely the above binary sequence, and then relative to that language *p*_{L} will be much, much bigger than 2^{−100} = *p*_{Φ}.
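To make the language-dependence concrete, here is a toy calculation; the three-letter word “xyz” over the same 27-symbol alphabet is the illustrative assumption:

```python
# Toy illustration of language-dependence: if some language L has a
# three-letter word "xyz" that names the random-looking 100-bit sequence,
# the descriptive probability dwarfs the physical one.

p_phys = (1 / 2) ** 100          # physical probability of the specific sequence
p_lang = (1 / 27) ** len("xyz")  # probability of randomly typing "xyz"

# The ratio p_phys / p_lang is then astronomically small, so the criterion
# would (contrary to intuition) flag the sequence as specifically complex.
print(f"p_phys / p_lang = {p_phys / p_lang:.2e}")   # about 1.55e-26
```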

However, I now wonder how much this actually matters. Suppose that *L* is the language that we actually speak. Then *p*_{L} measures how “interesting” the system is relative to the interests of the one group of intelligent agents we know well—namely, ourselves. And interest relative to the one group of intelligent agents we know well is evidence of interest relative to intelligent agents in general. And when a system is interesting relative to intelligent agents but not probable physically, that seems to be evidence of design by intelligent agents.

Admittedly, the move from ourselves to intelligent agents in general is problematic. But we can perhaps just sacrifice a dozen orders of magnitude to the move—maybe the fact that something has an interest level *p*_{L} = 10^{−10} to us is good evidence that it has an interest level of at least 10^{−22} to intelligent agents in general. That means we need the *p*_{Φ}/*p*_{L} ratio to be smaller before we can infer design, but the criterion will still be useful: it will still point to design in the all-heads arrangement of coins, say.
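The dozen-orders-of-magnitude sacrifice can be sketched as a simple discount; the factor of 10^{12} is just the figure suggested above, not anything principled:

```python
# Sketch of the "sacrifice a dozen orders of magnitude" move: an interest
# level measured relative to us, discounted by 10**12, serves as a lower
# bound on the interest level relative to intelligent agents in general.
# (The size of the discount is the text's illustrative assumption.)

DISCOUNT = 10 ** 12

def general_interest_lower_bound(p_lang_human):
    """Lower bound on p_L relative to intelligent agents generally."""
    return p_lang_human / DISCOUNT

# An interest level of 10^-10 to us yields a bound of roughly 10^-22.
print(general_interest_lower_bound(1e-10))
```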

Of course, all this makes the detection of design more problematic and messy. But there may still be something to it.