I’ve been thinking a bit about Integrated Information Theory (IIT) as a physicalist-friendly alternative to functionalism as an account of consciousness.
The basic idea of IIT is that we measure the amount of consciousness in a system by partitioning the system into two subsystems and calculating how well one can predict the next state of each of the two subsystems without knowing the state of the other. If there is a partition that lets you make these predictions well, then the system is considered reducible, with low integrated information, and hence low consciousness. So you look for the best-case partition, the one where the predictions are best as measured by Shannon entropy (with a certain normalization), and say that the amount Φ of “integrated information” in the system varies inversely with the quality of these best predictions: the better the best predictions, the lower Φ. The amount of consciousness in the system then corresponds to the amount of integrated information.
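Here is a rough Python sketch of this simplified, deterministic picture (my own reconstruction for illustration, not the official IIT 4.0 definition; it uses a uniform prior over system states and omits the normalization just mentioned): for each bipartition of the components, add up how unpredictable each side’s next state is given only that side’s current state, and take the minimum over bipartitions.

```python
from itertools import product, combinations
from collections import defaultdict
from math import log2

def conditional_entropy(pairs):
    """H(Y | X) in bits, for a uniform distribution over the given (x, y) pairs."""
    n = len(pairs)
    joint = defaultdict(int)
    marg_x = defaultdict(int)
    for x, y in pairs:
        joint[(x, y)] += 1
        marg_x[x] += 1
    return sum((c / n) * log2(marg_x[x] / c) for (x, _), c in joint.items())

def phi(state_spaces, transition):
    """Minimum over bipartitions of the summed prediction entropies.

    state_spaces: one iterable of possible states per component.
    transition: deterministic map from a full state tuple to the next state tuple.
    """
    spaces = [list(s) for s in state_spaces]
    k = len(spaces)
    states = list(product(*spaces))
    best = float('inf')
    for r in range(1, k // 2 + 1):
        for A in combinations(range(k), r):
            B = tuple(i for i in range(k) if i not in A)
            total = 0.0
            for part in (A, B):
                # How unpredictable is this part's next state given only its own current state?
                pairs = [(tuple(s[i] for i in part),
                          tuple(transition(s)[i] for i in part))
                         for s in states]
                total += conditional_entropy(pairs)
            best = min(best, total)
    return best
```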
Aaronson gives a simple mathematical framework and what sure look like counterexamples: systems that intuitively don’t appear to be mind-like and yet have a high Φ value. Surprisingly, though, Tononi (the main person behind IIT) has responded by embracing these counterexamples as cases of consciousness.
In this post, I want to offer a counterexample with a rather different structure. My counterexample has an advantage and a disadvantage with respect to Aaronson’s. The advantage is that it is a lot harder to embrace my counterexample as an example of consciousness. The disadvantage is that my example can be avoided by an easy tweak to the definition of Φ.
It is even possible that such a tweak is already incorporated in the official IIT 4.0. Right now I am only working with Aaronson’s perhaps simplified framework (for one thing, his framework assumes a deterministic transition function), because the official one is difficult for me to follow. It is also possible that I am just missing something obvious and making some mistake. Maybe a reader will point that out to me.
The idea of my example is very simple. Imagine a system consisting of two components, each of which has N possible states. At each time step, the two components swap states. There is then only one decomposition of the system into two subsystems, which makes things much simpler. And note that each subsystem’s state at time n has no predictive power for its own state at time n + 1, since at n + 1 it simply takes on the other subsystem’s state from time n. The Shannon entropies corresponding to the best predictions are each going to be log₂ N, and so Φ for the system is 2 log₂ N. By making N arbitrarily large, we can make Φ arbitrarily large. In fact, if we have an analog system with infinitely many states, then Φ is infinite.
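Checking the swap system against the sketch above (again, my own illustrative code) for a small N:

```python
# Two components with N states each that simply exchange their states at every step.
N = 8

def swap(s):
    return (s[1], s[0])

print(phi([range(N), range(N)], swap))  # 6.0
print(2 * log2(N))                      # 6.0, i.e. 2 log₂ N
```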
Advantage over Aaronson’s counterexamples: There is nothing in the least consciousness-like in this setup. We are just endlessly swapping states between two components. That’s not consciousness. Imagine the components are hard drives and we just endlessly swap the data between them. To make it even more vivid, suppose the two hard drives have the same data, so nothing actually changes in the swaps!
Disadvantage: IIT can escape the problem by modifying the measure Φ of integrated information in the special case where the components are non-binary. Aaronson’s counterexamples use binary components, so they are unaffected. Here are three such tweaks. (i) Divide by the logarithm of the maximum number of states in a component (this seems ad hoc). (ii) Restrict the theory to systems with binary components, and hence require that any component with more than two possible states be reinterpreted as a collection of binary components encoding the non-binary state (but which binarization should one choose?). (iii) Define Φ of a non-binary system as the minimum of the Φ values over all possible binarizations. Either (i) or (iii) kills my counterexample.
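For what it is worth, tweak (i) is a one-liner on top of the earlier sketch (again, my own illustrative code, not anything drawn from the IIT literature):

```python
def phi_normalized(state_spaces, transition):
    """Tweak (i): divide Φ by log₂ of the largest component alphabet."""
    max_states = max(len(list(space)) for space in state_spaces)
    return phi(state_spaces, transition) / log2(max_states)

print(phi_normalized([range(N), range(N)], swap))  # 2.0, for every N
```

For binary components the divisor is log₂ 2 = 1, so Aaronson’s examples are untouched, while the swap system’s normalized Φ stays at 2 no matter how large N gets.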