Friday, June 21, 2024

Conjectures about a system in the context of Integrated Information Theory

I should really be done with Integrated Information Theory (IIT), in Aaronson’s simplified formulation, but I noticed a rather interesting difficulty.

In my previous post on the subject, I noted the following: take a double-grid system consisting of two grids stacked on top of one another, with the bottom grid made of inputs and the upper grid of outputs, where each upper value is the logical OR of the (up to) five neighboring input values. Such a system will be conscious according to IIT if all the values are zero and the grid is large enough.
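For concreteness, here is a minimal sketch of the update rule in Python. This is my own illustration, not anything from IIT or from Aaronson: I assume the “(up to) five neighboring input values” are the input directly below an output together with that input’s four orthogonal neighbors, clipped at the grid’s edges, and the function name is mine.

```python
from typing import List

def or_grid_outputs(inputs: List[List[bool]]) -> List[List[bool]]:
    """Top-grid values: each output is the OR of the input below it
    and that input's (up to) four orthogonal neighbors."""
    m, n = len(inputs), len(inputs[0])
    out = [[False] * n for _ in range(m)]
    for x in range(m):
        for y in range(n):
            vals = [inputs[x][y]]
            if x > 0:     vals.append(inputs[x - 1][y])
            if x < m - 1: vals.append(inputs[x + 1][y])
            if y > 0:     vals.append(inputs[x][y - 1])
            if y < n - 1: vals.append(inputs[x][y + 1])
            out[x][y] = any(vals)
    return out

# With all inputs zero, every output is zero: the all-off state at issue.
print(or_grid_outputs([[False] * 4 for _ in range(4)]))
```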

In this post, I am going to give some conjectures about the mathematics, without even a proof sketch. But I think the conjectures are pretty plausible, and if they are true, they show something fishy about IIT’s measure of integrated information.

Consider our dual-grid system, except now the grids are, with some exceptions, rectangular, with a length of M along the x-axis and a width of N along the y-axis (and the stacking along the z-axis). The exceptions to the rectangularity are these:

  • at x-coordinates M/4 − 1 and M/4, the width is N/8 instead of N

  • at x-coordinates M/2 − 1 and M/2, the width is N/10 instead of N.

In other words, at two x-coordinate areas, the grids have bottlenecks of slightly different sizes. We suppose M is significantly larger than N, and N is very, very large (say, 10^15).
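To have something to experiment with, the width profile can be written as a small Python helper. The function name and the toy sizes are mine; in the regime of the post, N is around 10^15 and M is larger still.

```python
def width(x: int, M: int, N: int) -> int:
    """Width of both grids at x-coordinate x: N everywhere except
    the two two-column bottlenecks."""
    if x in (M // 4 - 1, M // 4):
        return N // 8   # first bottleneck
    if x in (M // 2 - 1, M // 2):
        return N // 10  # second, slightly narrower bottleneck
    return N

M, N = 4000, 400  # toy sizes; the post's regime is N ~ 10**15, M larger still
print(width(0, M, N), width(M // 4, M, N), width(M // 2, M, N))  # 400 50 40
```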

Let A_k be the components on the grids with x-coordinate less than k, and let B_k be the remaining components. I suspect (with a lot of confidence) that the optimal choice of a partition {A, B} that minimizes the “modified Φ value” Φ(A,B)/min(|A|,|B|) will be pretty close to {A_k, B_k} for some k in one of the bottlenecks. Thus to estimate Φ, we need only look at the Φ and modified Φ values for {A_{M/4}, B_{M/4}} and {A_{M/2}, B_{M/2}}. Note that if k is M/4 or M/2, then min(|A_k|,|B_k|) is approximately 2MN/4 and 2MN/2, respectively, since there are two grids of components.
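Spelled out (my own worked version of the size estimate, ignoring the negligible component deficit at the bottlenecks; each column of width w contributes 2w components because of the two stacked grids):

```latex
\[
  |A_{M/4}| \approx 2 \cdot \tfrac{M}{4} \cdot N, \qquad
  |B_{M/4}| \approx 2 \cdot \tfrac{3M}{4} \cdot N, \qquad
  \min(|A_{M/4}|, |B_{M/4}|) \approx \tfrac{2MN}{4};
\]
\[
  |A_{M/2}| \approx |B_{M/2}| \approx 2 \cdot \tfrac{M}{2} \cdot N, \qquad
  \min(|A_{M/2}|, |B_{M/2}|) \approx \tfrac{2MN}{2} = MN.
\]
```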

I suspect (again with a lot of confidence) that Φ(A_k,B_k) will be approximately proportional to the width of the grid around coordinate k. Thus, Φ(A_{M/4},B_{M/4})/min(|A_{M/4}|,|B_{M/4}|) will be approximately proportional to (N/8)/(2NM/4) = 0.25/M, while Φ(A_{M/2},B_{M/2})/min(|A_{M/2}|,|B_{M/2}|) will be approximately proportional to (N/10)/(2NM/2) = 0.1/M.
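Under these two conjectures, one can scan every vertical cut numerically and see where the modified Φ value is minimized. Below is a sketch reusing the width() helper from above; the unknown proportionality constant in Φ(A_k,B_k) is set to 1, so only the ratios matter, and nothing here is an actual Φ computation, just my own scaffolding for the approximation.

```python
def best_cut(M: int, N: int):
    """Minimize the modified-Phi proxy width(k) / min(|A_k|, |B_k|)
    over all vertical cuts k, with exact component counts."""
    total = 2 * sum(width(x, M, N) for x in range(M))  # two stacked grids
    best_k, best_val = None, float("inf")
    left = 0  # number of components with x-coordinate < k
    for k in range(1, M):
        left += 2 * width(k - 1, M, N)
        val = width(k, M, N) / min(left, total - left)
        if val < best_val:
            best_k, best_val = k, val
    return best_k, best_val

k, val = best_cut(4000, 400)
print(k, round(val * 4000, 3))  # 2000 0.1 -- the M/2 bottleneck wins
```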

Moreover, granting the conjecture that the optimal partition will be close to {A_k, B_k} for some k in one of the bottlenecks, our best choice will be close to {A_{M/2}, B_{M/2}} (since 0.1/M < 0.25/M), and it will yield a Φ value approximately proportional to N/10.

Now modify the system by taking each output component at an x-coordinate less than M/4 and placing four more output components beside the original one, each with the very same value as the original output component.

I strongly suspect that the optimal partition will again be obtained by cutting the system at one of the two bottlenecks. The Φ values at the M/4 and M/2 bottlenecks will be unchanged, since mere duplication of outputs does not affect information content. But the modified Φ values (obtained by dividing Φ(A,B) by min(|A|,|B|)) will now be (N/8)/(6NM/4) = 0.083/M and (N/10)/(2NM/2) = 0.1/M, respectively: the left part now consists of about NM/4 inputs and 5NM/4 outputs, i.e., about 6NM/4 components in all. Thus the optimal choice will be to partition the system at the M/4 bottleneck, and this will yield a Φ value approximately proportional to N/8, which is bigger than N/10.
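As a sanity check on this step, the same scan can be rerun with the left quarter’s outputs quintupled: the Φ proxy at a cut stays width(k), since duplicates carry no new information, but each column with x < M/4 now contributes 6·width(x) components (one input layer plus five identical output layers) instead of 2·width(x). Again a sketch on toy sizes, reusing width() from above.

```python
def best_cut_duplicated(M: int, N: int):
    """Same scan as best_cut, but outputs at x < M/4 are quintupled."""
    def col(x: int) -> int:  # components contributed by column x
        return (6 if x < M // 4 else 2) * width(x, M, N)
    total = sum(col(x) for x in range(M))
    best_k, best_val = None, float("inf")
    left = 0
    for k in range(1, M):
        left += col(k - 1)
        val = width(k, M, N) / min(left, total - left)
        if val < best_val:
            best_k, best_val = k, val
    return best_k, best_val

k, val = best_cut_duplicated(4000, 400)
print(k, round(val * 4000, 3))  # 1000 0.083 -- the cut moves to M/4
```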

For concreteness, let’s now imagine that each output is an LED. We now see that if we replace some of the LEDs (namely, the ones in the left-hand quarter of the system) with five LEDs each, we increase the amount of integrated information from something proportional to N/10 to something proportional to N/8. This has got to be wrong. Simply duplicating LEDs adds nothing to the information content. And we certainly don’t make a system more conscious just by lighting up a portion of it with additional LEDs.

Notice, too, that IIT has a special proviso: if one system is a part of another with a higher degree of consciousness, the part has no consciousness. So now imagine that a Φ value proportional to N/10 is sufficiently large for significant consciousness, so our original system, without the extra output LEDs, is conscious. Now, beside each LED in the left quarter, add the quadruple of new LEDs that simply duplicate the original LED’s value (they need not even be electrically connected to the original system: they might sense whether the original LED is on, and light up if so). According to IIT, then, the new system is more conscious than the old, and the old system has had its consciousness destroyed, simply by adding enough duplicates of its LEDs. This seems wrong.

Of course, my conjectures and back-of-the-envelope calculations could be false.

2 comments:

Gary Huber said...

I guess the big question in my mind is: could you train it to drive a car?

Alexander R Pruss said...

Certainly not if all the inputs are left at zero.

But even if we let the inputs be non-zero, I don't see how we can get any useful computation out of OR gates alone with no feedback.