Tuesday, February 2, 2016

A "Freudian" argument against some theories of consciousness

  1. The kinds of computational complexity found in our conscious thought are also found in our unconscious thought.
  2. So, consciousness does not supervene on the kinds of computational complexity found in an entity.
Of course, (1) is an empirical claim, and it might turn out to be false, though I think it is quite plausible. Should it turn out to be false, we have the backup argument:
  1. The kinds of computational complexity found in our conscious thought possibly are all found in unconscious thought.
  2. So, consciousness does not supervene on the kinds of computational complexity found in an entity.

2 comments:

  1. This comment has been removed by the author.

  2. I think this is a perfectly valid point. Have you given any thought to approaches like those of Alva Noe (cf. "Out of Our Heads") and Kevin O'Regan (cf. "Why Red Doesn't Sound Like a Bell")? They owe a lot to Heidegger, Wittgenstein, and especially Merleau-Ponty (as well as people like James Gibson and Hubert Dreyfus, surely). Their view is, essentially, that conscious perception occurs through skilled engagement with the world. We have certain skills which allow us to come to grips with our environment, and the innate knowledge of how to apply those skills to get a better and better grasp on our surroundings. So, for example, if I'm holding a sponge in my hand, I instinctively squeeze it a little to get a better understanding of how rigid it is; I run my fingers over it to figure out how rough or smooth it is, etc. This skillful engagement (involving lots of micro-movements of which I -- having mastered this skill -- am as unaware as the virtuoso pianist is unaware of individual finger contractions) is what causes the sponge to show up for me, while leaving a certain ineffability and irreducibility about the act of perceiving it.

    In any case, I completely agree that a computational theory makes no sense. And it leads to misguided research programs. For example, the AI people still seem to imagine that increasing the speed of computation will get them closer to consciousness, despite the fact that our brains (by comparison) move remarkably slowly! Worse yet, neuroscientists are studying a sort of neo-Phrenology (Raymond Tallis's term, not mine) in thinking that, if the brain "lights up" (already a misleading idea) in a particular SECTION, then it is vision; but, in another section, it is hearing. This is already ludicrous prima facie, but it's also been shown to be false in experiments where ferrets were "re-wired", so to speak, so that their ears stimulated their "visual cortex" and their eyes stimulated their "auditory centers". These ferrets still saw and heard just fine. It isn't location, and it isn't computation. I don't think it's anything inside of us at all. I think the brain is just enabling us to engage in the right ways with the world, and the consciousness/awareness/perception is in the engagement.
