Suppose a naturalistic computational theory of mind is true: to have mental states of a given kind is to engage in a particular kind of computation. Now imagine a conscious computer thinking various thoughts and built around standard logic gates. Modify the computer so that each of its logic gates has an adjustment knob. The knob can be set to any number between 0 and 1, such that if the knob is set to p, then the chance (say, over a clock cycle) that the gate produces the right output is p. Thus, with the knob at 1, the gate always produces the right output; with the knob at 0, it always produces the opposite output; with the knob at 0.5, it functions like a fair coin. Make all the randomness independent.
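A minimal sketch, in Python, of the knob-adjusted gate described above (the function name and structure are my own illustration, not part of the original setup):

```python
import random

def noisy_and(a, b, p):
    """AND gate with a reliability knob: with probability p it produces the
    correct output, otherwise the flipped output. Each call draws independent
    randomness, as the thought experiment stipulates."""
    correct = a and b
    return correct if random.random() < p else not correct

# With p = 1 this is an ordinary AND gate; with p = 0 it always inverts the
# correct answer; with p = 0.5 the output is a fair coin flip that ignores
# the inputs entirely.
```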
Now, let Cp be the resulting computer with all of its adjustment knobs set to p. On our computational theory of mind, C1 is a conscious computer thinking various thoughts. By contrast, C0.5 is not computing anything: it is simply giving random outputs. This is true even if, by an extremely unlikely chance, these outputs in fact always match the ones that C1 gives. The reason is that we cannot really characterize the components of C0.5 as the logic gates that they would need to be for C0.5 to be computing the same functions as C1. Something that has a probability 0.5 of producing a 1 and a probability 0.5 of producing a 0, regardless of inputs, is no more an and-gate than it is a nand-gate, say.
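To make the point about C0.5 vivid, here is a small simulation (again my own illustration, under the same assumptions) showing that with the knob at 0.5 a gate wired as AND and a gate wired as NAND produce the same output statistics on every input, so the gate's behavior no longer determines which function it is computing:

```python
import random
from collections import Counter

def noisy_gate(fn, a, b, p):
    """Gate for Boolean function fn with its reliability knob set to p."""
    correct = fn(a, b)
    return correct if random.random() < p else not correct

def AND(a, b):
    return a and b

def NAND(a, b):
    return not (a and b)

def output_distribution(fn, p, trials=10_000):
    """Empirical output frequencies for each of the four input pairs."""
    return {
        (a, b): Counter(noisy_gate(fn, a, b, p) for _ in range(trials))
        for a in (False, True)
        for b in (False, True)
    }

# At p = 0.5 both calls print roughly 50/50 True/False for every input pair:
# nothing in the behavior distinguishes the "and-gate" from the "nand-gate".
print(output_distribution(AND, 0.5))
print(output_distribution(NAND, 0.5))
```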
So, on a computational theory of mind, C0.5 is mindless. It’s not computing. Now imagine a sequence of computers Cp as p ranges from 0.5 to 1. Suppose it so happens that the corresponding “logic gates” of all of them always give the same answers as the logic gates of C1. Now, for p sufficiently close to 1, any plausible computational theory of mind will have to say that Cp is thinking just as C1 is. Granted, Cp’s gates are less reliable than C1’s, but imperfect reliability cannot destroy thought: if it did, nothing physical in a quantum universe would think, and the naturalistic computational theorist of mind surely won’t want to accept that conclusion.
So, for p close to 1, we have thought. For p = 0.5, we do not. It seems very plausible that if p is very close to 0.5, we still have no thought. So, somewhere strictly between p = 0.5 and p = 1, a transition is made from no-thought to thought. It seems implausible to think that there is such a transition, and that is a count against computational theories of mind.
Moreover, because all the gates actually happen to fire in the same way in all the computers in the Cp sequence, and consciousness is, on the computational theory, a function of the content of the computation, it is plausible that for all the values of p < 1 for which Cp has conscious states, Cp has the same conscious states as C1. Either Cp does not count as computing anything interesting enough for consciousness, or it counts as imperfectly reliably computing the same thing as C1. Thus, the transition from C0.5 to C1 is not like gradually waking up from unconsciousness. For when we gradually wake up from unconsciousness, we have an apparently continuous sequence of more and more intense conscious states. But the intensity of a conscious state is to be accounted for computationally on a computational theory of mind: the intensity is a central aspect of the qualia. Thus, the intensity has to be a function of what is being computed. And if there is only one relevant thing computed by all the Cp that are computing something conscious-making, then what we have as p goes from 0.5 to 1 is a sudden jump from zero intensity to full intensity. This seems implausible.
"For when we gradually wake up from unconsciousness, we have an apparently continuous sequence of more and more intense conscious states."
I agree with your conclusion (I don't think that human consciousness is merely due to a kind of computation; that theory is probably influential because it reifies a good analogy), but I am not sure that all waking is gradual in the way your sentence above states. Perhaps waking up from a visually intense dream is a possible counterexample: the imagery changes from dream contents to the world, but maybe not its intensity.
A visually intense dream is not an instance of unconsciousness. :-)
I think normal consciousness includes connectedness to the environment, and dreams are mostly not connected. There are components to being conscious: see, for example, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3311716/
That's a different sense of consciousness than the one I have in mind. The one I have in mind is being aware, being the subject of qualia, being in a state such that there is "what it is like to be in that state". Dreams are a form of consciousness, while dreamless sleep of course is not.
Given such a qualia-based definition of consciousness, I suppose dreams would have a kind of consciousness. So if we assume your intuition that going from no qualia to full qualia makes no sense in a graduated scenario from randomness to order, then, as you say, your argument does go through.
I guess I just do not share the commonly stated sorites intuition about "fading qualia." There are plenty of examples of phase transitions in nature, and no reason to think that conscious contents cannot arise through such a phase transition.
I think when talking about mental states it is important to distinguish intentional states from phenomenal/conscious states. The computational theory is much more plausible about intentional states than about phenomenal states. So, e.g. a computational theory of the thought "I am in pain" is much more plausible than a computational theory of the pain-state itself.
However, as a critique of computational theories of conscious states, I think this is really good. And it shows something more general, perhaps. Whatever the property is that makes a state conscious, it is occurrently present to the mind. It is not a probabilistic property or a modal property or a counterfactual property or a dispositional property. Computational theories depend on these latter types of property: a mechanism is computing a function only if it will likely compute the function, or computes this function in many nearby worlds, or would compute this function, or tends to compute it, etc.
That is probably a criterion that can be used on lots of theories of consciousness.
Heath: that's a good point. I do think that it is more defensible to think that the intentionality fades away than that the qualia do.