Suppose the ACME AI company manufactures an intelligent, conscious, and perfectly reliable computer, C0. (I assume that the computers in this post are mere computers, rather than objects endowed with a soul.) But then a clone company manufactures a clone of C0, call it C1, out of slightly less reliable components. And another clone company makes a slightly less reliable clone of C1, namely C2. And so on. At some point in the cloning sequence, say at C10000, we reach a point where the components produce completely random outputs.
Now, imagine that all the devices from C0 through C10000 happen to get the same inputs over a certain day, and that all their components do the same things. In the case of C10000, this is astronomically unlikely, since the super-unreliable components of C10000 produce completely random outputs.
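The "astronomically unlikely" claim can be made concrete with a back-of-the-envelope calculation. Here is a minimal Python sketch; the number of component operations per day, the linear reliability schedule, and the helper names are all my own assumptions for illustration, not anything specified in the post.

```python
import math

# Hypothetical illustration (all numbers are assumptions, not from the post):
# each machine performs N independent component operations in a day, and each
# operation of C_k happens to match C0's with probability p_k, falling
# linearly from 1.0 (C0) to 0.5 (C10000, a fair coin flip per binary output).

N_OPERATIONS = 1_000_000  # assumed number of component operations per day
LAST = 10_000             # index of the fully random machine

def per_step_match_probability(k: int) -> float:
    """Assumed reliability schedule: 1.0 at k = 0 down to 0.5 at k = LAST."""
    return 1.0 - 0.5 * (k / LAST)

def log10_prob_same_behavior(k: int) -> float:
    """log10 of the probability that every operation of C_k matches C0's."""
    return N_OPERATIONS * math.log10(per_step_match_probability(k))

for k in (0, 1, 100, 5_000, 10_000):
    print(f"C{k}: about 10^{log10_prob_same_behavior(k):.0f}")
# C0 comes out certain (10^0); C10000 comes out around 10^-301030,
# i.e. astronomically unlikely, as the post says.
```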
Now, C10000 is not computing. Its outputs are no more the results of intelligence than a copy of Hamlet typed by monkeys is the result of intelligent authorship. By the same token, on computational theories of consciousness, C10000 is not conscious.
On the other hand, C0’s outputs are the results of intelligence and C0 is conscious. The same is true for C1, since if intelligence or consciousness required complete reliability, we wouldn’t be intelligent and conscious. So somewhere in the sequence from C0 to C10000 there must be a transition from intelligence to lack thereof and somewhere (perhaps somewhere else) a transition from consciousness to lack thereof.
Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.
More generally, this means that given functionalism about mind, there must be a dividing line in measures of reliability between cases of consciousness and ones of unconsciousness.
I wonder if this is a problem. I suppose if the dividing line is somehow natural, it’s not a problem. I wonder if a natural dividing line of reliability can in fact be specified, though.
The transition between being conscious and not being conscious that happens when you fall asleep seems pretty vague. I don't see why you find it implausible that "being conscious" could be vague in much the same way "being red" or "being intelligent" might be vague. In fact the evidence from experience (falling asleep etc) seems to directly suggest that it is vague.
When I fall asleep, I may become conscious of less and less. But I can't get myself to deny that either it is definitely true at any given time that I am at least a little conscious or it is definitely true that I am not at all conscious.
Maybe for you, consciousness is an all-or-nothing affair, but that could be like a light being either red or green; it does not follow that no colors are only as red as not. So I wonder why you say that it is not plausible that consciousness is a vague property. I think that a functionalist could say that your thought-experiment shows that it is (unless their answer to the problem of vagueness is to have sharp lines everywhere), and so I doubt that this would be a problem (although intuitively it does seem like one).
Isn't this also a reductio on the idea that we can tell intelligence of origin from an arbitrary output alone? Suppose the output were the answers to a 3-question true-false test, where the odds of getting all 3 correct by chance alone are 1 in 8.
In that example it's obvious that the output taken by itself is not necessarily indicative of intelligent processing in the answering.
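For concreteness, here is a tiny sketch of the commenter's arithmetic (the 10- and 100-question test sizes are my own additions for contrast): the chance of a purely random answerer acing an n-question true-false test is (1/2)^n, so a perfect 3-question score is only weakly diagnostic of intelligence.

```python
# Hypothetical illustration: the probability that a purely random answerer
# gets every question right on an n-question true-false test is (1/2)^n.
def chance_of_perfect_score(n_questions: int) -> float:
    return 0.5 ** n_questions

for n in (3, 10, 100):
    print(f"{n} questions: 1 in {1 / chance_of_perfect_score(n):,.0f}")
# A perfect score on the 3-question test happens by chance 1 time in 8,
# so it is weak evidence of intelligence; on a 100-question test the chance
# is about 1 in 1.3e30, so the same "perfect output" is far stronger evidence.
```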
To clarify the relevance of what I wrote above: if we start out by assessing consciousness with a vague measure, so that our measurement of consciousness is itself vague, can we actually tell whether it is just the measurement that is vague, or also the reliability underlying the consciousness, or finally the consciousness itself?
For people who are impressed by the falling asleep argument, I think I do need a different argument.
Here is one. If we decrease the reliability of all the components in sync, I think we should be able to ensure that in any of the machines, either the machine (unreliably) computes the same thing as C0 was computing or else it doesn't compute at all. But if consciousness supervenes on computation, then all the machines that compute the same thing as C0 have the same state of consciousness--and the remaining machines have no state of consciousness. This is unlike the case of gradually falling asleep, where we have a sequence of *different* states of consciousness, approaching zero.
But suppose I am wrong and the consciousness in the sequence must continuously decrease to zero. Then there will have to be a function from degrees of reliability to degrees of consciousness. Choices of this function seem just as arbitrary as the choice of a cutoff.
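To illustrate the arbitrariness worry, here is a toy sketch with two made-up candidate mappings from reliability to degree of consciousness (both functions and the reliability scale are my own assumptions, not anything proposed in the post or comments): they agree at the endpoints yet disagree badly in between, and nothing in the computational picture seems to favor one over the other.

```python
# Two hypothetical candidate functions from reliability r (in [0.5, 1.0],
# with 0.5 = coin-flipping components) to a "degree of consciousness" in [0, 1].
# Both give 0 at full randomness and 1 at perfect reliability; in between
# they diverge, and no principled consideration picks one over the other.

def degree_linear(r: float) -> float:
    return (r - 0.5) / 0.5

def degree_steep(r: float) -> float:
    return ((r - 0.5) / 0.5) ** 8

for r in (0.5, 0.75, 0.9, 0.99, 1.0):
    print(f"r = {r:.2f}: linear = {degree_linear(r):.3f}, steep = {degree_steep(r):.3f}")
# At r = 0.75 the first function says the machine is half-conscious,
# while the second says it is barely conscious at all (about 0.004).
```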
If consciousness is a "thing" that is fed with qualia somehow coming from a brain, then falling asleep wouldn't necessarily involve the consciousness itself slowly dissolving. It might just be that the brain is falling asleep, so that the qualia it generates get simpler and sparser. And when qualia stop arriving, consciousness receives nothing at all.