In a computer, we have multiple layers of abstraction. There is an underlying analog hardware level (which itself may be an approximation to a discrete quantum world, for all we know); all our electronic hardware is, technically, analog hardware. Then there is a digital hardware level, which abstracts from the analog hardware level by counting voltages above a certain threshold as a one and voltages below another, lower, threshold as a zero. And then there are higher layers defined by the software. But it is interesting that there is already semantics present at the digital level: three volts (say) means a one, while half a volt (say) means a zero.
At the (single-threaded) software level, we think of the computer as being in a sequence of well-defined discrete states. This sequence unfolds in time. However, it is interesting to note that the time with respect to which this sequence unfolds is not actually real physical time. One reason is this. At the analog hardware level, during state transitions there will be times when the voltage levels are in a range that does not define a digital state. For instance, in 3.3 V LVTTL logic, a voltage below 0.8 V counts as a zero and a voltage above 2.0 V counts as a one, but in between the level is “undefined and results in an invalid state”. Since physical changes at the analog hardware level are continuous, whenever there is a change between a zero and a one, there will be a period of physical time during which the voltage is in the undefined range.
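To make these thresholds concrete, here is a minimal Python sketch of the digital abstraction as a function from analog voltages to logical values. The function name and the use of None for the invalid band are my own illustrative choices, not anything from a real hardware library:

```python
# Illustrative sketch (not from any real library): the digital abstraction
# as a function from analog voltages to logical values, using the LVTTL
# input thresholds cited above.

V_IL = 0.8  # voltages at or below this count as a logical 0
V_IH = 2.0  # voltages at or above this count as a logical 1

def digital_value(voltage):
    """Return 0, 1, or None (no well-defined digital state)."""
    if voltage <= V_IL:
        return 0
    if voltage >= V_IH:
        return 1
    return None  # the in-between band: undefined, an invalid state

# A continuous transition from 0 V to 3.3 V must pass through the band:
for v in (0.2, 0.8, 1.4, 2.0, 3.3):
    print(f"{v:.1f} V -> {digital_value(v)}")
```

Any continuous swing between 0 V and 3.3 V must spend some interval of physical time in the band that maps to None.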
It seems, then, that the well-defined software states can only occur at a proper subset of the physical times. Between these physical times are physical times at which the digital states, and hence the software states that are abstractions from them, are undefined. This is interesting to think about in connection with the hypothesis of a conscious computer. Would a conscious computer be conscious “all the time” or only at the times when the software states are well defined?
But things are more complicated than that. The technical means by which undefined states are dealt with is the system clock, which sends a periodic signal to the various parts of the processor. The system is normally so designed that when the clock signal reaches a component of the processor (say, a flip-flop), that component’s electrical states have a well-defined digital value (i.e., are not in the undefined range). There is thus an official time at which a given component’s digital values are defined. But at the analog hardware level, that official time is slightly different for different components, because of “clock skew”, the physical phenomenon that clock signals reach different components at different times. Thus, when we say that component A is in state 1 and component B is in state 0 at the same time, the “at the same time” is not technically defined by a single physical time, but rather by the (normally) different times at which the same clock signal reaches A and B.
In other words, it may not be technically correct to say that the well-defined software state occurs at a proper subset of the physical times. For the software state is defined by the digital states of multiple components, and the physical times at which these digital states “count” are going to be different for different components because of clock skew. In fact, I assume that the following can and does sometimes happen: component B is designed so that the clock signal reaches it after it has reached component A, and by the time the clock signal reaches component B, component A has started processing new data and no longer has a well-defined digital state. Thus at least in principle (and I don’t know enough about the engineering to know whether this happens in practice) it could be that there is no single physical time at which all the digital states that correspond to a software state are defined.
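A toy numerical model may make the scenario vivid. It is purely illustrative: the waveforms, the timings, and the skew figure are invented, and real designs use setup and hold margins precisely to keep sampling out of the undefined band. Still, it shows how a “simultaneous” software state (A = 1, B = 0) can be stitched together from samples taken at two different physical times, with A already undefined by the time B’s clock edge arrives:

```python
# Toy model of the clock-skew scenario (my own construction, with made-up
# timings; not a real circuit simulation). Each component is sampled when
# "the same" clock edge physically reaches it, but skew makes those
# different physical times.

def digital_value(voltage):
    if voltage <= 0.8:
        return 0
    if voltage >= 2.0:
        return 1
    return None  # undefined band

def voltage_A(t):
    """Component A's analog level (volts) at physical time t (ns):
    holds 3.3 V through its clock edge at t = 0, then ramps toward 0 V
    as A starts processing new data."""
    if t <= 0.0:
        return 3.3
    return max(0.0, 3.3 - 1.65 * t)  # continuous ramp down

def voltage_B(t):
    """Component B's analog level: a stable 0 V in this window."""
    return 0.0

t_edge_A = 0.0  # physical time (ns) at which the clock edge reaches A
t_edge_B = 1.2  # the same edge reaches B later, because of skew

# The "simultaneous" software state (A = 1, B = 0) is assembled from
# samples taken at two different physical times:
print("A at its own edge:", digital_value(voltage_A(t_edge_A)))  # 1
print("B at its own edge:", digital_value(voltage_B(t_edge_B)))  # 0

# At B's sampling time, A is already in the undefined band, so there is
# no single physical time at which both digital states are defined:
print("A at B's edge:   ", digital_value(voltage_A(t_edge_B)))  # None
```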
If this is right, then when we go back to our thought experiment of a conscious computer, we should say this: The times of the flow of consciousness in that computer are not even a subset of the physical times. They are, rather, an abstraction, what we might call “software time”. If so, the question of whether the computer is presently conscious is literally nonsense. The computer’s software time, which its consciousness is strung out along, has a rather complex relationship to real time.
So what?
I don’t know exactly. But I think there are a few directions one could take this line of thought:
1. Consciousness has to be strung out in a well-defined way along real time, and so computers cannot be conscious.
2. It is likely that similar phenomena occur in our brains, and so either our consciousness is not based on our brains or else it is not strung out along real time. The latter makes the A-theory of time less plausible, because the main motive for the A-theory is to do justice to our experience of temporality. But if our experience of temporality is tied to an abstracted software time rather than real time, then doing justice to our experience of temporality is unlikely to reach the truth about real time. This in turn suggests to me the conditional: If the A-theory of time is true, then some sort of dualism is true.
3. The problem that transitions between meaningful states (say, the ones and zeros of the digital hardware level) involve non-meaningful states between them is likely to afflict any plausible theory on which our mental functioning supervenes on a physical system. In digital computers, the way a sequence of meaningful states is reconstructed is by means of a clock signal. This leads to an empirical prediction: If the mental supervenes on the physical, then our brains have something analogous to a clock signal. Otherwise, the well-defined unity of our consciousness cannot be saved.
9 comments:
Hi Dr. Pruss,
I am interested in line of thought #3 you introduce in the post where the non-meaningful hardware states suggest a problem for the supervening of mental functioning on a physical system. If I understand this scenario correctly, the ones and zeros of the digital hardware level are meaningful states analogous to a well-defined unity of consciousness in human mental states.
Why not associate computer hardware with human hardware? I find it plausible to suggest that the meaningful states of ones and zeros at the hardware level of a machine would correspond to physical states of the brain in a human. If the ones and zeros at the hardware level are like physical states of the brain, then could the world-to-mind direction of fit for intentional mental states account for mental states supervening on the physical? In the same way that the world, in the form of my raised arm, matches my intention to raise my arm, so too would the world, in the form of a computer's hardware state, match the intention of the computer's software. For reasons like this, it seems to me that the human body ought to be compared with computer hardware and the human mind with computer software. Can you share why you don’t take this approach in this case? I’m hoping to deal with a related topic in a paper for a master’s thesis and would be interested to hear any reasons why the aforementioned association between humans and computers might be a dangerous assumption.
If you were to associate the hardware states of computers with the physical states of the brain rather than with consciousness itself, it seems this would have implications for the empirical prediction you mentioned. Ultimately, I think it offers an escape from having to identify a human equivalent of a clock signal to account for unified consciousness. For if hardware states are more closely aligned with physical states of the brain than with consciousness itself, the problem of supervenience becomes a problem for physical rather than mental function. The burden would now seem to fall on physicalists who aim to explain how consciousness can emerge in machines given the problem of non-meaningful states you have outlined.
Thanks,
Hubbell
"The hardware states of computers" is ambiguous between the underlying analog states and the abstraction of digital states. Which one did you mean in your last paragraph?
I had in mind states at the digital hardware level.
Although, it seems to me that the suggestion of a human-like consciousness emerging from computer hardware would present a challenge whether a physicalist chooses to associate physical states of the brain with the analog or with the digital level of hardware. The potential for undefined or non-meaningful states at both levels would render both options problematic. But I suppose this would not be a big problem if physicalists are comfortable saying that a consciousness will emerge only in well-defined states.
But it seems that the physical states of the brain are more analogous to the analog hardware states.
I had digital states in mind because I supposed that most computationalists or advocates of supervenient physicalism would consider digital states of hardware to be more analogous to physical states of the brain. It also made sense to me that the semantics present at the digital hardware level provide motivation for associating digital hardware states with physical states of the brain. I expected that physicalists would want to appeal to multiple realizability to show that the semantics of the 0s and 1s at the digital hardware level can be realized by brains or by digital hardware.
If you have better reasons for considering analog states as more closely analogous to physical brain states, it would be great to hear any of your ideas or resources that you can point out.
Well, I was assuming that when one talks of "the brain being in physical state S", there is no semantics involved, just the positions of particles. But when one says "the circuit is in digital state 1001", semantics is already involved, because one needs to *interpret* the analog electrical states of the circuit as 0s or 1s. Or, to put it differently, digital states are multiply realizable in a way that analog states are not. But physical states of brains are not multiply realizable in the way digital states are.
I appreciate the response. I think I need to give more careful consideration to my conception of physical brain states without having such concern to account for the realization of various functional states. Thanks!
Perhaps the technical problem of digitally invalid states can be handled as follows: in a deterministic system, at any given time it will be determined what (if any) digitally valid state the system would evolve into absent external interference. So, perhaps, a state that an electronics engineer considers invalid still has the same semantic features as the digital state that it would evolve into absent external interference.
Here's a problem. Imagine that we live in a deterministic universe and the first digital computer is going to be produced in a hundred years. Do we want to say that the semantic features of that digital computer already exist now, because the solar system is determined to produce it?
But perhaps there is some way to rule out problems like that.
I wonder if I could rule out this problem by appealing to an asymmetry between types and tokens with respect to the necessity of real time for semantic features? The "type" of semantic feature could have existed 100 years prior, without respect to any digitally valid or invalid states from which the semantics evolved. But for the electronics engineer, digitally invalid states contributing to a "token" semantic feature seem real, at least insofar as this deterministic universe affords temporal becoming. If I could make a type-token distinction, I would perhaps be willing to grant that the reality of digitally valid or invalid states is not necessary for that "type" of semantic feature. But I’m not convinced that the technical problem of digitally invalid states has gone away for the electronics engineer who experiences a "token" of the semantic feature.
In your more recent post on the "Timeless flow of consciousness" you mention a problem where subjective consciousness does not seem to require real temporal order. I see this consciousness problem as similar to the semantics problem, in that physicalists want to show that, because temporal order is not necessary for subjective consciousness as a "type," it somehow follows that "token" subjective consciousness would not require real time. If I understand correctly, Searle appears to make this suggestion: "organized unity across time is essential to the healthy functioning of the conscious organism, but it is not necessary for the very existence of conscious subjectivity."
http://faculty.wcas.northwestern.edu/~paller/dialogue/csc1.pdf
This, perhaps, could benefit from something like a type-token clarification. I take Searle to be suggesting here that real time is not necessary for the very existence of the "type" of subjective consciousness. However, persistence across time is essential to "token" conscious organisms.
I’m not sure if this is a valid way around the problem, but I’ve enjoyed thinking on topics raised by your recent posts.