Wednesday, September 22, 2021

Against digital phenomenology

Suppose a digital computer can have phenomenal states in virtue of its computational states. Now, in a digital computer, many possible physical states can realize one computational state. Typically, removing a single atom from a computer will not change the computational state, so both the physical state with the atom and the one without the atom realize the same computational state, and in particular they both have the same precise phenomenal state.

Now suppose a digital computer has a maximally precise phenomenal state M. We can suppose there is an atom we can remove that will not change the precise phenomenal state it is in. And then another. And so on. But then eventually we reach a point where any atom we remove will change the precise phenomenal state. For if we could continue arbitrarily long, eventually our computer would have no atoms, and then surely it wouldn’t have a phenomenal state.

So, we get a sequence of physical states, each differing from the previous by a single atom. For a number of initial states in the sequence, we have the phenomenal state M. But then eventually a single atom difference destroys M, replacing it by some other phenomenal state or by no phenomenal state at all.

The point at which M is destroyed cannot be vague. For while it might be vague whether one is seeing blue (rather than, say, purple) or whether one is having a pain (rather than, say, an itch), whether one has the precise phenomenal state M is not subject to vagueness. So there must be a sharp transition. Prior to the transition, we have M, and after it we don’t have M.

The exact physical point at which the transition happens, however, seems like it will have to be implausibly arbitrary.

This line of argument suggests to me that perhaps functionalists should require phenomenal states to depend on analog computational states, so that an arbitrarily small change in the underlying physical state can still change the computational state and hence the phenomenal state.


IanS said...

I’m not following this. A digital computer, viewed formally, has a large but finite number of states, transition rules between them, and rules about the interaction between its formal states and its environment. If the computer has phenomenal states, it is by virtue of these things. The physical implementation is irrelevant. It either works to give the right formal states, etc., or it doesn’t. If you remove too many atoms from a computer, it may give the ‘wrong’ formal states etc. But then it is a formally different computer.

From another direction, do ‘phenomenal states’ have to be states? Maybe they are processes. We talk about ‘feeling’ pain, ‘hearing’ sound etc.

Alexander R Pruss said...

I probably should have talked about removing electrons one at a time. Think about logic levels. There are precise logic levels specified by the datasheet for the processor, e.g., low = 0-0.8 V, high = 2-3.3 V. So by the datasheet, if you remove electrons so that some zero bit hits 0.9 V, you have an undefined computational state. But the metaphysics of the system doesn't care what the datasheet says. And chances are everything will work at 0.9 V, despite being out of spec, and you still have M. But at some point you metaphysically no longer have a state that supports M (e.g., if this bit is essential to M, you don't have it when you hit 2 V). But what defines the exact point of transition from M to not-M?
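The logic-level point can be put in a toy sketch (my own illustration, using the hypothetical threshold values quoted in the comment, not any particular processor's datasheet): the datasheet carves the analog voltage range into "low", "high", and an undefined band in between, and it is exactly in that band that the formal state gives out while the physics keeps going.

```python
# Toy illustration of datasheet logic thresholds. The ranges below are the
# hypothetical values from the comment (low = 0-0.8 V, high = 2-3.3 V),
# not the spec of any real chip.
def logic_level(voltage: float) -> str:
    """Classify an analog voltage into a digital state per the datasheet."""
    if 0.0 <= voltage <= 0.8:
        return "low"
    if 2.0 <= voltage <= 3.3:
        return "high"
    # Between 0.8 V and 2 V the datasheet makes no promise: the physical
    # gate will still do *something*, but the formal state is undefined.
    return "undefined"

print(logic_level(0.5))  # low
print(logic_level(0.9))  # undefined (yet the hardware likely still works)
print(logic_level(2.5))  # high
```

The sorites worry is visible in the last comment: at 0.9 V the formal description says "undefined", but physically the gate almost certainly still switches correctly, so the formal boundary cannot be the metaphysical one.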

William said...

What we see in physical systems that undergo a phase transition is that there is a range in which the transition becomes more and more likely. So a system that nominally freezes at 0 C may, when the experiment is repeated many times, freeze anywhere in a range between 0.1 and -1.3 C, for example.

Similarly, if during an experiment I watch something that looks red until, as the ambient lighting decreases, it looks black to me, the transition between "red" and "black" responses will occur over a range of lighting levels (as measured by a device that is more precise about amounts of light energy than my eyes are).

Alexander R Pruss said...

Yeah, but that's not what's going to happen in the mental phenomenon case, because whether a given physical state (or process; Ian is right that that may be more plausible) gives rise to M is determined by the physical state, and so it can't be random whether M occurs.

Alexander R Pruss said...

The more I think about it, the more I think that what I said in the conclusion is right: all computers are ultimately analog (at the macro level; it may be that at the quantum level reality is ultimately discrete). It is the analog computational state that would actually underlie the mental state, if a computer were conscious, and the mental state would be correspondingly analog, varying slightly with the slightest change in the underlying physical state.

William said...

A physical quantum state might determine a limited number of possible outcomes, yet which of those outcomes occurs may be random as far as we can know.

IanS said...

Viewed classically, everything physical is ultimately analog. But the point of a digital computer is that you can ignore its analog nature, as long as it is working properly. If one computer could have phenomenal states, so could an otherwise similar computer that used inverted logic levels. What matters is that it goes through the right sequence of logical states (with the right relation to its environment, if that is relevant).

If you tweak a voltage, as in your thought experiment, when does the computer cease to be conscious? I’d say, when it fails in the ordinary sense. That is, when the voltage is so far out of spec that the output of the next logic gate is ‘wrong’, or the state in the next clock cycle is ‘wrong’. (Strictly, the machine might still be conscious [it may be able to survive a few errors], but it would be conscious differently.)

I think your thought experiment shows that if a computer could have phenomenal states (whether states in the strict sense or processes) they would have no special metaphysical status. But I don’t think proponents of computer consciousness would object to this.

William said...

I like the idea that computers are ultimately analog and that we tend to ignore their analog nature, because it agrees with my intuition that the analog aspect of ourselves is what permits us to be conscious, not just the functions we simulate on a computer. The computer map is not the human territory.

Alexander R Pruss said...


"That is, when the voltage is so far out of spec that the output of the next logic gate is ‘wrong’, or the state in the next clock cycle is ‘wrong’."

I don't think that works.

Let's take the clock version for simplicity.

1. Suppose that the conscious state begins with clock cycle n and ends with clock cycle n+k. Further, suppose the computer is annihilated after clock cycle n+k. Then even if everything is OK at clock cycle n+k, the state at clock cycle n+k+1 is wrong. So now on your proposal it looks like whether the computer is conscious at clock cycle n+k depends on the future--it depends on whether there will be another clock cycle.

One might try to fix this counterfactually: the state is too far out of spec at n+k provided that it WOULD be wrong at n+k+1 if the computer were to continue functioning as before. But that probably doesn't work for quantum reasons. Absent Molinism, we don't have facts about whether the computer WOULD be wrong at n+k+1, but only facts about how probable it is that it would be wrong. And now we have the same problem of the analog as before: exactly how big would the probability of an error at n+k+1 have to be for that probability to be sufficiently great to invalidate the state at n+k?

2. In any case, it seems that saying that the state is too far off at cycle n+k iff it is wrong at cycle n+k+1 simply shifts the problem---for how far out of spec does it have to be at n+k+1 for it to be "wrong"?


I think another solution would be to say that there can be non-epistemic vagueness as to whether two systems are in the exact same phenomenal state. That seems wrong to me. One can have vagueness as to whether they are in approximately the same phenomenal state, but not whether they are in *exactly* the same one. Or at least that's the intuition underlying my argument.

IanS said...

Yes, those are good points. Here is another try:

When you start tweaking voltages, the computer is no longer the original computer.

For small tweaks it will be ‘formally the same’ (= ‘isomorphic’, meaning that by suitably redefining the acceptable voltage ranges, you can interpret it as having the same states, read as sets of 0s and 1s, as the original, and that each such state follows causally from the preceding one in the same way as in the original). Then if the original computer was in a phenomenal state, the tweaked one, in ‘the same’ (= the corresponding) formal state, would be in ‘the same’ (= the corresponding) phenomenal state.

For larger tweaks, this will not be possible. The tweaked computer may be a formally different digital computer, or not strictly a digital computer at all, but a weird analogue-digital hybrid. Then we can’t be sure that the two computers will have precisely corresponding phenomenal states.

There is still a problem with this. The formally different computers could still have some corresponding phenomenal states. (Because the differences may not be relevant to those phenomenal states – there are lots of things happening in my brain that don’t make it to consciousness.) But on what basis would you match the phenomenal states of formally different computers?
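The ‘formally the same’ idea above can be sketched in a few lines (my own toy example, with made-up voltage traces, not anything from the thread): two machines whose analog voltages differ can count as running through the same formal states, provided some choice of acceptable ranges reads both traces as the same sequence of 0s and 1s.

```python
# Toy sketch of 'formal sameness': two analog voltage traces (made up)
# that realize the same digital states once each is read with a suitable
# threshold dividing 'low' from 'high'.
def read_bits(voltages, threshold):
    """Interpret an analog voltage trace as 0s and 1s relative to a threshold."""
    return [1 if v >= threshold else 0 for v in voltages]

original = [0.2, 3.1, 0.1, 3.0]  # comfortably in-spec voltages
tweaked  = [0.9, 2.1, 0.8, 2.2]  # out-of-spec, but the ordering is preserved

# By suitably (re)defining the acceptable ranges -- here, one threshold
# works for both -- the tweaked machine is formally the same as the original.
assert read_bits(original, 1.5) == read_bits(tweaked, 1.5) == [0, 1, 0, 1]
```

For large enough tweaks no such threshold exists, and that is where the matching problem in the comment bites: the two machines are then formally different, and there is no obvious basis for pairing up their phenomenal states.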

Alexander R Pruss said...

"But on what basis would you match the phenomenal states of formally different computers?"

I take it that there is always a sharp fact of the matter whether two entities are in the exact same phenomenal state. I suspect this fact is problematic for physicalism.

IanS said...

I am partially red-green colourblind. This has surprisingly little effect on my daily life. Occasionally, I fail to notice red flowers. But when I do notice them (when someone points them out, or when I get closer to the bush), they look red, as they should. But I still wonder, is the red I see ‘the same’ as the red that ‘normal’ people see?

I understand the usual vocabulary of red (scarlet, crimson, ruby etc.) as well as most men who are not interior decorators. I find that it fits my experience of red things, and other people’s reports of colour, as it should. Or so I like to think. :-) But I certainly see some things differently from ‘normal’ people – colour blindness test charts, for a start.

Leave aside the other aspects of phenomenal experience, and consider only colour. I know that my phenomenal red is sometimes different from that of ‘normal’ people. Is it ever ‘exactly the same’? If it were, how could I know? Epistemology aside, what would that even mean?

Alexander R Pruss said...

One of my close male relatives, who is also partially red-green colorblind, once reported some bricks as looking "reddish green". That is pretty much an impossible color for normal trichromats, and it suggests that the phenomenology of red and of green is rather different for him than for me. Another indication of a difference in phenomenology is that while for normal trichromats red is a particularly vivid color, for him it is a darkish color (and indeed in poor lighting he has had trouble distinguishing red from black).

My understanding of this form of colorblindness is that it results from the green and red spectral receptivity curves overlapping much more than they do for normal trichromatic vision. My intuition is that such shifts are likely to produce differences in phenomenology.

But this is all speculative. I do think that we don't know very much about this. But I think there is a fact of the matter whether the phenomenal experiences are exactly the same, even when we cannot determine this fact.

By the way, this is really interesting stuff:

IanS said...

Thank you for the link. Another example of ‘impossible colour’: An elderly friend had a cataract operation. She said that afterwards, the sky looked ‘bluer than blue’. The effect faded over a few weeks. No doubt, her lens had yellowed over the years and her processing had adapted to this. It took some time to recalibrate to the new clear lens.