There are two kinds of functionalism about the mind.
One kind upholds the thesis that if two systems exhibit the same overall function, i.e., the same overall functional mapping between sequences of system inputs and sequences of system outputs, then they have the same mental states, if any. Call this systemic functionalism.
The other kind says that mental properties depend not just on overall system function, but also on the functional properties of the system’s internal states and/or subsystems. Call this subsystemic functionalism. The subsystemic functionalist allows that two systems may have the same overall function, and yet, because the internal architectures (whether software or hardware) that achieve this overall function are different, the mental states of the two systems could be different.
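To make the distinction vivid with a toy computational example of my own devising (nothing hangs on the particular names or numbers), consider two programs with identical overall input-to-output mappings but very different internal organization:

```python
# Two "systems" with the same overall input-to-output mapping but
# different internal functional organization.

def doubler_direct(xs):
    """Maps each input to twice its value by direct arithmetic."""
    return [2 * x for x in xs]

def doubler_stateful(xs):
    """Same overall mapping, achieved via an internal accumulator
    state that the input-output description never mentions."""
    out = []
    for x in xs:
        acc = 0
        for _ in range(2):  # internal state evolves in steps
            acc += x
        out.append(acc)
    return out

# Same overall function, so systemic functionalism treats them alike;
# subsystemic functionalism says their differing internals may matter.
assert doubler_direct([1, 2, 3]) == doubler_stateful([1, 2, 3])
```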
Systemic functionalism allows for a greater degree of multiple realizability. If subsystemic functionalism is true, we might meet up with aliens who behave just like we do, but who nonetheless have no mental states, or mental states very different from ours, because the algorithms used to implement the input-to-output mappings in them are sufficiently different.
If subsystemic functionalism is true, then it seems impossible for us to figure out what functional properties constitute mental states, except via self-experimentation.
For instance, we would want to know whether the functional properties that constitute mental states are neuronal-or-above or subneuronal. If they are neuronal-or-above, then replacing neurons with prostheses that have the same input-to-output mappings will preserve mental states. If they are subneuronal, such replacement will preserve mental states only if the prostheses not only have the same input-to-output mappings but are also functionally isomorphic at the relevant (and unknown to us) subneuronal level.
But how could we figure out which is the case? Here is the obvious thing to try: replace neurons with prostheses whose internal architecture bears little functional resemblance to that of neurons but which have the same input-to-output mappings. But assuming standard physicalist claims that there is no “swervy” top-down causation (top-down causation that is unpredictable from the microphysical laws), we know ahead of the experiment that the subject will behave exactly as before. Yet if we have rejected systemic functionalism, sameness of behavior does not guarantee sameness of mental states, or any mental states at all. So doing the experiment seems pointless: we already know what we will find (assuming we know there is no swervy top-down causation), and it doesn’t answer our question.
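Here is a toy schematic of why the experiment is uninformative, with made-up stand-ins for neurons and prostheses; the point is just that, given input-to-output equivalence and no swervy top-down causation, sameness of overall behavior is guaranteed by construction:

```python
# Toy "brain" as a pipeline with one swappable component. Without
# swervy top-down causation, the component's contribution to overall
# behavior is fixed by its input-to-output mapping alone.

def neuron(signal):
    return signal + 1  # stand-in for a neuron's I/O mapping

def prosthesis(signal):
    # Internally very different (a lookup table), but the same
    # mapping on all inputs that will ever occur.
    return {s: s + 1 for s in range(200)}[signal]

def behavior(component, stimulus):
    """Overall system output with the given component installed."""
    return component(component(stimulus))  # the rest of the "brain"

for stimulus in range(100):
    assert behavior(neuron, stimulus) == behavior(prosthesis, stimulus)
# The outcome is known before the experiment is run: behavior cannot
# differ, so the experiment cannot decide between the two functionalisms.
```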
Well, not quite. If I have the experiment done on me, then if I continue to have conscious states after complete neuronal prosthetic replacement, I will know (in a Cartesian way) that I have mental states, and will get significant evidence that the relevant system level is neuronal-or-above. But I won’t be able to inform anybody of this. If I tell people “I am still conscious”, then, having rejected systemic functionalism, they will just say: “Yeah, he/it would say that even if he/it weren’t, because we have preserved the systemic input-to-output mappings.” And there will be significant limits to what even I can know. While I could surely know that I am conscious, I doubt that I would be able to trust my memory enough to know that my conscious states haven’t changed their qualia.
So with self-experimentation, I could know that the relevant system level is neuronal-or-above. Could I know, even with self-experimentation, that the relevant system level is subneuronal? That’s a tough one. At first sight, one might consider this: replace neurons with prostheses gradually, and observe whether my conscious experiences start to change. Maybe at some point I stop having smell qualia, because the neurons involved in smell have been replaced with subsystemically functionally non-isomorphic systems. Oddly, though, given the lack of swervy top-down causation, I would still report having smell qualia, and act as if I had them, and maybe even think, albeit mistakenly, that I have them. I am not sure what to make of this possibility. It’s weird indeed.
Moreover, a version of the above argument shows that there is no experiment we could do that would let persons other than (at most) the subject know whether systemic or subsystemic functionalism is true, assuming there is no swervy top-down causation.
Things become simpler in a way if we adopt systemic functionalism. It becomes easier to know when we have strong AI, when aliens are conscious, whether neural prostheses work or destroy thought, etc. The downside is that systemic functionalism is just behaviorism.
On the other hand, if there is swervy top-down causation, and this causation meshes in the right way with mental functioning, then we are once again in the experimental philosophy of mind business. For then neurons might function differently in a living brain than the microphysical laws would predict. And we could put in prostheses that function outside the body just like neurons, and see whether those also function in vivo just like neurons. If so, then the relevant functional level is probably neuronal-or-above; if not, it’s probably subneuronal.
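Schematically, and again with invented stand-ins rather than anything empirically serious (the “context” parameter and both functions are my own illustration), the proposed test checks whether a prosthesis that matches a neuron in vitro still matches it in vivo:

```python
# Schematic version of the proposed test under swervy top-down
# causation: a component's mapping may depend on whether it sits
# in a living brain.

def neuron(signal, context):
    # Hypothetical swervy behavior: the living brain modulates the
    # cell in a way the microphysical laws alone do not predict.
    return signal + (2 if context == "in vivo" else 1)

def prosthesis(signal, context):
    return signal + 1  # matches the neuron perfectly on the bench

matches_in_vitro = all(neuron(s, "in vitro") == prosthesis(s, "in vitro")
                       for s in range(10))
matches_in_vivo = all(neuron(s, "in vivo") == prosthesis(s, "in vivo")
                      for s in range(10))

print(matches_in_vitro, matches_in_vivo)  # True False
# In-vivo divergence would point to a subneuronal functional level;
# continued in-vivo agreement would point to neuronal-or-above.
```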