Tuesday, November 17, 2020

Nomic functionalism

Functionalism says that, of metaphysical necessity, if a system x has the same functional state as a system y that has internal mental state M, then x has M as well.

What exactly counts as an internal mental state is not clear, but it excludes states like thinking about water for which plausibly semantic externalism is true and it includes conscious states like having a pain or seeing blue. I will assume that functional states are so understood that if a system x has functional state S, then a sufficiently good computer simulation of x has S as well.

A weaker view is nomic functionalism, according to which for every internal mental state M (at least of a sort that humans have), there is a functional state S and a law of nature saying that everything that has S has M.

A typical nomic functionalist admits that it is metaphysically possible to have S without M, but thinks that the laws of nature necessitate M given S.

I am a dualist. As a result, I think functionalism is false. But I still wonder about nomic functionalism, often in connection with this intuition:

  1. Computers can be conscious if and only if functionalism or nomic functionalism is true.

Here’s the quick argument: If functionalism or nomic functionalism is true, then a computer simulation of a conscious thing would be conscious, so computers can be conscious. Conversely, if both computers and humans can be conscious, then the best explanation of this possibility would be given by functionalism or nomic functionalism.

I now think that nomic functionalism is not all that plausible. The reason for this is the intuition that a computer simulation of a cause normally only produces a computer simulation of the effect rather than the effect itself. Let me try to be more rigorous, though.

First, let’s continue from (1):

  2. Dualism is true.

  3. If dualism is true, functionalism is false.

  4. Nomic functionalism is false.

  5. Therefore, neither functionalism nor nomic functionalism is true. (2–4)

  6. So, computers cannot be conscious. (1, 5)

And that’s really nice: the ethical worries about whether AI research will hurt or enslave inorganic persons disappear.

The premise I am least confident about in the above argument is (4). Nomic functionalism seems like a serious dualist option. However, I now think there is good inductive reason to doubt nomic functionalism.

  7. No known law of nature makes functional states imply non-functional states.

  8. So, no law of nature makes functional states imply non-functional states. (Inductively from 7)

  9. If functionalism is false, mental states are not functional states.

  10. So, mental states are not functional states. (2, 3, 9)

  11. So, no law of nature makes functional states imply mental states. (8 and 10)

  12. So, nomic functionalism is false. (11 and definition)

Regarding (7), if a law of nature made functional states imply non-functional states, we would have multiple realizability on the left side of the law but not on the right side. Any accurate computer simulation of a system with the given functional state would then have to exhibit the particular non-functional state. This would be like a case where a computer simulation of water being heated had to result in actual boiling water.

I think the most promising potential counterexamples to (7) are thermodynamic laws that can be multiply realized. However, I think that in those cases, the implied states are typically also multiply realizable.

A variant of the above argument replaces “law” with “fundamental law” and uses the intuition that if dualism is true, then nomic functionalism would require fundamental laws relating functional states to mental states.

7 comments:

William said...

"a computer simulation of a cause normally only produces a computer simulation of the effect rather than the effect itself"

I really like this. To make it into an empirical truth, let's restate this as:

1. Computation over a symbolic representation of a cause produces a symbolic representation of an effect.

This allows computers to get complete and correct results from calculating symbolic matters, including such things as language interpretation and the proving of theorems, since in such cases the results (such as a proof) can themselves be symbolic.

To have consciousness happen via computation, though, would mean that consciousness itself was a purely symbolic thing, which seems wrong.

David Duffy said...

Given the extremely tight relationship between physico-chemical changes in the brain and our mental lives, surely your first premise is actually the weakest one. So, given dualism, any conscious computers will have immaterial parts. Contra William above, I regularly simulate matters in my imagination, and it seems to me that these must be causes of my physical actions to some extent, unless one stoops to extremely contorted definitions of causation.

Alexander R Pruss said...

David:

Sure, these are causes of physical actions, but not in the right way. Suppose an artist mentally simulates making a pot. Then the artist's simulated actions result in a simulated pot. It may be that the whole simulation process then leads the artist to make a real pot. But that's a separate, further causal process, beyond the one in imagination.

Another attempt at a counterexample to the thesis about simulation is that if we simulate a heating process, the computer doing the simulating also gets hot. But it doesn't get hot specifically because it's simulating a heating process--it would get hot even if it simulated a cooling process.

Alexander R Pruss said...

William:

Let me strengthen the end of your argument: Nomic functionalism presupposes that consciousness is not just a "purely symbolic thing". For if consciousness were a purely symbolic thing, we would have functionalism, not nomic functionalism.

William said...

David:

I agree that imagining jumping forward at the starting line might influence my start of the race, but it takes more than just a simulation to move forward.

I think that 1) above is just a more general and perhaps more vague restatement of the Chinese Room argument anyway.

David Duffy said...

"it doesn't get hot specifically because it's simulating a heating process" - yes, but simulating a simulation does not lead to such problems, it seems to me. Similarly, simulating a logician solving a problem, or any such mental operations.

As to the artist or the runner, if we manipulate (a la Woodward) his mental simulation, we will change the outcome of his subsequent actions.

Anyway, I don't want to rehash all the old arguments.

Alexander R Pruss said...

David:

Remember that my argument is aimed at those who already reject functionalism, namely at what I've called nomic functionalists.

The nomic functionalist thinks that brain function is not the same as thought. Rather, brain function gives rise to thought by a contingent law of nature.

In fact, when x gives rise to y *by a contingent law of nature*, simulating x does not even give rise to a simulation of y, unless a simulation of the law of nature is programmed into the simulation. Thus, a simulation of the positions of the planets does not give rise to a simulation of gravitational attractions, unless the law of gravitation is programmed into the simulation. So, if we wanted the simulation of brain function to give rise to a simulation of thought, we would need to program in a simulation of the law of nature linking brain function to thought.
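The planetary example can be sketched in code. Below is a minimal toy illustration (all names, constants, and the one-dimensional two-body setup are assumptions made for the sketch, not anything from the discussion above): a simulation exhibits attraction only if the force law itself is programmed in; a simulation that merely tracks positions exhibits none.

```python
G = 1.0  # toy gravitational constant (illustrative assumption)

def step_with_gravity(x1, v1, x2, v2, m1=1.0, m2=1.0, dt=0.01):
    """One Euler step of a simulation into which the law of attraction is programmed."""
    r = x2 - x1
    f = G * m1 * m2 / (r * r) * (1 if r > 0 else -1)  # inverse-square force on body 1
    v1 += f / m1 * dt
    v2 -= f / m2 * dt
    return x1 + v1 * dt, v1, x2 + v2 * dt, v2

def step_positions_only(x1, v1, x2, v2, dt=0.01):
    """One Euler step that tracks positions alone: no law programmed in, no attraction."""
    return x1 + v1 * dt, v1, x2 + v2 * dt, v2

# Two bodies at rest, one unit apart. Only the first simulation shows them approach.
x1, v1, x2, v2 = 0.0, 0.0, 1.0, 0.0
for _ in range(10):
    x1, v1, x2, v2 = step_with_gravity(x1, v1, x2, v2)
print(x2 - x1 < 1.0)   # True: the programmed law produces (simulated) attraction

x1, v1, x2, v2 = 0.0, 0.0, 1.0, 0.0
for _ in range(10):
    x1, v1, x2, v2 = step_positions_only(x1, v1, x2, v2)
print(x2 - x1 == 1.0)  # True: without the law, the bodies never move
```

The bookkeeping of positions alone never yields attraction; the attraction appears only because the inverse-square law was written into the update rule, which is the analogue of having to program in the law linking brain function to thought.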