Tuesday, August 4, 2015

Computational functionalism and cognitive malfunction

One can't really separate software from hardware in a principled way (for boundary cases, think of FPGAs, microcode, etc.), so instead of thinking about computers and their programs, we should simply think of programmed machines. We can think of a programmed machine as something that embodies a function from possible inputs to possible outputs. When a vector of inputs x is given to a programmed machine and the machine functions correctly, it computes the output y=f(x). One version of computational functionalism holds that mental states are grounded in computations: what grounds my being in mental state M is that I have been given such-and-such a vector of inputs x and have computed f(x), where f is the function I embody.
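
To fix ideas, here is a minimal sketch of that picture: a programmed machine modeled as nothing over and above an embodied input-output function. The particular function f and the malfunction flag are illustrative choices of mine, not anything the functionalist is committed to.

```python
# A toy "programmed machine": a device whose behavior is exhausted by a
# mapping from possible input vectors to outputs. Names are illustrative.

def f(x):
    """The function the machine embodies: here, summing an input vector."""
    return sum(x)

def machine(x, malfunction=False):
    """Return f(x) when functioning correctly; a malfunction yields
    some output other than f(x)."""
    if malfunction:
        return sum(x) + 1  # an off-by-one "glitch"
    return f(x)

print(machine([2, 3]))                    # 5 == f([2, 3]): correct functioning
print(machine([2, 3], malfunction=True))  # 6 != f([2, 3]): malfunction
```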

We go wrong mentally. We malfunction. Our brains function in ways contrary to our design plan due to glitches of all sorts, some a bit more on the software side (these are treated by psychologists and psychiatrists) and some a bit more on the hardware side (these are treated by psychiatrists and neurologists), though of course the software/hardware division is vague. Now, some malfunctions do knock us out. But many do not. We remain conscious. And we don't just remain conscious in the respects in which we are functioning correctly. We remain conscious in the respects in which we are functioning incorrectly. Of course, the mental states that we exhibit in those cases can be weird. At one end of the spectrum they may involve arithmetical errors and minor inferential failures, and at the other they involve psychosis. What will the computational functionalist say about such cases?

Well, presumably they will have to say that we still compute values of functions that we embody, but that we embody abnormal functions. This, however, is a seriously problematic proposal. For what makes it the case that I am computing f(x)? It isn't, of course, just the fact that I get y as the answer (where in fact y=f(x)). For there are infinitely many functions that yield the value y given the input x (the sketch after the two answers below makes this vivid). Rather, I see two initially plausible answers. The first answer is that what makes me compute f(x) is a pattern of non-normative counterfactual facts of the form:

  • Were I given input a, I would produce output f(a)

for a large set of values of a. But that can't be right. Any such set of facts could be finked. (Imagine that a neurosurgeon implanted a device such that, were I to be given any input other than the x that I am actually given, I would explode.) The second answer is that what makes me compute f(x) is a pattern of normative facts of the form:

  • Were I given input a, I should produce output f(a)

for a large set of values of a. But the problem is that when I embody an abnormal function, we don't have a pattern of facts like this, because the function that I embody (if that's the right way to think about this) is one whose outputs I should not produce!
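
The underdetermination worry lurking here can be made vivid with a small sketch (the rival functions below are toy examples of my own, in the spirit of Kripke's plus/quus): any single input-output pair is consistent with infinitely many embodied functions, so the output I actually produce cannot by itself fix which function I computed.

```python
# Toy illustration: infinitely many functions agree with f on the one
# input actually given, so the actual output alone underdetermines which
# function was computed. All names here are illustrative.

def f(x):
    return sum(x)

def make_rival(n):
    """Build a rival that agrees with f on [2, 3] but diverges elsewhere."""
    def g(x):
        return sum(x) if x == [2, 3] else sum(x) + n
    return g

rivals = [make_rival(n) for n in range(1, 4)]  # three of infinitely many rivals

print(f([2, 3]), [g([2, 3]) for g in rivals])  # 5 [5, 5, 5]: all agree here
print(f([1, 1]), [g([1, 1]) for g in rivals])  # 2 [3, 4, 5]: they diverge elsewhere
```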

If this argument is right, then both non-normative and normative (Aristotelian) computational functionalism have a serious problem with abnormal mental states.

The normative computational functionalist has more resources, though. Could she perhaps say that, given that I embody an abnormal function f, I should compute f(a)? Maybe, but the basic question here is what grounds the fact that I embody the particular function that I embody. It's not the would-facts, but it's also not the should-facts, it seems, so what is it?

1 comment:

  1. I am not a fan of functionalism, but I wonder if you are not missing out on the massive parallelism of (biological) function in the brain by defining the consciousness-producing element as a single function that is either correct or incorrect in output.

    What about an alternative theory of ten thousand parallel functions, of which only ten percent are incorrect in a certain illness? In that case the 90% that is correct might still be producing consciousness (though I think they couldn't, really, as just math functions anyway).
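
A toy rendering of that proposal, using the comment's own numbers (ten thousand parallel units, ten percent faulty) and a placeholder per-unit function of my choosing, might look like this:

```python
import random

# Toy model of the commenter's proposal: many independent units, a fixed
# fraction of which malfunction. Purely illustrative; no claim about brains.

random.seed(0)
N = 10_000
faulty = set(random.sample(range(N), N // 10))  # 10% of units malfunction

def run_all(x):
    """Each healthy unit computes x + 1; each faulty unit computes x - 1."""
    return [x + 1 if i not in faulty else x - 1 for i in range(N)]

outputs = run_all(41)
correct = sum(1 for y in outputs if y == 42)
print(f"{correct / N:.0%} of units still compute correctly")  # prints 90%
```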
