Tuesday, March 10, 2009

Causal theories of mind

Suppose we have a causal theory of mind, like Lewis's. On this theory, states get their nature from the way they tend to causally interact. Now, suppose that Black, our faithful neurosurgeon with his neuroscope, observes Freddy's brain for all of Freddy's life. Moreover, Black has a script for how every neural event in Freddy's life is to go. As soon as there is a deviation from the script, Black blows up Freddy. Now, all of the counterfactuals about Freddy's neural states are destroyed. For instance, assuming it's not in Black's script that Freddy is ever visually aware of a giraffe, then instead of the counterfactual "Were a giraffe in Freddy's field of vision, Freddy would likely form such-and-such a belief," we have the counterfactual "Were a giraffe in Freddy's field of vision, Freddy would explode." But suppose that all goes according to script. Then the neurosurgeon doesn't interfere, and so Freddy thinks like everybody else, even though the counterfactuals are all wrong. If the causal theory defines mental states by counterfactuals about them, this refutes the causal theory.
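
Schematically, writing $\Box\!\!\rightarrow$ for the counterfactual conditional, the swap can be put like this (a rough gloss, where G is "a giraffe is in Freddy's field of vision", B is "Freddy forms such-and-such a belief", and E is "Freddy explodes"):

\[
\text{ordinary Freddy: } G \mathrel{\Box\!\!\rightarrow} B, \qquad \text{observed Freddy: } G \mathrel{\Box\!\!\rightarrow} E.
\]

As long as all goes according to script, the actual course of Freddy's neural events is the same in both cases; only the counterfactuals differ.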

OK, that was too quick. Maybe the idea is to define the states by the counterfactuals that would hold of Freddy in a "normal environment". But that won't do. Consider the mental state of seeing that things aren't normal: I bet we can't define that simply in terms of normal environments. Moreover, even supposing we can somehow abstract Freddy from his environment, we could make Black a part of Freddy. How? Well, make Black a little robot. Then give this robot one more function: it is also a very reliable artificial heart. Then implant the robot in Freddy's chest in place of his heart. It no longer makes sense to ask how Freddy would act in the absence of Black, since in the absence of Black, who is now Freddy's artificial heart, Freddy would be dead.

Maybe you think that Freddy is just a brain, so the heart is just part of the environment. Fine. Take some part of the brain that is important only for supplying nutrition to the rest of the brain, but that is computationally irrelevant. Replace it with Black (a robot that fulfills the functions of that part, but that would blow Freddy up were Freddy to depart from the script). And again we've got a problem.

We can perhaps even put Black more intimately inside Freddy. We could make Black be a mental process of Freddy's that monitors adherence to the script.

So the causal theory requires a counterfactual-free account of causal roles. The only option I see is an Aristotelian one. Hence the prospects for a causal theory of mind that uses only the ingredients of post-Aristotelian science are slim.

5 comments:

  1. "In this theory, states get their nature from the way they tend to causally interact."

    Why not analyze the "tendency" of a state as a bundle of dispositions, rather than a bundle of counterfactuals?

    Then it looks like your case is just a matter of masked or finkish dispositions, which is a problem only for those who think a disposition holds just if the relevant counterfactual does.
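
    Schematically, the simple conditional analysis that masked and finkish cases are standardly taken to refute is (a rough sketch, again writing $\Box\!\!\rightarrow$ for the counterfactual conditional):

    \[
    x \text{ is disposed to give response } M \text{ to stimulus } C \;\iff\; (Cx \mathrel{\Box\!\!\rightarrow} Mx).
    \]

    On that reading, scripted Freddy keeps his dispositions while Black blocks their manifestation, so the right-hand counterfactual fails even though the disposition remains.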

  2. I am not sure one can have any dispositions apart from an Aristotelian metaphysics. But let's put that aside.

    In any case, I am pretty sure one can't have the sorts of dispositions that are involved in causal theories of mind apart from Aristotelian metaphysics. The issue is with the mutual interdefinition of mental states (the Big Ramsey Sentence move). I am having a really hard time formulating the argument well. Here's my best attempt right now.

    Let's say that a pain has a disposition to trigger aversive behavior, and that a pain is implemented by a particular electrical pattern of activity P. Consider now the case where Black is a part of Freddy's brain, and suppose that, as a matter of fact, P, while having a disposition to trigger aversive behavior, does not in fact trigger it, because there are no beliefs about the source of the pain. Moreover, it is part of Black's script that aversive behavior is not triggered. P, granted, still has a disposition to trigger aversive behavior in a normal human brain given appropriate beliefs about the source of the pain. But it has no disposition to trigger aversive behavior in the brain it is in fact in. In that brain, the only untriggered disposition it has is a disposition to trigger an explosion.

    But what is relevant for the identification of the mental state is not the disposition that P has in a normal human brain, but the disposition that P has in the brain it is in fact in.

    For, we can imagine that human and Martian brains both sometimes exhibit patterns just like P. But in human brains, the P-like pattern triggers aversive behavior, while in Martian brains P-like patterns trigger behavior in favor of the intensification of that pattern. This is quite imaginable: the same pattern of electrical impulses that implements pain in one brain can implement pleasure in another. The pattern in the Martian brain has the dispositional property of causing aversive behavior when in a normal human brain. But that is not enough to make that pattern be a pain: for it to be a pain, it must have that dispositional property in the brain that it is in fact in. (It is tempting to say: "In the kind of brain that it is in fact in." But that requires natural kinds of brains, and normalcy conditions for these, and all that just pushes one towards Aristotelian metaphysics. Which is my point.) For, remember, on the best causal theories (e.g., Lewis's), all of the mental states are mutually interdefined through a big Ramsey sentence. Each gets its triggering conditions from the outputs of the other states. This kind of holistic move means that the dispositions have triggering conditions that involve the neural environment in which they are in fact found. But if that neural environment includes Black, then those conditions are not satisfied.
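
    For reference, here is the Ramsey-sentence construction, roughly sketched: take folk psychology $T[m_1,\dots,m_n]$, with mental-state terms $m_1,\dots,m_n$ and input/output vocabulary held fixed, and form the Ramsey sentence

    \[
    \exists x_1 \cdots \exists x_n\, T[x_1,\dots,x_n],
    \]

    defining each $m_i$ as the $i$th component of the unique $\langle x_1,\dots,x_n\rangle$ satisfying $T$. Each state thus gets its triggering conditions from the roles the other states actually occupy, which is why the relevant dispositions are keyed to the neural environment that P is in fact in.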

    Some of the above argument is formulated on the assumption that the electrical pattern P that implements a pain could exist in very different brains. This could be questioned. Suppose, perhaps, that the identity conditions for P are that it is an electrical pattern in such-and-such a brain. Presumably, the such-and-such brain is precisely the brain that P is in (if we have Aristotelian normalcy available, we could say "precisely the kind of brain that P is in"). But the brain that P is in includes Black. And if P is identified in this way, as being the state of a brain that has Black as part of it, then P lacks any untriggered dispositions except the disposition to trigger an explosion.

  3. Hi Alex, I'm wondering about all the counterfactuals like: in the absence of Black and in the presence of X (where X replaces Black's essential-for-Freddy's-life functions), were A the case, Freddy would do B... Would they not suffice for the causal theorist?

  4. I was wondering about that at one point, but the argument in my first comment, above, suggests this won't work. (See what I say about Martians.)

  5. Maybe; I don't like causal theories myself, preferring natural kinds and such. But normally there's no Black, so I see your Martian scenario as indicating that a causal theory won't have to worry about the possibility of Black: couldn't she bite that bullet and say that such unlikely monsters might indeed have minds very different from ours?
