Here's another argument against gear-minds. Start with the intuition that one cannot make a mind cease to exist without causally interacting with the individual whose mind it is, or with a part of that individual. Now imagine that the gears that the mind is made of are like those in this diagram. They have radiating spokes, and between the spokes is a hollow area. This reduces the weight of the gear while maintaining much of its strength.
Now imagine that an inflexible spike is simultaneously inserted into the middle of every gap between spokes, without any spike touching any gear. The gears can no longer turn more than, say, a sixth of a rotation, and any mental operation would surely require a larger turn of at least one wheel (we can stipulate this about our gear-person). So the introduction of the spikes leaves the individual incapable of any mental functioning: the gears can turn a little, but not enough to result in a mental operation. Moreover, no counterfactuals of the form "If input A were given, the individual would believe Q" are true any more. This means that if functionalism requires such counterfactuals, or the capability of mental functioning, as non-Aristotelian functionalisms are apt to require, there is no longer a mind. But because the spikes went between the spokes in such a way that no contact was made with any part of the individual, this violates the principle that one cannot make a mind cease to exist without causally interacting with the individual or a part thereof.
Here's another problem with the gear model of the mind: Presumably, a particular thought will result from a particular sequence of gear turns. Now, suppose that the gear turns are reversed by switching the energy input, resulting in the gears returning to their original position before the thought. What just happened? Did the machine "unthink" the thought? The concept seems incoherent.
Imagine we have a car whose transmission only engages when the gears reach a certain point in every turn. You could use the same argument to conclude that gear-cars are impossible, which is absurd.
Dualists, I'd like to know: does a split-brain patient have two souls? If not, then half the brain can function entirely without a soul. If so, then physical severing of the brain can sever the soul, and isn't it then natural, by induction, to suppose that death of the brain entails death of the soul?
I don't see the relevance of your remark to the precise argument given. The point here is that non-Aristotelian functionalism based on counterfactuals doesn't work. And I think it doesn't work for cars, either. My comment is really just a species of the well-known problems under the heading of "the conditional fallacy."
As for split-brain patients, why not suppose that both halves of the brain are connected to the soul, but that, because the patient's inputs are disintegrated, her behaviors are disintegrated too?
A. Pruss,
My first remark is relevant because it constitutes a reductio ad absurdum of the argument. You can't just ignore it.
Your response to my second point seems to implicitly admit that sensory input causes behavior in a way that doesn't need to use the soul (at least not on both sides, because if it did, the soul would act as an observable "connection" between the two). By Occam's Razor, then, we are justified in rejecting the existence of a soul.
I am not claiming here that the soul is needed to explain observable behavior.
As for gear-cars, it seems that the parallel of this claim may be false: "But because the spikes went between the spokes in such a way that no contact was made with any part of the individual, this violates the principle that one cannot make a mind cease to exist without causally interacting with the individual or a part thereof." There are objects that can be made to cease to exist without causally interacting with them. Suppose you make a picture by laying out a pattern made of rocks. I can then destroy the picture by filling in all the gaps with other rocks without touching the existing pattern. But it is implausible that minds are like that.
A. Pruss,
Well, I would argue that you ARE touching the existing pattern, because the "pattern" consists of the combination of rocks (in some locations) and vacancies (in others). When you fill the vacancies you are causally interacting with them.
Ultimately I think this is just a bunch of semantic gymnastics that is rather pointless and tells us nothing about what is true in the actual world. For that we should look to empirical evidence, not metaphysical hodgepodge.
The first response is pretty neat. My intuitions go against it, though. Why should inserting a spike that doesn't affect actual functioning affect whether consciousness occurs? What if I quickly move the spike in and out several times? Does consciousness flicker in and out as one does that, exactly in sync, even though the actual functioning is unaffected?
As for the actual world and the empirical stuff, I am here concerned with arguing against certain general theories about what minds and consciousness consist in. These theories purport to be fully general. For instance, a functionalist says that anything, in any possible world, functionally isomorphic to a functioning mind is a functioning mind.
A. Pruss,
And here is where my analogy comes in: you could make the same argument about cars (replace "consciousness" with "driveability over significant distances, steering, propulsion, etc."), and yet its conclusion would clearly be absurd: we accept the existence of gear-cars without fuss.
Looking back over the argument I gave, it is conditional. It says that if functionalism requires counterfactuals to define function, then it doesn't work. In the case of cars, we can define function in terms of designer intentions. So, yes, I could have said that it applies just fine to cars.
Of course, Millikan and others have tried to define function in evolutionary terms, without adverting to counterfactuals, Aristotelian stuff or designer intentions. But Rob Koons and I have argued that this fails. (E.g., see here.)