Monday, March 16, 2009

More on evolutionary theories of mind

According to evolutionary theories of mind, that we have evolved under certain selective pressures not only causally explains our mental functioning, but in fact is essential to that very functioning. Thus, if an exact duplicate of one of us came into existence at random, with no selection, it would not have a mind. The reason is that components of minds have to have proper functions, and proper functions in us are to be analyzed through natural selection.

Of course, there could be critters whose proper function is to be analyzed in terms of artificial selection, or even in terms of design by an agent. But as it happens, we are not critters like that, says the evolutionary theorist of mind. Nonetheless, it is important that the account of proper function be sufficiently flexible that artificial selection would also be able to give rise to proper function (after all, how would one draw the line between artificial and natural selection, when artificial selectors—say, human breeders—are apt themselves to be a part of nature?). Moreover, typically, the evolutionary analysis of proper function is made flexible enough that agential design gives rise to proper function as well. The basic idea—refined in the newer accounts to avoid counterexamples—is that it is a proper function of x to do A if and only if x-type entities tend to do A and x-type entities now exist in part because of having or having had this tendency. Thus, a horse's leg has running fast as one of its proper functions, because horses' legs do tend to run fast, and now exist in part because of having had this tendency. A guided missile has hitting the target as a proper function, because it tends to do that, and guided missiles exist in part because of having this tendency (if they didn't have this tendency, we wouldn't have made them).

Whatever the merits of these kinds of accounts of proper function, I think it is easy to see that such an account will not be satisfactory for philosophy of mind purposes. To see this, consider the following evolutionary scenario (a variant on one that the ancient atomists posited). Let w0 be the actual world. Now consider a world w1, where at t0 there is one super-powerful alien agent, Patricia, who has evolved in some way that will not concern us. Suddenly, at t0, a rich infinite variety of fully formed organisms comes into existence, completely at random, scattered throughout an infinity of planets. There are beings like dog-headed men, and beings like mammoths, and beings like modern humans, behaving just like normal humans. On the evolutionary theorist's view, these are all zombies. A minute later, at t1, Patricia instantaneously kills off all the organisms that don't match her selective criteria. Her selective criteria in w1 happen to be exactly the same ones that natural selection implemented in w0 by the year 2009. Poof, go the mammoth-like beings in w1, since natural selection killed them off by 2009 in w0. However, humanoids remain.

At t1, the survivors in w1 have proper functions according to the evolutionary theorist. Moreover, they have the exact same proper functions as their analogues in w0 do, since they were selected for on the basis of exactly the same selective principle. This was a case of artificial selection, granted, but still selection.

But it is absurd that a bunch of zombies would instantaneously become conscious simply because somebody killed off a whole bunch of other zombies. So the evolutionary account of proper function, as applied to the philosophy of mind, is absurd.

Maybe our evolutionary theorist will say: Well, they don't get proper functions immediately. Only the next generation gets them. Selection requires a generation to pass. However, she can only say this if she is willing to say that agency does not give rise to proper function. After all, agency may very well work by generating a lot of items, and then culling the ones that the agent does not want. Pace Plantinga, I do not think it is an absurd thing to say that agency does not give rise to proper function, but historically a lot of evolutionary accounts of proper function were crafted so as to allow for design-based proper functions. Moreover, it would seem absurd to suppose that a robot we directly made couldn't be intelligent at all but its immediate descendant could be.

I think the above shows that we shouldn't take agential design to generate proper function (at least not normally; maybe a supernatural agent could produce teleological facts, but that would be by doing something more than just designing in the way an engineer does), at least not if we want proper function to do something philosophically important for us. Nor do I think we should take evolution to generate proper function (my earlier post on this is particularly relevant here). Unless we are Aristotelians—taking proper function not to be reducible to non-teleological facts—we have no right to proper function. And thus if the philosophy of mind requires proper function, it requires Aristotelian metaphysics.

11 comments:

  1. Alex,

    I realize that this is getting on to be an old posting, but I am curious as to why you believe that "According to evolutionary theories of the mind, that we have evolved under certain selective pressures ... is essential to [our mental] functioning", or that "components of minds have to have proper functions".

    Can you provide any citations to support these two odd claims? I am unaware of any proponent of an evolutionary theory of mind who would endorse either of these positions. The premises of evolutionary theories of the mind are in fact more along the lines of the following:

    1. It is a contingent fact that all minds on Earth have evolved as a result of natural selection.

    2. Given that no one has yet managed to think of an alternative mechanism that would with a reasonable probability produce minds in the time available (circa 14 billion years), any other minds in the Universe will presumably have evolved under similar selection pressures (including, possibly, artificial selection).

    And evolutionary theories (of the mind or of anything else) accord no fundamental role to "proper function". Although some naturalist philosophers (Ruth Millikan being the obvious example) do seem to take the notion seriously, a statement like "components of minds have to have proper functions" would vastly overstate their commitment to it.

  2. According to Block, Dretske seems to be an example of the first.

  3. I guess that Dretske in "Naturalizing the Mind" could be construed as claiming that evolution by selection (of some kind) is essential for the development of mind, but that's not how I read him. He is, I think, making the far weaker claim that the mental properties of two physically indistinguishable organisms might be (and in fact probably would be) different if their histories were different.

    It is also evident that Dretske does believe that the mind can and should be analyzed in terms of proper function, but again I don't think he ever claims this to be a logical necessity. In particular, he rejects Swampman-like examples for two reasons: (a) because the lack of any identifiable proper function in spontaneously created objects makes any attempt at analysis of the mind or purpose of such an object meaningless, and (b) "from the improbability of the events (spontaneous materialization) that would have this result." [Naturalizing the Mind, p. 148]

    And of course Dretske's theory of the mind is just one of many possible evolutionary theories. If you were intent on constructing an argument against Dretske's particular idiosyncratic views, why not say this instead of attributing them to some generalized "evolutionary theory of mind"?

  4. It does seem that (a) makes Dretske an appropriate target of my argument. But I don't really care that much about Dretske.

    Rather, I care more generally about causal or functionalist theories of mind. And I think it's not hard to come up with good arguments to show that causal and functionalist theories of mind require the concept of proper function (finkish problems sink attempts that use counterfactuals). Moreover, I think one can argue that standard (i.e., non-Aristotelian) naturalism can only account for proper function in an evolutionary way. The net result of that line of thought is that causal or functionalist theories of mind are incompatible with standard naturalism. And that result is what is of interest to me.

  5. There is a world of difference between causal or functionalist theories of mind and "evolutionary theories of mind" (which form a much broader class).

    And standard naturalism cares little or nothing at all about proper function. It is a purely subjective notion, as the failure of every attempt (by Pollock, Millikan, Wright etc.) to analyze it in terms of objective criteria has amply demonstrated.

  6. Sure, there is a difference. But that is quite compatible with the existence of arguments that show that functional and causal theories require the concept of proper function, even if their adherents do not notice this.

    Two ways to see this are to consider cases of malfunction and the need for dispositional properties.

    One can still have a pain even though the parts of one's mind that are responsible for acting on the pain are malfunctioning. But if one defines pain in terms of actual causal connections, then malfunction threatens the definition. So, perhaps, one needs to define pain in terms of normal causal connections. At least, I suspect that I can come up with easy counterexamples to any alternative. :-)

    And a lot of the causal and functional properties are dispositional in nature. But unless we have something like proper function in play, the problems of finkishness aren't going to be overcome.

  7. Apologies for some confusion on my part. It is evident to me now that your target is functionalism in its broad sense, rather than just functionalists who appeal to some form of proper function.

    So your argument boils down to the claim that functionalists should accept your premises, even though many (probably almost all) do not. This is a weak basis for a reductio, unless you can show that your premises [as outlined in the first paragraph of your original post] follow from the premises of functionalism as a matter of logical necessity.

    I strongly doubt that this is so. In particular, I would dispute the presumed self-evidence of your claim that "One can still have a pain even though the parts of one's mind that are responsible for acting on the pain are malfunctioning." This is already begging the question against functionalism.

  8. Let me expand on the pain case. Presumably, a part of the story about pain is going to be that pain causes a disposition to try to escape the painful stimulus. Now, this disposition need not actually be triggered for there to be pain--i.e., one need not actually be trying to escape the painful stimulus (one might see that it's hopeless, or one might want to brave the pain for some reason). This disposition will have some complex triggering conditions, such as that one believes there is a way out, etc.

    Alright, now consider a case of a pain that causes the disposition to escape the painful stimulus. The disposition is not infallible--nothing neural is. All we can say is that there would be such-and-such a probability of its being triggered were the triggering conditions to obtain. Now, the first issue is: What does the probability of hypothetical non-malfunction have to be for the pain to exist? 40%? 70%? 90%? 95%? It does not seem likely to me that a numerical answer would exist. (Nor, if it did, would the number be empirically testable. Suppose we somewhat damage the subject's escape-from-painful-stimulus module, but leave everything else alone. Since we've left everything else alone, the subject will exhibit all the standard pain behaviors including claiming to be in pain, except that the propensity to escape from the painful stimulus is decreased. There will be, I suspect, no empirical way to tell whether the setup constitutes pain or not.)

    But let's suppose an exact number is possible. Here is the next problem. We are now saying that x is a pain only if x causes a disposition in the escape module that has reliability, say, 0.75. In which environment is this reliability of the escape module measured? Obviously, the reliability of any physical system is subject to many environmental factors, such as temperature, electromagnetic interference, etc. Are we talking of the reliability of the module in the subject's actual environment, or the reliability of the module in a normal environment?

    Suppose in the normal environment. Then we get problems about defining the concept of a normal environment. I think our best bets will be an appeal to Aristotelianism, design or evolution (the normal environment is the one for which a module evolved). But then we are back to the evolutionary theories of mind if we're naturalists (and hence reject the Aristotelian or design views).

    Suppose in the actual environment. Then I've got the functionalist. :-) For then I use a Frankfurt example on the functionalist.


    A different case would be the case of mathematical beliefs caused by malfunctioning arithmetical modules. That really does happen. People do form false beliefs about simple additions and multiplications.

  9. There are of course many variants of functionalism, and there may well be a few variants that define mental states in terms of "dispositions". But bare functionalism defines states in terms of causal relations, not dispositions. And in some versions of functionalism, particularly machine-state functionalism, these causal relations are deterministic.

    So, for example, "pain" in machine-state functionalism is identified with a large class of mental states, each state having well-defined causal effects on other mental states as well as behavior. In some of its manifestations, pain will cause avoidance behavior, or swearing, or a lack of concentration, or simply just the memory "that was painful". (Talk of "dispositions" constitutes a heuristic way of describing this class of mental states, not a definition of any one such state.)

    Only a small minority of pain states will cause all or most of the possible pain symptoms, but if a mental state produces not a single one of these symptoms (not even the memory of pain), then according to functionalism no pain has occurred. Given that ipso facto there is no empirical way to refute this claim, it is a perfectly reasonable premise for functionalists to assume.
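
    To make this picture concrete, here is a minimal toy sketch of such a machine table (the states, inputs, and outputs are invented purely for illustration; no actual functionalist proposal is anywhere near this simple):

    # Toy deterministic machine table: (current state, input) -> (next state, output).
    # Every name here is a made-up placeholder, not anyone's actual theory.
    TRANSITIONS = {
        ("calm",   "tissue_damage"):  ("pain_A", "swearing"),
        ("pain_A", "exit_available"): ("calm",   "avoidance_behavior"),
        ("pain_A", "no_exit"):        ("pain_B", "memory: that was painful"),
        ("pain_B", "stimulus_ends"):  ("calm",   None),
    }

    def step(state, stimulus):
        # Deterministic: a given state plus a given input fixes both the
        # successor state and the behavioral output.
        return TRANSITIONS[(state, stimulus)]

    # On this picture "pain" names the class {pain_A, pain_B}: states individuated
    # entirely by their place in the table, i.e. by what causes them and what they
    # in turn cause. A state that produced none of these effects would not count
    # as a pain at all.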

    I should add further that one possible elaboration of functionalism is to define mental states in terms of their effect on or correlation with brain states. Then pain, for example, simply does not exist in the absence of a certain type of brain state.

  10. First of all, we have little reason to suppose that nature contains any deterministic connections.

    Here is a little reductio. Suppose actual mental effects are always required for the characterization of a mental state. Now, actual effects in a brain always take some minimum amount of time (take the minimum spacing between subsystems, and divide by c). So, if mental state S requires as an effect some mental state T (or, a disjunction of T1, T2, ..., Tn), we get an endless forward regress. Therefore, it is not possible to have a mental state unless you live forever. But surely this isn't a good argument for eternal life.

    So, some mental state can occur without actually having any other mental effects. Maybe it requires environmental effects, though? However, all our connections to the environment are fallible, and a mental state would surely be the same even if the environmental effect failed to come off.

    So, it must be the case that some mental state can occur without that state having any further effects. This would be a state for which one can give a sufficient condition in terms of causes. Now maybe the functionalist can accept that possibility. But I doubt it. I doubt there will be any state which will be sufficiently characterizable in terms of causes alone.

    Here is a second little reductio. Whether I feel a pain at t does not depend on what happens after t. That I feel a pain at t is compatible with my being annihilated a moment later. (What is a "moment"? Let's stipulate it to be d/c where d is the minimum distance between mental systems.) But if so, then I can feel a pain without the pain having any effect.

    This reductio one can get out of by saying that pain is not one of the causally defined states. Rather, a pain occurs whenever certain causal connections between more primitive states occur. A consequence of this will be that none of the conscious states will end up being defined by their causal connections.


    Here is another argument that dispositions are needed. Plainly, I have plenty of beliefs that are currently causally inefficacious. But they are still mental states of mine, albeit unconscious ones. Maybe the claim is that every belief I now have has had some effect in the past. Could be, but those effects could be too insignificant to be sufficient to characterize the belief. I learn something. I say: "OK, interesting, better store that up for future use." And so off it goes into a quiescent belief or memory. And that quiescent belief or memory never really has any impact, because maybe I never come back to it.

  11. "First of all, we have little reason to suppose that nature contains any deterministic connections."

    It is a foundational premise of machine-state functionalism that the mind can be analyzed as a deterministic finite-state machine. There is little force in objecting to a philosophical position just because you personally see scant reason to suppose it is true. Philosophers believe many things that we have little reason to suppose are true.

    In any case, there are perfectly good reasons for believing that the world might be deterministic. Newtonian physics in its fully-developed 19th-century form was deterministic, and there exist deterministic interpretations of quantum mechanics (de Broglie-Bohm theories) even now.

    "Here is a little reductio. ... actual effects in a brain always take some minimum amount of time ... So, if mental state S requires as an effect some mental state T ... we get an endless forward regress."

    If your objection here is that it is not possible to attribute a fixed mental state to the mind at any precise instant in time, then no one who believes in quantum mechanical uncertainty would lose any sleep over that claim. If, on the other hand, your objection is that the characterization of mental states in machine-state functionalism is apparently circular or ungrounded - in the sense that state T1 is identified by the fact that it produces behavior X and is followed by state T2, but we can only identify T2 by the fact that it produces behavior Y and is followed by state T3, and so on - then I agree that functionalists need to be very careful in explaining the explanatory scope of the causal relations.

    However, I should point out that the regress would not be "endless", as the mind is presumed to be a finite-state machine. In the simplest toy models, which involve a 2-state Turing machine, it can be established very quickly which state the machine is in. For more complicated machines it is of course considerably more difficult to use "behavioral" responses (outputs) to do this, and in the case of a Turing machine as complicated as the human brain it would take enormously many human lifetimes. But this obstacle is an empirical one, not a logical one.
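
    As a toy version of the 2-state case (the machine and its table are invented here purely for illustration), the point is just that when the table is small, observed outputs suffice to pin down the hidden state:

    # A made-up 2-state machine whose states yield different outputs on the same
    # input, so a single observation settles which state the machine was in.
    TABLE = {
        ("S0", "poke"): ("S1", "yelp"),
        ("S1", "poke"): ("S0", "silence"),
    }

    def infer_state(observed_output, stimulus="poke"):
        # Search the table for the unique state that would have produced this
        # output on the given input; for a brain-sized table the same search
        # would be astronomically harder, but not different in kind.
        for (state, inp), (_next, output) in TABLE.items():
            if inp == stimulus and output == observed_output:
                return state
        return None  # output not diagnostic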

    In any case, it is perfectly consistent with functionalism that particular mental states coincide with particular brain states. In such a case the two states would occur simultaneously, and if (as in principle they might be) the brain states are observable by means of a brain scan, then any problems with "infinite regress" of the causal relations or the patient dying immediately after feeling pain would disappear. The behavioral response used to identify the mental state would then be the properties of the brain scan, and particular brain scan "states" would persist exactly as long as the correlated mental states.
