Comments on Alexander Pruss's Blog: "More on evolutionary theories of mind"

webc (2009-04-09 11:26):

"First of all, we have little reason to suppose that nature contains any deterministic connections."

It is a foundational premise of machine-state functionalism that the mind can be analyzed as a deterministic finite-state machine. There is little force in objecting to a philosophical position just because you personally see scant reason to suppose it is true. Philosophers believe many things that we have little reason to suppose are true.

In any case, there are perfectly good reasons for believing that the world might be deterministic. Newtonian physics in its fully-developed 19th-century form was deterministic, and there exist deterministic interpretations of quantum mechanics (de Broglie-Bohm theories) even now.

"Here is a little reductio. ... actual effects in a brain always take some minimum amount of time ... So, if mental state S requires as an effect some mental state T ... we get an endless forward regress."

If your objection here is that it is not possible to attribute a fixed mental state to the mind at any precise instant in time, then no one who believes in quantum mechanical uncertainty would lose any sleep over that claim. If, on the other hand, your objection is that the characterization of mental states in machine-state functionalism is apparently circular or ungrounded - in the sense that state T1 is identified by the fact that it produces behavior X and is followed by state T2, but we can only identify T2 by the fact that it produces behavior Y and is followed by state T3, and so on - then I agree that functionalists need to be very careful in spelling out the explanatory scope of the causal relations.

However, I should point out that the regress would not be "endless", as the mind is presumed to be a finite-state machine. In the simplest toy models, which involve a 2-state Turing machine, it can be established very quickly which state the machine is in (see the sketch following this comment). For more complicated machines it is of course considerably more difficult to use "behavioral" responses (outputs) to do this, and in the case of a Turing machine as complicated as the human brain it would take an enormous number of human lifetimes. But this obstacle is an empirical one, not a logical one.

In any case, it is perfectly consistent with functionalism that particular mental states coincide with particular brain states. In such a case the two states would occur simultaneously, and if (as in principle they might be) the brain states are observable by means of a brain scan, then any problems with "infinite regress" of the causal relations or the patient dying immediately after feeling pain would disappear. The behavioral response used to identify the mental state would then be the properties of the brain scan, and particular brain scan "states" would persist exactly as long as the correlated mental states.
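To make the two-state point concrete, here is a minimal sketch in Python. The machine, its states, inputs, and outputs are invented for illustration (they are not part of the original discussion); the point is only that a hidden state of a simple finite-state machine can be read off from its behavioral outputs.

```python
# Toy two-state machine: transition table maps (state, input) -> (next_state, output).
# States "A"/"B", inputs 0/1, and the output labels are made up for this example.
TRANSITIONS = {
    ("A", 0): ("A", "stay"),
    ("A", 1): ("B", "switch"),
    ("B", 0): ("B", "hold"),
    ("B", 1): ("A", "reset"),
}

def step(state, symbol):
    """Advance the machine one step; return (next_state, output)."""
    return TRANSITIONS[(state, symbol)]

def identify_state(observed_output):
    """Infer the hidden state from the output produced on input 0.

    Because the two states respond differently to the same input,
    a single observation suffices for this toy machine."""
    return "A" if observed_output == "stay" else "B"

# Probe a machine whose current state is unknown to the observer.
hidden_state = "B"
_, output = step(hidden_state, 0)
assert identify_state(output) == hidden_state
```

For a machine anywhere near the complexity attributed to the brain, identifying the state this way would take astronomically many observations, which is the empirical (not logical) obstacle noted in the comment above.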
Alexander R Pruss (2009-04-07 11:49):

First of all, we have little reason to suppose that nature contains any deterministic connections.

Here is a little reductio. Suppose actual mental effects are always required for the characterization of a mental state. Now, actual effects in a brain always take some minimum amount of time (take the minimum spacing between subsystems, and divide by c). So, if mental state S requires as an effect some mental state T (or a disjunction of T1, T2, ..., Tn), we get an endless forward regress. Therefore, it is not possible to have a mental state unless you live forever. But surely this isn't a good argument for eternal life.

So, some mental state can occur without actually having any other mental effects. Maybe it requires environmental effects, though? However, all our connections to the environment are fallible, and a mental state would surely be the same even if the environmental effect failed to come off.

So, it must be the case that some mental state can occur without that state having any further effects. This would be a state for which one can give a sufficient condition in terms of causes. Now maybe the functionalist can accept that possibility. But I doubt it. I doubt there will be any state that is sufficiently characterizable in terms of causes alone.

Here is a second little reductio. Whether I feel a pain at t does not depend on what happens after t. That I feel a pain at t is compatible with my being annihilated a moment later. (What is a "moment"? Let's stipulate it to be d/c, where d is the minimum distance between mental systems.) But if so, then I can feel a pain without the pain having any effect.

One can get out of this reductio by saying that pain is not one of the causally defined states. Rather, a pain occurs whenever certain causal connections between more primitive states occur. A consequence of this will be that none of the conscious states will end up being defined by their causal connections.

Here is another argument that dispositions are needed. Plainly, I have plenty of beliefs that are currently causally inefficacious. But they are still mental states of mine, albeit unconscious ones. Maybe the claim is that every belief I now have has had some effect in the past. Could be, but those effects could be too insignificant to be sufficient to characterize the belief. I learn something. I say: "OK, interesting, better store that up for future use." And so off it goes into a quiescent belief or memory.
And that quiescent belief or memory never really has any impact, because maybe I never come back to it.

webc (2009-04-07 11:30):

There are of course many variants of functionalism, and there may well be a few variants that define mental states in terms of "dispositions". But bare functionalism defines states in terms of causal relations, not dispositions. And in some versions of functionalism, particularly machine-state functionalism, these causal relations are deterministic.

So, for example, "pain" in machine-state functionalism is identified with a large class of mental states, each state having well-defined causal effects on other mental states as well as behavior. In some of its manifestations, pain will cause avoidance behavior, or swearing, or a lack of concentration, or simply just the memory "that was painful". (Talk of "dispositions" constitutes a heuristic way of describing this class of mental states, not a definition of any one such state.)

Only a small minority of pain states will cause all or most of the possible pain symptoms, but if a mental state produces not a single one of these symptoms (not even the memory of pain), then according to functionalism no pain has occurred. Given that ipso facto there is no empirical way to refute this claim, it is a perfectly reasonable premise for functionalists to assume.

I should add further that one possible elaboration of functionalism is to define mental states in terms of their effect on or correlation with brain states. Then pain, for example, simply does not exist in the absence of a certain type of brain state.

Alexander R Pruss (2009-04-05 08:29):

Let me expand on the pain case. Presumably, a part of the story about pain is going to be that pain causes a disposition to try to escape the painful stimulus. Now, this disposition need not actually be triggered for there to be pain--i.e., one need not actually be trying to escape the painful stimulus (one might see that it's hopeless, or one might want to brave the pain for some reason). This disposition will have some complex triggering conditions, such as that one believes there is a way out, etc.

Alright, now consider a case of a pain that causes the disposition to escape the painful stimulus. The disposition is not infallible--nothing neural is. All we can say is that there would be such-and-such a probability of its being triggered were the triggering conditions to obtain. Now, the first issue is: What does the probability of hypothetical non-malfunction have to be for the pain to exist? 40%? 70%? 90%? 95%? It does not seem likely to me that a numerical answer would exist. (Nor, if it did, would the number be empirically testable. Suppose we somewhat damage the subject's escape-from-painful-stimulus module, but leave everything else alone. Since we've left everything else alone, the subject will exhibit all the standard pain behaviors, including claiming to be in pain, except that the propensity to escape from the painful stimulus is decreased. There will be, I suspect, no empirical way to tell whether the setup constitutes pain or not.)

But let's suppose an exact number is possible. Here is the next problem. We are now saying that x is a pain only if x causes a disposition in the escape module that has reliability, say, 0.75. This reliability of the escape module: in which environment is it measured? (See the sketch after this comment.) Obviously, the reliability of any physical system is subject to many environmental factors, such as temperature, electromagnetic interference, etc. Are we talking of the reliability of the module in the subject's actual environment, or the reliability of the module in a normal environment?

Suppose in the normal environment. Then we get problems about defining the concept of a normal environment. I think our best bets will be an appeal to Aristotelianism, design, or evolution (the normal environment is the one for which a module evolved). But then we are back to the evolutionary theories of mind if we're naturalists (and hence reject the Aristotelian or design views).

Suppose in the actual environment. Then I've got the functionalist. :-) For then I use a Frankfurt example on the functionalist (http://alexanderpruss.blogspot.com/2009/03/causal-theories-of-mind.html).

A different case would be the case of mathematical beliefs caused by malfunctioning arithmetical modules. That really does happen. People do form false beliefs about simple additions and multiplications.
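As a purely illustrative toy (the function names, the sampling procedure, and the use of 0.75 as a hard cutoff are glosses added for illustration, not anything proposed by either party), here is how such a reliability threshold looks when written down; the point is how much the verdict hangs on the chosen number and the environment in which reliability is estimated.

```python
import random

# Illustrative threshold taken from the example above: a state counts as
# "pain" only if the escape disposition it causes is at least this reliable.
PAIN_RELIABILITY_CUTOFF = 0.75

def escape_module_fires(trigger_probability):
    """Return True if the (fallible) escape disposition fires on one occasion
    in which its triggering conditions obtain."""
    return random.random() < trigger_probability

def estimated_reliability(trigger_probability, trials=10_000):
    """Estimate the disposition's reliability by repeated hypothetical trials."""
    fires = sum(escape_module_fires(trigger_probability) for _ in range(trials))
    return fires / trials

def counts_as_pain(trigger_probability):
    """On the toy definition, whether the state is 'pain' hangs entirely on the cutoff."""
    return estimated_reliability(trigger_probability) >= PAIN_RELIABILITY_CUTOFF

# An undamaged module (0.9) clears the cutoff; a slightly damaged one (0.7)
# does not, even though nothing else about the subject has changed.
print(counts_as_pain(0.9))  # True (with overwhelming probability)
print(counts_as_pain(0.7))  # False (with overwhelming probability)
```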
webc (2009-04-05 05:41):

Apologies for some confusion on my part. It is evident to me now that your target is functionalism in its broad sense, rather than just functionalists who appeal to some form of proper function.

So your argument boils down to the claim that functionalists should accept your premises, even though many (probably almost all) do not. This is a weak basis for a reductio, unless you can show that your premises [as outlined in the first paragraph of your original post] follow from the premises of functionalism as a matter of logical necessity.

I strongly doubt that this is so. In particular, I would dispute the presumed self-evidence of your claim that "One can still have a pain even though the parts of one's mind that are responsible for acting on the pain are malfunctioning." This is already begging the question against functionalism.

Alexander R Pruss (2009-04-03 09:39):

Sure, there is a difference. But that is quite compatible with the existence of arguments that show that functional and causal theories require the concept of proper function, even if their adherents do not notice this.

Two ways to see this are to consider cases of malfunction and the need for dispositional properties.

One can still have a pain even though the parts of one's mind that are responsible for acting on the pain are malfunctioning. But if one defines pain in terms of actual causal connections, then malfunction threatens the definition.
So, perhaps, one needs to define pain in terms of normal causal connections. At least, I suspect that I can come up with easy counterexamples to any alternative. :-)

And a lot of the causal and functional properties are dispositional in nature. But unless we have something like proper function in play, the problems of finkishness aren't going to be overcome.

webc (2009-04-03 09:32):

There is a world of difference between causal or functionalist theories of mind and "evolutionary theories of mind" (which form a much broader class).

And standard naturalism cares little or nothing at all about proper function. It is a purely subjective notion, as the failure of every attempt (by Pollock, Millikan, Wright, etc.) to analyze it in terms of objective criteria has amply demonstrated.

Alexander R Pruss (2009-04-01 11:41):

It does seem that (a) makes Dretske an appropriate target of my argument. But I don't really care that much about Dretske.

Rather, I care more generally about causal or functionalist theories of mind. And I think it's not hard to come up with good arguments to show that causal and functionalist theories of mind require the concept of proper function (finkish problems sink attempts that use counterfactuals). Moreover, I think one can argue that standard (i.e., non-Aristotelian) naturalism can only account for proper function in an evolutionary way. The net result of that line of thought is that causal or functionalist theories of mind are incompatible with standard naturalism. And that result is what is of interest to me.

webc (2009-04-01 11:29):

I guess that Dretske in "Naturalizing the Mind" could be construed as claiming that evolution by selection (of some kind) is essential for the development of mind, but that's not how I read him. He is, I think, making the far weaker claim that the mental properties of two physically indistinguishable organisms might be (and in fact probably would be) different if their histories were different.

It is also evident that Dretske does believe that the mind can and should be analyzed in terms of proper function, but again I don't think he ever claims this to be a logical necessity. In particular, he rejects Swampman-like examples for two reasons: (a) because the lack of any identifiable proper function in spontaneously created objects makes any attempt at analysis of the mind or purpose of such an object meaningless, and (b) "from the improbability of the events (spontaneous materialization) that would have this result." [Naturalizing the Mind, p. 148]

And of course Dretske's theory of the mind is just one of many possible evolutionary theories.
If you were intent on constructing an argument against Dretske's particular idiosyncratic views, why not say this instead of attributing them to some generalized "evolutionary theory of mind"?

Alexander R Pruss (2009-03-30 09:13):

According to Block (http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/representing.html), Dretske seems to be an example of the first.

webc (2009-03-30 05:13):

Alex,

I realize that this is getting on to be an old posting, but I am curious as to why you believe that "According to evolutionary theories of the mind, that we have evolved under certain selective pressures ... is essential to [our mental] functioning", or that "components of minds have to have proper functions".

Can you provide any citations to support these two odd claims? I am unaware of any proponent of an evolutionary theory of mind who would endorse either of these positions. The premises of evolutionary theories of the mind are in fact more along the lines of the following:

1. It is a contingent fact that all minds on Earth have evolved as a result of natural selection.

2. Given that no one has yet managed to think of an alternative mechanism that would with a reasonable probability produce minds in the time available (circa 14 billion years), any other minds in the Universe will presumably have evolved under similar selection pressures (including, possibly, artificial selection).

And evolutionary theories (of the mind or of anything else) accord no fundamental role to "proper function". Although some naturalist philosophers (Ruth Millikan being the obvious example) do seem to take the notion seriously, a statement like "components of minds have to have proper functions" would vastly overstate their commitment to it.