Showing posts with label natural selection. Show all posts

Thursday, August 21, 2014

A possible limitation of explicitly probabilistic reasoning

Bayesian reasoners will have their credences converge to the truth at different rates depending on their prior probabilities. But it's not as if there is one set of prior probabilities that will always lead to optimal convergence. Rather, some sets of priors lead to truth faster in some worlds and some lead to truth faster in others. This is trivially obvious: for any world w, one can have priors that are uniquely tuned for w, say by assigning a probability extremely close to 1 to every contingent proposition true at w and a probability extremely close to 0 to every contingent proposition false at w. Of course, there is the question of how one could get to have such priors, but one might just have got lucky!

So, a Bayesian reasoner's credence converges to the truth at rates depending on her priors and what kind of a world she is in. For instance, if she is in a very messy world, she will get to the truth faster if she has lower prior credences for elegant universal generalizations, while if she is in a more elegant world (like ours!), higher prior credences for such generalizations will lead her to truth more readily.
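The point can be illustrated with a toy computation (my own sketch, not in the original post): two Bayesian agents with different Beta priors update on the same coin-flip evidence, and the agent whose prior happens to be tuned to the true bias ends up closer to the truth after the same data. All numbers here are made up for illustration.

```python
def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of a Beta(alpha, beta) prior after observing
    heads successes and tails failures (Beta-Bernoulli conjugacy)."""
    return (alpha + heads) / (alpha + beta + heads + tails)

true_bias = 0.9  # an "elegant" world where the generalization mostly holds

# The same evidence for both agents: 8 heads, 2 tails.
data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
heads, tails = sum(data), len(data) - sum(data)

# Agent A's prior is tuned toward the true bias (prior mean 0.9);
# Agent B's prior is uniform (prior mean 0.5).
mean_a = posterior_mean(9, 1, heads, tails)
mean_b = posterior_mean(1, 1, heads, tails)

# Agent A's posterior mean is closer to the true bias after the same evidence.
print(abs(mean_a - true_bias) < abs(mean_b - true_bias))  # prints True
```

Of course, had the true bias been 0.5 (a "messier" world), the uniform prior would have won the same comparison: which priors converge faster depends on the world.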

Now suppose that our ordinary rational ampliative reasoning processes are not explicitly probabilistic but can be modeled to a good approximation by a Bayesian system with a prior probability assignment P0. It is tempting to think that we would then do better to reason explicitly and probabilistically according to this Bayesian system. That may be the case. But unless we have a good guess as to what the prior probability assignment P0 is, this is not an option: we cannot ditch our native reasoning processes in favor of the Bayesian system without a good guess as to what the priors P0 are. And we have no direct introspective access to the priors P0 implicit in our reasoning processes, while our indirect access to them (e.g., through psychological experiments about people's responses to evidence) is pretty crude and inaccurate.

Imagine now that, due to God and/or natural selection, we have ampliative reasoning processes that are tuned for a world like ours. These processes can be modeled by Bayesian reasoning with priors P0, which priors P0 would then be tuned well for a world like ours. But it may be that our best informed guess Q0 as to the priors will be much more poorly tuned to our world than the priors P0 actually implicit in our reasoning. In that case, switching from our ordinary reasoning processes to something explicitly probabilistic will throw away the information contained in the implicit priors P0, information placed there by the divine and/or evolutionary tuning process.

If this is right, then sometimes or often when trying to do a formal probabilistic reconstruction of an intuitive inductive argument we will do less well than simply by sticking to the inductive argument. For our ordinary intuitive inductive reasoning is, on this hypothesis, tuned well for our world. But our probabilistic reconstruction may not benefit from this tuning.

On this tuning hypothesis, experimental philosophy is actually a promising route for epistemological research. For how people reason carries implicit information as to which priors fit our world well.

Monday, March 16, 2009

More on evolutionary theories of mind

According to evolutionary theories of mind, that we have evolved under certain selective pressures not only causally explains our mental functioning, but in fact is essential to that very functioning. Thus, if an exact duplicate of one of us came into existence at random, with no selection, it would not have a mind. The reason is that components of minds have to have proper functions, and proper functions in us are to be analyzed through natural selection.

Of course, there could be critters whose proper function is to be analyzed in terms of artificial selection, or even in terms of design by an agent. But as it happens, we are not critters like that, says the evolutionary theorist of mind. Nonetheless, it is important that the account of proper function be sufficiently flexible that artificial selection would also be able to give rise to proper function (after all, how would one draw the line between artificial and natural selection, when artificial selectors—say, human breeders—are apt themselves to be a part of nature?). Moreover, typically, the evolutionary analysis of proper function is made flexible enough that agential design gives rise to proper function as well. The basic idea—which is more sophisticated in the newer accounts to avoid counterexamples—is that it is a proper function of x to do A if and only if x-type entities tend to do A and x-type entities now exist in part because of having or having had this tendency. Thus, a horse's leg has running fast as one of its proper functions, because horses' legs do tend to run fast, and now exist in part because of having had this tendency. A guided missile has hitting the target as a proper function, because it tends to do that, and guided missiles exist in part because of having this tendency (if they didn't have this tendency, we wouldn't have made them).
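The basic idea can be written schematically (the symbolization is mine, not part of the accounts under discussion):

```latex
% PF(x,A): doing A is a proper function of x
% T(x): the type of entity that x is
% Tend(T,A): T-type entities tend to do A
% Exp(T,P): T-type entities now exist in part because P held
PF(x, A) \iff \mathrm{Tend}(T(x), A) \wedge
              \mathrm{Exp}\big(T(x),\, \mathrm{Tend}(T(x), A)\big)
```

The horse's leg satisfies both conjuncts via natural selection; the guided missile satisfies them via design, which is why such accounts typically count design-based functions as proper functions.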

Whatever the merits of these kinds of accounts of proper function, I think it is easy to see that such an account will not be satisfactory for philosophy of mind purposes. To see this, consider the following evolutionary scenario (a variant on one that the ancient atomists posited). Let w0 be the actual world. Now consider a world w1, where at t0 there is one super-powerful alien agent, Patricia, and she has evolved in some way that will not concern us. Suddenly, at t0, a rich infinite variety of fully formed organisms comes into existence, completely at random, scattered throughout an infinity of planets. There are beings like dog-headed men, and beings like mammoths, and beings like modern humans, behaving just like normal humans. On the evolutionary theorist's view, these are all zombies. A minute later, at t1, Patricia instantaneously kills off all the organisms that don't match her selective criteria. Her selective criteria in w1 happen to be exactly the same ones that natural selection implemented in w0 by the year 2009. Poof, go the mammoth-like beings in w1, since natural selection killed them off by 2009 in w0. However, humanoids remain.

At t1, the survivors in w1 have proper functions according to the evolutionary theorist. Moreover, they have the exact same proper functions as their analogues in w0 do, since they were selected for on the basis of exactly the same selective principle. This was a case of artificial selection, granted, but still selection.

But it is absurd that a bunch of zombies would instantaneously become conscious simply because somebody killed off a whole bunch of other zombies. So the evolutionary account of proper function, as applied to the philosophy of mind, is absurd.

Maybe our evolutionary theorist will say: Well, they don't get proper functions immediately. Only the next generation gets them. Selection requires a generation to pass. However, she can only say this if she is willing to say that agency does not give rise to proper function. After all, agency may very well work by generating a lot of items, and then culling the ones that the agent does not want. Pace Plantinga, I do not think it is an absurd thing to say that agency does not give rise to proper function, but historically a lot of evolutionary accounts of proper function were crafted so as to allow for design-based proper functions. Moreover, it would seem absurd to suppose that a robot we directly made couldn't be intelligent at all but its immediate descendant could be.

I think the above shows that we shouldn't take agential design to generate proper function (at least not normally; maybe a supernatural agent could produce teleological facts, but that would be by doing something more than just designing in the way an engineer does), at least not if we want proper function to do something philosophically important for us. Nor do I think we should take evolution to generate proper function (my earlier post on this is particularly relevant here). Unless we are Aristotelians—taking proper function not to be reducible to non-teleological facts—we have no right to proper function. And thus if the philosophy of mind requires proper function, it requires Aristotelian metaphysics.

Thursday, March 12, 2009

Evolutionary theories of mind

An evolutionary theory of mind is not just a theory that minds have in fact evolved. Rather, it is a theory that it is essential to mindedness that one be the product of selection (natural or artificial). For instance, one may be an evolutionary theorist of mind because one thinks that intentionality must be understood in evolutionary terms, or because one is a functionalist and thinks that the notion of "proper function" that functionalism needs must be grounded in selective facts, or because one thinks that mental states have normative conditions (e.g., "neural state n is a believing that p only if it is the case that n should occur only if p"). An evolutionary theorist of mind is already willing to bite quite a bullet. Take Davidson's swampman: lightning strikes a swamp, and an exact physical duplicate of Davidson by chance comes out. Since there was no selection, the swampman is not a person, though he is exactly like Davidson physically. A dualist would not be surprised by this, since the swampman could differ from Davidson in respect of soul; but the evolutionary theorist of mind does not believe in souls, and bites the bullet on the swampman anyway.

Here is an argument against evolutionary theories of mind. As it stands it is an argument against theories on which selection is metaphysically necessary for mindedness, though one might be able to do more with the argument. Moreover, the argument may well apply to other evolutionary analyses of concepts.

The argument is a reductio. Start with the following two theses:

  1. (Evolutionary theory of mind.) If none of the physical entities existing in a spacetime region U are the products of selection, there are no physical minds in U.
  2. (Almost global supervenience of physical minds.) Suppose worlds w1 and w2 are exact physical duplicates, except in an impotent region R of spacetime. Then w1 contains a physical mind outside of R if and only if w2 contains a physical mind outside of R.
Here, a "physical mind" is a mind entirely constituted or implemented by a purely physical system. A region R of spacetime is "impotent" provided that no event or substance in R can affect anything outside R.

Now for our clever construction. Imagine a world w1 which contains a planet much like earth, where history looks pretty much like it looks on earth, and which also contains a Great Grazing Ground (GGG), which is an infinite (we only need: potentially infinite) impotent region. Moreover, by a strange law of nature, or maybe the activity of some swampaliens, whenever an organism on earth is about to die, it gets hyperspatially and instantaneously transported to the GGG, and a fake corpse, which is an exact duplicate of what its real corpse would have been, gets instantaneously put in its place on earth. (I will call it "earth" for convenience but I shan't worry about its numerical identity with our world's earth.) Furthermore, there is no life or intelligence outside of earth and the GGG. Moreover, the organism dies as soon as it arrives in the GGG.

Our world's earth has minds, and the earth in w1 has a history that is just about the same. The only difference is that all the deaths of organisms occur not on earth but in the GGG, because they get transported there before death. But this does not affect any selective facts. Thus, the evolutionary theorist of mind should say that the situation in w1's earth is similar enough to that on our earth that we should say that w1's earth contains minds.[note 1]

The hard work is now done. For imagine a world w2 that is exactly like w1 outside of the GGG, but inside the GGG, immortal and ever-reproducing aliens rescue each organism on arrival, fixing it so it doesn't die, and even make the organism capable of reproduction again. Furthermore, they do the same for the organism's descendants in the GGG. The GGG is a place of infinite (at least potentially) resources, with everybody having immortality and reproduction, with the aliens shifting organisms further and further out to ensure their survival.

Now in w2, there is no selection: Nobody ever dies or ceases to reproduce.[note 2] Thus, by (1), there are no physical minds outside the GGG in w2—all the earthly critters are zombies. But by (2) there are physical minds outside the GGG in w2, because w2 is an exact duplicate of w1 outside of the GGG. Hence we have absurdity.
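The shape of the reductio can be summarized schematically (symbols mine: S(w) for "there is selection outside the GGG in w", M(w) for "there are physical minds outside the GGG in w", and w1 ≈ w2 for "exact physical duplicates outside the impotent GGG"):

```latex
\begin{align*}
&(1)\quad \neg S(w_2) \rightarrow \neg M(w_2)
  && \text{evolutionary theory of mind}\\
&(2)\quad w_1 \approx w_2 \rightarrow \big(M(w_1) \leftrightarrow M(w_2)\big)
  && \text{almost global supervenience}\\
&(3)\quad M(w_1),\quad \neg S(w_2),\quad w_1 \approx w_2
  && \text{by construction of } w_1, w_2\\
&\therefore\quad M(w_2) \wedge \neg M(w_2)
  && \text{contradiction}
\end{align*}
```

So the evolutionary theorist must give up (1), (2), or one of the construction claims in (3).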

Suppose our evolutionary theorist of mind denies (2). Then we have the following absurdity: It is up to the aliens in the GGG to determine whether or not there are physical minds outside the GGG, by deciding whether to rescue the almost dead organisms that pop into the GGG. But how can beings in an impotent region bring about that there are, or are not, physical minds outside that region? That would be worse than magic (magic is presumably causal).

Furthermore, while the numerical identities of the organisms on earth in w1 and w2 might depend on their history, they surely do not depend on what happens in the GGG, since the GGG is impotent. So we may actually suppose that the earthly organisms are numerically the same between w1 and w2. Thus, outside the GGG, w1 and w2 are exactly the same physically, and have exactly the organisms, but some of these very same organisms (say, Fred or Martha) have physical minds in w1 and do not have physical minds in w2.

This is truly absurd. Hence, evolutionary theories of mind should be rejected.[note 3]