Tuesday, April 2, 2024

Aristotelian functionalism

Rob Koons and I have argued that the best functionalist theory of mind is one where the proper function of a system is defined in an Aristotelian way, in terms of innate teleology.

When I was teaching on this today, it occurred to me (it should have been obvious earlier) that this Aristotelian functionalism has the intuitive consequence that only organisms are minded. For although innate teleology may be had by substances other than organisms, inorganic Aristotelian material substances do not have the kind of teleology that would make for mindedness on any plausible functionalism. Here I am relying on Aristotle’s view that (maybe with some weird exceptions, like van Inwagen’s snake-hammock) artifacts—presumably including computers—are not substances.

If this is right, then the main counterexamples to functionalism disappear:

Recall Leibniz’s mill argument: if a machine can be conscious, a mill full of giant gears could be conscious, and yet as we walked through such a mill, it would be clear that there is no consciousness anywhere. But now suppose we were told that the mill has an innate function (not derived from the purposes of the architect) governing the computational behavior of the gears. We would then realize that the mill is more than just what we can see, and that would undercut the force of the Leibnizian intuition. In other words, it is not so hard to believe that a mill with innate purpose is conscious.

Further, note that perhaps the best physicalist account of qualia is that qualia are grounded in the otherwise unknowable categorical features of the matter making up our brains. This, however, has a somewhat anti-realist consequence: the way our experiences feel has nothing to do with the way the objects we are experiencing are. But an Aristotelian functionalist can tell this story without the anti-realist consequence. If I have a state whose function is to represent red light, then I have an innate teleology that makes reference to red light. This innate teleology could itself encode the categorical features of red light, and since this innate teleology, via functionalism, grounds our perception of red light, our perception of red light is “colored” not just by the categorical features of our brains, but by the categorical features of red light (even if we are hallucinating the red light). This makes for a more realist theory of qualia, on which there is a non-coincidental connection between the external objects and how they seem to us.

Observe, also, how the Aristotelian story has the advantages of panpsychism without the disadvantages. The advantage of panpsychism is that it bridges the mysterious gap between us and electrons. The disadvantages are two-fold: (a) it is highly counterintuitive that electrons are conscious (the gap is bridged too well), and (b) we don’t have a plausible story about how the consciousness of the parts gives rise to the consciousness of the whole. But on Aristotelian functionalism, what we have in common with electrons is teleology, so we do not need to say that electrons are conscious; yet because mind reduces to teleological function, though not of the kind electrons have, we still get the bridging. And we can tell exactly the kind of story that non-Aristotelians do about how the function of the parts gives rise to the consciousness of the whole.

There is, however, a serious downside to this Aristotelian functionalism. It cannot work for the simple God of classical theism. But perhaps we can put a lot of stress on the idea that “mind” is only said analogously between creatures and God. I don’t know if that will work.

5 comments:

Alfred W. Smith said...

Interesting post. I have a question then regarding AI. Do you believe it's *possible* AI becomes so intelligent that it can start solving philosophical issues and eventually prove or disprove the existence of God?

-Alfred

Alexander R Pruss said...

You don't need to be all that intelligent to come up with new proofs of the existence of God.
https://page.mi.fu-berlin.de/cbenzmueller/papers/C40.pdf

Alfred W. Smith said...

Very interesting paper!

The reason I am asking is that I am coming back to the Church after many years of atheism, and I agree with the major arguments for God's existence. But I am afraid of the possibility that AI proves that God doesn't exist, after acquiring an ability to reason at a higher level than has ever been achieved before. I know people like Ed Feser don't believe it's possible, but many certainly disagree, so I am not satisfied with what I have found so far.

Any thoughts on this? Is this a concern of yours? If so, why (or why not)? If you have any recommended readings, they are also welcome. I truly appreciate it!
-Alfred

Alexander R Pruss said...

If you think (as it looks like you do) that the evidence points to the existence of God, then you think it is likely that God exists. Let's say your probability that God exists is 0.95. Then the probability that AI proves God doesn't exist is less than 0.05. For one can only prove something that is true (otherwise, it's not a proof, just an apparent proof), and so the probability that AI proves God doesn't exist is no bigger than, and in fact smaller than, the probability that God doesn't exist.
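To spell out that step (a minimal sketch, writing G for the proposition that God exists and using 0.95 only as an illustrative credence):

P(AI proves not-G) ≤ P(not-G) = 1 − P(G) = 1 − 0.95 = 0.05.

The inequality holds because "AI proves not-G" entails not-G, and a proposition is never more probable than something it entails.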

Similarly, if you are convinced on the basis of the evidence that your spouse is not a reptile, you don't need to worry that an internal scan will reveal her to be a reptile.

Francis Gorniak said...

Very fascinating. One hang-up I have with Aristotelian functionalist views is that there still seems to be something of a reflection of the combination problem for panpsychism in precisely delineating the boundary between the 'immanent teleology' and normative goal-striving characteristic of a living being and what seem to be less 'integrated' and goal-directed forms of normative goal-drivenness on the cellular level. Perhaps developments in relational systems biology might help here:
https://www.researchgate.net/publication/366177364_Latest_Robert_Rosen_and_Relational_System_Theory_an_Overview