Wednesday, April 1, 2020

If we're not brains, computers can't think

The following argument has occurred to me:

  1. We are not brains.

  2. If we are not brains, our brains do not think.

  3. If our brains do not think, then computers cannot think.

  4. So, computers cannot think.
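
The argument is formally valid: from (1) and (2), modus ponens gives that our brains do not think, and from that and (3), modus ponens gives (4). For readers who like such things, here is a minimal Lean 4 sketch checking that validity (the soundness of the premises is, of course, the real question); the proposition names are placeholders of my own, not anything standard:

```lean
-- A minimal sketch of the argument's *validity*, not its soundness.
-- The proposition names are placeholders for the post's claims.
example (WeAreBrains BrainsThink ComputersCanThink : Prop)
    (p1 : ¬WeAreBrains)                          -- (1) We are not brains.
    (p2 : ¬WeAreBrains → ¬BrainsThink)           -- (2) If we are not brains, our brains do not think.
    (p3 : ¬BrainsThink → ¬ComputersCanThink) :   -- (3) If our brains do not think, computers cannot think.
    ¬ComputersCanThink :=                        -- (4) So, computers cannot think.
  p3 (p2 p1)                                     -- two applications of modus ponens
```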

I don’t have anything new to say about (1) right now: I weigh a lot more than three pounds; my arms are parts of me; I have seen people whose brains I haven’t seen.

Regarding (2), if our brains think and yet we are not brains, then we have the too-many-thinkers problem. Moreover, if both brains and humans think, that epistemically undercuts (1), because then I can’t tell whether I’m a brain or a human being.

I want to focus on (3). The best story about how computers could think is a functionalist story on which thinking is the operation of a complex system of functional relationships involving inputs, outputs, and interconnections. But brains are such complex systems. So, on the best story about how computers could think, brains think, too.

Is there some non-arbitrary way to extend the functionalist story to avoid the conclusion that brains think? Here are some options:

  5. Organismic philosophy of mind: Thought is the operation of an organism with the right functional characteristics.

  6. Restrictive ontology: Only existing functional systems think; brains do not exist but organisms do.

  7. Maximalism: Thought is to be attributed to the largest entity containing the relevant functional system.

  8. Inputs and outputs: The functional system that thinks must contain its input and output facilities.

Unfortunately, none of these is a good way to save the idea that computers could think.

Computers aren’t organisms, so (5) does not help.

The only restrictive ontology on the table where organisms exist but brains do not is one on which the only complex objects are organisms, so (6) in practice goes back to (5).

Now consider maximalism, option (7). For maximalism to work and not reduce to the restrictive ontology solution, two things have to be the case:

  a. Brains exist.

  b. Humans are not a part of a greater whole.

Option (b) requires a restrictive ontology that denies the existence of nations, ecosystems, etc. Our best restrictive ontologies either deny the existence of brains or relegate them to a subsidiary status, as non-substantial parts of substances. The latter kind of ontology is going to be very restrictive about substances, and on such an ontology I doubt computers will count as substances. But they also aren’t going to be non-substantial parts of substances, so they aren’t going to exist at all.

Finally, consider the inputs and outputs option, (8). But brains have inputs and outputs too. It seems mere prejudice to insist that, for thought, the inputs and outputs have to “reach further into the world” than those of a brain, whose inputs and outputs reach only into the rest of the body. But if we do accept that inputs and outputs must reach further, then we have two problems. The first is that, even though we are not brains, we could surely continue to think after the loss of all our senses and muscles. The second is that if our inputs and outputs must reach further into the world, then a hearing aid is a part of a person, which appears false (though recently Hilary Yancey has done a great job defending the possibility of prostheses being body parts in her dissertation here at Baylor).

3 comments:

  1. I think this is very good, and very important. As Wittgenstein has taught us: "Only of a living human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees; is blind; hears, is deaf; is conscious or unconscious."

    Predicates of "thinking" can only meaningfully be ascribed to living creatures as a whole.

    But one thing I find interesting is that computers don't even compute. Computation requires knowledge of mathematics, and computers have no knowledge at all. Computers are just really intricate collections of on-off switches, and it baffles me no end that people talk of them as "thinking" or "knowing" anything (except in metaphor or jest, where the slow-running computer is said to be "thinking it over" or something).

  2. Computers were historically developed to mimic people engaging in computational activities, not brain operations in particular, so I think a slightly weakened version of (5) would in fact address the immediate problem you are raising: computers model active organisms, not brains of organisms in particular, and therefore, if computers can think, it would not follow that brains can think, only that organisms can. Someone who took this line could easily hold that all the analogies between computers and brains are in fact computer/organism analogies that were misunderstood by people who incorrectly took brains to be the subject of thinking.

    Of course, the plausibility of the weakening (allowing artificial organisms, nonbiological organisms, etc.) and the resulting definition of 'organism' is another question. But it would break the 'computers think, therefore brains think' inference.

  3. Brandon,

    My understanding of the history of computing is that it was developed to aid us in computation, not to mimic anything. If that was the intent, they've failed entirely, since computers don't even compute, let alone think.

    But, more to the point, I don't think Pruss's argument is based on the idea that computers were designed to mimic brains or anything else. He just gave the most plausible account on which a computer could be said to think, and then showed that the same account implies that brains also think.
