I have an empirical hypothesis that one of the main reasons why a lot of ordinary people think a machine can’t be conscious is that they think life is a necessary condition for consciousness and machines can’t be alive.
The thesis that life is a necessary condition for consciousness generalizes to the thesis that life is a necessary condition for mental activity. And while the latter thesis is logically stronger, it seems to have exactly the same plausibility.
Now, the claim that life is a necessary condition for mental activity (I keep on wanting to say that life is a necessary condition for mental life, but that introduces the confusing false appearance of tautology!) can be understood in two ways:
1. Life is a prerequisite for mental activity.
2. Mental activity is in itself a form of life.
On 1, I think we have an argument that computers can't have mental activity. For imagine that we're setting up a computer that is to have mental activity, but we stop short of running the computations that would make it engage in that mental activity. I think it's very plausible that the resulting computer doesn't have any life. The only thing that would make us think that a computer has life is the computational activity that underlies its supposed mental activity. But that would be a case of 2, rather than 1: life wouldn't be a prerequisite for mental activity; rather, mental activity would constitute life.
All that said, while I find the thesis that life is a necessary condition for mental activity plausible, I am more drawn to 2 than to 1. It seems intuitively correct to say that angels are alive, but it is not clear that we need anything more than mental activity on the part of angels to make them alive. And from 2, it is much harder to argue that computers can't think.
1 comment:
How could a computer have mental activity, though? Only by defining "mental activity" in such a way that a simulation of it counts as the real thing. Then the necessity of awareness for mental activity is taken to mean that the computer is aware (a sort of life). But in fact the logic goes the other way: the self-evident impossibility of awareness coming into being as a direct result of arranging matter in certain ways means that computers cannot have mental activity. They could have it as a result of being possessed by spirits (much as our brains have it when they have souls), of course. That impossibility is self-evident. (It is rejected by scientists, but on the grounds, I think, that we reject Euclidean geometry and Millian arithmetic and Aristotelian truth, so why not? The problem with that is that what the scientists say is true only in the sense that it is a convincing story in which they see no serious holes, but so what?) ...