I find the following line of thought to have a lot of intuitive pull:
1. Some mental states have great non-instrumental ethical significance.
2. No physical brain states have that kind of non-instrumental ethical significance.
3. So, some mental states are not physical brain states.
When I think about (2), I think in terms similar to Leibniz's mill. Leibniz's point is roughly that if physical systems could think, then so could a mill with giant gears (remember that Leibniz invented a mechanical calculator running on gears), and yet we wouldn't find consciousness anywhere in such a mill. Similarly, it is plausible that the giant gears of a mill could accomplish something important (grinding wheat to save people from starvation, or simulating protein folding on the way to a cure for cancer), and hence their state could have great instrumental ethical significance; but their state isn't going to have the kind of non-instrumental ethical significance that mental states have.
I worry, though, whether the intuitive evidence for (2) doesn’t rely on one’s already accepting the conclusion of the argument.
14 comments:
I don't think it does, because we all have the intuition that a machine is not alive; if we are physicalists, we then have to somehow trump that intuition with the thought that, if physicalism is true, then some machines are alive. It is not that we need to be dualists in order to have those intuitions. (Of course, all logic takes us to something that we already had.)
Those are fascinating intuitions, though: why do we think that the physical is like that, like rocks and metal gears and such? We see phenomenal stuff, imbued with meaning, and much of it alive! Is it the success of chemistry and physics? I don't know (so I have a similar worry).
Philip
Sorry, I meant aware, i.e., that a machine is not aware.
The life thing is interesting. I guess in the vicinity is this argument:
1. Only the wellbeing of living things has moral significance.
2. If you can suffer pain, your wellbeing has moral significance.
3. So, only living things can suffer pain.
4. Computers cannot be alive.
5. So, computers cannot suffer pain.
Myself, I don't believe 1, unless perhaps one has a very broad sense of life, a sense sufficient to encompass disembodied beings (e.g., angels). But if one has such a broad sense of life, then 4 is less clear.
Is 4 less clear because the switches in modern computers are so small?
Surely a machine made of cogs is not alive/aware.
Similarly, Searle's Chinese Room is not haunted by a spirit called into being by the moving of cards with Chinese writing on them.
But of course, the size of the switches should not make such a difference.
So, I don't see why 4 should be less clear ...
I don't see why a machine made of cogs couldn't have a soul of a sort that makes it be alive.
We do seem to be, in a sense, machines made of chemical cogs, each with a soul of a sort that makes it be alive; but we do, surely, have intuitions that Chalmers' zombies would not be alive (except in the biological sense), that they would not be aware.
... and you do see, I presume, that the moving around (in certain ways, as specified in tomes) of cards with obscure symbols on them (i.e., Chinese) does not of itself (i.e., by magic) summon into being a spirit with certain attributes (i.e., awareness of Chinese) ...