Here’s yet another variant on a life-based argument against machine consciousness. All of these arguments depend on related intuitions about life. I am not fully convinced by them, but I think they have some evidential force.
1. Only harm to a living thing can be a great intrinsic evil.
2. If machines can be conscious, then a harm to a machine can be a great intrinsic evil.
3. Machines cannot be alive.
4. So, harm to a machine cannot be a great intrinsic evil. (1 and 3)
5. So, machines cannot be conscious. (2 and 4)
5 comments:
2 is certainly plausible because of the "can be", but couldn't someone argue that "machine consciousness" would be so limited as to sidestep this issue? It shouldn't be able even to feel pain, for example. If all it can do is "consciously" make calculations, and it cannot even *potentially* feel pain, feel emotions, or raise existential questions, could it be a great intrinsic evil to destroy it?
That would require the thinking-machine enthusiast to concede that "machine consciousness" wouldn't even be comparable to that of the most handicapped human, though.
One could think that, but it really doesn't seem plausible to me. It seems to me that the big difficulty is with consciousness as such, not consciousness of pain.
Dr. Pruss, do you have a definition of what it means to be alive? Or are you trying to extract principles through analyzing test cases?
I mean to say an analysis of conditions necessary and/or sufficient for being alive, not a lexical definition.
I don't know how to analyze life. God, angels, people, giraffes, mushrooms and algae are all alive. What do they have in common or analogously? I don't know (other than the uninformative answer: life).