Some people think that self-consciousness is a big deal, that it’s the sort of thing that might be hard for an artificial intelligence system to achieve.
I think consciousness and intentionality are a big deal, that they are the sort of thing that would be hard or impossible for an artificial intelligence system to achieve. But I wonder whether, if we could have consciousness and intentionality in an artificial intelligence system, self-consciousness would be much of an additional difficulty. Argument:
1. If a computer can have consciousness and intentionality, a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”.
2. If a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”, then it can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature of me is 300K”.
3. Necessarily, anything that can have a conscious awareness whose object would be aptly expressible with the phrase “that the temperature of me is 300K” is self-conscious.
4. So, if a computer can have consciousness and intentionality, a computer can have self-consciousness.
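The argument is a simple chain of conditionals, and its validity can be checked mechanically. Here is a sketch in Lean, with one simplification: premise 3's “necessarily” is flattened to a plain conditional, and the proposition letters P, Q, R, S are my own labels for the claims, not anything in the original.

```lean
-- P: a computer can have consciousness and intentionality
-- Q: it can have awareness expressible as "the temperature here is 300K"
-- R: it can have awareness expressible as "the temperature of me is 300K"
-- S: it can have self-consciousness
variable (P Q R S : Prop)

-- Premises 1-3 as hypotheses; the conclusion follows by chaining them.
theorem argument (h1 : P → Q) (h2 : Q → R) (h3 : R → S) : P → S :=
  fun hp => h3 (h2 (h1 hp))
```

So the work of the post lies entirely in assessing the premises, not the inference.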
Premise 1 is very plausible: after all, the most plausible story about what a conscious computer would be aware of is the immediate environmental data delivered by its sensors. Premise 2 is, I think, also plausible, for two reasons. First, it’s hard to see why an awareness whose object is expressible in terms of “I” would be harder to achieve than an awareness whose object is expressible in terms of “here”. That’s a bit weak. But, second, it is plausible that the relevant sense of “here” reduces to “I”: “the place I am”. And if I am aware that the temperature in the place I am is 300K, then, barring some specific blockage, I have the cognitive skills to be aware that my temperature is 300K (though I may need a different kind of temperature sensor).
Premise 3 is, I think, the rub. My acceptance of premise 3 may simply be due to my puzzlement as to what self-consciousness is beyond an awareness of oneself as having certain properties. Here’s a possibility, though. Maybe self-consciousness is awareness of one’s soul. And we can now argue:
5. A computer can only have a conscious awareness of what physical sensors deliver.
6. Even if a computer has a soul, no physical sensor delivers awareness of any soul.
7. So, no computer can have a conscious awareness of its soul.
But I think (5) may be false. Conscious entities are sometimes aware of things by means of sensations of mere correlates of the thing they sense. For instance, a conscious computer can be aware of the time by means of a sensation of a mere correlate—data from its inner clock.
Perhaps, though, self-consciousness is not so much awareness of one’s soul, as a grasp of the correct metaphysics of the self, a knowledge that one has a soul, etc. If so, then materialists don’t have self-consciousness, which is absurd.
All in all, I don’t see self-consciousness as much of an additional problem for strong artificial intelligence. But of course I do think that consciousness and intentionality are big problems.