Some people think that self-consciousness is a big deal, that it’s the sort of thing that might be hard for an artificial intelligence system to achieve.
I think consciousness and intentionality are a big deal, that they are the sort of thing that would be hard or impossible for an artificial intelligence system to achieve. But I wonder whether, if we could have consciousness and intentionality in an artificial intelligence system, self-consciousness would be much of an additional difficulty. Argument:
1. If a computer can have consciousness and intentionality, a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”.
2. If a computer can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature here is 300K”, then it can have a conscious awareness whose object would be aptly expressible by it with the phrase “that the temperature of me is 300K”.
3. Necessarily, anything that can have a conscious awareness whose object would be aptly expressible with the phrase “that the temperature of me is 300K” is self-conscious.
4. So, if a computer can have consciousness and intentionality, a computer can have self-consciousness.
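To make the structure explicit, here is a minimal sketch of the argument's logical form in Lean; the propositional labels C, H, M, and S are mine, not the post's notation, and I am reading the premises as plain conditionals (setting aside the "necessarily" in premise 3):

```lean
-- A minimal sketch of the argument's logical form (labels are mine):
--   C : a computer can have consciousness and intentionality
--   H : it can have awareness aptly expressed by "the temperature here is 300K"
--   M : it can have awareness aptly expressed by "the temperature of me is 300K"
--   S : it can have self-consciousness
-- Premises 1-3 become conditionals; the conclusion follows by chaining them.
theorem self_consciousness_argument (C H M S : Prop)
    (p1 : C → H) (p2 : H → M) (p3 : M → S) : C → S :=
  fun c => p3 (p2 (p1 c))
```

The validity is just a chain of conditionals; all the philosophical weight rests on the premises.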
Premise 1 is very plausible: after all, the most plausible story about what a conscious computer would be aware of is that it is aware of immediate environmental data delivered through its sensors. Premise 2 is, I think, also plausible, for two reasons. First, it’s hard to see why awareness whose object is expressible in terms of “I” would be harder than awareness whose object is expressible in terms of “here”. That’s a bit weak. But, second, it is plausible that the relevant sense of “here” reduces to “I”: “the place I am”. And if I have the awareness that the temperature in the place I am is 300K, then, barring some specific blockage, I have the cognitive skills to be aware that my temperature is 300K (though I may need a different kind of temperature sensor).
Premise 3 is, I think, the rub. My acceptance of premise 3 may simply be due to my puzzlement as to what self-consciousness is beyond an awareness of oneself as having certain properties. Here’s a possibility, though. Maybe self-consciousness is awareness of one’s soul. And we can now argue:
5. A computer can only have a conscious awareness of what physical sensors deliver.
6. Even if a computer has a soul, no physical sensor delivers awareness of any soul.
7. So, no computer can have a conscious awareness of its soul.
But I think (5) may be false. Conscious entities are sometimes aware of a thing by means of sensations of a mere correlate of it. For instance, a conscious computer could be aware of the time by means of a sensation of a mere correlate: data from its inner clock.
Perhaps, though, self-consciousness is not so much awareness of one’s soul as a grasp of the correct metaphysics of the self, a knowledge that one has a soul, etc. If so, then materialists don’t have self-consciousness, which is absurd.
All in all, I don’t see self-consciousness as much of an additional problem for strong artificial intelligence. But of course I do think that consciousness and intentionality are big problems.
Hi Alex,
Just a question: Do you need intentionality in the argument?
I mean, can't you go from consciousness to self-consciousness without including intentionality (not denying it, either)?
Excuse me if none of this makes sense; I’m not even sure I have a double-digit IQ!
ReplyDelete"So, if a computer can have consciousness and intentionality, a computer can have self-consciousness."
Do you mean that the same computer, by virtue of possessing both of these attributes, is (or can become, without being changed by an external agent) self-aware? If so, I don’t see why this should be the case. I think animals also have consciousness and intentionality. (Though I hasten to add that I might be misunderstanding what you mean by “intentionality.”) It seems that conscious states are intentional: they are experiences OF something.
Now, do merely conscious animals have concepts? (Which would include “here” and “there” as well as “me,” right?) If not, maybe that is at least one reason why they don’t have self-awareness. With the computer, maybe it experiences ‘the temperature of unit 546 is 300K’ but doesn’t recognize that this is convertible with ‘I’m at 300K.’
Do you need to have an intellect to have the ability to understand concepts? If so, do minimally self-aware animals like chimps have intellects? (Or, again, do you need concepts to be self-aware?)
Well, hopefully I said something that made some sense. Unrelatedly, how would you explain how God’s being able to create from nothing is consistent with the claim that ‘from nothing, nothing comes’? I searched through your blog and didn’t see anything that seemed to be addressed to it, though my search was not exhaustive.
Angra:
Being aware of the temperature here as being 300K takes intentionality. I think there is no consciousness without intentionality, so I could have omitted intentionality as it's implied by consciousness, but some probably think that you can have raw intentionality-free consciousness, like a raw itch.
Mr Killackey:
These are really good questions.
No, not necessarily the same computer. The point is something like this: "If a calculator can multiply together two ten-digit numbers, a calculator can multiply together two twenty-digit numbers." Of course, the *same* calculator may not be able to do it, but once we see that the former is possible, it's very plausible (though not certain) that the latter is as well.
I agree that one might worry about the transition from "unit 546 is at 300K" to "I am at 300K." But the worry is no bigger, I think, than in the case of "here" claims: "It is 300K here" (and not just "at the location of sensor 188"). But now you've got me worried a bit about the "here" claims. Not very worried, though.
As for animals and concepts, that's more speculative. I think all consciousness is conceptual, and so I think all conscious animals--even down to worms and the like, if these turn out to be conscious, which they might well be--*use* concepts. However, although they use concepts, I expect that most conscious animals are unable to think *about* concepts, because they lack the relevant second-order concepts. Some non-human animals, however, seem to have a theory of mind. If that's so, those non-human animals seem to be thinking about thinking, and there is some sort of second-order conceptualization there. But even so, I expect they aren't thinking about concepts in themselves, in abstraction from particular acts of thought that involve these concepts.
--
I read "from nothing, nothing comes" as meaning that nothing comes into being without an efficient cause. When God creates something in time, the thing comes into being without a material cause, but it has an efficient cause (or a super-efficient cause, maybe?), namely God.
What if self-consciousness isn't awareness whose object is expressible in terms of 'I', precisely because self-consciousness isn't consciousness of an object at all, and certainly not some object named 'I'? Could self-consciousness be non-observational and criterionless? Does 'I' need to be a referring expression for your argument to work?
Adam,
Well, I am Alex Pruss, Alex Pruss exists, so "I" as used by me refers to something that exists. I guess I was assuming that "I" as used by anyone refers to something that exists, but that assumption seems uncontroversial.
The suggestion that self-consciousness isn't consciousness of an object at all is an intriguing one. But then I can't see which of my states of mind counts as self-consciousness, and in fact I can't even tell if I have self-consciousness. I was assuming in my thinking about this that self-consciousness, whatever it is, is something that I pretty obviously have.
"Alex Pruss" sure refers to something. But "I am Alex Pruss" refers only if it's an identity statement. If I told someone, "I am not Adam Myers" (a lie), I am not telling them "Adam Myers is not Adam Myers." So "I" and whatever in your view "I" might refer to when I use it, doesn't seem to be coextensive. I shd. try to see if I can convince you of Elizabeth Anscombe's views about 'I' sometime.
ReplyDelete"I" (as used by me) and "Alex Pruss" have the same reference--they both refer to me--but different sense, to use Aquinas's distinction in (the usual English translation of) Frege's vocabulary.