1. Strong AIs are finite persons who are implemented by software. (Definition.)
2. The correct theory of personal identity for Strong AIs would be a version of the psychological theory.
3. Necessarily, the same theory of personal identity applies to all possible finite persons.
4. We are finite persons.
5. So, if Strong AIs are possible, a version of the psychological theory of personal identity applies to us.
6. But the psychological theory of personal identity is false.
7. So, Strong AIs are impossible.
Of course, the hard part is to argue for (6), since the psychological theory is so widely accepted.
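For what it's worth, the final step from (5) and (6) to (7) is just modus tollens. Here is a minimal Lean sketch of that step only (the theorem name and the letters P and Q are merely my abbreviations for "Strong AIs are possible" and "a version of the psychological theory applies to us"):

```lean
-- A minimal formalization of the inference from (5) and (6) to (7).
-- P : Strong AIs are possible; Q : a version of the psychological theory applies to us.
theorem strong_ai_impossible (P Q : Prop)
    (h5 : P → Q)  -- premise (5)
    (h6 : ¬Q)     -- premise (6)
    : ¬P :=       -- conclusion (7)
  fun hP => h6 (h5 hP)
```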
Alex,
I'd raise (for example) the following issues:
a. Why is 1. the definition?
For example, let's say aliens from another planet make an AI with reasoning abilities (inductive and deductive) far surpassing their own, and thereby far surpassing our own, except that the AI has no sense of right and wrong, only some alien analog of one. Would that count as strong AI? Would that be a person? Would the aliens?
b. Why is 2 true?
It seems to me that any plausible arguments against the psychological theory of personal identity in humans would - if successful - probably apply to strong AI as well. If you think not, why not?
At least, I think that 2. needs some defending.
P.S.: I think all theories against the possibility of strong AI will eventually be defeated by strong AI ;)
I am not sure what it takes to be a person. There certainly can be individual persons who have no sense of right and wrong, though.
As for 2, well, the thought I had is that the only plausible alternatives to the psychological theory are (a) brute identity and (b) theories where the identity of the person piggybacks on the identity of the hardware, the main candidates being: animal, brain, body and soul.
I think the brute identity theory is not very plausible when the entity is an artifact, but I can see pressure being put on this.
That leaves the hardware based theories. But, bracketing the soul theory for a moment, they are not very plausible in the case of AIs. It's just too easy for AIs to jump between and span hardware for the hardware-based theories to be plausible. It is implausible that if you ran a strong AI in the cloud, it would cease to exist and a new one would come to exist as soon as one server handed off the processing to another server. For if you said that, then you would have to say that it would cease to exist and a new one would come to exist if the computation were handed off between one CPU core and another, and that's quite implausible.
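Here is a minimal sketch of the kind of hand-off I mean (the names and the data are just placeholders): the program's state is checkpointed on one machine and resumed on another, and the computation simply carries on.

```python
# Toy illustration of a server-to-server hand-off: serialize the running
# program's state, move it, and resume it elsewhere.
import pickle

def step(state: dict) -> dict:
    """One unit of the AI's ongoing computation (a stand-in for the real thing)."""
    state["counter"] += 1
    return state

# "Server A" runs a few steps, then checkpoints its state.
state = {"counter": 0, "memories": ["hello"]}
for _ in range(3):
    state = step(state)
checkpoint = pickle.dumps(state)  # in practice this would be shipped over the network

# "Server B" restores the checkpoint and carries on exactly where A left off.
restored = pickle.loads(checkpoint)
for _ in range(3):
    restored = step(restored)

print(restored)  # {'counter': 6, 'memories': ['hello']}: same computation, different hardware
```

Nothing in the computation cares which server or core ran which step, which is why tying the AI's identity to a particular piece of hardware looks implausible.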
The soul theory, though, is worth paying more attention to. If by law of nature or divine fiat a new soul comes into existence when a system with the computational capabilities needed for strong AI forms, then we could have a non-psychological theory that allows a lot of jumping between servers and the like, by simply supposing that by law of nature or divine fiat the soul jumps around following the psychology.
Not many people like to combine the soul theory with strong AI, though Swinburne has mentioned to me the possibility of God creating souls for sophisticated computers.
I tend to agree that it's possible for a person to have no sense of right and wrong if - say - it has an analog that is almost exactly the same when it functions properly. But I don't know that it's possible for a person to have no sense of right and wrong and instead an alien analog that is considerably different, when there is no malfunctioning of the brain/mind.
In any case, here's a more direct argument against 1: let's say that a computer (hardware + software) passes any Turing test, is always mistaken for a human being in online interactions, comes up with proof of all open problems mathematicians are working on (or proofs that there is no proof from the axioms, etc.), comes up with a new physics model that has predictive power surpassing that of QM and that of GR, etc.
I would say that that's a strong AI even if it has no subjective experiences at all. But in that case, it's not a person.
While I don't think strong AI will lack subjective experiences - though I do think their minds will probably be vastly different from ours - it's logically possible that they lack them (conceptually possible, if you like), and I think also metaphysically possible.
Also, if the mind of an AI is vastly different from ours, it may well not count as a person.
With regard to identity theory, I'd be inclined towards something involving hardware, software, and relevant causal connections, though I don't think there is a non-vague solution, so our concept of identity breaks down when we start with sorites-like arguments (which I think is a problem for sorites-like arguments).
Regarding the AI in the cloud, or the two cores, it's not clear to me that that's one AI rather than several. It probably depends on how you construe the scenario, but I don't have enough info to tell. I do suspect there is a risk of a sorites-like problem in that direction.
In the case of two CPU cores, it may well be that the AI is one, but it's not using all of its brain power at a time. But then again, can the two cores be turned on simultaneously, running independent software? I need more info.
At any rate, if there is a problem, it's one of language, it seems to me, not a problem for making strong AI, because as far as I can tell, the AI will do all of the stuff it's supposed to do, regardless of whether it's one or many entities, a single mind or a hive mind of sorts or a zombie, or whatever it is.
In re: the soul/God theory, unsurprisingly that's not a live option for me, but I'm curious about how it would handle the issue of different servers or CPU cores. If they can run AI independently, wouldn't those be two different souls?
Angra:
ReplyDelete" let's say that a computer (hardware + software) passes any Turing test, is always mistaken for a human being in online interactions, comes up with proof of all open problems mathematicians are working on (or proofs that there is no proof from the axioms, etc.), comes up with a new physics model that has predictive power surpassing that of QM and that of GR, etc. "
That's not Strong AI in the sense in which I mean the term. I think there may be a difference between how philosophers and computer scientists use the term "Strong AI". The crucial thing for me about Strong AI is that a Strong AI really thinks the kinds of thoughts we do (conscious, with intentionality, etc.). Of course, this is just terminology.
I wouldn't be surprised if Strong AI in your sense were possible.
For any type of entity, there is a fact about how many entities of that type there are. The soul theory yields such a fact about digital persons. But it is a fact that is not accessible empirically. So we wouldn't know if there is one soul per core, one soul per CPU, one soul per cluster, etc.
Alex,
I see, then there's been a misunderstanding.
On your terminology, I would go with the example of smart aliens who have an analog of a sense of right and wrong, but one that's not so close to ours - when they're not ill or anything. But from your previous posts, I get the impression you would say they are persons, right? I think they're not. But maybe we're not using "person" in the same manner either (there may be more than one common usage of "person", I suspect).
Regarding the soul theory, I'm not sure we wouldn't be able to tell. Or more precisely, I think we wouldn't be able to tell, because we should conclude that there are no souls - but that applies to humans as well. However, assuming I'm mistaken and we can tell that humans have souls, I'm not sure we wouldn't be able to tell in the case of AI.
If we can tell that at least some persons are strong AI, then similar behavior on the part of hardware + software would seem to provide evidence that those are persons too. If the two servers can behave independently of each other as persons, then that would seem to be two persons (as before, as long as we can tell that there are some AI-persons; do you think we wouldn't be able to tell?).
Also, we can at least generally tell, when it comes to humans, whether there are one, two, or three persons in one, two, or three brains.
But similarly, it may well be that strong AIs (in your sense), if they existed, would be able to tell the number of persons in two cores, servers, etc., and they would be able to communicate that info to us. If they turned out to be reliable in the cases we can test, wouldn't that make it likely that they're telling the truth? (Yes, they could lie to us for unfathomable reasons; that's always a possibility, but it becomes less probable the more we see them telling the truth.)
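To put that last point in toy form (the prior of 0.5 and the 0.3 below are made-up numbers, just to show the shape of the update): if a systematic liar would only happen to check out on any single independent test with some fixed chance, Bayes' theorem drives the probability of "systematic liar" toward zero as the tested cases pile up.

```python
# Toy Bayesian illustration (made-up numbers): how the probability that the AI
# is a systematic liar drops as it keeps checking out on independent tests.
prior_liar = 0.5   # made-up prior probability that the AI lies about such counts
eps = 0.3          # made-up chance that a liar nonetheless passes any single check

def posterior_liar(n_passed: int) -> float:
    """P(liar | it passed n independent checks), by Bayes' theorem."""
    like_liar = eps ** n_passed   # a systematic liar only passes all n checks by luck
    like_honest = 1.0             # an honest AI passes every check
    return (like_liar * prior_liar) / (like_liar * prior_liar + like_honest * (1 - prior_liar))

for n in (0, 1, 5, 10):
    print(n, round(posterior_liar(n), 6))
# The posterior shrinks rapidly toward 0 as n grows: lying gets ever less probable.
```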