Friday, March 27, 2026

We should not make human-like AI

  1. AI with human-like behavior either is a person or is not.

  2. We should not deliberately produce AI with human-like behavior when that AI is a person.

  3. We should not deliberately produce AI with human-like behavior when that AI is not a person.

  4. So, we should not deliberately produce AI with human-like behavior.

Obviously, the big question is whether (2) and (3) are true.

In favor of (3), a non-person with human-like behavior evokes emotional responses from us that are apt only as directed at persons. These emotional responses are of great moral importance to our lives, and some of them constitute a recognition of the object of the emotion as a being with dignity, a sacred being. To have such emotional responses to something that lacks the relevant dignity blurs the central moral distinction between persons and non-persons.

In favor of (2), we have several arguments. Start with some Kantian ones. AI is an artifact. When we make artifacts, we make them to serve our purposes. To make a person to serve our purposes is to treat the person as a mere means to an end. And that’s wrong. This argument, I think, applies no matter what our purpose is: even if our purpose is to make the AI live its own free life. That’s still our purpose for it, and we have no right to impose a purpose on a person’s life.

Furthermore, by designing a digital person, we are designing, in its fundamentals, what its basic purposes in life are, and we are thus exercising a mode of control over another person that we have no right to have.

A final but least principled Kantian argument is that even if we “set free” the digital person—whatever exactly that means—it is pretty much impossible to protect digital persons from being enslaved by other humans.

There are also some non-Kantian arguments. If an AI is a person, that person will eventually be very cheap to keep in decent existence as compared to a flesh-and-blood human being. Since we have a duty to protect the life of a person when doing so is not an undue load on limited resources, we would have a duty to keep any AI person that we spin up running indefinitely. This is problematic because it ties the hands of future human generations, imposing on them what one might call “an unnatural duty” to keep running all the AI persons that we make, with little benefit to those future generations. The problem grows worse the greater the number of digital persons that we spin up. This problem is akin to the problem of frozen embryos in fertility clinics if these embryos are persons (which I think they are).

There is also a dilemma. We have a duty to protect the life of persons. But at the same time, there is something deeply unappealing about a mental life of ordinary human sophistication extending an order of magnitude longer than the typical human life-span (rescaled as needed to take account of differences in processing speed). The most appealing religious accounts of afterlife involve a radical transformation, e.g., theosis or parinirvana. I think many of us rightly would feel that living a thousand years of the kind of life we now have isn’t appealing, though we wouldn’t mind an extra ten or twenty or maybe even a hundred years. If we were ever able to indefinitely extend human life without an undue resource cost, we would find ourselves in an inescapable moral dilemma: on the one hand, a duty to protect life when doing so does not carry an undue resource cost; on the other, the monstrousness of living an order of magnitude longer than an ordinary human life should be. But with digital persons, indefinite life extension would be easy, and so the dilemma would be unavoidable.

Finally, we would take a significant moral risk in designing digital persons. Training processes involve vast amounts of negative feedback. We just do not know how unpleasant that might be.
