Tuesday, June 14, 2016

Some thoughts on the ethics of creating Artificial Intelligence

Suppose that it's possible for us to create genuinely intelligent computers. If we achieved genuine sapience in a computer, we would have the sorts of duties towards it that we have towards other persons. There are difficult questions about whether we would be likely to fulfill these duties. For instance, it would be wrong to permanently shut off such a computer for anything less than the sorts of very grave reasons that make it permissible to disconnect a human being from life support (I am thinking here of the distinction between ordinary and extraordinary means in the Catholic tradition). Since such grave reasons would rarely arise, it seems that we would likely have to keep such a computer running indefinitely. But would we be likely to do so? So that's part of one set of questions: Can we expect to treat such a computer with the respect due to a person, and, if not, do we have the moral right to create it?

Here's another line of thought. If we were going to make a computer that is a person, we would do so by a gradual series of steps, producing systems that are more and more person-like. Along with these steps would come a gradation in our moral duties towards the systems. It seems likely that progress along the road to intelligence would involve many failures. So we have a second set of questions: Do we have the moral right to create systems that are nearly persons but likely to suffer from a multitude of failures, and are we likely to treat such systems in the morally appropriate way?

On the other hand, we (except the small number of anti-natalists) think it is permissible to bring human beings into existence even though we know that any human being brought into the world will be mistreated by others on many occasions in her life, and will suffer from disease and disability. I feel, however, that the cases are not parallel, but I am not clear on exactly what is going on here. I think humans have something like a basic permission to engage in procreation, with some reasonable limitations.

7 comments:

  1. I think you've been watching too much Person of Interest.

  2. Let us say that a closed-system A.I. was created: that is, a system that has no way to connect to other systems. (I say this to set aside some of the problems that come with this subject.) This A.I. could be taught from its infancy to its "adulthood" much as humans are.

    In this case the A.I. is, for all purposes, nothing more than a brain that sends its electrical impulses within its hardware, as opposed to a human's organic system.

    Over its lifetime this system will be powered on and off for the repair and upgrade of components, which can be equated to nothing more than a human going to sleep. Long-term loss of power follows the same train of thought: the A.I. just "slept" longer.

    This A.I. does not suffer from the same problems that can arise in a human from prolonged loss of brain activity. Problems from trauma, disease, and age, to give a small set of examples, get tossed out the window with this A.I. This closed-system intelligence could in theory run/live forever, as long as a human or even a subsystem could run repairs. A hard drive fails, gets replaced, and the system is restored to its backup point with no loss of personality or knowledge and no other harmful effect.

    Can we treat this intelligence with the respect due a human being? Absolutely. We could even envy its longevity.

    As to your question about whether it is morally right to create this intelligence: to me it would be acceptable to do so. The A.I. would never, to paraphrase, be nearly human. Until the A.I. became self-aware, it was nothing more than code that mimicked a sentient being. Because of this, even in the failures leading up to its consciousness, the A.I. has nothing more than the concept of human nature. We also have to remember that we are not creating a human in any way, shape, or form. We would be giving birth to an entirely new category of, well, life.

  3. "We would have the sorts of duties towards it that we have towards other persons."

    I disagree, because I think that our duties towards other persons depend in many ways on concrete facts about human life and how it works. For example, there is typically no such thing as shutting off a computer "permanently," while killing a human being is normally irreversible.

  4. My use of the words "genuinely intelligent" and "sapient" was meant to include self-awareness.

    Turning off the computer without backing up volatile storage (e.g., RAM) to nonvolatile storage (e.g., disk) could be closely analogous to killing, depending on the details.
    We might see turning off, even when everything is saved to nonvolatile media, as akin to freezing a person. But freezing a person into cryostorage seems wrong when the person isn't already dying and there is no expectation of thawing.
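    To make the volatile/nonvolatile distinction concrete, here is a minimal sketch in Python (the names AgentState and CHECKPOINT_PATH are merely illustrative, not from any real system) of the kind of checkpointing that would separate a "sleep-like" shutdown from a "killing-like" one: the in-memory state is serialized to disk before power-off and restored on the next boot.

    ```python
    # Illustrative sketch: checkpointing volatile state before shutdown.
    # AgentState and CHECKPOINT_PATH are hypothetical names.
    import pickle
    from pathlib import Path

    CHECKPOINT_PATH = Path("agent_state.pkl")  # nonvolatile storage (disk)

    class AgentState:
        """Stand-in for the A.I.'s in-memory (volatile) state."""
        def __init__(self, memories=None):
            self.memories = memories if memories is not None else []

    def shutdown(state: AgentState) -> None:
        # "Sleep-like" shutdown: the volatile state is preserved on disk,
        # so nothing is lost when the power is cut.
        with CHECKPOINT_PATH.open("wb") as f:
            pickle.dump(state, f)

    def boot() -> AgentState:
        # On restart, restore the saved state if a checkpoint exists.
        # Powering off without ever having written such a checkpoint
        # is the case the post compares to killing.
        if CHECKPOINT_PATH.exists():
            with CHECKPOINT_PATH.open("rb") as f:
                return pickle.load(f)
        return AgentState()
    ```

    On this picture, whether a power-off resembles sleep or death turns on whether something like shutdown() ran first.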

  5. The first AI we create would very likely not experience things as deeply or as beautifully as possible. In basically the same way that chimps are more morally significant than ants, this machine might be only a few ethical steps ahead of us, or potentially even behind us. If we want to create the being that can experience things to the utmost degree (and therefore has the most ethical significance), we can't afford to waste resources on the first thing we happen to create that requires ethical consideration.

  6. Everything that requires ethical consideration is something that it is ethically worth expending resources on.

  7. In case you didn't know, the new generation of robots is able to solve moral dilemmas (following a Code of Ethics), so the future is now... think about this.

    invenitmundo.blogspot.com/2016/06/the-new-generation-of-robots-are-able.html
