Comments on Alexander Pruss's Blog: Some thoughts on the ethics of creating Artificial Intelligence

Anonymous (2016-09-23 17:23):
The new generation of robots is able to solve moral dilemmas, in case you didn't know (following a Code of Ethics), so the future is now... think about this.

invenitmundo.blogspot.com/2016/06/the-new-generation-of-robots-are-able.html

Alexander R Pruss (2016-06-15 12:57):
Everything that requires ethical consideration is something that it is ethically worth expending resources on.

Anonymous (2016-06-15 12:42):
The first AI we create would very likely not experience things as deeply or as beautifully as possible. In basically the same way that chimps are more morally significant than ants, this machine might be only a few ethical steps ahead of us, or potentially even behind us.

If we want to create the being that can experience things to the utmost degree (and therefore has the most ethical significance), we couldn't waste resources on the first thing we happen to create that requires ethical consideration.

Alexander R Pruss (2016-06-15 08:12):
Using the words "genuinely intelligent" and "sapient" was meant to include self-awareness.

Turning off without backing up volatile storage (e.g., RAM) to nonvolatile storage (e.g., disk) could be closely analogous to killing, depending on the details. Turning off without preserving a nonvolatile copy would also be like that.

We might see turning off, even when everything is saved to nonvolatile media, as akin to freezing a person. But freezing a person into cryostorage seems wrong when the person isn't already dying and there is no expectation of thawing.

entirelyuseless (2016-06-15 07:54):
"We would have the sorts of duties towards it that we have towards other persons."

I disagree, because I think that our duties towards other persons depend in many ways on concrete facts about human life and how it works. For example, there is typically no such thing as shutting off a computer "permanently," while killing a human being is normally irreversible.

Anonymous (2016-06-15 00:27):
Let us say that a closed-system A.I. was created; by this I mean a system that has no way to connect to other systems. (I say this to set aside some of the problems that come with this subject.) This A.I. could be taught from its infancy to its "adulthood" much as humans are.

In this case the A.I., for all purposes, is nothing more than a brain that sends its electrical impulses within its hardware, as opposed to the human's organic system.

This system over its lifetime will be powered on and off for the repair and upgrade of components, which can be equated to nothing more than a human going to sleep. For a long-term loss of power, the same train of thought applies: the A.I. just "slept" longer.

This A.I. does not suffer from the problems that can arise in a human from a prolonged loss of brain activity. Problems from trauma, disease, and age, to give a small set of examples, get tossed out the window with this A.I. This closed-system intelligence, as long as a human or even a subsystem could run repairs, could in theory run/live forever. A hard drive fails, gets replaced, and is restored to its backup point with no loss of personality or knowledge and no other harmful effect.

Can we treat this intelligence with the respect due a human being? Absolutely. We could even envy its longevity.

As for your question about whether it is morally right to create this intelligence: to me it would be acceptable to do so. The A.I. would never, and I am paraphrasing, be anywhere near human. Until the A.I. became self-aware, it was nothing more than code that mimicked a sentient being. Because of this, even in the failures leading up to its consciousness, the A.I. has nothing more than the concept of human nature. We also have to remember that we are not creating a human in any way, shape, or form. We would be giving birth to an entirely new category of, well, life.

omar (2016-06-14 19:05):
I think you've been watching too much Person of Interest.