Monday, June 18, 2018

Might machine learning hurt the machine?

Machine learning has the computer generate parameters for a neural network on the basis of a lot of data. Suppose we think that computers can be conscious. I wonder whether we are then in a position to know that any particular training session won’t be unpleasant for the computer. For we don’t really know which biological neural configurations, or transitions between them, constitute pain and other forms of unpleasantness. Maybe in the course of learning, among the vast number of changing network parameters, or the transitions between them, there will be some that hurt the computer. Perhaps it hurts, for instance, when the value of the loss function is high.
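To make the speculation concrete, here is a minimal toy sketch (plain Python, a hypothetical illustration rather than any actual research code) of the kind of training at issue: a single-parameter “network” fitted by gradient descent, where the loss — the quantity that, on the worry above, might correspond to something unpleasant — starts high and falls as the parameters are updated.

```python
# Toy training loop: fit y = w * x by gradient descent on mean squared error.
# (Illustrative only; the data, learning rate, and model are made up.)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0    # the single "network parameter", initially far from correct
lr = 0.02  # learning rate

losses = []
for step in range(100):
    # Mean squared error over the data: high at first, shrinking as w -> 2.
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    losses.append(loss)
    # Gradient of the loss with respect to w, used to update the parameter.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print("initial loss:", losses[0])
print("final loss:", losses[-1])
print("learned w:", w)
```

Early in training the loss is large and every update is driven by it; if high loss (or some transition it induces) were unpleasant, it would be precisely this phase — repeated across millions of parameters in a real network — that would matter morally.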

This means that if we think computers can be conscious, we may have ethical reasons to be cautious about artificial intelligence research, not because of the impact on people and other organisms in our ecosystem, but because of the possible impact on the computers themselves. We may need to first solve the problem of what neural states in animals constitute pain, so that we don’t accidentally produce functional isomorphs of these states in computers.

If this line of thought seems absurd, it may be that the intuition of absurdity here is some evidence against the thesis that computers can be conscious (and hence against functionalism).

1 comment:

Louis Francini said...

Brian Tomasik's article "Do Artificial Reinforcement-Learning Agents Matter Morally?" might be of interest:

https://arxiv.org/abs/1410.8233

An earlier version of this article by the same author can be found at:

http://reducing-suffering.org/ethical-issues-artificial-reinforcement-learning/