Monday, June 18, 2018

Might machine learning hurt the machine?

In machine learning, the computer generates the parameters of a neural network on the basis of large amounts of data. Suppose we think that computers can be conscious. Are we then in a position to know that any particular training session won’t be unpleasant for the computer? After all, we don’t really know which biological neural configurations, or transitions between them, constitute pain and other forms of unpleasantness. Maybe in the course of learning, among the vast number of changing network parameters, or among the updates between them, there will be some that hurt the computer. Perhaps it hurts, for instance, when the value of the loss function is high.
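
To fix ideas, here is a minimal sketch of the kind of training loop in question, with made-up data and an illustrative one-parameter "network" (nothing here is from any particular system): the loss starts high and the parameter updates gradually drive it down.

```python
# Illustrative gradient-descent loop for a one-parameter model.
# The data, parameter, and learning rate are all hypothetical.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # the single "network parameter"
lr = 0.05  # learning rate

for step in range(10):
    # Mean squared error: high early in training, falling as w improves.
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # Gradient of the loss with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the parameter update
    print(f"step {step}: loss = {loss:.4f}, w = {w:.4f}")
```

The worry above is whether any of the states such a loop passes through, say the early high-loss ones, could constitute something unpleasant for the machine.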

This means that if we think computers can be conscious, we may have ethical reasons to be cautious about artificial intelligence research, not because of the impact on people and other organisms in our ecosystem, but because of the possible impact on the computers themselves. We may need to first solve the problem of what neural states in animals constitute pain, so that we don’t accidentally produce functional isomorphs of these states in computers.

If this line of thought seems absurd, it may be that the intuition of absurdity here is some evidence against the thesis that computers can be conscious (and hence against functionalism).

5 comments:

  1. Brian Tomasik's article "Do Artificial Reinforcement-Learning Agents Matter Morally?" might be of interest:

    https://arxiv.org/abs/1410.8233

    An earlier version of this article by the same author can be found at:

    http://reducing-suffering.org/ethical-issues-artificial-reinforcement-learning/

  2. Dr. Pruss, what are your thoughts on connectionism as a theory of consciousness? I think connectionism is the best theory of consciousness we have. Connectionism entails that there are no temporally unextended token mental states, so if our best theory of consciousness turns out to be correct, no form of tensed time can explain the experience of time.

  3. Even on a classical symbolic theory, I suspect any conscious token mental states would be temporally extended.

    That said, I think "so any form of tensed time cannot explain the experience of time" is too strong. The presentist connectionist can say that your having a quale Q at time t is constituted by your present state at t together with facts about the states you were in a short time ago.
