Perhaps the deepest question about human beings is about the source of our dignity. What feature of us is it that grounds our dignity, gives us a moral status beyond that of brute animals, provides us with a worth beyond market value, makes us into beings to be respected no matter the stakes?
I was thinking about the proposal (from the Kantian tradition, but rather simplified) that it is our ability to set ends for ourselves that is special about humans. But as I have put it, the proposal is obviously inadequate. Suppose I take our Roomba and program it to choose a location in its vicinity at random and then try to find a path to that location using some path-finding algorithm. A natural way to describe the robot's functioning then is this: the robot set an end for itself and then searched for means appropriate to that end. So on the simple end-setting proposal, the robot should have dignity. But that's absurd: even if one day someone makes a robot with dignity, we're not nearly there yet, and yet what I've described is well within our current capabilities (granted, one might want to stick a Kinect on the Roomba to do it, since otherwise one would have to rely on dead reckoning).
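The robot I describe is easy to sketch. Here is a toy illustration of the two-step story--random "end-setting" followed by means-seeking--using a grid world and breadth-first search; the grid, the start cell, and all names are my own hypothetical stand-ins, not anything from a real Roomba:

```python
import random
from collections import deque

def set_random_end(grid):
    """'End-setting': pick a free cell of the grid at random."""
    free = [(r, c) for r, row in enumerate(grid)
            for c, cell in enumerate(row) if cell == 0]
    return random.choice(free)

def find_path(grid, start, goal):
    """Means-seeking: breadth-first search for a path to the chosen end."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path to the chosen end

# 0 = free floor, 1 = obstacle
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
goal = set_random_end(grid)
path = find_path(grid, (0, 0), goal)
```

On the simple proposal, that is all end-setting takes--which is exactly the problem.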
Perhaps, though, my end-setting Roomba wouldn't have enough of a variety of ends. After all, all its ends are of the same logical form: arrive at such and such a location. Maybe the end-setting theory needs the dignified beings to be able to choose between a wider variety of ends. Very well. There is a wide variety of states of the world that can be described with the Roomba's sensors, and we can add more sensors. We could program the Roomba to choose at random a state of the world that can be described in terms of actual and counterfactual sensor values and then try to achieve that end with the help of some simple or complex currently available algorithm. Now, maybe even the variety of ends that can be described using sensors isn't enough for dignity. But now the story is starting to get ad hoc, as we embark on the hopeless task of quantifying the variety of ends needed for dignity.
And surely that's not the issue. The problem is, rather, with the whole idea that a being gets dignity just by being capable of choosing at random between goals. Surely dignity would require not just choice of goals, but rational choice of goals. But what is this rationality in the choice of goals? Well, there could be something like an avoidance of conflicts between goals. However, that surely doesn't do much to dignify a being. If the Roomba chose a set of goals at random, discarding those sets that involved some sort of practical conflict (the Roomba--with some hardware upgrade, perhaps--could simulate pursuing the set of goals and see whether the set is jointly achievable in practice), that would be cleverer, but it wouldn't be dignified.
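Even the cleverer, conflict-avoiding Roomba is a few lines of code. In this sketch I replace the simulation with a crude hypothetical stand-in--two ends conflict if they demand different values of the same sensor--and the sensor names are illustrative:

```python
import random

def has_conflict(goal_set):
    """Crude stand-in for simulating joint pursuit: two ends conflict
    if they demand different values of the same sensor."""
    demanded = {}
    for sensor, value in goal_set:
        if sensor in demanded and demanded[sensor] != value:
            return True
        demanded[sensor] = value
    return False

def choose_goal_set(sensors, values, size=3):
    """Random end-setting with a consistency filter: draw goal sets
    at random, discarding those with internal conflicts."""
    while True:
        goals = [(random.choice(sensors), random.choice(values))
                 for _ in range(size)]
        if not has_conflict(goals):
            return goals

goals = choose_goal_set(["bump", "cliff", "wall"], [0, 1])
```

The filter makes the choice more coherent, but the choice itself is still just random draw plus rejection.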
And I doubt that even more substantive constraints would make random end-setting a dignity-conferring property. For there is nothing dignified about choosing randomly between options. There might be dignity in a being that engaged in random end-setting subject to moral constraints, but the dignity would be grounded not in the end-setting as such, but in the being's subjection of its procedures to moral constraints.
The randomness is a part of the problem. But of course replacing randomness with determinism makes no difference. We could specify some deterministic procedure for the Roomba to make its choice--maybe it sorts the descriptions of possible ends alphabetically and always chooses the third one on the list--but that would be nothing special.
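The deterministic rule just described is almost too simple to need code, but for contrast with the random version (the end descriptions are my own made-up examples):

```python
def deterministic_end_choice(end_descriptions):
    """The toy deterministic rule: sort the descriptions of the
    possible ends alphabetically and always choose the third."""
    ordered = sorted(end_descriptions)
    return ordered[2]  # the third item on the sorted list

ends = ["reach the kitchen", "dock and recharge",
        "circle the rug", "bump the wall twice"]
chosen = deterministic_end_choice(ends)
```

Swapping `random.choice` for `sorted(...)[2]` changes nothing of moral significance, which is the point.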
If end-setting is to confer dignity, the being needs to set its ends not just subject to rational constraints, but actually for reasons. Thus there must be reasons prior to the ends, reasons-to-choose and not just constraints-on-choice. However, positive reasons embody ends. And so in a being whose end-setting makes it dignified, this end-setting is governed by prior ends, the ends embodied in the reasons the being is responsive to in its end-setting. On pain of vicious regress, such a being must be responsive to ends that it did not choose. Moreover, for this to be dignity-producing, surely the responsiveness needs to be to these ends as such. But "an end not chosen by us" is basically just the good. So these beings must be responsive to the good as such.
At this point, however, it becomes less and less clear that the choice of ends is doing all that much work in our story about dignity, once we have responsiveness to the good as such in view. For this responsiveness now seems a better story about what confers dignity. (Though perhaps still not an adequate one.)
Objection: No current robot would be capable of grasping ends as such and hence cannot adopt ends as such.
Response: Sure, but can a two-year-old? A two-year-old can adopt ends, but does it cognize the ends as ends?