Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives at some minimal level of flourishing will be better than any fixed-size society of individuals who greatly flourish.
I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.
Assume utilitarianism first.
Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible: I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software so as both to build many instances of the hardware and to run as many flourishing lives per day as possible on each instance. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can ensure equal happiness for all.
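To make the replication point vivid, here is a minimal Python sketch (the function, the inputs and the "welfare score" are purely illustrative stand-ins, not a claim about how such a system would actually be built): a deterministic program given the same inputs produces exactly the same run, so every copy lives exactly the same life.

```python
# Toy illustration only: a deterministic stand-in for running one simulated life.
# The "welfare score" is a made-up placeholder, not a model of flourishing.

def simulate_life(inputs):
    """Deterministic stand-in for one simulated life; returns a welfare score."""
    return sum(len(good) for good in inputs) / 10.0

inputs = ("music", "friendship", "meaningful work")
welfares = [simulate_life(inputs) for _ in range(1_000_000)]
# Every exact copy run on the same inputs gets exactly the same outcome.
assert len(set(welfares)) == 1
```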
If strong AI is possible, generating such a minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware so that we can build more computers running the happy-ish software and run each life in as little external time as possible, and to work to increase the amount of flourishing in the software.
Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just minimal levels of human-like flourishing, but high levels of human-like flourishing, and even forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply lives the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say, one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.
It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will be able to increase not only the total good but also the average good more efficiently by running lots of copies of a happy life than by improving human life.
But one thing is unchanged. The conclusion is still repugnant. A picture on which our highest moral imperative is the servicing of a single computer program, run on as many machines as possible and repeated as quickly as possible, is repugnant.
A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.
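Here is a minimal sketch, in Python, of what well-controlled algorithmic variation might look like (the menu of traits, the welfare model and the flourishing floor are all hypothetical placeholders): each copy gets its own seed, and any variant whose modelled welfare falls below a floor is simply redrawn.

```python
# Sketch of well-controlled variation, with hypothetical placeholders throughout:
# per-copy seeds make lives distinct, and a welfare floor keeps the variation
# from sliding into misery.

import random

def make_varied_life(seed, floor=0.8):
    rng = random.Random(seed)  # a distinct seed per copy makes each life distinct
    while True:
        life = {
            "favorite_composer": rng.choice(["Bach", "Mozart", "Brahms"]),
            "vocation": rng.choice(["performer", "teacher", "critic"]),
            "modelled_welfare": rng.uniform(0.7, 1.0),
        }
        # Redraw any variant that falls below the flourishing floor.
        if life["modelled_welfare"] >= floor:
            return life

varied_lives = [make_varied_life(seed) for seed in range(1000)]
```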
Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.
We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.
I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:
1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).
2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.
3. Strong AI is impossible.
I am inclined to think all three are true. :-)
A similar non-hypothetical utilitarian argument would be that individuals should behave like the fictional Zachary Baumkletterer. You should live on the edge of starvation and make sure that every extra penny goes to other starving people on the other side of the world. This is a non-hypothetical situation because anyone could start doing it immediately, if they chose to do so, and standard utilitarian reasoning suggests that it would be a great thing to do.
Yet the conclusion is repugnant, particularly if it says that the proposal is morally obligatory, but even if it only says that it would be a great thing for any normal person to do.
The basic answer is that utilitarianism is false, and the same basic answer applies to your scenario.
Nonetheless, someone could say it would not be a big deal if a few people did this. And in the same way, I would say it would not be a big deal if a few people tried to make many happy AIs.
In the Zachary Baumkletterer situation, the thing people find objectionable (rightly) is that they are devoting all of their resources to strangers that they know nothing about. In the same way, but even more so, the problem people will have with your AI scenario is that all resources are being devoted to non-humans -- things even more remote from them than strangers on the other side of the world. This is a valid consideration in the Baumkletterer situation, and likewise in the AI situation. So the repugnant conclusion is false in both situations, and for similar reasons. Yet the AI situation is worse because the aggravating factor (distance from current humans) is worse.
As long as Baumkletterer isn't neglecting duties to friends and family, and isn't neglecting the good of friendship (friendship generally isn't financially costly, so one can have friendships even while living on the edge financially), I think it *is* a great thing to do. I just don't think it's obligatory.
Note, too, that in the AI case, the efforts would be devoted to the very opposite of “strangers that they know nothing about”. The AIs in question would be very well known to us, because we would have designed their lives. Moreover, they would be akin to our children.
The fact that they are non-human does not seem to me to be ethically all that important. If I lived on Andoria, presumably some of my friends would be human and some would be Andorian. I would have some preferences among my friends, but I don't see why these preferences morally should align along species lines.
Compare: There is a wounded chimpanzee and a wounded dolphin, and I can help only one. The chimpanzee is a fellow primate. The dolphin is not. But does *that* make much of a difference?
Interesting! But I have some worries about the three "ways out" you mention at the end. Re: (1), this seems contingent. Even if we could guarantee significant variation among these simulated lives without creating lots of misery, the conclusion seems just as repugnant (or not *that* much less repugnant). Re: (2), let's grant that there's a deontic restriction on creating software-based persons. Still, we can just ask, from the perspective of axiology, which state of affairs would be better (apart from which actions are needed to bring them about---we could suppose they each emerge by a fluke arrangement of particles): the zillion AI scenario with few or no flourishing humans, or 10 billion flourishing humans? If we say the second state of affairs would be better, then the deontic-restriction move doesn't help. Re: (3), the metaphysical or nomic impossibility of strong AI wouldn't help with the deontic version of the problem. Insofar as consequences are relevant to moral decision making, presumably it's *expected* consequences, understood relative to our credence function (or whatever credence function would be rational given our evidence). So we'll get the problem as long as we (rationally) have a non-zero credence in the strong-AI hypothesis. And surely we should have non-negligible confidence that an appropriately designed AI would be conscious.
Perhaps one answer to the repugnant conclusion is that the type of life which is minimally flourishing has certain necessary conditions: at least one deep human relationship (it is not good that Man should be alone), access to express one’s creativity and autonomy through the arts, philosophy, or self-directed labor (such that one does not experience complete alienation from the self), and the basic requirements of food, etc. that do not make one’s life unbearably harsh (the pious ascetic still meets this condition because her devotion to God makes deprivation of food, etc. bearable). Perhaps one should add a relationship with God as another necessary condition of flourishing.
However, these necessary conditions for minimal flourishing actually provide the sufficient conditions for substantive flourishing (e.g. Paul’s contentedness in “all things,” including prison or material hardship). So the repugnant conclusion ends up not being repugnant under the ex hypothesi stipulation that the lives of persons are minimally flourishing.
However, your objections to the AI scenario are compatible with someone who accepts such an account of flourishing, especially if even strong AI cannot have a relationship with God.
Brian:
Good points.
Ad 1: I am not sure that, with a lot of variety in the lives, I find there to be much that is repugnant. Except perhaps insofar as I think flourishing lives of persons should include some spiritual ingredient that we can't give to an AI.
In any case, I am OK with contingency. I am inclined to accept something like natural law, and on natural law, our moral obligations are indexed to our nature and our moral intuitions come from our nature, while our nature is designed for a particular contingent environment. Consequently, when we consider a hypothetical environment that differs sufficiently greatly from the one we are designed for, we should not be surprised if our moral intuitions produce weird results.
Ad 2: I don't see why the deontic restriction move doesn't help if it turns out that the zillions-of-software-persons scenario is better than the billions-of-human-persons one. My intuition is that we shouldn't neglect humans in order to try to create zillions of software persons, not that the human-persons scenario is better -- bracketing the question of a spiritual ingredient, that is.
Ad 3: I think that expected utilities may not be an issue, because of an asymmetry between producing good and removing harm. For instance, suppose that there is no God and no life after death ordinarily, but aliens have put me in the following situation. I am about to be tortured for ten years. I can now press button A or button B or neither, but not both. If I do neither, nothing changes. If I press A, I escape the torture. If I press B, I gain a one in a million chance of an infinitely long life of full human flourishing. It seems to me that I should press A: the certainty of relieving the great finite suffering beats a tiny chance of producing a great good. Likewise, I am inclined to think that producing an infinite number of AIs on a small chance that they will be happy does not beat relieving one person from ten years of torture.
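To make the asymmetry explicit on a naive expected-utility reading (using the numbers from the example, and assuming an infinitely long flourishing life carries infinite utility):

```latex
\mathbb{E}[\text{press A}] = 1 \cdot U(\text{escaping ten years of torture}) < \infty,
\qquad
\mathbb{E}[\text{press B}] = 10^{-6} \cdot U(\text{infinitely long flourishing life}) = \infty.
```

On that naive reading, B dominates A no matter how great the finite relief; that is precisely the verdict I am rejecting.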
For the same reason, I think Pascal's Wager can fail when the probability of God's existence is sufficiently small and when there are great positive evils that one would on balance suffer if one embraced the life of faith.
Mr. Ellis:
Apart from the relationship with God issue, it seems that all the goods you mention are ones that the software persons could basically be guaranteed to have, if strong AI is possible. We could, for instance, create them in pairs, and make the pairs have a deep and satisfying interpersonal relationship. We could give them creativity, etc. And assuming compatibilism (and I think incompatibilism goes hand in hand with an anti-naturalism that doesn't fit with strong AI), all these goods could be pre-programmed.
Reflecting on the comments and the issues, I am now thinking that a good move might be:
4. The central components of human flourishing involve an indeterministic freedom of the will such that (a) software persons could not have it, or (b) even if they could have it, one would no longer have a high degree of confidence that their lives would go on balance very well for them.
Alex:
Suppose a strong AI has already made the flourishing computer programs. There are billions of them (many more than humans), and they can make more of their own kind. Alas, that AI was poorly designed, so it has also set up a device D1 that will destroy all of those flourishing AI, and another device D2 that will devastate the Earth, killing all or nearly all multicellular organisms, including of course all humans. It will also destroy any crewed spaceship. The AI that did that has already committed suicide (again, bad design), but left one way to stop one of the devices: there are two buttons, B1 and B2, but only one of them can be used; after that, both become inoperative. B1 will stop D1, but then D2 will do its job (the flourishing AI won't be affected). On the other hand, B2 will stop D2, but then D1 will do its job. The defective AI has said this would happen, and all of its previous claims have come out true to the extent one can verify them. Moreover, human scientists with the help of other computers have concluded that the devices will in fact do as claimed.
Suppose B1 or B2 will only work if pushed by Alice. Does she have an obligation to push B1?
I think clearly not. The bottom line is that this sort of scenario is a reductio against utilitarianism, but doesn't seem to tell us anything about the morality of making strong AI or the morality of artificial means of reproduction. The conclusion that there is no obligation does not require that any humans make the AI, and is independent of how we reproduce.
Also, if needed, we can further stipulate that D1 will destroy all the AI, inflicting on each, on average, a similar amount of suffering as humans are expected to suffer, on average, as a result of D2.
My intuition is that we should favor the vastly larger number of persons, as long as the lives are on par, and regardless of species or biologicality. When the numbers are close, we should favor friends and family.
My intuition is that there is no moral obligation to save many strangers over friends and family, though it might be permissible. Also, I'm not sure the AI are persons. What's a person?
The AI are intelligent, they flourish, etc., but I think their minds would likely be pretty alien, and I don't think there is a moral obligation to save aliens over humans. I'm not talking about fictional aliens, like Vulcans or Andorians. Those are for all intents and purposes humans, in the sense that their minds are essentially human minds, with some quirks if you like. I'm talking about truly alien minds, the sort of thing we can't relate to in a significant fashion, who see the world in ways we cannot even begin to fathom. One might stipulate that the AI are not alien in that sense, but I don't know how Alice would figure that out.
But I guess we have different intuitions on the matter.
I would still say there is no obligation to dedicate resources to make happy AI instead of, say, helping humans who suffer, just as there is no obligation to dedicate resources to make the future world better for humans rather than help those who suffer now. Humans in the future are likely to be overall better off than they are now. Why should humans now dedicate their limited resources to help the better off who don't yet exist, instead of the poorer who do?
Similarly, why should humans today help the better off AI of tomorrow instead of the weak and needy human people of today?
This is independent of the morality of artificial insemination or reproduction by other artificial means.