Tuesday, June 4, 2024

The Epicurean argument on death

The Epicurean argument is that death considered as cessation of existence does us no harm, since it doesn’t harm us when we are alive (as we are not dead then) and it doesn’t harm us when we are dead (since we don’t exist then to be harmed).

Consider a parallel argument: It is not a harm to occupy too little space—i.e., to be too small. For the harm of occupying too little space doesn’t occur where we exist (since that is space we occupy) and it doesn’t occur where we don’t exist (since we’re not there). The obvious response is that if I am too small, then the whole of me is harmed by not occupying more space. Similarly, then, if death is cessation of existence, and I die, then the whole of me is harmed by not occupying more time.

Here’s another case. Suppose that a flourishing life for humans contains at least ten years of conversation, while Alice has only five years of conversation over her 80-year span of life. When has Alice been harmed? Nowhen! She obviously isn’t harmed by the lack of conversation during the five years of conversation. But neither is she harmed at any given time during the 75 years that she is not conversing. For if she is harmed by the lack of conversation at any given time during those 75 years, she is harmed by the lack of conversation during all of them—they are all on a par, except maybe infancy, which I will ignore for simplicity. But she’s only missing five years of conversation, not 75. So she isn’t harmed over all of the 75 years.

There are temporal distribution goods, like having at least ten years of conversation, or having a broad variety of experiences, or falling in love at least once. These distribution goods are not located at times—they are goods attached to the whole of the person’s life. And there are distribution bads, which are the opposites of the temporal distribution goods. If death is the cessation of existence, it is one of these.
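
If a toy formalization helps: a temporal distribution good is a predicate on a whole life-history, not on any time-slice of it. Here is a minimal sketch (the function and event list are my own illustrative inventions, not anything in the post):

```python
# A temporal distribution good as a predicate on a whole life-history.
# Purely illustrative; names and numbers are invented.

def has_enough_conversation(life_events, threshold_years=10):
    """life_events: list of (activity, years) pairs spanning a whole life."""
    total = sum(years for activity, years in life_events if activity == "conversation")
    return total >= threshold_years

alice = [("conversation", 5), ("non-conversation", 75)]  # an 80-year life
print(has_enough_conversation(alice))  # False: a fact about the whole life,
                                       # not locatable at any particular time in it
```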

I wonder, though, whether it is possible for a presentist to believe in temporal distribution goods. Maybe. If not, then that’s too bad for the presentist.

Friday, November 11, 2022

Species flourishing

As an Aristotelian who believes in individual forms, I’m puzzled about cases of species-level flourishing that don’t seem reducible to individual flourishing. On a biological level, consider how some species (e.g., social insects, slime molds) have individuals who do not reproduce. Nonetheless it is important to the flourishing of the species that the species include some individuals that do reproduce.

We might handle this kind of case by attributing to other individuals their contribution to the reproduction of the species. But I think this doesn’t solve the problem. Consider a non-biological case. There are things that are achievements of the human species, such as having reached the moon, having achieved a four-minute mile, or having proved the Poincaré conjecture. It seems a stretch to try to individualize these goods by saying that we all contributed to them. (After all, many of us weren’t even alive in 1969.)

I think a good move for an Aristotelian who believes in individual forms is to say that “No man or bee is an island.” There is an external flourishing in virtue of the species at large: it is a part of my flourishing that humans landed on the moon. Think of how members of a social group are rightly proud of the achievements of some famous fellow-members: we Poles are proud of having produced Copernicus, Russians of having launched humans into space, and Americans of having landed on the moon.

However, there is still a puzzle. If it is a part of every human’s good that “I am a member of a species that landed on the moon”, does that mean the good is multiplied the more humans there are, because there are more instances of this external flourishing? I think not. External flourishing is tricky this way: its goods don’t always aggregate summatively across people. If they did, then it would have been better for Russia rather than Poland to have produced Copernicus, because there are more Russians than Poles, and so there would have been more people with the external good of “being a citizen of a country that produced Copernicus.” But that’s a mistake: it is a good that each Pole has, but the good doesn’t multiply with the number of Poles. Similarly, if Belgium is facing off against Brazil in the World Cup, it is not the case that it would be way better if the Brazilians won, just because there are a lot more Brazilians who would have the external good of “being a fellow citizen with the winners of the World Cup.”

Monday, May 18, 2020

Gamification

Most philosophers don’t talk much about games. But games actually highlight one of the really amazing powers of the human being: the power to create norms and to create new forms of well-being.

Lately I’ve been playing this vague game with vague rules and vague non-numerical points when out and about:

  • Gain bonus points if I can stay at least nine feet away from non-family members in circumstances in which normally I would come within that distance of them; more points the further away I can be, though no extra bonus past 12 feet.

  • Win game if I avoid inhaling or exhaling within six feet of a non-family member. (And of course I have to be careful that the first breath past the requisite distance be moderate in size rather than a big huff.)
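
Out of the same playful spirit, the rules can be caricatured in code. This is a sketch only; the numeric point values are my invention, since the game’s points are avowedly vague and non-numerical:

```python
# A caricature of the game's rules. Point values are invented, since the
# game's actual points are vague and non-numerical.

def bonus_points(distance_ft, would_normally_be_closer=True):
    """Bonus for staying at least 9 ft away where one would normally come closer."""
    if not would_normally_be_closer or distance_ft < 9:
        return 0
    return min(distance_ft, 12) - 8  # more points out to 12 ft, no extra past that

def game_won(breath_distances_ft):
    """Win if no breath is taken within 6 ft of a non-family member."""
    return all(d > 6 for d in breath_distances_ft)

print(bonus_points(10))          # 2
print(game_won([8, 7, 6.5, 9]))  # True
```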

When the game goes well, it’s delightful, and adds value to life. On an ordinary walk around campus, I almost always win the game now. Last time I went shopping at Aldi, I would have won (having had to hold my breath a few times), except that I think I mumbled “Thank you” within six feet of the checkout worker (if memory serves, I mumbled it quietly, trying to minimize the amount of breath going out, and then stepped back for the inhalation after the words; and of course I was wearing a mask, but it’s still a defeat). Victory, or even near-victory, at the social distancing game is an extra good in life, only available because I imposed these game norms on myself, in addition to the legal and prudential norms that are independent of my will. Yesterday, I think I won the game all day despite going on a bike ride and a hike, attending Mass (we sat in the vestibule, in chairs at least nine feet away from anybody else, and the few times someone was passing by I held my breath), and playing tennis with a grad student. That’s satisfying to reflect on. (At the same time, playing a game also generally adds a bit of extra stress, since there is the possibility, and sometimes actuality, of defeat. And it’s hard to concentrate on the Mass while constantly looking around for someone who might be approaching within the forbidden distance. And, no, I didn’t actually think of it as a game when I was at Mass, but rather as a duty of social responsibility.)

I think the only other person in my family who has gamified social distancing is my seven-year-old.

Wednesday, April 3, 2019

Loving our neighbor as ourselves

Suppose, as some theories of motivation hold, that all our actions are done in pursuit of our flourishing. But the Scriptures tell us that we should love our neighbor as ourselves. Therefore, all our actions should also be done in pursuit of our neighbor’s flourishing. This seems an unreasonably high standard.

There are three ways out:

  1. Deny that all our actions are done in pursuit of our flourishing.

  2. Deny the love ethic of the Old and New Testaments.

  3. Argue that the standard is not unreasonably high.

For me, (2) is not an option. I do think (1) is a serious option for independent reasons.

But I also think (3) is a very promising approach. Reasons to think that the requirement that we be pursuing our neighbor’s flourishing in all our actions is excessive are apt also to be reasons to think that Paul’s requirement that we “pray constantly” (1 Thes. 5:17) is excessive as well. But if all our actions are done in pursuit of our neighbor’s flourishing, and if we see our neighbor as in the image of God, then all our actions might be a kind of prayer, thereby fulfilling Paul’s difficult injunction. And, conversely, if we are praying always, aren’t we going to be always pursuing our neighbor’s flourishing?

We get something similarly onerous to the requirement to pursue our neighbor’s flourishing in Kantian ethics: the requirement always to treat rational beings as ends.

One family of difficult cases, both for the flourishing requirement and the Kantian one, lies in everyday businesslike interactions. To use an example of Parfit’s, you’re buying coffee. It seems that all that is relevant about the barista is that they are supplying coffee. How can you not treat them as a mere means? How can you be pursuing their flourishing? Well, a useful reflection is that we flourish in large part by promoting the wellbeing of others. The barista’s professional activity is a part of their flourishing as a social animal. In courteously buying coffee, one is doing one’s part in an interaction that constitutes a part of that flourishing. Of course, it would be very odd, and likely to lead to pride (“Look at how great I am: I am enabling his flourishing”), if one were to think about this explicitly each time one buys coffee. But courteously making opportunities for others to exercise their professional skills can be a habitual background intention in one’s actions. Similarly, when I bite into a delicious sandwich, my intention to get some enjoyment is not something that I need to think about, but it structures the activity (e.g., it explains why I don’t at the same time pinch myself hard).

A different kind of difficult case is given by activity that adversely impacts the flourishing of others. Morality sometimes requires such actions. Less well qualified applicants need to be turned down and trolleys need to be redirected towards more sparsely occupied tracks. Here I think three things can be done to abide by the flourishing requirement. The first is that one not intend a bad effect on flourishing. One doesn’t turn down the less well qualified applicants in order to negatively impact their flourishing. The second is that while declining the applicants or redirecting the trolley, one should be taking their flourishing into account, by thinking about creative ways to decrease the negative impact on flourishing. Even if no creative ways are found (but isn’t prayer always an option?), the action is chosen as part of a pursuit of the flourishing of those who are harmed by it—but not, of course, as part of the pursuit of only their flourishing. The third is that there is a kind of harm to one if one is benefited immorally. To a morally sensitive person, it feels bad to get a job that another applicant was better qualified for, and it would surely feel awful to have five people die because the trolley operator refused to redirect the trolley away from them for one’s sake. These feelings reflect reality. No human is an island, and when our flourishing is at the expense of those who deserve flourishing more, that is bad for us—even if we don’t know about it. It may not be on balance bad for us, but still it is a bad thing. And so the person who turns down the less qualified candidate or redirects the trolley prevents this bad thing from happening, and this is a positive impact on flourishing.

Monday, May 7, 2018

Heaven and materialism: The return of the swollen head problem

Plausibly, there is a maximum information density for human brains. This means that if internal mental states supervene on the information content of brains and there is infinite eternal life, then either:

  1. Our head grows without bound to accommodate a larger and larger brain, or

  2. Our brain remains bounded in size and either (a) eventually we settle down to a single unchanging internal mental state (including experiential state) which we maintain for eternity, or (b) we eternally move between a finite number of different internal mental states (including experiential states).

For if a brain remains bounded in size, there are only finitely many information states it can have, because of the maximum information density. Neither of options 2a and 2b is satisfactory, because mental (intellectual, emotional and volitive) growth is important to human flourishing, and a single unchanging internal mental state or eternal repetition does not fit with human flourishing.
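
The counting step can be made explicit, as a sketch (ρ and V are schematic symbols I am introducing for the maximum information density and the bound on brain volume):

```latex
\text{information in the brain} \le \rho V \text{ bits}
\quad\Longrightarrow\quad
\#\{\text{possible internal states}\} \le 2^{\lfloor \rho V \rfloor} < \infty .
```

An unending sequence of states drawn from a finite set must, by the pigeonhole principle, either eventually stay at one state (option 2a) or cycle among a finite number of states, visiting some of them infinitely often (option 2b).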

Note, too, that on both options 2a and 2b, a human being in heaven will eventually be ignorant of how long she’s been there. On option 2b, she will eventually also be ignorant of whether it is the first time, the second time, or the billionth that she is experiencing a particular internal mental state. (I am distinguishing “internal mental states” from broad mental states that may have externalist semantics.) This, too, does not fit with the image of eternal flourishing.

This is, of course, a serious problem for the Christian materialist. I assume they won’t want to embrace the growing-head option (1). Probably the best bet will be to say that in the afterlife our physics and biology change in such a way as to remove the information density limits from the brain. It is not clear, however, that we would still count as human beings after such a radical change in how our brains function.

The above is also a problem for any materialist or supervenientist who becomes convinced—as I think we all should be—that our full flourishing requires eternal life. For the flourishing of an entity cannot involve something that is contrary to the nature of a being of that sort. But if 2a and 2b are not compatible with our flourishing, and if 1 is contrary to our nature, then our flourishing would seem to involve something contrary to our human nature.

This is a variant of the argument here, but focused on mental states rather than on memory.

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives at some minimal positive level of flourishing will be better than any fixed-size society of individuals who greatly flourish.
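
On a totalist reading, the arithmetic behind this is a single inequality (m, H, and ε are symbols I am introducing for the size of the flourishing society, its high welfare level, and the minimal positive welfare level):

```latex
n \varepsilon > m H
\quad\text{whenever}\quad
n > \frac{m H}{\varepsilon},
```

so for any fixed m and H, a large enough number n of barely-worth-living lives has the greater total.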

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software both to make lots of instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is run for each instance with the same inputs, we can ensure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.
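
To see why the economics might dominate, here is a deliberately crude back-of-envelope comparison. Every number in it is a made-up assumption for illustration, not a claim about actual costs:

```python
# Crude back-of-envelope for the economies-of-scale point.
# Every number below is a made-up assumption.

BUDGET = 1_000_000_000          # dollars available

HUMAN_COST_PER_YEAR = 10_000    # assumed cost of supporting one human life-year
HUMAN_WELFARE = 1.0             # flourishing units per supported human life-year

AI_COST_PER_YEAR = 100          # assumed cost of running one simulated life-year
AI_WELFARE = 0.2                # minimal human-like flourishing per simulated year

human_total = BUDGET / HUMAN_COST_PER_YEAR * HUMAN_WELFARE  # 100,000 units
ai_total = BUDGET / AI_COST_PER_YEAR * AI_WELFARE           # 2,000,000 units

print(f"supporting humans: {human_total:,.0f} units of flourishing")
print(f"running AI copies: {ai_total:,.0f} units of flourishing")
```

On these invented numbers, the copies win by a factor of twenty even though each simulated year is stipulated to be only a fifth as good as a human one.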

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say, one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then we could just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative as the servicing of a single computer program, run on as many machines as possible, repeatedly and as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Thursday, June 4, 2015

Teleological personhood

It is common, since Mary Anne Warren's defense of abortion, to define personhood in terms of appropriate developed intellectual capacities. This has the problem that sufficiently developmentally challenged humans end up not counting as persons. While some might want to define personhood in terms of a potentiality for these capacities, Mike Gorman has proposed an interesting alternative: a person is something for which the appropriate developed intellectual capacities are normal, something with a natural teleology towards the right kind of intellectual functioning.

I like Gorman's solution, but I now want to experiment with a possible answer as to why, if this is what a person is, we should care more for persons than, say, for pandas.

There are three distinct cases of personhood we can think about:

  1. Persons who actually have the appropriate developed intellectual capacities.
  2. Immature persons who have not yet developed those capacities.
  3. Disabled persons who should have those capacities but do not.

The first case isn't easy, but since everyone agrees that those with appropriate developed intellectual capacities should be cared for more than non-person animals, that's something everyone needs to handle.

I want to focus on the third case now, and to make the case vivid, let's suppose that we have a case of a disabled human whose intellectual capacities match those of a panda. Here is one important difference between the two: the human is deeply unfortunate, while the panda is--as far as the story goes--just fine. For even though their actual functioning is the same, the human's functioning falls significantly short of what is normal, while the panda's does not. But there is a strong moral intuition--deeply embedded in the Christian tradition but also found in Rawls--that the flourishing of the most unfortunate takes a moral priority over the flourishing of those who are less unfortunate. Thus, the human takes priority over the panda because although both are at an equal level of intellectual functioning, this equality is a great misfortune for the human.

What if the panda is also unfortunate? But a panda just doesn't have the range of flourishing, and hence of misfortune, that a human does. The difference in flourishing between a normal human state and the state of a human who is so disabled as to have the intellectual level of a panda is much greater than the total level of flourishing a panda has--if by killing the panda we could produce a drug to restore the human to normal function, we should do so. So even if the panda is miserable, it cannot fall as far short of flourishing as the disabled human does.

But there is an objection to this line of thought. If the human and the panda have equal levels of intellectual functioning, then it seems that the good of their lives is equal. The human isn't more miserable than the panda. But while I feel the pull of this intuition, I think that an interesting distinction might be made. Maybe we should say that the human and the panda flourish equally, but the human is unfortunate while the panda is not. The baselines of flourishing and misfortune are different. The baseline for flourishing is something like non-existence, or maybe bare existence like that of a rock, and any goods we add carry one's flourishing above zero, so if we add the same goods to the human's and the panda's account, we get the same level. But the baseline for misfortune is something like the normal level for that kind of individual, so any shortfall below that level carries one's misfortune above zero. Thus, it could be that the human's flourishing is 1,000 units, and the panda's flourishing is 1,000 units, but nonetheless if the normal level of flourishing for a human is, say, 10,000 units (don't take either the numbers or the idea of assigning numbers seriously--this is just to pump intuitions), then the human has a misfortune of 9,000 units, while the panda has a misfortune of 1,000 units.
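
The two-baseline idea can be put mechanically, as a sketch. The numbers are the post's own toy numbers; the panda's normal level of 2,000 is back-computed from its stated flourishing (1,000) and misfortune (1,000):

```python
# The two-baseline idea, mechanically. The numbers are the post's toy
# numbers; the panda's normal level (2,000) is back-computed from its
# stated flourishing (1,000) and misfortune (1,000).

def flourishing(goods):
    return goods  # baseline: bare existence (or non-existence) = 0

def misfortune(goods, normal_for_kind):
    return max(normal_for_kind - goods, 0)  # baseline: the kind's normal level

human_goods, human_normal = 1_000, 10_000
panda_goods, panda_normal = 1_000, 2_000

print(flourishing(human_goods), flourishing(panda_goods))  # 1000 1000: equal flourishing
print(misfortune(human_goods, human_normal))               # 9000: deeply unfortunate
print(misfortune(panda_goods, panda_normal))               # 1000: mildly unfortunate
```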

This does, however, raise an interesting question. Maybe the intuition that the flourishing of the most unfortunate takes a priority is subtly mistaken. Maybe, instead, we should say that the flourishing of those who flourish least should take a priority. In that case, the disabled human doesn't take a priority over the panda. But this is mistaken, since by this principle a plant would take priority over a panda, since the plant's flourishing level is lower than a panda's. Better, thus, to formulate this in terms of misfortune.

What about intermediate cases, those of people whose functioning is below a normal level but above that of a panda? Maybe we should combine our answers to (1) and (3) for those cases. One set of reasons to care for someone comes from the actual intellectual capacities. Another comes from misfortune. As the latter reasons wane, the former wax, and if all is well-balanced, we get reason to care for the human more than for the panda at all levels of the human's functioning.

That leaves (2). We cannot say that the immature person--a fetus or a newborn--suffers a misfortune. But we can say this. Either the person will or will not develop the intellectual capacities. If she will, then she is a person with those capacities when we consider the whole of the life, and perhaps therefore the reasons for respecting those future capacities extend to her even at the early stage--after all, she is the same individual. But if she won't develop them, then she is a deeply unfortunate individual, and so the kinds of reasons that apply in case (3) apply to her.

I find the story I gave about (2) plausible. I am less convinced that I gave the right story about (3). But I suspect that a part of the reason I am dissatisfied with the story about (3) is that I don't know what to say about (1). However, (1) will need to be a topic for another day.

Friday, April 10, 2015

Integration

It sure seems that:

  1. A good human life is an integrated human life.

But suppose we have a completely non-religious view. Wouldn't it be plausible to think that there is a plurality of incommensurable human goods, and that the good life encompasses a variety of them, but they do not integrate into a unified whole? There is friendship, professional achievement, family, knowledge, justice, etc. Each of these constitutively contributes to a good human life. But why would we expect there to be a single narrative into which they should all integrally fit? The historical Aristotle, of course, did have a highest end, the contemplation of the gods, available in his story, and that provides some integration. But that's religion (though natural religion: he had arguments for the gods' existence and nature).

Nathan Cartagena pointed out to me that one might try to give a secular justification for (1) on empirical grounds: people whose lives are fragmented tend not to do well. I guess this might suggest that if there is no narrative that fits the various human goods into a single story, then one should make one, say by expressly centering one's life on a personally chosen pattern of life. But I think this is unsatisfactory. For I think that the norms that are created by our own choices for ourselves do not bear much weight. They are not much beyond hobbies, and hobbies do not bear much of the meaning of human life.

So all in all, I think the intuition behind (1) requires something like a religious view of life.

Friday, March 21, 2014

The human animal and the cerebrum

Suppose your cerebrum were removed from your skull and placed in a vat in such a way that its neural functioning continued. Where, then, are you: in the vat, or where the cerebrum-less body with heartbeat and breathing is?

Most people say you're in the vat. So persons go with their cerebra. But the animal, it seems, stays behind—the cerebrum-less body is the same animal as before. So, persons aren't animals, goes the argument.

I think the animal goes with the cerebrum. Here's a heuristic.

  • Typically, if an organism of kind K is divided into two parts A and B that retain much of their function, and the flourishing of an organism of kind K is to a significantly greater degree constituted by the functioning of A than that of B, then the organism survives as A rather than as B.

Uncontroversial case: If you divide me into a little toe and the rest of me, then since the little toe's contribution to my flourishing is quite insignificant compared to the rest, I survive as the rest. More controversially, the flourishing of the human animal is to a significantly greater degree constituted by the functioning of the cerebrum than of the cerebrum-less body, so we have reason to think the human animal goes with the cerebrum.

Another related heuristic:

  • Typically, if an organism of kind K is divided into two parts A and B that retain much of their function, and B's functioning is significantly more teleologically directed to the support of A than the other way around, then the organism survives as A rather than as B.

My heart exists largely for the sake of the rest of my body, while it is false to say that the rest of my body exists largely for the sake of my heart. So if I am divided into a heart and the rest of me, as long as the rest of me continues to function (say, due to a mechanical pump circulating blood), I go with the rest of me, not the heart. But while the cerebrum does work for the survival of the rest of my body, it is much more the case that the rest of the body works for the survival of the cerebrum.

There may also be a control heuristic, but I don't know how to formulate it.

Tuesday, July 10, 2012

At least on average, a human life is good

A stranger is drowning. You know nothing about the stranger other than that the stranger is drowning. You can press a button, and the stranger will be saved, at no cost to yourself or anybody else. What should you do?

Of course you ought to press the button. That's simply obvious.

But it wouldn't be obvious if at least on average a human life weren't good, weren't worth living. If on average, a human life were bad, were not worth living, you would have to seriously worry about the likely bad future that you would be enabling by saving the stranger. It still might well be right to pull out the stranger, but it wouldn't be obvious. And if on average a human life were neutral, it wouldn't be obvious that it's a duty.
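
In crude expected-value terms (a sketch in my notation, not the post's): let V be the value of the stranger's remaining life. Then, since pressing is costless,

```latex
\mathbb{E}[\text{press}] - \mathbb{E}[\text{don't press}] \approx \mathbb{E}[V],
```

and pressing is obviously obligatory only if E[V] is clearly positive, that is, only if at least on average a human life is good.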

So our judgment that obviously a random stranger should be saved commits us to judging that at least on average a human life is good (or at least will be good).

Now suppose we get exactly one of the following pieces of information:

  • The stranger is a member of a downtrodden minority.
  • The stranger is currently a hospital patient (and is drowning in the bathtub of the hospital room).
  • The stranger's mother did not want him or her to be conceived.
  • The stranger is economically in the bottom 10% of society.

None of these pieces of information makes it less obvious that we should save the stranger's life. This judgment, then, commits us to judging that the life of a member of a downtrodden minority, or of a hospital patient, or of someone whose mother did not want him or her to be conceived, or of someone economically in the bottom decile is at least on average good.

Suppose, however, that we get some more specific information, such as that the stranger is suffering horrendous pain that cannot in any way be relieved, or that the stranger will tomorrow be tortured to death. It may still be right to save the stranger, but it is no longer obvious that that's the right thing to do. So the on-average judgments above aren't simply derivative from a general judgment that all human life is worth living or from a deontic judgment that any drowning person who can easily be saved should be saved.

So, not only is the average human life worth living, but the average human life in conditions of significant adversity (being a downtrodden minority member, etc.) is worth living.

Now, I happen to think that every human life is worth living. But in this post I've only argued for a weaker claim.

Monday, October 4, 2010

Theism and flourishing

  1. (Premise) No life centered on love of someone who never exists is a flourishing life.
  2. (Premise) Some people lead a flourishing life centered on love of God.
  3. Therefore, it is false that God never exists.
  4. (Premise) God either always exists or never exists.
  5. Therefore, God always exists.

(I use "always" and "never" in such a way that a timeless being would count as existing always.)

One thing that is interesting about this argument is that it makes it harder to be a "sympathetic atheist/agnostic"—someone who, while not believing in God, can positively evaluate the lives of theists.
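
As a sanity check, the argument is valid in ordinary propositional logic. Here is a minimal formalization, a sketch in Lean 4 (the proposition letters N, A, and F are labels I am introducing, not the post's):

```lean
-- N: God never exists; A: God always exists;
-- F: some people lead a flourishing life centered on love of God.
-- Premise 1, applied to God, gives F → ¬N; premise 2 gives F; premise 4 gives A ∨ N.
example {N A F : Prop} (p1 : F → ¬N) (p2 : F) (p4 : A ∨ N) : A :=
  p4.resolve_right (p1 p2)  -- conclusion 5: God always exists
```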

It's also interesting that there does not seem to be a parallel argument for atheism. One might try:

  6. (Premise) No life centered on the denial of someone who exists is a flourishing life.
  7. (Premise) Some people lead a flourishing life centered on the denial of God.
  8. Therefore, God does not exist.

But I deny (7). The denial of God is a merely negative attitude, and as such is not fitted out for being the center of a flourishing life, regardless of whether God exists. Those atheists who lead a flourishing life make their life be centered on something other than the denial of God—friendship, the pursuit of truth, etc.

This points to an important asymmetry between theism and atheism. If theism is true, one's life should be centered on theism. But if atheism is true, one's life need not be centered on atheism, but rather on valuable things like friendship and understanding.