Showing posts with label racism.

Thursday, January 7, 2021

Two kinds of moral relativism

A moral relativist has a fundamental choice whether to define moral concepts in terms of moral beliefs or non-doxastic moral attitudes such as disapproval.

In my previous post, I argued that defining moral concepts in terms of moral beliefs is logically unacceptable.

I now want to suggest that neither option is really very appealing. Consider first this case:

  1. Bob believes he ought to turn Carl in for being a runaway slave. But his emotions and attitudes do not match that belief. He hides Carl and feels morally good about hiding Carl despite his belief. (Bob may or may not be like Huck Finn.)

A relativist who defines morality in terms of beliefs has to say that Bob is doing wrong in hiding Carl. That seems mistaken. It seems that mere belief is less important than actual attitudes. Thus, if something is to define morality for Bob, it is his attitudes, not his mere beliefs.

So far, we have support for a relativist’s defining moral concepts in terms of non-doxastic moral attitudes. But now consider:

  1. Alice thinks of herself as a progressive, and thinks that racism is wrong. Nonetheless, her moral attitudes do not evince genuine disapproval of racist behavior, say when she is with friends who tell racist jokes.

If we define right and wrong in terms of non-doxastic moral attitudes, then our implicit biases unacceptably affect what is morally right and wrong, so that racist behavior turns out to be permissible for Alice, her beliefs to the contrary notwithstanding.

So, neither approach is satisfactory.

Friday, September 25, 2020

Racism and power structures

Many, but not all, theorists think that racially based discrimination only counts as racism, and its perpetrators as racists, when the discrimination aligns with the power structures in society.

This has the following odd consequence: Simply by defeating Nazis militarily, without in any way changing their hearts or minds, you can turn them into non-racists.

Or imagine a protracted racially-based genocidal civil war between the Xs and the Ys, with the tide of war going back and forth. Sometimes, the Xs are ahead, but then the Ys pull ahead, and then later the Xs regain the lead. On the view in question, whenever the Xs are ahead, their acts of genocide constitute racism, but when they fall behind, it is the Ys that are the racists. (This is like an absurd opposite of the view that history is written by the victors.)

(One might object that genocide doesn’t count as discrimination. But it clearly does.)

(The above argument is only a vivid way of putting a well-known objection, that the alignment view of racism makes for too much dependence of the fact of racism on changing social conditions.)

One can escape from the above arguments by weakening the alignment principle to say:

  • x’s racially based discrimination only counts as racism when the discrimination aligns with the power structures that actually obtain or that x desires.

There will probably still be other counterexamples.

As usual, my disclaimer: this is very far from my main areas of philosophy.

Thursday, September 24, 2020

Discrimination and coin tosses

Bob is deciding whom to hire for a job where race is clearly irrelevant to job performance. There are two clear front-runners. Bob hires the white front-runner because that candidate is white.

Bob has done something very wrong. Why was it wrong? A naive thought is that what he did wrong was to take into account something irrelevant to job performance while deciding whom to hire. But that can’t be right. For suppose that all the job-performance-related facts were on a par as far as Bob could tell. And then suppose that Alice, when dealing with a similar case, just said to herself “Heads, A, and tails, B”, tossed a coin, got tails, and hired candidate B. Alice didn’t do anything wrong. But Alice also made a decision on the basis of something irrelevant to job performance, namely whether the prior heads/tails assignment to a candidate matched the outcome of the coin toss.

In terms of deciding on irrelevancies, the paradigm of a fair tie-breaking procedure—a coin flip—and the paradigm of an unfair tie-breaking procedure—a racist decision—look very similar.

Here is a standard thing to say about this (cf. Scanlon): When the job-performance related facts are tied, and we still have to choose, we just have to choose on the basis of something not related to job performance. But that something had better not be something that forms the basis for large-scale patterns of dominance in society. Both Alice’s and Bob’s procedures are based on something not related to job performance, but Bob’s procedure is an instance of a large-scale social pattern of dominance.

I want to propose an account of why Bob did wrong and Alice did not that seems to me to differ slightly from the standard story (or maybe it just is a version of it). To that end, consider a third story. Carl runs a graduate program where he has to make lots of hard choices about current students, e.g., about travel funding, stipend renewal, lab and office allocation, etc., and these choices often involve ties on the usual academic metrics. (This is not a description of the Baylor philosophy program: we have lots of funding, and have rarely if ever had to break ties over funding.) Carl is lazy and has decided to simplify things for himself by cutting down on the number of coin tosses he has to make. Instead of flipping a coin for each tie, whenever a student is admitted, Carl chooses a random number between one and a thousand and assigns that number to the student, re-rolling the random number generator if that number matches the number of a student already in the program. Thereafter, whenever a tie is to be broken, Carl always breaks the tie in favor of the student with the higher pre-assigned number.

Carl’s tie-breaking procedure is like Alice’s in terms of randomness and lack of alignment with larger social patterns of discrimination. But it’s still a terrible procedure. It’s terrible, because it distributes benefits and burdens in a seriously unequal way: if you got randomly assigned a low number at admission, you are stuck with it and keep on missing out on goodies that people assigned a high number got.
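To make the inequality vivid, here is a minimal Python sketch of the contrast. The cohort size, the number of tied decisions, and the random pairing of tied students are all made-up assumptions for illustration, not details from the post.

```python
import random

random.seed(0)

NUM_STUDENTS = 10   # hypothetical cohort size
NUM_TIES = 1000     # hypothetical number of tied decisions to break

# Carl's scheme: each student gets a distinct random number at admission
# (sampling without replacement plays the role of re-rolling duplicates).
fixed_number = random.sample(range(1, 1001), NUM_STUDENTS)

carl_wins = [0] * NUM_STUDENTS   # ties won under Carl's fixed-number scheme
coin_wins = [0] * NUM_STUDENTS   # ties won under fresh coin flips

for _ in range(NUM_TIES):
    a, b = random.sample(range(NUM_STUDENTS), 2)  # two students tied on the merits

    # Carl: the tie always goes to the student with the higher fixed number.
    carl_wins[a if fixed_number[a] > fixed_number[b] else b] += 1

    # Alice-style: a fresh coin flip for each tie.
    coin_wins[a if random.random() < 0.5 else b] += 1

print("Fixed numbers, wins per student:  ", sorted(carl_wins))
print("Fresh coin flips, wins per student:", sorted(coin_wins))
# Typical pattern: the fixed-number scheme leaves some students with almost
# no wins and others with nearly all of their ties won, while fresh coin
# flips give everyone roughly NUM_TIES / NUM_STUDENTS = 100 wins each.
```

The point is just the pattern: under Carl's scheme the identity of the loser is fixed once and for all at admission, so losses compound, whereas a fresh coin flip never lets past losses predict future ones.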

One can now explain what goes wrong in Bob’s procedure as something rather like what went wrong in Carl’s procedure: given structural racism, the minority candidate whom Bob passed over, call him Dave, has tended to be on the negative side of many other decisions (some of them perhaps being racist tie-breaking decisions, and many of them being even more unjust than that). Bob’s procedure has contributed to Dave having a life of tending to get the short end of the stick, just as Carl’s procedure has led to a number of students having a graduate career with a tendency to get the short end of the stick. And a tendency to get the short end of the stick is something we should (at least typically) not contribute to.

This is close to the standard account of Bob’s racism. It likewise involves the large-scale patterns of dominance in society. But it seems to me also importantly different: the large-scale patterns of dominance in society are relevant to Bob’s action insofar as they make it likely that Dave has been on the unfavorable side of too many decisions. In the graduate program case, there may be no larger social patterns that match the ones within the program (or at least no pre-existing ones), and even within the program there need not be any significant interpersonal patterns of dominance between the persons assigned high numbers and those assigned low numbers, especially if the initial numerical assignments and the tie-breaking procedure are kept secret from the students, who just say things like, “My luck is terrible!” (This is most likely in a program where students are oblivious to their social environment due to a focus on their individual research.)

In the alternate account, the focus is on the individual rather than the group, and the larger social facts are relevant precisely as they have impacted the individual. But this may seem to miss out on a common dimension of invidious discrimination. If I am a member of a group and someone else in the group is unfairly discriminated against, then that is apt to harm me in two ways: first, because I am apt to have a special concern for other members of the group (either because they are members of the group, or because persons more closely related to me tend to be members of the group), and harm to someone I have a special concern for is harm to me, and, second, because seeing someone like me get harmed scares me.

But I think this fits with my individualistic story by just multiplying the number of times that Dave gets the short end of the stick: sometimes he gets the short end of the stick directly and sometimes he gets it indirectly by having someone else in his group get it.

At the same time, I have to say that this is material I know next to nothing about. Take it with a grain of salt.

Tuesday, October 16, 2018

Yet another reason we need social epistemology

Consider forty rational people each individually keeping track of the ethnicities and virtue/vice of the people they interact with and hear about (admittedly, one wonders why a rational person would do that!). Even if there is no statistical connection—positive or negative—between being Polish and being morally vicious, random variation in samples means that we would expect two of the forty people to gain evidence that there is a statistically significant connection—positive or negative—between being Polish and being morally vicious at the p = 0.05 level. We would, further, intuitively expect one of the forty to conclude on the basis of their individual data that there is a statistically significant negative connection between Polishness and vice, and another to conclude that there is a statistically significant positive connection.
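To see that this expectation is roughly right, here is a small Python simulation sketch. The details are assumptions made purely for illustration (each observer scores the "viciousness" of 50 people from each group on a continuous scale and runs a two-sample t-test); nothing here comes from the post except the p = 0.05 threshold.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

NUM_OBSERVERS = 10_000   # simulated rational data-gatherers
SAMPLE_SIZE = 50         # hypothetical: people tracked from each group

significant = more_vicious = less_vicious = 0

for _ in range(NUM_OBSERVERS):
    # Under the null hypothesis there is no real difference between groups.
    polish = rng.normal(0.0, 1.0, SAMPLE_SIZE)
    others = rng.normal(0.0, 1.0, SAMPLE_SIZE)

    _, p = ttest_ind(polish, others)   # two-sided test
    if p < 0.05:
        significant += 1
        if polish.mean() > others.mean():
            more_vicious += 1   # spurious "more vicious" conclusion
        else:
            less_vicious += 1   # spurious "less vicious" conclusion

print(f"significant at p < 0.05:  {significant / NUM_OBSERVERS:.3f}")   # ~0.05, i.e. ~2 in 40
print(f"'more vicious' direction: {more_vicious / NUM_OBSERVERS:.3f}")  # ~0.025, i.e. ~1 in 40
print(f"'less vicious' direction: {less_vicious / NUM_OBSERVERS:.3f}")  # ~0.025, i.e. ~1 in 40
```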

It seems to follow that, at the fairly standard p = 0.05 significance level, we would expect about one in forty rational people to have a rational racist-type view about any particular ethnic, racial, or other group’s virtue or vice (or any other qualities).

If this line of reasoning is correct, it seems that it is uncharitable to assume that a particular racist’s views are irrational. For there is a not insignificant chance that they are just one of the unlucky rational people who got spurious p = 0.05 level confirmation.

Of course, the prevalence of racism in the US appears to be far above the 1/40 number above. However, there is a multiplicity of groups one can be a racist about, and the 1/40 number is for any one particular group. With five groups, we would expect approximately 5/40 = 1/8 (more precisely, 1 − (39/40)^5) of rational people to get p = 0.05 confirmation of a racist-type hypothesis about at least one of the groups. That’s still presumably significantly below the actual prevalence of racism.
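The arithmetic behind that parenthetical figure, under the assumption that the five per-group results are independent and each has a 1/40 chance of coming out in the racist-type direction:

```python
# Chance that at least one of five independent per-group tests comes out
# spuriously in the racist-type direction, each with probability 1/40.
p_at_least_one = 1 - (39 / 40) ** 5
print(p_at_least_one)   # ~0.119, close to the rough estimate 5/40 = 0.125
```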

But in any case this line of reasoning is not correct. For we are not individual data gatherers. We have access to other people’s data. The widespread agreement about the falsity of racist-type claims is also evidence, evidence that would not be undercut by a mere p = 0.05 level result of one’s individual study.

So, we need social epistemology to combat racism.

Thursday, November 30, 2017

Self-sacrifice and bigotry

Consider:
Case 1: A child is drowning in a dirty pond. You can easily pull out the child. But you’ve got cuts all over your dominant arm and the water is full of nasty bacteria and medical help is a week away. If you go in the water to pull out the child, your arm will get infected, become gangrenous and in a week it will be amputated. There will be no social losses or gains to you.
Case 2: A child is drowning in a clean pond. You can easily pull out the child. But the child is a member of a despised minority group, and you will be ostracized by your friends and family for life for your rescue. There will be no physical losses or gains to you.

Here is my intuition. In both cases, it would be a good thing to rescue the child. But in Case 1, unless you have special duties (e.g., it’s your own child), you do not have a duty to rescue given the physical costs. In Case 2, however, you do have a duty to rescue, despite the social costs.

The difference between the two cases does not, I think, lie in its being worse to lose an arm than to be ostracized. Imagine your community has a rite of passage that involves swimming in the dirty pond with the cuts on your arm, and you’d be ostracized if you don’t. You might well reasonably judge it worthwhile—but still, I think, the intuition remains that in Case 2 you ought to pull out the child, while in Case 1 it’s supererogatory. So it seems then you might have a duty to undertake the greater sacrifice (facing social stigma in Case 2) without a duty to undertake the lesser sacrifice (amputation in Case 1). But for simplicity let’s just suppose that the harms to you in the two cases are on par.

Is it that physical harm excuses one from the duty to rescue the child but social harm does not? I don’t think so.

Case 3: A child is being murdered by drowning in a clean pond. You can easily pull out the child. But if you do, the murderer will punish you for it by transporting you away from your home community to a foreign community where you will never learn the difficult language and hence will not have friends.

We can set this up so the harm in all three cases is equal. But my intuition is that Case 3 is like Case 1: in both cases it is supererogatory to rescue the child, and there is no duty to do so.

In Cases 2 and 3 we have equal social harms, but I feel a difference. (Maybe you don’t!) Here’s one consideration that would explain the difference. That an action gains one the praise and friendship of bigots qua bigots does not count much in favor of the action, even if, and perhaps even especially if, such praise and friendship would make one’s life significantly more pleasant. Similarly, that an action loses one the friendship of bigots, and does so precisely through their bigotry, is not much of a consideration against the action. I say “not much”, because there might be instrumental gains and losses in both cases to be accounted for.

Here’s a second consideration. Perhaps if I refrain from doing something directly because doing it will lose me bigots’ friendship or gain me their stigma, I am thereby complicit in the bigotry. In Case 2, then, I need to ignore the direct loss of goods of social connectedness in considering whether to rescue the child. I need to say to myself: “When those are the conditions of their friendship, so much the worse for their friendship.” In Case 3, I have similar social losses, but I don't lose the friendship of bigots qua bigots, so the loss counts a lot more.

But note that one can still legitimately consider the instrumental harms from the loss of goods of social connectedness. Consider:

Case 4: A child is drowning in a clean pond, but you have a wound that will become gangrenous and force amputation absent medical help. You can easily pull out the child. But the child is a member of a despised minority group, and if you rescue the child, the only doctor in town will refuse to have anything to do with you. As a result, your wound will become gangrenous by the time you find another doctor, and you will require amputation.

I think in Case 4, you are permitted not to rescue the child, just as in Case 1.

Monday, November 2, 2015

Empathy and inappropriate suffering

Consider three cases of inappropriate pains:

  1. The deep sorrow of a morally culpable racist at social progress in racial integration.
  2. Someone's great pain at minor "first world problems" in their life.
  3. The deep sorrow of a parent who has been misinformed that their child died.

All three cases are ones where something has gone wrong in the pain. The pain is not veridical. In the first case, the pain represents as bad something that is actually good. In the second, the pain represents as very bad something that is only somewhat bad. In the third, the pain represents as bad a state of affairs that didn't take place. There is a difference, however, between the first two cases and the third. In the third case, the value judgment embodied in the pain is entirely appropriate. In the first two cases, the value judgment is wrong: badly wrong in the first case and somewhat wrong in the second.

Let's say that full empathy involves feeling something similar to the pain that the person being empathized with feels. In the parent case, full empathy is the right reaction by a third party, even a third party who knows that the child has not died (but, say, is unable to communicate this to the parent). But in the racist and first-world-problem cases, full empathy is inappropriate. We should feel sorry for those who have the sorrows, but I think we don't need to "feel their pain", except in a remote way. Instead, what should be the object of our sorrow is the value system that gives rise to the pain, something which the person does not themselves take pain in.

I think that in appropriate empathy, one feels something analogous to what the person one empathizes with feels. But the kind of analogy that exists is going to depend on the kind of pain that is involved. In particular, I think the following three cases will all involve different analogies: morally appropriate psychological pain; morally inappropriate psychological pain; physical pain. I suspect that "full empathy", where the analogy involves significant similarity, should only occur in the first of the three cases.

Tuesday, October 14, 2008

The danger of mixing moral philosophy with coffee: Pride

From How I Found Livingstone, by Sir Henry M. Stanley:

We drew our canoe ashore here, and, on a limited area of clean sand, Ferajji, our rough-and-ready cook, lit his fire, and manufactured for us a supply of most delicious Mocha coffee. Despite the dangers which still beset us, we [presumably, Stanley and Livingstone] were quite happy, and seasoned our meal with a little moral philosophy, which lifted us unconsciously into infinitely superior beings to the pagans by whom we were surrounded—upon whom we now looked down, under the influence of Mocha coffee and moral philosophy, with calm contempt, not unmixed with a certain amount of compassion.