A popular criticism of valuing equality in and of itself is that one can achieve equality, say in utility, simply by bringing down everybody who is above the level of the least happy member of the community, which is plainly undesirable. I am ashamed to say that I've used this criticism myself in the past.

But the criticism only applies to a naive view where equality is considered in a binary way—you either have it or you don't, and there is a value in having it and a disvalue in not having it. But of course on any non-naive view, equality is valued along a continuum—a minor inequality has small disvalue, while a large inequality has large disvalue. If one takes this into account, it's easy to come up with ways of weighing the value of equality, or equivalently the disvalue of inequality, that are not subject to the above criticism.

For instance, suppose we have *n* persons, with utilities: *u*_{1},...,*u*_{n}. Standard consequentialism calculates an overall value of *u*_{1}+...+*u*_{n}. But there are many ways of modifying this so that one (a) takes equality into account, and (b) avoids the popular criticism. Now, the popular criticism is, I think, based on the following intuition:

(1) It is good if the utility of some is increased and the utility of none is decreased.

*u*_{1}+...+*u*_{n}−*c*_{n}(|*u*_{1}−*a*|+...+|*u*_{n}−*a*|), where *a* is the arithmetic average (*u*_{1}+...+*u*_{n})/*n*, and *c*_{n} is a constant such that 0 < *c*_{n} < 1/2. You get a different model, with different normative consequences, for different values of *c*_{n}. It's easy to check that increasing any one of the *u*_{i} increases the total good on this model[note 1] and so we have (1) and (b). It is never the case that on this valuation, decreasing the utility of some without increasing the utility of any will improve total good—thus, leveling down is not something to worry about. Moreover, equality is taken into account—a more equal distribution is, ceteris paribus, preferable even if it decreases *u*_{1}+...+*u*_{n} (but not preferable if it decreases each individual utility, or even some individual utilities without increasing others).
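To make the valuation concrete, here is a minimal Python sketch (mine, not from any particular source; the name `total_good` and the default value of *c* are illustrative). The bound *c* < 1/2 is what does the work: changing one utility by δ changes the absolute-deviation sum by strictly less than 2δ, so with *c* < 1/2 the utilitarian term always dominates and raising anyone's utility raises the total.

```python
def total_good(utilities, c=0.25):
    """Utilitarian sum minus an inequality penalty.

    Requires 0 < c < 1/2; that bound guarantees that raising any
    single utility always raises the total good.
    """
    assert 0 < c < 0.5
    a = sum(utilities) / len(utilities)  # arithmetic average
    penalty = c * sum(abs(u - a) for u in utilities)
    return sum(utilities) - penalty

# Raising one person's utility always helps, so leveling down never wins:
assert total_good([1.0, 5.0, 10.0]) > total_good([1.0, 5.0, 9.0])

# But a perfectly equal distribution can beat a slightly larger sum:
assert total_good([5.0, 5.0]) > total_good([1.0, 9.1])
```

The second assertion illustrates clause (a): the unequal pair sums to 10.1 but carries a penalty of 0.25 × (4.05 + 4.05) ≈ 2.03, leaving it below the equal pair's 10.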

All that said, I think such numerical models are not something to take very seriously. Here's one reason. While we might think that there is an objective answer to the question: "What is the mass of the electron?", and that this might be some number, the idea that there should be an objective answer to the question: "What is the true value of *c*_{n} in the above total-good formula?" is very implausible to me. And all such additive formulae assume commensurability of goods between people, which I deny. But the models may still be useful, say for showing how an advocate of equality might avoid the leveling-down criticism.

## 3 comments:

One interesting consideration is that it seems difficult to treat (at least) distributional equality in a non-instrumental way—see Joseph Raz here, http://ssrn.com/abstract=1288545, for instance.

Thanks for the reference. I knew there was a literature on this, but since this is a blog and not a scholarly journal, I don't usually try to chase it down. I just post ideas as they occur to me, sometimes in fields that I work in and sometimes, as in this case, in fields I know just about nothing about. Anyway, here are my off-hand reactions to the Raz piece.

1. Here is an interesting pair of cases in the Raz paper: "In the first case we know that at some future time Jane will be the only person alive. We can do something which will make sure that she will not be hungry. In the second case we know that at some future time both Jane and John, but no one else, will be alive. Whatever we do John will not be hungry. There is something we can do which will make sure that Jane is not hungry. In this second case we can act in order to achieve equality (in freedom from hunger), but we cannot do so in the first case, in which no distribution can be either equal or unequal. The good of avoiding hunger is achievable in both. Those who think that the reason to protect Jane from hunger in the first case is the same as the reason to protect her from hunger in the second case show in that that they take the avoidance of hunger rather than equality as the good of the distribution."

It's kind of fun to see what my model makes of this. Let's set c_2 = 1/3. Suppose that the people's utility when hungry is -10 and when sated it's 0.

So, in case 1, utility if Jane is fed is 0, and else it's -10.

In case 2, if Jane is fed, both utilities are 0 and the total value is 0. If Jane is not fed, the utilities are −10 and 0, the average is a = −5, and the total value is −10 − (1/3)(|−10−(−5)| + |0−(−5)|) = −10 − 10/3 ≈ −13.33.

So my model for egalitarianism suggests that in case 2 we have a slightly stronger reason to feed Jane.
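The arithmetic for the two cases can be checked with a few lines of Python (a quick sketch; `total_good` is my hypothetical helper mirroring the formula above, not anything from the post or the Raz paper):

```python
def total_good(utilities, c):
    # Utilitarian sum minus c times the sum of absolute deviations from the mean.
    a = sum(utilities) / len(utilities)
    return sum(utilities) - c * sum(abs(u - a) for u in utilities)

HUNGRY, SATED = -10.0, 0.0
C2 = 1 / 3  # the constant chosen above for the two-person case

# Case 1: Jane alone (n = 1, so the penalty term vanishes).
case1_unfed = total_good([HUNGRY], C2)          # -10

# Case 2: Jane and John; John is fed no matter what we do.
case2_unfed = total_good([HUNGRY, SATED], C2)   # -10 - (1/3)*(5 + 5)

# The cost of leaving Jane hungry is larger in case 2 than in case 1:
assert case2_unfed < case1_unfed
```

The inequality penalty only bites in case 2, where there is a second person for Jane's hunger to be unequal to.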

My feeling is that this result just about matches a lot of people's intuitions--in case 2, Jane's hunger is a little bit worse than in case 1, but not *much* worse. (And these intuitions could come from a cognitive distortion--contrast can increase the apparent badness of something.)

2. Raz argues against the intrinsic value of equality in a rather nice way:

"equality is of intrinsic value only if it can benefit people." But equality, in and of itself, does not benefit people. We learn this through a subtler version of the leveling-down argument than the one I was refuting. Hence equality is not of intrinsic value.

I think an egalitarian could respond as follows: There could be a global aesthetic value in certain distributions of goods. For instance, suppose it turns out that when you plot a scatter graph of happiness versus height, the graph looks exactly like a Duerer etching. Then the distribution of happiness is *beautiful*. Well, a distribution of goods that exhibits *perfect* equality might also be beautiful in a certain way--it elegantly satisfies a symmetry condition, after all. Or, at least, it could exhibit a value that has similar properties to the value of beauty. (Egalitarians, I think, want to say that more equal distributions are always better than less equal ones. That is different from seeing a value in perfect equality.)

Now, Raz will say that the aesthetic value in the happiness-height graph could be appropriately appreciated by a person, and hence could benefit the person. But I think this doesn't help. The graph is not beautiful because it can be appropriately appreciated by a person, but it can be appropriately appreciated by a person because it is beautiful. Likewise, if equality is good, then it can be appropriately appreciated by a person, thereby benefiting that person, but of course it is not good because it can be thus appreciated.

All that said, I do not think a more equal distribution *is* particularly good. Certainly, I think it is good for there to be a great chain of being, with angels being capable of greater happiness than we are, and us being capable of greater happiness than dogs are.

"It is never the case that on this valuation, decreasing the utility of some without increasing the utility of any will improve total good—thus, leveling down is not something to worry about."

Even if that is so, we might have a violation of the intuition in (1).

(1) It is good if the utility of some is increased and the utility of none is decreased.

The intuition behind (1) is that anytime we can increase the utility to some without decreasing the utility to any, we have not arrived at a Pareto optimal outcome. But you seem to allow movement away from Pareto optimality in cases where decreasing the utility to some is offset by increasing the utility to others. That does not offhand seem consistent with (1).
