Wednesday, April 27, 2005

Utility, Equality and Priority

Is inequality somehow intrinsically bad? Derek Parfit has a wonderful article called 'Equality and Priority' where he explores some common intuitions on the subject. I want to describe and elaborate on some issues he raises.

There's no question that inequality can be instrumentally bad, e.g. by causing people to feel envious or otherwise unhappy. But that's not what is at issue here. To avoid such confounding influences, Parfit asks us to imagine a Divided World, where the two halves have no contact or knowledge of each other. Now consider two possible states of affairs, where numbers refer to 'utiles' or units of well-being:

(1) Half at 100, Half at 200.
(2) Everyone at 145.

Which world is better? It's important to note that the numbers refer to utility, not money or resources, and thus have already been adjusted for diminishing marginal utility. ($100 has more utility to a homeless man than to a millionaire.) So try not to let intuitions about DMU influence your judgment.
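The idea of diminishing marginal utility can be made concrete with a toy model. The logarithmic utility function below is purely illustrative (nothing in the argument depends on this particular function); it just shows how the same $100 yields a much smaller well-being gain at a higher wealth level:

```python
import math

def log_utility(wealth):
    """Toy model: utility grows logarithmically with wealth."""
    return math.log(wealth)

def gain_from_gift(wealth, gift=100):
    """Extra utility from receiving `gift` dollars at a given wealth level."""
    return log_utility(wealth + gift) - log_utility(wealth)

# The same $100 means far more to someone with $500 than to a millionaire:
poor_gain = gain_from_gift(500)        # ~0.18
rich_gain = gain_from_gift(1_000_000)  # ~0.0001
```

This is why the worlds above are stated in utiles rather than dollars: the conversion from resources to well-being has already been done.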

I'm happy to go with pure utilitarianism here and say that (1) is better. But many people would disagree with me. So let's look at what such a judgment would commit us to.

Many people prefer (2) because they believe it is more important to benefit those who are badly off than those who are already well-off -- even if the latter would benefit slightly more. They are concerned with people's prior level of welfare. But there are two ways this could be understood: we might be concerned with people's relative levels of welfare (egalitarianism), or instead with their absolute levels (prioritism).

The Priority View says that "Benefiting people matters more the worse off these people are." Note that prioritists do not care (intrinsically) about equality. Unlike egalitarians, they do not claim that inequalities are intrinsically bad.

It may be difficult to see the difference here. It is subtle but important. Relative inequalities can be remedied by making everyone equally badly off ('leveling down'), but this does nothing to improve anyone's absolute well-being. The fact that egalitarians must count such a change as in one respect an improvement is known as the "Leveling Down Objection." Compare the following two possibilities:
(3) Everyone at 100
(4) Half at 100, Half at 200

Egalitarians would say that there is some respect in which (3) is better than (4). They can still agree that (4) is best overall, since the gain in utility may outweigh the badness of the inequality. But they still think there is some respect in which moving from (3) to (4) makes things worse -- even though it harms no one. This is not plausible. We should accept the Person-affecting View: "nothing can be bad if it is bad for no one." Egalitarianism is inconsistent with this claim, and should be rejected.

Now, let us distinguish two forms of moral concern: the 'good' (i.e. value which inheres in states of affairs), and the 'right' (i.e. what action someone ought to take). This is important because, although Consequentialists claim that it is right to maximize the good, not everyone accepts this claim. So the two judgments could come apart. It would be good if one person died [by coincidence, say] so five could live. But it would not be right to kill that person for the sake of the five.

If we think that (2) is better than (1), that is a claim about the good. If we think that, given a choice to bring about one or the other, we ought to choose (2), this is a claim about what is right.

Telic egalitarians/prioritists claim that equality/priority affects the value of a state of affairs. Deontic egalitarians/prioritists deny this and instead claim that equality/priority only affects what is the right thing to do.

If we accept the person-affecting view, then the value of a state of affairs is a function of the well-being of the people in it. But there are various different aggregation principles we might use. Utilitarians simply sum up the total, counting all individuals equally. Telic Prioritists, by contrast, hold that a unit of well-being contributes more value the worse off its recipient is. They think that the pair (10, 10) yields more value than (2, 20).

We might formalize this by adding a 'disvalue' that is proportional to the reciprocal of someone's well-being value. This would capture numerically the idea that the suffering of those with poor welfare matters very much. For example, the pair (10,10) adds priority disvalue proportional to 1/10 + 1/10 = 0.20. Compare this to (2, 20), which yields 1/2 + 1/20 = 0.55. We then multiply these by the appropriate constant (let's pretend it is 10), and subtract the result from the summed utilities. Then (10,10) yields total value of 20 - 2 = 18, whereas (2,20) yields 22 - 5.5 = 16.5.
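The arithmetic above can be sketched in a few lines, using the post's illustrative weighting constant of 10 (the choice of constant is arbitrary, not principled):

```python
def utilitarian_value(welfares):
    """Utilitarians simply sum well-being, counting everyone equally."""
    return sum(welfares)

def prioritist_value(welfares, weight=10):
    """Telic prioritist sketch: subtract a 'disvalue' proportional to
    the reciprocal of each person's well-being, so low absolute
    welfare is heavily penalized."""
    disvalue = weight * sum(1 / w for w in welfares)
    return sum(welfares) - disvalue

# Reproducing the numbers above:
prioritist_value([10, 10])  # 20 - 10*(0.1 + 0.1)  = 18.0
prioritist_value([2, 20])   # 22 - 10*(0.5 + 0.05) = 16.5
```

Note that this scheme attaches no intrinsic value to equality as such; the equal pair comes out ahead only because low absolute welfare is so heavily penalized.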

Hopefully that gives some indication of how a telic prioritist could say that world (2) has more value than world (1), despite (1) having a greater sum of personal utility. I'm not sure how plausible it is, though. I think that deontic prioritism is more plausible: we ought to focus on benefiting the worst off, not because it would result in a better state of affairs [it would not], but because it's the right thing to do. (I'll discuss possible reasons for claiming this in a future post.)

Utilitarians have a different answer, of course. We should not intrinsically care more about benefiting the badly off. They don't matter more than anyone else. It's just that, due to diminishing marginal utility, it's a whole lot easier to benefit people who are badly off. We can make more of a difference there. We should relieve poverty because that's an effective way to maximize total utility, not because the welfare of the worst off is more important than that of the rest of us.

3 comments:

  1. er... since no one else seems to be commenting at the moment ..
You certainly like to analyze things, Richard! But is not the ideal solution for a real world outside your assumed 'framework'? i.e. the best is everyone at 200, or whatever the highest achievable score is!

    Which leads to the next question (as you seem to approach at different times): just exactly what would constitute the best for each individual? I mean fully the best for a human life.
    And the next: what system or arrangement between people is necessary that would allow the pursuit of this 'best', so that any impact on others (because all activity, and even property ownership, does impact on others, sometimes positively, sometimes negatively) did not impact adversely? Or at least was seen to operate in a fair way.
    However if you want to limit it to the arrangements you suggest I simply can't answer.
    Hmm. other than to suggest that since the supposed higher scorers have so much more than they 'truly need' it may even be better for them as humans to experience the pangs of sacrifice for the sake of low scorers!
    or something.
    cheers.

    ReplyDelete
  2. Certainly the best is for everyone to be perfectly well-off. But that might not be a viable option. So the question arises, who matters most? Can a small benefit to the worst-off be more important than a larger benefit to someone else?

    You're right that I have presupposed that we know what well-being is. If you'd rather examine that question, I've a dozen posts about it here, which are summarized in my recent essay on the topic.

    On your last point, I wonder if you might be confusing well-being with money/resources. Someone might have more resources than they need, in which case giving them more would not improve their well-being at all. But I meant to be talking about units of well-being, so that an increase from 150 to 151 is just as good for the person as is an increase from 10 to 11.

    (The prioritist claim is that the increase from 10 to 11 has greater value for the state of affairs, not greater value for the person.)

    ReplyDelete
  3. Assuming we can all get to 200 is a bit of a stretch. While I agree w/Richard that it would be great if everyone were equally well off, that just isn't going to happen.

    ReplyDelete
