Saturday, January 07, 2006

Against the Priority View

This post assumes familiarity with my old post on utility, equality and priority. I've previously argued against egalitarianism; I now want to do the same for prioritarianism, i.e. the view that "benefiting people matters more the worse off these people are."

First we need a standard measure for quantifying benefits. I will employ the notion of a 'util' or unit of utility (i.e. individual welfare) to serve this end. I stipulate that all utils are of equal worth to the person who receives them. If the welfare value of my life increases from 2 to 3, this is exactly as good for me as an increase from 99 to 100 would be. Utils are also standardized between individuals, so a one-util benefit is just as good for you as a one-util benefit is for me.

This is clearly wildly different from how material goods behave. $100 is worth a lot more to a starving man than to a millionaire. More generally, material goods are less valuable to you the more of them you have. We call this "diminishing marginal utility" (DMU). For example, the $100 might be worth 20 utils to the starving man, but only 1 util to the millionaire. Given a choice between giving the money to one or the other, utilitarianism recommends we favour the starving man with material goods; that is what will maximize total welfare in this imagined case.
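The utilitarian comparison here is simple arithmetic; a minimal sketch in Python, using the post's hypothetical figures (the dictionary and its numbers are illustrative assumptions, not empirical estimates):

```python
# Diminishing marginal utility: the same $100 buys very different
# welfare gains depending on the recipient. Figures are the post's
# hypothetical ones.

UTIL_GAIN_FROM_100_DOLLARS = {
    "starving man": 20,  # desperate need -> large welfare gain
    "millionaire": 1,    # already has plenty -> tiny welfare gain
}

# Utilitarianism picks whichever gift maximizes total welfare:
best_recipient = max(UTIL_GAIN_FROM_100_DOLLARS,
                     key=UTIL_GAIN_FROM_100_DOLLARS.get)
print(best_recipient)  # -> starving man
```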

Now, suppose we can either give a large benefit (in utils, not merely dollars!) to someone who is already well off, or else a smaller benefit to someone less fortunate. Which should we do? For utilitarians, the answer is simple: give the greatest benefit, without regard for who receives it. On the priority view, however, we might instead opt for the more egalitarian option.

I think that would be a mistake. It's a tempting mistake, insofar as our intuitions are more familiar with material goods and so find it difficult to ignore DMU. But ignore it we must, for recall that utility benefits - by definition - do not suffer from diminishing returns. We tend to assume that helping the worse-off will "make a bigger difference", i.e. benefit them more than offering similar help to someone more fortunate. It is important to be clear that this is not the case in the scenario I have described. The well-off person really would gain the greater benefit, in real (and not merely material) terms.

Here is why it's a mistake: consider a similar option but all within the life of just one person. He can receive a mild benefit when he is badly off, or else a larger benefit at a different stage of his life when things are going better for him. Which option is better for him? Well, by definition, the greater benefit is better for him. So if offered the choice, he should - if rational - prefer that you benefit his well-off self, rather than prioritizing his worse-off self.

But recall the definition of prioritarianism: "benefiting people matters more the worse off these people are." This suggests that giving the lesser benefit might matter more (be "better") than giving the greater benefit, in the case just discussed. Considering only this person's welfare, it might be better to do what is worse for him. This is an absurd and contradictory result.

So the priority view, as stated, is not universally true. In particular, it is not true within an individual's life. Defenders might hope to modify it into a purely inter-personal form, e.g. "benefiting distinct people matters more the worse off each person is." This restriction seems ad hoc, but never mind that for now. The problem is that it seems open to an analogous objection to the above.

Recall that benefits have been defined in terms of 'utils', which are an inter-personal standard measure of welfare. Each util I gain is just as good for me as each util you gain is for you. This much is stipulated. Also, let us define the welfare value of a life relative to the zero baseline of a life that is barely worth living (for the person living it).

Now, let's say Ana currently has a welfare value of 100, and Bob's is 10. Suppose you have a choice between giving a benefit of +10 to Bob, or else +11 to Ana. Which is best, from a neutral point of view? Simply enough, +11 is better than +10, which is all it comes down to when giving equal weight to the interests of both involved. If you put the agents behind a "veil of ignorance", so they didn't know which person they were, they would (if rational) prefer you to choose the +11 benefit to Ana rather than the +10 to Bob. The fact that Bob is worse-off to begin with is irrelevant. What matters is how much better off each of them could be.
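Behind the veil, the rational choice reduces to an expected-value calculation; a minimal sketch, assuming an equal chance of occupying either life (the welfare levels and benefits are those of the Ana/Bob example above):

```python
# Expected welfare from behind the veil of ignorance: you are equally
# likely to be Ana (currently 100 utils) or Bob (currently 10 utils).

ANA, BOB = 100, 10  # current welfare levels, in utils

def expected_welfare(benefit_to_ana, benefit_to_bob):
    """Average outcome given a 50/50 chance of being either person."""
    return 0.5 * (ANA + benefit_to_ana) + 0.5 * (BOB + benefit_to_bob)

give_ana = expected_welfare(11, 0)  # (111 + 10) / 2 = 60.5
give_bob = expected_welfare(0, 10)  # (100 + 20) / 2 = 60.0
assert give_ana > give_bob          # the +11 option is the rational pick
```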

But prioritists would have us believe that the benefit to Bob matters more, because he is worse off to begin with. Although our only consideration is the welfare of these two individuals, we're supposed to believe that it might be better to do what is worse from their combined point of view. Again, this is an absurd and borderline contradictory result.

So we should reject the Priority View. Benefiting people matters more the greater the benefit is to the beneficiary. Their prior welfare level has no intrinsic relevance here. It is only relevant insofar as, say, it might be easier to benefit worse-off people, e.g. if the same material investments would yield greater benefits for them. But of course such factors are already taken into account by utilitarian principles. A preference for egalitarian or prioritarian principles may rest on a failure to understand this point.


  1. "If you put the agents behind a "veil of ignorance", so they didn't know which person they were, they would (if rational) prefer you to choose the +11 benefit to Ana rather than the +10 to Bob."

    Isn't that question-begging? That is, it's only rational if you assume that something like utilitarianism is true.

    Indeed, those who don't like to gamble may much prefer to give +10 to Bob.


  2. Sorry, but I see this as just another example in which desire utilitarianism works better than act utilitarianism. You are asking people to do the act-utilitarian best act, which they can only do if they have only one desire, this being the desire to maximize utility.

    Your example identifies some of the intuitions surrounding the fact that people are not built that way.

    As a general rule, we can reasonably infer that an act of charity to the poor will produce more benefit than an act of charity to the rich. We then have reason to promote a preference to give to the poor. This preference for giving to the poor will generate a disposition to give to the poor, even when it is not the act-utilitarian best act.

    Individuals act so as to fulfill their own desires given their beliefs. The act of charity to the poor is that act that a person will do only if it "matters more" to him to give to the poor than to give to the rich. We have good reason to cause people to desire to give to the poor.

    There might be individual act-utilitarian instances in which giving to the rich is better. However, to be the type of person who will actually give to the rich under these circumstances, one has to be a creature with only one desire (a desire to maximize utility), and with that we have stepped out of anything resembling the real world.

    Alonzo Fyfe
    The Atheist Ethicist

  3. I suppose it would be churlish of me to point out that there are no utils. But even disregarding that, I think something else that needs to be taken into account is that happiness is not like a bank account where your balance can theoretically go up and up indefinitely. There's obviously going to be some happiness ceiling above which nobody's going to be able to go, though this ceiling might differ from person to person. Also, all evidence seems to suggest that happiness is homeostatic -- people tend to get used to their place in the world after they've been there for a while and reach a sort of happiness equilibrium, but if their situation worsens they'll get less happy and if it gets better they'll get more happy. (Which incidentally carries with it the implication that util increases from wealth redistribution are likely to be ephemeral, but that's getting into a slightly tangential topic...)

  4. And whoops, there should have been a link to the inimitable Will Wilkinson in that comment.

  5. Hmmm. Utils indeed seem very academic, if only because generating 1 util for different people, or generating subsequent utils for the same person, or 1 util for 1 person at different times in their life, would all require very different amounts of resources. Utils do not seem like a very useful measure to me.

    Another effect: if you would give 11 utils to someone who has 100, rather than 10 to someone who has 10, then afterwards the 10 person might get angry and nasty with the 111 person. People are not in isolated containers, but in the same physical world.

    And what exactly is the act of "giving" someone a util? Who would hand them out in the real world?

  6. I think it's hard to distinguish utilitarianism from prioritism without saying more about what a util is. Let's say that the utilitarian has a function, u(x), that takes any state of the world x and tells you the number of utils that each person has in that world. The prioritist (or at least one species of prioritist) thinks that what matters in our moral decisions is not the util but some other unit, the prio, that weights changes in utils as more important when the person has a low absolute level of utility. The prioritist's function p takes u(x) as the input and outputs the number of prios that each person has in the world. For nonnegative utils, p(u(x)) would have the general shape of the square root of u(x) for each individual, indicative of "diminishing marginal priority" or somesuch.

    Formally, utilitarianism and prioritism would be identical - each says that the moral thing to do is to maximize some quantity, which is given by some function that depends only on the state of the world x. The only disagreement is what function gives you the quantity that matters. For utilitarians it's u, which turns states of the world into utils, and for prioritists it's the compound function (p o u), which turns states of the world into prios via utils.

    In order to distinguish the two views you have to say something about how the function that matters is defined. The prioritist could say that you've begged the question by calling the quantity that matters the "util". What you've actually described is the prio, and if you tried to create this "standard measure for quantifying benefits" by only taking into account desire fulfillment, or pleasure and pain, or some other factor that does not appropriately increase in importance when the person's total utility is low, then you wouldn't be able to produce a measure that makes an increase from 2 to 3 exactly as good for you as an increase from 99 to 100. You would have to transform that function in order to make it linear in that way, and the transformation that you'd have to use would be p.

    I think that the prioritist would be wrong to say that, but I think that the argument needs to be taking place in the area that you stipulate over, when deciding what sorts of things matter.

    Then there's also the prioritism-as-indirect-utilitarianism possibility that Alonzo brings up, where someone could espouse prioritism simply because they think that it helps counter our tendency to neglect the well-being of the least well off. As Aristotle said, we're better at hitting the target if we aim a little too far away from the direction in which we are most likely to miss. Prioritism as a fundamental theory may often just be an error, as you suggest, where people overgeneralize the lesson of diminishing marginal utility by treating utility itself as if its "mattering" diminished marginally. They are essentially correcting the numbers to keep the less well off from getting screwed when the numbers had already been corrected in the previous calculation.
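Blar's formal framing can be made concrete with a toy calculation; a minimal sketch, assuming the square-root weighting he suggests and the welfare levels from the main post's Ana/Bob example (all figures illustrative):

```python
# Blar's framing: utilitarians maximize total utils u; prioritists
# maximize total "prios" p(u), with p concave (here, sqrt). The two
# objectives can rank the same pair of outcomes differently.
import math

def total_utils(world):
    """Utilitarian objective: sum of utils across persons."""
    return sum(world)

def total_prios(world):
    """Prioritist objective: sum of sqrt-weighted utils (p o u)."""
    return sum(math.sqrt(u) for u in world)

give_ana = [111, 10]  # +11 to Ana (levels from the main post)
give_bob = [100, 20]  # +10 to Bob

assert total_utils(give_ana) > total_utils(give_bob)  # 121 > 120
assert total_prios(give_ana) < total_prios(give_bob)  # ~13.70 < ~14.47
```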

  7. Utilitarianism has the problem of "what is a util?" - or, more importantly, how can anyone making a decision really know how many utils will result from an action? It thus seems odd to some to talk as if one is making accurate decisions based on it, or even inventing accurate rules to maximize it.

    One could argue that using the monetary value plus some rules of thumb like diminishing returns is the method by which utilitarianism would be worked out. In a sense then there is really no argument it just comes down to a matter of how one measures/scales utility.

    Presumably this would only be a problem for utilitarianism if one were to say that benefit for the rich has no value (i.e. infinite diminishing utility), or if it is a system that is inconsistent when repeated (for example, if you favored slightly poorer people over slightly richer people in billions of individual cases until everyone was poorer than they started out, or approximately as poor as the previous poorest person).

    If one used a veil of ignorance regarding utils AND distribution, the debate could center around me wanting to give 11 kg of food or 10 kg to some starving people, and choosing the first because at least then there are 11 kg in the system with potential to do good (i.e. prevent starvation).

  8. Blar,
    Maybe Prioritism stems from a social/political debate between poor and rich (or whoever). The Prioritists always ask for more benefit for the group that they represent, no matter what the starting point is. If someone created a priority system like "prio", then someone else would claim that it doesn't favour their group enough; it would be redefined as a util, and you would be back where you started. In the end they push towards infinite benefit for the group they represent (or, in this case, perfect egalitarianism).

  9. Regarding practicalities and indirect utilitarianism, I should clarify that I'm really just arguing about the abstract principle here. It's far from clear what practical consequences, if any, can be drawn from it. As many of you know, I certainly don't want people to go around making utility calculations in their everyday lives! It's possible that direct utilitarian considerations might be more appropriate when assessing societal institutions, however. On the two-level view, this would seem to fit on the "critical level" (appropriate for direct utilitarianism), analogously to reflecting on our everyday ethical principles/virtues, whereas the running of those institutions, like the running of our everyday lives, should be conducted in the "indirect utilitarian" fashion recommended by our critical-level conclusions.

    Matt: I wasn't identifying utils with happiness, but simply whatever it is that makes one's life go well. If there is an upper limit to that, then I guess that simply means we could get to a stage where certain lucky people can't be benefited any further. That's interesting (as are Will's "happiness" posts, which I always enjoy), but doesn't seem to speak directly to the question of whether, given a set level of benefits that either of two people could receive, it matters more to benefit the worse-off person.

    Derek: I've never really been impressed by the importance of 'the separateness of persons'. But I grant that many people are, and they wouldn't have any problem with considering such restrictions to be principled, as you say.

    Blar: that's a really interesting point about the formal equivalence of utilitarianism and prioritarianism. But I think you go down the wrong track in suggesting that "The prioritist could say that you've begged the question by calling the quantity that matters the 'util'."

    Let me clarify my definitions. The 'util' is defined to be a standard measure of welfare value for the person living the life. To avoid begging questions, let's call the standard measure of objective value ("what matters", or the all-things-considered value of a state of affairs) "ovals". I would suggest, but not stipulate, that utils and ovals are linearly related (and perhaps identical, if lives are the only bearers of value and if welfare value is the only sort of value that they bear). The Priority View holds that utils have "diminishing marginal objective-value", so that extra utils yield proportionally fewer ovals, etc.

    Now, it is important to note that the prioritist does not dispute my definition of the util (they can't, it's a stipulation!). They must grant that an increase of my life's welfare value from 90 utils to 91 is just as good for me as an increase from 3 to 4 would be. (It remains an open question how we should translate from natural properties, like desire satisfaction, into quantities of utils. However it's done, it must respect the above stipulations.)

    What they dispute is that these equal internal benefits matter equally from an external point of view. Although the benefits for me are the same, the equal benefit to my worse-off self is "more important" in some objective sense, even though they're (by definition) equivalent from my point of view, and so I would have no self-interested reason to prefer one over the other. I argue that this sort of judgment is incoherent. Because we are here considering nothing but my welfare, there is no basis for a divergence between the internal and objective value judgments. That's the core of my argument against prioritism.

    Alex: you say, "those who don't like to gamble may much prefer to give +10 to Bob."

    I see your idea, but given my definition of utils, that would be a very odd preference. Recall that each util is of equal value to the person who receives it. Now, drawing on your intuitions, presumably we're imagining that Bob's life sounds pretty bad, and a +10 bonus would make it much more bearable for us. But by the definition of utils, a +11 bonus to Ana would be an even greater real improvement in the quality of her life. And we're equally likely to end up with either life. So why would we choose the smaller real improvement? (It makes sense to avoid such "gambles" in practical life, since the harm to us of losing tends to be greater than the benefit of winning. But such concerns no longer apply given the above definition of 'utils'.)

    I do feel the pull of your concerns, but I think that might just be because I have trouble properly imagining 'utils' myself. Unless anyone can explain why risk-aversion could still make rational sense in light of this standard measure?

  10. Richard,

    "If the welfare value of my life increases from 2 to 3, this is exactly as good for me as an increase from 99 to 100 would be."

    To me those are not equal. They may increase by the same amount of utils, but as a ratio the increase differs.

    i.e. 2 to 3 is 50% increase
    99 to 100 is ~1% increase

    Where is the greater benefit? Some could argue that the percentage of the increase is more important than the actual number of utils.

  11. Steven, that isn't really plausible, because utils extend into the negatives and the positioning of the 'zero' baseline is arbitrary. For intuitive simplicity, I've simply chosen the 'zero' mark to be the borderline of when a life is worth living. But the "percentage increases" you speak of are merely notational. We could imagine shifting the whole number line ten points to the right, so that what is currently a shift from -9 to -6 would instead be described as changing from +1 to +4; this mere redescription apparently turns a 33% benefit into a whopping 300% one, which just goes to show the arbitrariness and irrelevance of such ratios.
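The arbitrariness of these ratios is easy to verify numerically; a minimal sketch, computing the percent increases exactly for the same +3-util benefit under the two labellings:

```python
# The same +3-util benefit, described against two zero baselines that
# differ only by a constant shift of +10 (a mere relabelling).

def percent_increase(before, after):
    return 100 * (after - before) / abs(before)

original = percent_increase(-9, -6)  # benefit described as -9 -> -6
shifted = percent_increase(1, 4)     # same benefit, relabelled +1 -> +4

assert round(original, 1) == 33.3
assert shifted == 300.0  # an identical real benefit, wildly different ratio
```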

  12. Derek, thanks for keeping the separate issues tidily so -- I've responded to your other comment over there also. As for this one, I would bring up the two-level issue again: utilitarianism only deals in generalities, not particular cases. But perhaps we ought to set up a system (if these are the options open to us) where more people can become filthy rich at the cost of some others getting only cold gruel rather than hot. I don't think that's so implausible.

    Of course, helping the worse off person is not absurd. What's absurd is thinking that this lesser factual benefit morally outweighs the greater benefit to another. (Again, at least at the general or 'critical' level. We might want our everyday practical moralities to treat them differently, for reasons others have pointed out.)

  13. We could argue that it is better to take the guy from 99 to 100 than from 2 to 3 because, let's say, 5 utils are required to make life worthwhile for these two people. Person 1 is safe either way; person 2 falls short either way, so he commits suicide if he is rational.

    And as soon as he does that, the average util welfare of our set moves to 100. A bit heartless, I guess. But in a sense suicide is always an option, and it drops any util level to an arbitrary "0".

    (I always like to look for the worst implications of my own arguments because they should be able to stand up to them and I am uninterested in winning debates by deception. Making me a bad ally I guess!)

    > Is it reasonable to expect Bob and those like him to accept and endorse this reasoning?

    And yet a world where they have equal welfare, despite the fact that person A "earns" most of it in some sense and person B does not, is hardly going to be endorsed by person A either.
    Furthermore, his selfishness will in a sense be a part of the moral calculation anyway.

  14. "Is it reasonable to expect Bob and those like him to accept and endorse this reasoning?"


    (After all, as noted above, it is what the rational person would choose from behind the veil of ignorance.)

  15. ok yes is the other answer ;)

  16. My point is not that this has to be a meritocracy - I am actually arguing in favour of Richard's approach.
    It is that while Bob may be unhappy with Richard's solution, Ana might be unhappy with yours. It's hard to determine who will be unhappy, and why we should care more about one person than the other.

    Anyway Utilitarianism would do its best to minimize both of these.

  17. It's worth noting an ambiguity in the Rawlsian difference principle. It does not, in fact, guarantee that the particular person who is the actual worst-off individual is the best off that he can be. (Obviously we could make him better off by giving him all of society's resources.) The point is that any alternative arrangement would have someone (else) worse off than him.

    So it simply isn't true that the worst off person in the Rawlsian society has no basis for complaint. He has exactly the same reasons for complaint as the worst-off in a utilitarian society: you could have made him better off (albeit at the cost of what proponents consider "more important" benefits to someone else), and chose not to.

  18. This comment has been removed by the author.

    1. If you reject the stipulation that each util increment is equally beneficial to the subject then we may have a merely terminological disagreement. (I no longer know what you mean by "utility", but it seems not to be what I mean. You seem to be thinking of utility as a kind of resource, rather than as a measure of benefit.)

    2. This comment has been removed by the author.

    3. Ok, it looks like we don't actually disagree, then. As a non-hedonistic utilitarian, I think we should maximize welfare. You claim that pleasure has diminishing marginal benefit to one's welfare. These claims are compatible.

    4. This comment has been removed by the author.

    5. I would define it as a matter of what's desirable for an individual's sake. But if you want a theory of what things have this normative property, then I'm sympathetic to some kind of hybrid "achieving objectively worthy, subjectively-endorsed goals" kind of view.

