Tuesday, February 02, 2010

Desiring Each Good

Some critics allege that the utilitarian agent has but a single desire: to maximize welfare. This would seem to embody an objectionably instrumental attitude towards individual persons. Rather than caring about each of Tom, Dick, and Harry in their own right, the utilitarian (allegedly) just cares about helping them as a constitutive means to promoting aggregate welfare. Tom serves as a faceless 'receptacle' of utility, rather than mattering for his own sake.

The obvious response for the utilitarian is to deny that their psychology is accurately characterized, at the fundamental level, by such a totalizing desire. They don't just desire a single thing (namely, goodness in general). Rather, they desire each possible good, in proportion to its value. They value Tom's welfare in particular, and also Dick's, Harry's, and everyone else's. And it's just a mistake -- a gross mischaracterization on the part of critics -- to confuse this vast collection of particularized desires with the single, extensionally equivalent, generic desire for aggregate welfare.

I'm sympathetic to this response, and think that something along these lines is necessary to distinguish indifferent from ambivalent options. (In the first case, there's no moral difference between the options: they both serve the exact same values. In the second case, you may feel torn because the two options serve different but equally weighty values, such that the relative losses and gains exactly counterbalance each other.)

But a difficulty arises when we consider goods that the utilitarian agent is unaware of. Consider some particular unknown person, Sally. Our utilitarian cannot have a particularized desire for Sally's welfare, since he cannot even refer to Sally in particular. But his values must extend to Sally and others somehow. (It's not as though he'd accept an offer to improve the welfare of Tom at greater cost to some unknown other.) So it seems like we need something like a generic desire for aggregate welfare to step in and fill the gap. (To avoid double-counting, we'd probably need to exclude Tom -- and any others for whom the agent has a particularized concern -- from the remaining 'aggregate'.)

Is this a problem? I hope not. It doesn't seem so objectionable to treat people you've never even heard of as faceless members of the aggregate. How could they be other than faceless and generic to you? Moreover, the motivation here is not merely instrumental. It's not as though our utilitarian thinks that unknown people fundamentally only matter in respect of their being members of the unknown aggregate. Rather, his concern for unknown people's aggregate welfare is a stop-gap measure that reflects, in the only way possible, his appreciation of the fact that each of those individual unknown persons fundamentally matters in their own right. He knows that, if he knew more, he would form particularized desires for the welfare of each; but in the absence of the requisite identifying information, the best he can do to respect these unknown values is to fall back on the generic desire for aggregate welfare, as a kind of placeholder.

So far, so good. But what about merely possible future persons? Here the placeholder strategy seems dubious. Before, we were holding the place for the particularized desires we would have if fully informed -- and it's easy to see the normative authority of fully-informed desires. But in the case of merely possible persons, the barrier to particularized reference is metaphysical, not merely epistemic: there is no such particular person to refer to. The most we can appeal to is the counterfactual desire that we (ideally) would have had if someone else had existed. We would have formed a particular desire for that someone's welfare. But so what? As things stand, there is no such person, and hence no valuable entity for us to respect as best we can. As I've previously argued, the normative force of merely possible people's welfare cannot be grounded in the merely possible people themselves (for, strictly speaking, there are no such entities). Our concern must instead be 'impersonal', i.e. a concern that the world (or its inhabitants, considered generically) be as good as possible.

Even so, this needn't return us to any single, totalizing desire that the world be thus-and-so. We may instead have distinct desires for each possible generic good. (We still need to distinguish between indifferent and ambivalent pairs of prospects, after all.) For example, I may desire both that Anne have a happy child, and that Beth have a happy child. I can't coherently desire these things for the sakes of the respective (non-existent) children, but I can desire them (for the sake of the world, perhaps). And in this sense I recognize that the prospective persons are not fully 'replaceable': despite being of equal value, there is a morally relevant difference between world A, where only Anne has a child, and world B, where only Beth has a child. The comparison calls for ambivalence, rather than indifference, since they serve distinct (though equally weighty) values / ideal desires.

So I think the objection ultimately fails. Even in the toughest case -- that of merely possible persons, who cannot be the ultimate ground of our concern for their welfare -- consequentialists can still desire each good separately, and hence refrain from treating people as fungible.


  1. I don't quite understand your reply to the 'value receptacle' objection. According to this objection, utilitarianism fails because it attaches value to the experiences* of a person rather than to the person herself; the person is, on this view, only valued as a container of her experiences. In assessing the force of this objection, it seems to me irrelevant whether utilitarian agents have just one single desire to maximize good experiences or a multiplicity of desires corresponding to each of the good experiences to be maximized. In either case, the desire is not ultimately aimed at the person, but only at her experiences, whether individually or as constitutive parts of aggregate good.

    Those who are troubled by this objection should reply instead that what utilitarians value is not experiences, but people's lives. It's just that the value of a person's life is, for a utilitarian, a function of the quality of her experiences. But if each of these experiences occurred in isolation from all others in such a way that none could be properly ascribed to any individual person, the experiences would lack value, since they would have no impact on people's lives.


    * I assume, for simplicity, a hedonistic form of utilitarianism.

  2. Hi Pablo, there are a couple of different objections in this vicinity. Here I'm focusing on the objection that utilitarians treat people as fungible. (I agree with your response to the distinct objection that utilitarians value experiences rather than people. Perhaps I shouldn't have invoked the term 'receptacle', if it's generally interpreted in the way you describe.)

  3. Hi Richard. Thanks for the clarification. To be honest, I'm not so sure there is a standard interpretation of the 'value receptacle' objection, so I may well have interpreted your use of that term uncharitably. (In this comment I draw a distinction between two forms of utilitarianism which seems related to the "couple of different objections" you mention above. These are (I take it) that utilitarianism treats people as mere receptacles, and that it treats people as fungible or replaceable.)
