Wednesday, July 07, 2010

Killing and Average Utility

Towards the end of his (1983) 'Value and Population Size', Thomas Hurka considers the objection that value holism might sometimes mandate killing those of (positive but) below average welfare, so as to raise the average. He responds:
It is foolish to think that the consequentialist principles we use to assess the values of different populations could ever be the only principles in an acceptable moral theory. They have to be accompanied by supplementary principles setting constraints which we must not violate while pursuing our population goals and which we must not violate in particular by taking the lives of existing people. If we are to assess population principles as population principles, then we must assess them in circumstances where these constraints do not apply, that is, in circumstances where only increases and not decreases in the human population are in question.

This looks like the kind of mistake I had in mind when I wrote 'Anti-Consequentialism and Axiological Refinements': what Hurka interprets as a need to go beyond consequentialism, I see as a need to refine our axiology. If Hurka's right, then we should think something like the following: "Though it'd violate the moral rules to help bring about this outcome in any way, I must say it'd be really grand if all those happy folks of below average welfare would just drop dead. Here's hoping for some well-placed lightning strikes!"

But of course that's not what we think at all. It's not that their premature deaths are a good outcome that we simply aren't "allowed" to bring about. Rather, it'd be a bad outcome in its own right -- as we can see when we consider the scenario where the deaths are a result of natural causes. (Hurka's last sentence seems to neglect this possibility.)

Compare two very different forms of 'average utilitarianism':
(1) The value of a world is a function of the average happiness at each moment: e.g., the sum (or perhaps the average), across moments, of the momentary net happiness divided by the momentary population.
(2) The value of a world is a function (namely, the average) of the welfare values of each individual's whole life. (Welfare need not be temporally located.)
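To make the contrast vivid, here is a rough formalization (my own notation, not Hurka's): write $h_t$ for the net happiness and $n_t$ for the population at moment $t$, and $w_i$ for the whole-life welfare of each of the $N$ individuals who ever exist.

```latex
% Type-1 (momentary) view: average, across the T moments of history,
% of per-capita momentary happiness
V_1 = \frac{1}{T}\sum_{t=1}^{T}\frac{h_t}{n_t}

% Type-2 (whole-life) view: average whole-life welfare
% across the timeless population
V_2 = \frac{1}{N}\sum_{i=1}^{N} w_i
```

Killing a below-average person can raise later values of $h_t/n_t$, and so raise $V_1$; but it cannot shrink the timeless population $N$, so insofar as it lowers the victim's $w_i$ it thereby lowers $V_2$.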

The "killing to promote average utility" objection only makes sense against the type-1, momentary view. On the second view, where we take a timeless perspective, killing someone does not reduce the (eternal) population. It merely makes one of the lives shorter than it otherwise would be. But that life still counts as one life in the history of the world, the same as it ever did. So, if death was bad for the person -- if it made their life worse than it otherwise would have been -- then, all else equal, it thereby reduces the average welfare of the world. It thus counts as a bad outcome. And doesn't this seem by far the more plausible view?

More generally: It makes a big difference if we understand individual welfare as a value that inheres in whole lives, rather than mere momentary timeslices. We've seen that it allows us to avoid the absurd result that harming some (by killing them), while helping no-one, could improve the world according to the average principle. It also helps against a related objection that applies even to the total principle: that killing someone and replacing them by someone slightly happier would increase utility. This may be true of chickens and other beings that lack a persisting identity, but you can't replace a person without cutting short a temporally-extended life, the disvalue of which might easily outweigh the increase in mere momentary happiness.


  1. Good replies.

    Do you find it odd that, on this view, we might have made things worse (and acted wrongly) even if (i) the created person is very glad he got a chance to live, and (ii) his existence benefits everyone else (perhaps in a very small way)?

    Another issue: do you think that the boundary between persons and non-persons can bear the weight you're putting on it? Let R be the relevant relation you have to bear to your future stages to count as a person. R is probably going to be the kind of thing that comes in degrees. Would the idea be that there is a certain degree to which some stages are R-related, above which avg. considerations apply, below which they do not? Or would avg. considerations start to apply more and more as the degree to which stages are R-related increases?

  2. Hi Nick,

    I agree that Mere Addition creates a bit of a bullet to bite, but it strikes me as less bad than the alternatives. (Hurka does a nice job in his paper of suggesting intuitive violations of mere addition in other spheres, e.g. aesthetic value, which may also help take some of the sting out of the objection.)

    I haven't thought enough about "degrees of personhood", though you raise an important question. My gut reaction is to go for the second option -- a smooth, gradual increase in temporally-extended values. (That seems more principled.) It definitely complicates matters though. What would you recommend?

  3. Adding folks: It's not just a mere-addition thing. It's also a sort of pareto superiority. I find it hard to swallow that an action that is prudentially rationally approved by all could be wrong (esp. provided there are no desert issues).

    Example: in the future, we discover that we can significantly enhance our lives by creating sims. These sims wouldn't be able to last for long though (only 20 years or so, starting at a subjective age of 20), due to constraints on technology. Humanity collectively decides that the sims would have lives worth living, and some sims initially created in a test run are very happy to get the chance to exist. When we're about to create the sims, an average utilitarian bursts through the doors and yells, "STOP! You're making the world worse!"

    I find it deeply implausible that we are acting wrongly. What do you take to motivate the average view? Are there ways of satisfying that motivation that avoid these implausible consequences? (Perhaps not. Population ethics is filled with paradox.)


  4. Degrees of personhood: Definitely the gradual thing is the way to go. (The consequence of going the other way is to allow that arbitrarily small changes in how some person-stages are related to other person stages could ground a large difference in goodness of outcome, which is implausible. Some people might think this is less implausible if it is indeterminate where the discontinuous jump is, but I find that cold comfort.)

    Another issue: You dodge the killing bullet by aggregating within lives and then across lives. But how do we think about how good a world is at a time, on this view? Is it going to be a consequence of this average view that there is no such thing as goodness for a world at a time? What if you want to care about the shape of the world's history (whether things are getting better, for example)?

  5. One major motivation is avoiding the repugnant conclusion. A second is theoretical parity: I think holism is clearly true of value within a life (across times), so it's neat to be able to also extend this to value across lives.

    For much more detail, including my response to Huemer's modal pareto principle, see my 'Value holism' paper. [For the record, I don't think the straight 'average' view is quite right; but the differences don't matter too much for the objections we've been discussing.]

    As for the idea that value can't be assigned to particular moments of time, that's more an implication of my holism about welfare than about population ethics. (Discussed more here.) Still, you're right that we want to be able to make at least rough value comparisons across times (and this is true within individual lives as well). I feel like this should be compatible with my view, but I'll have to think more about the details.

  6. You don't find other implications of avg. util. worse than RC? (E.g., small number in horrible hell > extremely huge number in ever so slightly less bad hell)

    Glanced at pp.23-27 of the paper. I think I gave you comments on an early version of this. The argument is that the pareto principle is only good because it is motivated by personalism (all reasons are grounded in facts about individuals), and personalism is not true. Its falsity is a lesson of the non-identity problem.

    Here's a fairly natural motivation for mere addition not being bad that doesn't presuppose personalism. You look at all the alternatives before you. You look at all the individuals that would exist in each alternative. For each individual, you look at how good or bad his life would be. For each such individual, you get a reason to choose that alternative (or not to choose it) whose strength is an increasing function of the goodness of that individual's life. Then, you somehow weigh the reasons. (Maybe additively, maybe not.) Here's a plausible (not obligatory, but plausible) principle for weighing your reasons:
    (More Reasons Principle) If the set of all reasons for favoring alternatives A and B are the same, except for an additional consideration in favor of A, then it is false that we have less reason to choose A than B.
    To deny this is to hold that adding a reason in favor of an alternative (and leaving all other relevant considerations the same) could give us less reason to choose it. You put all this together, and you can conclude that mere addition doesn't make things worse.

    A mild strengthening of the More Reasons Principle even gives you the Modal Pareto principle (assuming that it is rational to prefer life worth living to non-existence (from the standpoint of self-interest)).

  7. Just to clarify the dialectic: I explore personalism as a way of independently motivating the pareto principle. I agree that there are more straightforward motivations you can offer. The problem is that they transparently presuppose atomism. For example, you just assumed that all the reasons we get are obtained by considering individual lives from the perspective of self-interest. But a holist will think that there are also important reasons that arise from how a life impacts upon the 'shape' of the world as a whole. By simply ruling this out from the start, you haven't offered an argument that will have any rational traction on the holist. (Of course, if you're merely offering a defensive argument explaining why you accept atomism, that's fine. I'm not claiming that atomism is indefensible or anything.)

    I agree that avg. util. has other, more absurd, implications. That's why (as noted in my previous comment) I'm not an average utilitarian! Here I'm merely defending avg. util. against those objections that would also apply to my preferred form of value holism (which gives some, but not exhaustive, weight to considerations of average welfare).

  8. OK I'm clear enough on what you think of avg. util.

    One confusion, one objection.

    1) Why do you regard personalism, but not the view I just described, as independently motivating the pareto principle under discussion? Your stated reason was that my principle "just assumed that all the reasons we get are obtained by considering individual lives from the perspective of self-interest. But a holist will think that there are also important reasons that arise from how a life impacts upon the 'shape' of the world as a whole. By simply ruling this out from the start, you haven't offered an argument that will have any rational traction on the holist." Couldn't you say the same to the personalist?

    2) While I see that some holists will believe that there are reasons that arise from how a life impacts upon the shape of a world as a whole, I don't see that this is a necessary feature of holism. There could be considerable holism in how the reasons are aggregated. On your def, "We may thus take value holism to be the claim that the contributive value of a part cannot be assessed in isolation; it depends on how the part stands in relation to the whole, and in particular on what other parts there are." (p.1). Accepting the view I described leaves it open that the contributory value of the fact that a certain individual would have a life of a certain quality depends on the quality of the lives of others. (Not terribly clear what "contributory value" means when you don't treat reasons additively, but so it goes with holism.) Thus, I don't think the view I described presupposes that holism is false, though it rules out one kind of holistic consideration.

  9. Yeah, I need to tidy up that section. I guess I was thinking something like the following: the most distinctive and appealing version of value holism is one that invokes the broader 'shape' considerations. Even if the pareto principle is compatible with the letter of value holism, it clearly rules out this appealing form of the view right from the start, which is the more dialectically relevant point.

    Now, while the pareto principle is just another rival first-order view, personalism seems like a 'higher-level' view. So while it does imply the falsity of value holism, it at least may have some independent appeal, i.e. reasons for accepting it which are not just first-order reasons for rejecting value holism. We obtain these reasons by reflecting on the nature of morality, rather than just on the first-order question of which things have what moral properties. That was the rough idea, at least.

    P.S. I think I do "treat [practical] reasons additively." The difference is instead that I don't think we can determine what reasons we have to bring something into existence just by looking at its intrinsic features. (For another example, it's not as though pleasure always gives rise to the same reason, which is then counted differently depending on the external features. Rather, there's a reason created by 'deserved pleasure [in such-and-such circumstances]', and a very different kind of reason created by 'sadistic pleasure [...]', etc.)

  10. I'm skeptical of using the first-order/higher-order distinction here. Are you thinking personalism is somehow a meta-ethical view, one which has importantly different standards of justification?

    And even if this first-order/higher-order thing is relevant here, I don't see why you wouldn't count the view I sketched as "higher-order". It seems like a more careful way of saying that the only things that count for or against alternatives are facts about how good or bad the alternatives are for individual people. One might say it is part of the "nature of morality" that all relevant considerations have to do with the welfare of particular individuals. Indeed, I wonder whether it isn't a better way of spelling out personalism, especially since I don't see that this view requires any sort of appeal to how good the world would be if people did one thing rather than another, and it avoids the non-identity problem.

