Wednesday, October 24, 2012

Parfit on Aggregation and Iteration

People often claim that a large harm to one person is more important to prevent than a very great number of smaller harms to different people. But this "anti-aggregative" view (that we ought to prevent the one great harm rather than the many smaller ones that collectively outweigh it) is indefensible, for the straightforward reason that repeated iterations of such a choice would make everyone worse off. As Parfit explains in 'Justifiability to Each Person' (p. 385):
[W]e might claim that
    (1) we ought to give one person one more year of life rather than lengthening any number of other people’s lives by only one minute.
And we might claim that
    (2) we ought to save one person from a whole year of pain rather than saving any number of others from one minute of the same pain.
These lesser benefits, we might say, fall below the triviality threshold.
These claims, though plausible, are false. A year contains about half a million minutes. Suppose that we are a community of just over a million people, each of whom we could benefit once in the way described by (1). Each of these acts would give one person half a million more minutes of life rather than giving one more minute to each of the million others. Since these effects would be equally distributed, these acts would be worse for everyone. If we always acted in this way, everyone would lose one year of life. Suppose next that we could benefit each person once in the way described by (2). Each of these acts would save one person from half a million minutes of pain rather than saving a million other people from one such minute. As before, these acts would be worse for everyone. If we always acted in this way, everyone would have one more year of pain.
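
For concreteness, here is a back-of-the-envelope check of Parfit's arithmetic. This is just a minimal Python sketch; the round figures (half a million minutes per year, a population of one million and one) are Parfit's approximations, not exact values:

    # Parfit's round numbers: a year is about half a million minutes,
    # and the community has "just over a million" members.
    MINUTES_PER_YEAR = 500_000
    POPULATION = 1_000_001

    # Each person is the single big beneficiary of exactly one act.
    # Anti-aggregative policy: each act gives that one person a year of
    # life (half a million minutes) and gives the million others nothing.
    per_person_anti = MINUTES_PER_YEAR

    # Aggregative policy: each act instead gives one minute to each of
    # the million others, so each person collects a minute from every
    # act except the one in which they would have been the beneficiary.
    per_person_agg = (POPULATION - 1) * 1

    print(per_person_anti)                   # 500,000 minutes gained each
    print(per_person_agg)                    # 1,000,000 minutes gained each
    print(per_person_agg - per_person_anti)  # 500,000 minutes: the lost year

Iterating the anti-aggregative choice across the whole community thus leaves every single person about half a million minutes (one year) worse off than iterating the aggregative choice would.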

8 comments:

  1. Hi Richard,

    Assume, as Parfit does, that we are a community of a million people. And let's call the choice to give any one person in our community one more year of life rather than lengthening everyone’s life by only one minute "the anti-aggregative choice." Now it's clear that one million iterations of the anti-aggregative choice would make everyone worse off and that, therefore, you ought not to perform one million iterations of the anti-aggregative choice. How, though, does this show that it's clearly false that you ought to give one person one more year of life rather than lengthening everyone's life by only one minute?

    Take the self-torturer case and call the choice to move up a notch on the dial "the moving-up choice." Clearly, you ought not to perform a thousand iterations of the moving-up choice given Quinn's stipulations. But I don't see how that shows that you ought not to perform one iteration of the moving-up choice.

    So are you claiming that, even where you have the opportunity to perform one and only one iteration of the anti-aggregative choice, it is clearly false that you ought to make the anti-aggregative choice in your situation? And if you are, could you spell out how you move from the obvious claim that you ought not to perform a million iterations of that choice (in a situation where that's an option) to the less obvious conclusion that you ought not to perform even one iteration of that choice (where the only options are to perform one or zero iterations of that choice)?

    Replies
    1. Hi Doug, thanks, that's an important point. I take it that, in the case with the million iterations, each act has the same moral status -- there is no morally relevant difference between the various instances of the anti-aggregative choice. So they are each wrong. It also seems to me that the first act of the million has the same moral status regardless of what follows it -- the subsequent actions are independent of, and hence irrelevant to, this action. And so the action is wrong even when the choice is offered only once.

      Whether the same reasoning applies to the self-torturer case depends on whether it is set up such that the various iterations have the same moral status. In my linked post, I argue that there is an objective fact of the matter as to what increment it would be best for the agent to stop at. This will be increment zero if each step involves an identical increase in pain and a payoff of constant utility. But if the payoff is in money (or other resources with diminishing marginal utility), or if phenomenal pain actually increases on only some increments but not others, then not all increments are morally alike.
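
      For what it's worth, the point can be put as a toy calculation. The numbers below are invented purely for illustration (in particular, the first case assumes each notch's constant payoff is worth less than its constant pain); nothing in Quinn's setup fixes them:

          # Toy self-torturer: find the notch at which cumulative payoff
          # minus cumulative pain peaks. All figures are made up.
          def best_stopping_notch(pains, payoffs):
              best_net, best_notch, net = 0.0, 0, 0.0
              for notch, (pain, payoff) in enumerate(zip(pains, payoffs), start=1):
                  net += payoff - pain
                  if net > best_net:
                      best_net, best_notch = net, notch
              return best_notch

          # Identical pain per notch, constant payoff that never covers it:
          # the best stopping point is increment zero.
          print(best_stopping_notch([1.0] * 1000, [0.5] * 1000))  # 0

          # Diminishing marginal utility of money (payoff 10/n at notch n):
          # an interior optimum appears, so not all increments are alike.
          print(best_stopping_notch([1.0] * 1000, [10.0 / n for n in range(1, 1001)]))  # 9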

    2. It is clear that the compound act of performing a million iterations of the anti-aggregative choice is an immoral act. But I don't see how that necessarily shows that each individual act (taken individually) of which the compound act is composed has the same moral status as the compound act. It would be immoral to give me 50 doses of Tylenol one after another during the next fifty minutes: t1 - t50. But I don't see how that shows that it is immoral to give me one dose of Tylenol at, say, t3. So I would say that each iteration of the anti-aggregative choice (as well as the Tylenol-dose-giving choice) has the same moral status: permissible (or obligatory?).

      So you seem to be arguing:

      (P1) In the case with the million iterations, each iteration has the same moral status.
      (P2) The compound act of performing the million iterations is wrong.
      (C) So each iteration is wrong -- that is, it would be wrong to perform even one iteration of the anti-aggregative choice.

      (It sounds like you accept (C), although you didn't give me a direct answer when I asked this question in my previous comment).

      But that's like arguing as follows in the case in which you iteratively give me 50 doses of Tylenol:

      (P1) In the case with 50 iterations of Tylenol-dose-givings, each iteration (each dose-giving) has the same moral status.
      (P2) The compound act of performing the 50 iterations is wrong.
      (C) So each iteration is wrong -- that is, it would be wrong to perform even one iteration of the Tylenol-dose-giving choice.

  2. In essence, it seems that you may be relying on the following principle, where 'W(A)' stands for 'It is wrong to perform A':

    W(x & y) --> W(x)

    This principle seems to be obviously false. But if not this principle, then I want to know how you infer from the wrongness of performing a million iterations of the anti-aggregative choice that it must be wrong to perform each iteration of the anti-aggregative choice.

    Replies
    1. Agreed, I don't want to rely on that principle. In your Tylenol case, the various actions are clearly not "independent" in the right way. Rather, the effect of giving a Tylenol at t2 depends on whether a Tylenol was given at t1. But the iterated trade-offs in Parfit's case do seem independent. The value of preventing a year of pain for Bob (at the cost of a minute of pain for a million others) does not depend on whether one has prevented (or will prevent) a year of pain for Jim (at similar cost).

    2. Good. So I think that the key assumption is that they are independent in the relevant sense. Clearly, the consequences of any one iteration of the anti-aggregative choice will be the same regardless of how many other iterations (if any) are performed. But I'm wondering whether the anti-aggregationist could argue that the moral value (or status) of performing any one iteration of the anti-aggregative choice is dependent upon how many other iterations (if any) are performed. I'm pro-aggregation myself, but I do want to understand what all the assumptions are in this type of argument against the anti-aggregationist, and whether they could plausibly resist any of these assumptions. Perhaps the anti-aggregationist can claim that the number of iterations does affect the reasonableness of rejecting a principle that allows that number of iterations. It's not so obvious to me that they couldn't. So it's not clear to me that the anti-aggregationist position is as indefensible as you suggest.

    3. Yeah, it's possible that contractualists could tweak their view to avoid being committed to a self-defeating sequence of actions. I guess I'm not so interested in that. I'm more interested in the axiological question of whether the (dis)value of the greater harm outweighs that of the many smaller harms, and I think Parfit's thought experiment pretty decisively disproves that. (Which is interesting, since to hold the deontic view without the corresponding axiological view strikes me as pretty unappealing.)

  3. Fortunately, in the real world there is always a range of harms that are equally morally pressing, so even deonticists (?) can choose the most efficient intervention from among these with a clear conscience.
