Tuesday, January 05, 2010

Helping Wrongdoers

Garrett Cullity, in The Moral Demands of Affluence, argues against extremely demanding conceptions of beneficence by appeal to the following principle: "Someone else's interests in getting what it is wrong for her to have cannot be a good reason for requiring me to help her." (For example, we obviously shouldn't help a gangster unjam his gun and shoot his victims, in the absence of any other reason to do so. His interests in wrongdoing don't suffice.) Let's call this the 'No Helping Wrongdoers' principle, or NHW for short. In this post, I'll briefly explain how Cullity uses NHW to argue against extreme demands of beneficence, and then I'll show that NHW is actually false. The intuitive cases appealed to in its support really only support a weaker principle that is insufficient for Cullity's purposes.

Cullity's argument: We can describe a scenario in which you would be required by beneficence to advance Bob's (non-altruistic) personal interests, say in developing his musical talents. By NHW, we only have such reasons if Bob's interests here are morally permissible. So it must be permissible for Bob to develop his musical talents (even though he could be saving lives instead). To generalize: morality isn't generally extremely demanding, or it would - by NHW - absurdly imply that many ordinary personal interests fail to provide us with reasons of beneficence. (It would, in a sense, be insufficiently demanding in ordinary contexts.)

Why NHW is false: Gangster examples support the narrow claim that we shouldn't help people in ways that make things worse overall. But NHW goes further than that, as cases like Bob's show. The problem is that some intrinsic good X might be "wrong" to pursue for the purely comparative reason of opportunity costs: though good in itself, there's something else you could be doing that would be even better. But suppose that you just can't bring yourself to make the required sacrifice to achieve this impartially best outcome Y. Further suppose that you might even fail to achieve X, in which case the result would be the dreary outcome Z, containing neither personal nor impartial goods. Now let's bring a third party into the picture. Suppose I'm in a position to easily help you achieve X, rather than Z. Should I do so? Of course! Better X than nothing. But according to NHW, I should not help you here, since (we've stipulated) your interest in X is one that it's wrong for you to achieve, given that you ought instead to have pursued Y. We thus see why NHW is false. It fails to recognize that we sometimes ought to help someone to achieve a lesser good, if the realistic alternative to our aid is that there will be no good done at all.


  1. So to relate your argument back to the example...

    If the gangster doesn't unjam his gun, he'll detonate a nuclear device, killing many more people - so you should help him unjam his gun. Ne?

  2. I'd say so. Mind you, Cullity might well agree, since he's merely committed to saying that the gangster's interests here don't provide us with good reason to help him (which seems right). There might still be other reasons, e.g. provided by others' interests.

    The real disagreement emerges in cases like Bob's, where the relevant interest concerns an intrinsic good (e.g. developing his musical talent) that [we suppose] it's merely comparatively wrong to pursue. (The gangster's interest in killing people is, by contrast, an 'absolute' or positive bad, which is why it provides no reasons in itself.)

  3. [Garrett Cullity responds:]

    Thanks very much for passing on this comment. You raise an excellent issue, and put the point in a nice sharp form. You’re right that the issue of comparative benefits and costs associated with different pairs of options is important for the use I make of the principle, and I wish I’d discussed it more in my book. (There is some discussion of this at pp.152-5, but not enough I fear.)

    The issue seems to me quite subtle: here’s a two-stage reply.

    Stage I.

    Suppose (scenario A) I face the choice between two outcomes: X, in which you are benefited, and Z, in which no one is benefited. I can push a button, let’s say: if I push it, X happens, and if I don’t, Z does. I agree that the benefit you will receive from X could mean that I am morally required to push the button. According to the principle I accept, that implies that it wouldn’t be wrong for you to push the button either – but that’s true.

    (I wouldn’t call the principle NHW, because that makes it sound too much like the principles I reject on p.144, but let that pass.)

    Now take scenario B, in which the choice is restricted to X and Y – in Y, resources are diverted from benefiting you to avert great harm to others. If you push a button, X happens, and if you don’t, Y does. If the disparity between the harm to the others and the benefit to you is great enough, then it would be obviously wrong for you to push the button. So the principle implies that the benefit you will receive from X could not be a good reason for requiring me to push the button – but that’s true too.

    The scenario in your comment (scenario C) is more complicated than either of these. In this scenario, you are choosing between all three outcomes X, Y and Z, but I know that you won’t choose Y. In this case, *I* am effectively in scenario A, but you are not. I can be morally required to get you X-rather-than-Z, since those are my only two options. This implies that it would be OK for you to make the same choice, *if these were your only two options*. But they’re not.

    Stage II.

    I think those remarks show that scenario C does not provide grounds for thinking that the principle is false. However, there might seem to be a different kind of problem – a problem not for the truth of the principle but for the use I put it to. I want to cite the existence of everyday requirements of beneficence, invoke the principle, and infer that various kinds of everyday behaviour are not wrong. But the treatment of scenario C might seem to spoil this strategy of argument. According to that treatment, from my being required to get you X we cannot infer that your own pursuit of X is morally OK.

    But here’s where the choices I’m actually facing are relevant, in the way I (too briefly) discuss at pp.152-5. If my choice is whether to stop your expensive possession from getting damaged, then that is an X-vs-Z choice. In many such cases, I can be morally required to prevent the damage, which implies (correctly) that it wouldn’t be wrong for you to prevent it yourself. By contrast, if my choice is whether to help you live a materially luxurious life rather than an equally fulfilling, less luxurious one, then that is an X-vs-Y choice, and here there is nothing absurd about denying that I am required to help you get X rather than Y. But there are other X-vs-Y choices – e.g. where you have decided to become a musician rather than an aid worker – in which I *am* required to help you, and hence which it is not wrong for you to pursue for yourself. These are the ones for which I claim, as you nicely put it, that a morality that lacks these requirements is insufficiently demanding.

  4. This is helpful -- it turns out that the argument is much less ambitious than I thought. On this narrower interpretation of NHW, Extreme Demands turn out to be compatible with requiring us to help non-altruists in the vast majority of cases (where even without our help they would not act any more altruistically). So the argument is not so threatening. Extreme Demands merely implies that we shouldn't help people to become musicians (etc.) rather than aid workers (etc.). But that seems right -- we shouldn't (for reasons of beneficence) cause others to act less altruistically. This implication is at least as antecedently plausible as the thesis of Extreme Demands itself. So I don't think it undermines the view at all.

