Thursday, August 23, 2012

Counterexamples to Consequentialism

I've never been much impressed by the standard "counterexamples" to Consequentialism.  They generally start by describing a harmful act, done for the sake of some greater immediate benefit, but one that we would normally expect to have further bad effects in the long term (esp. the erosion of trust in vital social institutions).  The case then stipulates that the immediate goal is indeed obtained, with none of the long-run consequences that we would expect.  In other words, this typically disastrous act type happened, in this particular instance, to work out for the best.  So, the argument goes, Consequentialism must endorse it, but doesn't that typically-disastrous act type just seem clearly wrong?  (The organ harvesting case is perhaps the paradigm in this style.)

To that objection, the appropriate response seems to me to be something like this: (1) You've described a morally reckless agent, who was almost certainly not warranted in thinking that their particular performance of a typically-disastrous act would avoid being disastrous.  Consequentialists can certainly criticize that.  (2) If we imagine that somehow the voice of God reassured the agent that no-one would ever find out, so no long-run harm would be done, then that changes matters.  There's a big difference between your typical case of "harvesting organs from the innocent" and the particular case of "harvesting organs from the innocent when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences."  The salience of the harm done to the one innocent still makes it a bitter pill to swallow.  But when I carefully reflect on the whole situation, vividly imagining the lives of the five innocents who would otherwise die, and cautioning myself against any unjustifiable status-quo bias, I ultimately find I have no trouble at all endorsing this particular action, in this very unusual situation.



Eduardo Rivera-López has a fun new paper called 'The Moral Murderer. A (More) Effective Counterexample to Consequentialism', where he grants something like the above strategy for dealing with standard counterexamples to consequentialism.  The lesson he draws from this is that an effective counterexample to consequentialism must (i) maintain the normal causal connections that we'd expect to hold in the described circumstances, and (ii) avoid undermining consequentialist-approved institutions.

Rivera-López's new counterexample appeals to (highly inconclusive) empirical evidence that the death penalty has a huge deterrence effect, such that each execution can be expected to save 18 lives.  Rivera-López further suggests that it's plausible that a poor man in Oklahoma who killed a white woman in an especially heinous way could have a high enough chance of execution to render this murder a positive expected-utility act.  So, should we consequentialists all head down to Oklahoma and start murdering innocents, in hopes of being executed ourselves?
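To make the expected-value claim concrete (a rough back-of-the-envelope gloss of my own, not a calculation from the paper, writing p for the murderer's chance of actually being executed and counting only lives lost and saved): on the stipulated figure of 18 lives saved per execution, the murder comes out positive in expectation just in case

18p − 1 > 0, i.e. p > 1/18 ≈ 5.6%

(or p > 1/17 ≈ 5.9% if we also count the executed murderer's own death as a cost). So, on these stipulations, even a fairly modest chance of execution would suffice.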

Well, no.  For all we actually know, executions may not have any such deterrent effect.  But that's ok for Rivera-López's purposes.  We just need there to be a nearby possible world where this is (i) true and (ii) widely known (such that the agent can reasonably rely on it).  So let's imagine that world.  The pro-death penalty empirical research pans out, and everyone becomes well aware of the incredible deterrent power of executing murderers.  Should the consequentialists in this world now become "high-impact murder-preventers", by themselves committing a single heinous murder and getting executed for it?

It's a funny question.  I think the imagined world is still different enough from ours that it takes some time to wrap your head around the situation. (At least, I find it pretty hard to believe that execution could do so much good.)  But if we really play along with the stipulations, then sure, I guess heinously killing one innocent and getting yourself executed could be a morally worthwhile act.  (That's not to deny that it would be psychologically traumatizing, perhaps almost impossible, for any normal human.)  That's assuming that the deterred would-be murderers would not themselves have gotten executed, and that their non-murdering possible futures do not involve comparably bad actions, etc. etc.

Of course, it's hard to imagine that this could possibly be the best course of action available to a committed utilitarian.  If he's sufficiently intelligent and committed to be considering "moral murder", then surely there are even better opportunities out there for him to help people. Rivera-López responds that even if Tom the moral murderer is not doing as much good as he strictly could and ought to, he is at least doing more than most people, and so is (according to consequentialism) less blameworthy than your average person who is neither a moral murderer nor a professional philanthropist.

Here it's a little hard to assess how much good an ordinary person does through their everyday life, work and social relationships.  (If they raise great kids, then that's a huge boon to the future.  And folks at various tech companies have come up with innovations that improve the lives of many millions of people.)  Being executed naturally puts an end to all Tom's efforts to improve the world.  Is this one thing so much greater than everything else he would otherwise achieve?  It's really far from obvious, unless Tom is in the last stages of a terminal illness.  (Perhaps it's just the octogenarian consequentialists to whom this case most strongly applies!)

Finally, it's worth distinguishing act and character evaluations in this case.  Note that there would be something really disturbing about the moral character of someone who preferred being a moral murderer to realizing an equal or greater good by more traditional philanthropic methods.  Our ordinary failure to help non-salient, distant people in need, preferring instead to focus on more local concerns, is highly unfortunate but not positively malicious.  But for someone to positively want to kill an innocent person, when the underlying philanthropic end could be better achieved a different (less harmful) way, seems quite dastardly.  So even though the moral murderer does more good (by stipulation) than your average Joe, and hence is acting in a more worthwhile way, as an agent he may actually be more blameworthy, for his choice betrays an element of malice or otherwise bad will.

I think this analysis takes much (and perhaps all) of the sting out of the alleged counterexample.  Yes, it's possible that a moral murderer could be acting rightly, or at least no more wrongly than ordinary inaction. But we should still be very wary of the idea that such actions are normally a good idea, and so we may sensibly continue to endorse our intuitive moral disapproval of such acts in general.  Moreover, even in a case where the act is worth performing, the agent may reveal a moral flaw in their character if the salient harms do not bother them at all, or if they positively prefer causing harm to saving lives by less harmful means.

23 comments:

  1. re. your second paragraph: are you saying a.) Some murders are morally good or b.) Sometimes the deliberate killing of the innocent is not murder?

    Replies
    1. I was thinking the first, but if you want to stipulate that murder is wrong by definition, then you can read me as making the second claim. (I don't care about the terminology.)

  2. I know that sounds inflammatory, but it wasn't meant to be. I really am interested in the answer. Or do you see a third option?

  3. Out of curiosity, roughly what do you think the relationship between character and blameworthiness is? More specifically, it seems odd to describe a person who has done something good (as we've stipulated the murder/killing to be) as more blameworthy for it because of bad motivations or whatever. Unless you're just thinking more blameworthy is equivalent to less praiseworthy. This may just be a terminological dispute, but I would have thought one can only be *blamed* for doing bad things, though one can be *reprehensible* even in doing good things. But I don't have a worked out view of blame or anything.

    Replies
    1. Sorry, it's Cory Nichols from Princeton. I don't know why my email address didn't show up, I signed in with my Google acct...

    2. Hi Cory (I'm guessing your google profile is set to 'private'?), I think that one can be blamed for doing good things for bad reasons. For example, if a tactical bomber volunteers for a justified tactical strike because he anticipates enjoying the collateral damage, then he seems blameworthy. It would be reasonable for people to criticize him for his bloodthirstiness, say, even though the bombing was objectively justified.

      For related discussion, see Blameworthy Utilitarians.

  4. Interesting. What's your take on Brown's Consequentialize This paper http://homepages.ed.ac.uk/cbrown10/papers/CT.pdf

    Replies
    1. Looks good to me (I was never sympathetic to attempts to "consequentialize" deontological constraints). Were you thinking there's a connection between that paper and this post?

  5. OK, upon rereading your post, I think I have a more substantial comment. What right does the consequentialist have to say that the "erosion of vital social institutions" is a bad consequence of the Harvesting Doctor's decision? After all, by consequentialist lights, the institution that will harvest the organs (in the right circumstances) is the morally superior institution.

    So, I think that the case presents a deeper problem for the consequentialist: the problem of making sense of why these social institutions seem so "vital" in the first place. If this vital medical establishment is grounded in public trust, and if part of that trust is in turn grounded in the deontic expectation that our organs will never be harvested, then the trust is itself anti-consequentialist. It looks very much like the consequentialist should actually welcome the erosion of any social institution which operates under this kind of general deontic restriction, and so it may be somewhat disingenuous to refer to the erosion of such institutions as a "bad effect in the long term". Ideally, the consequentialist should hope that such institutions will be eroded and replaced with more "rational" ones.

    Replies
    1. The bad consequence I was imagining was that more people would die due to avoiding hospitals / professional medical care.

      Consequentialists want good results, not "rational" ones. (That is, the aim is to make people well-off, not to make them endorse consequentialism.)

    2. Right, but the good consequence I was imagining was that the hospital system could be vastly improved, essentially via a rejection of the Hippocratic Oath and by the incorporation of organ harvesting/sharing into its practices, hence saving/prolonging more lives. Why isn't this a future to work towards? Is it just unrealistic? If so, why?

    3. Because people aren't so self-sacrificing that they'd be willing to visit a place where their organs might be harvested without warning?

    4. I was hoping you'd say this. There seem to be two ways to describe this lack of "extreme altruism" in ordinary persons. The consequentialist has to see the resistance to harvesting as irrational (or perhaps as a-rational), as a kind of brute emotional fact that is perhaps just an unfortunate reality that moral theory has to deal with. IF there are many more lives to be saved, then my emotional resistance to having MY organs being the ones that do the saving is straightforwardly irrational by consequentialist lights.

      However, the deontologist (or, indeed, any anti-consequentialist) is going to say that we are not just encountering irrationality or brute emotional fact, we are encountering a widespread "bedrock" moral belief (harvesting is never permitted, period). In sum, in order to make sense of the doctor-case as a case of the erosion of "vital" institutions, the consequentialist is forced to adopt a certain pessimistic view of the moral emotions, a view the deontologist can nicely avoid.

  6. [Toby Ord writes:]

    Interesting post. I think the first two paragraphs are a great response to this type of case in general and could be expanded into a very short, very useful summary of why consequentialists aren't concerned by these examples and shouldn't be, despite most non-consequentialists thinking that they work. It might be a nice Analysis piece, or at least a great thing to link people to online.

  7. One reason that I don't find counterintuitive implications of far-fetched thought experiments a reason to reject consequentialism is that such implications seem to be a feature of any reasonable, consistent ethical theory. Kantian ethics has the problem of the ax-wielding murderer at the door. Libertarianism has the problem of not being able to criticize people who let children drown in ponds. You can tweak the theories so they avoid some of these problems, but (1) such moves attract an air of ad hocness, and (2) you can tweak consequentialism, too.

    I listened to a debate in which William Lane Craig claimed that hedonistic utilitarianism was an untenable account of morality because it would say that aliens should invade earth and rape everybody if this course of action would result in a greater balance of pleasure over suffering than any alternative. It's kind of hard to imagine this scenario. But divine-command theory would also recommend massacre of humans by aliens if that's what God wanted.

  8. Hi Richard. I agree with your analysis here, but I'm wondering what you'd think of another type of (alleged) counterexample to consequentialism.

    Suppose I know that having the chocolate cake for dessert would not have the best consequences for me (e.g. it wouldn't maximize my happiness, welfare, or preference satisfaction, in the long run). Suppose I also know that whether I have the cake or not will have no effect on anyone else. Finally, suppose that in a moment of weakness I give in and eat the cake. A consequentialist would have to say that I've done moral wrong, correct? This seems wrong, however. I'm inclined to say that I can't wrong myself, and that consequentialism errs by recommending I treat myself as just another person whose interests need to be taken into account.

    Has such an example been raised in the literature, to your knowledge? And what do you make of it?

    Thanks in advance!

    -Ben

    Replies
    1. Hi Ben, I see consequentialism more as a theory of the all things considered ought rather than any more limited (e.g. purely "other-regarding") notion of 'morality'. I don't see this as a problem, as I think the more limited notion is of limited interest.

    2. That's very helpful -- thanks!

    3. I have one more thought I'd like to run by you, Richard.

      In my example of akrasia and the chocolate cake, there's a sense in which I do the wrong thing by eating the cake. I didn't do moral wrong in the narrow, other-regarding sense of 'moral'. But I did make a bad decision. And you suggest that consequentialism is concerned with the all-things-considered sense in which eating the cake was wrong. That's plausible enough.

      But what about the case in which I knowingly sacrifice a certain amount of my welfare/happiness/whatever for a lesser amount of someone else's? Suppose I choose to discount my own interests when they compete with the interests of my child. Consequentialism must say that I've done wrong -- perhaps not wrong in the narrow sense of 'morality' but wrong in an all-things-considered sense. (Correct?) But I'm strongly inclined to believe that a parent who discounts her interests for the sake of her child does not do wrong, all things considered. What do you think?

    4. In principle, I'd think, we should not weight other people's interests more heavily than our own, just as we should not treat them as of less weight, for either is a distortion from the basic fact of moral equality. Your parent/child case is complicated by various factors, e.g. perhaps the flourishing of her offspring is actually a component of her own welfare, so that the apparent "sacrifice" doesn't really make the parent's life go worse for her after all. Even if it does, if her action is motivated by love for her child, rather than any form of self-loathing, then at least she's acting on good motivations, and hence blameless (on a quality of will account), despite having made an objectively poor (normatively mistaken) decision.

    5. You're right that the parent/child case does bring additional complications. I still have the feeling that I'm entitled to discount my own interests -- that I can yield my claim to equal consideration just as I can yield my ownership of property. I'll certainly be thinking about your response as I try to figure that out. Thanks again!

  9. I don't see why this case isn't just another example of people intuiting that such and such act that they believe would produce tremendous harm (in utilitarian terms) would be wrong, which is what this and the typical organ-harvesting case have in common. Rivera-López can stipulate that, in actuality, this sort of act would produce lots of (utilitarian) good and so the 'normal causal connections' hold, but clearly most intuiters don't believe that such cases would produce net (utilitarian) good. I suspect that even most people who believe in deterrence don't think, when hearing of a murder and the execution of the murderer, "well, at least 18 lives will be saved through deterrence" (regardless of their moral stance). It doesn't matter if his huge-deterrence claim is true, so long as most intuiters don't believe that it's true. What would be relevant would be if people believed that an act(-type) produced utilitarian good and still believed that it was wrong- that would demonstrate counter-utilitarian intuitions. Similarly the fact that one could imagine a possible world where such an act would be utilitarianly right shouldn't be a threat, because it seems obvious that lots of intuiters will find it too weird to endorse the normally-terribly-harmful-act even when it's stipulated that in bizarro world it's net beneficial.

    Replies
    1. It's crucial for the argument that this is meant to be a very nearby possible world, rather than a "bizarro world"; but you're right that the relevant issue is intuitive plausibility rather than objective nearness, which significantly weakens the objection.

