Friday, July 25, 2014

Objections to Consequentialism

What do you think are the strongest objections to Consequentialism? (By 'Consequentialism' I roughly mean the unconstrained pursuit of the good -- which might be agent-relative, but shouldn't build in intrinsic concern for traditional "side constraints" like promises, fairness, etc.)

* Counterexamples: I've previously explained why I'm not impressed by the standard "counterexamples" to consequentialism (transplant, bridge, etc.).  In short, they involve situations where the supposedly "consequentialist" act seems morally reckless, and merely stipulating that it "really is" for the best predictably doesn't undo our intuitive aversion to such irresponsible behaviour.  I think it's a lot harder than most people realize to come up with a real case where an act both (i) maximizes rationally-expectable value, and yet (ii) seems morally repugnant on reflection.  So I wish it weren't so common for people to breezily dismiss Act Consequentialism with a mere hand-wave towards the "familiar counterexamples".

Exception: cases of sadistic mobs, etc., which instead call for axiological refinements (reject hedonism!).

* The Separateness of Persons: This venerable objection rests on the erroneous assumption that commensurability entails fungibility. (That is, assuming we understand normative 'separateness' as a matter of being non-fungible, distinct values or desirable ends.  Sometimes the phrase is instead used in a contentless way, merely to emote disapproval of consequentialist aggregation, or to reassert in new words the trivial observation that consequentialism is incompatible with non-derivative rights.)



* Objections to Aggregation: The critic does better, I think, to just object directly to the idea of aggregating welfare.  And I agree there are cases where it seems highly counterintuitive to let many small benefits or harms outweigh one great one.  But I think considerations of iteration show that our initial intuitions are simply mistaken here.

* The Demandingness Objection: It's often not clear that this is even an objection, as opposed to just a complaint. (However rough it is on us to have to pony up some hard-earned cash, it would be far rougher for the child with intestinal parasites to have to go untreated! So what, exactly, are we supposed to be complaining about?)

A more sophisticated variant of this objection claims that utilitarianism commits one to implausible character evaluations (e.g. that we're all moral monsters). But this is not so.  Combine with deontic pluralism: Act Utilitarianism is an account of the ought of most reason, not the ought of minimal decency. (This move also allows consequentialists to recapture the category of the supererogatory in a plausible way, for those who care about that.  See my 'Satisficing' paper for more detail.)

* The Nearest and Dearest Objection: A related (but importantly distinct) objection holds not that utilitarianism is too demanding, but that it demands the wrong things: we should care more about promoting the welfare of our loved ones in preference to strangers.  Maybe! I think one could reasonably go either way on this one.  But at most it gives a reason to move to a form of agent-relative consequentialism.  (Which I don't think makes a huge difference in practice, so long as you give any non-trivial weighting to the welfare of distant others.) It doesn't do anything to motivate deontological side-constraints.

* The Self-Effacingness Objection: is generally no objection at all, except for my souped-up version which can be answered. (Note that the picture of the rational utilitarian agent which emerges from that last linked post is very relevant to why I think the standard counterexamples fail.)

* Fairness Requires Randomness: I confess I've just never understood the appeal of Taurek-style resistance to picking the best option (and associated complaints of "unfairness"). But for those who do, why not just see the actual distribution of identities as amounting to a kind of (metaphorical) divine lottery?

Have I missed any good ones?  I know there are (bafflingly) many non-consequentialists out there, so please do help out by posting your favourite objection -- or, if it's covered above, explaining what in my responses you find unsatisfactory!

[P.S. See also, 'Why Consequentialism?', for reasons in favour of the view.]

33 comments:

  1. What about the "epistemic argument" against consequentialism, which either maintains that consequentialism is non-viable as a decision procedure or (more profoundly) argues against the criterion-of-rightness view, on the grounds that consequentialism would seem to leave open or epistemically uncertain the rightness (or wrongness) of actions whose status as right or wrong we are certain of. See James Lenman, "Consequentialism and Cluelessness" for a nice statement of the worries... http://www.jstor.org/stable/2672830

    1. Thanks, yeah, that's an interesting one! I'm inclined to reject Lenman's swamping principle (that huge unknowns mean that the comparatively "insignificant" knowns count for nothing). Absent special reason to think otherwise, I think we can be reasonably confident that randomly killing strangers really is an expectably bad thing to do, etc. (But it would be worth exploring the argument more fully in its own post sometime.)

    2. I read Lenman's paper last night and, I have to say, this is the first time that my confidence in consequentialism has been threatened. I highly recommend it Richard, and I'd be very interested in what you think about it.

      In essence, your above reply is precisely what he attacks. Consequentialists have said this (we can be "reasonably confident" that killing strangers has expected negative utility and giving to charity has positive expected utility) in the past and it seemed intuitive to me until last night.

      However, your agent-relative idea may avoid the objection. If what has agent-relative value is promoting the utility of the near future rather than distant, Lenman's objection fails. Unfortunately I don't share the agent-relative intuition and it seems to me to be one of the most important aspects of consequentialism: its sheer impartiality.

      If a god told me for certain that the person I was about to save will turn out to be the ancestor of a super-Hitler in the very distant future (and all in all, the utility will be massively negative, whereas the utility would be massively positive if I let her die), I would not save her even if it meant the parts of the world close to me in space-time would be significantly improved.

      Can you motivate your agent-relative version for me please? Giving up impartiality to me seems a bit, well, selfish!

  2. Hi, Richard

    I'm not sure the following is the kind of objection you have in mind (since it might be seen as a broader objection to a number of theories; if it's not, please let me know), but with regard to the “Nearest and Dearest” objection and agent-relative consequentialism: what if an agent does not give a non-trivial weighting to the welfare of distant others?

    I'm thinking of agents like an Ideally Coherent Caligula (ICC), who needn't be human (or do you reject the possibility of such agents?).
    Alternatively, we may consider agents of a quite different sort: not particularly interested in torturing others, but with goals so vastly different from any human's that acting rationally in pursuit of their values may result in actions that are not recognizably moral, and may bring about very bad results.

    Some examples would be Yudkowsky's paperclip maximizers or some other weird AI.
    Also, aliens from other planets, like Street's social insects, who care about their queen and colony but might care very little – or nothing – about, say, humans, or even value the existence of humans negatively, since humans take resources the aliens might use for the expansion of the colony.

    So, the objection I'm considering would hold something like:

    OB1: There are possible (perhaps actual) agents that all-things-considered ought to bring about definitely very bad results. Agent-relative consequentialism can handle that, but at the cost of defining “good” relative to the agent in a way that is not connected to our usual conceptions of good and bad things, and definitely not connected to moral goodness or moral oughts.
    But if “good” in act consequentialism is conceptually dissociated from morality, the question is whether consequentialism is still a moral theory.

    Wouldn't the result be essentially like constructivism about reasons + an externalist view on morality?

    I think a consequentialist might respond that the theory only applies to some kinds of agents, so there would be agent-relative good as long as the agent has some sort of mind, whereas maybe different sorts of agents all-things-considered ought to do some other stuff (i.e., when relativizing to agents, in some cases we get variants of good, and in other cases we get different things altogether), though that would introduce a number of complications, it seems to me.

    1. Hi Angra, the kinds of agents you describe raise a "Why be moral?" challenge to the normative authority of morality in general, rather than to any particular theory (like consequentialism). A committed moralist, of any kind, will simply assert that such "amoralists" are not doing what they really ought to do, because they're neglecting all the super-important moral reasons that there are for caring about the welfare of sentient beings. (The committed moralist thus rejects constructivism about reasons in favour of a less relativism-prone view, e.g. realism or expressivism. Instrumental rationality is no guarantee of rationality tout court, if the agent in question has the wrong values!)

      I should clarify that by "agent-relative consequentialism" I didn't mean the subjectivist view that each agent gets to pursue whatever values they please. Rather, I had in mind an objective view that just slightly modifies utilitarianism by granting extra weight to the welfare of those to whom one stands in a special relationship (friends and family, say). There's still an objective account of "the good" at work in the background, it's just "agent-relative" insofar as it specifies slightly different aims to each person: "promote the good of all, but especially your friends and family."

    2. Richard,

      Thanks for the clarification on what you mean by “agent-relative consequentialism”. I will think about other objections one might raise.

      That aside, I don't think the view I described is subjectivist in the colloquial sense of the word, just as the fact that there are other species that have things similar to color (in terms of perception, function, etc.), but which aren't color (e. g., their perceptions are associated with different parts of the spectrum) doesn't mean that color statements are subjective, that there is no objective fact of the matter as to whether a traffic light was green when the defendant crossed it, etc., in the usual, colloquial sense of those expressions. So, I think color is objective even if aliens have alien color (depending on the specific aliens), and the good is objective even if aliens have alien good (more precisely there is some variation in normal color vision among humans, but that would complicate the example without being crucial to the point I'm making).

      Granted, technical usage varies, and many philosophers would call that view “subjectivist”, but I think it's a bad choice of words; I'd rather stick to the usual meaning of the terms.

      Terminology aside, I don't agree with the assessment of committed moralists, either.

      In particular, I don't think making proper moral assessments, being committed to behaving in a morally proper manner and being consistent about it commits one to making claims that weird AI or aliens from another planet (and with a psychology radically different from ours) would be somehow mistaken about something.

      If you meant something else by “committed moralist”, please clarify, but describing a person who does not have the views you mention as not a committed moralist is something I would object to, as it gives the impression that they're somehow less committed to behaving morally, defending true moral claims, or something along those lines, and I would argue that that's not the case.

      All that aside, I will address your other points in the post below (I gather you disagree with most or all of what I will post below, but I'm outlining these objections because you expressed bafflement at the existence of so many non-consequentialists, and asked us to raise our favorite objection, so I'm explaining where I'm coming from as a non-consequentialist. While I don't have a single favorite objection, I find these ones – among others – decisive.).

    3. With regard to the weird and smart aliens, AI, etc., I'm not sure whether they would all be (or they all are or will be; chances are either there are or there eventually will be such beings, somewhere in the universe) moral agents who behave immorally, or not moral agents at all (in which case, there is nothing that they morally ought to do, though there plausibly still is something they rationally ought to do, or all-things-considered ought to do, given their own evaluative standpoint), but I would see no reason to believe they're making any mistakes, or that they all-things-considered ought to seek the good; it seems to me it would be irrational of them to seek the good, given the values they have (and they would have to behave irrationally to change them).

      Also, I would argue that they wouldn't have moral language at all – similar language would not be moral language, just like color-like language is not the same as color language, and trying to debate would result in talking past each other.

      After all, language is determined by usage, and as a result of a very different evolutionary past – or design in the case of the AI – they would care about properties and things very different from the ones we care about, and their language would allow them to talk about us, not about something H. Sapiens does.

      The alternative that there would be genuine disagreement about morality, that they would all be making some kind of mistake, etc., seems just too improbable given evolution. So, if the type of consequentialism you have in mind is committed to the alternative (if not, please clarify), then I would say this is an objection, other than the ones you listed – and one I find decisive.

      Regarding whether they have the “wrong values”, do you mean it's irrational to have those values, or immoral, or both?
      I get the impression you're saying it's irrational (and plausibly also immoral, though I'm less confident in that interpretation).

      Personally, I see nothing irrational in having those values, as long as they are consistent; in fact, it would be instrumentally irrational of them to seek the good. Still, I suppose there might be more than one usage of “rational” in this context, and if so, they might be irrational in some sense if they don't seek the good (but still instrumentally irrational if they do).

      However, that is not crucial. Assuming that that's the case (i.e., that that would be irrational, in some common sense of the word), it would seem to me like an issue about the meaning of “rational” and “irrational” in English (and, if so, plausibly of other terms in other human languages); but then the reply given in the case of morality can be given in the case of rationality. For example (simplifying, and assuming for the sake of the argument that humans morally ought to seek the good, since this is not crucial to the objection either): suppose some intelligent aliens with complex language, etc. (call them “zyntomans”) evolved very differently and have minds very different from ours, and they are generally instrumentally rational in bringing about what they value, but as it happens they have irrational values, and it's irrational for them to seek the good. Even so, seeking the z-good would plausibly be z-rational, and seeking the good z-irrational. When they talk, they talk about z-rationality, z-good, etc.

    4. A "moralist" is someone who's morally judgmental of others. I was using it as rough shorthand for the "moral rationalist" position that amoralism is irrational. (I agree that you can be committed to acting morally as a matter of personal values, choice, or preference, without being a "moralist" in my sense.)

      In general, I worry that you're running together issues in normative and meta-ethics. I take it that you object to my brand of moral realism (which includes moral rationalism) -- and I'm happy to explore that disagreement more in another thread (e.g. I don't think it's so clear that the zyntomans are talking about z-goodness rather than goodness -- it depends in part on whether they take themselves to be fundamentally fallible in the same way that we Earthly robust realists do). But consequentialism as a normative theory is independent of all that. You could combine it with expressivism, or with non-rationalist ("externalist") naturalism, etc. Correct me if I'm wrong, but I don't see that the objections you raise are reasons to favour deontology over consequentialism, right?

    5. True, most of the objections I raised would not, on their own, favor either deontology or virtue ethics over consequentialism, as long as you don't include some metaethical views in the definition of consequentialism – but I had gotten the impression that consequentialism, as a theory of all-things-considered reasons which holds that agents ought to bring about the good, did include them.
      As I mentioned, though, I wasn't sure this was the sort of objection you would be interested in, and I see now that you don't include those metaethical views in the definition of consequentialism, so point taken; I won't insist on any of those objections in this thread.
      On the other hand, the objection I raised in my second post in the thread definitely favors deontology over consequentialism, even if it involves some metaethics lurking there as well.
      While it is not an argument for deontology as the full answer – e.g., it takes no stance on whether, say, when it comes to moral goodness, a virtue ethics approach is correct – it does support deontology at least when it comes to moral obligations, even without denying that consequences do matter in many cases – but secondarily, and because of rules allowing for it; the “do”s and “don't”s would be primary.
      So, I think that objection is still on target.

      On the issue of moralists, thanks for the clarification on what you mean, but I don't agree with the view that a committed moralist of any kind would adhere to the views you mention.
      For example, the following are committed moralists, by that definition:

      a. Alice holds that it's always irrational for non-psychopathic human beings to behave immorally, and is morally judgmental of them. She also is morally judgmental of psychopaths in the usual sense of the terms, since she judges them and their actions immoral and blameworthy and judges them evil in many cases, even if she does not take a stance on whether their immoral actions are always irrational.
      As for weird AI, aliens, etc., she holds that it would be irrational of some of them (in some cases, depending on their minds) to bring about the good, and in fact, some of them even rationally ought to bring about bad results (if they are, say, z-good).
      Maybe bringing about what they rationally ought to would be immoral, or maybe they would not be moral agents at all, but more like lions or viruses, which can bring about bad results and aren't moral agents - even if unlike lions or viruses, these beings would be capable of rational reflection on a level similar to that of humans or superior -; Alice takes no stance on that.
      She leans towards naturalist externalism about morality, but towards constructivism about reasons.

      b. Bob's stance is like Alice's, with the difference that he holds that it's always irrational for all human beings to behave immorally (so, he disagrees with Alice about the psychology of psychopaths).

      There are other variants.

      On a different note, I've been thinking about your suggestion of an agent-relative consequentialism, but I'm not sure it would be consequentialism anymore. In fact, it seems to me that the consequence is that sometimes, some people ought to promote the bad.

      For example, let's say that Alice's daughter is at risk of dying in a burning building, and so are 5 other kids; she can save either her daughter or the others (whichever she tries to rescue first) without serious personal risk, but very probably not both. It seems plausible she ought to save her daughter – that would be promoting the good of her family first. However, at the same time, that's promoting the bad all-things-considered – i.e., the result in which one is saved but 5 die is overall bad; and the lack of an action/omission distinction indicates she's promoting the bad by not saving them instead.

    6. Just to clarify, the agent-relative consequentialist posits agent-relative value. Saving my kids over yours is better-sub-me, but worse-sub-you. That is, we should rank the outcomes differently. That's what makes it agent-relative.
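
      (One illustrative way to model this, with the particular functional form assumed purely for concreteness: give each agent A her own ranking generated by a value function V_A(outcome) = sum over individuals i of w_A(i) × welfare_i(outcome), where w_A(i) is 1 for strangers and some k > 1 for A's own children. Then "better-sub-me" just means higher V_me; the same pair of outcomes can then be ranked oppositely by V_me and V_you, which is the agent-relativity at issue.)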

    7. Thanks for the clarification; I had thought there was some assumption of agent-independent value as well.

      So, given agent-relative value, would people who are not aware of agent-relative consequentialism be talking past each other if they debate whether a certain course of action would make the world a better place, if the matter involves saving their nearest and dearest, or at least those of one of the people involved?

      For example, let's say that Tina says that by saving her own kid over two unrelated ones, she made the world a better place, but Alice (who has no kids and is not related to Tina's) says that Tina is mistaken, and that Tina would have made the world a better place if she had saved the two other kids instead.

      Is the proper interpretation of agent-relative consequentialism that Tina's claim is true, Alice's claim that Tina is mistaken is false, but Alice's claim that Tina would have made the world a better place if she had saved the two other kids instead, true?

    8. I think one could interpret such talk in a variety of ways. It's often most natural to read a phrase like "made the world a better place" as invoking agent-neutral value, in which case Tina's claim would be false. On that way of talking, agent-relative consequentialism tells us not just to aim at making the world a better place, because that neglects the agent-relative value of our nearest and dearest. But Tina could clarify that she just meant that by saving her own kid, she brought about the better outcome, i.e. the outcome that she had most reason to prefer and to pursue, in which case her claim is true and Alice should agree that she brought about the outcome that she (Tina) had most reason to prefer.

  3. There is also an evolution-based objection to consequentialism about morality (consequentialism about all-things-considered reasons might avoid this objection, at the cost of separating morality from all-things-considered reasons and not trying to be about morality, but that seems too high a cost in this context).

    The idea is that, based on evolutionary considerations, we should expect morality not to be about making the world a better place – at least, not entirely so – but rather that at least an important part of it would indeed be a list of “do”s and “don't”s.
    This sort of argument makes no claim that expressions like “moral obligation” are about evolution (that would be obviously false), but rather, uses evolutionary considerations to assess what kind of usage of [some] words we should expect, and then argues that the only good candidates for that usage are some moral terms. Also, it's not a debunking argument – it makes no claim in support of an error theory.

    So, an objection of this sort would be as follows:

    As we can observe in other social animals, like wolves, lemurs, monkeys, and other apes, there are some species-wide and species-specific rules of social behavior (plus some group-relative rules, but some rules are species-specific).
    In particular, it's clear that our primate relatives have the mental capacity to compute those rules, and normally care a lot about them. Rule-breaking is normally punished when discovered, in various different ways.
    Now, if some social animals were to evolve and become more intelligent, to the point of being able to use complex language to communicate with one another, one would expect that they would have words that allow them to communicate information about things they care about, like members of their species, predators, sex, food, etc., and of course their rules – on the issue of the rules, one would definitely not expect that millions of years of evolution of minds involving complex systems of rules would just be deleted by the capacity to speak, so that speakers would not care about the rules. That would be extremely improbable, given how evolution works. The capacity to compute those rules and the value the animals would put in them, would remain.

    Now, in the human case, there are excellent candidates for words that pick out rules of behavior in our species, namely expressions like “moral obligation”, “morally obligatory”, “morally permissible”, “morally impermissible”, etc. But there are no other good candidates.

    So, it seems that our moral assessments, at least when it comes to moral obligation, permissibility, and impermissibility, track the rules of behavior in our species. Barring error theories, that supports a deontological view; that does not mean that consequences do not play a role, since some rules involving how to behave depending on expected outcome may very well have evolved (and given ordinary moral intuitions, they sure did).
    However, expected outcome would be secondary – rules would be primary.
    Moreover, very probably not all rules would involve expected outcome. In fact, expected outcome would on many occasions be very difficult if not impossible to know in ancestral environments, but rules would be there anyway. While new, more complex rules involving some consequences may well have evolved more recently, one would expect that they would edit previous rules to some extent, not just fully erase them – because of how evolution works – so consequentialist rules would operate on a par with other, non-consequentialist ones.

    Now, there are other moral terms that may require a different treatment (like, say, the concept of a morally good agent, which may involve virtue considerations). Maybe not all of the talk we identify as moral talk involves rules, “do”s and “don't”s. But at least a significant proportion of it very probably does.

  4. Hi Richard,

    I think this and your previous blog post on consequentialism (the one linked to in the PS) are both very helpful as "landscape mapping" exercises. But I would like to push back against your bafflement at the fact that there are many non-consequentialists out there. (I hope I am not committing the sin of taking too seriously something that was merely intended as a joke. For my part, I can tell you I'm not taking it "too" seriously; I just hope you didn't intend it "merely" as a joke either!)

    I was struck by your candid description of yourself in your earlier post. You said you have a "deep-rooted sense that non-consequentialist views just don't make sense," and that the "considerations" you went on to outline were merely such that they "could plausibly lead one to favor consequentialism." I think this is incredibly perceptive, forthcoming, and indicative of a much wider phenomenon. I imagine that this is a fair description of the mindset of many who are introspectively honest about their ethical-theory commitments. More importantly, I think that a proper appreciation of this fact should extinguish your bafflement straightaway.

    First, it seems to me that the blog post just above would be better framed in the same spirit of your previous post. One has a prior "deep-rooted sense" that consequentialism is true, and these are some "considerations" that could plausibly lead one to reject common objections against it. The picture that then emerges is that both the set of arguments for consequentialism as well as the set of arguments against consequentialism are mere considerations, highly disputable as standalone (as the literature bears out), but perfectly capable of being combined in such a way to justifiably buttress one's "deep-rooted sense."

    Second, it seems that the mirror image is true of the deontologist. As with you, I imagine it to be a fair description of their mindset that direct introspection reveals merely a "deep-rooted sense" that consequentialism is false. If that's right, then expressing your displeasure with common arguments against consequentialism amounts to pretty harmless fire. The deontologist's commitment to deontology does not stand or fall with these arguments any more than your commitment to consequentialism stands or falls with certain arguments against deontology. Much like you, the deontologist herself can buttress that "deep-rooted sense" with "considerations" that could plausibly lead one to favor deontology, and can go on to reject the "considerations" that could plausibly lead one to reject deontology.

    Of course, no one has so nicely laid out the considerations in favor of deontology, and the considerations against the common objections to it, in a crisp and accessible blog post as you have for consequentialism. But someone could, and hopefully someone will. (Any takers out there?)

    But while you express bafflement as to the existence of many deontologists, I myself perceive such bafflement as, perhaps, a bit parochial or naive. Most deontologists, I am here suggesting, are in precisely the same theoretical situation as you describe yourself to be in: they have a "deep-rooted sense" that is buttressed by certain "considerations" that "could plausibly count in favor of their view," and certain considerations that could plausibly count against the common arguments against their view.

    As philosophers, what more could we expect?

    1. Hi Luis, perhaps you're right! I guess my naive bafflement remains resilient in the face of a merely general expectation that there are (somewhere) plausible-seeming reasons to favour deontology. What I really need is to become more closely acquainted with such reasons, and in a way that helps to bring out their plausibility. (For the "standard" objections, addressed in my post above, just strike me as very bad objections to consequentialism. I'm not convinced that, on reflection, anyone should think they plausibly favour deontology at all -- except, I guess, for certain cases where others just have different intuitions -- but such controversial cases, while they might reasonably motivate deontology for the particular people who are convinced by them, are not well suited to be considered clear "counterexamples".)

  5. Hi, Richard. As a nonconsequentialist, I've enjoyed following your blog and getting insights from the other side of the fence. Here's an objection that I (and a few other philosophers I know) might lodge against consequentialism, though it could be applied to other influential moral traditions as well (e.g., Kantianism, virtue ethics). You might call it the "Omissions Objection" or "Incompleteness Objection."

    When examining ethical theories, one can easily come away with the impression that they all have their own strengths and weaknesses. Certain theories seem vulnerable to objections that others can evade and vice-versa. If this impression survives sustained reflection, then the picture that starts to emerge is something like the following: most moral theories have a part of the picture right but are missing something important. Applied to consequentialism, one might think that the theory just omits too much. The (reasonably expected) consequences of an action are important to assessing the rightness or wrongness of that action, but they cannot be the sole determining factor in whether it is right or wrong. Other considerations could include (among others) fairness, the presence of a promise (or contract) specifying one should act in a certain way, one's intentions or motives, and the effects of performing the actions on one's character. (Here, I could include "respect for the separateness of persons," but you obviously believe some forms of consequentialism can accommodate that consideration.) Consequentialism seems incomplete because it does not give any direct weight to these kinds of moral considerations.

    As mentioned earlier, this objection could be lodged against other moral theories. One could use it to argue against Kant's Categorical Imperative, for instance, by pointing out relevant moral considerations that the principle (whichever version is picked) omits or minimizes. But I've mostly only heard the objection pressed against consequentialism because such theories are so explicit in denying that non-consequentialist considerations play a (direct) role in determining whether an action is right or wrong.

    1. Hi Trevor, thanks for your comment! That sense that consequentialism neglects morally relevant considerations does seem a common reason to favour a view more along the lines of Rossian Pluralism (or "moderate / common-sense" deontology).

      This may just be an immediate impasse of conflicting intuitions, but I wonder how confident we should be that apparent intuitions to this effect are really so clearly expressing (i) the judgment that promises (etc.) have direct (non-derivative) weight; rather than (ii) the judgment that promises (etc.) generally have weight. Consequentialists can, of course, accommodate type-ii judgments. And when I reflect on the matter, I find that I don't ultimately endorse any type-i judgments. (I think promises matter instrumentally, because they enable us to better coordinate, etc. But it would be weird to add them to the things that really matter in life: love, happiness, intellectual understanding, etc.) So while on my view it will often be true enough to say that an act is wrong because it broke a promise, that surface-level description isn't the deepest or most complete explanation available of why the act was wrong.

      Do you find that kind of (indirect consequentialist) response at all satisfying, or do you find that you have firm type-i intuitions about these things? (If so, I guess we've reached bedrock. But I appreciate your bringing the point to my attention!)

    2. Sorry for the extraordinary delay in my reply: it was just a fortuitous impulse to revisit this post when I was browsing your year-in-review post.

      I think we actually agree that it's desirable to give an explanation (beyond just an appeal to foundational intuitions) about why it's wrong to break a promise. But here, I think we'll hit a divide pretty quickly. If I were asked to explain why it is generally wrong to break a promise, I would say that it's because doing so fails to show proper respect to the person to whom one has made the promise. (In expressing this idea, I'd probably appeal to something similar to the humanity formulation of Kant's categorical imperative.) I'd offer a similar explanation to explain why one generally shouldn't deceive others. So the problem with these particular issues is a disagreement about what the best explanation for their wrongness is. I wouldn't be satisfied with trying to cash out the idea of showing individuals appropriate moral respect exclusively in terms of how showing such respect promotes good consequences.

      In any case, this sort of disagreement can't really be resolved in comments on a blog, but since I expect to continue reading your work, you'll have further opportunities to persuade me.

  6. Hi Richard, Good stuff here (as usual). Regarding aggregation, I agree that iteration cases are useful to think about. But I'm not sure how worried the anti-aggregationist must be. The traditional cases of aggregation were pretty much "pure" cases of inter-personal aggregation -- I'm thinking here of the headaches-vs-lives cases, for example. In those cases, there really does seem to be a conflict between one person's interests and the interests of the many. But in a case like Parfit's "minutes" case, everyone benefits by adopting the iterative policy of granting single-minutes to the many. It's hard for me to see how the permissibility of the minutes policy entails the permissibility of inter-personal aggregation in the standard cases. Perhaps the consequentialist has an argument to that effect, but I don't yet see it.

    I think a stronger argument stems from iterative cases involving risk of death, like the one discussed by Tom Dougherty in his JESP article (http://www.jesp.org/PDF/aggregation-beneficence-and-chance.pdf). Here I think the non-consequentialist should respond similarly, invoking something like the ex ante Pareto principle: everyone's life is made better off by policies permitting risk-taking, even though we know someone will die while the group takes risks for even small benefits. But this of course forces the non-consequentialist to explain why a more global ex ante Pareto (and its attendant average utilitarianism) is problematic. When it comes to iterative cases, I think that is where the action is. (The exchange between Kamm and Gibbard in *Reconciling Our Aims* is useful on this.)
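
    (To put toy numbers on the ex ante point, purely for illustration: suppose a policy gives each of a million people a benefit worth b while imposing on each a one-in-a-million risk of a death whose disvalue is d. Ex ante, each person's expected net gain is b − d/1,000,000, which is positive whenever d < 1,000,000·b; so everyone's prospects are improved, even though ex post the policy is expected to cost one life.)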

    Regarding the Nearest and Dearest objection: despite all my non-consequentialist intuitions about inviolability and distributive fairness, the Nearest and Dearest conviction does seem to me the last one I would be willing to give up. It is too hard for me to believe that the only normative reason I have to promote the well-being of my nearest and dearest is that this is a way to promote impersonal goodness. And I find the move to agent-relative goodness problematic as well. If I chose to save my child's life instead of saving the lives of two other children not related to me, this situation would be all the more tragic (I would think) if I had to justify my action by saying my child's existence makes the world better than it would be if she died and the other two children lived. So I'm curious: is there a brand of agent-relative consequentialism that would permit my saving my child but would free me from having to say this unsavory thing about the other children's lives?

    1. Hi, Paul,

      On the issue of agent-relative goodness, my impression (I might be missing something) is that the conclusion that your child's existence makes the world better than if she died and two other children lived is not only unsavory, but incompatible with this modified version of the consequentialist theory, which still holds that there is an agent-independent account of the good, even if each agent has different goals to promote (i.e., as Richard said in his reply to one of my posts raising the issue of weird agents, each person gets to "promote the good of all, but especially your friends and family.").

      Given the agent-independent account of the good, it seems it's not the case on this agent-relative theory that your child's existence makes the world a better place than the existence of the two others. If it were, the parents of each of the other two could similarly make the assessment that the existence of their child makes the world a better place, and if that were a consequence of the theory as well, the result would be a contradiction.

      Moreover, someone not related in any way ought to save two over one, according to this theory (by aggregation), so the existence of the two other children is better (makes the world a better place, etc.)

      So, it seems to me that the agent-relative type of consequentialism is committed to the view that the parent in question ought to promote the overall bad in such circumstances - given that the theory rejects moral distinctions between actions and omissions, it seems that you would be promoting the overall bad, even if by promoting the good of your family.

    2. Hi Paul, re: nearest and dearest: I think the relevant justification (given that value = desirability) is just that you have more reason to desire your own child's survival. That's all it is for that outcome to be better-relative-to-you. You're certainly not claiming that the other children's lives are less valuable in an agent-neutral sense: you can acknowledge that a stranger would have more reason to desire that the two be saved, and of course that their own parents would have even stronger reasons to prefer that outcome (making it better-relative-to-them).

      Also, I think it's important to stress that (contrary to the common stereotype) consequentialist justifications shouldn't ultimately appeal to abstractions like "making the world better". We act for the sake of particular valued ends (e.g. people), not for the sake of the abstract aggregate of value. See my post on 'General and Particular Moral Explanations', as well as the separateness of persons stuff above.

      re: aggregation: I don't mean to deny that a deontologist could prohibit aggregation via a purely deontic principle about permissibility (perhaps applied only to non-Pareto cases). I just think the intuitive force of the cases is undermined once we realize (via the iteration cases) that we must reject the intuitive evaluative claim that the outcome with the big harm is worse than the outcome with the many smaller harms. (See my exchange with Doug in the comments.) Its seeming worse is what was doing all the intuitive work, or so it seems to me.

  7. (I meant to describe Dougherty-type cases as ones of risk-imposition, not merely ones involving risks of death. Obviously Parfit's case involves risks of death too.)

  8. Hi Richard, Thanks. Just an autobiographical claim: I don't think my intuitive dislike of inter-personal aggregation is related to an evaluative intuition. But perhaps I find it somewhat easy to accept the evaluative kind of aggregation, since we non-consequentialists can contain it in practice with deontic principles.

    Re "making the world better": OK, right. I have read "Value Receptacles" and agree it advances consequentialist theory. (Indeed, I cite it in a forthcoming paper on aggregation :). But I took your relevant conclusion in "VR" to be in an important sense comparative: on 'fungibility', consequentialism doesn't fare worse than other views, since other views probably must also treat merely possible future people as fungible. If that's right, does it not mean that all views should take abstractions like "making the world better" as generating morally relevant reasons?

    1. Well, in the special case of merely possible people, sure. But when there are existing concrete individuals available, we can (and should!) care about them. :-)

    2. Oh I agree, but I reckon you've already encountered consequentialists who staunchly reject that future "world improvement" duties really do differ in kind from present duties of "individualistic concern." I guess I'm just welcoming you to the club of distinction-drawers-that-many-(most?)-consequentialists-bill-as-straw-graspers. But by all means jump on in, the water's warm!

  9. Has anyone mentioned 'painless violation' counterexamples? For example:
    1. You stalk one of your employees back home for several weeks to harmlessly satisfy your desire to have power over them, without them or anyone else ever discovering that you did this. During that time you gathered intimate secrets about them, but never reveal these secrets or use them for any malicious purpose - you simply enjoy having the privileged information. If you hadn't done this you probably would have satisfied your desires by treating them worse at work, giving them jobs they hate and refusing them holidays or promotions.
    2. You administer a mind-altering drug to your friend who suffers from depression, secretly and without the recipient noticing, which co-opts their will so that they do whatever you command them to do, and then wipes their memory of it once the drug wears off; but you only command them to walk your dog, which they really enjoy doing. If they hadn't been forced to do this, they would have sat at home in a stubborn melancholic state.
    3. With their dying breath, your best friend specifies their preferences for how their mortal remains should be treated - they ask for a cremation and a reasonably modest funeral. Instead, and without anyone else knowing this, you preserve their dead body in a large container of formaldehyde in your basement so that you can put it to use in various anatomical experiments. This allows you to become an accomplished surgeon more quickly (since there is a world-wide shortage of voluntarily donated cadavers) and save more patients' lives.
    I've tried my best to stipulate that in these examples you are going a reasonable way towards maximising pleasure/happiness; it's not as if there are any obvious alternative ways of procuring the same amount of pleasure/happiness. Satisficing consequentialism would certainly permit these acts. It's also plausible that maximising consequentialism would consider you to have more reason to commit these acts than to commit some other acts which procure less pleasure/happiness but do not involve spying, betrayal or the violation of someone's autonomy.
    For me it's these kinds of examples that make me reluctant to accept pure consequentialism. They may be a bit outlandish, but actually there are plenty of opportunities in everyday life to betray people or violate their privacy or autonomy without harmful consequences.
    Your response might be that you have good reason to think that your behaviour will be harmful; that it will erode trust, hurt people's feelings, cause outrage. And if it really was the case that you had no reason to believe as such, then you might accept that (in these very unusual circumstances) it is permitted (or even required) for you to spy, betray or violate someone's autonomy; but this nonetheless reveals something very disturbing about your character. Perhaps my intuitions about these cases differ, but I feel that the acts are wrong not because of their expected consequences, nor because of what they say about your character - though both of these are considerations which might speak against so acting. They are wrong for an independent and sufficient reason: the individuals in question would object if they knew about it. I think that consideration matters deeply to us as moral agents, independently of expected consequences or evaluations of character.

    1. Hi William, aren't you just objecting to hedonism here? A preference utilitarian, for example, can give direct weight to such violations of people's self-regarding preferences.

      The key test for consequentialism vs. deontological side-constraints is whether we should be willing to bring about one such violation (say) to prevent five similar violations. If such violations are bad, then this should be a worthwhile (albeit distasteful) trade-off.

    2. Thank you for your reply, Richard. I suppose a preference utilitarian would want to say that these acts are wrong because they violate the individuals' self-regarding preferences; however, I think I just struggle to see that as a moral assessment that is plausibly based on the consequences of these acts. In what sense is the frustration of their preferences a part of the consequences of the act? I think it's a bit too mysterious to envisage preferences as entities which exist independently of the subject, which can be frustrated without their being aware of it. It seems to me that the moral assessment is really based on certain counterfactual considerations, consideration about what the individuals would have felt if they had been aware of my behaviour. Counterfactual claims are both less mysterious than claims about subject-independent preferences and more intuitively relevant to our moral assessments of behaviour; they are not claims about the effect my act will have on certain free-floating preferences, but rather about whether the individuals affected would have consented to my acting that way if they had been in a position to choose.

      Does the trade-off test reduce my reluctance to call that a form of consequentialist moral reasoning? Not really. I may well judge that one such violation is worth it to prevent five similar violations, because the normal constraints on violation are defeasible given special circumstances. But those circumstances need not be just any situation where the frustration of some preference would prevent the frustration of a greater number of similar preferences, thereby reducing the quantity of frustrated preferences overall. The defeasibility criteria might be more demanding than that - the fact that you suggested five similar violations, rather than just two, speaks to our intuitions that the criteria really are more demanding than consequentialism would assume.

  10. Great post Richard. It is nice to see so many of your ideas come together in a summary such as this. You could write a great book (or encyclopedia article) on consequentialism if you wanted, presenting it in an appropriately human light.

    1. Thanks Toby, I do hope to write a book on this stuff eventually!

  11. Smilansky ("Utilitarianism and 'punishment' of the innocent: the general problem", Analysis 1991, 256-61) brings up the settings for false positive and false negative rates in the judicial system. He looks at the case for increasing the sensitivity of the test (increasing the FPR and decreasing the FNR) to improve the law and order situation. A corollary is that transparency into procedures that would cause indignation and alteration in preferences should be avoided, as it would lead to a net increase in aggregate suffering, so returning to the old conspiracy of consequentialist do-gooders against the deontologists ;)
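
    (A toy version of the arithmetic, with numbers assumed purely for illustration: suppose lowering the standard of proof raises the false-positive rate from 1% to 3% while cutting the false-negative rate from 30% to 10%. Out of 1,000 defendants, 200 of them guilty, the stricter standard convicts 8 innocents and acquits 60 of the guilty; the laxer standard convicts 24 innocents and acquits only 20. If the aggregate harm prevented by the extra 40 convictions of the guilty outweighs the harm of the extra 16 wrongful convictions, an unconstrained consequentialist calculus favours the laxer standard, and, as the comment suggests, favours not publicising the change.)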

  12. Weirdly, my answer is a bit different.

    I think the answer is relativity.

    If your notion of consequentialism doesn't have something that fills the role of discounting, you get absurd results in many quite plausible states of affairs. In particular, you lose the property that if one outcome is preferable to the other at every time, it is overall better. In other words, you need something like discounting in order for it to turn out that it is better to create good consequences now than to simply delay them for an arbitrary amount of time.

    Unfortunately, if delaying the initiation of changes with beneficial consequences is worse than not doing so, special relativity dictates that two groups of individuals moving at a high rate of speed relative to one another (one group can even take off on a ship from the earth) will have different judgements about which states of affairs are preferable.

    Both groups view the other group as moving, and in their reference frame time runs more slowly for the moving group. Thus, if the groups have to decide who will take a person who will start to experience extreme joy (any good consequences) in an hour, both will judge that the time dilation experienced by the other group means it's morally preferable for this individual to come with them, since in their reference frame he will thereby start experiencing joy sooner than if he had gone with the other group. (Note that since the person will feel happy forever after that point, this won't be made up by a longer interval of feeling happy.)

    This is a problem because now there is no fact about which is the better outcome. I have no idea how to solve this problem.
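
    (To make the symmetry concrete, with a relative speed assumed purely for illustration: special relativity has each frame judge the other's clocks to run slow by the Lorentz factor γ = 1/√(1 − v²/c²). At v = 0.6c, γ = 1.25, so each group judges the one-hour delay measured on the other ship's clock to last 1.25 hours by its own clock; hence each group ranks "the person comes with us" as the outcome in which the joy starts sooner.)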

    1. Wouldn't the morally relevant reference frame just be whichever one the subject themselves is in, rather than the reference frame of the assessor? So, if the delay will be one hour from the subject's perspective whichever way they go, it seems the correct answer is that it makes no moral difference which group they go with.

