Saturday, November 13, 2010

Brainstorm: (alleged) consequentialist perversity

I'm interested in defending consequentialism against character-based objections, and especially objections which claim that consequentialist agents would be, in a sense, morally perverse. I'll give some examples of the kinds of objections I have in mind here, and invite readers to share any other objections (of this kind) that I've missed.

Is Consequentialism Unfit for Human Agents?
A number of objections arise specifically within the context of our distinctively human psychologies and limited capacities. (So it's less obvious that these objections would apply to an omniscient God.) E.g. objections that consequentialist agents would:
- be constantly calculating
- have "One Thought Too Many" (rather than responding directly to the needs of loved ones)
- be emotionally narcissistic, in virtue of actively "regret[ting] the absence or lack of any and every attainable good."
- be untrustworthy, unstable, and unpredictable, engaging in marginally beneficial rulebreaking
- be incapable of friendships or other commitments

Can you think of any other objections, along similar lines, to add to this list? (I should mention the epistemic objection as applying specifically to non-omniscient agents, but it isn't an objection to the consequentialist's character per se.)

Is Consequentialism Essentially Perverse?
I think it's harder to come up with character-based objections to consequentialism that would apply even to omniscient agents. Worries about 'calculation' or directed attention are obviously inapplicable to unlimited minds that see all and know all right from the start. So the kind of objection I have in mind now is one that sees consequentialist theory as representing an inherently perverse moral perspective. The only example of this that immediately springs to mind is the following:

- the objection that consequentialists would have but a single, overriding desire to maximize the good, and hence a perversely instrumental attitude towards the welfare of individual persons (which is treated as valuable only as a means to the more general goal of promoting aggregate welfare).

Any other examples would be much appreciated!


  1. Looks like the objection you mention includes the Rawlsian objection that consequentialism ignores the "separateness of persons".

    So, perhaps: since consequentialism evaluates actions or rules in terms of their future consequences, it ignores morally relevant backward-looking considerations, such as, e.g., that one has made a promise. According to consequentialist evaluation, past commitments qua past commitments carry no moral weight, while it seems intuitively convincing that they do.

  2. Well, there's nothing barring the consequentialist from adopting an holistic value theory such that the value of some future state of affairs is contingent on the past. (E.g. they could claim that promise-breaking is among the intrinsic bads to be minimized.) But an idea in this vicinity would be that, for agent-neutral consequentialists at least, one's own commitments carry no special weight: you should break your own promise if that'll prevent two other people from each breaking theirs.


  3. I like the objections you mention regarding consequentialism being unfit for human agents! I understand your claim that an omniscient agent would avoid being vulnerable to these, but I'm not sure why "[w]orries about [...] directed attention are obviously inapplicable to unlimited minds that see all and know all right from the start." Do you mean that, were one to know all morally relevant facts about the world, one could direct one's attention to the overall good without at the same time weakening one's care for one's loved ones, or that the latter couldn't object to your (omniscient) actions? The latter interpretation would imply that *every* possible agent (including the loved ones) is omniscient, i.e. that your reasons are transparent to every other person you might care for and who might expect you to care for her in a 'special' way.

  4. Hi Nicolas, I was thinking that insofar as "directed attention" involves limited awareness (we are attending to some features rather than others) this would not even be possible for omniscient agents. They would always be fully aware of all facts and features of the world. (You might say that they can fully attend to everything at once.)

  5. Your response to Nicolas is interesting. Now instead of directed attention, how about directed affection? Would your omniscient agents be capable of loving particular persons? If your answer here is analogous to yours regarding directed attention, then it seems only universal impartial love would be possible for them? I'd like to know what you think.

    Also, and unrelated to the main query of your post, it seems that your strategy for avoiding some objections by appealing to omniscient (and perhaps also omnibenevolent) agents could be vulnerable to the conditional fallacy.

  6. Hi Boram, I don't see any conceptual barrier to an omniscient agent having biased desires, or caring more about some people than others. I'm inclined to think that there's something 'unfitting' or perverse about this, however. (Love is trickier, since you might think it requires some sort of interaction or active relationship. I'm not totally sure about that though.)

    P.S. I don't think that appealing to omniscient agents is enough to avoid the objections mentioned in the post. My strategy instead relies on the idea that if an objection only applies to human-like agents, then I can appeal to our cognitive limitations as part of my response (a strategy illustrated in some of the linked posts). They do still require a response! On the other hand, if an objection (e.g. the value receptacle one) would apply even to omniscient agents, then a different kind of response is in order...

  7. Interesting issue Richard.

    Isn't there a one-thought-too-many style worry that consequentialists don't simply direct their attention toward the wrong things, but rather that they're motivated to act by the wrong things? I'd imagine that the latter issue could still be a problem for omniscient beings.

    (I also wonder whether an omniscient being would really direct their attention at everything at once - must all their beliefs be occurrent beliefs?)

  8. Thanks Richard. But I still don't think your premise is plausible: "directed attention" involves limited awareness - I agree - but it doesn't involve limited (potential) access to anything. But let's add a further condition and grant that, as you suggest, "they can fully attend to everything at once". This does not preclude, as Boram points out, their having desires which are, by definition, biased (an omni-desire would just be the maximal extension of its possible objects). You should agree that it doesn't, since I don't see how an (even indirectly) preference-based consequentialism could avoid making room for biased desires. Their being biased, though not thereby illicit, seems to be the source of their value. Without such desires, there wouldn't be anyone toward whom I should direct my attention in the first place.

    You admit that there can be omniscient agents with biased desires, though you seem reluctant to admit they would be (as required) impartial agents.

    What kind of consequences would one instead consider? Objective benefits not based on preferences or desires? That's still plausible, but I'd like some clarification.

    Furthermore, would living without any special relationship to anything or anyone not make an omniscient agent somewhat perverse? How can you even care for the world if you're not able to care for yourself first?

    If consequentialism requires that agents never let their agent-centered preferences and/or their personal relationships prevail over any global requirement, then it makes an ideal life a peculiar sort of valuable thing! But even if it actually were a desirable life per se, wouldn't it, upon reflection, make your global consequentialism collapse: if there is no room for personal valuation in the first place, where does a global, higher-order valuation stem from?

  9. Alex - is the worry here that consequentialists are allegedly motivated by facts about general welfare, rather than being directly moved by the needs of the person before them? This ends up looking a lot like (one interpretation of) the value receptacle objection. (Not that that's a problem -- it's always nice to be able to assimilate and address several objections at once!) But if you had something different in mind, do say more...

    Nicolas - this might be getting a bit far afield from the original topic, but I do find it plausible that some value would be lost by becoming fully "ideal". (Most obviously, the omniscient agent could no longer engage in the valuable activity of philosophical inquiry!)

    As for your main worry, I don't think that impartiality precludes personal interests or preferences. It just tempers their weight, so that you're no longer inclined to promote these interests of yours at greater cost to another's interests. (I might opt for an objective theory anyhow; but I don't think a society of impartial agents is any problem for preferentist accounts.)

  10. Yes, that's the sort of thing I had in mind. I'd have to think some more about responses to the value receptacle objection to see how well they generalise to this issue.

  11. Kind of an offshoot of the objection from being constantly calculating...

    Couldn't a Consequentialist resort to mindsets of:
    1. "Desperate times call for desperate measures"
    2. "The ends justify the means"

    A person who is a utilitarian and holds to the two mindsets stated above in the situation of a society in complete disarray (starvation, unemployment, hyperinflation, massive debts etc.) could argue that sacrificing and/or enslaving a small segment of the population will result in a higher GDP per capita [for the remaining citizens], lower unemployment, and constrain the starvation to a certain segment of the population.

    He would argue that this would be the best move for the society, especially in the long run because it is the only way to really get out of the complete disarray. He would also argue that this is the greatest good for the society as a whole and therefore the greatest good for the greatest number.


    Side Note: Thanks for running this blog; I just started reading it this week. I am an engineering major, not philosophy, so I struggle to keep up with all the terminology you use, but it is very interesting stuff that I enjoy reading. I love philosophy. I may have some flaws in my logic because of my lack of background in it, though. Thanks again.

