I'm interested in defending consequentialism against character-based objections, especially those that claim consequentialist agents would be, in a sense, morally perverse. I'll give some examples of the kinds of objections I have in mind, and invite readers to share any other objections of this kind that I've missed.
Is Consequentialism Unfit for Human Agents?
A number of objections arise specifically within the context of our distinctively human psychologies and limited capacities. (So it's less obvious that these objections would apply to an omniscient God.) E.g. objections that consequentialist agents would:
- be constantly calculating
- have "One Thought Too Many" (rather than responding directly to the needs of loved ones)
- be emotionally narcissistic, in virtue of actively "regret[ting] the absence or lack of any and every attainable good."
- be untrustworthy, unstable, and unpredictable, engaging in marginally beneficial rulebreaking
- be incapable of friendships or other commitments
Can you think of any other objections, along similar lines, to add to this list? (The epistemic objection also applies specifically to non-omniscient agents, but it isn't an objection to the consequentialist's character per se, so I set it aside here.)
Is Consequentialism Essentially Perverse?
I think it's harder to come up with character-based objections to consequentialism that would apply even to omniscient agents. Worries about 'calculation' or directed attention are obviously inapplicable to unlimited minds that see all and know all right from the start. So the kind of objection I have in mind now is one that sees consequentialist theory as representing an inherently perverse moral perspective. The only example of this that immediately springs to mind is the following:
- the objection that consequentialists would have but a single, overriding desire to maximize the good, and hence a perversely instrumental attitude towards the welfare of individual persons (which is treated as valuable only as a means to the more general goal of promoting aggregate welfare).
Any other examples would be much appreciated!