Sunday, July 19, 2009

Rational and Moral Demands

Quick poll: supposing that morality and practical (all things considered) reasons can come apart, which of the following two claims sounds more plausible?

(1) Morality requires you to sacrifice your loved ones if this would promote the impartial good, but it would be unreasonable to do so. 

(2) Rationality requires you to sacrifice your loved ones if this would promote the impartial good, but it would be immoral to do so.

In other words: Is impartial consequentialism more plausible as a theory of morality or of practical reason? In Commonsense Consequentialism, Doug Portmore suggests that utilitarianism is widely recognized to be 'unreasonably demanding', in the sense that it asks us to do things that we lack sufficient reason to do. So, he suggests, if we come to accept 'moral rationalism' -- the view that we always have decisive reason to do what's morally required -- we will be led to reject utilitarianism.

But if I had to pick one of the above claims as a starting point, I'd sooner endorse (2) than (1). Impartial consequentialists may endorse Parfit's suggestion that the view constitutes an "external rival to morality". At bottom is the idea that there's no principled reason for favouring your own welfare over others', and so even if radical impartiality bears little resemblance to ordinary "moral" thought, such absence of personal bias is nonetheless what's rationally required, strictly speaking (just as we're rationally required not to be temporally biased, e.g. in favour of the near future).

Whilst taking the fundamental normative requirements to be impartial in this way, the utilitarian might follow Railton in constructing a more commonsensical and moderate "practical morality" that people would do well to follow. Given the familiar pragmatic reasons for introducing norms of partiality (an efficient division of moral labour insofar as we tend to be more motivated and able to help those who are closer to us), this constructed "morality" could plausibly allow for more partiality than the fundamental norms. It might even make it obligatory to look out for your family, even when this means passing up apparently greater benefits to others. This shows one route to claim (2) above.

On this view, we have every reason to prefer the impartially best outcome. It's just that we can't call it 'morally obligatory'. More than that: falling short of perfection is not sufficient grounds for social censure, so in this sense it would be unreasonable to demand that people meet the strict requirements of utilitarianism. But take care: it is the third party's demanding that is unreasonable, not the act thereby demanded. It'd be perfectly reasonable for the agent to act with perfect impartiality. It just isn't reasonable for others to ask him to do this -- a vital difference!


  1. It'd be perfectly reasonable for the agent to act with perfect impartiality. It just isn't reasonable for others to ask him to do this...

    There's a case given by Peter Winch (in his book Ethics and Action, in the essay "Moral Integrity"), which has always somewhat puzzled me. Your thoughts above might help.

    The case is of an Amish elder who is in a situation where a real "bad guy" is threatening to shoot one of the girls in the Amish community where the bad guys are hiding out. The elder, clearly conflicted and horrified by what he does, thrusts a pitchfork in the man's back, killing him (and violating his own deepest moral principles).

    Winch says it's clear (the story comes from a 1950s film, Violent Saturday) that the elder judges what he did was wrong, but also that in killing the man he was not--says Winch--exhibiting something like weakness of will.

    The part that always puzzled me is that Winch says the necessity of killing the man to prevent him from shooting the girl flows from the "perspective of the action".

    And this does seem to be a case where it would be "unreasonable to demand" of the Amish elder that he perform a violent act, but--as you suggest--that's a separate issue from the moral status (viz. "perspective") of the action. (There are other puzzling issues here, such as what to say about the elder's judgment that he did something wrong by killing the man. Should we say he is mistaken? Is the answer "yes and no"?)

    I'll leave it to you, if you wish, to sort out what sort of reason the elder has here...good post.

  2. This isn't what you're asking about, but I'm curious what you'd say about this, since you brought up the Railton thing. I just don't see that it would maximize utility to promote anything all that close to a deeply partial commonsense morality. Unless your kids are likely to become super do-gooders, and most people's kids aren't, it just doesn't seem plausible that it would maximize utility to inculcate a morality according to which it is permissible to, say, spend 20k a year to send your kid to a fancy private school rather than saving hundreds of lives over the course of the child's public education by donating the money to effective causes. Perhaps there was a time when this more "commonsense" morality would have been good to inculcate from a utilitarian perspective, but this isn't it. Do you disagree? If so, where do you get off the boat?

    Perhaps the suggestion is that it would backfire to promote such a morality? I don't see why. There are a few things one could have in mind by "it would backfire to promote such a morality". One would be that it would be bad for particular individuals to go around preaching this morality. But that doesn't seem true -- it seems like that is what Peter Singer is doing, and it doesn't seem to be backfiring. Maybe the suggestion is that it would be bad for the public at large to accept a significantly more impartial moral code. I don't see that either. That would probably just lead them to give away money and become vegetarians, which sounds OK to me (you know, behave like philosophers that actually accept utilitarianism). Perhaps it's true that they shouldn't accept a fully impartial moral code, but I just don't see that the code they should accept is anywhere near as partial as ordinary morality.

    As for your question, neither sounds all that good, but (2) sounds better.

  3. I actually hold something pretty close to 1. If you combine Peter Singer's normative ethics with instrumentalism about practical rationality, that's where you go. It helps to accept something like the young Philippa Foot's morality/reasons externalism.

    But I can get why (2) feels right to people.

  4. Matthew - I guess it's an interesting psychological question whether someone might act from a kind of "normative necessity" whilst thinking their action wrong, and if so how we should describe their state of mind. (I guess believers in true moral dilemmas must have something to say here too.) Though objectively speaking, I think it's straightforwardly permissible to kill in defense of others, no less than in self-defense, and may even be obligatory (or "reasonable to demand") in some cases.

    Nick - it probably depends a lot on whether we're assessing the benefit of a code against the actual backdrop of widespread noncompliance, or in the ideal case of total compliance. Liam Murphy, for example, thinks we're only obliged to give away our "fair share" to charity (even if others fail to pull their weight), which wouldn't be so demanding. That seems rather unmotivated to me. But if you did think, for whatever reason, that the ideal case determines such things, then I guess you could justify greater partiality that way.

    But I think I basically agree with you that, in actuality, significantly greater impartial generosity would be desirable. It's not completely obvious that this is the most effective thing to advocate however. Maybe something slightly more modest -- say slight increases in generosity -- would seem more manageable to the target audience, and hence more likely to be followed. But it's definitely an open question.

    We might get a more stable "undemanding" position if we abandon the Railton view in favour of some independent conception of what it's "reasonable to demand". But much more would need to be said there.

    Neil - yup, that's reasonable, and I guess that's the sort of view that Portmore had in mind. I just found his set-up curious since I'm personally more drawn to a very different (more rationalistic) understanding of impartial consequentialism.

    It'll be interesting to hear more responses, to get a better sense of how many people lean one way or the other...

  5. Hi Richard,

    Thanks for the post. First, a small correction. You should have written: "Doug Portmore suggests that utilitarianism is widely recognized to be 'unreasonably demanding', in the sense that it asks us to do things that we lack decisive reason to do." You wrote 'sufficient' in place of 'decisive'.

    Second, I'm with Neil. I don't see why (2) feels right to some people -- perhaps, it feels less implausible than (1) to some people, but that seems beside the point. Of course, I can see how one might be driven to accept (2) given some argument such as Parfit's, but, initially at least, it seems quite implausible. It implies that the agent-relative reasons we have to favor ourselves and our loved ones can never decisively oppose the agent-neutral reason we have to promote the impersonal good. In any case, my point is that many philosophers (utilitarians and non-utilitarians alike) reject (2) and so hold that utilitarianism is unreasonably demanding. These philosophers include, for instance, Peter Singer, Henry Sidgwick, and David Sobel in the utilitarian camp, and Sarah Stroud, Paul Hurley, and myself in the non-utilitarian camp. Interestingly, it seems that Derek Parfit, at least given what he says in this forthcoming book, would reject (2).

  6. "Associate Professor" above is Doug Portmore.

  7. Hi Doug, at the start of chapter 1 you write, "utilitarianism sometimes requires agents to act contrary to what they have decisive reason to do... I reject any moral theory, such as utilitarianism, that requires agents to act contrary to the requirements of reason."

    This seems to imply the stronger claim that I mentioned in my post. You don't just think we lack decisive reason to follow utilitarianism (as might be the case if we had sufficient reason to act either way). You think we sometimes have decisive reason not to act as utilitarianism demands, i.e. we lack sufficient reason to so act. No?

  8. On the other points: I agree it would be odd to think that there are agent-relative reasons but that these can never counterbalance or outweigh agent-neutral reasons. But mightn't one reasonably deny that there are agent-relative reasons at all? (Just as we might deny that there are time-relative reasons. We might think that reason demands a kind of universality. That doesn't seem totally crazy to me.)

    I also wonder about your characterization of what it means to be "unreasonably demanding". There seems an important difference between claim (1) and the ordinary idea of something being unreasonable to demand -- as I try to bring out at the end of my post. Are you meaning to analyze the ordinary notion in a surprising way, so that the two really do coincide after all? Or are you simply using "unreasonably demanding" as a technical term -- mere shorthand for something like (1), and not necessarily connected to the ordinary notion at all?

  9. Richard,

    I'm sorry. I got confused by your different phrasing. But what you say is indeed equivalent to what I said. So what you said in your post was correct. My bad.

    On the other point, I agree that your view isn't totally crazy. I just don't find it at all plausible. If you deny that there are agent-relative reasons, then you just get off the boat right at the start, for one of the working assumptions of the book is that there are agent-relative reasons.

    As to your last question, I'm using 'is unreasonably demanding' as a technical term to mean 'holds that agents are morally required to make sacrifices that they do not have decisive reason to make, all things considered'.

  10. Though objectively speaking, I think it's straightforwardly permissible to kill in defense of others, no less than in self-defense, and may even be obligatory (or "reasonable to demand") in some cases.

    I was talking about what the sincere Amish elder thinks, not what you think...

    But still, I brought up the case because it illustrates a break in his reasons (assuming the elder acts for some reason), and that break could fall along the lines of your (2) above because the case involves: (a) an agent violating his own moral conception in order to (b) perform an act which has some kind of value (else why would he do it) that isn't accounted for by his morality (and which seems morally questionable from within the moral system). Not many of us will blame the elder for violating his moral code because we think non-violence as an absolute ideal is silly or unrealistic, etc. But if we take his moral commitments seriously, that's where I was applying your "unreasonable to demand" thought to the particular case.

    I think it would have been unreasonable for someone who understands the elder to demand that he kill the bad guy.

    I was under the impression that you were trying to find a place for consequentialism outside morality (as per (2)), so I thought the case might be interesting (and not just psychologically) for you.

  11. Just thought I'd say something about why (2), which I reject, might feel right to people.

    The moral emotions associated with sacrificing one's loved ones are likely to be stronger than the emotions associated with promoting the impartial good. If one takes this moral phenomenology at face value, one might think that morality is the force pushing against sacrificing one's loved ones, while some other force is pushing in favour of it.

  12. I'm actually a little sceptical of the example at hand, but I'll abstract from that. There's a lot going on here, and it's hard to disentangle it all. But here are some thoughts:

    To me a "requirement" means "decisive reason", and "unreasonable" means "decisive reason not to". If that is right, then your (1) is strictly incoherent. (Neil would deny the first of these claims, that moral requirements entail decisive reasons.)

    Though it's interesting to note that I would still accept:
    (1*) You morally should sacrifice your loved ones if this would promote the impartial good, but it would be unreasonable to do so.

    Since "should" doesn't have the same force as "requirement", (1*) just says that one can have some moral reason to do one thing, but overall reason to do something else. That seems fine.

    (2), on the other hand, starts with the phrase "Rationality requires", which I can't understand unless it's a pleonasm. If "requirement" means "decisive reason", then "rational requirement" means "decisive rational reason". How is that different from decisive reason?

    If it is a pleonasm, then (2) just says that we have decisive reason to do some things that are immoral. That seems ok, and, in fact, is just what (1*) implies.

    I think Neil is correct to say that a lot hangs on whether we think we always have reasons to do what we morally should do. As I said in comments to another post just now, I take it that this is normally called "moral rationalism", even though it's weaker than the claim that Doug Portmore expresses using those terms. This may be confusing matters.


  13. Hi Alex - note that both (1) and (2) are ways of rejecting [strong] moral rationalism, so they do have that implication in common. The difference is in whether strict impartiality falls on the side of reason or of morality.

    The terminological point doesn't really matter, but I have pretty much the opposite linguistic intuitions. "Requirements" are demands of some or other system of rules, and people can make up rules that aren't genuinely normative (in the reason-giving sense) at all. One can talk of what's "required by law", or "by etiquette", or "by your parents", without presupposing that there's any reason at all -- let alone decisive ones -- to obey any of these demands.

