Sunday, March 12, 2006

Moral Obligation

Susan Wolf gave another interesting talk the other week, this time on the concept of 'moral obligation'. (We may take this as equivalent to notions of what is 'morally required', or what it would be 'morally wrong' not to do.) The most natural way to explicate the idea is in terms of what one has decisive moral reason to do, but Wolf suggested that this doesn't work.

The problem is that we often have decisive moral reason to do things which (seemingly) aren't morally obligatory. Wolf appealed to the example of not driving an SUV. The environmental and safety disadvantages of SUVs count against them, and for typical urban usage there really aren't sufficient favourable reasons to counterbalance these and make driving an SUV around town a reasonable thing to do. Sure, it's not the end of the world, but the moral reasons here do count conclusively against driving an SUV. It's not something a perfectly reasonable agent would do. Now, despite this, we do not typically think people are obligated or required to drive more sensible cars. While their decision may be morally imperfect, it is not "immoral" in the strong sense (which we may align with blameworthiness and social censure). So decisive moral reasons are insufficient to establish moral obligation.

I wonder whether decisive moral reasons are at least necessary for obligations to be incurred. Perhaps this must be so, in order to leave room for supererogatory actions. We can say that an action is supererogatory when the strong moral reasons that count in its favour are nevertheless not decisive (perhaps due to strong prudential disadvantages).

This leaves us with a tripartite moral structure. An agent might reasonably fail to perform supererogatory actions, though it would be very nice if they did manage them. Then there's the important middle layer where one has decisive moral reasons, and so is at least unreasonable (in the weak sense of "less than ideally reasonable") if one fails to act accordingly. And then we have the base level of moral obligation, or what is required to meet minimal standards of moral decency. This reminds me of the "ethical minimalism" Paul Studtmann once argued for back at Canterbury. Though I got the impression that Wolf considers the second level to be more important (and appropriate to aspire to) than the minimalist base.

But I digress. Returning to Wolf's talk: she pointed out that we need to set aside a small subset of the morally desirable actions as 'obligatory' for pragmatic reasons. There are too many morally desirable actions, and we can't expect everyone to satisfy them all. That would make morality too demanding. So it is useful for society to be able to point to a subset of the most important actions and say, "you must at least do those!" It is the binding force of this 'must' which distinguishes moral obligation from the weaker sense of moral desirability in which you ought not to drive an SUV.

The crucial question now arises: how are we to draw this distinction? What makes some morally desirable actions obligatory, and not others? One might initially think to appeal to the 'weightiness' of the moral reasons. (The SUV case seems non-obligatory precisely because it is relatively trivial. The reasons are decisive, but decisively small.) But that won't do, because we can have trivial moral obligations, such as the obligation not to steal a paperclip.

Wolf proposed a modified "social command theory" of obligation, such that X is morally obligatory only if X is commanded by society (and backed by adequate moral reasons). But that strikes me as unacceptably arbitrary, despite the parenthetical constraint. Surely the facts about moral obligation must be determined solely by morally relevant facts, i.e. facts about welfare, not anyone's arbitrary "commands". It also has the odd consequence that we can change what's truly obligatory, simply by influencing opinions or expectations, and hence altering "what society commands".

Besides, it isn't clear that anything else in Wolf's argument leads to this particular theory of obligation. In light of her pragmatic motivations, all she needs is some way or other to draw a distinction. Arbitrariness doesn't matter for her purposes, because she doesn't believe there's any principled basis upon which to draw the distinction in any case! So we might just as well adopt the Coin Theory of Obligation: for any morally desirable act-type A, flip a coin. If the coin lands heads, then A is morally obligatory. Otherwise it is not. (It's no more arbitrary than appealing to societal "commands", after all!)
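
Just to make the arbitrariness vivid, the Coin Theory is simple enough to state as a literal procedure. (A toy Python sketch, purely illustrative; the sample act-type is my own invention, not anything from Wolf's talk.)

```python
import random

# Coin Theory of Obligation: each morally desirable act-type gets a
# single, permanent coin flip. Heads = obligatory, tails = not.
_verdicts = {}

def is_obligatory(act_type):
    if act_type not in _verdicts:
        _verdicts[act_type] = random.choice([True, False])  # the coin flip
    return _verdicts[act_type]

print(is_obligatory("refrain from paperclip theft"))  # True or False, at random
```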

In fact, given the pragmatic motivations, Wolf really should be led to an indirect utilitarian theory of obligation. Since our aim is to draw a distinction which will help promote more moral behaviour in practice, the obvious basis for this distinction is to identify that class of actions which, if recognized as 'moral requirements', will have the morally best consequences. There is certainly some fact of the matter about which such classes would have the best results, and so we have a principled basis for determining (in the metaphysical sense; whether we can know these facts is another question!) which actions are morally obligatory.
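
To put that semi-formally: among candidate classes of act-types, the obligatory class is whichever would have the morally best consequences if society recognized it as required. (A toy sketch under invented assumptions; the candidate classes and their consequence-values are placeholders of mine, not anything Wolf proposed.)

```python
# Indirect utilitarian theory of obligation (toy sketch): pick the class
# of act-types whose recognition as 'morally required' would have the
# morally best consequences. All classes and values are invented.

candidates = [
    frozenset({"keep promises", "don't steal"}),
    frozenset({"keep promises", "don't steal", "never drive an SUV"}),
]

# Hypothetical value of outcomes if society enforces each class:
value_if_enforced = {
    candidates[0]: 100,  # modest demands, widely complied with
    candidates[1]: 60,   # too demanding; compliance and goodwill suffer
}

obligatory_class = max(candidates, key=value_if_enforced.get)
print(sorted(obligatory_class))
```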

8 comments:

  1. I think you got very close to solving the problem and then drifted away from it. The key is

    "That would make morality too demanding."

    The bottom line is that the "we" we are considering is some sort of "civilized society" (defined by the individual, though maybe we can collectively reach some sort of consensus). Now, this "we" doesn't have infinite power: we can't force everyone to behave as we want, and we can't force anyone to behave how we want all the time.
    With the limited power that we have, we can create propaganda to oblige people to do certain things. Where these things are utilitarian, they might add to our legitimacy as people see that they work, but in general it will cost us to apply the social (or physical) pressure needed to encourage compliance.
    "Civilized society" can thus say "theft is immoral", because that is quite simple and intuitive, but have difficulty with "SUVs are immoral". It's hard to say exactly how this will work, but there is no point wasting all your resources making something like "having a girlfriend" immoral at the cost of being able to achieve anything else.
    It's all about the ROI.

  2. Of course that is different from
    A) "what is fundimentally immoral" (the utilitarian approach).
    or
    B) "what tends to be immoral" (the indirect utilitarian approach A)
    or even
    C) "what rules would be good if you could make all the rules" (the indirect utilitarianism aproach B)

    If you appeal to intuition, I suggest intuition at its best answers the question "what rules work" (i.e. at best your intuitions are rules to help you get through life). So you can't expect them to answer any other question, such as those above.

    I am interested in what you think of these distinctions, for example B and C for an indirect utilitarian (or any others that could be made).

  3. Pat - the point of the SUV case is that there are clearly moral reasons which count against driving one. (If Kantianism doesn't recognize this, then Kantianism is thereby falsified.)

    Demandingness is no issue for consequentialism because we can shift the bar by adopting a satisficing account of obligation. Although maximizing is best, it isn't obligatory. What's obligatory is merely that we do "good enough".

    Genius - I think your option C sounds closest to my suggestion. Though actually I'm not sure how this is different from your first comment. Note that it isn't a matter of "what rules would be best if everyone magically followed them". It is rather the more practical question of "what rules it would be best for us to attempt to enforce".

  4. > Genius - I think your option C sounds closest to my suggestion. Though actually I'm not sure how this is different from your first comment. Note that it isn't a matter of "what rules would be best if everyone magically followed them". It is rather the more practical question of "what rules it would be best for us to attempt to enforce".

    You can't make all the rules, and you are playing the game with many other players, so what you should attempt to do is different from what you would ideally wish.

    One example would be that in a board game the objective might be to get to the other side - but your strategy might be to take the other guy's pieces.

  5. I'll try to break it down...
    B differs from A in that we accept that you don't have perfect knowledge of the consequences of your actions, and that things that seem good might be very bad, so we simplify things (you can't control their more obscure consequences).
    C differs from B in that we don't fully understand the consequences of even our simplified rules (and also that people might not obey them).
    (1) differs from C in that we accept that it is not entirely in our power to make the rules (or force obedience)

  6. Genius, isn't your (1) just what my suggestion was, then? To repeat: my suggestion is that we define the morally obligatory in terms of whatever definition would have the best consequences. If it would turn out best for society to try to enforce X as a moral requirement, then this fact makes X morally obligatory.

    Pat - how is it "biased" to recognize environmental damage and the safety of third parties as providing moral reasons? It is only "begging the question" in the sense that it rules out some (patently false) options. But there's not really any serious question here to beg. It's just obvious.

    "it is perfectly coherent to say that the government needs to regulate the coordination problem, but I am not--all things considered--perfectly oblgiated to sacrifice myself."

    Of course. How is that disagreeing with anything I've said?

    "I am more than willing to admit that there might be moral reasons not drive an SUV, what I would deny is that these decisive, especially when put next to the value of autonomy."

    You must have misunderstood what I mean by the term 'decisive'. It simply means that the moral reasons outweigh the other reasons. We may stipulatively build this fact into the described scenario. (Forget the contingent facts. Let us stipulate for sake of argument that SUVs cause greater harm to the environment, and also pose some safety risk to other road users. Let us also assume that they have no real advantages, other than the small amount of frivolous satisfaction the drivers get from being higher up than everyone else. This egoistic benefit is so small that it fails to counterbalance the moral reasons mentioned above. End of story. It follows from this story that the moral reasons are decisive. The best and most reasonable decision is to not buy an SUV.) But we still don't think there's any obligation here.

    "Autonomy" is not a reason for choosing either particular option here. Choosing to buy an SUV doesn't give the agent more autonomy than choosing to buy a more sensible car! Autonomy is only relevant when we consider how others should react to the agent. That is, it might provide us with reasons not to censure the agent, or hold them to a moral requirement here. That is, it touches on the issue of obligation, but not the issue of decisive reason. You shouldn't conflate the two.

    "Why should we adopt a satisficing criterion?"

    Because it makes sense. It sounds eminently plausible to say that although maximizing is best, we are only obligated to do "good enough". I see nothing 'ad hoc' about this. And I leave the "rule-worship" (!?) to deontologists ;-)

    Indeed - I was just noting that one can't expect them to match exactly. I think it would be very easy to slip between definitions and think you were talking about the same thing.

    I also note it is a little dependent on what you consider you have control over (i.e. talking now I have almost no power, but I might theoretically be able to convince most civilized NZers that utilitarianism is good, while having basically no chance of convincing EVERY person - that will define what my "civilized society" is, and thus who I am speaking to and asking action of).

    > It sounds eminently plausible to say that although maximizing is best, we are only obligated to do "good enough".

    Maybe you mean it is non-maximizing to try to enforce every rule?
    Or maybe that the cost to freedom exceeds the benefit of the rules?

    It seems pretty easy to produce that sort of argument, and it's pretty logical.

  8. Not so. The egoistic reasons discussed are agent-relative ones. The point is simply to find a case where they are outweighed by the moral (agent-neutral) ones. If you don't find the SUV case to be a convincing example of this, substitute some other scenario. Or suppose there are not even any egoistic reasons for the agent to want an SUV. That is, suppose there is nothing at all to be said in favour of buying an SUV. Then the moral reasons against it are certainly decisive, since they face no opposition whatsoever. But we still wouldn't consider it a matter of moral obligation (would we?). So decisive moral reasons do not entail moral obligation.

