Monday, June 20, 2005

Consistency and Utilitarianism

Consistency requires that we make the same moral judgments in situations that are relevantly similar. The Golden Rule reflects this requirement of moral rationality. If I hold that it is morally permissible for me to break a promise to a friend when it is to my advantage to do so, I must likewise affirm that it would be permissible for my friend to do the same were our respective positions reversed. Consistency thus requires that we put ourselves in the positions of everyone affected by our actions, and draw conclusions that can be endorsed from this "collective" point of view.

From this viewpoint, it becomes difficult to deny the moral importance of any other person or group of persons, for this is not a denial that one could endorse from the targeted position itself. For example, the consistent white supremacist must hold that, if (contrary to fact) he were black, then he ought to be discriminated against. But it seems unlikely that a rational person could truly endorse this conclusion. They would not want to be harmed by others if they were black. This casts doubt on whether they can consistently endorse racist behaviour in the form of a universal moral prescription.

It will often happen that no single conclusion is deemed acceptable from every possible viewpoint. We thus require some way to adjudicate between conflicts of interest, from the moral point of view, in a manner that meets the requirements of consistency. Hare suggests that we treat the conflicts between people as we would a conflict within a person, allowing trade-offs between costs and benefits so as to reach the optimum result for the collective. In other words, consistency leads to utilitarianism.

But is it really true that consistency alone forces us to accept utilitarianism? There are three major arguments against this conclusion. We might deny that consistency requires us to adopt a universal or collective viewpoint. But even if we accept Hare's argument that consistency requires us to imagine ourselves in the position of others, it isn't clear that his "maximizing" method is the appropriate way to resolve conflicting preferences. We might think that such a method overlooks the distinction between persons. Or we might deny that "all preferences are created equal" -- that is, we might hold that there are moral facts that exist independently of human desires, such that not all preferences contribute equally towards the moral verdict. This final objection rests on assumptions that a non-cognitivist would not be willing to grant. McNaughton (pp.169-170) tries to bolster the intuitive appeal of this objection by examining Socrates' decision to take the hemlock against his friends' wishes:

It looks as though his action will not maximize the satisfaction of preferences. He is satisfied that, having been sentenced to death by a properly constituted court, he is required to accept the verdict. Moreover, he believes that death is not fearful, and that his friends' distress at the thought of his death is due to their failure, exacerbated by their grief, to see the situation aright. He admits that, if he were in their position, he would not endorse the decision to take the hemlock. But why should this realization affect his present moral judgement when he believes that their opposition is due to an inadequate appreciation of the situation? He is quite consistent in sticking to his original decision... If, after putting myself in the other people's position, I remain convinced by the reasoning that led me to believe that the action was right in the first place, then I need not withdraw it.
There are two responses the utilitarian can make here. Firstly, he can grant that misinformed preferences need not contribute towards utility. If the friends' preferences are due to an intrinsic desire that Socrates not be harmed, and the mistaken belief that death harms him, then the utilitarian can grant that utility is in fact best served by allowing Socrates to take the hemlock. If Socrates is truly not harmed by it, then the friends' intrinsic desires are not thwarted by this course of action after all. So the utilitarian can take someone's "inadequate appreciation" of the non-moral facts, at least, into account.

Moreover, the non-cognitivist will simply deny that there are any desire-independent moral facts for us to be mistaken about. Socrates' moral conviction is simply one preference among many, and he cannot presuppose it to be well-grounded, for that is precisely what is at issue here. As Hare writes, "To insist on the prior authority of the moral intuitions that one starts with is simply to refuse to think critically." In questioning the validity of our moral intuitions, we must be prepared to go beyond them.

The second anti-utilitarian argument mentioned above is the "separateness of persons" objection, which I have discussed before.

The first argument suggests that the egoist need not be inconsistent. This poses a greater challenge to the non-cognitivist utilitarian, for they recognize no independent moral facts which may be pointed to in order to rebut the egoist, and it is not obvious that the egoist exhibits any internal inconsistency.

But let us distinguish two forms of egoism. The 'ethical egoist' holds that each individual ought to pursue their own self-interest. But it would seem inconsistent for the egoist to universally endorse other people's selfishness, as their selfishness would be to his own detriment. Alternatively, the 'personal egoist' holds that everybody ought to promote his interests, regardless of their own. Of course, this judgment could never survive universalization -- he would not be willing to accept it were he anybody else -- but the egoist might simply hold this as a personal preference, and refuse to make any universal claims at all. That is, he could become an 'amoralist'. I have previously argued that even the amoralist may be criticized for inconsistency, but that need not concern us here. We may merely conclude that, if he adopts any moral viewpoint at all, consistency will lead the non-cognitivist to utilitarianism.

Or will it? We have so far been applying the test of consistency only at the 'formal' level, of mediating between conflicting desires. But it might also be used to yield substantive judgments about which intrinsic desires are more rationally supported than others. Suppose we live in a society full of racists. We have seen that consistency would prevent us from being racist ourselves - we would give the preferences of black people equal weight to those of whites. But what if all the white racists prefer to see the black man suffer? The collective weight of their preferences might outweigh his lone opposition. If we merely consider ourselves in the position of each, the most preferences will be satisfied by endorsing racist behaviour. But suppose that we instead consider what it would be consistent to prefer from the position of each. We would (seemingly) then have to disregard the racists' preferences, for they are supposedly inconsistent.

But this argument goes too fast. We have in fact only established that racist moral judgments are inconsistent, as they cannot be universalized. But personal preferences need not be universalizable: I can prefer butter to margarine without thereby committing myself to the universal judgment that everyone ought to do likewise. Similarly, the racist might prefer to see black people worse off, despite recognizing that he could not universalize this into a moral judgment.

What the anti-utilitarian requires is some grounds for criticizing the consistency of personal intrinsic preferences. The non-cognitivist will refuse any move to appeal to desire-independent moral facts, for he denies the existence of such metaphysically "queer" entities (as Mackie would put it). But the non-cognitivist might follow Michael Smith in adopting a richer conception of rationality that goes beyond mere means-ends reasoning, instead allowing a desire-set to be rationally assessed on grounds of unity and internal coherence. It may be that a set of specific desires (e.g. for the good of certain people) could be better explained and justified through the addition of a more general desire (e.g. for the good of all persons). This conception of rationality enables us to rationally criticize the arbitrary distinctions drawn by racists and other bigots. We might then conclude that their preferences ought to carry less moral weight, to the degree that their desire sets are not maximally coherent.

The utilitarian might grant all this, but simply redefine his notion of utility such that it comprises the satisfaction of rational desires. This would yield a theory quite different from how utilitarianism has traditionally been conceived, but it might better capture the fundamental utilitarian ideal of treating everyone with equal concern, as it could prevent selfish or bigoted preferences from justifying the worse treatment of unpopular individuals or groups. We thus find that the requirement of consistency can have a great impact on our moral reasoning, forcing non-cognitivists towards some form of utilitarianism.


  1. Consistency/hypocrisy is heavily dependent on what you hold constant and what you consider to be the variable in question.
    You could take almost any two situations where there is difference and similarity (basically any situation) and declare hypocrisy or consistency.

    Ever since I came on the internet, I have engaged hundreds of people who have accused others of hypocrisy while either being hypocritical themselves, or proposing policies doomed to failure due to ridiculous attempts at consistency.

    Treating others as you would treat yourself, or any other form of consistency, may be good, but it is far from the ultimate priority, and it will constantly conflict with utilitarianism.

    (Just a simple example - if you were Hitler, how would you want to have been treated?)

    I can think of many international events - in particular Tibet, Chechnya, Iraq, and Rwanda (in fact almost any major event) - where if you applied a simplistic "consistent" policy, it would (or did) cause a disaster.

    Anyway as to the specifics.
    > But it seems unlikely that a rational person could truly endorse this conclusion. They would not want to be harmed by others if they were black.

    Your argument assumes selfishness - something that contradicts your theory as well. Most utilitarians would be unhappy to sacrifice themselves for the greater good if they were the one who had to be sacrificed, but would not be very troubled by others being sacrificed.

    Similarly, the argument "if you were black" may to them represent a fundamental change, rather like "if you were a plant". Your argument for consistency would thus go: "if you were a plant you would not want to be eaten, therefore we cannot eat plants". Or the previous "if I was Hitler" argument. Your defence would almost certainly be "but if I was Hitler I wouldn't really be like Hitler", but that is nonsense - after all, the initial proposal was that you WERE Hitler. And a racist could easily say "if I was a black man I would not really be a black man" in the same sense.

    I am not trying to defend their position, just to demonstrate that your method of attacking it is either flawed or at least not the moral rock you think it is. Hey, someone has to be the devil's advocate, otherwise we will all be lazy philosophers!

    By the way, if you were to read racist literature (I stress - don't bother), you would probably find that the distinction is not entirely arbitrary; there is some logic beyond "their skin is black", as unpleasant as we may find that logic.

    >Firstly, he can grant that misinformed preferences need not contribute towards utility.

    For a policy to be a correct policy, it need only be right more often than wrong (assuming equal cost). Socrates may be wrong, or all his friends may be wrong. It is possible to tell who is more reliable on a case-by-case basis - from Socrates' point of view, the friends' point of view, or the government's point of view. Having said that, it is likely that none of them will act in an intelligent utilitarian way (by which I mean taking into account the probability of being wrong), but then again it is also highly unlikely any of them will act as utilitarians at all, even if they call themselves utilitarian (which they probably don't).

  2. Note that my argument is directed at people who may not be utilitarians themselves. The argument aims to show that moral consistency will require them to become utilitarians.

    When we imagine ourselves in others' (e.g. Hitler's) shoes, the point is not just to do whatever they would want. Rather, we take their interests into consideration, just as we do everyone else's interests. We give them all equal weight and then balance costs against benefits to yield the best result for everyone overall. In other words: utilitarianism.

    "Your argument assumes selfishness."

    No, just self-interest. But everybody is self-interested (I can barely imagine a human being who cared nothing at all for their own wellbeing), so that's hardly a problematic assumption.

    "your argument for consistancy would thus go "if you were a plant you would not want to be eaten therefore we cannot eat plants"."

    Plants don't have minds, and so cannot "want" anything. There's no threat to consistency here. Though it might lead to vegetarianism, if farm animals are sufficiently conscious and dislike the way we treat them. Many utilitarians would endorse this reasoning, though - e.g. Peter Singer.

    So I don't think the issues you raise are problematic.

  3. It would seem you are taking the "golden rule" and defining it in a utilitarian manner, then adding a number of assumptions to it, such as "the object must have a mind" and by implication "being more intelligent is important" and so forth, while somehow still including your interests as well as the "new you"'s interests and everyone else's.
    I am not sure others would define it exactly like you have, and so I am not sure you can say what their conclusions would be.

    For example, when dealing with a criminal, I might say "he should be beaten into shape" because "if I was wrong I would want to be made to see the light". Or a masochist might say "if I was him I would want to be flogged", for a more comical example. This method, of course, holds your desires constant and transplants them into their situation.

    Another method might say that if you were them, you would only care about their interests.
    Or another method might say that if you were them, you would just take their position into consideration and still sacrifice them for the greater good - something like "no genetically handicapped people means no genetically handicapped children" (to use a Nazi argument, just for comparison).

    To get back to the plant argument: a racist would see a black man as a sort of animal. They would say something like "we are evolved to be smarter (or better in some regard)" - the same logic we use to say we are above animals and animals are above plants, etc. So just as you dismiss the applicability of the plant version of the golden rule, and largely dismiss the animal version, they would dismiss the black version. Some other groups might define it even more widely.

    The most obvious examples of "hypocrisy" are where a person behaves one way towards, let's say, a man on the street punching a guy, and another way towards his twin brother who has a gun (and is, let's say, hitting a guy with it). Demanding consistency of action is just stupid, and in the end no situation is immune to the "but it is different" argument.

  4. I'm not really starting from the golden rule at all - that was just an example. My starting point is consistency. I'm not sure what alternative definitions of this concept anyone could offer that would be remotely plausible.

    A person's desires are part of their "situation", so your suggestions won't work. (If you were them, you would want whatever they do. Otherwise you wouldn't really be them, would you!) If someone has different desires, then that has changed the situation in a morally relevant way, such that our moral judgments perhaps ought to change too.

    "Demanding consistency of action is just stupid"

    You've badly misunderstood the argument. Here's a recap:

    According to the prescriptivist, to say "You ought to X" is to recommend that X be done by anyone in your situation / circumstances. Moral claims are universal prescriptions.

    Consistency requires us to hold our moral judgments constant across relevantly similar circumstances. So if everything else stays the same except that you and I have switched places (including that I have your preferences, etc., and you have mine), then my recommendation must extend to that situation as well. I must hold that, in that situation, I ought to X.

    We extend this imagined "trading places" to everybody else affected, one at a time, and think about how they are impacted by our choices. Our moral judgment is universal, so it must apply to all those situations without exception. So I'm going to want to choose the judgment which produces the best overall result for all of the different "me's" that I imagined. That is, utilitarianism.

    So, you see, my argument really has nothing to do with consistency of action in everyday life (where any two situations are almost always relevantly different). It's about consistency of judgment against imagined hypotheticals, and how this leads one towards utilitarianism.

    I hope that makes more sense now.
