Wednesday, March 24, 2010

The Limits of Moral Theory

Many Consequentialists would like to identify the moral theory with the following property: whichever people successfully satisfy the theory's requirements (on any given occasion) thereby bring about the best outcome within their collective power -- in particular, they could not have done better by doing other than what the moral theory required. This sounds like a reasonable aspiration for Consequentialists. After all, this isn't the over-optimistic claim that mere widespread belief or even attempts to follow the theory must lead to good outcomes -- that's obviously an open empirical question, since mistakes may be made. Instead, this is the claim that the people who succeed in meeting their objective obligations under the theory -- who really do exactly what the theory implies should be done -- thereby (collectively) bring about as good an outcome as possible. That sounds like a property that a consequentialist moral theory should, ideally, possess. Curiously, though, it turns out to be impossible. No moral theory is such that successfully satisfying it guarantees the best possible outcomes. Let me explain why.

The proof takes the form of a dilemma. Either the theory builds into its requirements a particular decision procedure, or it does not (it merely requires certain actions to be performed, without regard for the process of reasoning that led to action). If it does not, i.e. if it is a traditional, 'exclusively act-oriented' theory, then satisfaction of the theory is compatible with suboptimal results in coordination games, as explained in the previous post.

What if the theory, like Regan's Cooperative Utilitarianism, does require a certain decision procedure? In that case, as Regan himself acknowledges, there are two ways that satisfying the theory could lead to worse results. Most obviously, the decision procedure might be directly costly in certain (e.g. time-sensitive) situations. (Neil raised this worry in the comments.) The other possibility is that the decision procedure might have indirect costs -- suppose, for example, that an evil demon threatens to torture you if you use the specified decision procedure.

We thus find that, although satisfying CU ensures that you select the best of the available options (at least, those still available by the time you finish deliberating), it might have the prior effect of shaping your available options in undesirable ways. (In the demon case, the co-operative utilitarian is effectively choosing between two options plus some guaranteed torture, rather than just choosing between the two options as presented to other, non-CU agents.)

So, whatever form the moral theory takes, there's no guarantee that by successfully doing what the moral theory requires of them, agents will thereby bring about the best (collectively possible) results. It's always possible to rightly do badly. A very curious result!

9 comments:

  1. Here's a theory that seems to escape the dilemma. We're given a choice situation CS with (i) a finite number of people, (ii) a finite number of options, (iii) a mapping from everyone's actions to outcomes, and (iv) a betterness ordering over outcomes.

    Setup: First, we identify all possible collective sets of actions, using (i) and (ii). Next, we use these collective sets of actions to generate a list of outcomes, using (iii). Next, we identify the best outcome on this list, using (iv). If there is a tie, we arbitrarily choose one, and arbitrarily choose a set of collective actions that leads to it. Call the resulting set of collective actions “The Chosen Plan for CS”. (A code sketch follows at the end of this comment.)

    Repeat this process for all possible choice situations. Take the resulting mapping from choice situations to Chosen Plans.

    Theory T: for any choice situation, each individual ought to do the action in The Chosen Plan for that choice situation.

    Claim: by the construction of The Chosen Plans, if each individual performs the action in The Chosen Plan for a given choice situation, there would be no other available set of collective actions which would have better consequences in that choice situation. Thus, if everyone acted in accordance with T, it would have the best consequences, for any (finite) choice situation.

    Clearly, this doesn't work for some infinite choice situations, such as those with infinitely many outcomes. But I take it those choice situations aren't under consideration right now.

    This theory is not plausible, but it does have the property you claimed no moral theory could have, I think.

    (It isn't a decision procedure. I was just describing a method for constructing The Chosen Plans.)
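
    In code, the Setup amounts to a brute-force search over joint action profiles. A minimal Python sketch, purely illustrative -- the names are mine, and I've plugged in the Whiff and Poof payoffs (both push: 10, neither: 6, exactly one: 0) as a sample choice situation:

    ```python
    from itertools import product

    def chosen_plan(actions_per_person, outcome_of, better_than):
        """Brute-force Setup: enumerate every collective set of actions
        (ingredients (i) and (ii)), map each to an outcome via (iii), and
        keep one whose outcome nothing beats under (iv). Ties are broken
        arbitrarily, here by enumeration order."""
        best_profile, best_outcome = None, None
        for profile in product(*actions_per_person):
            outcome = outcome_of(profile)
            if best_outcome is None or better_than(outcome, best_outcome):
                best_profile, best_outcome = profile, outcome
        return best_profile  # The Chosen Plan for this choice situation

    # Sample choice situation: Whiff and Poof.
    payoff = {('push', 'push'): 10, ('not', 'not'): 6,
              ('push', 'not'): 0, ('not', 'push'): 0}
    plan = chosen_plan([('push', 'not'), ('push', 'not')],
                       payoff.__getitem__, lambda a, b: a > b)
    print(plan)  # ('push', 'push')
    ```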

  2. This result does not seem particularly curious. If there is imperfect information and the cost of acquiring information exceeds its expected benefit, then obviously there will be an efficient number of mistakes.

    I would further contest the Whiff-Poof example on the grounds that the two are looking at payoffs but not probabilities. Taking the expected value solves the problem there.
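
    (To make the suggestion concrete -- my numbers, with the usual payoffs of 10 for both pushing, 6 for neither, and 0 for exactly one -- if p is the probability that the other agent pushes, pushing maximizes expected value iff 10p > 6(1-p), i.e. p > 3/8:)

    ```python
    # Expected value of each act when the other agent pushes with
    # probability p (payoffs: both push 10, neither 6, exactly one 0):
    def ev_push(p): return 10 * p
    def ev_not(p):  return 6 * (1 - p)

    print(ev_push(0.5) > ev_not(0.5))    # True: push when p > 3/8
    print(ev_push(0.25) > ev_not(0.25))  # False: don't push when p < 3/8
    ```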

    Nick: My concern in your example is that you (as an actor in the game) cannot assume that all other actors have the same moral theory or that they would follow through with it if they did. Certainly, given zero negotiating costs, all such games are solvable -- that is Coase's Theorem. The challenge is when there are barriers to negotiating/coordinating.

  3. Richard, either I'm missing something here or this is far from a proof.

    "The proof takes the form of a dilemma. Either the theory builds into its requirements a particular decision procedure, or it does not [...] If it does not, i.e. if it is a traditional, 'exclusively act-oriented' theory, then satisfaction of the theory is compatible with suboptimal results in coordination games"

    Why think that the only two alternatives are classic act-consequentialism or a form of consequentialism with a built in decision procedure? Might there not be alternative ways to solve simple coordination problems?

    (I believe that Chris Woodard has just such an alternative solution. See his paper on Pedro, amongst others (and a book!), here: http://sites.google.com/site/patternreasons/papers)

    Alex

  4. (Actually, now that I think about it, Chris' theory may not fulfil your original criterion that the theory should tell us that we ought to always bring about the best outcome. Still, the logical point stands that you haven't proved that no-one *could* have a theory that did so and still solved simple coordination problems.)

  5. Alex - 'exclusively act-oriented' doesn't mean only classical act-consequentialism. The term instead describes any moral theory which can be satisfied by performing a certain act (i.e. any moral theory which doesn't place any demands on the decision procedure that goes into producing the act). Many forms of (e.g.) rule consequentialism are also 'exclusively act-oriented' in this sense. (All except the ones that fit in the other category, namely of requiring that you actually follow the associated decision procedure.)

    [It's true that here I've merely sketched the outlines of the proof, rather than filling in all the details. You can read Regan for the latter.]

    Nick - that doesn't work. Try applying it to the Whiff and Poof case. It looks like the Chosen Plan will be to push. But if Poof doesn't push, then Whiff's following the Chosen Plan will make things worse. We want our perfect consequentialist theory to be such that the satisfiers of the theory do as well as possible, given the behaviour of non-satisfiers, however many satisfiers there may (or may not) be. As such, the theory needs to be sensitive to what other agents are doing.
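
    To spell that out with the usual payoffs (both push: 10, neither: 6, exactly one: 0 -- numbers for illustration only):

    ```python
    # The Chosen Plan says push, since (push, push) -> 10 is the best
    # outcome. But conditional on Poof not pushing, Whiff does worse by
    # following the Plan than by adapting:
    payoff = {('push', 'push'): 10, ('not', 'not'): 6,
              ('push', 'not'): 0, ('not', 'push'): 0}
    print(payoff[('push', 'not')])  # 0 -- Whiff sticks to the Plan
    print(payoff[('not', 'not')])   # 6 -- Whiff adapts to Poof's behaviour
    ```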

    David - you're confusing the surprising claim with the unsurprising claim (distinguished in my intro) that reasonably mistaken attempts to follow the theory may not work out for the best. The surprising claim is that successfully doing what the theory requires, with no mistakes, may be bad.

  6. I guess I misunderstood you when you wrote "So, whatever form the moral theory takes, there's no guarantee that by successfully doing what the moral theory requires of them, agents will thereby bring about the best (collectively possible) results. It's always possible to rightly do badly." I took you to be claiming that there is no moral theory such that if everyone does what is required by the theory, an optimific outcome obtains. I constructed a theory with this property.

    But you wanted a theory which is such that if every _cooperator_ does what is required by the moral theory, an optimific outcome obtains. (This was actually clear from your 1st paragraph.) So you want a theory such that if every co-operator does as he ought to, then of all the actions the co-operators could have collectively taken, no other set of collective actions would have been better. A small tweak of T will give you this.

    Modify T as follows. Add two ingredients: (v) a list of cooperators and (vi) a function that takes the actions of co-operators and returns the actions of non-cooperators.

    New Setup: First, we identify all possible collective sets of actions of the co-operators, using (i), (ii), and (v). Next, we use these collective sets of actions of co-operators to generate a list of everyone's actions, including the non-cooperators (using (vi)). Using (iii), we associate each total collective action set with an outcome. Next, we identify the best outcome on this list, using (iv). If there is a tie, we arbitrarily choose one, and arbitrarily choose a set of collective actions that leads to it. Call the resulting set of collective actions “The Chosen Plan for CS”. (Again, a code sketch follows at the end of this comment.)

    Repeat this process for all possible choice situations. Take the resulting mapping from choice situations to Chosen Plans.

    Theory T': for any choice situation, each cooperator ought to do the action in The Chosen Plan for that choice situation. Each non-cooperator may do as he pleases.

    New Claim: by the construction of The Chosen Plans, if each cooperator performs the action in The Chosen Plan for a given choice situation, there would be no other available set of collective actions by cooperators which would have better consequences in that choice situation. Thus, for any (finite) choice situation, if every cooperator acted in accordance with T', no other set of collective actions by cooperators would be better.
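
    In code, the only change from my earlier sketch is that we enumerate the co-operators' profiles and fill in everyone else via (vi). Same caveats as before: the names and the shape of (vi) are mine:

    ```python
    from itertools import product

    def chosen_plan_T_prime(coop_actions, noncoop_response,
                            outcome_of, better_than):
        """New Setup: enumerate the co-operators' collective action sets
        ((i), (ii), (v)), extend each by the non-cooperators' acts via
        (vi), evaluate via (iii), and keep a co-operator profile whose
        outcome nothing beats under (iv). Ties broken arbitrarily."""
        best_profile, best_outcome = None, None
        for coop_profile in product(*coop_actions):
            total = coop_profile + noncoop_response(coop_profile)  # (vi)
            outcome = outcome_of(total)                            # (iii)
            if best_outcome is None or better_than(outcome, best_outcome):
                best_profile, best_outcome = coop_profile, outcome
        return best_profile

    # If Whiff is the lone cooperator and (vi) predicts Poof won't push:
    payoff = {('push', 'push'): 10, ('not', 'not'): 6,
              ('push', 'not'): 0, ('not', 'push'): 0}
    plan = chosen_plan_T_prime([('push', 'not')], lambda _: ('not',),
                               payoff.__getitem__, lambda a, b: a > b)
    print(plan)  # ('not',) -- the lone cooperator is told not to push
    ```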

  7. Nick - interesting suggestion! My worry would be that your 'list of cooperators' (i.e. agents who satisfy the theory) is ill-defined.

    To illustrate: suppose both Whiff and Poof not-push. Did both, neither, or just one of them satisfy T'? Not both, presumably, since they brought about a collectively suboptimal result. So at least one didn't -- without loss of generality, say Poof. Now let's assess Whiff in light of this. Given that there are no other cooperators, Whiff performs the action required of him by T'. That is, Whiff satisfies T'. But the situations of Whiff and Poof are (by hypothesis) exactly symmetrical. Given that Whiff doesn't push, Poof's not-pushing likewise satisfies T'. So both satisfy T'. Contradiction.
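
    The same point can be run mechanically. A toy check (my framing: I hold non-satisfiers fixed at their actual acts, which here plays the role of your (vi)): read 'cooperator' as 'agent who satisfies T'' and test every hypothesis about who the cooperators are. No hypothesis is self-consistent:

    ```python
    from itertools import product

    # Both agents in fact chose 'not'; usual payoffs.
    PAYOFF = {('push', 'push'): 10, ('not', 'not'): 6,
              ('push', 'not'): 0, ('not', 'push'): 0}
    AGENTS = ('Whiff', 'Poof')
    ACTUAL = {'Whiff': 'not', 'Poof': 'not'}

    def required_acts(coops):
        """The Chosen Plan for a hypothesised cooperator set, holding
        non-cooperators fixed at their actual acts."""
        if not coops:
            return {}
        candidates = []
        for acts in product(('push', 'not'), repeat=len(coops)):
            total = dict(ACTUAL, **dict(zip(coops, acts)))
            candidates.append((PAYOFF[(total['Whiff'], total['Poof'])],
                               dict(zip(coops, acts))))
        return max(candidates, key=lambda c: c[0])[1]

    def satisfiers(coops):
        """Cooperators satisfy T' iff they match the Plan; non-cooperators
        'may do as they please', so they satisfy trivially."""
        req = required_acts(coops)
        return tuple(a for a in AGENTS
                     if a not in coops or ACTUAL[a] == req[a])

    for h in [(), ('Whiff',), ('Poof',), ('Whiff', 'Poof')]:
        verdict = 'consistent' if satisfiers(h) == h else 'inconsistent'
        print(h, '->', satisfiers(h), verdict)
    # Every hypothesis is inconsistent: 'cooperator = T'-satisfier' has
    # no fixed point in this situation.
    ```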

    (Regan's CU gets around this problem by offering an independent specification of who counts as a 'cooperator' or not, depending on the decision procedure they follow.)

  8. I"m not sure what a list of cooperators is either. I was assuming we already had some non-defective notion in mind. Your argument suggests that doing as T' requires should not be a necessary and sufficient condition for being a cooperator. (For one thing, it would make my analysis circular.) But it would seem that there are some views about what it takes to be a cooperator that would satisfy T'. For example, maybe having internalized T' is what it takes to be a cooperator. Under this definition, whenever the cooperators follow T', they did the best they could have done. (It is worth repeating that T' is not plausible.)

    If the notion of being a cooperator is somehow defective, then, indeed, we shouldn't believe that cooperators can always expect to do the best they could have done. However, it doesn't seem that your argument above is essentially aimed at attacking the notion of being a cooperator. So if the defectiveness of the notion of 'cooperator' is essential to your claim that no theory has your desired property, it would seem a new argument is needed.

  9. Like I said, there are non-defective ways to specify the 'cooperators', but it requires going beyond an 'exclusively act-oriented theory', and instead requiring that agents have a particular psychology, as per CU. A theory like CU can secure the property of adaptability -- i.e., the agents who satisfy CU thereby collectively choose the best option available to them. So they achieve "the best they could have done" given what options were available to them.

    The problem then, as explained in the original post, is that an evil demon might punish those who possess the specified psychology. The fact that they follow CU (or internalize T', or whatever...) leads to a worse outcome -- they choose the best of their options, but their option set is worse than it could have been if they had not followed the theory.

    (Note that you can't have an exclusively act-oriented theory T' based on a list of 'cooperators' that is determined in a way that isn't exclusively act-oriented, because it is indeed required that all and only cooperators satisfy the theory. [Regan dedicates a chapter of his book to showing how to secure this in a non-circular way, but the details are too complex for me to go into here.] Otherwise choosing an optimal plan for cooperators wouldn't be the same thing as choosing an optimal plan for the theory-satisfiers.)

