Tuesday, March 23, 2010

Co-operative Utilitarianism

Donald Regan's masterful Utilitarianism and Cooperation raises a problem for traditional consequentialist views, which conceive of agents as choosing between external options like 'push' or 'not-push' (options that are specifiable independently of the motive from which they are performed). He proves that no such traditional theory T is adaptable, in the sense that "the agents who satisfy T, whoever and however numerous they may be, are guaranteed to produce the best consequences possible [from among their options] as a group, given the behaviour of everyone else." (p.6) It's easy to see that various forms of rule or collective consequentialism fail when you're the only agent satisfying the theory -- doing what would be best if everyone played their part is not necessarily to do what's actually best. What's more interesting is that even Act Utilitarianism can fail to beat co-ordination problems like the following:

                    Poof: push    Poof: not-push
Whiff: push             10              0
Whiff: not-push          0              6

Here the best result is obviously for Whiff and Poof to both push. But this isn't guaranteed by the mere fact that each agent does as AU says they ought. Why not? Well, what each ought to do depends on what the other does. If Poof doesn't push then neither should Whiff (that way he can at least secure 6 utils, which is better than 0). And vice versa. So, if Whiff and Poof both happen to not-push, then both have satisfied AU. Each, considered individually, has picked the best option available. But clearly this is insufficient: the two of them together have fallen into a bad equilibrium point, and hence not done as well as they (collectively) could have.
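To see the structure plainly, here's a minimal sketch (my own illustration in Python, not anything from Regan's book) that checks, for each pair of acts, whether both agents satisfy AU -- that is, whether each agent's act is best holding the other's act fixed:

```python
# Payoff matrix from the Whiff and Poof example above.
PAYOFFS = {
    ("push", "push"): 10,
    ("push", "not-push"): 0,
    ("not-push", "push"): 0,
    ("not-push", "not-push"): 6,
}
ACTS = ("push", "not-push")

def satisfies_au(whiff, poof):
    """True iff each agent's act is best, holding the other's act fixed."""
    whiff_ok = all(PAYOFFS[(whiff, poof)] >= PAYOFFS[(alt, poof)] for alt in ACTS)
    poof_ok = all(PAYOFFS[(whiff, poof)] >= PAYOFFS[(whiff, alt)] for alt in ACTS)
    return whiff_ok and poof_ok

for w in ACTS:
    for p in ACTS:
        if satisfies_au(w, p):
            print((w, p), "->", PAYOFFS[(w, p)])
# Output: both ('push', 'push') -> 10 and ('not-push', 'not-push') -> 6
# satisfy AU; the theory does nothing to rule out the bad equilibrium.
```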

Regan's solution is to build a certain decision procedure into the objective requirements of the theory:
The basic idea [of Cooperative Utilitarianism] is that each agent should proceed in two steps: First he should identify the other agents who are willing and able to co-operate in the production of the best possible consequences. Then he should do his part in the best plan of behaviour for the group consisting of himself and the others so identified, in view of the behaviour of non-members of the group. (p.x)

To illustrate: suppose Poof is a non-cooperator, and so decides on outside grounds to not-push. Then Whiff should (i) determine that Poof is not available to cooperate, and hence (ii) make the best of a bad situation by likewise not-pushing. In this case, only Whiff satisfies CU, and hence the agents who satisfy the theory (namely, Whiff alone) collectively achieve the best results available to them in the circumstances.

If both agents satisfied the theory, then they would first recognize the other as a cooperator, and then each would push, as that is what is required for them to "do their part" to achieve the best outcome available to the actual cooperators.
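For concreteness, here's a rough Python rendering of the two-step procedure (again my own sketch, not Regan's formalism). It reuses PAYOFFS and ACTS from the block above, and brute-forces the "best plan of behaviour for the group" while holding non-members' acts fixed:

```python
from itertools import product

def best_plan_for(group, nonmember_acts):
    """Brute-force the best joint plan for `group`, holding the acts
    of non-members fixed (uses PAYOFFS and ACTS from the sketch above)."""
    best_plan, best_u = None, float("-inf")
    for combo in product(ACTS, repeat=len(group)):
        profile = {**dict(zip(group, combo)), **nonmember_acts}
        u = PAYOFFS[(profile["Whiff"], profile["Poof"])]
        if u > best_u:
            best_plan, best_u = dict(zip(group, combo)), u
    return best_plan

# Step 1 identified both agents as cooperators: the best group plan
# is for both to push (10 utils), so each pushes.
print(best_plan_for(["Whiff", "Poof"], {}))
# Step 1 found Poof unavailable (he will not-push): the best Whiff
# can do alone is to not-push (6 utils rather than 0).
print(best_plan_for(["Whiff"], {"Poof": "not-push"}))
```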

8 comments:

  1. I'm not sure I see how Regan's decision procedure gets the act-consequentialist anything he didn't already have access to.

    Suppose Whiff and Poof are able to communicate. In that case, it's a bit misleading to represent the decision they face by the above 2x2 matrix. They also face a decision about whether to try to coordinate with the other party, perhaps by saying: "hey, let's both push, sound good?" If they have that option, then the EU maximizing option will be to do that, and then push--it's better than any other conjunction of pushing or not pushing with communicating or not communicating.

    It seems to me that the only cases in which both agents can satisfy AU, and we can nevertheless end up in the sub-optimal equilibrium, are ones where they don't have the opportunity to communicate and coordinate their behavior. But in that case, I don't think the theory is obviously giving the wrong results; sometimes agents act completely appropriately, and they don't get the best outcome because they didn't have the opportunity to coordinate. See, e.g., stag hunt games.

  2. Communication doesn't really make any difference here. It's sufficiently obvious that both pushing is the preferable outcome that explicitly saying "hey, let's push" is redundant. Even given such opportunities for communication, the formal problem remains the same. Suppose that, after communicating as you suggest, both agents (perversely) not-push. Then each has satisfied AU. (It's true of each, taken individually, that things would've been worse if they'd instead pushed.) CU avoids this nutty result. It has the formal property, lacked by AU, of adaptability: the agents who successfully satisfy the theory do as well as was collectively possible.

    Note that the original puzzle is formulated in terms of objective oughts, rather than expected utility. (One might act "appropriately", if by that you mean rationally or blamelessly, and yet fail to satisfy the theory due to misinformation. Your best effort may fall short of truly being the best action.) It's slightly trickier to formulate the problem in expectabilist terms, but even here we may note that CU has an advantage: put two cooperative utilitarians in this situation, with only the information that the other person satisfies CU, and they will certainly succeed in attaining the best outcome.

    By contrast, give two act utilitarians only the information that the other satisfies AU, and there's no such guarantee. If they're lucky, they'll find themselves with an externally generated expectation that the other will push (and hence they will too, in order to maximize expected utility), but there's nothing in the theory itself which assures this; they might just as well find themselves with an expectation that the other will not-push (in which case, again, they would too...). And something's gone wrong if two people who know full well that the other is successfully following the correct moral theory could nonetheless end up in the (collectively) wrong place!
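    To make the contrast concrete, we can continue the Python sketch from the post (again just my own illustration) and enumerate the act-profiles on which every agent satisfies each theory:

    ```python
    # Profiles satisfying CU when both agents are cooperators: each must
    # do their part in the best plan for the pair, which is unique here.
    plan = best_plan_for(["Whiff", "Poof"], {})
    print([(w, p) for w in ACTS for p in ACTS
           if (w, p) == (plan["Whiff"], plan["Poof"])])
    # [('push', 'push')] -- the only CU-satisfying profile is the optimal one.

    # Profiles satisfying AU:
    print([(w, p) for w in ACTS for p in ACTS if satisfies_au(w, p)])
    # [('push', 'push'), ('not-push', 'not-push')] -- AU also counts
    # the bad equilibrium as everyone satisfying the theory.
    ```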

  3. As a utilitarian, I've always been anxious that building decision procedures into our theory is a sure way to get into trouble. This is part of what makes me nervous about Regan's proposal. Is this warranted?

    Let's say that the button is only pushable for a brief moment, and it'll take us longer than that to identify other willing and able agents. So Cooperative Utilitarians will always not-push, while Act Utilitarians at least have a shot at getting it right. Then it seems that his view fails on the grounds you've described. Two Cooperative Utilitarians, each of whom knows full well that the other is successfully following Cooperative Utilitarianism, will nonetheless end up in the collectively wrong place.

    Maybe there's a way to understand Regan's thing so we're not talking about decision procedures in this problematic sense. I'd like to see what that is then.

  4. Hi Neil, you're right to be worried. See my follow-up post for more detail.

    Still, in Regan's defense, there's an important formal difference between this objection to CU and his objection to AU. With AU, the agents might (collectively) choose a worse option. With CU, the problem is less 'voluntary': it's not as though the agents make bad choices (picking an inferior over some superior available option) -- the problem is instead that the fact that they follow CU constrains what options are available to them in the first place, in unfortunate ways. (E.g. Given their psychological makeup, they don't really have the option to select a button that's sure to disappear before they finish deliberating.) This may seem like less of a theoretical flaw.

  5. Oh, one other thing possibly worth flagging: when CU requires agents to "identify other co-operators", this really just means something like form a true belief about who else will satisfy CU. So it's possible to satisfy the theory's objective requirements without conducting the sort of long-winded empirical inquiry that may be needed to form justified beliefs here. But of course a variant of your objection may still go through, if we can imagine a case where simply the time taken to spontaneously form the required belief already means it's too late to push the button. (Or, better, the sort of evil demon cases discussed in my follow-up post.) And, again, this all becomes rather more complicated when re-formulated in terms of rational expectations rather than objective oughts.

  6. [Toby Ord writes...]

    If you allow oughts for sets of agents doing sets of acts (sets of anything for global consequentialists), then this gets the best of both worlds. That is, I am very happy with all the oughts it does and does not endorse. While it is the case that {agent1, agent2} ought to {push1, push2} (as this is the best thing that they together can do), it is not the case that agent1 ought to push1, as this is not the best thing that agent1 (alone) can do. This avoids fixed decision procedures, and also pretty much gets around your concern that agents' doing what they ought doesn't imply that the situation is optimal: if they together do what they together ought to do, then the result will be optimal.

  7. That may take some of the sting out, but there's still the problem at the individual level that we could have each agent doing what they ought to do, yet with suboptimal results (if they're stuck in a bad equilibrium). It seems a benefit of Regan's theory that it requires individual agents to co-ordinate when possible. (Especially since it is individuals, and not sets, that make decisions. The force of an "ought"-claim as applied to a set seems a bit obscure to me!)

  8. We at Felicifia, the utilitarian forum, have been discussing your essay in this thread. Thought you would want to know!

