Wednesday, November 07, 2012

What if everyone did that?

People often appeal to "What if everyone did that?"-style moral arguments (e.g. for a putative obligation to vote).  While there's something to the underlying thought here, I think it is often misapplied.  If we're not careful, this "universalizing" reasoning can easily mislead us into accepting stronger conclusions than are actually warranted.

For example, advanced economies depend upon there being diverse and specialized professions.  So if everyone worked in (say) construction, we'd all starve; but that obviously doesn't make working in the construction sector immoral.  Even if construction work is widely regarded as permissible, there is no risk of everyone doing it, and hence no risk of disaster.  Similarly for choosing not to have children.  As these cases suggest, the relevant question turns out to be, not "what if everyone did that?", but rather, "what if everyone felt free to do that?"  The answer to this latter question will often be, appropriately enough, "no problem!"


Other times, the "what if everyone did that?" heuristic serves to highlight a genuine moral problem, but one that can be equally well addressed from a straightforwardly Act Consequentialist perspective.  This can take two forms:

(1) We often unjustly neglect the aggregate impact of small differences made to large numbers of people.  So, iterating these effects can help to make them more visible, and bring us to see that each individual instance was actually more significant than we realized (i.e., more significant than some competing, more visible effect on a single person).

(2) Sometimes decisions (e.g. buying factory-farmed meat) have what I call "chunky impacts", or threshold effects, whereby the vast majority of instances have no effect, but then a single threshold-breaking instance has a proportionately huge effect, such that the expected value comes out the same (if you have a proportionate chance of being the threshold-breaker) -- as the sketch below illustrates.

In either case, there's no special "collective action problem" that requires us to pretend that we're deciding for everyone.  Simply assessing the expected utility of our individual action yields the right result.
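
To make (2) concrete, here's a minimal sketch in Python -- the batch size and probabilities are made up purely for illustration:

```python
# A minimal sketch of the "chunky impacts" point, with made-up numbers.
# Suppose a store restocks chicken in batches of 25: most purchases change
# nothing, but one threshold-breaking purchase triggers a whole new batch.

BATCH_SIZE = 25                  # hypothetical restocking threshold
P_THRESHOLD = 1 / BATCH_SIZE     # proportionate chance of being the threshold-breaker

# Expected impact per purchase: a 1-in-25 chance of a 25-unit effect.
expected_impact = P_THRESHOLD * BATCH_SIZE + (1 - P_THRESHOLD) * 0
print(expected_impact)           # 1.0 -- one unit of impact per purchase, on average
```

So, despite the "chunkiness", each purchase carries the same expected impact as it would if every purchase had a small direct effect.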

Things can be different in threshold cases where you have background knowledge suggesting that it's (disproportionately) unlikely that you'll be a threshold-breaker.  Voters in non-swing states may be in such a position, where the odds of their (presidential) vote making a difference are so slim that the expected value of their vote is negligible, even given the high stakes of a presidential election.  (This will, of course, depend on the details!)
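
As a toy illustration of the structure of that calculation (all figures here are hypothetical orders of magnitude, not empirical estimates):

```python
# A toy expected-value calculation for voting. All figures are hypothetical.

STAKES = 10**9            # notional social value of the better candidate winning
p_decisive_swing = 1e-7   # decisive-vote odds in a competitive state (illustrative)
p_decisive_safe = 1e-15   # decisive-vote odds in a very safe state (illustrative)

print(p_decisive_swing * STAKES)   # ~100   -- plausibly worth the trip to the polls
print(p_decisive_safe * STAKES)    # ~1e-06 -- negligible, despite the high stakes
```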

In this case, Act Consequentialism may recommend not voting.  Critics then object that if every right-thinking citizen followed this recommendation, then even "safe states" would no longer be safe, with disastrous consequences!  But note that AC's recommendation is highly contingent on our background evidence regarding others' dispositions.  As it happens, we know that many people vote for expressive reasons, etc.  If this were to change -- were we to suddenly find ourselves in a society of Act Consequentialists who don't get any intrinsic value out of the act of voting itself -- then the expected value of our voting would also change.  For each right-thinking citizen who "drops out" of voting in a once-safe state, the expected value of the remaining citizens' votes increases, until you reach a point where the optimal number of right-thinking people are voting.

Of course, things can be trickier if the society falls into a bad equilibrium.  If every right-thinking citizen drops out simultaneously, then subsequently there will be no act-consequentialist reason for them to start voting again, as no one alone has any chance of making a difference.  (One can multiply examples along these lines, involving firing squads, etc.)  The proper solution to this pure coordination problem is to build in some of the insights of Donald Regan's Cooperative Utilitarianism:
The basic idea [of Cooperative Utilitarianism] is that each agent should proceed in two steps: First he should identify the other agents who are willing and able to co-operate in the production of the best possible consequences. Then he should do his part in the best plan of behaviour for the group consisting of himself and the others so identified, in view of the behaviour of non-members of the group. (p.x)
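
To fix ideas, here's a schematic sketch of that two-step procedure in code -- my gloss, not Regan's own formulation -- applied to a hypothetical two-agent case where the best outcome requires both agents to push a button:

```python
from itertools import product

def best_group_plan(group, outsiders_acts, value):
    """Step two's target: the best joint plan available to the cooperating
    group, holding fixed what non-members are going to do."""
    plans = product(['push', 'not-push'], repeat=len(group))
    return max(plans,
               key=lambda plan: value({**dict(zip(group, plan)), **outsiders_acts}))

def cu_act(me, everyone, is_cooperator, outsiders_acts, value):
    group = [agent for agent in everyone if is_cooperator(agent)]  # step one
    plan = best_group_plan(group, outsiders_acts, value)
    return dict(zip(group, plan))[me]                              # step two: my part

# Hypothetical payoffs: the good outcome occurs only if every agent pushes.
def value(acts):
    return 10 if all(act == 'push' for act in acts.values()) else 0

# With both agents identified as cooperators, each agent's part is to push.
print(cu_act('A', ['A', 'B'], lambda agent: True, {}, value))  # 'push'
```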

In a world where all the right-thinking citizens are cooperative utilitarians, and some X% of them need to vote in order to avoid disaster, then a bit over X% of them should vote. (Perhaps each person would use a randomizing device to determine whether they fall into the voting group.)  Meanwhile, in the real world, for as long as there are vanishingly few cooperative utilitarians around, those of us in non-swing states probably needn't bother.
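
Here's a quick sketch of that randomizing-device idea, with made-up numbers for the group size, the required threshold, and the margin:

```python
import random

# Each cooperative utilitarian votes with probability a bit over X, so that
# the group as a whole clears the disaster-avoiding threshold almost surely.

N = 100_000     # hypothetical number of right-thinking citizens
X = 0.40        # hypothetical fraction of them needed to vote to avoid disaster
MARGIN = 0.02   # "a bit over X%" -- slack making a shortfall vanishingly unlikely

turnout = sum(random.random() < X + MARGIN for _ in range(N))
print(turnout >= X * N)   # True, except with vanishingly small probability
```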

16 comments:

  1. I haven't really studied cooperative utilitarianism. Could you tell me when cooperative utilitarianism diverges from act utilitarianism in its verdicts? Perhaps, in this case: the members of the group are willing and able to co-operate but are not going to co-operate for some reason (perhaps because they lack the relevant information), and I know that they're not going to cooperate and nothing I can do will ensure that they do co-operate. But, in that case, shouldn't I refrain from doing my part -- at least, if doing my part involves some sort of cost? So I'm curious: what's the insight? In what sort of case is the cooperative utilitarian's verdict more plausible than the act utilitarian's verdict?

    Replies
    1. There's an example in the linked post. But the main difference is that successfully following CU entails that you actually cooperate with all other CU agents. If others in your group are not themselves cooperative utilitarians, then it will yield the standard Act Utilitarian verdicts. But a group of fully-informed CU agents will not get stuck at a bad equilibrium point because they know that (i) the others are CU agents, and (ii) all CU agents cooperate with each other to secure the best possible outcomes given what the non-CU agents will do.

      In other words, a group of CU agents is precluded from knowing that the others (as fellow CU agents) are "not going to cooperate for some reason". Whereas AU agents may end up in this unfortunate position.

    2. This comment has been removed by a blog administrator.

    3. Sorry. I hit publish before editing. Please ignore the above.

      Is CU a theory about objective (fact-relative) rightness or subjective (evidence-relative) rightness?

      In the example in the linked post, I take it that we're to assume (1) that neither Poof nor Whoof push, (2) that neither has any control over what the other does, and (3) that both are willing and able to cooperate. Is that right? And can we also assume that Poof knows that Whoof is not going to push regardless of what he does?

      If those assumptions are correct, then I see that AU says that neither Poof nor Whoof did anything wrong. And so I gather that CU conflicts with AU because CU says that Poof did wrong in not-pushing. I know that I must be confused somewhere. Can you help me by spelling out the example so that it's clear that CU and AU have conflicting verdicts about what some individual should do.

    4. Let's start with objective rightness, for simplicity.

      According to CU, Poof (and, similarly, Whoof) is objectively required to (a) identify who else is willing and able to cooperate, and then (b) play their part in the best plan that's implementable by the group of cooperators.

      So, suppose that Poof and Whoof both successfully follow CU. Then, they will have correctly identified each other as cooperators in the first step, and since the best plan for both cooperators is to push, each will push.

      This shows that the assumptions you label (1) and (3) are incompatible. If both agents are "willing and able to cooperate" (in the relevant sense), then they will both push. (For a simplifying illustration, imagine that some people have "CU" branded on their foreheads, and that every CU-branded agent is guaranteed to cooperate with any other CU-branded agents to play their part in the best collectively possible plan. It is inconsistent with these assumptions that Poof and Whoof are both CU-branded and yet fail to both push in the described case.)

      Suppose instead that Poof (still following CU) knows that Whoof is not going to push regardless of what he does. This entails that Whoof is not successfully following CU, or else Whoof would have identified Poof as a cooperator and then played his part in the best cooperative plan by pushing. Since there are no other CU agents, Poof can (after correctly identifying Whoof as a non-CU agent who will not-push) permissibly not-push. [Note that Whoof's not-pushing is not similarly permissible, for he failed to successfully undertake the prior step of identifying Poof as a cooperative agent.]

      This complicates assumption (2). If we hold fixed that Poof successfully follows CU, then there is an important sense in which Whoof does have "control" over Poof, in that Poof can only successfully follow CU by doing the same action as Whoof chooses. (Of course, there's no causal control here. It may be that Poof expects Whoof to not-push, such that if Whoof actually pushed then Poof would mistakenly still not-push, and hence would fail to successfully follow CU.)

      Still, as an objective theory of rightness, CU has the unique formal property that all agents who successfully follow it are thereby guaranteed to collectively act as best they can.

      Does that help?

    5. Hi Richard,

      Thanks for trying to help. Unfortunately, I'm still puzzled, although I'm not puzzled about this:

      "Still, as an objective theory of rightness, CU has the unique formal property that all agents who successfully follow it are thereby guaranteed to collectively act as best they can."

      I understand this perfectly well. Moreover, I understand that AU does not have this formal property. But what I'm asking for is a case in which CU and AU give different moral verdicts with respect to some individual's act -- a case where, for instance, although CU implies that Poof's failing to push in circumstances C is wrong, AU implies that Poof's failing to push in circumstances C is not wrong.

      Now, it sounds like you're saying that Poof can identify Whoof as someone who is willing and able to cooperate only if Whoof is someone who is going to push (at least, so long as Poof is going to push). But if Whoof is going to push (so long as Poof is going to push), then AU (and not just CU) requires Poof to push. By contrast, if Whoof is not going to push, then Poof cannot identify Whoof as willing to cooperate. And in that case AU implies that Poof is obligated not to push. Likewise for CU (or am I wrong about this?). So I'm still not getting it.

      So you seem to be suggesting that CU and AU can imply conflicting judgments about the permissibility of a subject, S, performing an act, X, in circumstances, C. And you have claimed that the Poof-Whoof case is such an example. Could you tell me then what S, X, and C stand for and what you take AU's implication to be as well as CU's conflicting implication to be.

      In other words, I'm asking for a case with this structure:

      (1) S does X in C.
      (2) CU implies that S acted impermissibly in doing X in C.
      (3) AU implies that S acted permissibly in doing X in C.

      (You can reverse the permissibly and impermissibly if you like.)

      And I just need to know what S, X, and C stand for.

      Now, it seems to me that if S equals Poof and X equals pushes and C stands for the circumstances under which Whoof is willing and able to cooperate and thus will push (at least, so long as Poof pushes), then (3) will hold, but (2) won't. And if you change C to the circumstances under which Whoof is not going to push regardless of what actions Poof takes, then (2) will hold, but (3) won't. So I'm still puzzled as to why you think that the Poof-Whoof case is an example where AU and CU generate conflicting verdicts about an individual's action.

    6. Oh, right, sorry I should've been clearer on that. Take S = Whoof, X = not-pushing, and C = only Poof is willing and able to cooperate, and hence (because Whoof isn't cooperative) Poof not-pushes.

      In that case, (3) holds because, given that Poof is not-pushing, Whoof does best by doing likewise. But (2) holds because Whoof didn't act as a result of going through the required CU decision procedure.

    7. Thanks. This is helpful. Okay, so I get that Whoof is not willing to cooperate. But is there something that Whoof could have done to indicate to Poof that he is willing and able to cooperate (that is, if Whoof had been willing to cooperate)? And would that have had the effect that Poof pushes? Or is it that Poof not-pushes no matter what Whoof does? If the former, then suppose that Whoof could have convincingly said to Poof, "I'm willing to cooperate." Won't AU also say that Whoof's not-pushing is wrong, for pushing and convincingly communicating to Poof that he is willing to cooperate is an option that is superior in its production of utility to all the other options? If it's the latter and there is nothing Whoof can do to get Poof to push, then I don't see why we should think that it is impermissible not to push.

      Perhaps, though, the thought is that there is nothing that Whoof can physically do that will cause Poof to push. But Poof will push if and only if Whoof has a certain mental state (a desire/willingness to cooperate). If that's right, then one thing we could do is think that CU offers an insight into how we need to modify AU's criterion of rightness. But another alternative is to modify our account of what constitutes an option and realize that options don't just consist in physical actions but rather sets of mental states that include both intentions to perform physical actions as well as other judgment-sensitive attitudes (like desires). If we conceive of our options in this way, then (3) will not hold. AU will claim that it was impermissible for Whoof to not-push, given that intending to push while wanting to cooperate with Poof was an option for Whoof. And had Whoof had this set of attitudes, the utility would have been greater than had he had any alternative set of attitudes.

      So I'm wondering whether another way of capturing what you take to be the insight of CU is to adopt a different account of options, such as the one that I adopt in my book Commonsense Consequentialism, where options are delineated not just in terms of physical actions but in terms of what the agent can secure with various combinations of intentions and other judgment-sensitive attitudes.

    8. Yeah, that's interesting. One worry about even the "expanded options" conception of AU is that it still lacks the nice formal property I mentioned above. It remains possible for successful AU agents to get stuck in a bad equilibrium. For imagine the case where Poof and Whoof are both uncooperative and independently happen to not-push. Each acts objectively rightly according to AU, since there is no (even mentally-expanded) better option they could have taken. CU, by contrast, implies that each acts wrongly in this case -- not because either alone should have pushed, but because they should employ the CU decision procedure (despite it making no consequential difference when the other agent remains uncooperative in disposition).

      Of course, you might judge that the nice formal property of CU is not worth the cost of building in a fixed decision procedure (one that is mandated regardless of its consequences!), and so prefer an expanded "option utilitarianism" all things considered. But it at least looks to me as though CU has an advantage in dealing with coordination problems.

    9. In the case that you're asking me to imagine, is it that Whoof will not push even if I (as Poof) were to form the desire to cooperate, the intention to push, and act so as to indicate to Whoof that I'm willing to cooperate? If so, then Whoof will not-push no matter what I do, think, or feel. And I thought that you agreed with me that it is not objectively wrong for Poof to not-push if Whoof is not going to push no matter what Poof does, thinks, or feels. And since we're talking about objective wrongness, I don't see how it matters that Poof in fact not-pushes out of a desire to not cooperate.

      Also, I'm not clear on why it is good for a theory about what individual agents are morally required to do to have the formal property that all agents who successfully follow it are thereby guaranteed to act the best that *they collectively* can act. No plausible theory about what individual agents are prudentially required to do has this formal property. And I don't buy the Parfitian idea that a theory about what *individual* agents are morally required to do should be one that ensures that *we* act the best that we can collectively act, as opposed to ensuring that each of us acts the best that each of us as individuals can act.

    10. "And I thought that you agreed with me that it is not objectively wrong for Poof to not-push if Whoof is not going to push no matter what Poof does, thinks, or feels."

      Well, it isn't not-pushing per se that's wrong, even according to CU, but rather the failure to follow the CU-mandated decision procedure. (Poof could permissibly not-push, after all, as he would if he followed the CU decision procedure in this situation.) The decision procedure can coherently "matter" to objective rightness if a moral theory builds DP-related facts into its criterion of rightness, as CU does. (Perhaps you understood this, but are just expressing your view that you don't think a moral theory should do this.)

      It's an interesting question whether we should want our moral theories to have the formal feature we've been discussing. (To give it a label: Regan calls it 'adaptability'.) I feel some intuitive pull towards it, but I'm not completely sold: I certainly think you can reasonably resist it. In the dialectic of the original post, I'm more just wanting to point out that anyone worried about such coordination problems shouldn't go all the way to Rule Consequentialist- or Kantian-style universalization. Regan's CU, which is very close to AU, suffices.

    11. "The decision procedure can coherently 'matter' to objective rightness if a moral theory builds DP-related facts into its criterion of rightness, as CU does. (Perhaps you understood this, but are just expressing your view that you don't think a moral theory should do this.)"

      I'm not opposed in principle to building DP-related facts into a criterion of rightness. But I do find it implausible to think that a consequentialist should suppose that one would be objectively required to follow the CU-mandated decision procedure by performing some act that constitutes identifying those who are willing to cooperate when such an act would have no good effects and only whatever opportunity costs come with performing it. After all, I wouldn't advise someone to go to the trouble of identifying the agents willing to cooperate if I know that there is no one who is, or too few who are, willing to cooperate and that, therefore, it would be a futile undertaking. But I guess that's just my act-consequentialist intuitions coming through, and CU isn't meant to be a version of act-consequentialism, I gather.

      So I guess that this is just my way of saying that an "expanded option" utilitarianism (or what I would call securitist utilitarianism) seems to me to get the intuitive verdict in the cases where it diverges from CU. And it is very unclear to me why adaptability is a formal feature that we should want our moral theories to have.

      In any case, thanks for your patience with me and your willingness to explain CU to me. I obviously need to read up on this literature.

  2. "If every right-thinking citizen drops out simultaneously, then subsequently there will be no act-consequentialist reason for them to start voting again, as no one alone has any chance of making a difference."

    I don't understand this. If I know that everyone else is going to refrain from voting, then don't I have a 100% chance of making a difference?

    Maybe a better example would be one where everyone is going to stop paying their taxes next year. But in this case do you really think that I should pay my taxes if I know that no one else is going to pay theirs and nothing I can do will change the fact that no one is going to pay their taxes? It seems to me that I should hold back my tax payment so that I'll have more money to purchase the guns and food that I'll need in preparation for the inevitable anarchy that lies ahead.

    Replies
    1. I was imagining that there were still many wrong-thinking citizens who would vote for the worse party.

      (In the case where you know others won't cooperate, I agree with the act utilitarian verdict -- see my above comment.)

    2. When you say "no one alone has any chance of making a difference," I take it that you're talking about the chances relative to the agent's evidence. And if there is no chance of one's vote making a difference, then don't I know that not enough other right-thinking individuals are going to vote (that is, cooperate) for there to be any chance of my vote making a difference?

      And when you say that "a group of CU agents is precluded from knowing that the others (as fellow CU agents) are 'not going to cooperate for some reason'," does that mean that CU offers no verdict as to what to do when they know this? How can a moral theory preclude agents from knowing certain information?

    3. (See my new comment, above. CU precludes knowing that information because it precludes the state of affairs that the information describes. It can't possibly happen that successful followers of CU fail to mutually cooperate.)

