Comments on Philosophy, et cetera: "What if everyone did that?"

---

[2012-11-09 08:59] <i>The decision procedure can coherently 'matter' to objective rightness if a moral theory builds DP-related facts into its criterion of rightness, as CU does. (Perhaps you understood this, but are just expressing your view that you don't think a moral theory should do this.)</i>

I'm not opposed in principle to building DP-related facts into a criterion of rightness. But I do find it implausible to think that a consequentialist should suppose that one would be objectively required to follow the CU-mandated decision procedure by performing some act of identifying those who are willing to cooperate when such an act would have no good effects and only whatever opportunity costs come with performing it. After all, I wouldn't advise someone to go to the trouble of identifying the agents willing to cooperate if I know that there is no one who is, or too few who are, willing to cooperate and that, therefore, it would be a futile undertaking. But I guess that's just my act-consequentialist intuitions coming through, and CU isn't meant to be a version of act-consequentialism, I gather.

So I guess that this is just my way of saying that an "expanded option" utilitarianism (or what I would call securitist utilitarianism) seems to me to get the intuitive verdict in the cases where it diverges from CU. And it's very unclear to me why adaptability is a formal feature that we should want our moral theories to have.

In any case, thanks for your patience with me and your willingness to explain CU to me.
I obviously need to read up on this literature.

-- Doug Portmore

---

[2012-11-08 19:47] "<i>And I thought that you agreed with me that it is not objectively wrong for Poof to not-push if Whoof is not going to push no matter what Poof does, thinks, or feels.</i>"

Well, it isn't <i>not-pushing</i> per se that's wrong, even according to CU, but rather the failure to follow the CU-mandated decision procedure. (Poof <i>could</i> permissibly not-push, after all, as he would if he followed the CU decision procedure in this situation.) The decision procedure can coherently "matter" to objective rightness if a moral theory builds DP-related facts into its criterion of rightness, as CU does. (Perhaps you understood this, but are just expressing your view that you don't think a moral theory <i>should</i> do this.)

It's an interesting question whether we should want our moral theories to have the formal feature we've been discussing. (To give it a label: Regan calls it 'adaptability'.) I feel some intuitive pull towards it, but I'm not completely sold: I certainly think you can reasonably resist it. In the dialectic of the original post, I'm more just wanting to point out that anyone worried about such coordination problems shouldn't go all the way to Rule Consequentialist- or Kantian-style universalization.
Regan's CU, which is very close to AU, suffices.

-- Richard Y Chappell

---

[2012-11-08 19:03] In the case that you're asking me to imagine, is it that Whoof will not push even if I were to form the desire to cooperate, the intention to push, and act so as to indicate to Whoof that I'm willing to cooperate? If so, then Whoof will not-push no matter what I do, think, or feel. And I thought that you agreed with me that it is not objectively wrong for Poof to not-push if Whoof is not going to push no matter what Poof does, thinks, or feels. And since we're talking about objective wrongness, I don't see how it matters that Poof in fact not-pushes out of a desire not to cooperate.

Also, I'm not clear on why it is good for a theory about what individual agents are morally required to do to have the formal property that all agents who successfully follow it are thereby guaranteed to collectively act as the best that *they collectively* can act. No plausible theory about what individual agents are prudentially required to do has this formal property. And I don't buy the Parfitian idea that a theory about what *individual* agents are morally required to do should ensure that *we* act the best that we can collectively act, as opposed to ensuring that each of us acts in the way that is the best that each of us as individuals can act.

-- Doug Portmore

---

[2012-11-08 11:35] Yeah, that's interesting. One worry about even the "expanded options" conception of AU is that it still lacks the nice formal property I mentioned above.
It remains possible for successful AU agents to get stuck in a bad equilibrium. For imagine the case where Poof and Whoof are both uncooperative and independently happen to not-push. Each acts objectively rightly according to AU, since there is no (even mentally-expanded) better option they could have taken. CU, by contrast, implies that each acts wrongly in this case -- not because either alone should have pushed, but because they <i>should employ the CU decision procedure</i> (despite it making no consequential difference when the other agent remains uncooperative in disposition).

Of course, you might judge that the nice formal property of CU is not worth building in a fixed decision-procedure (one that is mandated <a href="http://www.philosophyetc.net/2010/03/limits-of-moral-theory.html" rel="nofollow">regardless of its consequences</a>!), and so prefer an expanded "option utilitarianism" all things considered. But it at least looks to me as though CU has an advantage in dealing with coordination problems.

-- Richard Y Chappell

---

[2012-11-08 07:33] Thanks. This is helpful. Okay, so I get that Whoof is not willing to cooperate. But is there something that Whoof could have done to indicate to Poof that he is willing and able to cooperate (that is, if Whoof had been willing to cooperate)? And would that have had the effect that Poof pushes? Or is it that Poof not-pushes no matter what Whoof does? If the former, then suppose that Whoof could have convincingly said to Poof, "I'm willing to cooperate." Won't AU also say that Poof's not-pushing is wrong, since pushing and convincingly communicating to Poof that he is willing to cooperate is an option that is superior in its production of utility to all the other options?
If it's the latter and there is nothing Whoof can do to get Poof to push, then I don't see why we should think that it is permissible to push.

Perhaps, though, the thought is that there is nothing that Whoof can physically do that will cause Poof to push. But Poof will push if and only if Whoof has a certain mental state (a desire/willingness to cooperate). If that's right, then one thing we could do is think that CU offers an insight into how we need to modify AU's criterion of rightness. But another alternative is to modify our account of what constitutes an option and realize that options don't just consist in physical actions but rather in sets of mental states that include both intentions to perform physical actions as well as other judgment-sensitive attitudes (like desires). If we conceive of our options in this way, then (3) will not hold. AU will claim that it was impermissible for Poof to not-push, given that intending to push while wanting to cooperate with Whoof was an option for Poof. And had Poof had this set of attitudes, the utility would have been greater than had he had any alternative set of attitudes.

So I'm wondering whether another way of capturing what you take to be the insight of CU is to adopt a different account of options, such as the one that I adopt in my book Commonsense Consequentialism, where options are delineated not just in terms of physical actions but in terms of what the agent can secure with various combinations of intentions and other judgment-sensitive attitudes.

-- Doug Portmore

---

[2012-11-07 22:28] Oh, right, sorry, I should've been clearer on that.
Take S = Whoof, X = not-pushing, and C = only Poof is willing and able to cooperate, and hence (because Whoof isn't cooperative) Poof not-pushes.

In that case, (3) holds because, given that Poof is not-pushing, Whoof does best by doing likewise. But (2) holds because Whoof didn't act as a result of going through the required CU decision procedure.

-- Richard Y Chappell

---

[2012-11-07 22:12] Hi Richard,
Thanks for trying to help. Unfortunately, I'm still puzzled, although I'm not puzzled about this:

"Still, as an objective theory of rightness, CU has the unique formal property that all agents who successfully follow it are thereby guaranteed to collectively act as best they can."

I understand this perfectly well. Moreover, I understand that AU does not have this formal property. But what I'm asking for is a case in which CU and AU give different moral verdicts with respect to some individual's act -- a case where, for instance, although CU implies that Poof's failing to push in circumstances C is wrong, AU implies that Poof's failing to push in circumstances C is not wrong.

Now, it sounds like you're saying that Poof can identify Whoof as someone who is willing and able to cooperate only if Whoof is someone who is going to push (at least, so long as Poof is going to push). But if Whoof is going to push (so long as Poof is going to push), then AU (and not just CU) requires Poof to push. By contrast, if Whoof is not going to push, then Poof cannot identify Whoof as willing to cooperate. And in that case AU implies that Poof is obligated not to push. Likewise for CU (or am I wrong about this?). So I'm still not getting it.

So you seem to be suggesting that CU and AU can imply conflicting judgments about the permissibility of a subject, S, performing an act, X, in circumstances, C. And you have claimed that the Poof-Whoof case is such an example. Could you tell me, then, what S, X, and C stand for, and what you take AU's implication to be as well as CU's conflicting implication to be?

In other words, I'm asking for a case with this structure:

(1) S does X in C.
(2) CU implies that S acted impermissibly in doing X in C.
(3) AU implies that S acted permissibly in doing X in C.
(You can reverse the permissibly and impermissibly if you like.)

And I just need to know what S, X, and C stand for.

Now, it seems to me that if S is Poof, X is pushing, and C stands for the circumstances under which Whoof is willing and able to cooperate and thus will push (at least, so long as Poof pushes), then (3) will hold, but (2) won't. And if you change C to the circumstances under which Whoof is not going to push regardless of what actions Poof takes, then (2) will hold, but (3) won't. So I'm still puzzled as to why you think that the Poof-Whoof case is an example where AU and CU generate conflicting verdicts about an individual's action.

-- Doug Portmore

---

[2012-11-07 17:36] (See my new comment, above. CU precludes knowing that information because it precludes the state of affairs that the information describes. It can't possibly happen that successful followers of CU fail to mutually cooperate.)

-- Richard Y Chappell

---

[2012-11-07 17:32] Let's start with objective rightness, for simplicity.

According to CU, Poof (and, similarly, Whoof) is objectively required to (a) identify who else is willing and able to cooperate, and then (b) play their part in the best plan that's implementable by the group of cooperators.

So, suppose that Poof and Whoof both successfully follow CU.
Then they will have correctly identified each other as cooperators in the first step, and since the best plan for both cooperators is to push, each will push.

This shows that the assumptions you label (1) and (3) are incompatible. If both agents are "willing and able to cooperate" (in the relevant sense), then they will both push. (For a simplifying illustration, imagine that some people have "CU" branded on their foreheads, and that every CU-branded agent is guaranteed to cooperate with any other CU-branded agents to play their part in the best collectively possible plan. It is inconsistent with these assumptions that Poof and Whoof are both CU-branded and yet fail to both push in the described case.)

Suppose instead that Poof (still following CU) knows that Whoof is not going to push regardless of what he does. This entails that Whoof is not successfully following CU, or else Whoof would have identified Poof as a cooperator and then played his part in the best cooperative plan by pushing. Since there are no other CU agents, Poof can (after correctly identifying Whoof as a non-CU agent who will not-push) permissibly not-push. [Note that Whoof's not-pushing is not similarly permissible, for he failed to successfully undertake the prior step of identifying Poof as a cooperative agent.]

This complicates assumption (2). <i>If we hold fixed</i> that Poof successfully follows CU, then there is an important sense in which Whoof <i>does</i> have "control" over Poof, in that Poof can only successfully follow CU by doing the same action as Whoof chooses. (Of course, there's no <i>causal</i> control here.
It may be that Poof expects Whoof to not-push, such that if Whoof actually pushed then Poof would mistakenly still not-push, and hence would <i>fail</i> to successfully follow CU.)

Still, as an objective theory of rightness, CU has the unique formal property that all agents who <i>successfully</i> follow it are thereby guaranteed to collectively act as best they can.

Does that help?

-- Richard Y Chappell

---

[2012-11-07 16:50] When you say "no one alone has any chance of making a difference," I take it that you're talking about the chances relative to the agent's evidence. And if there is no chance of one's vote making a difference, then don't I know that not enough other right-thinking individuals are going to vote (that is, cooperate) for there to be any chance of my vote making a difference?

And when you say that "a group of CU agents is precluded from knowing that the others (as fellow CU agents) are 'not going to cooperate for some reason'," does that mean that CU offers no verdict as to what to do when they know this? How can a moral theory preclude agents from knowing certain information?

-- Doug Portmore

---

[2012-11-07 16:28] Sorry, I hit publish before editing. Please ignore the above.

Is CU a theory about objective (fact-relative) rightness or subjective (evidence-relative) rightness?

In the example in the linked post, I take it that we're to assume (1) that neither Poof nor Whoof pushes, (2) that neither has any control over what the other does, and (3) that both are willing and able to cooperate.
Is that right? And can we also assume that Poof knows that Whoof is not going to push regardless of what he does?

If those assumptions are correct, then I see that AU says that neither Poof nor Whoof did anything wrong. And so I gather that CU conflicts with AU because CU says that Poof did wrong in not-pushing. I know that I must be confused somewhere. Can you help me by spelling out the example so that it's clear that CU and AU have conflicting verdicts about what some individual should do?

-- Doug Portmore

---

[2012-11-07 16:25] (This comment has been removed by a blog administrator.)

-- Doug Portmore

---

[2012-11-07 15:48] I was imagining that there were still many <i>wrong-thinking</i> citizens who would vote for the worse party.

(In the case where you know others won't cooperate, I agree with the act utilitarian verdict -- see my above comment.)

-- Richard Y Chappell

---

[2012-11-07 15:45] There's an example in the linked post. But the main difference is that successfully following CU entails that you <i>actually</i> cooperate with all other CU agents. If others in your group are <i>not</i> themselves cooperative utilitarians, then it will yield the standard Act Utilitarian verdicts.
But a group of fully-informed CU agents will not get stuck at a bad equilibrium point, because they know that (i) the others are CU agents, and (ii) all CU agents cooperate with each other to secure the best possible outcomes given what the non-CU agents will do.

In other words, a group of CU agents is precluded from knowing that the others (as fellow CU agents) are "not going to cooperate for some reason," whereas AU agents may end up in this unfortunate position.

-- Richard Y Chappell

---

[2012-11-07 15:35] "If every right-thinking citizen drops out simultaneously, then subsequently there will be no act-consequentialist reason for them to start voting again, as no one alone has any chance of making a difference."

I don't understand this. If I know that everyone else is going to refrain from voting, then don't I have a 100% chance of making a difference?

Maybe a better example would be one where everyone is going to stop paying their taxes next year. But in this case do you really think that I should pay my taxes if I know that no one else is going to pay theirs and nothing I can do will change the fact that no one is going to pay their taxes? It seems to me that I should hold back my tax payment so that I'll have more money to purchase the guns and food that I'll need in preparation for the inevitable anarchy that lies ahead.

-- Doug Portmore

---

[2012-11-07 15:27] I haven't really studied cooperative utilitarianism. Could you tell me when cooperative utilitarianism diverges from act utilitarianism in its verdicts?
Perhaps in this case: the members of the group are willing and able to co-operate but are not going to co-operate for some reason (perhaps because they lack the relevant information), and I know that they're not going to co-operate and nothing I can do will ensure that they do co-operate. But, in that case, shouldn't I refrain from doing my part -- at least, if doing my part involves some sort of cost? So I'm curious: what's the insight? In what sort of case is the cooperative utilitarian's verdict more plausible than the act utilitarian's verdict?

-- Doug Portmore