Comments on Philosophy, et cetera: The Limits of Moral Theory

Richard Y Chappell (2010-03-26 15:31):

Like I said, there are non-defective ways to specify the 'cooperators', but it requires going beyond an 'exclusively act-oriented theory', and instead requiring that agents have a particular psychology, as per CU. A theory like CU can secure the property of <i>adaptability</i> -- i.e., the agents who satisfy CU thereby collectively choose the best option available to them. So they achieve "the best they could have done" <i>given</i> what options were available to them.

The problem then, as explained in the original post, is that an evil demon might punish those who possess the specified psychology. The fact that they follow CU (or internalize T', or whatever...) leads to a worse outcome -- they choose the best of their options, but their option set is worse than it could have been if they had not followed the theory.

(Note that you can't have an exclusively act-oriented theory T' based on a list of 'cooperators' that is determined in a <i>not</i> exclusively act-oriented way, because it is indeed required that all and only cooperators satisfy the theory. [Regan dedicates a chapter of his book to showing how to secure this in a non-circular way, but the details are too complex for me to go into here.]
Otherwise choosing an optimal plan for cooperators wouldn't be the same thing as choosing an optimal plan for the theory-satisfiers.)

Nick Beckstead (2010-03-26 11:30):

I'm not sure what a list of cooperators is either. I was assuming we already had some non-defective notion in mind. Your argument suggests that doing as T' requires should not be a necessary and sufficient condition for being a cooperator. (For one thing, it would make my analysis circular.) But it would seem that there are some views about what it takes to be a cooperator that would satisfy T'. For example, maybe having internalized T' is what it takes to be a cooperator. Under this definition, whenever the cooperators follow T', they did the best they could have done. (It is worth repeating that T' is not plausible.)

If the notion of being a cooperator is somehow defective, then, indeed, we shouldn't believe that cooperators can always expect to do the best they could have done. However, it doesn't seem that your argument above is essentially aimed at attacking the notion of being a cooperator. So if the defectiveness of the notion of 'cooperator' is essential to your claim that no theory has your desired property, it would seem a new argument is needed.

Richard Y Chappell (2010-03-25 14:17):

Nick - interesting suggestion! My worry would be that your 'list of cooperators' (i.e. agents who satisfy the theory) is ill-defined.

To illustrate: suppose both Whiff and Poof not-push. Did both, neither, or just one of them satisfy T'?
Not both, presumably, since they brought about a collectively suboptimal result. So at least one didn't -- without loss of generality, say Poof. Now let's assess Whiff in light of this. Given that there are no other cooperators, Whiff performs the action required of him by T'. That is, Whiff satisfies T'. But the situations of Whiff and Poof are (by hypothesis) exactly symmetrical. Given that Whiff doesn't push, Poof's not-pushing likewise satisfies T'. So both satisfy T'. Contradiction.

(Regan's CU gets around this problem by offering an <i>independent</i> specification of who counts as a 'cooperator' or not, depending on the decision procedure they follow.)

Nick Beckstead (2010-03-25 11:28):

I guess I misunderstood you when you wrote "So, whatever form the moral theory takes, there's no guarantee that by successfully doing what the moral theory requires of them, agents will thereby bring about the best (collectively possible) results. It's always possible to rightly do badly." I took you to be claiming that there is no moral theory such that if everyone does what is required by the theory, an optimific outcome obtains. I constructed a theory with this property.

But you wanted a theory which is such that if every _cooperator_ does what is required by the moral theory, an optimific outcome obtains. (This was actually clear from your first paragraph.) So you want a theory such that if every cooperator does as he ought to, then of all the actions the cooperators could have collectively taken, no other set of collective actions would have been better. A small tweak of T will give you this.

Modify T as follows.
Add two ingredients: (v) a list of cooperators and (vi) a function that takes the actions of cooperators and returns the actions of non-cooperators.

New Setup: First, we identify all possible collective sets of actions of the cooperators, using (i), (ii), and (v). Next, we use these collective sets of actions of cooperators to generate a list of everyone's actions, including the non-cooperators (using (vi)). Using (iii), we associate each total collective action set with an outcome. Next, we identify the best outcome on this list, using (iv). If there is a tie, we arbitrarily choose one, and arbitrarily choose a set of collective actions that leads to it. Call the resulting set of collective actions “The Chosen Plan for CS”.

Repeat this process for all possible choice situations. Take the resulting mapping from choice situations to Chosen Plans.

Theory T': for any choice situation, each cooperator ought to do the action in The Chosen Plan for that choice situation. Each non-cooperator may do as he pleases.

New Claim: by the construction of The Chosen Plans, if each cooperator performs the action in The Chosen Plan for a given choice situation, there would be no other available set of collective actions by cooperators which would have better consequences in that choice situation. Thus, for any (finite) choice situation, if every cooperator acted in accordance with T', no other set of collective actions by cooperators would be better.

Richard Y Chappell (2010-03-25 10:31):

Alex - 'exclusively act-oriented' doesn't mean only classical act-consequentialism. The term instead describes <i>any</i> moral theory which can be satisfied by performing a certain act (i.e.
any moral theory which doesn't place any demands on the decision procedure that goes into producing the act). Many forms of (e.g.) rule consequentialism are also 'exclusively act-oriented' in this sense. (All except the ones that fit in the other category, namely of requiring that you actually follow the associated decision procedure.)

[It's true that here I've merely sketched the outlines of the proof, rather than filling in all the details. You can read Regan for the latter.]

Nick - that doesn't work. Try applying it to the Whiff and Poof case. It looks like the Chosen Plan will be to push. But if Poof doesn't push, then Whiff's following the Chosen Plan will make things worse. We want our perfect consequentialist theory to be such that the satisfiers of the theory do as well as possible, <i>given the behaviour of non-satisfiers</i>, however many satisfiers there may (or may not) be. As such, the theory needs to be sensitive to what other agents are doing.

David - you're confusing the surprising claim with the unsurprising claim (distinguished in my intro) that <i>reasonably mistaken attempts</i> to follow the theory may not work out for the best. The surprising claim is that <i>successfully</i> doing what the theory requires, with <b>no mistakes</b>, may be bad.

Alex Gregory (2010-03-25 05:13):

(Actually, now that I think about it, Chris' theory may not fulfil your original criterion that the theory should tell us that we ought to always bring about the best outcome.
Still, the logical point stands that you haven't proved that no-one *could* have a theory that did so and still solved simple coordination problems.)

Alex Gregory (2010-03-25 04:19):

Richard, either I'm missing something here or this is far from a proof.

"The proof takes the form of a dilemma. Either the theory builds into its requirements a particular decision procedure, or it does not [...] If it does not, i.e. if it is a traditional, 'exclusively act-oriented' theory, then satisfaction of the theory is compatible with suboptimal results in coordination games"

Why think that the only two alternatives are classic act-consequentialism or a form of consequentialism with a built-in decision procedure? Might there not be alternative ways to solve simple coordination problems?

(I believe that Chris Woodard has just such an alternative solution. See his paper on Pedro, amongst others (and a book!), here: http://sites.google.com/site/patternreasons/papers)

Alex

David (2010-03-25 00:57):

This result does not seem particularly curious. If there is imperfect information and the cost of getting information exceeds expected benefits, then obviously there will be an efficient amount of mistakes.

I would further contest the Whiff-Poof example on the grounds that the two are looking at payoffs but not probabilities.
Taking the expected value solves the problem there.

Nick: My concern in your example is that you (as an actor in the game) cannot assume that all other actors have the same moral theory or that they would follow through with it if they did. Certainly, given zero negotiating costs, all such games are solvable -- that is Coase's Theorem. The challenge is when there are barriers to negotiating/coordinating.

Nick Beckstead (2010-03-24 23:41):

Here's a theory that seems to escape the dilemma. We're given a choice situation CS with (i) a finite number of people, (ii) a finite number of options, (iii) a mapping from everyone's actions to outcomes, and (iv) a betterness ordering over outcomes.

Setup: First, we identify all possible collective sets of actions, using (i) and (ii). Next, we use these collective sets of actions to generate a list of outcomes using (iii). Next, we identify the best outcome on this list. If there is a tie, we arbitrarily choose one, and arbitrarily choose a set of collective actions that leads to it. Call the resulting set of collective actions “The Chosen Plan for CS”.

Repeat this process for all possible choice situations. Take the resulting mapping from choice situations to Chosen Plans.

Theory T: for any choice situation, each individual ought to do the action in The Chosen Plan for that choice situation.

Claim: by the construction of The Chosen Plans, if each individual performs the action in The Chosen Plan for a given choice situation, there would be no other available set of collective actions which would have better consequences in that choice situation.
Thus, if everyone acted in accordance with T, it would have the best consequences, for any (finite) choice situation.

Clearly, this doesn't work for some infinite choice situations, such as those with infinitely many outcomes. But I take it those choice situations aren't under consideration right now.

This theory is not plausible, but it does have the property you claimed no moral theory could have, I think.

(It isn't a decision procedure. I was just describing a method for constructing The Chosen Plans.)

-- Nick Beckstead
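Nick's recipe for constructing "The Chosen Plan" -- and the cooperator-relative variant in T' -- is mechanical enough to sketch in code. The sketch below is only an illustration of the construction as described in the thread, not anyone's official formulation: the function names, the response function, and the Whiff-and-Poof payoffs (10 if both push, 6 if neither does, 0 otherwise) are assumptions made for the example, and a numeric score stands in for the betterness ordering.

```python
from itertools import product

def chosen_plan(agents, options, outcome, cooperators=None, respond=None):
    """Brute-force the Chosen Plan for one choice situation.

    agents      -- list of agent names (ingredient (i))
    options     -- dict mapping each agent to their actions (ingredient (ii))
    outcome     -- function from everyone's actions to a numeric value,
                   standing in for (iii) plus the betterness ordering (iv)
    cooperators -- agents bound by the theory (ingredient (v)); defaulting
                   to everyone recovers the original theory T
    respond     -- function from the cooperators' actions to the
                   non-cooperators' actions (ingredient (vi))
    """
    if cooperators is None:
        cooperators = list(agents)
    non_cooperators = [a for a in agents if a not in cooperators]
    best_profile, best_value = None, None
    # Enumerate every collective set of actions by the cooperators.
    for combo in product(*(options[a] for a in cooperators)):
        profile = dict(zip(cooperators, combo))
        # Fill in the non-cooperators' actions via the response function.
        if non_cooperators:
            profile.update(respond(dict(profile)))
        value = outcome(profile)
        # Ties are broken arbitrarily: keep the first maximum found.
        if best_value is None or value > best_value:
            best_profile, best_value = profile, value
    return best_profile, best_value

# Whiff and Poof, with assumed payoffs: pushing pays off only if both push.
options = {"Whiff": ["push", "not-push"], "Poof": ["push", "not-push"]}

def outcome(acts):
    if all(a == "push" for a in acts.values()):
        return 10  # assumed: joint pushing is best
    if all(a == "not-push" for a in acts.values()):
        return 6   # assumed: joint abstention is second-best
    return 0       # assumed: mis-coordination is worst

# Theory T: everyone is a cooperator, so the Chosen Plan is "both push".
plan, value = chosen_plan(["Whiff", "Poof"], options, outcome)
print(plan, value)  # both push, value 10

# Theory T': Poof is a non-cooperator who (we stipulate) never pushes.
# The best the cooperators can do, given his behaviour, is not-push.
plan2, value2 = chosen_plan(
    ["Whiff", "Poof"], options, outcome,
    cooperators=["Whiff"],
    respond=lambda coop_acts: {"Poof": "not-push"},
)
print(plan2, value2)  # both not-push, value 6
```

The second call also illustrates Richard's rebuttal above: under T the Chosen Plan is for both to push, so a Whiff who follows it while Poof abstains ends up at the worst outcome (0), whereas a construction that conditions on what the non-cooperators actually do settles for the second-best outcome (6).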