Comments on Philosophy, et cetera: "Objections to Consequentialism" (Richard Y Chappell)

Anonymous (2015-01-03):

Sorry for the extraordinary delay in my reply: it was just a fortuitous impulse to revisit this post when I was browsing your year-in-review post.

I think we actually agree that it's desirable to give an explanation (beyond just an appeal to foundational intuitions) of why it's wrong to break a promise. But here, I think, we'll hit a divide pretty quickly. If I were asked to explain why it is generally wrong to break a promise, I would say that it's because doing so fails to show proper respect to the person to whom one has made the promise. (In expressing this idea, I'd probably appeal to something similar to the humanity formulation of Kant's categorical imperative.) I'd offer a similar explanation of why one generally shouldn't deceive others. So the problem with these particular issues is a disagreement about what the best explanation for their wrongness is.
I wouldn't be satisfied with trying to cash out the idea of showing individuals appropriate moral respect exclusively in terms of how showing such respect promotes good consequences.

In any case, this sort of disagreement can't really be resolved in comments on a blog, but since I expect to continue reading your work, you'll have further opportunities to persuade me.

Richard Y Chappell (2014-09-27):

Wouldn't the morally relevant reference frame just be <i>whichever one the subject themselves is in</i>, rather than the reference frame of the assessor? So, if the delay will be one hour <i>from the subject's perspective</i> whichever way they go, it seems the correct answer is that it makes no moral difference which group they go with.

TruePath (2014-09-26):

Weirdly, my answer is a bit different.
I think th...Weirdly, my answer is a bit different.<br /><br />I think the answer is relativity. <br /><br />If your notion of consequentialism doesn't have something that fills the role of discounting you get absurd results it many quite plausible states of affairs. In particular you lose the property that if one outcome is preferable to the other at every time it is overall better. In other words in order for it to turn out that it is better to create better consequences now than to simply delay it for any arbitrary amount of time. <br /><br />Unfortunately, if delaying initiating changes with beneficial conseuqnces is worse than not doing so special relativity dictates that if one group of individuals are moving at a high rate of speed relative to the first (they can even take off on a ship from the earth) they will have different judgements about which states of affairs are preferable.<br /><br />Both groups view the other group as moving and in their reference frame time runs more slowly for the moving group. Thus, if the groups have to decide who will take a person who will start to experience extreme joy (any good conseuqnces) in an hour both will judge that the time dilation experience by the other group means it's morally preferable for this individual to come with them since in their reference frame he will thereby start experiencing joy sooner than if he had gone with the other group. (Note that since the person will feel happy forever after that point this won't be made up by a longer interval of feeling happy).<br /><br />This is a problem because now there is no fact about which is the better outcome. 
I have no idea how to solve this problem.

Richard Y Chappell (2014-08-13):

Thanks Toby, I do hope to write a book on this stuff eventually!

David Duffy (2014-08-13):

Smilansky ("Utilitarianism and 'punishment' of the innocent: the general problem", Analysis 1991, 256-61) brings up the settings for false positive and false negative rates in the judicial system. He looks at the case for increasing the sensitivity of the test (increasing the FPR and decreasing the FNR) to improve the law and order situation. A corollary is that transparency into the procedures that would cause indignation and alteration in preferences should be avoided, as it would lead to a net increase in aggregate suffering, so returning to the old conspiracy of consequentialist do-gooders against the deontologists ;)

Toby Ord (2014-08-12):

Great post Richard. It is nice to see so many of your ideas come together in a summary such as this. You could write a great book (or encyclopedia article) on consequentialism if you wanted, presenting it in an appropriately human light.

Anonymous (2014-07-30):

Thank you for your reply, Richard.
I suppose a pre...Thank you for your reply, Richard. I suppose a preference utilitarian would want to say that these acts are wrong because they violate the individuals' self-regarding preferences; however, I think I just struggle to see that as a moral assessment that is plausibly based on the consequences of these acts. In what sense is the frustration of their preferences a part of the consequences of the act? I think it's a bit too mysterious to envisage preferences as entities which exist independently of the subject, which can be frustrated without their being aware of it. It seems to me that the moral assessment is really based on certain counterfactual considerations, consideration about what the individuals would have felt if they had been aware of my behaviour. Counterfactual claims are both less mysterious than claims about subject-independent preferences and more intuitively relevant to our moral assessments of behaviour; they are not claims about the effect my act will have on certain free-floating preferences, but rather about whether the individuals affected would have consented to my acting that way if they had been in a position to choose.<br /><br />Does the trade-off test reduce my reluctance to call that a form of consequentialist moral reasoning? Not really. I may well judge that one such violation is worth it to prevent five similar violations, because the normal constraints on violation are defeasible given special circumstances. But those circumstances need not be just any situation where the frustration of some preference would prevent the frustration of a greater number of similar preferences, thereby reducing the quantity of frustrated preferences overall. The defeasibility criteria might be more demanding than that - the fact that you suggested five similar violations, rather than just two, speaks to our intuitions that the criteria really are more demanding than consequentialism would assume. 
Anonymous (2014-07-29):

I read Lenman's paper last night and, I have to say, this is the first time that my confidence in consequentialism has been threatened. I highly recommend it, Richard, and I'd be very interested in what you think about it.

In essence, your above reply is precisely what he attacks. Consequentialists have said this (that we can be "reasonably confident" that killing strangers has negative expected utility and giving to charity has positive expected utility) in the past, and it seemed intuitive to me until last night.

However, your agent-relative idea may avoid the objection. If what has agent-relative value is promoting the utility of the near future rather than the distant, Lenman's objection fails. Unfortunately I don't share the agent-relative intuition, and it seems to me to be one of the most important aspects of consequentialism: its sheer impartiality.

If a god told me for certain that the person I was about to save would turn out to be the ancestor of a super-Hitler in the very distant future (and that, all in all, the utility will be massively negative, whereas it would be massively positive if I let her die), I would not save her even if it meant the parts of the world close to me in space-time would be significantly improved.

Can you motivate your agent-relative version for me please?
Giving up impartiality seems to me a bit, well, selfish!

Paul K. (2014-07-29):

Oh I agree, but I reckon you've already encountered consequentialists who staunchly reject that future "world improvement" duties really do differ in kind from present duties of "individualistic concern." I guess I'm just welcoming you to the club of distinction-drawers-that-many-(most?)-consequentialists-bill-as-straw-graspers. But by all means jump on in, the water's warm!

Richard Y Chappell (2014-07-29):

Hi William, aren't you just objecting to hedonism here? A preference utilitarian, for example, can give direct weight to such violations of people's self-regarding preferences.

The key test for consequentialism vs. deontological side-constraints is whether we should be willing to bring about one such violation (say) to prevent five similar violations. If such violations are bad, then this should be a worthwhile (albeit distasteful) trade-off.

Anonymous (2014-07-29):

Has anyone mentioned 'painless violation' counterexamples? For example:

1. You stalk one of your employees back home for several weeks to harmlessly satisfy your desire to have power over them, without them or anyone else ever discovering that you did this.
During that time you gather intimate secrets about them, but never reveal these secrets or use them for any malicious purpose; you simply enjoy having the privileged information. If you hadn't done this, you probably would have satisfied your desires by treating them worse at work, giving them jobs they hate and refusing them holidays or promotions.

2. You administer a mind-altering drug to a friend who suffers from depression, secretly and without their noticing. The drug co-opts their will so that they do whatever you command them to do, and then wipes their memory of it once it wears off; but you only command them to walk your dog, which they really enjoy doing. If they hadn't been forced to do this, they would have sat at home in a stubborn melancholic state.

3. With their dying breath, your best friend specifies their preferences for how their mortal remains should be treated: they ask for a cremation and a reasonably modest funeral. Instead, and without anyone else knowing, you preserve their dead body in a large container of formaldehyde in your basement so that you can put it to use in various anatomical experiments. This allows you to become an accomplished surgeon more quickly (since there is a world-wide shortage of voluntarily donated cadavers) and save more patients' lives.

I've tried my best to stipulate that in these examples you are going a reasonable way towards maximising pleasure/happiness; it's not as if there are any obvious alternative ways of procuring the same amount of pleasure/happiness. Satisficing consequentialism would certainly permit these acts. It's also plausible that maximising consequentialism would consider you to have more reason to commit these acts than to commit some other acts which procure less pleasure/happiness but do not involve spying, betrayal or the violation of someone's autonomy.

For me it's these kinds of examples that make me reluctant to accept pure consequentialism.
They may be a bit outlandish, but actually there are plenty of opportunities in everyday life to betray people or violate their privacy or autonomy without harmful consequences.

Your response might be that you have good reason to think that your behaviour will be harmful: that it will erode trust, hurt people's feelings, cause outrage. And if it really were the case that you had no reason to believe as much, then you might accept that (in these very unusual circumstances) it is permitted (or even required) for you to spy, betray or violate someone's autonomy, but that this nonetheless reveals something very disturbing about your character. Perhaps my intuitions about these cases differ, but I feel that the acts are wrong not because of their expected consequences, nor because of what they say about your character (though both of these are considerations which might speak against so acting). They are wrong for an independent and sufficient reason: the individuals in question would object if they knew about it. I think that consideration matters deeply to us as moral agents, independently of expected consequences or evaluations of character.

Richard Y Chappell (2014-07-28):

Well, in the special case of merely possible people, sure. But when there are existing concrete individuals available, we can (and should!) care about them. :-)

Paul K. (2014-07-28):

Hi Richard, thanks. Just an autobiographical claim: I don't think my intuitive dislike of inter-personal aggregation is related to an evaluative intuition.
But perhaps I find it somewhat easy to accept the evaluative kind of aggregation, since we non-consequentialists can contain it in practice with deontic principles.

Re "making the world better": OK, right. I have read "Value Receptacles" and agree it advances consequentialist theory. (Indeed, I cite it in a forthcoming paper on aggregation :). But I took your relevant conclusion in "VR" to be in an important sense comparative: on 'fungibility', consequentialism doesn't fare worse than other views, since other views probably must also treat merely possible future people as fungible. If that's right, does it not mean that all views should take abstractions like "making the world better" as generating morally relevant reasons?

Richard Y Chappell (2014-07-27):

I think one could interpret such talk in a variety of ways. It's often most natural to read a phrase like "made the world a better place" as invoking agent-neutral value, in which case Tina's claim would be false. On that way of talking, agent-relative consequentialism tells us not to aim just at making the world a better place, because that neglects the agent-relative value of our nearest and dearest. But Tina could clarify that she just meant that by saving her own kid, she brought about the <i>better outcome</i>, i.e.
the outcome that she had most reason to prefer and to pursue, in which case her claim is true, and Alice should agree that Tina brought about the outcome that she (Tina) had most reason to prefer.

Angra Mainyu (2014-07-27):

Thanks for the clarification; I thought there were some assumptions of agent-independent value as well.

So, given agent-relative value, would people who are not aware of agent-relative consequentialism be talking past each other if they debated whether a certain course of action would make the world a better place, when the matter involves saving their nearest and dearest, or at least those of one of the people involved?

For example, let's say that Tina says that by saving her own kid over two unrelated ones, she made the world a better place, but Alice (who has no kids and is not related to Tina's) says that Tina is mistaken, and that Tina would have made the world a better place if she had saved the two other kids instead.

Is the proper interpretation of agent-relative consequentialism that Tina's claim is true, Alice's claim that Tina is mistaken is false, and Alice's claim that Tina would have made the world a better place if she had saved the two other kids instead is true?

Richard Y Chappell (2014-07-27):

Just to clarify, the agent-relative consequentialist posits agent-relative value. Saving my kids over yours is better-sub-me, but worse-sub-you. That is, we should rank the outcomes differently.
That's what makes it agent-relative.

Richard Y Chappell (2014-07-27):

Hi Paul, re: <b>nearest and dearest:</b> I think the relevant justification (given that value = desirability) is just that you have <i>more reason to desire</i> your own child's survival. That's all it is for that outcome to be better-relative-to-you. You're certainly not claiming that the other children's lives are less valuable in an agent-neutral sense: you can acknowledge that a stranger would have more reason to desire that the two be saved, and of course that their own parents would have even stronger reasons to prefer that outcome (making it better-relative-to-them).

Also, I think it's important to stress that (contrary to the common stereotype) consequentialist justifications shouldn't ultimately appeal to abstractions like "making the world better". We act for the sake of particular valued ends (e.g. people), not for the sake of the abstract aggregate of value. See my post on '<a href="http://www.philosophyetc.net/2012/11/general-and-particular-moral.html" rel="nofollow">General and Particular Moral Explanations</a>', as well as the separateness-of-persons material above.

re: <b>aggregation:</b> I don't mean to deny that a deontologist <i>could</i> prohibit aggregation via a purely deontic principle about permissibility (perhaps applied only to non-Pareto cases). I just think the intuitive force of the cases is undermined once we realize (via the iteration cases) that we must reject the intuitive <i>evaluative</i> claim that the outcome with the big harm is <i>worse</i> than the outcome with the many smaller harms. (See my exchange with Doug in the comments.)
Its seeming worse is what was doing all the intuitive work, or so it seems to me.

Angra Mainyu (2014-07-27):

Hi Paul,
On the issue of agent-relative goodness, my impression (I might be missing something) is that the conclusion that your child's existence makes the world better than if she died and two other children lived is not only unsavory, but incompatible with this modified version of the consequentialist theory, which still holds that there is an agent-independent account of the good, even if each agent has different goals to promote (i.e., as Richard said in his reply to one of my posts raising the issue of weird agents, each person gets to "promote the good of all, but especially your friends and family").

Given the agent-independent account of the good, it seems it's not the case on this agent-relative theory that your child's existence makes the world a better place than the existence of the two others. If it were, the parents of each of the other two could similarly assess that the existence of their child makes the world a better place, and if that were a consequence of the theory as well, the result would be a contradiction.

Moreover, someone not related in any way ought to save two over one, according to this theory (by aggregation), so the existence of the two other children is better (makes the world a better place, etc.).

So, it seems to me that the agent-relative type of consequentialism is committed to the view that the parent in question ought to promote <i>the overall bad</i> in such circumstances: given that the theory rejects moral distinctions between actions and omissions, it seems that you would be promoting the overall bad, even if by promoting the good of your family.
Paul K. (2014-07-27):

(I meant to describe Dougherty-type cases as ones of risk-imposition, not merely ones involving risks of death. Obviously Parfit's case involves risks of death too.)

Paul K. (2014-07-27):

Hi Richard, Good stuff here (as usual). Regarding aggregation, I agree that iteration cases are useful to think about. But I'm not sure how worried the anti-aggregationist must be. The traditional cases of aggregation were pretty much "pure" cases of inter-personal aggregation; I'm thinking here of the headaches-vs-lives cases, for example. In those cases, there really does seem to be a conflict between one person's interests and the interests of the many. But in a case like Parfit's "minutes" case, everyone benefits by adopting the iterative policy of granting single minutes to the many. It's hard for me to see how the permissibility of the minutes policy entails the permissibility of inter-personal aggregation in the standard cases. Perhaps the consequentialist has an argument to that effect, but I don't yet see it.

I think a stronger argument stems from iterative cases involving risk of death, like the one discussed by Tom Dougherty in his JESP article (http://www.jesp.org/PDF/aggregation-beneficence-and-chance.pdf). Here I think the non-consequentialist should respond similarly, invoking something like the ex ante Pareto principle: everyone's life is made better off by policies permitting risk-taking, even though we know someone will die while the group takes risks for even small benefits.
But this of course forces the non-consequentialist to explain why a more global ex ante Pareto (and its attendant average utilitarianism) is problematic. When it comes to iterative cases, I think that is where the action is. (The exchange between Kamm and Gibbard in *Reconciling Our Aims* is useful on this.)

Regarding the Nearest and Dearest objection: despite all my non-consequentialist intuitions about inviolability and distributive fairness, the Nearest and Dearest conviction seems to me the last I would be willing to give up. It is too hard for me to believe that the only normative reason I have to promote the well-being of my nearest and dearest is that this is a way to promote impersonal goodness. And I find the move to agent-relative goodness problematic as well. If I chose to save my child's life instead of saving the lives of two other children not related to me, this situation would be all the more tragic (I would think) if I had to justify my action by saying my child's existence makes the world better than it would be if she died and the other two children lived. So I'm curious: is there a brand of agent-relative consequentialism that would permit my saving my child but would free me from having to say this unsavory thing about the other children's lives?

Angra Mainyu (2014-07-27):

True, most of the objections I raised would not, on their own, favor either deontology or virtue ethics over consequentialism, as long as you don't include certain metaethical views in the definition of consequentialism; but taking consequentialism as a theory of all-things-considered reasons which holds that agents ought to bring about the good, I got the impression they were included.
As I mentioned, though, I wasn't sure this was the sort of objection you would be interested in, and I see now that you don't seem to include those metaethical views in the definition of consequentialism, so point taken; I won't insist on any of those objections in this thread.

On the other hand, the objection I raised in my second post in the thread definitely favors deontology over consequentialism, even if it involves some metaethics lurking there as well. While it is not an argument for deontology as the full answer (e.g., it takes no stance on whether, when it comes to moral goodness, a virtue-ethics approach is correct), it does support deontology at least when it comes to moral obligations, even without denying that consequences matter in many cases; they would matter secondarily, because of rules allowing for it, while the "do"s and "don't"s would be primary. So, I think that objection is still on target.

On the issue of moralists, thanks for the clarification on what you mean, but I don't agree with the view that a committed moralist of any kind would adhere to the views you mention. For example, the following are committed moralists, by that definition:

a. Alice holds that it's always irrational for non-psychopathic human beings to behave immorally, and is morally judgmental of them. She also is morally judgmental of psychopaths in the usual sense of the terms, since she judges them and their actions immoral and blameworthy, and judges them evil in many cases, even if she does not take a stance on whether their immoral actions are always irrational. As for weird AI, aliens, etc., she holds that it would be irrational of some of them (in some cases, depending on their minds) to bring about the good, and in fact that some of them even rationally ought to bring about bad results (if they are, say, z-good).
Maybe bringing about what they rationally ought to bring about would be immoral, or maybe they would not be moral agents at all, but more like lions or viruses, which can bring about bad results and aren't moral agents (even though, unlike lions or viruses, these beings would be capable of rational reflection on a level similar or superior to that of humans); Alice takes no stance on that. She leans towards naturalist externalism about morality, but towards constructivism about reasons.

b. Bob's stance is like Alice's, with the difference that he holds that it's always irrational for all human beings to behave immorally (so he disagrees with Alice about the psychology of psychopaths).

There are other variants.

On a different note, I've been thinking about your suggestion of an agent-relative consequentialism, but I'm not sure it would be consequentialism anymore. In fact, it seems to me that the consequence is that sometimes some people ought to promote the bad.

For example, let's say that Alice's daughter is at risk of dying in a burning building, and so are 5 other kids; she can save either her daughter or the others (whichever she tries to rescue first) without serious personal risk, but very probably not both. It seems plausible she ought to save her daughter; that would be promoting the good of her family first. However, at the same time, that's promoting <i>the bad</i> all things considered: the result in which one is saved but 5 die is overall bad, and the lack of an action/omission distinction indicates she's promoting the bad by not saving them instead.

Richard Y Chappell (2014-07-27):

A "moralist" is someone who's morally judgmental of others.
I was using it as rough shorthand for the "moral rationalist" position that amoralism is irrational. (I agree that you can be committed to acting morally as a matter of personal values, choice, or preference, without being a "moralist" in my sense.)

In general, I worry that you're running together issues in normative ethics and meta-ethics. I take it that you object to my brand of moral realism (which includes moral rationalism), and I'm happy to explore that disagreement more in another thread (e.g. I don't think it's so clear that the zyntomans are talking about z-goodness rather than goodness; it depends in part on whether they take themselves to be fundamentally fallible in the same way that we Earthly robust realists do). But consequentialism as a normative theory is independent of all that. You could combine it with expressivism, or with non-rationalist ("externalist") naturalism, etc. Correct me if I'm wrong, but I don't see that the objections you raise are reasons to favour deontology over consequentialism, right?

Angra Mainyu (2014-07-26):

With regard to the weird and smart aliens, AI, etc., I'm not sure whether they would all be (or all are, or will be; chances are there either are or eventually will be such beings somewhere in the universe) moral agents who behave immorally, or not moral agents at all (in which case there is nothing that they <i>morally</i> ought to do, though there plausibly still is something they rationally ought to do, or all-things-considered ought to do, given their own evaluative standpoint). But I would see no reason to believe they're making any mistakes, or that they all-things-considered ought to seek the good; it seems to me it would be <i>irrational</i> of them to seek the good,
given the values they have (and they would have to behave irrationally to change them). <br /><br />Also, I would argue that they wouldn't have moral language at all – similar language would not be moral language, just as color-like language is not the same as color language – and trying to debate would result in talking past each other. <br /><br />After all, language is determined by usage, and as a result of a very different evolutionary past – or design, in the case of the AI – they would care about properties and things very different from the ones we care about, and their language would allow them to talk about what <i>they</i> care about, not about what we H. sapiens care about. <br /><br />The alternative – that there would be genuine disagreement about morality, that they would all be making some kind of mistake, etc. – seems just too improbable given evolution. So, <i>if</i> the type of consequentialism you have in mind is committed to that alternative (if not, please clarify), then I would say this is an objection distinct from the ones you listed – and one I find decisive. <br /><br />Regarding whether they have the “wrong values”: do you mean it's <i>irrational</i> to have those values, or immoral, or both? <br />I get the impression you're saying it's irrational (and plausibly also immoral, though I'm less confident in that interpretation). <br /><br />Personally, I see nothing irrational in having those values, as long as they are consistent; in fact, it would be instrumentally irrational of them to seek the good. Still, I suppose there might be more than one usage of “rational” in this context, and if so, they might be irrational in some sense if they don't seek the good (but still instrumentally irrational if they do). <br /><br />However, that is not crucial. Assuming that that's the case (i.e., that failing to seek the good would be irrational, in some common sense of the word), that would seem to me like an issue about the meaning of “rational” and “irrational” in English (and, if so, plausibly of other terms in other human languages); but then, the reply given in the case of morality can be given in the case of rationality. For example (simplifying, and assuming for the sake of argument that humans morally ought to seek the good, since this is not crucial to the objection either): suppose some intelligent aliens with complex language (let's call them “zyntomans”) evolved very differently, have minds very different from ours, and generally are instrumentally rational in bringing about what they value, but, as it happens, have <i>irrational</i> values, so that it's irrational for them to seek the z-good (instead of the good). For that matter, seeking the z-good would plausibly be z-rational, and seeking the good z-irrational. When they talk, they talk about z-rationality, the z-good, etc.

-- Angra Mainyu

Angra Mainyu (2014-07-26, 19:43):
Richard,<br /><br />Thanks for the clarification on what you mean by “agent-relative consequentialism”. I will think about other objections one might raise. <br /><br />That aside, I don't think the view I described is subjectivist in the colloquial sense of the word, just as the fact that there are other species that have things similar to color (in terms of perception, function, etc.), but which aren't color (e.g., their perceptions are associated with different parts of the spectrum), doesn't mean that color statements are subjective, that there is no objective fact of the matter as to whether a traffic light was green when the defendant crossed it, etc., in the usual, colloquial sense of those expressions. So, I think color is objective even if aliens have alien color (depending on the specific aliens), and the good is objective even if aliens have alien good. (More precisely, there is some variation in normal color vision among humans, but that would complicate the example without being crucial to the point I'm making.) <br /><br />Granted, technical usage varies, and many philosophers would call that view “subjectivist”, but I think it's a bad choice of words; I'd rather stick to the usual meaning of the terms. <br /><br />Terminology aside, I don't agree with the assessment of committed moralists, either. <br /><br />In particular, I don't think that making proper moral assessments, being committed to behaving in a morally proper manner, and being consistent about it commit one to claiming that weird AI, or aliens from another planet with a psychology radically different from ours, would be somehow mistaken about something.
<br /><br />If you meant something else by “committed moralist”, please clarify; but describing a person who does not hold the views you mention as not a committed moralist is something I would object to, as it gives the impression that they're somehow less committed to behaving morally, defending true moral claims, or something along those lines, and I would argue that that's not the case. <br /><br />All that aside, I will address your other points in the post below. (I gather you disagree with most or all of what I will post below, but I'm outlining these objections because you expressed bafflement at the existence of so many non-consequentialists, and asked us to raise our favorite objection, so I'm explaining where I'm coming from as a non-consequentialist. While I don't have a single favorite objection, I find these ones – among others – decisive.)

-- Angra Mainyu

Richard Y Chappell (2014-07-26, 16:31):

Hi Luis, perhaps you're right! I guess my naive bafflement remains resilient in the face of a merely general expectation that <i>there are</i> (somewhere) plausible-seeming reasons to favour deontology. What I really need is to become more closely acquainted with such reasons, and in a way that helps to bring out their plausibility. (For the "standard" objections, addressed in my post above, just strike me as very <i>bad</i> objections to consequentialism.
I'm not convinced that, on reflection, anyone should think they plausibly favour deontology at all -- except, I guess, for certain cases where others just have different intuitions -- but such controversial cases, while they might reasonably motivate deontology for the particular people who are convinced by them, are not well suited to be considered clear "counterexamples".)

-- Richard Y Chappell