Full-blown moral nihilism strikes me as unnecessary. A meaningful notion of 'the good' can be grounded naturalistically, so we needn't worry that it commits us to metaphysical flights of fancy.
I should begin by conceding that the universe doesn't care what we do or how we fare. Value doesn't come pre-built into the fabric of the cosmos. If something matters, it's because it matters to us. And, indeed, things do matter to us. We have desires, and their fulfillment is of value to us.
So I think the existence of agent-relative non-moral value is particularly difficult to deny. Surely everyone would agree that some things can be better or worse for me. (I argue for the more specific claim that what is good for me is the fulfillment of my desires, but the specifics are not important here.) But once we grant the existence of non-moral value, we can easily construct an agent-objective aggregate, which could plausibly be equated with moral value.
There's a particularly nice argument to this effect in Stephen Darwall's Philosophical Ethics (p.125), inspired by J.S. Mill:
1) Morality, by its very nature, is concerned with what is good from the perspective of the moral community.
2) What is good from the perspective of the moral community is the greatest amount of what is good to the individuals comprising it.
3) What is good to any individual is that person's pleasure or happiness.
The details of (2) and (3) could be revised without damaging the overall force of this argument for naturalism. For instance, one might replace the maximization principle of (2) with a more Rawlsian 'maximin' principle, or something along those lines. The result would no longer be utilitarianism, but it would be no less naturalistic. Similarly, you can plug your favourite account of wellbeing (or non-moral value) into premise (3). The overall structure is flexible enough to handle it. Really all we need is the eminently plausible premise (1), in conjunction with any naturalistically specifiable account of "what is good from the perspective of the moral community" (which shouldn't be too difficult to provide).
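As a purely illustrative sketch (not part of the original argument), the two aggregation principles contrasted above can be stated as simple functions over hypothetical individual welfare scores; the numbers are invented for the example:

```python
def total_utility(scores):
    """Classical utilitarian aggregation: the sum of individual goods."""
    return sum(scores)

def maximin(scores):
    """Rawlsian-style aggregation: the welfare of the worst-off member."""
    return min(scores)

# The two principles can rank outcomes differently.
outcome_a = [4, 7, 2]   # higher total (13), but worst-off member at 2
outcome_b = [4, 4, 4]   # lower total (12), but worst-off member at 4

assert total_utility(outcome_a) > total_utility(outcome_b)
assert maximin(outcome_b) > maximin(outcome_a)
```

Either function yields a naturalistically specifiable account of "what is good from the perspective of the moral community"; the argument's structure is indifferent between them.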
When ethical naturalism is this easy, why would you ever resort to nihilism?
I suppose I'm not sure quite what you're going for when you talk about 'moral nihilism'. Sometimes the phrase means skepticism about the possibility of morality; sometimes, however, it means skepticism not about morality's possibility but about its point. I take it that Quentin Smith's moral nihilism is one possible version of the latter. Such a moral nihilist might well agree with your argument, and yet hold that it doesn't matter: what is good from the perspective of the community has no particular privilege over what is good from any other perspective, or being moral is a pointless activity, or some such.
I take it you were dealing with moral nihilism in the first sense; I think your argument works very well against moral nihilism in that sense.
I agree, for the most part, with Johnny-Dee's criticisms. Even though I believe morals can be grounded naturalistically, I think the method you put forth is flawed. It could be that the greatest desire-fulfillment in a society would come from the murder of all x's (where an x is a person of a given race, religion, whatever). If there is any morality at all, I would want to say that these people have an absolute right not to be killed simply for the pleasure of others.
Also, as John points out, your method might entail some sort of moral relativism, something that I have argued is very strange indeed.
Like you, Richard, I too think that morals can be grounded in a society, but not by this utilitarian strategy. Instead, I think we can use intersubjective rationality. I guess you could call this a naturalized Kantian approach. Accordingly, I think ethicists are on the right track by using our rationality to argue back and forth about ethical arguments; the only problem is the lack of agreed-upon first principles. I believe, however, that this problem can be overcome with time.
Brandon, you're right that I was only talking about the first sort of skepticism.
JD - when did I ever mention natural selection? I think evolutionary ethics is entirely wrongheaded, probably for much the same reasons you do.
I'm also no fan of cultural relativism. I take the moral community to be universal - I understand morality to be concerned with what is good for everyone, not merely 'good for my tribe'.
I also think morality is quite independent of people's beliefs about it, so issues of consensus (or lack thereof) are irrelevant. I guess talk about "the perspective of the moral community" might be misleading in that sense; I did not mean what the moral community believes to be good. I meant what really is good for them, in objective fact. (I've discussed this a lot in the past, just follow some of the links, e.g. the desire fulfillment one.)
Also, note that (3) merely makes claims about individual welfare. It should come as no surprise that self-interest and morality often conflict. I would count it a mark against a theory that did not recognise this.
Chris' first objection is more on target. Like all forms of consequentialism, this one will have some counterintuitive results. The worst could probably be avoided through minor tweaks; for example, choosing a maximin aggregation principle (or the like) in place of premise (2). I prefer a different strategy, however, which I'll discuss in future posts.
(Thanks for the comments!)
I don't know what naturalists you've been reading. (Whoever it is, forget them and try, say, Peter Railton instead.)
I'm simply suggesting that morality is [by definition] grounded in what is good for us (collectively), and that this in turn is grounded in natural facts. If I am poorly off, that is because the natural state of affairs is such that I am severely constrained in the pursuit of my goals.
My general strategy is to begin with a naturalistic account of individual wellbeing (or 'non-moral value'). That isn't too difficult to provide. (We can all recognize when people are better or worse off due to the natural facts of their situation.) And from this, I suggest, we can move to specifically moral value through some form of aggregation. Such aggregation yields conclusions about what is good for everyone, or in everyone's interests, and this is precisely what morality is all about.
Now, it's true that evolution will have influenced what is good for us, in the mundane sense that we have evolved certain needs (e.g. food, water, etc.) that any moral system must take into account. But that's not much of an issue.
One of the things that I found most interesting about Rawls’ work was that he tried to put together a minimal set of ethical/social principles for a social contract from a disinterested perspective. I think that this is naturalistic in the needed sense. All you need to participate in the debate about an appropriate contract is a sense of the general “human condition” as it is now. Knowing evolutionary details perhaps could help, but it isn’t necessary. For example, such ideas could be used to talk between groups with quite different social mores. That is, assuming they were not offended by the idea of constructing a social contract in the first place.
Consider questions like whether we should help suppress “honor killings” in cultures where these are supposed to be an important part of their religious/ethical identity. Using something like Rawls's method, I think we can answer “yes, we should” without worrying about “imposing our values” on someone else.
I’ve not read any of the literature for years. Does anybody know if this kind of work has been done?
I've discussed that sort of thing here. (I think Rawls is the wrong way to go, though.)
I looked at your “Veil of Ignorance” thread. Rawls dropped that device in his later writing, perhaps because all the disputes about what the veil really did or did not do meant that it wasn’t the aid to thinking that he intended it to be. Sadly, it has been 15 years since I was reading Rawls, and my memory of the details has grown thin. But I do remember that I was more impressed with his post-Theory of Justice work that was published as individual papers than I was with that particular book. Since I’ve been thinking about these issues recently, maybe I should go reread those writings.
But the Veil aside, I think he had the right idea in trying to come up with a minimal set of principles for a liberal society. There is always tension between the abstractness of an ethical theory and all the concrete particulars of the kind of creatures we humans find ourselves to be. I always thought Rawls took this seriously and tried to design a method to address it. So besides finding the details of his “justice as fairness” interesting, I thought that he tackled the right kind of problem in developing his theory.
Your “Multiculturalism vs Individual Rights” was very much the kind of thing I was thinking about. I do agree with the comment writers that that particular mechanism seems unworkable since I think we would just be guessing—and probably filling in our own prejudices—about the relative preferences. But in keeping with the theme of this thread, why did you run your case using a single person from the other society? I don’t yet know quite why I think so, but I suspect that it is a better idea to compare the structures of the two societies rather than just the structure of individual lives within those societies. Does this make sense? Any thoughts?
ReplyDelete"Surely everyone would agree that some things can be better or worse for me."
As an error theorist I don't agree with this. To say that something is "better or worse for me" is to use evaluative terms in a way that I simply don't accept. In fact, it's quite common to debate what's good for someone: I would argue that maximising your long-term happiness is not always good for you if you do it at the expense of other qualities you possess, while some ethicists would argue that it is.
If we did give some determinate definition of "good for you" and then "good for the moral community", it's hard to see that it would have any normative qualities. Your moral naturalism sneaks in non-natural ethics through the backdoor via its reliance on "better or worse for me".