Thursday, June 08, 2006

Why We Need to Idealize Ethics

Naive moral relativism is the view that 'X is wrong' is true for you iff you disapprove of X (or something along those lines). I don't think very highly of this view, largely because it entails infallibilism: the mere fact of your holding any (arbitrary) moral attitude suffices to make it "right for you". This makes moral progress impossible, and hence reflection superfluous. I find that repugnant. It implies that I'm already as morally discerning as I can possibly be. (What a depressing thought! I could've sworn there's much more for me to learn yet.)

Naive relativists sometimes ask what objective moral facts are meant to do. Abstract objects can't prevent murders, for example. (Of course, being causally impotent, they can't do anything. That's our job.) But I've explained before that this misses the point. We need objective morality not to causally influence the world, but to provide an ideal standard to which we may aspire. (Much like historical truths provide an ideal for historians to pursue.) Moral objectivism offers us a goal, not the means to get there. Note also that the reason for idealizing ethics is primarily to enable the (personal or collective) endeavour of rational self-improvement, not the political project of influencing others.

[Doctor Logic once objected: "The only basis you have for selecting an absolute morality is your subjective opinion." But, as my response explained, this is either trivial or false. It's trivial that our beliefs reflect what we ("subjectively") judge to be the case. But it's false -- or at least question-begging -- to claim that there are no reasons for concluding one thing rather than another. Morality is no different from any other form of inquiry in this respect. Unfortunately, the good Doctor continues to advance that argument, neglecting to note that he might just as well ask what historical truths are "really good for".]

Curiously, there is a more sophisticated form of moral relativism which can avoid these woes, as I learned from Andy Egan's pre-talk this afternoon. The key is to introduce idealization without removing the agent-relativity. The resulting view goes something like: 'X is wrong' is true for you iff your idealized self would disapprove of X. (The relevant idealization might concern what you would conclude under ideal rational reflection, if you had full factual knowledge and perfect reasoning skills, unlimited cognitive capabilities, etc.) It's similar to the kinds of constructivist non-cognitivism I favour, though Andy explicated it in a rather novel way:

Some (esp. indexical) statements are not about the world, but rather your location in it. By saying "I am in Canberra," you locate yourself as one of the in-Canberra people. The claim is not about which possible world is actual, but rather where (or who) you are within the actual world. Similarly, moral claims serve to locate you according to the attitudes that would be held under idealization. To say "Theft is wrong!" is to locate yourself as one of those people whose idealized selves would share that moral attitude.

The great advantage of this view (over naive relativism) is that it grants us moral fallibility. Being non-ideal ourselves, we might be mistaken about what conclusions our idealized selves would reach. (And surely we must, in theory, defer to their superior judgment? I'm puzzled by why anyone would hold naive relativism over this view.)
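The contrast between the two views can be put schematically. (This is my own gloss, not Egan's formalism; $a$ is an agent and $i(a)$ is $a$'s idealized counterpart under full information and ideal rationality.)

```latex
\[
\begin{aligned}
\text{Naive relativism:} \quad &
  \mathrm{True}_a(\ulcorner X \text{ is wrong}\urcorner)
  \iff \mathrm{Disapproves}(a, X) \\
\text{Sophisticated relativism:} \quad &
  \mathrm{True}_a(\ulcorner X \text{ is wrong}\urcorner)
  \iff \mathrm{Disapproves}(i(a), X)
\end{aligned}
\]
% Fallibility falls out of the second schema: since a non-ideal agent can be
% mistaken about i(a)'s attitudes, Believes(a, "X is wrong") may hold while
% Disapproves(i(a), X) fails -- which is impossible on the first schema,
% where a's actual attitude settles the matter.
```

On the first schema the agent's actual attitude is the truth-maker, so error is ruled out by definition; on the second, the gap between $a$ and $i(a)$ is exactly the room for moral mistake and moral progress.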

It also allows for genuine moral disagreement, on the assumption that the disputants' idealized judgments would converge. The question effectively becomes the shared one of what we (rather than just "I") would think under idealized conditions. Though on the contrary assumption, i.e. of idealized divergence, apparently conflicting claims could in fact be mutually compatible. (It might be that my idealized self would approve of theft but yours wouldn't. Then 'theft is wrong' would be true for you but not for me. You could affirm it while I deny it, and we could both be right.)

The base view seems pretty hard to deny, actually. After all, if we add the assumption that all rational agents would ultimately converge to the same moral attitudes, then we arrive at the sort of moral universalism Michael Smith advances, and to which I'm very sympathetic. Moreover, it seems right that universalism requires this convergence fact. If the convergence claim is false, and even fully informed and ideally rational agents could disagree morally, then there would seem to be no basis for universal moral truths. (The same plausibly holds for all a priori endeavours, e.g. metaphysics.) The most we could get, in cases of divergence, would be agent-relative truths. Is this better than no truth at all?

At least sophisticated relativism is still "objective" in the sense that it upholds the distinctions between belief and truth, appearance and reality, or -- most importantly -- between actual and ideal judgments. Recognizing the possibility of defects in our present perspective, idealized conceptions of ethics carve room in logical space for a sort of moral progress that is impossible under naive relativism or subjectivism. And I think that's what is really important for a meta-ethics we can live with. The possibility that others might have different ideal ends seems rather less of a worry in comparison to the sort of nihilism which admits of no ideality whatsoever.


  1. But what is this idealization covering up? To paraphrase one New Zealand philosophy student, shouldn't I take something to be wrong because of the reasons that my idealized self would have to disapprove of it, rather than because of the mere fact that my idealized self would disapprove? I'm seeing visions of the Euthyphro dilemma.

  2. Oh, you shouldn't listen to New Zealanders ;-)

    Seriously though, I can't seem to make up my mind whether to take reasons or rationality as fundamental. Reasons seem more intuitive, and avoid Euthyphro problems. But they also seem more mysterious, whereas it seems easier to see how rationality might fit into the natural order. (Is this appearance illusory?)

    Though even if we take rationality as fundamental, I think we have more to go on than the "mere fact" of your idealized self's attitudes. He might hold those attitudes because they form a maximally unified and coherent set of attitudes, for instance. (I consider this a rationality-based account because the reasons aren't fundamentally independent of rational notions like 'coherence'.)

    In any case, I think the core argument of my post is consistent with either approach. We need the idealization, but it doesn't matter here whether it's fundamental or not. (That certainly is an interesting and important further question though.)

    Don Jr. - Perhaps my choice of words was misleading. I certainly didn't mean to suggest any "noble lies" are involved. I instead meant to clarify the theoretical role of moral truths, to answer the skeptic's challenge of "what they are for". Truth (moral or otherwise) is the goal of inquiry. But the possibility of truth, as an ideal distinct from one's actual beliefs, presupposes a kind of "objectivism" (cf. my concluding paragraph). It is in this sense that objectivism "provides us" with the goal that is truth. (It's the fact of objectivism that does the metaphysical work here, not our believing of it. Though believing it is also important, of course, so that we might then live by it. But if it wasn't really true then it couldn't really provide the goal I'm demanding here. I don't want a pretense of ideality. I want the real thing!)

  3. >had full factual knowledge and perfect reasoning skills, unlimited cognitive capabilities, etc.

    I wonder what exactly this means. What sort of reasoning would an ideal person have?

    Would such a "you" have the same values but infinite capacity (reasoning etc.)? Or consistent values (in whatever way you are inclined towards) and infinite capacity? Or views entirely foreign to you (but ideal) and infinite capacity?

  4. It's meant to be a purely formal and "value-neutral" change initially -- merely a matter of infinite capacity, perfect logical skills, immunity from cognitive bias or fallacies, etc. Then the rational pressure towards coherence would naturally lead one to hold more consistent values, and perhaps even some apparently "foreign" ones (if you weren't previously aware of all the implications of your views). But the changes are all simply a rational progression from your initial starting point, and I don't mean to beg the question by building any substantive conception of values into the idealization process. The hope is for substance to result from a purely formal rational procedure (in conjunction with the initial inputs of our present values).

  5. Seems to me some sorts of infinite knowledge imply certain results.
    Just for example, if we attribute infinite empathy to the ideal person (I expect you can see what that does). Or you could say that it doesn't affect empathy at all, and you could have an entirely non-empathetic person idealized with perfect knowledge etc.; you can probably see how that could work.

  6. I'd agree that we shouldn't confuse utility and truth. But I think we might interpret the skeptic more charitably as asking what theoretical role moral objectivity plays. He asks, "what reason do we have to believe moral objectivism?" It would be a mistake, as you rightly note, to attempt to answer this with practical reasons. But it's worth answering with theoretical reasons, I think, which is what I've tried to do here. I argue that there's an important sense in which we need to posit moral objectivism. (Much like we need to posit the existence of an external world, or of historical truths, etc.) It plays a vital theoretical role. And theoretical "usefulness" is, I think, a perfectly legitimate basis for accepting philosophical theses ("possible worlds" are a nice example of this).

  7. Hi Richard,

    I think you're coming around to my way of thinking, but you still have some distance to go yet. ;-)

    Actually, I wrote about some of these topics in one of my earlier posts (Are Omniscient Goods Indistinct?).

    I define my subjective morality as that which I think I should do. I can still be fallible because I may allow my near term desires to override my "better judgement." I can also be mistaken because I have limited ability to forecast the outcomes of my actions. So, it's clear that I aspire to act in the way that I should act if I had some form of omniscience (could better predict outcomes of my choices).

    So, we're in sync on idealization.

    However, your analogy between objective history and objective morality is flawed. Objective morality is a far stronger claim than objective science or mathematics. For the moment, I'll consider history to be a form of predictive science (e.g., if the Romans discovered America, one predicts we will find Roman artifacts in the Americas).

    Mathematics is objective because we can all agree that accepting certain axioms will lead to certain outcomes (theorems). Since mathematical axioms are not heavily value-laden, this is easy to do, and we accept and deny mathematical axioms at will as we consider different mathematical structures.

    In science, there is another implicit axiom: the measure of the goodness of a theory is its ability to predict future observations. We can all agree that Einstein's view of physics is superior to Newton's on this basis. Again, most, but not all, people accept science because they value prediction.

    However, an axiomatic system, S, does not compel an observer to accept S. Science cannot demand that prediction should be valued. Science objectively informs you about outcomes, but using that information is not morally right for you if you don't value prediction/outcomes. Science doesn't prove science should be valued.

    Objective morality is a very different kettle of fish. Morality is precisely the set of axioms that one should value. Yet, as with every axiomatic system, there can be no moral axiom that can compel one's selection of axioms. Moral axioms cannot tell you that you should value those axioms, they can only appeal to axioms you already value.

    That said, given some moral axioms, we might be able to determine what actions are consistent with those axioms. That much would be objective. But saying that one axiomatic morality is better than another is like saying that algebra is objectively better than geometry. It's a no go.

    Of course, 98% of humans might hold to a certain set of deeply fundamental moral axioms, even if their actions don't live up to their ideals. So, the objectivity of morality may not be a critical roadblock because we just happen to share many deeply-held values.

    BTW, you might think that we can render morality objective by making it descriptive, but that doesn't really help. Knowing how people will feel about an action or policy may help you decide whether you personally value the outcome of that action, but it cannot tell you whether you should value that outcome.

  8. Even if there are objective morals, is there an objective way to tell what they are?
    If there isn't, then is there a point in rationally debating it?

    When I debate such things with people one of my first assumptions is that I am never going to "prove" a moral position - all I can do is point out inconsistency in the other person's position and see where that tends to lead. But that doesn't PROVE the conclusion in the way that you might have a proof in maths (although you might be able to disprove something?).

  9. Hi Doc,

    You assume that there can be no self-evident axioms. But that seems false. It is self-evident that suffering is bad, just as it is self-evident that modus ponens is a valid rule of inference. Note that people are rarely skeptical of other normative fields, e.g. logic and epistemology. Everyone recognizes that there are things we ought to believe, and ways we ought to reason, quite independently of how we happen to feel about the matter. (It is no excuse to say that you reject the "axioms" of theoretical rationality. That would just make you irrational. Likewise with practical reason and ethics.) So why should the normative status of actions be any more problematic?

    But getting back to the present argument: I'm not sure that you concede enough ideality. You point to weakness of will and factual ignorance of how to achieve your ends, as two limitations on one's present perspective. But you still seem to be supposing that one's present ends or values are themselves beyond question (infallible). I see no basis for this assumption. If better reasoning and understanding would lead you (your idealized self) to adopt some different moral values, then shouldn't they take priority over your present, defective, set? We can reflect on our fundamental values, after all, and so I think it's important to recognize the ideal standard as a goal for such reflection. It's something we all can aim for (even if agent-relativity is true, so that it provides a slightly different target for each person).

    Finally, I don't see any reason to assume that the rational convergence thesis fails to hold. But if it does hold then this plainly entails a universalistic moral standard. (Again, irrational people might never achieve it, but so much the worse for them!)

    So I want to suggest that (1) you should broaden your view to encompass a greater degree of ideality, applying to values themselves and not merely the means to achieving them; and (2) you should grant the possibility of a truly objective or universalistic moral standard. (Or if you don't like these suggestions, I'd be interested to hear more about the basis on which you hope to reasonably reject them. Because I'm just not seeing it at present.)

    Genius - rational debate is the "objective way" to reach philosophical conclusions (or at least the best we have).

