Naive moral relativism is the view that 'X is wrong' is true for you iff you disapprove of X (or something along those lines). I don't think very highly of this view, largely because it entails infallibilism: the mere fact of your holding any (arbitrary) moral attitude suffices to make it "right for you". This makes moral progress impossible, and hence reflection superfluous. I find that repugnant. It implies that I'm already as morally discerning as I can possibly be. (What a depressing thought! I could've sworn there's much more for me to learn yet.)
Naive relativists sometimes ask what objective moral facts are meant to do. Abstract objects can't prevent murders, for example. (Of course, being causally impotent, they can't do anything. That's our job.) But I've explained before that this misses the point. We need objective morality not to causally influence the world, but to provide an ideal standard to which we may aspire. (Much like historical truths provide an ideal for historians to pursue.) Moral objectivism offers us a goal, not the means to get there. Note also that the reason for idealizing ethics is primarily to enable the (personal or collective) endeavour of rational self-improvement, not the political project of influencing others.
[Doctor Logic once objected: "The only basis you have for selecting an absolute morality is your subjective opinion." But, as my response explained, this is either trivial or false. It's trivial that our beliefs reflect what we ("subjectively") judge to be the case. But it's false -- or at least question-begging -- to claim that there are no reasons for concluding one thing rather than another. Morality is no different from any other form of inquiry in this respect. Unfortunately, the good Doctor continues to advance that argument, neglecting to note that he might just as well ask what historical truths are "really good for".]
Curiously, there is a more sophisticated form of moral relativism which can avoid these woes, as I learned from Andy Egan's pre-talk this afternoon. The key is to introduce idealization without removing the agent-relativity. The resulting view goes something like: 'X is wrong' is true for you iff your idealized self would disapprove of X. (The relevant idealization might concern what you would conclude under ideal rational reflection, if you had full factual knowledge and perfect reasoning skills, unlimited cognitive capabilities, etc.) It's similar to the kinds of constructivist non-cognitivism I favour, though Andy explicated it in a rather novel way:
Some (esp. indexical) statements are not about the world, but rather about your location in it. By saying "I am in Canberra," you locate yourself as one of the in-Canberra people. The claim is not about which possible world is actual, but rather about where (or who) you are within the actual world. Similarly, moral claims locate you according to the attitudes that would be held under idealization. To say "Theft is wrong!" is to locate yourself as one of those people whose idealized selves would share that moral attitude.
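To make the structure explicit (the notation here is my own gloss, not Egan's): write $i_w(a)$ for agent $a$'s idealized counterpart in world $w$, and $D(s, X)$ for "$s$ disapproves of $X$". Then the truth condition and the self-locating content come out roughly as:

$$\text{'}X\text{ is wrong'} \text{ is true for } a \text{ at } w \iff D(i_w(a), X)$$

$$[\![\,X \text{ is wrong}\,]\!] = \{\langle w, a \rangle : D(i_w(a), X)\}$$

On this rendering, naive relativism is just the degenerate case where the idealization function is the identity, i.e. $i_w(a) = a$. The sophistication consists entirely in prising the two apart.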
The great advantage of this view (over naive relativism) is that it grants us moral fallibility. Being non-ideal ourselves, we might be mistaken about what conclusions our idealized selves would reach. (And surely we must, in principle, defer to their superior judgment? I'm puzzled as to why anyone would hold naive relativism over this view.)
It also allows for genuine moral disagreement, on the assumption that the disputants' idealized judgments would converge. The question effectively becomes the shared one of what we (rather than just "I") would think under idealized conditions. On the contrary assumption of idealized divergence, however, apparently conflicting claims could turn out to be mutually compatible. (It might be that my idealized self would approve of theft but yours wouldn't. Then 'theft is wrong' would be true for you but not for me. You could affirm it while I deny it, and we could both be right.)
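In the same home-made notation (suppressing the world parameter, and rendering "wouldn't approve" as disapproval for simplicity), the divergence case is:

$$D(i(\text{you}), \text{theft}) \wedge \neg D(i(\text{me}), \text{theft}) \implies \text{'theft is wrong' is true for you and false for me}$$

Your affirmation and my denial are then both correct -- the apparent conflict dissolves into compatible self-locating claims.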
The base view seems pretty hard to deny, actually. After all, if we add the assumption that all rational agents would ultimately converge on the same moral attitudes, then we arrive at the sort of moral universalism Michael Smith advances, and to which I'm very sympathetic. Moreover, it seems right that universalism requires this convergence fact. If the convergence claim is false, and even fully informed and ideally rational agents could disagree morally, then there would seem to be no basis for universal moral truths. (The same plausibly holds for all a priori endeavours, e.g. metaphysics.) The most we could get, in cases of divergence, would be agent-relative truths. Is this better than no truth at all?
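To spell out the convergence point in the same (home-made) notation, as a sketch: if idealized attitudes are agent-invariant, truth-for-an-agent collapses into plain truth.

$$\big(\forall a, b, X:\ D(i(a), X) \leftrightarrow D(i(b), X)\big) \implies \big(\forall a, b, X:\ \text{'}X\text{ is wrong'} \text{ is true for } a \leftrightarrow \text{true for } b\big)$$

This direction shows convergence suffices for universalism; the argument above suggests it is also necessary. If the antecedent fails for some pair of agents, the relativization does real work, and agent-relative truth is the most the view can deliver.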
At least sophisticated relativism is still "objective" in the sense that it upholds the distinctions between belief and truth, appearance and reality, and -- most importantly -- between actual and ideal judgments. Because they recognize the possibility of defects in our present perspective, idealized conceptions of ethics carve out room in logical space for a sort of moral progress that is impossible under naive relativism or subjectivism. And I think that's what is really important for a meta-ethics we can live with. The possibility that others might have different ideal ends seems rather less of a worry in comparison to the sort of nihilism which admits of no ideality whatsoever.