What exactly do we mean when we ask the deliberative question, "What should I do?" It's surprisingly elusive. With a bit of work, we can pin down a behaviouristic kind of answer -- specifying when it's appropriate to offer and to challenge various responses to the question. But I suspect that in a more fundamental, philosophical sense, it isn't really a well-formed, determinate question at all.
First, note that we're not asking what would be objectively best. (Consider Parfit's Mineshafts case. You know that one of option A or B will save all ten lives, but the other will save none, and you don't know which is which. Option C is guaranteed to save nine lives. Clearly, the answer to the deliberator's question is "choose option C", even though this is the one option we can know is not objectively best.)
Perhaps we're asking (roughly) what would maximize expected value for the agent. This explains why option C is the answer in the ordinary Mineshafts scenario: relative to the agent's knowledge, it is worth 9 expected lives, whereas options A and B each have expected value of only 5 lives saved. But (as Kolodny and MacFarlane point out) this standard view has trouble accommodating our assertoric practices. For suppose Informant comes along and tells Agent that he's made a mistake, and in fact it's option A that is guaranteed to save nine lives, and options B and C that are the all-or-nothing gambles. Informant claims to reject (disagree with) Agent's previous answer to the question what he ought to do, and Agent himself seems likely to acquiesce in this by repudiating his previous response. ("I was mistaken to think that I should pick option C; really I should do A.") What's worse, we can further imagine an omniscient observer saying, "No, really he should pick option B -- that's the one that'll save all ten lives."
Here's the dilemma: is there a single, constant question, to which these various responses offer conflicting answers? Theoretically, it's difficult to see how this could be so. There's the question what maximizes expected utility relative to this evidence or that -- but these are different questions, so the diverging answers don't really conflict. On this picture, Agent should respond to Informant by saying, "Ah! You've changed my epistemic context in a most helpful manner. Granted, I answered my initial question [what ought I to do relative to my then-available evidence] correctly. But now I can ask an even better question: 'what ought I to do relative to my now-available evidence?' And, I agree, my answer to this question is 'pick option A!'"
(Compare ordinary context-dependent terms like the indexical 'now'. If tomorrow I say, "It is raining now," I won't thereby have to retract my current assertion that it isn't raining now. There is no single, constant question "Is it raining?" to which these are competing answers. There are only the more specific questions whether it is raining at this or that time and place.)
Unfortunately, when it comes to the deliberative question, this doesn't seem to be how our linguistic practices actually work. Instead, it seems, Agent will repudiate his previous answer, implicitly treating it as a competing answer to one and the same question (what should he do, period). My question is: does this make sense?
The relativist formally accommodates this behaviour by positing a semantics for 'ought' on which the truth of any token assertion 'Agent ought to phi' varies across assessors. We effectively end up understanding the deliberative question as having the constant meaning 'What should I do relative to the relevant evidence?' whilst allowing the relevant evidence to vary across assessors. Disagreeing as to what evidence is relevant thus translates into disagreeing about the deliberative question. But there's no absolute fact of the matter as to which evidence really is "relevant", and hence the correct answer varies from perspective to perspective, even when assessing a single token utterance.
I actually think that this 'relevant evidence relativist' (unlike the moral relativist) gets it right as a pragmatic account of when it's appropriate to make, challenge, and retract assertions. Intuitively, it seems appropriate for everyone involved to behave as if there were a single constant question here (unlike in the 'raining now' case). For example, it seems appropriate for Agent to initially judge that he ought to go with option C, and then to retract [not merely "move beyond"] this judgment when faced with new evidence.
But does that really answer my initial question? I guess Wittgensteinians would think so -- "meaning is use", and all that. But intuitively, it seems like there's a further question here: not just about what assertoric behaviours are appropriate, but the more 'metaphysical' question of what is really meant, and really true.
If we think this is a genuine further question, we may be unsatisfied by the relativist's answer, since it seems most plausible that, strictly speaking, substantive answers exist only for complete or 'absolute' questions -- "Is it raining at such-and-such time and place?", not just "Is it raining?". The general question, "What should I do?" is likewise incomplete until we fill in the missing parameter of whose evidence we're assessing this against. Granted, we can offer a sociological story about how it's useful for a community to adopt linguistic norms that allow us to treat this incomplete question as though it were complete -- "disagreeing" in practice even when there's not really any substantive proposition at stake. But then it looks like this really is just a "language game", lacking in philosophical substance. (As MacFarlane himself concludes in 'Relativism and Disagreement': "From lofty philosophical heights, the language games we play with [relativistic words] may seem irrational. But that is no reason to deny that we do play these games, or that they have a social purpose.")
Really, we should conclude, there isn't any single question here. Strictly speaking, it doesn't really make sense for Agent to retract his earlier judgment. And the apparent 'disagreement' (of Informant or the omniscient observer), though "appropriate" according to the rules of the game, is -- in a more important sense -- philosophically empty.