Building on my previous post: Suppose there is some worthwhile goal G (e.g. happiness or general utility), which is best achieved by an "indirect" strategy, i.e. by aiming at goals S other than G itself. What is the normative status of the strategically recommended goals S, especially in those particular instances where they conflict with G?
We have reason to achieve G. But I am more likely to achieve G by adopting S as my goal instead. So there are instrumental reasons to aim at S rather than G. If I know all this, my apparent and objective reasons will coincide, so the demands of rationality coincide with what is objectively best: namely, to achieve G by aiming at S instead.
This may seem puzzling. Rationality tells us to aim at the good, or do what seems best, i.e. maximize expected utility (for whatever scale of "utility" we're interested in). But the whole idea of the indirect strategy is to be guided by reliable rules rather than direct utility calculations. One effectively commits to occasionally acting irrationally (in the 'local' sense), though it is rational/optimal to make this commitment. Parfit thus calls it "rational irrationality".
But we may question whether it is really irrational to abide by the rules (against apparent utility) after all. We adopt the indirect strategy because we recognize that our direct calculations are unreliable. The over-zealous sheriff might think that torturing a terrorist suspect would have a high expected utility. But if he recalls his own unreliability on such matters, he should lower that expected utility accordingly. As a good indirect utilitarian, he believes that in situations subjectively indiscernible from his own, the best results will generally be obtained by respecting human rights and following a strict "no torture" policy. Taking this "meta" information into account, then, he should reach the all-things-considered conclusion that expected utility is maximized by refraining from torture.
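The sheriff's deliberation can be sketched as a toy expected-utility calculation. All the numbers below are illustrative assumptions of mine, not anything from the argument itself: the point is just that a first-order estimate favouring torture can flip once it is discounted by the agent's own track record in subjectively similar situations.

```python
# Toy model of the sheriff's deliberation. All numbers are
# illustrative assumptions, not drawn from the post.

def adjusted_expected_utility(naive_eu, p_calculation_correct, eu_if_wrong):
    """Discount a first-order expected-utility estimate by higher-order
    evidence of reliability: with probability p the calculation is right,
    and with probability (1 - p) the true utility is eu_if_wrong."""
    return p_calculation_correct * naive_eu + (1 - p_calculation_correct) * eu_if_wrong

# First-order calculation: torture looks good, abstaining looks neutral.
naive_torture, naive_abstain = 10.0, 0.0

# Higher-order evidence (assumed): in subjectively similar cases,
# "torture looks good" calculations have been right only 20% of the
# time, and when wrong the results were badly negative. Abstaining
# is low-variance either way.
eu_torture = adjusted_expected_utility(naive_torture, 0.2, eu_if_wrong=-50.0)
eu_abstain = adjusted_expected_utility(naive_abstain, 0.9, eu_if_wrong=-1.0)

print(f"adjusted EU(torture) = {eu_torture}")   # 0.2*10 + 0.8*(-50) = -38.0
print(f"adjusted EU(abstain) = {eu_abstain}")   # 0.9*0  + 0.1*(-1)  = -0.1

# All things considered, the no-torture rule maximizes expected utility.
assert eu_abstain > eu_torture
```

On these (stipulated) figures, the all-things-considered expected utility of abstaining exceeds that of torturing, even though the naive first-order numbers say the opposite.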
Global reasons thus entail local reasons. It's not that one must follow strategy S even when something else seems more likely to achieve G; a broadened perspective will lead the agent to recognize that S itself is what's most likely to achieve G. The conflict is only with prima facie "seemings". His all-things-considered judgment of "what seems best" should in fact coincide with his general strategy. So "rational irrationality" need not be genuinely irrational at all. It is irrational only in the restricted, "local" sense in which we take into account first-order evidence alone and fail to consider higher-order issues of reliability and so forth. But rationality, simpliciter, is surely an "all things considered" rationality. And when we consider all things, the apparent conflict dissolves.