It can be useful to formulate moral theories in terms of their implications for normative reasons, since this brings their substantive commitments into view. For example, I recently argued that this methodology allows us to deflate global consequentialism into mere act consequentialism. I now want to see what it can tell us about rule consequentialism.
As with any consequentialism, we begin with some theory of the good (equivalently, if we accept a 'fitting attitudes' analysis of value, a theory of what's desirable). The distinctive thesis of rule consequentialism (RC) is to derive our reasons for action from the recommendations of the best rules, rather than straightforwardly recommending the best action. RC thus implies that the act we ought to perform may not be the most desirable act. It may be that we should keep a promise, while hoping that we won't (and immediately regretting that we did). This seems odd, and perhaps even incoherent.
Alternatively, the Rule Consequentialist could hold that the fact that the best rules recommend φ-ing makes the action not just actionable ('right'), but also desirable. At this point we may question whether the view is still recognizably consequentialist. It seems to be treating reasons for action ('the right') as prior to reasons for desire ('the good'), which one might reasonably take to be diagnostic of deontology. What the rule consequentialist now treats as fundamental is not value generally, but merely a component of value: welfare, say. Their more general value theory also accords intrinsic value to acting according to the welfare-maximizing code. This also seems odd.
The Rule Consequentialist might respond by restricting the fitting attitudes analysis of value. Much as some theorists restrict 'value' to objects that provide us with agent-neutral reasons for desire (even though they think we may have agent-relative reasons to desire other things in addition), the rule consequentialist might analyze 'value' in terms of fundamental reasons for desire, i.e. reasons that are not derived from reasons for action. They could then maintain that only welfare is valuable, and hence that their code is selected on the basis of value generally, compatibly with maintaining that it's desirable to perform disvaluable acts that accord with the valuable code. Perhaps this wording casts the view in a better light. But whatever words we use to gloss it, it is the fundamental claims about normative reasons that we need to evaluate for plausibility.
Which option is more plausible? Should rule consequentialists take themselves to be advising undesirable actions? Or would they have us revise our preference ordering over possible worlds, in such a way that their 'ideal code' is selected not on the basis of all-things-considered desirability, but only on the basis of a restricted set of our reasons for preferring some outcomes to others? Neither option seems especially appealing...