Almost everyone agrees that you should break the rules if that's the only way to avoid disaster. But it seems intuitively objectionable that Act Consequentialism tells us to (say) break a promise whenever doing so would be even the slightest bit better than keeping it. Well, maybe. I agree that there's something troubling about the agent who breaks a promise the moment it seems like there's something (marginally) better he could do. But such an agent is not, I will argue, what a competent Act Consequentialist would look like.
As I argued in 'Defective Deliberateness', competent agents can't be constantly deliberating. We must also recall that overt calculation often goes awry. So the competent Act Consequentialist largely relies on rationality-enhancing dispositions and rules of thumb in his everyday life, pausing to reflect only when his well-calibrated sub-personal mechanisms alert him to the need (say, due to complex novel circumstances that his "auto-pilot" wasn't designed to deal with). Everyday promise-keeping is not exactly novel, so for the competent agent the question whether to keep the promise shouldn't even arise. It's a no-brainer.
(This is not necessarily because it's always clear that keeping promises is objectively for the best. But it typically is for the best, and on the odd occasion where it isn't, this almost certainly won't be clear. In that case, the possible benefit from breaking the rule is so marginal that it generally won't be worth the cognitive costs of attempting to assess the precise balance of reasons.)
But suppose the agent comes to consider the question anyhow. What should he conclude? We can stipulate that in fact the outcome would be marginally better if he broke his promise, but does the agent himself have any way of knowing this? Not easily, at least. (Among other things, he'd first need to consider the possibility of self-serving bias corrupting his judgment, and weigh the apparent benefits of rule-breaking against the long-run value of retaining a reputation for trustworthiness.) Maybe if he heard the booming voice of God reassuring him of this fact, then he could rationally go ahead and break his promise without further worry. But in ordinary circumstances -- which are what we're supposed to be concerned with here -- it's simply never going to be clear when rule-breaking is marginally beneficial. So the agent faces an immediate choice: he can (i) break the rule even though it's unclear to him whether any good would come of this; (ii) sink further cognitive resources into investigating a question that he probably shouldn't have bothered to ask in the first place; or (iii) simply keep his promise and turn his attention to more important matters. It seems pretty clear that, in this sort of case, option (iii) is the way to go.
In sum: breaking a rule will only be clearly worthwhile in cases where it is also of significant benefit (in which case we all approve of rule-breaking anyway). If it's only of marginal benefit, this fact typically won't be clear enough for a rationally self-doubting agent to confidently act on. And the low potential payoff means that it isn't really worth inquiring further: better just to stick with the generally reliable rule of thumb. So a rational Act Consequentialist generally won't be found engaging in marginally beneficial rule-breaking after all. He'd even share our intuition that there's something awfully dubious about any agent who would act that way.
This seems to me to defang the original objection. What do you think?