There are many possible targets for consequentialist evaluation: acts, rules, motives, etc. This might lead one to ask which should be privileged: should we be act, rule, or motive consequentialists? But the question is ill-formed. The only principled answer is offered by global consequentialism: none is to be privileged; each of these targets may be assessed in terms of its consequences.
One may object: "Don't these perspectives offer conflicting normative advice?" An example (from Parfit, I think): suppose that Dad's love for Son is overall for the best. But on particular occasions, it leads him to make sub-optimal choices: e.g. benefiting Son when he could have netted a greater benefit for some strangers. One is tempted to ask, "How are we to assess Dad? Is he good because he has the best character of those available to him, or did he act badly in failing to perform the best action available to him?" Call this 'the integration problem'. But note that - again - the question is ill-formed. Those two assessments are perfectly compatible, so the answer is 'both'. He has the best character, and this led him to fail to perform the best action. What's the problem?
It's tempting to think that agential evaluation must be perfectly integrated, such that the best character (or motivations) will inevitably lead one to perform the best actions. Now, it's true that there is some connection here: acting on good motivations can be expected to lead to good consequences more often than acting from some alternative motivational set -- that's precisely what makes the former motivations 'better'. But this integration is imperfect, as the above example illustrates. The various targets for moral evaluation can come apart, and when they do it would be arbitrary for a consequentialist to assess all others on the basis of how well they integrate with a privileged one. When the best motivations pull us away from performing the best acts, we are neither wholly good nor wholly bad. Accurate evaluation requires differentiating the quality of the motives from the quality of the act.
My best attempt at achieving a wholly integrated perspective was to consider the agent's life as a whole (thus including their every act, motive, and all the rest). This seems an obviously privileged viewpoint, and - moreover - one that fully integrates all other perspectives. After all: any divergence from the aggregation of acts and motives specified in my best (most utility-promoting) possible life will necessarily lead to a worse outcome.
But while that might work for perfect agents, extending the theory to non-ideal agents complicates matters. After all, if my future self cannot necessarily be trusted to do the ideal thing, this could radically alter what current decision would be for the best. Suppose my currently φ-ing could lead to (i) the best possible outcome if I were to follow this up with a series of acts S which I could, but actually won't, perform; and (ii) the worst possible outcome otherwise. In those circumstances, it is not best for me to φ -- doing so would have very bad consequences, due to my subsequent failure to S -- even though it's part of the best possible life. The divergence arises because at any given time I can only choose how to act then; I cannot perform a lifetime's aggregation of acts with a single decision. And so we find that even our evaluations of acts and of act-aggregations fail to fully integrate.
The only way to preserve integration is to embrace implausibly fatalistic theses. For example, if my decision to φ renders it impossible for me to later S, then there is no ideal 'φ+S' life available to me. More generally: if each act entails the full lifetime's aggregation of acts, then since the two cannot come apart, neither can the evaluations. Similarly, if it turns out that one's motivational set strictly determines each of one's actions, then it's no longer true that Dad acts sub-optimally in favouring Son. The only way he could possibly do otherwise, on this account, is if he had a different motivation set, which ex hypothesi would lead to worse outcomes overall. Forcing the coincidence of acts and motivations in this way could dissolve the integration problem, but again this narrow form of determinism (whereby future outcomes are fixed by a mere part of present reality) is not remotely plausible.
So we should accept evaluative non-integration. The best motivations may pull us away from the best actions, which in turn may steer us away from the best life. And the best decision-procedure might tell us to ignore all of this! So it goes.