Many Consequentialists would like to identify the moral theory with the following property: whichever people successfully satisfy the theory's requirements (on any given occasion) thereby bring about the best outcome within their collective power -- in particular, they could not have done better by doing other than what the moral theory required. This sounds like a reasonable aspiration for Consequentialists. After all, this isn't the over-optimistic claim that mere widespread belief or even attempts to follow the theory must lead to good outcomes -- that's obviously an open empirical question, since mistakes may be made. Instead, this is the claim that the people who succeed in meeting their objective obligations under the theory -- who really do exactly what the theory implies should be done -- thereby (collectively) bring about as good an outcome as possible. That sounds like a property that a consequentialist moral theory should, ideally, possess. Curiously, though, it turns out to be impossible. No moral theory is such that successfully satisfying it guarantees the best possible outcomes. Let me explain why.
The proof takes the form of a dilemma. Either the theory builds a particular decision procedure into its requirements, or it does not (i.e., it merely requires that certain actions be performed, without regard for the process of reasoning that leads to them). If it does not -- if it is a traditional, 'exclusively act-oriented' theory -- then satisfying the theory is compatible with suboptimal results in coordination games, as explained in the previous post.
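To make the coordination worry concrete, here is a minimal sketch (my own illustration, with made-up payoffs, not an example from the post). Each agent's act counts as satisfying an exclusively act-oriented theory if, holding the other agent's act fixed, no alternative act would have produced more value. Both agents can satisfy that test while the pair still falls short of the best outcome within their collective power:

```python
# Hypothetical payoffs: total value produced by each pair of acts.
payoffs = {
    ("A", "A"): 10,  # best joint outcome
    ("B", "B"): 6,   # second-best coordination point
    ("A", "B"): 0,   # miscoordination
    ("B", "A"): 0,
}

def satisfies_act_theory(my_act, other_act):
    """An act satisfies the (exclusively act-oriented) theory iff, holding
    the other agent's act fixed, no alternative act of mine would have
    produced more value."""
    return all(payoffs[(alt, other_act)] <= payoffs[(my_act, other_act)]
               for alt in ("A", "B"))

# If both play B, each agent's act is optimal given the other's act...
both_satisfy = satisfies_act_theory("B", "B") and satisfies_act_theory("B", "B")
# ...yet the joint outcome falls short of the best collectively available one.
best_joint = max(payoffs.values())
print(both_satisfy, payoffs[("B", "B")], best_joint)  # True 6 10
```

The point of the sketch is just that "each did the best act, given what the others did" is a weaker condition than "together they did the best they collectively could".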
What if the theory, like Regan's Cooperative Utilitarianism, does require a certain decision procedure? In that case, as Regan himself acknowledges, there are two ways that satisfying the theory could lead to worse results. Most obviously, the decision procedure might be directly costly in certain (e.g. time-sensitive) situations. (Neil raised this worry in the comments.) The other possibility is that the decision procedure might have indirect costs -- suppose, for example, that an evil demon threatens to torture you if you use the specified decision procedure.
We thus find that, although satisfying CU ensures that you select the best of the available options (at least, those still available by the time you finish deliberating), it might have the prior effect of shaping your available options in undesirable ways. (In the demon case, the cooperative utilitarian is effectively choosing between two options plus some guaranteed torture, rather than just choosing between the two options as presented to other, non-CU agents.)
So, whatever form the moral theory takes, there's no guarantee that by successfully doing what the moral theory requires of them, agents will thereby bring about the best (collectively possible) results. It's always possible to rightly do badly. A very curious result!