Assume utilitarianism is true. Now compare three people:
(A) Eenie is a deontologist who is very concerned for human rights, fairness, justice, and all that good stuff -- not to mention his special concern for his friends and family. These concerns of his predictably lead to utility-promoting acts.
(B) Meenie is a hard-core utilitarian who cares only about maximizing utility. He is wary of pro tanto harms like torture, but happy enough to condone them when (to the best of his ability to judge) the expected benefits outweigh the costs. He has no special attachment to his friends or family, helping them only when (it seems to him) doing so is the impartially best thing he can do.
(C) Miney is a sadist whose main concern is to spread pain and misery. But Miney lives in a world governed by a Trickster Demon who ensures that ill-intentioned acts turn out well, and well-intentioned acts turn out poorly. So Miney's vicious character is, in these circumstances, most fortunate.
Intuitively, the level of virtue decreases as you go down the list from Eenie to Miney. But this datum is difficult for utilitarians to accommodate. One obvious candidate for a utilitarian analysis of virtue would have us count an agent as virtuous just in case she has (explicitly) utilitarian motivations. But that would incorrectly imply that Meenie is more virtuous than Eenie.
As a second pass, we might shift to counting an agent as virtuous just in case her motivations are fortunate (hence desirable from a utilitarian perspective). But this incorrectly implies that Miney is virtuous.
What we need instead is a more refined view on which an agent counts as virtuous insofar as she is motivated by the surface-level or prima facie utility-promoting features. (That is, features such as respect for rights or promise-keeping, which tend to promote the good in normal circumstances.) This yields two desirable results: (i) intuitively virtuous non-utilitarians like Eenie are properly recognized as such, and (ii) direct utilitarians like Meenie remain liable to blame if they do something terrible.
I should say more about this second point. The basic idea is that engaging in rights violations in hopes of promoting utility is unreasonably risky. Most people who do this miscalculate, and end up doing more harm than good. So, like the person who plays Russian Roulette, they are responsible if the foreseeable risk eventuates.
Sometimes, it is sufficient excuse to say that you had good intentions, and didn't realize that things would turn out so badly. But this excuse only works if your expectations were reasonable. If you violate someone's rights (or other generally utility-promoting rules or laws) for the greater good, I think you forsake any such ignorance-based excuse. Since you are overriding the general rules and taking it upon yourself to get better results, you had better be right. If things backfire, that suggests that you were insufficiently motivated by the weighty utilitarian reasons to respect rights (etc.). This is an internal flaw in the agent, and thus eligible to qualify as a vice.
What if Meenie is a very cautious utilitarian, who acknowledges his own fallibility and so is never willing to condone torture or the like (albeit for transparently instrumental reasons, unlike Eenie)? Or consider Moe, a perfectly rational utilitarian who never makes mistakes in his utility calculations. Would these guys still count as less virtuous? It seems not. It might be unfortunate that they have the character they do, if this prevents them from forming loving relationships and so on. But that is really an 'external' flaw, not a problem with their quality of will (or moral reasonableness).
Overall, then, my preferred account of virtue for utilitarians would go something like this: an agent is virtuous insofar as she is motivated by those things, whatever they are, that her total evidence indicates to be utility-promoting. Two important points:

(1) Total evidence includes higher-order evidence of our own fallibility, which is why, for most of us, it's reasonable to be more moved by generally reliable rules/laws than by our own attempts at directly calculating utilities. In particular, miscalculation can't get one off the hook for torture or other actually-disastrous acts.

(2) There are a couple of ways to have the right motivations. One might, like the cautious Meenie, treat the surface features as merely instrumental to the ultimate goal of maximizing utility. Or else one might, like Eenie [or the sophisticated consequentialist], regard the surface features non-instrumentally. I'm tempted to say that either option is equally good from the internal perspective of 'virtue', but that the latter may be more fortunate (given plausible empirical assumptions about the external world).
What do you think?