Some critics allege that the utilitarian agent has but a single desire: to maximize welfare. This would seem to embody an objectionably instrumental attitude towards individual persons. Rather than caring about each of Tom, Dick, and Harry in their own right, the utilitarian (allegedly) just cares about helping them as a constitutive means to promoting aggregate welfare. Tom serves as a faceless 'receptacle' of utility, rather than mattering for his own sake.
The obvious response for the utilitarian is to deny that their psychology is accurately characterized, at the fundamental level, by such a totalizing desire. They don't just desire a single thing (namely, goodness in general). Rather, they desire each possible good, in proportion to its value. They value Tom's welfare in particular, and also Dick's, Harry's, and everyone else's. And it's just a mistake -- a gross mischaracterization on the part of critics -- to confuse this vast collection of particularized desires with the single, extensionally equivalent, generic desire for aggregate welfare.
I'm sympathetic to this response, and think that something along these lines is necessary to distinguish indifferent from ambivalent options. (In the first case, there's no moral difference between the options: they both serve the exact same values. In the second case, you may feel torn because the two options serve different but equally weighty values, such that the relative losses and gains exactly counterbalance each other.)
But a difficulty arises when we consider goods that the utilitarian agent is unaware of. Consider some particular unknown person, Sally. Our utilitarian cannot have a particularized desire for Sally's welfare, since he cannot even refer to Sally in particular. But his values must extend to Sally and others somehow. (It's not as though he'd accept an offer to improve the welfare of Tom at greater cost to some unknown other.) So it seems like we need something like a generic desire for aggregate welfare to step in and fill the gap. (To avoid double-counting, we'd probably need to exclude Tom -- and any others for whom the agent has a particularized concern -- from the remaining 'aggregate'.)
Is this a problem? I hope not. It doesn't seem so objectionable to treat people you've never even heard of as faceless members of the aggregate. How could they be other than faceless and generic to you? Moreover, the motivation here is not merely instrumental. It's not as though our utilitarian thinks that unknown people fundamentally only matter in respect of their being members of the unknown aggregate. Rather, his concern for unknown people's aggregate welfare is a stop-gap measure that reflects, in the only way possible, his appreciation of the fact that each of those individual unknown persons fundamentally matters in their own right. He knows that, if he knew more, he would form particularized desires for the welfare of each; but in the absence of the requisite identifying information, the best he can do to respect these unknown values is to fall back on the generic desire for aggregate welfare, as a kind of placeholder.
So far, so good. But what about merely possible future persons? Here the placeholder strategy seems dubious. Before, we were holding the place for the particularized desires we would have if fully informed -- and it's easy to see the normative authority of fully-informed desires. But in the case of merely possible persons, the barrier to particularized reference is metaphysical, not merely epistemic: there is no such particular person to refer to. The most we can appeal to is the counterfactual desire that we (ideally) would have had if someone else had existed. We would have formed a particularized desire for that someone's welfare. But so what? As things stand, there is no such person, and hence no valuable entity for us to respect as best we can. As I've previously argued, the normative force of merely possible people's welfare cannot be grounded in the merely possible people themselves (for, strictly speaking, there are no such entities). Our concern must instead be 'impersonal', i.e. a concern that the world (or its inhabitants, considered generically) be as good as possible.
Even so, this needn't return us to any single, totalizing desire that the world be thus-and-so. We may instead have distinct desires for each possible generic good. (We still need to distinguish between indifferent and ambivalent pairs of prospects, after all.) For example, I may desire both that Anne have a happy child, and that Beth have a happy child. I can't coherently desire these things for the sakes of the respective (non-existent) children, but I can desire them (for the sake of the world, perhaps). And in this sense I recognize that the prospective persons are not fully 'replaceable': despite being of equal value, there is a morally relevant difference between world A, where only Anne has a child, and world B, where only Beth has a child. The comparison calls for ambivalence, rather than indifference, since the two worlds serve distinct (though equally weighty) values / ideal desires.
So I think the objection ultimately fails. Even in the toughest case -- that of merely possible persons, who cannot be the ultimate ground of our concern for their welfare -- consequentialists can still desire each good separately, and hence refrain from treating people as fungible.