Something I've been wondering about for a while is whether we have non-instrumental reasons to favour our "nearest and dearest", or whether an ideal (morally fitting) agent would instead be perfectly impartial.
For impartiality: I think there's a lot to like here: impartiality is clearly very principled, idealistic, etc. (And if there's anything we ought to be idealistic about, our moral ideals are surely it!) By striving to adopt "the point of view of the universe", we transcend our limited individual perspectives. There's something appealing about the way that this promises to resolve conflicts, insofar as it mandates that all moral agents share the same goals (i.e., maximizing net value). And of course it's theoretically simple -- e.g., no tricky questions about precisely how much extra weight we may give to so-and-so. Impartiality would thus seem to yield a maximally coherent and unified desire set.
Those are some nice formal features. Impartiality can also be substantively appealing, insofar as there's something clearly right about the idea that other people matter just as much as we do (and that the morally fitting agent is oriented towards what objectively matters). Indeed, "equal concern" and other considerations I previously gave in favour of consequentialism are more specifically reasons to favour impartial consequentialism.
Dubious objections to impartiality: One might imagine an impartial agent as cold and emotionless, which doesn't sound very appealing. But of course, rather than subtracting away the special concern that we have for our loved ones, the utilitarian "moral saint" would be one who extends their deepest concern to everyone (i.e., caring as deeply about strangers as most people care about themselves and their loved ones). Think Jesus, not Spock. Insofar as we fall short of this ideal, it is because we care too little about strangers, not because we care too much about our friends and family.
More reasonably, one might worry that it's humanly impossible to muster such universal love. That seems plausible, but it's not clear why being out of our reach would make the utilitarian ideal any less of an ideal. Such a creature would be morally better than we can ever hope to be -- so what? If one is worried about "ought implies can", then one will not say that we "ought" to exemplify this ideal; perhaps we ought just to do the best that we can. That's fine. It doesn't mean that we're morally better for being inevitably partial in this way. And it doesn't cast doubt on the idea that we should choose to promote the impartial good when we're able to do so.
A stronger objection argues that impartiality is, as a matter of principle, incompatible with genuine friendship and other relationships of value. I'm not sure why this must be so -- don't Christians claim that Jesus is friends with everyone? (One might doubt the truth of their belief, but it doesn't seem outright incoherent.) But I should probably address such arguments in a separate post.
For partiality: Probably the main reason to favour (a degree of) partiality is that it yields more intuitive verdicts in many cases. Surely, we think, a parent may -- indeed, should -- save their own child's life over the lives of two young strangers. Surely it's right and proper to save money for your child's college fund, rather than donating every spare cent to GiveWell.
One might even attempt to give these practical intuitions a more theoretical gloss by appeal to Frankena's maxim that "morality is made for man, not man for morality". To be truly normative for us, Williams-style "reasons internalists" might insist, morality must be more firmly rooted in our actual concerns. And that means giving pride of place to our individual projects, relationships, and concerns.
I'm dubious about reasons internalism: surely there could be a thoroughly misguided agent, moved only by sadistic and misanthropic concerns, to whom pro-social moral norms still apply, however little regard he may have for them. But I admit that the case-based intuitions offer tougher bullets to bite.
Have I missed any important considerations? Which way do you incline, and why?