Tuesday, March 30, 2021

Is Effective Altruism "Inherently Utilitarian"?

A recent post at the Blog of the APA claims so.  Here's why I disagree...

It's worth distinguishing three features of utilitarianism (only the weakest of which is shared by Effective Altruism):

(1) No constraints.  You should do whatever it takes to maximize the good -- no matter the harms done along the way.

(2) Unlimited demands of beneficence: Putting aside any intrinsically immoral acts, between the remaining options you should do whatever would maximize the good -- no matter the cost to yourself.

(3) Efficient benevolence: Putting aside any intrinsically immoral acts, and at whatever magnitude of self-imposed burdens you are willing to countenance, you should direct whatever resources (time, effort, money) you allocate to benevolent ends in whatever way would do the most good.

EA is only committed to feature (3), not (1) or (2).  And it's worth emphasizing how incredibly weak claim (3) is.  (Try completing the phrase "no matter..." for this one.  What exactly is the cost of avoiding inefficiency?  "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)

Most of the objections to utilitarianism instead relate to features (1) and (2), which simply do not carry over to EA at all.  So I think it's straightforwardly false and misleading to claim that EA is "inherently utilitarian" or inherits the putative "problematic structural features" of utilitarianism.  (EA is not exhausted by claim (3), of course, since it tends to additionally claim that we should direct a non-trivial portion of our resources to benevolent ends. But so does every plausible moral view.  So that's still a far cry from the unlimited demands of utilitarianism.)

Rossian deontologists could easily accept EA's efficient conception of benevolence, for example, without this interfering in any way with the rest of their (decidedly non-utilitarian!) moral theory.  The same seems true of many other non-utilitarian views.  To reject EA, it seems you'd need a moral theory that countenances gratuitous inefficiency, which seems much harder to motivate (though some do, of course, accept such views).

6 comments:

  1. If EA is only committed to (3), then it doesn't have much bite. For EA (so conceived) is compatible with the view that there is absolutely no reason for us to countenance even the slightest magnitude of self-imposed burdens for the sake of any benevolent ends. Indeed, an ethical egoist would be happy to accept EA (so conceived). But if you look at the work of EAists, they often seem to be suggesting that there is something wrong with, say, not countenancing the slight risk of being significantly harmed by donating a kidney for the sake of the benevolent end of saving someone in renal failure.

    Replies
    1. Right, that's why I mention in the post that (3) is not exhaustive of EA's normative commitments. (EA may "additionally claim that we should direct a non-trivial portion of our resources to benevolent ends. But so does every plausible moral view.")

      It's just that of the three numbered claims, only the last is a commitment of EA.

      FWIW, my sense is that there'd be a fair bit of intra-movement disagreement about whether or not we're obliged to donate a kidney.

  2. In (3), you use the phrases "benevolent ends" and "do the most good". I think non-utilitarian views tend to have complex, incomplete, and vague ways of understanding these things, including in some cases denying that the latter is a coherent notion. I think most people approve of some idea of "making the world a better place", but that appears to contradict some things that Aristotelians say.

    Perhaps an interesting topic is how political philosophers view justice. (I'd guess political philosophers are virtually all non-utilitarian, since otherwise wouldn't they just be moral philosophers?) What is the relation between the justness of a system or (possibly dynamic) "situation" and our "reasons" to try to alter that system? What kinds of things are worth doing in order to achieve greater justice? How do we trade off different degrees of justice over time/place/setting/uncertainty? Should we do things that reduce justice initially in order to improve it later? I think there was a paper called "Just and Juster" by David Estlund that I was hoping to read at some point.

    Replies
    1. Interesting questions! You might also enjoy Pummer and Crisp's paper, 'Effective Justice', which argues for such a maximizing approach to promoting justice.

  3. I think even (3) is far stronger than what is needed. In order to justify organizations like GiveWell existing, you only really need this:

    (4) A significant number of people should direct a significant fraction of their money/efforts towards whatever does good most efficiently.

    Where "significant" just means, enough to justify a division of labor where some people do research to help others donate more effectively.

    One could even weaken "should" to "might reasonably", or "in the absence of any reason to direct their efforts differently", or various other constructions.

    Obviously, this is a really bland and dull proposition from the perspective of moral philosophy. But it is not boring from the perspective of organizing people to find common ground and actually inspiring them to do good.

    Just as, say, you can have an Open Source movement without everyone in it agreeing on a common creed about the exact philosophical underpinnings of why freely available source code is a good idea: some people may think proprietary software is immoral, while others may just find it a vibrant community that they want to contribute to, etc.

    As a separate point, I'm not sure even (3) by itself doesn't have some strongly counterintuitive consequences after it is decoupled from (1) and (2). One could rephrase it as: "if you choose to be benevolent, it must be maximally efficient". But this would imply that it would be, e.g., immoral to donate to an orchestra or to a first-world friend who needs help with a car payment, even though it would not be immoral to spend the same money on a nice steak dinner for oneself (because that is not benevolent and thus does not need to be efficient). But that's pretty far from most people's moral intuitions.

    Replies
    1. Yes, thanks, that's an important point -- though I'm also interested in trying to pin down what implicit normative theses critics might reasonably target here (as in some sense "underpinning" EA, even if not strictly necessary for justifying the movement and its institutions).

      I should clarify that (3) should just be read as constraining what qualifies as genuine benevolence, rather than constraining what you spend your non-charitable budget on. So you can donate to the orchestra out of your personal consumption budget if you want (better than steak, as you say!), but it can't substitute for making effective donations.

      (I've gotten enough pushback on this point that I've come to think that my original post was poorly expressed!)

