Friday, September 24, 2021

Agency as a Force for Good

One fundamental reason for favouring consequentialism is the basic teleological intuition that the primary purpose of agency is to realize preferable outcomes.  If you have a choice between a better state of affairs and a worse one, it's very natural to think that the better state of affairs would be the better option to choose.

A slightly different way to put it is that if it would be good for something to happen, then it would be good to choose for it to happen.  Our agency is itself part of the natural world, after all, and while it is distinctive in being subject to moral evaluation -- misdirected exercises of agency may be wicked in a way that unfortunately directed lightning strikes are not -- it's far from clear why this should transform an otherwise desirable outcome into an undesirable one.  There's nothing obviously misdirected (let alone "wicked") about straightforwardly aiming at the good, after all.

Consequentialism thus fits with an appealing conception of agency as a force for good in the world. Left to its own devices, the world might just as easily drift into bad outcomes as good ones, but through our choices, we moral agents may deliberately steer it along better paths.

This suggests to me a (possibly new?) argument for consequentialism.  For it seems a real cost to non-consequentialist views that they must give up this view of agency as a force for good.  Instead, on non-consequentialist views, it could well be a bad thing for outcomes to fall under the control of -- even fully-informed and morally perfect -- agents.

For example, consider a "lifeboat" case (with a choice between saving one or saving five others) where the non-consequentialist insists on flipping a coin rather than simply saving the many.  Imagine a variant of the case where, if the captain of the lifeboat hadn't been steering it, it would have naturally drifted towards the five -- resulting in the best outcome.  It's natural to think that putting an ideal (i.e., in no way ignorant or vicious) agent in control of the situation shouldn't make things worse.  But for the non-consequentialist, it can, for it introduces extra moral reasons (e.g. to treat people "fairly") that could outweigh the welfarist ones, such that the captain might end up deliberately choosing to bring about the worse outcome instead.  And that seems messed up!

Of course, not every non-consequentialist view embraces coin-flipping over saving the many.  Different examples may be generated to apply to different non-consequentialist views. But this simple example serves to illustrate the appeal of the consequentialist conception of agency as a force for good.

12 comments:

  1. I don't have the intuition that action has a purpose. Indeed, that seems to be a category mistake. What has a purpose is the agent who is performing the action. That purpose is to affect how the world goes. But often the aim is not to make the world go better. When I help my daughter with her homework rather than my neighbor's son with his homework (and let's stipulate that the neighbor's son would benefit more from my help), my aim is not to make the world go better. The aim is to help someone with whom I have a special relationship, one that involves a special commitment to helping them. Do you have some reason for thinking that I'm somehow missing the "purpose" of action in acting in this way? In any case, is there some reason why we should think that agency's "purpose" is to promote the good rather than to promote what the particular agent has most reason to desire (where what the agent has most reason to desire can come apart from what does the most good)?

    1. Oh, I'm not assuming that the good must be agent-neutral. Perhaps helping your daughter a little is really preferable (for you) compared to helping your neighbour's son a lot. If so, acting accordingly makes perfect sense.

      My main interest here is in cases where the agent is a "disinterested" party, with no special connection to any of the potential beneficiaries of their action (or inaction). (This is because I see agent-relative welfarism as a very close cousin to utilitarianism. Disagreements over impartial morality, by contrast, run deeper.)

  2. How can "THE good [emphasis added]" or "THE better state of affairs" be anything but agent-neutral? My helping my daughter doesn't result in THE better state of affairs. Rather, it results in a state of affairs that I ought to prefer but that my neighbor ought to disprefer. And if you're not talking about agent-neutral goodness despite what your language suggests, then why conclude that "agency is a force for good"? Do you think that ethical egoism (a form of consequentialism that's not committed to promoting the impersonal good) fits with an appealing conception of agency as a force for good in the world?

    1. Yes, in an attenuated sense: egoists still conceive of agency as an instrument for realizing antecedently desirable (to them) outcomes. So they don't have the non-consequentialist's problem of having reason to lament their own control over a situation.

      Of course, their agency is no longer agent-neutrally good, which may limit the appeal of the view to some extent. But that's really a separate issue from the core phenomenon that I'm trying to draw attention to here.

  3. I'm struggling with the idea that agency has a purpose; that's very unintuitive to me. That said, isn't your conclusion circular in some sense? That the non-consequentialist wouldn't save 5 isn't worse from their perspective (or else why did the ideal agent make that choice), and thus agency doesn't lead to a wrong from their perspective - or did I miss something?

    1. I was thinking that agency introduced certain constraints on the non-consequentialist (e.g. to treat everyone fairly) which could over-ride the importance of bringing about the better outcome. Presumably everyone should prefer that an unmanned lifeboat drift towards the five over the boat's being manned by a captain who flips a coin and then saves just one.

      (A non-consequentialist might prefer the coin-flip outcome to the captain *unfairly* saving five, but note that there's no such "unfairness" involved if the boat is unmanned and simply drifts towards the five.)

      Of course, there are possible "numbers don't count" views that deny that it's in any way better to save more lives, but those are so insane I'm not concerned to address them.

    2. Thanks - I think I get what you're after better now, and I think the coin-flip case gets closer to what I see as the issue, which I think is agency, as you've pointed out. In the unmanned boat/lightning-bolt case, it seems one death is the preferred outcome. The source of the dilemma seems to be choosing to bring about the "better" outcome when that involves a wrong, killing one. I think the coin-flip can capture that sense of wanting to give up agency, but perhaps clearer, though more fantastic, would be if the captain was mind-controlled - would you prefer that the controller have you kill the one or the five? I think the answer is obvious, so the issue is really just choosing to bring about that outcome, i.e. agency. I still think the conclusion is somewhat circular in the sense that the case where the outcome is brought about by choice involves a wrong of some sort (killing) that the others do not - so how to balance that with fairness/welfare needs to be addressed first, IMO.

  4. Super interesting! A few scattered thoughts:

    (1) By the "purpose" of agency are you saying something analogous to what some people say about belief? So just as beliefs "aim" at truth, choices "aim" at realizing preferable outcomes?

    (2) I guess I wonder why "agency" is the issue rather than "moral agency". Is the thought that the purpose of non-moral choices is also to realize preferable outcomes? So what goes for agency in general goes for moral agency in particular. The only thing that differs is that "preferability" is cashed out in moral terms?

    (3) I'm having some trouble with understanding the connection between the teleological intuition and consequentialism. I think my confusion stems from the terms "realize" and "force" sounding very active. But I take it that some good consequentialist choices, as you mention, result in inaction. Sometimes it seems like the consequentialist will want to refrain from being a force against the good without being "force-y" in any way. Indeed, your lifeboat case seems to be one of that kind: "It's natural to think that putting an ideal (i.e., in no way ignorant or vicious) agent in control of the situation shouldn't make things worse." If that's right, then I'm not sure how well the "force for the good" explanation works for the lifeboat case. "Agency is not a force against the good" seems like the claim that's needed. That's more of a "non-anti-teleology" intuition, and one that is harder to feel the force of as a more general statement about the nature of action. (As opposed to just an intuition I share about the particular lifeboat case.) Perhaps I'm reading "realizing outcomes" in the wrong way. But my intuitions about general "aims" of agency don't survive more capacious readings of terms like "force" and "realizing".

    (For what it's worth, I have a similar reaction to those who insist that consequentialism is at its best when coupled with a view of "agency as production." Choice for the consequentialist had better not be fundamentally about actively producing anything, lest we see a big difference between doings and allowings.)

    (4) It's been a long time since I've read Tamar Schapiro's "Three Conceptions of Action in Moral Theory," but at the level of general intuitions about the aims of agency, I find the general statement of all three accounts she discusses pretty intuitive. Thus, I wonder whether the argument for consequentialism from the general Teleological Intuition would be strengthened if it can be shown that the view of agency underlying that intuition is superior to the views of agency underlying the others.

    1. Hi Jimmy! Yeah, I was thinking that in something like the way that belief aims at truth, the point of practical agency is to steer the world towards more desirable outcomes. I think corresponding claims could be made for both moral and non-moral choices/agency (if one goes in for such a distinction; I tend to be suspicious of it myself).

      And yeah, I mean for such "steering" to be neutral between doing and allowing -- to choose to do nothing when one could have done otherwise is no less an exercise of one's agency, as I see it, no matter the differences in physical manifestation. The basic idea is that in selecting between one's options (one of which may be to remain still, or "do nothing"), this selection is properly guided by consideration of which outcome we've most reason to want. I would've thought the intuitions more robust on this capacious reading (since the narrower reading has perverse implications), so I'd be curious to hear more on this point.

      Thanks for the pointer to Schapiro -- I'll have to check that out!

    2. Hi Richard! Part of my worry was that the perverse implications seemed perverse because of our intuitions about moral theory, not because of intuitions about action theory. So let me take it from a slightly different angle. As I understood it, you were proposing an argument for consequentialism based on the teleological intuition. In other words, some truth(s) about the philosophy of action supports consequentialism. To avoid begging questions, then, it seems that we can't just interpret the teleological intuition in the most consequentialist-friendly way for free. We need a justification for interpreting the teleological intuition about action in a way that supports the more capacious reading. If the reason to prefer the capacious reading over the more "just actively produce good" reading relies on consequentialism, we would be begging the question.

      The reason the capacious reading didn't seem natural to me at first was that omissions don't seem "force-y" (or to have aims) - they just aren't counter forces or don't counter-aim - so agency as a "force" of the good didn't seem like it supported the right interpretation of the teleological intuition. I had nothing terribly sophisticated in mind here, however. I was just operating at the level of brute intuitive response. And I may well be idiosyncratic here.

      But I also think I might be getting hung up on something irrelevant. I would buy that choices have something like constitutive aims irrespective of whether they result in actions or omissions. So just subbing in the word "choice" for "agency," "actions," and "omission," for whatever (perhaps misguided) reason, helps me. I also think the "steering" metaphor helps quite a bit too. If we model agency in general on steering (and I can see the attractions of that), and you say that the constitutive aim of steering is to realize preferable outcomes, then I can see how the gap between the teleological intuition and consequentialism I worried about is filled in.

      I'm interested to see how this develops! Is it part of the other project on constraints?

    3. Ah, thanks, that all makes sense! So yeah, I agree it's probably best for me to go with the framing of "choice" and "steering".

      This is part of a separate project, where I'm sketching out some rough arguments for utilitarianism (or, in some cases, consequentialism more broadly) for a new page on utilitarianism.net -- but may end up developing some of them further for my book project on Bleeding Heart Consequentialism (which is still in its early stages, but will be bringing together a lot of my work on consequentialism).

  5. Hi Richard,

    With regard to the "lifeboat" case, my impression is that the original Taurekian view would be that there is no obligation to save the five, but also no obligation to flip the coin: all of those choices are permissible. At any rate (i.e., whether or not this would be Taurek's view), I think this one is less vulnerable to objections than the view that flipping the coin is obligatory.

    The above would be the case when the lifeboat is not already drifting in the direction of the five. If it's drifting, does the deontologist already have information to rationally reckon that it's probably going to rescue the five? If so, they may hold it's impermissible to divert it. If not, tossing a coin will not increase the probability of the worse outcome given the information available to them (or am I misreading the scenario?).

    (In the end, though, I don't think it's a problem if rational moral agents would sometimes act in ways that lead to worse outcomes).
