Saturday, August 21, 2021

Why Constraints are Agent Neutral

My previous post argued that deontologists must prefer not to violate deontic constraints, or those constraints would lack normative significance. There is one last way they might avoid my argument that constraints trivialize killing: by holding that while the agent must prefer to abide by constraints, bystanders should prefer that the agent act wrongly, killing one to save five.  This post will set out why I think that view is mistaken.

By way of background: It's typically assumed that constraints must be agent-relative.  To explain why an agent should not kill one to prevent five other killings, deontologists often say things like, "Each agent has a special responsibility for their own actions -- that they not act wrongly, even to prevent more others from doing so." But in his groundbreaking paper, 'Agent-neutral deontology', Tom Dougherty pointed out that the injunction not to act wrongly, even to prevent more others from doing so, can be given agent-neutral form, e.g.: "Each agent should [prefer and] ensure that no one kills to prevent more killings by others." (2013, 531)

This agent-neutral conception of constraints seems much more intuitively appealing. It might be characterized as "patient-centered" rather than "agent-centered". As Setiya (2018, p. 97) put it, "when you should not cause harm to one in a way that will benefit others, you should not want others to do so either."  Whatever deontologists have in mind when talking about "special responsibility", they surely wouldn't intend it to entail the denial of this commonsense claim. (If they just mean that agents should respect constraints rather than engage in preventative killings, then it's clearly compatible with the agent-neutral version.)

To bring out the problem with agent-relative constraints, just imagine an agent-relative deontologist (Ard) as a bystander to the trolley footbridge, shouting encouragingly, "Push! Push! Push!"  (Perhaps this would qualify as an impermissible action.  If so, imagine instead that it is involuntary: Ard just wants so much for the five to be saved that he literally cannot contain himself.  Red-faced, he covers his mouth, too late, and looks around for an escape route.)

The agent atop the bridge turns around and says, "You do it then!"

Ard demurs, "Oh, no, I mustn't kill -- that would be wrong!"  

"As it would be for me!" the agent shoots back.

"Admittedly so, but that is no concern of mine."

Something has gone very wrong here!  The agent-relative conception of constraints would seem to have agents fetishize their own moral purity, in a way that the agent-neutral alternative nicely avoids (as the neutral deontologist does not want anyone to be pushing others off bridges -- a misguided view, perhaps, but admirably consistent, at least).

To further bring out the problem, compare Ard's attitude to that of one rational egoist who is about to be exploited by another: "Damn it!  Granted, he's doing what he has most reason to do, but how terrible that is!  If only he could be stopped!"

We can of course make sense of egoists lamenting each other's rationality in such a way.  But it seems deeply strange to think of morality as so conflicted (at least when partiality for loved ones is not involved).  It strikes me as vastly more natural to think of deontic constraints as the sorts of things that morally motivated bystanders should support rather than oppose (at least on the assumption that the constraints have any normative force in the first place).

That is, on the assumption that deontic constraints have normative force at all, we should not generally want and hope that others violate constraints, and then feel deeply disappointed if they instead do as they objectively ought to have done. But these attitudes are just what the agent-relative conception of constraints absurdly implies (in at least a wide range of cases).  Such a view might (though needn't) go so far as to justify encouraging others to act wrongly.  At the very least, it will likely require that you not intervene to prevent a useful violation by another.  And that all seems rather odd.

Compare this to the response of an agent-neutral deontologist bystander, Anders.  Seeing that a utilitarian agent was about to push the innocent man off the bridge, we may imagine Anders yelling, "NOOOOOO!!!" and perhaps even heroically tackling the agent to the ground in order to prevent the wrongful killing.

Should deontologists be more like Ard or Anders?  While I think both are misguided, the latter certainly seems more principled to me.  (So if you agree with me that Anders is ultimately misguided, you should reject constraints entirely and join Team Consequentialism.)  But I'm always curious to hear from those who disagree...

25 comments:

  1. You write: "This agent-neutral conception of constraints seems much more intuitively appealing." Perhaps this is how things seem to you, but to my mind it's the opposite. For I find both of the following counterintuitive: (1) I should, other things being equal, be more concerned with killings that prevent other killings than with, say, killings that prevent other deaths, and (2) I should, other things being equal, prefer someone's rightly refraining from killing one to their wrongly saving five, because I should, other things being equal, be more concerned about whether others do wrong than I should be about whether several more people die.

    1. But neither of those claims follows from agent-neutral deontology.

      On (2), the better reason for the preference would not be that you should care so much "about whether others do wrong" but that you should care greatly about whether a potential victim is killed as a means (or something along those lines).

      I'm not sure how to respond to (1) because I'm not sure why you think that's an implication of the view.

    2. Why can't the agent-relative deontologist hold that you should care greatly about whether a potential victim is killed as a means (or something along those lines)?

    3. Because that would be to grant an agent-neutral reason that's sufficient to explain the constraint against killing -- which is just what I'm taking "agent-neutral deontology" to be.

      (Agent-neutral deontology, so understood, does not rule out that there might be agent-relative reasons in addition. Recall the larger context here is to defend my premise that deontologists are all committed to preferring Five Killings over One Killing to Prevent Five. If the agent has *even more* reason here than bystanders do, that's fine. My concern is just to rule out the view that bystanders *lack* sufficient reason for this preference.)

  2. I'm not sure if Anscombe counts as a deontologist or not. She definitely writes out of the Just War tradition, where 1 for 5 sounds like a good deal:

    "Then only if it is in itself evil violently to coerce resistant wills, can the exercise of coercive power by rulers be bad as such. Against such a conception, if it were true, the necessity and advantage of the exercise of such power would indeed be a useless plea. But that conception is one that makes no sense unless it is accompanied by a theory of withdrawal from the world as man’s only salvation; and it is in any case a false one."

    Sacrificing the Innocent one for the Innocent five doesn't come up explicitly, however.

  3. Hi Richard,

    Very interesting points. I would have to give them more thought if and when I have the time they deserve. But for now, I would say that a preference among bad results does not need to involve any cheering (or intention to cheer), or deep disappointment, or any strong feelings on the matter. For example, if I'm given two scenarios S1 and S2, both of which are bad, and I am asked to tell which one I prefer, I might (or might not) say I prefer the lesser evil of the two, say S1. But that does not mean I like S1. Suppose there are 6 potential killers K1-K6 who want to kill victims V1-V6 exclusively for fun. But if K6 succeeds, he also makes K1-K5 fail. I might prefer that K6 succeed over K1-K5, but I prefer, all other things equal, that they all fail, and I definitely do not like the scenario in which K6 succeeds.

    In this case, I might think "do not push" and not want the would-be killer to push, while at the same time wanting something else to save the other people - even if very improbable. And there would be no cheering.

    Also, with very bad scenarios, one might just not have a preference between them, and it is not clear to me that there would be a moral obligation to have one (side note: even when it comes to choices rather than mere preferences, I can't rule out a Taurekian view on which there is no obligation to save the five; I think the so-called Taurekian view that one ought to randomize is mistaken, though).


    As for not intervening to prevent a useful violation in some unusual cases, that would not be particularly odd in my view. One can construct plenty of realistic scenarios in which it would be wrong to prevent some wrongdoings. But leaving that aside, it doesn't have to lead to that. Side constraints seem, for example, compatible with the permissibility of either intervening or not intervening.

    1. Yes, I take your point that there needn't be any liking (or cheering, etc.) involved in a preference for the lesser (over the greater) evil. But it does surely entail being more appalled or disappointed if the greater evil instead eventuates. And that seems enough to make trouble for the agent-relative view here, since it would seem strange for a deontologist to be appalled/disappointed/etc. by an agent rightly refraining from killing another as a means.

    2. Maybe, but can't the deontologist be appalled/disappointed, etc., by the actions of the murderers who kill the five, rather than by the actions of the one who does not kill an innocent as a means of saving the five?

      At least, I can tell that I would likely feel that way; while the deontologist you have in mind has a view that is in several ways at odds with mine, I don't think there is anything that would prevent them from being disappointed as above.

    3. This comment has been removed by the author.

    4. I'm not sure I get the neutral view right, then. Perhaps, there is something I'm not understanding. But maybe the following examples can clarify what I'm getting at. The way I see it, we can distinguish between global preferences and preferences about forced choices between scenarios. So, for example, consider the following scenarios:

      S1: Killers 1-10 want to torture children to death purely for fun (so, not to save anyone). Children 1-10 are their intended victims. Killer 1 tortures Child 1 to death for fun, predictably resulting in a causal chain that kills Killers 2-10 before they can do any damage (Killer 1 predicted that outcome, though he did not do it for that reason).

      S2: As S1, but Killer 1 fails to kill, and then Killers 2-10 torture Children 2-10 to death for fun. It was predictable that Killer 1's failure would result in Killers 2-10 not being hurt or in any way forcibly stopped from doing what they want.

      Which scenario is worse?
      S2.

      I am informed that either S1 or S2 happened. Which one do I prefer to have happened?
      Well, given those alternatives (i.e. forced choice), S1, but globally, I would have preferred neither of them to happen of course.

      But if I'm told that Killer 1 tried to kidnap Child 1 in order to torture him to death, but failed and was arrested, I'm not going to feel disappointed by the failure, let alone appalled. I'm going to be disappointed and appalled at the actions of Killers 1-10 - even Killer 1, who tried but failed.

      Now, consider S3:

      It's like S1, but one of my loved ones is in place of Killer 1.

      I am informed that either S3 or S2 happened. Which one do I prefer to have happened?
      S2.

      Of course, it can be argued that I'm being irrational and/or immoral. I do not believe so, but that can be debated. But let us assume for now that my preferences here are acceptable. Then it seems to me that proper preferences for scenarios involving moral violations are not agent-neutral. Suppose that we replace the scenarios with scenarios in which Killer 1 (or my loved one) does not act purely for fun, but partially or even totally in order to prevent the other instances of torture to death. Then my preference order remains (it's more difficult to place myself as the torturer/killer, because then I would already know it happened, so I would not be informed - and I know I would not act in that manner, so this brings other problems involving freedom and the like; but still, the above indicates agent-relativity anyway).

      So, a question here is (assuming my preferences as above are proper): would that be enough to conclude that the constraints, if there are any, are agent-relative?
      If the answer is 'yes', it seems your argument doesn't succeed.
      If the answer is 'no', then it seems to me that whether they are agent-relative is not determined by what it is proper to prefer in forced choices between scenarios. But then, what are they determined by? If it's global choices, the deontologist may always say they prefer no wrongful behavior on anyone's part, and also no bad results of any kind - no one killing, no five killings, etc.

      If it's something else, I think that would require further explanation.

      Of course, again it could be argued that the preferences above are immoral and/or irrational, or that, for some reason, it has to be me rather than my loved one in place of Killer 1, but that too would require some argumentation, I think.

      Then again, perhaps I just missed something about the sort of preference you're talking about, so I would like to ask what it is in that case.

    5. Oh, sorry, I see now that I misread your previous comment! You were (in effect) suggesting that the agent-relative deontologist might be disappointed when an agent rightly fails to kill one to prevent five, while the target of their disappointment is the five killers and not the one who failed to prevent them.

      Yeah, that seems like an available response. So it may be misleading for me to say that they're appalled "by an agent rightly refraining from killing another as a means". They're merely appalled that (or when) this choice is made, because it causally guarantees that the other murders -- which are the true source of what's appalling in the situation -- will now happen.

      The point about the true source of the problem is certainly correct. I guess the question is just whether that's enough to defang the apparent awkwardness of a deontologist's preferring that one be killed as a means without permitting it.

      I'll have to think more about your original Killers case, as it does seem intuitive there that any impartial bystander should prefer that just K6 succeed rather than that all of K1-K5 do. I'm not immediately sure how to reconcile that with my intuitions about what deontologists should say about other cases (where the one is more deliberately being killed as a means). An interesting puzzle!

      On your latest comment: I've been leaning towards a hybrid view on which even "agent-neutral deontologists", i.e. who think bystanders should generally want constraints to be respected, should also allow for some agent-relative reasons, e.g. to prefer that they (or their loved ones) not kill even if someone else is otherwise guaranteed to take their place.

    6. Yeah, I think that's a central question - i.e., whether it defangs the apparent awkwardness.

      I've been thinking about it, but I'm uncertain at this point. This sort of scenario is difficult for me too - I got myself in trouble!

      Assuming it's not enough, I think a couple of potential (very preliminary) responses would be:

      1. Argue that our obligations are about choices, not about preferences. While our choices can influence our preferences, that influence is indirect, and at most there is an obligation to try to influence one's preferences in one way or another. I think this is true, but still, there is the question of how one should try to influence one's preferences - and of whether there is an obligation of that sort with regard to preferences involving these particular scenarios.


      2. Argue that even if the argument is not entirely defanged, there are stronger considerations against the view that there is an obligation to kill one to save five - in other words, this one is more awkward.
      One possibility is to say that the very idea of killing one to save five is already counterintuitive enough to make it more awkward. But another, perhaps better option is to consider scenarios where the intended victim (i.e., the one whose death would save the others) fights for her life, begs not to be murdered, etc., and then modify the scenarios by increasing the damage - e.g., instead of killing the victim, torturing her for a year in the most horrific ways, things like that. Also, one could add that the killer could also save the five by self-sacrifice but chooses not to, etc. (while this view is not committed to the obligatoriness of self-sacrifice, I think one can obtain awkwardness in this manner too). So, something along these lines could be part of an answer, but that of course does not address your argument but rather tries to outgun it.

  4. I'm inclined to agree that if there are constraints they are agent-neutral. But couldn't the agent-relative deontologist just say that there is an agent-relative constraint against letting die as a means? I think most people would find it seriously objectionable to let someone die who could easily be saved in order to use their organs to save five people.

    1. Interesting! I take it that's a proposal for how the agent-relative view could avoid my charge that "it will likely require that you not intervene to prevent a useful violation by another." For, to deliberately refrain from preventing such a killing would now be to (by omission) treat the victim as a means.

      Yeah, I guess they could say that. To distinguish the views would then require more of a focus on their associated attitudes. For example, Ard might now regard it as very unfortunate that he's in a position to help: "Damn, now I'm required to prevent this useful killing! If only I'd been a bit further away..." Whereas, on the agent-neutral view where the killing is truly undesirable, we would expect more whole-hearted endorsement of the prevention effort. Anders, if too far away, might add: "If only I were close enough to prevent the dastardly deed!"

      An interesting question for deontologists of all stripes is what to think about cases where they're too far away to prevent a death (from natural causes) that enables five life-saving organ transplants. Should they wish they could've saved the one, preventing the five from being saved? Or should they be relieved to have avoided such an obligation, and that more lives are saved without anyone having to act wrongly?

    2. That's right. I do think the tension in 'I regret that I must not push/must intervene to prevent you from pushing' is somewhat less intuitively costly than that in 'You shouldn't push, but also I shouldn't prevent you from pushing'. There do seem to be other cases in which it seems perfectly appropriate to regret that one must not violate a constraint to do more good. Imagine a hired bodyguard who is morally required to prevent a smaller harm to their client when they could instead prevent significantly more harm to a bystander. Also, a little support for the agent-relative solution: it could explain how it is less seriously wrong (though still impermissible) to allow someone else to push than to push yourself (the constraint against letting die as a means being weaker than the constraint against killing as a means).

    3. The bodyguard case seems like an instance of a special obligation rather than a constraint. (Special obligations are certainly agent-relative!)

      But I do think it'd be most plausible for even the agent-neutral deontologist to allow for supplemental agent-relative reasons, to explain why we should (all else equal) generally prefer that someone else be the agent of wrongdoing rather than us, if those are the only options.

  5. You're right, of course, that it's a case of a special obligation and so probably won't count for much as circumstantial evidence. Here's a case that might: I can imagine someone thinking that (a) there is a constraint against seriously harming a non-threatening person even if they deserve it, and also (b) it is a good thing when wrongdoers get what they deserve (and there aren't other, non-axiological reasons). I'm not sure, but it doesn't seem too weird for such a person to wish that there wasn't the constraint, so that they could give the wrongdoer what they deserve (because that would make the world better by, like, restoring the balance or whatever). (Not sure about this case; just food for thought, really.)

    The suggestion that the agent-neutral deontologist might appeal to supplemental agent-relative reasons to explain why we should want someone else to push, for example, is plausible. Still, I can't help but feel that the ideal view would just say that the constraint is weaker, so I think that there is something to be said for explaining the duty to prevent someone else from pushing in terms of a constraint against letting die as a means.

    1. Fair enough. Thanks for the very interesting comments!

    2. Thanks to you, too, and for the great posts as well!

  6. I'd echo Angra's sentiments (assuming I understood them correctly). It seems the deontologist is committed to the belief that they ought to prevent such a state of affairs from ever obtaining. Once it has obtained -- once at least one killing is guaranteed -- they are left to choose between cases which entail wrongdoing on someone's part. That they have a preference not to be the one to commit one act of killing (to save five) doesn't seem to require the belief that another ought to do the same, nor does it seem to me to be at odds with the belief that it'd be better if fewer people were killed (or if more lived).

    It seems, on the deontological point of view, that once that state of affairs has obtained, everyone has already lost; why should there be a preference for one way of losing over another? Alternatively, why ought one prefer that one innocent person be made into a killer rather than have four additional people die?

    1. To take it further, assuming a world where the trolley problem is pervasive, it seems there are two possible states of affairs that could follow:
      1 - no killing to save lives/allowing 5 to be killed results in 2x population where half the population allowed 5 to be killed rather than becoming a killer and killing one to save 5
      2 - killing one to save five results in 6x population where 1/6th the population became a killer due to moral obligations

      If the utilitarian view entails a preference for killing 1 to save 5, it seems to entail a preference for 2. But taken alone, it doesn't seem obvious why 2 should be preferable to 1. If I take you correctly as thinking that the deontologist is committed to preferring not killing, then they'd prefer 1, but I'm inclined to think they're committed to no preference, which seems appropriate when choosing between 1 and 2. I think everyone would prefer fewer killers in a population, even if the killing were motivated by saving lives, and it doesn't seem a larger population is in itself preferable. If the normative force of the consequentialist motivation for preferring killing one to save five is located in the act of saving lives, rather than in the number of remaining lives, it maybe avoids that issue, but then there is a single act of saving lives regardless of the choice made. So, if I didn't make a mistake somewhere, it seems the preference depends on a commitment to valuing more total lives in the world, which runs into its own issues, I believe.

    2. Reframing the case in terms of mere population size conflates killing with failing to bring into existence. (Or, more specifically, conflates saving five existing lives with creating five new ones!) These are morally very different (especially, I would think, for deontologists)!

    3. I suppose this might not avoid the issue you raise (I agree they are very different), but in the resulting scenarios, the difference in population size is due to the choice of whether to kill to save five, not a choice re: bringing into existence, so I think the relevance of population size to a preference for outcome 1 vs. 2 still holds? The only salient difference I see is in having killers in 2 vs. letting die-ers in 1, and the total resulting population in each. Since I don't see a reason to prefer either outcome, I don't see why I should prefer a course of action that leads to one or the other. I don't know if that helps ...

    4. You're abstracting away from all the deaths. An obvious reason to prefer (2) over (1) is not that it has "more population" but that it has four times fewer deaths. If you don't see this as a reason then I don't know what to say.

      Of course, if the killings are truly *wrongful* then you might hold that their moral objectionability outweighs the reason we have to generally want fewer deaths. And indeed, how deontologists should weigh these two considerations is just the sort of thing I'm trying to get at in this post.

      But at any rate, I really think that trying to shift it into a population ethics question serves to occlude rather than illuminate the moral issues here.

    5. (Correction: five times fewer deaths.)

