Thursday, February 27, 2020

A New Paradox of Deontology

[Update: I wrote a better version of this argument over at PEA Soup.]

There's something odd about the view that it'd be wrong to kill one innocent even to prevent five other (comparable) killings.  Given plausible bridging principles, this implies that we should prefer Five Killings over Killing One to Prevent Five.  But that seems an odd preference: how can five killings be preferable to one?  The deontologist (like Setiya) must think that agency is playing a crucial role here.*  While we should prefer one gratuitous killing over five, there is (on this view) a special kind of killing -- killing as a means -- where the good results of the killing don't get to count. So Killing One to Prevent Five is treated as morally akin to Six Killings, rather than to One Killing.

This is odd enough, but I think it gets worse.  For compare some variations of the case.  First note that if the good results of the killing-as-a-means don't get to count, then it seems it shouldn't matter to our moral verdicts whether the intended good results actually eventuate or not.  So consider Killing One in a Failed Attempt to Prevent Five (KOFAPF).  Clearly, KOFAPF is much worse an outcome than Killing One to Prevent Five (KOPF): it has the same agential intervention, but with six killings instead of just one.  So we should strongly prefer KOPF over KOFAPF.  But then how can we coherently prefer Five Killings over KOPF?

KOFAPF seems broadly akin to Six Killings.  We may suppose that all the same people are threatened, and indeed killed, in both cases.  The only difference is that one of the killings in KOFAPF, instead of being gratuitous, was intended to try to prevent the others.  If anything, this should make KOFAPF morally better than Six (Gratuitous) Killings, it seems to me.  At any rate, it would seem deeply misguided to strongly prefer Six Killings over KOFAPF.  But then, since KOPF is vastly preferable to KOFAPF, it seems to follow that KOPF must be vastly preferable to Six Killings.  But that seems to contradict the deontologist's verdict that Five Killings is preferable to KOPF.

Consider another case: Gratuitous Killing that Accidentally Prevents Five (GKAPF).  A family of six wannabe murderers, named Bob1 - Bob6, set out to do their thing.  Due to a random causal fluke, Bob6's murderous act has the unintended causal consequence of preventing the murderous acts that would otherwise have been committed by Bobs 1 - 5.  So only one killing occurs, and as far as the agent was concerned, it was a gratuitous killing, not an instance of killing-as-a-means.  So it seems that this situation is morally akin to One Killing.  At any rate, it is surely better than Five Killings, and hence (by the deontologist's lights) KOPF.

But this conclusion now seems morally obscene.  If Bob6 had instead been morally motivated, deliberately seeking to prevent the five other killings, this (as an instance of KOPF) would have been evaluated by the deontologist as much worse than the scenario in which he viciously killed the one, with no desire to save the five.  But how can well-intentioned killing be worse than ill-intended killing, all else equal?

[Update: on further reflection, I suspect that Setiya-like deontologists would suggest that GKAPF is equivalent to KOPF, and worse than Five Killings, after all.  They wouldn't want Bob6 to perform his killing, even if it has the unintended side-effect of preventing the five other killings.  So, scrap that case.  Still, the earlier verdicts involving KOFAPF seem a challenge for them.  And there are further questions about how to justify treating GKAPF as akin to KOPF, given the point below that mere causal structure -- if implemented by a non-agent -- doesn't give rise to such verdicts.  It would then seem more principled to rely on intentional agency rather than just any old agency, I would think...]

Can the deontologist's verdicts here be rationalized?

* = It isn't just the (agent-independent) causal structure that's doing the work for the deontologist, for they'd surely agree that we must prefer Lightning-Kills-One-and-Prevents-Five over Five Killings.

[Thanks to my Ethical Theory seminar participants for helpful discussion of these cases!]


  1. I don't think most deontologists would accept your 'plausible bridging principles', in part because I think it follows from the structure of most forms of deontology that preference-that (i.e., preference that this happen rather than that) doesn't necessarily have much to do with actual moral decision-making (unlike preference-to, i.e., preference to do this rather than that). -- This is presumably related to your agency point, but I'm not sure you're doing full justice to the extent to which deontologists usually take preferences about things you are not yourself doing to be irrelevant when actually making moral decisions; or else they hold that they only become relevant if (1) they are things you can do something about through some action and (2) you have already determined that that kind of action is a moral kind of action. That is, there's not really any reason why a deontologist can't say that they prefer in the abstract that fewer killings happen (as something to aim for when you morally can) but that, more fundamentally than this (and as an essential part of acting morally), they should not themselves ever prefer to kill an innocent. The Bob6 scenario is bad not because of the resulting situation but because there's a lot more deliberate murder-attempting (and thus a lot more bad preference-to) going on. And, of course, if you are Bob6, the way you are the chooser in the other scenarios, you are deliberately acting on a preference to murder.

    I'm also not sure it's really true that KOFAPF is itself worse than KOPF in any way that would be (uncontroversially) relevant to moral verdicts; it seems that a KOPF is just a lucky case of what, if unlucky, would have been KOFAPF. KOFAPF is more unfortunate, but it's surely at least controversial to hold that this should affect our moral verdicts about it.

    1. Interesting! I should clarify that the relevant bridging principle is just Setiya's insight that moral side-constraints are most plausibly agent-neutral: "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either." (97)

      I was further thinking that we could put aside questions about "moral decision-making", and just ask what preferences a benevolent observer ought to have about these various situations. Whether "relevant to moral verdicts" or not, it surely is the case that any observer should strongly prefer KOPF over KOFAPF. Once the agent has killed the one, and you see the remaining options are either that this successfully prevents five other killings or that it doesn't, you should surely strongly prefer the former outcome! (Failure to do so would seem a kind of disrespect to the five extra victims, given that there's no countervailing reason whatsoever to prefer that they get killed in addition to the one already gone.)

    2. So, to pick up on your key claim, if an agent "should not themselves ever prefer to kill an innocent", they likewise should not ever prefer that anyone (in a like situation -- KOPF, etc.) kill an innocent. But that then seems to lead to the troubles outlined in my main post.

    3. Ah; it's been a while since I've read Setiya, but I suspect that you're right that a Setiya-like deontologist (in the sense of accepting something like the agent-neutrality point) would have to work harder to handle the problem. I think more traditional deontologists like Kantians and even divine command theorists are going to be much more intention-oriented, and thus will tend to allow a stronger split between judgments about how tragic or disastrous situations are (which would be for them almost like an aesthetic judgment, which situation is an uglier mess) and judgments about what is right or wrong to do. Thus the question of which is worse would really be equivocal, and thus you could often answer to these kinds of cases, 'Well, it would be worse in one way, and less bad in another'. But the Setiya principle at least complicates this.

  2. It seems to me you might be able to bring this into super stark relief by considering a case where the deontologist doesn’t initially know why there are fewer murders.

    For instance, a deontologist police chief is using a super-smart AI to assign police beats. Surely if the AI predicts this pattern of walking beats results in fewer murders they have reason to prefer it. Now what happens if they then find out that it results in fewer murders because it ends up not preventing the murder of a serial killer? You can think of all sorts of variations to cause problems.

    Now I think many deontologists could easily escape, but not ones who have some kind of agent-neutrality.

    1. Yes, that does sound especially awkward! Too bad I don't have more deontologically-inclined readers to share their thoughts on how to respond...

