Saturday, July 31, 2021

The Cost of Constraints

[An excerpt from my paper-in-progress on the paradox of deontology, setting out my core argument...]

Let's begin with some cases.^[The first two are drawn from Setiya (2018).]  In all of these cases, the background setup involves five other agents who are each about to murder a different innocent victim. Protagonist may be in a position to prevent these five murders, by means of himself killing a sixth individual. (The precise details shouldn't matter for our purposes--feel free to fill in the story however seems most sensible to you.) Against this background, compare the following four possible outcomes:

Five Killings: Protagonist does nothing, so the five other murders proceed as expected.

One Killing to Prevent Five: Protagonist kills one as a means, thereby preventing the five other murders.

Six Killings (Failed Prevention): As above, Protagonist kills one as a means, but in this case fails to achieve his end of preventing the five other murders.  So all six victims are killed.

Six Killings: Instead of attempting to save the five, Protagonist decides to murder his victim for the sheer hell of it, just like the other five murderers.

Next, two assumptions:

(1) Wrong acts are morally dispreferable to their permissible alternatives.  If an agent can bring about either W1 or W2, and it would be wrong for them to bring about W1 (but not W2), then they should prefer W2 over W1.

(2) Bystanders should similarly prefer, of a generic moral violation, that it not be performed.  As Setiya (2018, p. 97) put it, "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either."

Importantly, these principles do not straightforwardly imply that we must always prefer the outcome that contains the smaller number of wrong acts. Some wrong acts may be more morally undesirable than others, and there's no assumption here that undesirability aggregates in the way that consequentialists typically assume.  For any given wrong act, we should prefer that the agent not perform it. But we can't just add up the number of wrong acts overall and minimize that number come what may, for some ways of minimizing violations (e.g., by committing a violation ourselves) may themselves be wrong, and hence be morally dispreferable.

Deontologists claim that Protagonist acts wrongly in One Killing to Prevent Five.  They are thus committed to the following preference ordering:

(3) Five Killings > One Killing to Prevent Five.

Now, this preference already seems awkward on its face.  Consequentialists often stop here, and ask how you could reasonably prefer a larger number of killings to a smaller number, whatever other minor details may differ between the two (so long as those details make no instrumental difference to others' welfare). But let's grant (for reductio) the deontologist's view that the details of the causal structure make a big moral difference--such that one can rationally regard one killing as worse than five, at least when the one involves the specific causal structure of killing as a means to preventing the other five killings.

Again, since this is just what the deontologist is straightforwardly committed to in judging that agents should not kill one to prevent five killings, it would seem question-begging for the consequentialist to object at this point.  So let's delve deeper.  The real problem for the deontologist emerges once we bring our two remaining cases (involving six killings) into the picture.

Here's a moral datum:

(4) One Killing to Prevent Five >> Six Killings (Failed Prevention).^[Here I use the '>>' symbol to mean is vastly preferable to.  Consider how strongly you should prefer one less (generic) murder to occur in the world.  I will use 'vast' to indicate preferences that are even stronger than that.]

This should be uncontroversial.  In the former scenario, Protagonist kills one person to prevent five other killings.  The latter scenario contains everything that's bad or morally objectionable about the former, plus five additional, completely gratuitous murders, as Protagonist now fails in his attempt to prevent them.

However severely the deontologist may judge Protagonist's moral violation, they must surely agree that once the choice is made (and his victim killed) we have immensely strong moral reasons to hope that this act--however objectionable it might have been--at least succeeds in preventing the five other murders.  For if the five other murders happen in addition, then that is so very much worse.  Specifically, this is a substantially greater moral difference (i.e., yields a world that is morally dispreferable to a greater extent) than would generally be obtained by just adding (say) one gratuitous murder to a scenario.

It's worth emphasizing that five additional murders has got to be a big deal on any moral view. I'm not smuggling in anything contentious in making this evaluation. I'm not, for example, supposing that we must prefer the smaller number of killings even when there are differences in the causal structure, or different individuals spared, which could provide some reason to prefer the other option. We're talking about a case of Pareto inferiority, where there is literally no reason at all to prefer the extra killings.

So we should regard (4) as an unassailable moral datum, the rejection of which would entail severe moral disrespect to the five extra murder victims.

However, I'll now argue that deontologists cannot accommodate (4).  The best they can get is that the extra killings are *somewhat* morally undesirable -- not extremely undesirable, as moral decency requires.  For consider:

(5) Six Killings (Failed Prevention) >= Six Killings.

When the same six killings occur either way, it's an interesting question whether the killers' motivations matter. Perhaps they don't, in which case we may be indifferent between the two outcomes in which these same six killings occur. Alternatively, if motivations do matter, it's surely better for Protagonist to be beneficently motivated (despite not actually achieving any good) than for him to gratuitously murder someone for no good reason at all.  There's no way that being beneficently motivated could be intrinsically morally worse than being evilly motivated, so we may safely conclude that Six Killings (Failed Prevention) is at least not worse than Six Killings.

But now notice that the magnitude of the moral difference between Five Killings and Six Killings is precisely that which is generally obtained by adding one additional murder.  This is, of course, serious.  But we are here using 'vast' to denote moral chasms that exceed the magnitude of one typical additional murder. So, by the specified meaning of 'vast' (and hence '>>') in this context, we have our final premise:

(6) It is not the case that Five Killings >> Six Killings.

Recall, from (3)--(5) and transitivity, we have already established that deontologists are committed to:

(7) Five Killings > One Killing to Prevent Five >> Six Killings (Failed Prevention) >= Six Killings.

Clearly, (6) and (7) are inconsistent.  By transitivity, the magnitude of preferability between any two adjacent links of the chain must be strictly weaker than the preferability of the first item over the last.  But the first and last items of the chain are Five Killings and Six, which differ but moderately in their moral undesirability.  The basic problem for deontologists is that there just isn't enough moral room between Five Killings and Six to accommodate the moral gulf that ought to lie between One Killing to Prevent Five and Six Killings (Failed Prevention). As a result, they are unable to accommodate our moral datum (4), that Six Killings (Failed Prevention) is vastly dispreferable to One Killing to Prevent Five.
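To see the inconsistency at a glance, here is a minimal formalization.  (This is only a sketch: it adds the simplifying assumption, not made in the premises above, that overall moral preferability can be represented by a real-valued function v, with g the difference made by one generic murder, so that 'X >> Y' means v(X) - v(Y) > g.)

\[
\begin{aligned}
v(\text{Five Killings}) &> v(\text{One Killing to Prevent Five}) && \text{by (3)}\\
v(\text{One Killing to Prevent Five}) - v(\text{Six Killings (Failed Prevention)}) &> g && \text{by (4)}\\
v(\text{Six Killings (Failed Prevention)}) &\geq v(\text{Six Killings}) && \text{by (5)}\\
\text{hence}\quad v(\text{Five Killings}) - v(\text{Six Killings}) &> g,
\end{aligned}
\]

contradicting (6): the gap between Five Killings and Six Killings is exactly that of one generic murder, i.e. v(Five Killings) - v(Six Killings) = g, not greater than g.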

Setiya (2018, p. 102) noted a related "puzzle" about the ethics of killing.  Given constraints against killing (which he endorses), we must prefer two random murders over One Killing to Prevent Five.  While this initially sounds odd, Setiya goes on to defend this verdict as follows:

The situation in which someone is going to be killed unless they are saved [by wrongly killing another innocent] is as bad as the situation in which they are going to be killed. Ethically speaking, the damage has been done... It makes things worse, not better, that the button is pushed, so that the innocent stranger dies. That is why One Killing to Prevent Five is worse than Five Killings: it starts out the same and then declines. If we think through the temporal unfolding of events in One Killing to Prevent Five, we can explain why Five Killings, and thus Two Killings, should be preferred. (104-105)

This defense of side-constraints contains the seeds of its own refutation.  For the claim that "the damage has been done" when victims are threatened (in this way) -- rather than when the threat is realized -- implies, plainly enough, that little is gained by averting the threat and saving their lives (in this way).  But this is unacceptable.

In effect, the deontologist is committed to holding that, once an agent kills one in an impermissible attempt to prevent five other killings, it just doesn't matter all that much whether their attempt succeeds or fails. That is, it doesn't matter all that much whether five additional -- and entirely unmitigated -- killings occur or are prevented. This seems incompatible with treating killing as extremely morally serious: the very moral datum that motivated deontic constraints against killing in the first place. Deontic constraints are thus "paradoxical" in the strong sense of being self-undermining.

27 comments:

  1. is it really true that "(4) One Killing to Prevent Five >> Six Killings (Failed Prevention)"? my idea of deontology says that the agents in these two situations act equally wrongly -- they both kill one person with the intention of preventing five deaths. that the outcomes are different does not affect the deontologist's judgment. true, in the second situation there are five additional wrongs happening, but these are wrongs carried out by five additional agents. they are not relevant in evaluating the central agent's action.

    in general, i'm a bit confused about whether you are evaluating the goodness of a world-state, or the goodness of an agent's action? sometimes it seems like the latter, like when you write: "In the former scenario, Protagonist kills one person to prevent five other killings. The latter scenario contains everything that's bad or morally objectionable about the former, plus five additional, completely gratuitous murders, as Protagonist now fails in his attempt to prevent them." but sometimes it seems like the former, such as when asserting (4).

    1. Hi Erich, sorry that isn't so clear in this excerpt! I'm talking about what world-states we should prefer (taking into account the actions performed within those states).

      And I take it that one prefers that an act A1 be performed over an alternative A2 iff one prefers the A1-world over the A2-world. But one could have a strong preference of this sort even while regarding A1 and A2 as equally wrong. So, while (by premise 1) we must generally prefer permissible acts to impermissible ones, it's not necessarily the case that all (even "equally") impermissible acts are equally undesirable. It's preferable to at least achieve some good, even if that doesn't affect the wrongness of the act.

  2. Hi Richard,


    Those are powerful objections if the deontologist is committed to those results.
    However, I'm not sure why they would be so committed. Perhaps it is intended as an argument against versions of deontology predominant in philosophy?

    I'm asking because I do believe in side-constraints if I understand the idea correctly, but I do not find the argument problematic for my view - though I might be missing something.
    But for example, and with respect to assumption (2), I would first distinguish between:

    1. How good or bad a situation is.

    2. How much immoral or morally praiseworthy behavior it contains.

    Those two are very different, and even morally wrong behavior might prevent a far worse situation. For example, suppose that in S1 Jane is a member of a tribe of nomads. She gathers food for her family. As the tribe moves, they get near a cave with bats. Jane goes to gather food, and gets infected with a bat virus, which ends up wiping out the entire tribe of 150 people, who die slowly and painfully. In S1, Tom and Bob play a game that is not dangerous. In S2, Jane is going to gather food as she regularly does. But she doesn't, because Tom and Bob, two teenagers, are recklessly playing a game that involves throwing a rock around, and Tom hits her in the knee, causing an injury, for which she rests for a day. But the next day, the tribe moves on, so she gathers food at enough distance from the cave not to get near the bats. She does not get the virus, and none of them do. They keep going about their business, dodging a bullet they never knew about.

    In the above scenarios, all other things equal S2 contains more immoral behavior, because Tom and Bob behave recklessly. In S1, by the way, Jane is not at all reckless. She knows nothing about viruses, and never saw anything suggesting the possibility of an illness like that. But S1 is far worse.

    Which one would I prefer?
    Well, I would prefer a scenario without immoral behavior and also without Jane's catching the virus, but if I have to say which one among those two I prefer (in the sense of preferring that one of them happens), then it is S2. But I do not see any difficulty here for side-constraints. So, in short, I do not see why I should accept your assumption (2). In particular, you say that "Some wrong acts may be more morally undesirable than others, and there's no assumption here that undesirability aggregates in the way that consequentialists typically assume." But further, I would say that some things that are not wrong are worse than some wrong acts, in the sense of my example above, and that makes it permissible (and, I would say, sometimes even obligatory) to prefer the wrong acts.

    In other words, sometimes one should prefer (or at least it is permissible to prefer; that's enough to block the argument as I understand it) world 1 over world 2 even if world 1 contains more immoral acts than world 2, and also only morally worse acts than world 2. In fact, sometimes one should prefer a world with plenty of immoral acts to a world with zero immoral acts. In light of that, I'm not sure how to construe assumption (2) in a way that makes it plausible, other than something like 'usually', bystanders should similarly prefer, etc. But 'usually' doesn't seem to do the work you want to use (2) for.

    1. Hi Angra, I'd take unforeseeable side-effects to be just the sort of thing we can take to be excluded by the "in general" proviso. My cases don't involve any such unforeseeable effects, so it seems like (2), so understood, can still do the work I need?

      So I agree with your claim that "some things that are not wrong are worse than some wrong acts", but don't think that this interferes with the core argument.

    2. Fair enough, though I do not see why I should see any of the above as a reason to change my mind about side-constraints, so that is why I'm thinking maybe you're targeting versions of deontology predominant in philosophy, rather than general belief in side-constraints?

      At least, I do not see a problem for my rating of the scenarios, which is as follows:

      In terms of the moral wrongness they contain (where '>' indicates more wrongness):

      a. Six killings > Six killings (Failed prevention).

      I think the murder 'for the heck of it' (i.e., for fun) is probably more immoral than killing to save others (but assuming it's the other way around would not cause problems, though I would have to change the order).

      b. Six killings (Failed prevention) >= One killing to prevent five:

      One might think it's the same amount of wrongness all other things equal (other agents' choices, etc.), but it would in fact depend on how the killings are intended by the five others (for example, S3: A murders B for fun by beating B to death. S4: A attempts to murder B for fun, by beating B to death, and begins exactly as in S3. But in S4, A is stopped and knocked unconscious at t1; then S3 and S4 contain the same amount of moral wrongness on the part of A up to t1. But in S3, after t1, A still has an obligation to stop the beating, so as he continues the beating past t1, A engages in further wrongful behavior).

      c. One killing to prevent five >? Five killings.

      It looks more immoral because it's one more immoral choice to kill. However, there is the question of potential further immoral behavior by the successful murderers compared to the unsuccessful ones, depending on the killing method, which might be particularly gruesome, when the killings are stopped, etc.

      In terms of how bad the end situations are (assuming similarly painful killing methods, etc.):

      Six killings >(1) Six killings (Failed prevention) >(2) Five killings >(3) One killing to prevent five.

      It seems >(1) is slightly worse (due to the more immoral motive), >(2) is considerably worse, since the difference is one more immoral killing, though with a slightly less immoral motive than the others, and >(3) is much worse...as long as we assume that the killing to prevent other killings does not have a massively negative impact on social cohesion, which might happen if generalized, or something like that.

      I see no problem above. Granted, your deontologist seems to believe that the killing to prevent murders for fun is more immoral than the killing for fun. Fair enough, but that would only change the order above, to some extent: there is still room for a massive jump in terms of how bad the resulting situation is, even if not in how much immoral behavior it contains. That said, I'm not familiar with other details of the deontologist position you target, so I can't rule out that there is no room for a big jump there.

    3. Just to clarify notation: I was using 'X > Y' to mean that X is preferable to Y. If I'm reading you correctly, you've flipped this around, and use it to mean X is worse than Y? I'll stick to my original notation in what follows.

      Now, I gather you dispute the following ranking that I claimed deontologists are committed to:
      (3) Five Killings > One Killing to Prevent Five.

      I agree that if the deontologist can reverse this ranking, and prefer One Killing to Prevent Five (despite its greater wrongness), then they can escape my argument.

      Now, (3) follows from (1) and (2) together with a constraint against killing. In your original comment, you wrote, "I do not see why I should accept your assumption (2)." So I'll focus here on that.

      I think the first thing to say is that if you don't even generally prefer that constraints not be violated (even in the most straightforward cases involving no ignorance or unforeseen side-effects), then I question the extent to which constraints really matter on your view. Perhaps they determine "wrongness", but who cares about that if we should all want people to act wrongly whenever that'd be better in utilitarian terms? If you share the utilitarian's preferences about what we should want to be done, that seems like you've given most of the game away. Any residual disagreement seems minor in comparison.

      But I wonder if you really mean this. Presumably if you really prefer One Killing to Prevent Five over Five Killings, then antecedently you will hope that this preferred outcome is the one that eventuates. That is, you hope that Protagonist will commit murder (saving five). That's a really surprising attitude for a deontologist to have in this situation!

      Moreover, I wonder how Protagonist should respond to this normative fact. If everyone rightly (!) wants them to murder the one, in what sense could the action nonetheless be "wrong"? And why should they care more about avoiding such "wrongness" than doing what everyone has most moral reason to want them to do?

      I think the direction you're suggesting will be very difficult for the deontologist to maintain.

    4. Nice reply! Plenty of food for thought. My answer (split in two because I need more characters):

      First, I did flip the notation in one case. In the other, I was using ">" to talk about having more wrongful deeds, rather than being worse. Sorry if that wasn't clear. But yes, you got that key part of my objection right.

      In re: what we should want (other) people to do, I think this one requires more detail. Let us combine Six Killings with Prevention, to get One Killing For Fun That Prevents Five: the Protagonist chooses to kill Victim Six for fun. He knows that that will also prevent the other five killings - though it will not prevent the other would-be killers from acting, the killing effects of their actions will be blocked. But Protagonist does not care about whether other people kill for fun, one way or another. He just wants his fun killing Victim Six. So he goes ahead.

      So, let us compare One Killing vs. Five Killings.

      Which one contains more wrongful behavior?

      One Killing (we are assuming the method by which the five others kill is such that once they have acted, they do not need to continue until they cause death, i.e., no beating to death, for example), as there are six attempts to murder for fun vs. five attempts.

      Which is worse?

      Five Killings, as five people get murdered for fun vs. only one. It is much worse - though less than five times as bad because One Killing has more immoral behavior.

      Should we want Protagonist to murder Victim Six, thereby preventing the other murders?

      I believe we do not have that obligation. Indeed, it is permissible - maybe obligatory if one has a want at all - to want Protagonist to choose not to commit the murder, and also want the other five not to do it.

      What if the others already acted?

      I believe it is permissible to want Protagonist to choose not to murder, and - say - want something else to prevent the other murders (a meteor out of nowhere blocks whatever it is Victim Six's body was meant to block, an eagle flies in and lands in the wrong (right!) place, or whatever).

      Do we have an obligation to prefer the outcome of One Killing vs. Five Killings, leaving all other possible outcomes aside?

      I do not see why we would have an obligation to even entertain specifically One Killing vs. Five Killings. But assuming we do consider them, then it is still not clear to me there is an obligation to prefer it. Would a person (say, a Taurekian) behave unethically for having no preference one way or another? I'm not convinced of that, even if one reckons the outcome to be better. But let us say that there is. Even then, I do not see this as problematic for side-constraints.

      Regarding your question about what I hope, it is not the case I hope that One Killing to Prevent Five or One Killing For Fun That Prevents Five happens. I would say I hope no murders happen, and no victims die. Now, if I am asked to rank by my preference One Killing to Prevent Five (or One Killing For Fun That Prevents Five) vs. Five Killings, yes I prefer the former (all other things equal). But I'm not convinced I would be behaving impermissibly if I had no such preference.

      Now you say "That's a really surprising attitude for a deontologist to have in this situation!". I grant you that. And I do not know whether I qualify as a deontologist. If believing in side-constraints is enough, then I am. But then again, pick any kind of ethical view X - consequentialist, deontologist, or whatever X is, as long as it has a name in philosophy. Then it is a safe bet that some of my views will be really surprising for an X, whatever X is. Such is life, but I learned to live with that. :)

    5. In re: "in what sense could the action nonetheless be "wrong"?", I'm talking in the sense that Protagonist has a moral obligation not to do that, it would be unethical, immoral, etc., in the usual colloquial sense of those words.

      That said, I would say that not everyone rightly wants him to murder the one, for the reasons I mentioned in the first part of the reply. But let us say that having the preference for One Killing to Prevent Five vs. Five Killings in the sense I described is enough to count as 'wants', and that everyone rightly wants that. Then the question is:

      Q1: If everyone rightly wants agent A to X, is it possible that A has a moral obligation not to X?

      I would say 'yes'. For example, scenario S7: suppose A1 is a serial killer on the loose. A2 is a police officer. She knows A1 is the serial killer. She has conclusive evidence, but the evidence will not convince a court, because some of it was lost (testimony of witnesses later murdered by A1; but A2 heard it), and some of it was contaminated by the mistake of another officer. So, A2 sets up a trap. If A1 attempts to kill A2 for fun, with the same MO as he uses with his victims, he will be stopped and caught. Further, the evidence will connect him to the other murders he committed - his MO includes the use of the same weapon, etc. - so A1 will be punished for his crimes. Furthermore, he will no longer be on the streets.

      So, we may assume A2 rightly wants A1 to try to kill her, for fun, using the same MO, etc.; the 'everyone' condition is more difficult to meet, but then, it is so in the case of Protagonist and your scenarios too, so assuming it can be met in that case, it can in this one too. And yet, A1 still has a moral obligation not to attempt to kill A2 for fun.

      Granted, you might say that even then, A2 has a stronger reason to want A1 to turn himself in, confess, etc., so there is a course of action A2 should prefer in that case. But if that's the condition, we just might need a weirder scenario.

      S8: Imagine, for example, that Murderer released a virus that will slowly and painfully kill millions. Protagonist can only save them by murdering innocent Victim just for the fun of it, because that is the only thing that will make the means-to-end, vastly superhuman Rogue Singleton decide to stop the virus (the evaluative function was programmed poorly, evidently! But that already happened, and Rogue Singleton will allow no other strong AI), and the virus is too fast and lethal for human response to stop it before those millions of fatalities.

      Well, it seems to me that everyone may rightly prefer the outcome that Protagonist murders Victim for fun over the alternative given above, in the sense I specified. Sure, there are better outcomes, in which no one dies. But we are assuming one is somehow forced to choose between those - else, we are back to rejecting the hypothesis that everyone rightly wants him to do it.

      Q2: Why should he care more about avoiding wrongness than about doing what everyone has most moral reason to want him to do?

      In the moral sense of 'should' I think 'A should X' is equivalent to 'A has a moral obligation to X', and I think it's intuitive that a moral agent has a moral obligation to care about not breaking their moral obligations more than about other things! But I'm guessing that that is an 'all-things-considered should'? Please let me know if that is not the case. But assuming it is, I think on this matter my position is different from what you would consider moral realism (we would disagree about that!), so I'm not sure it's the answer you're looking for. I'll address it if you like.

    6. Interesting cases! I don't think they actually support counter-preferential obligations though. Here's why...

      S7: "we may assume [everyone] rightly wants [A1] to try to kill [A2], for fun, using the same MO, etc.... yet, A1 still has a moral obligation not to attempt to kill A2 for fun."

      Here I think we need to separate out actions from the motives from which they're performed (for the sorts of reasons Scanlon explains in his book Moral Dimensions). So let's consider trying to kill A2 as the relevant act to assess. Now this seems another ignorance case: we know, though A1 does not, that this act will result in (i) the serial killer's apprehension and (ii) no actual harm to A2. Given those facts, there's no reason to judge the act itself as wrong. (If A1 performed it for these very reasons: to get himself safely apprehended, without harming anyone, that would be very virtuous of him!) Any criticism thus belongs not to the act itself, but just to the motives from which we expect it to be performed by the ignorant agent. When A1 attempts murder with ill intent but in an objectively harmless fashion, their act is not objectively impermissible but just blameworthy.

      Similar remarks apply to S8. Any sensible deontological view is "moderate" in the sense of allowing constraints to be overridden when the stakes are sufficiently high. So they would regard killing one innocent as permissible if that's truly the only way to save millions. (Against the absolutist, I would offer the same argument that everyone rightly wants Protagonist to infringe upon the constraint, and that's surely more objectively important than whatever the absolutist is talking about with their "obligation" talk.)

      You have the added quirk that Protagonist must perform this act from bad motives. As Scanlon argues, we can't generally choose the motives from which we act, we can just choose what action we perform. So it'll be a matter of constitutional luck whether Protagonist is evil enough to pass muster with the Rogue AI and thereby save the world -- hopefully he is! But that's all a separate question from whether the act of killing the innocent person is objectively permissible. (It is permissible if Protagonist happens to have the requisite evil dispositions, and otherwise it isn't -- on the assumption that no good would be achieved if the act is done from other motivations.)

      "...I'm guessing that that is an 'all-things-considered should'?"

      Right, the question is whether moral obligation, so construed, is something that could truly warrant caring about. It seems too much like an arbitrary system of rules, if its guidance doesn't track what we all have most impartial reason to prefer.

    7. I see; that's interesting. The framework is now very different from the way I see things (metaethics aside). A key issue is that I think an action is (morally) impermissible iff it is immoral iff it is wrongful iff it is blameworthy. So I do not think there are blameworthy actions that are permissible. From what you say, it seems this answer is not available to your deontologist - not without substantially changing their view. Further, if I'm getting this right, the deontologist will be willing to say that in S8, it is morally permissible for Protagonist to kill Victim purely for fun. I admit I'm very surprised! I take it to be an obvious case: he's murdering Victim purely for fun! Of course that's morally wrong, impermissible, etc.! Well, it looks intuitively obvious to me. In fact, it would remain impermissible if we replaced 'murdering' for fun by, say, 'punching in the face' for fun - though the degree of immorality is of course much less.

      Also, it is difficult for me to see how he can choose to kill or not to kill, but if he has a choice to kill, the only choice is to kill purely for fun. It seems to imply people don't have enough free will not to engage in blameworthy behavior: consider S9, where Murderer wants to murder millions, purely for fun. Protagonist can choose to stop that by killing Murderer, at no personal risk. But - alas - Protagonist can only choose to either kill Murderer exclusively for fun, or not to kill Murderer. So, if Protagonist chooses to kill Murderer, then Protagonist is blameworthy because he did it exclusively for fun. If Protagonist chooses not to kill Murderer, he is also blameworthy for failing to stop the murder of millions for fun, when he could do it at no personal cost! (that seems obviously blameworthy to me, but please let me know if you think it's not). So, it looks like whatever he does, he is blameworthy, and he has no free choice to make for which he does not deserve blame. I find that at least odd.

      That said, within this framework, I haven't been able to find a way out for the deontologist. It looks to me that your dilemma gets them after all! Then again, I can't be sure that's because the deontologist has no good answer, rather than because I'm not familiar enough with the framework to find a good answer, so I'll have to give it more thought (...even so, I'd say the deontologist can give up on some of the other features of the framework rather than the side-constraints, and agree with me!)


      In re: impartial reasons: I think we have a built-in evaluative and/or rules-based system, namely morality, which is what we human monkeys got. I do not believe there is some kind of cosmic, species-independent system that sufficiently smart aliens and/or AI would likely have too. And I don't think it's arbitrary in the usual sense of 'arbitrary', because the usual sense already assesses arbitrariness from within that system. But unlike the permissibility of killing purely for fun, I knew your deontologist would disagree with my view on this one, so I didn't want to go down that road.

    8. Just to briefly help motivate the distinction between permissibility and praise/blameworthiness: note that this is necessary in order to make sense of phenomena like (i) doing the right thing for the wrong reasons (e.g. rescuing a drowning child purely in order to make your rival jealous of the esteem you subsequently receive) and (ii) harmless but ill-motivated and hence blameworthy acts (e.g. voodoo).

      On choosing motivations: I encourage you to check out Scanlon's book! But yeah, it's a controversial view. I don't think my response strictly depends upon those further claims. The (more standard) separation of objective permissibility and motive-based blameworthiness should suffice.

      Thanks for the illuminating discussion!

    9. Oh, I should add: "inescapable blameworthiness" is equally a challenge for the motive-choosing view. Just return to your S8 where the Rogue AI will kill millions unless Protagonist kills one from evil motives. It would, as you say, be blameworthy for Protagonist to let millions die when he could have prevented this at no personal cost (and much smaller moral cost). But if he kills one from evil motives, that too is blameworthy (you surely agree). So there's no escape. (Unless he can't choose his motives, in which case he may be blameless if he lacks evil motives altogether, and is thus unable to save the millions, as would be true of a virtuous agent. This would, of course, be a very morally unfortunate instance of virtue! But it is, on my view, not the agent's fault.)

    10. In the case of 'doing the right thing for the wrong reasons', I think that's a manner of speaking, but the person behaves immorally. I also think that voodoo is immoral, assuming they intend to, say, kill an innocent just to get them out of the way or something like that - they're just attempted murders with an ineffective means, assuming they sincerely believe they'll succeed.

      That said, I could go with the distinction, but in that case, it seems to me that what matters in moral discussions would be blameworthiness, not immorality!

      In the case of S8, Protagonist does not need to know about the millions (and I was thinking he did not know). If we modify the scenario so he is told, then it seems to me any action intended to save the millions will not be purely for fun (even if his choice were to change his motives, since that too would be a motive!), so it seems to me Protagonist cannot save the millions in that manner; motives cannot be chosen in that manner. He could refrain from doing anything, and he would not be to blame I think. A modified scenario would raise the same problem, though. However, after reconsidering the S9 scenario, I realize there is a way out of the problem, so I made the wrong assessment earlier: if Protagonist can only choose to kill only for fun or not to kill, I would say he is not blameworthy for choosing not to kill (and I would add has no obligation to kill for fun, and he has no obligation to do what he cannot do, which is to kill to save millions). Protagonist is a weird agent!

    11. This comment has been removed by the author.



    12. And thank you for the interesting points and replies as well!
      I do not have time these days to read the book or study these matters as much as I would like to, but I will keep that in mind.

      At any rate, very nice post.

  3. Richard, I doubt your argument will persuade a hard-core deontologist, who is likely to say that morality simply isn't about which consequences we prefer, and will thus deny any consistency relation between moral rightdoing and your axiological analysis.

    But to play devil's advocate, I have a somewhat more concrete objection to your argument. Suppose that, as Socrates said, "it is better to suffer the unjust thing than to do it". This implies it is worse to be a murderer than to be killed. I think your argument implicitly assumes that the main problem with the killings is the actual deaths, rather than with people formulating the intention to kill. But this is presumably not going to be granted by someone who thinks it is wrong to kill 1 to save 5.

    To turn this into a crudely consequentialist calculus, let's suppose that becoming a murderer (or an attempted murderer) is worth -10 utiles, while being killed is worth -1. And let's suppose (though I don't at all believe this part myself) that the intrinsic badness of intending to kill somebody is in no way mitigated by any benevolent consequentialist-style motivations for the crime.

    Then we have the following utility assignments:

    Five Killings: -55 utiles (-10 for each of 5 murderers, -1 for each of 5 deaths)
    One Killing to Save Five: -61 utiles (this is importantly underspecified, but in the most natural scenario, where the killing causes the 5 murder attempts to fail, without changing the intention of the 5 murderers, there are now 6 individuals stained with the bad character of a murderer, even though only 1 of them succeeds in killing their victim).
    Six Killings (Failed Prevention): -66 utiles
    Six Killings: also -66 utiles (since we aren't giving credit for benevolent motives)

    This particular assignment of utilities satisfies all of your criteria except for (4). But the reason it fails to satisfy (4) is logically benign--the gap of 5 utiles between -66 and -61 arises purely from the 5 lives saved, but without any benefit to the perpetrators' "souls", which are what the deontologist chiefly objects to. In other words, the argument for (4) is invalid because it equivocates between a mere killing, and an actual murder (which is, in this toy moral system, 11 times worse).
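
    To make the arithmetic easy to check, here is a quick tally of this stipulated scoring (a sketch in Python; the per-scenario counts of murderous intentions and deaths are just the ones assumed above):

        # Toy scoring: each murderous (or attempted-murder) intention is worth
        # -10 utiles, each death -1. Scenarios are (intentions, deaths) pairs.
        scenarios = {
            "Five Killings": (5, 5),
            "One Killing to Save Five": (6, 1),  # six murderous intentions, one death
            "Six Killings (Failed Prevention)": (6, 6),
            "Six Killings": (6, 6),  # no credit given for benevolent motives
        }
        for name, (intentions, deaths) in scenarios.items():
            print(f"{name}: {-10 * intentions - deaths} utiles")
        # prints: -55, -61, -66, -66 utiles respectively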

    1. Hi Aron, very interesting! The view you describe sounds to me like a weird form of consequentialism (where bad intentions are the biggest "bad" to be minimized). For it would seem to imply that the agent should kill one to prevent the five would-be murderers from ever forming their bad intentions in the first place (resulting in a superior score of -6 utiles).

      That said, I take it to be morally unacceptable to prefer that nine people die in a tragic accident rather than that there be one unsuccessful attempted murder! (Compare Angra's comment above: some things matter more than wrongness!) If you could prevent one or the other, for example, it would be pretty obscene to let nine die in order to prevent someone from forming bad intentions that you know will ultimately cause no harm to others.

      re: "the hard-core deonotologist, who is likely to say that morality simply isn't about which consequences we prefer,"

      I agree that many will want to dodge the argument in this way. But I would press them on the fact that we can form preferences, and there are normative facts about what preferences we ought to have, and (as per premise (1)) these had better not be inconsistent with the normative facts about what we ought to choose.

      (Of course, the full paper will say more in support of (1), drawing in part on the sorts of arguments I offer here. So I'll be very interested to hear more about how deontologists might best attempt to justify rejecting this premise!)

    2. "The view you describe sounds to me like a weird form of consequentialism (where bad intentions are the biggest "bad" to be minimized)."

      I agree! My goal here was not so much to present a framework that a deontologist would agree with, but to try to see how much of a specific deontological intuition can be retained in a consequentialist framework. But one can't keep everything.

      "For it would seem to imply that the agent should kill one to prevent the five would-be murderers from ever forming their bad intentions in the first place (resulting in a superior score of -6 utiles)."

      Again, I agree, except that I don't know how you got to -6; I think by my stipulated scoring system there are actually -11 utiles here (1 murderous intention, 1 death). But this is still superior to any of the other scenarios discussed.

      Now if you add epicycles regarding respect for people's freedom, and positive value for virtuous intentions, then this could render immoral certain methods of preventing people from forming their bad intentions, but it would be hard to condemn as immoral all ways of doing so (for example, ensuring that the 5 individuals in question are raised with sound moral education that makes them unlikely to murder for fun). Thus, on this moral system, it would probably be right to kill 1 innocent nun to cause 50 small children to be raised to adulthood by nuns, rather than by a gang of murderous bandits.

      "That said, I take it to be morally unacceptable to prefer that nine people die in a tragic accident than that there is one unsuccessful attempted murder!"

      This weighting would make more sense in a religious system where there is an afterlife, for which moral character matters (although, in reality the ultimate effects on one's moral character from a passing intention to kill probably depend a lot on the individual psychological circumstances, and could range anywhere from trivial to severe.) I agree that if there is no afterlife, it's plausibly no worse to be immoral than dead. (Although I suppose this would leave open the question of how the Socratic maxim fares in nonterminal cases, e.g. One Slap to Prevent Five, One Rape to Prevent Five, etc.)

      While I myself believe the Socratic maxim, I should emphasize that I personally reject deontological reasoning (and have found your own blog helpful for thinking about why I do so). My own moral system is more like a synthesis between consequentialism and virtue ethics (which are hopefully more compatible since they are both teleological systems). I do think that in a typical moral decision, the back-reaction on our own character is usually more important than the external effects (but I don't claim this is true without exception, especially in thought experiments involving mad scientists!). However, I typically take the consequentialist side of thought experiments designed to distinguish between consequentialism and deontology.

      Ceteris paribus, I think it is morally commendable to Kill One to Save Five, and hence (contrary to my toy model), such welfare-maximizing actions do not actually stain one's character at all! (At least if one is careful to do it for the right reasons, without producing bad habits that will lead to trouble later on.) I say ceteris paribus, since I think there are moral values more important than not dying young, so in general whether it is right to Kill One to Save Five is a highly context-dependent question, depending on the details of the narrative specification, including somewhat holistic questions about innocence vs. guilt, the significance of the action for the participants and spectators etc.

  4. I've been thinking about this argument a bit more, and it occurs to me there is a simplification:

    You say that "Deontologists claim that Protagonist acts wrongly in One Killing to Prevent Five" and thus are committed to (3). This, I take it, is due to (1) and/or (2) if I understood those correctly. Suppose then that (1) and (2) are true, so the deontologist is indeed so committed. Instead of One Killing to Prevent Five, we use One Killing just for Fun that Prevents Five (in the conditions I explained earlier, so the agent knows about the consequences). With your original notation for the order, we have:

    (3') Five Killings Just For Fun > One Killing Just For Fun that Prevents Five.

    (4') One Killing Just For Fun that Prevents Five >> Six Killings Just For Fun.

    And you don't need (5), or the arguments supporting it. There is also no room for arguing that a twisted benevolent motive is somehow bad for one reason or another. The motive of all agents here is the same: they kill people for fun, and do not care whether others also kill. But there isn't enough room between Five Killings Just For Fun and Six Killings Just For Fun to get ">>".

    On the other hand, the deontologist can turn the tables and point out that the above argument, if correct, not only takes down their claim that killing one to prevent five is morally wrong, but also any view that a killing just for fun that prevents five is morally wrong. From my perspective, this answer seems decisive regardless of the other points: killing for fun (by a human) is obviously always immoral, so something must be wrong with some of the premises of your argument! But within your framework, I take it you would reply that one killing for fun that prevents five is not immoral even if it is blameworthy?

    At any rate, I do not see a disadvantage in this variant (rhetorical issues aside, and assuming the deontologist also entertains the One Killing Just For Fun that Prevents Five variant and will raise it as an objection if they find it persuasive, regardless of whether you mention it).

    1. Yeah, that works! Though I don't think anyone would dispute (5), so I'll just have to think about whether the extended exposition helps in any way. Nice to have an abbreviated version to consider, at any rate. Thanks!

    2. (And yes, I'd say that one killing to prevent five is permissible -- because desirable, with good reasons that exist to justify the act -- and simply blameworthy if instead done from bad motivations, e.g. "for fun".)

    3. For comparison: consider a sadistic doctor who vaccinates his patients because he enjoys the momentary pain they get from the jab. Is it wrong for the doctor to vaccinate people, given that his motivation is to enjoy the pain he causes? Well, the doctor is clearly a bad person, and he's acting for the wrong reasons, but still, the act itself -- vaccinating people -- is fine and good, supported by excellent justifying reasons, even if they are not the reasons that happen to motivate the doctor.

      (Further discussion of related issues here, including the point that it'd be odd for the doctor to need to introspect on his own motives before determining whether or not he ought to vaccinate his otherwise vulnerable patients. That makes morality too narcissistic. The doctor should be guided by the important matter of how his acts affect others, not the petty question of his own motives.)

    4. Great! I'm glad it works. And thanks again for your post too.

      P.S.: I still don't think there is blameworthy but morally permissible behavior, but it's not a matter to argue for here. In any event, given that the deontologists who will likely address your paper believe there is, I reckon they will have trouble ruling out that this example falls into that category, so they'll look for a different objection; and it won't be easy for them. I'm very curious about what answer they'll give!


    5. Sorry; I sent the previous post before I saw your post with the doctor example.

      Briefly, I would say that if the doctor vaccinates them only for the pleasure of causing them pain, yes, I think it is wrong. If he has multiple motivations, it depends on what they are, and it's more difficult to say in any event.

      But imagine an instance of 'Torture an innocent person slowly and very painfully to death, that prevents 50 instances of that'. The perpetrator is fully informed about the effects, but he does not care. He just wants to torture an innocent person slowly and very painfully to death, because he finds it very entertaining. When others want to punish him for his behavior, he replies 'I did nothing wrong. It would be unjust to punish me'. Does the objection succeed? (assume the others respond in a proper manner)
      If it does succeed, he gets away with the most horrific blameworthy behavior, and rightly so!
      If it does not, instead of punishing people for their wrongdoings, we talk about punishing people for their blameworthy behavior...and then blameworthiness looks to me like a more central concept in everyday moral talk than wrongness.

    6. I'd be fine with taking blameworthiness to be "more central" to the backward-looking question of how we should react to others' actions (whether to punish them, etc.). I'm here more interested in forward-looking properties like whether an act is more choice-worthy than another.

      So if you're on board with separating choice-worthiness (whether or not we call it 'permissibility') from blame- and punish-worthiness, then we may not substantively disagree here.

    7. Maybe you're right, though after re-reading your previous post, I'm not sure we don't have a slight misunderstanding. If the doctor is guided by the important matter of how the vaccination affects others, then as I understand it, he is not doing it only because he enjoys the momentary pain they get. When I say "just for fun", or things like that, the goal is to have fun, and nothing else - even if saving people is a predictable and actually predicted side-effect that the agent, say, does not care about. So, for example, I think the doctor should vaccinate others because of the effects, etc., but not just for fun. And if he can't help enjoying the pain he causes for some reason, he should tell an assistant to do the vaccination, but still have people vaccinated for their sake and the sake of third parties. If that is impossible or the cost is too high (e.g., there is no assistant, the assistant is not competent, etc.), then he should vaccinate them in order to protect them and other people, etc., even if he cannot help also having sadistic pleasure while doing it. But I don't think he would be blameworthy, either - though he should take measures not to be in that position again, if doable and the cost for others isn't too high. On the other hand, if the doctor is aware that vaccination helps others but doesn't care about helping others one way or another, and chooses to vaccinate purely for the sadistic pleasure he gets out of it (i.e., in order to get sadistic pleasure), I think he behaves in a blameworthy manner - and, I would say, wrongfully, but we might be using that word differently here.

      So, back to the distinction you suggest: how we should react to others' actions vs. whether an act is more choice-worthy than another, I think I am on board with making a distinction, however the latter is construed, but I'm not sure how you're construing "act" in the latter. In particular, if it doesn't involve motive, it looks to me it's a choice between states of affairs, i.e., we - as third parties - are assessing either which state of affairs is better, or which one we should prefer, or something along those lines. Before I go on, please let me know if I'm getting the distinction you're making right, and if not, what I am missing.

