Tuesday, November 12, 2013

Savulescu's Epidemic: Killing vs. Letting Die

I've previously discussed why I'm unmoved by the standard "counterexamples" to consequentialism.  Now Julian Savulescu offers a cleaned-up variation on the "transplant" case where the consequentialist response seems (to me at least) intuitively right.

Epidemic: An epidemic ravages the population, causing unconsciousness from which only 1 in 6 people will recover. A day after falling ill, unconscious patients can be tested to discover whether they produce the necessary antibodies for survival. One such patient can, if their blood is extracted, produce enough antibodies to save five patients who would otherwise die -- but the one will not survive the process of extraction. Which policy would you (antecedently) prefer:
  • Inaction -- 1 in 6 will recover, all others are left to die.
  • Extraction -- 1 in 6 are killed, all others are treated and recover.
Extraction strikes me as clearly the preferable policy (does anyone disagree?), despite the fact that it involves killing as a means -- often thought to be the most difficult kind of utilitarian sacrifice to justify.
A couple features of the case stand out:

* The antecedent distribution of harms and benefits is more readily acceptable as "random" and morally arbitrary, compared to transplant cases where it is easy for us to think of ourselves as healthy, and those dying of organ failure as "others" who may (for all we know) be partly responsible for their ill health.  Our reluctance to give up a position of privilege (especially if we can blame the victims) may obstruct our moral reasoning in those cases.

* It is unique and population-wide, better enabling us to consider the case on its own terms rather than being distracted by implicit concerns about downstream effects (e.g. rogue organ harvesting making people scared of doctors).

For these reasons, it strikes me as a better -- more pure -- case than most that ethicists discuss in this context.  Do you agree?


  1. Hi Richard,

    To me, it matters how big the population is. If the population consists of six people and I would have to do the killing of the one patient with the antibodies, then I think that I should refrain from killing that one and let the other five die instead. If the population consists of six billion people such that Extraction results in an extra four billion lives being saved and I am the dictator who must decide whether to implement the Extraction policy or the Inaction policy, then I think that I should implement the Extraction policy, thereby killing one billion to save five billion. But all this shows that the constraint against killing has a threshold. So I'm not sure how this helps make things clearer.

    1. Hi Doug, that's interesting. I would've expected that if the moral badness of one killing (of this type) outweighed the benefit of saving five lives, then the same would be true of each such killing, so that the moral badness of a billion such killings would be proportionately greater than the benefit of saving five billion lives.

      (I took the usual "threshold" idea to be that there is some (large) number n such that killing one to save n lives can be justified. It's very different to claim that there's a threshold at which killing n to save 5*n becomes justified.)

    2. Hi Richard,

      It may be that my intuitions are atypical. But those are my unreflective intuitions. The thought underlying my intuitions might be that the moral badness of my killing 5 is not five times worse than the moral badness of my killing 1. The extra agent-relative badness of my killing, over and above the agent-neutral badness of some general killing, doesn't aggregate in the way that the agent-neutral badness of killing does. Once I've gone down the path of killing people for the sake of saving more numerous others, it doesn't much matter how many killings of this sort I commit, except insofar as more killings involve more deaths.

  2. I think going unique and population-wide increases, rather than decreases, potential distractions and confounding factors. For instance, once we go population-wide, we lose a clear sense of personal responsibility, and so we have the distraction of the fact that I could think a policy preferable, in the sense of less damaging to me and mine and others like me, while nonetheless regarding it as not preferable in another sense, because I don't think I have the right to decide such a policy. Or, put more strongly: one can think it preferable, were anyone to have the right to do it, without thinking that anyone in reality could ever have the right to do it.

    Likewise, most people distinguish between right and wrong-but-understandable, on the principle that truly terrible circumstances can make major wrongs minor. And, in addition, on this scale broadly piacular (as opposed to strict moral) guilt becomes a downstream effect, and an element of moral reasoning, that one can no longer abstract from. (The most common case of piacular guilt in the broad sense is survivor's guilt, but more generally I mean cases where one feels responsible and in need of atonement even while knowing that there is no actual moral culpability.) This can be related to the prior point as well: living with the ravages of that many people dying, under circumstances obviously conducive to piacular guilt, can look pretty well at the very limits of what even a very heroic human being could endure, regardless of the rights and wrongs of the action itself.

    Not all of these will be equally important if the only interest is just to pick a policy, but my primary point is not the right answer to the case but the fact that I think the move to thinking in terms of entire populations problematizes their value for thinking specifically about the killing/letting die distinction. And we start running into the problem that many people will start saying, "Well, obviously it's wrong to use killing as means in this case, but given that human endurance is not infinite, and given that level of strain, we can easily see that anyone put into that situation who made that choice would be massively more unfortunate than they were actually culpable, deserving compassion rather than condemnation." And at that point we are dealing with too many confounding factors to find the case usable for general moral questions. (Which would be related to another point, which is that even if the killing/letting die distinction breaks down here, the circumstances are extreme enough that one might question whether the breakdown generalizes to more ordinary cases rather than just being a breakdown under extreme conditions. A case that raises this worry doesn't really help the consequentialist at all, at least without some rather extensive additional argumentation.)


  3. Hi, Richard,

    I'm not entirely sure how you construe consequentialism, but with regard to your question as to whether anyone disagrees with the policy in question, I do.
    In particular, going by my moral intuitions, I would say that one should allow a person to recover, rather than using her for saving other people, generally even if the law says otherwise. Of course, there is – as usual – an "all other things equal" clause, or something like that, since there are other factors that might play a role (e.g., if the unconscious person who will recover on his own is a convicted serial killer, and his blood can be used to save the children of some of his victims, I would be in favor...all other things equal again, etc.).
    Personally, I'm pretty sure I would refuse to engage in the extraction policy barring a very serious threat (like a credible threat of execution), and even in that case, I would consider myself at very serious moral fault if I yielded.

    An exception to the above would be cases in which one has sufficient reasons to believe that a specific person prefers to sacrifice her life to save others.

    That said, I think a person making the decisions should do more than that – for instance (depending on many factors), start a campaign in which all conscious adults are asked to say whether they would be willing to donate their blood – which would result in death – to save others, in case they become unconscious but have the stuff required for survival.

    Also, answering the question would be mandatory, and even though those who fail to answer would not be subject to extraction and death if they become unconscious, they would have to pay heavy fines for failing to take a stance.

    There are plenty of other factors, but I think that that policy (and, of course, a lot of money for research, to try to find an alternative way to make the antibodies, perhaps using GM non-human animals or something) would be acceptable – and doing something would be obligatory, so not literally inaction, either.

    My intuitions are similar if, instead of 6, the scenario said (for example) 60 or 600, though perhaps the culpability of the extractor would usually diminish (though that too would have to be assessed on a case by case basis), and also it's probably more frequent that one has sufficient reasons to believe a person would prefer to make the sacrifice, even if they did not state their preference beforehand.

    By the way, would you favor extraction if it were 2 instead of 6?

    1. If each "extraction" saved two lives? Yes, sure.

      I'm very moved by the fact that it would be prudentially rational for everyone, before the epidemic strikes, to vote for a policy of Extraction: effectively, consenting to their own possible sacrifice on the condition that everyone else does likewise. This way everyone increases their expected chances of survival. (This illustrates the more general point that utilitarian sacrifice is preferable from behind a "veil of ignorance" -- the randomness of the epidemic effectively serves to build such a "veil" into the scenario itself.)

      In light of this, to morally privilege the one in six who turn out to be arbitrarily privileged by natural events just strikes me as a kind of status quo bias.

      P.S. As in the classic Prisoner's Dilemma, asking people to consent individually (without any guarantee that others will do likewise) is just asking for a morally suboptimal outcome. Ex ante (before we know who is disposed to produce the antibodies when infected), it's a kind of collective action problem. And ex post, for the 1 in 6 to now refuse to go along with what they would have consented to ex ante, is just special pleading.
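      The ex ante comparison above can be made concrete with a quick sketch (illustrative only; the function name and the idealized exactly-1-in-6 antibody rate are my assumptions, not from the original post):

```python
# Comparing ex ante survival chances under the two policies in the
# Epidemic case, assuming exactly 1 in 6 patients produce antibodies
# and each extraction reliably saves five others.

from fractions import Fraction

def survival_chance(policy, antibody_rate=Fraction(1, 6)):
    """Ex ante probability that a randomly chosen patient survives."""
    if policy == "inaction":
        # Only the antibody-producers recover; everyone else dies.
        return antibody_rate
    if policy == "extraction":
        # Antibody-producers are killed; their blood saves everyone else.
        return 1 - antibody_rate
    raise ValueError(f"unknown policy: {policy}")

print(survival_chance("inaction"))    # 1/6
print(survival_chance("extraction"))  # 5/6
```

      Behind the "veil", every patient's survival chance rises from 1/6 to 5/6 under Extraction, which is why antecedent consent to it looks rationally mandatory on these stipulations.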

    2. On the issue of the chances of survival, I'm not sure maximizing them would require voting for the extraction policy. For example, a healthy young person in a society of mostly old people may well have a better shot by opposing extraction, if she properly reckons that her chances of being capable of producing the antibodies are far better than the rest's (antibodies might not work like that, but they do not work as in the example, either, afaik), and the chances of benefiting from what happens to others are low.

      But if it's true that each person would maximize her chances of survival by voting for the extraction policy, I would still be intuitively inclined to say that voting for such a policy is immoral.

      Briefly speculating on the features of the behavior that trigger my moral assessment, I would tentatively suggest that it may be the way those engaged in extraction use and kill people who do not want to be so used and killed (like those who vote or would vote against extraction, ex ante). While those voting for the policy do not know in advance who would be killed, the result of the policy is that the extraction would be made on some people, and those making the extractions would know in advance (in advance with respect to making the extraction) that they will be using and killing people.

      With regard to asking people to consent individually ex ante, I don't agree that that's morally suboptimal, intuitively. The fact that it's a case of collective action does not resolve the problem, since the policy of extraction strikes me as the morally wrong collective action. Respecting their choice strikes me as proper.

      All that said, and assuming maximizing one's chances of survival requires voting for the extraction policy, I'm not sure why practical rationality would require that. What considerations are you including as relevant to practical rationality?
      For example, what if a person will suffer serious distress for being part of the extraction scheme? Would it be in their interest to maximize survival chances regardless of that suffering?

      At this point, I'm beginning to suspect that this is a case of persistently divergent moral intuitions, but – just to test the matter a bit more, if you don't mind – how about the following scenario?

      The chances of dying are as in the original, but the illness does not make people unconscious.
      Instead, it gives them a very distinctive rash on their neck, and serious headaches. They will show symptoms for 3 days, and then the symptoms end on their own. After the symptoms end, when the patients go to sleep (whenever they do that), 5 in 6 never wake up, whereas 1 in 6 wakes up and has no further problems.

      Immediately after showing symptoms, patients can be tested to discover whether they produce the necessary antibodies for survival. It takes 48 hours to get the results of the test. One such patient can, if their blood is extracted, produce enough antibodies to save five patients who would otherwise die – but the one will not survive the process of extraction.
      The extraction and use process can be done rather quickly, and the antibodies can be stored.
      So, here an extraction policy might be:
      a. Mandatory testing after symptoms first appear. Those who were not tested for whatever reason are left on their own, so they get a 1/6 chance of survival.
      b. Those who have the right antibodies are painlessly killed in their sleep after the symptoms end.
      c. Those who do not have the right antibodies are cured after their symptoms end.
      d. The patients who were tested don't know in advance (i.e., before falling asleep) who's going to be killed and who's going to be saved.

      Granted, some people might suffer due to the mandatory testing. But then again, in the original, some people might suffer because they might get killed, and/or because other people in their society engage in the extraction policy, so suffering occurs nonetheless.

    3. Back to the original scenario, an alternative policy would be as follows:

      People are asked whether they want to participate in the extraction scheme.

      1. Those who choose to participate, if they become ill, get tested. If they have the right antibodies, they are killed, to save others. If they do not, they are saved with the antibodies taken from other participants in the scheme.

      2. Those who do not choose to participate, if they become ill, are left alone.

      Under that scenario, my intuition is that there is nothing wrong about taking part in the extraction scheme, and furthermore, at least for most people, their chances would be better by taking part in it.
      But those who do not choose to take part are left alone.

    4. Right, that seems a more comfortable option, if possible. But suppose that it isn't possible -- there isn't time to get prior consent from everyone. Policymakers must simply choose, on behalf of the population, to force everyone into either your option 1 or your option 2. I claim: the fact that it would be rational for everyone to antecedently choose option 1 (and positively irrational to not participate, as in option 2) suggests that enforcing extraction is morally better than enforcing inaction (if those are our only two options).

    5. Interesting scenario.

      I'm not entirely sure it would be irrational of all of them to choose not to take part, for a number of reasons (see * below), but even assuming it is, it turns out that my moral intuitions on the matter have not changed, and I don't know how else to assess moral issues. I can input more information, including your point about the irrationality of the alternative choice, and then make an intuitive assessment (that's actually what I'm doing), but my intuitive assessment is still that it would be immoral to apply the extraction scheme, except for those who choose to take part (if it's not possible to ask everyone, then by asking those it's possible to ask), so our different moral assessments persist for some reason.

      Still, how about the following scenarios?

      In both cases, killing 1 person saves 2 (not 5):

      1. Let's say the epidemic comes from a neighboring country, and they know that it will probably cross the border, though it hasn't yet.

      So, some NGO sets up a website where people can say whether they wish to take part in an extraction scheme – or something like a website; the point is that people can express a choice, and there are safeguards to make sure that the information about choices is correct.

      As it turns out, 90% choose not to take part. Then, people fall unconscious, except for those (like key members of the government) in secure areas, and some very rare cases of immunity.
      Would it be morally acceptable for the President to choose to implement the extraction scheme anyway?

      2. Let's say that there is no time to ask people, and 99% are already unconscious. However, there is an advanced machine that can scan the brain and make an assessment as to whether a person would choose to take part in the scheme, with an accuracy such that it gives only 1 in 1000000 false results (counting false positives and negatives).
      Would it be morally acceptable for the President to make everyone participate in the scheme, regardless of what they would choose?
      Would it be morally acceptable to do so if, say, 90% would choose not to take part?

      If your answer to all of those questions is "no" (else, we would have to come up with other scenarios), then whether it would be irrational of them to take part does not make extraction morally acceptable.

      My moral intuitions say that the irrationality of not taking part is also not decisive in your scenario, in which extraction is immoral. Granted, your moral intuitions say otherwise, so we still have a moral disagreement.

      Granted, also, it might be suggested (for example) that the fact that we know at least what decision they would make (in scenario 2) makes a morally relevant difference in these particular cases, and that if we do not know what they would choose, extraction is morally acceptable in your scenario...but then, my intuitions say it's not a sufficient difference (i.e., intuitively, I would say that extraction in cases 1. or 2. is plausibly morally even worse than in your scenario, but the other is wrong too).

      * For example, people normally have intuitive knowledge about their own health situation, even if they cannot always articulate it, and the "1 in 6" scenario (or similar ones) overlook that information.

      In particular, the "1 in 6" scenario does not factor in, say, whether a person is (prior to the epidemic) a young, healthy person, or an old, weak one. It may well be plausible for a person in the former category not to make the 1 in 6 probabilistic assessment, even if that would be the statistical probabilistic assessment when considering the entire population as a category, ignoring further information.

      Moreover, in addition to their being healthy and young, some people may have had experiences with other illnesses, where they tended not to fall ill when other people similarly exposed did.

      Still, I guess you can get around that by giving more details about the 1 in 6 condition.

    6. I've been thinking about it a bit more, and I'd like to ask a question: would your assessment be the same if, instead of death, the harm inflicted on the targets of the extraction or similar were much worse, like (say), hours or days of horrible pain before death, or even (say) infinite Hell?

      Granted, one would need different scenarios, but for example:

      Let's say that there are two rooms. Two people are unconscious in room 2, and 1 in room 1, in a research facility.

      They fell unconscious due to the release of a sedative gas in the ventilation system, in an accident.
      The container for some deadly chemical weapon (they were working on ways to neutralize it) in room 2 was affected in the accident, and will open in 5 minutes, releasing the chemical agent.

      That will awaken the two people in the room, and kill them slowly. It will take them one hour to die, suffering horrendous pain in the process.

      It's not possible for anyone in the facility to open either room for at least two hours, when their locking mechanism will shut down. After those two hours, the rooms will only remain closed until someone opens them, either from the inside or from the outside.

      Bob, who is outside the rooms, may reset the system, and that will allow him to open the door of room 2, get the two people out of there, and then close it again immediately, saving those two people. However, the door of room 1 malfunctioned, so the reset will not affect it. That door will not open for at least two hours.

      Moreover, resetting the system will make the containers in both rooms open within 5 minutes, unless someone cancels that manually.
      As a result, if Bob triggers the reset, the person in room 1 will suffer horrible pain for an hour, and then die. The agent is paralyzing too, so suicide is not an option.

      Would it be morally acceptable for Bob to trigger the reset, assuming no previous consent on how people get saved?

      I would say it would not be morally acceptable. That's an intuitive moral assessment.

      I suppose someone might say that it's not morally acceptable, but [they might argue] it would not always be irrational not to choose beforehand to participate in a scheme in which in case of accident, everyone always saves two or more even at the expense of condemning one to the same fate the two or more others would otherwise suffer, no matter how horrible, as long as the results are certain (as they are in the case of extraction, where extraction always succeeds in saving others).

      But if that would not always be irrational, why would declining to participate in the extraction scheme be?

      I don't see why declining participation in one scheme would always be irrational, but not in the other.

    7. Then again, in that alternative scenario, the person in room 1 is not used as a means to an end, so perhaps someone who intuitively assesses that diverting the trolley in the trolley problem is acceptable (personally, I don't) would also find resetting the system acceptable. So I guess another alternative scenario might be needed – one in which a person is used as a means to save the others, but what is inflicted on that person is much worse than death, even if they too are saved from something much worse.

    8. Yeah, that case sounds trolley-like to me, and my intuition is that Bob is clearly required to do the "reset" that saves the greater number from great suffering, and that it would be straightforwardly irrational for the participants to withhold ex ante consent (before they know which room they'll be in) for this. (Why would they not want to minimize their ex ante chance of great suffering?)

      Regarding your earlier comment: the case is meant to be understood such that each person rationally assigns a 1 in 6 chance of having immunity to the epidemic: age, prior health, etc. is known to make no difference. Given that ex ante consent to extraction is rationally mandatory, I think extraction is still permissible even if they would (irrationally) fail to consent.

      A clarificatory question for you: suppose that (as we should expect) everyone would in fact consent to extraction ex ante if given the chance, but they never get the chance. You still think extraction would be wrong, and that we should force the policy of inaction on them, even though this will cause 5/6 people to die who could otherwise be saved, and every single person in the scenario would have objected to your choice (ex ante) had they the opportunity?

    9. When it comes to ex-ante consent, for instance, people may be wary of misuse of power in cases like that – not necessarily deliberate misuse, but misuse due to mistaken assessments of the situation by those in a position to decide what to do – and they may be rational about that. Granted, you can stipulate that those who would make the decision are to be trusted, though I'm not sure how well such stipulations work, intuitively.

      However, what I was getting at with respect to the rationality of the behavior was that there seems to be no difference rationality-wise between ex-ante consent in the two cases, all other things equal (factors like rational mistrust of the decisions the people in charge would make apply in both situations equally), and we seem to agree on that, so no problem there.

      On the other hand, it's very interesting and intriguing to me just how different our intuitions about the room cases are. You find that Bob is clearly required to do the "reset", whereas I find that he's clearly required not to.

      Side note: Though this has not been tested, my guesses (based on similarity with tested scenarios) are that a majority would find Bob's resetting the system permissible in the two-rooms case (though I do not know whether they would find that obligatory; I'm skeptical), but the majority would probably reject the extraction policy in your scenario, at least as preliminary replies in both cases.

      On the question of what would happen if they would fail to consent ex-ante, as I understand your reply, you say extraction is permissible even if the person making the choice actually knows that they would fail to consent (machine-based analysis of the brain, etc.).

      I would like to ask:

      a. Do you think extraction is also obligatory in that case, or only permissible?
      b. Would forced extraction be permissible or obligatory in the website case, on the 90% who actually chose not to take part?

      Regarding your clarificatory question, I would make three points:

      1. I do not think "inaction" is a policy that anyone forces on them by not acting. That is not force (force would be to prevent them from setting up their own private extraction scheme, for example).

      2. I don't think we should expect that everyone would choose to take part. I'm not even sure whether a majority of people would probably agree to participate. I have serious doubts, especially when it comes to whether people trust those making the decisions. At least, a significant number would probably choose not to take part in the extraction scheme...but then again, this is a probabilistic assessment in a case that is so far removed from real life that I'm not sure I can trust it, either. There are too many variables that could change the situation in one way or the other.

      3. That aside, in the case of the machine, if we can tell, by looking into their brains, what they would choose, that would be essentially like asking them, in my view, so I would say that acting according to the choice they would make if conscious is permissible. A different question is whether it's morally obligatory to act in that manner. I think more information about the "we" in your question is needed. For example, I think usually individual doctors and nurses can permissibly refuse to take part, though there might be situations in which that would not be permissible. But I'm inclined to say that the President (say, she gets to make the decision) would have an obligation to enforce extraction if everyone would agree; moreover, if some people would agree and some would not, she should enforce extraction only for those who would agree.

    10. By the way, I think that in addition to the difficulty of other variables that would play a role in a real-life case (e.g., why trust those making the choices?), in some cases there is another, perhaps more serious problem with the stipulation that it's irrational to choose not to take part in the extraction scheme: whether it's rational may well depend on some of the moral issues involved.

      That does not affect all extraction schemes in my assessment, but it affects some of them, and in particular compulsory ones.

      For example, let's say that after the epidemic wipes out a neighboring country, the President announces a compulsory extraction scheme before it breaks out in the country in question.

      Let's say that Alice rationally reckons that her chances of survival are much higher if she does not resist the scheme.

      However, Alice still refuses to participate in the scheme on moral grounds, because she assesses that a scheme that does not respect people's choice not to take part is very unjust. So, she hides in the woods (or whatever), trying to escape the President's enforcers.

      My assessment is that Alice's choice is not irrational, because she does have sufficient moral grounds to make it, and it's at least morally praiseworthy. Maybe it's not morally obligatory, but it's supererogatory in my assessment, and behaving in a morally praiseworthy manner is not irrational (at least for human agents, if not for all moral agents. However, questions such as whether possibly [or in some way relevantly counterpossibly] there are non-human moral agents such that it would be irrational of them to carry out some morally praiseworthy behaviors or even some morally obligatory behaviors are not relevant to the matter at hand, so I'll leave hypothetical non-human agents aside).

      In short, in my assessment, in some cases there are sufficient moral reasons to make it rational to refuse to take part in an extraction scheme, even if that person properly reckons that her chances of surviving are much higher if she takes part in the scheme than otherwise.

      So, I would not be inclined to accept a stipulation that it's irrational not to participate, at least not in all cases (a stipulation that it's not morally praiseworthy to behave in that fashion would be an improper stipulation in this context, in my view).

      Granted, you might disagree on the evaluation of Alice's actions (I'm not sure, but I got the impression you would disagree), but that's another issue - maybe we still get a moral disagreement, and maybe also a disagreement about the rationality of the behavior.

      Incidentally, what do you think about the rationality and the morality of Alice's behavior?

    11. Imposing extraction contrary to consent (in either your machine or website case) feels prima facie wrong to me, though on reflection I think that intuition is misguided and it's always morally better to bring about the better results. (Obligatory or supererogatory? Insofar as I accept the distinction at all, I'd probably lean towards supererogatory: failure to extract would be a highly forgivable moral weakness.)

      Note that pragmatic reasons for refraining (lack of trust, etc.) should be stipulated away, to keep the case as pure as possible.

      It's an interesting question whether Alice can reasonably object on moral grounds. There's a worry of circularity, if you hold that both (i) extraction is wrong iff and because some people can reasonably refrain from granting ex ante consent, and (ii) some people can reasonably refrain from granting ex ante consent to extraction on the grounds that it's wrong. (Hence contractualists' "deontic beliefs restriction" against this sort of thing.)

      But I guess Alice's objection isn't so contentless. She isn't just objecting on the grounds that it's wrong, but rather that it does not respect others' choices not to take part. That naturally raises the question: Do those others have good grounds for refraining from participation, or is it irrational for them not to consent? If the former, you can just elucidate those reasons rather than appealing to Alice, so let's assume the latter. We're now left with the question: is it morally objectionable to require full participation in this rationally-mandatory life-saving extraction scheme? If people irrationally refuse to participate, are we required to respect their (irrational and harmful to both self and others) decision?

      I claim that we're not required to respect irrational choices when others' lives are on the line. (This is generally accepted in public health policy, with emergency measures like quarantines, and even commonplace mandatory vaccinations, that individuals are not permitted to self-select out of, since by doing so they would be endangering public health.)

      If I'm right about this, then Alice's belief that extraction is wrong would seem unjustified, and hence objecting on this basis would be neither rational nor (I take it) praiseworthy. (It isn't praiseworthy to martyr yourself for an immoral, or morally misguided, cause.) So I take it our disagreement here stems from our differing views about whether we must respect even irrational "veto" claims by individuals.

    12. Regarding your rejection of your intuitions in the case of extraction contrary to consent, I'd like to ask the following:

      1. You say it's morally better to bring about the better result. But is the result in which people are forced into extraction, better? What if the result is worse because it's a case of people's being forced to that extent?

      2. You seem to be rejecting your intuitions by applying a consequentialist theory. Is that the case? If so, how do you go about testing a moral theory if not by testing it against our intuitive assessments in more specific and intuitively manageable cases?

      3. Are your intuitions still the same, or do you reject only your initial intuitions? I get the impression that you don't merely reject your initial intuitive assessment, but that your intuitive assessment of the morality of the contrary-to-consent extraction scheme has by now itself changed (perhaps we're using the word "intuitions" differently?).

      On the issue of Alice's scenario, I realize I've been unclear.

      I will try to further clarify in a later post, but for now, I would like to point out that Alice does not know what the other people's reasons are, but at least one of her reasons for refusing to take part is that the scheme is very unjust. While she thinks that, in this case, the scheme is very unjust because of the way it fails to respect other people's choices, she does not refuse to take part only because the scheme fails to respect other people's choices, but because it's very unjust, or because it's very unjust for that reason in particular (that is not specified in my scenario, so either way is compatible with it; maybe we could consider sub-scenarios).

      What you propose suggests a variant. What if, say, Bob refuses to participate without making any assessment of the morality of the scheme, simply because he does not want to take part in a scheme that fails to respect other people's choices to such an extent, since taking part would make him suffer greatly, emotionally? I'm not sure whether that's psychologically realistic (he would probably make an intuitive moral assessment), but then I think other conditions in these kinds of scenarios aren't realistic, either.

      Regarding your claim that "we're not required to respect irrational choices when others' lives are on the line", it's not clear that other lives are on the line, unless by "others" you mean that, for each person who chooses not to take part, the other people who also choose not to take part count as "others".

      My point is that only the lives of those who choose not to take part are at stake.

      After all, there is no reason to expect that those who refuse to take part would be more likely (let alone a lot more likely) to be resistant to the disease (if there were such a reason, then that may well give them enough grounds to refuse to take part, depending on how much more likely).

      For example, let's say that 50% refuse to take part, and their choice is respected. Then, the other 50% would take part, and the illness would predictably kill 1/6 of those who take part, and 5/6 of those who don't (or 1/3 and 2/3 respectively in some of the variants I proposed).

      Okay, so what if fewer than 6 people in the whole country agreed to take part? (Or fewer than 3, in one of the variants.)

      That might complicate matters a bit, but if that's the case, there is no way the President can implement the scheme anyway (unless one adds additional unrealistic features), and the Alice scenario does not require that all but fewer than 6 people refuse to take part.

    13. Sorry, I made a mistake.

      If the choice is respected, the illness will predictably kill 5/6 of those who refuse to take part, and 0 of those who choose to take part. Some people (doctors, nurses, etc.) will kill 1/6 of those who take part.
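      For what it's worth, the survival arithmetic in the opt-in scheme can be sanity-checked with a short sketch. (The function name and the `participation` parameter are mine, introduced just for illustration; the 1-in-6 antibody rate and the 5-saved-per-donor ratio are taken from the scenario.)

```python
def expected_survival_rate(participation: float) -> float:
    """Expected fraction of the population that survives, given the
    fraction of the population that opts in to the extraction scheme."""
    # Participants: the 1/6 with antibodies are killed, but each donor's
    # antibodies save 5 others, which exactly covers the remaining 5/6
    # of participants. So participants survive at a rate of 5/6.
    participant_rate = 5 / 6
    # Non-participants: only the natural 1/6 who produce antibodies recover.
    non_participant_rate = 1 / 6
    return (participation * participant_rate
            + (1 - participation) * non_participant_rate)

print(expected_survival_rate(0.0))  # inaction: 1/6 survive
print(expected_survival_rate(1.0))  # full participation: 5/6 survive
print(expected_survival_rate(0.5))  # half opt in: 1/2 survive
```

      This also illustrates the point above: each non-participant's survival chance (1/6) is unaffected by how many others opt in, so respecting refusals costs only the refusers themselves.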

    14. Some clarification and corrections about Alice's original scenario:

      Earlier, I said that Alice had sufficient moral grounds to [rationally] refuse to participate, and also said that behaving in a morally praiseworthy manner is not irrational.
      Upon further reflection, I think the second part is problematic in this context, since it gives the impression that her actions are rational because they're morally praiseworthy, in a sense of "because" that is stronger than my giving reasons in the context of an argument.

      More precisely, while it would not be circular to say that her actions are rational because (in that stronger sense of "because") they're morally praiseworthy (since I'm not suggesting they're morally praiseworthy because they're rational), I do not know that there is that kind of dependence. In other words, I do not know that the feature of her action that makes it rational is that it's morally praiseworthy.

      It may well be that:

      i. Her refusal to take part in the forced extraction scheme is rational because the motivation for her action is that she does not want to take part in a very unjust scheme. Not taking part in a very unjust scheme is reason enough, in the context of this particular situation.

      ii. Her refusal to take part in the scheme is morally praiseworthy because the motivation for her action is that she does not want to take part in a very unjust scheme, and she is willing to refrain from taking part even if taking part in the very unjust scheme would increase her survival chances significantly.

      iii. While those features are relevant in the context under consideration (all other things equal, etc.), there are contexts in which it would be irrational and/or immoral to refuse to take part in a very unjust scheme, or at least not morally praiseworthy, etc.

      That said, I think that someone may properly reason as follows: "Since her actions are morally praiseworthy, they're rational." But that "since" would not indicate that the feature of the action that makes it rational is that it's morally praiseworthy. Rather, a person reasoning like that may use an intuitive moral assessment that the action is morally praiseworthy to strengthen the conclusion that it's rational, if the intuitive moral assessment is clearer on its own than the intuitive rationality assessment.

  4. I'm just curious, Richard. Do you use these kinds of thought experiments to try to demonstrate to deontologists that consequentialism is not wholly counter to their intuition? That is, do you put any trust in moral intuition? I for one would bite the bullet and say, "Maybe your intuitions tell you X, but your intuitions are wrong." Moral intuition developed as a result of a wholly unreliable process (evolution), and whatever moral intuitions we happen to have now are probably mere extensions of the primitive emotional responses that were selected for.

    For instance, I think deontological intuition is completely deluded by the doing/allowing distinction. In the original transplant case, if you refuse to kill the one to save the five, I see that as KILLING a net four people, assuming you are the only person in a position to intervene (which we can stipulate). Obviously we evolved to have a negative emotional response to up-close and personal killing, and never really developed a strong response to distant and indirect stranger killing.

    Once you get beyond the doing/allowing delusion, you can no longer object to utilitarianism based on the fact that it permits and sometimes mandates sacrifice. In situations where sacrifice is possible, sacrifice is necessary. That is, to NOT sacrifice X for Y is equivalent to sacrificing Y for X. If a sacrifice is possible, then a sacrifice is necessary and you might as well get the most utility out of it.

    You've probably already read it, but if you haven't, you should check out Joshua D. Greene's The Secret Joke of Kant's Soul. It gives a pretty good description and explanation of the psychology of deontologists ("alarm bell" emotional response-based) versus that of consequentialists (far more cognitive).

  5. I'm a pretty diehard consequentialist, but my intuitions in this new case are just as dead-set against killing the one person as they are in the normal organ transplant case. My considered judgment is different, since as you say the new case eliminates a lot of the consequentialist reasons against killing the organ donor, but that doesn't change my intuitive aversion to killing.

    Ultimately, I think the quest for a properly "cleaned up" thought experiment that eliminates all extraneous features and shows that our intuitions all converge on the same judgments is a fantasy. It's just a psychological-anthropological fact that different people have different intuitions.

    1. Hi, Stentor,

      I'm not a consequentialist, but with regard to intuitions, yours seem similar to mine regarding the original extraction policies.
      I wonder whether they also agree on one of the alternative policies I suggested. It works (adding now more details) as follows:

      1. People who meet some competence criterion (which may be age-based if there is no time for something more sophisticated, but other alternatives seem preferable) are asked whether they want to enter an extraction scheme.
      2. Those who do not choose to participate are neither killed nor saved, except as provided in point 4.
      3. Those who choose to participate will, if they become unconscious, be tested, so:
      a. If they happen to have the antibodies, they will be killed, and their antibodies used to save 5 others.
      b. If they happen not to have the antibodies, they will be saved with the antibodies taken from someone else who chose to take part in the scheme.
      4. Given that there will be an excess of antibodies, there will be a lottery to randomly choose who else will be saved. People eligible for the lottery will be those who were not able to make a choice, like:
      a. Children, and people otherwise found not competent.
      b. If the outbreak already started when the policy is implemented, those who were already unconscious, if they're still alive.

      As I see it (intuitively, but I do not reject my intuitions; that's another issue, though), the policy in question would be morally better (all other things equal) than both the inaction and extraction policies, at least for some choices of the set of people who get to make the choice in 1. Generally, even a simple age-based criterion (e.g., everyone over 16 is given the choice) seems better to me than either of the other policies, though a more sophisticated scheme adapted to the specific details of each situation would be better still.

      Do your intuitions on the matter agree?

      It would be interesting to have some non-anecdotal evidence on the matter.

    2. On an intuition basis, certainly -- your scheme seems to my intuition entirely unobjectionable, while as stated in my earlier comment I have an intuitive resistance to the original scenario. Whether it's the all-things-considered best policy depends a lot, I think, on how large a proportion of the population agreed to be in the scheme.


    3. Thanks.

      I tend to think it's always a better policy than forced extraction, since I do not find forced extraction acceptable, precisely due to the intuitive moral assessment.

      On that note, when a considerably general moral theory is proposed, the way to test it, as far as I know, is to see whether it yields true judgments in more specific cases. But to do that, we need to be able to make true judgments in specific cases, which we do intuitively as far as I know. That does not have to be just preliminary intuitions; after considering consequences, intent, etc., one still needs to make an intuitive assessment (or, if you prefer a different terminology, one still needs to use one's own sense of right and wrong to assess the morality of the specific hypothetical cases in order to test the general theory).

      That's why I find examples like this to be counterexamples that provide evidence against consequentialism, as long as consequentialism is construed in such a way that it deems extraction (without prior choice) morally good, or even morally acceptable.

      On the other hand, you say your considered judgment is different, and if I read it right, you find the extraction policy considered in the OP to be morally good.
      What I don't know is what you base your considered judgment on, if not an intuitive moral assessment after contemplating the situation, including variables such as the intent of the people involved, what they know, what consequences they expect and should properly expect, etc.
      Could you (roughly) explain, please? (Or alternatively provide a link to a page that gives a procedure for making moral assessments without using our moral intuitions, and which you use.)

  6. I don't have moral "intuitions", just opinions, so I can't help you on that score, but the example seems to me a muddle, because it is built into the case that the one is harboring a scarce resource that the others need to survive. Does the guy have a right to the resource in these exigent circumstances? One might think not (e.g., because of the Lockean Proviso), but that doesn't have much to do with doing/allowing.

    Question: how is this case more "pure" than the one where the Nazi occupiers insist that you kill one or they will kill six?

  7. There doesn't appear to be any difference in the morally relevant parts of the experiment. It's just another variation of the original Trolley Problem.

    As to the voting for terminating the resistant people... it's not clear that it would be moral even if the now-incompetent person had given prior consent. After all, we don't actively kill those who have signed Do Not Resuscitate notices either. Voting on such an issue seems to slip into the tyranny of the majority.

    To add a Real World complication... what about the second time the pandemic hits? All the resistant people would have been killed, and so there wouldn't be a population that could survive a second wave. One-hit wonders don't exist among diseases, after all.

    1. Ilkka,

      On the issue of the "tyranny of the majority", I'm not sure what cases you're referring to. But if a person previously agreed to participate in an "extraction scheme" (as in my reply to Stentor), that is not a tyranny of the majority.

      As for "Do Not Resuscitate" notices, those people have agreed to something very different. (Assisted suicide is also very different; there are jurisdictions where it's legal and others where it's not, but even then it happens, more or less frequently, and I wouldn't say it's always immoral; rather, that depends on a number of circumstances. Still, neither "do not resuscitate" nor assisted suicide appears to be relevantly similar.)

      In the case of signing up for an extraction scheme, precisely what they sign up for is a scheme under which they might be actively killed, though they do so because it significantly increases their chances of survival. So, in this case, it seems to me that they would have rationally entered the scheme, assuming the scenario for the sake of the argument.

      With regard to the Real World complication you mention, it seems to me that in the real world the scenario wouldn't happen anyway. In other words, it's not (realistically) the case that five people could be saved by harvesting antibodies from one person who gets killed, with no way around that. So, it seems to me that in the real world, we would not face the scenario you mention.

  8. An interesting thought experiment. Very carefully crafted, and yes, pure.

    But, to me, it is the care with which it had to be crafted that is the weakness of consequentialism for use in real world moral decisions. Another is how such a decision would be carried out. The burden to the doctors and nurses who had to administer the decision might be a stumbling block. (It seems to me they would feel the same revulsion as the people who refuse to push the large man in the trolley scenario.) As would the reaction of the sixth person's friends and relatives.

  9. It seems to me that, back before I was a consequentialist, I would have balked at this scenario just as much, if not more so, than the transplant problem. I don’t see it as a giant improvement.
    When I think of a way to make the transplant problem more intuitive, my thought is to “naturalize” it, as per Richard’s “Anti-Consequentialism and Axiological Refinements.” Completely remove any human agency from the problem.

    Imagine there are five people in a hospital who are dying of some disease. A drifter wanders in, catches the same disease, and dies. Upon doing the autopsy the doctors discover that his body was able to generate antibodies to fight the disease (though sadly, it was too late for him by the time they were generated). They inject the antibodies into the five patients, who recover. To keep the experiment clean, let us posit that this is a unique strain of the disease, so the antibodies cannot be used to develop a treatment for anyone else.

    The question is, is it good news or bad news that these events happened? Obviously the recovery of the five is good news and the death of the one is bad news, but does it add up to good news or bad news on the net?

    I’d say in this case it is obviously good news that the five recovered, even though it’s sad that one guy died. Therefore the resistance to the original transplant problem is likely due to an aversion to violence, rather than a moral principle, since, in a circumstance where the same consequences are achieved without violence, they seem like a good thing.


    I also want to comment on the statement that “The antecedent distribution of harms and benefits is more readily acceptable as "random" and morally arbitrary, compared to transplant cases where it is easy for us to think of ourselves as healthy, and those dying of organ failure as "others" who may (for all we know) be partly responsible for their ill health. Our reluctance to give up a position of privilege (especially if we can blame the victims) may obstruct our moral reasoning in those cases.”

    I agree with this statement entirely, but am concerned that using the same logic could be disastrous in other cases. For instance, in Parfit’s famous drug addiction example, one could argue that we are thinking of our addicted self as an “other” and privileging our current desires over his much stronger future desire. However, I would argue that we understand entirely how much stronger our addicted self’s desire would be, and for precisely that reason we reject becoming an addict.

    Similarly, one could argue in favor of the Repugnant Conclusion by arguing that since we already exist, our reluctance to add more people to the world is caused by a reluctance to give up privilege. I would argue, however, that the precise reason we reject addition is that we understand all too well how strong and overwhelming the desires of those people would become if we added them, and reject adding them for that precise reason.

    So again, that statement is a good rule of thumb for dealing with existing people and desires, but maybe not so good for dealing with adding them in the first place.

