Tuesday, April 10, 2012

Singer's Pond and Quality of Will

Singer argues that, just as we're obliged to save a drowning child at modest cost to ourselves (e.g. ruining an expensive suit), so we're obliged to help the distant needy when we're in a position to do so (e.g. by donating to GiveWell-recommended aid organizations). People often balk at this comparison, but I don't see any plausible grounds for escaping the conclusion that we have similarly strong reasons to act in either case.

What's more, I don't think this particular result is really all that counter-intuitive, either. Of course we have incredibly strong reasons to save innocent lives whenever we can! What could be more important, or more worth choosing, than that? This claim about the strengths of various reasons for action -- call it the Act Evaluation -- is eminently plausible.

What is counter-intuitive, I think, is the putative implication that when we fail to donate to effective charities we are thereby just as bad, or as blameworthy, as a person who lets a child drown before their eyes. (Call this the Character Evaluation.) Such a person, we feel, would have to be monstrously callous. As for ourselves, we may not be saints, but at least we are surely not moral monsters. Thus the comparison strikes us as preposterous.

I think this objection to the Character Evaluation is spot on. Consider a quality of will account, on which we are blameworthy to the extent that our actions manifest an insufficient degree of good will (e.g. concern for others). And now notice that differences in what strikes us as salient may lead us to act differently even if there's no difference in our quality of will (or altruistic concern). In particular, our concern for others is much more likely to trigger altruistic action when another's need is made vividly salient to us -- as when we see a child drowning right before our eyes, as opposed to hearing abstract descriptions of the needs of distant strangers.

It seems to be a fact of human psychology that you would need to be a much more callous person to neglect a child drowning before your eyes than to neglect the needs of distant strangers. I think it is this fact that we are correctly picking up on when we look askance at Singer's analogy. But this fact also shows us why the Act Evaluation does not entail the Character Evaluation, so that our intuitive resistance to the latter should not prevent us from accepting the former. After all, while facts about salience and psychological vividness might well affect how blameworthy an act of apparent neglect is (since to neglect vivid and salient needs is to manifest a greater callousness than is found in our commonplace neglect of distant strangers' needs), these facts about our own psychologies can't plausibly be taken to affect how choice-worthy the various actions are.

Some confusion may arise due to the ambiguity of 'obligation' talk. Obligation is often understood as closely related to blameworthiness. (Roughly: you're blameworthy if you violate an obligation without an excuse.) Maybe that's the most natural reading. But this moral category ends up being a somewhat convoluted construction that fundamentally concerns character rather than act evaluation. And Singer's argument really only works if we're talking about basic act evaluation (i.e., choice-worthiness, or reasons for action). So perhaps it's unhelpful for consequentialists to speak of 'obligation' here when we're really concerned with the 'ought' of choice-worthiness.

Terminology aside, though, I take it that in practical deliberation we should be concerned with making choice-worthy choices, rather than just avoiding blameworthiness. (Akratic as we are, we might at least adopt the latter standard as a minimum "baseline" that we must meet to maintain our self-respect as moral agents. But it's always better to do better...)

In summary:

(1) The Character Evaluation is intuitively mistaken, because blameworthiness depends on quality of will, and equally choice-worthy acts might exemplify different degrees of moral (un)concern if the morally relevant features are psychologically much more vivid and salient in one case than the other. In particular, letting a child drown before your eyes plausibly exemplifies (at least in typical human agents) a much greater degree of callousness and lack of concern for others than is involved in our failure to save distant strangers.

(2) The Act Evaluation is plausibly true, since the choice-worthiness of an act depends just on the morally relevant features of the situation, and not on how psychologically vivid and salient these features are to us.

(3) Once we clearly distinguish the Act Evaluation from the Character Evaluation, we may find that only the latter is counter-intuitive, whereas the former is actually quite plausible.

On a more practical note, I hope that by explicitly severing the connection to negative moral emotions (guilt, blame, etc.), the Act Evaluation becomes less apt to provoke defensive responses from people -- You can accept it without thinking yourself a horrible person! Yay! -- and I think it can even start to sound positively appealing. And from there one might be inspired to take some initial steps towards making more of these incredibly choice-worthy decisions, e.g. by joining Giving What We Can or similar philanthropic movements. And that would be cool. Not because you're a moral monster if you don't. But just because it's really worth doing.

15 comments:

  1. Suppose someone habitually gives 90 percent of her disposable income to GiveWell organizations. She then comes across the nearby drowning child. Intuitively, her refusal to rescue the drowning child at a modest cost is quite different, morally speaking, from her refusal to bump up her charitable donations by a similar modest amount. I take it that her failure to rescue the child is morally impermissible, and that this is an act evaluation (perhaps you reject this?). Are you suggesting that her refusal to bump up her charitable donations is equally impermissible? This strikes me as intuitively difficult to accept, given the deliberative priority and importance that mark judgments of moral impermissibility.

    2. Because of psychological quirks, the needs of the drowning child may be salient to one person but not to another. A psychologically quirky person (whose quirks may even be intentionally self-induced) may find the needs of the distant imperiled strangers (whom she is in a position to address) just as salient as the needs of the nearby drowning child. Does this suggest that her neglect of the nearby drowning child may be just as blameless as the normal person's neglect of the distant needy? If not, then it seems that we can't simply draw on facts about human psychology -- we'll also need some normative claims about when a factor should be salient to a person. I wonder if these claims might end up supporting moral distinctions at the level of act evaluations.

    3. Suppose I am deciding between saving the drowning child at modest cost and donating an equivalent amount to a recommended aid organization. I take it that you are suggesting that both options are equally choice-worthy. But might we not want to say that saving the drowning child is more choice-worthy insofar as it also enables me to avoid blameworthiness? (I take it that you are suggesting reasons -- and hence factors related to choice-worthiness -- when you suggest that, "Akratic as we are, we might at least adopt the latter standard as a minimum 'baseline' that we must meet to maintain our self-respect as moral agents.")

    Replies
    1. Interesting questions!

      (1) I'd guess that your 'impermissibility' judgment is blame-implying, so in that sense I'd categorize it as (indirectly) expressing a character- rather than act-evaluation.

      (2) It's difficult to vividly imagine so alien a psychology, but yes, I'm thinking that (e.g.) a Martian who found the needs of distant strangers to be more salient (vivid, attention-grabbing, emotionally gripping, etc.) than a drowning child before her eyes would be in the reverse situation from us: she might blamelessly neglect the drowning child, but be blameworthy if unmoved by the needs of those geographically distant people who filled her mind's eye. (Does this seem the wrong result to you?)

      (3) Hmm, I hadn't thought of "avoiding blameworthiness" as a goal in itself, though I suppose it might at least tip the balance in close cases like this. In my aside on akrasia, I was thinking of it more as a strategy one might pursue to better conform to (at least some of) one's antecedently-given reasons, rather than a source of new reasons in itself. (Compare: suppose you judge that you have most reason to phi. Does this judgment provide an additional reason to phi? Or does it merely suggest that phi-ing is the best way you know of to conform to the reasons you already had?)

    2. Thanks for these responses. I don’t find the Martian result so bad, but perhaps I just lack confidence in character evaluations that are based on the actions of creatures so alien from ourselves.

      But I don't think we need to imagine a highly abnormal agent to defeat the suggested special grounds for blameworthiness for refusals to rescue the nearby drowning child. Let me offer two cases in which I think the special grounds would seem to be defeated. For both cases, suppose that it is true that I have no more reason to rescue the nearby drowning child than I have to slightly increase my giving to a good charity. I am also supposing that the special grounds for blameworthiness arise because the needs of the nearby drowning child are a lot more conspicuous, or attention-grabbing, than the needs of distant strangers. When an agent refuses to perform the nearby rescue, this would indicate either deliberate disregard for the conspicuous needs of the drowning child, or a tendency simply to fail to regard the needs of others as providing reasons for oneself (in both cases the needs of the child are conspicuous; in the former case these needs strike the agent as providing strong reasons, but she consciously decides to disregard them; in the latter case, the needs are attention-grabbing, but perhaps only in the way that a curious event is attention-grabbing: the agent's attention is drawn to the needs, but the other's needs simply don't strike the agent as providing her with reasons). Either way, the failure to rescue the child would seem to indicate a significant moral shortcoming in the agent.

      Suppose that I am newly resolved to do what I have most reason to do, especially when it comes to my others-regarding reasons. Suppose that I have also been thinking about the fact that I have no more reason to rescue the nearby drowning child than I have to slightly increase my donations to a good charity. I am pondering these things on my way to catch a flight to a crucial job interview, and, in an amazing coincidence, I privately come upon a child drowning in a nearby shallow pond. Just as I am registering the fact that this is a real child in real need, I also realize that rescuing the child would cause me to miss my flight. Mostly because I have recently given this very scenario such careful philosophical consideration, I realize that I have much stronger reasons to neglect the child and catch my flight, on the condition that I will then donate a significant portion of the expected benefit of my making the job interview to a good charity (I'd then have a lot more income). With heroic resolve to respond to my strongest others-regarding reasons, I tear myself away from the drowning child and catch my flight.

      The second case will follow...

    3. Second case: I again recognize that I have just as much reason to increase my giving to a good charity as I would have to rescue a drowning child from a nearby shallow pond. I also realize that it is generally good to take advantage of our psychological propensities whenever these propensities facilitate our responses to our others-regarding reasons. Finally, I recognize that my own propensities would make it very difficult for me to neglect a drowning child, and that, should I refuse to help such a child, I would ever after be afflicted by thoughts of that failure. If only, I think to myself, I could find a way to expand these sympathies to the plights of the distant needy. As I am thinking these things I come across a drowning child. Just as I start to wade in, it dawns on me that if I watch this child die right in front of me, my conscience will continually remind me of it, provoking feelings of guilt. I could then turn every such guilty reminder into an occasion to reflect on the fact that I am continually faced with equivalent opportunities for helping others. I could turn every such occasion into an occasion to send yet another text-message donation to a good charity. In this way I would not only recover my self-respect as a moral agent, but I would also be making the very most of my psychological propensity to regard the needs of the nearby drowning child. I therefore move in even closer to watch the child drown.

    4. Hmm, yeah, so in both those cases I'm at least committed to saying that the agent isn't blameworthy for the usual reason in letting the child drown, for they do genuinely appreciate the importance of saving the child; it's just that in each case the agent thinks this (very strong) reason is outweighed by some other, even more important consideration (viz., the opportunity to save multiple lives that they otherwise would not).

      I find it hard to get clear intuitions even in these more "human" cases, given the number of stipulations that need to be borne in mind for it to really be true that the agent (reasonably believes that he) is doing the best thing by letting the child drown. (Normally, we'd hope, a job interview could be rescheduled, etc.) But given all the necessary stipulations, I'm inclined to think that the first case, at least, is pretty clearly blameless. It's equivalent to passing by the one drowning child so that one can get to a pond where five are drowning, and I can't see any reasonable objection to that.

      Your second case is trickier. I could see a deontologist arguing that the act is wrong because you're not merely neglecting the drowning child, but positively using her as a means -- and not just as any old means, but specifically as a crutch to compensate for your own moral weakness, which seems intuitively very distasteful (to put it mildly).

      If the deontologist is right about this, then the quality of will account implies that the agent, in using the drowning child as a means, is in fact not showing her the appropriate kind of concern/respect, and this is why the agent is blameworthy for watching her drown.

      As a consequentialist, I disagree on that point, and so see the agent as blameless here too, if it's really true that this is the best or only way for the agent to ensure that he saves dozens more lives in future. But I suggest that insofar as you find the agent intuitively blameworthy in this case, it's because you're giving intuitive credence to a deontological story according to which the act itself (using the child as a mere means) is wrong. And that's compatible with what I've been saying in the other cases, where this distinctive feature ("using as a mere means") is not present.

    5. On the difficulty of having clear intuitions about the complex cases, I wonder if it would help to think about responding to someone who has behaved in one of these ways and is now trying to justify her actions to us (or to excuse herself from blame). I find it difficult to concede that the facts the agent could cite would not only undermine my remaining grounds for serious moral criticism (at least of the kind that I might have the standing to make), but even suggest that she has acted in a morally heroic way. (If there are other grounds for blameworthiness, what might they be?) I accept, though, that my intuitions here might not be trustworthy, and that they might arise from internalized rules (e.g., "don't use others as mere means") whose universal application is ultimately indefensible.

      You've got me thinking that my remaining worries might all arise from my attempt to classify some refusals to rescue others as morally wrong/impermissible, where this is distinct from a judgment of blameworthiness, and yet still carries the deliberative priority and significance commonly attached to judgments of moral wrongdoing.

  2. Since the drowning child relies on nearby people to be saved and the distant starving person does not, the starving child can be saved remotely, but the drowning child cannot. Also, since drowning to death takes only a few minutes and starving to death takes a few days, the starving child can be saved tomorrow, but the drowning child cannot. So, in order for drowning children to be saved, spatiotemporally proximal people must do the saving. However, starving children can be saved by persons that are both spatiotemporally proximal and distal. This tells me that we should always prioritize the drowning child over the distant starving person when the drowning child is in our spatiotemporal proximity.

    So it would seem that the choice-worthiness of the drowning-child scenario is more spatiotemporally sensitive than that of the distant-starving-child scenario. Whether or not this makes saving the drowning child more choice-worthy overall than saving the distant starving child is up for debate, but whether it is choice-worthy in the same way is not.

    Thoughts?

    Replies
    1. Sure, when time is limited we should first respond to the more immediately pressing, or time-sensitive, of two equally weighty needs. But this temporal "priority" should not be taken to imply normative priority, i.e. the claim that meeting the first need is more important, or more worthwhile, than meeting the second. So I don't see this as an especially significant disanalogy. (I guess I am more concerned with what you call "overall" choiceworthiness.)

  3. So I guess I want to call attention to the pragmatic asymmetry between choosing to prefer the distant needy person (who has more rescuers at their disposal and therefore does not need to be saved by any particular person) and choosing the proximal drowning person (who can be saved by the passer-by and only the passer-by). In a world where people always prefer the distant person, drowning people will always die and distant needy people will not. In a world where people always choose to prefer the proximal drowning person, neither the drowning person nor the distant needy person must die, because everyone not faced with a proximal drowning child can be actively saving distant people. So, if these pragmatic elements (or their consequences) count as morally salient variables, then the two scenarios are not equally choice-worthy. It seems that, assuming we can choose only one, when we are faced with the choice of saving either the spatiotemporally proximal needy person or the spatiotemporally distal needy person, we should always prefer the spatiotemporally proximal needy person (because, presumably, there are many other persons faced only with the distal needy person, who can therefore save the distal needy person while you are saving the person that is proximal to you).

    This seems to tell me that their choice-worthiness, at least in contexts where space-time is as it is for you and me, is not equal. They are equal outside of space-time, but that does not seem like the ideal place from which to build ethical theories.

    I have been mistaken many times before, so I am still open to you disagreeing with me.

    Replies
    1. Oh, I agree that if you can rely on others to take care of the distant needy, then you no longer need to act to help them in order for your preference (that their needs be met) to be satisfied. In such a case, this pragmatic difference would indeed make a big difference to what you ought to do. But the scenario you imagine does not accurately reflect our actual circumstances. Accordingly, I'm assuming a scenario of abundant unmet needs, where your choice to send another $500 to GiveWell's top-rated charity really does have an expected value of one more life saved that otherwise would not be.

      [P.S. If you use the 'reply' link, instead of the default comment box, it'll help keep the comments page neatly structured. Thanks!]

    2. I see. Thanks for your patience with my questions. I will keep in mind your NB about comment structuring.

  4. I think that's super interesting to think about. I think it comes down to the matter of "in the moment" versus "the long run". If you were to see a baby drowning in front of you, of course you'd rush to save it; it's a fast-paced moment of panic and adrenaline. When one gets that feeling running through them, they often act without thinking and rush into their actions (whether they're justifiable or not). The situation is different if one has to make more of a long-run commitment to help a given situation. For instance, if you watch a documentary on how individual X or group X is being oppressed or is suffering as a result of someone else's wrongdoing, usually a person would be very concerned about it for a while, but only those who are truly moved would try to take further action. The same thing goes if there are starving children in front of you: of course you'd want to help. You may give them food for the time being, but that's only going to last them one meal. If you really wanted to make a difference, you would have to invest your time and energy into the cause. I think that is why people don't donate to charities more: it's too grand a commitment for the typical busy life of the average American. (As unfortunate as that is.)

  5. Hi,

    I'm pretty familiar with Singer's and Unger's argument. My issue with their argument is that they seem to me to have used a bad analogy between saving the drowning child and donating. These are not analogous cases, and the argument seems to rely on an intuition pump that bridges the two. That bridging is what gives the argument the bite it has.

    In the case of donating money to, say UNICEF, to save children, I believe that the reason people aren't as likely to criticize or blame others or themselves for not donating is because they understand, consciously or subconsciously, that donating commits one to far more than the case of saving the child.

    Donating commits one to more, because the argument can iterate itself. Say you donate 10 dollars. The argument can be applied again, making an additional $10 donation morally obligatory, and so on until all of one's disposable income is gone and one is on the brink of destitution. Both Singer and Unger seem to bite the bullet and hold that this unforgiving conclusion must be accepted.

    But now notice that there is no analogy left to be made, because saving the drowning child was just a one-off incident. The example only asks the reader to save the drowning child once. There are no stipulated or implied further commitments.

    Now a better analogy would be made between donating all one's disposable income and something like a scenario where you'd have to continuously save drowning children, say, once every hour for the rest of your life.

    I suspect that most people would still say that you are morally required to do so (if you physically can), but that we ought not blame someone for choosing to let all those children die. It is simply too harsh a requirement. It's like asking someone to be a moral saint. We cannot blame someone for their all-too-human weaknesses, especially when we know that we may not have the moral resolve and integrity to choose such a harsh life ourselves. So even though it may be morally required to keep saving children and to donate all our disposable income, people ought not be blamed for not doing so, because the demand is too harsh.

    Replies
    1. Interesting. It sounds like the story you offer could involve an alternative use of the quality of will account to reconcile demanding act-evaluations with less-strict character evaluations. This would be so if quality of will provided the explanation why failing to meet "harsh demands" doesn't render one blameworthy. The explanation is that "harsh demands" are ones that even generally well-meaning people will typically fail to live up to. So the failure to meet them doesn't imply the lack of good will that ordinary wrongdoing involves. (Does that sound right to you? Or would you skip this intermediate explanatory step, and just say that the demandingness directly explains the blamelessness?)

      I'm a little suspicious of the move from "it would be too demanding to save all the people we could" to "it would be too demanding to save any of the people we could". Generally in repeated-choice situations where a certain choice is worth taking (up to some vague boundary) but undesirable if repeated ad infinitum (e.g., ever better wine, the self-torturer, etc.), the obvious thing to do is to make a "resolute choice" whereby you draw an artificial line in the sand after some number n (greater than zero) of repetitions, and stop after that. For example, we might decide that we need to keep at least $50k per annum to comfortably live off ourselves, and donate everything above that. Or, in your repeated-pond case, dedicate just one hour per day to saving lives (perhaps in addition to the above donations to hire more life-savers). Reasonable people might disagree about just where to draw the line. But it's hard to see how the mere fact of repeatability could make it reasonable to do nothing at all in such a case.

    2. The example I used was from a paper I wrote a long time ago. Yes, I did use a quality of will and quality of character account; specifically, an Aristotelian one. But I suppose that other quality of will approaches will also do. I argued that accepting the harsh conclusion that we are as obligated to donate as much as we can as we are to save the drowning child commits us to unpalatable conclusions, such as accepting certain excuses from truly despicable individuals.

      The "arbitrary line" would be the social solution to problems of that sort. But I wonder if such a solution only confers obligations or duties after they are adopted within society. So long as they are not adopted, there would be no moral grounds for blame in the same way that we blame the person who doesn't want to ruin his suit to save a drowning child for violating some moral obligation. It seems to me that in the latter case, the obligation falls on each and every one of us without any arbitrary social convention being put in place.

