Monday, June 21, 2021

The Paralysis of Deontology?

MacAskill & Mogensen's Paralysis Argument (forthcoming in Phil Imprint) argues that deontological constraints entail paralysis, once long-term indirect effects are taken into account:

According to most non-consequentialists, reasons against doing harm are weightier than reasons to benefit. Since you have no greater reason to expect that benefits as opposed to harms will predominate among the indirect effects of any action you perform, it therefore seems that you should try as best you can to avoid bringing about any significant indirect effects through your actions at all. Since virtually anything you do will inevitably result in significant numbers of indirect harms, you should therefore try to do as little as possible.

It's a cute argument!  I recommend reading the full paper, where they address (and handily dispose of) a number of possible responses on behalf of the deontologist. For example, the Arms Trader case (p. 17) shows that it won't do to exclude causal chains that involve others' voluntary choices.  And Mystery Box cases (p. 20) similarly warn against excluding convoluted causal chains. They ultimately conclude that the best option for deontologists is to embrace extreme demands: "to escape paralysis, your every motion must be at the service of posterity." (p. 34)

But I wonder if deontologists might get by with some more conservative revisions to their views. In our recent reading group discussion of the paper, David O'Brien (mentioned with permission) suggested that something like the Doctrine of Double Effect seems well-suited to resist M&M's argument.  For even if we can, in some sense, foresee that our acts will have long-term effects some of which are harms, we generally do not intend those harms, or use them as a means to whatever everyday goals we are pursuing.  So if deontic constraints are restricted to harms that feature in our intentions, or that we make use of as a means, paralysis may be avoided.

Of course, not all wrongs involve intended harms in this way.  But that's fine.  It's a familiar point that DDE is a supplemental principle, not the entirety of a moral theory.  At a minimum, DDE proponents should agree with consequentialists that even merely foreseen harms (or "collateral damage") can make an act wrong if they outweigh the expected benefits.  The trickier question is whether DDE suffices for all the distinctively deontological constraints that the non-consequentialist might have wanted.  I'd guess that most additionally want some kind of harm/benefit asymmetry, e.g. to rule out fatally driving over one (as collateral damage) on the way to rescuing five (p. 7).

At this point, I suspect that our deontological intuitions are mostly just tracking the salience of the harm.  If you see or touch the harmed person, we're apt to attribute outsized importance to the harm. Distant future harms to unknown ("statistical") victims, by contrast, seem maximally non-salient, and so avoid activating deontological intuitions.  As a result, the suggestion that there could be deontological constraints against these sorts of harms can seem intuitively absurd.  But insofar as we doubt that salience provides a sufficiently principled basis for counting some harms more than others, we may be forced to conclude that deontological constraints against salient harms are ultimately in no better position.

It's a nice challenge, at any rate, which deontologists will need to address if they want to appeal to anything stronger than the Doctrine of Double Effect.

[Update: I've been pointed to Nye's very similar 2014 paper, 'Chaos and Constraints', which also contains some nice arguments against appealing to the DDE here, e.g. by comparing lesser means-harms with greater collateral harms.]

3 comments:

  1. This is indeed a nicely constructed argument, although the particular way in which it is characterized is a little odd -- it reads a little bit like a consequentialist's attempt to explain how a deontologist would think, since it still treats consequences as the things of primary importance for the *moral reasoning itself*. I think your DDE response at least partly works for this reason -- while harms may be the reason why something is morally important, a genuinely deontological view would put much more emphasis on how our actual choices are related to those harms, which DDE partly covers. That is to say, deontologies are usually structured in such a way that the things we *primarily* reason about morally are *our choices* and not about the situation overall, which only comes into play to the extent that it can actually inform our choices in the here and now. 'Salience', I think, is probably a good term for it.

    For all that, though, the argument does nicely, as you say, in at least raising the question of how to fit consideration of harms and benefits into a general deontological approach, which most deontologists who aren't hardcore Kantians want to do.

  2. Piggybacking on the salience stuff and Nye's term "distal harms", I wonder if deontologists can use some sort of metric for how direct the causation was. Then the extra harm penalties are weighted by the metric.

    Perhaps this metric could be formalized using dynamical systems: either by thinking about how much the agent changed the "initial condition" or how far down the trajectory from the new (and/or old) initial condition the harm occurs. Other philosophies of causation might lead to other metrics.

    Or in a game-theoretic environment with strategic agents, we could think of players in an extensive form game tree. The more nodes up the tree your last action was separated from the final move (perhaps by nature), the less responsibility you bear and thus the less penalty your ethical preferences register. Agents who act further up the tree, but not at the last decision node, can still be responsible (if to a lesser degree) for setting up the situation where the last agent is incentivized (or has no choice but) to implement that harmful outcome.
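    A minimal sketch of what such a metric might look like (my own toy formalization, not anything from the paper or from Nye): represent each harm by its magnitude together with the number of intervening decision nodes between the agent's last action and the harm, and discount the deontological penalty geometrically with that distance.

    ```python
    def weighted_harm_penalty(harms, discount=0.5):
        """Toy causal-distance-weighted harm penalty.

        `harms` is a list of (magnitude, distance) pairs, where
        `distance` counts the decision nodes separating the agent's
        last action from the harm (0 = the agent directly inflicts it).
        The penalty for each harm is discounted geometrically, so
        distal harms register less than proximate ones.
        """
        return sum(magnitude * discount ** distance
                   for magnitude, distance in harms)

    # A direct harm counts in full; the same harm three choice-nodes
    # downstream counts for only an eighth as much at discount = 0.5.
    direct = weighted_harm_penalty([(10, 0)])   # 10.0
    distal = weighted_harm_penalty([(10, 3)])   # 1.25
    ```

    The choice of geometric discounting and the value 0.5 are, of course, arbitrary placeholders; the substantive philosophical work would lie in justifying any particular metric.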

    Replies
    1. That's an interesting idea, but I expect it would end up implying excessive discounting in the Mystery Box case (where there's an arbitrarily complicated causal chain within the black box, but the "output" of killing someone right before your eyes remains extremely salient).

