Monday, March 05, 2012

Consequentialist Decision Procedures

Most character-based objections to consequentialism start from the assumption that a consequentialist agent would make decisions by performing explicit "expected value" (EV) calculations. Most consequentialists respond by pointing out that the bad consequences of fallible humans attempting to implement such a decision procedure ensure that consequentialism would actually recommend against such an "unfortunate" mindset. But this response doesn't address whether the EV decision procedure is, even if not consequentialist-recommended, nonetheless the consequentialism-exemplifying or fitting mindset. This then leaves us vulnerable to the Strongest Self-Effacingness Objection. So, to address this deeper problem, we need to show that the EV decision procedure is not even rationally fitting, according to consequentialism.

First, consider why one might initially think that the (fitting) consequentialist agent would use EV. Presumably the thought is something like this: There's an isomorphism of sorts between moral facts and fitting mindsets (e.g., it's fitting to desire just what's good/desirable); the facts about what one ought (rationally) to do, according to consequentialism, are settled by the expected values of one's options; so it's rationally fitting to choose what to do by calculating the expected values of one's options.
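To fix ideas, here's a minimal Python sketch of the decision procedure the objector has in mind (all names and numbers are purely illustrative, nothing the argument depends on): for each available option, sum the value-weighted probabilities of its outcomes, then pick the maximizer.

    def expected_value(option, outcomes):
        """Sum of P(outcome | option) * value(outcome) over the possible outcomes."""
        return sum(p * v for (p, v) in outcomes[option])

    def ev_decide(options, outcomes):
        """Choose whichever option maximizes expected value."""
        return max(options, key=lambda o: expected_value(o, outcomes))

    # e.g., with made-up numbers:
    outcomes = {"keep promise": [(1.0, 5)],
                "break promise": [(0.5, 12), (0.5, -4)]}
    ev_decide(["keep promise", "break promise"], outcomes)
    # -> "keep promise" (EV 5 vs. EV 4)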

I think that this rests on an overly simple view of the isomorphism between the moral facts and fitting mindsets. It's true that there's a straightforward isomorphism between the goods posited by a theory and what desires, or ultimate ends, are thereby shown to be fitting. But what about our capacities for "instrumental rationality", which take our ultimate desires as inputs, and -- guided by our available evidence -- yield concrete intentions or actions as outputs? Why think that our moral theories have any particular implications for these operations? On the contrary, I propose that we can give an independent (morally neutral) account of "instrumental rationality", with the upshot that fitting consequentialists aren't saddled with the EV decision procedure after all.

We may begin with the normal competence condition for instrumental rationality (in non-ideal agents): the dispositions that constitute our rational capacities are those that render us well-equipped to act in a wide variety of "normal" environments. This suggests the following feature list:
  1. Well-calibrated expectations (i.e. epistemic rationality)
  2. Well-allocated attentional resources (e.g. scanning for threats/opportunities)
  3. Well-calibrated predispositions (e.g. to avoid pain, be cooperative, help others in need, etc.)
  4. Executive faculty triggered when faced with novel or complex situations that one's predispositions are ill-equipped to handle (relative to one's ultimate ends)
[Compare the picture that emerges from “Dual Process” models of human psychology -- e.g., Kahneman, Thinking, Fast and Slow. Since I'm theorizing about the preconditions for competent agency in human-like agents, it's reassuring to receive empirical confirmation that this is roughly how actual human agents work!]

The crucial observation underlying the above list is that "executive oversight" is an especially scarce resource in our cognitive economy, rendering conscious deliberation too slow to serve as our "default" mode of decision-making in normal circumstances. (There are also more principled philosophical obstacles, e.g. the regress problem inherent in "deliberating whether to deliberate", etc.) Instead, an instrumentally rational (normally competent) human-like agent must by default be guided by generally reliable sub-personal "predispositions" to act directly upon registering pertinent information, only triggering conscious deliberative oversight in those odd circumstances when one's sub-personal mechanisms aren't up to the task.
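Here's a toy sketch of that architecture (again in Python, with all triggers and responses merely hypothetical): predispositions act directly upon pertinent information, and the slow executive faculty is consulted only when no predisposition is competent to handle the situation.

    def act(situation, predispositions, deliberate):
        # Default mode: the first predisposition whose trigger matches fires
        # directly upon registering the pertinent information.
        for trigger, response in predispositions:
            if trigger(situation):
                return response(situation)
        # Executive oversight: slow, conscious deliberation runs only in the
        # odd case that falls outside every predisposition's competence.
        return deliberate(situation)

    # e.g., with hypothetical predispositions:
    predispositions = [
        (lambda s: "pain" in s, lambda s: "withdraw"),
        (lambda s: "person in need" in s, lambda s: "help"),
    ]
    act("person in need of directions", predispositions,
        deliberate=lambda s: "stop and think...")
    # -> "help", with no explicit calculation performed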

On this view, the fitting (human, non-ideal) agent rarely acts upon explicit deliberation at all, let alone explicit EV calculations. This is so even when we plug in impartial consequentialist values as the "ultimate goals" at which this instrumentally rational agent aims. Furthermore, even when conscious deliberation is triggered, the evidence that we are unreliable at EV calculations precludes us from accepting their verdicts too hastily and uncritically (especially when the verdicts are at odds with more reliable rules of thumb, e.g. against torture, harming innocents, etc.).
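This last point can be made vivid with one more hedged sketch (the reliability numbers are invented for illustration): given evidence of her own calculational unreliability, the agent should side with an explicit EV verdict over a conflicting rule of thumb only when the calculation is more likely to be correct than the rule.

    def trust_calculation(p_calc_correct, p_rule_correct):
        """Accept the calculation's verdict only if it's more likely to be
        right than the rule of thumb it contradicts."""
        return p_calc_correct > p_rule_correct

    # e.g. trust_calculation(0.7, 0.99) -> False: even a 70%-reliable EV
    # calculation shouldn't override a 99%-reliable rule against torture.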

Question: My above distinction between "morally neutral" instrumental rationality and "morally determined" ultimate ends seems well-suited to consequentialist theories. But does this approach seem appropriate for deontological theories also? Should fitting deontologists be understood as simply having certain constraints ("don't lie", etc.) among their ultimate ends? Or are deontological constraints better understood as mirrored in the "decision procedure" that converts the agent's aims into actions -- providing, in effect, an alternative to standard "instrumental rationality"?

1 comment:

  1. I suspect that the ethically relevant decisions made by military commanders will be made using explicit consequentialist-type calculations, performed swiftly by dint of practice. However, other military virtues such as obedience/duty constrain how far this can be taken. One review I could find was here. Double-effect-type reasoning also seems popular.
