Wednesday, November 30, 2011

Satisficing by Effort

Satisficing Consequentialism aims to capture the intuitive idea that we're not morally obligated to do the best we possibly can; we merely need to do "good enough" (though of course it remains better to do better!). Ben Bradley, in 'Against Satisficing Consequentialism', argues convincingly against forms of the view which introduce the baseline as some utility level n that we need to meet. Such views absurdly condone the act of gratuitously preventing boosts to utility over the baseline n. But I think there is a better form that satisficing consequentialism can take. Rather than employing a baseline utility level, a better way to "satisfice" is to introduce a level of maximum demanded effort, below which one straightforwardly maximizes utility. That is:

(Effort-based Satisficing Consequentialism) An act is permissible iff it produces no less utility than any alternative action the agent could perform with up to X effort.
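
Schematically (a rough sketch, where U(a) is the utility that act a would produce and effort(a) is the effort it demands of the agent):

$$\text{Permissible}(a) \iff \forall a' \, \big( \text{effort}(a') \le X \;\rightarrow\; U(a) \ge U(a') \big)$$

Note that the effort cap restricts only the comparison class: an act requiring more than X effort can still be permissible (indeed, supererogatory) provided it does at least as well as every option within the ceiling.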

Different theories of this form may be reached by fleshing out the effort ceiling, X, in different ways. It might be context-sensitive, e.g. to ensure (1) that it's never permissible to do just a little good when a huge amount of good could be achieved by an only slightly more effortful action; (2) that vicious people can't get away with doing little just because it would take a lot more effort for them to show the slightest concern for others; or (3) that your current effort ceiling takes into account your past actions, etc. I'll remain neutral on all those options for now.

To preempt one possible misreading, I should stress that this theory doesn't require (or even necessarily permit) you to "try hard" to achieve moral ends. That would be fetishistic. If you can achieve better results with less effort, then you're required to do just that! It merely places a ceiling on how much effort morality can demand from you. Within that constraint, the requirement is still just to do as much good as possible.

Some other features of the view worth flagging:

* Unlike traditional (utility-baseline) satisficing accounts, it never condones going out of your way to make things worse. Such action is rendered impermissible by the fact that there are better outcomes that you could just as easily -- indeed, more easily -- bring about (i.e. by doing nothing).

* It respects the insight that the "demandingness" of maximizing consequentialism cannot consist in its imposing excessive material demands on us, since the material burden on us is less than the material burden that non-consequentialism imposes on the impoverished (to remain without adequate aid). Instead, if there is an issue of "demandingness" at all, it must concern the psychological difficulty of acting rightly.

* It builds on the idea that there's no metaphysical basis for a normatively significant doing/allowing distinction. The only morally plausible candidate in the vicinity, it seems to me, is effortful willing.

* It provides a natural account of supererogation as going beyond the effort ceiling to achieve even better results. (As others noted in class, traditional utility-baseline forms of satisficing consequentialism have trouble avoiding the absurd result that lazing back in your chair might qualify as "going above and beyond the call of duty", if you have inferior alternative options that nonetheless exceed the utility baseline.)

So, all in all, this strikes me as by far the most promising form of satisficing consequentialism. Can anyone think of any obvious objections? How would you best flesh out the details (of how X gets fixed for any given situation)?

P.S. My next post will look at why we might be led to a view in this vicinity, over (or as a supplement to) straightforward scalar consequentialism.

[Update: Cross-posted to PEA Soup.]

8 comments:

  1. I like the general direction taken -- the ability to think in terms of alternatives is, I think, a major improvement over any crude satisficing account. I confess myself skeptical of the idea that effort is just that morally significant; but perhaps your effortful willing point does something to indicate why it might be.

    I'm thinking this through with a tired brain, but I suppose it's important how one defines the effort threshold for this. I see you mention this; it seems to me that it's going to be a big question. It's at least relatively clear how one would go about defining a utility threshold, since a sort of utility threshold is built into any kind of utilitarian consequentialism already, and the question is just how demanding it's going to be. But the only obvious way to define the effort threshold seems to be as the effort required for the kinds of actions that meet some kind of utility threshold, which would make the problem come back in a more complicated form. Direct action to make things worse would perhaps still be ruled out, but the view would still allow slacking off if the effort threshold itself were indirectly set to meet a utility threshold, and if some utility levels require considerable (but attainable) effort. Perhaps there's some other way to do it.

  2. I really like this idea, but I worry that it will still have some counterintuitive results. So for example, it has to turn out that it's obligatory to ruin your shoes (the effort-equivalent of two extra hours of work) if doing so will save a drowning child. If there's a threshold, clearly this must fall below it or it's a non-starter.

    But since we can each save a life for an extra two hours of effort, and that much effort was already established to be obligatory for that end, it seems that we again get the demandingness we were trying to avoid.

    I can think of some strategies for resisting this conclusion, and this is where things get interesting. One is to say that the effort threshold should not apply on a per-action basis but on a "per year" basis - as in, in order to be morally adequate that year (on Santa's list), you need to expend x units of moral effort choosing actions which are harder to do than... than what? Than what you'd be inclined to do if you were ignoring moral considerations? Let's say yes.

    This would mean that you probably should donate some of your money to a worthy charity, but it's not your duty to have this take over your life. Nice, that's as it should be. And it's an empirical result of psychology that it's much easier to talk yourself into saving a kid in front of you than a kid on a different continent. That fact gives you the leverage for treating those cases differently - also a good result. (Asking someone to save the kid in front of them is asking much less of them than asking them to save the kid far away.)

    What worries me now is that people who are effortlessly very good can turn out to be immoral on this account. That will always be a problem if their effort, and not their actions themselves, becomes the object of moral evaluation. One way such a person could fulfill the injunction to be morally better is to corrupt her character, making it harder for herself to do good acts, so as to have to expend effort in doing them. But I don't think that corruption really is a moral improvement!

  3. This strikes me as a very promising idea. Here's a fairly standard sort of worry, which I suspect fairly standard sorts of moves will help take care of—but it would be good to think through how exactly they'll go here.

    Sometimes the biggest barrier to doing the best action is figuring out what the best action is. Toy example: your only two options are A and B, each is easy to do, one of them has much better consequences than the other (in fact it's A), but it would take a really huge amount of effort to figure out which of them is the better one. In this case, it looks like a not-too-demanding conception of permissibility would say it's permissible to pass on the extremely strenuous research, and take your best guess—so B is permissible. But the EBSC says: it isn't permissible to do B, since A is much better and takes no more effort than B.

  4. I worry that the view will get counter-intuitive results because it is formulated in terms of effort as opposed to self-sacrifice. Suppose that, in the given context, X = 10. Suppose that my top two acts in terms of utility production are (1) A1, which produces 50 units of utility for me and 1,000 units of utility for others and requires 100 units of effort from me, and (2) A2, which produces 51 units of utility for me and 1,001 units of utility for others and requires 101 units of effort from me. And assume that all other alternatives either produce less utility than A1 or require more effort than A1. It seems, then (although I may not be understanding it completely), that EBSC implies that it's permissible to perform A1.

    This seems counter-intuitive to me. After all, isn't it absurd to condone, as EBSC seems to, making a self-sacrifice so as to produce less overall utility (as well as less utility for others)? One should not be permitted to fail to produce more utility for others when one can do so while at the same time benefiting oneself.

  5. I'm sorry, I missed a zero in the second sentence. I should have written "Suppose that, in the given context, X = 100."
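
    Spelling out the arithmetic with the corrected figure (a quick sketch, taking overall utility as the sum of utility for self and for others):

    $$U(A_1) = 50 + 1000 = 1050, \qquad U(A_2) = 51 + 1001 = 1052$$
    $$\text{effort}(A_1) = 100 \le X = 100, \qquad \text{effort}(A_2) = 101 > X$$

    So A2 falls outside the comparison class, and A1 comes out permissible even though A2 is better both for the agent and for others.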

  6. And, Richard, I hope that you'll consider cross-posting this or the next post over at PEA Soup.

  7. Brandon - yep, the devil's in the details! I explore the possibility of basing the effort ceiling on a prior conception of blameworthiness, in my follow-up post. (You might worry that that gets the order of explanation wrong, though.)

    Unknown - I should clarify that by 'effort' here I just mean the effort of will required to implement the immediate decision, not the downstream costs to the agent. So yeah, I think the contingent psychological facts about us will explain why abstract charitable donations (typically) require much more effort/ego-depletion than helping those who are visible and nearby (which might come almost automatically to some people).

    The moral corruption objection is interesting. Clearly the act of corruption will itself be incredibly wrong. Does it render the agent's subsequent barely-good actions more likely permissible? Perhaps. I'm not sure if that's an objection. It's not as though acting permissibly should be one's ultimate aim (certainly not on consequentialist accounts!), so I don't think this result provides agents with any moral reason to corrupt themselves, or to be pleased to be so corrupted. We should just recognize that there are many factors that contribute to the permissibility of one's actions. It might be that one acts permissibly because one is acting well (by objective standards). Or it might be that one is such a pathetic agent that the usual standards don't apply. The latter doesn't sound like something to be proud of!

    [More responses to come...]

  8. I've responded to Doug over at PEA Soup.

    Jeff - yeah, I've been abstracting away from epistemic issues in order to settle the rough contours of the core view. But it's certainly worth thinking about how to deal with those complications. My immediate inclination is to distinguish 'objective' and 'subjective' (evidence-relative) permissibility, and say that the current account concerns the former. To address the latter, we would need to change the criterion to reference "expected utility" rather than actual utility. Is that enough to deal with your case, do you think? The two simple options have equal expected utility (a .5 chance of the big reward each), whereas the "do more research" option has higher expected utility (an increased chance of the big reward) but may be over the effort ceiling, depending on the details.
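
    To make that concrete (a sketch, assuming payoffs H > L for the better and worse outcomes, and a post-research chance p > 1/2 of choosing correctly):

    $$EU(\text{guess}) = \tfrac{1}{2}H + \tfrac{1}{2}L, \qquad EU(\text{research, then choose}) = pH + (1-p)L > EU(\text{guess})$$

    On the expected-utility reading, the guess then comes out permissible just in case the research option's extra effort takes it above the ceiling.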

