Monday, May 15, 2006

Global Strategies and Indirect Reasons

Building on my previous post: Suppose there is some worthwhile goal G (e.g. happiness or general utility), which is best achieved by an "indirect" strategy, i.e. by aiming at goals S other than G itself. What is the normative status of the strategically recommended goals S, especially in those particular instances where they conflict with G?

We have reason to achieve G. But I am more likely to achieve G by adopting S as my goal instead. So there are instrumental reasons to aim at S rather than G. If I know all this, my apparent and objective reasons will coincide, so the demands of rationality coincide with what is objectively best: namely, to achieve G by aiming at S instead.

This may seem puzzling. Rationality tells us to aim at the good, or do what seems best, i.e. maximize expected utility (for whatever scale of "utility" we're interested in). But the whole idea of the indirect strategy is to be guided by reliable rules rather than direct utility calculations. One effectively commits to occasionally acting irrationally (in the 'local' sense), though it is rational/optimal to make this commitment. Parfit thus calls it "rational irrationality".

But we may question whether it is really irrational to abide by the rules (against apparent utility) after all. We adopt the indirect strategy because we recognize that our direct calculations are unreliable. The over-zealous sheriff might think that torturing a terrorist suspect would have a high expected utility. But if he recalls his own unreliability on such matters, he should lower the expected utility accordingly. As a good indirect utilitarian, he believes that in situations subjectively indiscernible from his own, the best results will generally be obtained by respecting human rights and following a strict "no torture" policy. Taking this "meta" information into account, then, he should reach the all-things-considered conclusion that expected utility is maximized by refraining from torture.
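The sheriff's reliability adjustment can be put as a toy expected-utility calculation (a minimal sketch; every number here is invented purely for illustration, not drawn from the argument itself):

```python
# Toy sketch of the sheriff's "meta" reasoning. All utilities and
# probabilities are made up for illustration.

# Naive first-order estimate: torture *seems* to have high expected utility.
naive_eu_torture = 0.8 * 100 + 0.2 * (-50)

# Meta-information: in subjectively indiscernible situations, agents who
# judged "torture maximizes utility" have rarely been right, say 10% of
# the time, and being wrong is catastrophic.
reliability = 0.1
eu_if_judgment_right = 100   # utility if the first-order judgment is correct
eu_if_judgment_wrong = -500  # utility of torturing when it doesn't help

adjusted_eu_torture = (reliability * eu_if_judgment_right
                       + (1 - reliability) * eu_if_judgment_wrong)
eu_no_torture = 0  # baseline: follow the strict "no torture" rule

# All things considered, the rule wins: the adjusted estimate is negative.
assert adjusted_eu_torture < eu_no_torture
```

The point of the sketch is only that once the agent's own unreliability enters the calculation, the all-things-considered expected utility of rule-breaking can flip sign, even when the first-order estimate looked favorable.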

Global reasons thus entail local reasons. It's not a matter of following strategy S no matter what seems most likely to achieve G. A broadened perspective will lead the agent to recognize that S is what's most likely to achieve G. The conflict is only with prima facie "seemings". His all-things-considered judgment of "what seems best" should in fact coincide with his general strategy. So "rational irrationality" need not be genuinely irrational at all. It is only irrational in the restricted, "local" sense whereby we only take into account first-order evidence or considerations, and fail to consider higher-order issues of reliability and so forth. But rationality, simpliciter, is surely an "all things considered" rationality. And when we consider all things, the apparent conflict dissolves.



  1. This seems a little like communism. (Which gives us a test case!)
    Some group makes a set of rules (if it were just you making the rules, they would have an even less special status) on the assumption that they know a lot more about what you need in all situations than you do when confronted directly with the choices.

    I guess it would depend on the person, but to me it seems that in general you would be in more danger of causing harm by obeying the rules than by disobeying them.

    Possibly the thesis relies on the fact that when confronted with the choice you are likely to have too much adrenaline and become irrational, or that one person can never approach the cumulative wisdom of a religion/moral system or similar. But there needs to be something to outweigh the fact that when you are there you have a lot more information.

    Maybe the above would be more applicable to normal people and less so to philosophers, who should be more resistant. Also, I'd accept that you may not have the time/intellect to think it out properly (and then might revert to a previous position), but that is a standard formula you would have to apply even in forming your global opinions.

    Anyway, the global rules theory doesn’t seem to work very well in economics; why would it work in morality?

    (I also note that you would still have LAWS, because they serve a different purpose.)

  2. Pat, the justification for my indirect utilitarianism (as explained in my original post on the topic) is very much grounded in classical utilitarianism itself, and not some independent desire for rule-worship. It is, I happily concede, merely "sophisticated utilitarianism", i.e. the position that every (act) utilitarian should naturally be led to.

    I don't think I.U. is the same as Rule Util. To situate the tradition, I take Derek Parfit and R.M. Hare to be archetypical indirect utilitarians. It would be weird to call them rule utilitarians.

    But in any case, my argument suggests that your point is moot. If it's true that there are situations where, faced with subjective evidence E, we are transparently in a position to reliably employ direct utility calculations, then this will be included in the globally optimal strategy! The best rules will include one to the effect that "If you find yourself in situation E, then ignore the other rules and directly maximize utility!"

    (Incidentally, I have a taste for bullets.)

    So my point is that, all things considered, direct and indirect reasons simply can't come apart in the way required for it to be a significant question whether the strategic rules have independent normative force.

  3. 1) I think your position is misleading to the average reader. Most of us who read your posts probably assume that your indirect utilitarianism differs significantly from act utilitarianism (partly as a result of how fiercely you seem to try to differentiate it). I don’t think you should intentionally surrender the term “utilitarianism” to an incorrect/intellectually deficient belief of how it might be applied.

    2) I'm concerned that indirect utilitarianism or rule utilitarianism might be used as a "cop out" (which, actually, I propose is usually the case). For example, a person might want to excuse the fact that they don’t act in a utilitarian manner, so they might find a set of reasonable rules they live by and declare them the indirect rules.

    3) I think it obscures some of the most important topics there are.
    Related to this is that much of the debate over morals is already treated by people as a debate over what general rules people can live by; taking it a level higher potentially leaves even the philosophers paying the details only lip service.

  4. Pat, I've a forthcoming post which will discuss situations where, to succeed in G, we must pursue S for its own sake and not for the sake of G.

    "We are still forced to make the same utility calculations and epistemic judgments that a direct act utilitarian would demand of us."

    I think you've misunderstood. The vast majority of the time, we will not have to make any utility calculations at all. The globally optimal rules will only tell us to employ direct calculation if we're in that specially gerrymandered situation E, which probably none of us ever will be. Now, I've pointed out that if one were to attempt a direct calculation, then considerations of "metacoherence", reliability, etc., should soon lead one to abandon that and return to the rules of thumb. But that doesn't mean we have to go through that whole tired process every time we make a moral decision. Most of the time we can just directly employ our (non-utilitarian) practical morality.

    "if you are willing to hang some innocent people..."

    Oh, please. If you want to play that game, I'll start ranting about how you think your moral "purity" is more important than the millions of innocent people who would otherwise have been tortured (or whatever our silly thought-experiment of the day is). Don't be such an ass.

  5. Some additional thoughts...

    There may be a bit of psychology in here, i.e. mental effort seems to be a fundamental aspect of this problem.

    1) You can't apply a moral calculation (or a rule) to every action; some things you will just do instinctively or selfishly or from habit, so we must be talking about some subset of actions that we will control.
    2) Calculations and rules must be combined – it is nonsense to imagine you would reinvent yourself every time you had a thought (that just wouldn’t work), or that you would mindlessly obey rules without some sort of analysis – at a bare minimum, of which pre-prepared rules were appropriate.
    a) You may well be saying that any algorithm is a rule, but if so I’m having difficulty understanding how act utilitarianism can exist. In which case it would be a straw man.
    3) Clearly we can say it is easier to follow a reverse-justified set of rules, but it may be (I can think of some reasons why it probably is) harder to get marginally better utility out of following theoretically justified rules as compared to utility calculations.
    a) Applying calculations takes effort – but so does applying rules where those rules differ from normal actions. It is debatable which will be easier, because some rules might make that easier, but then again so might some utility-calculation algorithms (e.g. a rule may well conflict with normal behavior more often than a calculation).
    4) If we divide the algorithm into rules and calculations, and admit that people are using a combination of both and are debating how that should be shared, then there is a question as to how this works:
    a) does an individual apply calculations from day to day which then are used to form templates and rules?
    b) Do they create rules and at some point declare them fixed in order to work on other rules?
    How should it be done?
    5) Does trying to be an act utilitarian or an indirect utilitarian improve this system? Is the debate about marginal improvement, or is it about whose dogmatic position is closer to the happy middle?

  6. "I just said that we have to talk about other things if you are gonna bite the bullet."

    Of course the problem is with how you said it, with such a one-sided presentation of an emotive example which gave no hint of recognizing the reasons why a utilitarian might accept that position. It was a gratuitous jab or "parting shot" before moving on to other issues. How would you like it if a pro-lifer turned up on your blog and said "Well, if you're willing to slaughter innocent unborn babies..."? Even if it's not strictly false, anyone who would phrase it like that clearly isn't interested in representing your position fairly. They're just going for the rhetorical effect, either to smugly express their own moral superiority, or simply to piss you off. Either way, that makes them an ass.

  7. Okay, I appreciate that you didn't intend it that way, but my point was precisely that it is an unfair description because you leave out all the features which are relevant to the utilitarian's moral judgment. I'm not worried about the action being "intuitively shocking". My complaint was your misleading and unfair description of the action. (I'd happily concede that I'm "willing to hang an innocent person if I transparently know that this is the only possible way to save a million other innocent persons". But that's a whopping huge "IF". So it's awfully misleading to say that I'm "willing to hang an innocent person", simpliciter, as if it's the kind of thing I'd endorse in everyday life.) As in the abortion case, it simply isn't a fair representation of the other person's position. I hope that's a little clearer now.

    (Though I'll grant that I was probably a little over-sensitive due to having just endured a more significantly hostile comment in another thread, which certainly wasn't your fault.)

    As for being an ass, if I ever leave such a rhetorically loaded throwaway "parting shot" on your blog, you're most welcome to accuse me of the same. But I reserve the right to be an ass on my own blog ;-). (Not that I think there's anything wrong with emotive language in the context of a substantive argument.)

  8. This dispute seems to have stalled progress in debating anything substantive.

  9. Pat, okay, it looks like I misinterpreted you. My apologies.

  10. Surely I stumbled across something in my last two posts worthy of addressing?

  11. G., you might need to explain why you think indirect utilitarianism "obscures some of the most important topics there are." That isn't clear to me at all.

    As for my position being "misleading", I think it would be more misleading to say one is an "act utilitarian" but insist that this means something different from how the position is commonly understood. There's little point arguing over terminology though. The important point is that we typically shouldn't try to employ utilitarian reasoning in our everyday lives.

  12. Richard,
    Try to meet me half way here, or at least 1/3 of the way.

    Your position seems to contradict the usual sources - for example
    Act utilitarianism
    "Many act utilitarians agree that it makes sense to formulate certain rules of thumb"

    Similarly all the other dictionaries say something like "... the most common understanding of Utilitarianism and uses the outcome of an action to assess whether the action is right or wrong."

    A search for "indirect utilitarianism" on the other hand gives – you (interesting...)
    But in general it would seem to be defined as per the Penguin dictionary:

    “A kind of utilitarianism which recognizes that an agent is more likely to act rightly by developing the right attitudes, habits and principles. This indirect utilitarianism is so called because it bears on actions ONLY indirectly.” (my emphasis)

    At first glance this implies a plausible (but inferior) theory wherein you DON’T EVER apply utility calculations; you only ever employ those tools that indirectly affect habits, attitudes, and principles in general. Which is the opposite of “direct act utilitarianism”, something you seem to confusingly simplify to "act utilitarianism".

    However I expect your definition is more like this

    1) Direct: [I note you seem to use act utilitarianism not direct act utilitarianism]
    Agents should seek to maximize aggregate utility by trying to ascertain, on each occasion, which available alternative would produce the greatest utility and act accordingly.

    2) Indirect:
    Agents should at the intuitive level seek to maximize aggregate utility not by trying to ascertain, on each occasion, which available alternative has the greatest utility, but by abiding by tried and tested rules for the most part … Agents should appeal to the criterion of rightness directly only at the critical level: (1) when there are conflicts between the rules, (2) when the rules do not provide sufficient guidance, and (3) when there are obviously extraordinary utility losses associated with following the rule.

    I think this guy has also failed to clearly define why 1) doesn’t include 2), but regardless, I think I know what he is trying to do.

    That is, to create what I would term a “leading definition”. What it does is define one form as extreme, and the other as the other side plus the best of your argument (despite the fact that the name implies a strict division).

    (Now I have a number of thoughts on how these two interrelate, the fundamental impossibility of pure direct utilitarianism, and the fact that most decisions are dominated by habit and instinct rather than by moral strategies such as utilitarianism – but we can put that aside so I don’t lose you.)

    But even this definition faces the standard rule utilitarian problems (I assume I don’t need to list them).
    "We typically shouldn't try to employ utilitarian reasoning in our everyday lives."

    I guess I am nitpicking here but that seems outright wrong. I give an example though I expect it is not required...

    Imagine swapping your least productive use of your "brain time" for a little time looking into utility of certain actions. Is there any benefit?

    Anyway – the very concept of indirect utilitarianism seems to fly in its own face, in that it gets you out of the habit of considering utility on a day-to-day basis. I expect that has implications for your habits, attitudes, and principles.

    "Obscures some of the most important topics there are."

    I think I need to go a bit slower here so I’ll let you catch up on the above but the paragraph above touches on a little.

  13. OK, I went back to one of the old posts to get your old approach. So I might address that, to save you tackling the other approach.

    R.M. Hare's "two-level" utilitarian theory... The intuitive level is our everyday, practical, working morality [rules of thumb]...The critical level, by contrast, is when we critically reflect upon our intuitive-level principles. If beset by a 'moral dilemma', in which we have a clash between our principles, we will need to reason about how to resolve it. Or, in moments of cool reflection, we might ask for justifications of our intuitive-level principles... This is where utilitarianism comes in.

    (and indeed that is a correct description of a generic human.... [well aside from the fact that it might not be utility they want to maximize])

    the thing that concerns me is that you follow that up with

    "attempting to reason in a utilitarian fashion tends to have disastrous consequences, and fails miserably to maximize utility. Therefore, we ought not to reason in a utilitarian manner."

    Maybe you mean (using the points in that post) "in the heat of the moment (time), when you have a shortage of brain power, when you can't trust your own judgment or willpower, when the situation is generic/has unguessable side effects" (or the reverse – when the factors don’t combine in the right way to make analysis useful). But if the qualifier exists, it seems to constantly be dropped?

    I guess the above can be used as a criterion to allocate a period of time a status as "mindless action according to rules" time or "rethinking rules" time.

  14. Oh well, I'll get the rest of this out of the way...

    I also wonder: is there a limit to the number of rules a person can have in their head? A condition under which the "rule utilitarian" aspect of act utilitarianism becomes more and more negative?
    Do you examine rules to make sure they are better than the default? How much time does that take? Every rule?

    Surely the default must be considered to be normal behavior, which is not too bad.
    I would suggest that experimentally it is fairly close to rule-ish indirect utilitarianism (don’t kill, don’t steal, etc. – all those things most people don’t do anyway)
    and vastly inferior to pretty direct indirect utilitarianism (all those hard calls people DON'T make that are often worth multiple lives, including high-level things where individuals following general rules will dig holes for themselves, like maybe dealing with infectious diseases and so forth).

    I personally believe there is not nearly enough hypocrisy in the world. It sounds funny, but too many people are too eager to treat very similar situations the same despite there being different consequences.
    They just apply the rule and never get round to reviewing it based on the situation.

    I also note that you originally seem to have pointed me to this debate based on a government-level discussion, so that’s how I was approaching the initial few posts in the old thread – and apparently NOT how you were approaching it, which really set us up to disagree.

  15. Do you think that libertarians (or other philosophies) are in a sense indirect "utilitarians"?

    1) they apply a set of principles that they tend to assert is best in all situations (as well as being fundamentally right) - any argument that this is not the case is countered by recourse to unknowable future side effects, lack of trust in the morality of decision makers in the heat of the moment and the other usual suspects.

    Most people would then accept that there may be exceptions where the theories contradict (i.e. two people apparently have property rights over the same thing) or the result is outstandingly dysfunctional or unjust (e.g. I can’t buy your children as slaves or claim ancestral ownership of your house).

  16. Some libertarians are utilitarians, whereas others are fundamentally deontologists (and so not 'utilitarian' in any sense).

    "Maybe you mean (using the points in that post) "in the heat of the moment (time), when you have a shortage of brain power, when you can't trust your own judgment or willpower, when the situation is generic/has unguessable side effects.""

    Yes, that's right. There are special occasions when one may employ utilitarian reasoning. But the point is that one shouldn't usually do so.

  17. So do you think the vast majority of normal people are utilitarians or deontologists?

  18. This is rather like being Cullen debating with a national party MP about tax.

    I wonder who in history is a "naive utilitarian" – I note that Bentham is sometimes called a naive utilitarian, but apparently the Benthamites supported all sorts of rules formed through contemplation – i.e. they seem to be indirect utilitarians, as is almost any coherent group of utilitarians that has ever caused trouble (one of the cheat essays on the net seems to make this sort of point too).

  19. I think I might have gotten to the root of it.

    OK this is what I think you do -
    1) You say "in my experience consistent laws and equality are good" and a few others (maybe it comes from socialism).

    This is your basic proposition - then you proceed to look at utilitarianism.

    2) Now, while utilitarianism is quite straightforward in theory, when you want to do something about it, it gets more complex, because your decisions have moral values but you can’t be sure what they are.

    So you create a template or a set of rules so you can start talking about results, by assuming the above sorts of things (effectively assuming they are utilitarian in all cases). I expect other utilitarians might choose slightly different ones, but yours would be pretty common.

    The problem is: who evaluated that the initial principles were utilitarian? Once that becomes a fundamental question, it quickly becomes the MOST important thing for the utilitarian society to debate – but then it becomes a variable, and one gets stuck in, at best, a paradox.

    So for it to be prescriptive you need to make some assumptions, but doing this leaves you open to having all sorts of naive assumptions about the results of a utilitarian analysis. Some of the most extreme examples, of course, are things like “Islam is right”, while the least extreme are the non-committal “anything is possible” approaches that don’t really provide advice.

