Monday, May 15, 2006

Global Strategies and Indirect Reasons

Building on my previous post: Suppose there is some worthwhile goal G (e.g. happiness or general utility), which is best achieved by an "indirect" strategy, i.e. by aiming at goals S other than G itself. What is the normative status of the strategically recommended goals S, especially in those particular instances where they conflict with G?

We have reason to achieve G. But I am more likely to achieve G by adopting S as my goal instead. So there are instrumental reasons to aim at S rather than G. If I know all this, my apparent and objective reasons will coincide, so the demands of rationality coincide with what is objectively best: namely, to achieve G by aiming at S instead.

This may seem puzzling. Rationality tells us to aim at the good, or do what seems best, i.e. maximize expected utility (for whatever scale of "utility" we're interested in). But the whole idea of the indirect strategy is to be guided by reliable rules rather than direct utility calculations. One effectively commits to occasionally acting irrationally (in the 'local' sense), though it is rational/optimal to make this commitment. Parfit thus calls it "rational irrationality".

But we may question whether it is really irrational to abide by the rules (against apparent utility) after all. We adopt the indirect strategy because we recognize that our direct calculations are unreliable. The over-zealous sheriff might think that torturing a terrorist suspect would have a high expected utility. But if he recalls his own unreliability on such matters, he should lower the expected utility accordingly. As a good indirect utilitarian, he believes that in situations subjectively indiscernible from his own, the best results will generally be obtained by respecting human rights and following a strict "no torture" policy. Taking this "meta" information into account, then, he should reach the all-things-considered conclusion that expected utility is maximized by refraining from torture.
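
To make the reliability adjustment concrete, here is a toy calculation (the numbers are purely hypothetical, chosen only for illustration). Suppose the sheriff's first-order estimate is that torture would yield +100 utility if his reading of the situation is correct, but -500 if it is mistaken (an innocent suspect, eroded rights, institutional damage). And suppose that, in situations subjectively indiscernible from his own, such readings are correct only 30% of the time. Then:

EU(torture) = (0.3 × 100) + (0.7 × -500) = 30 - 350 = -320
EU(refrain) = 0

Since -320 < 0, the direct expected-utility calculation, once the agent's own unreliability is priced in, itself recommends abiding by the "no torture" rule.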

Global reasons thus entail local reasons. It's not a matter of following strategy S no matter what seems most likely to achieve G. A broadened perspective will lead the agent to recognize that S is what's most likely to achieve G. The conflict is only with prima facie "seemings". His all-things-considered judgment of "what seems best" should in fact coincide with his general strategy. So "rational irrationality" need not be genuinely irrational at all. It is only irrational in the restricted, "local" sense whereby we only take into account first-order evidence or considerations, and fail to consider higher-order issues of reliability and so forth. But rationality, simpliciter, is surely an "all things considered" rationality. And when we consider all things, the apparent conflict dissolves.


29 comments:

  1. This seems a little like communism. (which gives us a test case!)
    Some group makes a set of rules (if it were just you making the rules, they would have an even less special status) on the assumption that they know a lot more about what you need in all situations than you do when confronted directly with the choices.

    I guess it would depend on the person, but to me it seems that in general you would be in more danger of causing harm by obeying the rules than by disobeying them.

    Possibly the thesis relies on the fact that, when confronted with the choice, you are likely to have too much adrenaline and become irrational, or that one person can never approach the cumulative wisdom of a religion/moral system or similar; but there needs to be something to outweigh the fact that when you are there you have a lot more information.

    Maybe the above would be more applicable to normal people and less so to philosophers, who should be more resistant. Also, I'd accept that you may not have the time/intellect to think it out properly (and then might revert to a previous position), but that is a standard formula you would have to apply even in forming your global opinions.

    Anyway, the global rules theory doesn't seem to work very well in economics; why would it work in morality?

    (I also note that you would still have LAWS, because they serve a different purpose.)

  2. Well, it seems to me that if you are going to hang your hat on the "unreliability of our direct calculations" (if our direct calculations are unreliable, why would our indirect calculations be less so?), then you don't really have an indirect theory.

    It seems to me that a genuine indirect theory is committed to the idea that the rules have independent force beyond their action-guiding "rule of thumb" status. Otherwise, it is just sophisticated act-utilitarianism.

    But when asked about that, you punt by appealing to our cognitive limitations.

    But it would not be hard to construct a case where it really IS the case that you maximize welfare by breaking the rule, and where, despite all our cognitive limitations, we would be in a position to know that. I see no reason why that possibility isn't a live one.

    It seems to me you are committed to breaking the rule in those cases.

  3. The post sounds good to me.

    "But when asked about that, you punt by appealing to our cognitive limitations."

    By saying that we're cognitively limited, he's saying that the rules *are* just rules of thumb, useful because they're a good shortcut for calculating what actions will work toward a more general (and thus, hard-to-apply) goal. I'm not sure exactly what you're saying, Pat, but I don't see how Richard has punted anything.

    "But it would not be hard to construct a case where it really IS that you maximize welfare by breaking the rule and despite all our cognitive limitations, we would be in a position to know that. I see no reason why that possibility isn't a live one."

    In other words, a case where you understand the application of G as well as S, and see how they conflict? Well, the whole point of the rule of thumb is that G is difficult to apply correctly, but if you're able to calculate the likelihood that you're right about what each strategy actually says in a given situation, and the likelihood of your being right about G is higher in a given situation, then yes, go ahead and apply G. But if G really is hard to apply, then the likelihood that you can correctly apply G to a situation should in general be lower than the likelihood for S.

  4. "By saying that we're cognitively limited, he's saying that the rules *are* just rules of thumb, useful because they're a good shortcut calculating what actions will work toward a more general (and thus, hard-to-apply) goal."

    Precisely, but that isn't indirect or rule utilitarianism. If you are saying that the calculations are hard, and we often don't have enough time or info to do them properly so we use rules of thumb, then you are simply arguing for a version of act-consequentialism.

    And that's fine (well, fine for this particular discussion) if, like Smart, you are willing to bite the bullet in those cases--which are certainly possible--when any reasonable person would say that the utility calculation comes out in favor of sacrificing someone.

    Rule or indirect utilitarianism tries to avoid having the utilitarian bite that bullet by saying that the rules themselves come to have independent force beyond being mere "rules of thumb" (see Rawls in "Two Concepts of Rules").

    The objection to this kind of utilitarianism is that it seems straightforwardly incoherent or rule-fetishistic. Why continue to follow the rule when the reason for following the rule tells you not to?

    That's the status of the debate. I don't see how Richard has moved the ball at all. Hence, my question to Richard is why he thinks he has. I may be missing something.

  5. "And that's fine (well, fine for this particular discussion) if, like Smart, you are willing to bite the bullet in those cases--which are certainly possible--when any reasonable person would say that the utility calculation comes out in favor of sacrificing someone,."

    OK. I don't see how these cases follow from act-consequentialism. If some obviously heinous action H is such that G(H) (the utility of H according to G) is low, and S(H) is high, then you're still not justified in applying S because the cost of calculating G(H) is so low (and/or its margin of error given a practical amount of calculation is low). This is implied in the "obviously" part, no?

  6. Pat, the justification for my indirect utilitarianism (as explained in my original post on the topic) is very much grounded in classical utilitarianism itself, and not some independent desire for rule-worship. It is, I happily concede, merely "sophisticated utilitarianism", i.e. the position that every (act) utilitarian should naturally be led to.

    I don't think I.U. is the same as Rule Util. To situate the tradition, I take Derek Parfit and R.M. Hare to be archetypical indirect utilitarians. It would be weird to call them rule utilitarians.

    But in any case, my argument suggests that your point is moot. If it's true that there are situations where, faced with subjective evidence E, we are transparently in a position to reliably employ direct utility calculations, then this will be included in the globally optimal strategy! The best rules will include one to the effect that "If you find yourself in situation E, then ignore the other rules and directly maximize utility!"
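
    To illustrate the structure with a toy sketch (purely illustrative; the function and parameter names are made up for this comment, not drawn from anywhere):

    def choose(options, rules, situation, in_situation_E, expected_utility):
        # The globally optimal strategy: abide by the tried and tested
        # rules, except in the specially flagged situation E where direct
        # calculation is known to be reliable.
        if in_situation_E(situation):
            # The escape clause: "if you find yourself in situation E,
            # then ignore the other rules and directly maximize utility!"
            return max(options, key=expected_utility)
        for applies, recommend in rules:
            if applies(situation):
                return recommend(situation)
        # If no rule gives sufficient guidance, fall back to direct
        # calculation (a fallback assumption, for completeness).
        return max(options, key=expected_utility)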

    (Incidentally, I have a taste for bullets.)

    So my point is that, all things considered, direct and indirect reasons simply can't come apart in the way required for it to be a significant question whether the strategic rules have independent normative force.

  7. Well, Richard, if you are willing to hang some innocent people, then you are, and the debate has to proceed to other areas.

    So let me try to shift to a different front. What seems to be motivating your version of IU is that sometimes the utility-maximizing thing to do is to follow certain rules of thumb.

    But there is another worry that is related.

    Rather, we might think that people need to pursue certain ends for their own sakes to maximize utility.

    Now, if we formulate a rule that simply said: "pursue X for its own sake" then it seems that we have escaped this problem (though now we have problems of incoherence).

    But that isn't your rule. Your rule is: "Pursue end X (for its own sake) unless not doing so will reasonably lead (given our cognitive limitations) to greater utility."

    Why doesn't this have us end up right where we are before? We are still forced to make the same utility calculations and epistemic judgments that a direct act utilitarian would demand of us.

    So, I am changing the discussion of your post in an important way. I am adding the additional claim that what is really important for utility maximization is not just that we pursue dancing, say, for the sake of utility, but that we pursue dancing for its own sake. In fact, we wouldn't be made happy or joyous at dancing if we did it to be happy.

    If that's true, then does IU get you out of this worry? Or do you just reject the claim?

  8. 1) I think your position is misleading to the average reader. Most of us who read your posts probably assume that your indirect utilitarianism differs significantly from act utilitarianism (partly as a result of how fiercely you seem to try to differentiate it). I don't think you should intentionally surrender the term "utilitarianism" to an incorrect/intellectually deficient belief of how it might be applied.

    2) I'm concerned that indirect utilitarianism or rule utilitarianism might be used as a "cop out" (which, actually, I propose is usually the case). For example, a person might want to excuse the fact that they don't act in a utilitarian manner, so they find a set of reasonable rules they live by and declare them the indirect rules.

    3) I think it obscures some of the most important topics there are.
    Related to this is that much of the debate on morals is already treated by people as a debate over what general rules people can live by; taking it a level higher potentially leaves even the philosophers giving the details mere lip service.

  9. Pat, I've a forthcoming post which will discuss situations where, to succeed in G, we must pursue S for its own sake and not for the sake of G.

    "We are still forced to make the same utility calculations and epistemic judgments that a direct act utilitarian would demand of us."

    I think you've misunderstood. The vast majority of the time, we will not have to make any utility calculations at all. The globally optimal rules will only tell us to employ direct calculation if we're in that specially gerrymandered situation E, which probably none of us ever will be. Now, I've pointed out that if one were to attempt a direct calculation, then considerations of "metacoherence", reliability, etc., should soon lead one to abandon that and return to the rules of thumb. But that doesn't mean we have to go through that whole tired process every time we make a moral decision. Most of the time we can just directly employ our (non-utilitarian) practical morality.

    "if you are willing to hang some innocent people..."

    Oh, please. If you want to play that game, I'll start ranting about how you think your moral "purity" is more important than the millions of innocent people who would otherwise have been tortured (or whatever our silly thought-experiment of the day is). Don't be such an ass.

  10. Some additional thoughts...

    There may be a bit of psychology in here; i.e. mental effort seems to be a fundamental aspect of this problem.

    So
    1) You can't apply a moral calculation (or a rule) to every action; some things you will just do instinctively or selfishly or from habit, so we must be talking about some subset of actions that we will control.
    2) Calculations and rules must be combined – it is nonsense to imagine you would reinvent yourself every time you had a thought (that just wouldn't work), or that you would mindlessly obey rules without some sort of analysis – at a bare minimum, working out which of some pre-prepared rules is appropriate.
    a) You may well be saying that any algorithm is a rule, but if so I'm having difficulty understanding how act utilitarianism can exist; in which case it would be a straw man.
    3) Clearly we can say it is easier to follow a reverse-justified set of rules, but it may be (I can think of some reasons why it probably is) harder to get marginally better utility out of following theoretically justified rules as compared to utility calculations.
    a) Applying calculations takes effort – but so does applying rules, where those rules differ from normal actions. It is debatable which will be easier, because some rules might make it easier, but then again so might some utility-calculation algorithms (e.g. a rule may well conflict with normal behavior more often than a calculation does).
    4) If we do divide the algorithm into rules and calculations, and admit that people are using a combination of both and are debating how that should be shared, then there is a question as to how this works:
    a) Does an individual apply calculations from day to day, which are then used to form templates and rules?
    b) Do they create rules and at some point declare them fixed in order to work on other rules?
    How should it be done?
    5) Does trying to be an act utilitarian or an indirect utilitarian improve this system? Is the debate about marginal improvement, or is it about whose dogmatic position is closer to the happy middle?

  11. Richard:
    "But in any case, my argument suggests that your point is moot."

    So do mine.

    Pat:
    "Well, Richard, if you are willing to hang some innocent people, then you are and the debate has to proceed to other areas."

    My previous post outlined why I don't think these situations arise. If you can give a counterexample, I'm willing to hear it. Note that any practical system of morality is going to make reasonable mistakes, i.e. wrong decisions in genuinely difficult cases. I don't see how this is a downside.

    "But that isn't your rule. Your rule is: "Pursue end X (for its own sake) unless not doing so will reasonably lead (given our cognitive limitations) to greater utility."

    Why doesn't this have us end up right where we are before? We are still forced to make the same utility calculations and epistemic judgments that a direct act utilitarian would demand of us."

    It doesn't, because we aren't making the same calculations. We only suspend the rules of thumb in situations where direct utility calculations are easy and obvious. We're still obviating the need for the vast majority of subtle and difficult calculations.

    Richard:
    "Pat, I've a forthcoming post which will discuss situations where, to succeed in G, we must pursue S for its own sake and not for sake of G."

    I don't think that, in an ideal rational actor, there are any such cases. It may be that humans need to do that sometimes, because of emotional needs and cognitive biases and such. The next paragraph is copied from a comment on another blog, and is remarkably on-topic.

    People have weird brains. Adopting certain clearly irrational beliefs can improve one's state of mind and increase the chance of reaching certain goals because of the heightened performance, and *maybe* increase overall happiness. I.e. "believing in yourself". That is, if those irrational beliefs don't end up blinding the person to reality in ways that cause missed opportunities, over-exuberant risk-taking, over-conservativeness, or other mistakes. We've evolved to over-estimate our own ability in a wide variety of contexts. We overestimate our agency in situations with favorable outcomes, and underestimate it in those with poor outcomes. There are lots of different cognitive biases, and a lot of them appear to be geared toward keeping our self-esteem and optimism up. I can see how a belief in God can help one's optimism, etc.

    People didn't evolve straight toward greater rationality (though they did evolve greater rationality). They evolved toward a process good at setting goals and achieving them.

    Adopting irrational beliefs is one form of this. Another is adopting irrational goals.

  12. Richard,

    Well, given that you have played the fetishizing moral purity game, I don't see why what I am saying is illegitimate. Hell, you LINKED to a post where you essentially argue that such things are acceptable, and that shouldn't be surprising. I didn't say you would be "happy" to hang innocent people, which would be unfair. I just said you were willing to do so.

    I provided a bullet, you bit it, and I said that if you are willing to bite it, then we don't have much to discuss on this point. And then I moved on to another discussion. All I was doing was explaining why I moved on to a different point. How does that make me an ass?

    I didn't make any moral judgment, or say you were a bad person. I just said that we have to talk about other things if you are gonna bite the bullet. Maybe you shouldn't be so defensive, and lay off the personal attacks.

    All I was asserting, Richard and P, was that an act-consequentialist could just as well say the same thing as you: follow the general rules of thumb except when it could reasonably lead to better utility to do otherwise.

    I wasn't saying that people would have to do utility calculations all the time under your account, only that a direct act utilitarian wouldn't think that either.

    And it was in the context of my new objection that concerned doing G for its own sake. I was merely pointing out that your indirect view doesn't answer that objection any more than the direct view does.

    All I was trying to do was establish where you lie on the dialectic.

  13. "I just said that we have to talk about other things if you are gonna bite the bullet."

    Of course the problem is with how you said it, with such a one-sided presentation of an emotive example which gave no hint of recognizing the reasons why a utilitarian might accept that position. It was a gratuitous jab or "parting shot" before moving on to other issues. How would you like it if a pro-lifer turned up on your blog and said "Well, if you're willing to slaughter innocent unborn babies..."? Even if it's not strictly false, anyone who would phrase it like that clearly isn't interested in representing your position fairly. They're just going for the rhetorical effect, either to smugly express their own moral superiority, or simply to piss you off. Either way, that makes them an ass.

  14. Richard, you far too easily ascribe sinister intent to me, and at no point in this conversation have I asserted anything like moral superiority. I don't believe I have ever remotely called you anything like an ass. In fact, I am fairly sure I have never insulted you.

    And your abortion example doesn't hold any water because the description isn't accurate: slaughtering innocent babies implies, in the vernacular, that abortion kills people. If I believed that and accepted that description, then I wouldn't hold my position. The reason I find that language objectionable is because I think it is an unfair description.

    There was nothing inaccurate or unfair about my description of your position. I was simply, and only, trying to explain what motivated my switch of direction. I was thread-jacking a little bit, and I was just trying to explain why I was doing that.

    I had no intention of lording any moral superiority over you or of pissing you off. I had no idea you would react this way, and I am quite shocked, since you linked to a post where you accepted an action that seems even MORE intuitively shocking. I apologize for producing that reaction, and I would have been more sensitive had I known you would react that way. Still, I don't appreciate being called an ass.

    And for you to object to emotive language when I have read posts on this very site where you have asserted that I am some evil moral fetishist and a conservative(!) because of my ethical views seems pretty rich to me. So get off your horse.

  15. Meh, nix "conservative". That's not what you meant with that post. Although I do "idolize" the 'natural way of things' and "exhibit an unthinking deference to the status quo", which is the worst of conservative thinking.

  16. Okay, I appreciate that you didn't intend it that way, but my point was precisely that it is an unfair description because you leave out all the features which are relevant to the utilitarian's moral judgment. I'm not worried about the action being "intuitively shocking". My complaint was your misleading and unfair description of the action. (I'd happily concede that I'm "willing to hang an innocent person if I transparently know that this is the only possible way to save a million other innocent persons". But that's a whopping huge "IF". So it's awfully misleading to say that I'm "willing to hang an innocent person", simpliciter, as if it's the kind of thing I'd endorse in everyday life.) As in the abortion case, it simply isn't a fair representation of the other person's position. I hope that's a little clearer now.

    (Though I'll grant that I was probably a little over-sensitive due to having just endured a more significantly hostile comment in another thread, which certainly wasn't your fault.)

    As for being an ass, if I ever leave such a rhetorically loaded throwaway "parting shot" on your blog, you're most welcome to accuse me of the same. But I reserve the right to be an ass on my own blog ;-). (Not that I think there's anything wrong with emotive language in the context of a substantive argument.)

  17. Oh come on, isn't it somewhat reasonable to think that all I was doing was writing it shorthand?

    I mean, could any reasonable person look at what I said in context and believe I was saying you just go around hanging people for fun?

    I don't deny that someone could use that statement in a rhetorically loaded way, I am just saying you should know me and the situation better than to think that's what I was doing.

  18. This dispute seems to have stalled progress in debating anything substantive.

  19. Pat, okay, it looks like I misinterpreted you. My apologies.

  20. Surely I stumbled across something in my last two posts worthy of addressing?

  21. G., you might need to explain why you think indirect utilitarianism "obscures some of the most important topics there are." That isn't clear to me at all.

    As for my position being "misleading", I think it would be more misleading to say one is an "act utilitarian" but insist that this means something different from how the position is commonly understood. There's little point arguing over terminology though. The important point is that we typically shouldn't try to employ utilitarian reasoning in our everyday lives.

  22. Richard,
    Try to meet me half way here, or at least 1/3 of the way.

    Your position seems to contradict the usual sources – for example, Wikipedia on "Act utilitarianism":
    "Many act utilitarians agree that it makes sense to formulate certain rules of thumb"

    Similarly, all the other dictionaries say something like "... the most common understanding of Utilitarianism and uses the outcome of an action to assess whether the action is right or wrong."

    A search for "indirect utilitarianism" on the other hand gives – you (interesting...)
    But in general it would seem to be defined as per the penguin dictionary

    “A kind of utilitarianism which recognizes that an agent is more likely to act rightly by developing the right attitudes, habits and principles. This indirect utilitarianism is so called because it bears on actions ONLY indirectly.” (my emphasis)

    At first glance this implies a plausible (but inferior) theory wherein you DON'T EVER apply utility calculations; you only ever employ those tools that indirectly affect habits, attitudes and principles in general. This is the opposite of "direct act utilitarianism", something you seem to confusingly simplify to "act utilitarianism".

    However I expect your definition is more like this

    1) Direct: [I note you seem to use act utilitarianism not direct act utilitarianism]
    Agents should seek to maximize aggregate utility by trying to ascertain, on each occasion, which available alternative would produce the greatest utility and act accordingly.

    2) Indirect:
    Agents should at the intuitive level seek to maximize aggregate utility not by trying to ascertain, on each occasion, which available alternative has the greatest utility, but by abiding by tried and tested rules for the most part … Agents should appeal to the criterion of rightness directly only at the critical level: (1) when there are conflicts between the rules, (2) when the rules do not provide sufficient guidance, and (3) when there are obviously extraordinary utility losses associated with following the rule.


    I think this guy has also failed to clearly explain why 1 doesn't include 2, but regardless I think I know what he is trying to do.

    That is, to create what I would term a “leading definition”. What it does is define one form as extreme and the other as the other side plus the best of your argument (despite the fact that the name implies a strict division).

    (Now I have a number of thoughts on how these two are interrelated – the fundamental impossibility of pure direct utilitarianism, and the fact that most decisions are dominated by habit and instinct and not by moral strategies such as utilitarianism – but we can put that aside so I don't lose you.)

    But even this definition faces the standard rule utilitarian problems (I assume I don’t need to list them).
    So
    "We typically shouldn't try to employ utilitarian reasoning in our everyday lives."

    I guess I am nitpicking here, but that seems outright wrong. I'll give an example, though I expect it is not required...

    Imagine swapping your least productive use of your "brain time" for a little time looking into the utility of certain actions. Is there any benefit?

    Anyway – the very concept of indirect utilitarianism seems to fly in its own face, in that it gets you out of the habit of considering utility on a day-to-day basis. I expect that has implications for your habits, attitudes and principles.

    "Obscures some of the most important topics there are."

    I think I need to go a bit slower here, so I'll let you catch up on the above; but the paragraph above touches on it a little.

  23. OK, I went back to one of the old posts to get your old approach, so I might address that to save you tackling the other approach.

    R.M. Hare's "two-level" utilitarian theory... The intuitive level is our everyday, practical, working morality [rules of thumb]...The critical level, by contrast, is when we critically reflect upon our intuitive-level principles. If beset by a 'moral dilemma', in which we have a clash between our principles, we will need to reason about how to resolve it. Or, in moments of cool reflection, we might ask for justifications of our intuitive-level principles... This is where utilitarianism comes in.

    (and indeed that is a correct description of a generic human.... [well aside from the fact that it might not be utility they want to maximize])

    The thing that concerns me is that you follow that up with:

    "attempting to reason in a utilitarian fashion tends to have disastrous consequences, and fails miserably to maximize utility. Therefore, we ought not to reason in a utilitarian manner."

    Maybe you mean (using the points in that post) "in the heat of the moment (time), when you have a shortage of brain power, when you can't trust your own judgment or willpower, when the situation is generic/has unguessable side effects." (or the reverse - when the factors don’t combine in the right way to make analysis useful) But if the qualifier exists it seems to constantly be dropped?

    I guess the above can be used as a criterion for classifying a period of time as either "mindless action according to rules" time or "rethinking rules" time.

  24. Oh well, I'll get the rest of this out of the way...

    I also wonder: is there a limit to the number of rules a person can have in their head? A condition under which the "rule utilitarian" aspect of act utilitarianism becomes more and more negative?
    Do you examine rules to make sure they are better than the default? How much time does that take? Every rule?

    Surely the default must be considered to be normal behavior, which is not too bad.
    I would suggest that experimentally it is fairly close to rule-ish indirect utilitarianism (don't kill, don't steal, etc. – all those things most people don't do anyway),
    and vastly inferior to pretty direct indirect utilitarianism (all those hard calls people DON'T make that are often worth multiple lives, including high-level things where individuals following general rules will dig holes for themselves, like maybe dealing with infectious diseases and so forth).

    I personally believe there is not nearly enough hypocrisy in the world. It sounds funny, but too many people are too eager to treat very similar situations the same despite there being different consequences.
    They just apply the rule and never get round to reviewing it based on the situation.

    I also note that you originally seem to have pointed me to this debate based on a government-level discussion, so that's how I was approaching the initial few posts in the old thread, and apparently NOT how you were approaching it, which really set us up to disagree.

  25. Do you think that libertarians (or adherents of other philosophies) are in a sense indirect "utilitarians"?

    1) They apply a set of principles that they tend to assert is best in all situations (as well as being fundamentally right); any argument that this is not the case is countered by recourse to unknowable future side effects, lack of trust in the morality of decision-makers in the heat of the moment, and the other usual suspects.

    Most people would then accept that there may be exceptions where theories contradict (i.e. two people apparently have property rights over the same thing) or where the theory is outstandingly dysfunctional or unjust (e.g. I can't buy your children as slaves or claim ancestral ownership of your house).

  26. Some libertarians are utilitarians, whereas others are fundamentally deontologists (and so not 'utilitarian' in any sense).

    "Maybe you mean (using the points in that post) "in the heat of the moment (time), when you have a shortage of brain power, when you can't trust your own judgment or willpower, when the situation is generic/has unguessable side effects.""

    Yes, that's right. There are special occasions when one may employ utilitarian reasoning. But the point is that one shouldn't usually do so.

  27. So do you think the vast majority of normal people are utilitarians or deontologists?

  28. This is rather like being Cullen debating with a National Party MP about tax.

    Anyway, I wonder who in history is a "naive utilitarian" – I note that Bentham is sometimes called a naive utilitarian, but apparently the Benthamites supported all sorts of rules formed through contemplation; i.e. they seem to be indirect utilitarians, as are almost any coherent group of utilitarians that has ever caused trouble (one of the cheat essays on the net seems to make this sort of point too).

  29. I think I might have gotten to the root of it.

    OK, this is what I think you do:
    1) You say "in my experience consistent laws and equality are good", and a few others (maybe it comes from socialism).

    This is your basic proposition - then you proceed to look at utilitarianism.

    2) Now, while utilitarianism is quite straightforward in theory, when you want to do something about it things get more complex, because your decisions have moral values but you can't be sure what they are.

    So you create a template or a set of rules so you can start talking about results, by assuming the above sorts of things (effectively assuming they are utilitarian in all cases). I expect other utilitarians might choose slightly different ones, but yours would be pretty common.

    The problem is: who evaluated whether the initial principles were utilitarian? Once that becomes a fundamental question, it quickly becomes the MOST important thing for the utilitarian society to debate – but then it becomes a variable, and one gets stuck in, at best, a paradox.

    So for it to be prescriptive you need to make some assumptions, but doing this leaves you open to having all sorts of naive assumptions about the results of a utilitarian analysis. Some of the most extreme examples, of course, are things like "Islam is right" etc., while the least extreme are the non-committal "anything is possible" approaches that don't really provide advice.

