Wednesday, November 30, 2005

The Future of Consequentialism

One objection to consequentialism is that there's no way we could even begin to guess at the full (long-term) consequences of our actions, millennia into the future. As such, we would seem to be 'morally blind', unable to attain moral knowledge. Now, I need to take this objection especially seriously, because - as an indirect utilitarian - I tend to think that the standard "counterexamples" to consequentialism (e.g. the organ-harvesting doctor) fail to take the big picture into account, and thus mistake what consequentialism would actually recommend. But then, it must be asked, when we do consider the big picture, what does consequentialism recommend?

The difficulty is exacerbated by the butterfly effect: the smallest changes in initial conditions can have momentous consequences, as the differences ripple outward, each sparking off a cascade of new differences itself, which accumulate and become ever more significant as time progresses. In a way, it's both gratifying and awe-inspiring: every decision we make, everything we do, has a profound impact on the course of future history. (This is the more ego-friendly way of thinking about determinism.) But what chance do we have of foreseeing this future?

Actually, I think we can generally know pretty well what sorts of actions are likely to produce a better future. These judgments are fallible, of course, but that's true of everything. It's possible that some particular repugnant action (murder or sadistic torture, say) will just happen to ultimately prove beneficial. But the vast majority of the time that surely won't be the case. So, as indirect utilitarians, we play by the numbers and adopt that "practical morality" or strategy that will give us the best results that are really possible for us.

I think it's interesting to ask what sort of practical morality would be recommended by this long-term view. The most morally salient action-types will be those which are in some sense self-propagating, setting up a "cycle" that will continue to echo through the generations. Bearing this in mind, child abuse is perhaps one of the most evil things in the world -- not just because of the damage done to the individual victim, but because it risks corrupting the victim's psychology in such a way as to perpetuate the "cycle of violence", as the victim grows up to become an offender in turn. Any one individual's suffering is not so significant in the grand scheme of things, however. So the long-term view might lead us to conclude that non-viral actions (might murder be an example?), which don't tend to self-replicate in this fashion, are not so serious. That is, we might be forced to conclude that it is better for an abuser to kill his victims rather than release them, if the former evil would be less "contagious". This is a surprising result. I'm curious to hear what others think of it.

The very best actions will also be those which tend to propagate themselves. Education stands out, for me, as a very significant issue here. But that's a topic for a future post. More generally, virtuous character seems to be pleasantly contagious. There are some people I've known whose sheer goodness makes me too want to be a better person. Random acts of kindness can inspire the recipient to "pay it forward" (there was a neat movie built around this idea), again creating a ripple effect which builds towards a significantly better future. Compassion, generosity, sincerity, and passion, all strike me as viral virtues. Each individual instance tends to do some minor good, and also tends to influence others to replicate it. Similarly, but in a negative fashion, for the vices: selfishness, deceitfulness, hatred, and perhaps apathy.

This is all very speculative, admittedly. (A philosopher's attempts at armchair sociology are probably not to be trusted.) So, if my meta-theory is correct, then our moral theorising will need to become more empirically informed about what I have called "viral" or "contagious" action-types. The future of consequentialism thus depends upon progress in developmental psychology, sociology, and the social sciences generally, if the theory is to survive skeptical concerns about the future.


  1. This is a fairly important point. As with many things, I like to express it in programming jargon: when formulating ethical algorithms, we always have to keep in mind that they're going to be applied recursively. The output of the organ-harvesting function might look okay after one iteration, but after several it obviously becomes degenerate.

    Keeping with the analogy, this also explains why ethics is hard: with any moderately complex recursive algorithm, it gets harder and harder to guess what the output will be as you progress through i=1, i=2 ... i=n without actually running the program (think about fractals). Humans can predict first-order consequences very well, second-order consequences not so well, and third-order consequences at a rate no better than chance.
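
The divergence described above can be sketched with a toy recursion. The logistic map below is an illustrative stand-in (not anything from the original discussion) for any "moderately complex recursive algorithm": two starting states differing by one part in a million produce nearly identical first-order outputs, yet bear no predictable relation after many iterations.

```python
# Iterate a simple chaotic recursion from two nearly identical
# starting points and watch the outputs diverge with iteration count.

def step(x, r=3.9):
    # one iteration of the logistic map: x -> r * x * (1 - x)
    return r * x * (1 - x)

def run(x, n):
    # apply the recursion n times
    for _ in range(n):
        x = step(x)
    return x

a, b = 0.200000, 0.200001  # inputs differing by one part in a million
for n in (1, 2, 50):
    print(n, abs(run(a, n) - run(b, n)))
```

After one iteration the outputs are still almost indistinguishable; by the fiftieth, the tiny initial difference has been amplified beyond any hope of hand prediction, which is the fractal point made above.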

  2. > That is, we might be forced to conclude that it is better for an abuser to kill his victims rather than release them, if the former evil would be less "contagious". This is a surprising result.

    This relates to the "to make people happier on average is to kill everyone except the happiest person" approach to utilitarianism (in this case, "except for the least damaged person").
    If you reject that, your formula already compensates to prevent the conclusion above.

    That doesn't mean that in some cases you wouldn't want to kill a victim because they are too badly damaged.

    (the alternatives being 1) imprisoning them, 2) crippling them, or 3) waiting for them to cause the death of someone - something you are extremely confident they will do)

    > our moral theorising will need to become more empirically informed about what I have called "viral" or "contagious" action-types.

    There is probably situation-dependency here - most importantly, our actions (e.g. laws or policy changes) will affect which behaviours have what sort of viral effects.

  3. It seems to me that you don't really say anything to answer the problem that you begin with.

    As I understand it, the problem is that it appears very difficult to accurately predict consequences far into the future. You then point out that this problem is even more difficult than it first seems once we recognize the 'butterfly effect.'

    But your answer to that seems to just amount to: but actually our judgments about future consequences are pretty accurate, so it's not really a problem. You admit that sometimes actions will have surprising consequences, but in order to have any reliable basis on which to "play the numbers," you have to assume that in general we're pretty good at predicting long-term consequences.

    But I thought the problem was that we had no obvious basis for taking such judgments to be generally reliable. What little empirical evidence we have available is likely to count against, not for, such reliability. Does our historical record provide us with examples of people predicting 200 years in advance the consequences of their actions? 1000 years? 2000? Even if it did, would that give us good reason to suppose such reliability extended out to 100,000 years? A million? A billion?

  4. 1) Derek Bowman: surely there is room here for a kind of "probabilistic consequentialism". If we cannot predict the long-term outcome of an action, it is at least something to know the short-term consequences; and an action that has good short-term consequences and is otherwise indeterminate is surely better than an action with bad short-term consequences and indeterminate long-term consequences. To borrow from maths: if we want the sum of two numbers, one known and one randomly generated, to be as high as possible, it is better to make our known number as high as possible.

    2) Richard: Isn't this an objection to any moral theory that takes the future into account? So, if it undermines consequentialism, would it not undermine just about any sensible moral theory we can come up with?

  5. Mike: In your mathematical example, you tacitly assume that the value of the unknown number is independent of your choice for the known number. But in calculating the consequences of our actions that condition is unavailable.

    Still, there are some accounts of choice under uncertainty that would allow us to count the expected value of the unknown consequences as a wash, and just leave the known ones.

    But notice that we would still not be answering the question 'what happens when we consider the big picture,' since your answer turns on the fact that we have no means of rationally considering most of the big picture. Instead, it directs us to abandon that question and turn instead to, 'What happens when we consider the slightly bigger picture?'

  6. > Mike: In your mathematical example, you tacitly assume that the value of the unknown number is independent of your choice for the known number. But in calculating the consequences of our actions that condition is unavailable.

    Not true. I believe the correct assumption is that the probability distribution of the unknown number is independent of our choice for the known number, a condition that is available. To elucidate: say I were to tell you to pick a number between one and six, then ask you to roll a die, and add the two numbers together, with the goal of getting a high total. The number you choose will have a tiny effect (butterfly effect) on how you roll the die, but not in any predictable way. As long as no new information is introduced, the most rational choice is to pick six for the known number.
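
The commenter's die-rolling example can be verified directly. A minimal sketch, using only the setup described above:

```python
from fractions import Fraction

# Expected total = chosen number + E[die]. The die's distribution is
# independent of the choice, so E[die] = 3.5 regardless of what we pick.
def expected_total(choice):
    die_mean = Fraction(sum(range(1, 7)), 6)  # (1+2+...+6)/6 = 7/2
    return choice + die_mean

best = max(range(1, 7), key=expected_total)
print(best, float(expected_total(best)))  # prints: 6 9.5
```

As claimed, choosing six maximises the expected total even though the die's contribution remains unpredictable.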

  7. "It seems to me that you don't really say anything to answer the problem that you begin with."

    Oops. I was meaning to suggest something close to Mike's response, i.e. we ignore the randomness, and focus on local processes that can have fairly predictable local effects, even in the long term. That's where the "viral" actions come in. For any particular act of kindness, we can't be sure what its long term consequences will be. But what we can know (let's suppose) is that acts of kindness are generally contagious, and that each one tends to have good short-term consequences. Combine those two factors and you end up with grounds for thinking that acts of kindness will generally have very good long-term consequences.

  8. Mike is right - there is no need to give up on logic just because we don't know everything.

  9. I reject the label "consequentialist" while embracing the label "pragmatist" precisely because of this sort of question, which I see as boiling down to "What do we mean by consequences?" Defenders and critics of the consequentialist view are frequently talking about events set against an exceedingly shallow temporal horizon. (And a rather narrow view of the social world, too, which I believe is related.)

    What is the timespan of doing something? It appears to me that I am able to do many things over again, and this suggests that doing has at least one aspect that is momentary, and another that is enduring.

    Hypothesis: Uncertainty about the consequences that follow from an action increases exponentially as distance from its primary iteration increases. This is similar to Matt's idea, no? But in practice, isn't this the opposite of what happens? The more we rehearse an action, the more certain we are about its results. We incorporate the most immediate consequences of a familiar action into our practical understanding of how things work, and yet the more familiar an action becomes, the less we are aware of doing it. So it becomes difficult to imagine any epistemological sort of certainty (or uncertainty) pertaining to such habitual actions, of which the agent is only peripherally aware, if at all, unless we are willing to examine the background against which action takes shape.

    Is the distinction between the habitual mode and the experimental mode of acting fundamental? It may simply be a question of what's at the forefront of attention, that we operate in both modes all the time. If this is the case, does that entail the possibility that the nature of consequences (and their actions) is changing over time, that they may be evolving or devolving, or otherwise historically contingent? Well, then we wouldn't have nailed down the eidetic form of consequences. But if we look at this possibility, we may be on the right track, towards a genetic (rather than static) description of consequences.

    I would identify a reflexive understanding of the do it again possibility inherent in action as a sign of ethically-guided conduct, but it cannot be the defining mark, as certain sociopathic cases would show. Recycling would be a good mundane example, Serial Mom the pathological test case.

  10. Consequentialism is clearly an empty theory if it has no yardstick for measuring consequences. Which basically brings you back to virtue ethics, utilitarianism, or divine command.

    However, perhaps the dilemma could be resolved with mathematics.

    Let us suppose that dQ is the change in quality in the state of affairs brought about by some action. So, if I kill someone, then I bring about some unhappiness in the circumstances of the immediate family. dQ = (change in happiness) / (causal strength of action). In this case, the causal strength is 1, and the change in happiness is (let's say) -1, therefore dQ(murder) is -1.

    Let's call this the worst possible outcome.

    Now, let's say that instead I anger someone, and due to their anger they murder someone. In this case, the change in happiness is still -1, but the causal strength is .5. dQ(causing anger) = -1 / .5 = -2.

    So, we have something like golf. The "perfect score" is 1, the "worst score" is -1, and scores of larger magnitude indicate being more removed from causing the outcome.

    That way, the chaotic nature of history will result in the greatest responsibility going to those who were "closest" to causing the outcome, with much less responsibility going to those who are further away.

    If you prefer numbers to get larger with the degree of responsibility, just use 1/dQ: 1/-1 = -1, but 1/-2 = -.5.

    This also has the advantage of being somewhat normalised.
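
The scheme above can be written out in a few lines, using only the commenter's own numbers (a sketch of the proposal, not a worked-out theory):

```python
# dQ = (change in happiness) / (causal strength of action).
# Scores of larger magnitude indicate being more causally removed
# from the outcome; 1/dQ inverts this so magnitude tracks responsibility.

def dQ(delta_happiness, causal_strength):
    return delta_happiness / causal_strength

murder = dQ(-1, 1.0)  # direct cause: the "worst score"
anger = dQ(-1, 0.5)   # indirect cause: more removed
print(murder, anger)          # prints: -1.0 -2.0
print(1 / murder, 1 / anger)  # inverted scores: -1.0 -0.5
```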



    Psychologist Jonathan Haidt has a theory that sounds a lot like your speculation, Richard. He doesn't have much empirical research to back it up yet, but here's what he says about "elevation", which he defines as "a warm, uplifting feeling that people experience when they see unexpected acts of human goodness, kindness, and compassion":

    > Elevation appears to fit well with Fredrickson's (2000) broaden-and-build model. For an observer, seeing others do unselfish good deeds creates no threat requiring immediate or specific action. Rather, it signals the presence of an altruist, a good candidate for cooperation and affiliation (Frank, 1987). Witnessing good deeds changes the thought–action repertoire, triggering love, admiration, and affection for the altruist and making affiliative behavior more likely. Fredrickson describes the benefits to the individual of experiencing positive emotions, and elevation may indeed confer such individual benefits (e.g., the energy and playfulness of the woman in the above example). However, elevation is particularly interesting because of its power to spread, thereby potentially improving entire communities. If elevation increases the likelihood that a witness to good deeds will soon become a doer of good deeds, then elevation sets up the possibility for the same sort of "upward spiral" for a group that Fredrickson (2000) describes for the individual. If frequent bad deeds trigger social disgust, cynicism, and hostility toward one's peers, then frequent good deeds may have a type of social undoing effect, raising the level of compassion, love, and harmony in an entire society. Efforts to promote and publicize altruism may therefore have widespread and cost-effective results.

    See the link under my name for the rest of the article.

  12. One problem that I've found plagues consequentialist theories is when to cut off the calculation. That is, at what time do the consequences matter? Clearly we recognize that sacrificing short-term consequences for long-term consequences is good. But at what point do we cut off the calculation? It seems that's a much harder issue than even whether it is in theory measurable.

  13. You can have estimations of the value of infinite and indeterminate things - for example, we do this when we estimate the value of an investment.
    If I buy shares in a mutual fund, I know there is an undetermined (but logically estimable) risk and a theoretically infinite and not fully predictable series of returns (dividends or whatever) into the future.

    I suggest your cut-off is in terms of the amount of analysis you put into it. You keep gathering information, in the best way you can, about what the effects of the decision will be, and you cease actively looking (though remaining open to passive information) when you no longer get a return on investment for your time (regardless of whether you have looked 1 hour into the future or 1 million years). This is a bit like the Buridan's Ass problem.
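
The investment analogy can be made concrete with discounting, the standard way of assigning a finite estimated value to an infinite stream of returns. The payment and rate below are illustrative assumptions, not figures from the thread:

```python
# Present value of an expected payment c per period, discounted at
# rate r: PV = c/(1+r) + c/(1+r)^2 + ... = c/r in the infinite limit.

def present_value(c, r, horizon=None):
    if horizon is None:
        return c / r  # closed form for the infinite geometric series
    return sum(c / (1 + r) ** t for t in range(1, horizon + 1))

print(present_value(100, 0.05))              # infinite horizon: ~2000
print(present_value(100, 0.05, horizon=50))  # a 50-period cut-off recovers most of that
```

This mirrors the commenter's stopping rule: each additional period analysed contributes less to the estimate, so past some horizon the return on further analysis is negligible.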

