Wednesday, June 27, 2007

Conservative Progressivism

Looking at the 'big picture', should utilitarians care less about (present) welfare? John Broome argues [PDF] that the population effects of global warming will ultimately dwarf any direct suffering caused. More generally, impacts on present people dwindle to near insignificance when one considers the indefinitely many people that are yet to come. A dangerous thought. For example, does it mean that we should care less about temporary suffering, so long as an end is in sight?

Consider the vegetarian's arguments against factory farmed meat. The present system causes huge and unnecessary suffering to animals. But, we may think, it's only a matter of time until the industry is replaced by bioengineered meat (no animal required). If so, perhaps vegetarianism isn't the pressing moral issue Peter Singer says it is. Factory farming causes massive suffering today, but very little in the grand scheme of things.

The most pressing issue, on this way of looking at things, is to promote 'viral' or compounding goods (e.g. wealth and education) and the social/moral infrastructure that will support continued progress.

There's a sense in which this 'progressivism' is deeply conservative. We should be less concerned with making progress ourselves than with ensuring that progress can continue to be made in the future. Procedural liberalism trumps social justice. We should care more about improving the state of public debate than about pushing our particular agendas. (The two aren't necessarily exclusive, of course.)

Further, perhaps we should embrace some degree of perfectionism, and prioritize excellence over mere welfare (assuming that high attainment is more likely to benefit future generations -- think scientific breakthroughs).



  1. Maybe I'm missing something here. Why should we think that the fact that bioengineered meat might be available in the future has anything at all to do with factory farms and whether it is permissible for us to purchase their products now? Presumably, there would need to be an argument, or some research, showing that bioengineered meat is something we can only get, or only get in a reasonable amount of time, if we support factory farming (or don't get rid of it). But bioengineered meat research is not necessarily related to factory farming at all (and doesn't seem to be closely contingently related). A quick Google search for 'space meat' turns up the information that meat grown without animals is being developed by people at NASA. No mention of factory farming anywhere.

    This doesn't necessarily mean that there is no connection, but I don't think there's any reason to believe there is one -- much less an efficient one -- that is or will need to be tied to factory farming.

    Also, on the point about animal suffering being a small issue -- I think if this is to be believable, we are also going to have to say that all the suffering that has happened so far is pretty unimportant in the grand scheme (but maybe that's what you mean). It's hard to imagine anything that has caused as much suffering as the factory farm industry (think of the 52 billion animals killed every year, nearly all of whom suffer from birth to death, then add all the human disease, sickness, obesity, pollution, etc.). I can't think of anything that comes a close second. So it seems like considering the distant future in this way will trivialize most issues that (I would think) we think matter a lot.

  2. To clarify: my point was not that bioengineered meat depends in any way upon factory farming, but simply that it could replace it (so that factory farming is but a temporary evil).

    And yes, the broader suggestion is that all present (and past) suffering is small in the grand scheme of things -- except insofar as it has a causal impact on the future. The cycle of violence is especially troubling in this respect, for example.

  3. Richard, are you familiar with Nick Bostrom's position that most types of consequentialism imply we should focus on reducing "existential risks", i.e., possibilities that might destroy or permanently worsen human civilization? It seems to me that that's where this argument is going to end up.

  4. The present may seem relatively insignificant in comparison with the future, but that comparison is only relevant when there is a tradeoff between the two. And it does not mean that the present is insignificant in any absolute sense, as if it would be okay to ignore present suffering because it isn't worth the bother to deal with it. Even the life of a single individual matters a great deal. So your vegetarianism argument seems pretty weak, since you haven't made the tradeoff explicit and there is no obvious candidate for a more significant alternative.

    When there is a tradeoff with the future, in most cases your chances of having a substantial or lasting impact on the future are slim (and of doing so in just the way you intended, even slimmer). If you're going to be humbled by the vastness of the future, you should be doubly humbled; to some extent, the two kinds of humility will cancel out.

    I agree that avoiding annihilation and pursuing projects with cumulative long-term benefits are worthy goals, but we shouldn't neglect welfare, especially the welfare of those who are especially badly-off (sick, hungry, and the like). In addition to the obvious direct benefits, it's important that future advances ultimately serve to improve overall welfare, and I can't think of a better way to encourage that than by continuing to make current welfare a moral priority.

  5. > Factory farming causes massive suffering today, but very little in the grand scheme of things.

    That seems rather like saying, "I can kill you because there are lots of other people out there."

    But in general I agree -- except insofar as one needs to accept that future goods are more difficult to predict and understand. For example, my efforts to improve education might easily not produce the results I predict, whereas my efforts not to kill an animal now might well produce exactly the results I want. At a certain distance, the results of my actions become almost impossible to say anything about.


  6. Steven - thanks for the link! I hadn't read Bostrom's paper before, but I agree about the prime importance of "existential risks" -- which, as you say, seems entailed by this line of thought.

    Blar - points well taken. Your suggestion that "two kinds of humility will cancel out" is especially interesting. GNZ picks up on this also.

  7. I also recommend Astronomical Waste. ("Even with the most conservative estimate, assuming a biological implementation of all persons, the potential for one hundred trillion potential human beings is lost for every second of postponement of colonization of our supercluster.")

    While I agree that influencing the long-term future is very difficult, it's definitely not so difficult that it cancels out the difference in scale. If the long-term future is, say, 10^30 times as big as the present, it is not therefore 10^30 times as difficult to influence. It seems easy to imagine the world settling into some relatively stable attractor state depending on how current generations handle things -- one where we're permanently extinct, or one where we've found ways to reliably avoid extinction (at least more reliably than now).
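
    The scale argument in this comment can be made concrete with a rough expected-value calculation. (A toy sketch only: the specific numbers -- a future 10^30 times as large as the present, a one-in-a-million chance of influencing it -- are illustrative assumptions, not figures from the discussion above.)

    ```python
    # Toy expected-value comparison: acting on the present vs. trying to
    # influence a vastly larger long-term future. All numbers are
    # illustrative assumptions.

    present_value = 1.0   # value at stake in the present (normalized)
    p_present = 0.9       # chance our action affects the present as intended

    future_value = 1e30   # value at stake in the long-term future
    p_future = 1e-6       # chance our action affects the far future at all

    ev_present = p_present * present_value
    ev_future = p_future * future_value

    # Even a tiny probability of influence doesn't cancel out a 10^30
    # difference in scale: the future-directed action dominates in
    # expectation unless p_future falls below roughly 1/future_value.
    print(ev_present)
    print(ev_future)
    ```

    The point is just that difficulty of influence and size of stakes enter the expectation multiplicatively, so a modest penalty for uncertainty cannot offset an astronomical difference in scale.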

