Thursday, January 06, 2022

Longtermism Contra Schwitzgebel

In 'Against Longtermism', Eric Schwitzgebel writes: "I accept much of Ord's practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now."  He offers four objections, which are interesting and well worth considering, but I think ultimately unpersuasive.  Let's consider them in turn.

(1) There's no chance humanity will survive long-term:

All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk -- perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can't survive existential risks of 1/100 per century for a million years.
If this reasoning is correct, it's very unlikely that there will be a million-plus year future for humanity that is worth worrying about and sacrificing for.
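
(To make the arithmetic behind this worry explicit -- my illustration, not Schwitzgebel's -- suppose the extinction risk really were an independent 1 in 100 per century. Then the probability of surviving a million years, i.e. 10,000 consecutive centuries, would be roughly (1 - 0.01)^10,000 ≈ 2 × 10^-44: effectively zero. The dispute below is not over this arithmetic, but over whether the per-century risk must stay that high.)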

This seems excessively pessimistic.  Granted, there's certainly some risk that we will never acquire resilience against x-risk.  But it's hardly certain.  Two possible routes to resilience: (i) fragmentation, e.g. via interstellar diaspora, so that different pockets of humanity could be expected to escape any given threat; or (ii) universal surveillance and control, e.g. via a "friendly AI" with effectively god-like powers relative to humans, to prevent us from doing grave harm.

Maybe there are other possibilities.  At any rate, I think it's clear that we should not be too quick to dismiss the possibility of long-term survival for our species.  (And note that any non-trivial probability is enough to get the astronomical expected-value arguments off the ground.)
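
(To illustrate with purely made-up numbers: suppose the long-term future could contain on the order of 10^16 worthwhile lives. Then even a one-in-a-million probability of humanity surviving to realize that future yields an expected value of 10^-6 × 10^16 = 10^10 lives -- more than the entire present world population.)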

(2) "The future is hard to see."  This is certainly true, but doesn't undermine expected value reasoning.

Schwitzgebel writes:
It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to almost destroy humanity right now... that might postpone our ability to develop even more destructive technologies in the next century. It might also teach us a fearsome lesson about existential risk....
What we do know is that nuclear war would be terrible for us, for our children, and for our grandchildren. That's reason enough to avoid it. Tossing speculations about the million-year future into the decision-theoretic mix risks messing up that straightforward reasoning.

But that isn't really "reason enough to avoid it", because if Schwitzgebel were right that immediate nuclear war was the only way to save humanity, that would obviously change its moral valence.  It would be horribly immoral to let humanity go extinct just because saving it would be "terrible for us".  When interests conflict, you can't just ignore the overwhelming bulk of them for the sake of maintaining "straightforward reasoning".  (I'm sure confederate slaveowners regarded the abolition of slavery as "terrible for us, for our children, and for our grandchildren," but it was morally imperative all the same!)

Of course, I don't really think it's remotely credible that nuclear war has positive expected value in the way that Schwitzgebel speculates.  The hope that it "might" teach us a lesson seems far-fetched compared to the more obvious risks of permanently thwarting advanced civilization. (We're not even investing seriously in future pandemic prevention!  If we can't learn from the past two years, I'm not confident that a rebuilt civilization centuries or millennia hence would learn anything from tragedies in its distant history.  And again, there are serious risks that civilization would never fully rebuild.)

So I think longtermism remains practically significant for raising the moral stakes of existential risk reduction.  However important you think it is to avoid nuclear war, it's much more important once you take the long term into account (assuming you share my empirical beliefs about its expected harmfulness).  It also suggests that there's immense expected value to research that would allow us to form better-grounded beliefs about such matters.  We shouldn't just pre-emptively ignore them, as Schwitzgebel seemingly recommends.  If it's remotely possible that we might find a way to reliably shape the far-future trajectory in a positive direction, it's obviously important to find this out!

(3) "Third, it's reasonable to care much more about the near future than the distant future."  Schwitzgebel stresses that this concern can be relational in form (tied to particular individuals or societies and their descendants), which avoids the problems with pure time discounting.  That's an important point.  But I don't think any reasonable degree of partiality can be so extreme as to swamp the value of the long-term future.

To see why, consider a Parfitian "depletion" scenario, in which the harms of global warming are delayed by two centuries.  Imagine that everyone currently alive (and a couple of generations hence) could reap a bonanza by burning all the planet's fossil fuels, condemning all distant future people to difficult lives in a severely damaged world.  Or they could severely limit consumption while investing significantly in renewables, lowering quality of life over these two centuries while protecting the planet for all who come in the further future.  Should they choose depletion or preservation?  Obviously preservation, right?  It's clearly immoral to drastically discount future generations when the trade-offs are made this explicit.

(4) "Fourth, there's a risk that fantasizing about extremely remote consequences becomes an excuse to look past the needs and interests of the people living among us, here and now."

It's always possible that a moral view is self-effacing, but that's no objection to the truth of the view.  Empirically speaking, the people I know to be most concerned about the far future (i.e., effective altruists) are also the people who seem to do the most to help the global poor, factory-farmed animals, etc.  So this fear doesn't seem empirically well-grounded.

By contrast, I think there's a much more credible risk that defenders of conventional morality may use dismissive rhetoric about "grandiose fantasies" (etc.) to discourage other conventional thinkers from taking longtermism and existential risks as seriously as they ought, on the merits, to take them.  (I don't accuse Schwitzgebel, in particular, of this.  He grants that most people unduly neglect the importance of existential risk reduction.  But I do find that this kind of rhetoric is troublingly common amongst critics of longtermism, and I don't think it's warranted or helpful in any way.)

Of course, it's possible that enthusiasts might end up drawn towards bad bets if they exaggerate their likely efficacy in influencing the far future.  But that's just more reason to think that it's really important to investigate these questions carefully, and get the empirical estimates right.  It's not a reason to reject longtermism wholesale.

10 comments:

  1. Thanks so much for the thoughtful engagement and interesting critiques.

    On my argument 1: I'm inclined to agree that this argument probably can't carry the weight all by itself without supplementation by arguments 2, 3, and/or 4. A ~1% credence might be reasonable, in which case the utilities could just be divided by 100. They could still carry enormous weight, outweighing our near-term interests, unless we either have radical ignorance about what is good to do for the longterm (argument 2) or discount distant people (argument 3).

    On my argument 2: I don't know why you think that nuclear war now wouldn't be the best choice longterm. If you take seriously Ord's idea that we could acquire wisdom over the next few centuries that allows us permanently to reduce existential risk to near zero, then immediate nuclear war could buy us the time to develop that wisdom by slowing down more dangerous technical advances, as well as possibly help teach us that wisdom. Quite possibly, it's just the type of thing we would need! If you don't take seriously Ord's idea about acquiring wisdom, then that worsens the case for our longterm survival per my argument 1. I think argument 2 could work without supplementation by the other arguments if we accept radical ignorance about what actions now would be good for the longterm future. By radical ignorance, I mean this: a topic about which one is radically ignorant is one where one's ignorance is such that it's reasonable to give it no decision-theoretic weight, regardless of the value of the outcomes. This can be achieved by symmetry/indifference (e.g., whether the number of stars is even or odd), but exact symmetry can be difficult to argue for in a complex set of scenarios like the one we're discussing, and indifference principles are notoriously tricky. Nonetheless, there might still be space for radical ignorance on the topic at hand.

    On my argument 3: It's a matter of degree of discounting. Burning it all now for current luxuries with huge harms 200 years from now is too steep a discount in my view. Of course the flip side of absolutely no discounting is something like an extreme Singer/Mozi view, which is also pretty unintuitive and unattractive to most.

    On my argument 4: Fair enough. That's an empirical, sociological question. My guess is that the sweet spot here is to have a moderately long-term view of the sort I favor, where the bulk of the focus is on the present through the next few centuries. This will lead to a high prioritization of reducing climate risks, nuclear risks, AI risks, and such, but without the epistemic and sociological disadvantages of basing decisions on speculations about how our century is uniquely important to outcomes a million years from now.

    Replies
    1. Thanks for following up! It sounds like #2 might be our most significant point of disagreement.

      I'm open to Ord's suggestion that slowing down to acquire wisdom could be a good thing. But I'm very dubious about your suggestion that nuclear war is a good way to achieve this. For that wouldn't just slow us down, but would significantly -- and possibly permanently -- reduce our knowledge and capabilities. (Roving bands trying to survive nuclear winter hardly seem like ideal circumstances for fostering wisdom.)

      More generally, I don't see a strong case for "radical ignorance" about the far future. There's certainly plenty of uncertainty around the edges and in the details, but it seems to me that we could have a reasonable degree of confidence about at least some of the things that tend to either positively or negatively affect the trajectory of human development. E.g.:

      Positive trajectory effects:

      * robust liberal institutions that provide incentives for pro-social / co-operative behaviour, foster diversity of approaches and competition while protecting individual rights, encourage innovation, discourage violence, etc.
      * economic growth (especially for the global poor)
      * basic research / knowledge-promotion
      * moral progress/education

      Negative trajectory effects:

      * existential risks
      * climate change and other severe environmental degradation
      * etc.

      I grant that one can always speculate about surprising possibilities in which the seemingly-good things turn out to be bad, and vice versa. But those are really surprising possibilities; I don't think we should regard them as equiprobable with the possible futures in which the seemingly-good things really are (overall) good.

      To think otherwise is, it seems to me, much on a par with those who respond to Singer's pond case by arguing that the drowning kid might grow up to be the next Hitler. (Anything's possible, but that really isn't a reasonable baseline expectation!)

    2. Continuing the conversation, Richard, I really hope it's not as bad as the child-Hitler counterargument! Let me respond to some of the details of your reply:
      * Why think nuclear war would *permanently* reduce our knowledge and capabilities? "Permanently" is a long time. That just doesn't seem like a very likely effect.
      * Nuclear war could plausibly increase society's wisdom by motivating the next society, after rebuilding, to be much more acutely aware of, and cautious about, existential risks than we are, given their knowledge of history. To me this seems quite plausible.
      * Several longtermists appear to disagree with you about having robust liberal institutions. Bostrom, if I recall, sometimes suggests we might need some top-down authoritarianism, and I remember this coming up at the end of a recent talk by Sandra Faber too. I favor liberal institutions over the ideal of philosopher kings; but clearly liberal institutions bring risks, since it seems very unlikely that the winning liberal coalitions will always have longtermist priorities and extreme caution about existential risks.
      * A number of longtermists regard the unsustainability of economic growth as a problem; and economic growth is probably highly correlated with growing power to execute unwise plans (at the individual or state level) that carry existential risk, right?
      * Basic research -- the same problem as economic power: The more we know, the more we can easily do, the more opportunities there are for self-destruction.
      * Moral progress: This of course will only favor longtermism if longtermism is the morally correct view, so this begs the question.

      So three of your four positive things strike me as quite plausibly *negative* things for existential risk!

      On your negative things:
      * existential risks: Right, these are probably bad -- unless it's best for humanity to be replaced, or unless what's actually best is near destruction and the only way to achieve near destruction involves courting some existential risk
      * climate change: Here I'd say the same thing as about nuclear war. For example, apparently there are some people who think that global warming now would actually be in our best longterm interest to prevent an even more catastrophic ice age in some tens of thousands of years.

      So I'm not talking about surprising "whoa the kid is Hitler!" possibilities. I'm talking about unsurprising, plausible positive longterm consequences for negative short- to medium-term things, and vice versa.

    3. It's always possible for people to disagree. But I think it's most reasonable to regard the things I identified as positive (or negative) trajectory effects as such. They're the sorts of things that generally tend towards a brighter future, even if one can imagine special circumstances where, say, enabling human capacity (perhaps involving risky technologies) happens to backfire.

      Note that pro tanto positive trajectory is not the same thing as (merely) having short-term value. E.g. many have argued that improving factory-farmed animal welfare is orders of magnitude more cost-effective than helping humans, at least as far as the direct benefits are concerned. But there's more of an obvious tendency for global development to snowball into ever-greater future benefits than for animal welfare to have any such compounding effects.

      I'm very puzzled by your response that moral progress would only tend to lead to better futures if longtermism were true. Many people believe the "convergence hypothesis" that long-term and short/medium-term human welfare coincide. This convergence hypothesis could be true even if longtermism isn't the morally correct view. It would then still be true that moral progress tends to help the long-term future. (It just wouldn't be what mattered about moral progress.)

      "there are some people who think that global warming now would actually be in our best longterm interest to prevent an even more catastrophic ice age in some tens of thousands of years."

      tbh, that strikes me as 100% kid-Hitler territory. (Quick reason: if we still exist then, given continued technological development, we will be so much better placed to manipulate the climate however is needed thousands of years from now.) That's just the worst sort of baseless speculation, the very sort of thing I took you to be warning against in other sections of your post.

      I think we should have a strong epistemic presumption in favour of regarding pro-tanto positive trajectory things as long-term positive, and pro-tanto negative trajectory things as long-term negatives. It's possible to override these presumptions, but it requires really robust and compelling reasoning/evidence. Speculating about a conceivable scenario in which the pro-tanto good turns out to be all-things-considered bad is not sufficient to render such a verdict "plausible".

    4. Thanks, Richard! I think this might be near the core of our disagreement. If we accept the convergence hypothesis as a default, then your conclusions about nuclear war, etc., would be correct by default, and your view about moral progress would be correct. However, I'm not seeing much reason to accept the convergence hypothesis. In the individual human case, what's in your short-term interest *sometimes* converges with what's in your long-term interest (e.g., not starving to death), but other times the short-term and long-term conflict (e.g., drinking that third beer, spending the retirement money). In the individual human case, I don't see much motivation for a convergence principle. Similarly in the case of humanity as a whole.

    5. The basic case for convergence is just that helping humanity in the "short term" involves saving and improving lives, and saving and improving lives (i.e. global health & development) increases humanity's general, all-purpose capacities (including wealth and problem-solving capacities).

      Now, I would've thought *increasing humanity's general capacities* was clearly prima facie positive trajectory, in just the same way that increasing an individual's general capacities (wealth, education, etc.) is prima facie positive trajectory. Again, you can speculate about special circumstances that would serve as possible defeaters (or trajectory-flippers) here, such as the risk of misusing our increased capacities. But again, I really don't see how you can deny that increasing capacities is good by default. For increased capacities to be a bad thing would (while possible) require special explanation, whereas no such special explanation is required to make sense of how increased capacities could be good.

      (Notably, your examples of individual harm all involve reducing wealth and capacities.)

    6. That should read: "...in just the same way that increasing an individual's general capacities (wealth, education, etc.) is prima facie positive trajectory for that individual."

    7. Here’s the case for why increasing capacities is bad for long-term risk: increasing capacities increases the power to abuse those capacities in ways that threaten humanity’s self-destruction. The 21st century vs the 17th century might be an example. It’s because of our increased technological capacities that there’s plausibly more existential risk in the 21st century than in the 17th. Do you disagree?

    8. Certain technological capacities introduce new sources of existential risk, I agree. That's precisely the sort of "special explanation" I had in mind that could weigh against the general positive-trajectory effects in specific circumstances. But I don't think it undermines my claim that increased capacities are positive by default, which I take to defeat the total cluelessness claim. We just need to assess the specific risks associated with specific advances. One might well conclude that certain lines of research are best slowed or avoided for the time being. But I certainly don't think there's any basis here for thinking that (say) global health & development in general is likely to be counterproductive.

  2. PS: I'm still waffling about argument 1. I could imagine withdrawing my somewhat concessive response about it above.

