Thursday, February 18, 2021

The Most Important Thing in the World

Sometimes we may dismiss a problem as "not the most important thing in the world".  Which raises the (surprisingly neglected!) question: what is, literally, the most important thing in the world?

In 'The Case for Strong Long-Termism', Greaves & MacAskill argue that the correct answer is: improving the long-run future. I'll try to summarize some of the core considerations here (with any flaws in the exposition being my own), but interested readers should of course check out the full paper for all the details.

First note that the near term -- the next hundred years, say -- is a vanishingly tiny proportion of all time, and contains an even tinier proportion of all the valuable entities (e.g. sentient lives) that could potentially exist (if we don't wipe ourselves out first).  It would seem to follow, on almost any plausible population axiology (whether 'total', 'average', or anything in-between -- so long as it does not intrinsically discount the value of future lives relative to current ones), that the overall value of the world will be determined almost entirely by the (quantity and) quality of far-future lives.  All of us existing today are, by comparison, a single speck of dust in the desert.  We matter immensely, of course, but no more than any other speck, and there are an awful lot more of them, in aggregate, than there are of us.
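
For a rough sense of scale (the figure below is an order-of-magnitude assumption for illustration, not something from the paper): the Earth is expected to remain habitable for something on the order of hundreds of millions of years, and on that assumption the next century is a vanishingly small slice of the time potentially available to our successors.

```python
# Rough scale check (assumed, illustrative figures only): what fraction of
# the time plausibly still available does the next hundred years represent?
near_term_years = 100
remaining_habitable_years = 500_000_000  # order-of-magnitude assumption

fraction = near_term_years / remaining_habitable_years
print(f"The next century is roughly {fraction:.0e} of that span")  # ~2e-07
```

And that ignores the possibility of life continuing beyond Earth, which would shrink the near-term fraction further still.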

So if there's anything we can feasibly do to improve the trajectory of the long-run future, in expectation, the value of doing so seems likely to swamp every other possible consideration.

It's worth emphasizing that this implication is robust across a wide range of moral views.  For example, you don't have to be a consequentialist to think that consequences are among what matters.  So any kind of stakes-sensitive, non-absolutist form of deontology (which is surely the only plausible kind) is going to be similarly committed to allowing these astronomically great stakes to override the non-consequentialist elements of the theory, whatever they may be.

G&M similarly argue that their case does not depend upon traditional expected value being the correct account of rational choice under uncertainty.  Risk-aversion, if understood in "prioritarian" fashion as directing us to take special care to avoid the worst outcomes, may if anything increase the importance of protecting the far future.

One could escape this conclusion by adopting a narrow person-affecting view which embraces the "non-identity problem" and holds that contingent future people simply don't matter.  But I take that to be an unacceptable view.  (It implies, for example, that we should maximally exploit the environment in order to benefit present generations, no matter how great the harm to future generations.)

I think about the only way to avoid long-termist swamping is if we can rationally discount highly-speculative reasoning. (This may be related to the ambiguity aversion discussed in section 4.5 of the paper? I'm not sure.)  There's obviously immense uncertainty about the long-run effects of our actions, so if we ought to overweight robust credences, and radically discount extremely tentative or weakly-based ones (perhaps in the way that Holden Karnofsky argues for here), then we may effectively treat the far-future as beyond our reach -- and focus instead on short-term "sure things" like anti-malaria interventions and cash grants.

I'm highly uncertain about whether we should discount speculative reasoning in this way (would that make the view self-undermining, as it does not itself seem to be robustly supported?), but I think it's most likely that we shouldn't -- or, at least, not too much.  So I'm ultimately pretty convinced by Greaves & MacAskill's case for seeing the long-run future as our top priority.  At the very least, it should certainly get a lot more attention and resources than it currently does -- though not, I think, the full 100% of our moral resources that would be warranted if we were certain that their view was correct.

But is the far-future tractable?  The long-termist conditional isn't of much practical significance if it turns out that we can't feasibly improve the long-run trajectory (in expectation).  So: can we?

As an undergrad, I suggested that self-propagating actions could have the greatest long-term impact. Breaking cycles of abuse, and generating cycles of virtue, might then be immensely important.  But I now note that their long-term persistence, like that of a virus, depends on the relevant actions having a "reproductive rate" of at least 1, which seems dubious.
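
To make the "reproductive rate" point concrete, here's a rough sketch (the rates below are hypothetical parameters, not empirical estimates): if each cycle-breaking act inspires r further such acts on average, the expected number of downstream acts is the geometric series r + r^2 + ..., which stays bounded whenever r < 1 and compounds indefinitely only when r reaches 1 or more.

```python
# Hypothetical sketch: expected number of downstream acts if each act of
# "breaking a cycle" inspires r further such acts on average.
def expected_downstream_acts(r: float, generations: int = 1000) -> float:
    # Finite-horizon sum of the geometric series r + r^2 + ... + r^generations.
    return sum(r ** g for g in range(1, generations + 1))

for r in (0.5, 0.9, 0.99, 1.0, 1.1):
    print(f"r = {r}: ~{expected_downstream_acts(r):.3g} downstream acts")
```

Below the threshold, the effect peters out after a bounded multiple of the original act (r/(1-r) in the limit); only at or above it does the influence persist into the far future in the way long-termism requires.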

More plausibly, it might be that we can best improve the future by improving the general capacities of humanity -- e.g. through deworming, developmental aid, and other interventions that help to ensure that as many people as possible develop their problem-solving capacities to their full potential. (Education would seem ideal in principle, but I gather that it's nearly impossible in practice to actually find effective educational interventions, or at least ones that scale well, other than indirect stuff like deworming.  But it seems a crucial area for further research, at any rate.)  I think this is a crucial reason to prefer funding Global Health & Development over Animal Welfare, for example, despite expectations that the latter is more cost-effective at directly reducing suffering.

But it would be somewhat surprising if targeted efforts to improve the far future couldn't do better than these other cases of "short-term"-focused interventions that may have some incidental long-term benefits.  And G&M argue pretty convincingly, I think, that we should regard at least some targeted efforts as sufficiently tractable. A key concept they invoke is that of an attractor state, or a state which, once entered, the world will tend to remain in for a long time.

The most obvious attractor state is extinction, and indeed it seems pretty straightforward that we should, as a species, invest significant resources to protect against extinction risks (e.g. asteroids, unfriendly AI, extreme climate change, nuclear war, pandemics, etc.).  But there are also less extreme attractor states -- just consider all the path-dependence one finds in politics and institutions generally.  Decisions about the constitutional rules of a new institution can shape its subsequent behaviour for as long as the institution in question survives, and tend to be much more difficult to change after the fact. G&M thus suggest that it's extremely important to get it right if the world develops "strong international governance organisations, or even a world government."  Climate change and AI are other issues that could plausibly shape the long-run future (even without causing our extinction).

Finally, G&M mention the "meta-option" of funding research into longtermist intervention prospects, which -- given our immense uncertainty around the first-order issues -- seems likely to be immensely valuable.  Given that the most important thing in the world, in principle, is to improve the trajectory of the long-run future, it seems pretty commonsensical to conclude that, for the time being, the most important thing we can do in practice is to secure more knowledge about how to affect this trajectory.

If you agree with this conclusion, a good place to start could be to (i) develop and/or share these arguments, and (ii) consider donating to the Long-Term Future EA Fund (or perhaps, more indirectly, the EA Infrastructure Fund).

Or, if you disagree, please leave a comment explaining your reasoning.  What do you think is the most important thing in the world, and why?

[UPDATE: I forgot to mention: I think this is one area where it matters a lot that one be a robust realist. If one thought that ethics was mere emoting, or a conventional construction out of people's actual attitudes and dispositions, there would seem no motivation to take these radical -- and potentially alienating -- arguments seriously.]

12 comments:

  1. I think (as a matter of classification) it is better to reserve 'deontology' for positions in which consequences, however astronomically great, can never override (certain fundamental) non-consequentialist considerations. But I think even the more popular versions of these would tend to see those considerations as forming a basic framework, with consequence-based considerations 'filling in' further details, and I think your argument probably could work for those, as well.

    I don't know how tenable it would be, but it does seem to me that there's another possible position on the table, one that I think many people might find plausible. In every inquiry there are times when you can optimize and times when you have very little sense of where to go and so just explore, and one could have the position that this latter is where we are with respect to the most important thing in the world. That is, we just don't know yet. On such a position the proper course of action would be instead to try lots and lots of things. (This would differ from the funding meta-option in that it's not a meta-option and would be unfocused.) Hospitals in the modern sense developed because some knights recognized a problem -- thieves were preying on sick pilgrims in foreign countries, who had nowhere to turn -- and set out to solve it, which led to new problems (e.g., how to house a bunch of sick people where they could be protected and how to get them on their feet again), which people then set out to solve, etc. Larrey invented the triage approach and the first rudiments of the ambulance to handle the single but extreme case of medical care for a fast-moving artillery-based army. In both cases, a genuinely good idea that started small turned out to have benefits on a scale no one could have imagined. So (one could argue, and again I'm not sure how one would go about developing it) what one really needs are a lot of small-scale interventions, some of which might snowball into something far beyond what a human being could design.

    1. Hi Brandon, yeah, it does seem plausible that broad experimentation may be a better response than narrow attempts at optimization in the face of immense uncertainty. Though, depending on what exactly you mean by "unfocused", I might disagree with that. I would expect that some deliberate (yet broad) thinking about optimization could help to steer us towards more promising avenues than a completely haphazard approach...

  2. It seems like you mention one important thing but then gloss over it real quick (perhaps G&M talk about it in more detail) - why shouldn't we discount the value of future lives, however slightly, compared to present lives? Within your own life, you're discounting everything because without having some time preference towards the present you wouldn't ever do anything. So it's only natural that we would discount the future, including future lives, from the perspective of "what to do today". I don't know what the discount rate is, or whether we can even formulate THE one social discount rate. And I don't know how much of a problem it is to your and G&M's view - or at what levels of the discount rate it becomes a problem. But it should at least be acknowledged.

    1. Hi Robert, can you explain why it is that "without having some time preference towards the present you wouldn't ever do anything"? Never doing anything doesn't sound like it would serve my future interests very well, so I don't see how your claim is compatible with instrumental rationality.

      More generally: I glossed over pure temporal discounting quickly because it seems like a non-option to me: why should future people matter less, just because they're further away from me in time? It would be obviously unreasonable to apply a "spatial discount rate" and count geographically distant people as intrinsically less important. But the temporal version doesn't seem any better justified, in principle. Or so it seems to me. But I'm curious to hear more, if you disagree.

      (I should clarify that I'm open to some moderate partiality towards antecedently actual people, in contrast to contingent future people, as a unified class. But that's very different from a compounding temporal discount rate, which ends up counting very temporally distant people for nearly zero.)
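
      To make the "nearly zero" vivid, here's a quick back-of-the-envelope sketch (the 1% annual rate is an assumption chosen purely for illustration):

      ```python
      # Illustrative only: how a compounding annual discount rate weights
      # lives at various temporal distances (an assumed rate of 1%/year).
      discount_rate = 0.01

      for years in (100, 1_000, 10_000):
          weight = (1 - discount_rate) ** years
          print(f"A life {years:>6} years away counts for {weight:.1e} of a present life")
      ```

      At 1% per year, a life a century away already counts for only about a third of a present life, and a life ten thousand years away counts for effectively nothing at all.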

    2. Ha, sorry, completely forgot about it!

      Positive time preference - I prefer to receive 10 USD today to receiving 10 USD tomorrow, ceteris paribus.
      Time preference of 0 - I'm indifferent between 10 USD today and 10 USD tomorrow, ceteris paribus. It's a coin toss.
      Negative time preference - I prefer 10 USD tomorrow to 10 USD today, but tomorrow I prefer to get 10 USD on the next day and so on. So I never consume anything and die.

      This means that positive (or, at the very least, non-negative) time preference is actually an indispensable component of instrumental rationality.

      But now that I think about it, maybe we're talking about two different things. Time preference in the sense of economics is probably different from the time discounting you're talking about, since the latter is done from the perspective of the impartial observer.

      I can imagine that for an impartial observer there is no difference between me today and somebody 100 000 years from now. But if we have people deciding today on how to use resources, expecting them not to apply any discounting is unrealistic. Long-termism seems to put a huge burden on people that they are not wired to shoulder. But of course that alone doesn't mean it's not correct, and ultimately an argument of this sort seems only to question the practicality of long-termism.

      Something is still bothering me about it, but I can't figure out what! ;)

    3. "So I never consume anything and die."

      Um, the agent needs to take this risk into account before perpetually delaying consumption. (Indeed, even an immortal agent would need to take into account the risk that they would never get around to consuming anything.)

      And, of course, preferring to consume in the future does not entail doing nothing (let alone "you wouldn't ever do anything"). Instrumental rationality would have you now do whatever would most benefit your future self (e.g. work).

      So it's not true that any particular kind of time preference is "actually an indispensable component of instrumental rationality."

      "Long-termism seems to put a huge burden on people that they are not wired to shoulder. But of course that alone doesn't mean it's not correct..."

      Right, I don't see that as a (truth-relevant) objection.

    4. Maybe we're just having a semantic disagreement since I absolutely agree that the agent needs to take such risks into account, but I'd just say this is already included in time preference.

  3. Great post! I'm glad to see longtermism getting more discussion and attention - this post was linked to from DailyNous's heaps of links.

    Within longtermist cause areas, a big question is how to apportion efforts between ensuring survival (i.e. reducing existential risk), and trajectory changes conditional on survival, such as "decisions about the constitutional rules of a new institution", or trying to affect civilizational values. As a very rough approximation, the existential risk focus is represented by Toby Ord's "The Precipice" while the trajectory change focus is represented by Will MacAskill (some published remarks and a forthcoming book).

    I have yet to be convinced that interventions to affect trajectories are tractable. Compared to x-risk reduction they have extremely high variability in their impacts. For example, in trying to affect the constitution of a world government, it seems very easy to make changes that would have unintended negative effects - especially if you are considering (as longtermism does) very long-term effects. As a simple example, perhaps you advocate for a world government to have strong surveillance powers. But this leads to a locked-in dystopian totalitarian state. Or perhaps you advocate against powers of surveillance. But it turns out that surveillance would have prevented the development of a weapon, a weapon that leads to a war and the rise of a much worse world government. These are simplistic examples / just-so stories, but they strike me as somewhat representative of our epistemic position in trajectory change. For example, if I could go back in time and try to improve the long-term trajectory of civilization by tinkering with the early Catholic Church (a long-lasting institution that has had enormous effects), I think I would be quite clueless about what I should change. I think this cluelessness would persist even with enormous amounts of research, due to the 'underpowered' and unpredictable nature of history.

    This admittedly vague distrust of trajectory change interventions is why I currently favor efforts to mitigate extinction risks. Actions in this area have much more straightforward mechanisms: you identify some threat, hopefully one with a relatively well-understood mechanism (the greenhouse effect, say), and try to mitigate it. It seems easier in existential risk reduction to know that you are not inadvertently having a massively negative impact; the path to impact is much simpler.

    Of course, there is still enormous uncertainty in existential risk reduction. For example, it's a live debate whether the work of OpenAI, which was founded to reduce existential risks related to AI, might in fact be highly net negative. https://www.lesswrong.com/posts/CD8gcugDu5z2Eeq7k/will-openai-s-work-unintentionally-increase-existential#AkHsZq72dMFQNYQgJ

    Note that our uncertainty here justifies a broad portfolio, and the 'research' option mentioned above! Anyway, curious to hear your thoughts on this issue if you have any.

    1. Hi Rob, interesting thoughts! I guess I'm more optimistic about the potential for further research to be helpful here, at least in expectation, though high variability is probably going to be unavoidable. I think the best we can hope for is to try to tip the scales towards an increasing likelihood of better outcomes. But yeah, there'll always be the possibility that our best attempts turn out to backfire -- a depressing thought, for sure.

    2. Thanks for the reply! Somewhat-related follow-up: do you have plans to conduct, or supervise, any such research yourself? As a longtime reader who has noted with admiration your long-time involvement with EA, I've been curious what your current involvement looks like - especially as EA has become more longtermist in its orientation over the past few years.

    3. It's something I'm very open to, but I don't currently have any settled plans. (I don't have the empirical expertise to contribute on the more applied issues, but I'd be delighted if I someday hit upon an idea or argument in the more theoretical space that was worth contributing!)

  4. To me, the above discussion seems a case of overthinking something that is really pretty simple.

    If we wish to have a long term future, the logical place to start would seem to be in addressing the largest and most immediate threat to any kind of desirable future. And that would be nuclear weapons.

    As an example, if I have a loaded gun in my mouth, reason dictates that that issue should get first priority, because the threat is immediate and existential.

    Once the gun is out of our mouth, then perhaps we have the option to address more fundamental issues. In that case, I would cast my vote for a focus on our "more is better" relationship with knowledge, a simplistic, outdated and increasingly dangerous 19th-century philosophy which is the source of nuclear weapons and most of the other man-made threats which stand in the way of a desirable long-term future.

