Comments on Philosophy, et cetera: The Most Important Thing in the World

Anonymous (2022-01-26):

Maybe we're just having a semantic disagreement, since I absolutely agree that the agent needs to take such risks into account; I'd just say this is already included in time preference.

Richard Y Chappell (2022-01-25):
"<i>So I never consume anything and die.</i>"

Um, the agent needs to take this risk into account before perpetually delaying consumption. (Indeed, even an immortal agent would need to take into account the risk that they would never get around to consuming anything.)

And, of course, preferring to <i>consume</i> in the future does not entail <i>doing</i> nothing (let alone "you wouldn't ever do anything"). Instrumental rationality would have you now do whatever would most benefit your future self (e.g. work).

So it's not true that any particular kind of time preference is "actually an indispensable component of instrumental rationality."

"<i>Long-termism seems to put a huge burden on people that they are not wired to shoulder. But of course that alone doesn't mean it's not correct...</i>"

Right, I don't see that as a (truth-relevant) objection.

Anonymous (2022-01-25):
Ha, sorry, completely forgot about it!

Positive time preference: I prefer to receive 10 USD today over receiving 10 USD tomorrow, ceteris paribus.
Time preference of 0: I'm indifferent between 10 USD today and 10 USD tomorrow, ceteris paribus. It's a coin toss.
Negative time preference: I prefer 10 USD tomorrow to 10 USD today; but tomorrow I prefer to get 10 USD the next day, and so on. So I never consume anything and die.

This means that positive (or, at the very least, non-negative) time preference is actually an indispensable component of instrumental rationality.

But now that I think about it, maybe we're talking about two different things. Time preference in the sense of economics is probably different from the time discounting you're talking about, since the latter is done from the perspective of the impartial observer.

I can imagine that for an impartial observer there is no difference between me today and somebody 100,000 years from now. But if we have people deciding today on how to use resources, expecting them not to apply any discounting is unrealistic. Long-termism seems to put a huge burden on people that they are not wired to shoulder. But of course that alone doesn't mean it's not correct, and ultimately an argument of this sort seems only to question the practicality of long-termism.

Something is still bothering me about it, but I can't figure out what! ;)

Phil Tanny (2021-05-02):

To me, the above discussion seems a case of overthinking something that is really pretty simple.

If we wish to have a long-term future, the logical place to start would seem to be in addressing the largest and most immediate threat to any kind of desirable future.
And that would be nuclear weapons.

As an example, if I have a loaded gun in my mouth, reason dictates that that issue should get first priority, because the threat is immediate and existential.

Once the gun is out of our mouth, then perhaps we have the option to address more fundamental issues. In that case, I would cast my vote for a focus on our "more = better" relationship with knowledge, a simplistic, outdated, and increasingly dangerous 19th-century philosophy which is the source of nuclear weapons and most of the other man-made threats which stand in the way of a desirable long-term future.

Richard Y Chappell (2021-02-22):

It's something I'm very open to, but I don't currently have any settled plans. (I don't have the empirical expertise to contribute on the more applied issues, but I'd be delighted if I someday hit upon an idea or argument in the more theoretical space that was worth contributing!)

Rob Long (2021-02-22):

Thanks for the reply! Somewhat-related follow-up: do you have plans to conduct, or supervise, any such research yourself? As a longtime reader who has noted with admiration your long-time involvement with EA, I've been curious what your current involvement looks like - especially as EA has become more longtermist in its orientation over the past few years.

Richard Y Chappell (2021-02-22):
Hi Rob, interesting thoughts! I guess I'm more optimistic about the potential for further research to be helpful here, at least in expectation, though high variability is probably going to be unavoidable. I think the best we can hope for is to try to tip the scales towards an increasing likelihood of better outcomes. But yeah, there'll always be the possibility that our best attempts turn out to backfire -- a depressing thought, for sure.

Richard Y Chappell (2021-02-22):

Hi Robert, can you explain why it is that "without having some time preference towards the present you wouldn't ever do anything"? Never doing anything doesn't sound like it would serve my future interests very well, so I don't see how your claim is compatible with instrumental rationality.

More generally: I glossed over pure temporal discounting quickly because it seems like a non-option to me: why should future people matter less, just because they're further away from me in time? It would be obviously unreasonable to apply a "spatial discount rate" and count geographically distant people as intrinsically less important. But the temporal version doesn't seem any better justified, in principle. Or so it seems to me. But I'm curious to hear more, if you disagree.

(I should clarify that I'm open to some <a href="https://www.philosophyetc.net/2013/01/non-identity-variability-and-actualist.html" rel="nofollow">moderate partiality towards antecedently actual people</a>, in contrast to contingent future people, as a unified class.
But that's very different from a compounding temporal discount rate, which ends up counting <i>very</i> temporally distant people for nearly zero.)

Richard Y Chappell (2021-02-22):

Hi Brandon, yeah, it does seem plausible that broad experimentation may be a better response than narrow attempts at optimization in the face of immense uncertainty. Though, depending on what exactly you mean by "unfocused", I might disagree with that. I would expect that some deliberate (yet <i>broad</i>) thinking about optimization could help to steer us towards more promising avenues than a completely haphazard approach...

Rob Long (2021-02-22):

Great post! I'm glad to see longtermism getting more discussion and attention - this post was linked to from Daily Nous's heaps of links.

Within longtermist cause areas, a big question is how to apportion efforts between ensuring survival (i.e. reducing existential risk) and trajectory changes conditional on survival, such as "decisions about the constitutional rules of a new institution", or trying to affect civilizational values. As a very rough approximation, the existential-risk focus is represented by Toby Ord's "The Precipice", while the trajectory-change focus is represented by Will MacAskill (some published remarks and a forthcoming book).

I have yet to be convinced that interventions to affect trajectories are tractable. Compared to x-risk reduction they have extremely high variability in their impacts.
For example, in trying to affect the constitution of a world government, it seems very easy to make changes that would have unintended negative effects - especially if you are considering (as longtermism does) very long-term effects. As a simple example, perhaps you advocate for a world government to have strong surveillance powers, but this leads to a locked-in dystopian totalitarian state. Or perhaps you advocate against powers of surveillance, but it turns out that surveillance would have prevented the development of a weapon that leads to a war and the rise of a much worse world government. These are simplistic examples / just-so stories, but they strike me as somewhat representative of our epistemic position on trajectory change. For example, if I could go back in time and try to improve the long-term trajectory of civilization by tinkering with the early Catholic Church (a long-lasting institution that has had enormous effects), I think I would be quite clueless about what I should change. I think this cluelessness would persist even with enormous amounts of research, due to the 'underpowered' and unpredictable nature of history.

This admittedly vague distrust of trajectory-change interventions is why I currently favor efforts to mitigate extinction risks. Actions in this area have much more straightforward mechanisms: you identify some threat, hopefully with a relatively well-understood mechanism (e.g. the greenhouse effect), and try to mitigate it. In existential risk it seems easier to know that you are not inadvertently having a massively negative impact; the path to impact is much simpler.

Of course, there is still enormous uncertainty in existential risk reduction. For example, it's a live debate whether the work of OpenAI, which was founded to reduce existential risks related to AI, might in fact be highly net negative:
https://www.lesswrong.com/posts/CD8gcugDu5z2Eeq7k/will-openai-s-work-unintentionally-increase-existential#AkHsZq72dMFQNYQgJ

Note that our uncertainty here justifies a broad portfolio, and the 'research' option mentioned above! Anyway, curious to hear your thoughts on this issue if you have any.

Robert M (2021-02-19):

It seems like you mention one important thing but then gloss over it real quick (perhaps G&A talk about it in more detail) - why shouldn't we discount the value of future lives, however slightly, compared to present lives? Within your own life, you're discounting everything, because without having some time preference towards the present you wouldn't ever do anything. So it's only natural that we would discount the future, including future lives, from the perspective of "what to do today". I don't know what the discount rate is, or whether we can even formulate THE one social discount rate. And I don't know how much of a problem it is for your and G&A's view - or at what levels of the discount rate it becomes a problem. But it should at least be acknowledged.

Brandon (2021-02-18):

I think (as a matter of classification) it is better to reserve 'deontology' for positions in which consequences, however astronomically great, can never override (certain fundamental) non-consequentialist considerations.
But I think even the more popular versions of these would tend to see those considerations as forming a basic framework, with consequence-based considerations 'filling in' further details, and I think your argument could probably work for those as well.

I don't know how tenable it would be, but it does seem to me that there's another possible position on the table, one that I think many people might find plausible. In every inquiry there are times when you can optimize and times when you have very little sense of where to go and so just explore, and one could hold that this latter is where we are with respect to the most important thing in the world. That is, we just don't know yet. On such a position the proper course of action would instead be to try lots and lots of things. (This would differ from the funding meta-option in that it's not a meta-option and would be unfocused.) Hospitals in the modern sense developed because some knights recognized a problem -- thieves were preying on sick pilgrims in foreign countries, who had nowhere to turn -- and set out to solve it, which led to new problems (e.g., how to house a bunch of sick people where they could be protected, and how to get them back on their feet again), which people then set out to solve, etc. Larrey invented the triage approach and the first rudiments of the ambulance to handle the single but extreme case of medical care for a fast-moving artillery-based army. In both cases, a genuinely good idea that started small turned out to have benefits on an unimaginably huge scale. So (one could argue, and again I'm not sure how one would go about developing it) what one really needs are a lot of small-scale interventions, some of which might snowball into something far beyond what a human being could design.
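A recurring arithmetic point in the thread - that a compounding temporal discount rate ends up counting very temporally distant people for nearly zero - is easy to check numerically. A minimal sketch (the `discount_weight` helper and the 1% annual rate are illustrative assumptions, not figures anyone in the discussion endorses):

```python
# Sketch: how a compounding (exponential) annual discount rate weights
# people at increasing temporal distances. The 1% rate is an assumed
# example value chosen only for illustration.

def discount_weight(years: float, annual_rate: float = 0.01) -> float:
    """Weight assigned to wellbeing `years` in the future under
    compounding discounting at `annual_rate` per year."""
    return (1.0 - annual_rate) ** years

if __name__ == "__main__":
    for t in (0, 10, 100, 1_000, 100_000):
        print(f"{t:>7} years out: weight = {discount_weight(t):.3g}")
```

Even at this modest rate, someone 100 years away counts for about a third of a present person, someone 1,000 years away for under one part in twenty thousand, and someone 100,000 years away for effectively nothing - which is the force of the objection to pure temporal discounting above.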