tag:blogger.com,1999:blog-6642011.comments2023-10-29T10:32:36.914-04:00Philosophy, et ceteraRichard Y Chappellhttp://www.blogger.com/profile/16725218276285291235noreply@blogger.comBlogger13717125tag:blogger.com,1999:blog-6642011.post-63164932996427434352022-05-02T21:48:12.189-04:002022-05-02T21:48:12.189-04:00Philosophy TV was great while it lasted!<a href="http://www.philostv.com/" rel="nofollow">Philosophy TV</a> was great while it lasted!Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-19275996803339850392022-05-02T21:41:04.325-04:002022-05-02T21:41:04.325-04:00I wish there were more public discussions like thi...I wish there were more public discussions like this between experts. Not necessarily debate, but more discussion or critical interviewing of someone about their view with hard follow-up questions. I'd love to see Huemer talk to a Kantian about their similarities and differences. Anonymoushttps://www.blogger.com/profile/08407005231454153248noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-35551603970133875412022-05-02T16:36:18.028-04:002022-05-02T16:36:18.028-04:00Hi, Matthew here. I largely agree with your asses...Hi, Matthew here. I largely agree with your assessment of the debate. Huemer's intuitions strike me as similar to those of people who reject transitivity (Huemer accepts transitivity, for the record) because it requires that they accept the repugnant conclusion, claims about the utility monster, dust specks being worse than torture, Scanlon's counterexample about Jones being tortured, and lots of others. It feels like the intuitions appealed to are pretty shallow and stem from biases and heuristics. <br />Given that our intuitions are often wrong about moral issues--shown by the immense disagreement and the history of moral errors--we'd expect the correct view to diverge from our intuitions sometimes. 
However, it would be more surprising if the correct view conflicted with deeper, more fundamental logical principles, like transitivity, the notion that if a perfect being would hope for you to do x you should do x, avoidance of status quo bias, etc. <br />The rules of economics are similarly somewhat unintuitive. However, we should of course still accept them. <br /><br />Of course, from Huemer's perspective it no doubt seems like we're erring too far in the opposite direction, much like average utilitarians who are willing to bite the bullet on crazy things relating to bringing miserable people into existence as long as most existing people are marginally more miserable. When such disputes arise, there are a few ways to try to settle them. <br />1) Look at the judgments of most people who have considered the issue. If 99% of people agreed with Huemer here after considering the issues in depth, that would give us some reason to defer to Huemer (and to the majority). <br />2) Look at which intuitions can be better combined into a coherent web. To the extent that we're disputing whether we should privilege the intuition about organ harvesting or the intuition that it's good to bring about states of affairs a perfect being would hope to see brought about, analyzing which one better forms a coherent moral system seems to be a good test. If one-off judgments don't cohere, we shouldn't trust them as much. <br />3) We can analyze the issues in more depth. I think that after reflecting and seeing the strange things that have to be accepted by one who accepts the organ harvesting case, the balance of intuitions favors rejecting that intuition. Examples of such intuitions include: <br />A) Sometimes, putting perfectly moral beings in charge of deciding whether or not events occur makes the world worse. <br />B) Perfect beings should sometimes hope for us to act wrongly. <br />C) Perhaps one should even hope that they themselves act wrongly. <br />Etc. 
<br />4) We can try to employ debunking accounts of the intuitions. On this front too, utilitarianism seems to do better. Utilitarianism has debunking accounts relating to evolution (the research of people like Greene), heuristics (generally it's bad to kill people), viciousness intuitions (generally organ-harvesting killers are vicious, so we have the intuition that the act is vicious even if we have most reason to do it), and the fact that the world would be much worse if lots of people did it, so the act is good only if there's no risk of getting caught and it will affect only a few people. <br />The deontological account doesn't have such debunkings. The closest they have is "utilitarians have weird idiosyncratic intuitions." Perhaps this is true; however, this would be double-counting the evidence described under point 1. Additionally, while it seems like utilitarianism is crazy, when one is confronted with all the weird implications of rejecting it, utilitarianism becomes harder to reject. Finally, we don't have good data about what most people who have considered the paradoxes of deontology have thought about the matter. <br />We can also do more holistic comparisons based on theoretical virtues, plausibility of axioms, and historical track record. These seem overall to favor utilitarianism. Deliberation Under Ideal Conditions https://www.blogger.com/profile/04561344275433727965noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-51613986113234126332022-04-27T01:14:25.402-04:002022-04-27T01:14:25.402-04:00Apologies if this comes off as non-philosopherish,...Apologies if this comes off as non-philosopherish, but I understand this struggle a lot. 
I'm only an undergraduate student, but to converse in philosophy is nothing but struggle, as one of the first responses in the dialogue is "what's the point?", a question I despise because, while I know the average person isn't concerned with whether or not you can use invisibility virtuously, the whole point of it is to engage a person's use of logic, and that question throws it all out the window.Denver Koschhttps://www.blogger.com/profile/17607331219956947686noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-67144522110689041502022-02-28T09:35:48.481-05:002022-02-28T09:35:48.481-05:00"'right', 'wrong', 'oblig..."<i>'right', 'wrong', 'obligation', 'duty', etc. are terms we use to describe the coordination *specifically*, not every aspect of getting the best outcomes.</i>"<br /><br />This meshes with my sense that RC is not really a substantive normative theory, competing with AC in accounting for our <i>normative reasons for action</i>. Perhaps, as you say, RC is just giving an account of certain <b>words</b> that are more limited in scope ('right' etc. <i>when defined as</i> relating to public codes for coordination). AC obviously isn't talking about that; it's addressing the more fundamental normative question of what we <i>really ought</i> to do, all things considered, not just so far as public codes are concerned.<br /><br />The substantive question is whether we always really ought to do what the best public code would direct us to do, and I take it to be completely obvious that the answer to this is 'no'. 
RC can introduce terms for talking more narrowly about ideal public codes, and these will often be of some normative interest (insofar as we <i>very often</i> have good reason to abide by such codes), but they aren't really giving an account of what one ought (all things considered) to do.<br /><br />"<i>considering best outcomes at individual levels but only when we are considering best outcomes at population levels</i>"<br /><br />The ranking of outcomes isn't level-relative, so this doesn't make any sense. Consequentialists evaluate entire possible worlds, and direct us to prefer better worlds over worse ones. AC specifically directs us to <i>act</i> so as to bring about preferable worlds. Contrary to the occasional caricatured misunderstandings of its critics, it most certainly does <b>not</b> direct us to ignore the "population-level" consequences of our actions. All of that is fully taken into account.Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-51045100737208880072022-02-19T10:27:14.170-05:002022-02-19T10:27:14.170-05:00Well, there are two very different issues here. On...Well, there are two very different issues here. One is whether the hope objection is a good argument against RC -- it is not, because it requires assuming as a 'best outcome' what is not the best outcome on RC, and which is not what RC would think you should primarily hope for. The other, which, reading over your responses, seems to be what you are primarily concerned with, is whether RC or some form of AC like MLAC is a better way to handle things like moral rules. That is a distinct issue.<br /><br />One of the reasons I think I'm so very skeptical of the objections under (1) especially is that there seems to be an assumption that RCists are really deontologists who are faking being consequentialist. 
This is not true; they are fully consequentialist, and if AC has any responses to the objections under (1) on consequentialist grounds, RC can generally make the same responses with little to no modification -- it will emphasize different things, but as a consequentialist approach, it can use the full panoply of consequentialist tools. Where RC and AC differ is not about anything in the abstract about consequences or their fundamentality but about what right and wrong are. For a typical RCist, right and wrong derive from consequentialist principles not when we are considering best outcomes at individual levels but only when we are considering best outcomes at population levels. This, I think, becomes really clear when we consider your response:<br /><br />> "I mean, really, we should want each person to internalize whatever set of rules it would be best to have that individual internalize. There's no logical guarantee that uniformity would be socially optimal (though there are obviously cases where co-ordination is important, e.g. road rules)."<br /><br />RCists don't hold that we should be uniform, since the typical RCist holds that there are kinds of goodness and badness other than moral goodness and badness that should be determined by improvisation, personal tastes, etc. The existence of moral codes obviously does not imply any uniformity; moral rules are not precise enough to dictate details, but only give frameworks. They hold that the best possible outcome for the largest population is obviously something that requires a considerable amount of coordination, and 'right', 'wrong', 'obligation', 'duty', etc. are terms we use to describe the coordination *specifically*, not every aspect of getting the best outcomes. 
Thus for every form of RC, morality *just is* like road rules, with the primary variations being in how they think our moral rules progress (by aiming at an ideal we can already rough out or by testing and tweaking locally), and the best moral rules for the individual to internalize are those like 'Don't murder' that are capable of coordinating actions in a beneficial way on a large scale, because that's what a moral rule is, not some personal set of rules you've decided for yourself (even if they are very good rules).Brandonhttps://www.blogger.com/profile/06698839146562734910noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-19149005719967349872022-02-17T20:04:24.247-05:002022-02-17T20:04:24.247-05:00I mean, really, we should want each person to inte...I mean, really, we should want each person to internalize whatever set of rules it would be best to have <i>that individual</i> internalize. There's no logical guarantee that uniformity would be socially optimal (though there are obviously cases where co-ordination is important, e.g. road rules).Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-86936462221739470862022-02-17T20:01:16.548-05:002022-02-17T20:01:16.548-05:00"The worst possible outcome is always to lose..."<i>The worst possible outcome is always to lose our rule-structured shared moral project.</i>"<br /><br />If that's true, then AC will never recommend acts that would risk that result. Still, there seems to be logical space for acts that have no such long-term risk even while going against the generally best rules. So the question is how we should assess such acts. From a consequentialist perspective, it seems clear that we should assess such acts positively (just as we should assess the generally best rules positively). 
We should want people to internalize the generally best rules, and then we should want them to act contrarily on just those occasions when it would be best for them to do so.Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-57734361973625700802022-02-17T18:10:28.972-05:002022-02-17T18:10:28.972-05:00I suppose another way to state this, which just oc...I suppose another way to state this, which just occurred to me after I clicked 'publish', is that for RC, right and wrong are entirely about the way we work together for overall good, which we do by shared standards, through which we cohere in common cause. Looking at the meticulous details will indeed show some cases in which our doing what is right (by the shared standard of our common project) will not get the best results *in that case*, but the whole point of the shared standards is that it is in fact the *right kind of working together with everybody else* that gets the best results overall, and therefore getting our joint moral project right outweighs any nonoptimalities that arise in particular cases by accident or freak circumstances. (If the nonoptimalities are not accidental or rare chance events, they can get taken into account in an improved ruleset, in a future upgrade of our moral project.) What matters in the Jack case, for instance, is our coming together in a shared stand against murder, by which we make our society one that rejects murder, which is a better outcome than murdering some to save others. The RC view is that the best outcome is one in which we follow, implement, and enforce a rule against murder (although, as we grow more enlightened, perhaps one more advanced and sophisticated than the one we have now), and all that follows from that, not any outcome that you can determine just by looking at the outcomes of this particular case. 
The worst possible outcome is always to lose our rule-structured shared moral project.Brandonhttps://www.blogger.com/profile/06698839146562734910noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-39273976868842804012022-02-17T17:31:39.912-05:002022-02-17T17:31:39.912-05:00"Well, not on that basis, but what about the ..."Well, not on that basis, but what about the fact that it tells you to abide by a rule even when you know that following the rule in your circumstances would be counterproductive?"<br /><br />This seems ambiguous about what "your circumstances" are. Are they this particular situation, or is it the greater context in which people following these rules is beneficial, for which your own particular situation may serve as example, influence, solidary support, etc.? Rule consequentialists are *conservative* about rules, just by the structure of the approach, but this is because they don't see right and wrong as being about getting the best result in this or that particular circumstance by this or that particular action, but about us all together getting better results overall by the *kinds* of things we do, which we capture in rules. The consequences still dominate the consideration. But rule consequentialists are building on a bigger scale than act consequentialists are, and therefore they are more tolerant of particular non-optimalities. Think of it by analogy. You don't custom-build one-use machines for every single thing you need done. When you need a computing job done, you don't build the computer and program it from scratch every single time. If you did that, and you were a genius engineer, then you'd get computing hardware and software that was absolutely optimal for each particular use. But you don't build a computing culture that way -- in fact, that guarantees you'll never build a computing culture, because you can't build a culture on the assumption that everyone is a genius engineer able to start from scratch. 
You do it with general-purpose machines that are not going to be optimal for particular cases, but are going to be good enough for a wide range of cases, so that they are optimal for the general usefulness of computers. A rule consequentialist has some tolerance for nonoptimalities in particulars in order to get a better result overall. So far that's a common thing among consequentialists, but one thing that's distinctive of rule consequentialists is that they think that moral right and wrong are not at the level of the particulars, where not every kind of badness is moral badness, but are at the level of the specification of the overall system of which this or that particular action is just a part.<br /><br />This is why your MLAC response doesn't seem to work. MLAC is not the hope objection; it's just an alternative theory of why we use rules. The hope objection seems to assume that the rule consequentialist is already wrong about the level at which morally relevant hope occurs -- that the level at which the best overall outcome matters is this particular circumstance, not the wider society of which it is a part.<br /><br />I suppose I think of 'convolutedness' as a difficulty of navigation, in which case the best evidence of whether something is convoluted is how difficult people actually find it to navigate -- which would certainly be at least a disadvantage in a practical field like ethics. I guess I'm not sure what supposed badness of the structure you are trying to point out, if we are setting aside whether it's hard for a typical reasonable person to navigate it in practice. Perhaps I'll have to re-read your paper more closely on this point.Brandonhttps://www.blogger.com/profile/06698839146562734910noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-66444422725327464082022-02-16T19:23:31.316-05:002022-02-16T19:23:31.316-05:00Hi Brandon!
I should clarify that I'm not con...Hi Brandon!<br /><br />I should clarify that I'm not concerned about what folk <i>find</i> "convoluted" or "difficult to navigate" -- I don't think subjective reactions are relevant to assessing the truth of a moral theory. What's more relevant, I think, is the <i>objective structure</i> of the theory, and RC is convoluted here (in its interplay of reasons for action and desire) in a way that MLAC simply isn't. <br /><br />> "moral rules we actually are justified in using can change over time... and therefore it doesn't make much sense to say that they are rule-fetishizing."<br /><br />Well, not on <i>that</i> basis, but what about the fact that it tells you to abide by a rule <i>even when you know that following the rule in your circumstances would be counterproductive</i>? That seems to indicate that their rules have been imbued with excessive or undue moral significance.<br /><br />> "[The objection] would have to be one that makes sense of the best outcome not being one in which people have moral codes that guide them in acting with reasonable regularity and predictability for the general and overall improvement of outcomes."<br /><br />Not at all. MLAC endorses being guided by moral codes (understood as rules of thumb) insofar as this is a good thing. 
But (i) it allows for more exceptions and individual variation in appropriate rules than does RC, and (ii) in cases where the objectively best exceptions are unknowable, it holds that the objective moral reasons track the value facts (rather than pretending that the rules matter objectively), while agreeing that it's most rational (or supported by "subjective"/evidence-relative moral reasons) to continue following the generally-reliable moral rules.Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-26221919693542387962022-02-16T19:04:21.630-05:002022-02-16T19:04:21.630-05:00Yes, those are nice ways of fleshing out the "...Yes, those are nice ways of fleshing out the "uniformity" objection!Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-65640519216381220102022-02-16T15:44:49.625-05:002022-02-16T15:44:49.625-05:00Huemer's version is the following and seems to...Huemer's version is the following, and it seems to run into problems: "Consider the sets of rules that could realistically be the socially accepted morality of a human society. Among those sets of rules, select the one that would have the best consequences if it were the socially accepted morality. Then act according to those rules." <br />1 Suppose that we're considering developing AI. If I develop AI, there's a 2% chance of the world ending. If someone else does first, there's a 30% chance. Assume additionally that AI has no benefits. In this case a good social rule would be not developing AI. However, it's still good for me to develop AI. <br />2 How we define the community is unclear. If it includes everyone, that runs into the Egyptology objection. <br />3 Consider a case where you know with metaphysical certainty and are justified in believing that killing one person would save 558 people. 
Most people who believe that are misled, and so not killing people even to save more would be a good societal rule. However, it would still be better to kill one to save 558, as Huemer agrees. <br />Deliberation Under Ideal Conditions https://www.blogger.com/profile/04561344275433727965noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-27192304497914365982022-02-16T14:19:45.054-05:002022-02-16T14:19:45.054-05:00While I'm not a consequentialist of any sort, ...While I'm not a consequentialist of any sort, I am extremely skeptical of all of these arguments against rule consequentialism, with the exception of the distant worlds objection (which, however, I think shows that a rule consequentialist should avoid a particular view of moral rule evaluation, not that rule consequentialism itself is false). This is true even of the argument against the popular objection to rule consequentialism; it *does* give a reason to prefer RC, namely, simplicity, one sign of which is that people regularly find it easier to think in terms of RC, with its close association of criterion and decision procedure, than in terms of multi-level act consequentialism. This is, to be sure, not a decisive reason, but since you later provide the convolutedness objection to RC, I think it's fair for the RCist to point out that there seems to be some reason to think that people in fact tend to find RC less convoluted and difficult to navigate than MLAC.<br /><br />RC *is* fundamentally consequentialist. All RCists hold that the moral rules we actually are justified in using can change over time (e.g., if situations change, if technologies change, if we discover something about the relevant consequences that wasn't known before), on the ground of whichever consequences they take to be relevant to moral life, and therefore it doesn't make much sense to say that they are rule-fetishizing. 
The hope objection runs into the problem that RCists usually take rule-following to be one of the contributors to the best outcome (to take just one example, the best outcome for human beings will always involve having the best society, and societies are partly constituted by enforcement of rules and codes). Perhaps there's some version of it that could still be run, but it would have to be one that makes sense of the best outcome not being one in which people have moral codes that guide them in acting with reasonable regularity and predictability for the general and overall improvement of outcomes.<br /><br />(2) seems to require that we simply split systems of rules into two: one for acceptance/approval and one for acting. This is certainly an inconvenience, but it's unclear why it is supposed to be absurd. It is not unheard of, actually, in the history of ethics (many early modern versions of ethics distinguish between agent-focused and spectator-focused rules this way, and it has the advantage that it makes ethics and aesthetics parallel, since standards of artistic production and standards of taste are generally recognized to be distinct and in unusual situations can come apart). And even if one holds that there is no essential connection, I'm not sure why the RCist can't simply say that nonetheless, there is an empirical one. After all, evil demon scenarios are among other things designed to get around what in fact we take our actual evidence to be. 
Yes, it would be nice to have a necessary connection (if we ever meet any aliens or evil demons, for instance), but sometimes we only have a factual one, and we can usually do just fine with that.<br /><br />I have to head off to virtual class, but on the distant worlds objection, my own view is that it really shows just that RCists should be (like Mill) positivists about obligation, seeing obligations and moral codes as mechanisms we design for a better world rather than as things given in the nature of reality.Brandonhttps://www.blogger.com/profile/06698839146562734910noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-81473316107980264392022-02-14T21:30:57.816-05:002022-02-14T21:30:57.816-05:00I don't find this objection as persuasive as y...I don't find this objection as persuasive as you do. It seems subject to an iteration problem. Suppose that we switched out our loved ones for improved loved ones 1000000000000000000000000000000000000000 times, such that at the end of the process our loved ones both were producing enormous benefits to the world (basically single-handedly ushering in utopia) and were also experiencing utility-monster-esque levels of happiness every second. It seems that Cohen's account would condemn this because it just iterated a very bad thing 1000000000000000000000000000000000000000000000000000000000000000000000000 times. <br />This intuition just seems to be status quo bias in realms relating to sacred values. Similarly, one who is married might think that it would have been terrible if they hadn't married the person they married, even if an omniscient oracle said to them that they would have married someone better. <br />Finally, if we accept Parfit's reductionism about the self, then our actual loved ones die and are replaced every second in some sense, so the world would have almost no value. This is implausible. 
Even if one holds that Parfit's reductionism is false, it seems strange that if it were true, the world would have no value. Deliberation Under Ideal Conditions https://www.blogger.com/profile/04561344275433727965noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-20253582308990861852022-02-13T20:31:13.910-05:002022-02-13T20:31:13.910-05:00Ah, I see. I thought that by, "what fundament...Ah, I see. I thought that by "what fundamentally matters" you meant to refer to some common ground between deontologists and utilitarians, not to employ a different way of stating what's at issue from what deontologists typically use. My mistake.willcombs10https://www.blogger.com/profile/02166714552981829309noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-24311929076657557712022-02-13T18:58:57.038-05:002022-02-13T18:58:57.038-05:00(I mean, overlap between different people regardin...(I mean, overlap between different people regarding what they find intuitive, etc...)Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-22129873970628326302022-02-13T18:57:53.592-05:002022-02-13T18:57:53.592-05:00Deontologists typically makes claims about what...Deontologists typically make claims about what's right or wrong, not what's important. My suggestion is that their claims sound much less plausible when re-stated in terms of <i>what's important</i>. Of course, it's always possible for someone to reject an argument like this by "biting the bullet" and just accepting the verdicts that I think sound crazy. (In just the same way that a utilitarian could dismiss any putative counterexamples by biting the bullet and saying nothing more than that they aren't bothered by those implications.)<br /><br />Argument can get no grip on someone who isn't the slightest bit bothered by the implications of their view that seem bothersome to others. 
But in practice, there tends to be pretty strong overlap between what people find prima facie intuitive or bothersome, which is why philosophers don't usually just "bite the bullet" without saying at least <i>something</i> more to try to weaken the apparent force of the objection (as I did with the putative counterexamples to utilitarianism, for example).Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-1514044061551219292022-02-13T17:07:01.439-05:002022-02-13T17:07:01.439-05:00BTW, I don't mean "Is this just an 'i...BTW, I don't mean "Is this just an 'incredulous stare'" to come off as accusatory here. Some people think that's a fine philosophical response - I'm simply wondering if that is indeed the response you're giving here.willcombs10https://www.blogger.com/profile/02166714552981829309noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-63335731782417222572022-02-12T03:43:13.225-05:002022-02-12T03:43:13.225-05:00"* Most importantly, deontology makes incredi..."* Most importantly, deontology makes incredible claims about what fundamentally matters. It seems completely wild to claim that keeping a deathbed promise (to borrow one of Huemer's examples) is seriously more important, in principle, than the entire lives of many innocent people. So either deontologists are stuck making completely wild claims of this sort, or their normative prescriptions (concerning what we allegedly ought to do) bear no relation to what really matters."<br /><br />Is this just an "incredulous stare"? Why couldn't the deontologist simply say right back that your claims about what fundamentally matters are "incredible" and "wild"? 
Or do you intend the links to do the argumentative work and this remark is just a summary of your conclusions?willcombs10https://www.blogger.com/profile/02166714552981829309noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-136325881924534802022-01-26T09:27:50.178-05:002022-01-26T09:27:50.178-05:00Maybe we're just having a semantic disagreemen...Maybe we're just having a semantic disagreement since I absolutely agree that the agent needs to take such risks into account, but I'd just say this is already included in time preference.Anonymoushttps://www.blogger.com/profile/06306379993176564598noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-66927566536334905862022-01-25T20:19:38.952-05:002022-01-25T20:19:38.952-05:00"So I never consume anything and die."
..."<i>So I never consume anything and die.</i>"<br /><br />Um, the agent needs to take this risk into account before perpetually delaying consumption. (Indeed, even an immortal agent would need to take into account the risk that they would never get around to consuming anything.)<br /><br />And, of course, preferring to <i>consume</i> in the future does not entail <i>doing</i> nothing (let alone "you wouldn't ever do anything"). Instrumental rationality would have you now do whatever would most benefit your future self (e.g. work).<br /><br />So it's not true that any particular kind of time preference is "actually an indispensable component of instrumental rationality."<br /><br />"<i>Long-termism seems to put a huge burden on people that they are not wired to shoulder. But of course that alone doesn't mean it's not correct...</i>"<br /><br />Right, I don't see that as a (truth-relevant) objection.Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-54566825828560576292022-01-25T18:34:09.662-05:002022-01-25T18:34:09.662-05:00Ha, sorry, completely forgot about it!
Positive t...Ha, sorry, completely forgot about it!<br /><br />Positive time preference - I prefer to receive 10 USD today to receiving 10 USD tomorrow, ceteris paribus.<br />Time preference of 0 - I'm indifferent between 10 USD today and 10 USD tomorrow, ceteris paribus. It's a coin toss.<br />Negative time preference - I prefer 10 USD tomorrow to 10 USD today, but tomorrow I prefer to get 10 USD on the next day, and so on. So I never consume anything and die.<br /><br />This means that positive (or, at the very least, non-negative) time preference is actually an indispensable component of instrumental rationality.<br /><br />But now that I think about it, maybe we're talking about two different things. Time preference in the sense of economics is probably different from the time discounting you're talking about, since the latter is done from the perspective of the impartial observer. <br /><br />I can imagine that for an impartial observer there is no difference between me today and somebody 100 000 years from now. But if we have people deciding today on how to use resources, expecting them not to apply any discounting is unrealistic. Long-termism seems to put a huge burden on people that they are not wired to shoulder. But of course that alone doesn't mean it's not correct, and ultimately an argument of this sort seems only to question the practicality of long-termism.<br /><br />Something is still bothering me about it, but I can't figure out what! ;)Anonymoushttps://www.blogger.com/profile/06306379993176564598noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-74259982493956926352022-01-25T00:13:41.069-05:002022-01-25T00:13:41.069-05:00Not sure if this is objectionable self promotion b...Not sure if this is objectionable self-promotion, but I wrote a ten-part series responding to Huemer's argument here https://benthams.substack.com/ <br />I addressed each of the specific thought experiments. 
Deliberation Under Ideal Conditions https://www.blogger.com/profile/04561344275433727965noreply@blogger.comtag:blogger.com,1999:blog-6642011.post-43067771536814732702022-01-24T10:04:07.521-05:002022-01-24T10:04:07.521-05:00One obvious downside to being a Bond villain is th...One obvious downside to being a Bond villain is that the rest of society will judge you to be a (highly unpredictable) threat, and react accordingly. That alone is sufficient reason for utilitarians to instead focus on more co-operative (less rights-violating) sorts of endeavours, of which there are plenty. Moral uncertainty is another reason (not to mention the obvious empirical uncertainty).<br /><br />That said, I've argued since the start of the pandemic that <a href="https://www.philosophyetc.net/2020/04/pandemic-moral-failures-how.html" rel="nofollow">conventional morality is deadly in a pandemic</a> and we should be more open to exploring the possible benefits of (consensual) deliberate infection to allow for early targeted immunity (before vaccines were available). So while I wouldn't go all the way to your "Bond villain" position, I do agree that there's plenty to criticize in folk morality here.Richard Y Chappellhttps://www.blogger.com/profile/16725218276285291235noreply@blogger.com