Sunday, September 16, 2007

Examples of Irrational Desires

More from Reasons and Persons. I love this one (pp.123-4):
A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday... This indifference is a bare fact. When he is planning his future, it is simply true that he always prefers the prospect of great suffering on a Tuesday to the mildest pain on any other day.

We can judge such a preference to be irrational because it makes arbitrary discriminations. It is ad hoc, and fails to treat like cases alike. A more coherent desire-set would appreciate pleasure on future Tuesdays as for any other day.

Parfit also discusses "Within-a-Mile-Altruism". Rather than caring about the welfare of others in his general community, the Within-a-Mile Altruist cares only about those who are located within one mile of him. One step further, and he feels indifferent to their suffering.

I've discussed similar arguments from Michael Smith here. This leads to the core argument of my essay, 'Why be moral?':
We have already established that self-interested reasons would force the amoralist to develop an intrinsic appreciation of at least some other people as ends in themselves. But it would seem arbitrary to recognize only some people as having intrinsic worth or even agent-relative worth to him. We can ask the relativistic amoralist why others do not also have worth to him. It seems plausible to hold that his overall desire set could be made more unified and coherent by adding in a more general desire for human well-being. This would contribute to explaining and justifying the more specific values the amoralist holds in valuing himself and his friends. We thus have rational grounds to criticize his desire set, in that it fails to exhibit such a degree of internal coherence. Given the rational pressure towards coherence, we may thus conclude that even the amoralist has reason to care about morality.


  1. But just because it might be ad hoc or irrational doesn't necessarily make it arbitrary. Assume further that the person's life is such that he knows that significantly unpleasant things will happen on a regular basis, and he decides that they should all happen on Tuesday as opposed to any other day. Why? Because they have to happen at some time, and because it's better to have them all happen at once so he can move on. After all, if his day is going to be ruined it might as well be ruined repeatedly and spare him from other ruined days that week. The choice of Tuesday as opposed to some other day might be arbitrary, but not necessarily irrational.

    This changes the case around, of course, but if anything it matches the "within-a-mile" case even better. One can't be altruistic towards everyone, or care equally about everyone's welfare: this is not something humans are generally capable of. So there will be some people one cares about and some one doesn't, and it's not clear if there's a non-arbitrary line here. (At the least, there may not be a non-arbitrary moral line to be drawn.)

  2. Right (assuming you meant to write, "just because it might be ad hoc or arbitrary doesn't necessarily make it irrational"), that's a fair point. Sometimes we need to make arbitrary decisions, and since there are no reasons to be found, it is perfectly rational to adopt an ad hoc policy or means to your ends. But that is quite different from introducing arbitrariness into your ultimate ends, which I think is necessarily irrational.

    (Note that the within-a-mile altruist is not just a general altruist who adopted a within-a-mile policy for pragmatic reasons. That would be far more reasonable. No, this guy is someone who genuinely doesn't care, at the deepest level, about those who live more than a mile away. He judges that people really matter less once they cross that invisible boundary. It's every bit as crazy as Future-Tuesday Indifference.)

  3. I'd also note that, even if the hypothetical hedonist prefers all his suffering to happen on Tuesday so he can get it over with, that doesn't mean he should be *indifferent* to what happens to him on Tuesdays.

    All else being equal, even if Tuesday is worse for him than any other day of the week, a Tuesday with less suffering is preferable to a Tuesday with more. So, he should take steps to minimize his suffering on that day as well. If he were truly indifferent to how bad Tuesdays were for him, treating a worse one as equivalent to a better one, that would truly be irrational.

  4. What if the within-a-mile altruist IS irrational BUT the decision to be one was made for some at least semi-rational "indirect utilitarian" logic (e.g. I believe I can help people close to me better than people far away, so I will only think of those close to me as people)?

    I imagine such things are absolutely rampant in almost everyone's philosophy, and I can think of hundreds of rational ways one could form them.


  5. By this logic, wouldn't it be irrational to prefer dogs to cats, as there is no relevant difference between them? But it seems odd to say that someone is irrational for liking dogs over cats (or vice versa).

    1. Surely there are relevant differences in which such a preference may be grounded, e.g. whether one prefers close companionship or independence and lower maintenance in a pet.

    2. Ok so maybe that wasn't the best example but I still think the point remains. Let's say someone has a preference for redheads over blondes, another person prefers blondes over redheads, and a third person doesn't have a preference. It seems strange to say that the third person is more rational than the other two. (Obviously I'm assuming that hair color doesn't reflect anything else about a person)

    3. Right, so it's important to distinguish tastes from philosophical "preferences" (even though in ordinary English we often use "prefers" broadly to cover either). Tastes, we may suppose, are a-rational, and not subject to rational criticism. But they have downstream rational significance: If you like the taste of chocolate more than vanilla, then that is a perfectly good reason to prefer that you get the chocolate rather than the vanilla ice-cream (indeed, it may render the opposite preference downright irrational, though you can imagine some confused or self-loathing person who prefers that they get the flavour they like less).

    4. Well, wouldn't both 'tastes' and 'preferences' fall under the category of desires? And how would we distinguish between the two? The Future-Tuesday-indifferent person could say that his Future-Tuesday Indifference is a matter of taste and that he is not being irrational in desiring it.

    5. No, we need to distinguish tastes or likings, on the one hand, from desires or preferences, on the other. (See, e.g., here.) The former simply concern what your subjective experiences are like, whereas the latter involve a kind of reflective endorsement or ranking, and a tendency towards choice. So again: liking the taste of chocolate (or finding redheads more attractive) is just a brute fact about your subjective experiences. What you prefer upon reflection is always a further question.

    6. "We can judge such a preference to be irrational because it makes arbitrary discriminations. It is ad hoc, and fails to treat like cases alike. A more coherent desire-set would appreciate pleasure on future Tuesdays as for any other day."
      What preferences aren't ad hoc?
      If a person wants to maximize his happiness over every day except Tuesday, and generalizing it to maximizing happiness over every day would be "more coherent" and therefore "more rational", then by the same reasoning, he can generalize this preference to maximizing happiness over every person and not just himself (making the ad hoc decision to exclude other people is irrational), and then to maximizing the happiness of every organism (making the ad hoc decision to exclude other organisms is irrational), and then to maximizing every emotion of every organism and not just happiness (making the ad hoc decision to exclude other emotions is irrational), and then to maximizing every characteristic of every organism, and every characteristic of every object, and so on. (You can do those generalizations in a different order, and you can choose many more paths to generalization.)
      I think, ultimately, a preference sorts all outcomes into a group of outcomes you prefer and a group of outcomes you don't prefer, and the simplest and "most coherent" possible version of this is to prefer or not prefer every single outcome equally.
      In fact, I quite strongly suspect that all possible preferences are "fundamentally arbitrary", and one way to say why is that an "arbitrary" complexity must be introduced into this function that determines which worlds to prefer more than others, so that it does not just output the same number for every input (which would be the simplest way of doing it). Of course, one could say that no preference is arbitrary because all preferences have had causal reasons to be as they are, but that again does not allow us to discern between rational and irrational ones.
      To be honest, I am more than a bit shocked that this argument, of all things, is listed as a rock-solid proof of "It is possible for desires (or ultimate ends) to be irrational. So there is more to rationality than just instrumental rationality.", which is a statement I would say is rock-solidly disproven by the is-ought distinction being a thing.

    7. At a minimum, it isn't objectionably arbitrary to prefer positively-valenced affective or emotional states over negatively-valenced ones. If there are any normative reasons at all, they surely include reasons to prefer happiness over misery. It seems very doubtful that there are (good, non-instrumental) normative reasons to prefer Tuesday pains over non-Tuesday ones, by contrast. So it all comes down to which distinctions we are ultimately willing to countenance as being of genuine normative significance. Some, surely, but not all.

    8. This is all on the "ought" side of the is/ought gap.

  6. "If there are any normative reasons at all, they surely include reasons to prefer happiness over misery."
    "So it all comes down to which distinctions we are ultimately willing to countenance as being of genuine normative significance. Some, surely, but not all."
    Well, I'm not willing to grant any special objective power to ANY preferences. "Surely", in this context, I assume, arises from the fact that we evolved in a way that motivates us by positive emotional states. But there is nothing objectively special about the way our minds came to be. We could imagine minds evolving in an environment that would select for the ability to induce negative emotional states in others, and a mind from such an environment would look at us humans and say "If there are any normative reasons at all, they surely include reasons to prefer misery over happiness." (Or a mind that evolved in a way so as not to care about Tuesdays.)
    There is only arbitrariness on the "ought" side of the is/ought gap. That's a bullet I've bitten long ago. You can have inconsistent and contradictory preferences and be punished by Dutch-booking or inevitable guilt, and MAYBE we could declare those irrational (e.g. "I don't care what happens to me on Tuesdays, and also I care twice as much about what happens to me on Tuesdays."), but I strongly suspect that there is no Truth Any Possible Mind Would Be Able To Discover And Accept Using Pure Logic in preferring happiness over unhappiness, or in simple arbitrary choices like that. If there is a Truth like that, if there is a proof that, for example, for any possible mind, preferring unhappiness leads to some contradiction that preferring happiness doesn't, I want it presented to me, not postulated as "surely existing", if I am to recognize a rock-solid proof of it.

    1. Of course, there's no arguing with radical skeptics. I regard such skepticism as deeply irrational, for the reasons explained in the linked post, but a skeptic may remain unmoved by the reasons I present. That does nothing to show that normative realism is wrong. It's not as though there's any law of physics that guarantees that you'll recognize good reasons when you come across them. (So I reject the suggestion that normative realism requires being able to convince "any possible mind using pure logic". There's more to reason than pure logic.)

    2. (You may also be interested in this old post arguing that the kind of epistemic debunking principles you're relying upon are self-defeating.)


    3. After hearing those comments and reading those posts, I cannot help but notice a very interesting analogy.

      I'm not a "radical skeptic" in general, just a "radical skeptic over oughts/preferences/normative statements/utility functions". But one of my friends is a radical skeptic over is's/facts/descriptive statements/reality too. A "True New-Ager", as I sometimes call her, although she self-identifies (genuinely!) as a Discordian. She refuses to make the Necessary Assumptions (such as "Logic always works" or "My senses perceive reality at least sometimes" or "Reality is consistent at least sometimes") required to argue about objective reality.
      So, I have had a lengthy philosophical discussion with her, and indeed, I am forced to conclude that:
      1) Her core philosophical position is valid. In fact, in a sense, it's much simpler than mine and therefore more sound than mine.
      2) Her core philosophical position is unassailable and beyond any reach.
      3) There isn't much of a point for me to really reason with her if she invokes that position. (She maintains that logic does not work and everything is simultaneously true and false, simultaneously existing and non-existing. Using logic or observations of what's true or false to disprove those notions or demonstrate their inconsistencies would be pulling by the bootstraps.)

      Now, the situation is reversed. You are a person who made the Assumptions that let you argue about objective morality. I am the radical skeptic over morality who refuses to make those Assumptions. Just as my friend's claim about logic not working relies on logic but is still unassailable by it, my claim that no moral opinions are true in any relevant way is a moral opinion, but is still unassailable by moral reasoning. Just as my friend mostly lives her daily life as if reality is objective, I mostly live my daily life as if morality is objective, and neither is a good argument against our positions.
      Now, I made some Assumptions on my own before, to argue about reality, so I could make those Assumptions for morality too.
      And of course, I hate making those kinds of Assumptions. They come quite literally out of nowhere, they are unprovable, they make my reasoning more complicated and more likely to be false as a whole. So, why did I make the Assumptions I made, and not the other ones? Why do I shamelessly call only mine Necessary?

    4. Well...
      1) Some of my Necessary Assumptions are required for reasoning itself, even the "pure logic" kind of it. I should have them to have ANY kind of philosophy or math or physics more complicated than "all is true and all is false". If I want to have any kind of truth, they are Necessary.
      2) Others of my Necessary Assumptions are required to reason about reality in a clear manner, establishing a map and a territory and a way to bind them. Why do I want that? Well, because people who build planes on the laws of physics can fly them, because jumping into chasms is usually fatal for the jumper, and because I want to do my daily routine without existential doubt running all over the system. I don't think any of those things is a valid philosophical argument. But I want to be able to build planes and not die from jumping into chasms nevertheless, and for that, they are Necessary. (The doubt remains, but localized in the Assumptions and not spread over the entire mindscape.)
      3) The Assumptions I'd have to make to force objective morality would be at least as painful for my reasoning as those that force objective reality. But I can live with subjective morality about as well as I can with an objective one! It does not lead me to constant doubt - in fact, it leads me to the calmness of knowing that my morality is not fundamentally better or worse than that of anyone else. It does not make me a horrible person from the common-sense standpoint - I am conventionally moral just like most other people raised by a family. It does not forbid me from reasoning about reasoning, or about life.
      4) Because no evidence leads me to Objective Moral Laws (or, as you put it in one of the posts, "The normative facts are causally inefficacious"), and because logic can only make Objective Moral Laws out of other Objective Moral Laws, every fundamentally different Objective Moral Law becomes a separate Fundamental Assumption, inflicting its own pain of additional complexity, additional risk, and additional unprovability on my system. This philosophically motivates me to make my Objectively Correct Utility Function as short and simple as possible. Even ignoring the fact that in its "physical-centric" formulation morality is insanely complicated (I care about any sets of particles with such-and-such properties, and prefer such-and-such changes in those properties that affect the particles in such-and-such ways, etc.) and saying that just an anthropocentric formulation (I care about other individuals having such-and-such thoughts and undergoing such-and-such changes) is acceptable, any mechanism that lets me devise Objective Moral Laws from your thoughts or intuitions or observations will also be a VERY complicated and a VERY costly Assumption, which includes in itself the complexity of every moral law we devise and then some. And even if I just go for arbitrarily Assuming individual laws without that kind of mechanism (which you probably don't do), one is rewarded for making absurdly simple moral systems.

      So that's my defense of radical moral skepticism. It's much simpler than the alternative, and the alternative is not Necessary.

    5. Fun stuff! Thanks for the great comments.

      Quick query: Do you hold that one ought to believe (or even just regard as more likely true) simpler hypotheses? That sounds like an ought-claim to me.

      Once you're on board with epistemic oughts, there's not really any greater cost (or "pain") involved with allowing practical/moral ones in addition. And you get the benefit of no longer having to deny that torturing babies really is worse than feeding them, etc.

      If you reject epistemic oughts (as thoroughgoing normative error theorists must), I'm doubtful that you can still (accurately) regard yourself as having captured the "Necessities" for reasoning.

    6. > Do you hold that one ought to believe (or even just regard as more likely true) simpler hypotheses?
      I believe that simpler hypotheses are more likely to be true. That's not an ought statement. It either starts out as a Necessary Assumption or follows from a different one (I'm not very sure which -_-), although observations of reality (to the possibility of which you arrive by using it) "support" it (not REALLY support it, of course, because that would be circular).
      Why is it necessary? Well, suppose more complicated hypotheses are intrinsically more likely to be true. Wow. Suddenly, I am obligated to fill the entirety of my reasoning and picture of the world with as much complexity as possible. Every particle is different from every other particle! And each of them has a different excuse for behaving in the same way! A+B=B+A? Very simple, very unlikely! Why not agree with a much likelier 24>09<=5MGK-33>>>46+G805++/+F37<608WV*H738582 instead? But because explanations infinitely more complex than anything my limited mind can come up with can be formulated, I can't believe any of those either. I'm simple, and so I'm always wrong.
      And if complexity has no impact, I can arbitrarily load my hypotheses with complexity and that'll have no impact. That's only a little less of a problem. Only one explanation is correct, but infinitely many can explain the same thing while differing in complexity. In fact, I would not be able to imagine most of them. I would still not be able to believe any that I came up with, because there are equally many equally compelling explanations for any of those things.
      In other words, a thing often has infinitely many possible explanations, but simplicity is the only natural way to organize them in a way that'll let me pick one. Which leads me to...
      > Once you're on board with epistemic oughts, there's not really any greater cost (or "pain") involved with allowing practical/moral ones in addition.
      So long as there are infinitely many potential sets of True Oughts between which I can't discern based on pure logic or observation, the only way to prioritize is to order them by simplicity. (BTW, "all possible preferences/moral systems/etc. are equal" is probably the simplest set of True Oughts. Not sure, though.)
      To say that taking the more complex sets of oughts is no more costly than taking the simple ones is to doom yourself to an infinitesimal probability of finding the right one.
      > And you get the benefit of no longer having to deny that torturing babies really is worse than feeding them, etc.
      Honestly, I'd like that. But my alien friend who likes eating babies would chide me for claiming that my obviously bad ethics is objective.
      > If you reject epistemic oughts (as thoroughgoing normative error theorists must), I'm doubtful that you can still (accurately) regard yourself as having captured the "Necessities" for reasoning.
      I'm doubtful I can make this into a coherent system too, to be honest. But I need no epistemic oughts. I won't pretend my Assumptions are an objective moral obligation. They are not based on anything; they are openly pulled out of nowhere. I could have not made them and been a radical skeptic, and I have to make them specifically to avoid that position. In fact, I suspect that making them epistemic oughts would mathematically necessarily add more complexity than it would remove, in the same way a freezer necessarily produces more heat than it removes, but I'm not sure about this particular claim.

