Wednesday, July 19, 2006

July Open Thread

1) Another Philosophers' Carnival approaches! Get your entries in by the end of the week.

2) I've enjoyed Ricardo's visit. Feel free to suggest any other topics you'd like to hear his views on.

3) This is an open thread -- feel free to raise a philosophical puzzle, or talk about whatever else interests you at present.


  1. My question is, why would someone decide to go into philosophy? Is it that you think you're going to solve some big problem? A love of thinking/arguing or what?

    I enjoy it as a hobby, or something to play around with (like Su Doku), but I'm not sure why someone would choose it over something more practical.

  2. First, k:
    An OK answer might run as follows. I ask myself, "What do I want to go into?" Being the kind of person that I am, I think of all the positive reasons to go into all of the different occupations that cross my mind. Then, being the kind of person that I am, I think of which reasons count the most, and why those reasons count more. Then I think of how I came to decide which reasons counted more and why. If I pursue all of those questions skeptically and rigorously enough, I've probably already given myself an ivory tower's worth of philosophical material to chew on.

    I think it's funny how much metaphor and polysemy there is to be found in the ordinary words logicians use to talk about their subject matter. Since I don't have a blog anymore, I thought, with Richard's permission, I would post a little poem I wrote a minute ago that might amuse somebody.
    I wish to end each day unsaturated,
    not an item,
    all sense, intention:
    quality -

    I know not yet of what;
    of what I yet know not.

  3. Ian, I don't think that really answered my question. Wouldn't going into psychology, or studying anthropology/history (history of decision making), or something connected with evolutionary biology or etc. be a much better route to try and answer such questions?

    For example, if I look at this blog, we have lots of talk of possible worlds and such. Let's say Richard solves many problems in it and becomes one of the experts in modal logic; my first thought is, "so what?". Many branches have splintered off from philosophy and it seems they have long outpaced it, while philosophy makes little to no progress. So, once again, why choose straight philosophy over, say, the sciences?

  4. Okay, here's a puzzle that's been on my mind for a few days. How do we account for the intuitive validity of the following argument?

    (1) If you're going to do it, you should do it right.
    (2) You're not going to do it right.
    (3) Therefore, you should not do it.

    This appears similar to a modus tollens argument, but there's the deontic "should" in there.

  5. K, my old post on "progress in philosophy" might be relevant. But to address your initial question, the main reason I'm going into philosophy is because I find it so damn fun. I just can't imagine myself enjoying any other vocation half as much. I also think it's intrinsically valuable -- you know, the "pursuit of knowledge" and all that. Criticisms based on the (lack of) "usefulness" of the arts and humanities just seem to completely miss the point of being human. Sure, engineering may be more instrumentally useful. So what? That doesn't mean anything unless it's instrumental to an end which is valuable in itself. Why not cut right to the chase? ;-)

    (Besides which, I actually think philosophy is instrumentally very important to public debate on ethical and political issues. Anything which injects more rationality into public discourse can't be all bad -- even the humble act of blogging!)

  6. Richard - how about inviting Ricardo to post on 4-dimensionalism (or 2-D semantics)... I take it that he doesn't believe objects have temporal parts? (Although he did talk about time slices in his last post.)

    Brock - what's the problem? The argument doesn't look valid to me. (2) just claims that you'll fall short of the requirement to 'do it right' that (1) says is imposed upon you if you do it. But I don't see why that suggests you shouldn't do it.

  7. It may be easier to understand Brock's argument if we restate (1) as a wide-scope requirement, that is:

    (1-W) You ought that: if you're going to do it, do it right.

    (Excuse the ungrammaticality; it's a clarificatory trick I got from John Broome since standard English lacks the grammatical power to express the wide scope unambiguously.)

    (1-W') You ought not to both: do it, and not do it right.

    Note that (2) says that we're not going to do it right. So we will only satisfy the requirement in (1-W') if we don't "do it" at all. Since we ought to satisfy this requirement, it might seem to follow that we shouldn't "do it".

    (This actually doesn't follow, at least on the wide scope reading. It may be that, although it happens that we actually won't do it right, nevertheless that is what we should have done. (1-W) underdetermines the ought-facts, and so leaves open this possibility. See the linked post for more detail on these sorts of inferences.)

    Coming at it from another angle: Brock's original argument hinges on the following rule of inference: From Q implies ought(P), and not-P, infer ought(not-Q). [Is this a theorem of deontic logic?] The intuitive basis for this inference seems to be the idea that we should prevent unfulfilled obligations from arising. But the intuition might be misguided for the sorts of reasons that it's not always good to prevent harm. One can avoid unfulfilled obligations either by fulfilling them, or preventing the obligation from arising in the first place. It isn't entirely clear (though nor is it clearly false) that these two methods are equally legitimate or admirable.
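    The scope distinction, and why the inference fails, can be sketched in standard deontic logic (SDL). Writing $O$ for the obligation operator, $D$ for "you do it", and $R$ for "you do it right" (my symbols, not Brock's):

    ```latex
    % Narrow-scope reading of (1): doing it triggers an obligation to do it right.
    D \to O(R)

    % Wide-scope reading, as in (1-W)/(1-W'): one obligation governing the whole conditional.
    O(D \to R) \;\equiv\; O(\neg(D \wedge \neg R))

    % The original argument needs the rule: from (1) and \neg R, infer O(\neg D).
    % In SDL, O is closed under logical consequence only *within* its scope:
    %   O(D \to R),\; O(\neg R) \;\vdash\; O(\neg D)
    % But premise (2) supplies only the plain fact \neg R, not O(\neg R),
    % so neither reading of (1) together with \neg R yields O(\neg D).
    ```

    In other words, the argument equivocates between a fact about what will happen and an obligation-claim; only the latter would let the "ought" detach.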

  8. Brock,
    I see nothing wrong with it in itself (i.e. it is not self-contradictory, although it might be implausible that anyone would know (2) if you didn't know it already). Except that its prescriptive component looks like it is based on a false assumption.

    *climbs aboard his hobby horse*
    Any proposition about what you should do depends on what we assume you are in control of.
    For example, we might assume a "Richard" can control his "intent" but not directly the "consequences". Similarly, a society might be able to control "the rules" but not directly "the intent". (Even these are debatable.) So any consequence-based argument that punishes for error (but doesn't reward anything) will always result in prescribing inaction.

  9. Richard,
    If you can compare individuals' utility, how do you compare that with the other concepts of utility we have brought up, such as the "life story" utility (a life worth living) and the wider utility that relates to why we shouldn't release the "no more children" disease?

    E.g. if they conflicted how would you compare?

  10. I'm not aware of any distinction to be made between an "individual's utility" and their "life story" utility. (I had been proposing that the latter is how we should understand the former.)

    The 'wider' notion concerns the value of a state of affairs, which plausibly supervenes on the welfare of the people who exist in it. A potential "conflict" here might be if we had the option to choose between:

    (A) a future state of affairs that is a bit worse for everyone who presently exists, but very beneficial for future persons A1, A2, ... An. (No B-people exist.)

    (B) a future state of affairs that is better for presently existing people, but only moderately good for future persons B1, B2, ... Bn. (No A-people exist.)

    Note the crucial point that the future A-people are different from the future B-people. So nobody is harmed by choosing option B, even though option A contains better/happier lives.

    Should we care more about harms and benefits, or impersonal outcomes? I'd expect utilitarians to feel more pulled towards the optimality of option A. I think that's the better option, myself. (For more on this, see here.) What do you reckon?

  11. > I'm not aware of any distinction to be made between an "individual's utility" and their "life story" utility.

    I thought you would say that -- and yet it seems a little similar to the individual-community issue (after all, in a sense YOU are a community of "you"s over time).

    If your life story is not a simple sum of your happiness or preferences (I assume you take this position?), presumably the community is not the simple sum of its constituents.

    As to your example I do indeed favour A.

    Part of what I was hinting at is the average-utility vs. total-utility issue, i.e. when the scenarios have different numbers of people (as in the infertility virus hypothetical) [and whether we should include past people or just future people if we take the average!].

    But also, is it possible that some "macro goal" exists (like "uncovering the secrets of the universe") that people potentially share for the group but not for themselves, or that in some odd way the "group" has but the individuals don't?

    I also wonder, from a pragmatic perspective, whether "life story utility" would be next to useless as a tool compared to preference... maybe not?

  12. Richard, your recent surprise about interpersonal utility comparisons suggests that you have not fully thought out the implications of global preferences. It seems like you started with preference utilitarianism, saw a gap in the theory, and invoked global preferences to fill that gap, but you haven't completely taken the next step and looked through the resulting theory as a whole, taking global preferences for what they are rather than as a way of filling the role that needed filling.

    For instance, is it possible for the rest of a person's preferences to be irrelevant once we interpret them in light of the person's global preferences? Consider a Benthamite who believes that pleasure and displeasure are the only relevant benefits and harms. Let's call him Werewolf. Werewolf's global preference is that his local preferences be satisfied only to the extent that their satisfaction impacts his hedonic experience (at least that's what's left once we use Success Theory to exclude the stuff that doesn't have a bearing on his life). He thinks that his desires, apart from being brute psychological facts, are nothing more than means to the end of his pleasure (and other beings' pleasure). Does that mean that Werewolf has made Benthamite utilitarianism correct as a theory of how Werewolf should be treated? A similar question arises for anyone whose global preferences include elements of a theory of welfare that is contrary to preference satisfaction. Unless you think that these kinds of cases are exceedingly rare (or nonexistent), or that they would disappear under idealization, it may be misleading to describe your theory as a desire satisfaction theory.

    Or, what about the case of a person whose global preferences are built around a false premise? Consider a devout nun whose overriding global preference is to do God's will. She believes in a complex theology that describes what God wants, but, unbeknownst to her, God does not exist. The nonexistence of God shatters the entire structure of her global preferences. If she'd become convinced of His nonexistence during her life then it would have been devastating, and it would have taken many years (and many external sources of inspiration and support) to rebuild a life from the wreckage, but given the stability of her spiritual existence this had almost no chance of happening. Assuming she remains devout throughout her life, what does your theory say about her welfare? The most natural take on your theory, I think, implies that her life was about as bad as a person's life could be, as her entire structure of global preferences was as unfulfilled as can be. That would be quite a bullet to bite. An alternative take is that idealization would leave her with drastically different global preferences (or perhaps no global preferences at all?), and that we should assess her life in accordance with these drastically different global preferences (or maybe just by hedonic standards?). Similar problems arise for anyone else with false assumptions built into their global preferences.

    It also seems like it could be possible for people to have degenerate (or circular) global preferences. For example, someone might only want his local preferences to be satisfied insofar as that increases his welfare. He's not sure what the correct theory of welfare is, but whatever it is that's what he wants for himself. You're counting on his preferences to do the work in defining his welfare, but he's assuming that the work is done elsewhere. It seems like it is possible for people to have this type of agnostic, fill-in-the-details preference, at least when applied to questions other than one's own welfare. For instance, a father may want whatever is best for his son's welfare, without knowing what that is. So can your theory rule out the degenerate kind of global preference in a plausible way? I guess you'd want idealization to do the trick (maybe doing something with these inappropriate gaps in global preferences that is similar to what you do with false assumptions in global preferences), but again, it's not clear what idealization would leave behind.

    You also want to be careful about having idealization do too much work, or at least you want to be clear about how it is doing that work, and where it gets you. Otherwise your theory starts to look like my favorite ideal theory, the ideal philosopher theory: the correct moral theory is the one that an idealized philosopher would endorse.

  13. Blar, what more is involved in "taking global preferences for what they are"? There are plenty of difficult issues here (plus those you raise in the other thread) -- are these just meant as general challenges, or do you have a particular alternative in mind?

    "is it possible for the rest of a person's preferences to be irrelevant once we interpret them in light of the person's global preferences?"

    Yes, I don't see why not. The rest would be mere undesired cravings, like a drug addict's, perhaps, and not ones that the person wholly endorses. If our idealized Werewolf friend ultimately only wants a life of maximal pleasure, then that is indeed what's best for him. This anti-paternalism is at the core of welfare subjectivism, so I'd deny that it's "misleading to describe your theory as a desire satisfaction theory". It's the first-order desire theories that betray their subjectivist roots, by insisting that any desire-satisfaction is good for people regardless of whether they want it or not!

    Misguided (ignorance-based) preferences always pose a challenge. But recall that on my view what matters is that the idealized person would endorse the life-story that was lived. The nun might well have lived a decent and moderately praiseworthy (if imperfect) life, despite being so deeply misled. If, on learning the truth, her idealized self really would despair, then I don't think it's so implausible to think that there's a reason behind this despair, i.e. that the life really was an awful waste. The theory seems on pretty safe ground in this respect -- though you might worry that it's encroaching on your "ideal philosopher" territory. ;-)

    Preference paradoxes are good fun, I've discussed them before here, though the ones you raise might be even more troublesome. But again the "ideal endorsement of actual life-story" approach would seem to sidestep these issues.

