Thursday, January 27, 2005

Ideal Decisions

Peter Railton ('Moral Realism' in Facts, Values, and Norms, p.12) writes:
Suppose that one desires X, but wonders whether X really is part of one's good. This puzzlement typically arises because one feels that one knows too little about X, oneself, or one's world, or because one senses that one is not being adequately rational or reflective in assessing the information one has...

I think it's plausible that ideal agent theories identify our self-interest. That is, the choice I would make if I were ideally rational and fully informed, etc., is probably the choice that is best for me. But it may be helpful to raise a variant of the old Euthyphro dilemma, and ask: Is X in my best interests because my idealized self would choose it, or would he choose it because it is in my best interests?

I think the answer is clearly the latter. But that then suggests that the reason why I should pursue X is not just that my ideal self would choose it. Rather, the real reason must be whatever lay behind my ideal self's choice. My (normative) reasons are his (descriptive) reasons, in other words.

So I'm now wondering: what would those reasons be? In particular, I wonder whether they would simply reduce to the desire-fulfillment theory of self-interest that I've previously advocated. That is, what's good for us is for our strongest desires to be fulfilled in objective fact. The 'ideal agent' heuristic just serves to rule out any subjective mistakes we might make, such as falsely believing that Y would fulfill our desires.

Do you agree with this reduction, or do you think your idealized self might want you to value strikingly different things from what you do in fact value?

From my earlier post on ideal agent theories:
One way to think of this would be to consider A as temporarily gaining full cognitive powers (i.e. turning into A+), and being frozen in a moment of time until he makes a decision, whilst knowing that the moment the decision is made, he will be turned back into A. This ensures that A+ has motivation to seek what is in A's genuine interest, even in those cases when the apparent interests of A and A+ would otherwise diverge.

Can you imagine being in A+'s position here, and choosing to do something other than what would best fulfill A's desires? I'm not sure I can. [Recall that A+ is perfectly rational.] I just don't know what it would be for something entirely undesired (and not even indirectly fulfilling any other desires) to be in A's "interests". But those who don't subscribe to the desire-fulfillment theory of value must be imagining something like this. So I'd very much like to hear what it is.

11 comments:

  1. > your idealized self might want you to value strikingly different things from what you do in fact value?

    Yes. I think there are two solutions to the problem which work on utility maximization: change our desires, or fulfill those desires. One can use either, or (more likely) a combination of both.

    I also note that A+ may have effective disdain for A. Let's say you are a cannibal (or meat eater) about to eat someone - then you suddenly become enlightened, most importantly gaining full empathy with the person (or animal) you are about to eat. You know that when you become A again, you will go back to being the creature that does that sort of thing - and you find it disgusting.
    Rather like an insane person on medication thinking about what they do when they are NOT on medication.

    Another interesting possibility is that you do "what is best for A" and yet it is so complex that "A" never understands it and always thinks you did the wrong thing. 

    Posted by GeniusNZ

  2. I agree with your answer to the Euthyphro variant. I know people who would give the opposite answer, and have never been able to understand their reasons. I'll keep trying, though!

    I think that A+ would often choose to do things other than what would fulfill A's desires. In fact, I think it is possible that A+ would choose things directly contrary to what would fulfill A's desires. Consider that as we become more rational, we often acquire new desires and abandon old ones. When I was a child, I had no concern for others' well-being. I do now. I attribute this change to an increase in my degree of rationality. If so, then I am a closer approximation of my +self than my child-self was. And I make decisions on a daily basis which directly thwart the desires my child-self would have in the same circumstances. I expect my +self could well bear the same relationship to my present self as I do to my child-self.

    If I'm being unclear or missing the point of your question please let me know. 

    Posted by david

  3. Hmm, I think bringing up morality might muddle the problem slightly. Recall that we're asking A+ to choose what is in A's best interests - so if A+'s moral concerns conflict with that, his resultant answer isn't going to shed any light on the problem at hand. What I'm really wondering here is: could it be in our best interests to do something other than what would best fulfill our desires?

    The issue of morality makes for an interesting side-topic, however. Do you think that greater rationality would automatically change one's values/desires so as to be more moral? I'm not so sure about that. ('Evil genius' doesn't seem a contradiction in terms?) But then I think rationality is just instrumental rationality - it can tell us how best to achieve our ends, but it cannot say what those ends themselves should be. I take it you believe differently? 

    Posted by Richard

  4. http://melbournephilosopher.blogspot.com/2005/01/theories-of-decisions.html

    The above link points to something I came up with in conversation with a group of friends, and expresses what we think about the way we make decisions.

    I rather like your desire-fulfillment theory of motivation, although it could possibly be more concrete on issues relating to time, etc. However, raw DF with no psychological description of mind might be thought to boil down to behaviorism.

    Do you think that's a fair accusation? Do you think that DF needs to be blended with a psychological description of human thought, or is it more of an abstract tool for you? How do instincts come into play? Do you think that reasoning informs "the choicemaker", or that reasoning is the choicemaker and is informed by instinct?

    Cheers,
    -MP 

    Posted by Tennessee Leeuwenburg

  5. A key difference here is that I'm interested in what decisions we ought to make, rather than those which we actually do make. DF is meant as a normative theory of wellbeing, rather than a descriptive theory of action/motivation. (Though I do also think that most actions can be well explained by belief-desire psychology, and have discussed this in previous posts, that wasn't the intended focus of this one.)

    I do think beliefs and desires are real psychological entities, so this goes beyond mere behaviourism. I suppose I personally am just using DF as an abstract tool, but I agree that it probably needs to be better grounded in psychology.

    Instincts will influence our behaviour, but I don't think they're very relevant to wellbeing? And I'm not sure that I follow your final question. 

    Posted by Richard

  6. > What I'm really wondering here is: could it be in our best interests to do something other than what would best fulfill our desires?

    Yes, I think changing A's desires to something that can easily be fulfilled, or that is for some reason considered superior, is rational if that is within A+'s power (i.e. A+ may want A to stop smoking, or to not desire to be Miss World if he is an old and fat man).

    Posted by GeniusNZ

  7. Richard,
    I didn't intend my concern for others to necessarily be a moral concern. I desire others' well-being; I care about whether my mother is in pain, for instance, and some of my desires would be thwarted if she were in pain. This desire needn't have roots in morality. I guess even Hitler would be upset if people he cared about were in pain.

    Still, if you don't like the example, we can switch it. Suppose that as a child I did not value knowledge for its own sake, but I do now. The same point I made before would follow.

    Generally speaking: As we become more rational, we don't just learn better how to fulfill our present desires. We acquire new desires we didn't have before. So I think it's reasonable to think that if I became David+, I'd acquire new desires. Thus, it seems to me, you can't assent to both of the following propositions:

    1. A's well-being consists in the satisfaction of A's present desires.

    2. A's well-being is maximized when A behaves as A+ would. 

    Posted by david

  8. David, I'm still not convinced. In what sense is gaining a new ultimate end 'rational'? I agree that we do develop new ultimate ends as we age, but I'm not sure that rationality has anything to do with it. Or, in the cases where rationality is involved, that's because the new desire will help us fulfill our old ones better. (Aside to GeniusNZ - a desire to stop smoking would fall in this category.)

    I should also add that I assent to neither proposition you mentioned there, at least not exactly as stated. I would instead say:

    1) A's well-being consists in the fulfillment of A's desires.

    2) A's well-being is maximized when A behaves as A+ would want (were he in the actual condition of A).

    The minor tweaks are to include prudence (looking out for one's future desires), and to remove any conflicts-of-interest between A+ and A.

    That these are consistent follows from the instrumental theory of rationality. But perhaps the instrumental theory is false. Perhaps some ultimate ends are intrinsically more rational than others - I would certainly be interested to hear any argument you may have to that effect.

    Posted by Richard

  9. Richard,
    I don't think I'll be able to provide the argument you ask for, but I'll try.

    As A acquires more knowledge, A becomes

    a. aware of more possible objects of "ultimate" desires (i.e., more possible objects to be desired as ultimate ends), and
    b. more rational.

    Are (a) and (b) true or false? If they are both true, read on:

    It's clear that A cannot desire X as an object of ultimate desire until A becomes aware of it. Let's say that an "unknown object" (a "UO") is an object which A is unaware of and does not desire, but would desire as an ultimate end if A were made aware of it. If any UO's exist for A, then as A acquires more of the right kind of knowledge, A acquires more desires of objects as ultimate ends. But given (b), as A acquires more knowledge, A also becomes more rational. It follows that as A becomes more rational, A acquires more desires of objects as ultimate ends.

    Therefore:

    IF for A there are any UO's, THEN there are some ultimate ends which are more rational (for A) than others.

    Does everything I've said so far work? If so, then the question we need to answer is: For a given A, is it possible for there to be any UO's?

    I think there are very likely to be UO's for some or many A's. I think some of the examples I've already given serve to illustrate this. As a child, I was not vividly aware of the possibility that my mother could be in pain, so I did not ultimately desire her not to be in pain. Or suppose I'm a slave and have been prohibited from reading books. It's possible that if I became aware of the sort of knowledge books can convey, I'd desire knowledge for its own sake, even if I don't now.

    Still not sure whether I'm missing your point. Apologies for any continuing obtuseness. 

    Posted by david

  10. Correction:

    When I wrote:

    "It follows that as A becomes more rational, A acquires more desires of objects as ultimate ends."

    I should have written:

    "Thus, assuming there are UO's for A, it follows that, as A becomes more rational, A acquires more desires of objects as ultimate ends."

    I hereby authorize Richard to fix my comment if he is so inclined.

     

    Posted by david

  11. Thanks David, that's a very interesting argument!

    I'm not sure that knowledge and rationality are tied together as tightly as you suggest. Even if there's a correlation between them, that still doesn't mean that the UO has anything to do with rationality. ('Correlation is not causation', as they say.) So I don't think your conclusion, as a denial of the instrumental theory of rationality, quite follows.

    But in the present context, the ideal agent A+ is both more rational and more knowledgeable. So it does (I think) follow that if there are any UOs for A, then A+'s decision may diverge from DF.

    In other words, you've shown that I may have been mistaken to say "That these [1 & 2] are consistent follows from the instrumental theory of rationality".

    I'll definitely have to give this some more thought - and perhaps a new blog post sometime in future.

    P.S. Blogger doesn't actually let us edit comments at all, other than to delete them entirely, which would probably be a bit excessive in this case ;) 

    Posted by Richard

