Saturday, September 15, 2007

Desires and Preferences

Desires address a single goal or object, and grant it an absolute weight on the scale of utility: "X is worth 5 utils to me." (We may write "|X| = 5" for short.) Preferences, in contrast, are purely comparative: "I prefer X to Y." These two types of states are presumably interdependent: in particular, I should prefer X to Y iff |X| > |Y|. But which is more fundamental?

Desires seem more basic, being monadic (taking just one object) and scalar. So I'm inclined to see comparative preferences as merely a fancy way to report relative desire weightings. But that would render intransitive preferences not just irrational, but strictly impossible. To prefer X to Y, Y to Z, and Z to X is not possible for any combination of desire weights |X|, |Y|, and |Z|, for the 'greater than' relation is transitive. But isn't it possible that someone really might be disposed to pick X when offered X or Y, to pick Y when offered Y or Z, and to pick Z when offered Z or X? Our motivational systems are messy and context-dependent in all sorts of ways that the simple desire model fails to capture.
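The impossibility claim can be checked mechanically. Here's a brute-force sketch (purely illustrative; the `prefers` helper and the weight range are my own stipulations, not anything from the post): derive pairwise preferences from scalar weights in the way the simple desire model says, and search for weights producing the cycle.

```python
import itertools

# Hypothetical helper: on the simple desire model, one prefers a to b iff |a| > |b|.
def prefers(weights, a, b):
    return weights[a] > weights[b]

# Search every integer weighting in a small range for the cycle
# "X over Y, Y over Z, Z over X".
found_cycle = False
for wx, wy, wz in itertools.product(range(-5, 6), repeat=3):
    w = {"X": wx, "Y": wy, "Z": wz}
    if prefers(w, "X", "Y") and prefers(w, "Y", "Z") and prefers(w, "Z", "X"):
        found_cycle = True

print(found_cycle)  # False: '>' on numbers is transitive, so no weights yield a cycle
```

The search comes up empty for any range of weights, which is just the transitivity of '>' made vivid: the cyclic *disposition* described above is something the desire model cannot represent at all.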

Might we instead define or construct desires out of preferences? This would be a lot messier. We've already seen that in the case of intransitive preferences there would be no coherent way to assign absolute weights to each individual desire. But so long as the preferences satisfy the standard formal requirements (transitivity, asymmetry, etc.) it might work out better. Though it's not too clear how to assign scalar values to a mere ordering of more and less preferred outcomes, one might suggest that the scale of "utils" never had any clear meaning in the first place.
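For what it's worth, the construction in the well-behaved direction is easy to sketch. Assuming the preferences form a strict total order (my stipulation here, via a hypothetical `prefers` function and an example ordering), score each item by how many alternatives it beats; the resulting "weights" reproduce the preferences, though they are merely ordinal, since any order-preserving rescaling would serve equally well:

```python
# Illustrative sketch: recover *some* scalar weighting from a preference relation.
def weights_from_preferences(items, prefers):
    """Score each item by the number of alternatives it is preferred to.
    If `prefers` is a strict total order, the scores reproduce it exactly."""
    return {a: sum(prefers(a, b) for b in items if b != a) for a in items}

items = ["X", "Y", "Z"]
rank = {"X": 3, "Y": 2, "Z": 1}               # stipulated transitive preferences
prefers = lambda a, b: rank[a] > rank[b]

w = weights_from_preferences(items, prefers)
# Check that the constructed weights agree with the original preferences.
agrees = all((w[a] > w[b]) == prefers(a, b)
             for a in items for b in items if a != b)
print(w, agrees)
```

This is the sense in which "utils" drop out as a free by-product of a coherent ordering, with no independent meaning of their own.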

Most likely neither option is wholly adequate, as it seems unlikely that either desires or preferences have exact neural correlates. They are what Dennett calls "real patterns": useful abstractions (like centers of gravity). It's unsurprising, then, that such models break down when we push them too far. I expect that desire talk will usually be more useful than preference talk, at least. Are there any other competitors for modelling human motivation?


  1. Hi Richard, maybe I'm missing something; is there something wrong with spelling things out thus?

Things have values, given by the |X| you talk about. Those values relate to desires in one direction or the other -- either things have value by virtue of being desired, or our desires attempt, in some sense, to latch onto the valuable things. (I seem to recall that you prefer the former? No matter.)

    What is it, then, to PREFER A to B? Why not the relatively deflationary: you are disposed to select A over B, given the choice. We might have to tinker a little with finkish stuff or funny cases, but we can so tinker if we have to. This won't explain preference in terms of desires or vice versa; is there some reason we think those should be related that way?

    (We could still get a plausible normative link: you should prefer A to B just in case |A| > |B|.)

  2. Actually, you can get the person to choose Z over X under the preference framework. If the person is indifferent between X, Y, and Z, but picked X randomly over Y (due to indifference), and Y randomly over Z, he can pick Z randomly over X without violating his preferences. (Though this highlights a potential weakness in preference theory: since choosing X over Y can indicate either a preference for X or indifference, you are left with rather few ways of finding out what preferences really are.)

    As for your question at the end, what about stimulus-based models of behavior? They don't model motivation at all, and you need a different model for different contexts, but they do explain behavior not captured by preference- or desire-based models.

  3. "[assuming that preferences arise from desires] would render intransitive preferences not just irrational, but strictly impossible. To prefer X to Y, Y to Z, and Z to X, is not possible for any combination of desire weights |X|, |Y|, and |Z|"

    I'm not sure that this is a problem. It needn't be the case that we have intransitive desire weights in order to have intransitive preferences. Rather, perhaps we have transitive desire weights, but have ineptly inferred our preferences from these facts.

    An analogy that does, intuitively, seem conceivable: imagine that when you ask me, I claim that Alan is taller than Bill, Bill is taller than Claire, but also that Claire is taller than Alan. This obviously isn't possible for any heights of Alan, Bill, and Claire, but I don't think it's impossible for someone to make this mistake.

  4. Hi Jonathan, that could work, though we end up divorcing desires from actual behaviour (i.e. you can be disposed to choose X over Y even if your desire weight |Y| is the greater). But perhaps that is unavoidable -- or even theoretically beneficial, if it allows us to explain weakness of the will, etc.

    Alex - right, my use of the word "report" was misleading. I meant to identify preferences with the fact of our relative desire strengths, and not just our beliefs about them.

  5. desires are something like
    0) reference previous thoughts and return result if it exists
    1) initiate thought 1
    2) determine measure
    3) initiate thought 2
    4) apply measure (maybe review measure)
    5) compare two items
    6) return higher value item

    Now we might be able to say that any items compared at any instant, in any mental state, by any person will be placed in a certain order, to which one could allocate units (depending on the relative strength of the mental pathways).
    And yet note that this wouldn't hold if mental state etc. is a variable, which it would be.

    It of course makes sense that we might prefer water when we are thirsty, or KFC when we crave KFC. Or that thinking about a strong thought, like the suffering of the Sudanese, might influence how we think about the next thought (like eating KFC).


