## Wednesday, September 28, 2005

### Fair Grapes

Stupid blog software wouldn't let me comment on an interesting post over at the Cardinal Collective, so I'll just write up my response here instead, after quoting the relevant section:
Let's suppose that person A and person B are sitting together in a room when an angel appears in a burst of light and says "Be not afraid! Here are 100 grapes for your enjoyment - divide them fairly between the two of you, and then consume them." The angel vanishes.

How should A and B divide the grapes? As it happens, person A likes grapes twice as much as person B does. (For the purists, assume that each of them has constant marginal utility and that these facts are commonly known.) The following dialogue ensues:

A: In order to divide them fairly, we should split the grapes evenly - 50 to you, 50 to me.

B: That's not a fair division: I like grapes half as much as you, so if we split 50-50, I only end up half as happy as you are. A truly fair division would make us equally happy. Therefore, I should get 66 grapes, and you should get 34.

A: That's ridiculous! It can't be fair to give fewer grapes to the person who likes them more. Our individual happiness is irrelevant - fairness means splitting the grapes evenly, nothing more.

B: How can our individual happiness be irrelevant? The whole point of having the grapes is to make ourselves happier. It's the quantity of grapes that is irrelevant here - neither of us cares about how many grapes we get except through how much utility we will gain from eating them. Our happiness is the relevant concept to equalize here.

A: [turns to a commentator] What should we do?

C: The fair thing to do is show equal concern for the interests of each of you. And B is quite correct that individual utility is what matters.

B: I knew it!

C: And since A would receive so much more utility from each grape, it would be unfair to give any at all to B. That would be to treat a small increment to B's welfare as more important than a larger increment to A's welfare. Clearly that is mistaken. You ought to give all 100 grapes to A.

B: Doh!
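To make the numbers concrete, here's a quick sketch of both proposals. The per-grape utilities of 2 and 1 are my own assumption, chosen to match "A likes grapes twice as much as B":

```python
# Assumed per-grape utilities (constant marginal utility, as stipulated):
# A values each grape at 2 units, B at 1 unit.
U_A, U_B = 2.0, 1.0
TOTAL = 100

# B's proposal: split so both end up equally happy.
# Solve U_A * a = U_B * (TOTAL - a)  =>  a = TOTAL * U_B / (U_A + U_B)
a = TOTAL * U_B / (U_A + U_B)   # grapes for A: 100/3, about 33
b = TOTAL - a                   # grapes for B: about 67 (B rounds to 34/66)

# C's proposal: maximize total utility. With constant marginal utility,
# every grape does the most good in the hands of whoever values it most.
best_a = max(range(TOTAL + 1), key=lambda g: U_A * g + U_B * (TOTAL - g))
print(best_a)  # 100: all grapes go to A
```

Note that B's 66-34 split is a rounding of the exact equal-happiness division, which gives A 33⅓ grapes.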

#### 18 comments:

1. Practical considerations aside:

Can they not trade half of the grapes with person D, who receives the same utility from grapes as person A, in exchange for a good that would give person B the same amount of utility as person A receives from the grapes?

2. I am not sure how meaningful it is to say "A gets twice as much utility as B", since you can compare the utility between you eating a grape and you eating an apple, but how does one compare you eating a grape with B eating a grape?

But otherwise I agree, even if a situation where Bren's solution does not at least somewhat apply seems quite unlikely.

3. And the indirect utilitarian says: C, your solution maximizes grape-utility, but there is more to life than grapes, and more utility at stake than grape-utility. Equal division of grapes is a resolution that no one will be too offended by, and it is good to stick to the principle of the equal allocation of goods in order to keep people from justifying allocations that disproportionately benefit their side.

And the indirect utilitarian cited empirical research and it was good.

4. Bren, A and B are under angelic orders to consume the grapes, which presumably rules out this kind of exchange.

5. I'm not an expert in econ-speak, but does constant marginal utility mean the utility one gets from grapes is directly proportional to the number of grapes, and not, say, the square root of the number of grapes?

If so, giving them all to A makes sense because it maximizes total utility. This is perhaps more obvious if you turn the grapes into a single piece of dark chocolate. Why should the situations be different, given that assumption of utility?

But in practice, the utility from eating 100 grapes isn't twice the utility from eating 50. Maximizing total utility would mean giving some grapes to B, but perhaps not 50, since B is likely to gain more utility from eating his first 10 grapes than A would from eating grapes 91-100.

This fits with what would likely happen in real life: both begin eating grapes at a leisurely rate, but B gets sick of eating grapes quicker because he doesn't like them as much.
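The commenter's point can be made concrete with a toy calculation. The square-root utility function is an assumption for illustration; the point is just that diminishing returns shift the total-utility-maximizing split away from "all to A":

```python
import math

TOTAL = 100

# Constant marginal utility (the angel's stipulation): A gets 2 units per
# grape, B gets 1. Total utility is then maximized by giving everything to A.
def total_linear(a):
    return 2 * a + (TOTAL - a)

# Diminishing marginal utility (square root, as the commenter suggests):
# the optimum shifts, and B now gets a share.
def total_sqrt(a):
    return 2 * math.sqrt(a) + math.sqrt(TOTAL - a)

best_linear = max(range(TOTAL + 1), key=total_linear)
best_sqrt = max(range(TOTAL + 1), key=total_sqrt)
print(best_linear)  # 100: A takes every grape
print(best_sqrt)    # 80: A still gets most, but B gets 20
```

So even when A likes grapes twice as much, diminishing returns hand B the last 20 grapes, matching the intuition that B's first grapes are worth more than A's last ones.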

6. geniusnz is almost there: "how does one compare you eating a grape and B eating a grape."

The proper answer is "I deny the premise of the question". In particular, one CANNOT compare A's utility for grapes with B's. It's an invalid operation on utilities.

7. Also - there is utility from dynamics beyond that created by the eating itself.

For example, the two people will probably be happier if they perceive some sort of fairness and some sort of order.

The person with 100 grapes may feel better if he shares some rather than eating them all, since keeping everything might cause feelings of guilt, while the other person may feel jealousy, gaining negative utility from each grape the other person eats.

8. Chris - that's right.

Craig - see here: "it's just plain silly to deny that we can make interpersonal comparisons [of utility]. If I get a papercut and you get your head chopped off, it is absurd to deny that you have suffered a greater harm."

While the difference in utility two people get from grapes might be a lot closer, and thus more difficult to discern, I don't see any reason why such a case is so different in principle as to justify your claim that such comparisons, however difficult, are necessarily "invalid".

9. It's an interesting position, but I don't know that most people would think of that as fair. Imagine the "utility monster", that really likes everything. Under an "equal concern" rule, the utility monster would get everything, but I doubt that most people would consider that allocation fair.

10. > it's just plain silly to deny that we can make interpersonal comparisons

maybe it would be useful to define the calibration of our utility scale (e.g. sharing utility ratings in a sort of tender, with equal starting amounts of negative and positive utility, and attributing 0 to sleeping).

A little thought reveals there are quite different ways to do this - and the simple ones seem quite counter intuitive... Has anyone done an analysis of how it is best done?

11. B: OK, you can have all the grapes, as long as you do my dishes.

12. Then the Chicago-school economist invokes Coase's theorem, and declares that it doesn't matter how you divide the grapes: in absence of transaction costs, A will end up with all the grapes anyway after some amount of bartering with B, and total utility will be maximized.

13. I'm not convinced that we really can imagine a utility monster. But I'll grant that utilitarianism doesn't always fit well with our initial intuitions about fairness. In light of C's reasoning, though, I'm inclined to think "so much the worse for our initial intuitions."

(Though, as Blar suggests, indirect utilitarianism would probably recommend a more intuitively "fair" division in any real life situation anyway.)

14. Since no one else seems to be thinking about it...

Comparing interpersonal utility is not a fatal problem, because I can fairly accurately predict your preferences (e.g. I expect you do not want to be poked in the eye). Therefore I must have a concept of their relative value, and having that concept implies our scales are fairly similar - similar enough that I could get a 90%-correct allocation of preferences and utility for all people for all events if I had enough time and information. (Of course that would probably fall short of the allocation they might achieve if they allocated it to themselves, which might, in turn, fall short of the allocation with perfect information.) I can test that with a few random events if we really want.

Anyway, almost every moral philosophy faces some sort of approximation problems.

So we have a set of ordinal scales that are about as accurate as any other moral philosophy's, and we can reasonably compare them - BUT only when we have a huge set of variables, and only probabilistically. I.e., I can't say you like grapes more than apples unless I can match up our ordinal scales on maybe 10 other standard events and find that you place them in a certain identical order, with just the grapes being out of place (also, I would want to be sure we had removed any game-theory aspects!).

But still, this demonstrates only that maybe A wants the grapes "more" than B; it doesn't make very clear how much more.

Economics helps a bit here, in that we can record a large number of "trades", or much better, "gambles" (because this avoids diminishing marginal utility, although it does add risk aversion), where an individual might make bets with money (or similar) to own various things.

Once we calibrate a person's desires, we could then use a standard event (let's say having \$10 sits in a matching location in both preference lists) and allow the people to make bets against that event, giving us a quantitative scale.

Of course there is an issue with this "preference utilitarianism", since it implies that people will make the right decisions for themselves (and won't try to second-guess the system). The system could be adjusted for more perfect decision-making by assessing the persons' after-the-event analysis of their own decisions and their effective satisfaction, then using that to determine where people are making potentially irrational decisions (such as killing someone in a fit of rage, let's say).

This still leaves me with the most troubling problem: defining the utility of a person who dies or fails to come into existence, and thus whether we want to maximize average or total utility. Both seem to create unpleasant conclusions:
1) A single super-happy man
2) A billion only marginally happy people
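For what it's worth, the tension is easy to make concrete with made-up numbers (purely illustrative):

```python
# Made-up utilities for the two unpleasant conclusions:
n1, u1 = 1, 1000.0       # one super-happy man
n2, u2 = 10**9, 0.01     # a billion only marginally happy people

avg1, avg2 = u1, u2            # average utility per person
tot1, tot2 = n1 * u1, n2 * u2  # total utility of each world

print(avg1 > avg2)  # True: maximizing the average favors the lone happy man
print(tot2 > tot1)  # True: maximizing the total favors the huge population
```

Each rule picks a different world, and neither choice feels comfortable.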

15. I can think of 3 ways to put intelligence into this (as surely Richard, or anyone who wants to devalue chimps, or rats for that matter, must want to do):

1) As a multiplier - i.e. Bill matters more than Mark by the ratio to which he is smarter (therefore every unit on his preference list is multiplied by that ratio when comparing them).

Thus every unit of utility is measured as intelligence * approximated utility (from the above method)

This one requires a much-extended upper end to the scale to provide the discrimination against lower life forms that is desired, BUT it would also produce quite an elitist society (Richard and I might benefit, but some of our friends might not).

2) As a matter of having more interests and more abstract interests.
This works because there will simply be more opportunities to make a smart person happier.
After calibrating normal things like being pricked with a pin, there will be a large set of higher goals that only exist in the preference set of the more intelligent individual - therefore the system might automatically favor that individual.
This is related to the "domain-specific knowledge gathering" sort of things discussed in the humans thread.
However, this probably won't produce the total domination of lower life forms most would desire. Philosophers might benefit a lot from this.

3) Writing off certain things as invalid for comparison - for example, saying a shark can't feel pain even if it fears pain more than a set of other things on its preference scale that also exists on our preference scale in almost identical form.
Maybe a better example is saying a chimp can't feel love, or something along those lines.
This seems more like what people actually do, but it also seems pretty dubious morally. I guess one could argue that each step is an evolutionary advancement, and that each advancement carries with it some sort of potential to have rights, or intrinsic rights.

16. Well, Richard, it appears that They have discovered how to "leave a comment". What's your plan B?

17. Let me just point out, for posterity, that there was one spam comment before my last comment (actually, before Genius's last comment), and two more after it. So it made sense when I left it, before plan B was ruthlessly enacted.

I'll add that spammers found my blog about two days ago, so I have gone to word verification. I was both ruthless and quick, so I caught 'em after just the second comment (though a third one slipped in while I was changing the settings).

18. Ha, yeah, sorry about that, my deletions did leave your comment looking kind of odd :)

"I caught 'em after just the second comment"

If only all of us could be so lucky!
