Comments on Philosophy, et cetera: "Fair Grapes" (Richard Y Chappell)

Anonymous (2005-10-02 01:34):
Ha, yeah, sorry about that, my deletions did leave your comment looking kind of odd :)

"I caught 'em after just the second comment"

If only all of us could be so lucky!

Anonymous (2005-10-02 01:21):
Let me just point out, for posterity, that there was one spam comment before my last comment (actually, before Genius's last comment), and two more after it. So it made sense when I left it, before plan B was ruthlessly enacted.

I'll add that spammers found my blog about two days ago, so I have gone to word verification. I was both ruthless and quick, so I caught 'em after just the second comment (though a third one slipped in while I was changing the settings).

Anonymous (2005-09-30 23:50):
Well, Richard, it appears that They have discovered how to "leave a comment". What's your plan B?

Genius (2005-09-30 23:15):
I can think of three ways to put intelligence into this (as surely Richard, or anyone who wants to devalue chimps, or rats for that matter, must want to do):

1) As a multiplier, i.e.
Bill matters more than Mark by the ratio by which he is smarter (so every unit on his preference list is multiplied by that ratio when comparing them).

Thus every unit of utility is measured as intelligence * approximated utility (from the above method).

This one requires a much-extended upper end to the scale to provide the discrimination against lower life forms that is desired, BUT it would also produce quite an elitist society (Richard and I might benefit, but some of our friends might not).

2) As a matter of having more interests, and more abstract interests.
This works because there will simply be more opportunities to make a smart person happier.
After calibrating ordinary things like being pricked with a pin, there will be a large set of higher goals that exist only for the more intelligent individual, so the system might automatically favor that individual.
This is related to the "domain-specific knowledge gathering" sort of thing discussed in the humans thread.
However, this probably won't produce the total domination of lower life forms that most would desire.
Philosophers might benefit a lot from this.

3) Writing off certain things as invalid for comparison: for example, saying a shark can't feel pain even if it fears pain more than a set of other things on its preference scale that also exist on our preference scale in almost identical form.
Maybe a better example is saying a chimp can't feel love, or something along those lines.
This seems more like what people actually do, but it also seems pretty dubious morally. I guess one could argue that each step is an evolutionary advancement, and that each advancement carries with it some sort of potential to have rights, or intrinsic rights.

Genius (2005-09-30 18:54):
Since no one else seems to be thinking about it...

Comparing interpersonal utility is not a fatal problem. I can fairly accurately predict your preferences (e.g. I expect you do not want to be poked in the eye), so I must have some concept of their relative value. That I have that concept implies our scales are fairly similar, and that I could get a 90% correct allocation of preferences and utility for all people and all events if I had enough time and information. (Of course, that would probably fall short of the allocation they might achieve if they allocated it themselves, which might, in turn, fall short of the allocation with perfect information.) I can test that with a few random events if we really want.

Anyway, almost every moral philosophy faces some sort of approximation problem.

So we have a set of ordinal scales that are about as accurate as any other moral philosophy's, and we can reasonably compare them, BUT only when we have a huge set of variables, and only probabilistically. I.e.
I can't say you like grapes more than apples unless I can match up our ordinal scales with maybe 10 other standard events and find that you place them in an identical order (I would also want to be sure we had removed any game-theory aspects!) with just the grapes being out of place.

But while this demonstrates that maybe A wants the grapes "more" than B, it doesn't make it very clear how much more.

Economics helps a bit here, in that we can record a large number of "trades", or better yet "gambles" (because gambles avoid diminishing marginal utility, although they do add risk aversion), where an individual might make bets with money (or similar) to own various things.

Once we calibrate a person's desires, we could then use a standard event (let's say having $10 sat in a matching location in both preference lists) and allow the people to make bets against that event, to give us a quantitative scale.

Of course, there is an issue with this "preference utilitarianism", since it implies that people will make the right decisions for themselves (and won't try to second-guess the system).
The system could be adjusted for more perfect decision-making by assessing each person's after-the-event analysis of their own decisions and their effective satisfaction, then using that to determine where people are making potentially irrational decisions (such as, let's say, killing someone in a fit of rage).

This still leaves me with the most troubling problem: defining the utility of a person who dies or fails to come into existence, and thus whether we want to maximize average or total utility.
Both seem to create unpleasant conclusions:
1) A single super-happy man
2) A billion only marginally happy people

Anonymous (2005-09-30 00:13):
I'm not convinced (http://pixnaps.blogspot.com/2005/05/best-distribution.html) that we really can imagine a utility monster. But I'll grant that utilitarianism doesn't always fit well with our initial intuitions about fairness. In light of C's reasoning, though, I'm inclined to think "so much the worse for our initial intuitions."

(Though, as Blar suggests, indirect utilitarianism (http://pixnaps.blogspot.com/2005/06/indirect-utilitarianism.html) would probably recommend a more intuitively "fair" division in any real-life situation anyway.)

Anonymous (2005-09-29 17:41):
Then the Chicago-school economist invokes Coase's theorem, and declares that it doesn't matter how you divide the grapes: in the absence of transaction costs, A will end up with all the grapes anyway after some amount of bartering with B, and total utility will be maximized.

Anonymous (2005-09-29 07:55):
B: OK, you can have all the grapes, as long as you do my dishes.

Anonymous (2005-09-29 04:39):
> it's just plain silly to deny that we can make interpersonal comparisons

Maybe it would be
useful to define the calibration of our utility scale (e.g. sharing utility ratings in a sort of tender, with equal starting amounts of negative and positive and attributing 0 to sleeping).

A little thought reveals there are quite different ways to do this, and the simple ones seem quite counter-intuitive... Has anyone done an analysis of how it is best done?

Anonymous (2005-09-29 03:57):
It's an interesting position, but I don't know that most people would think of that as fair. Imagine the "utility monster", who really likes everything. Under an "equal concern" rule, the utility monster would get everything, but I doubt that most people would consider that allocation fair.

Anonymous (2005-09-29 02:30):
Chris - that's right.

Craig - see here (http://pixnaps.blogspot.com/2005/06/sacrifice-and-separate-persons.html): "it's just plain silly to deny that we can make interpersonal comparisons [of utility].
If I get a papercut and you get your head chopped off, it is absurd to deny that you have suffered a greater harm."

While the difference in utility two people get from grapes might be a lot smaller, and thus harder to discern, I don't see any reason why such a case is so different in principle as to justify your claim that such comparisons, however difficult, are necessarily "invalid".

Anonymous (2005-09-29 02:21):
Also, there is utility from dynamics beyond those created by the eating itself.

For example, the two people will probably be happier if they perceive some sort of fairness and some sort of order.

The person with 100 grapes may feel better if he shares some rather than eating them all, since eating them all might cause feelings of guilt, etc., while the other person may feel jealousy, giving him negative utility from each grape the other person eats.

Anonymous (2005-09-29 00:25):
geniusnz is almost there: "how does one compare you eating a grape and B eating a grape."

The proper answer is "I deny the premise of the question". In particular, one CANNOT compare A's utility for grapes with B's.
It's an invalid operation on utilities.

Anonymous (2005-09-28 20:59):
I'm not an expert in econ-speak, but does constant marginal utility mean the utility one gets from grapes is directly proportional to the number of grapes, and not, say, the square root of the number of grapes?

If so, giving them all to A makes sense because it maximizes total utility. This is perhaps more obvious if you turn the grapes into a single piece of dark chocolate. Why should the situations be different, given that assumption about utility?

But in practice, the utility from eating 100 grapes isn't twice the utility from eating 50. Maximizing total utility would mean giving some grapes to B, though perhaps not 50, since B is likely to gain more utility from eating his first 10 grapes than A would from eating grapes 91-100.

This fits with what would likely happen in real life: both begin eating grapes at a leisurely rate, but B gets sick of eating grapes sooner because he doesn't like them as much.

Anonymous (2005-09-28 13:59):
Bren, A and B are under angelic orders to consume the grapes, which presumably rules out this kind of exchange.

Anonymous (2005-09-28 12:45):
And the indirect utilitarian says: C, your solution maximizes grape-utility, but there is more to life than grapes, and more utility at stake than grape-utility.
Equal division of the grapes is a resolution that no one will be too offended by, and it is good to stick to the principle of equal allocation of goods, to keep people from justifying allocations that disproportionately benefit their own side.

And the indirect utilitarian cited empirical research (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=9150585&dopt=Citation) and it was good.

Anonymous (2005-09-28 03:23):
I am not sure how meaningful it is to say "A gets twice as much utility as B": you can compare the utility of you eating a grape with that of you eating an apple, but how does one compare you eating a grape with B eating a grape?

But otherwise I agree, even if a situation where Bren's solution does not at least somewhat apply seems quite unlikely.
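[Editorial footnote] The diminishing-marginal-utility point a few comments up can be checked with a short sketch. The square-root utility curve and the 2:1 weighting for A are illustrative assumptions taken from that comment, not anything the thread fixes:

```python
import math

def total_utility(a_grapes, total=100, a_weight=2.0, b_weight=1.0):
    """Summed utility when A gets `a_grapes` and B gets the rest.
    Square-root utility models diminishing marginal utility; the weights
    encode the assumption that A likes grapes twice as much as B."""
    b_grapes = total - a_grapes
    return a_weight * math.sqrt(a_grapes) + b_weight * math.sqrt(b_grapes)

# Brute-force the integer split that maximizes total utility.
best = max(range(101), key=total_utility)
print(best)  # -> 80: A gets most of the grapes, but not all of them

# The comment's specific claim: B's first 10 grapes are worth more than
# A's grapes 91-100, even though A likes grapes twice as much.
b_first_ten = math.sqrt(10)
a_last_ten = 2 * (math.sqrt(100) - math.sqrt(90))
print(b_first_ten > a_last_ten)  # -> True
```

Under constant marginal utility (utility linear in grape count) the same search would return 100, matching C's give-them-all-to-A verdict; it is the curvature that pulls some grapes toward B.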