In the comments at Agoraphilia, I outline a rough specification of the sorts of objective facts that could serve as truthmakers for interpersonal utility comparison (IUC) claims, in the hope of making them less mysterious. (Some people hold such comparisons to be impossible in principle. I want to claim that the difficulty is merely epistemic.) In short, my strategy is to convert the IUC into a hypothetical intra-personal utility comparison, by appealing to the global preferences of an idealized agent who gets to experience both lives sequentially. (Like my God, say.)
This seems clearly unproblematic at least for simple hedonistic theories of welfare. Our hypothetical agent can easily compare two experiences across the lifetimes, and determine which is the more pleasurable. But will it still work once we bring in other values? If the two original individuals had very different global preferences, with what "common currency" can we compare them? How could our idealized agent choose between them fairly? (To adopt either preference system would seem to unfairly exclude the other.) We might worry that the two are simply incommensurable.
Indeed, the same problem arises within a single life, if the person endorses different value systems at different times. Perhaps the young idealist most wants to have a positive impact on the world, whereas his older self would prefer to live a comfortable life and look out for his family. Each thoroughly rejects the values of the other. Which lifestyle would be "best" for this person? Here I lean towards the Parfitian response of considering them to be two distinct persons. That allows us to say what is best for each, but it remains unclear how we are to weigh the relative costs and benefits between them, so as to determine what would be best overall.
The problem could be overcome if we assume convergence under idealization. If there is just one maximally coherent and unified desire set, just one value system that an idealized (perfectly rational and fully informed) agent could hold, then the ideal agent could adjudicate the dispute. We could ask him: "Supposing that you will experience this life, first from the young man's perspective and then from the elder's, how do you want it to go?" This yields a determinate answer which can be used to weigh the conflicting interests authoritatively. The idealized young man and the idealized old man would both agree, for we have supposed that idealization would cause their preferences to converge.
But what if such ideal convergence would not, in fact, occur? Suppose the two idealized selves would continue to disagree about how to weigh the various tradeoffs. In cases of such persisting disagreement, it seems we must conclude that there is no absolute fact of the matter about which harm or benefit is the greater. In those -- perhaps rare -- cases, the welfare facts would be agent-relative (but in a sophisticated way).
That seems an odd result. Perhaps it arises because I am conflating two notions: one harm "factually outweighing" another (i.e. being a greater harm) versus "morally outweighing" it (i.e. being the more important harm). Perhaps the appeal to idealized preferences really latches on to the latter kind of assessment. But then how are we to get a grip on the former class of facts? Suggestions welcome...
[Thanks to Blar for bringing these problems to my attention.]