Give to an actual individual A unqualified cognitive and imaginative powers, and full factual and nomological information about his physical and psychological constitution, capacities, circumstances, history, and so on. A will have become A+, who has complete and vivid knowledge of himself and his environment, and whose instrumental rationality is in no way defective. We now ask A+ to tell us not what he currently wants, but what he would want his non-idealized self A to want - or, more generally, to seek - were he in the actual condition and circumstances of A. (pp.173-4, bold added.)
In class yesterday, Dave came up with a wonderful example to suggest that even this double counterfactual creates interference. Suppose that A's strongest desire is that his cognitive capacities never decline: he desires that, if at any future moment he becomes stupider than he previously was, he die. (This is just a more extreme version of the common preference many of us have to die rather than succumb to Alzheimer's or similar mental degeneration.) Given Railton's merely instrumental conception of rationality, there's no reason why this desire couldn't survive idealization, and so be shared by A+. But now the indexical character of the desire latches onto a new content, fixed by A+'s context rather than A's. Given that A+'s strongest desire is to be no stupider than A+ now is, what he would want were he to find himself "in the actual condition and circumstances of A" is simply to die! This clearly does not reflect what is in A's objective interest, since A has not actually suffered any degeneration. The problem is merely an artifact of the counterfactual scenario.
Liz Harman then suggested a couple of clever solutions. The problem, recall, is that the counterfactual context changes the content of A's indexical desires. So one solution would be to construct the idealization using the (actual) content of A's desires rather than their character (their meaning, in Kaplan's sense). That is, even in the idealized context, we treat the desires as referring to A and his actual circumstances. Then A+'s strongest desire is merely to be no stupider than A.
A second option, which I like even more (though I'm not sure how much of it is a reconstruction on my part) would be to bring A's context over to A+. That is, ask A+ to assess an indicative rather than subjunctive conditional: not "what would you want if you were to find yourself in A's condition", but "under the hypothesis that you are in A's actual condition, what do you want?" (Very 2-D!) I think that should work, right?
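Both fixes can be modeled with Kaplan-style machinery, treating the character of the desire as a function from contexts to contents. Here is a toy sketch of that idea; the class, the intelligence numbers, and all the names are my own illustrative assumptions, not anything in Railton or Harman:

```python
# Toy model of the character/content distinction behind both fixes.
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Context:
    agent: str
    intelligence: int  # crude stand-in for cognitive capacity

Outcome = Union[int, str]  # an intelligence level, or "death"

# The CHARACTER of A's desire: a function from a context to a content
# (a satisfaction condition on outcomes). The indexical element is the
# threshold "no stupider than *I now* am".
def no_decline_character(ctx: Context) -> Callable[[Outcome], bool]:
    threshold = ctx.intelligence
    return lambda outcome: outcome == "death" or outcome >= threshold

a = Context("A", intelligence=100)
a_plus = Context("A+", intelligence=1000)  # idealized, so far smarter

# Dave's problem: naive idealization evaluates the character at A+'s
# context, then asks about A's actual condition (intelligence 100).
naive = no_decline_character(a_plus)
assert naive(100) is False  # for A+, only death satisfies the desire

# Fix 1 (content, not character): rigidify the content at A's actual
# context before handing the desire over to A+.
# Fix 2 (indicative / 2-D): evaluate the character with A's context
# "considered as actual".
fixed = no_decline_character(a)
assert fixed(100) is True   # A has not declined, so no death wished
```

In this simple setting the two fixes coincide, since each ends up fixing the indexical's value by A's actual context; they could only come apart in cases where rigidified content and context-considered-as-actual diverge.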
(Mind you, it's a bit of a mystery why Railton appeals to this idealization process at all. Given that he only builds in full information + instrumental rationality, it doesn't seem that A+ is allowed to revise any of A's ultimate ends. So what work is the idealization doing? Why not just directly identify A's objective interest with whatever would best fulfill his ultimate desires in fact? Presumably that's what is supposed to be guiding A+'s decision. Smith mentioned Railton's "wants/interests mechanism" as going beyond mere instrumental rationality, by tending to bring our motivations more into line with our affective responses, but this alignment does not seem to be included in the idealization process quoted above. Can anyone think of a case where A+ would appropriately choose something other than what would best fulfill A's ultimate desires? Divergence - as in the 'degeneration' case above - seems to indicate precisely that the idealization has gone wrong!)