Thursday, September 27, 2007

Can Railton Avoid the Conditional Fallacy?

In 'Moral Realism' (1986), Railton suggests a form of ideal agent theory (of one's non-moral good) designed to avoid the conditional fallacy:
> Give to an actual individual A unqualified cognitive and imaginative powers, and full factual and nomological information about his physical and psychological constitution, capacities, circumstances, history, and so on. A will have become A+, who has complete and vivid knowledge of himself and his environment, and whose instrumental rationality is in no way defective. We now ask A+ to tell us not what he currently wants, but **what he would want his non-idealized self A to want - or, more generally, to seek - were he in the actual condition and circumstances of A**. (pp. 173-4, bold added.)

In class yesterday, Dave came up with a wonderful example to suggest that even this double-counterfactual creates interference. Suppose that A's strongest desire is that his cognitive capacities never decline. He desires that, if at any future moment he becomes stupider than he previously was, he dies. (This is just a more extreme version of the common preference many of us have to die rather than succumbing to Alzheimer's or similar mental degeneration.) Given Railton's merely instrumental conception of rationality, there's no reason why this desire couldn't survive idealization, and so be shared by A+. But now the indexical character of the desire is latching on to a new content, given by A+'s context rather than A's. Given that A+'s strongest desire is to be no stupider, what he would want were he to find himself "in the actual condition and circumstances of A" is simply to die! This clearly does not reflect what is in A's objective interest at all, since A has not actually suffered any degeneration. The problem is merely an artifact of the counterfactual scenario.

Liz Harman then suggested a couple of clever solutions. The problem, recall, is that the counterfactual context changes the content of A's indexical desires. So one solution would be to construct the idealization according to the (actual) content rather than character (meaning) of A's desires. That is, even in the idealized context, we treat the desires as referring to A and his actual circumstances. Then A+'s strongest desire is merely to be no stupider than A.

A second option, which I like even more (though I'm not sure how much of it is a reconstruction on my part) would be to bring A's context over to A+. That is, ask A+ to assess an indicative rather than subjunctive conditional: not "what would you want if you were to find yourself in A's condition", but "under the hypothesis that you are in A's actual condition, what do you want?" (Very 2-D!) I think that should work, right?
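For what it's worth, the contrast can be put in Kaplanian two-dimensional terms (the notation here is my own gloss, not anything in Railton or Harman): character is a function from contexts to contents, and the trouble is that the subjunctive counterfactual shifts the context at which the indexical desire's character gets evaluated.

```latex
% Let D = the desire-sentence "I never become stupider than I now am",
% and char(D)(c) = the content D expresses in context c.
%
% Railton's subjunctive reading evaluates D in A+'s context, yielding the
% deviant content anchored to A+'s supercharged baseline. Both fixes instead
% anchor the content to A's actual context: fix 1 by idealizing on contents
% directly, fix 2 by having A+ consider c_A "as actual" (the diagonal).
\[
\underbrace{\mathrm{char}(D)(c_{A^{+}})}_{\text{subjunctive: die now!}}
\;\neq\;
\underbrace{\mathrm{char}(D)(c_{A})}_{\text{fixes 1 and 2: stay at A's level}}
\]
```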

(Mind you, it's a bit of a mystery why Railton appeals to this idealization process at all. Given that he only builds in full information + instrumental rationality, it doesn't seem that A+ is allowed to revise any of A's ultimate ends. So what work is he doing? Why not just directly identify A's objective interest with whatever would best fulfill his ultimate desires in fact? Presumably that's what is supposed to be guiding A+'s decision. Smith mentioned Railton's "wants/interests mechanism" as going beyond mere instrumental rationality, by tending to bring our motivations more into line with our affective responses, but this alignment does not seem to be included in the idealization process quoted above. Can anyone think of a case where A+ would appropriately choose something other than what would best fulfill A's ultimate desires? Divergence - as in the 'degeneration' case above - seems to indicate precisely that the idealization has gone wrong!)


  1. > "under the hypothesis that you are in A's actual condition, what do you want?" (Very 2-D!) I think that should work, right?

    No, I think that is ill-defined. What exactly is the difference between your A and your new A+ in A's condition?

    > Can anyone think of a case where A+ would appropriately choose something other than what would best fulfill A's ultimate desires?

    Maybe A wants to commit suicide, but A+ knows that if he can get over this depression he will be very happy in the future, possibly by a process of changing what he desires.

  2. 1. The idea is that A+ is still cognitively supercharged, but simply considers what he would want under the hypothesis (not necessarily believed, let alone true) that he is - and always has been - in A's condition. This ensures that it is A's condition that serves as the baseline for his assessments, even as it is A+'s ideal rationality that conducts the assessment.

    2. That sounds like a case where suicide would not, in fact, best fulfill A's desires (for future happiness, etc.)

  3. What concerned me is that you are proposing that you are A+ in A's condition, but A's condition fundamentally includes that he is not A+.

    In your example, in a sense maybe A should desire to die because he is not A+. I'm not sure that is entirely separable from anything else you might be trying to separate it from.

    2. What if A has only one desire (to die) and no others (not even to be happy)? However, future A has the desire to be happy AND is very happy. A+ therefore decides the second set of desires is more valuable and creates them.

    More counterintuitively, let's say A+ sees A, who has one desire - to play games. A+ knows playing games is not very safe - so he makes A love being a brain in a big metal box (no interface with anything - just a brain in a box). Problem solved.

  4. 1. There's nothing incoherent about asking A+ to imagine that he is someone else. (He will need to compartmentalize his background knowledge, but that is possible. Playing children do this every day.)

    2. Where are A+'s decisions coming from? He only has A's desires. If A doesn't care about his future happiness, then neither does A+. They have the same ultimate values. Remember, Railton is invoking a very limited idealization: full information and instrumental rationality only.

  5. Do you mean compartmentalize the same non-A knowledge that he is using to make a superior decision?

    Yes, he could do that - it's just that the proposal that he would do it doesn't seem to explain exactly when and how that knowledge is and isn't being used.

    2) hmm ... ok

  6. The thought is that he would use his full background knowledge to inform his deliberations about what would be best under the hypothesis that he lacks this knowledge. E.g. "X sure is a fascinating topic. Imagining now that I've never heard of it, I should take a course to learn more about X!". Basically, he just needs to block the practical inference from 'P' to 'I believe/know that P'. His background knowledge thus has an impersonal slant to it. It can figure in his reasoning, but under the guise of free-floating facts, rather than as knowledge he possesses.

    Or, for the trouble case: "I'd hate to mentally degenerate. So, under the hypothesis that I am A, I'd better take care to never become stupider than that. Maybe I should take a logic course..." (no matter that A+ himself has impeccable logic skills. Again, he employs logic in his deliberations here, but he avoids the personal fact that he is logically omniscient.)

  7. Hi Richard,
    Very interesting post. One thought: I'm not (yet) convinced that A and A+ must share the same ultimate desires. Railton's idealization may appear on the face of it to be a limited one--involving only full information and instrumental rationality. But, if any of us is capable of modifying his/her ultimate desires, then I don't see why such a thing could not happen as one is bombarded with a wealth of information about the world. In fact, if anything is likely to change one's desires, I expect that radical shifts in one's epistemic state could. So perhaps Railton's idealization is not so minimal as it appears.

    But perhaps I'm thinking of ultimate desires in the wrong way.

  8. Right, that's pretty much what Railton said when I asked him too. I find the suggestion puzzling, though, for the reasons discussed here, especially the following idea I quote from Railton's own 'Taste and Value':

    "Of course, as a fool I have no antecedent desire identifiable as a desire to lead a more reflective or more Socratic life. But, if my motivational set contained no potential positive sentiment that could be ‘recruited by’ the information I gain about the Socratic life, then how could my novel exposure to the Socratic have any tendency to engage me...?"

  9. > In fact, if anything is likely to change one's desires, I expect that radical shifts in one's epistemic state could.

    Maybe our thought experiment is practically impossible (i.e. you couldn't maintain the same desires in practice), but it is one of the given variables in the hypothetical that you DO maintain those desires, and so it is out of bounds for challenging.

  10. Hi Genius,
    Hmm. So, are you wanting to say that Railton (or the Railton of "Moral Realism" at least) is committed to a constraint on the idealization to the effect that A+ can't have different ultimate desires than A? That seems wrong to me. (And apparently, Railton thinks so too.) Is there some passage that you have in mind where he implies that he accepts this kind of constraint? Granted, one can't get new ultimate desires from the instrumental rationality aspect of the idealization, but the idealization involves more than that. As I see it, alteration in ultimate desires is permissible so long as it is a natural consequence of the epistemic alteration. And really, one might think that this feature is a significant benefit of an account of nonmoral good.

  11. "alteration in ultimate desires is permissible so long as it is a natural consequence of the epistemic alteration."

    I guess that's the key point of contention. I would require it to be a rational consequence, which we cannot get from instrumental rationality alone. Why care about merely "natural" facts, after all? Suppose we are naturally constituted such that learning all the facts about snails would (incomprehensibly) trigger the formation of an ultimate desire to count blades of grass. It seems to me that this merely highlights something buggy in our contingent natural psychology. It has no normative force, because the new desire is not traceable to a rational cause.

  12. Richard,
    Fair enough, though I don't think that's a point of contention between us. I've mainly been discussing how to understand Railton's view--not what may or may not be wrong with it. Much of the discussion above presupposes that A and A+ must have the same ultimate desires, but I'm not convinced that that's true on Railton's view, or that he thinks it is.

    Your complaint sounds akin to Gibbard's "learning about innards" objection to full info theories. It's a powerful sort of reductio, I grant. I suppose one response a full info theorist can make is to bite the bullet and say "If our psychology did lead to this, then it would be in our interest to count grass blades." But then, she could add the reassuring point that we have little reason to expect that our psychology is like that. That seems to go some way towards softening the pain of bullet-biting. After all, is it so costly for her to admit that crazy idealized desires imply crazy interests, if she's not convinced that we ourselves have crazy idealized desires?

    At any rate, that's a crude approximation to what one line of reply might be. I'm not sure what I think of it.

    A few other (half-baked) thoughts on all this:
    -Would people only have ultimate desires for grass-counting? That might qualify as a crazy world distant from ours.
    -Also, wouldn't the idealized agents know about the sketchy nature of these grass-counting desires they have (which are triggered "incomprehensibly" or in unconventional ways)? If so, we might plausibly expect that they would have desires countering the grass-counting desires. And so the grass-counting desires might get stifled/overridden in such a way that it wouldn't be true that grass-counting is in our best interests.

