Recall that there are some desirable ends that are "essential byproducts" in the sense that they can only be achieved by aiming at something else. This is importantly different from the case of utility discussed more recently, where one could employ indirect strategies for the sake of the ultimate goal. Here I instead mean to discuss the situation where one must act for reasons independent of the desirable byproduct. For example, the good life plausibly involves genuine friendships, wherein one values the other person intrinsically, for their own sake, and not merely for the sake of one's own happiness. Let us suppose that this is so, i.e. that true happiness can only be attained if it is not the ultimate reason for which one acts. Then the rational egoist cannot overcome the problem merely by becoming an 'indirect egoist'. They must cease to (consciously) be an egoist altogether.
There are different theories about what we have reason to do. For example, Instrumentalism says that we have reason to fulfill our own present desires, whatever they may be. Rational Egoism says that we have reason to promote our self-interest, and nothing else (no matter how we might feel about this!). And Rational Altruism says that we have reason to advance everyone's interests. I discuss these further here. For the sake of the present discussion, I will pretend -- contrary to fact -- that Rational Egoism is a tenable position.
Now, for any such account of reasons, we can pair it with a corresponding account of local rationality, based on the following general schema:
(G) Rationality is a matter of responding appropriately to p-reasons. (I define an agent as having "partial" or p-reasons when they hold the belief that P, and -- regardless of whether the agent recognizes P as a reason -- the truth of P would, in fact, constitute a reason. This is close to, but subtly different from, my more recent account of 'apparent reasons'.)
The different accounts posit different reasons, hence different p-reasons, and hence different specifications of local rationality in this sense. For example, according to Rational Egoism, agents are locally rational insofar as they seek to advance their own interests. But what if (as we earlier supposed) acting in such a way would in fact tend to undermine one's best interests? It would follow that we ought not to be locally rational. Instead, we have most reason to make ourselves insensitive to these reasons (or newly sensitive to "false" reasons, say by coming to believe some other theory, like Rational Altruism).
Suppose the enlightened egoist successfully brainwashes himself into becoming an altruist, and ends up living a happier life in consequence. If rational egoism is true, then he has done what he had most reason to do, namely, what was best for him. But, post-brainwashing, he was not responding to those reasons. He did not act that way for those reasons, or with the conscious aim of advancing his self-interest. So he was no longer being locally rational, according to that theory.
This allows us to uphold Parfit's account of "rational irrationality". The egoist has reason to become an irrational altruist, and recognizing those reasons makes it rational for him to become so. Moreover, it would be irrational for him to cause himself to lose this disposition towards irrationality. If, during a moment of rare lucidity, he became temporarily resensitized to egoistic reasons, he would see that he has no reason to cause himself to remain rational in this way, and so it would be irrational for him to take such steps. Instead, it is rational for him to stay irrational. But that doesn't make his general insensitivity to egoistic reasons any less irrational in itself. He is still failing to consciously respond to p-reasons, and thus (by my definition) irrational. He is doing what he has most reason to do, but he does so non-rationally, i.e. he is not acting for those reasons.
(Similar things can be said of other 'essential byproducts'. There's no rational way to fall asleep, or to not think of an elephant. Local rationality involves means-ends reasoning, and you can't achieve these ends by having them "in mind" in such a way!)
Parfit's account of Rational Egoism risks incoherence, however, when he speaks of a "supremely rational aim". Consider:
(S1) "For each person, there is one supremely rational aim: that his life go, for him, as well as possible." (Reasons and Persons, p.4)
We have supposed that having such an aim would in fact cause one's life to go worse. Knowing this, rational egoism tells us that it would be irrational to have this aim. That then seems to contradict the claim that this aim is "supremely rational".
But perhaps we simply need to distinguish two senses in which one might assess the rational status of an aim. An aim might embody rationality, in that it constitutes supreme sensitivity to one's p-reasons. Or an aim might be recommended by rationality, in the sense that one's p-reasons tell one to have this aim. Armed with this distinction, we can (alas) defend S1 from claims of incoherence. S1 should be interpreted as saying that the egoistic aim supremely embodies rationality, even though it might not be rationally recommended to embody rationality in such a way.
Curiously, this could make it impossible for some agents not to be irrational, if rationality recommends that they cease to embody rationality. Either they irrationally ignore this recommendation, or else they heed the recommendation and so make themselves come to be irrational in future. Either way, they're irrational at some point. There's no possible way for such an agent to successfully act from their p-reasons at all times, if one's earlier p-reasons precisely recommend desensitizing oneself to the later ones! You've gotta love the self-referential logic of it all.