It's worth distinguishing two superficially similar but fundamentally different answers to the traditional question 'why be moral?'. They are superficially similar in that both support benevolent behaviour in ordinary circumstances. And both involve some revision to traditional conceptions of rationality. But the precise nature of the called-for revision differs significantly between the two approaches, in a way that may also lead to substantial practical differences in certain (neglected) cases.
(1) What we might call the 'contractarian' route takes for granted one's ultimate ends (however selfish they might be), and justifies moral behaviour on fundamentally instrumental grounds. This typically involves some appeal to the mutual benefits of cooperation in Prisoner's Dilemmas and the like. It is a revisionary conception of instrumental rationality in that it prescribes reasoning in a collective rather than an individual manner -- e.g. reasoning as if the other will decide as you do, even though in fact the other agent's decision procedure is (by stipulation) completely independent of your own.
(2) Alternatively, the axiological route calls for a revision of our ultimate ends. On this view, the purely self-interested agent values the wrong things: he ought to value others' welfare as well as his own. This is revisionary in that it requires us to go beyond merely instrumental rationality, by also treating ultimate ends as rationally evaluable. [We may further sub-divide this approach depending on whether the irrationality of certain ends is just a brute, self-contained, substantive normative fact; or whether it instead derives from some more formal or procedural considerations, such as incoherence within one's larger desire set.]
Note that typical cases of conflict between self-interest and altruistic co-operation have the unfortunate effect of masking the differences between the two views. To really see the difference, we need to consider a case where benevolence and co-operation come apart. See, for example, Eliezer's True Prisoner's Dilemma, where you're in conflict with an alien 'paperclip maximizer' about whether to save sentient lives or paperclips. If you both co-operate, the result will be that more lives and more paperclips are saved than if both of you defect. But taken individually -- whatever the paperclip maximizer happens to choose -- your choice to "co-operate" rather than "defect" would gain a paltry two paperclips at the cost of a billion lives. Assuming that the other agent's decision truly is independent of yours, then, the benevolent thing to do in this case is surely to defect.
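[To make the structure concrete, here is one payoff schedule consistent with that description -- the particular figures are merely illustrative, chosen to fit the constraints just stated rather than drawn from Eliezer's own presentation: if both of you co-operate, 2 billion lives and 2 paperclips are saved; if you co-operate and it defects, 0 lives and 3 paperclips; if you defect and it co-operates, 3 billion lives and 0 paperclips; and if both of you defect, 1 billion lives and 1 paperclip. Whichever option the paperclip maximizer picks, your co-operating rather than defecting trades a billion lives for two paperclips, even though mutual co-operation still beats mutual defection on both counts.]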
We can then ask: which choice -- the co-operative one, or the benevolent one -- is the morally right choice to make in such a case of conflict?
[Update: replaced the label 'contractualist' with 'contractarian' to better match standard usage and avoid misunderstandings.]