First, Hanson proposes that Morality Should Exist -- which sounds like a category error, but what he really means is that "there should exist creatures who know what is moral, and who act on that." But this is much less intuitive as a substantive principle than the previous intuition that happy people should exist. And as a formal constraint, the proposal that any (morally eligible) utility function must assign positive utility to the existence of agents guided by that very utility function seems baseless. After all, it seems perfectly coherent to care only about the welfare of sentient beings, and not at all about whether their welfare is achieved by means of explicit attempts to promote welfare. If Hanson is looking for an uncontroversial principle to rest his case on, gratuitously ruling out traditional consequentialism like this seems a bad place to start.
Next, Hanson proposes that Morality Should Be Adaptive:
[M]orality evolved to help us survive [... So if] we apply that morality in such a way as to make ourselves go extinct, that seems a rather dysfunctional broken application of such morality!
But this conflates moral belief with moral truth, and evolutionary goals with normative ones. The fundamental moral facts, if there are any, did not evolve: like other abstract truths (e.g. mathematical truths), they just are. Perhaps our moral beliefs/dispositions were shaped in part by evolutionary selective pressures. But even if the evolutionary "purpose" of our moral beliefs (like everything else) is to help us survive and propagate our genes, that doesn't make it a "purpose" we must share. Normatively speaking, belief aims at truth, so the purpose of our moral beliefs is to accurately represent whatever the moral truths are. And whether it's good for us to survive is a substantive normative question -- albeit one that's plausibly settled by whether our lives tend to be good for us on net.
Hanson concludes with a challenge:
The evolutionary context of our moral intuitions gives a rich detailed framework for defining and estimating moral error. If you reject that framework, the question is what other framework will you substitute? How do you otherwise define and estimate the error in your specific moral intuitions?
As always in philosophy, the only way to proceed is by means of reflective equilibrium: starting with what we judge likely to be true, seeing how these judgments cohere with other (specific and general) claims that strike us as plausibly true, and resolving any conflicts in whichever way seems most plausible overall.
Hanson's proposal is but a particular instance of this, one where you start with overwhelming confidence that moral goals should coincide with evolutionary goals. But I have no such confidence in that assumption. I find it more plausible to start with substantive claims such as that happy, flourishing lives are good, and misery is bad. Hanson's hope for a purely formal moral framework offers only a wild goose chase, ending in the smuggling of (less plausible) substantive moral assumptions through the back door.