I'm going to contrast two broad approaches to this issue. The first -- what we might call the "instruction manual" approach -- tries to specify what Holly Smith (in 'Subjective Rightness') calls “a type of duty to which the agent has infallible access in his decision-making,” however confused or misguided he may be. The focus is very much on developing a decision procedure that (almost) any agent can implement without difficulty. Feldman follows this tradition in his 'True and Useful'. On p.164, for example, he suggests as a heuristic for explicating subjective rightness a "Moral Guide" who, faced with an irresponsibly reckless agent, responds: "I won't comment on your policy concerning this sort of case. I will simply tell you what, given that you have those moral views, you ought2 to do..."
I think that this is bad advice. We should not simply hold fixed an agent's unreasonableness when trying to determine what they ought (in the deliberative sense) to do. Rather, on the alternative approach that I favour, we're interested in what a rational decision procedure would look like, or what it would be rational for the agent to do, given their evidence. Part of what this requires is that the agent respond appropriately to their evidence, and so hold non-crazy normative views. If an agent begins with crazy normative views, or reckless dispositions, there's no normatively significant sense in which they "ought" to follow through on this initial mistake. Rather, they ought (by any genuinely normative sense of "ought") to revise their irrational views and behave reasonably. That, I hold, is what is required for an agent to act commendably. The relevant decision procedure for a two-level moral theory is thus whatever decision procedure constitutes rationality from the moral point of view, or evidence-relative rightness, not Smith's "subjective rightness".
As I put it in my old essay on Objective and Subjective Oughts:
Even when considering the project of creating an 'instruction manual' that agents may use as a guide in making their way about the world, I think we should be satisfied by norms that will serve to guide a sufficiently competent and well-functioning agent in the right direction. Others have hoped to find moral norms that even the most incompetent agent can apply without difficulty. The problem with this hope is that incompetent agents will, being incompetent, inevitably end up doing rather poorly on many occasions. So if we want to tailor our instruction manual to cater to their limited abilities, we are going to end up instructing agents to act poorly...
So we shouldn't feel compelled to write an instruction manual that's easy for anyone (however incompetent) to follow, because the resulting advice would no longer have any value. At the other extreme, I certainly acknowledge that there's little use for instructions that appeal to unknowable conditions (e.g. the Moore-paradoxical: “what to do if you believe that p but p is false at the time of reading this instruction...”). Still, there may be plenty of value in an instruction manual that appeals to evidential conditions (e.g. “what to do if your evidence supports a credence in p between 0.2 and 0.4”), since we are often able to judge our evidence correctly. Granted, if we are so ill-constituted as to be unable to respond appropriately to evidence, then we will be equally unable to be guided by these normative (rational) requirements. That'd certainly be unfortunate, but I think any complaint about this situation is more fairly directed at the world (for containing ill-constituted agents in the first place) than at our normative theory (for failing to achieve the impossible with them).
I conclude there that "real normativity is inevitably laced with objectivity, in the sense that merely trying hard to be reasonable is no guarantee of success." Once we appreciate this point, I think it undermines the motivation for the excessively "subjective" approach of Smith and Feldman. It's true that we need to supplement our objective criteria of rightness with an account of what it's rational for agents to do given their limited evidence. And for cognitively limited human agents, I don't think the answer will be as simple as "maximize expected utility" -- I spell out a more nuanced approach here. But once we have an account of what it's rational for non-ideal agents to do, I'm not convinced that there are any norms more subjective than that which are worth looking for.