People often assume that there's some genuine sense in which what we ought to do (believe) is determined by whatever we believe we have most reason to do (believe). Call this 'subjectivism'. Subjectivist positions seem common in debates over peer disagreement and normative uncertainty (to name just a couple of examples). But I think it is mistaken.
Granted, there may well be wide-scope requirements, e.g., not to believe (i) that the evidence conclusively supports P without also believing (ii) that P is true. But it doesn't follow from my believing (i) that I also ought to believe (ii). Perhaps I should instead give up my belief in (i).
As I pointed out in 'Rational Objectivity', rational status is not perfectly transparent: we can be irrational without realising it. In particular, it's possible to believe that I rationally ought to φ [e.g. believe P] without this truly being so. This possibility of error is essential to any non-trivial rational norm, and so rules out subjectivism. (Bootstrapping cases help to illustrate the objection more vividly. We can describe a scenario in which an agent is patently unreasonable in believing P. But subjectivism implies, absurdly, that their belief may be justified by the mere fact that they erroneously take their evidence to support the ludicrous proposition.)
If subjectivism is so daft, why are so many people initially tempted to accept it? I think there are three main reasons. The first, noted above, is the confusion of narrow- and wide-scope requirements. The second is that in bootstrapping cases, the agent is at least exhibiting some (perhaps limited) procedural epistemic virtues. A good epistemic agent will, after all, align their beliefs with their judgments about the evidence. The problem is that this is woefully insufficient to qualify one as a good epistemic agent if one's judgments about the evidence are not themselves reasonable. Indeed, taken in isolation, such partial "virtue" may simply lead one further astray. (Compare: an instrumentally rational psychopath at least displays certain 'executive virtues', but their competence actually becomes a bad thing given how warped their ends are.)
The third - and I think most important - reason has to do with considerations of 'action guidance'. The theoretical role of rational norms is, after all, to guide us when we can't tell what we (objectively) ought to do. So there has got to be something a bit more subjective about them. The considerations that make one option rationally superior to another must be considerations that are accessible to us. Subjective beliefs are the obvious candidates: they're accessible to us in a way that external facts are not. And, indeed, there are independent theoretical motivations for accepting a kind of 'internalism' about rationality, i.e. the thought that what's rational for me depends entirely upon facts internal to my mind, not the external world.
But it's simply a mistake to think that internalism implies subjectivism. After all, subjectivism restricts itself to a very specific subset of my beliefs, namely my normative beliefs about what I ought to do. What about my ordinary non-normative beliefs? If I know that a generally reliable source just told me "P is false", but I irrationally interpret this as evidence that P is true, subjectivism licenses my irrational belief that P. But we needn't go along with this. There's a perfectly accessible fact which counts against the belief, namely the testimonial evidence I just heard (and perfectly well remember). Again, I know full well what the source said -- this information is as accessible to me as any -- my error is one of normative interpretation. I unreasonably interpreted this basic fact as evidence for P when really it is (as I should have known) evidence against P. The mistake, in other words, is mine.
This is the key issue. The subjectivist claims that what's rational is determined by what the agent treats as evidence. These normative judgments are themselves taken as 'given' and beyond dispute. But I contend that we cannot get any worthwhile action-guiding norms when so much is taken as given. If my normative judgments are sufficiently unreasonable, then their implications are of no rational help. (Garbage in, garbage out.) There is no sense in which I 'ought', automatically, to do whatever it seems to me I ought to do. Even in the most subjective of genuine rational norms, tailored for non-ideal agents, my beliefs about what I ought to do are always open to question, and so might be rationally trumped by certain of my other beliefs -- even if I'm too irrational to realize it.