Fred Feldman's 'True and Useful: On the Structure of a Two Level Normative Theory' discusses the "implementability problem" that confronts not just consequentialism, but -- he suggests -- any moral theory that attributes normative significance to features that an agent may be ignorant of (as surely any plausible theory does). Any plausible moral theory, then, will need to have two "levels": the theoretical criterion of rightness, and the practical (i.e. implementable) decision procedure that goes with it. Feldman's generality claim strikes me as an important insight, but I want to raise some questions about how we should understand the relevant "decision procedures".
First, it's crucial here to distinguish the fitting and the fortunate. The most "useful" or consequentialist-recommended decision procedure might not be in any way a "fitting" or consequentialist-exemplifying decision procedure. It's a purely empirical question what mindset or decision procedure would be most useful, in consequentialist terms, and so not a matter for a priori moral theorizing. But we can at least theorize about the fittingness question, i.e. what sort of decision procedure truly exemplifies the consequentialist perspective. (Or, more generally, how to get from criteria of rightness to fitting decision procedures.) So any further philosophical inquiry can only concern this latter issue: finding a decision procedure that is not merely "useful" but, as we might say, an "implementation of the true".
With that distinction out of the way, we can broach the main issue I'm interested in here, namely, how much idealization we should aim for in specifying a proper decision procedure.
One might think that there's no single right answer here, but merely (in Frank Jackson's words) "an annoying proliferation of oughts", ranging from the fully informed "objective ought", through varying degrees of belief-relative oughts (accommodating only the agent's non-normative uncertainty, or else holding fixed both descriptive and normative uncertainty and asking what's rational in light of that), all the way to the trivial "fully subjective ought", which holds fixed both the agent's beliefs and their inferential dispositions, yielding as output just whatever the agent is actually disposed to conclude (no matter how stupid or misguided this may be). I'll grant that one can stipulatively introduce this whole range of standards, but I don't think they all have equal normative significance. Indeed, I think it's pretty clear that the "fully subjective" decision procedure, which simply outputs whatever the agent will actually choose, has no intrinsic normative significance: there's nothing genuinely commendable about merely doing whatever you happen to actually do.
Some idealization is clearly needed if the agent's compliance with the normative standard is to qualify as any sort of achievement. Moreover, there will be a particular one of these many candidate "oughts" that is privileged in virtue of properly guiding the agent's deliberation. There will also be a particular decision procedure the following of which renders the agent most praiseworthy. I assume that these two coincide, i.e., that the most praiseworthy action is also the one that the agent ought, in the sense relevant to deliberation, to perform (this latter fact explaining why commendation is deserved: the agent deliberated correctly!).
Revealing this underlying issue of normative substance helps to clarify my question. I'm interested in how much idealization is involved in specifying the decision procedure constitutive of praiseworthy practical deliberation. My next post will take up this question. I'll argue that Feldman's view is excessively "subjective", and that we shouldn't let concerns about "implementability" erode our rational standards. In particular, what one ought to do is not beholden to one's unreasonable starting beliefs.