Monday, December 22, 2008

Subjective Oughts

People often assume that there's some genuine sense in which what we ought to do (believe) is determined by whatever we believe we have most reason to do (believe). Call this 'subjectivism'. Subjectivist positions seem common in debates over peer disagreement and normative uncertainty (to name just a couple of examples). But I think subjectivism is mistaken.

Granted, there may well be wide-scope requirements, e.g. not to believe (i) that the evidence conclusively supports P without also believing (ii) that P is true. But it doesn't follow from my believing (i) that I ought also to believe (ii). Perhaps I should instead give up my belief in (i).
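
One way to regiment the contrast (my notation, using 'O' for 'it is rationally required that' and 'B' for belief):

Wide-scope: O¬(B(i) ∧ ¬B(ii)) -- rationality forbids combining belief in (i) with lack of belief in (ii).
Narrow-scope: B(i) → O(B(ii)) -- given belief in (i), rationality requires belief in (ii).

The wide-scope requirement can be discharged either by coming to believe (ii) or by abandoning (i); only the narrow-scope reading would deliver the subjectivist's conclusion.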

As I pointed out in 'Rational Objectivity', rational status is not perfectly transparent: we can be irrational without realising it. In particular, it's possible to believe that I rationally ought to φ [e.g. believe P] without this truly being so. This possibility of error is essential to any non-trivial rational norm, and so rules out subjectivism. (Bootstrapping cases illustrate this objection more vividly. We can describe a scenario in which an agent is patently unreasonable in believing P. But subjectivism implies, absurdly, that their belief may be justified by the mere fact that they erroneously take their evidence to support the ludicrous proposition.)

If subjectivism is so daft, why are so many people initially tempted to accept it? I think there are three main reasons. The first, noted above, is the confusion of narrow- and wide-scope requirements. The second is that in bootstrapping cases, the agent is at least exhibiting some (perhaps limited) procedural epistemic virtues. A good epistemic agent will, after all, align their beliefs with their judgments about the evidence. The problem is that this is woefully insufficient for being a good epistemic agent if one's judgments about the evidence are not themselves reasonable. Indeed, taken in isolation, partial "virtue" may simply lead one further astray. (Compare: an instrumentally rational psychopath at least displays certain 'executive virtues', but their competence actually becomes a bad thing given how warped their ends are.)

The third -- and I think most important -- reason has to do with considerations of 'action guidance'. The theoretical role of rational norms is, after all, to guide us when we can't tell what we (objectively) ought to do. So there has got to be something a bit more subjective about them. The considerations that make one option rationally superior to another must be considerations that are accessible to us. Subjective beliefs are the obvious candidates: they're accessible to us in a way that external facts are not. And, indeed, there are independent theoretical motivations for accepting a kind of 'internalism' about rationality, i.e. the thought that what's rational for me depends entirely upon facts internal to my mind, not the external world.

But it's simply a mistake to think that internalism implies subjectivism. After all, subjectivism restricts itself to a very specific subset of my beliefs, namely my normative beliefs about what I ought to do. What about my ordinary non-normative beliefs? If I know that a generally reliable source just told me "P is false", but I irrationally interpret this as evidence that P is true, subjectivism licenses my irrational belief that P. But we needn't go along with this. There's a perfectly accessible fact which counts against the belief, namely the testimonial evidence I just heard (and perfectly well remember). Again, I know full well what the source said -- this information is as accessible to me as any -- my error is one of normative interpretation. I unreasonably interpreted this basic fact as evidence for P when really it is (as I should have known) evidence against P. My mistake, right?

This is the key issue. The subjectivist claims that what's rational is determined by what the agent treats as evidence. These normative judgments are themselves taken as 'given' and beyond dispute. But I contend that we cannot get any worthwhile action-guiding norms when so much is taken as given. If my normative judgments are sufficiently unreasonable, then their implications are of no rational help. (Garbage in, garbage out.) There is no sense in which I 'ought', automatically, to do whatever it seems to me I ought to do. Even in the most subjective of genuine rational norms, tailored for non-ideal agents, my beliefs about what I ought to do are always open to question, and so might be rationally trumped by certain of my other beliefs -- even if I'm too irrational to realize it.

7 comments:

  1. Richard:

    People often assume that there's some genuine sense in which what we ought to do (believe) is determined by whatever we believe we have most reason to do (believe)

    I'm very puzzled by this and the rest of the post. I think there is a subjective sense of 'ought' (more than one, in fact). However, I noticed that you posted this under epistemology, not ethics or normativity. I have no opinion on the ought of epistemology (if there even is one). Did you mean to include the moral case? If so, how do you deal with any of the standard moral cases involving the agent having degrees of belief about who is in which mineshaft, etc.?

  2. Hi Toby, yes I think everything I said applies in the moral case too. On mineshaft cases, see my second-to-last paragraph. I agree that there are (relatively) subjective senses of 'ought' that take into account our entire belief set (including non-normative beliefs about how many people are in which mineshaft). What I deny is that there is any genuine normativity in anything so subjective as to appeal only to our normative beliefs (without regard for whether they're actually reasonable in light of our other beliefs, etc.).

  3. Another way to make the point is to distinguish de dicto vs. de re beliefs "about reasons" (or what I here called 'subjective' vs. 'partial' reasons).

    The subjectivist grounds their normative claims on facts like: "S believes that P is [true and] a reason to φ." A better view will instead appeal to facts like: "S believes that P. Moreover, P is such that, were it true, it really would constitute a reason to φ."

  4. OK, so you are talking about subjective oughts that arise purely from uncertainty regarding normative claims. This wasn't very clear from the post: it definitely looks like it is against all subjective oughts, including the much more common kind involving non-normative uncertainty.

    As it happens, I believe there is some important form of normativity-like-thing that concerns uncertainty over purely normative beliefs. There seem, for instance, to be laws of thought which govern what to do in cases where you don't know which moral theory is correct. For example, if your degree of belief in theory B goes down, this shouldn't make you more likely to do as it says. If you were disposed this way, it seems that there is something wrong with you in some sense.

    Personally, I think this sense cannot be the moral ought, and is actually a form of rationality: you are behaving irrationally given that you are aiming to do what is right. If rationality is not a form of normativity then I suppose I agree with you.

  5. I expressed myself poorly. My real target is the view that there are subjective oughts that arise from particular beliefs considered in isolation (and normative beliefs are the most common example).

    Instead, subjective oughts derive from reasonable beliefs. (These might in turn be determined by my total belief state, especially if we are coherentists about justification.) The mere fact that I believe that P, and that P entails Q, tells you absolutely nothing about whether I ought (in any sense whatsoever) to believe that Q. It depends whether my antecedent belief in P is reasonable.

    I should also clarify that I meant to be talking about (the 'ought' of) rationality all along. What I deny is precisely that there are narrow-scope principles of the form [if your degree of belief in theory B goes down, you rationally should not become more likely to do as it says]. (Indeed, normative uncertainty was one of the two examples I mentioned in the introduction!)

    I deny that anything like this is a true requirement of rationality, because it might be the case both that my degree of belief in B (unreasonably) goes down and that I rationally ought to become more likely to do as it says (because I rationally ought to give B more credence).

    Of course, there is "something wrong" with the guy who unreasonably lowers his credence and keeps on acting according to B anyhow -- that's uncontroversial. What's wrong with him (on my account) is that he reduced his credence in theory B when he shouldn't have. Contra subjectivism, there is no genuine sense in which his heightened compliance with B is the mistake. When subjectivists claim that there is something to be said for his conforming less to B, they are simply confused. Towards the end of my post I explain the three bases of their confusion.

  6. Thanks for the clarifications. I'm still somewhat skeptical, but I now mostly understand what you are saying. Unfortunately, I think that to really determine whether we disagree on anything, I'd have to ask a longer series of questions than I have time for (the problem of having such discussions on the internet). Do let me know if you plan to come to Oxford at some stage; I'm sure we would have lots to talk about.

  7. Will do. (Of course, you're also welcome to return to this thread at a later date when you've more time to spare -- an advantage of having such discussions on the internet!)

