Tuesday, June 23, 2009

The Deliberative Question

What exactly do we mean when we ask the deliberative question, "What should I do?" It's surprisingly elusive. With a bit of work, we can pin down a behaviouristic kind of answer -- specifying when it's appropriate to offer and to challenge various responses to the question. But I suspect that in a more fundamental/philosophical sense, it isn't really a well-formed, determinate question at all.

First, note that we're not asking what would be objectively best. (Consider Parfit's Mineshafts case. You know that one of option A or B will save all ten lives, but the other will save none, and you don't know which is which. Option C is guaranteed to save nine lives. Clearly, the answer to the deliberator's question is "choose option C", even though this is the one option we can know is not objectively best.)

Perhaps we're asking (roughly) what would maximize expected value for the agent. This explains why option C is the answer in the ordinary Mineshafts scenario: relative to the agent's knowledge, it is worth 9 expected lives, whereas options A and B each have expected value of only 5 lives saved. But (as Kolodny and MacFarlane point out) this standard view has trouble accommodating our assertoric practices. For suppose Informant comes along and tells Agent that he's made a mistake, and in fact it's option A that is guaranteed to save nine lives, and options B and C that are the all-or-nothing gambles. Informant claims to reject (disagree with) Agent's previous answer to the question what he ought to do, and Agent himself seems likely to acquiesce in this by repudiating his previous response. ("I was mistaken to think that I should pick option C; really I should do A.") What's worse, we can further imagine an omniscient observer saying, "No, really he should pick option B -- that's the one that'll save all ten lives."
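
(To make the arithmetic explicit -- assuming, as in the standard presentation of the case, that the agent divides his credence evenly between A and B being the option that saves all ten lives:

EV(A) = 0.5 × 10 + 0.5 × 0 = 5 expected lives saved
EV(B) = 0.5 × 0 + 0.5 × 10 = 5 expected lives saved
EV(C) = 1 × 9 = 9 expected lives saved.)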

Here's the dilemma: is there a single, constant question, to which these various responses offer conflicting answers? Theoretically, it's difficult to see how this could be so. There's the question what maximizes expected utility relative to this evidence or that -- but these are different questions, so the diverging answers don't really conflict. On this picture, Agent should respond to Informant by saying, "Ah! You've changed my epistemic context in a most helpful manner. Granted, I answered my initial question [what ought I to do relative to my then-available evidence] correctly. But now I can ask an even better question: 'what ought I to do relative to my now-available evidence?' And, I agree, my answer to this question is 'pick option A!'"

(Compare ordinary context-dependent terms like the indexical 'now'. If tomorrow I say, "It is raining now," I won't thereby have to retract my current assertion that it isn't raining now. There is no single, constant question "Is it raining?" to which these are competing answers. There are only the more specific questions whether it is raining at this or that time and place.)

Unfortunately, when it comes to the deliberative question, this isn't how our linguistic practices seem to actually work. Instead, it seems, Agent will repudiate his previous answer, implicitly treating it as a competing answer to one and the same question (what should he do, period). My question is: does this make sense?

The relativist formally accommodates this behaviour by positing a semantics for 'ought' on which the truth of any token assertion 'Agent ought to phi' varies across assessors. We effectively end up understanding the deliberative question as having the constant meaning 'What should I do relative to the relevant evidence?' whilst allowing the relevant evidence to vary across assessors. Disagreeing as to what evidence is relevant thus translates into disagreeing about the deliberative question. But there's no absolute fact of the matter as to which evidence really is "relevant", and hence the correct answer varies from perspective to perspective, even when assessing a single token utterance.
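
(Schematically, and putting it roughly in MacFarlane's terms: an utterance of 'Agent ought to phi' is true as used at a context c and assessed from a context a just in case phi-ing maximizes expected value relative to the evidence relevant at a. Since the relevant evidence can differ between contexts of assessment, one and the same token utterance can be true relative to one assessor and false relative to another.)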

I actually think that this 'relevant evidence relativist' (unlike the moral relativist) gets it right as a pragmatic account of when it's appropriate to make, challenge, and retract assertions. Intuitively, it seems appropriate for everyone involved to behave as if there were a single constant question here (unlike in the 'raining now' case). For example, it seems appropriate for Agent to initially judge that he ought to go with Option C, and then to retract [not merely "move beyond"] this judgment when faced with new evidence.

But does that really answer my initial question? I guess Wittgensteinians would think so -- "meaning is use", and all that. But intuitively, it seems like there's a further question here: not just about what assertoric behaviours are appropriate, but the more 'metaphysical' question of what is really meant, and really true.

If we think this is a genuine further question, we may be unsatisfied by the relativist's answer, since it seems most plausible that, strictly speaking, substantive answers exist only for complete or 'absolute' questions -- "Is it raining at such-and-such time and place?", not just "Is it raining?". The general question, "What should I do?" is likewise incomplete until we fill in the missing parameter of whose evidence we're assessing this against. Granted, we can offer a sociological story about how it's useful for a community to adopt linguistic norms that allow us to treat this incomplete question as though it were complete -- "disagreeing" in practice even when there's not really any substantive proposition at stake. But then it looks like this really is just a "language game", lacking in philosophical substance. (As MacFarlane himself concludes in 'Relativism and Disagreement': "From lofty philosophical heights, the language games we play with [relativistic words] may seem irrational. But that is no reason to deny that we do play these games, or that they have a social purpose.")

Really, we should conclude, there isn't any single question here. Strictly speaking, it doesn't really make sense for Agent to retract his earlier judgment. And the apparent 'disagreement' (of Informant or the omniscient observer), though "appropriate" according to the rules of the game, is -- in a more important sense -- philosophically empty.

17 comments:

  1. In terms of whether there's a further question we can appeal to, I'm left thinking that this whole scenario is a problem that only arises when the question is incomplete. The further consistent question is specifically one which is complete, and, I would argue, immune to reframing by anyone who is honest. Now we can discuss whether a question is complete, but let's not forget that in most cases all we need is for it to be complete enough to be pragmatically useful -- though even this is a higher standard than we often achieve.

  2. I think that you're hoping for too much when you look for 'philosophically meaningful' (by your definition) questions in many areas of philosophy, especially ethics. No question can be absolute when it is framed by a person with, necessarily, an incomplete view of the world.

    And bringing in a second person to then consider the same question is impossible, because the question is an entirely new one when framed in a different mind.

    Therefore all questions of morality or rationality -- and one could even say, in my opinion, simply all questions -- are impossible to talk about in objective ("perfect", "god's-view") terms from behind a human pair of eyes.

    But why so glum about this? If you admit the above, admit that you can never approach 'true' metaphysical and ontological understanding of the universe, and adopt a new definition of 'objectivity', then you become a student of what it is like to be human.

    What could be more philosophically meaningful?

  3. Matt, who says a person's view of the world is necessarily incomplete? Anyone with a reasonable grasp of Tic Tac Toe knows that to start in the middle is to aim for a draw. Tada, complete knowledge of a choice. You could of course widen the scope of the question to include how we know the board is there, but I think you'd be on fairly good grounds to call such a person at best obtuse, or maybe even a bit nuts.

    In terms of other minds, say what? I always wonder, when people say things like that, what they think happens when people agree. If all relevant beliefs are the same, it's the same belief no matter who or what is thinking it.

  4. Obtuse or nuts -- yes, these are valid things to call somebody who lets skepticism intrude into the day-to-day running of their life.

    I wouldn't refuse to play tic-tac-toe on the grounds that the board might not be there.

    And yet, if we're talking about ontological hard truths here, then I don't see how I have any real proof that the board is there.

    My senses are imperfect; this is obvious. They tell me that a stick half-submerged in water is bent, or that a speck of dust is the smallest particle in the world. At night, in my dreams, they tell me all sorts of things that I won't share here...

    It's impractical to doubt, most of the time, but the possibility of it precludes objective knowledge.

    Logic or mathematics may seem like firmer footholds for genuine knowledge about the world but really you're no better off with those. I may say "a=a" and you may say "a=a" but where is the causal link between our thoughts that proves them identical?

    Do you and I mean the same thing when we say "a" or even "equals"? At best we have inductive evidence that we share a reference point, but this is guesswork and a poor man's substitute for hard facts.

    If one lunatic in the asylum says "a≠a" and another says "a≠a", are we to consider them agreeing about an objective truth?

    Aren't you begging the question by presupposing that you know what it is for one belief to be the same as another one?

    Anyway, despite all of the above, I think that skepticism is the real philosophically meaningless exercise. Just accept it and move on. Understand the limits of what our philosophy can achieve and then enjoy working within those limits; yes, pushing them even, to explore the world as we experience it.

    To all intents and purposes, there isn't any other.

  5. I'm not really interested in radical skepticism here. I take it there are plenty of straightforwardly 'philosophically meaningful' questions in ethics. A couple of examples are implicit in the main post itself, e.g. 'what is objectively best?' and even the complete question 'what is rational (or has highest expected value relative to the agent's current evidence)?'

    To keep the discussion focused, then, I'm wanting to consider whether the deliberative question 'What ought I to do?' has a precise meaning in the sense that the above examples do. I argued that it is defective (essentially incomplete) in a way that the other questions aren't, and which makes disagreements about the other questions more 'substantive' and philosophically interesting.

  6. Richard, I appreciate your desire to keep this on track, so apologies for derailing it!

    Respectfully then, I won't engage further with the discussion at hand because, to me, all of those questions are answered by "it depends on the subjective opinion of the person in question", and so I can't separate out your third query with a straight face.

    I'm not a radical skeptic but I do believe that there are no objective moral facts.

    Apologies again for the tangent; lay on, moral philosophers!

  7. Okay, though it's worth noting that even a subjectivist could make these distinctions. One might hold, for example, that what's best is whatever would in fact fulfill your desires, whereas what's rational is whatever your evidence suggests will maximize your expected utility (again taking your own desires as the ultimate standard), and then we can still talk about the generic 'ought' that fails to specify which evidence is relevant for making this expected utility calculation.

  8. If there is a useful concept of 'ought', surely it's doing what we have the most reason to do -- a position that will include considerations such as utility and your own personal desires. Get rid of those and obviously an 'ought' question loses any meaning, but, by your wording, only because of an attempt to separate it from what gives it impact.

  9. I get the sense that readers may be confusing two very different dimensions along which a putative 'ought' concept might vary.

    (1) It is usual for such concepts to vary along the dimension of what they are ultimately concerned with. For example, we might have "prudential oughts", "moral oughts", as well as 'oughts' of spelling, etiquette, and any number of other more or less arbitrary rules. Here I'm inclined to agree with Greybe that the only really important notion is that of all-things-considered normativity or reasons. (And as I pointed out to Matt, a subjectivist could understand this as ultimately rooted in one's own arbitrary desires, or whatever.) But I'm not really talking about any of that here.

    (2) This post instead concerns a different (more instrumental) dimension of variation, namely, how ignorance or uncertainty affects what one 'ought' to do (even holding fixed the ultimate end -- be it prudence, morality, or the all-things-considered reasons).

    I think it's clear that there are at least two philosophically important points on this latter dimension. There's the more objective notion of what we have most reason to do (given the actual facts, whether we know them or not), and then there's the more subjective question of what's rational given our evidence. Cases like Mineshafts show why we need the latter concept too.

    I guess it must also be socially "useful" to have the generic (relevant-evidence-relative) notion of 'ought', or else it wouldn't be so prominent in ordinary language and practice. It might help people with different evidence co-ordinate and fix on better (more informed) decisions more easily, for example. But even if it is socially useful, I argued, there's an important sense in which this ordinary usage seems philosophically defective. (Though if anyone thinks otherwise, I'm all ears.)

  10. "Strictly speaking, it doesn't really make sense for Agent to retract his earlier judgment."

    I was interested in similar issues when I was working on my dissertation. My view is that what you say above is right on certain kinds of "subjectivist" views, but that what you say above itself doesn't really make sense, and so certain kinds of "subjectivist" views are false.

    Now why doesn't what you say above really make sense? It's that sometimes we later come to reject views or ways of acting that we previously thought were "ok." I suppose you could say that there's a problem of hindsight bias here, such that I'm not being fair to my past self if I say things like, "I should have known better," or "I should have noticed that." Sometimes, at least, that's surely right. Maybe that's enough to get your point: if I was being careful enough at the time, it doesn't make sense to retract my earlier judgment. (I'm assuming that by "retract" you mean something like: judge that I shouldn't have judged as I did at that time, given what I knew, etc., at that time...)

    But I'm not sure that we're always falling prey to hindsight bias when we repudiate some past judgment or action. Wouldn't denying that involve an implausible "idealization" of one's past self?

    I see the point of thinking that "What should I do?", pragmatically, has to be answered from where one currently stands. (Pragmatically, it's a truism.) Maybe the really puzzling question concerns: "What should I have done?" For some purposes, it makes sense to take the former perspective, but for others, that would make no sense (say, if we're using a case from our own history to meditate on matters of moral truth...or to understand what we have done...) I hope this last point makes enough sense.

    Maybe it would help to distinguish between two senses of "What should I do?": the pragmatic sense (to which we can affix "now") and the "ponderous" sense (to which we affix "period").

    P.S.: At least once, Wittgenstein did, roughly, give an answer to a 'what should I do' sort of question. An acquaintance (while he was teaching in Austria) said that he wished to do something to improve the world, and W responded, "Just improve yourself; that is the only thing you can do to better the world."

  11. Hi Matthew, I mean "retract" in the ordinary sense, which is something more like judging that your previous judgment was mistaken (however reasonable it may have been). For example, upon learning from Informant that the way to save 9 lives is to press button A, Agent will appropriately retract his previous judgment that the way to save 9 lives is to press C. But he will not retract his judgment that pressing C maximized expected utility (and hence was the rational choice) given what he knew back then. He was right about that perspectival fact, after all.

    (Of course, sometimes people can be mistaken about what's rational, and so could later retract their mistaken judgments here. But I'm talking about a case where no such rational error is made. The agent reasons perfectly well; his only problem is incomplete information.)

    Now the puzzle arises because the agent will retract his previous judgment that he "ought" to press button C. I claim that this is a mistake, since the best candidate meaning for the 'ought' he's using is the ought of rationality, and (as we've seen) he shouldn't retract his previous judgment about what's rational. (Of course, he should admit that now option A is rational instead, given the new information now available to him. But it remains the case that option C was previously rational, so his previous judgment to this effect was perfectly true. Since the two judgments are made relative to different information bases, and so answer different questions, the new answer does not require retracting the old one, or considering it mistaken.)

    p.s. This is easier to make sense of on a non-cognitive analysis of 'ought': on that view, we simply understand the agent as sensibly retracting his earlier plan (or prescription, pro-attitude, or whatever) in light of new information. It does seem plausible that this is what the agent is "doing" with his words, and it makes perfect sense of his behaviour. But if we think there's also a cognitive element to these ordinary ought-claims, such that they can be meaningful objects of substantive philosophical inquiry, then (again) I can't see any better candidate meaning than the rational ought.

  12. Hi Richard: I see what you mean about this case, and what you said seems right. But then I'm not sure I'm seeing the deep mystery any more -- and shouldn't we look at what agents say (or look for a charitable interpretation of what's said)? That is, of course what one ought to do can change in light of new evidence, but why exactly should we construe an agent's saying something like, "Oh, well, then I shouldn't press C, I should press A," as a retraction? That is, why not charitably read an implicit now after "press C"? (i.e., "...then I shouldn't press C now...")

    Maybe when you compress all this temporally so that the new info comes before the agent acts, but after the agent makes a judgment, then the revision looks like a retraction. But then you can just slice up your moments more finely: when the agent gets new information, there is, to an extent, a new situation, calling for a new judgment. The old judgment is "wrong" because it's outdated; I don't retract it...I throw it out with the VCR. "I shouldn't press C now" could be read as a way of "throwing it out".

  13. Well, it just seems (e.g. on introspection) to mischaracterize how we (as agents) think about such cases. As explained in the post, we're inclined to say things like, "I was mistaken to think that I should pick option C." You can't just tack a "now" on to this sentiment as a quick fix. Rather, we must insist that such explicit repudiations are, strictly speaking, confused.

    I agree that we as agents should (theoretically) just "throw out" the old judgment as outdated rather than mistaken -- just as we do for claims about whether it is "raining now". It just doesn't seem to me that this is how most of us actually treat the deliberative question in everyday life.

  14. Richard, I disagree. I think in a previous post about rational akrasia, you said that if a person X believes incorrectly that he ought to do B instead of A, but still does A, he has acted rationally because he has responded to the correct reasons for acting. It therefore follows that if I believe due to faulty info that I should choose shaft C instead of A, I would be wrong even though I reached the decision by a fairly reliable procedure. Therefore I am not wrong if I say that I should not have chosen C in the first place.

  15. Murali, the 'rational akrasia' post was talking about people who had false beliefs about what's rational. Nothing of relevance to the present post follows from that.

  16. If you falsely believe that you have to choose mineshaft C because you got incomplete or incorrect information, when in fact you should have chosen A (where A really is the one that would save 9 people), the irrationality of the initial assumption (or its factual wrongness, for that matter) is inherited by your final decision as to which mineshaft you ought to go through, isn't it?

    P.S. Sorry, I actually seem to be referencing your inherited irrationality post.

  17. But there's nothing irrational about your initial belief. It just happens to be false.

    To recap: I say there are two important senses of 'ought': there's the objective, fact-based ought, and then there's the rational, evidence-based ought. Mistakes of objective fact (deriving from "incomplete or incorrect information") are only relevant to the fact-based 'objective ought'. So we need to consider whether, when the agent asks "What ought I to do?", they are using the fact-based ought. And I take it the answer is obviously "no", because they know all along that option C is not the objectively best option (it will only save 9 lives, whereas some other option will save 10, which is objectively better). So if option C initially seems an appropriate answer, they must be asking a different question. The obvious candidate is that they are asking what they rationally ought to do. But then their answer of option C is not mistaken at all. It's really true that, relative to the then-available evidence, it's rational for them to choose option C.

