Saturday, December 26, 2009

Who has intuitions about whether there are state-given reasons?

In 'Why Disallowing State-Given Reasons for Belief Might Be the End of the World', Andrew Reisner argues that "theorists who deny that there are any state-given reasons at all are forced to bite a large and very counter-intuitive bullet: in some circumstances, the only way for an agent to act and be in accord with reason will lead to the otherwise avertable end of the world."

Here's the situation: a demon provides you with credible evidence that p is false, and then tells you that he'll blow up the world unless you believe that p. On the fittingness view, this provides no reason to believe that p; at most it provides you with reason to desire this belief, or to act so as to bring it about. But now we suppose that the demon adds a further clause: he'll also destroy the world if you perform any such action. So you no longer have any reason to so act. Is the fitting-reasons theorist forced to conclude that there's nothing to be said in favour of believing p, then, even though that's the only way to save the world?

Well, no. It's still desirable (fitting to desire) that you [irrationally] believe p. Even if the demon additionally threatens to blow up the world if you desire anything but to believe the truth, that just makes fitting the higher-order desire that you [irrationally] desire nothing but to believe the truth. Granted, this higher-order desire is self-defeating: by forming it, you guarantee that it is thwarted -- you now desire something other than to believe the truth. This is a curious phenomenon, but it isn't clear that it makes the desire unfitting: intuitively, it is desirable that you form no such desires in this situation. That'd be the best outcome, and hence the rationally fitting one to want or hope for. It's just that there's no rational way to achieve this end: being fully rational suffices to place this outcome beyond your grasp. You can only achieve it if you are irrational in the first place.

This is a weird scenario, for sure, but isn't this the right way to describe the normative situation? I don't see anything "counterintuitive" here. To be clear: it would be a problem for the fittingness theorist if we were forced to conclude that there's nothing to be said for saving the world. But we aren't. Saving the world is (as always) desirable. So that objection fails. We're left with Reisner's original worry that by being rational ("in accord with reason"), an agent would cause "the otherwise avertable end of the world." But how is that a problem for our theory? An evil demon could punish rationality, and in such cases it would be better to be irrational. This is clearly true. Where's the bullet?

I suspect that objections like this trade on the ambiguity of 'reasons' talk. Here are two claims I take to be uncontroversially true of the described scenario:

(1) To save the world, one must be irrational in various respects. In particular, one must irrationally believe p, and irrationally fail to desire to save the world. (In the most extreme case, we could stipulate that the demon will destroy the world if any of your responses to his threat are rationally fitting. Then the only way to save the world is to be unintentionally insane. This is surely a possible set-up.)

(2) Forming the necessary beliefs and desires to save the world is more important than being rational.

Here's a controversial claim:
(*) The beliefs and desires necessary to save the world are unsupported by reasons. There is insufficient reason to believe p, and also insufficient reason to desire only that you believe the truth (the world be damned), though these attitudes are necessary to avoid global destruction.

I take it that Reisner thinks (*) is "counterintuitive". But we should take care to distinguish semantic vs. substantive intuitions. (*) might seem counterintuitive only because one is presupposing that (*) means that having the necessary beliefs and desires "isn't important", and what one substantively intuits is that (2) is true. I share this latter intuition: (2) is indeed true. I simply think that, for the reasons given here, we should define 'reasons' primarily in terms of fittingness rather than fortunateness, in which case (*) is basically equivalent to (1), and perfectly compatible with (2).

Now, if you agree that (1) is unobjectionable, and allow that what I mean when I affirm (*) is something roughly equivalent to (1), then perhaps you should also find my affirmations of (*) to be unobjectionable.

One might deny this if one had very strong semantic intuitions about the meaning of the word 'reasons', such that so-called "state-given reasons" must be included alongside fittingness reasons. But that seems odd. I, at least, have no such intuitions either way. (Do you? Comments welcome!) It seems to me that 'reasons' (as used in these debates) is a largely technical term, so that the question of how best to use the term is best answered on the kinds of theoretical grounds I appealed to in my previous post. We could even explicitly disambiguate "reasons_1" and "reasons_2", and say that only the latter includes 'state-based reasons'. If my linked argument is correct, then reasons_1 is the more philosophically useful concept; indeed, I don't see any need for talking about reasons_2 at all. But I trust that (*) would no longer seem controversial if we replaced all mention of 'reasons' with the less ambiguous 'reasons_1'. So that's at least suggestive that any remaining dispute here is merely terminological.

(My charge here would be misguided if it turned out that, rather than being a technical term, there was in fact a single determinate pre-theoretic conceptual role for which reasons_1 and reasons_2 are competing candidates. One could then have substantive intuitions about which of these candidates best corresponds to the pre-theoretical idea. But again, it seems clear to me that we have two pre-theoretical notions in this vicinity: fittingness and fortunateness. The real work is done here when we argue that the latter notion can be reduced to a special case of the former -- namely, fittingness to desire.)

11 comments:

  1. I think I find it contrary to my intuition to speak of state-given reasons as (genuine) reasons to believe. Psychological fact, take it for what it's worth.

  2. Why don't we just skip all of the middle-steps and postulate a demon that will destroy the world unless you behave irrationally?

  3. [Andrew Reisner sent in the following response...]

    I think you've raised a number of interesting questions. I want to try to address one specific point and one methodological point. The specific point concerns the notion of fitting reasons. I think, for the most part, fittingness is a concept that threatens to be vacuous. I've given a longer argument for this in 'Abandoning the Buck-Passing Analysis of Final Value', but I'll give a very short argument here.

    Very briefly, fit or correctness is (supposed to be) a relation between a thing (very broadly construed) and an attitude. It is correct to admire Jill because she is a brave firefighter. Admiring is a fitting attitude to have towards Jill (because she is a brave firefighter). Admiring fits Jill, or perhaps her bravery. It's worth noting that historically, this notion of correctness or fit (I have Brentano in mind) has been notoriously hard to cash out, but Brentano suggests that correctness for pro-attitudes (or love in particular) is analogous to truth for belief. Sven Danielsson and Jonas Olson stress this analogy, too.

    The trouble is that we have an independent theory of truth (whichever one we happen to like). Beliefs are true or are not true, depending on whether their contents are true. Admiration or pro-attitudes are not correct or fitting based on whether their contents are correct or fitting -- fittingness is a relation between an attitude and its contents.

    We could, of course, reject Brentano's and Danielsson/Olson's view of correctness, and say that correctness is not analogous to truth for belief, but in either case, we're just left with our intuitions about fit. What makes two things fit? Well, we can start stipulating, or spelling out our intuitions. I take it intuitions aren't wholly clear. Sarah Stroud, for example, has argued in her paper 'Epistemic Partiality and Friendship' that being epistemically partial, that is, being resistant to certain kinds of evidence about one's friends, is a constitutive requirement of being someone's friend. If that's the case, then it sounds like it is fitting to have beliefs about one's friends that are not responsive to truthy (evidential) considerations. I take it, in fact, that pre-theoretically there are at least some people who hold this notion of correctness or fittingness.

    When we start playing around with normative primitives, intuitions get dicey. Adding new normative primitives (like fittingness) just adds new chains of intuitions to rely on. If we analyse fittingness in terms of reasons, well, then we run into questions about reasons, and we can't be sure we're going to get fittingness coming out like we hoped.

    I've tried to argue more fully that the case against there being pragmatic reasons for belief is actually quite hard to build without simply standing on the view that it's a conceptual truth ('The Possibility of Pragmatic Reasons for Belief and the Wrong Kind of Reason Problem'). I can't reproduce all the arguments succinctly, but let me just suggest that I think we actually need an argument, either for or against pragmatic reasons for belief, rather than relying too strongly on our pre-theoretic intuitions. I certainly may be wrong, but I think many evidentialists about reasons for belief just state evidentialism as a conceptual truth. That's very hard to argue against, but I don't think just saying it is very persuasive (since at least some people disagree).

    [to be continued...]

  4. [Reisner's response continues...]

    In the demon case in the paper here, I really hoped to emphasise two points. One is that we can construct scenarios in which various 'make yourself believe' or 'desire to believe' accounts of attitude-given reasons won't get things right (because of blocked ascent). These are Parfit/Skorupski/Shah type views that I have in mind as targets. It clearly isn't an adequate response to say that what looks like an attitude or state given reasons to believe something is just an object-given reason to desire to believe it or to make oneself believe it. (Danielsson and Olson make some related points about this in their paper.)

    The other point is a bit tricky to phrase. First off, I think Richard is certainly right in saying that even the fittingness theorist has something good to say about the case, namely that it's desirable that you have the belief that will save the world. What worries me is that if an agent is as she ought to be (conforms to all her reasons, everything she ought to do and believe and feel, etc.), the world will end if we reject state-given reasons (and oughts) for belief. We can make a strictly evaluative claim: it's good that Jill isn't as she ought to be. But it's normativity that tells us the right way to be, not value. So, I still stand by my claim that there is nothing normatively to commend Jill for having a belief that would prevent the world from ending, if it's for state-given reasons. We can make an evaluative claim, but since those aren't guiding (at least, not directly), I still think we're left in an unattractive position if we reject state-given reasons. This last point gets at what I take to be the heart of, as Richard puts it, the question of what's more important. If all there is to being important is just being better, I think that's not good enough, since evaluative concepts are not guiding without some linking view to reasons/oughts. So, to say that it's more important to have the belief that will save the world than to believe what the evidence suggests, but to say that one nonetheless ought not (has no reason to) believe it, gives us a pretty neutered view of importance.

    I also should clarify one aspect of my own views, since terms get used in a lot of different ways. I'm someone who believes that rationality and normativity are not the same thing (my view is much like John Broome's, and this is the Kolodny line in 2005 and Parfit's in 2001, and I think it's fair to say Wedgwood's in 2004), and further I don't think that rationality involves responding to real or believed reasons (again, pretty Broome-like). So, there's nothing odd to my mind about saying that Jill ought to be irrational, nor do I mind the idea that Jill is irrational if she has the belief that will save the world. I just think all that is a separate issue, and a bit of a red herring in this case. I'm interested in how Jill ought to be (or, if one likes Parfit's language, has most reason to be; or, if one likes Skorupski, has sufficient reason to be), not in the rational requirements that hold amongst her mental states in virtue of their contents. I realise it's contentious to divorce rationality from normativity, but nonetheless I find the arguments for doing so pretty persuasive (for now, anyway).

  5. Hi Andrew, you've raised a number of interesting issues here.

    (1) First, you point out that while beliefs aim at truth, pro-attitudes have no obvious analogue, so the unanalysed notion of fittingness ends up pulling a lot of weight. I agree with this (naturally, since I think we should use fittingness as our sole normative primitive), but I don't see it as unusually problematic. You worry that "we're just left with our intuitions about fit", but one could say similar things about all of philosophy. The crucial thing is that (for me, at least) intuitions about fit seem a lot more clear and unambiguous than intuitions about reasons. So that's a reason to prefer the former notion as our normative primitive.

    (That's not to deny that there may be some tricky cases. Aside: I think the way to respond to Stroud's case is to distinguish 'local' and 'global' rational fittingness. Evidentialist norms most plausibly govern the local attitude of belief, but a globally well-formed agent might be disposed to find her friends' virtues more salient than their vices, which may often result in her having biased beliefs. I would want to stress that it's really just the immediate salience, not the resulting false belief, that is fitting here.)

    (2) You claim that "It clearly isn't an adequate response to say that what looks like an attitude or state given reasons to believe something is just an object-given reason to desire to believe it or to make oneself believe it."

    Your paper only explicitly argues against the latter formulation (successfully, I agree). You claim that a similar argument will work against the 'reason to desire' formulation. But my post argued that that's mistaken. Since an "object-given reason to desire" is just a characteristic in virtue of which the object is desirable, and its serving to save the world makes the irrational belief desirable, it follows that its serving to save the world is an object-given reason to desire that you have this belief. This doesn't change if you make it undesirable to have this desire. That just adds further reasons into the mix; it doesn't undercut the present one. [Still, I think this objection is more clearly phrased in terms of fittingness than 'object-given reasons', so I'll move on.]

  6. (3) Your central worry is that "it's normativity that tells us the right way to be, not value." You allow that we can make evaluative claims here, "but since those aren't guiding... [we're left with] a pretty neutered view of importance."

    I guess I find this pretty puzzling. We could, of course, stipulate that whenever an attitude would be good to have, we can thereby talk of there being a "state-based reason" to have it. (This is like Sidgwick's "ought of general desirability": e.g. we can say "it ought to be that there is world peace" to express in new words the same old idea that world peace would be good.) The question is what talking in this way would gain us. Such reasons are presumably not followable -- a lucky agent might happen to conform to such a "reason", but one couldn't be rationally guided to form (at will) a belief based on a pragmatic reason. Or do you deny this?

    In any case, I'm not sure what more you want from the concept of 'importance' besides everything we should care about. I could see someone getting worried if they thought that my view implied that we in some sense ought to prefer that Jill destroy the world rather than form unfitting beliefs (or that we should properly feel disappointed when she turns out to be irrational in just the way required to save the day). But of course my view is just the opposite: saving the world is much more important (desirable, worth caring about) than fitting belief. Now, you seem to be suggesting that the notion of what's worth caring about just isn't ... what, worth caring about? ... in comparison to some other deontic notion. I feel like I must be missing something.

    You worry that we cannot "normatively commend" Jill for saving the world. Can you clarify what you mean by this? I take it you agree that Jill isn't really morally praiseworthy -- it's not like she intentionally did anything. She was just irrational in a fortunate way. More broadly, she didn't exhibit any kind of agential (rational) competence -- again, she was quite irrational. Things just happened to turn out well, is all. Is there some special kind of "normative commendation" we bestow on people whenever they (even accidentally) cause good results? Then Jill will qualify for that, in virtue of causing good results. But I don't see what else you could want to say here.

  7. One more thought. You write: "What worries me is that if an agent is as she ought to be (conforms to all her reasons, everything she ought to do and believe and feel, etc.), the world will end."

    But you're quite happy to allow that if Jill is rational then the world will end. There's nothing especially worrying about that claim, you agree. But then, for those of us who reject state-based reasons, our 'ought' claims are closely related to our 'rationality' claims (modulo incomplete information, etc., which isn't relevant in the cases that interest us here).

    Since you want to use the word 'rationality' in a constrained fashion, I might need to find a different word to communicate the concept I have in mind. But hopefully 'fittingness' is clear enough. Do you find it at all worrying that if Jill has fitting attitudes then the world will end? I presume not. But then if this is all I mean when I say that "if Jill is in accordance with her reasons then the world will end", then this latter claim shouldn't worry you either. It would be worrying if we were claiming that it would be in any sense preferable for the world to end; but nobody is claiming that. So I don't see anything to worry about here.

    (This is just to reiterate the argument in my post concerning the ambiguity of (*), but phrased in terms of fittingness rather than rationality, since you use the latter term differently from me.)

  8. [Posted on behalf of Andrew Reisner...]

    1. I am not sure whether fit or reason is clearer, but I am happy to concede there's nothing especially obvious about the concept of 'a reason'. My comment was probably unfair to your view. You're treating fit as a -- the -- normative primitive and want to develop a broader picture based on that. My remarks are aimed at people who are relying on having a basic notion of a reason and then using fit to distinguish legitimate from illegitimate reasons. In those contexts, as far as I can see anyway, there isn't any productive way to separate the 'right' from the 'wrong' kind of reasons using fit. I've argued for this in 'Abandoning the Buck-Passing Analysis of Final Value', but I need to read what you've written to know whether anything I've said really tells against your view. I'll go do that, if you could recommend something of yours to read. I'd better do that to better understand your response to Stroud, too. I suspect, although I can't speak for her, that she would have different intuitions about local fit.

    2. Your point about desiring to believe has sent me back to my mental drawing board for a while. Let's suppose that we're working with a primitive notion of fit. Ignoring any difficulties with using 'good' in this context, suppose it was fitting to desire things that have good results. Of course, it will be fitting to desire that you believe something against the evidence in demon cases. From my point of view, the theoretically undesirable result is that you end up with more reason to desire to believe it than not to desire it (imagine the demon will punish you if you have any higher-order attitudes towards believing it). In light of that, let me revise my claim about the higher-order attitude accounts of state-given reasons. My view is that higher-order attitude accounts (or analyses) of state-given reasons are false, because they will, ex hypothesi, give you most reason to have attitudes that will cause the end of the world in the demon context. You can never get reasons not to have higher-order attitudes, since those would inevitably be state-given reasons. And, if one shares the intuition that an attitude's causing the end of the world gives us reason not to have it, this will be a problem.

    3. This comment was especially interesting to me. I wonder if here we have a bit of a divergence about a particular kind of externalism. Let me try to spell out how I see things, and perhaps this will be useful in seeing if there's some common ground that the argument can progress from. My view is unlike the ought of general desirability. However, my view is like another of Sidgwick's views. I'm reminded of his theory of blame: you ought to blame someone just in case doing so promotes the best outcome.

    At any rate, I'm a strong externalist about reasons. I think they are the normative entities that aggregate (or combine in some other fashion) to produce oughts, which themselves denote the way to be, the thing to do, the way to feel, etc., for agents. I don't think there is an ought of general desirability (I think that normative and evaluative language, naturally enough, tend to shift around in ordinary speech, but that the ought of general desirability is really used in expressing claims about what's best, not about anything strictly speaking normative). I do not share the Kantian intuition that failing to do what one ought to do (interpret 'do' here as a universal verb) implies that one is blameworthy, or that doing it makes one praiseworthy. While I agree that those concepts play a heavy role in our pre-theoretical ethical thought, I don't think they have a very important place in working out the central elements of normative philosophy.

  9. [continued...]


    What I mean by 'normatively commend' is just to say that Jill is doing (again, using 'do' as a universal verb) what she ought to do, or what she has most reason to do, if one prefers that. On the view I'm objecting to, we cannot say this of her; indeed, even worse, there is no reason at all for her to do it. My complaint about denying that there are state-given reasons is that if Jill did everything that she ought to do, believed everything she ought to believe, etc., she would cause the end of the world. Further, if she saved the world, all we could say about her in this narrow sense is that she did (believed) something she had no reason to do. If one thought that failing to do what one ought to do made one blameworthy, then I suppose the consequence would seem even less reasonable to me: she'd be blameworthy for saving the world. But I don't have strong intuitions about blame, so I don't want anything I say to hinge on intuitions or claims about blameworthiness.

    4. There was no 4, but let me add one, just to try to clarify why I have some interest in using the terms the way I do. I agree with what I take to be an implicit concern of yours, namely that in a debate like this, one can just stipulate one's way into the argument one wants. I think that's a worry too, so I want to say something briefly about why doing things this way isn't arbitrary, even if it's wrong. I think it's easy enough under ordinary circumstances to see that demon cases don't pose much of a challenge to 3rd parties whose attitudes the demon is ignoring. Of course we have reason to prefer that the agent believe against the evidence. And, that's intuitively fitting, too.

    What I take to be the real challenge is this. If you think there are objective theories of the right in moral philosophy, which are not (necessarily) identical to plausible moral decision procedures, you hope these theories will generate the right results. We could disagree about what those are. Thus, one philosopher might think that the theory of the right should say that one ought never to lie, even if lives are at stake, whereas others think the correct theory of the right should say that one ought to lie when lives are at stake. Despite this disagreement, we might all agree that it would be hard not to lie when lives are at stake, that lying is in some aesthetic sense odious, etc. But there is some notion of ought, picking out what morality requires of the agent, that is in dispute; and that is what, from the point of view of morality, really matters.

    I think there is not only such a moral ought, but an all-things-considered ought. It applies to beliefs, desires, actions, etc. It's not about states of the world; it's agential in the sense that it sets out what agents are to do. It's the sort of thing someone like Broome has in mind, and I think Parfit at various stages. I think we can't easily do without such a notion, since we can generate lots of conflicts between subscripted oughts (oughts of belief can be made to conflict with oughts of action, for example). So, we need some conceptual apparatus (or, at least, it seems to me that we need it) for thinking about those cases. I think we can easily talk about what 3rd parties have reasons to desire, but that doesn't resolve the interesting conflicts about what first parties should do/believe etc.

  10. (1) I haven't yet written much about this, except for the blog posts I've already linked to. 'Consequentialist Agents: Fittingness and Fortune' introduces the concept of fittingness (in a slightly different context), and 'Reasons-Talk and Fitting Attitudes' defends its usefulness (with particular comparison to views like yours).

    (2) Okay, I don't think I have anything new to add here. To sum up: you originally had a very powerful-looking argument to the effect that state-given reasons clearly couldn't be reduced to object-given reasons for action or desire, because we could construct cases where there wouldn't be any such object-given reasons for the apparent state-based reasons to reduce to. I responded that this argument works against the 'reasons for action' formulation but not the 'desire' version. So it's actually quite defensible to hold that "what looks like state given reasons to believe something is just an object-given reason to desire to believe it". Your remaining worry is that on this view being in accord with reason could cause the otherwise avertable end of the world. I agree that this is a consequence of the view, though I don't think there's anything worrying about it. That's because on my view, "being in accord with reason" just means having fitting attitudes, and to call an attitude 'fitting' is clearly compatible with its being disastrous (as we see in these cases).

    (3) I'm quite happy to talk about objective oughts. For example, suppose the demon never warns Jill that he'll destroy the world unless she believes that p. There's still a sense in which it's objectively fitting for Jill to desire that she form this belief, even though she's completely blameless for failing to have this desire (since she lacks the relevant information). For another example, it's objectively fitting to have true beliefs, even though what's reasonable (or subjectively fitting) to believe instead depends on the available evidence, etc.

    I'm also happy to play along and say that there's a sense in which we can say that Jill "ought" to form the belief itself, if by this we mean nothing more than that it is best (or, to introduce an explicitly first-personal aspect, that it is desirable from Jill's perspective, i.e. fitting for her to desire) that this occur.

    So there are a couple of 'objective oughts' that I can easily make sense of: one corresponding to the objective fittingness of the attitude itself, and another corresponding to the desirability of your possessing the attitude (that is, the objective fittingness of *desiring* this state of affairs). But it sounds like you want to talk about some notion of "ought" that doesn't correspond to either of these notions. I'm afraid I don't have any grasp of what you have in mind, if not either of these.

  11. (4) As just noted, we can accommodate questions about objective rightness within the fittingness framework, so I don't see any difference between us there. However, I'm a little confused by your talk of "subscripted" oughts. Paradigmatic examples would be things like what you morally ought to do, or legally ought to do, or X-ally ought to do (for some X). These then contrast with what you ought to do, all things considered. This latter isn't itself subscripted. There's what's fitting to do (or believe) in some subscripted respect, and then there's what's fitting to do (believe) simpliciter. I take it you also want to treat the "to do" (or "to believe") as subscripts; this strikes me as odd, but I'll return to the point shortly.

    First, let me address your claim that "oughts of belief can be made to conflict with oughts of action". I don't think there's any real conflict here -- we just need to be clear about what question we're asking. It may be true that I ought to take a pill that will cause me to believe what I ought not to believe. (That is, a fitting action may be to bring about an unfitting belief.) But there's no real "conflict" here: no single normative question to which there are conflicting or indeterminate answers. If we ask what you should believe, the answer is whatever's true (not-p, say). And if we ask what you ought to do, the answer is that you should take the pill that will cause you to falsely believe p. Where's the conflict? (Should you do as you ought to do, even when this causes you to believe something you shouldn't? Tautologically, yes.)

    Here is where I guess you want to make some kind of global evaluation of whether the agent is as they "ought to be", all attitudes considered. (In my framework, this would correspond to judging whether the agent as a whole is somehow globally fitting, rather than just locally assessing the fittingness of their various acts and attitudes.) It's not obvious to me that this is a sensible question. I can get a rough grasp of it, by way of the thought that some rational failings are more severe than others. Maybe we can develop the concept from there, but it isn't entirely clear. And -- relating this back to the initial disagreement -- insofar as I have any grasp of it at all, I don't see any reason to think that it's always best to be a globally fitting agent. (Maybe a demon will blow up the world unless we are as objectively misguided/unfitting as can be. That sounds coherent to me.) So, again, I don't see why we should be at all bothered by the theoretical conclusion that "being as we ought" (in this technical sense) would cause the avertable end of the world. That would only be a problem if "being as we ought" must be somehow desirable, but no opponent of state-given reasons should think that. (Sorry, I'm starting to repeat myself. I'll leave it at that.)

