Wednesday, September 21, 2005

Why Be Rational?

Kolodny argues that we don't have conclusive reason to be rational. He appeals to the "bootstrapping objection": rationality requires us to do what we have apparent conclusive reason to do. Suppose (for reductio) that we have conclusive reason to be rational, i.e. to do whatever rationality requires of us. Then we have conclusive reason to do whatever we have apparent conclusive reason to do. But that's absurd -- you can't just drop the 'apparent' like that. Apparent reasons need not be real reasons. From the mere fact that you take yourself to have a reason, it doesn't follow that you really do have one -- it's possible to be mistaken about such things! So the supposition is false: we do not have conclusive reason to be rational.

Still, it seems to me that we have a reason of sorts to be rational. And not just an instrumental one (e.g. the rule-utilitarian-style suggestion that rationality is the most reliable strategy in the long run). Here's why:

Suppose an angel comes along and offers you the following choice. She will alter your brain in one of two ways. (1) She will make you perfectly rational: you will reason perfectly well, and never suffer from weakness of the will, so you will always intend to do what you have apparent conclusive reason to do, and so on. Or (2) she will make you totally irrational but perfectly lucky. Knowing the future, the angel will set up your brain so that you always pick the option that you in fact have conclusive reason to choose. But this "choice" will not come about because you consciously reflected on those reasons. Instead, you will engage in terribly bad reasoning, and make fallacious inferences, all of which just happens by good fortune (and angelic tampering) to yield true conclusions.

Which option would you choose? It seems to me that there is something very much preferable about option #1. This shows that we have reason to be rational. It's a good disposition or characteristic to have. I wouldn't want to be incapable of rational reflection; that would suck much of the value out of life. It would be like losing your free will. (Indeed, on some conceptions, it would be to lose your free will!)

Now suppose the angel adds that, whatever you pick, she'll re-offer these options in one week's time. Now you should pick option #2. Why? Because you can't be sure which is the best permanent option, but if you temporarily pick option #2 then the angel will guarantee that your later decisions -- including next week's decision about which of these two options to keep -- are always right.

So, what will the angel make you pick for your permanent choice? Like I said above, it seems that option #1 is the better one. By restoring (and improving!) your rationality and free will, this option provides you with the most worthwhile life. Sure, you'll make some mistakes later on, but that's better than being a coincidentally perfect robot. Suppose I'm right about this, so that when you temporarily choose option #2, the angel would set you up to pick option #1 the second time around. What does this show? Well, the angel sets you up to pick whatever you have conclusive reason to do. So it would show that you have conclusive reason to pick #1 -- to make yourself rational.

Does this mean you have conclusive reason to be rational? This looks like the same thing, but perhaps it isn't. Making yourself rational seems like a kind of acting upon yourself. It might be like the distinction between believing P and bringing it about that you believe P. These are two distinct types of action, as I explain in my essay on reasons for belief. So we might grant the above argument and still deny that we have reason to be rational. On the other hand, if we accept that there can be 'external' or 'inaccessible' reasons, then we might say that we have reason to be rational (there is something to be said in favour of it), even if this is not a reason that we could actually recognize and act upon.

(Kolodny thinks it's "fetishistic" to value rationality for its own sake. But I don't know about that. It seems to me that there really is something intrinsically valuable about being rational in general, or having rational capacities, at least. Though perhaps I'd agree that it's fetishistic to care intrinsically about doing what is rational in any particular case.)

So I guess I haven't really made much progress here. Maybe you should go read Clayton instead. Then come back and leave me some helpful comments :)

10 comments:

  1. We have reason to follow that which is apparently rational based on the two alternatives that follow from this course of action. Either

    1. We are acting with true rationality, and the appearance is justified. In this case, we gain the benefits of having figured out some aspect of the world around us.

    2. We are acting with only the semblance of rationality, but it is not ultimately justified. Here, we will not gain the benefits of true rationality, but we do at least stand the chance of learning from our mistakes. If we had not even attempted to be rational, our actions would be far less likely to succeed, and we would be completely incapable either of learning from our mistakes or of crafting better plans for the future.

  2. I've read Clayton but I can't get the characters straight!

    Suppose that rationality is intrinsically valuable. Seems plausible enough. The question is whether this value should or could play a role in deliberation. I happen to think that rationality is counter-deliberative. It cannot function in proper deliberation.

    Let us say that the rational choice is the choice that sides with what the agent not unreasonably takes the balance of reasons to require. Whenever the agent is in a choice situation, she will not be able to distinguish in thought the choice that is on balance best supported by the reasons from the one that is best supported by the reasons as she takes them to be. But which one is the BASIS of her choice from her perspective? The reasons and not the reasons as she takes them to be.

So, in one kind of situation (the good one), in which that which is required by the reasons = that which is required by the reasons as the agent takes them to be, the rational agent will conform to both rules:

    (1) Do what the reasons require.
    (2) Do what the reasons as you take them to be require.

In the bad case, it is not the case that what the reasons require = what the reasons as you take them to be require. However, its being a bad case is counter-private (i.e., knowable only to those not in it). Thus, from the agent's point of view, she conforms to both rules but complies with (1). If you say she made the right decision or did what she ought, you give (2) the role of determining which choice is correct, but the agent would reject this. [Moreover, the theory of reasons will be incoherent, since by hypothesis what (1) and (2) require will be different, but you'll be committed to saying that what the reasons require is following (2).]

    To remove the incoherence, you have to have a conception of reasons according to which (1) and (2) do not diverge (in their own ways, Jackson and Dancy try to do this). However, I think this is a step in the wrong direction because from the point of view of the rational agent, what matters or what we should care about are the considerations that are reasons and not considerations that serve primarily to determine what choice would be rational. To remove the incoherence, you have to override the agent's conception of what could justify her actions.

I have a name for those kinds of reasons. They are 'alienating reasons': reasons that no rational agent could recognize as reasons and comply with in deliberation. I think once you allow rationality to serve as a reason in its own right, you usher in alienating reasons or counter-deliberative reasons. That's bad. Since no rational agent could act on them, your theory doesn't provide guidance. Ironic, since that is the alleged problem with the theories of Scanlon, Moore, and the objectivists: their theories do not provide guidance to rational agents.

    Sorry, this was written under duress but I think I can back up each of these claims.

  3. 1) Your argument seems to be a slam dunk for option 2 (not 1): if you are perfectly lucky you will by definition make the right choice the next time the angel asks you - far superior to having rationality. I think you need to cook the argument some more.

    2) Second, both options to an extent involve killing the old you - something that would concern me in itself.

    Otherwise the previous two posts make some good points.

  4. Paul,

    I'm obviously sympathetic with Kolodny's position and was wondering if you might elaborate on your point that:

    (1) We have conclusive reason to do whatever we appear to have conclusive reason to do.

    I think it would be right to say that whosoever acts so as to satisfy the demands of what the reasons appear to require will be shown to be rational, but that doesn't mean that this response is correct. Not obviously.

    Don't we often distinguish between an excusable failure to satisfy the demands of reasons and choices that are justified by appeal to those reasons? I don't see that this distinction can be drawn in the way we typically draw it given (1).

    You might be right that the difference could not make a 'practical' difference. The difference between apparently conclusive and conclusive reasons must be hidden from the agent, but the difference might make a 'theoretical' difference. That is, it is a difference that matters to theorists. In turn, this allows us to formulate better theories of permissible action, etc., but we have to grant that occasions will arise when people are not in a position to satisfy the demands of such a theory and should be excused rather than blamed.

  5. I'm in a rush, so have only skimmed the comments so far, but I note Paul suggests the following:

    (a) a "reason" is just "a representation that justly motivates action."

    I don't think this is the sense of "reason" that I'm talking about. As described in my post, apparent reasons will justly motivate action. But I take a "reason" to be a more objective notion, something we might be unaware of and later come to discover. That is, I take a "reason" to be "a fact that counts in favour of an action".

    Hopefully that makes it clearer why we shouldn't want to say that we have (factual) reasons to do whatever we have apparent reason to do.

  6. Mmm, I have skimmed the comments. The post is definitely interesting, but I am struck by the sense that this degenerates into a determinism of sorts. Precisely because we have both rational and irrational motives, there is a plurality that is eviscerated by excluding the other. I may be wrong, or have misunderstood the argument, and that's alright, but I am inclined to think that rationality is a good not only in itself but because it empowers us to a self-control in the face of irrational modalities. In sum, my question is: are we speaking merely abstractly, or are we speaking in the context of being human? If we are, then necessarily there is something to be said about minimizing the qualities that make us human, i.e. our implicit irrationality.

  7. "isn't (a) the way we really understand reasons?"

    No, not as philosophers use the term, anyway. Suppose I falsely believe that my food has been poisoned. This belief motivates me to throw my food away. I take myself to have a reason to throw the food away. But I don't really have any (fact-based) reason to do this. The food isn't really poisoned. While I thought I had a reason to chuck it, I was mistaken. I really have most reason to eat the food instead (supposing I'm hungry, and it is nutritious, etc.).

    In Sartre's example, there are presumably strong reasons on either side. The youngster has reason to stay (due to the fact that this would help his mother), and reason to go (assuming it's a fact that he would thereby be doing something glorious and heroic with his life). He weighs these up, and decides which reason he takes to be more pressing.

    He might make a mistake. Suppose he leaves, but it turns out the resistance is corrupt and he ends up becoming just another bureaucrat. Then he didn't have reason to leave his dying mother after all. He thought he did, but he was mistaken. The (fact-based) reasons aren't always as we take them to be. But so long as he reasoned well, based on the apparent reasons that presented themselves to him, he might still be rational, despite not choosing what he had most (fact-based) reason to do.

    Hopefully the distinction I'm pointing to here is reasonably clear. If you don't think the words "reason" and "rationality" are quite right for the ideas I have in mind, that doesn't really matter. Pick another word and stick it in its place. What matters is the concepts themselves. Given how I'm using the terms, do we have reason to be rational? Are there facts which count in favour of acting on merely "apparent reasons"?

    Jason says: "If we had not even attempted to be rational, our actions would be far less likely to succeed,"

    While this may often be true in practice, and offers instrumental reasons to be rational (i.e. it's a good strategy in the long run, as it is what will most likely lead us to do what we have most reason to do), it doesn't seem necessarily true. After all, in the main post I described a scenario where you could be more successful by choosing option #2 and making yourself irrational! If rationality is just a good means to other desired ends, then this leaves open the possibility of alternative means to those same ends. So the real question is whether rationality is valuable in itself.

    But I think I agree with Clayton that even if rationality is intrinsically valuable, it still cannot play an intrinsic role in deliberation. (For one thing, there seems something a bit off about reasoning: "Oh, I can't decide between X and Y; I suppose X would be more rational; oh, so that's another thing in its favour! I shall do X!") However, I'm not so sure that counter-deliberative reasons are such a terrible thing to admit into our ontology.

    As I use the term 'reason', I think the mere fact that a state is intrinsically valuable would guarantee that we have reason to bring this state about. So if rationality is intrinsically valuable...

  8. Heh, that's quite clever! I don't think it works though. Kolodny is not claiming that we never have conclusive reason to do what is rational. It's certainly possible for (fact-based) reasons and rationality (apparent reasons) to coincide. Suppose an act A is both rational and supported by conclusive reasons. Then you ought to do what is rational, namely A, but the reason why you ought to do A is that A is supported by conclusive reasons, rather than that A is rational. The fact that A is rational is somewhat superfluous here; it does not provide any further reason to do A. That's all that Kolodny is saying.

    So Kolodny's claim does not entail that we never have conclusive reason to do what is rational. Instead, he is merely claiming that we don't necessarily have reason to do what is rational. The mere fact that A is rational does not in itself provide conclusive reason to do A; but this leaves open the possibility of there being other reasons to do A.

    Hope that clears things up a bit.

  9. "...rationality requires us to do what we have apparent conclusive reason to do."

    That's the bad premise right there. Rationality requires no such thing. Rationality simply requires a willingness to revise our critical preferences (whether these are preferences in beliefs, actions, values or whatever) in the light of evidence and argument. Nothing need be "conclusive" at all, and indeed "conclusiveness" is flatly impossible. All we need is good reasons to be rational that have held up well in the light of criticism up to now. Grasping for anything more solid than that is just a mad quest for the philosopher's stone and a hop-skip-jump into irrationalism.

  10. "Kolodny argues that we don't have conclusive reason to be rational."

    I agree.

    That said, I have faith in rationality, which is why I wish to be rational (to answer your question).

    Do you have faith in reason?

