Sunday, August 07, 2005

Does Truth Govern Belief?

I think Musgrave is right to draw a distinction between reasons for believing vs. reasons for the thing believed, even if I think he goes on to misapply it. But Nishi Shah, in 'How Truth Governs Belief', seems to have no sympathy for the distinction whatsoever. He holds that indicators of truth are the only relevant reasons when deliberating on what to believe. This strikes me as completely wrongheaded.

Shah begins:
Why, when asking oneself whether to believe some proposition p, must one immediately recognize that this question is settled by answering the question whether p is true? Truth is not an optional end for first-person doxastic deliberation, providing an instrumental or extrinsic reason that an agent may take or leave at will. Otherwise there would be an inferential step between discovering the truth with respect to p and determining whether to believe p, involving a bridge premise that it is good to believe the truth with respect to p. But there is no such gap between the two questions within the first-personal deliberative perspective; the question of whether to believe p seems to collapse into the question whether p is true.

He calls this phenomenon "transparency". I deny that it exists. Sure, we usually take truth to settle the question of what to believe, but there is no "must" about it. Consider the following scenario:

(Rage) Suppose that shopkeeper Apu suffers from a rare mental illness such that if he forms the belief that someone is a thief, his anger will overwhelm him, and he will violently attack (and possibly kill) the accused. Apu knows this, and knows that such behaviour is wrong, but he cannot help it. One day he finds some food missing from the store. He wonders whether Nelson - a young troublemaker - is to blame. Apu deliberates over the matter, within arm's reach of two large buttons, which activate and control his brain implants. Apu knows that if he hits the red button, the implants will cause him to form the belief that Nelson is a thief. But if he hits the blue button, they will cause him to form the belief that Nelson is innocent. (In each case, the implants will also wipe any prior beliefs that are inconsistent with this conclusion, including Apu's memory of pressing the button, and making the decision to do so, etc.)

Now, Apu is trying to decide what to believe. He decides that he has conclusive moral reasons to avoid falling into a violent rage, and so (given his illness) these are also reasons to believe that Nelson is innocent. However, upon considering the evidence, he begins to get the sneaking suspicion that Nelson is in fact guilty. Before the rage can overwhelm him, Apu desperately hits the blue button. He then believes that Nelson is innocent, and life goes on as normal. The End.


Isn't it obvious that, in the (Rage) scenario, the question of whether Nelson was truly guilty or not does not settle the question of what Apu should have believed? Certainly Apu, as I've described him, did not experience the phenomenon of "transparency". He was deliberating over what to believe, but for him this did not merely collapse into the question of what is true. Indeed, the result of his deliberations was that he should believe something that he (presently) thought was false. (That's why he needed to press the button, so the brain implants could do what is psychologically impossible by willpower alone.)

So I think Shah is simply wrong about transparency. There can be reasons for believing other than truth-indicative ones, and this may be recognized by agents who are deliberating over what to believe.

I instead agree with Dretske, who claims that false beliefs are bad in much the same way as foul weather is bad. We just happen to dislike both of them, and understandably so: they can muck up our plans! But there is nothing essentially normative about belief. A false belief is not necessarily a bad one, and even if our beliefs do usually aim at truth, it simply isn't true that they must do so -- as the (Rage) scenario demonstrates. It is an unfortunately narrow view of doxastic rationality that takes only truth-indicative reasons into consideration. Why exclude moral and practical reasons in such a way? The sphere of reasons is larger and more diverse than Shah makes out.

Some might respond that a belief must, by definition, be governed by evidence or indicators of truth; otherwise it's not a belief, but some other mental state: perhaps a 'delusion'. I don't think that's a helpful definition. The sort of mental state I'm concerned with is the type through which the agent represents the world as being a certain way, which influences the agent's behaviour, and which causes him to say things like "I think that such and such is the case". It is a separate question how these states are formed, or how responsive they are to evidence, and the answers to those questions will not affect whether something qualifies as the sort of mental state I've described. Now, I think the word "belief" is a pretty good name for this mental state, but if others disagree, we can call these states "sbeliefs" instead. What I'm interested in here is how agents (should) deliberate over how to represent the world, not the terminology we use to describe it.

Must sbelief aim at truth? Well, my earlier arguments suggest not. There are other (e.g. moral) sorts of reasons that can influence how we should represent the world to be. And that - rather than how we define the word 'belief' - is the important thing.

5 comments:

  1. Richard,

    I don't see that Shah denies the distinction per se. If what you mean by the distinction between reasons for believing vs. reasons for the thing believed comes down to something like the distinction between explanatory reasons (reasons why) and normative reasons (reasons to), Shah can say that only truth-indicative considerations can serve as reasons to believe. In fact, he is willing to admit that there are things that could not be taken as truth indicative but that could still serve as the reasons why someone believed what they did.

    Are you suggesting that there are good normative reasons to believe a claim that are not truth indicative, or do such considerations play the role of normative reasons only because the agent took them to be truth indicative?

    As for the Apu example, there is a distinction that Shah could appeal to, which others have discussed, between reasons to get yourself to believe and reasons to believe. The first sort are reasons to take whatever means are available to bring it about that you believe, and it is interesting that these typically involve some sort of manipulation. The second sort are, as Hieronymi puts it, such that when one finds them convincing one thereby believes (you can find her work on this on her page at the UCLA phil. dept. website). What some will say (and this is what I'd say) is that Apu's deliberation was not concerned (exclusively) with the theoretical question 'What should I believe about X?' but with the practical question 'What should I do about getting a belief about X?'. The first sort of deliberation might properly be resolved when a belief is formed, but the second might properly be resolved when an intention is formed. At any rate, I think Shah has the resources available for explaining why the Apu case is not a problem for the transparency view.

  2. Hi Clayton, nice to hear from you :)

    Musgrave's distinction is within the sphere of normative reasons. I don't mean to be discussing explanatory/motivating reasons here at all. Rather, I think we can have good normative reasons to believe something that are not reasons confirming the truth of the thing believed (i.e. not truth indicative). They might instead be moral or prudential reasons to believe it.

    "there is a distinction that Shah could appeal to that others have discussed between reasons to get yourself to believe and reasons to believe."

    That's an intriguing idea I hadn't come across before. It doesn't strike me as very plausible though. Presumably the reason to get yourself to believe is instrumental to obtaining said belief. Apu has reason to press the button and get himself to believe p only because he has a reason to believe p. If he didn't have a reason to believe p, why would he have a reason to get himself to believe it?

    (Sure, one can imagine other scenarios where the action or attempt is what really matters. Perhaps God would reward us for attempting to believe in him, no matter whether we're successful. But the (Rage) case is not like this. What matters is the belief, not the action.)

  3. Richard,

    I'm not sure you do dispute transparency. In the sense I have in mind, deliberating whether to believe that p entails intending to arrive at a belief as to whether p. If my answering a question is going to count as deliberating whether to believe that p, then I must intend to arrive at a belief as to whether p just by answering that question. I can arrive at such a belief just by answering the question whether p; however, I can't arrive at such a belief just by answering the question whether it is in my interest to hold it.

    The kind of example you raise is, as Clayton says, one in which the conclusion of deliberation is an intention or action, not a belief, and thus it is not deliberation about whether to believe that p in my sense.

    Now I agree that transparency doesn't establish that there are no non-evidential normative reasons for belief (I was too quick to draw this conclusion in 'How Truth Governs Belief'), but I do think such an argument can be made. See my "A New Argument for Evidentialism" available on my webpage in the 'works in progress' section.


    Nishi

  4. Hi Nishi -- thanks for the comment!

    If you stipulate that one must "intend to arrive at a belief... just by answering that question", then I guess I can't dispute transparency any longer. But doesn't such a stipulation make transparency a trivial -- and merely psychological -- fact? I would have thought the important issue is whether we can/should allow non-evidential reasons to influence our judgment of what we ought to believe (regardless of how we can/intend to obtain this belief).

    But I will definitely check out your new paper -- thanks very much for the pointer.

