Saturday, August 27, 2005

Inquiry and Deliberation

Returning to the question of whether truth governs belief, I've been thinking about the position of philosophers like Adler, Shah, and now Velleman too, who claim that evidentialism is contained within the very concept of belief. A major motivation for this position (let's call it "conceptualism") is the phenomenology of first-person doxastic deliberation: when deliberating about what to believe, the question of what to believe seems to collapse into the question of what is true. If I offer you a million dollars to believe the world is flat, this practical incentive doesn't carry any weight in your deliberation -- the only thing that can influence the immediate outcome of your deliberation is evidence concerning the truth of the belief in question. That's the story. I want to argue that it rests on a question-begging definition of 'deliberation'.

Here's the simple counterexample: Suppose you have a pill that, if swallowed, will cause you to believe the Earth is flat. Now, given my generous offer, you ask yourself what you ought to believe. After reflecting on the costs and benefits of having a false belief in this case, you conclude that you (all things considered, rationally) ought to believe that the earth is flat. So you take the pill, and so receive the million dollars.

Surely this is a case of deliberating about what you ought to believe. But it's a case where you took practical incentives, and not merely truth-indicative evidence, to be relevant reasons influencing your decision. So "transparency" does not hold in all cases of doxastic deliberation after all. There is a special type of deliberation, which we might call rational inquiry, which is (by definition) exclusively concerned with the pursuit of truth and knowledge. But not all deliberation about what to believe need be so constrained.

Now, Shah responds to this line of argument by redefining 'doxastic deliberation':
In the sense I have in mind, deliberating whether to believe that p entails intending to arrive at a belief as to whether p. If my answering a question is going to count as deliberating whether to believe that p, then I must intend to arrive at a belief as to whether p just by answering that question. I can arrive at such a belief just by answering the question whether p; however, I can't arrive at such a belief just by answering the question whether it is in my interest to hold it.

In the practical counterexample, as a result of your deliberation you do end up intending to believe that the earth is flat, it's just that to realize this goal you must take the extra step of swallowing the pill. You can't achieve the goal through deliberation alone. But why should that matter? In any sort of practical deliberation, you end up with an intention to perform some further action. If I decide that I ought to give to charity, I cannot achieve this goal through deliberation alone - I need to actually go out and do it! That doesn't mean I wasn't deliberating about whether to give to charity. So why should it mean that I wasn't "truly" deliberating about whether to believe that p? It seems arbitrary and ad hoc to restrict doxastic deliberation in such a way.

No doubt the conceptualist will want to reject my analogy with practical reasoning -- after all, their whole point is that theoretical reasoning is of an entirely distinct nature. They grant that one can deliberate practically about what to do regarding one's beliefs (e.g. whether to take the flat-earth pill), but they want to distinguish this from theoretical deliberation over what to believe.

I think this separation is artificial, however. In the flat-earth case, the only reason you need to take external action (i.e. the pill) is because it's psychologically impossible to believe at will for non-evidential reasons. If you could change your beliefs by sheer force of will, you presumably would do so. It's just that you lack the capacity. But that fact doesn't seem to be of any great normative significance. Suppose you've already taken another pill, which temporarily gives you precisely this capacity. Then you could come to believe that the earth is flat by deliberation alone. As soon as you conclude "I ought to believe that p", you will thereby find yourself believing p, no further action required.

So doesn't that serve as a counterexample to transparency? Further, I don't see why the stipulations about 'further action' should really matter in principle. It should be enough that one can deliberate and come to the conclusion that one ought to believe that p, for non-evidential reasons. This is surely the crucial step. How you intend to acquire this belief, or even whether it is psychologically possible for you to do so, are further questions that have no obvious relevance here.

Now, there are deliberative contexts where transparency holds without fail: namely, those I described above under the moniker of "rational inquiry". But this holds trivially. If we define 'inquiry' as the single-minded pursuit of truth, then it should come as no great surprise that we accept only truth-indicative reasons when deliberating in a context of inquiry. Transparency is here built into the very definition of this deliberative context.

So here's the problem for conceptualists: If they define theoretical reason broadly enough to capture any deliberation over what one ought to believe, then it will include practical reasons as in the case of the flat-earth pill. Alternatively, if they restrict theoretical reason to the specific practice of inquiry, then they have simply built transparency into the definition, and can derive no interesting conclusions from this tautology. Neither extreme can support their claims. But attempts to stake out a middle ground by appealing to our psychological capabilities just seem arbitrary. Why should the nature of a deliberation be affected by how we intend to implement our conclusions?

10 comments:

  1. Hi Richard,

    We both agree that one cannot arrive at a belief that p just by deliberating on whether it would be beneficial to believe that p. I'm willing to grant for the sake of argument that if this is merely a psychological fact about us, then it doesn't tell us anything about what reasons there are for belief. But, as I said before, I have argued that our inability to believe for practical reasons is not a mere psychological inability. It is, or so I have argued, the reflection of a normative commitment to truth, a commitment that is expressed in applying the concept of belief. Therefore, in arguing against me, one can't help oneself to the assumption that the inefficacy of practical considerations on belief is a mere psychological fact about us.

    Maybe my arguments aren't any good, but they don't depend on stipulations about what counts as doxastic deliberation.

  2. I don't know much about what's currently being said in this field. I've never been convinced, however, that we do have an inability to believe for practical reasons -- it seems to me to be a case of being misled by the particular examples one chooses. There is, however, an entire field of practical reasons that do appear to have a real effect on belief, namely, moral reasons. And I'm not convinced that it is always impossible to come to believe p by deliberating on whether it is morally beneficial to believe p. Even outside of moral reasons, there are pragmatic reasons closely related to inquiry that can't be dismissed. On Humean principles, for instance, one can argue that we often believe things that are useful in a currently unrivaled way for solving problems we consider important; having found that they are useful in this way, we sometimes back-label them as 'true', but truth, as such, wasn't relevant to the actual process of coming to believe them at all. Instead we were just compelled by the force of imagination, on the basis of purely practical considerations. If you choose examples where the practical reasons are remote enough from those usually used in inquiry, it might well seem that we have an inability to believe on practical reasons; but it has never been clear to me why one should hold this given other sorts of practical reasons. Clear recognition of truth might always override practical reasons; but in the muddy waters in which we often find ourselves deliberating, it's not clear to me that practical reasons don't have room to operate.

    But as I said, I'm not up on the literature on this subject. Is there any standard sort of reply to this? I have a sort of quasi-professional interest in this, because if it were true (whether logically or psychologically) that we can't believe for practical reasons, large sections of Kant and Hume would collapse -- certainly an interesting result.

  3. Nishi, fair enough, I will need to engage with your other arguments. I had understood them as concerning what we might conclude from our deliberations. So that's why I thought them question-begging, since it seems clear that we can conclude that we ought to believe something on the basis of practical reasons. But you can avoid this by instead talking about our ability to arrive at a belief just by deliberating (rather than talking about the conclusions of our deliberation). Then the concerns I express here miss the mark.

    Just out of curiosity: do you really think that, if you were offered a million dollars to believe the world is flat, this would not give you any reason whatsoever to believe it? Because that does sound awfully odd.

    Brandon - that sounds interesting. Can you offer a (plausible) concrete example where someone comes to a belief via deliberation over its moral benefits? I'm having trouble seeing how this could happen.

  4. Nishi, I've been re-reading your new paper on evidentialism, and it does look to me as though your argument there rests on such a stipulation.

    You apply the Deliberative Constraint on Reasons to the specific case of beliefs to yield:

    "B3) R is a reason for X to believe that p only if R is capable of disposing X towards believing that p in the way characteristic of R’s functioning as a premise in doxastic deliberation."

    You then assert:
    "the attractiveness of believing that p cannot similarly engage one’s doxastic deliberation as a consideration in favor of believing that p. This is because the attractiveness of a belief does not tell for or against the truth of p, and the question of p’s truth occupies the sole focus of our attention in doxastic deliberation. When we ask ourselves the deliberative question whether to believe that p, this question gives way to the question whether p is true, such that the only way for us to answer the former question is by answering the latter question... Transparency, when combined with the deliberative constraint on reasons, thus rules out pragmatic considerations from being our reasons for belief."

    But your conclusion only follows if we artificially limit what is to count as "doxastic deliberation". I'm happy enough to accept the Deliberative Constraint on Reasons because practical reasons can be recognized and acted upon in our deliberations about what to believe. I might take the flat-earth pill precisely because I see the million-dollar incentive as providing me with a reason to believe the earth is flat. Pragmatic reasons are not in conflict with the DCR, unless we artificially restrict the range of "doxastic deliberation" to those contexts in which transparency occurs. But I see no reason to make such a restriction.

  5. Hi Richard,

    Keep reading. I think that deliberation that concludes in an intention or action is practical deliberation, not doxastic deliberation, and I don't think it is artificial to distinguish these two types of deliberation. But my argument that the norm of truth is embedded in the concept of belief later in that paper doesn't depend on drawing any such distinction.

  6. Richard: At least prima facie, people often disbelieve things for purely moral reasons; thus, for instance, someone might refuse even to consider (regardless of any evidence) a claim that seems to them morally tainted (by racism, for example). If some scientist were to come up with some evidence, however apparently good, that incest is good for people's health, it seems very plausible that a lot of people would reject the claim precisely because they would find it morally unacceptable even to think of incest as good at all. Beliefs that are moral, or that have some direct connection to morality, are often rejected as a result of deliberation (however quick or slight) on whether it would be morally detrimental to hold them; or so it seems. It's a common explanation, too, for social reactions against evolution: some people see it as morally dangerous, and therefore reject it. I'd be more concrete, but I'm not sure what secret conditions are lurking behind your parenthetical 'plausible'! You're more familiar with the literature than I am; I can only guess at whether people might not have already considered a possible case and rejected it.

    Further, there are at least claims by people that they believe something for purely moral reasons; if someone accepts something like Kant's moral argument for the existence of God, that would involve believing that something exists purely because it is morally beneficial to do so. (Note that it is irrelevant whether the argument is a good one; the only thing relevant is whether someone thinks that it is, in fact, right that believing God to exist is so morally beneficial that failure to do so is a moral failing.) If Kant is right that there can be purely regulative beliefs, then it is automatically true that we can believe (some things) on practical considerations. Likewise, if failure to believe in something (God, the natural goodness of human nature, the hope of liberal democracy, or what have you) is genuinely regarded as morally despicable, people who think they have good reason not to believe it might be unable to believe it, but it isn't clear from an armchair that people without such reasons would be able to suspend judgment. There's at least prima facie something to be said for the Humean view that what leads us to believe is the force with which something strikes us; and moral considerations can pack a lot of punch. If someone in the course of deliberation comes to believe that a particular kind of disbelief is a sign of moral perversion, it's not obvious that that would not be sufficient for most people (who make no fine distinctions between practical and evidential considerations) to believe.

    As an additional consideration, it's difficult to see how we could explain some phenomena of self-deception and temptation if believing for practical reasons were genuinely impossible (whether logically or psychologically); most accounts of self-deception and temptation that are even remotely plausible appear to imply that practical considerations do have an effect on what people believe or disbelieve -- what people believe or disbelieve is often affected by their emotions, and these can be directly worked on by practical considerations. (Not that it's particularly easy to explain self-deception even if we allow that people believe for practical reasons. But some self-deception and temptation certainly are the result of deliberation; and if it is not to be explained by practical enticement leading to certain beliefs, it's hard to see how they could be explained. Eve believes the fruit will kill her; someone whispers in her ear that it might not be the case; she's not in a position to know for sure, and has good reason to think this new claim is false, but she sees how delicious the fruit looks; so she accepts the view that the fruit is safe, and acts on it.)

    So, at the very least, it can't merely be assumed that we don't ever believe on purely practical considerations, even if it's hard to find a clear-cut case in as murky an area as real-life deliberative inquiry, where everything but the kitchen sink gets tossed into the mix. There needs to be some argument for it. And it's not required, of course, that every sort of practical consideration have any effect; people might not believe for money, but that wouldn't tell us anything about whether they would believe for morals.

    But it could be that I'm just inclined to give more credence to these, because I don't think there is any one phenomenon identified by the label 'belief', only a loose collection of phenomena that have similarities, but need not all act according to exactly the same principles. Because of that, to deny that in any case reasonably falling under the label 'belief' we ever believe on purely practical considerations strikes me as an extremely strong claim. It requires saying that people can't do it, however you gloss belief (within the ordinary limits of the word); that they can't do it, whatever the inducement; and that they can't do it, however irrational they may be. That's a strong claim. I'm curious as to what sort of reasoning is behind it. Is there any [plausible ;) ] argument to that effect?

  7. Hi Brandon,

    Here is how I see the dialectic. There are clear cases of deliberating whether to believe that p in which this deliberative question whether to believe that p is transparent to the question whether p is true. For example, I think empirical beliefs about observables are clearly governed by transparency. But I don't think there are any clear cases of deliberating whether to believe that p in which the deliberative question whether to believe that p isn't transparent to the question whether p is true. It is unclear whether the cases you mention, as you yourself admit, constitute cases in which people genuinely arrive at the belief that p solely because they have concluded that it would be morally right to believe that p.

    I think the correct methodology is to first explain the clear cases, and then see how well the hypothesis that best explains them can handle the unclear cases. I think that the hypothesis that best explains the clear cases is that exercising the concept of belief involves accepting the norm that one's belief that p is correct iff p, and that cases that may seem to conflict with this hypothesis can be explained away in a plausible way.

    This, at least, is the implicit strategy of my paper 'A New Argument for Evidentialism', which is available on my website.

  8. Nishi, I outline an alternative explanation of transparency here. (Also see here for a potential counterexample to DCR.)

  9. Nishi, thanks, that's very helpful. I'm much more skeptical about whether there are clear cases of deliberation with transparency in your sense; there are obviously clear cases where such reasons arise, but it's very tricky in any real-life case to be certain that any given deliberation has primarily concluded on the basis of these reasons. This is just the same murkiness that arises in cases of practical considerations; it's a simple fact that people do bring up such considerations during deliberative inquiry. It's just not clear what role these considerations actually play. And while I don't think it's the best way to see it, it is possible to hold that only practical considerations ever actually decide the issue, and to assume these to be the clear cases -- in which case the transparency cases are the unclear ones (such a view is implicit in cases involving a hermeneutic of suspicion - one assumes that the conclusion is due to self-interest rather than reasons transparent to truth, unless it can be proven otherwise; as far as I can see there's nothing inconsistent with the facts in doing so -- that reasons transparent to truth are involved is undisputed, it's just denied that they are the deciding factors). So it's possible to argue that which cases are the clear cases is simply an artifact of one's prior assumptions about what the nature of deliberation must actually be.

    But I should say that this is devil's advocacy more than anything I believe myself; I've just never seen any arguments for the view that in deliberative inquiry we can't come to believe on practical considerations, which is why I asked the question.

  10. Here is how it seems to me. When someone tells me that the train is getting into Amherst at 2, and I ask myself whether to believe what he said, this question immediately gives way to the question whether what he said is true. And when I check the train schedule and conclude from this that what he said is true, it seems that I thereby arrive at the belief that the train is getting into Amherst at 2. This is what I had in mind by a clear case.

    Now maybe there are philosophical arguments that would get us to doubt the phenomenology of such cases, and thus undermine our justification for taking the phenomenology at face value, but absent any such argument, it seems that we have a right to take the phenomenology at face value. In any case, it is a clear case in the sense that we have firm pre-philosophical convictions about what it is like to go through such a deliberation.

