Wednesday, June 18, 2008

Suspecting Wishful Thinking

It's curious how often people accuse each other of rationalizing, or holding a position "because they want to believe it" rather than because they have genuine reasons for thinking it true. Sometimes people do engage in wishful thinking, of course. (Sometimes they even admit as much -- e.g. some theists explicitly cite pragmatic reasons, such as 'comfort', for believing in God.) So sometimes such accusations may be warranted. But often they're simply a result of ignorance of the other person's reasons -- if you don't really understand why they hold the position they do, it's all too easy to imagine that they do so for no good reason at all, in which case less charitable explanations will suggest themselves. Such dismissal carries an obvious epistemic risk, however, since you lose the opportunity to learn about their reasons if you assume from the start that they don't have any. I can think of two such misaccusations from my own experience:

(1) Consciousness. It's a common trope that "people" reject physicalism in favour of some form of dualism simply because they want to believe that humans are "special". No doubt this is true of some people. But it certainly isn't true of every critic of physicalism (much though some might like to pretend otherwise -- using a straw-man foil for rhetorical effect). If anything, I would prefer that physicalism be true (as you can see from my 'Wishful Thinking Alert' a few years ago). I've just come to the honest conclusion that the weight of arguments is against it. You can accuse me of bad judgment in this respect, but you can't accuse me of wishful thinking.

(2) Disputing alleged 'obligations'. Eric Schwitzgebel recently wrote:
I suspect that if, indeed, ethicists don't tend to consider voting a duty, that may be post-hoc rationalization rather than genuine moral insight.

But, again, I personally enjoy voting and similar political acts (serving on a jury, etc.), so my belief that it is not morally obligatory certainly isn't a rationalization of personal preference.

I find the above quote particularly strange, because you would think the reasonable prior assumption would be to favour the experts over folk opinion in case of disagreement. (Several commenters raised similar concerns about Eric's project, here.) Surely if anyone has reasons worth considering on a controversial moral question, it's going to be moral philosophers! So that strikes me as an especially inadvisable case of alleging rationalization.

Are there any general rules on offer here? Wishful thinking is an epistemic vice -- a form of irrationality -- so I guess one's readiness to make such an accusation should reflect one's prior judgment of how unreasonable (how poor an epistemic agent) one's interlocutor is.

13 comments:

  1. I'd go further. This kind of claim is totally useless. Why is the source of someone's views relevant to any question we care about at all? Either the view can be backed up by a convincing argument or it cannot. In the first case, it ought to be accepted regardless of whether it is actually held out of wishful thinking; in the second case, it ought to be rejected, ditto.

    Taking interest in the "explanation" for someone's view is nothing more than an invitation to slip into genetic fallacies.

    The only exceptions are a) when you believe that someone is ignorant of the source of their view, and would rationally come to reject it if they knew that source (e.g. "hey proletariat, you only think the relations of production are just because of all these ideologies"), and b) when you're doing some kind of silly bayesian thing and trying to figure out how much evidentiary weight someone else's opinion has for your own beliefs.

    But a) just reduces to the truth of the opinion (it is essential to calling a set of beliefs an "ideology" in the Marxist sense that those beliefs be false), and so the origin of the belief is again irrelevant. And b) is subject to the usual difficulties with such reasoning.

  2. My take on rationalizing is a little different: I don't see it as something I can accuse other people of to score points in an argument, but as a trap that I can easily fall into myself. Clever people are excellent rationalizers; I am (I flatter myself) clever; therefore it stands to reason that I will find myself rationalizing my prejudices unless I watch out. If you think that argument is typically a cooperative pursuit aimed at mutual better understanding, then it's good to remind your interlocutors when you think they're at risk for rationalizing, even if you don't have definitive proof that they are.

    I admit that some arguments really are combative. If I'm publicly debating somebody who believes all contraceptive use is wrong, I don't expect to learn much about their side of the argument. I've heard their side of the argument before and I've got credence .99999999999999 (at least) that their arguments make no sense. In that case, "you're rationalizing" really is an accusation. But I'm also unlikely to get evidence that it's a false accusation, at least if we decide what's "evidence" according to my subjective standpoint.

    The upshot is that I'm much more worried by people who treat all arguments like a form of combat than I am by people who suggest that their interlocutor is rationalizing. And I'm worried about the combative thing even though in a special few cases, it makes sense to treat arguments as combat.

  3. I think in philosophy this is pretty common, if only because at best there are only weak reasons for a position, often with weak reasons on the other side.

    So take an issue like libertarian free will. Outside of religious commitments, what is there to recommend it? People make arguments, of course, but in those debates it often seems like the arguments are more a defence of what one wants to be true than reasons to believe it.

    I think, though, that we treat such positions unfairly by viewing them so negatively. (Basically apologetics: why it is rational to hold X, rather than why one should hold X.) There's nothing wrong with acknowledging that many of our beliefs aren't primarily held for rational reasons. One could even go as far as Davidson and say that many beliefs simply aren't justified by evidence. (He says this primarily of first-person accounts, but I think it applies more broadly.)

  4. Paul, I think it is useful, though, as part of an argument to show the genealogy of a belief. If someone sees that they hold a belief for some irrational reason, that may well change the value of the evidence for them. Often there are multiple defensible views; deciding between them then becomes an issue of values. And genealogy (such as pointing out wishful thinking) is amazingly helpful there.

    This is what I was getting at by distinguishing between arguments defending a view as rational versus promoting a view as what we should do.

  5. Clark - I think considerations of meta-coherence deflate that distinction. (If you really think that multiple conflicting views are equally well-supported or likely to be true, then you should split your credence between them.)
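
    A toy illustration (the notation is mine, just to make the point vivid): suppose conflicting views $V_1$ and $V_2$ strike you as exactly equally well-supported. Then

    $\Pr(V_1) = \Pr(V_2)$ and, since at most one can be true, $\Pr(V_1) + \Pr(V_2) \le 1$, so $\Pr(V_1) \le 0.5$

    which is too low for outright belief in either view.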

  6. Richard, I think you're missing the point I was making. What we believe isn't always what the evidence judges most probable (setting aside for the moment how to judge that). Rather, it is what has a reasonable likelihood, is defensible as a rational belief, and is something we value.

    This is why (IMO) persuasion is just as important in philosophy as reasons.

    Ultimately my point is that, as a practical matter, our beliefs aren't purely rational, nor simply whatever is judged most probable. Since I don't think beliefs are purely volitional, all this says is that the causes of belief aren't purely a matter of argument strength.

  7. Yes, and my point is that what you describe is an irrational agent. Merely saying that people DO these things is no defence of them.

  8. Why is the source of someone's views relevant to any question we care about at all?

    Providing a causal explanation of a person's beliefs can show those beliefs to depend counterfactually on events unconnected to the truthmakers of the believed proposition. The more evidence we have for this explanation, the less probable the proposition conditional on the explanation.
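
    In rough Bayesian terms (writing E for the debunking explanation; the notation is just shorthand for the claim above): by the law of total probability,

    $\Pr(P) = \Pr(P \mid E)\Pr(E) + \Pr(P \mid \neg E)\Pr(\neg E)$

    so if $\Pr(P \mid E)$ is low, then the more our evidence pushes $\Pr(E)$ up, the further our overall credence in P is dragged down toward $\Pr(P \mid E)$.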

    Either the view can be backed up by a convincing argument or it cannot

    Our capacity to know the truth by means of philosophical argument is very limited, and expectedly so given our evolutionary origins. A rational epistemic agent will take this fact into consideration, and when discussing philosophy will rely much more often on debunking explanations and much less often on ordinary arguments. Facing an opponent who appears to have a flawless argument for a proposition that is transparently self-serving, this agent will believe it more likely that the argument has a flaw that she cannot spot than that an antecedently improbable view not formed by a truth-tracking process miraculously happens to be true.

    Are there any general rules on offer here? Wishful thinking is an epistemic vice -- a form of irrationality -- so I guess one's readiness to make such an accusation should reflect one's prior judgment of how unreasonable (how poor an epistemic agent) one's interlocutor is.

    We were selected to deceive ourselves. Some people do it more often than others, but all of us do it to a considerable extent. In deciding whether someone's position is the result of wishful thinking, it is these general facts about the human species that we should primarily rely upon; the specific character traits of our interlocutor are of secondary importance.

  9. Pablo -- your first response to Paul is precisely the exception "(b)" he'd already explained in his comment.

    "Facing an opponent who appears to have a flawless argument for a proposition that is transparently self-serving, this agent will believe it more likely that the argument has a flaw that she cannot spot than that an antecedently improbable view not formed by a truth-tracking process miraculously happens to be true."

    The relevant feature here is not the 'self-serving' nature of the belief, so much as the fact that the view is 'antecedently improbable'. If we have good reason to think a conclusion false, that's also good reason to think one of the premises is false.

    Having said that, I do agree that in very clear cases of biased inquiry (e.g. think tanks, industry-sponsored "research", religious apologetics) we should be especially skeptical of their arguments. But in this post I meant to discuss ordinary cases which are not based on any such particular facts about one's interlocutor (their interests, incentives, etc.), but on the mere fact that they hold a position X that some people might conceivably hold due to wishful thinking. In other words, this is not any special circumstance, but a perfectly ordinary circumstance that probably applies universally.

    "In deciding whether someone's position is the result of wishful thinking, it is these general facts about the human species should that we should primarily rely upon; the specific character traits of our interlocutor are of secondary importance."

    Note that, as Robin Hanson points out, the worst bias of all is meta-bias (i.e. dismissing others' disagreement on the assumption that they're just biased).

    Also, given that the kind of situation I'm talking about is ubiquitous and symmetrical, applying merely 'general facts' about humanity won't license differential treatment. (They apply just as much to oneself as to one's interlocutor.)

  10. your first response to Paul is precisely the exception "(b)" he'd already explained in his comment.

    I don't think so, since debunking explanations are not "silly bayesian thing[s]". Nor do I believe these explanations are "subject to the usual difficulties with such reasoning." Paul thinks debunking explanations are totally useless. I think they are highly useful.

    The relevant feature here is not the 'self-serving' nature of the belief, so much as the fact that the view is 'antecedently improbable'. If we have good reason to think a conclusion false, that's also good reason to think one of the premises is false.

    I think this is incorrect: both the causal origin of the belief and its antecedent probability are relevant. But I'm afraid I'm the one responsible for the confusion. I wrote:

    "Facing an opponent who appears to have a flawless argument for a proposition that is transparently self-serving, this agent will believe it more likely that the argument has a flaw that she cannot spot than that an antecedently improbable view not formed by a truth-tracking process miraculously happens to be true."

    This claim is muddled; it conflates (a) the reasons for dismissing an argument provided by the antecedent improbability of its conclusion with (b) the reasons for dismissing this argument provided by a debunking explanation of the interlocutor's belief in that conclusion.

    Suppose that you argue for P; that I come up with an explanation E for your belief in P; and that it follows from E that you believe P, not because P is true, but as a result of some truth-insensitive process of belief formation, like self-interest or wishful thinking. The probability that your argument supports P conditional on E plus my background knowledge K will then be lower than the probability that your argument supports P conditional on K alone. So E can give me a reason to dismiss your argument for P even if I wouldn't be warranted in doing so on the antecedent improbability of P alone.
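
    (In symbols -- with A as my shorthand for 'your argument supports P'; nothing hangs on the abbreviation -- the claim is that $\Pr(A \mid E \wedge K) < \Pr(A \mid K)$. And $\Pr(A \mid E \wedge K)$ can be low enough to warrant dismissing the argument even when the antecedent improbability of P given K alone would not.)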

    We can see this point more clearly by comparing two cases, both involving arguments for propositions which are equally antecedently improbable. In the first case, I have an excellent debunking explanation of my interlocutor's belief; in the second case, I have no such explanation. I claim that, other things equal, I would have better reasons to dismiss my interlocutor's argument in case one than I would in case two.

    in this post I meant to discuss ordinary cases which are not based on any such particular facts about one's interlocutor (their interests, incentives, etc.), but on the mere fact that they hold a position X that some people might conceivably hold due to wishful thinking.

    Consider

    (S) 'My interlocutor believes P because she wants to believe P.'

    We may distinguish two questions:

    (1) 'What facts about my interlocutor give me good evidence for believing (S)?'

    and

    (2) 'Is the fact that my interlocutor conceivably believes P because she wants to believe P good evidence for believing (S)?'

    In the comment I quote above, you appear to be asking question (2). But in the last paragraph of your post the question raised seems to be (1).

    Also, given that the kind of situation I'm talking about is ubiquitous and symmetrical, applying merely 'general facts' about humanity won't license differential treatment. (They apply just as much to oneself as to one's interlocutor.)

    It is true that debunking arguments can only debunk if they meet a "reflexivity condition" -- they must survive their own application. But this condition can be met even if the agent relies on facts which are also true of himself. The atheist can dismiss arguments for Christianity as being motivated by a general human tendency toward self-delusion which afflicts theists and atheists alike. The relevant difference is that belief in God fulfills an obvious human need that disbelief in God doesn't.

  11. This comment has been removed by the author.

  12. [This comment clarifies a misleading sentence in the deleted one above.]

    Incidentally, is there any philosophical literature on this topic? When it comes to debunking explanations, I find myself arguing for a view that I've developed on my own, in complete ignorance of what other philosophers might have written about it. Since I don't place much confidence in my own judgment, I would welcome any pointers to what's been said by more credible thinkers (other than Richard and some of the other regular commenters here).

  13. Richard, I think there's a difference between what is a-rational and what is ir-rational.

    I think it irrational to believe something improbable. I don't think it irrational to believe something other than what is judged most probable.

