Friday, May 10, 2013

Justification and Explanatory Normalcy

In his very interesting 'Justification, Normalcy and Evidential Probability', Martin Smith argues that justification requires one's belief to be normally true, rather than just very likely true, given one's evidence.  The relevant sense of 'normalcy' is explanatory rather than statistical: for the belief to turn out false would call out for explanation.  Beliefs based on perception, for example, are generally "normically supported" in this way -- for the belief to be false, something weird must have happened (perhaps the agent hallucinated, or was tricked somehow -- there must be some further explanation).  By way of contrast, the belief that your lottery ticket is going to lose might be as likely as can be (stretch the odds of the lottery as much as you like), yet it would require no special explanation were this ticket to turn out to be the winning one -- some ticket or other must win, after all; it's just a matter of random chance.
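To make the probability side of the contrast concrete, here is a minimal sketch (mine, not anything from Smith's paper) of how the lottery belief's probability can be pushed arbitrarily close to 1 just by enlarging the lottery, even while the "no explanation needed" feature of a loss remains untouched:

    # A fair n-ticket lottery: the probability that my single ticket loses
    # is 1 - 1/n, which approaches 1 as n grows. The probability can be made
    # as high as you like, yet a win would call for no special explanation:
    # some ticket or other had to win.
    def prob_ticket_loses(n_tickets: int) -> float:
        return 1 - 1 / n_tickets

    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9,} tickets: P(my ticket loses) = {prob_ticket_loses(n):.6f}")

No matter how large n gets, the losing-ticket belief never acquires normic support; that is Smith's point about the two notions coming apart.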

It's an interesting proposal, and seems to capture common usage pretty nicely, but I wonder about the normative significance of normic support.  It seems to me that we are better off, epistemically speaking, with beliefs that are very likely true than we are with beliefs that are normally true (given our evidence).  On p.18, Smith offers the following:
If one believes that a proposition P is true, based upon evidence that normically supports it then, while one’s belief is not assured to be true, this much is assured: If one’s belief turns out to be false, then the error has to be explicable in terms of disobliging environmental conditions, deceit, cognitive or perceptual malfunction or some other interfering factor. In short, the error must be attributable to mitigating circumstances and thus excusable, after a fashion. Errors that do not fall into this category are naturally regarded as errors for which one must bear full responsibility – errors for which there is no excuse. And if error could not be excused, then belief cannot be permitted.

But if I'm making a high-stakes decision, I would (I hope!) prefer any mistake on my part to be unlikely rather than excusable.  We should want to get things right, and not to merely offload responsibility onto "disobliging environmental conditions".  And the best, most reliable way to get things right is to follow the probabilities, rather than rely on cooperative environmental conditions by taking perceptual evidence (and the like) at face value.

A background point: I'm a little unsure about the significance of so-called "all-out belief", as opposed to credence (or degrees of belief).  So, rather than claiming that Smith is mistaken about what justifies all-out beliefs, I might instead say that we shouldn't be interested in them at all.  In an uncertain world, rational decisions should be informed by our credences, not our beliefs.  That would be another way to express my main point.  But however we say it, the crucial point is just that likelihood, rather than normalcy, seems to be what really matters, epistemically speaking.

The practical importance of this comes out in Smith's example of privileging (notoriously unreliable) eye-witness testimony over statistical evidence.  His account captures standard practices very well, but this seems like an issue that calls out for a more revisionist approach.  If we care about getting accurate verdicts, we should want to reform the legal system (and others like it) to rely less on eye-witness accounts, and more on statistical evidence that has a higher probability of seeing us right.

What do you think?

4 comments:

  1. It seems that if science had adopted Smith's view as a fundamental guiding principle, then it would have had even more difficulty making progress. Major revolutions might never have happened (because we would have kept on believing what was thought to be "normally true"). If I am not misinterpreting this, then this seems to indicate that Smith's normic project is not the best possible theory.

    1. Could you expand on that? If we get new evidence that our old assumptions were mistaken, then they will no longer be "normally true" given our total evidence. But maybe I'm missing what you had in mind...

    2. Sure. I suppose I was thinking that 'normal' meant "historically usual." In that case, if something is believed on the basis of evidence that "normally" supports it, then new evidence might not be effective in changing our beliefs. We could get new evidence, but this would not change the fact that old (and perhaps misinterpreted) evidence "normally" supports the truth of the belief.

      The problem seems to be that by appealing to the 'normal', one might prioritize the historically usual over new evidence. So in the situation you mention (getting new evidence that our old assumptions are mistaken), old assumptions about what is true might still be normal (again, historically usual). And in that case, new evidence does not necessarily change what is "normally" true, since previous evidence still "normally" supports certain truths.

      I suppose my question is: what does it take to change what counts as "normal" when it comes to evidence and belief?

      (perhaps I am confused about how 'normal' is used)

    3. Ah, yes, it's crucial that "normal" here does not mean "historically usual", but something more like non-chancy (i.e. requiring some explanation if it fails to turn out as the "normic evidence" would suggest, in contrast to lottery-style beliefs which can turn out false without any special explanation required). So it doesn't build in the kind of bias you describe.

