Thursday, October 06, 2011

The Kripke-Harman Dogmatism Paradox

Some thoughts inspired by yesterday's epistemology seminar...
If I know that h is true, I know that any evidence against h is evidence against something that is true: so I know that such evidence is misleading. But I should disregard evidence that I know is misleading. So, once I know that h is true, I am in a position to disregard any future evidence that seems to tell against h.

-- Gil Harman, Thought, p.148.

Apparently the standard solution is to distinguish what you know at various times: at t0 you know that h is true, and hence also know that any evidence against h is misleading. But once you actually acquire such evidence at t1, your total evidence no longer supports either claim, which is why you're no longer in a position to disregard this new evidence against h.

I think this is right, but it might help to say a little more to further dispel the air of paradox. (I don't have Gil's book so I'm not sure to what extent, if any, this actually goes beyond what he says...)

Suppose your current evidence E supports h (to a degree sufficient for knowledge), and consider some possible piece of contrary evidence e such that E+e would no longer sufficiently support h. Since E supports h, it likewise supports the entailment that "any evidence against h is misleading", and hence, in particular, "if e obtains, it is misleading". Now it's important to stress that this is merely a material conditional. Recall that your justification for believing h is contingent on the absence of e. This justificatory constraint is presumably inherited by the inferred conditional. That is, your justification for believing "if e obtains then e is misleading evidence" is likewise contingent on the absence of e -- or, in other words, the falsity of the antecedent. You're justified in believing the conditional only insofar as it is vacuously true.

More generally: You're only justified in believing that "any evidence against h is misleading" insofar as you're justified in believing that there isn't any such (sufficiently weighty) evidence against h. After all, if there were sufficiently weighty evidence against h, then that'd undermine your basis for believing h, and hence for believing that the evidence against h is misleading. And, indeed, that's exactly the position you end up in if such evidence later comes to light.

So there's no paradox -- no grounds for "disregarding" evidence. If you initially know that h is true, but later uncover some evidence e that would undermine belief in h, then you can't appeal to h as grounds for disregarding e. You were never justified in believing the subjunctive conditional that were e to obtain it would be misleading evidence. (You initially believed h only because you believed e to be absent. You may well have believed that in the nearest possible world where e obtains, it serves as accurate evidence of h's falsity in that world. You just never expected to find e in the actual world.) The same may be true of the indicative conditional, though I'm less confident in assessing that. (Plausibly, if your justification for believing h is contingent on the absence of e, then you're not justified in believing the indicative conditional "if e, then h is true".)

In sum: I think that much of the intuitive force of the paradox rests on our implicitly inflating the material conditional ("if e obtains then it's misleading evidence") into some more robust conditional that we could retain belief in, and subsequently reason from, even after learning that e actually obtains. But our initial material conditional is not like that -- it is immediately undermined by the appearance of e, which is why we can't then use it to disregard e.
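To make the undermining vivid, here's a toy Bayesian sketch. (The numbers and the probabilistic framing are my own stipulations, purely for illustration; nothing here is from Harman.) The point is that your high confidence in the material conditional "if e obtains, it's misleading" derives almost entirely from your confidence that e won't obtain:

```python
# Toy illustration: why the material conditional "e -> e is misleading"
# is believed only "vacuously". All probabilities are stipulated.

p_h = 0.95              # credence in h, given current evidence E
p_e_given_h = 0.05      # chance of encountering counter-evidence e if h is true
p_e_given_not_h = 0.80  # chance of e if h is false

# Total probability of encountering e
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h

# "e is misleading" means: e obtains AND h is nonetheless true.
# So the material conditional "e -> e is misleading" is just (not-e or h),
# whose probability is 1 - P(e & not-h).
p_material = 1 - (1 - p_h) * p_e_given_not_h

# But conditional on actually acquiring e, how likely is it that e misled us?
p_h_given_e = (p_h * p_e_given_h) / p_e

print(f"P(e)          = {p_e:.4f}")          # low: e is unexpected
print(f"P(not-e or h) = {p_material:.4f}")   # high: the material conditional
print(f"P(h | e)      = {p_h_given_e:.4f}")  # middling: too low for knowledge
```

With these numbers the material conditional gets credence 0.96, but almost all of that comes from the roughly 0.91 probability that e never shows up; conditional on actually getting e, credence in h collapses to about 0.54.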


  1. I'm not sure it's so simple as that. In typical cases, I think there's lots and lots of unpossessed evidence on both sides. I know that p: some philosophers are coming to dinner at my apartment later today. So I know that any evidence against this proposition is misleading evidence. Furthermore, I am confident that there is some such evidence -- perhaps if I had it, I'd lose my knowledge. I don't know which such potential evidence is actual, of course, but I'm sure that some is.

    For example, the disjunction of ~p and any extremely surprising truth would be pretty good evidence against p. I'm confident there are such propositions.
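    To check this numerically (a sketch with stipulated, hypothetical probabilities; q stands for some extremely surprising truth, assumed independent of p):

```python
# If q is any very surprising truth, then the (true) disjunction (not-p or q)
# is strong evidence against p. Illustrative numbers only.

p_p = 0.95  # prior credence that p: philosophers are coming to dinner
p_q = 0.01  # prior credence in the surprising proposition q (independent of p)

# P(not-p or q) = 1 - P(p and not-q)
p_disj = 1 - p_p * (1 - p_q)

# Conditioning on the disjunction: P(p | not-p or q) = P(p and q) / P(not-p or q)
p_p_given_disj = (p_p * p_q) / p_disj

print(f"P(not-p or q)     = {p_disj:.4f}")
print(f"P(p | not-p or q) = {p_p_given_disj:.4f}")  # credence in p collapses
```

    Learning the disjunction drops my credence in p from 0.95 to about 0.16, so there really is strong unpossessed evidence against p out there.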

  2. Hmm, that's interesting. I wonder then how we are to explain the epistemic significance of actually coming to possess such antecedently predictable evidence. Perhaps there's an implicit assumption that the evidence we come to possess is (something close to) randomly sampled from all the evidence that's out there? Say if God tells you he's identified for you the strongest evidence against p, without regard for whether there's comparably strong further evidence for p, perhaps coming to possess this new evidence in this way would no longer undercut your knowledge? (After all, if you already expected that there was some evidence roughly this weighty out there somewhere, merely being told what it was doesn't seem like it should change things.)

    But whatever the details, so long as we have some story to tell about how ordinarily acquiring new evidence can defeat knowledge, then it seems like we should be able to run some version of my above story along with it. (Perhaps shifting to a more specific conditional, e.g. "If I will come to possess e in the ordinary way, then e is misleading." Justified at t0 as a material conditional, but again only because you expect the antecedent to be false.)

  3. I think it is possible to resist the idea that 'If e obtains then e is misleading evidence' can only come out true when read as a material conditional.

    Consider a Stalnaker-Lewis style approach to indicative conditionals, employing epistemically possible scenarios or worlds (rather than subjunctively/metaphysically possible worlds). The set of scenarios to be considered in evaluation may vary contextually.

    So in one context, where one wishes to consider all scenarios not ruled out by current knowledge, and h is taken to be known, the conditional will come out true. In a context where h is being called into question, the conditional may not come out true.

    Obviously, this suggestion is incompatible with certain views of indicative conditionals, but it seems plausible to me. What do you think?

  4. (I realize, of course, that Lewis himself did not take a Stalnaker-Lewis style approach to indicative conditionals.)

  5. Hi Tristan, that sounds ok to me, as it means that your context-specific indicative conditional will be undermined by e in much the same way that I suggested a material conditional would.

    You've basically suggested a way that even the indicative conditional could be "vacuously true", i.e. compatible with e being accurate evidence in the closest possible world where e obtains. You're just ruling out consideration of such worlds by contextual fiat. But if the agent believes e to be accurate evidence in the closest possible world where it obtains, and then they learn that it actually obtains, this plausibly undermines their basis for believing the conditional -- since they only believed it because they took it to be vacuously true. They had no basis for thinking it to be non-vacuously true, which is what the paradox would require.

  6. Hi Richard,

    Yep, I agree with all that! (I wasn't trying to resuscitate the paradox, by the way - just to improve your solution.)

  7. I don't think distinguishing what you know at various times works. "At t0 I may not know p, but at t1 I do know p" works fine. But it's much harder to know h at t0 and then stop knowing it later on, given the factivity of knowledge. "If you initially know that h is true, but later uncover some evidence e that would undermine belief in h, then you can't appeal to h as grounds for disregarding e." But you did not initially know h, since if h were true, no non-misleading evidence against it could arise.

    However, I don't think the paradox "gets going" unless we make explicit both a denial of scepticism and the factivity of knowledge.

    If you initially know that h is true, but later uncover some evidence e that would undermine belief in h, then you can't appeal to h as grounds for disregarding e.

    If knowledge is factive, then if you know at t0 that h is true, h is true. If h is true, then any apparent evidence against h is misleading. If some later evidence undermines your belief in h, then you have been misled and should have been more dogmatic.

    There doesn’t seem to be any paradox here.

    Perhaps the problem is that:

    1. "You're only justified in believing that 'any evidence against h is misleading' insofar as you're justified in believing that there isn't any such (sufficiently weighty) evidence against h."

    2. You are not justified in believing that there isn't any such evidence against h?

    If you aren’t so justified then you are not justified in believing h to be true. h entails that there is no non-misleading evidence against h and you are not justified in believing that. You may be justified in believing that some of h is true, that h is likely to be true, that h is way-the-best-explanation for the current e etc. but not that h is true.

    But this is not sufficient for knowledge, if knowledge is factive. If "knowledge" can be something like "espousing the best hypothesis" or "reasonably accepting a hypothesis that has more verisimilitude than its current rivals" rather than "justified/warranted true belief" then we can (and do) have oodles of knowledge. h doesn't need to be true, we don't even need to believe that h is true: just that h is the best available to us right now. h no longer entails that there will be no non-misleading evidence against h and the paradox disappears.

    The paradox also disappears with knowledge-scepticism. “If we knew h then we could ignore later e” is true by that falsity of the antecedent thing you mentioned.

  8. Tony - Note that sometimes we rationally ought to believe falsehoods, if the best available evidence happens to be (unbeknownst to us) misleading. Just because we've been misled doesn't mean that we "should have been more dogmatic", if by that you mean the "should" of epistemic rationality. What we should believe is whatever is best supported by our evidence. If the more dogmatic person ends up believing the truth, that's just because they got lucky on this occasion, not because they're believing as they ought.

    Regarding your later question: I was thinking that at time t0 you are justified in believing (falsely) that there's no sufficiently strong evidence against h, which is why you're also justified in believing (truly) that h is true.

    Then, when you acquire the new evidence e at t1, you're no longer justified in either belief. The factivity of knowledge doesn't suffice to prevent later evidence from undermining the justification for your earlier knowledge-constituting belief, and hence (because knowledge requires justification in addition to truth) causing this true belief to no longer qualify as knowledge.

  9. This paradox relates to a question I had in my theory of knowledge course that didn't get answered satisfactorily. Maybe someone can answer it here?

    It seems like the paradox is based on the view that as soon as you have evidence that indicates that h has a high chance of being true, and you believe h to be true, you are compelled to say that you 'know' h to be true, and therefore must act as if there is a 100% chance of h being true. And since you apparently believe that h has 100% chance of being true you of course disregard evidence to the contrary.

    This idea that having a justified belief means you must act as if it is absolutely certainly true is what came up in my philosophy of knowledge class. I still don't get it; it seems absurd. Surely, when you are faced with evidence that gives 90% confidence that h is true, the correct attitude is that h has a 90% chance of being true, rather than that h has a 100% chance of being true? And if you accept this fairly obvious fact, surely there's no paradox here!

  10. I'm sympathetic to the view that we generally do better to talk in terms of rational credence rather than knowledge. Still, the latter needn't be problematic -- so long as we remember that, as fallibilists, we can't move from "S knows that h" to "S should be absolutely certain that h".

