Sunday, November 30, 2008

Presuppositions Aren't Premises

A questioner at AskPhilosophers points out that we rely on contingent empirical facts (e.g. about the reliability of our short-term memory) when working through and assessing allegedly 'a priori' proofs. So, they ask, does this mean that our justification for believing the proof's conclusion is not really a priori after all?

The short answer, I think, is 'no'. This argument for empiricism mistakenly conflates (i) the justificatory basis for a belief with (ii) the presuppositions or prerequisites that must be met in order for one's justification not to be defeated.

(Another common version of this mistake is to claim that a priori justification is impossible because experience is required to acquire the concepts with which we think in the first place. It may well be true that experience is a prerequisite for concept-possession and hence thought. But that does nothing to undermine the possibility of a priori justification, i.e. the claim that the basis for believing P needn't include any reference to experience. Experience may play an essential role in belief-formation, without it thereby playing an essential justificatory role. But I'll return to the more subtle mistake, since that is more interesting.)

I discussed this issue further in the last section of my post 'Arguing with Eliezer'. But I'll reproduce it here, since the comments to that post focused on other issues, and I'd be interested to hear what others think of this one...

Eliezer writes:
When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

I responded:
It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence". For one thing, no amount of mere observation will suffice to bring us to a conclusion, as Lewis Carroll's tortoise taught us. Further, it mistakes content and vehicle. When I judge that p, and subsequently infer q, the basis for my inference is simply p - the proposition itself - and not the psychological fact that I judge that p. I could infer some things from the latter fact too, of course, but that's a very different matter.

In discussion, Eliezer emphasized the demands of (what I call) 'meta-coherence' between our first-order and higher-order beliefs. If you reason from p to q, but further believe that your reasoning in this instance was faulty or unreliable, then this should undermine your belief in q. I agree that reasoning presupposes that one's thought processes are reliable, and a subjectively convincing line of thought may be undermined by showing that the thinker was rationally incapacitated at the time (due to a deceptive drug, say). But presuppositions are not premises. So it simply doesn't follow that the following are equally good arguments:

(1) P, therefore Q
(2) If I were to think about it, I would conclude that Q. Therefore Q.

(Related issues are raised in my post on 'Meta-Evidence' [update]. See also my argument for the inescapability of a priori justification.)

Any objections?


  1. This reminds me of question 1161, and in particular Alexander George's response.

  2. I certainly agree that we should distinguish between first-order justification and higher-order justification; we cannot talk about their coherence if they are not distinguishable. There are also Löb's Theorem type issues with trusting your own reasoning mechanisms that I have not yet resolved to my own satisfaction.

    But it still seems to me that if you trust your pure thought that 1 + 1 = 2, but do not agree that thinking "1 + 1 = 2" is good evidence that 1 + 1 = 2, then you are in very bad shape epistemologically.

  3. Just to be clear: I ("sentience") am Eliezer Yudkowsky.

  4. I pretty much agree with you. But I think there's a serious challenge to this sort of approach in, for example, some of Timothy Williamson's recent work. He attacks the distinction between justificatory experience and 'merely enabling' experience; the abilities from which our allegedly a priori judgments arise, Williamson says, constitutively depend on the experiences we've had through life.

    This is the challenge that must be met in order to have this (attractive) view.

  5. Jonathan - thanks, I'll have to look into that.

    Maxa - thanks for the link. That looks like the parenthetical issue I discuss briefly. But I should emphasize that the broader issue here goes beyond that.

    Eliezer - I'm not sure what to make of that weaker claim. It's true that an ideally positioned agent will be in a position to trust their own judgments, so in that sense a failure to take one's thoughts as evidence (i.e. a failure to think oneself reliable) indicates that one is in a less than ideal position. But I take it you want to claim something a bit stronger than that.

    On the other hand, stronger claims seem dubious. First- and higher-order evidence may come apart, after all, and in cases where the former is stronger, it will be entirely appropriate to believe against the higher-order evidence. I discuss such a case in my 'bootstrapping' example here.

    There's nothing inherently "very bad" about belief in the face of conflicting (weaker) evidence, so long as one ultimately believes what one has most reason to believe. And I don't see why higher-order reasons should be any different in this respect. They can, and (if my above linked post is correct) sometimes should, be overridden.

  6. Robin Hanson emails:

    Each of our beliefs depends on many other beliefs in the sense that if those other beliefs were to change, we might need to revise this belief to achieve maximal belief coherence. What does it mean for some but not all of these beliefs to be the "basis" for a belief?

  7. There are two very different ways that revising a belief X might epistemically require one to also revise one's belief that P. It might be that X itself provided the evidential basis for P, or it might be that X merely enabled other reasons to pull their epistemic weight (without pulling in any particular direction itself).

    This is the difference between not-X being evidence for not-P, versus not-X being a defeater for some other evidence (Q) for P.

    For example, let P = "the widgets are red", Q = "the widgets look red", and a possible defeater for this evidence (not-X) = "the widgets are irradiated by red light". Red lighting isn't positive evidence that the widgets are not red. It merely undercuts the evidential import of Q, leaving you without any evidence one way or the other.

    For another illustration of the need for this distinction, consider any simple logical inference, like modus ponens:

    1. If P then Q.
    2. P
    Therefore, Q.

    Here the reasons (evidence, justificatory basis) for the conclusion Q are just (1) and (2). They exhaust the basis for believing Q offered by this argument; but there are other possible claims that would defeat the epistemic justification given by this argument. For example, suppose you came to disbelieve the conditional associated with this inference rule, i.e. "if (1) and (2), then Q". If you explicitly disbelieved this conditional, that would undercut your warrant for inferring Q from (1) and (2). But, as Lewis Carroll famously showed, it would be absurd to require that we add this conditional as a further premise:

    (1) If P then Q
    (2) P
    (3) If (1) and (2), then Q
    Therefore, Q.

    ... because the problem obviously iterates. Again we presuppose the validity of the inference rule from the three premises to the conclusion, which one might explicitly formulate as a fourth premise:

    (4) If (1) and (2) and (3) then Q.

    And so on, ad infinitum.
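    Proof assistants make the point vivid. In Lean, for instance, modus ponens is not a premise one adds to the hypothesis list; it is the act of applying a conditional to its antecedent. (A minimal sketch, not from the original post:)

    ```lean
    -- Modus ponens as a rule applied, not a premise stated.
    -- Given (1) hpq : P → Q and (2) hp : P, the conclusion Q is obtained
    -- by applying hpq to hp. No further hypothesis "if (1) and (2) then Q"
    -- appears — the application itself embodies the inference rule,
    -- so Carroll's regress never gets started.
    example (P Q : Prop) (hpq : P → Q) (hp : P) : Q := hpq hp
    ```

    The rule's validity is presupposed by the system's operation rather than listed among the premises, which is just the presupposition/premise distinction at issue.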

