Tuesday, October 28, 2008

Rational Justification vs. Objective Warrant

I'm trying to make sense of how a foundationalist can reject coherentism (or 'wide reflective equilibrium'). I interpret those latter terms fairly broadly, and perhaps that's the problem. But here's the key thought: surely if you have a maximally coherent belief set then you are not open to any rational criticism? Or again: if you are open to rational criticism, it must be that you can improve the internal coherence of your beliefs. Rational persuasion just is coherent persuasion -- after all, any rationally compelling argument must start from premises that one accepts (even if they go on to radically undermine one's other beliefs -- that just shows there was some initial incoherence there to exploit).

Can the foundationalist deny this? Do they even try? Perhaps they mean something different: their concern is not rational justification, but something more 'objective' -- call it warrant. Maybe there are certain foundational beliefs or priors that are objectively warranted, in the sense that they're what we really ought to believe, but one may fail to believe appropriately without thereby making any rational error (or being susceptible to any rational improvement).

A further application of this idea would be to distinguish proper proof from other deductive arguments. A proper proof will establish the conclusion from warranted premises. Otherwise, it is a mere 'rational argument' that may persuade (ad hominem) those who accept the premises, but it doesn't establish that they really should believe the conclusion, because they shouldn't have accepted the premises either.

[Two worries: (1) Is there really a theoretical role for 'warrant' and 'proper proof' to play here, distinct from simple 'truth' and 'soundness'? I'm not sure about this. I guess truth doesn't come in degrees, whereas warranted credence presumably would. (2) I'm not sure whether this relates to the distinction between ideal vs. non-ideal rationality. It appears independent, I guess, since even 'perfect internal coherence' is an unattainable ideal; but I feel like there's some important connection here nonetheless.]

P.S. Of course, this isn't how foundationalists have traditionally portrayed their view. They talk of rational "self-evidence", etc., but if they just mean that they assign near-certain credence to a claim (such that it will readily swamp almost any contrary beliefs) then (again) the coherentist can accommodate this. In fairness, I guess a 'self-evident' claim is meant to be one such that anyone who understands it can thereby see that it must be warranted/true. But how could this be, if not that its negation is incoherent?


  1. Given that words are imprecise, is it actually possible to be a coherentist? If we leave to the side issues of a mathematical or logical nature, surely any argument essentially full of words of imprecise definition is fundamentally built on a house of cards.

    Consequently, couldn't one be a foundationalist without being a coherentist?

    Doesn't a coherentist essentially take for granted the tendentious claim that the employment of language is essentially transparent?

  2. Rational persuasion just is coherent persuasion -- after all, any rationally compelling argument must start from premises that one accepts (even if they go on to radically undermine one's other beliefs -- that just shows there was some initial incoherence there to exploit).

    Is this true? Couldn't one have a maximally coherent belief set but be missing some true beliefs that could be demonstrated to one (e.g. by displaying them)?

    For example, suppose my relevant belief set is, in its entirety, "this table is brown" and "this computer is black." That belief set is maximally coherent, and trivially so because there are no relationships (we can suppose) between those two beliefs.

    Now suppose someone wanted to persuade me of some proposition that followed from the brownness of the table, the blackness of the computer, and the blueness of the sky. He could rationally do so by pointing up at the sky. In such a case (assuming that the moment I observe the sky, I make the necessary inferences on my own) my belief set hasn't been made more coherent, but I've been persuaded, and correctly.

  3. I'm inclined to think that Paul's on the right track here; even if the premise-set is maximally coherent, there is a way to persuade without starting from premises of the set, namely, by introducing new premises. Clearly our premise-sets aren't hermetically sealed: we're adding to them constantly. I have more beliefs than I did when I was two months old. So we can be persuaded by introducing premises as well as conclusions.

    (I'm not sure that the example Paul gives is quite right, though. I had thought that to be a maximally coherent belief set required that you could infer the content of any belief from some other belief. But I might be wrong here.)

    I'm not sure which foundationalists you have in mind that are worried about coherence; I would have thought that the distinction between the two is not that foundationalists reject coherence but that coherentists reject foundations.

  4. Sprach - I'm not sure I follow. Can you clarify what you see as the connection between coherentism and 'words'? (Arguments are expressed in words, but that's true of the foundationalist no less than the coherentist. And in either case I take it the words refer to extra-linguistic propositions.)

    Paul - that's interesting. We certainly do acquire new beliefs through experience; but is this a rational process? I would model it as a brute new belief 'I am sensing such-and-such an appearance', followed by an inference from the general principle that we take appearances to be reliable guides to reality, to the conclusion that there is (probably) such-and-such in reality. So the rational element there is still coherence-based, since it is a matter of bringing our experiential beliefs into line with general principles we accept about their veracity. (That's not to say human beings explicitly go through all the steps in this model, but we only ever approximate perfect rationality.) So I'm not sure that the early steps of demonstration are properly described as "persuasion". Or even if they are, do you think there are any analogous cases when it comes to more 'pure' a priori philosophy?

    (Incidentally, I don't think it's possible to only have two beliefs. You'd also have to have beliefs about what tables and colors are -- and to count as maximally coherent you must also have beliefs about in what possible situations they would apply to other things, etc. But I grant the essential point that one may be missing various a posteriori beliefs.)

    Brandon - but the question is, how does one rationally add new beliefs, if they're not inferred from one's prior beliefs? They might just appear, but that sounds to me like a non-rational process. Or at least I'd like to hear more about what determines whether a new premise may justifiably be "introduced".

    On your last question, I'm especially thinking of Sidgwick. Rawls claimed Sidgwick used 'reflective equilibrium', since he argued that utilitarianism best systematizes common-sense morality. But Singer argues that Sidgwick's argument here is a mere ad hominem to win over the common-sense moralists, and not something that Sidgwick himself considers a sound proof. So I guess I'm wondering how a foundationalist would go about criticizing a proponent of reflective equilibrium if their conclusions didn't happen to coincide in the end.

  5. Richard, I don't think I disagree, we're just working with different conceptions of rationality -- you, with a conception that reduces rationality to something like valid inference, and me with a conception that bears more affinity to ideal-agent type ideas. How would the phronimos form new beliefs? :-)

  6. To clarify what I said earlier, it's difficult enough to specify what is and isn't a physical table according to any given definition. When you start talking abstractly, about what constitutes democracy or a university or the world, the problems magnify.

    Of course, none of this is particularly enlightening. It does, however, cause problems for any naive coherentism, problems that I consider fatal.

    If I were to start theorising about democracies, for example, there would be no precise definition that I could offer which would describe to all and sundry, including myself, exactly the set of actual and possible governments that I'm talking about. Undoubtedly, there will be edge cases.

    If so, any theorising I do on my given definition of a democracy, or my definitional axiom, is bound to be fuzzy. If it's bound to be fuzzy, it can't be neatly coherent, and even if any two people agree on a definition, there's no telling whether or not they will agree on any theorem that is developed from it.

    I am of the opinion that coherentism, though, is a possibility in the realm of the ideal. Yes, I'm a Kantian, so, yes, I do believe one can be a coherentist in the field of mathematics, for instance. Only if they start from the same transparent axioms would a foundationalist and a coherentist agree wholeheartedly.

  7. I like Paul's general approach. Perhaps there are more and less rational evidence-gathering policies. (I certainly think there are epistemically better and worse evidence-gathering policies, but I refuse to understand what "rational" means, so I can't tell you whether evidence-gathering norms are rational norms.)

    In fairness, I guess a 'self-evident' claim is meant to be one such that anyone who understands it can thereby see that it must be warranted/true. But how could this be, if not that its negation is incoherent?

    Quibble: there might be some claims that you can understand and imagine false, but can't understand and believe false. Some candidates might be "I exist", and "there are some beliefs". These claims have coherent negations, but (arguably) not believable negations.

  8. Thanks, Richard, that's helpful.

    If we take 'rational' to require valid inference (or some such), then I think the foundationalist will deny that premise-adding is rational; rather, it would be a precondition for rationality in that sense (since one couldn't be rational at all without adding at least a few premises to our at-birth empty premise-set). Incoherence might be able to show cases of unacceptable premise-adding, but this is a different issue from whether acceptable premise-adding depends on coherence. But I don't think most people would use the term 'rational' that narrowly. After all, it may be that forming the belief "I am seeing a donkey playing a lyre" on the basis of seeing a donkey playing a lyre is not rational in the narrow sense; but most of us would say that (setting aside deception, hallucination, and the like) it's rational to believe on the basis of what you've actually seen. And this, as you say, doesn't really seem to be a matter of rationality in the narrow inferential sense. I think I'm starting to lose the thread of the dispute here, since it looks verbal (but still might not be).

    I think in your comment to Paul you are conflating two things: showing that the premise is acceptable on the basis of coherence and showing that it is unacceptable on the basis of incoherence. Since showing coherence and showing incoherence are not symmetric operations -- the former is much, much harder to do when we aren't working with very simple premise-sets -- it's possible to hold (for instance) that the two come apart, and that while the standards of the coherentist can eliminate unacceptable premises, they are neither necessary nor sufficient for determining that any premises are acceptable. (I think this is related to the Wizard of Oz's 'quibble'.) I think most foundationalists would be tempted by such a view.
