Sunday, August 02, 2009

Asymmetries in Peer Disagreement

Does higher-order evidence swamp the first-order evidence in determining what we rationally ought to believe? If we disagree with someone whom we'd previously considered an 'epistemic peer', should we moderate our credence in their direction, or may we steadfastly infer that they must be the mistaken one? Consider the following case:
Right and Wrong are mutually acknowledged peers considering whether P. At t0, Right forms 0.2 credence in P, and Wrong forms a 0.8 credence in P. The evidence [E1] available to both of them actually supports a 0.2 credence in P. Right and Wrong then compare notes, and realize they disagree. (Christensen, p.3)

What credence should Right and Wrong then settle on? One version of the Equal Weight View would have them 'split the difference' and settle on 0.5. But, as Tom Kelly has objected, this seems to neglect the first-order evidence entirely. The higher-order (psychological) evidence is equally balanced between .2 and .8, so if we add in the first-order evidence (which supports .2), this should presumably tilt the balance of evidence to something less than .5.
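(A purely illustrative bit of arithmetic to see Kelly's point: suppose, just for the sake of the example, that the two psychological judgments and the first-order evidence each get one equally weighted 'vote'. Neither the numbers nor the straight averaging rule is Kelly's own proposal.)

    # Toy arithmetic only: treat each item of evidence as one equally weighted "vote".
    votes = [0.2, 0.8, 0.2]   # Right's judgment, Wrong's judgment, what E1 supports
    print(round(sum(votes) / len(votes), 2))   # 0.4 -- i.e. something less than 0.5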

In 'Disagreement, Question-Begging and Epistemic Self-Criticism' [pdf], David Christensen offers a novel response to this objection. He suggests that there's an important asymmetry between the evidence available to the two agents. Basically, the idea is that we get higher-order evidence from other people's judgments, since they can act as 'checks' on our own conclusions; but it would be double counting to treat one's own judgments in the same way. (Just imagine: "E supports .2 credence in H; plus, I just judged that this is so -- all the more reason to give H .2 credence!") Or, as Brian Weatherson might put it, the first-order evidence subsumes whatever evidential force is had by the psychological fact of one's own judgment (regarding that very evidence).

As Christensen points out (p.10), this has very interesting consequences for the case of Right and Wrong:
the important determinants of what's rational for Right to believe are the original evidence E1 (which should, and does, move her to put 0.2 credence in P), and Wrong's dissent (which does and, according to the Equal Weight Conciliationist, should move her from 0.2 to 0.5). In contrast, the determinants of what Wrong should believe are E1 (which should move him toward having 0.2 credence in P), and Right's belief (which also should move him toward 0.2).

In other words, Right should be moved by the (alas, misleading) disagreement from 0.2 to 0.5, whereas Wrong should be moved all the way to 0.2, with his initial judgment counting for naught.
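To make the contrast vivid, here is a minimal Python sketch of the two revision rules as I've glossed them. The straight averaging rule and the particular numbers are illustrative assumptions on my part, not Christensen's own formal machinery.

    E1_SUPPORT = 0.2       # what the shared evidence E1 actually supports
    RIGHT_JUDGMENT = 0.2   # Right's initial credence in P
    WRONG_JUDGMENT = 0.8   # Wrong's initial credence in P

    def split_the_difference(own_judgment, peer_judgment):
        """Simple Equal Weight rule: average one's own judgment with the peer's."""
        return (own_judgment + peer_judgment) / 2

    def asymmetric_rule(evidence_support, peer_judgment):
        """Christensen-style rule as glossed above: one's own judgment is subsumed
        by the first-order evidence, so only E1 and the peer's judgment count."""
        return (evidence_support + peer_judgment) / 2

    print(split_the_difference(RIGHT_JUDGMENT, WRONG_JUDGMENT))  # 0.5 -- both agents split
    print(asymmetric_rule(E1_SUPPORT, WRONG_JUDGMENT))           # 0.5 -- Right's new credence
    print(asymmetric_rule(E1_SUPPORT, RIGHT_JUDGMENT))           # 0.2 -- Wrong's new credence

Any real Conciliationist rule would of course be more careful than a straight average, but the asymmetry (Wrong's own judgment dropping out of the calculation) is the point at issue.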

This is interesting, because it's often assumed that both agents share all the same evidence and hence ought to conclude the same thing. And perhaps the most common alternative view would secure divergence in outcome simply by allowing each agent to stubbornly stick close to their own initial estimates. Christensen's solution, by contrast, introduces a genuine asymmetry between the two agents -- thus doing justice to the first-order evidence -- whilst also requiring each agent to give 'equal weight' to the opinion of their peer, rather than downgrading them in a question-begging manner. He can thus offer intuitively appealing answers to both of the opening questions of this post -- an impressive achievement!

12 comments:

  1. Doesn't this get the relationship between higher-order and first-order evidence wrong? Just take science, where higher-order evidence is used to modify first-order evidence by introducing controls. Credence then comes after the first-order evidence overcomes higher-order concerns.

  2. When Nietzsche's Zarathustra is challenged to explain his reasons for saying the poets lie too much, he replies "it was long ago that I experienced the reasons for my opinions. Would I not have to be a barrel of memory if I wanted to carry my reasons around with me? It is already too much for me to remember my own opinions, and many a bird flies away..."

    Of course, Zarathustra goes on to give reasons for the view, but I hope everyone knows how much faith to put in our after-the-fact rationalizations (and anyway, Zarathustra is a poet). Surely what he describes is the usual state of things: we don't remember most of the evidence for most of our beliefs, but mostly we believe things because we did once encounter evidence (if it ever seems otherwise, this is because our mistakes tend to draw attention to themselves by causing trouble in ways our vastly more numerous accurate beliefs don't). If we didn't tolerate this, we wouldn't have many beliefs left, and the ones we lost would include many critical ones (we might find ourselves considering trusting poets, for example!).

    I don't know how to take into account the importance of relying on beliefs for which we don't recall evidence (I suspect just giving some specific evidential weight to the holding of a belief would not produce workable results), but it is an important enough phenomenon that it seems to me it shouldn't be ignored.

  3. I think I'm sympathetic to some version of the idea that our evidence is different, but I must be missing something in the case from Christensen you mention. The example says that E1 supports a credence of 0.2, but do Right and Wrong both know that, or is this stipulated as true, although they don't know it? If the latter, then it's not clear how it figures into what they should believe, from their own perspectives. If the former, then part of the case is that Wrong knows that E1 supports a credence of 0.2 but still has a credence of 0.8. But that would--it seems--only make sense if Wrong has some other evidence. So either something important seems to be missing from the case, or I've missed something.

    Somewhat related, would whether one's own "intuition" should be "counted" depend upon whether we think the issue is one over which intuitions "count"? E.g. intuitions about the shape of the earth aren't plausibly counted as evidence, but perhaps in something like moral cases (trolley cases, if you want), they do count. So maybe questions about which "order" of evidence intuitions belong to are context-specific?

  4. Kallan - I'm not sure I follow. (Controlled experiments yield better first-order evidence. I'm not sure where the 'higher order' aspect comes in.)

    Aaron - that's an important issue, though it seems a different issue from what's being discussed here.

    Matthew - it's stipulated that E1 supports a credence of 0.2, that Right truly believes this, and that Wrong mistakenly believes that E1 instead supports a credence of 0.8. I'm not sure I understand your objection that "it's not clear how it figures into what they should believe, from their own perspectives."

    Evidence is, by definition, the stuff that figures into determining what one should believe (period). Your talk of "their own perspectives" suggests a radically subjectivist model according to which what matters is not what's rationally required of you, but simply what you believe to be rationally required of you. See here for my objections to this subjectivist view.

  5. Richard - I get the stipulation, and I get the point that if E1 supports a credence of 0.2 then, in some sense, Wrong should move in the direction of 0.2. What I don't get is how that point gets any traction with Wrong if Wrong mistakenly takes E1 to support a credence of 0.8.

    I didn't mean for the perspective-talk to distract. I'm just trying to understand what the point is supposed to be.

    What I didn't get was how the ought here is accompanied by a can. Wrong ought to have a credence of such-and-such, says Christensen. But the ought seems to be based on an interpretation of the evidence that Wrong rejects.

    Or is it the case that in this sort of normative epistemology, it needn't be the case that "ought implies can"? (I.e. what's rationally required of Wrong might not be something that he can do "from his perspective"?)

    At any rate, doesn't this all seem fishy to you, since the rational requirement, on the view offered by DC, leads to Right and Wrong trading places (i.e. Right should adjust his credence such that he's a little wrong, and Wrong should adjust his credence so that he's right???)

  6. I seem to have not quite understood Wrong's situation. Still, if our attitude toward his misinterpretation of evidence is just that he should have gotten it right, that seems similar to saying that we should base our credence on forgotten evidence only when it's good. This is not entirely satisfying. Is radical subjectivism really the only alternative?

    Consider this; suppose Wrong has good evidence that he's usually reliable in matters like this. Does that affect things?

  7. Aaron - I think I see where you're coming from now. Note that DC's claim (as I interpret it) is just that when you have access to first-order evidence, this subsumes the evidential import of your making a certain judgment about the evidence. This is entirely compatible with holding that the psychological meta-evidence (even if ultimately misleading) counts in full when you don't have access to the original evidence. So he's not committed to "saying that we should base our credence on forgotten evidence only when it's good". (If anything, I think the opposite claim seems a more natural fit. Without the first-order evidence to subsume it, the psychological meta-evidence should presumably count.)

    Matthew -- I think we need a fairly broad understanding of 'can' if we want to apply 'ought implies can' in epistemology. Consider: Wrong has the general rational capacities required to appreciate that his evidence supports a credence of 0.2. So in this sense he can form the required belief. Perhaps it's unlikely that he actually will, given that he's initially mistaken about what the evidence requires. But of course an 'ought implies will' principle would be far too narrow!

    Perhaps you're thinking that we need to hold fixed Wrong's current beliefs in assessing what he can (and so possibly ought to) conclude from here. But that just seems wrong to me. Sometimes we're rationally required to revise our prior beliefs, if they're unreasonable.

    But perhaps it's simplest to just forget about 'ought implies can', and consider this as a project of critical evaluation. For example, we may ask: what conclusion would be ideally rational (no matter whether the agent in question has the capacity to recognize this)?

    I'm not seeing the "fishiness". Conciliatory views claim that when a reliable 'epistemic peer' disagrees with us, we should take the higher-order evidence provided by their judgment seriously, and hence move our credence in their direction. This isn't "trading places", exactly, since Right balances the original evidence E1 against the new evidence provided by Wrong's judgment (hence his ending up somewhere in the middle). Wrong, on the other hand, is rationally required to go all the way to 0.2, for the principled reason that all the relevant evidence (both E1 and Right's testimony) supports this conclusion.

  8. I agree that if the point is critical evaluation, then Wrong is, well, wrong. If they are mutually acknowledged peers, however, then I'm assuming there's something significant to Wrong's initial mistake here. If it's an identifiable mistake that Wrong can be brought to see, then it seems that that's what should drive a revision of credence, not just Right's disagreement. However, now I think I see how, at some point in the conversation, Right will move up to 0.5 as Wrong moves to 0.2. But if Wrong can be brought to see his mistake in this case, then shouldn't Right, as it were, return back to 0.2 at some point?

    Or is the idea that what we're rationally required to believe just can diverge from what is the case? (That's how I was seeing Right's situation here...that's what seemed fishy to me.)

  9. Richard:
    I'm not sure I follow. (Controlled experiments yield better first-order evidence. I'm not sure where the 'higher order' aspect comes in.)

    Actually that's precisely my point: controlled experiments provide better first-order evidence. The question is what controls are designed to do, and the answer is that they're designed to remove subjective biases, which by definition is what we get when we put credence over evidence.

  10. Matthew - suppose there's no (easily) identifiable mistake, so that in fact Wrong will continue to believe falsely that E1 supports 0.8. Further suppose that Right and Wrong are in subjectively symmetrical situations: there's no independent reason for Right to think himself more likely correct in this case. So he lacks any basis for downgrading Wrong's judgment in such a way as would allow him to return to 0.2.

    I think the most interesting result here is DC's claim that Wrong (ideally) ought to move all the way to 0.2. Traditionally, "Equal Weight" theorists have counted Wrong's own judgment as admissible evidence for himself (though they neglect E1), and hence conclude that he -- like Right -- should settle on the middle position of 0.5.

  11. I still don't understand why philosophers insist on arguing about these things without reference to formal models. I see nothing considered here that could not be expressed in a formal model of agents with beliefs who make mistakes in their reasoning.

  12. I'm sure it could also be expressed in Klingon. The question is what additional insight would be gained by this.

