Monday, December 13, 2010

Actual vs. Possible Disagreement

I'm very puzzled by Parfit's concern, in On What Matters, to establish that most actual moral theories converge. That is, I'm not sure why he sees the 'argument from disagreement' as troubling only when it involves alternative views with actually existing advocates. If it's epistemically undermining to be faced with an internally coherent alternative to your present views, why should it matter whether the advocate of this alternative view really exists, or is merely a figment of your imagination playing devil's advocate?

In Chapter 34, Parfit seems to accept the basic validity of the argument from disagreement. He writes that if...
even in ideal conditions, we and others would have deeply conflicting normative beliefs, [then] it would be hard to defend the view that we have the intuitive ability to recognize some normative truths. We would have to believe that, when we disagree with others, it is only we who can recognize such truths. But if many other people, even in ideal conditions, could not recognize such truths, we could not rationally believe that we have this ability. How could we be so special?

But what part of this argument depends on the "others" in question actually existing? It seems enough to consider a possible alternative psychology, like the Future Tuesday Indifferent agent (or similar, less gerrymandered, characters), whom Parfit takes to be fully procedurally rational but merely substantively mistaken. When we consider a world of such procedurally rational lunatics, we would have to believe that it is only we, and not they, who are capable of recognizing normative truths. But then the very same question arises: "How could we be so special?"

If we can answer this in relation to possible alternative views (e.g. by answering "we're just lucky!"), then presumably this very same answer will apply in cases of actual disagreement. Yes, if I'd been raised differently, I might have ended up a Kantian, or a Creationist. So isn't it a fine stroke of luck that I wasn't raised in such a misguided (if well-meaning) community!

But Parfit doesn't take this tack. Instead, he seems to see the case of possible disagreement as less troubling than cases of actual disagreement. See, for example, his response to the worry that "different people might find conflicting beliefs self-evident":
If we claim that we have some ability, however, it is no objection that we might have lacked this ability. Different people might have conflicting visual experiences, which were like dreams and hallucinations, and were not a source of knowledge. But that is not in fact true. Different people's visual experiences seldom conflict, and believing what we seem to see is a fairly reliable way of reaching the truth. It may be similarly true that, after careful reflection, different people would seldom find conflicting beliefs self-evident. Believing what seems self-evident, after such reflection, may be another fairly reliable way of reaching the truth.

I'm not sure what Parfit has in mind here. Is he just making an externalist argument: that what matters is that our faculties are truly reliable, whether or not we can show it? But then why would it matter whether or not "different people", with different faculties from us, were constantly hallucinating? (That wouldn't change the fact that believing what we seem to see would be a reliable way for us -- if not for them -- to reach the truth.) It's odd. Can anyone else make sense of how to reconcile these quoted passages?

I think Parfit would do better to stick to his guns in the manner suggested by (one part of) his response to Street in an earlier chapter on epistemology (32 or 33, I think?):
Some whimsical despot might require us to show that some clock is telling the correct time, without making any assumptions about the correct time. Though we couldn't meet this requirement, that wouldn't show that this clock is not telling the correct time. In the same way, we couldn't possibly show that natural selection had led us to form some true normative beliefs without making any assumptions about which normative beliefs are true. This fact does not count against the view that [we know] these normative beliefs are true.

At the end of the day, the only way to avoid radical skepticism is to insist that we are, in a sense, epistemically lucky. There are alternative starting points that we might have found ourselves with, and no non-question-begging way to argue for the superiority of our actual starting points. Still, that in itself is no reason to abandon them. We might be wrong, but we may as well take ourselves to be right -- for that way we at least have a chance of being right.

6 comments:

  1. In many cases, there's an important difference between merely possible evidence and actual evidence.

    Suppose I know that thermometers tend to be reliable about the temperature, but are not perfect. My thermometer reads 70 degrees. Of course, I know that it's possible that there could be another externally indistinguishable thermometer in these circumstances that would read 75 degrees--the thermometers aren't perfect. But that possibility doesn't undermine my current high confidence that the temperature is close to 70 degrees.

    But seeing an actual thermometer that read 75 degrees would be quite different. In that case, I should be much less confident that the temperature is 70.

    If you think of people's beliefs as being like thermometers--i.e., as relatively reliable but imperfect and chancy indicators of the truth--then you'll think that learning that somebody actually disagrees with you is importantly different from learning that somebody could possibly have disagreed with you.

    More specifically, in the case at hand: if you thought that pursuing reflective equilibrium was a strategy with a high chance of leading to true beliefs (but no guarantee of doing so), then this analogy between beliefs and thermometers (and the corresponding distinction between the evidential significance of merely possible disagreement and that of actual disagreement) would look pretty good. You'd be worried if you found out about conflicting actual reflective equilibria, but you wouldn't be fazed by conflicting merely possible reflective equilibria. I suspect, however, that you'll think this is the wrong way of thinking about reflective equilibrium. But it may be what best makes sense of Parfit.
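
    To make the evidential difference concrete, here is a minimal Bayesian sketch of the thermometer case (illustrative numbers only: Gaussian reading noise with a standard deviation of 2 degrees and a flat prior over 50-90 degrees). A second actual reading of 75 pulls the estimate up; merely noting that such a reading was possible involves no update at all.

        import numpy as np

        temps = np.linspace(50, 90, 401)          # candidate true temperatures (0.1-degree grid)
        prior = np.ones_like(temps) / len(temps)  # flat prior over 50-90 degrees

        def likelihood(reading, sigma=2.0):
            # Chance of this reading at each candidate temperature,
            # assuming Gaussian measurement noise (an assumed error model).
            return np.exp(-0.5 * ((reading - temps) / sigma) ** 2)

        def update(belief, reading):
            post = belief * likelihood(reading)
            return post / post.sum()

        after_one = update(prior, 70)      # one actual reading of 70
        after_two = update(after_one, 75)  # a second, actual, conflicting reading of 75

        print(temps[np.argmax(after_one)])  # ~70.0: merely possible conflicting readings change nothing
        print(temps[np.argmax(after_two)])  # ~72.5: actual conflicting evidence shifts the estimate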

  2. Hmm, yeah, that's probably the best interpretation. It does seem misguided though. (For one thing, I'm always suspicious when people lean too heavily on analogies between perception and reasoning!)

    We can tell a story about why actually existing thermometers are generally reliable (we designed them to be that way!). It's less clear why we should expect actually existing people to have true normative beliefs -- except insofar as they have a tendency to believe the sorts of normative claims I already take to be true: that pain is bad, etc.

    It seems especially problematic when we talk about reflective equilibrium, since Parfit seems to allow that the conclusions one would reach through the process of RE are radically dependent upon one's starting points. So it certainly isn't generally reliable. At best, it is reliable for those who start in roughly the right place. But again, what formal reason is there to think that actually represented alternative starting points are more likely to be right than those of merely possible agents? It would seem to depend entirely on the (expected) content of those starting points, which we can only assess for inherent plausibility from the standpoint of our own tendentious perspective.

  3. A non-perception example:

    Case 1: You produce a sound proof of a difficult mathematical proposition. 100 actual professional mathematicians claim that you made an error, identifying the very same step in your reasoning.

    Case 2: You produce a sound proof of a difficult mathematical proposition. You imagine a possible scenario in which 100 professional mathematicians claim that you made an error, identifying the very same step in your reasoning.

    There seems to be a big difference between actual and merely possible disagreement. In the first case, you would have very good reasons to significantly decrease your confidence in your conclusion. In the second case, you would have essentially no reason to decrease your confidence.

  4. Yeah, actual existence makes a difference when we have antecedent reason to consider the critics to be reliable in the relevant domain.

    Though even here, I take it, it isn't really their actual existence per se that is doing the work. Even if I merely knew that, counterfactually, were there 100 previously reliable mathematicians, they would all disagree with this step of my proof, that would presumably be similarly undermining. (Or even: randomly choose 100 possible expert mathematicians, and so on...)

    Back to Parfit's case: Given how reflective equilibrium works, there doesn't seem to be any reason to consider other (actually existing or not) people who start from different starting points to be reliable.

  5. Yes, it isn't exactly actual existence that matters. Knowing the counterfactual would be equally good. The point is just that ordinary instances of actual disagreement are different from merely imagining that some people disagree with you.

    In the case of reflective equilibrium, we certainly don't have experts like mathematicians or anything like that. But unless we have reason to think that others are less reliable than us, it seems like we should temper our confidence in cases of disagreement with those other people. (I'm not sure how much it should be tempered, but at least significantly.)

