Thursday, March 01, 2018

On Parfit on Knowing What Matters

If I had to pick a "favourite philosopher", it would be Derek Parfit.  His book Reasons and Persons is, in my view, the best there is -- containing striking insights and arguments on every page, and laying the groundwork for basically all subsequent work on the deepest puzzles surrounding consequentialism, personal identity, and population ethics.  So it was a great honour to have him respond to my paper 'Knowing What Matters' in his third volume of On What Matters.  I wish he were still around to continue the conversation, as I would have liked to prompt him to engage more closely with various claims (which he was initially inclined to reject simply by re-asserting his antecedent view). Sadly, that's no longer possible.  But I can at least continue my side of the conversation, and perhaps other readers will suggest further comments and responses that could be made on Parfit's behalf.

'Knowing What Matters' argues that Parfit concedes too much to the moral skeptic, and explores how the robust realist might defensibly take a less conciliatory line on moral epistemology.  In particular:

1. I argue: Given that the moral facts are causally inefficacious, Street-style skeptical worries about the causal origins of our beliefs being unrelated to their truth-makers do not depend specifically on the idea that our moral beliefs have evolutionary causes.  So I think it was a mistake for Parfit to be so invested in trying to refute that particular causal story.  It renders his view hostage to empirical fortune, and misses the larger philosophical issue.

Parfit responds (OWM v.3, p.286):
[W]e cannot defensibly assume that all possible causes of our normative beliefs would have been unrelated to their truth.  We do not yet know enough about these other possible causes to be justified in making any such assumption.

What conceivable natural causes would qualify as sufficiently "related" to the (non-natural) moral facts, if evolutionary causes do not?  Parfit complains that the same normative beliefs would have evolved "whether or not they were true." (287)  But Parfit accepts the standard robust-realist view that normative facts are not causally efficacious.  So whatever the natural causes of our normative beliefs, imagining (per impossibile) switching the truth value of the believed proposition can't affect the causal fabric of the natural world, and hence can't affect whether we come to have the belief in question.

So, if robust realists are to make sense of the possibility of moral knowledge, we must reject Parfit's truth-switching test. (Indeed, the need to reject such strict demands for 'sensitivity' to the truth is a familiar lesson from considering radical skepticism.)  But on a looser understanding of the needed "relation" to truth -- mere reliability, say -- it's unclear why we must think that evolutionary causes (for social creatures in our ecological niche) are "unrelated" to the moral truth.

2. We do better, I argue, to regard the causal origins of a (normative) belief as lacking intrinsic epistemic significance.  The important question is instead just whether the proposition in question is itself either intrinsically credible or otherwise justified.  Parfit rejects this (p.287):
Suppose we discover that we have some belief because we were hypnotized to have this belief, by some hypnotist who chose at random what to cause us to believe. One example might be the belief that incest between siblings is morally wrong. If the hypnotist's flipped coin had landed the other way up, he would have caused us to believe that such incest is not wrong. If we discovered that this was how our belief was caused, we could not justifiably assume that this belief was true.

I agree that we cannot just assume that such a belief is true (but this was just as true before we learned of its causal origins -- the hypnotist makes no difference).  We need to expose it to critical reflection in light of all else that we believe.  Perhaps we will find that there is no basis for believing such incest to be wrong. Or perhaps we will find a basis after all (perhaps on indirect consequentialist grounds).  Either way, what matters is just whether there is a good justification to be found or not, which is a matter completely independent of us and how we originally came by the belief.  Parfit commits the genetic fallacy when he asserts that the causal origins "would cast grave doubt on the justifiability of these beliefs." (288)

3. I clarify Street's view by drawing a distinction between substantive and constitutive explanations of the reliability of our faculties.  Robust realists can offer the former kind of explanation, whereas Street demands the latter.  I explain why the method of wide reflective equilibrium supports the realist here, and further explain why Street's "moral lottery" analogy fails.

4. I argue that Street's constructivist view is self-defeating.

5. I offer a positive account and defense of a kind of internalist reliabilism that I think provides the best moral epistemology for the robust realist.

6. I offer an analysis of when actual disagreement matters (i.e. over and above the mere recognition that there are other possible internally coherent views out there, competing with our own), in terms of whether it is "non-ideal" or indicative of a purely procedural mis-step on the part of either ourselves or our interlocutor (with no basis for being more confident about which of us slipped up).  Absent any evidence of such procedural mis-steps on our part, it's unclear why various coherent alternative worldviews should be any less epistemically threatening just because they don't have actual advocates.  What new evidence or information is provided by the existence of advocates for a view?

Parfit does not really engage with any of this, disappointingly.  He just asserts (290):
This is not, I believe, true. The mere possibility of such [ideal] disagreements would be a much weaker challenge to our beliefs than deep actual disagreement, even in ideal conditions.

If anyone who has read both papers can offer some argument in support of this claim, I would be very grateful.

6 comments:

  1. I think your response to the hypnotist example is right, but I wonder if there might not be more that could be said on the other side. I've only had a chance to glance briefly at the Parfit response, so I'm not sure if this is strictly consistent with Parfit's own view, but I had the impression that part of the point was that he thinks of normative beliefs in something like a foundationalist way: they make up a system, depending on certain beliefs or, better, certain kinds of beliefs, so what really seems to be at issue for him is that the problem of justifiability in the incest example can be extended to any normative belief whatsoever -- but (he thinks) any attempt to justify any of those normative beliefs in terms of other beliefs is going to have to treat other normative beliefs as true. Thus the problem with the hypnotist is not the bare fact of the causal story but the fact that the causal story includes indifference to the truth of the belief in question, and if this same kind of causal story can be given to all beliefs, then the whole mass of normative belief is itself in question, being something we only believe by (as it were) the flip of a coin and which we cannot justify except by assuming some part of it to be true. That is, the causal conditions don't give a reason to think beliefs of this kind justifiable, and the beliefs we are talking about can't be justified by anything else except other beliefs of the same kind.

    Replies
    1. Thanks Brandon, that's a helpful way of developing the objection. One worry is that when you extend the criticism to one's entire (normative) belief system, the insistence on a non-question-begging defense becomes an excessively skeptical one. So I think there are (at least) two reasonable responses to that kind of systematic skeptical worry: (i) the standard coherentist / reflective-equilibrium style reply that we can legitimately assess small subsets of our beliefs in light of our other beliefs, whereas the system as a whole is not really up for dispute "all at once" like that; or (ii) that what matters is the intrinsic properties of the system -- such as whether it accurately reflects the facts about intrinsic credibility / self-evidence, and reliably yields true a priori beliefs about its domain -- not how one came by the system in question. The latter sort of view is the one I develop in depth in my paper.

      To support this, suppose a quantum fluke results in an intrinsic duplicate of Euler (or some other great mathematician) emerging from a bolt of lightning striking a swamp. It seems to me that Swamp-Euler is also a great mathematician, and can have all sorts of justified mathematical beliefs (perhaps after working through a proof on some conveniently generated swamp-papyrus), even though it's a total fluke that his brain is set up to be so good at math. Although the swampman might just as easily have been created with an incompetent brain, given that he actually has a highly competent brain, the results he gets are as reliable, justified, and generally epistemically meritorious as one could possibly want. (Whether he could demonstrate this to a skeptic's satisfaction is a separate matter, but not one I take to necessarily undermine his actual epistemic status.)

  2. Hi Richard,

    In light of your view on the weight of disagreement, I don't think you'll find the following argument in support of Parfit's statement persuasive, but here is one way one might defend it (though I don't know whether Parfit himself would have defended it in a similar manner).

    a. Actual disagreement among humans, for non-procedural reasons, would provide good evidence that humans do not have a generally reliable moral sense (or sense of whatever the disagreement is about). More specifically, it would be evidence that humans get the starting rules and/or values wrong very often.
    b. You're also human, so the probability that you got the right moral starting points is no higher than the probability that some other human did -- factoring in, at this point, the fact that there is such disagreement, but before factoring in other considerations like your reasoning from your own starting points with regard to rules and/or values.
    c. Further evidence that you might get from reasoning on the basis of your starting points is not going to yield an increase in the probability that you got the initial values right.
    d. As long as the people who disagree with you are also roughly as good as you are at the procedural stuff, you very probably will not find a way to raise the probability from a very low level.
    e. Deep disagreement (or I'd say "disagreement") with advanced aliens would be (I think good) evidence that evolution does not generally produce intelligent entities with the right initial values, as long as one insists on the further assumption that there would be disagreement rather than miscommunication with the aliens. (My actual take is that they would be talking past each other -- or rather, that that would be so if they did not realize they mean different things; but civilizations advanced enough to make contact with each other and discuss such matters would very likely have figured that out, so they would acknowledge they're talking about different things, rather than talk past each other. For the sake of the argument, though, I'm assuming this is not so.)

    Granted, it might be objected that you may properly use your own assessments as good evidence that the people who disagree with you got the starting points wrong, because you assume your own starting points about morality to be right. Then, you would conclude (depending on whether the disagreement is with humans or aliens) that either humans got lucky even if evolution does not generally lead to having the right values, or that you personally got lucky even if humans don't generally have the right starting points. I believe this is a mistake: while we can't "jump out of our heads" and assess the odds that all of our starting points are correct, we can properly assess a subset of them, such as the moral starting points. The "lucky" assessment is not right, and it would not be proper to dismiss evidence of unreliability stemming from the origins of the starting points.

    Replies
    1. Hi Angra, that's an interesting way to develop the argument from disagreement -- sort of collapsing it into Street's "moral lottery" objection. I'd just note a couple of reasons to be dubious of this: (1) it makes the epistemic relevance of disagreement depend on extrinsic factors: disagreement with quantum-generated swampmen, for example, would not seem relevant to assessing the reliability of our own faculties. (2) It's not clear that we really need the actual disagreement to get the worry off the ground (as Street's own argument shows).

    2. Hi Richard,

      In re (1), I consider that a feature, not a bug. More precisely, I don't think that that's a reason to doubt the objection. On the contrary, I think disagreement with quantum-generated swampmen is not a good reason to doubt the reliability of our faculties, but rather, a good reason to doubt the reliability of the faculties of the specific quantum-generated swampman. The situation would be different if we were quantum-generated swampmen and we somehow knew that, but in this case, I also think the objection gets the right result.

      In re (2): While I do think Street's argument (or rather, a variant that avoids some of what I think are shortcomings in Street's formulation, but I'll say "Street's argument" for short) would raise a challenge to the reliability of some of our faculties if one insisted that there would be disagreement rather than miscommunication (though I don't think that that's the proper answer), actual disagreement would provide a piece of evidence that is independent of whether Street's argument succeeds on the basis of counterfactual considerations, and more evidence in case it does succeed.
      More precisely, actual disagreement would give us further evidence that humans do not normally start with the right bedrock/starting points, either directly via disagreement among humans, or - in the aliens variant - indirectly, by providing further evidence that evolution does not generally produce species whose members start with the right bedrock/starting points. Thus, I think this would add more evidence to Street's argument, making it proper for us to lower the probability that we got a correct moral sense, regardless of how much we already lowered it (a little bit or a lot) due to Street's argument. Of course, this lowering is also conditional on the event "there is disagreement rather than miscommunication", but Parfit believed there would be, so I think this is a defense of his statement that is in line with his views - though as I mentioned, I don't know whether he would have defended it along those lines.

      That said, Parfit claimed that ideal disagreements would be a much weaker challenge, rather than merely a weaker one. That "much" is more difficult to defend, though that is probably because of my own take on humans and aliens under ideal conditions. I'm not sure what Parfit thought about it, but if he expected that future human philosophers who met alien ones would agree with them (under ideal reflection, perhaps) on moral matters, this would explain why he reckoned that actual disagreement would pose a much stronger challenge.

      Also, I don't know what your take on that (i.e., aliens) is. Do you think they would very likely agree with humans on ideal reflection about morality?
      In any case, all other things equal, the higher the probability you assign to ideal agreement with aliens, the stronger the challenge from actual (alien) disagreement would be, relative to the challenge from hypothetical disagreement (and a similar point holds for actual human disagreement under ideal conditions, given your present probabilistic assessment of human agreement under ideal conditions).

    3. A point of clarification in re: quantum-generated swampmen. I meant actual disagreement with them, were we to find them. If one assumes that hypothetical disagreement with quantum swampmen is already a challenge to our faculties, I would say that disagreement with actual swampmen would add only a negligible amount of further evidence against our faculties.

