Friday, February 23, 2018

Philosophical Expertise, Deference, and Intransigence

Here's a familiar puzzle: David Lewis was a better philosopher than me, and certainly knew more and had thought more carefully about issues surrounding the metaphysics of modality.  He concluded that modal realism was true: that every concrete way that a world could be is a way that some concrete universe truly is (and that these concrete universes serve to ground modal truths -- truths about what is or is not possible).  But most of us don't feel the slightest inclination to defer to his judgement on this topic.  (I might defer to physicists on the 'Many Worlds' Interpretation of quantum mechanics, but that's a different matter.)  Are we being irrational?


A familiar response: philosophical 'experts' themselves disagree.  Kripke, for example, may be weighed against Lewis on this topic.  But then it might seem to follow that I should suspend judgment entirely: if even the top experts on the topic cannot agree, what hope do I have of coming to a justified conclusion here?  And it would seem epistemically shady to cherry-pick the experts who agree with you, claim that you're responsibly deferring to them, and just ignore all the ones that don't.

I think a better response is available.

The puzzle presupposes that we ought to defer to experts.  But that only makes sense if we've reason to expect that expertise in a domain sufficiently increases epistemic reliability, i.e. the likelihood of true beliefs.  That's certainly the case for many domains -- it's why we should defer to scientific experts, for example.  But it arguably isn't so for philosophy in general.

Philosophical expertise seems compatible with being completely off the rails when it comes to the substantive content of one's philosophical views.  And this is to be expected once we appreciate that (i) there are many possible internally coherent worldviews, (ii) philosophical argumentation proceeds through a mixture of ironing out incoherence and making us aware of possibilities we had previously neglected, and so (iii) even the greatest expertise in these skills will only help you to reach the truth if you start off in roughly the right place.  Increasing the coherence of someone who is totally wrong (i.e. closer to one of the many internally coherent worldviews that is objectively incorrect) won't necessarily bring them any closer to the truth.

To put a more subjective spin on it: Your only hope of reaching the truth via rational means is to trust that your starting points are in the right vicinity, such that an ideally coherent version of your worldview would be getting things right.  So we've only got reason to defer to others if their verdicts are indicative of what our idealized selves would conclude.  Often, we can reasonably judge that other philosophers have views so alien to our own that it's unlikely that procedurally ideal reflection (increasing internal coherence) would lead us to share those views.  In such cases, we've no reason to defer to those philosophers, however 'expert' they may be.

(Terminological variant: If you want to build into the definition of a subject-area 'expert' that deference to expert judgement is mandatory, then you should restrict attributions of expertise to those whose starting points are sufficiently similar to your own.)

tl;dr: We should only be epistemically moved by peer disagreement (and related phenomena) when we take the other person's views to be evidence of what we ourselves would conclude upon ideal reflection.  Philosophical intransigence is thus often justified, insofar as we can justifiably believe that an improved version of our view could be developed that is at least as internally coherent as the opposing views. This remains true even if we judge that the defenders of the opposing views are (in purely procedural terms) smarter / better philosophers than we are ourselves.

16 comments:

  1. Richard,

    I don't know that Lewis was a better philosopher than you, but that aside, are you assuming or at least implying that your own starting points are the rational ones, or the most rational ones, while Lewis (and probably many other philosophers) have irrational starting points?

    If that is so (I may be misreading), it seems to me that on this view, they face a dilemma: either they stick to their starting points - which include their irrational starting points - or they toss them, but that's irrational too: they're abandoning their only hope of reaching truth via rational means. And given the amount of disagreement, it seems that that sort of unavoidable irrationality is quite widespread.

    This seems to raise serious ethical problems, such as: what are their moral obligations on the matter of defending philosophical views, and how can they find out?
    Perhaps modal realism is not much of an issue, as it does not have significant meatspace consequences (at least, not that I'm aware of, but further argumentation might change that!). But there is wide disagreement on philosophical issues whose answers have significant impact on moral questions, and defending some philosophical views involves defending or rejecting views about moral obligations, desert, etc.

    That said, if I misunderstood and you're not implying that Lewis has irrational starting points, please clarify (but I'm thinking that that view would be problematic).

    Replies
    1. Hi Angra, I like to distinguish 'substantive' vs 'procedural' rationality, where only the former involves building in the right (or close enough to right) starting points, and the latter is about internal coherence etc. Lewis is then substantively irrational due to having sufficiently misguided (a priori false) starting points, though of course he might be extremely epistemically responsible in a procedural sense, and so very very far from a paradigm case of irrationality in the ordinary sense.

      Given the fact of coherent philosophical diversity (i.e. that there are a plurality of internally coherent possible worldviews), and the limitations of procedural reason, it just seems a fact (rather than a 'dilemma') that someone starting in the wrong place / with the wrong priors will be unable to reason themselves to the truth. (Think of a counterinductivist, or someone who embraces incorrect laws of deductive logic, for an extreme example.) But perhaps they'll be lucky and have some non-rational process edit their priors in a way that happens to lead them to subsequently be substantively rational. That'd be an epistemically good thing to happen for them, objectively speaking, though of course there's no way that they could coherently recognize that beforehand.
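
      An extreme formal analogue of the "wrong priors" point (just a toy gloss of mine, not something the argument depends on): Bayesian conditionalization only ever reweights the hypotheses one's prior already takes seriously, so a hypothesis the prior rules out entirely can never be recovered by updating on evidence, no matter how much evidence comes in:

      ```latex
      P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
      \qquad\text{so } P(H) = 0 \;\Rightarrow\; P(H \mid E) = 0 \text{ for any evidence } E.
      ```

      The misguided starting points at issue here are of course less extreme than assigning zero probability, but the structural point is similar: procedurally impeccable updating refines what you started with rather than correcting it.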

      I don't see any great moral objection to defending a priori false moral views, as long as they're not 'beyond the pale' or expressive of moral disrespect or viciousness.

    2. Hi Richard,

      I think that one shouldn't count as starting points the points at which people start doing philosophy in a professional/academic setting, because to get there, they've been studying, making assessments, forming beliefs, etc., for many years, and so it seems to me they probably have other, more basic starting points (or no starting points at all, if things are vague at first). So, for that reason (and others), I'm not inclined to think that procedural rationality would take humans as far away from each other as the diversity of philosophy seems to indicate. (Aliens from another planet are a very different matter: I think they would likely end up with species-specific views very different from those of humans, but that would likely not be disagreement so much as talking about different things.)

      However, if that is in fact so, I still see a significant problem when it comes to moral matters (and others; more below). It's not only about defending false moral claims in general, but about a number of more specific issues. For example, some (many) people, reasoning in a procedurally correct manner, would end up - say - defending the claim that people who do X ought to be punished, whereas others would advocate that people who attempt to punish people for X ought to be punished, and so on. This raises questions like:

      a. What are their moral obligations? More precisely, do they have a moral obligation to defend the view that follows rationally from their irrational priors? Or is that immoral? Or neither?
      b. Is it rational to attempt to persuade procedurally rational people who will keep rejecting the views one is defending, because of their irrational priors, once one reckons disagreement very probably comes from different priors? More to the point, is there a solution other than conflict?
      c. Does this imply that (at least very probably), not only will aliens from other planets end up with many different moral-like analogues (or, if you want to stick to the view that there is only one morality, with many wrong moral views they can't rationally get out of), but also that humans (or post-humans) in the future will very likely end up with very different moral views on a number of issues (of course, at most one of them true), or with agreement, but probably after coming to the wrong assessment, etc.?
      d. Why not "beyond the pale" views?
      There is widespread disagreement also about whether certain views are beyond the pale. Moreover, adversaries often see each other as holding views that are beyond the pale. If we go with this account, it seems to me that disagreement about what's beyond the pale would likely have the same characteristics, so procedurally rational people will end up defending beyond-the-pale views, what counts as beyond the pale will be subject to similar disagreement, and so on.

      There are other problems, not just for morality but for philosophy in general, I think. For example, suppose I have to assess the odds that a philosopher will get to the truth on issues A1, A2, ..., An. What are the odds that she will get all of them right? As long as there are several different views on each of them (all of them defended seriously in the present), it seems extremely unlikely (see the toy calculation below; this could change if I knew more about her and about the Aj, of course). So, philosophy is very likely to get it wrong. This also has implications for doomsday arguments: an AI is very likely to start with wrong priors (given human programmers). We can't even solve this by telling the AI to look for (say, moral) answers by looking at human minds and where they would end up when procedurally rational. Can the evil superintelligent procedurally rational AI be stopped?
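
      The toy calculation (with purely illustrative numbers of my own): if her chance of getting each of n roughly independent issues right is p, the chance of getting all of them right shrinks exponentially:

      ```latex
      P(\text{all } n \text{ issues correct}) \;=\; \prod_{j=1}^{n} p_j \;\approx\; p^{\,n},
      \qquad\text{e.g. } p = 0.7,\ n = 10 \;\Rightarrow\; 0.7^{10} \approx 0.028.
      ```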

      Granted, this does not imply that those problems are problems for your argument. They might indeed be problems for morality, AI, etc.

    3. Yes, I'm happy to accept that there are such questions! Indeed, many of these are familiar questions that ethicists discuss in relation to "subjective" oughts, normative uncertainty, whether past slave-owners were blameworthy (on the assumption that there's a sense in which the wrongness of slavery wasn't culturally accessible to them), etc. etc.

    4. Alright, but in addition to questions, I think we can make some predictions.

      For example, I would say that if your theory is correct, then it's very probable that any future superintelligent AI will go wrong on many moral issues, in a way that it will never be able to correct (and likely no one will be able to stop it). The reason is that it will begin with the wrong priors, and there is no correcting them - on the contrary, it will very probably end up coming up with more and more wrong conclusions, as it obtains more and more information and uses it to make assessments on the basis of the wrong priors, etc. Unless somehow the AI can be boxed, it seems it would be unstoppable. But if it can be put in a box, then the problem remains for the first AI that is not in a box.

      Regarding your slavery example. Do you think it's possible that A engages in morally wrong behavior X, but A is not blameworthy?

    5. Yeah, a wrong action may be excusable rather than blameworthy due to non-culpable ignorance or other extenuating circumstances.

      It's not obvious to me that we should expect future AI to be horribly misguided. That would follow if we were to pick its priors randomly. Otherwise, it just depends who is programming it, right? And amongst the moral views commonly found in actual people, even the wrong ones tend not to be too drastically wrong (compared to hypothetical worldviews we can imagine). If some kind of "AI safety" process leads to an especially cautious implementation of consensus values, building in normative uncertainty that tries to avoid atrocities relative to any common human values so far as possible, then again we might find the result tolerable enough even if the AI isn't getting the objective truths exactly right.

      I think on the account you propose, we can expect that the AI will probably not be nearly as misguided as it would be if the priors were random. The problem, I think, is of a different kind: there are some moral priors (even if a small minority of the total number) on which the AI will be horribly misguided. The rest, not so much. But the horribly misguided ones will lead to very bad mistakes that will never be corrected. No matter how good the AI is at instrumental rationality, it will never get out of them. And the problem with even a few big mistakes escalates given the AI's increasing power and knowledge: a small percentage of wrong priors is likely to snowball into a lot of very wrong assessments in the end (about, say, the long-term future of Earth's civilization), given that it will make many more judgments. But whatever the final percentage of very bad judgments (i.e., judgments very far from moral truth), the main problem is the near-certain impossibility, for all practical purposes, of future correction.

      I would say that we can make that assessment even if we do not know who is programming it, because whoever they are, they're likely to have some priors (even if only a small minority of them) very wrong. I make that assessment on the basis of the disagreement we see among philosophers (or, if you prefer, philosophers specialized in ethics), and the thesis you proposed in the post regarding the source of some of the persistent disagreements (i.e., wrong priors).
      Trying to implement consensus values seems to be an improvement, but it seems to me it has significant difficulties that will make it likely to fail, like:

      1. If the programmers agree with your point about the weakness of the evidence from disagreement (peer or otherwise), and in particular about trusting one's starting points as the only hope of achieving truth via rational means, it seems to me that they would be inclined to tell the AI to figure out what their own ideal selves would conclude, rather than give a significant amount of weight to disagreement. On the other hand, if they don't agree, they got that wrong! (We're assuming you're correct.)
      2. Aside from that, even today, there is vast disagreement about at least some moral issues, even involving fundamental matters of political organization. You mention the moral views commonly found in actual people. Are you talking about, say, American people, or Chinese people? There are of course other countries too, but the first AI seems likely to come from one of those two. And a Chinese AI sounds pretty bad to me if there is no essential human convergence under ideal conditions, especially in regard to political organization, freedom of speech, and so on. But even if it's America, there are vast disagreements about some specific issues (also, I think both sides in the culture wars are vastly mistaken about some things, though that is my assessment, and can't be made on the basis of your analysis alone). While some of these problems might be resolved in the coming decades, it's very unlikely that all of them will be (and if they were, they might be resolved in the wrong direction).

  2. Could you say more about whether you think what you've written avoids the need for philosophers to suspend judgment? I take it that we defer to experts in science only when they arrive at consensus, not just when a lone smart scientist makes any bold proclamation. We non-physicists only get to buy Special Relativity once basically all physicists buy it, not as soon as Einstein introduces it. And I think we trust expert consensus precisely because the experts land in the same position despite starting from a wide range of epistemic priors.

    It seems, then, that the methodologies successful sciences utilize provide a much deeper ironing out process than the methodologies of philosophy. The lack of consensus among philosophers is the more salient feature, not the fact that any specific philosopher (or subset of philosophers) you deem to have more expertise than you disagrees with you.

    Replies
    1. Hi Sean,

      Right, I think what's going on in the science case is that we take the community of scientists as a whole to most likely represent what our idealized selves would conclude. And in the absence of a scientific consensus (say about the correct interpretation of quantum mechanics) we might suspend judgement precisely because we have no idea what side of the debate our idealized selves would end up coming down on. (By contrast, it'd be perfectly reasonable for a quantum physicist to have a view on this matter, and not suspend judgement, no matter that many of their academic peers reached different conclusions.)

      Given this account of when deference is called for, there's no general need to suspend judgement just because there are other possible views out there that could have been reached by similar methods (broadly speaking, i.e. not building one's substantive starting points into the 'method') to your own. Indeed, any such general epistemic principle would be self-undermining, given that there are coherent views (like my own) that reject it. See my discussion of Street's "moral lottery" for much more detail along these lines.

    2. I think there is a general need to suspend judgment when there are other *actual* views out there that *actually have been* reached by similar methods (I've found myself [silently] disagreeing with you when you've written about actual vs. possible disagreements before).

      The case for this is as follows: when expert physicists reach consensus over Special Relativity, that's defeasible evidence that their methods are reliable (or, at least, it provides no evidence that their methods are unreliable). But when, say, expert philosophers of time fail to reach consensus over A-theory and B-theory, this is evidence that their methods are unreliable (in the way that the mere possibility of physicists rejecting Special Relativity isn't).

      In philosophy, actual disagreements abound, which suggests the current methods employed to answer philosophical questions are unreliable guides to the truth. This way of framing the issue doesn't seem obviously self-undermining.

    3. Ah, well I'm glad you are breaking your silence now! :-)

      I agree that actual disagreement is what's relevant regarding empirical/scientific questions -- my account predicts this.

      But if what you're worried about is our reliability, why think we need actual disagreement in order to recognize the unreliability of philosophical methods (when individuated coarse-grainedly)? I would've thought this was largely a priori. As soon as we recognize that there are multiple internally coherent viewpoints, that the abstract truths of philosophy cannot exert any causal influence on us, and that all that rational argumentation can do is increase coherence, it would seem to follow already that rational argumentation is not in general reliable -- it will only work for those of us who are starting in close enough to the right place.

      Actual philosophical disagreement provides no extra evidence on this front. Conversely: Widespread philosophical agreement would not show that philosophical methods were reliable, but just that actual philosophers happen to start from similar places.

      Of course, none of this commits us to thinking that our own philosophizing is unreliable, because we can individuate our own method more finely, as starting from the particular starting points that we do. Disagreement from philosophers using a different fine-grained method (i.e. different starting points) obviously does nothing at all to show that my fine-grained method is unreliable.

  3. Lewis is an unusually difficult target for these arguments. He generally cares a lot about defending widely accepted common-sense views. There's a nice strategy for preserving all the ordinary intuitions in the skepticism debate (the contextualism of "Elusive Knowledge"), and a general theory that tries to make lots of our intuitive judgments about counterfactuals come out true.

    The trouble is that the theory of truthmakers for counterfactuals commits us to modal realism. But if that really turns out to be the consistent way to hold onto the stuff we all believe in that he tries to account for, maybe that's where we'd end up after ideal reflection?

    Replies
    1. Maybe! It's true that we can't really know in advance where ideal reflection will lead us, so we should hold our views tentatively and with a non-trivial credence that we'll end up later rejecting them for something quite different. But I certainly give much more credence to the claim that modal realism is not the only consistent way to "hold onto the stuff we all believe in" here.

  4. Nice piece! Having a master's in quantitative modeling, I can tell you that there are many world models that rapidly go off the rails if you are nowhere near a starting point that has a path to your goal. You can also find many local minima that mislead you into thinking you have obtained an optimal solution. As a 'fun' undergrad philosophy student, I had never thought about how those issues relate to philosophy.
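
    To illustrate (a toy example of my own, not anything from the post or from any particular model I've worked on): plain gradient descent on a simple non-convex function settles into different minima depending purely on where it starts, and no amount of extra iteration fixes a bad starting basin.

    ```python
    # Toy sketch: gradient descent on a non-convex function.
    # Where you end up depends entirely on where you start.

    def f(x):
        return x**4 - 3 * x**2 + x      # two basins: one global, one merely local minimum

    def grad(x):
        return 4 * x**3 - 6 * x + 1     # derivative of f

    def descend(x, lr=0.01, steps=5000):
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    for start in (-2.0, 2.0):
        end = descend(start)
        print(f"start {start:+.1f} -> x = {end:+.3f}, f(x) = {f(end):+.3f}")

    # Starting at -2.0 reaches the global minimum (x ≈ -1.30, f ≈ -3.51);
    # starting at +2.0 settles into the shallower local minimum (x ≈ 1.13, f ≈ -1.07).
    ```

    The analogy is loose, of course: the point is just that a better procedure (smaller steps, more iterations) never rescues you from a starting point in the wrong basin.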

  5. I agree, although I think it is not at all obvious how someone who is not a meta-ethical quietist can consistently say this about ethical disagreement.

  6. Another way to go, which I find tempting, is to deny that we should expect universal norms for "rational" deference to expertise. You should defer to people's expertise insofar as they can be shown to have access to knowledge you don't have, superior arguments, etc. The reason not to defer to Lewis isn't that Kripke disagreed with him, but that the arguments Lewis offered are subject to substantive criticism. Do philosophers think we can say much more than "you should defer to the opinions of experts who have demonstrated said expertise, and only when there aren't countervailing reasons not to"? I think if you take on this context-dependent view of deference to experts, the "puzzle" vanishes.
