Tuesday, August 18, 2015

Judgmentalism vs Non-commitalism

Call Non-commitalism the view that we sometimes ought to suspend belief, assign imprecise credences spanning the entire interval [0,1], or otherwise refrain from doxastic commitment.

Opposing this, we have Judgmentalism, the view that we're never required to suspend judgment: there's always some doxastic commitment or other that we could at least as reasonably hold.

We might go further and consider Strong Judgmentalism, the view that there is always some doxastic commitment (e.g. some level of credence) that's rationally superior to suspending judgment entirely.

Which of these views is most plausible?  And (for any epistemologists in the audience) is there any existing literature on the topic?  (I just made up these names, so if such literature exists, these views presumably go by different labels there...)

I find myself drawn to Strong Judgmentalism, but would settle for defending Judgmentalism against Non-commitalism.  I see three main routes to doing so:

(1) Co-opt existing arguments against imprecise credence.  Since non-commitalism is effectively the view that some particular (namely, maximally open) imprecise credence is sometimes required, existing arguments (e.g. this one) against the rationality of imprecise credence are a fortiori arguments against non-commitalism.

But these are controversial, and have stronger implications than I need.  Even friends of imprecise credence can be Judgmentalists, so long as they don't insist that we're ever required to have maximally open imprecise credences.  (Maybe we should sometimes have imprecise credences that range over some smaller interval, e.g. 0.4 - 0.6ish.)

(2) Make the intuitive case for Judgmentalism. I'm fond of the slogan, "Use your best judgment, don't suspend it!"  Of course it's tendentious: the non-commitalist will say that they use their best judgment in determining whether they possess sufficient evidence to undertake a doxastic commitment -- a judgment (about their evidential state) which might lead them to suspend judgment (about the proposition in question).

But I think there's something to the slogan nonetheless. As I previously put it:
[T]here's nothing especially admirable about answering every philosophical question with "Who knows?" The philosophically mature skeptic would add, "But here are a couple of possible options...", which is certainly a huge improvement. Best of all, it seems to me, would be to further make a tentative judgment as to which of those options is best, and go from there. You can always change your mind later. 
I guess suspending judgment is a way to 'play it safe', if it's more important to you to avoid being wrong than to actually get things right. But that seems a kind of intellectual cowardice. Better to actively seek the truth, and if you end up in the wrong place, just turn around and try again.

Here it's important to stress that judgmentalism need not (and presumably should not!) involve a dogmatic attitude.  We're not infallible, and when we have very little evidence to go on, we should presumably be especially open to the possibility of subsequently revising our opinions.  But that's no reason (it seems to me) not to give it our "best shot" in the meantime.

(3) Flag the theoretical benefits of Judgmentalism.  Judgmentalism supports the practice of philosophy against skeptical objections.  If we're ever required to suspend judgment about anything, the irresolvable disputes of philosophy are likely among the prime candidates.  (So non-commitalism itself is probably something that you're rationally required to suspend belief about, if non-commitalism is true.)

Judgmentalism, by contrast, offers a powerful response to the skeptic: "If you think my current level of credence is unjustified, what alternative credence would be better?"  If suspending judgment is off the table, then lazy skeptics can no longer rest on their laurels with negative judgments.  In order to productively disagree, they must put forward some alternative positive proposal about which credences are most rational.

You might wonder: Even if I'm right that Judgmentalism better supports the practice of philosophy, is this any reason to think Judgmentalism likely to be true, or is it just an invitation to engage in wishful thinking?  I'm hoping the former!  Seriously though, I think there's a decent case to be made for having some default trust in propositions that are preconditions for knowledge or inquiry.  And while there are limits to how far you can push such a principle (even if tarot cards were the only possibility of gaining knowledge of the universe outside our light cone, it wouldn't follow that we should trust in tarot cards as an epistemic method!), I can't think of any particularly strong reasons to reject Judgmentalism.

Can you?

14 comments:

  1. Susanna Rinard has a paper entitled "Against Radical Credal Imprecision" that argues against a view close or identical to what you call non-commitalism. Here's a link.

    I vaguely remember that Samir Okasha has a response to Humean skepticism which, as I recall, presupposed something like the negation of non-commitalism. Maybe it's "What Did Hume Really Show About Induction?", but I'd have to reread the paper to be sure.

  2. This comment has been removed by the author.

    Replies
    1. Non-commitalism as you described it comes in two versions: a maximally-open-imprecise-credences version and an 'other' version. I'm probably with you in rejecting maximally open imprecise credences, but there are other ways of refraining from belief that I find more appealing. More precisely, there are cases in which I think it is irrational for some (at least human-level) agents to assign either a precise credence (even one based on a 'universal prior' like .5) or a 'narrow' credence range. I'm not sure that counts as refraining from belief in the sense you have in mind, but I think it counts in an interesting, related sense.

      One example might be cases in which there are multiple candidate prior probabilities and no easy way to choose among them (for human-level reasoners with little information to rely on).

      'Choosing among priors' is a big problem for Bayesian reasoning in general, but it's especially problematic in areas where our best guesses are probably bad. An old LessWrong post called one sort of problematic case "Bead Jar Guesses" (http://lesswrong.com/lw/em/bead_jar_guesses/). I think problems like this, in which you have some information, but not much, and no easy way to update, come up a lot in ordinary life. Based on the information you have, you could use one of several, roughly equally plausible, heuristics, each of which assigns a different prior. How should you proceed?

      Importantly, I'm not necessarily talking about cases of pure ignorance; I'm talking about cases where you have some information from background heuristics, but that information seems weak and different sources of it may conflict with one another.

      Say I'm trying to decide whether or not to reduce my chocolate consumption out of fear of an increased risk of cancer. I've read conflicting studies on the health impact of chocolate, some saying that it slightly increases the risk of getting cancer and some saying that it slightly decreases it. Furthermore, one of my grandparents has been a self-described 'chocoholic' for years and has remained cancer-free to an advanced age. Relative to the proposition 'Reducing my chocolate intake will decrease my chances of getting cancer', I've got evidence based on one set of studies, evidence based on the other, evidence from the general heuristic 'how much do I trust studies that conflict with other studies?', anecdotal evidence from observations of my grandparent, evidence about my genetic similarity to my grandparent, and lots of other evidence besides.

      Overall, it seems like none of this information, even taken in concert, gets me much closer to deciding whether or not reducing my chocolate intake will reduce my risk of cancer. So what should I do? I could scrap all of this evidence and go with an abstract, universal prior (e.g. .5), or I could start from such a prior and then modify it as best I could, but since my evidence is so weak, the prior itself will be doing most of the work. For me to think it rational to make decisions in this way, I would have to be pretty sure about that prior, but I don't feel at all confident in any particular choice of universal prior. What should I do?
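      To make the worry concrete, here is a minimal sketch of Bayesian updating in odds form (the likelihood ratios are invented for illustration, not taken from any actual studies): when the evidence is this weak, the posterior barely moves, so whichever candidate prior you started with is still doing almost all of the work.

          # Toy calculation with made-up likelihood ratios (nothing measured):
          # Bayes' rule in odds form, posterior odds = prior odds * likelihood ratio.
          def update(credence, likelihood_ratio):
              odds = credence / (1 - credence)
              odds *= likelihood_ratio
              return odds / (1 + odds)

          weak_evidence = [1.2, 0.9, 1.1]   # studies, anecdote, heuristics: all near 1

          for prior in (0.3, 0.5, 0.7):     # three rival candidate priors
              credence = prior
              for lr in weak_evidence:
                  credence = update(credence, lr)
              print(prior, round(credence, 3))
          # prints roughly 0.34, 0.54, 0.73: each posterior tracks its prior,
          # so the disagreement among candidate priors survives the evidence.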

      There are methods of assigning priors that avoid this problem, but many of them are only available to pretty powerful Bayesian reasoners. High-powered prior-assigning schemes (Solomonoff induction schemas and strategies like those found in e.g. http://www.stat.cmu.edu/~kass/papers/rules.pdf) aren't suitable for ordinary reasoning. One solution might be to outsource one's own belief-forming abilities to a more powerful agent whenever possible. If you can't do that, isn't it preferable to avoid assigning a credence to the proposition in question, at least in cases where you don't have to immediately act on this information?

    2. I deleted my earlier comment because I thought it was more confusing than helpful. Sorry if that makes the blog hard to read!

    3. The above response is too long as it is, but I wanted to mention Mark Schroeder's defense of the rationality of withholding belief based on stakes considerations. His account is not particularly Bayesian-friendly, and his views on the connection between knowledge and action change the context a bit, but his model is really interesting. I've linked it below. http://tinyurl.com/opxjoh4

    4. Thanks, I'll have to check out the Schroeder paper. On the "what should I do?" question for non-ideal agents, see my response to Ryan downthread.

  3. Hi Richard,

    Really interesting question as usual. I am just a tourist in the epistemology literature, but--in keeping with the spirit of your own view--I will offer my quick thought.

    I am actually kind of drawn to Non-commitalism. It seems at least possible to me that one's epistemic environment could be extremely poor, such that any commitment other than withholding or suspending judgment might be unwarranted. Of course, it could be that some doxastic commitment would always be warranted by the evidence, if one had access to it. Suppose I do not have access to the evidence, or the evidence is being distorted in some way such that I cannot understand it, and I nevertheless commit to some belief, which happens to be the belief that the evidence (properly understood) supported. It seems to me that I just got lucky, and despite my good epistemic luck, it would have been more rational for me to withhold. So it seems to me we should attach the epistemic "ought" to the agent's subjective position, rather than to the objective facts about the evidence. Maybe this sort of thinking supports Non-commitalism?

    Regarding your intuitive case--the literature on epistemic value might be useful here. The issue seems to be about what types of errors one has most reason to fear. You seem inclined to think that commitment (of a suitably non-dogmatic sort) will best lead to progress. Maybe, but I can imagine merits to the opposing hypothesis as well. For example, it might depend on whether one fears type 1 or type 2 errors more, and there may be no global answer as to which fear is more serious.

    The literature on epistemic permissiveness might also be interesting--although I've lately thought that literature applies to a bunch of questions.

    One last thing. My personal report is that I am sometimes in philosophical situations (for example, the metaphysics seminar) where someone will give me a case and ask me my intuition, and I tell them I have no intuition at all about the case. Sometimes my interlocutor will appear baffled at this, or will act like I am not playing the game right, or something. But if I were to generate any intuition, it would be arbitrary, and so it really would not be an intuition at all. What I honestly have is nothing. It is surprising to me that this report is unacceptable to people. Perhaps differences in responses to your question can help explain the awkwardness of these situations?

    Replies
    1. Hi Ryan! Yeah, I think something along those lines (i.e. "what am I meant to do, when I haven't got a clue?") is the best case for non-commitalism. And I think that in practice, as cognitively limited agents, suspending judgment is often a reasonable policy. But I don't think it's ever required of ideal agents, i.e. agents who start from the objectively rational priors and update according to whatever (however limited) information they're then exposed to. And so I likewise don't think it's strictly required of non-ideal agents either, if we could instead approximate the functioning of the rationally ideal agent in our particular circumstances (i.e. start from roughly the right prior, and update accordingly).

      Having said that, I agree that there's no obvious reason to prefer "picking arbitrarily" (e.g. when you have no clear priors) over simply withholding judgment, and so would want to be clear that I don't take my view to imply otherwise. So even if, strictly speaking, what one rationally ought to believe is whatever follows from the objectively rational priors by updating on one's limited available evidence, I don't mean to present this as advice to be followed. (See this old post for why I don't like the more "subjective", "instruction manual" approach to theorizing about rationality.)

      Funny that anyone would expect you to have an opinion about everything. I'm afraid I, too, am surprised by your interlocutors here!

  4. Ah, I just read through your linked post about the "instruction manual" approach. That is helpful. Thanks!

  5. Thanks for the links; I have a much better hold on what you're thinking about now. I'd like to know more about your view on 'sufficiently competent agents'. I kind of doubt that (almost) fully rational but cognitively-limited agents should withhold in situations where their limitations are of degree (an ideal agent could assign a precise probability here, but maybe I have to assign a range). I think it might be rational to withhold when the difference is of kind (the evidence is fully indeterminate; an ideal agent would e.g. compute the minimum information length of each competing option and make a parsimony-derived judgment). In the former case, the human does what the ideal agent does, just worse; in the latter case, the human gets to a certain point and can't do the 'rational' thing at all. Do you think that's an important distinction?

    As for ideal agents, I think there's still a case for withholding of a certain kind. It seems to me that whether or not withholding is ever rational for an ideal agent depends on how many 'objectively rational prior' generation algorithms there are. If there is one best prior schema (say, a specific version of Solomonoff induction using a specific universal Turing machine), then ideal agents should never withhold. If there are multiple equally rational prior schemas, then two ideal agents could come to different conclusions from the same evidence (without disagreement), and maybe some version of withholding is OK? Even in that case, assigning a 0-100 credence range seems weird, but I don't know what else one would do.

    Replies
    1. Yeah, that's an interesting distinction. Though my background view here is that it's most rational to just start with the (unique) objectively rational prior internalized -- rational agents don't have to work it out, any more than you have to work out whether modus ponens is valid, or whether you accept inductive or counter-inductive norms. So, insofar as it's possible for humans to also just start from the right priors (if they're lucky -- but there's always a sense in which your most basic programming is out of your control and hence a matter of constitutional "luck"), it's not the case that humans in general "can't" do the rational thing here. But you're right that a substantial subset might find themselves in circumstances where they can't, and something like withholding might be the right account of "second-best" rationality for those unlucky ones.

  6. Assigning a credence p(X) strictly between 0 and 1 to a proposition X is *already* the correct way to withhold judgement concerning X. Having "intervals" of credence has always seemed to me to be overegging the pudding. It's like saying "not only am I not sure about X, I'm not even sure whether I'm not sure about X!" That seems to me like a self-contradictory doxastic attitude, even if I can empathize with the emotion behind it.

    If I really weren't sure what probability to assign, as a good Bayesian I should make a probability distribution over what I think the correct probability should be. But then I can always take the weighted mean of that distribution, and that will be a sharp probability. In any case, the Dutch book arguments imply that if there is any rational decision theory based on credence intervals, it must be equivalent to one with sharp probabilities.
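    In discrete form, that collapse to a single number is just a weighted average. A quick sketch (the candidate values and weights are made up purely for illustration):

      # Made-up second-order uncertainty: candidate values for "the correct
      # probability", each weighted by how plausible I find that candidate.
      candidates = {0.2: 0.25, 0.5: 0.5, 0.8: 0.25}   # weights sum to 1

      sharp_credence = sum(p * w for p, w in candidates.items())
      print(round(sharp_credence, 3))   # 0.5: the weighted mean is one sharp probability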

    Also, I think we have the tools to estimate probability in any situation, regardless of how unsure we are. For example, suppose a great but incomprehensible philosopher tells me that there are 3 mutually exclusive philosophical positions on a topic, A, B, and C. And suppose he is so bad at explaining what these are that when he is done lecturing, all I know are the names of the positions. Then I have no intuition about the underlying merits of the situation, but I am perfectly comfortable assigning each a probability of 1/3. If it seems from his muddled explanation that the situation is more like a related pair of theories A1, A2, versus a completely different theory B, then I might consider making it p = 1/4, 1/4, 1/2. If I'm not sure whether I should do that, then I make some average. If you don't know, you guess. (If your probability theory / epistemology can't handle situations which involve a lack of knowledge, you aren't doing it right. Lack of knowledge is precisely what it's designed for.)
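    The "make some average" move at the end can be spelled out the same way; here's a sketch that gives the two carvings equal weight (the 50/50 weighting is itself just an illustrative choice):

      # Two candidate ways of dividing credence over the three positions:
      symmetric = [1/3, 1/3, 1/3]   # treat all three as on a par
      grouped   = [1/4, 1/4, 1/2]   # treat the first two as variants of one view

      # Unsure which carving is right?  Average them (equal weight here):
      mixed = [(p + q) / 2 for p, q in zip(symmetric, grouped)]
      print([round(p, 3) for p in mixed])   # [0.292, 0.292, 0.417]: still sharp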

    The fact that it makes absolute skepticism impossible is a nice bonus.

  7. I might add that quoting an (approximate) range of probabilities might be useful when one is quantifying two different kinds of uncertainty. For example, even if you have a perfectly sharp *current* credence about a question, you might want to say that reasonable people might assign any credence in such-and-such a range, and you wouldn't think they were crazy to do so. Or you might want to communicate that, if you thought about the problem for two more weeks, your probabilities would be likely to shift a lot (or only a little, because you already have as much grip on the problem as you ever will, e.g. a coin toss).
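    One way to model that second kind of uncertainty without giving up a sharp current credence is with Beta-style pseudo-counts; a sketch with invented numbers, not anything from the discussion above:

      # Two hypothetical agents with the same sharp credence (0.5) but different
      # "resilience", encoded as pseudo-counts of prior (successes, failures).
      def credence_after_one_success(successes, failures):
          # posterior mean of a Beta(successes, failures) prior after one more success
          return (successes + 1) / (successes + failures + 1)

      print(credence_after_one_success(2, 2))     # 0.6: shifts a lot on one observation
      print(credence_after_one_success(50, 50))   # ~0.505: barely moves at all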

    In theory, I think a rational Bayesian should always be able to put forward a sharp set of credences. In practice, I find myself most unwilling to do so when I suspect my initial answer might look really stupid if I thought about it for longer. But that's irrelevant if the goal is just to do the best I can right in the present instant.

