Saturday, May 15, 2021

Why Belief is No Game

In 'The Game of Belief', Barry Maguire and Jack Woods (hereafter M&W) nicely set out a broadly "pragmatist" understanding of normativity.  In this post, I'll try to explain why I think it is misguided, and what alternative understanding we should adopt instead.

The gist of M&W's view is that practical reasons (including for belief and other attitudes) are the only truly authoritative normative reasons, but there are also all kinds of (non-authoritative) practice-relative normative reasons that provide "standards of correctness" -- e.g. for playing chess "correctly" (i.e. strategically well) or even for believing "correctly" (i.e. in line with purely epistemic standards).  We will often, but not always, have practical reasons to do things "correctly"--that just depends upon circumstantial details.

My biggest complaint about this sort of view is that it completely divorces reasons from rationality.  They conceive of reasons as things that support (either by the authoritative standard of value, or some practice-relative standard of correctness) rather than as things that rationalize.  As a result, they miss an important disanalogy between practice-relative "reasons" and epistemic reasons: violating the latter, but not the former, renders one (to some degree) irrational, or liable to rational criticism.

Of course, there are more important things than being rational: I'm all in favour of "rational irrationality" -- taking magic pills that will make you crazy if that's essential to save the world from an evil demon or the like.  But I still think it's important to recognize rationality as the objective/"authoritative" standard of correctness for our cognitive/agential functioning.  It's importantly different from mere practice-relative reasons, which I don't think are properly conceived of as normative at all.  There's really nothing genuinely erroneous (irrational) about playing chess badly in order to save the world, in striking contrast to the person who (rightly and rationally) turns themselves irrational in order to save the world.

So, whereas M&W are happy to speak of "chess reasons" as genuinely normative (just not authoritative) reasons, I would reject this on the grounds that chess reasons do not rationalize action. If the evil demon will punish us all if you play chess well, then you really have no good reason at all to play well.  (By contrast, if you're punished for believing in line with the evidence, that doesn't change what it is rational to believe, it just provides an overwhelmingly important practical reason to [act so as to] block or override your epistemic rationality somehow!)

M&W write:

We take activity-specific reasons... clearly to favour the options they favour (relative to the ends or values of the activity in question). In the context of the relevant activity, they are the sorts of things that could be offered as advice, or justification.

Compare hypothetical imperatives.  I think it's really important to stress that hypothetical (or end-relative) favouring is not real favouring.  One may speak of "genocide-relative reasons", but they could not correctly be "offered as advice, or justification" -- a Nazi might offer them as such, but they would be making a mistake.  I worry that M&W's view renders epistemic reasons basically on a par with genocide-relative reasons -- typically less damaging in practice, of course, but just as inherently lacking in genuine normativity.  When we call epistemically-supported beliefs "correct", that's just relative to our standards, in the same way that constructing gas chambers may be "correct" according to the Nazi's preferred activity.  I find such relativism unpalatable.  Epistemic normativity is more objective than this view allows.

Surprisingly, M&W take "non-compliance" with "operative standards of correctness" to "render one liable to certain kinds of criticism", even if one has violated these non-authoritative standards precisely in order to comply with authoritative normative reasons, or what one all-things-considered ought to do. This claim strikes me as substantively nuts.  If you rightly violate your professional code in order to save the world from destruction, it simply isn't true that you're thereby "liable to professional criticism." (Especially if your profession is, say, a concentration camp guard.)  Anyone who criticized you would reveal themselves to be the world's biggest rule-fetishist. Put another way: conforming to the all-things-considered ought is an indisputable justification, and you cannot reasonably be blamed or criticized when you act in a way that is perfectly well justified.

So I don't think that (reasonable) critical practices support the claim that practice-relative "reasons" are normative. By contrast, as mentioned above, you are liable to epistemic criticism for holding epistemically irrational beliefs, even if you acted rightly and rationally in inducing your state of epistemic irrationality. In this case, your act of inducing irrationality was justified and blameless, but the resulting irrational belief is a distinct locus of rational assessment, which is why we can make sense of this separate (and more negative) assessment.  It is, after all, really true that your doxastic faculties weren't functioning well (by design!) in forming the irrational belief, and that -- unlike violating professional codes -- reveals a kind of genuine agential failure.

Indeed, it gets worse for M&W: I think our critical practices actually reveal the opposite of what they want.  For consider an agent who (lacking access to magic pills or other ways to act upon themselves) continues to believe in accordance with the evidence even when this is practically disastrous.  This agent may be perfectly rational: they recognize that saving the world is more important than being rational, so they desperately wish to be epistemically irrational, but they simply lack the means to bring about this desired result.  Is the agent liable to criticism?  The pragmatist view predicts that they are: they are failing to believe as they (putatively) ought, in violation of overwhelmingly strong practical (authoritative) reasons for belief. But that verdict strikes me as absurd.

It is of course tragic that the agent isn't more irrational, but that's hardly their fault, and not something for which they are in any way liable.  They are (unfortunately!) perfectly rational, and -- I would say -- responding appropriately to all the normative reasons that they have.  They believe in accordance with their epistemic reasons (which are the only real reasons for belief), and they desire just what's desirable.  If they could manipulate their own beliefs so as to save the world, then they would. But they can't.  That's a lamentable fact about the situation, but not in any way a rational failing on the part of the agent.

So we see that pragmatists are wrong about what people are rationally criticizable for, and hence wrong about what reasons there are.

[For five more general reasons to prefer my fittingness framework over indiscriminate pragmatism, see my old post on 'Reasons Talk and Fitting Attitudes'.]

P.S. M&W take the question of what we "just plain ought" to do (or believe) to be settled by the authoritative (for them: practical) reasons.  They thus see little difference between questions like, "What ought I to do, when abiding by my professional code of conduct would seriously harm people?" and "What ought I to believe, when believing the truth would cause serious harm?"

I think those questions are subtly -- but importantly -- different. Yet another question would be, "What ought I to do about my beliefs, when believing the truth would cause serious harm?" You may have good reason to act upon yourself in some way that yields more beneficial beliefs.  But we can distinguish such external belief-directed practical activity from the internal doxastic processes of deliberation and believing.  As Shah and others have argued, doxastic deliberation is "transparent to truth" -- asking what to believe gives way to the question of what is true, unlike in practical deliberation (perhaps the latter instead gives way to the question of what is good, or worth attaining).  Pragmatists like M&W seem to conflate doxastic deliberation with belief-directed practical deliberation, whereas I think our understanding will be more accurate if we maintain the distinction. (It's not obvious that all that much hangs on this, but conflations can lead to other philosophical confusions, so they seem best avoided if possible.)
