Monday, January 18, 2021

Epistemic Calibration Bias and Blame-Aversion

People typically treat having an importantly false belief as much more problematic than failing to have an importantly true belief.  They're more concerned about being overconfident than being underconfident in their credences.  But why?  Is such an epistemic asymmetry warranted?

I'm dubious.  The ideal is to be epistemically well-calibrated: to have just the degree of confidence in an important proposition that is warranted by your evidence, such that in the long run exactly X% of your "X% confident" beliefs turn out to be true -- no more and no less.  Moreover, it seems to me that we should be equally concerned about miscalibration in either direction.  If we are underconfident (or withhold judgment entirely) when our evidence strongly supports some important truth, that's just as bad, epistemically speaking, as being correspondingly overconfident.
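To make the calibration ideal concrete, here is a minimal sketch of how one might check it against a track record of credence/outcome pairs.  (The Python helper `calibration_report` and its data format are my own illustration, not anything from the post.)

```python
from collections import defaultdict

def calibration_report(judgments):
    """Compare stated credences with observed frequencies.

    `judgments` is a list of (credence, came_true) pairs, e.g. (0.9, True).
    Perfect calibration means roughly 90% of your 0.9-credence beliefs
    turn out to be true -- no more and no less.
    """
    buckets = defaultdict(list)
    for credence, came_true in judgments:
        buckets[round(credence, 1)].append(came_true)
    for credence in sorted(buckets):
        outcomes = buckets[credence]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {credence:.0%} -> observed {observed:.0%} (n={len(outcomes)})")
```

On this picture the symmetry is built in: overconfidence shows up as observed frequencies falling below the stated ones, underconfidence as the reverse, and both are departures from the same ideal.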

In thinking about this, it's important to distinguish two dimensions of confidence: what we might call credal value and robustness.  To see how these come apart, note that I might have weak evidence that something is very probable.  My credence in the proposition should then be high -- for now -- but I should regard this credal value as tentative, or likely to change (in an unknown direction) in the face of further evidence.  "Bold beliefs, weakly held," to put the idea in slogan form.
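One way to picture the credal value / robustness distinction is Bayesian.  The sketch below is purely illustrative (it assumes a uniform Beta prior, and the `beta_summary` helper is hypothetical): weak and strong bodies of evidence can warrant similar credences, but the weak-evidence credence comes with a much wider posterior spread, and so is far more liable to shift on further data.

```python
import math

def beta_summary(successes, failures):
    """Posterior mean and std dev for a Beta(1,1) prior updated on the data."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

# Weak evidence: 4 of 5 trials succeeded -- high credence, low robustness.
print(beta_summary(4, 1))    # mean ~0.71, std ~0.16
# Strong evidence: 80 of 100 trials succeeded -- similar credence, far more robust.
print(beta_summary(80, 20))  # mean ~0.79, std ~0.04
```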

This distinction carries over, in obvious fashion, to expected-value judgments.  Given high uncertainty and lots of important "unknowns", our conclusions should generally be tentative and subject to change in light of future evidence.  But this is compatible with their having pretty much any first-order content whatsoever.  One could, for example, tentatively hold that the expected value of some policy proposal -- given one's current evidence -- is extremely positive. Indeed, this is my view of my preferred pandemic policy: it strikes me as having had extremely high expected value, and I would even say that this evaluation seems tolerably obvious given my available evidence.  Yet I wouldn't be terribly surprised if new evidence emerged that required me to radically change my opinion, since I of course acknowledge that my epistemic basis is limited.

It's interesting that such epistemic limitations don't by themselves do anything to undermine taking a bold view.  As I previously wrote in response to Jason Brennan on voting: "so long as you've no special reason to think that the unknowns systematically favour going one way rather than the other, their influence on the expected values of your choices simply washes out."  This strikes me as an important, yet widely unappreciated, point.  People -- even smart philosophers -- seem to assume that "unknowns" as such are epistemically undermining.  This is simply a mistake.
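To see why directionless unknowns wash out of an expected-value comparison, here is a toy Monte Carlo sketch (the `expected_value` helper and the Gaussian stand-in for the unknowns are my own illustrative assumptions): adding a large zero-mean shock to every option leaves the comparison roughly where the known components put it.

```python
import random

def expected_value(known_component, trials=100_000):
    """Estimate EV when a large, directionless unknown is added to an option."""
    total = 0.0
    for _ in range(trials):
        unknown = random.gauss(0, 50)   # big uncertainty, no systematic direction
        total += known_component + unknown
    return total / trials

# The unknowns add enormous noise to each outcome, but with no reason to think
# they favour either option, they wash out of the comparison in expectation.
print(expected_value(known_component=10))  # ~10
print(expected_value(known_component=2))   # ~2
```

The zero-mean assumption here is just the "no special reason to think the unknowns systematically favour going one way rather than the other" condition; if the unknowns were known to skew against one option, they would of course no longer wash out.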

I think it's especially important to correct for this bias in relation to pandemic policy. In a pandemic, you should be at least as concerned about mistakenly neglecting a good policy solution as you are about mistakenly advancing a bad policy.  At the very least, some argument is needed for thinking that the latter risk is greater.  You can't just fall back on your old conservative epistemic habits.  So I would urge everyone to blame others less for exploring new ideas -- even ones that ultimately prove misguided.  At least half of our epistemic sanctions should be directed towards those who are unduly conservative or closed-minded. I would even go further, and argue that excessive conservatism is much the greater risk (given how bad the status quo is in a pandemic) -- and so if anything, a greater share of our epistemic sanctions should be directed against that error.

What do you think?
