Sunday, October 03, 2021

Ruling out Helium-Maximizing

Joe Carlsmith asks: is it possible you should maximize helium?  Robust realism per se places no constraints on what the normative truths might end up being.  So, in particular, there's no guarantee that what we objectively ought to do would hold any appeal whatsoever to us, even on ideal reflection -- the objective requirements could be anything!  (Or so you might assume.)

But I think that's not quite right.  Metaphysically, of course, the fundamental normative truths are non-contingent, so they could not really be anything other than what they in fact are. Epistemically, the fundamental normative truths are a priori (if knowable at all), so it's not clear that erroneous views are "possible" in any deep sense.  A somewhat wider range of views may be "possible" in the superficial sense that we don't currently know them to be false, but unless you're a normative skeptic, we can currently know that pain is bad and that maximizing helium is not the ultimate good.

It's an interesting question how we can have any normative knowledge at all. (I offer my answer here.) But given that we can, it's important not to lose sight of this fact when thinking about the implications of non-naturalism.  For while the "non-natural" status of normative properties does not constrain their application, it doesn't follow that they really could apply to just anything (either metaphysically or epistemically).

Compare two very different bases for the confident rejection of helium-maximization:

(1) Normative internalism rules out the possibility of a mismatch between normative truth and the attitudes we'd hold on procedurally ideal reflection.  So on purely formal grounds, we can be confident that what we objectively ought to do cannot be something (like maximizing helium) that would never appeal to us.

(2) Normative externalists must instead appeal to substantive normative claims, such as the datum that well-being matters (non-instrumentally) and helium does not.

I think the substantive explanation is the better one. After all, it seems an open possibility that some fool might actually want nothing more than to maximize helium (even on ideal reflection), so to maintain that they would be mistaken we need to leave room for possible mismatches between subjective appeal and objective normativity.  Furthermore, in addressing the question why helium-maximizing would be so misguided, I think the answer, "because people are what really matter!" is better than "because there's no way I would ever care about helium so much!" The real problem with helium-maximizing is substantive, not merely formal, so it's entirely appropriate that our response to it should lie on this (first-order rather than metaethical) level.

So, while (externalist) non-naturalists view deep alienation as a live possibility in general, they need not regard it as a possibility that's compatible with their current attitudes, if they're able to know that their current attitudes are actually (at least roughly) right.  We may thus be confident that normative reality will not completely baffle us (while allowing that it might baffle others).

But, importantly, it may still surprise us in a weaker sense.  Consider: I may give some credence to a view (e.g. prioritarianism) that strikes me as somewhat reasonable, even while I am near-certain that I would not myself believe the view even upon ideal reflection. If prioritarianism turned out to be the objectively correct view, this would be surprising (even to my idealized self), but it's the kind of surprise I think we should be open to.  It seems a problem for internalist views that they cannot leave room for normative reality to slightly surprise our idealized selves in such a way.

In sum, when reflecting on these issues, I think we should ideally want our metaethical theories to accommodate the following three desiderata:

  • Allow us to rule out helium-maximization (and other "baffling" views that conflict with claims we are rightly confident of).
  • Allow that wrong-headed agents can be wrong, and so suffer an "alienating" mismatch between their (procedurally idealized) attitudes and normative reality.
  • Allow that, even given our broadly reasonable starting points, our idealized selves may be surprised by some aspects of normative reality, as we may be robustly disposed towards a subtly-mistaken view (that is close to the correct view without being exactly right).

Externalist non-naturalism can accommodate all three (whereas internalist views secure only the first, and that arguably for the wrong reason).  So, far from posing a problem for the view, I think that reflection on alienation and related issues should bolster our confidence in normative externalism.
