Saturday, January 22, 2022

Utilitarianism and Reflective Equilibrium

In 'Why I Am Not a Utilitarian', Michael Huemer objects that "there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall."  But I think it's actually much easier to bring utilitarianism (or something close to it) into reflective equilibrium with common sense intuitions than it would be for any competing deontological view.  That's because I think the clash between utilitarianism and intuition is shallow, whereas the intuitive problems with non-consequentialism are deep and irresolvable.

To fully make this case would probably require a book or three.  But let's see how far I can get sketching the rough case in a mere blog post.

Firstly, and most importantly, the standard counterexamples to utilitarianism only work if you think our intuitive responses exclusively concern 'wrongness' and not closely related moral properties like viciousness or moral recklessness:

They generally start by describing a harmful act, done for the purpose of some greater immediate benefit, but one that we would normally expect to have further bad effects in the long term (esp. the erosion of trust in vital social institutions). The case then stipulates that the immediate goal is indeed achieved, with none of the long-run consequences that we would expect. In other words, this typically disastrous act type happened, in this particular instance, to work out for the best. So, the argument goes, Consequentialism must endorse it; but doesn't that typically-disastrous act type just seem clearly wrong? (The organ-harvesting case is perhaps the paradigm in this style.)

To that objection, the appropriate response seems to me to be something like this: (1) You've described a morally reckless agent, who was almost certainly not warranted in thinking that their particular performance of a typically-disastrous act would avoid being disastrous. Consequentialists can certainly criticize that. (2) If we imagine that somehow the voice of God reassured the agent that no-one would ever find out, so no long-run harm would be done, then that changes matters. There's a big difference between your typical case of "harvesting organs from the innocent" and the particular case of "harvesting organs from the innocent when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences." The salience of the harm done to the first innocent still makes it a bitter pill to swallow. But when one carefully reflects on the whole situation, vividly imagining the lives of the five innocents who would otherwise die, and cautioning oneself against any unjustifiable status-quo bias, then I ultimately find I have no trouble at all endorsing this particular action, in this very unusual situation.

Utilitarianism clearly endorses our being strongly reluctant to murder innocent people (and respecting commonsense moral norms more generally).  While it's possible to imagine hypothetical cases in which an agent ought (by utilitarian lights) to override this general disposition, it's an open question what lesson we should draw from our intuitive resistance to such overriding.  If someone insists that they not only endorse the utilitarian-compatible claims in this vicinity, but additionally judge that the act itself "clearly" ought not to be done (even in the "100% reliable" version of the case), then I'll grant that they find utilitarianism counterintuitive in this respect.  But then the question still remains whether they might find further implications of deontology to be even more counterintuitive.

* Deontology prioritizes those who are privileged by default; but this violates the strong theoretical intuition that status quo privilege is morally arbitrary. (Why should the five have to die rather than the one, just because organ failure happened to occur in their bodies rather than his?)

* It rests on a distinction between doing and allowing that doesn't seem capable of carrying the weight that deontologists place upon it. 

* It implies that we should often hope/prefer that others act wrongly: since, after all, impartial observers should want and hope for the best outcome.

* Worse, according to my new paradox of deontology, deontic constraints are self-undermining in the strong sense of being incompatible with taking their violations (e.g. the killing of an innocent person) to be particularly important.

* Most importantly, deontology makes incredible claims about what fundamentally matters.  It seems completely wild to claim that keeping a deathbed promise (to borrow one of Huemer's examples) is seriously more important, in principle, than the entire lives of many innocent people.  So either deontologists are stuck making completely wild claims of this sort, or their normative prescriptions (concerning what we allegedly ought to do) bear no relation to what really matters.

Now, I think our deepest intuitions about what really matters are much more methodologically significant, and should play a greater role in determining our ethical theory, than superficial verdicts about the extension of the word 'wrong' in various highly-specified cases.  So that's why I think (something close to) utilitarianism is actually the most intuitive moral theory.


  1. Amateur philosopher here. The comments below are a bit disorganized, sorry, but I contend there's some good stuff in there.

    Virtue ethics already captures intuitions excellently in many contexts, but it requires a lot of judgment to be exercised and is sort of silent in many important contexts as we get more tech at our disposal. I think part of the issue here is that people differ in how much weight they put on sparsity vs. fit (to make the obvious statistics analogies). They say "the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." There are tradeoffs here. I think most philosophers feel that no set of basic elements has yet been found (and perhaps never will be) that captures intuition well enough to trust it consistently over strong intuition. In some ways, such as measurement of utility

    The question becomes how one might try to work towards building a better set of basic elements to approximately capture most intuitions. The better a theory we can find, the more willing we'd be to trust it when it goes against stronger and stronger intuitions?

    What would a good theory that approximated deontological ethics look like? It would need to use methods from economic theory and philosophical logic (including fuzzy logic) to describe 1) tradeoffs between different values and 2) different kinds of "relativity" that are inherent in deontology. It would need to talk about how much weight you give to the various deontological ideas in different situations (see two sentences below for a list of these). In principle some of these weights, parameters, and marginal rates of substitution could be elicited by asking people about cases and then calibrating models, but this will be extremely hard and will probably not be feasible in a serious way without at least another 1000 years of progress in various disciplines. The aforementioned deontological ideas include: desert, reciprocity, local egalitarianism, preference for humans, status quo preference, respect of institutions, history-dependence (e.g. redressing past injustice), prioritization of close ones, honesty. Deontological ideas make use of other concepts, like causation, which are themselves tricky.
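    To picture the "elicit weights from cases and calibrate a model" idea very crudely, here is a toy sketch (all case encodings, factor names, and numbers are hypothetical, invented purely for illustration): encode each case by how strongly it engages a few deontological factors, elicit an overall judgment per case, and fit factor weights by least squares.

    ```python
    import numpy as np

    # Hypothetical toy data: each row encodes how strongly a case engages
    # four deontological factors (desert, reciprocity, honesty, status quo).
    cases = np.array([
        [1.0, 0.0, 0.5, 0.2],
        [0.0, 1.0, 0.0, 0.8],
        [0.3, 0.4, 1.0, 0.0],
        [0.7, 0.2, 0.1, 1.0],
        [0.5, 0.5, 0.5, 0.5],
    ])

    # Elicited overall judgments per case (-1 = clearly wrong,
    # +1 = clearly permissible); made up for illustration.
    judgments = np.array([0.6, -0.2, 0.9, -0.5, 0.3])

    # "Calibrate" the factor weights by ordinary least squares.
    weights, _, _, _ = np.linalg.lstsq(cases, judgments, rcond=None)

    # Predict the judgment for a new, unseen case from the fitted weights.
    new_case = np.array([0.2, 0.8, 0.3, 0.1])
    prediction = new_case @ weights
    print(weights, prediction)
    ```

    A linear model is, of course, exactly what the commenter suggests would be inadequate: real deontological weight plausibly shifts with context, which is why they invoke fuzzy logic and situation-dependent marginal rates of substitution rather than fixed coefficients.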

    I don't think reflective equilibrium, coherentism, or foundationalism are very helpful terms. It seems like Bayesian epistemology would be the natural thing to use to try to make reflective equilibrium more precise, though I know there are lots of difficulties there. There's a difference between, on the one hand, trying out a theory, seeing what it implies, comparing that with intuition, and comparing this with the performance of alternative theories, and, on the other hand, "treating the theory as unrevisable", which is what Rawlsians usually accuse utilitarians of.

    Some relevant references:
    - Alan Thomas: Should Generalism be our Regulative Ideal?
    - Horgan and Timmons: What Does the Frame Problem Tell us About Moral Normativity?
    - Susan Schneider: The Language of Thought: A New Philosophical Direction
    - Eyal Zamir: Law, Economics, and Morality

  2. To finish the typoed missing end of para 1: In some ways, such as the measurement or identification of pleasure, even hedonism must rely on intuition, and is also incomplete: what about Einsteinian relativity in physics? A fully explicit theory capturing the deontological morality that humans are born with could not be written down in a way humans could understand, since it would be too complicated.

  3. This will be one of those weird comments about utilitarianism in which I ask you to assume the truth of a whole bunch of probably-false things. The first of these is to suppose that the Omicron variant of COVID-19 will act as a relatively benign, self-administering COVID vaccine that quickly sweeps away the far more dangerous variants from the entire world, essentially ending the pandemic by replacing the lung-damaging COVID-19 variants with a new coronavirus-caused cold (possibly true; too soon to tell, but suppose). Second, suppose that the utilitarian scientists who created Omicron in their secret island compound and then released it in Africa knew enough about virology to be justifiably confident that this would be the result of the genetic changes they made to the Wuhan strain, and they were justifiably certain that it wouldn't kill more people than the non-engineered alternative and its likely descendants. Of course, scientists did not engineer Omicron. Omicron got all its mutations from the Wuhan strain infecting mice who passed the virus among themselves before reinfecting humans. We just got incredibly lucky that the mice accidentally made the almost-perfect pandemic solution for us humans. But if human scientists had created it deliberately under the suppositions made, I claim that this would make them the greatest moral heroes of our generation.

    Here is my evidence that my very strong intuition about the heroic nature of that deed would be seen differently by the general public: Nobody was even remotely exploring this kind of deliberate pandemic-ending strategy, presumably because it's so ethically indefensible that it's not even worth contemplating. Sure, there was some talk about engineering contagious coronavirus vaccines *for bats and other wildlife*, but never humans. I presumed that this was due to the inherent unpredictability of viruses in the wild, that we couldn't be sure the engineered virus wouldn't do something weird and become a killer. But then I looked into the actual immunology involved and found that while this is strictly true, we can make very accurate predictions about which systems a virus will affect in a human. What's more, we also can't be sure that the natural Delta strain won't do something weird and become an even bigger killer. It's not hard to suppose that Delta is far more likely to mutate into a catastrophic variant than a carefully engineered Omicron is. Still, I feel like ordinary people who suppose all these things would say it's *obviously* wrong for scientists to infect humans with Omicron, and that it's morally preferable to leave Delta circulating and try beating it back with masks and distancing and vaccines.

    Obviously, my intuitions go the other way. If we know enough about viruses to create something like Omicron (which isn't safe: it still kills people with co-morbidities; it's only far safer than the natural alternative), I consider it a moral crime that we didn't do it in 2020. In order to abide by ethical rules, we caused the needless deaths of millions and years of needless suffering for the survivors. A sentence like that seems genuinely paradoxical to me, but apparently, everyone else finds it simply right. I've never felt like more of a moral freak than I do when thinking about this, especially when arriving at conclusions like "being good requires becoming a Bond villain." I'd appreciate your thoughts.

    1. One obvious downside to being a Bond villain is that the rest of society will judge you to be a (highly unpredictable) threat, and react accordingly. That alone is sufficient reason for utilitarians to instead focus on more co-operative (less rights-violating) sorts of endeavours, of which there are plenty. Moral uncertainty is another reason (not to mention the obvious empirical uncertainty).

      That said, I've argued since the start of the pandemic that conventional morality is deadly in a pandemic and we should be more open to exploring the possible benefits of (consensual) deliberate infection to allow for early targeted immunity (before vaccines were available). So while I wouldn't go all the way to your "Bond villain" position, I do agree that there's plenty to criticize in folk morality here.

  4. Not sure if this is objectionable self-promotion, but I wrote a ten-part series responding to Huemer's argument here.
    I addressed each of the specific thought experiments.

  5. "* Most importantly, deontology makes incredible claims about what fundamentally matters. It seems completely wild to claim that keeping a deathbed promise (to borrow one of Huemer's examples) is seriously more important, in principle, than the entire lives of many innocent people. So either deontologists are stuck making completely wild claims of this sort, or their normative prescriptions (concerning what we allegedly ought to do) bear no relation to what really matters."

    Is this just an "incredulous stare"? Why couldn't the deontologist simply say right back that your claims about what fundamentally matters are "incredible" and "wild"? Or do you intend the links to do the argumentative work and this remark is just a summary of your conclusions?

    1. BTW, I don't mean "Is this just an 'incredulous stare'" to come off as accusatory here. Some people think that's a fine philosophical response - I'm simply wondering if that is indeed the response you're giving here.

    2. Deontologists typically make claims about what's right or wrong, not what's important. My suggestion is that their claims sound much less plausible when re-stated in terms of what's important. Of course, it's always possible for someone to reject an argument like this by "biting the bullet" and just accepting the verdicts that I think sound crazy. (In just the same way that a utilitarian could dismiss any putative counterexamples by biting the bullet and saying nothing more than that they aren't bothered by those implications.)

      Argument can get no grip on someone who isn't the slightest bit bothered by the implications of their view that seem bothersome to others. But in practice, there tends to be pretty strong overlap between what people find prima facie intuitive or bothersome, which is why philosophers don't usually just "bite the bullet" without saying at least something more to try to weaken the apparent force of the objection (as I did with the putative counterexamples to utilitarianism, for example).

    3. (I mean, overlap between different people regarding what they find intuitive, etc...)

    4. Ah, I see. I thought that by, "what fundamentally matters" you meant to refer to some common ground between deontologists and utilitarians, not employ a different way of stating what's at issue from what deontologists typically use. My mistake.

