Wednesday, April 14, 2021

Follow Decision Theory!

Back in January, I wrote that there's no such thing as "following the science" -- that scientists and medical experts "aren't experts in ethical or rational decision-making. Their expertise merely concerns the descriptive facts, providing the essential inputs to rational decision-making, but not what to do with those inputs."

It's worth additionally emphasizing that this question of how to convert information into rational decisions is not something about which academic experts are entirely at sea. On the contrary, there's a well-developed academic subfield of decision theory which tells us how to balance considerations of risk and reward in a rational manner.  The key concept here is expected value, which involves multiplying (the value of) each possible outcome by its probability.  For example, we know that (all else equal) we should not accept a 50% chance of causing 10 extra deaths for the sake of a 1% chance of averting 100 deaths, for the latter's expected value (one death averted) does not outweigh the former's expected cost (5 extra deaths).
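The arithmetic here is simple enough to sketch directly. The following snippet (a hypothetical illustration, not anything from the post itself) just computes the two expected values compared above:

```python
# Expected-value comparison from the example:
# a 50% chance of causing 10 extra deaths, vs.
# a 1% chance of averting 100 deaths.

def expected_value(outcomes):
    """Sum of probability * value over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Expected cost of the gamble: 0.5 * 10 = 5 extra deaths
expected_cost = expected_value([(0.5, 10)])

# Expected benefit: 0.01 * 100 = 1 death averted
expected_benefit = expected_value([(0.01, 100)])

print(expected_cost, expected_benefit)  # 5.0 1.0
# Since the expected benefit (1) is less than the expected cost (5),
# the gamble should not be accepted, all else equal.
```

The point generalizes: whenever outcomes can be valued on a common scale, weighting each by its probability gives a principled way to compare risky options.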

Tuesday, April 13, 2021

Imagining an Alternative Pandemic Response

I received my first shot of the Moderna vaccine yesterday -- which naturally got me thinking about how this should've been accessible much, much sooner.  I don't think anyone's particularly happy about the way that our pandemic response played out, but there's probably a fair bit of variation in what people think should've been done differently.  What alternative history of COVID-19 do you wistfully yearn after?  Here's mine (imagining that these lessons were taken on board from the start)...

Tuesday, April 06, 2021

Num Nums

My paper 'Negative Utility Monsters' is now forthcoming in Utilitas.  It's a very short and simple paper (2500 words, expanding upon this old blog post), but kind of fun.  Here's the conclusion:

Nozick’s utility monster should no longer be seen as a damning objection to utilitarianism. The intuitive force of the case is undermined by considering a variant with immensely negative wellbeing. Offering significant relief to such a “Negative Utility Monster” plausibly should outweigh smaller harms or benefits to others. Our diverging intuitions about the two kinds of utility monsters may be explained conservatively as involving standard prioritarian intuitions: holding that benefits matter more the worse-off their recipient is (and matter less, the better-off their recipient is). This verdict undermines the distinctiveness of the utility monster objection, and reduces its force to whatever level one attributes to prioritarian intuitions in general. More ambitiously, the divergence between the two cases may be taken to support attempts to entirely explain away the original utility-monster intuition, e.g. as illicitly neglecting the existence of an upper bound on the monster’s wellbeing. Such an explanation, if successful, suggests that our intuition about the original utility monster scenario was based on a mistake. Either way, the force of Nozick’s objection is significantly undermined by the Negative Utility Monster.

NUM: just imagine that the cookies are people, and the monster only looks so happy because this is his first respite from torture for several centuries...

Monday, April 05, 2021

Guest Post: 'Save the Five: Meeting Taurek’s Challenge'

[My thanks to Zach Barnett for writing the following guest post...]

At its best, philosophy encourages us to challenge our deepest and most passionately held convictions. No paper does this more forcefully than John Taurek’s “Should the Numbers Count?” Taurek’s paper challenges us to justify the importance of numbers in ethics.

Six people are in trouble. We can rescue five of them or just the remaining one. What should we do? This may not seem like a difficult question. Other things equal, you might think, we should save the five. This way, fewer people will die. 

Taurek rejects this reasoning. He denies that the greater number should be given priority. In effect, Taurek challenges us to convince him that the numbers should count. Can we meet his challenge?

You might be pessimistic. Even if you yourself agree that the numbers do count, you might worry that... just as it’s hopeless to try to argue the Global Skeptic out of Global Skepticism... it’s equally hopeless to try to argue someone like Taurek, a Numbers Skeptic, out of Numbers Skepticism. But that’s what I’ll try to do.

Saturday, April 03, 2021

What's at Stake in the Objective/Subjective Wrongness Debate?

A decade ago I wrote an introductory essay on 'Objective and Subjective Oughts', and the theoretical role of each. In short: the objective ought identifies the best (most desirable) decision, or what an ideal observer would advise and hope that you choose. The subjective or rational ought identifies the wisest or most sensible decision (given your available evidence), departures from which would indicate a kind of internal failure on your part as an agent.  Both of these seem like legitimate theoretical roles.  (Beyond that, various more-subjective senses of ought -- derived from instructions that any agent could infallibly follow -- risk veering into triviality, and are best avoided.)

Now, Peter Graham's Subjective versus Objective Moral Wrongness (p.5) claims that there's a single "notion of wrongness [either objective or subjective] about which Kantians and Utilitarians disagree when they give their respective accounts of moral wrongness."  This strikes me as a strange claim, as the debate between Kantians and Utilitarians seems entirely orthogonal to Graham's debate between objectivists and subjectivists.  More promisingly, Graham continues: "And that notion of wrongness is the notion of wrongness that is of ultimate concern to the morally conscientious person when in their deliberations about what to do they ask themself, 'What would be morally wrong for me to do in this situation?'."

My worry about the latter approach is that our assertoric practices reveal the deliberative question to be ill-formed (in that "correct" answers do not correspond to any fixed normative property).  It doesn't truly ask about the objective or the subjective/rational 'ought', but instead a dubious relativistic (or expressivist) construct. As I summarize (in the linked post):

Tuesday, March 30, 2021

Is Effective Altruism "Inherently Utilitarian"?

A recent post at the Blog of the APA claims so.  Here's why I disagree...

It's worth distinguishing three features of utilitarianism (only the weakest of which is shared by Effective Altruism):

(1) No constraints.  You should do whatever it takes to maximize the good -- no matter the harms done along the way.

(2) Unlimited demands of beneficence: Putting aside any intrinsically immoral acts, between the remaining options you should do whatever would maximize the good -- no matter the cost to yourself.

(3) Efficient benevolence: Putting aside any intrinsically immoral acts, and at whatever magnitude of self-imposed burdens you are willing to countenance: you should direct whatever resources (time, effort, money) you allocate to benevolent ends in whatever way would do the most good.

EA is only committed to feature (3), not (1) or (2).  And it's worth emphasizing how incredibly weak claim (3) is.  (Try completing the phrase "no matter..." for this one.  What exactly is the cost of avoiding inefficiency?  "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)

Saturday, March 27, 2021

Stable Actualism and Asymmetries of Regret

Jack Spencer has a cool new paper, 'The Procreative Asymmetry and the Impossibility of Elusive Permission' (forthcoming in Phil Studies).  I found reading it to be really helpful for clarifying my thoughts on the procreative asymmetry.

Back in 'Rethinking the Asymmetry' (CJP, 2017), I argued for two main claims: (i) we have reason to bring good lives into existence, whereas "strong asymmetry" intuitions to the contrary can be explained away; and (ii) the intuition that we should prioritize existing lives is better accommodated by a form of modest partiality towards the (antecedently) actual than by Roberts' Variabilism (or any other strong-asymmetry-implying view).  To avoid incorrectly permitting miserable lives to be brought into existence, I argued, actualist partiality should be supplemented with a principle proscribing the predictably regrettable.

Thursday, March 25, 2021

Learning from Lucifer

Via Daily Nous, I came across this funny comic on "Effective Villainy", which in turn got me thinking about what we could learn from the apparent symmetry between good and evil.  After all, it might be clearer what evil calls for, in which case we could -- if we accept symmetry -- then draw interesting conclusions about what's right and good.

Consider side-constraints, as in this evil transplant scenario: a car crash victim is about to bleed out, but then their healthy organs would save five transplant patients -- unless Lucifer sweeps in and "heroically" stems the bleeding, saving the car crash victim (and ensuring the other five die).  What should_evil Lucifer do?  It seems clear that a side-constraint against saving lives would make no sense, from the perspective of evil: the rational pursuit of evil would lead Lucifer to do whatever ultimately proves most harmful, regardless of whether he had to get his hands "dirty" helping (ick!) various individuals along the way.  Better_evil to save one than to allow five to be saved.

Or consider aggregation. Opponents of aggregation may ground their view in either axiological intuitions (e.g. in Scanlon's transmitter room case, it just seems worse for the one person to suffer from electrocution than for the billions to miss out on seeing the football game live), or in hand-waving claims that aggregation somehow fails to respect the "separateness of persons".  Parfit showed the former to be incoherent, and I've argued that the latter is baseless, but suppose you're not yet convinced. Let's learn from Lucifer!

In the transmitter room case, it's natural to assume that Lucifer would want the guy to be electrocuted, which I think reveals our intuition that this is (seemingly! but we need not endorse this seeming upon reflection) the worse outcome -- not actually a case in which what's right diverges from what's consequentially best.  So that seems like a problem for the anti-consequentialist's use of this thought experiment.

In other cases, where it's clearer that the many's interests really do (factually) outweigh the one's, it would seem weird for Lucifer to prioritize harming the one a lot over harming the many by more in aggregate (except insofar as this was motivated on prioritarian grounds, perhaps).  It would seem especially bizarre for Lucifer to embrace a strict "numbers don't count" view, and be indifferent between drowning one or drowning five in a lifeboat case. ("Killing more people isn't worse for anyone than killing one is for the one, so in what sense is it really worse at all?  Why should I bother to kill more?" - Evil Taurek. Given how poorly Evil Taurek seems to understand evil-doing, why would you consider his light-side self any less confused?)

Overall, then, it seems clear that Satan would be a consequentialist.  So, shouldn't you follow suit? (Just, you know, aiming at the opposite ends...)

Or should we instead reject the apparent symmetry, and think that genuine morality should look rather different from the simple opposite of pure immorality?  Perhaps one could think that morality is responsive to certain considerations -- respect for persons, or whatnot -- that evil is simply indifferent to (rather than being positively opposed to)?  It'd be interesting to hear the idea spelled out in greater depth.

For those who are on board with the symmetry, what other lessons might be drawn?  (I like Michael Slote's suggestion, from his 1985 Common-Sense Morality and Consequentialism, p.77, that the implausibility of the minimalist view that only the very worst option is wrong provides us with grounds to doubt maximizing accounts of the right...)

Population ethics remains tricky.  Would Lucifer prefer to immiserate existing people, or bring into existence an entirely new population of even more miserable future people (who would otherwise not exist at all)?  Not obvious...

Thursday, March 18, 2021

Three Dogmas of Utilitarianism

I think that something very close to utilitarianism is the right moral theory, and most of the standard objections are bunk.  But here are what I take to be three genuine flaws in "orthodox" utilitarianism. (Two can be fixed from within utilitarianism.  One pushes us to accept a slightly different consequentialist view that is no longer strictly speaking utilitarian.)

Wednesday, March 17, 2021

Appeasing Anti-Vaxxers

As the ongoing pandemic obviously causes immense harms, there are correspondingly immense benefits to vaccinating people sooner. Our actual policies have failed at this in a number of ways (from failing to encourage experimental vaccination, to gratuitous delays in approving successful vaccines even after the trial data were received). Now some countries are suspending use of the AZ vaccine due to (poorly-grounded) fears about rare side-effects, seemingly oblivious to the fact that there's a much more serious (and high-probability) "side-effect" to non-vaccination, namely, COVID-19.   This all seems bad enough, on straightforwardly utilitarian grounds.  But I now want to argue that it's even worse than that: even if these delays did some good, by reassuring the vaccine-fearful, they would still be wrong.