Saturday, May 15, 2021

Why Belief is No Game

In 'The Game of Belief', Barry Maguire and Jack Woods nicely set out a broadly "pragmatist" understanding of normativity.  In this post, I'll try to explain why I think it is misguided, and what alternative understanding we should adopt instead.

The gist of M&W's view is that practical reasons (including for belief and other attitudes) are the only truly authoritative normative reasons, but there are also all kinds of (non-authoritative) practice-relative normative reasons that provide "standards of correctness" -- e.g. for playing chess "correctly" (i.e. strategically well) or even for believing "correctly" (i.e. in line with purely epistemic standards).  We will often, but not always, have practical reasons to do things "correctly"--that just depends upon circumstantial details.

My biggest complaint about this sort of view is that it completely divorces reasons from rationality.  They conceive of reasons as things that support (either by the authoritative standard of value, or some practice-relative standard of correctness) rather than as things that rationalize.  As a result, they miss an important disanalogy between practice-relative "reasons" and epistemic reasons: violating the latter, but not the former, renders one (to some degree) irrational, or liable to rational criticism.

Tuesday, April 27, 2021

'Risky Research' Redux

I'm looking forward to participating in 1DaySooner's Zoom panel discussion on 'What is the Upper Limit of Risk in Clinical Trials?' next week (May 4th, @6pm ET) -- you can register here if you're interested in attending.

My basic view is that there is no absolute upper limit: given informed consent, the risk just needs to be proportionate, i.e. outweighed by the social value of the information gained from the research.

Indeed, this strikes me as entirely straightforward.  There are two key values that public policy should be guided by: beneficence (promoting the overall good) and autonomy (respecting individuals' choices about their own lives).  Conflicts between the two values can be morally tricky.  But if both of these values point in the same direction, as they do in the case of valuable research involving willing volunteers, then it really should be a no-brainer.  There's just no good reason to engage in anti-beneficent paternalism.  So: let's please stop doing that!

I think that's the simplest case for "risky research".  In my paper with Peter Singer, we additionally proposed a principle of risk parity according to which, "if it is permissible to expose some members of society (e.g. health workers or the economically vulnerable) to a certain level of ex ante risk in order to minimize overall harm from the virus, then it is permissible to expose fully informed volunteers to a comparable level of risk in the context of promising research into the virus."  Again, it just makes no sense to block willing volunteers from taking on some level of risk if such obstruction effectively condemns a far greater number of unwilling people to even greater harms.

What principled value here could outweigh the combined force of autonomy and beneficence?  I look forward to hearing what my fellow panelists have to say...

Monday, April 19, 2021

Get Parfit's Ethics Free (till May 3)

Cambridge University Press is offering free PDF downloads of Parfit's Ethics until May 3. (After that, you can always access the pre-print from PhilPapers, which has the same essential content but differs wildly in pagination and doesn't reflect subsequent copyediting.)

At just 55 pages of main text, it's the most concise introduction you'll find to Parfit's wide-ranging ethical thought. Perfect for grad seminars, or anyone interested in highlights from the greatest moral philosopher of the past century (or, indeed, ever).

Wednesday, April 14, 2021

Follow Decision Theory!

Back in January, I wrote that there's no such thing as "following the science" -- that scientists and medical experts "aren't experts in ethical or rational decision-making. Their expertise merely concerns the descriptive facts, providing the essential inputs to rational decision-making, but not what to do with those inputs."

It's worth additionally emphasizing that this question of how to convert information into rational decisions is not something about which academic experts are entirely at sea. On the contrary, there's a well-developed academic subfield of decision theory which tells us how to balance considerations of risk and reward in a rational manner.  The key concept here is expected value, which involves multiplying (the value of) each possible outcome by its probability and summing the results.  For example, we know that (all else equal) we should not accept a 50% chance of causing 10 extra deaths for the sake of a 1% chance of averting 100 deaths, for the latter's expected value (one death averted) does not outweigh the former's expected cost (5 extra deaths).
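
For concreteness, here is a minimal sketch (in Python, purely illustrative) of the arithmetic in that example. The helper function and variable names are my own; the probabilities and death tolls are just the ones given above.

# Expected value: sum of (probability * value) over the possible outcomes.
def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

# The example from the post: a 50% chance of causing 10 extra deaths,
# weighed against a 1% chance of averting 100 deaths.
expected_deaths_caused = expected_value([(0.50, 10)])    # 5.0 extra deaths
expected_deaths_averted = expected_value([(0.01, 100)])  # 1.0 death averted

print("Expected cost:", expected_deaths_caused, "extra deaths")
print("Expected benefit:", expected_deaths_averted, "deaths averted")
print("Worth accepting?", expected_deaths_averted > expected_deaths_caused)  # False

Nothing about the normative conclusion hangs on the code, of course; it just makes explicit that the benefit side (1 expected death averted) falls well short of the cost side (5 expected extra deaths).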

Tuesday, April 13, 2021

Imagining an Alternative Pandemic Response

I received my first shot of the Moderna vaccine yesterday -- which naturally got me thinking about how this should've been accessible much, much sooner.  I don't think anyone's particularly happy about the way that our pandemic response played out, but there's probably a fair bit of variation in what people think should've been done differently.  What alternative history of COVID-19 do you wistfully yearn after?  Here's mine (imagining that these lessons were taken on board from the start)...

Tuesday, April 06, 2021

Num Nums

My paper 'Negative Utility Monsters' is now forthcoming in Utilitas.  It's a very short and simple paper (2500 words, expanding upon this old blog post), but kind of fun.  Here's the conclusion:

Nozick’s utility monster should no longer be seen as a damning objection to utilitarianism. The intuitive force of the case is undermined by considering a variant with immensely negative wellbeing. Offering significant relief to such a “Negative Utility Monster” plausibly should outweigh smaller harms or benefits to others. Our diverging intuitions about the two kinds of utility monsters may be explained conservatively as involving standard prioritarian intuitions: holding that benefits matter more the worse-off their recipient is (and matter less, the better-off their recipient is). This verdict undermines the distinctiveness of the utility monster objection, and reduces its force to whatever level one attributes to prioritarian intuitions in general. More ambitiously, the divergence between the two cases may be taken to support attempts to entirely explain away the original utility-monster intuition, e.g. as illicitly neglecting the existence of an upper bound on the monster’s wellbeing. Such an explanation, if successful, suggests that our intuition about the original utility monster scenario was based on a mistake. Either way, the force of Nozick’s objection is significantly undermined by the Negative Utility Monster.

NUM: just imagine that the cookies are people, and the monster only looks so happy because this is his first respite from torture for several centuries...

Monday, April 05, 2021

Guest Post: 'Save the Five: Meeting Taurek’s Challenge'

[My thanks to Zach Barnett for writing the following guest post...]

At its best, philosophy encourages us to challenge our deepest and most passionately held convictions. No paper does this more forcefully than John Taurek’s “Should the Numbers Count?” Taurek’s paper challenges us to justify the importance of numbers in ethics.

Six people are in trouble. We can rescue five of them or just the remaining one. What should we do? This may not seem like a difficult question. Other things equal, you might think, we should save the five. This way, fewer people will die. 

Taurek rejects this reasoning. He denies that the greater number should be given priority. In effect, Taurek challenges us to convince him that the numbers should count. Can we meet his challenge?

You might be pessimistic. Even if you yourself agree that the numbers do count, you might worry that... just as it’s hopeless to try to argue the Global Skeptic out of Global Skepticism... it’s equally hopeless to try to argue someone like Taurek, a Numbers Skeptic, out of Numbers Skepticism. But that’s what I’ll try to do.

Saturday, April 03, 2021

What's at Stake in the Objective/Subjective Wrongness Debate?

A decade ago I wrote an introductory essay on 'Objective and Subjective Oughts', and the theoretical role of each. In short: the objective ought identifies the best (most desirable) decision, or what an ideal observer would advise and hope that you choose. The subjective or rational ought identifies the wisest or most sensible decision (given your available evidence), departures from which would indicate a kind of internal failure on your part as an agent.  Both of these seem like legitimate theoretical roles.  (Beyond that, various more-subjective senses of ought -- derived from instructions that any agent could infallibly follow -- risk veering into triviality, and are best avoided.)

Now, Peter Graham's Subjective versus Objective Moral Wrongness (p.5) claims that there's a single "notion of wrongness [either objective or subjective] about which Kantians and Utilitarians disagree when they give their respective accounts of moral wrongness."  This strikes me as a strange claim, as the debate between Kantians and Utilitarians seems entirely orthogonal to Graham's debate between objectivists and subjectivists.  More promisingly, Graham continues: "And that notion of wrongness is the notion of wrongness that is of ultimate concern to the morally conscientious person when in their deliberations about what to do they ask themself, 'What would be morally wrong for me to do in this situation?'."

My worry about the latter approach is that our assertoric practices reveal the deliberative question to be ill-formed (in that "correct" answers do not correspond to any fixed normative property).  It doesn't truly ask about the objective or the subjective/rational 'ought', but instead a dubious relativistic (or expressivist) construct. As I summarize (in the linked post):

Tuesday, March 30, 2021

Is Effective Altruism "Inherently Utilitarian"?

A recent post at the Blog of the APA claims so.  Here's why I disagree...

It's worth distinguishing three features of utilitarianism (only the weakest of which is shared by Effective Altruism):

(1) No constraints.  You should do whatever it takes to maximize the good -- no matter the harms done along the way.

(2) Unlimited demands of beneficence: Putting aside any intrinsically immoral acts, between the remaining options you should do whatever would maximize the good -- no matter the cost to yourself.

(3) Efficient benevolence: Putting aside any intrinsically immoral acts, and at whatever magnitude of self-imposed burdens you are willing to countenance: you should direct whatever resources (time, effort, money) you have allocated to benevolent ends in whatever way would do the most good.

EA is only committed to feature (3), not (1) or (2).  And it's worth emphasizing how incredibly weak claim (3) is.  (Try completing the phrase "no matter..." for this one.  What exactly is the cost of avoiding inefficiency?  "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)

Saturday, March 27, 2021

Stable Actualism and Asymmetries of Regret

Jack Spencer has a cool new paper, 'The Procreative Asymmetry and the Impossibility of Elusive Permission' (forthcoming in Phil Studies).  I found reading it to be really helpful for clarifying my thoughts on the procreative asymmetry.

Back in 'Rethinking the Asymmetry' (CJP, 2017), I argued for two main claims: (i) we have reason to bring good lives into existence, whereas "strong asymmetry" intuitions to the contrary can be explained away; and (ii) the intuition that we should prioritize existing lives is better accommodated by a form of modest partiality towards the (antecedently) actual than by Roberts' Variabilism (or any other strong-asymmetry-implying view).  To avoid incorrectly permitting miserable lives to be brought into existence, I argued, actualist partiality should be supplemented with a principle proscribing the predictably regrettable.