Sunday, March 22, 2020

"Lives" are the Wrong Measure

When thinking about triage situations, it's common for people to assume that saving lives (as many of them as possible) should be our moral goal.  But this is wrong, for the straightforward reason that some deaths are vastly more tragic than others.

It's worth bearing in mind that lives can't be saved, but only extended.  So "saving lives" is not even a coherent goal.  You can aim to maximize the number of lives extended (for any period whatsoever), but we can now see that this is akin to trying to blindly maximize the number of patients treated.  By ignoring how much the patients gain from different treatments, you're clearly neglecting what actually matters -- the underlying health benefits that are the whole purpose of medical interventions in the first place.

Willfully blinding yourself to the magnitudes of different interests will lead to predictable injustice: you might foolishly prioritize two patients' papercuts over another's spreading gangrene, for example.  The raw number of patients helped is not what matters.  Moreover, this principle is as true of life-extending treatments as it is of any other.  (This is most obvious if you imagine a treatment that will extend life by mere minutes.)  I don't see how any remotely sensible person could possibly deny this.
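The contrast can be made vivid with a toy triage model (all patient names, costs, and benefit figures below are invented for illustration): one rule maximizes the raw count of patients treated, the other the total benefit delivered under the same capacity constraint.

```python
# Toy illustration (hypothetical names and numbers): two triage rules
# applied to the same patients under the same capacity limit.

def count_maximizing(patients, capacity):
    """Treat as many patients as possible: cheapest treatments first."""
    chosen, used = [], 0
    for p in sorted(patients, key=lambda p: p["cost"]):
        if used + p["cost"] <= capacity:
            chosen.append(p)
            used += p["cost"]
    return chosen

def benefit_maximizing(patients, capacity):
    """Greedy sketch: treat patients by benefit per unit of capacity."""
    chosen, used = [], 0
    for p in sorted(patients, key=lambda p: p["benefit"] / p["cost"],
                    reverse=True):
        if used + p["cost"] <= capacity:
            chosen.append(p)
            used += p["cost"]
    return chosen

patients = [
    {"name": "papercut-1", "cost": 1, "benefit": 0.01},
    {"name": "papercut-2", "cost": 1, "benefit": 0.01},
    {"name": "gangrene",   "cost": 2, "benefit": 30.0},  # decades of life
]

# With capacity for 2 units of care, counting heads treats the two
# papercuts; counting benefit treats the gangrene patient instead.
```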

Sunday, March 15, 2020

No Utility Cascades

Max Hayward has an interesting paper, 'Utility Cascades', forthcoming in Analysis.  We're told: "Utility Cascades occur when a utilitarian’s reduction of support for an intervention reduces the effectiveness of that intervention, leading the utilitarian to further reduce support [...] in a negative spiral." (p.1)  The basic puzzle Hayward sets up involves the following additional assumptions:

(1) Holding fixed their (practical) normative commitments, accurate/rational updating can (often!) tip utilitarians into a utility cascade.

(2) This "negative spiral" predictably makes things worse than if the utilitarian had stuck with their initial level of support, even given that the intervention is less effective than initially believed.

Putting these together, we obtain a surprising apparent tension between epistemic and practical normativity for utilitarians.  For, Hayward suggests, it would often promote predictably better results for utilitarians to bury their heads in the sand rather than rationally updating on new evidence of the sort that might trigger a utility cascade.

It's a fun argument.  But I don't see how (1) and (2) could both be true.  After all, if the act of reducing support for intervention X would predictably have worse results than maintaining initial levels of support, then utilitarianism straightforwardly requires the latter.

Why does Hayward believe otherwise?  He describes Bill, supposedly an act utilitarian / effective altruist agent, who "supports highly effective initiatives in proportion to their effectiveness score [expected value]." (bold added)  As a result, any drop in expected value (e.g. due to new evidence) necessarily reduces Bill's investment in an intervention.

But this is not an accurate representation of utilitarian (or effective altruist) normative commitments.  You should not allot half as much funding to a charity that's half as good as others.  You should give the suboptimal charity nothing, and instead send every dollar to the very best charity (until it is so saturated with funds that it no longer offers the best marginal return on your next dollar, which should then instead go to the next (now-best) intervention, and so on).

As a result, it's entirely possible that a slight reduction in its expected value makes no difference to how much Bill should fund X, so long as the (marginal) expected value of funding X to this degree still exceeds the expected value of shifting any of this funding to some alternative intervention Y. In such a case, assumption (1) above fails to hold. Otherwise, (2) is false: if shifting some funding to Y is really (expectably) for the best, despite reducing the effectiveness of the remaining X-funds (if any), then it's not true that it would've been (predictably) better for Bill to ignore the evidence and keep funding X at initial levels.  In neither case does epistemic rationality for the utilitarian agent make things (predictably) go worse.  Contra Hayward, in ordinary cases (i.e., barring an evil demon who punishes epistemic rationality, or the like), there are not "utilitarian reasons to adopt ostrich behaviour".
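The allocation rule described above -- fund whichever intervention offers the best marginal return on the next dollar, rather than splitting funds in proportion to average effectiveness -- can be sketched as a greedy procedure.  The diminishing-returns curve and all numbers here are invented purely for illustration:

```python
# A minimal sketch (hypothetical numbers) of allocating to the best
# *marginal* option, dollar by dollar, rather than in proportion to
# average effectiveness.

def marginal_value(base_value, funded):
    """Toy diminishing-returns curve: each successive dollar is worth
    less than the last (purely illustrative)."""
    return base_value / (1 + funded)

def allocate(budget, base_values):
    """Greedy allocation: each dollar goes to the currently-best option."""
    funding = {name: 0 for name in base_values}
    for _ in range(budget):
        best = max(funding,
                   key=lambda n: marginal_value(base_values[n], funding[n]))
        funding[best] += 1
    return funding

# Charity X starts out twice as good as Y at the margin.  The greedy rule
# does NOT split funds 2:1; it funds X until X's marginal value falls to
# Y's level, and only then starts funding Y.
result = allocate(10, {"X": 2.0, "Y": 1.0})
print(result)
```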

Friday, February 28, 2020

Who's Responsible for Offset Harms?

Here's a fun puzzle (that I owe to Caspar Hare): Polluter is trying to work out how to dispose of her toxic waste barrel economically, when she sees her neighbor about to pour his waste barrel into the river.  Delighted, she interrupts her neighbor and pays him to find a more eco-friendly way to dispose of his waste.  Having offset this harm, Polluter now feels free to dump her own waste into the river.  The downstream farm is ruined.  Who is responsible?

Tempting answer: Polluter! She dumped waste, while her neighbor (Paid-Off) didn't.  Polluter clearly caused the harm, and is the only eligible agent to be held morally responsible.

I think this tempting answer is importantly mistaken.

Thursday, February 27, 2020

A New Paradox of Deontology

[Update: I wrote a better version of this argument over at PEA Soup.]

There's something odd about the view that it'd be wrong to kill one innocent even to prevent five other (comparable) killings.  Given plausible bridging principles, this implies that we should prefer Five Killings over Killing One to Prevent Five.  But that seems an odd preference: how can five killings be preferable to one?  The deontologist (like Setiya) must think that agency is playing a crucial role here.*  While we should prefer one gratuitous killing over five, there is (on this view) a special kind of killing -- killing as a means -- where the good results of the killing don't get to count. So Killing One to Prevent Five is treated as morally akin to Six Killings, rather than to One Killing.

This is odd enough, but I think it gets worse.  For compare some variations of the case.  First note that if the good results of the killing-as-a-means don't get to count, then it seems it shouldn't matter to our moral verdicts whether the intended good results actually eventuate or not.  So consider Killing One in a Failed Attempt to Prevent Five (KOFAPF).  Clearly, KOFAPF is a much worse outcome than Killing One to Prevent Five (KOPF): it has the same agential intervention, but with six killings instead of just one.  So we should strongly prefer KOPF over KOFAPF.  But then how can we coherently prefer Five Killings over KOPF?

Thursday, February 20, 2020

Emergence and Incremental Impact

In 'What’s Wrong with Joyguzzling?', Kingston and Sinnott-Armstrong claim that individual greenhouse gas emissions never make a difference.  I find this to be a deeply bizarre claim, since they don't dispute that large amounts of GHG emissions together make a difference, and that large amounts of GHG can be produced by adding together many smaller amounts.
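The arithmetic behind this can be made explicit in a toy model (all numbers invented): if N small emissions jointly cause total harm H, then the average difference made per emission is H/N -- small, but strictly positive, since N zeros cannot sum to H.

```python
# Toy model (invented numbers): N individual emissions jointly cause
# total harm H.  If each individual emission truly made *no* difference,
# the sum of N zeros would be zero -- but the total harm is not zero.

N = 1_000_000   # number of individual emitters (hypothetical)
H = 50_000.0    # total harm units caused jointly (hypothetical)

marginal_harm = H / N  # average difference made per individual emission
print(marginal_harm)   # small, but strictly positive
```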

Thursday, February 06, 2020

When is Inefficacy Objectionable?

There's something I find puzzling about the dialectic on this issue.  Many philosophers suggest that there is an "inefficacy problem" or objection to consequentialism. But we need to take care to correctly diagnose what is supposed to be problematic.  If we truly are incapable of securing some good outcome, after all, it would hardly seem fair to fault a theory that (correctly) tells us that we needn't bother.  Our practical inefficacy per se cannot sensibly be held against a theory; it may just be a sad fact of life.

Really the issue here concerns a kind of mismatch between individual and collective verdicts that appears to result from collective action problems (voting, polluting, etc.) in which we combine (apparent) individual inefficacy with (apparent) collective efficacy.  But even here, care must be taken in our identification of the relevant group or 'collective'.  Suppose that everyone else is determined to bring about the collectively harmful outcome, and is certain to do so no matter what I do.  Then there's no point in delusional attempts to "cooperate" that are guaranteed to fall on deaf ears.  Permitting laxity in the face of such inefficacy is not a "problem", it's sensible -- the plainly correct verdict in this case.  The fault here clearly lies with the bad actors, not with our moral theory.

So we further need to specify that we're concerned with collective harms that could result from those who are successfully following the target moral theory.  (This clarifies why the previous scenario was not objectionable: the one follower of consequentialism did not, as a "group" of one, actually do any harm.)  More generally, the key structural feature is to generate a kind of "each/we dilemma" in which each person acts rightly in bringing about a situation that they collectively abhor.  The agents' shared moral theory would then be a failure by its own lights, in a tolerably clear and important sense: it would be (as Parfit showed common-sense morality to be) collectively self-defeating.

Curiously, the recent literature on the inefficacy objection largely focuses on arguments which, even if successful (in establishing inefficacy), would not establish collective self-defeat.  The strongest arguments for thinking that individual consequentialists shouldn't bother Φ-ing are, I think, equally reasons for thinking that consequentialists collectively shouldn't Φ.  So there is then no real mismatch between the individual and collective moral verdicts.

Consider the arguments mentioned (from Nefsky's recent survey piece) in my previous post:

Wednesday, February 05, 2020

Nefsky on Tiny Chances and Tiny Differences

In her Philosophy Compass survey article, 'Collective Harm and the Inefficacy Problem', Julia Nefsky expresses skepticism about appeals to "expected value" to address worries about the ability of a single individual to really "make a difference".  In section 4.2, she notes that the relevant cases involve either "(A) an extremely small chance (as in the voting case) or (B) a chance at making only a very tiny difference."  Addressing each of these in turn:
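As a toy illustration of the structure at issue (all numbers invented), the expected-value approach handles both case-types with the same multiplication, and can even treat them as exactly on a par:

```python
# Toy numbers (invented) for the two case-types Nefsky distinguishes.

def expected_value(prob, payoff):
    """Expected value = probability of making a difference x its size."""
    return prob * payoff

# (A) Voting-style case: a tiny chance of a huge difference.
ev_tiny_chance = expected_value(1e-7, 1e9)

# For comparison: certainly producing a modest difference of 100 units.
ev_sure_thing = expected_value(1.0, 100)

# Both come out at (approximately) 100 units of expected value, so the
# approach treats them as on a par -- just the kind of verdict that
# skeptics about "making a difference" find suspicious.
```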

Monday, January 20, 2020

Yang/Warren: that's the ticket!

There's a lot to be said for Yang as a presidential candidate: he's funny and likeable (able to use humour to deflate Trump's chest-thumping appeal to voters' lizard-brains), an "outsider" candidate (which in recent history appears to be a necessary feature for Democratic candidates to actually win the presidential election), and betting markets currently give him the highest conditional probability (amongst those with a greater than 1% chance of nomination) of beating Trump if nominated (77%).  Yang's automation-focused economic narrative seems broadly plausible, and may have a real chance of combating the immigrant-demonizing narrative peddled by Trump & co, and winning over swing voters in key states.  As a NY Times editorial board member wrote of their interview: “He really seemed to have an almost emotional sense of what people have been going through and what the problems are. His portrait of the fundamental economic problems were more moving than Bernie’s, and Bernie has been selling this for 30 years.”

Friday, January 10, 2020

Parfit's Cat

My favourite story from Simon Beard's Parfit Bio:
"Like my cat, I often simply do what I want to do." This was the opening sentence of Derek Parfit's philosophical masterpiece, Reasons and Persons. He believed that it was the best way to begin his book because it showed something important about people. Often we are not as special as we think we are. For instance, when people simply do what they want to do they appear to be utilizing no ability that only people have. On the other hand, when we respond to reasons, we are doing something uniquely human, because only people can act in this way. Cats are notorious for doing what they want to do, and the sense of proximity between a cat and its owner pleasingly heightens our sense of their similarity. Hence, there could be no better way for this book to begin. 
However, there was a problem. Derek did not, in fact, own a cat. Nor did he wish to become a cat owner, as he would rather spend his time taking photographs and doing philosophy. On the other hand, the sentence would clearly be better if it was true. To resolve this problem Derek drew up a legal agreement with his sister, who did own a cat, to the effect that he would take legal possession of the cat while she would continue living with it.

Tuesday, December 31, 2019

2019 (and '18) in review

(Past annual reviews: 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.)

I didn't get around to this last year due to being in the midst of an international move (not an easy process with a toddler, to put it mildly, but worth it nonetheless).  So I guess this post can serve to summarize my past two years of blogging...

Epistemology, Metaethics, and Mind

* Philosophical Expertise, Deference, and Intransigence argues that we should only be moved by peer disagreement (and related phenomena) when we take the other person's views to be evidence of what we ourselves would conclude upon ideal reflection.  So there's no epistemic pressure to defer to even your acknowledged "philosophical superiors" if their starting points are too different from your own.

* Why a related (intransigence-based) argument against cognitivism goes wrong.

* On Parfit on Knowing What Matters - responding to Parfit's response to my paper. (Includes a summary of my original paper's central claims.)

* Ambiguously Normative Testimony - why Bedke's objection to non-naturalism over-generalizes.

* Normativity for Value Realists -- if you don't believe in (genuine) "Nazi value" you shouldn't believe in (genuine) "Nazi reasons" either.  (May have converted Norcross to normative realism!)

* Sub-experiences and Minimal Duration -- what's the best way to make sense of experiences that have a minimum duration?

Applied Ethics

* How to Make a Difference -- exposing the fallacy in claims that "individual action" is necessarily inefficacious in the face of global problems.

* Three kinds of offsetting -- exploring what kinds of harms can (or cannot) be morally "offset", and why.

* Worthless Harm-Prevention and Non-Existence -- how some great harms might nonetheless not be worthwhile to prevent.

* Is Price-Gouging Good? - maybe!

* Police Shootings: Mortal Threats vs Tragic Mistakes - how to tell if there are too many of the latter.

* The Value of Academic Research -- there's more to it than Michael Huemer realizes.

Political Theory

* Charity Vouchers: Decentralizing Public Spending, with follow-up posts:
  * Philanthropy Vouchers and Public Debate: Political vs Civic Advocacy
  * What Compassionate Conservatism Could Be

* Ideological Ascent and Asymmetry - can you always diagnose whether someone's political behaviour is "unreasonable" in a completely value-neutral way (abstracting away from the details of what's under dispute)?

Ethical Theory

* Constitutive Instrumentality: a response to Lazar - how to make sense of fungible values.

* Is the 'separateness of persons' better understood as constraining our actions or our attitudes?  I argue for the latter.

* Negative Utility Monsters - a twist on the original case may serve to undermine its intuitive force.

* Does Welfare have Diminishing Marginal Value? - an alternative (utilitarian-compatible) way to capture prioritarian intuitions: assign DMV not to welfare, but to the basic goods (e.g. happiness) that contribute to one's welfare.

* Consequentialism, Moral Worth, and the Fitting/Fortunate Distinction - why consequentialists should not conflate "right reasons" with "consequentialist-recommended motivations".

* Good Motives, Act-Features, and What Matters - how to understand talk of "right-making features".

* When Killing is Worse than Letting Die -- when the victim is more salient, (all else equal) the harmful act reveals a worse quality of will.  In other cases, there may be no moral difference between killing vs letting die. (Related: Options without Constraints at PEA Soup.)

* Actualism, Evaluation and Prerogatives - addresses an objection from Pete Graham.

* The Aim(s) of Practical Deliberation - is there a fact of the matter about "how high" we morally ought to aim?  I defend a pluralist answer.

* Stacking Time-Relative Interests and Acquired Tastes and Necessary Interests -- exploring McMahan's account, and developing it in response to objections.

Off the blog...

Four of my previously-accepted papers appeared in print this year:
* 'Willpower Satisficing' (Noûs)
* 'Why Care About Non-Natural Reasons?' (APQ)
* 'Fittingness Objections to Consequentialism' (OUP)
* 'Overriding Virtue' (OUP)

I also wrote a new paper, 'Deontic Pluralism and the Right Amount of Good' (summarized here), that I'm pretty excited about.  (It aims to put to rest the debate between maximizers, satisficers, and scalar consequentialists, by showing how the views are best understood as not actually being in conflict with one another.)

Happy New Year!