Monday, June 21, 2021

The Paralysis of Deontology?

MacAskill & Mogensen's Paralysis Argument (forthcoming in Phil Imprint) argues that deontological constraints entail paralysis, once long-term indirect effects are taken into account:

According to most non-consequentialists, reasons against doing harm are weightier than reasons to benefit. Since you have no greater reason to expect that benefits as opposed to harms will predominate among the indirect effects of any action you perform, it therefore seems that you should try as best you can to avoid bringing about any significant indirect effects through your actions at all. Since virtually anything you do will inevitably result in significant numbers of indirect harms, you should therefore try to do as little as possible.

It's a cute argument!  I recommend reading the full paper, where they address (and handily dispose of) a number of possible responses on behalf of the deontologist. For example, the Arms Trader case (p. 17) shows that it won't do to exclude causal chains that involve others' voluntary choices.  And Mystery Box cases (p. 20) similarly warn against excluding convoluted causal chains. They ultimately conclude that the best option for deontologists is to embrace extreme demands: "to escape paralysis, your every motion must be at the service of posterity." (p. 34)

But I wonder if deontologists might get by with some more conservative revisions to their views. In our recent reading group discussion of the paper, David O'Brien (mentioned with permission) suggested that something like the Doctrine of Double Effect seems well-suited to resist M&M's argument.  For even if we can, in some sense, foresee that our acts will have long-term effects some of which are harms, we generally do not intend those harms, or use them as a means to whatever everyday goals we are pursuing.  So if deontic constraints are restricted to harms that feature in our intentions, or that we make use of as a means, paralysis may be avoided.

Of course, not all wrongs involve intended harms in this way.  But that's fine.  It's a familiar point that DDE is a supplemental principle, not the entirety of a moral theory.  At a minimum, DDE proponents should agree with consequentialists that even merely foreseen harms (or "collateral damage") can make an act wrong if they outweigh the expected benefits.  The trickier question is whether DDE suffices for all the distinctively deontological constraints that the non-consequentialist might have wanted.  I'd guess that most additionally want some kind of harm/benefit asymmetry, e.g. to rule out fatally driving over one (as collateral damage) on the way to rescuing five (p. 7).

At this point, I suspect that our deontological intuitions are mostly just tracking the salience of the harm.  If you see or touch the harmed person, we're apt to attribute outsized importance to the harm. Distant future harms to unknown ("statistical") victims, by contrast, seem maximally non-salient, and so avoid activating deontological intuitions.  As a result, the suggestion that there could be deontological constraints against these sorts of harms can seem intuitively absurd.  But insofar as we doubt that salience provides a sufficiently principled basis for counting some harms more than others, we may be forced to conclude that deontological constraints against salient harms are ultimately in no better position.

It's a nice challenge, at any rate, which deontologists will need to address if they want to appeal to anything stronger than the Doctrine of Double Effect.

[Update: I've been pointed to Nye's very similar 2014 paper, 'Chaos and Constraints', which also contains some nice arguments against appealing to the DDE here, e.g. by comparing lesser means-harms with greater collateral harms.]

Thursday, June 10, 2021

Conscientious Sadism

I've previously argued that sadistic pleasure (in oppressing the innocent) lacks value. But consider a complication.  Suppose this time that the sadistic majority are all conscientious utilitarians who would never willingly increase net suffering in the world.  They all appreciate that their victim's suffering is a bad thing in itself, and so would genuinely prefer to realize the same amount of pleasure without any suffering at all, if possible.  But alas, this just isn't possible in the circumstances.  We may further suppose that they would each be willing to be tortured themselves in order to generate greater net pleasure for their companions.  But alas, this isn't possible, either.  Their only options are to torture an unwilling innocent person, generating population-wide sadistic pleasure, or do nothing and have uniformly neutral experiences throughout the population.

In this revised case, many will of course still think it would be wrong to torture the innocent.  But I wonder whether this assimilates it to a standard sort of rights-violation scenario (e.g. one involving non-sadistic pleasure), or whether we should still regard the sadistic pleasure itself as entirely lacking in value.

Friday, June 04, 2021

Email subscriptions update

My old email subscriptions provider (Feedburner) is apparently shutting down next month, so I've set up a new service with MailChimp -- see the "subscribe via email" form towards the bottom of the sidebar if you'd like to be added to the list.  Hope it works.  Tomorrow I'll try to transfer over the 160-odd subscribers from my Feedburner list, but if you find that you aren't getting email updates any more, maybe try re-subscribing manually, check your spam folder, and then let me know if the problem persists...

P.S. I still personally recommend RSS using a service like Feedly, if you follow multiple blogs.  I also share most of my posts on Twitter and FB for those who prefer that, but I gather those platforms are less reliable as their algorithms will determine which content ends up getting presented to you.

Wednesday, June 02, 2021

Philosophical Pluralism and Modest Dogmatism

Philosophers are sometimes prone to excessive skepticism, especially in the face of persisting disagreement. People (including, e.g., Derek Parfit, Jason Brennan, and most recently, Liam Kofi Bright) often seem really bothered by the lack of consensus in philosophy.  But a large portion of such worries seems to stem from a failure to appreciate when actual disagreement is (distinctively) undermining:

In cases of what we might call 'non-ideal' disagreement, there's a presumption that the disagreement is rationally resolvable through the identification of some fallacy or procedural mis-step in the reasoning of either ourselves or our interlocutor. The disagreement is 'non-ideal' in the sense that we're only disagreeing because one of us made a blunder somewhere. We are sufficiently similar in our fundamental epistemic standards and methods that we can generally treat the other's output as a sign of what we (when not malfunctioning) would output. The epistemic significance of the disagreement is thus that the conflicting judgment of a previously-reliable source is some evidence that we have made a blunder by our own lights, though we may not yet have seen it.

Many philosophical disagreements do not have this crucial feature.  This is because "(i) there are many possible internally coherent worldviews, (ii) philosophical argumentation proceeds through a mixture of ironing out incoherence and making us aware of possibilities we had previously neglected." As a result, many philosophical disagreements simply reflect different substantive starting points rather than any purely procedural blunder.  And the fact that somebody exists who holds different substantive starting points than you has zero epistemic import over and above the prior observation that there are coherent alternatives to your own view (which you really should already know!).

Now, it's totally fair to worry about the epistemic significance of coherent alternatives. As I put it previously, it follows that "(iii) even the greatest [philosophical] expertise... will only help you to reach the truth if you start off in roughly the right place. Increasing the coherence of someone who is totally wrong (i.e. closer to one of the many internally coherent worldviews that is objectively incorrect) won't necessarily bring them any closer to the truth."

Good reasoning provides no guarantee of truth. There's a real possibility that we're irreparably mistaken, in which case no amount of procedurally conscientious reasoning would see us right.  Moreover, there's no neutrally-recognizable standard by which we can determine whether we are irreparably mistaken in this way. (If there were, the error wouldn't be so irreparable, after all!)  These facts may be disheartening to those who hoped for a transparent light of Reason to guide our way; but epistemic maturity requires us to recognize that such Cartesian hopes were never really reasonable or realistic in the first place.  We muddle by as best we can, and hope for the best.  If we successfully reach (or at least approximate) an internally-coherent position, it's possible that the one we reached is the one true view of the matter, but we cannot expect to be able to prove this to any who doubt us -- not even ourselves.  The most we can hope for is to be, in a sense, philosophically lucky. (But that's a fine thing to hope for!  Nothing is gained by holding out for unattainable Cartesian certainties.  It remains well worth striving for coherence, since that at least gives us a chance at being right.)

Once the above lessons are properly internalized, viewpoint diversity and philosophical dissensus come to seem entirely appropriate. We need advocates to work out all the options, after all. (Nothing is gained by having a coherent alternative view be routinely ignored or overlooked: a merely sociological uniformity of opinion is no "consensus" worth having.) And if a view is internally coherent, you shouldn't expect to be able to argue a sophisticated advocate out of that position. Nothing forces them to share your premises!

Rather than seeing this as some deep failure of "analytic philosophy" (as if different training would somehow break the logical symmetry between modus ponens and modus tollens), I'd encourage clear-eyed acceptance of this reality, combined with default trust or optimism about your own philosophical projects. After all, unlike all those fools who disagree with you, YOU'RE beginning from roughly the right starting points, right? ;-)

Monday, May 24, 2021

Five Fallacies of Collective Harm

It's often thought that "collective harm" can result from a collection of contributions despite each additional contribution allegedly making no difference at all. I think this is incoherent, or at any rate entirely unmotivated. There seem to be five main reasons why people tend to hold this dubious view. In this post, I'll briefly explain why each is misguided.

(1) The Rounding to Zero Fallacy. As Parfit noted in his famous discussion of "moral mathematics", it's really important not to neglect tiny chances of having a huge impact.  The latter could well have high expected value, which you'll lose sight of if you mistakenly treat "tiny chance" as equivalent to "no chance at all". (Previously discussed here.)
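To make the arithmetic vivid, here's a minimal illustrative sketch in Python (the probability and impact figures below are invented for the example, not drawn from Parfit):

```python
# Illustrative only: a "tiny" chance of a huge impact can still carry
# substantial expected value, so it mustn't be rounded down to zero.
p_impact = 1e-6              # hypothetical one-in-a-million chance of mattering
value_if_impact = 5_000_000  # hypothetical size of the impact, if you do matter

expected_value = p_impact * value_if_impact
print(expected_value)  # 5.0 -- far from negligible, despite the tiny probability
```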

Saturday, May 15, 2021

Why Belief is No Game

In 'The Game of Belief', Barry Maguire and Jack Woods nicely set out a broadly "pragmatist" understanding of normativity.  In this post, I'll try to explain why I think it is misguided, and what alternative understanding we should adopt instead.

The gist of M&W's view is that practical reasons (including for belief and other attitudes) are the only truly authoritative normative reasons, but there are also all kinds of (non-authoritative) practice-relative normative reasons that provide "standards of correctness" -- e.g. for playing chess "correctly" (i.e. strategically well) or even for believing "correctly" (i.e. in line with purely epistemic standards).  We will often, but not always, have practical reasons to do things "correctly" -- that just depends upon circumstantial details.

My biggest complaint about this sort of view is that it completely divorces reasons from rationality.  They conceive of reasons as things that support (either by the authoritative standard of value, or some practice-relative standard of correctness) rather than as things that rationalize.  As a result, they miss an important disanalogy between practice-relative "reasons" and epistemic reasons: violating the latter, but not the former, renders one (to some degree) irrational, or liable to rational criticism.

Tuesday, April 27, 2021

'Risky Research' Redux

I'm looking forward to participating in 1DaySooner's Zoom panel discussion on 'What is the Upper Limit of Risk in Clinical Trials?' next week (May 4th, @6pm ET) -- you can register here if you're interested in attending.

My basic view is that there is no absolute upper limit: given informed consent, the risk just needs to be proportionate, i.e. outweighed by the social value of the information gained from the research.

Indeed, this strikes me as entirely straightforward.  There are two key values that public policy should be guided by: beneficence (promoting the overall good) and autonomy (respecting individuals' choices about their own lives).  Conflicts between the two values can be morally tricky.  But if both of these values point in the same direction, as they do in the case of valuable research involving willing volunteers, then it really should be a no-brainer.  There's just no good reason to engage in anti-beneficent paternalism.  So: let's please stop doing that!

I think that's the simplest case for "risky research".  In my paper with Peter Singer, we additionally proposed a principle of risk parity according to which, "if it is permissible to expose some members of society (e.g. health workers or the economically vulnerable) to a certain level of ex ante risk in order to minimize overall harm from the virus, then it is permissible to expose fully informed volunteers to a comparable level of risk in the context of promising research into the virus."  Again, it just makes no sense to block willing volunteers from taking on some level of risk if such obstruction effectively condemns a far greater number of unwilling people to even greater harms.

What principled value here could outweigh the combined force of autonomy and beneficence?  I look forward to hearing what my fellow panelists have to say...

Monday, April 19, 2021

Get Parfit's Ethics Free (till May 3)

Cambridge University Press is offering free PDF downloads of Parfit's Ethics until May 3. (After that, you can always access the pre-print from PhilPapers, which has the same essential content but differs wildly in pagination and doesn't reflect subsequent copyediting.)

At just 55 pages of main text, it's the most concise introduction you'll find to Parfit's wide-ranging ethical thought. Perfect for grad seminars, or anyone interested in highlights from the greatest moral philosopher of the past century (or, indeed, ever).

Wednesday, April 14, 2021

Follow Decision Theory!

Back in January, I wrote that there's no such thing as "following the science" -- that scientists and medical experts "aren't experts in ethical or rational decision-making. Their expertise merely concerns the descriptive facts, providing the essential inputs to rational decision-making, but not what to do with those inputs."

It's worth additionally emphasizing that this question of how to convert information into rational decisions is not something about which academic experts are entirely at sea. On the contrary, there's a well-developed academic subfield of decision theory which tells us how to balance considerations of risk and reward in a rational manner.  The key concept here is expected value, which involves multiplying (the value of) each possible outcome by its probability, and summing the results.  For example, we know that (all else equal) we should not accept a 50% chance of causing 10 extra deaths for the sake of a 1% chance of averting 100 deaths, for the latter's expected value (one death averted) does not outweigh the former's expected cost (five extra deaths).
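To make that calculation fully explicit, here's a minimal sketch using the numbers from the example above (the expected_value helper is just for illustration):

```python
# Expected value: sum over possible outcomes of (probability * value).
def expected_value(outcomes):
    """outcomes: a list of (probability, value) pairs for one option."""
    return sum(p * v for p, v in outcomes)

# A 1% chance of averting 100 deaths...
expected_benefit = expected_value([(0.01, 100)])  # = 1.0 death averted
# ...versus a 50% chance of causing 10 extra deaths.
expected_cost = expected_value([(0.5, 10)])       # = 5.0 extra deaths

assert expected_benefit < expected_cost  # all else equal, decline the gamble
```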

Tuesday, April 13, 2021

Imagining an Alternative Pandemic Response

I received my first shot of the Moderna vaccine yesterday -- which naturally got me thinking about how this should've been accessible much, much sooner.  I don't think anyone's particularly happy about the way that our pandemic response played out, but there's probably a fair bit of variation in what people think should've been done differently.  What alternative history of COVID-19 do you wistfully yearn after?  Here's mine (imagining that these lessons were taken on board from the start)...