Monday, May 24, 2021

Five Fallacies of Collective Harm

It's often thought that "collective harm" can result from a collection of contributions despite each individual increment to the number of contributions allegedly making no difference at all. I think this is incoherent, or at any rate entirely unmotivated. There seem to be five main reasons why people tend to hold this dubious view. In this post, I'll briefly explain why each is misguided.

(1) The Rounding to Zero Fallacy. As Parfit noted in his famous discussion of "moral mathematics", it's really important not to neglect tiny chances of having a huge impact. Such chances can carry high expected value, which you'll lose sight of if you mistakenly treat "tiny chance" as equivalent to "no chance at all". (Previously discussed here.)
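
To see how the arithmetic plays out, here is a minimal sketch of my own (the probability and payoff figures are hypothetical, chosen only for illustration) of why a tiny chance of a huge impact needn't have negligible expected value:

```python
# Illustrative sketch only: the probability and payoff figures below are
# hypothetical, chosen just to show how the arithmetic works.

def expected_value(probability: float, payoff: float) -> float:
    """Expected value contributed by a single chancy outcome."""
    return probability * payoff

p_decisive = 1e-7   # a "tiny" one-in-ten-million chance of making the difference
payoff = 1e9        # a huge payoff (in whatever units of value) if it comes off

print(expected_value(p_decisive, payoff))  # 100.0 -- far from negligible
print(expected_value(0.0, payoff))         # 0.0   -- what "rounding to zero" wrongly assumes
```

The point is simply that multiplying a small probability by a large enough payoff can yield a substantial product, whereas rounding the probability down to zero guarantees a product of zero.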

(2) The Chunky Fallacy involves claiming that a system "isn't sensitive to small changes" even though it is sensitive to large changes, and a sufficient number of small changes constitutes a large change. (I take this to include "threshold-moving" maneuvers; see, e.g., my response to the claim that "the system is not sensitive to a single vote, and anything close to even will be decided by the courts or the like.")

(3) The First-Increment Fallacy involves generalizing from the first increment in a sequence, even when it is not representative.  Sinnott-Armstrong's argument that emergence blocks individual impact for GHG emissions rests on this fallacy.  (As do certain anti-aggregative intuitions.)

(4) The Verbal Vagueness Fallacy.  As I argue in 'Apparent Vagueness and Graded Harms', many of Nefsky's influential examples (from a fish population's "healthy ability to replenish" to phenomenal sorites like "looks red") involve vague verbal categorizations while the underlying phenomena are in fact perfectly precise (merely graded, or gradually changing along a scale). 

(5) The Fluctuation Fallacy. I just had a referee argue that there would be no definite point along the scale of overfishing at which the resulting fish population fails to replenish if this latter outcome "depends on contingent factors and matters of luck" in addition to the raw numbers removed. But of course contingency is not the same as indeterminacy. (Compare Nefsky's conflation of temporal with modal criteria for triggering.) Some increase in the number of fish removed makes a difference to the population's ability to replenish, even if that number is unknowable and hyper-contingent on fragile and fluctuating circumstantial details.
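
A toy model may help keep contingency and indeterminacy apart. The following sketch is my own (the "replenishment" rule and all the numbers are invented): even when the outcome depends on luck, fixing how the contingent factors actually turn out leaves some exact increment that first makes the difference.

```python
import random

# Toy model, invented purely for illustration: whether the population
# replenishes depends on how many fish are removed *and* on contingent
# circumstances, summarized here as a randomly drawn "effective threshold".

def effective_threshold(rng: random.Random) -> int:
    # Fragile, fluctuating circumstantial details shift the threshold around.
    return 10_000 + rng.randint(-500, 500)

def replenishes(fish_removed: int, threshold: int) -> bool:
    return fish_removed < threshold

# Fix one way the contingent factors could actually turn out:
threshold = effective_threshold(random.Random(42))

# Even so, some exact increment makes the difference: scanning removals one
# fish at a time, there is a first number at which replenishment fails.
tipping_point = next(n for n in range(20_000) if not replenishes(n, threshold))
print(tipping_point)  # contingent and unknowable in advance, yet perfectly determinate
```

Which increment it is varies wildly across the ways the luck could go, but in no scenario is there no such increment.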


Is anyone aware of an argument for problematic inefficacy (or collective harm without individual difference-making) that avoids all five of these fallacies? If so, please do share it.

[See my paper, 'There is No Problem of Collective Harm', for much more detail.]
