Nefsky wants a case where the morally significant features of a situation are vague, or have fuzzy boundaries. It could then be that no individual increment makes a difference as to which "side of the boundary" the outcome falls on, and yet that many such increments collectively do make a morally significant difference.
But it seems overwhelmingly plausible to me, as a pre-theoretic datum, that whatever is of fundamental moral significance cannot admit of vagueness. After all, (i) there is plausibly no "ontic vagueness": the world itself is precise/determinate in all fundamental respects; vagueness merely enters (as a semantic matter) into our high-level descriptions -- whether we call a thus-sized collection of grains a "heap", or whether we call a person with a certain number of hairs on their head "bald", etc. It's not as though there's some objective property of baldness out there in the world that we're trying to latch on to. (ii) The things that matter are features of the world, not of our vague descriptions. So (iii) the things that matter don't admit of vagueness.
One thing that struck me, both in reading Nefsky's paper, and in some related discussion at the recent RoME conference, is how readily people attribute "vagueness" to a case which could just as (or even more) plausibly be accounted for in terms of graded harms. Consider, for example, Nefsky's discussion of overfishing (p.377):
if every angler in the village takes one more fish than she is allotted, this will result in the fish population losing its ability to replenish itself; but, for each angler, taking a single fish more is not itself enough to make a difference. Kagan suggests that this is a triggering case. But... this might be a nontriggering case. What would it take for it to be a nontriggering case? Instead of the removal of some particular fish triggering the problem, it could be that there is no precise triggering point—no sharp boundary between the population having a healthy ability to replenish and not having it.
It may be that our categorization of populations as "having a healthy ability to replenish" is vague. But it's not our categorizations that matter here. It's the actual outcomes for the fish population. And insofar as our categorization is vague, this is presumably just because the underlying scale of population-health is graded, rather than fundamentally vague. When we reach the "fuzzy boundaries" of our categorization, each fish removed (or every few fish, if some small-scale triggering is going on) makes things a bit worse for the population's ability to replenish itself. There's no vagueness in the actual propensity of the fish population to replenish itself (what would that even mean -- are some fish going to have an indeterminate number of offspring?!) -- all that's vague is the point along the scale at which things have gotten so bad that we no longer call it "healthy".
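To make the graded-harm reading explicit, here is a minimal sketch (the harm function h, the number of anglers N, and the per-fish increments are my own illustrative notation, not Nefsky's):

```latex
% Sketch of the graded-harm reading (h, N, \epsilon_k are illustrative notation).
% Let h(k) be the population's propensity to replenish after k extra fish are taken.
% Graded harm: each extra fish makes things determinately, if only slightly, worse,
% and the N increments sum to a large, determinate collective difference:
\[
  h(k) - h(k+1) \;=\; \epsilon_k \;>\; 0 \quad\text{for each } k,
  \qquad
  h(0) - h(N) \;=\; \sum_{k=0}^{N-1} \epsilon_k .
\]
% The vagueness attaches only to the predicate "healthy" -- to where along the
% h-scale we stop applying it -- not to the underlying quantity h itself.
```

Each increment can be tiny without being zero or indeterminate, which is all that the "no sharp boundary" appearance requires.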
Later on, Nefsky points out that there can be "phenomenal sorites series" for terms like "looks red" or "sounds loud". But again, these are just high-level descriptions that admit of fuzzy boundaries, not cases where the phenomenology itself is fundamentally vague. When we turn to phenomenal properties that actually matter -- e.g., how painful it feels to be in a certain state -- the scale again seems to be graded (rather than vague) at the fundamental (and fundamentally significant) level.
I was also puzzled by Nefsky's discussion of (what we might call) "external" triggers (p.391):
Imagine that you are working with this machine that registers charges only in whole kilovolts, increasing the current applied to it nanovolt by nanovolt. Eventually the current will be within the margin of error of a kilovolt. So, the machine could change from registering 0 kV to registering 1 kV at any moment. But, given that you are within the margin of error of 1 kV for that device, it would be a mistake to think that, at the moment when it actually does register 1 kV, this is due to the last minuscule increase in voltage that you made. It is due to the fact that many increases were made, such that the current is in some rough, very close vicinity of 1 kV. If the current had not been within the margin of error, the machine would not have registered it as 1 kV. But that it registered 1 kV at the precise point in your adding nanovolts that it did is most likely due to mechanical or environmental factors and not to the addition of some single nanovolt. This means, I think, that we cannot say that had you not added that last nanovolt, the machine would not have registered 1 kV.
Unless I'm missing something, this seems to confuse temporal and counterfactual criteria for triggering. I agree that cases like this show that temporal criteria are no good: the latest-in-time increment may not have been essential, if a previous setting (from time t-1, say) together with "external" fluctuations at time t jointly suffice to bring about the change in outcome. But holding fixed the precise details of the fluctuations that occur at t, there must be some minimum voltage level n such that the change will occur if the voltage level is at n nV, but would not have occurred with merely n-1 nV. So while it's true that the difference-making increment need not be "that last nanovolt" added, it's still the case that some (perhaps earlier) individual increment -- the one taking the voltage from n-1 to n nV -- was in fact counterfactually responsible (given the later environmental fluctuations) for the triggering.
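To put the counterfactual point a bit more formally (R, E, and n* are my own notation, not Nefsky's or Kagan's):

```latex
% Sketch of the counterfactual-threshold argument (R, E, n^* are illustrative notation).
% Let R(n, E) = 1 iff the machine registers 1 kV when n nanovolts have been added
% and the environmental fluctuations at time t are E. Holding E fixed, registering
% is monotone in n, so there is a least threshold:
\[
  n^* \;=\; \min \{\, n : R(n, E) = 1 \,\},
  \qquad\text{so}\qquad
  R(n^*, E) = 1 \;\text{ and }\; R(n^* - 1, E) = 0 .
\]
% The n^*-th nanovolt need not be the last one added; but, given E, it is the
% increment without which the machine would not have registered 1 kV.
```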
And the same will then be true in the "Harmless Torturers" case that Nefsky goes on to discuss (p.393). Even if it's true that some "other factor would trigger this change for the worse were that [last] increase in voltage not to occur", that just goes to show that it was some previous increment in voltage that was counterfactually responsible for triggering the increase in pain in this case.
So I remain unconvinced that (traditional, "individualistic") consequentialism has any problem accommodating these sorts of cases. What do you think?