Friday, February 28, 2020
Who's Responsible for Offset Harms?
Here's a fun puzzle (that I owe to Caspar Hare): Polluter is trying to work out how to dispose of her toxic waste barrel economically, when she sees her neighbor about to pour his waste barrel into the river. Delighted, she interrupts her neighbor and pays him to find a more eco-friendly way to dispose of his waste. Having offset this harm, Polluter now feels free to dump her own waste into the river. The downstream farm is ruined. Who is responsible?
Tempting answer: Polluter! She dumped waste, while her neighbor (Paid-Off) didn't. Polluter clearly caused the harm, and is the only eligible agent to be held morally responsible.
I think this tempting answer is importantly mistaken.
Labels: ethics - applied, ethics - consequentialism
Posted by Richard Y Chappell at 2:33 pm, 31 comments

Thursday, February 27, 2020
A New Paradox of Deontology
[Update: see related discussion at PEA Soup, and a much improved version of the argument in my paper, 'Preference and Prevention: A New Paradox of Deontology', summarized on Good Thoughts.]
There's something odd about the view that it'd be wrong to kill one innocent even to prevent five other (comparable) killings. Given plausible bridging principles, this implies that we should prefer Five Killings over Killing One to Prevent Five. But that seems an odd preference: how can five killings be preferable to one? The deontologist (like Setiya) must think that agency is playing a crucial role here.* While we should prefer one gratuitous killing over five, there is (on this view) a special kind of killing -- killing as a means -- where the good results of the killing don't get to count. So Killing One to Prevent Five is treated as morally akin to Six Killings, rather than to One Killing.
This is odd enough, but I think it gets worse. For compare some variations of the case. First note that if the good results of the killing-as-a-means don't get to count, then it seems it shouldn't matter to our moral verdicts whether the intended good results actually eventuate or not. So consider Killing One in a Failed Attempt to Prevent Five (KOFAPF). Clearly, KOFAPF is a much worse outcome than Killing One to Prevent Five (KOPF): it has the same agential intervention, but with six killings instead of just one. So we should strongly prefer KOPF over KOFAPF. But then how can we coherently prefer Five Killings over KOPF?
Thursday, February 20, 2020
Emergence and Incremental Impact
In 'What’s Wrong with Joyguzzling?', Kingston and Sinnott-Armstrong claim that individual greenhouse gas emissions never make a difference. I find this a deeply bizarre claim, since they don't dispute that large amounts of GHG emissions together make a difference, or that large amounts of GHG can be produced by adding together many smaller amounts.
Thursday, February 06, 2020
When is Inefficacy Objectionable?
There's something I find puzzling about the dialectic on this issue. Many philosophers suggest that there is an "inefficacy problem" or objection to consequentialism. But we need to take care to correctly diagnose what is supposed to be problematic. If we truly are incapable of securing some good outcome, after all, it would hardly seem fair to fault a theory that (correctly) tells us that we needn't bother. Our practical inefficacy per se cannot sensibly be held against a theory; it may just be a sad fact of life.
Really the issue here concerns a kind of mismatch between individual and collective verdicts that appears to result from collective action problems (voting, polluting, etc.) in which we combine (apparent) individual inefficacy with (apparent) collective efficacy. But even here, care must be taken in our identification of the relevant group or 'collective'. Suppose that everyone else is determined to bring about the collectively harmful outcome, and is certain to do so no matter what I do. Then there's no point in delusional attempts to "cooperate" that are guaranteed to fall on deaf ears. Permitting laxity in the face of such inefficacy is not a "problem"; it's sensible -- the plainly correct verdict in this case. The fault here clearly lies with the bad actors, not with our moral theory.
So we further need to specify that we're concerned with collective harms that could result from those who are successfully following the target moral theory. (This clarifies why the previous scenario was not objectionable: the one follower of consequentialism did not, as a "group" of one, actually do any harm.) More generally, the key structural feature is to generate a kind of "each/we dilemma" in which each person acts rightly in bringing about a situation that they collectively abhor. The agents' shared moral theory would then be a failure by its own lights, in a tolerably clear and important sense: it would be (as Parfit showed common-sense morality to be) collectively self-defeating.
Curiously, the recent literature on the inefficacy objection largely focuses on arguments which, even if successful (in establishing inefficacy), would not establish collective self-defeat. The strongest arguments for thinking that individual consequentialists shouldn't bother Φ-ing are, I think, equally reasons for thinking that consequentialists collectively shouldn't Φ. So there is then no real mismatch between the individual and collective moral verdicts.
Consider the arguments mentioned (from Nefsky's recent survey piece) in my previous post:
Wednesday, February 05, 2020
Nefsky on Tiny Chances and Tiny Differences
In her Philosophy Compass survey article, 'Collective Harm and the Inefficacy Problem', Julia Nefsky expresses skepticism about appeals to "expected value" to address worries about the ability of a single individual to really "make a difference". In section 4.2, she notes that the relevant cases involve either "(A) an extremely small chance (as in the voting case) or (B) a chance at making only a very tiny difference." Addressing each of these in turn: