Friday, February 28, 2020

Who's Responsible for Offset Harms?

Here's a fun puzzle (that I owe to Caspar Hare): Polluter is trying to work out how to dispose of her toxic waste barrel economically, when she sees her neighbor about to pour his waste barrel into the river.  Delighted, she interrupts her neighbor and pays him to find a more eco-friendly way to dispose of his waste.  Having offset this harm, Polluter now feels free to dump her own waste into the river.  The downstream farm is ruined.  Who is responsible?

Tempting answer: Polluter! She dumped waste, while her neighbor (Paid-Off) didn't.  Polluter clearly caused the harm, and is the only eligible agent to be held morally responsible.

I think this tempting answer is importantly mistaken.

Correct answer: Paid-Off is responsible.  Why?  Consider the counterfactuals: if Polluter hadn't been there, Paid-Off would have dumped his waste, yielding the same result.  If Paid-Off hadn't been there, by contrast, Polluter would have found an alternative waste-disposal solution (or so we are supposing).  So it's ultimately only because of Paid-Off that any harm was done here.

Objection: Paid-Off may ask, "How can you blame me?  I didn't even dump anything in the end!"

Response: "You were willing to increase the harm done.  And you were unwilling to do your share to avoid harm.  Collectively, there were two barrels to deal with.  To avoid harm, you and Polluter ought each to have acted to reduce the collective dumping level by one barrel.  Polluter did their part, taking on the burden of appropriately disposing of one barrel, by paying you off.  You, by contrast, took on no such burden, and so failed to play your part.  The one barrel that collectively was dumped is thus on your conscience, regardless of who did the physical dumping."

Again, it's precisely because of Paid-Off's failure to do his part that the river got polluted.  Had Paid-Off been antecedently willing to take care of his own waste (i.e., without having to be paid off), Polluter could no longer have "offset" her barrel by paying him off.


(a) What if Paid-Off wasn't aware that Polluter had a barrel and planned to dump it if she was able to offset this? 

I think this makes no essential difference.  Paid-Off remains responsible for one barrel, and fails to discharge this responsibility by requiring a pay-off.  He could have been lucky: had there been no other barrels in play, Polluter would have taken care of all that was needed.  But Paid-Off took no precautions to verify that this was the case, and it wasn't.  More was needed (though he didn't know it); it was on him to do more if needed, and he didn't.

(b) What if Polluter was merely an opportunistic offsetter, who would have dumped her barrel anyway, and merely took this offsetting opportunity in an attempt to salve her conscience?

Now our two lines of argument diverge: Polluter did *as it happens* take responsibility for reducing the dumping level by one barrel, in a way that Paid-Off did not.  But she remains counterfactually responsible, in just the same way as Paid-Off: the presence of either one of them (given their actual dispositional profiles) sufficed to ensure that dumping would occur, though no more for having both of them than for having either one alone.

If we give some weight to each line of argument, we get the result that both are then responsible for the harm, though Paid-Off more so.

(c) The same as in (b) above, except that there is a back-up philanthropist who will pay off Paid-Off if Polluter doesn't.

This leads to a stronger divergence between the two lines of argument.  For now Polluter is uniquely counterfactually responsible: without her, no polluting would occur (thanks to the philanthropist stepping in).  But it remains the case that she actually took responsibility for reducing the collective dumping by one, in a way that Paid-Off did not (but should have).

Which consideration is more important?  I'm not sure what to think about this case!


  1. Non-philosopher's comment here.

    Hello Prof. Chappell. I presume that lack of responsibility excludes blameworthiness. Does this mean that the farmers can't justifiably confront Polluter? That would sound counterintuitive. I can't imagine a way that Polluter could make her case to the farmers without arousing their warranted indignation. Pre-theoretically, it feels to me that the farmers have a justified complaint of the form: "No matter what Paid-Off was about to do, he stopped. The agent of harm was you, and you could have chosen to refrain from dumping your barrel and let Paid-Off be solely responsible. As things stand, both of you are responsible."
    Somehow, the farmers' indignation at Polluter for the destruction of the farm does not seem irrational to me, even after considering the counterfactuals. But I was wondering how you imagine the Polluter should respond to the farmers, if they confronted her with (a variant of) my claim about the Polluter's agency being involved in the destruction of the farm. It seems to me that whatever answer she went for would be morally corrupting, in the sense that she would be defending her defective quality of will -- defective in that she would be perceiving as morally irrelevant the fact that she intentionally became the agent of unjust harm.

    P.S. I also have an extra worry which I cannot clearly form in my head yet. Anyway, I hope I haven't diverted the discussion too much from your theoretical concerns in this post!

    1. Interesting! I think Polluter can truthfully reply, "You are no worse off because of me. Although it's true that I dumped the barrel that harmed you, you would have been equally harmed had I not (as I would then have permissibly refrained from paying off my neighbor), so it is a mistake to get fixated on these physical details. What matters is that I -- unlike Paid-Off -- was determined to ensure that my presence in the situation made things no worse for you. That was my moral responsibility, and I fulfilled it. There are others in the situation who -- deplorably! -- failed in their corresponding responsibility, and that is why you have been harmed. Again, I urge you not to be distracted by assessments of the proximate cause of how this harm happened to eventuate. What matters morally is the robust-process explanation, not the actual-sequence explanation, of the harm."

      I don't expect this would convince a common-sense deontologist, but I think it's correct nonetheless.

  2. PART 1
    Prof. Chappell, thank you for the very stimulating response! I have to think about it along with reading the link you provided (but first a good night's sleep)! Just one relevant question that occurred to me: does Polluter's behaviour strike you as callous (as it seems to me) but you think this is not grounds for blameworthiness in this instance, or do you find the behaviour/agent not callous at all? If the latter, I wonder if your intuitions (like mine) change as the stakes in the thought experiment become higher (the more deaths I imagine the dumping causing, the more callous the behaviour seems to me, and my perceptions of how warranted the farmers' indignation is change accordingly -- namely, the more the deaths, the more intense the farmer indignation I see as appropriate).
    Now, the part of the Polluter's hypothetical reply to the farmers that refers to the 'fixation' on the physical details of the Polluter's behaviour found an agreeable response in me, as I think that our ruminations on the true injustices that we might have suffered are indeed deleterious for us humans -- they turn us vindictive and hateful, and these are emotions that directly contribute to a deterioration of our well-being most of the time (the obvious caveat here being that such ruminations might be instrumentally indispensable for certain oppressed groups' liberation; I am speaking from an ultra-privileged perspective). But as the size of the injustice rises, the more inescapable the ruminations become. There must be a point where the farmers' loss is so big that it warrants indignation against the perpetrator who was caught with her hand on the trigger, even if the killings she perpetrated would have been committed by someone else -- I am deliberately changing the thought experiment for illustrative purposes. I would want my ethical theory to be responsive to these intuitions regarding (what seems to me to be) the callousness of the Polluter's behavior, even if, as we stipulate, the behavior makes the farmers no worse off. After all, isn't insensitivity to others' pain justifiably seen as a predictor of future behavior? (Not that I have read any relevant studies; I am just speculating on one source of the strength of our intuitions against behavior that seems callous to us.)
    I think the farmers, in the midst of their misfortune, would feel "threatened" by a refusal to acknowledge the correctness of their perception of the Polluter as a threat: even if the Polluter's attitude did not make things any worse now, it is bound to do so in the future -- that's how I think the farmers will see it, and that's how it seems to me too. (My intuitions come from a utilitarian background; if it is shown to me that better outcomes come from excising my intuitions from my head, then I agree my intuitions are mistaken. If the Universe is such that what I perceive as callousness is not callousness, or if it is callousness but a certain degree of callousness makes for a sustainably better Universe in the long run, then I am all for acknowledging the falsity of my intuitions, though I'm not sure what I would want to say regarding my attitude towards such a hypothetical Universe.)
    Continuing in PART 2

    1. PART 2

      Besides, if we miss the opportunity to blame the Polluter now that she was caught red-handed, I would argue that we miss a chance to maximize the deterrence value of our blame. I speculate that the pedagogic value of blame is much bigger when there are culpable thoughts, or culpable omissions of considerate thoughts, that can be made salient to the blamee by the blamer. Paid-Off won't have as vivid relevant imaginings of the situation as the Polluter -- I would speculate that if this is so, it is because of the physical proximity of the Polluter's bodily movements to the eventuation of the harm to the farmers. All this presupposes that Polluter has done something that justifiably counts as callous -- otherwise I wouldn't think the blame appropriate.

      But I certainly need to think more about your response and read the link you posted. Thanks again for the fascinating post and your response.

    2. I am going to post a link to four pages from philosopher Jonathan Bennett's book The Act Itself, because I find them especially illuminating with regard to the broader neighborhood that is the source of my intuitions concerning the moral defect, if any, of Polluter's action of polluting, and of her subsequent defense of her initial behavior. I will then post my reflections on it -- but my reflections are still less than half-baked, even for a non-philosopher.
      Bennett, Jonathan (1998). The Act Itself pp.58-61. All emphases added except when stated otherwise:

      The phenomenon of ‘something like blame’ for actual consequences falls into two parts that are to be explained differently. One part is not much like blame; it involves coolly holding a person responsible for actual consequences while knowing that he is not to blame for them. Following Susan Wolf … [s]uppose that you cause harm to someone without being in the least at fault: a small child dashes into the path of your car, which hits and hurts her. The probability principle allows you to treat this accident as though you were not involved – stopping to help the child, no doubt, but only as you would do as a concerned bystander. In fact, though, you are likely to have a sense of being especially obliged to care for the child. … My topic … is the difference between two ways of acting on the knowledge that one is causally involved: a minimalist one based on the judgment ‘The accident was in no way my fault, so I am not especially obliged to help’, and another based on the judgment ‘I did it, so I should help’.
      Susan Wolf illuminates this difference by comparing it with two attitudes that one might take to payment for a shared meal in which one has consumed less than others. A person so placed is entitled to ask for separate bills, so that he pays only for his share; but one might brush entitlement aside and offer to split the bill evenly, so as to keep things simple. Someone who does that for that reason exhibits generosity [emphasis in the original] in the broad sense of that term … This virtue [generosity] shows itself in a careless breadth of conduct and feeling, an unwillingness to do the sums needed to mark the boundaries of one’s rights and entitlements. In the person who causes unforeseeable harm it will show in his taking upon himself some responsibility for the harm that has been done, rather than computing his way to a conclusion that exculpates him. In conclusion, Wolf suggests that if we as onlookers have a sense that the person’s moral status is affected by his having caused the harm, however blamelessly, that reflects our sense of how he will see his situation if he is generous. … the core of this account is … [Wolf’s] and it seems to me right.

    3. A few thoughts:
      1 One of my takeaways from Jonathan Bennett is that, according to both Bennett and Susan Wolf, ‘something like blame’ is warranted in cases where the agent is the agent of harm in the actual sequence of events, even if the bad outcome of his conduct was unforeseen. But then, a fortiori, ‘something like blame’ must be warranted in cases where the agent is the agent of harm in the actual sequence of events and the bad outcome of his conduct is foreseen, as in the Polluter case. Now, if I am allowed to take my ‘a fortiori’ literally, I will claim that something stronger than ‘something like blame’ is warranted in the Polluter’s case. Can full-fledged blame be that stronger warranted attitude?

      2 A major concern that my mind keeps reverting to is that Polluter can accurately be described as insensitive to causing pain. She gladly helps herself to a pain-causing role. This, to my mind, is an attitude that needs to be addressed, because it increases the probability of the agent becoming the source of bad outcomes in the future. Blame in the Polluter case seems to me a proper way of addressing this concern. (Is this motive utilitarianism?)

      3 After Polluter paid off Paid-Off, and before she dumped her barrel in the river, she had enough time to consider that if she stopped acting at that time then no harm would come to the farmers. It is this failure to think the relevant appropriate thoughts, after it became clear to her that her behavior was at that point the sole determinant of the farmers’ misery or happiness, that I find especially blameworthy.

      4 The speech of Polluter that is meant to defend her conduct is bound to produce extra indignation in the farmers. This is an empirical claim, but I think it is psychologically uncontroversial. It will be seen as a second slight, but I can’t pin down for philosophical-reflection purposes the nature of the wrong they will be (justifiably, to my mind) perceiving themselves as having suffered.

      5 Here is a thought concerning the perceived danger of Polluter’s attitude: Let’s imagine the neighbor opposite my house is a gun lover. Though I despise guns (and gun owners) I keep my views to myself (except when posting on the internet 😊) and in no way do I think myself entitled to impose my views upon gun owners via the application of State power: they are free to keep their guns, as far as I am concerned, though I certainly prefer to live among a community of people who feel uncomfortable in the presence of guns. Now back to my neighbor: let’s stipulate that he has the habit of standing on his porch and pointing his empty gun at passers-by without their knowing; he just points the gun, say, at their backs. Let’s also stipulate that I know his mental health is checked often because he is a member of the Security Forces, and let’s also stipulate that I am convinced that the gun is always empty (because, say, I have seen a giant poster in his house reading “the true badass carries a loaded gun only during practice and only when it’s time to kill, never in the presence of children”, and there are lots of his children around). I think that some mild fear of him is warranted. And I think that Polluter is relevantly similar, in that her attitude and behavior bespeak possible future trouble.

    4. Thanks Dionissis, lots of interesting thoughts here (and a great quote from the Bennett book)!

      I share your sense that the callousness question depends (at least in part) upon the stakes here. If the polluting causes deaths, that seems very different from if it is mere property damage (or a few failed crops). In the higher-stakes case, she might well be obliged to positively help the situation, by both paying off her neighbour and refraining from polluting herself. And a lot of what you say does seem plausible of the high-stakes case.

      I implicitly had in mind a lower-stakes version of the case (which seems a fairer comparison to ordinary cases of, e.g., carbon offsetting, where the harms are more "statistical" in nature, and much less salient as a result). Do you think there's any intrinsic significance to being the "agent of harm" then too? Or only in higher-stakes, salient-harm scenarios?

    5. PART A
      Prof. Chappell, I am very glad you found my thoughts interesting! Normally I would have rested on these laurels bestowed on me by a philosopher, but you asked me a question, so I need to try to answer it. First of all, epistemic shame on me, but I am not sure what you mean by the ‘intrinsic significance’ of an agent’s becoming the agent of harm. I took it to mean what it means in your paragraph from the brilliant link you posted in your response to Prof. Caspar Hare in a downstream comment:

      There's no intrinsic or fundamental significance to the distinctions between doing and allowing, killing vs letting die, etc. (It's notoriously difficult to give a sound metaphysical account of these distinctions that seems to be getting at anything fundamentally important, after all.) Together, these points suggest that we should want to account for such distinctions having a kind of indirect significance -- say, by typically correlating with something else that has intrinsic significance.

      My view on the doing/allowing distinction is exactly like yours, namely that the mere fact that a state of affairs was a doing instead of an allowing has no ‘intrinsic’ significance, but that usually impermissible doings are such that ‘make the victim more salient’ to the agent compared to the corresponding impermissible allowings, and hence the doings reveal a very striking lack of adequate concern for the victim, and that it is this latter feature of the act that is of intrinsic significance for the purposes of blameworthiness ascriptions.

      But on this interpretation of your ‘intrinsic’ I am not sure I see the choice to become the agent of harm as having intrinsic significance, even in the high-stakes cases; I mean, the mere fact of intentionally becoming the agent of harm is not significant in itself but only insofar as it reveals a lack of adequate concern for the victim’s pain (I was trying to convey this idea with the phrase “insensitivity to others’ pain”). Perhaps (as it seems to me) all cases of becoming-agent-of-harm in high-stakes cases of offset harms have the property of being indicative of a lack of adequate concern, but I can think of other high-stakes cases that are not indicative of a deficiency in concern: in Jim and the Indians, the agent who shoots one (consenting) victim (who would have died anyway) to save the rest is intentionally becoming the agent of harm, but she isn’t callous for that. Am I getting something wrong with regard to the ‘intrinsic’ terminology?

    6. PART B
      Anyway, in the low-stakes cases of offset harms I still see becoming an agent of harm as a bit blameworthy, albeit of course much less blameworthy. The lack of adequate concern in all these cases, it seems to me, is due to the fact that the harm is salient to the agent, as revealed by the fact that she adverted to the details of the harm, i.e. she devoted time to thinking about it and hence had time to think about the frustration of the farmers and respond accordingly with a “not through me” resolution, instead of brushing off mental images of angry or sad farmers. I think it’s this ‘premeditation’ that strikes me as a bit inappropriate (albeit far less inappropriate compared to cases where the harm involves physical injuries; at first I was thinking that the counterfactuals are meant to nullify the responsibility of the agent in all cases of offset harms, not just the low-stakes ones).
      The robust-process explanation of the harm that points to Paid-Off (i.e. the explanation that highlights features of the situation that are common to all the possible worlds where the harm occurs) seems to me to be indeed important for the victims, to the extent that they want to plan how to avoid future harms or understand deeply the ultimate source of their misfortune (though I would urge Polluter to refrain from being the one to offer the explanation to the victims, unless she wears a bulletproof vest 😊). But I still see the victims as entitled to their resentment against Polluter.
      Your article was fascinating!

  3. The signs ���� after the sentence "except when posting on the internet" were meant to be smileys :) 😊

  4. Hi Richard

    That's interesting. I find the question 'Who is responsible?' a little hard to get an immediate grip on. A different, preliminary question is 'Has Polluter done something terribly wrong?'. Let's start with that.

    The way you described the case, Polluter pays off then pollutes. So there are three actions to assess:

    1) paying off then polluting (the composite action)
    2) paying off
    3) polluting

    Polluting is, on all moral theories of which I am aware, wrong. It causes the ruination of the downstream farm.

    So Polluter does something terribly wrong, that causes the ruination of the farm. This would seem to me to make her an appropriate object of blame, resentment etc. So, in that sense, she is responsible for the ruination of the farm.

    The 'If I had never been around, the very same thing would have happened' defense does not seem to me very strong. It is true that if Polluter had never been around then the farm would have been ruined. It is also true that if Polluter had left the scene after paying off Paid Off then the farm would have been just fine. That's enough to make her an appropriate object of blame. (Consider: You are floating helplessly in the ocean. A great white shark is heading towards you. I drag you into my boat. Someone offers me $10 to throw you back in. I do that. The shark eats you in just the way it would have eaten you if I had never been around. Is it appropriate to blame me for your death? Yes.)

    An interesting variant of the case is one in which Polluter pollutes then pays off, and would not have paid off if she had not polluted. Now the three actions to assess are:

    1) Polluting then paying off
    2) Polluting
    3) Paying off

    From a consequentialist point of view, (2) looks morally okay -- its outcome is no better or worse than the outcome of its alternative (I am supposing we take an 'actualist' treatment of Professor Procrastinator-type cases). (3) looks morally great -- its outcome is better than the outcome of its alternative. And (1) looks no more or less wrong than never doing anything -- its outcome is worse than the outcome of one alternative (doing nothing then paying off) and no better or worse than the outcome of another alternative (never doing anything).

    So it isn't obvious that Polluter has done anything terribly wrong in this case. And the question 'is she an appropriate object of blame?' seems open.

    Perhaps it matters whether she would have polluted had she not recognized that she would later have an opportunity to pay off. Let's suppose, first, that she would have polluted anyway. Now this looks like a case of malicious overdetermination. The presence of each of Polluter and Paid Off suffices for the ruin of the farm, but the farm would still have been ruined had either one been absent.

    I am inclined to think that in cases like this both parties are appropriate objects of blame.

    Suppose, second, that Polluter would not have polluted if she had not recognized that she would later have an opportunity to pay off. Now this no longer looks like a case of overdetermination. The presence of Paid Off suffices for the ruin of the farm. The presence of Polluter does not. Now is Polluter to blame? Is she more to blame than if she had done nothing?

    The natural way for a consequentialist to answer these questions: Polluter is to blame -- she could have not-polluted then paid off. Polluter is no more to blame than if she had done nothing.

    A different way to answer these questions: Polluter is to blame, and more to blame than if she had done nothing. By polluting and paying off Polluter has involved herself in the farm-situation. Involving yourself in a situation is morally risky. By involving yourself in a situation you expose yourself to blame for not doing the best thing with respect to the situation.


    1. Hi Caspar, thanks for this great comment!

      I certainly agree with your verdict about the shark case -- and hence that making things overall no worse for one's presence is not a universal defense. Sometimes we have positive obligations to improve situations, not just negative obligations to avoid making them worse. (Singer's pond is of course another such case.) But perhaps you meant to make the stronger claim that rescuing me and then tipping me back overboard is morally even worse than failing to rescue me in the first place. And that does seem plausible, too -- perhaps for the kinds of salience/'callousness' reasons that Dionissis adverts to up-thread. I think it's less obvious that such reasons apply as strongly in the pollution case (that'll depend on the details, I guess).

      Comparing your two main scenarios: There's something a bit odd about thinking that the order of the acts makes such a big moral difference. The first scenario invites us to evaluate polluting in isolation, holding fixed that paying off has already occurred, and it's certainly true that this additional act makes things a lot worse than if the agent at this last moment had a change of heart and decided to not pollute after all (despite already reducing pollution elsewhere via the pay-off). But I'm inclined to think that a more holistic evaluation is what's relevant to questions of blameworthiness and obligation (as opposed to questions of what choice would be morally optimal). So I'm inclined to treat the first scenario as like the second so far as blameworthiness is concerned.

      On to the latter case: "The natural way for a consequentialist to answer these questions: Polluter is to blame -- she could have not-polluted then paid off. Polluter is no more to blame than if she had done nothing."

      I was implicitly assuming that it was a negative-duties-only scenario: that paying-off (without polluting) would be entirely supererogatory, and an agent wouldn't be blameworthy at all for doing nothing. So, since Polluter is (as you say) "no more to blame than if she had done nothing," we should conclude that she likewise isn't blameworthy in this case.

    2. Hi Richard,

      Interesting scenario!

      As I see it, what actually happened to the downstream farm is not relevant to the question of whether the polluter behaves unethically, and I don't see a further question of responsibility (leaving legal issues aside). In my view, the relevant matters would be what Polluter expected, what information was available to her, and so on. In particular, did she expect that the downstream farm would be ruined?

      If so, then it's a salient (unjust) harm, not a diffuse one. A parallel: someone sees a man about to steal a car (just for the money, so no justification), chases him away, and then steals the car himself (also, just for the money). Surely, that would be immoral, even if the harm was indeed offset. Same answer - though somewhat less immoral - if she reckoned there was a high probability that it would be ruined or seriously damaged. The lower the expected damage (all other things equal), the less immoral the behavior.

      On the other hand, if she made an irrational probabilistic assessment as to the degree of harm to expect, then it's harder to say, depending on the causes of her irrational assessment (but in any case, she may be guilty of failing to consider the matter properly, or of not spending enough time thinking about it).

      Finally, if she did not expect any identifiable harm to the downstream farm (and/or potential third parties), and her assessment was epistemically rational, a question is whether she (or Paid-Off, for that matter) had any obligation not to dump the waste. Maybe they did not. But then, it's described as a "toxic waste barrel", so it seems to me it's pretty dangerous. I'm inclined to think that Paid-Off had a moral obligation not to dump the barrel, and if so, the same goes for Polluter, assuming no relevant difference in expected harms, etc. (see the car theft example above). So, I reckon that both of them behaved immorally at different points. Paid-Off acted immorally when he planned to dump the toxic waste barrel and took action to carry out that plan, and when he accepted money in order not to behave immorally. Polluter behaved immorally when she planned to pay off the neighbor and dump her own toxic waste barrel, and from then on until she dumped the barrel -- and further, for failing to let the people in the downstream farm know what she had done, so that they might at least try to stop the damage.

      By the way, regarding the relevance of actual consequences (or lack thereof), here's another scenario: suppose that after the toxic waste barrel is dumped, unexpectedly someone in the downstream farm releases a prototype AI drone in the water, designed to clean it in a number of ways. While they did not expect someone to dump toxic waste, the AI performs flawlessly, preventing any significant harm to the farm that would otherwise have been ruined. Polluter and Paid-Off had no idea anyone was working on such a technology, let alone planning to test it right there. I reckon that they are exactly as guilty as in the scenario in which the downstream farm is ruined.

    3. Well, they're not guilty of ruining the farm if the farm doesn't actually get ruined. So there's the old issue of resultant moral luck here (compare two equally drunk drivers, just one of whom kills a pedestrian). But I agree with you that the most important normative assessments concern "internal" evaluation of the agent's choice, given the info available to them at the time.

      I think I agree with you on salient harms (we may be obliged to intervene to prevent the car theft, so offsetting back to a baseline of non-intervention may not be good enough). So let's make it more diffuse to weaken intuitions of positive obligation. The toxins slowly leak out of the barrel as it travels downstream, doing proportionate (1/n) slight damage to each of n downstream farms, for some large-ish value of n. This seems more akin to standard (e.g. carbon) offsetting cases.

      There's a bit of a puzzle as to why we should think that "Polluter behaved immorally when she planned to pay off the neighbor and dump her own toxic waste barrel." After all, this total plan of action does not make anyone worse off. And in our revised case, it involves diffuse rather than salient harms. It would've been permissible for Polluter to do nothing, allowing Paid-Off to (wrongly) pollute. By preventing this and substituting her own barrel, Polluter's plan is equivalent (in terms of expected harms) to the permissible plan of doing nothing. Are you inclined to attribute intrinsic moral significance to "dirty hands" / causal involvement here?

    4. Regarding being guilty of ruining the farm, I was trying to say I don't think anyone is guilty of results. That is a very common and sometimes perhaps convenient way of talking, but I think at other times (most of the time, in fact) it's problematic, as it gives the wrong impression. I do think the drunk drivers in your example are equally guilty, provided that they intended the same, had the same info, etc. Generally, I do not think anyone can be made guilty retroactively. For example, suppose 3 people separately engage in celebratory gunfire, in an equally populated area (based on the info available to them), using the same kind of weapons, firing the same number of shots, etc. Once the bullets are in the air, their actions as moral agents are over. In one case, one of the bullets kills a person. In a second case, a bullet hits a parked car, breaking a window and nothing else. In the third case, all of the bullets land harmlessly. As I see it, all three are equally guilty, and they can't be made retroactively guilty depending on consequences.

      Anyway, regarding your question about why we think it's immoral given that her plan of action does not make anyone worse off, I think it's because of her intent to get rid of the toxic waste despite the fact that it ruins the farm downstream (if she expects that, or gives it a high probability, etc.). I don't think, in general, that plans need to make anyone worse off in order to be immoral. For example, consider the car thief again. If A intended to chase off the would-be thief B in order to steal the car himself, A behaves immorally, even though no one is worse off...well, okay, B is worse off because he doesn't get to steal the car, but in my assessment, his being worse off is not what makes A's action immoral. As an alternative - also salient - one can consider the following scenario: C drugs D in order to rape her, for fun. He also has a knife and intends to mutilate his victim afterwards. E also wants to rape D for fun, though has no intentions of doing anything else. For that purpose, E pays off C. That is hugely wrong even if no one is worse off, and in fact D would be worse off without E's intervention (since she also gets mutilated), and even if E knows that. Further, E knows C is a murderer and armed with a knife, and properly reckons that trying to pay him off is very dangerous. So, E has no obligation to try to pay C off due to the high risk to his own life. But E likes the thrill of the danger too, so he does intervene, and C takes the bribe and leaves. So, E has no obligation to intervene (say it's 1970, no phone or means of calling the police at hand, either, etc.), and his plan predictably - and as predicted - leaves no one worse off and D better off. But E's actions are extremely immoral regardless.

      In re: revised scenario, I'm not sure I can make an assessment as to who behaved immorally and when on the basis of that amount of info. For example, did Paid-Off have an obligation not to cause that slight level of damage in the first place? How slight was it? Why? Was his action illegal? If so, did Polluter have no moral obligation even to anonymously make a call/fill an online form to denounce Paid-Off to the proper authorities? If not illegal, was the damage beyond what people generally do in the course of their commercial or industrial activities? In any event, my - tentative - guess would be that if Paid-Off had an obligation not to pollute, probably so did Polluter.

    5. Hi Angra. Concerning your point that E has no obligation to intervene by paying C, given that C is dangerous and E would be risking his own life if he did try to intervene, I think prof Chappell can respond that E has an obligation to intervene (by paying C) if he discovers via any means that C is not a threat to E if E tries to pay off C. After E pays off C and discovers that C is not a threat to E, at that point in time he has fulfilled an obligation to intervene in order to save D -- an obligation that existed even before E's attempt to pay off C, and that existed conditionally upon the fact that C is not a threat to E, after all. What do you think?

    6. Hi dionissis,

      I think that conditional obligations in that sense (conditional on not being a threat, or on some other thing) exist in nearly all if not all cases, even those of diffuse harm, but as they are not actual obligations, I do not think an objection to my scenario on that basis would succeed. Moreover, in the scenario, I stipulated that E knows C is a murderer and armed with a knife, and properly reckons that trying to pay him off is very dangerous. So, according to the scenario, on the basis of the information available to E and also based on what E believes (rationally, in this case), C was a threat. So, in my assessment, E does not have an obligation to intervene (if you like, the antecedent of the conditional obligation doesn't come true, as E does not discover by any means that C is not a threat to him).

      That said, if you think the above argumentation fails, then I would suggest making the following modification to the scenario: when E offers to pay C off, C is undecided, and chooses to toss a coin to decide whether to take the bribe (if it's heads) or kill E (if it's tails). It's heads, so C takes the bribe. Now, I do not think this modification changes the obligations of E, for the same reasons I gave in the other post: I do not think what happens later can retroactively change the guilt of a person, and the same goes for the obligations. However, given that the potential reply you suggest involves a condition that "C is not a threat to E if E tries to pay off C", this modification entails not only that E rationally reckoned that C was dangerous to E, but also that C was indeed very dangerous to E, under any conception of dangerousness that might be construed as relevant (if needed, one can increase the risk posed by C as much as one wants).

    7. PART A
      Hi everyone, this is a comment with very hazy thoughts of mine. If you read it, and the risk of wasting your time materializes, it's on you! :)

      Hi Angra and thanks for the stimulating response. Just to be clear regarding my stance on an important issue that you touched upon, namely that of punishment due to retroactive guilt, I wholeheartedly agree with you that there is no such thing. If my tentative thinking that starts with certain intuitive desiderata forces me to accept as a matter of logic the appropriateness of punishment for retroactive guilt, then I consider this to be a reductio ad absurdum of my hypothesized intuitive desiderata. I won’t accept the conclusion. Actually, I don’t even believe in retributivism.

      What I had in mind was more forward-looking. I was trying to think of a defense of prof Chappell’s intuition that certain counterexamples such as the ones you posited are not threatening to the perspective he wishes to experiment with (namely the perspective concerning certain low-impact diffuse harms). It seemed to me that in your example, which prima facie is a counterexample to prof Chappell, agent E is obligated to think about the situation in the following way: “If I can thwart the threat that C poses to D, then I owe it to D to preserve her current level of well-being. This is not a case of throwing my garbage into my neighbor’s yard, this is more like not saving my neighbor from drowning. I am not obligated to risk my own life in the process, and given that C looks dangerous and it is rational to fear him, I am under no obligation right now at time t to try to engage C and thwart the threat he poses to D. But if I finally decide to engage C at time t+1, and C’s threat is thwarted at t+2, then I am obligated to look at my normative situation as being such that it obligates me at t+2 to treat D at t+3 in a way that assumes that I had all along an obligation to help D preserve her current level of well-being, though the absence of costs to me was not known to me at time t and hence I am not, and couldn’t have been, guilty of any relevant act or omission of mine at that time t. And the way to honor at t+2 this newly-established obligation is by looking at D’s well-being in this case as not amenable to offset-harms thinking”.
      I am thinking in a very ad hoc way, to be sure, I am not offering a principled argument, I am merely reporting an intuition that an aspect of the perspective that prof Chappell wishes to explore is indeed as he sees it, namely that in high stakes cases his tentative perspective allows for doing the obviously right thing (i.e., refraining from becoming the agent of harm) and is not threatened by counterexamples in cases that “involve high stakes” – I don’t know how to put my quotation more accurately, I am deliberately vague, I am just pointing to aspects of my still hazy thinking. But maybe this way of thinking makes also the low stakes case impermissible? In that case my attempt to describe a possible defense is no help to prof Chappell’s perspective.

    8. PART B

      A more general point that I would like to explore in my head is whether in cases similar to the one you described there is indeed a lack of obligation to, say, save someone from grave harm even though I am not afraid of the potentially very high costs to me, and I actually decide to attempt to rescue the victim. I think that once I engage, I am under the obligation to do my best to save her, because the demandingness objection that initially nullifies any obligation to save is no longer present: if I do jump into the sea, in spite of the tsunami, in order to save the baby, then the minute I fearlessly enter the sea I am under the obligation to do my best to save the baby. I just can’t get out of the sea merely because I realized that I was wearing my expensive suit which will be ruined if I stay in the sea for long. If an adrenalin addict refuses to engage in an objectively very dangerous situation that he would have otherwise gladly engaged in, thereby saving many lives, and if he refuses merely for the reason that he doesn’t want to be late for his romantic rendezvous with Sue, I think he violates an obligation. (I am not implying that you are not thinking the same, I am just exploring an issue that popped into my mind while reading your thought experiment).

      PS. Concerning your final point, where C tosses a coin (let’s make it a very unfair coin, against E), I had in mind a deterministic world where, ex hypothesi, the objective probability that C harms E is zero. Though of course, from the point of view of E, E could not possibly know that C is not dangerous after all in case E tries to pay him off.

    9. Thanks Dionissis, that all sounds very amenable to me! (Indeed, it's an implication of my 'Willpower Satisficing' view that while you sometimes need not exert great effort to help others, e.g. in the face of significant barriers, any effort you do expend had better be used to optimal effect. This can help to explain why overcoming the barriers only to then exploit the newly-rescued victim yourself is not permissible.)

    10. Interesting points, dionissis!

      In re: a person is not afraid of the high costs, actually in my scenario E is afraid of being killed, even if he likes the thrill of danger (being afraid is part of the "thrill" in question). But that aside, I'm not sure that liking the thrill of danger makes intervention obligatory. For example, suppose that someone likes the thrill, but still reckons it would be bad for him to take a 1/2 risk of dying, so even though he likes it, he chooses not to. I don't think he would behave immorally just because he likes it. So, I'm not sure that E had an obligation to intervene because E likes the thrill of dangerous situations. However, I think that's a debatable matter, and the most direct way of addressing this possibility is to modify the scenario and stipulate that E does not want to take the risk and does not enjoy being in danger, and further stipulate that E would not try to pay C off if there were no expected reward for him. E is nevertheless motivated because he finds the expected gain (i.e., the chance to rape D) sufficiently tempting.

      In re: determinism, I do not think that that changes the matter, not only because we may not assume determinism in the real world (I think we just don't know), but also because there are plenty of cases in which we do not have obligations to intervene due to the danger involved, and this holds regardless of whether determinism is true, so clearly dangerousness should not be assessed on the basis of complete information and deterministic results. Otherwise, it would never be the case that doing X is dangerous because there is a t chance we would get killed (0<t<1), as the chance would always be either 1 or 0, and if obligations depended on that, we would not be able to figure them out, etc. So, I do not think the determinism stipulation can work in this context.

      In re: high stakes vs. diffuse harm, my example was meant to show that a plan does not need to make anyone worse off in order to be immoral, but not to address the high-stakes vs. diffuse harm distinction. In the revised waste scenario (i.e., proportionate (1/n) slight damage to each of n downstream farms), as I mentioned, I do not have enough information to tell whether Paid-Off or Polluter has an obligation. I tentatively guess that if one of them has an obligation, so does the other (probably), but it's very tentative.

    11. PART 1

      Angra hi again and thanks for the interesting discussion!
      You are right about the determinism point I made that it does not do any work to object to your thought experiment. I only mentioned determinism in order to hint why in my first reply to you I was arguing that the agent was obligated from the beginning (time t, the time he is deliberating whether to intervene to pay off C) to preserve D’s well-being. In order to argue this I was using an objective “ought”, i.e. an ought that arises from a set of facts not all of which are within the agent’s perspective (I have in mind here a discussion that I read in Pea Soup, the only thing I have ever read on this issue of objective vs subjective ought. I also remember that in one of your past discussions with prof Chappell here he had mentioned to you that he was operating in the discussion you two were having with an objective sense of ‘ought’. Anyway, in the Pea Soup discussion the agent is asked to pass the sugar to her friend. She doesn’t know it, but the seeming sugar is poison. The objective ought dictates that she shouldn’t pass the sugar, given the fact that the sugar is not sugar but poison. The subjective ought, which incorporates the obligation that arises only from facts that are known or knowable –in some suitable sense of ‘knowable’- by the agent, dictates that she should pass the sugar, because that’s the polite thing to do given that the agent has no clue that the sugar is poison in reality). I was thinking that a deterministic universe allows for the objective ‘ought’ that I needed in order to establish that the obligation of E to D existed from the beginning (time t) even though the agent was unaware of the future outcome of his attempt to pay off C. 
In my second reply to you I changed tack and used a subjective sense of ‘ought’, and argued that what exists from the beginning (time t) is not an obligation to preserve D’s well being (given that E is not obligated to risk his life) but an obligation at time t to consider himself at t+2 as bound by an obligation to preserve D’s well-being at t+3 on the assumption that C accepts to be paid off. As soon as the condition is fulfilled (i.e. as soon as C accepts to be paid off, even though E had no obligation to try to pay him off), E is bound by his decision at t to see the preservation of D’s current well-being as his obligation at t+2. This second reply of mine does not need to assume determinism.

    12. PART 2
      Angra you are right about fear being an ingredient of the thrill in your thought experiment. In my sentence “even though I am not afraid of the potentially very high costs to me” I had a “functional” sense of the word ‘fear’, namely, the affective state that would ground a demandingness objection against the claim that the agent is obligated to jump in the sea in spite of the tsunami in order to rescue the baby. In your sense of ‘fear’ I think we can claim that no such demandingness-due-to-fear objection arises: if she is afraid in a “thrilled” kinda way, then it’s not too demanding to expect her to do her usual thing of ignoring danger. There may be other facts that set her free from the claws of obligation, but the demandingness of the experience of fear to φ (in my sense of ‘fear’, the thrill-less sense, the fear that freezes you) is not among them in this case. One such fact could be exactly what you hypothesized, namely E’s unwillingness to engage C in this particular case, even though he would enjoy it, because he deems the 50% risk of dying too high. In such a case, I agree with you that E has no obligation to intervene by paying off C (or no obligation to jump into the tsunami). Which brings us to your new scenario where E is not thrilled to intervene, and recognizes the magnitude of the risk as giving him pro tanto reason to refrain from attempting to pay off C, but takes himself to have an all-things-considered reason to intervene by paying off C, and in fact intervenes motivated by sexual desire. In the case you propose let us make the stipulations even stronger and say that, in spite of his being voluntarily motivated, it was only a desire to rape or someone else’s putting a gun on his (E’s) head that could have ever motivated him. In such a case I think we can say that he was not obligated at t to intervene by paying off C in order to preserve D’s well-being.
But we can still say that he was obligated at t to see himself at t+2 (the time the danger ceases to exist) as obligated at t+2 to preserve D’s current well-being, which is I think what prof Chappell needs in order to accommodate the intuition that the harm of, say, murder or rape can’t be offset, while at the same time he maintains the option to draw a distinction between what I loosely call ‘high-stakes’ cases and ‘low-stakes-cases’, so that he can theoretically experiment with the idea that agents may not be obligated to refrain from causing certain much lesser diffuse property damages, if they have made up for them by preventing similar harms. But I need to re-read prof Chappell’s brilliant “Willpower Satisficing” to get a better sense of how he would put it more rigorously.

    13. Prof Chappell, now that I got praise from you I am going to really rest on my laurels, and cease engaging in the discussion, if only to reread your insightful Willpower Satisficing! I remember it was the first time I commented here at your blog that I had told you how brilliant this paper seemed to me.

    14. Hi dionissis,

      Thanks to you for the interesting discussion as well.

      In re: different 'oughts', I wouldn't call the 'ought' that depends on the information available to the agent "subjective", since I think there is generally an objective fact of the matter (usual sense of the expression) about whether an agent ought to X (moral "ought" in this case), but that aside, I think that moral obligations depend on available information. In the sugar case, I would say that if the agent passes the sugar, she does not behave immorally, but if she fails to, she does behave slightly immorally (i.e., she fails to do what she morally ought to). In short, I do not agree there is that sort of obligation that does not depend on available information. At any rate, if there is in some sense an obligation that does not depend on the available information, it is not doing any work here. In particular, intuitions of positive obligation are intuitions about obligations that depend on available information, not obligations that do not (whatever the latter might be).

      In re: conditional obligation at t to see himself at t+2 obligated to preserve D's well-being, I don't think the immorality of E's plan is due to a specific conditional duty to preserve D's well-being, but rather, it's because it involves raping D for fun, and that would be immoral regardless of whether the other person is D or any third party, and also regardless of any previous actions by C or E. That aside, I'm not sure I'm getting what you're trying to show in this context. More precisely, the purpose of the example was to show a case in which it is immoral to intend to carry out a plan, even though the person making the plan rationally reckons that if the plan is successful, no one is worse off and even an innocent person would be worse off without the implementation of the plan. Assuming your hypothesis about E's conditional obligation is correct, do you think that that undermines the example? In other words, do you think the example does not show what it is intended to?

      In re: ‘high-stakes’ cases and ‘low-stakes-cases’, yes, my example was not meant to be an objection to the low-stakes cases. As I mentioned in a previous reply, I would need more information about the specifics of the case to assess whether even Paid-Off had an obligation not to dump the barrel. Assuming he did, I'm inclined to say that Polluter probably had an obligation to intervene, though not to pay him off. For example, it would (probably!, again I have too little info) be sufficient for Polluter to warn the people in the downstream farms about what Paid-Off did, so that they can take measures to have him punished and/or be compensated, etc., depending on the legal system and other details.

    15. Hi Angra, thanks for the response.
      Regarding your question about what exactly it is that I am trying to show with my discussion of your thought experiment involving E, C, and D: I do not think my discussion undermines in any way what you were trying to show with regard to the grounds of the immorality of E’s plans (pay off C first, then rape D). What I was trying to do was (I quote myself from a previous comment so that I won’t rewrite the same things, and then I elaborate):

      [offer a possible defense of prof Chappell’s tentative perspective, a defense that allows him to] say that he [E] was obligated at t to see himself at t+2 (the time the danger ceases to exist) as obligated at t+2 to preserve D’s current well-being, which is I think what prof Chappell needs in order to accommodate his intuition that the harm of, say, murder or rape can’t be offset, while at the same time he maintains the option to draw a distinction between what I loosely call ‘high-stakes’ cases and ‘low-stakes-cases’, so that he can theoretically experiment with the idea that agents may not be obligated to refrain from causing certain much lesser diffuse property damages, if they have made up for them by preventing similar harms.

      I saw your thought experiment as undermining prof Chappell’s desideratum of distinguishing between high stakes and low stakes cases in that it was a case that does not allow prof Chappell to claim that this high stakes case of yours is a case where E has an obligation from the beginning (time t) to preserve D’s well-being, or at least an obligation not to allow D’s well-being to deteriorate below a baseline acceptable level (henceforth I am referring to these 2 distinct obligations interchangeably, though it is the latter that is the correct one for my purposes). I was understanding prof Chappell as needing this claim to be true so that he could then differentiate the high stakes cases from the low stakes cases by insisting that the latter cases do not give rise to the specific obligation, and so that he could then counter a possible argument of the form “but prof Chappell, if we accept that we can offset the low stakes property damage, why can’t we offset murder?”. Prof Chappell, under my hypothesized defense, could answer that in high stakes cases we are obligated from the beginning (time t) to preserve the victim’s current level of well-being, or at least not allow it to deteriorate below an acceptable baseline, but in low stakes cases there is no such obligation, and that this fact explains why we are not allowed to offset murder even though we are allowed to offset minimal diffuse property damage. Your counterexample was threatening this distinction by showing a case of high stakes where there was seemingly no prior obligation to preserve D’s current well-being. But if that is so, then prof Chappell has to admit that there is some other reason that obligates us to refrain from offsetting murder. And if that’s the case, why can’t that hypothesized reason be applicable also to cases of low stakes offset harms, thus turning them impermissible too? The distinction between high and low stakes cases is threatened by this question.
My “strategy” aimed at sketchily resolving this problem for prof Chappell by arguing in an ad hoc utilitarianish way: at first I tried insisting that there was after all an initial obligation to preserve D’s current well-being in the high stakes case, in which case the distinction is straightforwardly preserved: high stakes cases involve a specific prior obligation at time t to preserve the current well-being of the victim, which is an obligation not present in low stakes cases. Then, in response to your responses, I tried establishing some other utilitarianish obligation that arises at time t that allows for distinguishing between high stakes and low stakes cases in that it applies to high stakes cases but not to low stakes cases, without having to claim that there was an obligation at time t to preserve D’s well-being in the high stakes case of your thought experiment.

      Regarding your point about there being an objective fact of the matter about whether the agent ought to φ even in case the subjective ought is the ought we should prefer, I agree 100%, and I think this turns us both into moral realists in the sense that we believe that ethical statements express objectively true or objectively false propositions. Which makes both of us immoral 😊 😊:

      Here are some things that I think (and argue for): it is immoral to be a moral realist;

      The above was a link from prof Chappell to philosopher Max Hayward, who wrote a paper that prof Chappell recently discussed here:

      Concerning the subjective or objective ought, I haven’t made up my mind yet. Your stance sounds to me more intuitive, but the objective ought facilitates my (novice’s) theoretical thinking more. I don’t know which is better and my protothoughts are so vague they will be a waste of time. But I am sure occasions will arise that we can discuss it substantively in the future. A pro-reason for the objective ‘ought’: I want to be able to claim that A’s owning a slave was morally wrong even a thousand years ago, no matter how non-culpable the slave-owner might have been given the facts within his perspective.
      As for my use of the terminology ‘subjective ought’, I had seen it in a paper from philosopher Ralph Wedgwood, which, shame on me, I haven’t yet read.
      Objective and Subjective 'Ought'

      Angra thanks a lot for the interesting (and for me also very useful) discussion -- ‘useful’ in that it forced me to clarify to myself a small number of my numerous very hazy thoughts on the subject. I might have to abstain for a while, to do some reading for a change! I mean I won’t be ignoring your responses if you don’t see a future response of mine. Thanks again.

    17. Thanks to you as well for the explanation, dionissis (and generally for the discussion).

      Briefly, I think that the obligation that E has at time t+2 (or at any time) is not to rape D for fun, regardless of C's actions (and also, that is an obligation E should reckon at t he would have at t+2 if E ponders the matter at t, conditional, of course, on E's being alive at t+2). But if we are to assume that E has either an obligation not to make things worse for D or else a positive obligation to make things better, I think it's the former, only that at t+2 D is already not in danger from C, so E would have an obligation not to make things worse for her. I do not think, however, this is how it works, as I mentioned above (if it did, then one could ask why that would not apply also in the lower stakes case: why would Polluter not have the obligation, after paying off Paid-Off, to refrain from polluting?). One of the reasons I think this is not what is happening is this: Suppose that agent F is in a terrible situation, and the best situation for F that agent G can bring about would be for G to actually kill (variant: rape) F. Then, G still behaves immorally if G kills (variant: rapes) F for fun, even if G is aware that he is bringing about the best situation for F that he can bring about. There are possible scenarios like that, and the wrongdoing is not related to expected consequences for others, but rather, it's about the intention of the perpetrator.

      All that aside and going back to the original scenario, after further considering the matter, my impression is that when Polluter sees that her neighbor is about to dump the barrel, Polluter (in the original scenario or the slight 1/n damage variant) has an obligation to make things better for the people in the downstream farms, at least by means of warning them about the actions of the neighbor. That is because that has (probably) essentially no cost for Polluter. In that way, they can take action against the neighbor. Now, if instead Polluter pays off the neighbor and dumps her own toxic waste, the people in the downstream farms are actually worse off than they would be if Polluter had instead fulfilled her obligation, since they do not know who damaged their farms, nor do they have a witness who would say what happened. This makes it more difficult for them to take defensive or retaliatory action (or both; legal action, assuming there is a relevant law) against whoever is harming them.

  5. [Allen Hazen writes in:]

    The logic of the puzzle is similar, it feels to me, to that of the "Whose shadow is it?" puzzle: a room is illuminated by a point source of light. Two rectangular cards are mounted on music stands, a smaller one closer to the light source, a bigger one further away, so that the shadow of the small card precisely covers the face of the larger one. There is a rectangular shadow on the wall. But what is this shadow a shadow OF? Not the small card, for its shadow is on the large card. Not the large card, for it is not illuminated and so casts no shadow. (Due to some combination of Fogelin, Van Fraassen, and Daniels in the 1960s.)

    - Allen Hazen

    1. Fun puzzle! Definitely some structural parallels. Though I wonder if the shadow puzzle is more easily dissolved by simply denying that there is a rectangular shadow on the wall. (There is a rectangular area of shade, which looks just like a shadow. But it turns out, upon closer examination of the situation, that this unilluminated area is not best understood as a 'shadow', given what we tend to build into that concept.)

      Part of the problem there is that our ordinary notion of a shadow, as involving shade that is literally(?) "cast" by an object, implicitly conflicts with the facts about how illumination works. So the puzzle has more of a linguistic feel to it (at least to my thinking) than the offsetting puzzle which strikes me as raising entirely substantive normative questions.

  6. Richard,

    Regarding the optimality of efforts, I'm not sure I understand your point correctly. Wouldn't that apply to your scenario as well? I mean, Polluter helps the people in the downstream farms by paying off Paid-Off, but then goes on to harm them. Granted, Polluter is not acting in order to help them, but then, also in my scenario, C is not acting in order to help D, even though he reckons that D is going to be worse off without his intervention than with it if the intervention is successful.

