Sunday, April 22, 2018

Three kinds of offsetting

Distinguish the following kinds of "offsetting" behaviour:

Preventative offsetting -- when potential harms depend on just the global amount of something (say, greenhouse gas emissions), it seems that one can prevent the potential harm done by one's contributions by "offsetting" or paying to reduce others' contributions, so that the net effect of one's behaviour leaves the global magnitudes unchanged.

Cause-specific (or harm-type) offsetting -- when you cause a harm of a certain type, but then seek to 'offset' the badness of this by preventing a like harm from occurring elsewhere.  E.g. donating to a relevant environmental charity after polluting your local river.

Cause-neutral (or net utility) offsetting -- when you cause a harm of a certain magnitude, and then seek to 'offset' the badness of this by preventing a similar amount of harm elsewhere.  E.g. donating to a global poverty charity after polluting your local river.

Preventative offsetting seems the most easily justified.  While he doesn't use this exact terminology, this is the basic idea that Will MacAskill appeals to in Doing Good Better to distinguish carbon offsetting (which he thinks can justify our carbon emissions) from things like murder offsetting (which surely can't justify murder).

But preventative offsetting can only get us so far.  For one thing, it's not entirely obvious that even carbon offsets are entirely preventative: it seems an empirical possibility that there could be some localized effects to greenhouse gas emissions, such that planting trees in Peru doesn't literally undo the effects of flying around Europe.  Moreover, our modern lives entail all sorts of localized environmental harms (e.g. air pollution from driving cars, creating demand for electricity that's partly from non-renewable sources, habitat destruction to make room for our housing and food needs, adding to oceanic plastic buildup, etc. etc.) that aren't subject to preventative offsetting.  So if we want to be able to "make up for" these harms we (and our children) cause, we will need to appeal to some form of non-preventative offsetting (whether cause-specific or cause-neutral).

Perhaps the most promising way to do this (without absurdly justifying murder offsets or the like) would be to develop Scott Alexander's idea that "you can offset axiology, but not morality."  Scott happened to appeal to a specifically rule-utilitarian conception of morality, but we can broaden the idea beyond this.  The basic idea (as I would prefer to develop it) is just that you can offset diffuse harms but not specific wrongs or rights-violations.

On any plausible conception of rights (whether foundationally consequentialist, contractualist, or taking the rights themselves as foundational) we have a right that our neighbours not murder us, but no right that they refrain from exhaling carbon dioxide into the atmosphere.  So there we have our easy cases: carbon offsetting is legitimate, whereas murder offsetting is not.  This analysis also plausibly identifies the difficult cases: reasonable theories disagree about whether factory farming involves rights violations or just harms to the animals' welfare, and so the legitimacy of meat offsetting (e.g. paying others to become vegetarian in your place) is a similarly open question.

My hope is that this analysis could convince even non-consequentialists that we can legitimately offset the environmental harms entailed by procreation (and indeed by our own continued existence), thereby undermining the environmental anti-natalist arguments of Travis Rieder, Sarah Conly, and others.  Our everyday environmental externalities seem like precisely the kind of diffuse, untargeted harms that can legitimately be offset, after all.

I'm further inclined to think that there's no clear moral reason to prefer cause-specific moral offsetting: if we can more efficiently make the world a better place through other means, we should feel free to do so.  And this makes it much more feasible in practice to successfully offset the harms we cause: rather than having to somehow find specific charities that address each specific kind of harm we cause, it suffices to simply do more good than harm overall in our lives (which I expect is already achieved by most law-abiding folks just through their everyday social and economic contributions to society, before even taking into account any explicit philanthropy).

Any thoughts / objections?


  1. Hi Richard,

    I think this matter needs some clarification. For example, if to offset something is to make a behavior not immoral, then it seems tautological to me that one cannot offset moral wrongs. If a behavior is morally wrong/immoral, then so it is.

    That aside, with regard to the hypothesis that there is no murder offsetting because we have a right that our neighbors not murder us, the behavior of the person who intends to murder a neighbor would be just as immoral if the neighbor is a non-player character (NPC) in a simulation (given the same reasons for the murder), but the would-be murderer has no further info that allows him to figure that out. So, (using rights-talk because everyone does), unless the NPC has the right not to be killed, the account does not seem to work.

    I'm inclined to think that for agent A to have a (moral) right to X is just for other agents (who they are depends on context) to have a moral obligation not to prevent A from Xing (though I don't know that obligations are foundational, either). Moral obligations are properties of the mind of the agent having them, and do not depend (in a constitutive manner) on any property of other agents, and so they don't depend on rights. The NPC is not an agent, and so has no rights, but the perpetrator still has a moral obligation not to kill it, at least as long as it does not have any info that indicates it's a NPC.

    Alternatively, one can say the NPC has that right, but an actual person wouldn't if the perpetrator rationally believes he's dealing with a NPC. But even then, rights would seem to depend not on the mind of the entity having them, but exclusively on the mind of the agent who might have an obligation not to respect them, and so it seems to me the more basic stuff are still obligations, and rights are only derivative (the only difference would seem to be whether one attributes rights to non-agents).

    1. Hi Angra,

      To avoid tautology, read 'wrongs' here as pro tanto, like rights violations (which I take it can be justified / outweighed by other considerations).

      I'm not sure that I follow the rest of your comment, but I find it helpful to distinguish objective and subjective modes. Objectively, no harm is done when you "murder" a fictional character, so there is nothing to offset. Subjectively, the agent ought to believe that their action is a rights-violation and hence not offsettable (and if they follow through on the "murder" then it reveals vicious character, etc.). So I don't see the problem for my account here.

    2. In re: wrongs, and with respect to the "pro tanto" stipulation, I'm not sure that works, for the following reason: if we're talking about legitimately offsetting a harm, it seems implicit that if the harm were to be done without taking measures to offset it, the behavior would be morally wrong/immoral. In short, in all cases we seem to have a pro-tanto wrongful behavior. Now, you might say that in some cases, the behavior is immoral even if offsetting measures are taken. I would agree, but I don't see how the account you propose helps us distinguish between the cases, since in all cases we have a behavior that would be immoral if committed without any action taken to offset the harm it causes, but only in some of those cases is it immoral to carry it out while taking other measures to offset it.

      In re: the attempted murder of the NPC (attempted because it's a NPC, so it cannot be murdered), my point was that even if the NPC has no rights, the action cannot be offset, so your account didn't seem to work. For example, you say the neighbors do not have a right that people refrain from exhaling carbon dioxide into the atmosphere, and so carbon offsetting is legitimate, but they do have a right not to be murdered, so murder offsetting is not legitimate. My point was that even though the NPC does not have the right not to be shot in the head for fun, shooting the NPC in the head for fun is not legitimate if the shooter believes and should believe that the NPC isn't a NPC but a person (and in a number of other cases too, but at least in that case). Now if I'm reading the subjective/objective account correctly, your view would seem to hold that what the agent ought (in an epistemically rational manner) to view as a violation of rights can't be legitimately offset (so, the action would be immoral), even if in fact there is no violation of rights. Is that a correct interpretation of your account?

      At any rate, the other problem seems to me to remain: if we try to avoid a tautology by adding a "pro tanto" stipulation, then it seems that the harms that we're trying to offset still involve a pro-tanto wrongful behavior, and indeed it would be morally wrong to cause them without the offset in question.

    3. Diffuse harms of the sort that seem offsettable don't involve rights violations, which is what I indicated 'specific wrongs' to be getting at in this context, i.e. (pro tanto) wronging an individual, not mere wrongful behaviour.

      re: the NPC case, it just depends whether you're talking about objectively or subjectively permissible/legitimate behaviour.

    4. My impression is that diffuse harms also seem to involve rights violations. For example, consider a factory that dumps a lot of mercury in a river. The damage is diffuse -- it's not clear who will get hurt -- but predictably, some people will get hurt, their lifespans will be reduced, etc. Are they not being wronged? (assuming no sims, NPC, etc.). It seems to me that they are (I would say even those who happen not to get hurt in the end).
      Now suppose the amount of diffuse harm is less than that. Wouldn't that imply that they're not being wronged, even if the behavior is still morally wrong as long as there are no steps for offsetting it? I don't know why that would be. The diffuse nature of the harm does not seem to prevent the behavior from being an instance of wronging someone (as the "lots of mercury" example seems to indicate), and if the amount of harm is not enough to make the behavior not immoral, I'm not sure why they are not being wronged, even if to a lesser extent. My impression is that they are being wronged.

      Still, assuming diffuse harms do not involve pro-tanto wronging or rights violations, we can consider the case of taxation (or other laws, but this case seems to work). Is the harm diffuse?
      I don't think so in general. For example, if a new tax law is proposed, people can figure how much it would harm them, oppose the tax, protest and so on. In addition, it seems very plausible that some taxes would be cases of violations of people's rights. Moreover, it may well be that some specific, famous persons (e.g., owners of some big companies, etc.) are clearly going to be harmed by a new proposed tax, and they would be wronged if - say - the tax had no legitimate purpose. Yet, imposing the tax might still be not immoral, depending on its purpose, predictable outcome, etc.

      In re: NPC. I would say that shooting the NPC in the head for fun (in the specified scenario) is impermissible, immoral, blameworthy, etc., in the usual sense of the words in colloquial speech. I do think permissibility depends on the information available to the agent that is acting, so I guess it's "subjective" in the sense you have in mind, but I think it's objective also in the usual sense of the words, i.e., there is an objective fact of the matter as to whether it's permissible, immoral, etc.

  2. Hi Richard,

    It looks as though you are appealing to a distinction between doing something wrong (or at least, wrong absent offsetting), and doing a wrong *to* someone (i.e. "wronging" them), and saying that the former can be offset but the latter cannot. That sounds interesting and potentially plausible, but I wonder if that distinction might cut across the distinction between harming specific individuals and harming diffuse collections of people. I might harm some specific individual without wronging them (if I reject their romantic advances), and perhaps I might wrong a collection of people (if I outlaw a religion).

    (I'm guessing one might reply that the latter just consists in my wronging every member of that collection individually. I don't know quite what to make of that. I'm guessing there's probably interesting discussion out there on this and/or on related issues about group rights.)

    If these distinctions did come apart, there might be an interesting question about whether (e.g.) climate change involves wronging future generations (as a collective): if so, perhaps it is not offsettable despite harming a diffuse collection of people.


    1. Hi Alex, good point -- I agree that many specific harms aren't wrongings (though it'd be interesting to find an example of this that was still wrongful). Going in the other direction, I'm skeptical of group rights, and hence sympathetic to your anticipated reply that they really just involve the individual members being wronged.

      The non-identity problem creates a significant barrier to thinking that future generations are wronged by climate change (or indeed anything), but perhaps a more 'collective' approach would sidestep that.

    2. I'm not sure what I'm missing (please let me know), but if future generations cannot be wronged because of the non-identity problem, then as far as I can tell, the account implies that any pro-tanto wrongful action that will harm only them can be offset, or else someone else is wronged. However, the latter is at best unclear, and improbable if one assumes that diffuse harms are not examples of wronging someone, whereas the former is clearly false: one cannot offset, say, sending a bunch of ships into space, programmed to start lobbing asteroids in Earth's direction 200 years from now, or burying a nuke in one's backyard in a large city, programmed to go off in 200 years (even if one makes sure it's safe before that), and so on (always for fun, or just because one can, or some similar motivation).

      That aside, I don't understand why diffuse harms aren't examples of wronging people, or if they are, why they can be offset. For example, things like celebratory gunfire in a town cannot be offset even if no one gets hurt (someone might), but there is no specific harm (assume no one in fact gets hurt). One could argue that everyone in the city is wronged; but for that matter, why would that not be so if one, say, starts a mining operation nearby which pollutes their river, air, etc., to the extent current mining operations regularly do?

    3. Thanks Richard, that all sounds reasonable. Just because you got me thinking about whether there are any harmful and wrong non-wrongings(!): If I promise A that I won't reject B's romantic advances, and then I reject those advances, I might have done something wrong, and harmed B, and yet not wronged B. I suppose the more difficult question is then whether an act might be wrong *in virtue of* the harm it does someone, and yet not wrong that person. Perhaps some harms to self? (To be clear, this is all just a tangent and not relevant to your central claims!)

    4. Angra -- cool cases, I'll have to think on them some more.

      Alex -- what fun, it's like an ethics Gettier case!

