Friday, September 27, 2019

Worthless Harm-Prevention and Non-Existence

We typically assume that it's really important to prevent great harms.  And indeed, usually it is.  But there are at least a couple of exceptions.

Most obviously, some harms might be outweighed by greater associated benefits.  Benatar thinks it's terrible to allow someone to come into existence given all the subsequent harms their life will contain (no matter how overall happy their life will be).  That's obviously nuts.  These harms are more than compensated for by the overall happiness of the life.  So it's only uncompensated harms, or "net harms", that we should seek to prevent.

More interestingly, even net harms may not be worth preventing (in a certain way).  For suppose the harm is comparative in nature: the harmful event does not put the victim in an intrinsically bad state, but rather harms them in virtue of depriving them of some much better alternative.  There are then two very different ways in which a comparative harm could be prevented.  You could ensure that the victim gets the better alternative.  (That's the good way!)  Or you could prevent the "better alternative" from ever arising as a possibility to be deprived of in the first place.  There is generally no reason whatsoever to prevent a harm (however grave) in this second way.

Acquired Tastes and Necessary Interests

Suppose that what time-relative interests you'll have in the future depends upon what decision you make now.  For example, in McMahan's case of The Cure, you can either live out five years of happy life continuous with your current psychology, or else take a cure that greatly extends your future life but at the cost of introducing an immediate radical psychological discontinuity, such that the future happy person -- while numerically identical to you -- has nothing psychologically in common with your current self (in terms of memories, personality, values, etc.).  Now: if you never take the cure, then there are no time-relative interests that would be served by taking it, since the psychological discontinuity discounts the value (to you) of the discontinuous future life to zero.  But if you do take the cure, then you have many future time-slices with strong time-relative interests in this future.  So, is it prudentially rational to take the cure or not?  Do contingent time-relative interests count?

Thursday, September 26, 2019

Stacking Time-Relative Interests

Jeff McMahan, in The Ethics of Killing, introduces the notion of a 'time-relative interest' in one's future life possibilities, the strength of which depends upon the degree of psychological relatedness that would hold between one's present and future self.  This explains why early abortion is harmless (contra Marquis): the early embryo has no psychology, and hence no unity whatsoever with the future person.  It thus has no time-relative interest in having that possible future be realized, so a death that "deprives" it of that future is not contrary to its interests, or in other words, does no harm.  A later fetus or newborn infant, by contrast, would have some (slight) psychological connectedness with its adult future self, so is harmed by early death (at least slightly).

It's a neat account.  One puzzle that emerges (which I haven't seen discussed elsewhere) concerns how we should aggregate time-relative interests.  In explaining the wrong of pre-natal injury (which causes harm to the future adult), McMahan writes that "we must evaluate the act in terms of its effect on all those time-relative interests it affects, present or future." (283)  That is, while the pre-natal being has little or no time-relative interest in avoiding the pre-natal injury, the future adult's time-relative interests would be gravely affected (we may suppose), which explains why the pre-natal injury is a morally weighty affair.  (In the case of abortion, by contrast, there are no future time-relative interests to be negatively affected; the abortion prevents those interests from arising in the first place.)

So far, so good.  But how exactly are we to take all the time-relative interests into account?  Utilitarian "equal consideration of interests", adding them all up, clearly won't work.  That would cause massive over-counting.  Compare the deaths of a pair aged 20 and 40.  Suppose that each would have lived for exactly twenty more years of (equally) happy life if not for their current demise.  But the 40-year-old has twice as many time-relative interests that are thwarted, and many of them will be stronger (assuming stronger connections between his 21-year-old self and his current self than are found between the 20-year-old and her past infant self).  So aggregating intrapersonal time-relative interests would lead us to conclude that the 40-year-old's death is a much greater harm.  But that isn't plausible.

Suppose instead that the (apparent) 40-year-old popped fully formed into existence one year ago, as an apparent 39-year-old.  This should make no difference to how bad his death now, at forty, is (given that he would be deprived of just as good a future, to which he would be just as strongly psychologically connected, etc.).  But he has many fewer past time-relative interests in avoiding this death.  So if we are to add them all up, they will now add up to less.

Surely, a person's interest in some event should only be counted once, not once for every moment that they were alive.  This suggests to me that the appropriate way to "take into account all the time-relative interests" in some event is not to count them all, but rather, to find the one strongest time-relative interest from across one's lifetime of moments, and count just that one.  One's weaker time-relative interests in an event are to be subsumed within (not added to) one's greatest time-relative interest in it.

Typically, your strongest time-relative interest in an event will be concurrent with that event.  So this explains why the harmfulness of an event typically doesn't depend upon the length of your life history.  But in the rare cases when one's strongest time-relative interest is found at a different time (as in the pre-natal injury case), it can appropriately override the weaker interest one had at the time of the event in question.
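To make the contrast vivid, here is a minimal toy sketch (my own illustration, with entirely made-up interest strengths, not anything from McMahan) of how summing all of a person's momentary time-relative interests in avoiding death diverges from counting only the strongest one, in the 20- vs 40-year-old case above:

```python
# Toy numbers only: each entry is the strength of one past time-slice's
# time-relative interest in the subject avoiding death now, modelled as
# growing with psychological connectedness to the self at the time of death.
twenty_year_old = [age / 20 for age in range(1, 21)]  # 20 momentary interests, 0.05 .. 1.0
forty_year_old = [age / 40 for age in range(1, 41)]   # 40 momentary interests, 0.025 .. 1.0

def summed(interests):
    """Aggregate by adding up every momentary time-relative interest."""
    return sum(interests)

def strongest(interests):
    """Aggregate by counting only the single strongest interest."""
    return max(interests)

# Summing: roughly 10.5 vs 20.5 -- the 40-year-old's death looks far worse.
print(round(summed(twenty_year_old), 2), round(summed(forty_year_old), 2))
# Strongest-interest rule: 1.0 vs 1.0 -- the two deaths come out alike.
print(strongest(twenty_year_old), strongest(forty_year_old))
```

On the summing rule the longer life history inflates the badness of death; on the strongest-interest rule the two deaths come out equally bad, which seems the right result here.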

Does that seem right?

Tuesday, August 27, 2019

Ideological Ascent and Asymmetry

There's a certain dialectical move I sometimes see, wherein you criticize someone's political conduct as unreasonable on grounds that abstract away from the (first-order) details that they're actually responding to.  We might call this ideological ascent, as the critic insists on looking only at abstract features of the dialectical situation, e.g. the mere fact that it involves an "ideological disagreement", without any heed to the actual details of the dispute.

Ideological ascent seems to presuppose a symmetrical view of political/ideological merit: that "both sides" of a dispute are (at least roughly) equally reasonable.  This convenient assumption saves one from the hard work of actually evaluating the first-order merits of the case under dispute.  (See also: in-betweenism.)  Alas, people have been known to advance unreasonable political views from time to time.

Saturday, August 17, 2019

Normativity for Value Realists

At the recent Rocky Mountain Ethics Congress (great conference, btw), I was surprised to learn that Alastair Norcross doesn't believe in normative reasons.  He's happy to speak of "moral reasons", "prudential reasons", and even "Nazi reasons", but seems to view these all as objectively on a par.  He happens to prefer the framework of moral standards to the Nazi one, and will condemn Nazis accordingly, but not in any way that implies that they are making an objective, framework-independent practical error.  Since I don't think that framework-relative "reasons" are genuinely normative reasons at all ("Nazi reasons" do not count as providing genuine considerations in favour of genocide), I interpret Alastair's view as a form of normative nihilism.

Interestingly, though, Alastair is a value realist.  He thinks there is intrinsic value (and disvalue), and seems to accept a traditional hedonistic account of these (the wrong view, IMO, but not our topic for today).  Such Value Realism may naturally lead one to a broader Normative Realism, I think, in a couple of ways.  So I'll address the rest of my post to any readers who share Alastair's starting point of Value Realism without Normative Realism, and see whether either of these arguments is persuasive.

Wednesday, July 17, 2019

Does Welfare have Diminishing Marginal Value?

Many people are drawn to the Prioritarian view that "Benefiting people matters more the worse off these people are." (Parfit 1997, 213)  Importantly, this is not just the (utilitarian-compatible) idea that many goods have diminishing marginal value, so that better-off people are likely to benefit less than worse-off people from a certain amount of material goods.  Even after accounting for all that, the idea goes, the interests of the worse-off just matter more; we should even give a lesser benefit to the worst off rather than a (genuinely) slightly greater benefit to someone who is already well-off.

This view always struck me as deeply misguided. By effectively attributing diminishing marginal value to welfare itself, you can end up implying that, even considering just a single individual, it might be "better" to do what is worse for him (e.g. giving him a smaller benefit at a time when his quality of life is lower, rather than a greater benefit at a different time).

But there is a closely related view that is more theoretically cogent.  Rather than attributing diminishing marginal value to welfare itself, you attribute it to the basic goods that contribute to one's welfare (happiness, etc.).  This goes beyond the familiar idea that material resources have diminishing instrumental value, say for making you happy.  We are now introducing a kind of non-instrumental diminishing value, by saying that happiness itself makes more of a difference to your welfare the less of it you have.
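To pin down the structural difference, here's a minimal sketch (my own toy formalisation: the square-root curves, the specific numbers, and the application of priority weights to momentary welfare levels are all assumptions for illustration) of the two places one might locate the diminishing returns:

```python
import math

# (1) Prioritarianism: moral value is a concave function of welfare itself.
def priority_value(welfare):
    return math.sqrt(welfare)

# (2) The related view: welfare is a concave function of a basic good
#     (happiness); moral value just is total welfare.
def welfare_of(happiness):
    return math.sqrt(happiness)

# Intrapersonal test for view (1): one person, two options.
# Option A: a smaller welfare benefit (+4) at a low-welfare time (10).
# Option B: a larger welfare benefit (+5) at a high-welfare time (100).
low, high = 10, 100
value_A = priority_value(low + 4) + priority_value(high)
value_B = priority_value(low) + priority_value(high + 5)
# True: the option that is worse for him overall (114 vs 115 total welfare)
# gets ranked as "better".
print(value_A > value_B)

# View (2): a fixed increment of happiness adds more welfare at a low level,
# but whatever maximises the person's welfare is never ranked as worse for him.
gain_at_low = welfare_of(10 + 5) - welfare_of(10)
gain_at_high = welfare_of(100 + 5) - welfare_of(100)
# True: same happiness increment, bigger welfare contribution when happiness is scarce.
print(gain_at_low > gain_at_high)
```

The first comparison reproduces the intrapersonal worry above; the second shows the alternative view's diminishing returns operating inside welfare rather than on top of it.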

Monday, July 15, 2019

Options without Constraints

Now up at PEA Soup. (Related to my earlier post, 'When Killing is Worse Than Letting Die', but with a neater framing / set-up, I think.)

Wednesday, July 10, 2019

Philanthropy Vouchers and Public Debate: Political vs Civic Advocacy

It's interesting to compare the ways we talk and think about political vs non-political (civic/philanthropic or market) agents, advocacy, and organization.  Consider the common objection to Effective Altruism, that it allegedly "neglects the need for systemic change."  I've rebutted this objection before, but a different aspect of it that I want to focus on today is that the criticism seems to presuppose that only politics can be systemic.  But why assume that?

Monday, July 08, 2019

Charity Vouchers: Decentralizing Public Spending

People sometimes object to the charitable tax deduction on grounds that it is "undemocratic", incentivizing wealthy individuals to exert philanthropic influence instead of filling the public purse. On the other hand, well-targeted philanthropy surely achieves more good than paying extra to the government (which may just go to paying down the public debt, funding unnecessary wars, military parades for the Great Patriotic Leader, corporate welfare, and tax breaks for the wealthy).  If you were choosing where best to donate your money, "the US government" would seem an unlikely answer.  We recognize that charities could use extra funds more effectively. So it seems worth exploring ways to boost the philanthropic sector whilst avoiding the potential downside of concentrating power in the hands of the ultra-wealthy. The obvious solution: charity vouchers.

Tuesday, July 02, 2019

MacAskill on Aid Skepticism

The whole paper is great, but I especially wanted to share his concluding remarks:

Often, critics of Peter Singer focus on whether or not aid is effective. But that is fundamentally failing to engage with the core of Singer’s argument. Correctly understood, that argument is about the ethics of buying luxury goods, not the ethics of global development. Even if it turned out that every single development program that we know of does more harm than good, that fact would not mean that we can buy a larger house, safe in the knowledge that we have no pressing moral obligations of beneficence upon us. There are thousands of pressing problems that call out for our attention and that we could make significant inroads on with our resources. Here is an incomplete list of what $10,000 can do (noting, in each case, that any cost-effectiveness estimates are highly uncertain, with large error bars, and refer to expected value):
  • Spare 20 years’ worth of unnecessary incarceration, while not reducing public safety, by donating to organisations working in criminal justice reform (Open Philanthropy Project 2017b).
  • Spare 1.2 million hens from the cruelty of battery cages by donating to corporate cage-free campaigns (Open Philanthropy Project 2016).
  • Reduce the chance of a civilisation-ending global pandemic by funding policy research and advocacy on biosecurity issues (Open Philanthropy Project 2014).
  • Contribute to a more equitable international order by funding policy analysis and campaigning.
In order to show that Singer’s argument is not successful, one would need to show that for none of these problems can we make a significant difference at little moral cost to ourselves. This is a very high bar to meet. In a world of such suffering, of such multitudinous and variegated forms, often caused by the actions and policies of us in rich countries, it would be a shocking and highly suspicious conclusion if there were simply nothing that the richest 3% of the world’s population could do with their resources in order to significantly make the world a better place.

The core of Singer’s argument is the principle that, if it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do so. We can. So we should.