Sunday, November 19, 2017

Giving Game 2017 results

This past week I ran a 'Giving Game' for my Effective Altruism class, letting each student decide (after class discussion) how to allocate £100 of my charitable budget for the year.  There was just one restriction: if they wanted to pick something other than one of the four EA Funds options (which have expert managers directing funds in the fields of "global health & development", "animal welfare", "long-term future", and "EA community"), they had to convince at least one other classmate to join them.  In the first seminar group, half the class ended up choosing alternative options; in the second, all stuck with the EA Funds.  The end result was a bit more varied (and less conservative) than the first time I tried this, so that was interesting to see.  (I think it helped both to allow individual discretion rather than requiring group consensus decisions, and also to have the new EA Funds available, enabling students to contribute responsibly to a cause area without having to identify or select particular outstanding organizations within it.  You can now just make the value judgment, and defer to trusted experts on the empirical details.)

Friday, November 03, 2017

Drawing the Consequentialism / Deontology Distinction

I previously mentioned that Setiya's 'Must Consequentialists Kill?' draws the consequentialism / deontology distinction in a way that I think we should resist.  (This is part of what allows Setiya to reach his surprising-sounding conclusion that "consequentialists" aren't committed to killing one to prevent more killings.)  Setiya defines "consequentialism" as the conjunction of two theses:

ACTION-PREFERENCE NEXUS: Among the actions available to you, you should perform one of those whose consequences you should prefer to all the rest.
AGENT-NEUTRALITY: Which consequences you should prefer is fixed by descriptions of consequences that make no indexical reference to you.

Friday, October 20, 2017

Iterating Badness in the Paradox of Deontology

In 'Must Consequentialists Kill?' (forthcoming in J Phil), Setiya convincingly argues against the "orthodox" view that commonsense verdicts about the ethics of killing entail agent-relativity.  Instead, he observes: "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either." (p. 8 of the pre-print version)  For example, it's not just the agent who should prefer not to kill one to prevent five killings: we should all prefer that others likewise refrain from killing one to prevent five other killings.  The preference here mandated by commonsense morality is thus agent-neutral in nature: it makes no essential reference to your role in the situation.

Sunday, October 08, 2017

Intelligible Non-Natural Concerns

I've previously argued that -- even by non-naturalist lights -- what matters are various natural properties (e.g. causing pleasure or pain), and the role of the non-natural normative properties is instead to "mark" the significance of these natural properties.

But it's worth flagging that there are exceptions. While I take it that typically what matters are natural features of the world, this is not a universal restriction on what matters. After all, normative properties plausibly have the further normative property of being worthy of philosophical scrutiny. So I do not deny that there may be special cases when it is perfectly reasonable to take an interest in morality de dicto. (Responding to moral uncertainty may be another such case.) My claim was the more modest one that non-naturalism does not commit us to having non-natural properties take center stage in our moral lives.

The special cases where normative properties themselves are of legitimate interest are precisely cases in which it no longer seems perverse or unintelligible to take a special interest in a non-natural property. There's clearly nothing unintelligible about taking a philosophical interest in non-natural properties, after all. (They raise all sorts of interesting questions!) The case of moral uncertainty may be less obvious, so let me discuss that a bit further.

Monday, October 02, 2017

Harms, Benefits, and Framing Effects

Kahneman and Tversky famously found that most people would prefer to save 200 of 600 people for certain over a 1/3 chance of saving all 600, and yet would prefer a 1/3 chance of none of the 600 dying over a guaranteed 400 deaths out of 600.  This seems incoherent: our preferences over a pair of options are reversed merely by describing the very same case using different words.
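
To see why the two frames describe the very same choice, it helps to lay the expected values out explicitly.  Here's a minimal sketch of the arithmetic (in Python), assuming the standard 600-lives vignette; the option labels are just for illustration:

```python
from fractions import Fraction

TOTAL = 600  # lives at stake in the standard vignette

# "Gain" frame (lives saved):
ev_sure_save   = 200                     # Option A: 200 saved for certain
ev_gamble_save = Fraction(1, 3) * TOTAL  # Option B: 1/3 chance all 600 saved

# "Loss" frame (lives lost) -- the same outcomes, redescribed:
ev_sure_loss   = 400                     # Option C: 400 die for certain
ev_gamble_loss = Fraction(2, 3) * TOTAL  # Option D: 2/3 chance all 600 die

# Expected survivors are identical across all four options:
assert ev_sure_save == TOTAL - ev_sure_loss == 200
assert ev_gamble_save == TOTAL - ev_gamble_loss == 200
```

Since the expected outcomes are identical, the popular pattern of responses (the sure thing in the "saving" frame, the gamble in the "dying" frame) can't be rationalized by expected survival numbers alone.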

In 'The Asian Disease Problem and the Ethical Implications of Prospect Theory' (forthcoming in Noûs), Dreisbach and Guevara argue that the folk responses are compatible with a coherent non-consequentialist view.  Their basic idea (if I understand them correctly) is that the "400 will die" case is suggestive of a different causal mechanism: perhaps the 400 die from our intervention, so the choice is between guaranteed or gambled harms, whereas the "saving" choice is between guaranteed or gambled benefits.  They then suggest that non-consequentialist principles might reasonably mandate a special aversion to causing guaranteed harm (and so think it better to risk harming either all or none, despite no difference in expected value between the sure thing and the gamble).  In the first case, by contrast, they suggest that non-consequentialists might think it easier to justify saving some lives as a "sure thing" rather than taking a gamble that would most likely save nobody at all.

Sunday, August 06, 2017

Anomaly v Huemer on Immigration

People often assume that to allow immigration is an act of charity: a country generously sharing its land and institutions with outsiders who have no real claim to be there.  Michael Huemer's work forcefully upends this assumption, showing that immigration restrictions are in fact a form of harmful coercion (like blocking a starving man from accessing a public market where he could trade for food). This reconceptualisation shifts the argumentative "burden", insofar as we generally accept that it is much more difficult to justify coercively harming someone (a seeming rights-violation) than to merely refrain from assisting them.

Sunday, May 21, 2017

Nanoseconds that Matter

Take an arbitrarily short duration -- I'll speak of 'nanoseconds' for familiarity and convenience, but you could use an even smaller measure of time.  Could removing a mere (arbitrary) nanosecond plausibly make your life any worse on the whole?  You might think not, on the basis that "surely nothing of any significance could occur during such a short time."  On the other hand, if you remove all the nanoseconds then we have no life left at all, which is certainly a significant difference.  Is it coherent to think that many individually worthless moments might collectively have value?

I have my doubts, and have previously suggested that such putatively vague goods (as a "sufficient duration to matter") are better understood as graded and/or involving threshold effects.  A friend suggested minuscule scales of time as a challenge to this view, but I think my approach still makes good sense of this case.  Here's how...
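
To give a feel for the graded option (a toy illustration only, not necessarily the account developed below): suppose the value of a stretch of life is a smooth, increasing function of its duration.  Then every nanosecond has a tiny but strictly positive marginal value, and the tiny contributions integrate to the value of the whole life:

```python
import math

def value(T):
    """Toy value function: smoothly increasing in duration T (in seconds),
    with diminishing marginal value.  Purely illustrative."""
    return math.log1p(T)

def marginal_value(T, dt):
    """Approximate value lost by trimming a duration dt from a life of
    length T: v'(T) * dt.  (A direct subtraction value(T) - value(T - dt)
    would vanish in floating point at nanosecond scales.)"""
    return dt / (1 + T)

T = 70 * 365.25 * 24 * 3600   # a ~70-year life, in seconds
dt = 1e-9                      # one nanosecond

print(marginal_value(T, dt))   # ~4.5e-19: negligible, yet strictly positive
print(value(T))                # ~21.5: what all the tiny contributions sum to
```

On a graded picture like this, each moment matters a vanishingly small amount rather than not at all, so there's no mystery about how the moments jointly make up a valuable whole.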

Aggregating the Right Moments

Should we prefer to give one person half a million minutes (i.e. one year) more life, or to give a million people one minute more each?  If iterated a million times over (once for each person in the million), the latter repeated choice is clearly better for all (by half a million minutes).  Moreover, as I suggested in comments to that post, if we assume that the million choices are independent of each other in value -- that is, the value of making one such choice does not depend on how the other choices are made -- then it quickly follows that it's better to give the million tiny benefits rather than the one big benefit, even in a one-off choice situation.
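
A quick check of the iterated arithmetic, under the assumption (as in the original puzzle) that each of the million rounds of the big-benefit option goes to a different one of the million people:

```python
N = 1_000_000   # population size; also the number of iterated choices

# Option A, iterated N times: each round gives one (different) person
# 500,000 extra minutes, so in the end each person has gained:
per_person_A = 500_000          # minutes (roughly a year)

# Option B, iterated N times: each round gives every person 1 extra
# minute, so in the end each person has gained:
per_person_B = 1 * N            # 1,000,000 minutes (roughly two years)

# The repeated tiny-benefit policy leaves everyone better off:
assert per_person_B - per_person_A == 500_000
```

Each round of Option B also confers more total minutes than its Option A counterpart (1,000,000 vs 500,000 person-minutes), which is what the independence argument then extends to the one-off case.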

However, it's worth flagging that on one very natural (but philosophically distorting) way of imagining the situation, the independence assumption will not hold.

Saturday, April 22, 2017

Universalizing Tactical Voting

I regularly come across two objections to tactical voting, i.e. voting for Lesser Evil rather than Good in hopes of defeating the Greater Evil candidate.  One objection is just the standard worry that individual votes lack instrumental value, debunked here.  More interestingly, some worry that tactical voting is positively problematic, morally speaking, on grounds of its putative non-universalizability.

On one version of the worry, tactical voting involves (something approaching) a contradiction in the will, insofar as even if those who most prefer Good constituted a majority, they could get stuck in the inferior equilibrium point of all (unnecessarily, and contrary to their collective preference) supporting Lesser Evil.  On another version of the worry, tactical voting involves (something like) a contradiction in conception, insofar as it involves responding to how others plan to vote, which might seem to depend upon those others voting non-tactically, i.e. not waiting to first learn how you plan to vote.
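
To make the first worry concrete, here's a minimal sketch (with made-up numbers) of how a majority that prefers Good can get stuck in the inferior all-support-Lesser-Evil outcome under plurality rule:

```python
def winner(votes):
    """Plurality rule: the candidate with the most votes wins."""
    return max(votes, key=votes.get)

# Suppose 60 voters rank Good > LesserEvil > GreaterEvil,
# and 40 voters support GreaterEvil.

# If all 60 vote tactically for LesserEvil, LesserEvil wins:
print(winner({"Good": 0, "LesserEvil": 60, "GreaterEvil": 40}))
# -> LesserEvil

# No single Good-supporter gains by switching back to Good
# (LesserEvil still beats GreaterEvil 59-40, and Good still loses):
print(winner({"Good": 1, "LesserEvil": 59, "GreaterEvil": 40}))
# -> LesserEvil

# Yet if all 60 had voted sincerely, Good would have won outright:
print(winner({"Good": 60, "LesserEvil": 0, "GreaterEvil": 40}))
# -> Good
```

The point is just that universal tactical voting can be individually stable while collectively dispreferred by the majority: a coordination failure, which is why it only approaches (rather than strictly constitutes) a contradiction in the will.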

Monday, April 10, 2017

Assessing the NMC's Defense of its Independent Midwifery Ban

After receiving much criticism for their effective ban on independent midwifery, the NMC released a document [pdf] that seeks to explain and justify their position (see especially the fourth and final page).

Their central conclusion is that they are simply following orders, and it isn't their responsibility to do anything to mitigate the harms they're thereby causing: