Sunday, March 15, 2020

No Utility Cascades

Max Hayward has an interesting paper, 'Utility Cascades', forthcoming in Analysis.  We're told: "Utility Cascades occur when a utilitarian’s reduction of support for an intervention reduces the effectiveness of that intervention, leading the utilitarian to further reduce support [...] in a negative spiral." (p.1)  The basic puzzle Hayward sets up involves the following additional assumptions:

(1) Holding fixed their (practical) normative commitments, accurate/rational updating can (often!) tip utilitarians into a utility cascade.

(2) This "negative spiral" predictably makes things worse than if the utilitarian had stuck with their initial level of support, even given that the intervention is less effective than initially believed.

Putting these together, we obtain a surprising apparent tension between epistemic and practical normativity for utilitarians.  For, Hayward suggests, it would often promote predictably better results for utilitarians to bury their heads in the sand rather than rationally updating on new evidence of the sort that might trigger a utility cascade.

It's a fun argument.  But I don't see how (1) and (2) could both be true.  After all, if the act of reducing support for intervention X would predictably have worse results than maintaining initial levels of support, then utilitarianism straightforwardly requires the latter.

Why does Hayward believe otherwise?  He describes Bill, supposedly an act utilitarian / effective altruist agent, who "supports highly effective initiatives /in proportion to/ their effectiveness score [expected value]" (emphasis added).  As a result, any drop in expected value (e.g. due to new evidence) necessarily reduces Bill's investment in an intervention.

But this is not an accurate representation of utilitarian (or effective altruist) normative commitments.  You should not allot half as much funding to a charity that's half as good as others.  You should give the suboptimal charity nothing, and instead send every dollar to the very best charity (until it is so saturated with funds that it no longer offers the best marginal return on your next dollar, which should then instead go to the next (now-best) intervention, and so on).
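
To make the marginal logic concrete, here's a minimal Python sketch of that greedy allocation rule.  Everything in it is a stylized assumption for the sake of illustration, not anything drawn from Hayward's paper: two interventions X and Y, funding in discrete units, and diminishing marginal returns (the standard effective altruist picture).

def allocate(budget_units, marginal_value):
    """Greedily give each successive unit of funding to whichever
    intervention offers the highest marginal expected value for its
    next unit.  Under diminishing returns this yields the optimal split."""
    allocation = {name: 0 for name in marginal_value}
    for _ in range(budget_units):
        best = max(marginal_value,
                   key=lambda name: marginal_value[name](allocation[name]))
        allocation[best] += 1
    return allocation

# X's k-th unit is worth 10/(k+1); Y's units are worth a flat 3 each.
# X dominates until its marginal value dips below Y's, then Y takes over.
print(allocate(10, {"X": lambda k: 10 / (k + 1), "Y": lambda k: 3}))
# -> {'X': 3, 'Y': 7}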

As a result, it's entirely possible that a slight reduction in X's expected value makes no difference to how much Bill should fund it, so long as the (marginal) expected value of funding X to this degree still exceeds the expected value of shifting any of this funding to some alternative intervention Y. In such a case, assumption (1) above fails to hold. Otherwise, (2) is false: if shifting some funding to Y is really (expectably) for the best, despite reducing the effectiveness of the remaining X-funds (if any), then it's not true that it would've been (predictably) better for Bill to ignore the evidence and keep funding X at initial levels.  In neither case does epistemic rationality for the utilitarian agent make things (predictably) go worse.  Contra Hayward, in ordinary cases (i.e., barring an evil demon who punishes epistemic rationality, or the like), there are no "utilitarian reasons to adopt ostrich behaviour".
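
Re-running that toy model (same made-up numbers, same allocate function as above) after a downward revision in X's estimated effectiveness illustrates both horns of the dichotomy:

# A 5% discount leaves the optimal allocation untouched: X's third unit
# (0.95 * 10/3, about 3.17) still beats Y's flat 3, so assumption (1)
# fails and no cascade begins.
print(allocate(10, {"X": lambda k: 0.95 * 10 / (k + 1), "Y": lambda k: 3}))
# -> {'X': 3, 'Y': 7}

# A 20% discount shifts one unit to Y.  Under diminishing returns the
# greedy split is expected-value-optimal, so the shift is the best
# response to the evidence, and (2) fails: holding funding at the
# initial level would not have been predictably better.
print(allocate(10, {"X": lambda k: 0.8 * 10 / (k + 1), "Y": lambda k: 3}))
# -> {'X': 2, 'Y': 8}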

Update: Here's a simple way to bring out the incoherence in Hayward's argument.  Consider his central example of "effectanol" funding.  Draw up a table which lists the expected values of donating in the following ways: <$10k to effectanol, $0 elsewhere>, <$8k, $2k>, <$6k, $4k>, ... , <$0 to effectanol, $10k elsewhere>.  After all the evidence regarding its effectiveness has come to light, which row of the table has the highest expected value? To get the cascade going, he needs each subsequent row of the table to have a higher expected value than those above it. But then we are to suppose that it would have been predictably "better" to stick with fully funding effectanol, implying that the first row has the highest expected value. There's no way of filling out the numbers so that both of those claims come out true.
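
This can even be checked mechanically.  The following sketch (the helper names and six-row table are mine, purely for illustration) enumerates every possible ordering of the rows' expected values and confirms that no ordering both drives the cascade and leaves the first row best:

import itertools

def cascade_going(evs):
    """Each move down the table looks better than the row above it --
    the condition needed to drive the cascade step by step."""
    return all(later > earlier for earlier, later in zip(evs, evs[1:]))

def ostrich_better(evs):
    """The first row (full effectanol funding) beats every other row --
    the condition needed for Hayward's claim that ignoring the
    evidence would have been predictably better."""
    return evs[0] == max(evs)

# Both conditions depend only on the relative ordering of the six
# expected values, so checking all permutations of 0..5 covers every
# (tie-free) table.  None satisfies both: a strictly increasing column
# puts its maximum in the last row, not the first.
assert not any(cascade_going(evs) and ostrich_better(evs)
               for evs in itertools.permutations(range(6)))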

3 comments:

  1. Since the paper isn’t up yet maybe I’m getting it wrong, but is the situation under consideration really a kind of collective action problem, in which the widespread belief that the intervention will be successful is necessary for it to succeed?

    For instance, you might imagine an attempt to introduce fiat currency into a barter economy of utilitarians. If they all believe it will work, they will not only vote to adopt it but also actually accept the money, and it will improve efficiency. If they stop believing it will work, they might not accept the money once it’s rolled out, making it not even worthwhile to try.

    But this kind of case also seems to be confusing questions about individual rationality with coordination problems.

    1. You can find the pre-print on Max's website [here's the PDF link]. He does discuss a variant case that includes co-ordination issues, but the core idea doesn't depend on that; rather, it depends on some interventions having increasing marginal value.

  2. Hi! TruePath gives a nice example, and yes, that is right - belief in the probable success of the policy rationalises the actions that will make it true that the policy succeeds. In fact, these sorts of co-ordination cases are /examples/ of increasing marginal utility - the expected value of adopting the fiat currency goes up the more people do likewise.

    How do coordination problems relate to individual rationality? Because I think that act-utilitarians who think like effective altruists have problems co-ordinating with /themselves/, because they can't bind their future behaviour. I have a longer explanation for why I think that's true, which I may post here later. But I might be wrong! If not, I think the interpersonal variant (which is the one I care about more) still stands.

    I should add, Richard - I don't take my arguments to count against utilitarianism per se. What they challenge is the combination of utilitarianism with certain normative claims about how to choose and reason that seem to be embodied in effective altruism. I think that the best outcomes aren't promoted when we think and deliberate in those ways, and hence, there are utilitarian reasons to reject the way of choosing promoted by effective altruists.

