Friday, January 29, 2016

Expected Value without Expecting Value

I'm currently teaching a class on "Effective Altruism" (vaguely related to this old idea, but based around MacAskill's new book).  One of the most interesting and surprising (to me) results so far is that most students really don't accept the idea of expected value.  The vast majority of students would prefer to save 1000 lives for sure rather than have a 10% chance of saving a million lives.  This, even though the latter choice has 100 times the expected value.
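
(For concreteness, here is that arithmetic as a minimal Python sketch; the figures are just those from the example above.)

    p_sure, lives_sure = 1.0, 1_000            # save 1,000 lives for certain
    p_risky, lives_risky = 0.1, 1_000_000      # 10% chance of saving a million

    ev_sure = p_sure * lives_sure              # 1,000 lives in expectation
    ev_risky = p_risky * lives_risky           # 100,000 lives in expectation
    print(ev_risky / ev_sure)                  # 100.0: the risky option has 100x the expected value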

One common sentiment seems to be that a 90% chance of doing no good at all is just too overwhelming, no matter how high the potential upside (in the remaining 10% chance), when the alternative is a sure thing to save some lives.  It may seem to neglect the "immense value of human life" to let the thousand die in order to choose an option that will in all likelihood save no-one at all.  (Some explicitly assimilate the low chance of success to a zero chance: "It's practically as though there's no chance at all, and you're just letting people die for no reason!")

Another thought in the background here seems to be that there's something especially bad about doing no good.  The perceived gap in value between 0 and 1000 lives saved is not seen as nine hundred and ninety nine times smaller than the gap in value between 1000 and one million lives saved, as it presumably should be if we value all lives equally.  (Indeed, for some the former gap may be perceived as being of greater moral significance.)

Interestingly, people's intuitions tend to shift when the case is redescribed in a way that emphasizes the opportunity cost of the first option: (i) letting exactly 999,000 / 1,000,000 people die, or (ii) taking a 10% chance to save all one million.  Many switch to preferring the second option when thus described.  (Much puzzlement ensues when I point out that this is the same case they previously considered, just described in different words! In seminar groups where time permitted, this led to some interesting discussion of which, if either, description should be considered "more accurate", or which of their conflicting intuitions they should place more trust in.)  Makes me think that Kahneman and Tversky should be added to our standard ethics curriculum!

One way to make the case for expected value is to imagine the long-run effects of iterating such choices, e.g. every day.  Those who repeatedly choose option 1 will save 10k people every ten days, whereas the option 2 folk can expect to save 1 million every ten days on average (though of course the chances don't guarantee this).  Most agree that the second option is better in the iterated choice situation.
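
(A quick Monte Carlo sketch of the iterated case, assuming independent 10%-chance draws each day; the numbers are purely illustrative.)

    import random

    def ten_days(option):
        """Total lives saved over ten independent daily choices."""
        if option == 1:                          # option 1: save 1,000 for sure, each day
            return 10 * 1_000
        # option 2: each day, a 10% chance of saving 1,000,000
        return sum(1_000_000 for _ in range(10) if random.random() < 0.1)

    trials = 100_000
    average_option_2 = sum(ten_days(2) for _ in range(trials)) / trials
    print(ten_days(1))          # always 10,000
    print(average_option_2)     # roughly 1,000,000 on average, though any single run may be 0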

There are a couple of ways to argue from this intermediate claim to the conclusion that expected value should guide our one-off decisions.  One is to suggest that each of the individual choices is equally choice-worthy, and that -- from an impartial moral perspective -- the intrinsic choice-worthiness of an option should not depend on external factors like whether one gets to make similar choices again in future.  In that case, we could reach the conclusion that each individual option-2 choice is 1/10th as choice-worthy as the collection of ten such choices -- which still makes it much more choice-worthy than an option-1 choice.

The second route would be to suggest that even if one doesn't get to make this particular choice repeatedly, we may, in our lives, expect fairly often to have to make some choice or other under conditions of uncertainty.  And if we habitually violate norms of expected value, then we can expect that the overall result of our choices (across the whole of our lifetime) will be less good than if we regularly conform to this norm.  (This argument is limited in scope, though.  It arguably can't help to justify EV reasoning in cases where the probabilities are so low that even a lifetime's worth of like choices could not be expected to lead to the upside's actually eventuating at least once.  Attempts to mitigate global catastrophic risks, for example, might fall into this category.)
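
(To see where this lifetime argument gives out, here is a sketch using the standard at-least-one-success formula, 1 - (1 - p)^n, with made-up numbers for how many such choices a lifetime contains.)

    def at_least_one_success(p, n):
        """Probability that at least one of n independent p-chance gambles pays off."""
        return 1 - (1 - p) ** n

    print(at_least_one_success(0.1, 1_000))     # ~1.0: a lifetime of 10% gambles almost surely pays off at some point
    print(at_least_one_success(1e-9, 10_000))   # ~0.00001: a lifetime of tiny-probability gambles almost surely never does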

Alternatively, one might defend EV reasoning by critiquing the principles that seem to underlie the opposing intuitions.  It seems morally problematic to treat the first person saved as more important than the millionth person saved, for example.  Or to treat a 10% increase in probability as more significant when it moves us from (say) 45% to 55% likelihood, than when it moves us from 0% to 10%.  There are principled grounds for treating like considerations alike, and violations of EV seem to violate those principles.

What do you think?  Do the above arguments seem convincing? Are there better arguments for EV that could be raised here?  Or is there more to be said for preferring the low-value "safe bet" than I've appreciated thus far?

17 comments:

  1. My suspicion is that the first response is a paradigm example of how our intuitions about large numbers are untrustworthy. It's difficult to process just how great the difference between 1000 and 1,000,000 really is. Past a certain quantity, all numbers just seem "very big" to the brain, and it's hard to get a good handle on the actual differences between the values. Your rephrasing of the questions is an interesting way of trying to get around this problem: perhaps it's easier to perceive the similarity between 999,000 and 1,000,000 than it is to properly appreciate the difference between 1000 and 1,000,000.

    One additional justification for choosing the "Save 1000" option would be an appeal to maximin-style reasoning. Saving 1000 with certainty does mean that you avoid the worst possible outcome -- saving no one. But it's obviously controversial when one should follow the maximin rule, so to make that case properly, we'd need to set out some criteria for when the rule is to be followed and then show how these circumstances meet them.

    Replies
    1. Yes, the "we can't really grasp large numbers" problem definitely seems to be playing a role here!

  2. It would be interesting if we could compare people's estimation of their chance of saving someone with the actual chance.

    If it turned out that, for example, people are overly optimistic when they estimate the chance of saving someone when the chance is small, this might make the intuitive response less sub-optimal in real-world cases where the probability isn't strictly known.

    Say, for example, that in reality people always over-estimate their chance of saving someone by 10 percentage points; then the intuition in the presented example would be correct, because we would actually have a 90% chance of saving 1000 people, or a 0% chance of saving a million.

    This would be a subconscious correction to a subconscious bias, which would be interesting.

    (I don't think this is likely though, it seems more likely that our intuition is just wrong :) )

    Replies
    1. Hi Jim, that's an interesting suggestion (and fits well with GiveWell's warnings about trying to implement EV reasoning in practice), but I suspect our bias/caution here is much greater than would really be optimal...

    > (This argument is limited in scope, though. It arguably can't help to justify EV reasoning in cases where the probabilities are so low that even a lifetime's worth of like choices could not be expected to lead to the upside's actually eventuating at least once. Attempts to mitigate global catastrophic risks, for example, might fall into this category.)

    A simpler example would be a lottery: on occasion, lotteries can be +EV, but they are still a terrible idea for an individual to play. Sometimes, though, syndicates like the famous Parisian syndicate that made Voltaire rich will play the lottery and win. Why? Because if you don't have a high enough chance of winning, you will tend to go bankrupt and be unable to make *any* further +EV choices (gambler's ruin). A simple expected-value calculation is static and assumes one can make choices indefinitely; but more realistically, we have limited resources, and if we diverge from the Kelly criterion (or something akin to it), we will be unable to afford opportunities that come up and will soon be out of the game entirely, leading to extremely low expected value. Considered singly, the expected value of a low-probability bet may be high; considered as part of a series of decisions stretching out until our death, its expected value is extremely low, because losing pushes us closer to bankruptcy and thus to losing our ability to engage in thousands of other bets.

    This resolves the lottery paradox: as a single person, we can never buy up enough tickets to make winning cumulatively likely, and attempting to do so will drive us bankrupt; but as a large syndicate, we can afford to buy up a large fraction of tickets and profit.

    Similarly, it also resolves the existential risk question: as individuals, it is no good as an investment to try to defeat asteroids since we increase our life expectancy so little and would go bankrupt trying to make any dent in the problem; but as a planet, we can invest in asteroid prevention without any risk of globally going broke or being unable to engage in other lucrative investments like malaria prevention (1-5 billion USD a year is more than enough, while global GDP is closer to 80,000 billion USD).
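
    To put a rough number on the gambler's-ruin point, with purely hypothetical figures: suppose a bet wins 1 in 1,000 times and pays 1,100-to-1, so it is +EV; a fixed-stake player with a bankroll of 100 units will still usually go broke before winning even once.

      import random

      def broke_before_first_win(bankroll=100, p_win=0.001, stake=1):
          """Bet a fixed stake on a +EV long shot until ruin or the first win."""
          while bankroll >= stake:
              bankroll -= stake
              if random.random() < p_win:
                  return False          # the bet paid off before the bankroll ran out
          return True                   # ruined without ever winning

      trials = 10_000
      ruin_rate = sum(broke_before_first_win() for _ in range(trials)) / trials
      print(ruin_rate)   # roughly 0.9, i.e. (1 - 0.001)**100: most players bust despite the positive EV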

    Replies
    1. The lottery hasn't ever been +EV. Also, it isn't the actual outcome that matters, or the chance that over a lifetime you guarantee at least one instance of winning the lottery, but the fact that if you consistently make +EV decisions in areas where EV is applicable, then over a significant sample size you should eventually come close to the expected result. As for gambler's ruin, this is where living within your means, or bankroll management, comes in. Given bankroll management, even if playing the lottery were +EV, you should only do it if you have enough of a bankroll that you could reasonably expect to miss many, many times without making yourself bankrupt.

    2. > The lottery hasn't ever been +EV.

      I think you are wrong about that. The lottery has been +EV on a number of occasions; besides the already-mentioned French example, syndicates have successfully beaten (or would have beaten) a number of lotteries over the years - IIRC Massachusetts had a few instances due to rolling over unclaimed jackpots. (Other instances are more debatable - some people have claimed Powerballs have been +EV occasionally but it looks like that goes away when you properly account for the probability of splitting the payouts.)

  4. Hi Richard.

    Given their answers, I suspect maybe this is related to the issue of non-repeatability and very low chances (and how people value the outcome they would actually expect). But then again, that wouldn't explain the "letting die" question. On that note, I think there are two potentially relevant differences:

    1. The "letting die" expression might be interpreted as implicitly suggest a fault and/or a responsibility for those lives.
    2. The second scenario specifies that those 1000 are among the 1,000,000. Maybe in the first scenario, some (many) of them think that if they choose to take the 10% chance of saving 1,000,000, the other 1000 will certainly die (i.e., they're a different bunch of people, and a different procedure would be required to save them), and that might also affect their assessments (but I don't know; this depends on how it's worded).

    By the way, in the second scenario, do you know (roughly) the percentages? (i.e., how many choose each option?).

    Replies
    1. It varied a bit across seminar groups, but I'd guess overall about 75% preferred the second option (10% chance of saving all) in the second scenario.

      Your two suggested differences do indeed correspond to some suggestions the students themselves offered.  So it's important to clarify that there's no difference in fault/responsibility between the two scenarios (in both cases your choice affects who has a chance to survive and who doesn't, but you have no further responsibility or involvement than that).  It's a little unclear to me why the second difference should matter so much (if it's bad to let 1000 certainly die, why isn't it ever so much worse to let a million certainly die? I could understand the 10% chance being slightly more appealing when it gives a chance to everyone in the scenario, but I'm not sure how this could be a sufficiently significant difference to make the second option worth choosing when it previously wasn't) -- but it could be an interesting idea to explore further.


    2. 75% is a huge difference from the other scenario. I wonder what they'd say if they were asked to choose between (i) letting exactly 900,000 / 1,000,000 people die, or (ii) taking a 9% chance to save all one million (so that the sure option now has the higher expected value: 100,000 lives vs. 90,000). That would show how much, if at all, the "letting die" language is driving their judgment, rather than expected value.

      With regard to point 2, I'm not sure whether it would matter to them, but I'm thinking that if they see it that way, they take the choice to be between wilfully letting 1000 die (1000 who would otherwise certainly live) and an alternative they don't see as letting anyone die at all, since they think those people very likely will die anyway.

  5. Hi Prof. Chappell, hi everyone. My suggestion might be naive because I don't have a degree in analytic philosophy, but I was thinking of a possible reply to the worry you raised about your second suggestion for defending EV reasoning. Could we respond to our recalcitrant interlocutor that EV reasoning is preferable even if the choices she faces are not repeatable and/or have a very low probability of success, because we should want (on utilitarian grounds) the whole of humanity to be diachronically habituated to EV reasoning, and the most natural way to contribute to this end is to act in a way (i.e. by abiding by the conclusions of EV reasoning) that makes a clear statement about what our aim is (namely, in this and all EV-choice-related cases, the adoption of EV reasoning by the general population)?

    Replies
    1. Cool suggestion! That sort of "universalization" move seems a very neat way of dealing with the worry in question.

  6. For completeness's sake, I think it's worth pointing out that this sort of "risk aversion" is compatible with expected utility maximization if there are diminishing marginal returns to the utility of saving additional lives. That's an assumption which mainstream utilitarians reject, but it might have some plausibility to it.

    I mean, that's a bit pedantic, since saying that preferences can be described as utility maximization is different from people explicitly embracing utility maximization. (The VNM theorem shows quite a lot of preferences can be described by a utility function.) But that way of looking at the issue might describe people's intuitions on some level. One death is a tragedy, a million deaths is a statistic, as they say.
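
    (A minimal sketch of this point, assuming a purely illustrative logarithmic utility over lives saved: under linear utility the gamble wins, but with sufficiently diminishing returns the sure 1,000 comes out ahead.)

      import math

      def expected_utility(p, lives, u):
          # p chance of saving `lives`, otherwise 0 lives saved
          return p * u(lives) + (1 - p) * u(0)

      linear = lambda n: n
      log_u = lambda n: math.log(1 + n)    # strongly diminishing returns (illustrative only)

      for name, u in (("linear", linear), ("log", log_u)):
          sure = expected_utility(1.0, 1_000, u)
          gamble = expected_utility(0.1, 1_000_000, u)
          print(name, sure > gamble)
      # linear: False -- the gamble wins (100,000 vs 1,000 expected lives)
      # log:    True  -- the sure thing wins (~6.9 vs ~1.4 expected utility)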

  7. Only tangentially related to your post, but would you mind making the syllabus available? I know a few other academics who are interested in teaching a course on effective altruism, and it would be helpful for them to have access to your list of readings.

    Replies
    1. Yes, at the end of the term I'll write a post with some thoughts about the readings I used, what worked and what didn't, etc.

  8. Would also be interesting to see how students respond to yet a further redescription: (i) letting exactly 999,000 / 1,000,000 people die, or (ii) taking a 90% chance of letting all 1 million die.  Bet you could get some of the switchers to switch back...

    One further factor, related to the iteration factor, concerns agency. (Indeed, this seems related to dionissis mitropoulos's thought.) If Jones is just one person in a one-off position to save 1,000 or take a 10% chance of saving a million, he might well think "This is my one chance to do some good! I don't want to take a large chance of wasting it!" But if he can be convinced that he will have future opportunities, or--and here enters the agency factor--if he can be convinced that he should think of himself as a constituent of some collective agent that will face iterated choices, then he might be able to be brought around to adopt the EV perspective.

    Since I have some sympathy for the concern about non-iteration, and since I also have sympathy for EV in public policy, I think the agency factor may be useful to explore further.

  9. P.S. I'm planning to teach a course like this in Fall 2017, so I'm gonna come looking to you for lessons-learned!

