I'm currently teaching a class on "Effective Altruism" (vaguely related to this old idea, but based around MacAskill's new book). One of the most interesting and surprising (to me) results so far is that most students really don't accept the idea of expected value. The vast majority of students would prefer to save 1000 lives for sure rather than take a 10% chance of saving a million lives. This, even though the latter choice has 100 times the expected value.
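For anyone who wants the arithmetic spelled out, here's a minimal sketch using just the numbers from the example above (nothing more is assumed):

```python
# Illustrative arithmetic only, using the numbers from the example above.
ev_sure_thing = 1.0 * 1_000       # save 1000 lives with certainty
ev_gamble = 0.10 * 1_000_000      # 10% chance of saving a million lives

print(ev_sure_thing)              # 1000.0
print(ev_gamble)                  # 100000.0
print(ev_gamble / ev_sure_thing)  # 100.0 -- the gamble has 100 times the expected value
```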
One common sentiment seems to be that a 90% chance of doing no good at all is just too overwhelming, no matter how high the potential upside (in the remaining 10% chance), when the alternative is a sure thing to save some lives. It may seem to neglect the "immense value of human life" to let the thousand die in order to choose an option that will in all likelihood save no-one at all. (Some explicitly assimilate the low chance of success to a zero chance: "It's practically as though there's no chance at all, and you're just letting people die for no reason!")
Another thought in the background here seems to be that there's something especially bad about doing no good. The perceived gap in value between 0 and 1000 lives saved is not seen as nine hundred and ninety-nine times smaller than the gap in value between 1000 and one million lives saved, as it presumably should be if we value all lives equally. (Indeed, for some the former gap may be perceived as being of greater moral significance.)
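The ratio there is just the two gaps in lives saved, which a quick (purely illustrative) calculation makes explicit:

```python
# The two gaps in lives saved, from the example above (illustrative arithmetic only).
gap_low = 1_000 - 0            # from saving no one to saving 1000
gap_high = 1_000_000 - 1_000   # from saving 1000 to saving a million
print(gap_high / gap_low)      # 999.0 -- if all lives count equally, the second gap matters 999 times more
```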
Interestingly, people's intuitions tend to shift when the case is redescribed in a way that emphasizes the opportunity cost of the first option: (i) letting exactly 999,000 of the 1,000,000 people die for certain, or (ii) taking a 10% chance to save all one million. Many switch to preferring the second option when thus described. (Much puzzlement ensues when I point out that this is the same case they previously considered, just described in different words! In seminar groups where time permitted, this led to some interesting discussion of which, if either, description should be considered "more accurate", or which of their conflicting intuitions they should place more trust in.) Makes me think that Kahneman and Tversky should be added to our standard ethics curriculum!
One way to make the case for expected value is to imagine the long-run effects of iterating such choices, e.g. every day. Those who repeatedly choose option 1 will save 10,000 people every ten days, whereas the option 2 folk can expect to save 1 million every ten days on average (though of course the chances don't guarantee this). Most agree that the second option is better in the iterated choice situation.
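Here's a rough simulation of that iterated case, just to make the claim concrete. The ten-day horizon comes from the example above; the trial count and random seed are arbitrary choices of mine:

```python
import random

random.seed(0)
DAYS = 10          # horizon from the example above
TRIALS = 100_000   # arbitrary number of simulated ten-day runs

def option1_total():
    # Option 1: save 1000 lives each day, with certainty.
    return 1_000 * DAYS

def option2_total():
    # Option 2: each day there's a 10% chance of saving 1,000,000 lives, else none.
    return sum(1_000_000 for _ in range(DAYS) if random.random() < 0.10)

avg_option2 = sum(option2_total() for _ in range(TRIALS)) / TRIALS
print(option1_total())        # 10000: guaranteed total over ten days
print(round(avg_option2))     # roughly 1,000,000 on average over ten days
print(round(0.9 ** DAYS, 3))  # ~0.349: chance that option 2 saves no one at all in a given ten days
```

(The last line is worth noticing: even iterated over ten days, the gamble still leaves a substantial chance of saving no one, which is presumably why the iterated framing only softens, rather than eliminates, the original resistance.)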
There are a couple of ways to argue from this intermediate claim to the conclusion that expected value should guide our one-off decisions. One is to suggest that each of the individual choices is equally choice-worthy, and that -- from an impartial moral perspective -- the intrinsic choice-worthiness of the option should not depend on external factors like whether one gets to make similar choices again in future. In that case, we could reach the conclusion that each individual option-2 choice is 1/10th as choice-worthy as the collection of ten such choices, and even that tenth is far more choice-worthy than an option-1 choice.
The second route would be to suggest that even if one doesn't get to make this particular choice repeatedly, we can expect, over the course of our lives, to fairly often face some choice or other under conditions of uncertainty. And if we habitually violate norms of expected value, then we can expect that the overall result of our choices (across the whole of our lifetime) will be less good than if we regularly conform to this norm. (This argument is limited in scope, though. It arguably can't help to justify EV reasoning in cases where the probabilities are so low that even a lifetime's worth of like choices could not be expected to lead to the upside's actually eventuating at least once. Attempts to mitigate global catastrophic risks, for example, might fall into this category.)
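To make that limitation vivid, here's a small sketch. The figures (a thousand like choices in a lifetime, a one-in-ten versus a one-in-a-hundred-million chance per choice) are illustrative assumptions of mine, not part of the argument itself:

```python
def prob_at_least_one_success(p, n):
    """Chance that the upside eventuates at least once across n independent choices."""
    return 1 - (1 - p) ** n

# With a 10% chance per choice, a lifetime of like choices almost guarantees at least one success.
print(prob_at_least_one_success(0.10, 1_000))   # ~1.0

# With a one-in-100-million chance per choice, even a lifetime of like choices almost never pays off.
print(prob_at_least_one_success(1e-8, 1_000))   # ~0.00001
```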
Alternatively, one might defend EV reasoning by critiquing the principles that seem to underlie the opposing intuitions. It seems morally problematic to treat the first person saved as more important than the millionth person saved, for example. Or to treat a 10% increase in probability as more significant when it moves us from (say) 45% to 55% likelihood than when it moves us from 0% to 10%. There are principled grounds for treating like considerations alike, and departures from EV seem to flout those principles.
What do you think? Do the above arguments seem convincing? Are there better arguments for EV that could be raised here? Or is there more to be said for preferring the low-value "safe bet" than I've appreciated thus far?