Holden at GiveWell has posted a very interesting analysis of Why We Can't Take Expected Value Estimates Literally. I've always been suspicious of the idea that we should treat rough subjective estimates of risk (e.g., an "X% probability" that [insert scary futuristic technology here] will destroy the world) as equivalent to robustly established probabilities (e.g., an X% chance that a large asteroid will hit the earth within a century). Holden's analysis backs up this intuition by appealing to the idea that we need to adjust "explicit expected value" calculations for the variance in our "estimate error".
The upshot: robustly established estimates count for nearly their full weight, whereas highly uncertain estimates should barely move us away from our priors. To illustrate: "It seems fairly clear that a restaurant with 200 Yelp reviews, averaging 4.75 stars, ought to outrank a restaurant with 3 Yelp reviews, averaging 5 stars." Why? Because a mere three reviews is not robust enough evidence to shift us far from our prior expectation (i.e., that the restaurant is just average).
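To see the mechanics, here is a minimal sketch of the Bayesian adjustment behind this point, assuming a normal prior over restaurant quality and normally distributed per-review noise. The specific numbers (a 3.5-star prior mean with 0.5-star standard deviation, and 1 star of noise per review) are my own made-up parameters for illustration, not figures from Holden's post:

```python
# Precision-weighted adjustment of a noisy estimate toward a prior.
# All parameter values below are illustrative assumptions.

def adjusted_rating(prior_mean, prior_sd, sample_mean, n_reviews, review_sd=1.0):
    """Posterior mean under a normal prior and normal per-review noise."""
    prior_precision = 1.0 / prior_sd ** 2       # how much the prior counts
    data_precision = n_reviews / review_sd ** 2  # grows with more reviews
    return ((prior_precision * prior_mean + data_precision * sample_mean)
            / (prior_precision + data_precision))

# Assumed prior: the typical restaurant averages 3.5 stars (sd 0.5).
print(adjusted_rating(3.5, 0.5, 4.75, 200))  # ~4.73: barely shrunk at all
print(adjusted_rating(3.5, 0.5, 5.00, 3))    # ~4.14: pulled hard toward the prior
```

With these (assumed) numbers, the 200-review restaurant's adjusted rating of roughly 4.73 beats the 3-review restaurant's roughly 4.14, just as the quote suggests: abundant evidence earns nearly its face value, while sparse evidence leaves us close to the prior.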
Anyway, this strikes me as a very important (and intuitive) result, which my rough summary here doesn't really do justice to. So, go read the whole thing!