This principle accommodates normal empirical risks. But note that 'merely normative' risk is treated differently. Suppose (for simplicity) that hedonistic utilitarianism is true: what we (objectively) should do is maximize net pleasure. And what we rationally should do is maximize expected net pleasure. That means we should take into account the risks of accidental shootings, but not the 'risk' that some guilty pleasures are intrinsically bad.
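To illustrate with hypothetical numbers: let killing someone cost 1,000 hedons. An act with a 1% empirical chance of killing then has an expected net pleasure of 0.01 × (−1,000) = −10 (setting aside any benefits of the act). But a guilty pleasure worth +1 hedon yields +1 in every empirical state of the world, so, given the stipulation that hedonism is true, a 1% credence that it is intrinsically bad leaves its expected net pleasure at +1. The normative 'risk' simply never enters the calculation.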
Consider the alternative: a rule that would have you be indifferent between a 1% chance of killing someone and a certain harmless experience to which you assign a 1% credence of being intrinsically as bad as killing. Clearly, following this rule will not tend to produce the best outcomes, because, unlike empirical risks, the 'merely normative' risk will never eventuate, no matter how many times you repeat the scenario.
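To see the asymmetry concretely, here is a minimal simulation sketch. (The payoff numbers, and the Python framing itself, are hypothetical illustrations, not part of the argument.) Over many repetitions, the empirical 1% risk eventuates in roughly 1% of rounds; the normative 'risk', being a fixed non-contingent matter, eventuates in either all rounds or none, and by the stipulation that hedonistic utilitarianism is true, it's none:

```python
import random

random.seed(0)
ROUNDS = 100_000
KILL = -1_000.0  # hypothetical disvalue of killing someone
EXP = 1.0        # hypothetical net pleasure of the harmless experience

# Option A: the risky act. The 1% empirical risk eventuates in ~1% of rounds.
risky_total = sum(KILL if random.random() < 0.01 else 0.0
                  for _ in range(ROUNDS))

# Option B: the harmless experience. You assign a 1% credence to its being
# intrinsically as bad as killing, but (by stipulation) hedonistic
# utilitarianism is true, so that 'risk' eventuates in 0% of rounds; the
# actual value is EXP every single time.
harmless_total = EXP * ROUNDS

print(f"risky act, average value per round:       {risky_total / ROUNDS:+.2f}")
print(f"harmless experience, value per round:     {harmless_total / ROUNDS:+.2f}")
```

The risky act averages roughly −10 per round; the harmless experience averages +1 per round, every time. A rule of indifference between the two systematically forgoes value.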
Importantly, this account can accommodate uncertainty about some non-contingent (e.g. mathematical) propositions, as Carl pointed out:
Suppose that someone constructs a device that includes a Doomsday Device, a big red button, and a supercomputer capable of calculating pi to an extraordinary number of places. When someone presses the red button, the supercomputer will compute the nth and (n+1)th digits of pi (in base 10), where n is some cosmically large number, and if both digits turn out to be 2s, the Doomsday Device will be activated. The designers of the machine selected n randomly.
Further suppose that I have sufficient empirical evidence to assign overwhelming probability to the proposition that the device is as described above, but lack the computational resources to determine the values of the nth and (n+1)th digits of pi. If, in a series of situations such as this (with different values of n), I fail to treat pushing the red button as a 1% chance of disaster, I will wind up regretting my alternative decision procedure.
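As a sanity check on Carl's 1% figure, here's a sketch of my own, using the mpmath library and modest values of n in place of his cosmically large ones. The digits of pi are conjectured, though not proven, to be uniformly distributed, and over the digits we can actually compute, a randomly chosen pair of consecutive digits comes up double-2 about 1% of the time:

```python
import random
from mpmath import mp  # arbitrary-precision arithmetic: pip install mpmath

mp.dps = 100_010         # work with ~100,000 decimal digits of pi
digits = str(mp.pi)[2:]  # the digit string after the leading "3."

random.seed(0)
SAMPLES = 20_000
hits = 0
for _ in range(SAMPLES):
    n = random.randrange(len(digits) - 1)  # the designers pick n at random
    if digits[n] == '2' and digits[n + 1] == '2':
        hits += 1

# Prints a value near 0.01: treating the button as a 1% chance of disaster
# is the calibrated policy for an agent who can't compute digit n.
print(f"fraction of double-2 pairs: {hits / SAMPLES:.4f}")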
I guess it's appropriate to abstract away from the particular value of n here, because the agent is computationally incapable of any more fine-grained response. I don't think any analogous abstraction is available to accommodate merely normative risk, however. Thoughts?