Friday, August 15, 2008

Rules for Normative Risk

One proposal for dealing with uncertainty is to follow the rule that will tend (in the long run) to produce the best outcomes. For example, if you worry that shooting a gun out the window has a 1% chance of doing X damage, you shouldn't do it unless you can expect to produce at least X/100 worth of benefits by doing so.
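
For concreteness, here is a minimal sketch of that threshold rule in Python. The numbers and the function name are hypothetical, and it assumes harm and benefit are measured on a common scale:

```python
# A minimal sketch of the expected-value threshold rule, with hypothetical numbers.
# Assumes harms and benefits are measured on a common scale.

def worth_doing(p_harm, harm, expected_benefit):
    """Act only if the expected benefit at least offsets the expected harm."""
    return expected_benefit >= p_harm * harm

X = 1000.0                            # hypothetical magnitude of the harm
print(worth_doing(0.01, X, 5.0))      # False: 5 < X/100 = 10
print(worth_doing(0.01, X, 20.0))     # True: 20 >= X/100 = 10
```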

This principle accommodates normal empirical risks. But note that 'merely normative' risk is treated differently. Suppose (for simplicity) that hedonistic utilitarianism is true: what we (objectively) should do is maximize net pleasure. And what we rationally should do is maximize expected net pleasure. That means we should take into account the risks of accidental shootings, but not the 'risk' that some guilty pleasures are intrinsically bad.

Consider the alternative, i.e. following a rule that would have you be indifferent between a 1% chance of killing someone vs. a certain harmless experience which you give a 1% chance of being intrinsically as bad as killing. Clearly, following this rule will not tend to produce the best outcomes, because -- unlike empirical risks -- the 'merely normative' risk will never eventuate, no matter how many times you repeat the scenario.

Importantly, this account can accommodate uncertainty about some non-contingent (e.g. mathematical) propositions, as Carl pointed out:

Suppose that someone constructs a device that includes a Doomsday Device, a big red button, and a supercomputer capable of calculating pi to an extraordinary number of places. When someone presses the red button, the supercomputer will compute the nth and (n+1)th digits of pi (in base 10), where n is some cosmically large number, and if both digits turn out to be 2s, the Doomsday Device will be activated. The designers of the machine selected n randomly. Further suppose that I have sufficient empirical evidence to assign overwhelming probability to the proposition that the device is as described above, but lack the computational resources to determine the values of the nth and (n+1)th digits of pi. If, in a series of situations such as this (with different values of n), I fail to treat pushing the red button as carrying a 1% chance of disaster, I will wind up regretting my alternative decision procedure.

I guess it's appropriate to abstract away from the particular value of n here, because the agent is computationally incapable of any more fine-grained level of response. I don't think any such abstraction is available to accommodate merely normative risk, however. Thoughts?
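
To make the expected-value arithmetic in Carl's case concrete, here is a minimal sketch (the disvalue figure is a hypothetical placeholder): from the agent's standpoint each unknown digit is uniform over 0-9, so pressing the button carries a 1-in-100 subjective chance of disaster, even though the digits themselves are fixed.

```python
# Sketch of the agent's epistemic situation in Carl's device case.
# The nth and (n+1)th digits of pi are fixed, but the agent cannot
# compute them, so each is treated as uniform over 0-9.

P_DIGIT_IS_2 = 1 / 10
p_disaster = P_DIGIT_IS_2 ** 2        # both digits are 2s: 1 in 100

DOOMSDAY_DISVALUE = 1e12              # hypothetical placeholder magnitude
expected_disvalue = p_disaster * DOOMSDAY_DISVALUE

# A rule that ignores this 1% chance will, across many such devices
# (with independently chosen n), trigger roughly 1 in 100 of them.
print(p_disaster)           # ~0.01
print(expected_disvalue)    # ~1e10
```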

13 comments:

  1. "One proposal for dealing with uncertainty is to follow the rule that will tend (in the long run) to produce the best outcomes."

    Perhaps I'm missing something, but in the present context, this seems like an odd proposal. One shouldn't make consequentialism true by making it the only way to deal with empirical risk.

    To then consider normative risk in light of this proposal again seems odd, since you've defined away a series of normative risks, namely all possibility that consequentialism is false.

    But perhaps I'm missing something: Would you mind rephrasing what's going on here?

  2. "the 'merely normative' risk will never eventuate, no matter how many times you repeat the scenario."

    This doesn't make sense to me. There are many different potential areas of moral uncertainty, and if you ignore well-calibrated subjective probabilities you will almost certainly act quite wrongly in some of them.

  3. Alex, I'm not sure what you mean. The question is how to deal with various kinds of uncertainty, and I'm suggesting that at least one answer -- the consequentialist answer -- would have us treat empirical and normative uncertainty differently.

    "To then consider normative risk in light of this proposal again seems odd, since you've defined away a series of normative risks, namely all possibility that consequentialism is false."

    I think you're confusing metaphysics and epistemology here. The truth of consequentialism is compatible with agents facing normative uncertainty. But yes, whatever normative theory is actually true, there's no real "risk" that it will sometimes be false. That's my whole point.

    "Would you mind rephrasing what's going on here?"

    Okay. It's standard practice to hedge against empirical risks. We think it's rational to combine possible outcomes and their probabilities in expected value calculations, so that an action with a 1% chance of causing harm X is treated as having an expected disvalue of X/100.

    The question is how to extend this standard practice to 'merely normative' risks. One proposal is to treat normative risks in a superficially similar way, so that performing some act which you give a 1% chance of being as bad as X is on that basis treated as having an expected moral disvalue of X/100. (Roughly put.) I'm suggesting an alternative: we should instead interpret the standard practice more objectively, as a matter of following rules that will actually tend to be for the best.

    Carl - can you suggest an example?

  4. Within a consequentialist framework, you assign 40% probabilities to being wrong about the following questions:

    1. Whether multiple instantiations of the same mind undergoing the same experiences have greater value than one.
    2. Whether various nonhuman animals have welfare to be accounted for.
    3. How the subjective passage of time affects the value of pleasure.

    Your probabilities about moral questions you are uncertain about have, in the past, been generally well-calibrated, in the sense that X% of the moral views you assign an X% chance of being wrong are subsequently rejected by informed thinkers through convincing proofs and demonstrations. Thus you expect that if you behave as if your current views on these questions are correct, you will act less well than you could by acting as if your views were probabilistic. You lack the computational resources to instantly perform decades or centuries of philosophy, but expect that such computational work would greatly reduce the uncertainty; meanwhile you must make various behavioral choices now.
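
    (As a toy illustration of what "well-calibrated" means here, with entirely made-up track-record data: group past probability assignments by the probability given and compare it with the frequency of views that were later rejected.)

    ```python
    # Toy calibration check with made-up data: pairs of
    # (probability assigned to a view being wrong, whether it was later
    # rejected by informed thinkers).
    track_record = [
        (0.4, True), (0.4, False), (0.4, False), (0.4, True), (0.4, False),
        (0.1, False), (0.1, False), (0.1, False), (0.1, False), (0.1, True),
    ]

    def calibration(records):
        """For each assigned probability, the observed rejection frequency."""
        by_p = {}
        for p, rejected in records:
            by_p.setdefault(p, []).append(rejected)
        return {p: sum(outcomes) / len(outcomes) for p, outcomes in by_p.items()}

    # Being well-calibrated means these frequencies roughly match the assigned
    # probabilities, e.g. about 40% of the views given a 40% chance of being
    # wrong were in fact later rejected.
    print(calibration(track_record))      # {0.4: 0.4, 0.1: 0.2}
    ```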

    It seems that this is the same situation as if you were playing 1000 rounds of Russian Roulette (with the mechanism designed in such a way that quantum many-world effects are negligible, e.g. the results of each round are predetermined with a random number generator and recorded). Either you will get through the thousand rounds safely or you won't, and whether the policy of not playing Russian Roulette pays off can only be known *probabilistically* from your perspective.

    Another example: suppose that you compute a theory of consequentialism that you assign 99% probability to, and then are subjected to an experimental modification by a being that alters your brain to change your views on one third of important moral questions, altering your recollection of moral arguments entirely. In order to restore the previous set of views you would need years of thought and access to information, but you must now make a moral decision of great weight.

  5. I think what Carl says is right. If you keep taking small normative risks over and over again, you're as liable to eventually do something very bad as you are if you keep taking small non-normative risks over and over again. After all, life is not a succession of qualitatively identical actions.

    But maybe your point is instead that if you keep taking non-normative risks in (more or less) the SAME KIND of situation, these risks will eventuate on some occasions even if they don't eventuate on others; by contrast, if you keep taking normative risks in (more or less) the same kind of situation, then if these risks don't eventuate on one occasion, they won't eventuate on any. Is this what you're saying?

    If this is what you're saying, I have a couple of comments:

    1) In assessing rules for action under uncertainty, why care about long run success on the assumption that the agent will face the same kind of situation over and over again, rather than just long run success cross-situationally?

    2) Take note of the parenthetical "more or less" in my characterization of this position. If "repeating the scenario", as you put it, involves doing the exact same thing over and over again, then, assuming macro-level determinism, what you say about normative risk will also hold for non-normative risk. If the bad thing didn't happen the first time around, it's not going to happen ever. If you adopt a more liberal conception of "repeating the scenario", in which qualitatively dissimilar situations can count as "repeats", then I don't see how you can avoid the conclusion that normative risk could catch up with the risk-taker eventually, even if it isn't realized the first time. Or rather, you can avoid this conclusion, but only by ad hoc restrictions on which qualitative differences in scenarios are consistent with one's being a "repeat" of the other.

    3) I wonder what you'd say about "merely mental" uncertainty. Suppose I know all there is to know about neurological-level features of an organism's nervous system, but I'm not sure whether that organism feels pain if it's prodded with a hot poker. I'm not sure about the psychophysical "bridge" laws, you might say. These laws, like normative laws, are presumably eternal, so if this organism feels pain upon being poked now, it'll feel pain if it's poked later. If it doesn't feel pain now, it won't later. Does that mean we're also to treat merely mental uncertainty differently from the kind of uncertainty that attends the "gun firing" case?

  6. I tend to agree with Carl and andrews, with the modification that, unlike the roulette case, the right answer in situation X is likely to be correlated with the right answer in situation Y, and any decision procedure will presumably need to take account of this. The situation is probably more like an investment portfolio than playing roulette: consequently maximising expected goodness may involve an element of moral "diversification".

  7. P.S.

    "the 'merely normative' risk will never eventuate, no matter how many times you repeat the scenario"

    I too am not sure what this means, unless it's (IMHO) falsely assuming that probabilities are properties of moral systems rather than expressions of our (lack of) information about them.

  8. Andrew - right, I'm talking about long run frequencies in the "same kind" of situation. Two situations may be said to be of the "same kind" (in my sense) if the same rule covers both of them. So this is just a way of assessing the long-run success of an appropriate rule.

    The case of "merely mental" uncertainty is very interesting. I think I want to say the same thing about it as in the mathematical (pi) case, namely that any rule humans are capable of following will introduce some variability after all. (Maybe we'll be unable to justifiably distinguish between a whole bunch of similar organisms, and 30% of them will (always) feel pain, so we should treat each individually as a 30% risk. Something like that.)

    Carl - "you expect that if you behave as if your current views on these questions are correct that you will act less well than you could by acting as if your views were probabilistic"

    Right, I certainly don't think people should always follow their current views. They could be wrong or unreasonable, after all. The suggestion in my main post is that we should follow the rules that will tend to have the best long-run effects. (This is a relatively objective norm, insofar as people won't always be able to tell what the correct answer is. But as noted in the previous discussion thread, there's no avoiding the fact that some people may be irremediably irrational. Anyway, I'll leave those arguments [about the appropriate degree of objectivity] in the other thread.)

    You would only have a counterexample to my account here, I think, if we required a single rule to cover all of your scenarios 1-3.

  9. Richard,

    You say that "two situations may be said to be of the 'same kind' (in my sense) if the same rule covers both of them." But rules like "maximize expected value" purport to cover every situation, so shouldn't the "outcomes" of all situations count when figuring out the long run value of following these rules (whether or not those situations are similar in other respects)?

  10. Sure. But we may find that we would do better to follow some alternative collection of more fine-grained rules. I doubt the optimal rules will end up licensing the sorts of normative hedging you're after.

  11. Ok, I think I see what you mean now, sorry. But it still seems that something fishy is going on.

    "the 'merely normative' risk will never eventuate, no matter how many times you repeat the scenario."

    Isn't this strictly false? There is a 1% chance that it eventuates *every* time you repeat the scenario.

    The same point at greater length: it's true that the normative risk can't be that 1% of acts of this exact kind have a 100% chance of being wrong. But it is, rather, the different risk that 100% of acts of this exact kind have a 1% chance of being wrong. That difference, though, doesn't tell us much we didn't know already: that merely normative risk is not empirical risk.

    But perhaps I'm again missing something?

  12. "Isn't this strictly false?"

    No, I stipulated I was talking about an actually "harmless" case. It's just that the agent doesn't know this.

    In another case, where the 'risk' truly (and always) holds, the best rule to follow is again non-probabilistic.

    "perhaps I'm again missing something?"

    Yes. My point is that the noted difference between normative and empirical risks justifies their differential treatment, according to the independently plausible principle that one should "follow the rule that will tend (in the long run) to produce the best outcomes".

  13. Richard,

    I'm not sure why you make this distinction based on whether the risk is empirical or not. After all, it seems to me that there are empirical risks that will never eventuate, either.

    For example, let's say that, given your evidence, there is a 0.1 chance that water is composed of H2O2, or that 700nm light is not red, or that neutrinos can travel faster than light, or that a nuclear bomb like one of the first bombs would start a chain reaction that would devastate the whole planet, etc.
