Wednesday, July 23, 2008

Introducing 'Merely Normative' Risk

I expect to write a few posts on this topic, so here's a quick overview and introduction:

We should be wary of bringing about terrible outcomes, in the face of empirical uncertainty. I assume it's terrible to kill innocent persons (i.e. conscious, rational beings with goals for the future). So if there were a 10% chance that fetuses had mature psychological capacities, that would - I take it - count as a decisive reason against getting an abortion (in typical circumstances). But what about merely normative uncertainty? Suppose we're sure that fetuses have minimal mental lives of type F, but we nonetheless grant a 10% chance that killing a type-F being is as morally bad as killing a mature person. What weight should we grant this moral uncertainty in our practical reasoning? Here are three possible answers:

(I) Full weight: The two kinds of uncertainty are normatively equivalent.

(II) Some weight: Normative uncertainty should count for something, but not so much as a corresponding empirical risk.

(III) No weight: Merely normative risks have no place in practical reasoning.
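
To make the three answers vivid, here is a minimal sketch (in Python; the discount weight and all the numbers are placeholder assumptions, not anything defended here) that treats them as different weights on merely normative credence in an expected-disvalue calculation:

```python
# Toy expected-disvalue calculation. D is the disvalue of killing an
# innocent person; all numbers are purely illustrative.
D = 1.0

def expected_disvalue(p_bad, weight=1.0):
    """p_bad:  credence that the act is as bad as a killing.
    weight: how much merely normative credence counts
            (view I: weight = 1; view II: 0 < weight < 1; view III: weight = 0)."""
    return weight * p_bad * D

p_empirical = 0.10  # 10% chance the fetus has mature psychological capacities
p_normative = 0.10  # 10% credence that killing a type-F being is as bad as murder

print(expected_disvalue(p_empirical))              # empirical risk: 0.1 x D
print(expected_disvalue(p_normative, weight=1.0))  # view (I):   0.1 x D, same as empirical
print(expected_disvalue(p_normative, weight=0.5))  # view (II):  0.05 x D, discounted
print(expected_disvalue(p_normative, weight=0.0))  # view (III): 0.0, ignored entirely
```

View (II) leaves the discount factor unspecified; the sketch just makes explicit that the three answers amount to weight = 1, 0 < weight < 1, and weight = 0 respectively.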

The 'full weight' view has some bizarre implications. For example, if you think shooting a gun out the window has a 1% chance of killing someone, and allow a 1% chance that masturbation is as morally bad as killing someone, then - according to (I) - you should be indifferent between the two actions. (Helen suggests that this just goes to show you shouldn't grant even that much credence to the loony moral view. That seems a good response; I'll probably return to it later.)
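
Spelling out the arithmetic behind that implication (a quick sketch, writing D for the disvalue of killing someone and treating the 99%-likely alternative as entirely harmless in both cases):

  EV(fire the gun out the window) = 0.01 × D + 0.99 × 0 = 0.01D
  EV(masturbate), granting (I)    = 0.01 × D + 0.99 × 0 = 0.01D

The full weight view feeds the normative credence into the calculation exactly as it would an empirical credence, so the two expected disvalues coincide and indifference follows.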

I previously suggested that we should distinguish between probabilistic reasons - i.e. real reasons that derive from a modal fact X, namely that some outcome Y is epistemically possible - and probabilities of reasons, i.e. the mere epistemic possibility that a certain fact Z even qualifies as a reason in the first place.

One may object: why can't the epistemic possibility of a normative proposition itself constitute a real reason? I'll explore this more in the next post: Evidence, Reasons and Normative Doubts.

4 comments:

  1. I want to cut this whole question off at the knees by denying that probability statements make sense for normative propositions. The reason is that we can never have the kind of evidential basis we'd use to give specific probability estimates for empirical propositions.

  2. I think you'll like my follow-up post, then! (It's sort of related, anyway.)

  3. Richard,

    Thanks for taking up this issue on your blog. I have just two comments about this introductory post:

    1. Your formulation of the "some weight" view strikes me as a mischaracterization of the issue. This is mainly due to the locution of how much normative uncertainty "counts". The question is "What's it rational -- in a certain sense of the term -- to do under normative uncertainty? (assuming there is normative uncertainty)" All parties to the debate should agree that normative uncertainty counts, and counts fully, if by this you mean that what it's rational to do depends on the subject's credences in various normative propositions. They'll disagree, though, about how it depends on these credences. Some will want to say that it's rational to maximize expected value. Others will want to say that it's rational to maximize the value of some other function. For example, those who think the rational thing to do is simply whatever the most subjectively probable normative view says is best fall into the latter camp. (See the sketch at the end of the comments for a toy version of that contrast.) But to say that one's credences in normative propositions as a whole don't "count", or "count" less than one's other credences, seems to signal avoidance of the issue.

    2. I think what Helen says is right. It's hardly surprising that the rational thing to do, given a "loony" credence distribution over normative propositions, is something that you have very little objective reason to do. And similarly, the rational thing to do, given a "loony" credence distribution over non-normative propositions, will also be something that you have very little objective reason to do. Suppose my credence is .01 that every time I snap my fingers, I kill someone. Then it may be rational, given the objective disvalue I assign to killing someone, never to snap my fingers. Rationality under uncertainty (of whatever type) is, like instrumental rationality, "garbage in = garbage out".

    And on Paul Gowder's comment:

    3) This "evidential basis" concern is a really interesting one, although I can't see how it tells against assigning subjective probabilities/degrees of belief. Even if it doesn't make sense to talk about the objective probability of P being thus-and-so, we can still meaningfully assign to subjects credences in P, via representation theorems or whatever mechanism you prefer. de Finetti, for example, didn't think there were objective probabilities between zero and 1, but he saw this as no bar to there being subjective probabilities with intermediate values. For what it's worth, though, I do think even non-personal probabilities can be assigned to normative propositions. I'll say something more about that when I get around to commenting on Richard's next post.

    -Andrew S.

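To make concrete the contrast Andrew draws in his first point, here is a minimal sketch (in Python; the theories "T1" and "T2", the credences, and the value assignments are all invented for illustration) of the two decision rules: maximizing expected value across normative views, versus doing whatever the most probable view ranks best:

```python
# Two hypothetical normative theories with made-up credences, and each
# theory's valuation of two options. All numbers are illustrative only.
credences = {"T1": 0.8, "T2": 0.2}
values = {
    "T1": {"act": 1.0,    "refrain": 0.0},  # T1 mildly favours acting
    "T2": {"act": -100.0, "refrain": 0.0},  # T2 says acting is gravely wrong
}
options = ["act", "refrain"]

# Rule 1: maximize expected value across theories.
def expected_value(option):
    return sum(credences[t] * values[t][option] for t in credences)

ev_choice = max(options, key=expected_value)

# Rule 2: follow whatever the most subjectively probable theory says is best.
top_theory = max(credences, key=credences.get)
mft_choice = max(options, key=lambda o: values[top_theory][o])

print(ev_choice)   # "refrain": even a 0.2 credence in a grave wrong dominates
print(mft_choice)  # "act": the most probable theory mildly favours it
```

With these made-up numbers the two rules come apart: the expected-value rule is swamped by the small credence in a grave wrong, while the most-probable-view rule ignores it. Both rules make what's rational depend on one's credences in normative propositions, as Andrew says; they differ only in how they aggregate those credences.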
