Thursday, October 19, 2006

Uncertain Obligations

My Epistemic Argument against Vigilantism assumes that if the vigilante is probably mistaken in thinking that X is immoral, then they shouldn't coercively interfere. Alex denied this, suggesting that if the stakes are high enough, even a low probability that X is morally outrageous could suffice to justify such coercion. Better safe than sorry, you might think. But this strikes me as a deeply misguided way of reasoning.

First, note that this approach leads to absurd consequences. Suppose I'm 99% sure that abortion is perfectly permissible, and in particular that it is not -- as some pro-lifers claim -- morally on a par with genocide. But I do think that preventing genocide is over 100 times as important as the right to abort. So should my 1% concession to the possibility that abortion might be atrocious commit me to denying women the right to abort? (If blasphemy might be the ultimate sin, should we outlaw it just in case?) Surely not! What's atrocious is to force baseless (or probably mistaken) moral views on others.
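
To make the hedger's arithmetic explicit (the units here are stipulated purely for illustration: score the right to abort at 1 and a genocide-level atrocity at 100):

    EV(ban) ≈ 0.01 × 100 = 1  >  0.99 = 0.99 × 1 ≈ EV(permit)

On this naive expected-value reading, even a 1% credence that abortion is genocide-level wrong swamps a 99% credence that it is a protected right.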

There's something bizarre about trying to calculate the "expected value" of possible deontic outcomes. Despite the superficial similarity, it's not really like an expected utility calculation. Let us distinguish real probabilistic reasons from mere probabilities of real reasons. If Chaos Theorists determined that there was a 1% chance that U.S. abortions could cause a real genocide to occur on the other side of the world, then that might justify banning abortion. There is a real reason here that definitely exists, namely the unlikely chance to prevent genocide. But in the earlier ("abortion is genocide") case, it is unlikely that there is any real reason at all. (If abortion is permissible, as I assume is most likely, then there's nothing at all to be said for banning it.)

It's difficult to formalize the exact difference here (suggestions welcome!), but I hope you get the intuitive idea. "Expected utility" and "expected deonty" (for want of a better term) invoke two different moral 'levels'. EU is purely first-order, attempting to determine the one true answer to what we have most reason to do. ED is second-order, weighing up competing responses to the former question, by balancing the likelihood and putative force of merely possible reasons.
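
For what it's worth, here is one rough way to write the two calculations down. (This is just a sketch: the credences P(T_i) over rival moral theories, and the common value scale V_i needed to compare them, are stipulations I haven't defended.)

    EU(a) = Σ_s P(s) · U(a, s)    [uncertainty over empirical states s, within one's accepted moral view]
    ED(a) = Σ_i P(T_i) · V_i(a)   [uncertainty over which rival moral theory T_i is true]

The first sum stays within a single view of what matters; the second purports to weigh the 'reasons' posited by theories one takes to be false -- which is just the move rejected below.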

But here's the thing: we should do what we have most reason to do. That's all. There's no need to insure ourselves against merely possible reasons. They have no actual normative force. No matter how incredibly forceful a counterfactual reason might be, it carries no weight in the actual world.

Stronger still: no moral hedging is allowed. We must work with the reasons as we take them to be. Hence, the contribution of moral beliefs to practical reasoning is all or nothing. If I reasonably believe that abortion is permissible, then my moral reasoning will start from the unhedged premise that abortion is permissible. Not "99% likely to be permissible"; just "permissible", simpliciter. Beliefs may come in degrees, but reasons don't, and we base our decisions on the latter.

This shouldn't be confused with dogmatism. We should question our moral beliefs, of course. And as our credence falls, we should become more reluctant to use them as basic premises in our practical reasoning. We may come to doubt that there are real reasons there after all, and hence reject the premise entirely. Nevertheless, it is always a binary decision. We can reason from the assumption that abortion is permissible, or we can refrain from making that assumption. What we can't do is reason from the probability of its permissibility. This is because we must judge what actual reasons we think there are, and work from there. Rational agents cannot reason from what they take to be merely possible reasons.

Hence, if you judge that X is in fact permissible, you should give no weight at all in your practical reasoning to the possibility that X is the gravest evil in the universe. [But don't let English grammar mislead you: that 'should' needs to take wide scope!] Of course, the possibility might influence your theoretical reasoning -- you should take extra care when judging X's permissibility, since you wouldn't want to get this one wrong! But given that I am reasonably sure (I mean, reasonable in being mostly sure) that X is permissible, the alternative possibilities provide me with no reasons whatsoever.

It's worth noting that this theoretical issue has important practical implications. It's not unusual for both pro-lifers and vegetarians to argue from uncertain obligations to actual ones. (That is, they argue from the seriousness of "getting it wrong", to the conclusion that a responsible moral agent will "play it safe" and oppose the killing of fetuses and/or animals.) But if I'm right here, then we should reject such arguments.

4 comments:

  1. I think I'm with Alex on this one. Deontic probabilities don't seem that different from ordinary probabilities. They often depend on uncertainty about empirical facts about the world. Maybe the reason for someone's uncertainty about the moral status of killing and eating cows is that the empirical evidence on what mental activity cows are capable of having is not definitive. The same could be true if you're thinking about whether to terminate the life of someone in a vegetative state, or a fetus. But people reason probabilistically based on lack of factual knowledge all the time. If the existing evidence isn't definitive on whether Chemical X is deadly or harmless, then we'll decide what to do about Chemical X based on probabilistic practical reasoning. That's the right thing to do even though the potential deadliness of Chemical X is a merely possible reason, and it may be the case that it's not a real reason at all.

    Similarly, when first responders come across an injured man, either he has a head injury or he doesn't (and he probably doesn't), but because of the seriousness of getting it wrong they "play it safe" and treat him as if he does have a head injury, based merely on the possibility, until he can be thoroughly examined. And they're right to do so, since, if they consistently follow this policy, they'll do more good and less bad than they would have if they had treated the most likely state of affairs as if it were certain.

    The same is true of people who treat deontic probabilities like regular probabilities in a large number of cases of similar importance - by maximizing "expected deonty" they'll end up doing more good and less bad than people who followed your advice (assuming they can do a reasonably good job of calculating deontic probabilities and deontic value).

  2. I want to claim that in first-order cases, like those you discuss, we actually have real reasons of a probabilistic sort, rather than merely possible reasons. You might worry that this is an ad hoc or obscure distinction I'm drawing, but there seems to be something intuitive about it, to me at least. The epistemic possibility of Chemical X's poisonousness provides us with real reasons for acting with caution, in a way that the possibility that 'ingesting X is intrinsically immoral' does not.

    "by maximizing "expected deonty" they'll end up doing more good and less bad than people who followed your advice"

    I'm not so sure. I guess I'd want to subsume mere empirical uncertainty back into the standard 'expected utility' question, whereas 'expected deonty' is reserved for uncertainty about fundamental moral principles. (My post was careless in failing to distinguish these.) Part of the deontic weight will go to non-consequentialist theories which claim that it's hugely evil to do completely harmless things like blaspheme or whatever. So maximizing 'expected deonty' won't necessarily do a good job of tracking what's good or bad for the world.

  3. There is reason to believe possible histories exist anyway - a little like possible presents exist in a quantum mechanical context.

    Richard

    I'm inclined towards your conclusion - that a person should stick within the rules - but I think that maybe you CAN calculate returns on things like "If blasphemy might be the ultimate sin, should we outlaw it just in case?", because it is part of a whole set of actions including "doing everything besides blasphemy being the ultimate sin". They might not all be equally likely, but a holistic assessment will tend to result in confusion as opposed to stupid results.

    Excuse me if I ramble a little....

    Maybe what you are saying instead is that science and philosophy have an obligation to find the truth as a whole picture. So those determining these things should do so regardless of the effect (to take an extreme example - if Germans ARE superior to Jews, they should just say it). Then, at a second level, everyone else can react based on that information.

    The problem of NOT doing that is that you slowly build up disinformation in the system that cripples your ability to understand the world. Legislating in favour of religion would be that sort of building-in of disinformation (at least it implies a level of confidence that we don't have).

    Then your only choice is either the scientific - "I choose world view A with a 51% probability, not world view B with a 40% probability or any other world view" - or the Pascal's wager - "I choose world view B with 40% × 2 utility points over world view A with 51% × 1 utility point".

    But you don't use that to make any other decisions, because of the danger that would pose to the consistency of your world view. A state would work in the same way - taking a position based on the evidence for basic assumptions from which you can extrapolate a world view, and then applying it.

    GNZ

  4. Alex: "It's equally easy to draw up extreme examples in the other direction."

    I don't think so, because - as suggested at the end of my post - the practical risks might influence our theoretical reasoning. That is, 51% credence would not suffice for me to have a basic belief that X is permissible, if I also have 49% credence that it is "the worst act in human history". Perhaps the latter credence is even high enough that I might sometimes be willing to take 'X is impermissible' as a basic premise which guides my action? I'm not sure about that. But the key claim is that we should never reason from the premise "X is 49% likely to be impermissible".

    On counterfactual reasons: I guess I'm here assuming an 'internalist' conception of reasons, since I'm concerned with the question of what is "most reasonable", rather than "objectively best", to do. (Moral obligation seems to go with the former.) We can only reason from the reasons we have. So speculating about other reasons -- which we don't have, but that might exist "for all we know" -- becomes incoherent. So, to clarify my argument: normative force (of the subjective or "reasonable" kind) derives from the judgments it is most reasonable for us to make. We need not insure against unreasonable possibilities (even understanding this as the epistemic rather than subjunctive modality). They do not provide us with actual reasons.

