Saturday, December 12, 2009

The Normative Irrelevance of the Actual

Mike Almeida comments:
Imagine the oracle tells the utilitarian that almost all of the actions that we ordinarily take to be right have bad consequences and similarly for wrong actions. The cost... [would] be too high to remain a utilitarian. To remain utilitarian you have to abandon more or less every moral intuition.

This is quite mistaken. Firstly, I take my moral intuitions to concern "ordinary" situations, where "all else is equal". I think it's fine to wear mismatched socks because I expect such a choice to be harmless. But if the oracle tells us that every time you wear mismatched socks this causes a kitten to suffer agonizing pain, well, it would be bizarre to just stick to one's prior intuition as though we thought that acts of this type must be permissible no matter what. I trust that nobody (sane) has that intuition.

More broadly, it's important to see that our fundamental normative principles don't depend in any way on how things actually turn out. For the fundamental principles can all be stated in conditional terms: in circumstance Ci, you ought to do Ai. A moral theory (e.g. utilitarianism) may be represented as an exhaustive list of such conditionals, covering every possible circumstance (completely described). So if you accept utilitarianism, part of what you're accepting is the claim that if wearing mismatched socks would cause great suffering on net, in that case you ought not to wear mismatched socks. If you don't accept that such an act would be wrong in such circumstances, then you shouldn't accept utilitarianism in the first place. It makes no difference whether the circumstance is actual or not. (That's why ethicists can get away with trading in thought experiments, rather than tediously waiting around for an actual run-away trolley to come along!)

Another way to make the point is this. Our credence in a moral theory (e.g. utilitarianism) should be independent of our credences in various ordinary non-moral facts [modulo complications regarding the higher-order evidence provided by expert testimony, etc.]. So for each possible circumstance Ci, one's credences should assign the conditional probability P(Utilitarianism | Ci) = P(Utilitarianism). Whatever circumstances happen to be actual, your confidence in the pure conditionals offered by a moral theory should not be affected. Moral theorizing is, in this sense, autonomous, rather than hostage to the concrete facts about how things are.
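The independence claim can be stated compactly in probabilistic notation (a sketch; $U$ and $C_i$ are just labels for the moral theory and a completely described non-moral circumstance, and LMR abbreviates Lewisian Modal Realism as discussed below):

```latex
% Autonomy of the normative, stated probabilistically.
% U   = a moral theory (e.g. utilitarianism)
% C_i = a completely described non-moral circumstance
\[
  P(U \mid C_i) = P(U) \quad \text{for every possible circumstance } C_i,
\]
% or equivalently, U and C_i are probabilistically independent:
\[
  P(U \wedge C_i) = P(U)\,P(C_i).
\]
% Contrast a theory that is NOT autonomous in this sense:
\[
  P(\mathrm{LMR} \mid \text{only one concrete world}) < P(\mathrm{LMR}).
\]
```

The first line is just the claim of the paragraph above: learning which circumstance is actual should leave one's credence in the moral theory unchanged.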

Not all philosophical theories are autonomous from the concrete facts in this way. In particular, my previous post argued that Lewisian Modal Realism is not. The Lewisian assigns P(LMR) > P(LMR | only one concrete world). Indeed, they are likely to assign the latter near-zero probability. That's okay, because they think the condition of there being 'only one concrete world' is not a genuine metaphysical possibility that their theory needs to be compatible with (their view instead presupposes a plenitude of worlds). But if they somehow learnt that this supposed impossibility was actual after all, they would respond by rejecting their theory -- that's just what the low conditional probability indicates.

Anyway, see my previous post for more on that. For now, I'm more interested in whether anyone remains unconvinced about the autonomy of the normative. Can anyone justify the claim that the actuality of a world w (e.g. where wearing mismatched socks causes great suffering on net) would somehow undermine utilitarianism to a greater degree than the mere possibility of w?


  1. The general issue here brings to mind a passage by David Wiggins:

    "Not only was the world not made for us or to fit our concerns; we have not made our moral concerns (that the world be thus or so) simply in order to fit the world, or even to perfect the accommodation between our very best intentions and that which we shall definitely, despite contingency, be able to achieve. Even if we had the power and the foresight to do this, we might still despise to do so." ("Truth, and Truth as Predicated of Moral Judgments," Needs, Values, Truth (Blackwell, 1987), p. 176).

  2. This will sound like a nit-pick, but I don't think it is; for standard modal realism there is actually only one concrete world, so discovering this would produce no change. The discovery would be that there aren't non-actual concrete worlds. I realize that you can certainly rephrase your claim that actuality is relevant to Lewis, but I'm inclined to suspect that any way of rephrasing it will just end up misrepresenting the view in some other way similar to how your present description misrepresents Lewisian "actual."

  3. By "concrete world" I mean 'spatiotemporally isolated concrete world', and by "there is" I mean to invoke the unrestricted existential quantifier. Lewis has no objection to this. Ordinary usage involves a more restricted quantifier, on his view, but we can certainly use the unrestricted quantifier when we're doing philosophy.

  4. Replacing "actually" with "there are" is just the sort of rephrasing which I thought missed the point. If the game is treating mere impossibilities as having some credence because the theory according to which they're impossible isn't certain, there are surely things I presently consider impossible which would, were I to discover them to be true, change my credence in consequentialism (say, for example, if I discovered that the logical connections between "right" and "good" were wildly different from what I thought they were). If your reply is that this wouldn't be a matter of concrete worlds being different, then again I think the Lewisian would say that you're treating non-actual concrete things and a priori abstractions as being quite different things, without explaining what the difference is. And of course the Lewisian is skeptical that you can explain the difference.

    I feel like I've made this point before, but perhaps I wasn't clear last time. Of course, maybe I'm not clear this time either, but I have to hope.

  5. Our credence in a moral theory (e.g. utilitarianism) should be independent of our credences in various ordinary non-moral facts [modulo complications regarding the higher-order evidence provided by expert testimony, etc.]. So for each possible circumstance Ci, one's credences should assign the conditional probability P(Utilitarianism | Ci) = P(Utilitarianism).


    You seem to be saying that no observation could disconfirm utilitarianism, which seems clearly false. Let's take a simple case. I have the moral intuition that the basic rights and opportunities of women, blacks and other minorities should be as well protected as the rights of anyone else. If I find myself in a KKK world in which protecting the rights of these minorities is associated with overall unhappy people, I'm not in the least inclined to conclude that my moral intuitions are mistaken. I'm inclined to conclude that the KKK world is deeply immoral. I'm not in the least inclined to conclude that the right thing to do is not to protect the rights of women, blacks and minorities.
    Now suppose the oracle informs you that you are not just in a KKK world, but you are in the worst of the KKK worlds, in which the large majority are deeply misogynistic, racist, anti-Semitic, homophobic xenophobes. A world in which these minorities are afflicted with the worst sort of discrimination and hatred. We could probably bring our discussion to a quick close. If you insist on telling me that the sort of behavior engaged in by the large majority in the KKK world is perfectly morally right in that world--indeed obligatory--then we don't have much to talk about. It is obvious to any sane person that this sort of behavior is morally horrendous, no matter how happy it makes the KKK people. If you tell me that you have conditional intuitions on which this sort of behavior is perfectly morally fine--indeed obligatory--in the circumstances obtaining in KKK worlds, then again we don't have much to talk about. This sort of behavior is not perfectly fine in the KKK world. So, I don't think it's "quite mistaken" to reach the conclusion that, if an oracle informed you that you inhabit such a world, the cost of remaining a utilitarian would be too high.

  6. Aaron - I guess I'm not too worried about taking for granted our grasp of the abstract-concrete distinction. But in any case, couldn't even a Lewisian happily take the relevant hypotheses to be those that can be expressed in a purely categorical (non-modal, non-moral) vocabulary?

    Mike - I'm not here defending utilitarianism. I'm defending the claim that it makes no difference whether an alleged counterexample is actual or merely possible. If you're right that utilitarianism gives the wrong results in the KKK world, then we should already reject utilitarianism. We don't need to wait for the oracle to tell us which world is actual, no?

  7. Just to clarify, Mike, is it your view that different moral theories are true in different possible worlds? So, for example, perhaps we should think that utilitarianism is the true fundamental moral theory of our world, and yet also think that had the KKK world been actual, then utilitarianism would instead have been false?

    If that's your view, do you also mean to deny that part of the content of utilitarianism is the conditional claim that if the KKK-world circumstances obtain, one ought not to protect the rights of minorities? Or do you not think that we can evaluate this conditional claim independently of knowing whether the antecedent is actually true or not?

  8. Just to clarify, Mike, is it your view that different moral theories are true in different possible worlds?


    I did take you to be defending utilitarianism. So, leave that aside. What you're doing (it seems) is considering an ideally informed utilitarian who knows already what utilitarianism recommends in each possible world. Here's what I would say: an ideally informed utilitarian must either (i) deny that there are worlds like the one I describe or (ii) treat utilitarianism as a contingently true moral theory (perhaps restricted to a subset of worlds sufficiently similar to ours). One thing he can't reasonably do is maintain utilitarianism is true and bite the bullet on these horrible worlds. I'm not sure what to say about (ii), since I don't know anyone who takes such a view (Mackie talks like this sometimes, though). I think the utilitarian has to take the route in (i). So, I'd again say that he is in the same position as the GMR. GMRs deny the possibility that we inhabit a world that is inaccessible to other worlds. They deny too the possibility that we inhabit the uniquely existing world. So, both the utilitarian and the GMR would reject their respective positions if, per impossibile, they should learn that ours is a KKK world (or near enough) or ours is the only existing world.

  9. Just incidentally, I'm having a hard time reviewing comments before submitting. Don't know if anyone else is having this problem.

  10. I don't think any utilitarian would be inclined to claim that the described circumstance -- call it Ci -- is strictly impossible. That'd be completely ad hoc, lacking in any theoretical motivation (unlike the Lewisian case). And it wouldn't get them out of affirming the horrible conditional in any case. For a complete moral theory arguably entails commitments about what ought to be done in any coherently conceivable situation ("metaphysically possible" or not).

    There's no denying that Ci is coherently conceivable. So our moral theory should tell us which features of the imagined situation are the morally relevant ones, and how competing considerations weigh against each other. To insist that the situation isn't "really possible" is just a cheap dodge to deflect attention away from some of the core commitments of the theory: according to classical utilitarianism, sadistic pleasures are goods like any other pleasure, and may morally outweigh the suffering of the oppressed. Again, if one doesn't accept those core commitments, one should not accept the theory in the first place. But there's no point in claiming to accept the theory and then refusing to draw the obvious implications as applied to cases like Ci.

    You write: "One thing he can't reasonably do is maintain utilitarianism is true and bite the bullet on these horrible worlds."

    But that's just what it is to maintain utilitarianism. Insofar as one is inclined towards utilitarianism, one is thereby inclined to bite those bullets. You're really just saying that one shouldn't be a utilitarian. I agree, but it's a distraction from the topic at hand. We're just using utilitarianism as a toy theory here, to illustrate the nature of fundamental moral commitments in general.

    If you find the substance of utilitarianism too distracting, here are two possible methods for refocusing on the main topic:

    (1) Focus on credence ratios, not all-out belief. Even if you only grant P(utilitarianism | Ci)=.00001, the point is that you should grant the exact same credence, no more or less, to P(utilitarianism) simpliciter.

    (2) Alternatively, just replace all mention of 'utilitarianism' with some other theory you can more easily imagine someone honestly maintaining, e.g. some other consequentialist view with a more plausible axiology (on which sadistic pleasures are bad, etc.).

    Either way, the point is just to bring out that it seems our fundamental moral commitments shouldn't be contingent on how things actually happen to be. I'm suggesting that a moral theory entails commitments to a whole raft of conditionals, and it doesn't matter which of those conditionals has an actually true antecedent. So one's various conditional credences in the theory (given such-and-such circumstances, described in non-moral terms) should be identical to one's unconditional credence in it.

    Nothing analogous seems to hold of Modal Realism. One can coherently assign greater credence to (GMR) than to (GMR | only one concrete world).

    [Aside: If Blogger responds with an error the first time you try clicking 'post' or 'preview', a second attempt usually does the trick.]

  11. Hey Richard,

    I wonder if all of our moral intuitions are really "all else being equal" type intuitions. I sometimes think that I have intuitions such as 'John shouldn't have lied to Suzy just now' that don't come with an 'all things being equal' clause. They just seem true full stop.
    These types of intuitions might then have implications concerning which non-moral facts are true. (e.g. 'in the actual world John's lying to Suzy didn't save thousands of people from painful deaths.')

    If we do have these types of intuitions then we might have more reason to reject theories that give certain results about actual cases than we have to reject theories that give the same results concerning a merely possible (but structurally identical) case.

    van Inwagen seems to use similar reasoning to reject causal determinism. He moves from his intuition that we are morally responsible in the actual world to the conclusion that cd is false in the actual world. If he is right to do this then our intuition that we are responsible gives us more evidence against the view that cd is actually true than it gives us against the view that cd is possibly true.

  12. Philip - isn't it always the case that you could learn further details about the case that would require you to revise your initial moral assessment? I agree that a certain act might seem wrong, "full stop", insofar as we assume the circumstances to be a certain way. But if we revise our conception of the circumstances (e.g. if we learn that Suzy would have used the information to cause great harm to others) then surely our initial intuition should be responsive to this news. Further, I take it that the moral intuitions we have when we presuppose C to be the case do not themselves positively claim that C is the case; the intuition is fully honoured by recognizing the conditional, "if C then [this moral conclusion]." We are not barred from reconsidering our assessment of the non-moral circumstances, or rejecting C.

    In other words, I take it that our circumstantial moral assessments are properly secondary or derivative of our assessment of the factual circumstances. It would surely be unreasonable to argue, "John shouldn't have lied to Suzy just now, therefore Suzy wouldn't have abused the information to greatly harm others." Don't you agree? Isn't it a datum that this is bad reasoning?

    Van Inwagen's inference is less blatant but I think ultimately unreasonable for much the same reason. Of course, we'll want some account of when this form of reasoning is or isn't legitimate. That was the purpose of my previous post.

  13. I think there is a simple way to summarize your point. It probably won't satisfy people in the humanities but math people would be inclined toward this:

    Utilitarianism can't be contradicted by facts it doesn't assert.

    So since utilitarianism doesn't assert any particular state of the world the state of the world isn't going to be grounds for rejecting it.

    Well, with one caveat. Utilitarianism does assert a particular conception of the good (whatever the utility function measures) and a particular conception of morality (maximize the good). We might learn things about the world that make us reject either of these two premises. In the KKK world for instance we learn somehow that net happiness just isn't all there is to morality, so we reject the utility function "sum of happiness." (But if our utility function were "product of happiness" or "min(happiness_i)" we probably wouldn't reject our utility function.)

    But I think conceptions of the good and conceptions of morality count as moral facts, so I agree with you. Whatever physical facts we learn will have to make us revise our views of these moral facts before they change our mind. But how fair is it to say they are non-moral facts since they are causally tied to (indistinguishable from?) moral facts?

  14. Doesn't this all depend on your meta-ethics?

    Richard, it seems that your realism comes from Kantian-style rationalism - ethics can be deduced from reason. I'm willing to accept that in any potential world we imagine, logic must still be in effect (otherwise we couldn't meaningfully talk about this world). So you are probably right if we accept this rationalism.

    But if we accept a moral facts realism, then moral values are interwoven into the fabric of the universe, and there is no reason why there could not be a potential universe with different moral facts. This would still be an intelligible universe.

    And it gets even more complicated when you consider the variety of non-realisms. For example, my own view is that moral theorising is a human way of shaping our interactions with others and rationalising our moral intuitions. While having a moral view involves applying it universally, I can simultaneously accept that if I had been brought up in different circumstances I would probably have come to a different moral view which I apply universally.

    So your view that moral theories do not depend on facts about the universe may hold true with rationalist meta-ethics, but otherwise this is very much not necessarily true.

  15. Hi Pejar, that's an interesting suggestion, but I think mistaken. You'll notice that my argument didn't appeal to any particular meta-ethics. It follows simply from the nature of first-order moral theory as a collection of conditionals from non-moral circumstances to moral conclusions (that apply in the specified circumstances).

    Even non-realists like Blackburn want to accommodate this datum. (They can do this by distinguishing what we actually prescribe for counterfactual circumstance C, versus what we counterfactually would prescribe in C.) It sounds like you're on board with this, given what you say about how "having a moral view involves applying it universally" (though you could've had a different view). So I don't think we really disagree here. It's not like you think an agent needs to do empirical inquiry to find out what the rest of the world is like before they can settle conditional moral judgments (e.g. about the permissibility of X-ing in C).

    As for other realisms, I'm not sure what it means to say that "moral values are interwoven into the fabric of the universe", but it had better not imply that the normative fails to supervene on the natural, since this supervenience is a datum that any adequate meta-ethical theory needs to account for, right? (Granted, two conflicting moral theories might each be individually intelligible. But I'd have trouble making sense of the conjunctive claim that theory M1 is true of our world w, and yet there's a natural duplicate of w in which M2 instead holds.) Can you suggest anyone who denies this?

    (You definitely don't have to be a rationalist. Consider "reasons fundamentalists" like Scanlon and Parfit, who make similar claims to mine here.)

