Friday, November 26, 2010

Cosmic Injustice is Bad, Mmkay?

In Chapter 11 of On What Matters, Parfit defends the following form of skepticism about moral responsibility:
We cannot justifiably have ill will towards [...] wrong-doers, wishing things to go badly for them. Nor can we justifiably cease to have good will towards them, by ceasing to wish things to go well for them. We could at most be justified in ceasing to like these people, and trying, in morally acceptable ways, to have nothing to do with them.

This strikes me as a pretty unappealing view (especially if we extend it to the positive reactive attitudes: is gratitude likewise never warranted?). Parfit defends it by assuming that desert-involving moral responsibility requires the impossible, namely ultimate sourcehood: "To be responsible for our acts in some way that could make us deserve to suffer, we must be responsible for being in the relevant ways how we are."

I'm not sure why we should accept this theoretical assumption. I certainly don't find it anywhere near as plausible as the concrete intuitions about cosmic (in)justice: e.g. that all else equal, it's preferable that the vicious become miserable rather than happy. Granted, the notion of 'desert' is a bit slippery. But we needn't appeal to it here. We can instead appeal directly to our axiological intuitions concerning the combination of virtue/vice and welfare harms/benefits. It's just a bad thing when good things happen to bad people, just as it's bad when bad things happen to good people. Virtue-welfare mismatches are amongst the intrinsic bads. How one got to be vicious or virtuous has nothing to do with it.

Parfit doesn't really address this view. All he does is mention the weakest reason for holding it, namely religion:
Some people would reject [the ultimate sourcehood requirement]. There are people who believe that, though our wrong acts are merely events in time, and are causally inevitable, we could deserve to be sent by God to suffer in Hell. On such views, to deserve to suffer, we don't have to have any kind of contra-causal freedom, or to be in any way responsible for our own character, or for being how we are.

Of those who make such claims, some admit that they cannot understand how such claims could be true. God's justice, these people claim, is incomprehensible. [... But we] have no reason to expect such moral truths to be incomprehensible.

I do not find it "incomprehensible" to think that virtue-welfare mismatches might be intrinsically bad. I probably couldn't give any argument for this axiological claim, but then neither can you say much to argue for the claim that pain is intrinsically bad. It's just bedrock. But that is not the same as being incomprehensible.

Peter Singer suggests the stronger objection from evolutionary debunking: we can understand how selection pressures would give rise to norms of punishment in social animals like ourselves, so why think these norms have any independent truth to them? To this I'm inclined to respond, 'Why not?' Such genetic challenges do nothing to show that the belief in question is false; they merely prompt us to examine them more closely. If, on closer examination, we found that our concrete intuition conflicted with more plausible general principles (as in the case of various historical biases, etc.) then we'd have good grounds for revising our beliefs. But I find, on the contrary, that reflection on the ultimate sourcehood requirement (and the impossibility of anything ever satisfying it) instead undermines this conflicting theoretical belief, whereas the supporting theoretical principle -- that virtue/welfare mismatches are a bad thing -- seems as inherently plausible as ever. But maybe that's just me?


  1. Here's something that motivates me to reject the simple welfare/virtue matching view. Suppose Sam is virtuous and has the appropriate level of happiness for his virtue. Then, a cancerous tumor makes Sam less virtuous. Next, something happens which makes him less happy (a headache, perhaps). True or false: it was in some way good (or less bad) that Sam had this headache? Intuitive answer: false. But it looks like the welfare/virtue matching view would say otherwise.

    Now, you could have a more sophisticated view about how welfare and virtue are appropriately related, explaining why cancerous tumors are relevantly different from bad parenting/genetic dispositions to wrongdoing/what have you. I find this to be a rather hopeless enterprise, much like the way I feel about explaining why doings are fundamentally different from allowings. (When I make this complaint, I am not asking, “But in virtue of what is the link between welfare and virtue important?”)

    What should we make of the counter-intuitive consequences of rejecting this link? Folks like Parfit should say the following: the other intuitions only have their force because we presuppose false things about ourselves, and so we need not honor these intuitions. (Compare: Many economists think that having a high minimum wage is bad for the poor because it causes unemployment. Suppose they are right. If we find it intuitive that justice requires a high minimum wage law, but we have this intuition because we implicitly believe that it will benefit the poor to have a higher minimum wage, we need not favor (even prima facie) theories that entail that justice requires a high minimum wage law.)

    Moreover, it's far from clear (to me at least) that it is intrinsically good for bad people to suffer. What's clear is that it is frequently justifiable to punish/avoid/personally dislike people who act wrongly, and we can fight about what the best explanation of this fact is. (Likewise, Parfit need not think that it makes no sense to thank people for doing good deeds. Again, what's clear is not that people are intrinsically thank-worthy because of their virtue, but that it is typically appropriate to thank people under certain conditions.)

  2. Hi Nick, that's an interesting objection (much more powerful than Parfit's). I'd generally agree with you that it's a hopeless project to try to distinguish exculpatory (e.g. tumors) from non-exculpatory (parenting/genetics) causes of bad character. The person who is vicious because of a tumor is no less vicious for that. However, in the case of a person who is initially virtuous and only later becomes vicious, the nature of the change might well be relevant to our assessment of their life as a whole. In particular, I think we can distinguish the internal development of character from wholly extrinsic transformations of character, and think that the former degrades the narrative structure of one's life whilst the latter is merely tragic in a way that doesn't reflect poorly on the earlier self. If we come to think of the vicious timeslices as a 'new person', we might be happy to see them suffer accordingly. But if we still see them as a continuation of the earlier person, then we might refrain from any ill-wishes, out of respect for the genuinely good person that they once were. (I don't have a fully worked out story here, of course, but it doesn't strike me as obviously hopeless.)

    Another plausible suggestion is that there's a baseline of "decent person" above which more welfare is always better. So if a very decent person becomes slightly less awesome, we still might not want them to suffer in the slightest.

    Finally, some might think it matters whether the harms are caused by the person's bad actions. Perhaps punishment is desirable in a way that accidental misery for the vicious is not. [I'm less inclined towards this route, however. I'll take whatever Hitler-suffering I can get. I might just be vindictive like that, though ;-).]

    On the debunking story: I'm pretty confident about distinguishing final from instrumental value intuitions. One could strengthen the debunking story by appeal to the idea of internalized contingent norms. But again, while there are plenty of norms that on reflection I have no trouble identifying as merely contingent and instrumental, this isn't one of them. (That's not to say that anyone else need share these intuitions. You may not. But for one who does, Parfit's objection is not at all persuasive.)

  3. Trigger Warning: non-philosopher commenting, therefore some glaringly uninformed thoughts are to be expected!

    Hi prof Chappell.

    I was wondering if there is a way to support the idea that the intuitions that seemingly track virtue/welfare mismatches, and cast those mismatches as intrinsically bad, may be amenable to change.

    Let’s imagine that there are only two possible worlds: one in which Hitler, after his crimes, suffers enormously for an eternity (because, say, souls exist and survive death, or because we invent the immortality pill), and one in which Hitler leads an ordinary mortal life after his crimes (but fully deterred and guaranteed not to commit the slightest harm, ever). Suppose we intuit that eternal suffering for Hitler is preferable to his not suffering at all. Now, on the assumption that beyond a certain point in time (say after one trillion years) Hitler’s suffering will have added up to more than the collective suffering he had inflicted upon each and every one of his victims, we have found preferable the option that creates the biggest mismatch between virtue and welfare (no matter how depraved Hitler was, his stipulated negative welfare is infinite, therefore the mismatch we hypothetically chose between virtue and welfare is infinite). What would this mean for the aforementioned intuitions? My best guess would be that, despite stipulations to the opposite in the thought experiment, the dangerousness of Hitlerian mentality compels the intuitions to prefer Hitler’s eternal suffering.

    P.S. Have I made a mistake in my hedonic assumptions about the infinity of the mismatch?

    1. Interesting case! I can see a few possible options in response: (i) Perhaps Hitler was so bad that there is no limit to how much punishment he deserves; (ii) Perhaps the magnitude of "mismatch" is determined on a non-linear scale giving more weight to valence / numbers nearer to zero, so that a mismatch of "zero suffering when a huge finite amount is deserved" counts as worse than a mismatch of "infinite suffering when (only) a huge finite amount was deserved." (iii) Alternatively, one could try to debunk the intuition by arguing that our grasp of large numbers (let alone infinity) is so bad that we fail to (intuitively) grasp that the gap between "one trillion" and "infinite" is actually greater than the gap between "zero" and "one trillion". So while our intuitions try to track the raw mismatch, they fail in cases like this due to our poor grasp of large numbers.

      I suspect that a mix of options (ii) and (iii) may be on the right track...

