Wednesday, August 26, 2015

Criterial vs Ground-level Moral Explanations

To help show why certain objections to consequentialism are misguided, let us distinguish two importantly different kinds of explanation of particular moral facts. [Revising and expanding upon a distinction I originally drew back here.]

What we can call a criterial explanation appeals to the necessary and sufficient conditions for the truth of some moral claim, i.e. the conditions that appear in place of the 'X(Y)' in theoretical accounts of the form, "An act is right (wrong) iff X(Y)." If I randomly kick Joe in the shins, the wrongness of my act can be explained criterially by the fact that my act has the general property Y, which is necessary and sufficient for an act's being wrong. (Maybe Y is the property of failing to maximize value, or maybe it is the property of violating the weighted balance of one's prima facie duties.)

A ground-level explanation, by contrast, appeals to the particular non-normative features of the act or evaluand which ground its having the moral status that it does. So, for example, the ground-level explanation of my action's wrongness may consist in the fact that I (gratuitously) harmed Joe. This is also the wrong-making feature of the action.

What is the relationship between these two kinds of explanation? It seems that criterial explanations serve to specify the general conditions under which any more particular ground-level explanation will obtain. My (gratuitously) harming Joe is the specific morally relevant feature in virtue of which my action meets the general conditions for wrongness. These general conditions for wrongness might have been met in different ways, say if my kick had hurt Jane instead of Joe. There would then be a slightly different wrong-making feature, or ground-level explanation of how my action came to satisfy the general criteria for wrongness.

I think it's now fairly widely recognized that the properly conscientious agent's "moral concern" is primarily de re rather than de dicto. That is, rather than caring about "morality" or "rightness" in the abstract, the morally conscientious agent cares about the things that are of moral significance, or the right-making features. It would seem perverse, after all, to neglect the concrete things that matter in favour of the abstract property of mattering. Likewise, I propose, it would be perverse to neglect what is of ground-level moral significance in favour of what is merely criterial -- i.e., a mere general guarantee that some or other feature of ground-level moral significance obtains.

To illustrate concretely: it is perverse for a wannabe Rossian to care only about the balance of prima facie duties, rather than caring about avoiding harm to Joe, maintaining fidelity to Jane, and so forth. Such an agent is criticizable on internal, Rossian grounds for failing to care appropriately about what (according to Rossian pluralism) has ground-level moral significance, namely the concrete contents of their prima facie duties.

But now notice that the same is true of the utilitarian who cares only about aggregate utility and not particular people. Any plausible form of utilitarianism must be token-pluralistic in its axiology, ascribing intrinsic value -- and hence ground-level significance -- to each distinct individual's welfare. An agent who fails to care about each particular person's welfare is thus failing to care about what (according to token-pluralistic utilitarianism) has ground-level moral significance. The imagined perverse agent is not accurately exemplifying the utilitarian perspective after all.

4 comments:

  1. "It seems that criterial explanations serve to specify the general conditions under which any more particular ground-level explanation will obtain. My (gratuitously) harming Joe is the specific morally relevant feature in virtue of which my action meets the general conditions for wrongness. "

    Isn't the main problem for the consequentialist that the two explanations can come apart? I.e., that sometimes gratuitously harming Joe will in fact maximize value (or meet whatever criterion is on the table)? At that point, the consequentialist has to prioritize the criterial explanation (on pain of no longer being a consequentialist), it seems. That's why acceptance of the theory appears, to some anyway, to lead agents to have excessively abstract or general things in mind.

    Now, maybe your token-pluralistic utilitarianism takes care of the objection, but my second thought is that it seems to do this all on its own, at the criterial level, and that the distinction you offer here isn't actually doing any of the work.

    Replies
    1. Hmm, so it's true that something that's a wrong-making feature in one situation (e.g. harming Joe) will, in another situation, be outweighed by other considerations, and so may be part of an action (e.g. preventing some much greater harm to Sally) that's overall right. But that's true of moderate (e.g. Rossian) deontology, too: prima facie duties can be outweighed.

      What this means is that while criterial explanations are constant (they are what all right/wrong actions have in common), ground-level explanations will vary from case to case. So if I should harm Joe to save Sally (making it no longer a gratuitous harm, mind you), but I fail to do so, the ground-level explanation of the wrongness of this (in)action may instead be that I've failed to give due weight to Sally's interests. But in any given case, the criterial explanation and the ground-level explanation that's true of this particular case will be consistent.

      The general point is that whenever an act is wrong by utilitarian lights -- i.e., it fails to maximize utility -- there will (modulo non-identity cases) be some individuals who were harmed, or whose interests the agent failed to give due weight to, and these more particular facts provide an important part of the moral explanation. (In particular, they provide the part that should feature in the motivations of good-willed moral agents.)

      Is that clearer?


  2. But how does that work when it comes to, say, saving (or helping, etc.) people one does not know?
    The consequentialist ought to save a zillion people she does not know. But she ought to act not out of concern for aggregate welfare, but out of concern for particular people. Yet she does not even know any of the particular people in question. There is a psychological concern here: is it humanly possible, in all cases?
    For example, suppose she has to pick between preventing one bomb attack (prediction: she would save 1000 people, including her two daughters, her sister, and 5 of her friends, plus others she does not know) and another one (prediction: she would save a million people (it's a nuke), but she does not personally know a single one of them).
    Now, in the first case, she definitely cares about some of the particular people (eight of them, to be more precise), but in the other case, she does not... unless there is a way of caring for particular people one does not know. Perhaps I'm misinterpreting the "particular people" expression?
    But even then, there remains a problem about numbers: limited human minds probably aren't capable of caring about a million particular people. In other words, beyond tracking the number itself, there seems to be no way for a human being to psychologically distinguish between a million people and a thousand people. One can say "that's a thousand people getting killed (or tortured, or whatever); that's horrible", and one can say "that's a million people getting killed (or tortured, or whatever); that's horrible", but one plausibly can't psychologically represent the scenarios differently (or "feel" them differently, as "more horrible") except by counting. (If you think otherwise, we can just pick greater numbers, say 10 million vs. a million, or as big as needed to make it impossible for human minds to find a difference they can "feel" or comprehend other than in an abstract fashion; but I think a thousand is large enough.)

    That aside, I take it that the theory is that the criterial-level explanation is failure to maximize expected value (or utility) -- i.e., given the info available to the person, and some priors -- rather than actual value (or utility)?

    Replies
    1. I think this distinction can be applied to explanations of either actual-value or expected-value "oughts".

      re: unknown people and psychological limitations: see this old post, and section 3.1 of my 'Value Receptacles' paper, for more detail. In short, we can compensate for our psychological limitations by having a "stop-gap" desire for the aggregate welfare of those we lack particularized concern for. But it's still the individuals that matter -- this more abstract desire is just our best (imperfect) means of accommodating them.

