Sunday, April 26, 2020

Monotonicity and Inadvisable Oughts

Daniel Muñoz & Jack Spencer have a great new paper, 'Knowledge of Objective ‘Oughts’: Monotonicity and the New Miners Puzzle' (forthcoming in PPR).  In it, they dispute the claim that knowing that you objectively ought to do something entails that you subjectively ought to do it, on the basis of non-maximal act-types, which might be performed in multiple ways (some ideal, some disastrous). Their argument depends upon 'ought' being upward monotonic (UM): "if you ought to do a certain act X, and X-ing entails Y-ing, then you ought to do Y."  I think their central case instead demonstrates why we should reject UM (and similar normative inheritance claims, as found, e.g., in Doug Portmore's Opting for the Best).
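In standard deontic-logic notation (my schematic rendering, not the paper's own formalism), UM is the inheritance schema:

```latex
% Upward Monotonicity (UM), with O(.) for 'ought' and \Box for entailment:
\[
\bigl( O(X) \wedge \Box(X \rightarrow Y) \bigr) \;\rightarrow\; O(Y)
\]
% Instance: blocking Shaft A entails blocking a shaft, so
% O(\text{block Shaft A}) would yield O(\text{block a shaft}).
```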

In a classic mineshafts case, you know that (to save the most lives) either you objectively ought to block shaft A, or you objectively ought to block shaft B, but you don't know which.  Because blocking the wrong shaft would be disastrous, you rationally (or "subjectively") ought to block neither. M&S now highlight that the above disjunction, together with UM, entails the less-specific prescription that you objectively ought to block a shaft.  You could know this to be true, they argue, but still you (rationally) shouldn't block a shaft, given the risk of disaster.
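Schematically, with $A$ for blocking Shaft A, $B$ for blocking Shaft B, and $S$ for blocking a shaft (a reconstruction of the inference, not a quotation from the paper):

```latex
\begin{align*}
&\text{1. } O(A) \vee O(B)                 &&\text{(the known disjunction)}\\
&\text{2. } \Box(A \to S),\ \Box(B \to S)  &&\text{(blocking either shaft is blocking a shaft)}\\
&\text{3. } O(A) \to O(S),\ O(B) \to O(S)  &&\text{(UM, applied to 2)}\\
&\text{4. } O(S)                           &&\text{(from 1 and 3, by reasoning by cases)}
\end{align*}
```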

UM violates a plausible constraint on the objective ought: that if it would be morally worse for you to ϕ, then it is not the case that you objectively ought to ϕ.  Since you might block the wrong shaft, we cannot know that you objectively ought to block a shaft: depending on how you did it, you might kill everyone!  And it's certainly not the case that you objectively ought to do something that would kill everyone.  So we should reject UM.

M&S write: "UM is backed up by some formidable arguments, and the objections to it, even if they work, don’t apply in the Miners case." (p.8).  Let's take a closer look.

First, they offer a tendentiously possibilist analysis on which "‘ought(X)’ is true just if you do X in all of the best relevant possible worlds." This disastrously ignores whether you also do X in the worst possible worlds.  Actualists will insist that the relevant question is instead how the outcome of your actually doing X compares to alternatives: are the nearest possible worlds in which you do X better or worse than the nearest in which you do otherwise?  Straightaway, actualism entails the falsity of UM, because the betterness of some specific way of X-ing does not entail the betterness of your actual way of X-ing (as their New Miners Puzzle itself demonstrates, with X = "block a shaft").
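The contrast can be stated roughly as follows (my gloss on the two truth conditions, not either side's official semantics):

```latex
% Possibilist reading (M&S's analysis):
\[
O(X) \iff X \text{ holds at every best relevant world}
\]
% Actualist reading:
\[
O(X) \iff \text{the nearest } X\text{-worlds are better than the nearest } \lnot X\text{-worlds}
\]
% UM fails on the actualist reading: the best way of X-ing may be
% excellent while the way you would actually X is disastrous.
```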
They anticipate the reply: "Even if UM can sometimes fail, it clearly doesn’t fail for the objective ‘ought’ in the Miners case. Just ask yourself: objectively speaking, should you block a shaft?"

There is a shaft such that you should block it, specifically.  But whether you should block a (generic) shaft depends entirely upon which one you would block!  If the way you would block a shaft would kill everyone, then you objectively ought not to do that. Clearly!
They continue: "We suggest that an omniscient being would advise blocking a shaft, in the course of advising you to block the particular shaft where the miners are."

This conflates advising the blocking of a specific shaft with advising the blocking of a shaft.  No beneficent omniscient being would advise blocking a shaft in the circumstances where the shaft you would actually block is the disastrous one.  Adding extra (more specific) advice changes the circumstances in a way that's cheating the intended test here. (Cf. footnote 12, and remember: in the New Miners Puzzle, the agent has not been advised which specific shaft to block.)

In perhaps the most puzzling passage of the paper, M&S address the challenge from Actualism as follows:
[T]he special features of the counterexamples to UM aren’t present in the miners case given full knowledge. In Procrastinate, since you will put off your tasks, opting for the best general option (accepting) will lead to the worst specific option (flaking). This same kind of problem might arise if you don’t know where the miners are; opting for the best general option (blocking a shaft) can lead to the worst (100 doomed). But if you do know the right shaft, you’ll block it. And it is only the case where you do know that determines what ought objectively to be done. [bold added]

This is a strikingly whole-hearted embrace of the conditional fallacy.  Normally, we want to avoid implications like: I objectively ought not to ever bother learning anything, because my idealized self already has full information.  It's not really true that "it is only the case where you do know that determines what ought objectively to be done."  That's just an approximation, a way of getting at the idea that the actual facts of my situation (rather than my beliefs) determine what objectively ought to be done.  This isn't quite the same as what I ought to do if given full information, because sometimes that very condition -- of being given full information -- changes my objective situation in normatively relevant ways.  The Miners case is precisely such a case (at least, if the agent is not antecedently disposed to pick the correct shaft via sheer luck).

Finally, consider the link between 'ought' and 'decisive reason'.  M&S suggest that it's "independently obvious that there is decisive objective reason to block a shaft."  To test this claim: suppose that you block the wrong shaft.  You thereby block a shaft.  Have you done something that you had decisive objective reason to do?  Obviously not.  There was no reason at all to do what you did, absolutely nothing counted in favour of killing everyone.  So it is not the case that you had decisive reason to block a shaft.  UM is false.

(At least, I find this the most natural way to extend normative talk to non-maximal options, but I actually suspect the issue is largely terminological!)

15 comments:

  1. "A plausible constraint on the objective ought: that if it would be morally worse for you to ϕ, then it is not the case that you objectively ought to ϕ." I don't think that this is a plausible constraint. Take Frank Jackson's (2014) case in which you can raise either, neither, or both of your two hands. Raising both your hands would be morally fantastic and best. Raising neither hand is morally okay but not great. Raising just your left hand would be morally disastrous. Given that you're an evil person, you plan on raising just your left hand. Thus, if you were to raise your left hand, you would raise just your left hand. So, here's a case in which it's morally worse to raise your left hand, and yet you clearly ought to raise your left hand given that raising your left hand is entailed by your raising both your hands. Even Jackson, the actualist, admits as much.

    1. Hi Doug! Your crucial inference there just strikes me as invalid. You clearly ought to raise both hands, so (I agree) there's something you clearly ought to do which entails raising your left hand. But I don't think it follows from this that you ought to raise your left hand; that's a distinct claim which raises separate issues.

      After all, there's a possible reading of 'ought' on which it simply tracks betterness, and -- you agree -- it is not better to raise your left hand (given that you're not going to raise the right one with it). So there's nothing incoherent about the pattern of evaluation I'm insisting upon here.

      Do you deny that there is such a possible reading of 'ought'? If we grant that either way of talking is coherent (as I think we clearly should), the question is whether the "entailment-preserving" sense or the "betterness-tracking" sense of 'ought' is more useful / appropriate to its theoretical role in action-guidance. I'd rather practical agents tracked betterness than entailments. What do you think?

    2. Or again, apply the test I suggest at the end of my post: when the evil agent raises just their left hand, can you sensibly console yourself by saying, "Well, at least he did something that he ought to have done!" I do not think this is sensible. I don't think we should hold that the agent (in making that choice) did anything that they ought to have. They made the worst choice possible, after all.

    3. I don't know where to go from here, because I just take the fact that your view implies that it is not the case that you ought to raise your left hand as a reductio of your view. Few people, these days, accept the verdicts of synchronic actualism. Even Frank Jackson has come to reject synchronic actualism. Of course, you can just stipulate that by 'ought' you just mean 'the agent's best option', but I don't think that this resulting stipulative notion is either interesting or one that tracks any ordinary usage of the word 'ought'. Lastly, I don't see how the fact that I wouldn't say "Well, at least he did something that he ought to have done!" is at all probative. If you take the necessary means (grabbing a life preserver) to saving a drowning person but don't then throw the life preserver to the drowning child, I wouldn't say "Well, at least you did something you ought to have done!" But I wouldn't deny that you ought to have grabbed the life preserver.

    4. Well, I can't do much against an incredulous stare, except to express my surprise that anyone would take the denial that you ought to do something disastrous to be a "reductio" of any sort!

      I think there are two closely-related features that make the betterness-tracking sense of 'ought' more interesting than the entailment-preserving one: (1) it better tracks advisability, which seems conceptually tied to ought-verdicts and (2) it seems more apt for action-guidance.

      So I guess one way forward would be to engage with those two claims, or suggest some alternative respect in which the entailment-preserving sense of 'ought' is more interesting or appropriate?

      As I mention at the end of the post, I do think this is largely terminological. But I see more advantages to the terminological choices that I'm recommending here. You've reported what you're inclined to say. But is there anything to be said for it beyond sheer dispositional inertia?

  2. [Pete Graham writes in:]

    On one reading of what you claim, Richard, it seems that you accept:

    (#) If it would be worse for S to phi than it would be for S to not phi,
    then it is not the case that S is morally obliged to phi.

    It seems to me, Richard, that, if you also accept the following:

    (*) If it would be better for S to phi than it would be for S to
    not-phi, then S is morally obliged to phi.

    then you are committed to denying agglomeration of obligation:

    (&) If (O(A) and O(B)), then O(A & B)

    [Argument omitted for brevity; can reproduce in a separate comment if requested.]

    Do you reject (&)?

    - Pete

    1. Hi Pete, I'm wary of affirming (*), since it might be that some specific form of not-phi-ing could be better yet, and hence more advisable an option.

      As a result, I'm not sure about (&). I don't feel any great attachment to it, so wouldn't be terribly bothered if it turned out that I was committed to rejecting it (for some allowable senses of 'ought'). It certainly seems possible that two things could be individually advisable, but disastrous when combined, for example. (But you'd presumably want to model that as a case where you're obliged to do the disjunction, not each particular disjunct. And I think that's probably right: it's not like you do anything wrong by just doing A, when further adding B would be disastrous. On the other hand, if you're already doing A in such a circumstance then O(B) would no longer hold.)

      It's worth stressing again that I'm a pluralist about such things: I'm sure there are some allowable senses of 'ought' for which (&) applies. But I'd be wary of taking it as essential to *every* allowable sense.

  3. The line of argument reminds me a little bit of recent work by Lyndal Grant and Milo Phillips-Brown on 'ways-specific desire satisfaction' (in part because I think it identifies a general feature of intentionality in practical contexts, one that's not confined either to oughts or desires alone):

    https://philarchive.org/rec/GRAGWY

  4. Hey Richard, thanks for these objections -- just in time for final revisions!

    I agree with Doug about your constraint, so I'll just reply to the other points. Of course, don't feel any obligation to respond.

    1.
    You say that our analysis of 'ought' is tendentious. But we explicitly admit this on p. 9 ("we can't just appeal to the authority of the orthodox semantics"). That's why we lean harder on the semantic argument from von Fintel, which seems to me pretty forceful.

    (In a nutshell: "You don't have to ___" is a down-monotone environment: "You don't have to block a shaft" entails "You don't have to block Shaft A." Negation flips monotonicity. So, "You have to ___" is up-monotone. What holds for 'have to' holds for 'ought'. So, "ought ___" is up-monotone, too.)
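    In symbols, with S for blocking a shaft and A for blocking Shaft A (a schematic rendering of the flip):

```latex
% "Don't have to ___" is a downward-monotone environment.
% Since doing A entails doing S:
\[
\neg \mathrm{HaveTo}(S) \;\rightarrow\; \neg \mathrm{HaveTo}(A)
\qquad \text{(no need to block a shaft} \Rightarrow \text{no need to block Shaft A)}
\]
% Contraposing removes the negations and flips the direction:
\[
\mathrm{HaveTo}(A) \;\rightarrow\; \mathrm{HaveTo}(S)
\qquad \text{(upward monotone)}
\]
```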


    2.
    You write:

    "whether you should block a (generic) shaft depends entirely upon which one you would block! If the way you would block a shaft would kill everyone, then you objectively ought not to do that. Clearly!"

    Clearly, you objectively ought not to block shaft B. Why does that tell us anything about whether you objectively ought to [block a shaft]?

    It *would* be relevant if the only way to block a shaft were to pick at random. But that's not the case. You can just pick A; that's what you objectively ought to do, which is why we think that you objectively ought to block a shaft (since this is necessary for blocking A).

    I can only think of one other way for B's badness to be relevant. You might be invoking a free choice inference: ought(block a shaft) --> ought(block B). But following von Fintel, we don't think free choice inferences are valid.


    3.
    You write:

    "No beneficent omniscient being would advise blocking a shaft in the circumstances where the shaft you would actually block is the disastrous one. Adding extra (more specific) advice changes the circumstances in a way that's cheating the intended test here."

    Why is it cheating? The question for the advisor is: what should I do? The correct answer, we think, is: block A! This answer entails blocking a shaft. So it seems natural to me and Jack that, when the advisor recommends blocking A, they're eo ipso recommending blocking a shaft. (This does not "conflate" the de dicto and de re readings of "block a shaft;" we clearly intend the de dicto reading, though of course the de re reading is true.)

    It seems to me that you have in mind a somewhat different scenario, where the advisor is *only* able to answer one question: should I block a shaft? In this case, as you know, we *agree* that it would be wrong to answer "yes." We say this in footnote 12, which you reference.

    1. Hi Daniel! Right, I think the scenario you address in footnote 12 is the appropriate way to apply the advice test. That is, to apply the advice test for whether one objectively ought to do X, we should assess: would the omniscient advisor answer "Yes" to the Y/N question, "Should I do X?"

      This is more direct and relevant than your (very different) method of asking whether the advisor would recommend something else, Y, which merely entails X. It's a real stretch to call that an application of the advice test for whether one objectively ought to do X. Indeed, a major upshot of your paper is that objective oughts (as you interpret them) are no longer so intimately connected with advisable action. So it seems to me that you should just grant this point and admit that you reject the direct advisability test for objective oughts.

  5. 4.
    Next you accuse us of wholeheartedly embracing the conditional fallacy. Harsh!

    This wasn't my best writing. But I agree that the counterfactual test for objective 'oughts' is just a test, and that there will be cases where, if we imagine away risks and ignorance, we also lose some of the objective reasons that were there. You objectively ought to learn about history, even though "in the case where you know everything," learning is pointless.

    But on the other hand, to evaluate whether I objectively ought to play the slots, you're supposed to ignore my subjective credences. What matters is if the slots actually pay out.

    So here's the question. In the mineshaft case, is the miners' unknown location like the unknown state of the slot machine, or like the unknown history facts? Like the slots, I'd say. What matters in the mineshaft case isn't reducing ignorance, but saving lives -- just as what matters at the slots is making money. If you are weighting down the reasons to block a shaft simply because of ignorance about possible effects, you are doing exactly the sort of thing that you are *not* supposed to do in evaluating an objective 'ought'.

    5.
    Finally, we say that there is decisive objective reason to block a shaft, and you say that this claim fails a test: if I block Shaft B, I didn't "do something that I had decisive objective reason to do."

    But like Doug, I don't think the "did I do something..." test is probative. Indeed, the test conflicts with *your* view that I have decisive objective reason not to [block a shaft]. Suppose I comply by blocking neither shaft. Did I do what I had decisive objective reason to do? Obviously not.

    That's about it from me. Thanks again for so many thought-provoking objections, and I hope you got something out of the paper. I certainly got a lot out of your post. (As always.)

    1. "What matters in the mineshaft case isn't reducing ignorance, but saving lives."

      Precisely! So now I worry that you haven't understood my objection at all. Actualists hold that the way to assess whether one objectively ought to do X is to look at what would actually happen if one were to do X. Now, in the standard Miners case it is left unspecified what would happen if one were to block a shaft (that is, it's left unspecified which shaft the agent is disposed to block, if they block a shaft). So the case underdetermines whether one objectively ought to block a shaft or not.

      Suppose the agent is disposed to block the wrong shaft. That is, the nearest worlds in which they do X ("block a shaft") are ones in which everyone dies. Since X would actually result in unnecessary deaths, this is an objectively bad result. So there's a straightforward sense in which (in this case) it objectively ought not to be done.

      On your account, by contrast, you do not ask whether X "actually pays out". You instead ask whether X is entailed by some other action, Y, that pays out. That would be a good way of objectively evaluating Y. I do not think it is a good way to objectively evaluate X.

      At any rate, it's important to note that the source of my objection here does lie in objective assessments, not subjective ones. The agent's ignorance may play an important role in explaining why, if they block a shaft, they may not block the right one. But what *I'm* responding to is not the agent's ignorance per se, but their (presumed) dispositions: what they will actually do (if they do X at all), and whether or not this gamble "actually pays out".

      To flesh out your analogy: It's not true that an agent objectively ought to play the slots just because there's a possible way of pulling the lever that would lead to a great payoff, if the actual way that they would pull the lever would simply lead to their losing. This is the standard actualist objection, and it applies in just the same way to the Miners case. Do you disagree?

  6. Hi Richard,

    Jack here. Thanks for these thoughts!

    A quick point on actualism. — We countenance a notion that fits your (actualist) conception of the objective ought. If we let the objective value of X be the value of the world that would be actual if X were true, then we could say that one a-ought to X (relative to some set of mutually exclusive alternatives) if X maximizes objective value (relative to the alternatives). For all that we say in the paper, it may be true that one subjectively ought to X if they know that they a-ought to X. But we doubt that 'ought' has any meaning on which it means a-ought; we think that the best semantics for 'ought' says that every 'ought' is upward monotonic.

    We do not assume that one has knowledge of anything significant when they know something about what they objectively ought to do. Indeed, part of the interest of the paper lies in the pressure it puts on truistic-sounding principles in ethics. It might have seemed obvious that ethicists should be concerned with what people (objectively) ought to do. Upon reading this paper, one could be convinced that ethicists should not be concerned with what people (objectively) ought to do. Indeed, one could be convinced that ethicists instead should be concerned with what people a-ought to do.

    1. Thanks Jack, that all sounds fair enough! I guess I'm inclined to give more weight to theoretical role considerations than to ordinary use. It's part of my ought-concept that it tracks something important or interesting to ethicists. So if ordinary usage of 'ought' tracks something philosophically uninteresting, I'm inclined to conclude that ordinary usage is a poor guide here.

      'Objective ought' is a term of art anyway, so I think it's especially odd to defer to descriptive semantics in the way you do here!

      FWIW, I agree there are senses of 'ought' which are upward-monotonic (roughly, the "obligation/have to" sense), but I'm not sure why we should think that all must be. Besides the "obligation/have to" sense, it seems to me that there's a perfectly serviceable sense of 'ought' as meaning something closer to "advisable", for which there's an eligible interpretation that clearly violates UM (and the down-monotone test: "It's not advisable to block a shaft" does not entail "It's not advisable to block shaft A."). My intended sense of 'advisability' for X, to be clear, tracks the Y/N test in response to the direct question "Should I do X?"

      So now I wonder: what's the basis for assuming that all 'oughts' must behave like "have to", rather than like (this sense of) "advisable"?

      P.S. Thanks again to you and Daniel, both for the thought-provoking paper, and for this additional interesting discussion!

