Monday, August 08, 2016

Self-Torturers without Diminishing Marginal Value

My last post mentioned in passing that the puzzle of the self-torturer may be complicated by the fact that money has diminishing marginal value.  This can mean that a few increments (of pain for $$) may be worth taking even if a larger number of such increments, on average, are not.  So to make the underlying issues clearer, let us consider a case that does not involve money.



Suppose ST is equipped with a self-torturing device that functions as follows.  Once per day, he may (permanently) increase the dial by one notch, which will have two effects: (i) marginally increasing the level of chronic pain he feels for the rest of his life, and (ii) giving an immediate (but temporary) boost of euphoric pleasure.  Before it is permanently attached, ST is allowed to play around with the dial to become informed about what it is like at various levels.  He realizes that after 1000 increments, the burst of pleasure is fully cancelled out by the heightened level of chronic pain he would then be feeling.  So he definitely wants to stop before then. (We may assume that he will live on for several years after this point.)  Is it rational for ST to turn the dial at all?

Surely not.  Each increment imposes +x lifetime pain in return for a temporary boost of y pleasure. We may treat these as being of constant value (bracketing any slight differences in, e.g., the duration of ST's subsequent "lifetime" between the first day and the thousandth -- we could make it so that the pain only starts on the 1000th day if necessary).  And we know that it would be terrible for ST to endure 1000 increments.  That is, the disvalue of +1000x lifetime pain vastly outweighs the value of 1000 short bursts of y pleasure.  Since the intrinsic values here are (more or less) constant, it follows that the intrinsic disvalue of +x lifetime pain vastly outweighs the intrinsic value of a short burst of y pleasure.
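To put the point schematically (introducing v(y) for the intrinsic value of a single burst of pleasure and d(x) for the intrinsic disvalue of a single pain increment -- notation not used above, but just restating the constancy assumption), the inference is simply:

$$1000\,v(y) \;<\; 1000\,d(x) \quad\Longrightarrow\quad v(y) \;<\; d(x),$$

so each individual turn of the dial is itself a net intrinsic loss.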

So -- assuming that there are no extrinsic values in play (e.g. we're not to imagine that ST has never experienced euphoria, such that a single burst would add a distinctive new quality to his life, or anything like that) -- it follows that each individual increment of the self-torturing device is not worth it.  It would be irrational for ST to turn the dial at all.  So there is clearly no great "puzzle" or "paradox" here.

Compare this result to the original puzzle involving money.  Since money has diminishing marginal value, it might be that (n times) $y is worth (n times) x pain (for some n < 1000) even if $1000y is not worth 1000x pain.  That contributes to the intuitive force of the "puzzle", insofar as at least the early increments seem like they might be worth taking.  But it should be clear that merely adding a resource with diminishing marginal value can't create a paradox where there wasn't one previously.  There will still be some threshold point n at which it becomes irrational (a matter of net intrinsic disvalue) for ST to turn the dial a single notch more.
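As a rough illustration of where that threshold comes from (the concave utility function u for money is a stipulation for this sketch, not part of the original puzzle): if the n-th payment adds u(ny) - u((n-1)y) in value, which shrinks as n grows, while each increment of pain costs a constant d(x), then the last turn worth taking is

$$n^* \;=\; \max\{\, n \;:\; u(ny) - u((n-1)y) \;\geq\; d(x) \,\},$$

and every turn beyond n^* is a net intrinsic loss, just as in the money-free case.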

So there is no great "puzzle" to the self-torturer.

9 comments:

  1. For the problem to really be fun, you just have to rule out the "not even once" solution. Take a sadly realistic case: $10,000 would pay half my student debt, and the suffering I feel from my debt is certainly detectable and sadly enduring. Trading it away for some undetectable suffering should be a rational move on every theory. But maybe that spoils the problem in a different way, because then the answer is just calculated by where the suffering relief provided by the money (marginally decreasing) matches the suffering caused by the machine. Also, expected length of remaining life matters, so with one day to live, again you'd be irrational to not take the money.

    Replies
    1. Right, that's what I was thinking: in the money case, it might be worth taking a certain number of times, but it should be clear that you'll still eventually reach a point where the next increment isn't worth it (assuming, as noted in the main post, that you will "live on for several years"). So there isn't the sort of "paradox" you'd get if somehow every individual increment was worth taking while the collection as a whole wasn't (as some philosophers seem to think is the case here).

  2. Hi Richard,

    Please let me know if I'm missing something, but isn't pleasure also subject to diminishing marginal value and even an upper bound, at least for human agents?
    For example, let's consider the value of each short burst of y pleasure.
    If there is no diminishing marginal value, then for any negative event NE with a finite negative value (from the perspective of the individual, perhaps? I'm not sure you're counting value in that way, but this objection would work anyway), there would seem to be a number n(NE) such that a (perhaps specific) human agent should rationally pick n(NE) bursts of y pleasure even if that requires also that event NE will happen.
    Then, ST should pick +1000x lifetime pain preceded by n(+1000x lifetime pain) bursts of y pleasure. But that doesn't seem plausible, and there would be a number also for 10000000x, etc.
    If you think that's not a problem, I'll try a variant:

    Let's say Bob is an ordinary human agent. Let NE(Bob1) be the event that Bob's spouse, children and parents are horribly tortured to death (the torture method may be as bad as one wants, including burning at the stake, being raped, cut in pieces, etc., or a combination of them).
    If there is no diminishing marginal value of pleasure and no upper bound, and assuming that NE(Bob1) has a finite negative value (from the perspective of Bob, it seems to me, but it's from whatever perspective value is assessed in the ST situation), then it seems to me Bob should rationally choose n(NE(Bob1)) bursts of y pleasure even if that requires also that NE(Bob1) be brought about. Moreover, it would seem that it would be irrational of Bob to choose otherwise. But that doesn't seem right. If Bob is a normal human person, I think it would be at least not irrational on his part to reject the trade.
    Granted, it might be objected that the negative value of NE(Bob1) from Bob's perspective (or from a moral perspective, or from any perspective from which you're considering the matter) is infinite. But that would raise other issues, like comparing different infinities. Alternatively, you might need to introduce multi-dimensional value, but that's also a big complication in the theory, and I'm not sure how the ST example would work in that context.

    Replies
    1. Yeah, good question. It seems a bit odd (I think) to hold that pleasure has diminishing marginal value, at least after the first instance -- surely the 100th burst of euphoria is just as good in its own right as the 10th was? An upper bound seems similarly odd. But I agree that it's difficult to conceive of any amount of pleasure being worth enduring certain harms. I think one natural explanation of this is that those harms are so extremely bad that (i) there simply isn't sufficient time in an ordinary human lifetime to fit in a sufficient amount of pleasure to outweigh the harm, and (ii) it's hard to imagine the agent being able to really enjoy anything after enduring such horror (but maybe amnesia could help). Also worth flagging that Bob's love for his family presumably precludes using them for purposes of utilitarian sacrifice.

      But if those responses seem insufficient, such that you think we really do need to hold that pleasure has diminishing marginal value (and an upper bound)... is there any alternative good we could substitute in my original argument instead? Or do you think your response shows that *all* goods must have diminishing marginal value and upper bounds, since all seem lexically inferior to certain horrendous bads?

Maybe I could substitute a temporary harm-reduction in place of the temporary burst of pleasure. Suppose ST gets really bad (but fortunately short-lived!) migraines for, say, an hour a day, and turning the dial is the only way to relieve the migraine for the day. But turning it 1000 times gives him worse-than-migraine-level pain permanently. Yeah, I think that should do it, right...?

2. With regard to the issue of an ordinary human lifetime, my impression is that if that limit is in place (i.e., life isn't extended as needed so that one can fit in any number of bursts of pleasure), there is an even more direct case for diminishing returns or an upper bound to pleasure due to loss of other good things. At least, I'm not inclined to think that a lifetime that contains only bursts of euphoric pleasure (between meals and sleep, perhaps?), one after another, without any time left for - say - spending time with one's loved ones, or studying for the sake of knowing stuff, etc., is better than a regular life, at least for many people, even if a single burst (or a few) would be worth enjoying.
      Otoh, if Bob is human but with a radically long life (as long as needed), extended by perhaps some future tech, the previous argument applies.
      Regarding his love for his family, when you say that it "precludes" using them for purposes of utilitarian sacrifice, are you saying he's compelled not to do that, so there is no freedom of choice on the matter?
      Because as long as he's not compelled, it seems to me there is still the question of whether it would be irrational on his part not to sacrifice them for a certain amount of pleasure.

However, one can work around that by removing the condition that the people being sacrificed are his family: suppose aliens credibly promise to give him the bursts of pleasure for as long as it takes (no matter how long), but in return, he has to push a button that will release a number of biological weapons inflicting horrible pain, uncontrollable fear, etc., and generally horrible suffering and death on millions of strangers (the horrible suffering of each one doesn't last more than 10 years). All people close to Bob have already been inoculated by the aliens, so they're immune and will not suffer that fate. Would it be irrational of Bob to refrain from getting the pleasure in exchange for inflicting horrible torment on the strangers?
      If there is a problem due to potential indirect suffering of Bob's loved ones, one can set up the whole thing in the past, and stipulate that the strangers live on a different continent, isolated from any person Bob knows.
      Would it be irrational on Bob's part not to press the button?
      I guess it might be argued Bob also is compelled not to press the button (i.e., there is no free choice), but I don't see why that would be so; it seems clear to me he would have the choice. At any rate, one can stipulate one of the aliens would press the button if he accepts.
      My impression is that the common factor behind these examples (that are intuitive to me) is that it does not appear irrational for a person with a normal human mind not to inflict (or choose that others inflict) horrific suffering on other people out of concern for those people, even if in exchange that person would get a lot of pleasure (pleasure of any kind you might pick, and in any amount you may want), and even if he or she has a very long life (indefinitely long, perhaps).
      I don't think this is only because of factors such as no good reason to fully trust the aliens, or such things.

As for whether this sort of argument applies to all goods, I think it applies to all finite goods (I don't know whether there are infinite ones), as long as we don't count avoiding bad things as goods. If we do, then I think it applies to all goods that do not consist in avoiding bad things, but in getting something "extra" (i.e., pleasure beyond a content normal human experience) so to speak, and at least to many goods that do consist in avoiding bad things. There might be goods consisting in avoiding bad things that do not have that diminishing return and/or an upper bound, for all I know. I'll have to think more about that matter.

      As to your harm-reduction scenario, prima facie, it seems to me that it works; I'll do more thinking, just in case.

    3. Well, presumably it's in Bob's interest to press the button that gives him limitless pleasure in exchange for widespread harm to strangers. But we may think there are moral reasons against it. Even if the pleasure outweighs the harms (in aggregate), people may have anti-utilitarian moral intuitions, e.g. against violating others' rights. So involving others' interests makes the case messier. I wouldn't feel confident about drawing any conclusions about in-principle limits to the value of pleasure from such cases.

      But yeah, I'm happy to stick with the solo harm-reduction scenario instead, as that seems to avoid such worries in the first place.

    4. After further consideration, I don't think the alternative works, either, and for similar reasons.
      Assuming there is no diminishing marginal benefit, let's say that Bob's life lasts for a sufficiently long time. Would it be irrational of Bob to refrain from saving himself from the daily migraines, at the expense of inflicting horrific suffering on his loved ones?
      I think the answer is probably negative, and that supports the hypothesis that there is diminishing marginal value and an upper bound, at least as long as his life lasts for that long. Maybe there is no significant reduction in case his life lasts for a shorter period, but that's not clear to me, either.

      That said, all of these arguments are made under the hypothesis that value is ordered like the real numbers, or the rationals, or even the integers. But if it's like - say - R^2 with the lexicographic order, or the hyperreals, one doesn't need diminishing marginal benefits (though in such cases, there would be upper bounds).

      Regarding the interests of others, I don't think this makes the case messier, but rather, stronger and more clear, precisely because of those intuitions (I consider those intuitions a feature of my examples, not a bug).
      So, we disagree about that, but I think I can make an argument based on your position too: if you're not confident about drawing any conclusions about in-principle limits to the value of pleasure from such cases, then I would say that on that basis, there is insufficient info to tell whether the marginal value is diminishing and/or there is an upper bound.
      Going by similar examples, there is also insufficient info in the case of harm-reduction.

      So, while this is not as strong as the conclusions I reach (i.e., that assuming value is like the reals, etc., there is an upper bound and diminishing marginal value), it seems sufficient to block the claim that there is no diminishing marginal value.

    5. I'd like to clarify my point on the issue of moral intuitions.
      The direct intuition is that it would not be irrational on Bob's part to refrain from inflicting horrific suffering on third parties in order to get pleasure. We don't need to analyze the matter in terms of morality or moral intuitions to get that result.

      But we can also analyze this in terms of moral intuitions if we so choose:
      It's intuitively clear (to some of us) that it would be extremely immoral of Bob to inflict horrific suffering on third parties just for the sake of getting pleasure, of any kind and amount (the issue of avoiding migraines is somewhat more complicated).
      That alone does not give us the result that pleasure has a diminishing value or upper bound (in the case of Bob).
      It might still be suggested that it would be irrational of Bob not to engage in that very immoral behavior, out of concern for the people who would suffer as a result (i.e., maybe Bob rationally should do what he morally should not do in such cases).
      However, as I see it, if Bob is a normal human being, it's not the case that it's irrational on his part to refrain from engaging in extremely immoral behavior out of concern for the victims, in order to get pleasure, at least as long as we're dealing with finite cases to simplify matters (though as usual, that depends on an inductive argument from more specific situations).
      This seems to give us the conclusion that the value of pleasure has an upper bound (and also there is diminishing marginal value) as long as the order is like the reals, or the rationals, etc.

      As I mentioned, we can circumvent those issues (see my weaker argument above), but your point about moral intuitions is intriguing; I'd like to ask whether you share or reject the intuitive assessment that Bob would be acting immorally if he were to inflict horrific suffering on third parties in order to get a lot of pleasure.

    6. I definitely share the prima facie moral intuition! Not sure if I'd necessarily accept it on reflection though (assuming the benefits truly outweigh the harms in the case at hand). It seems to have two parts: (i) the anti-selfishness principle that Bob shouldn't impose costs (even just, say, depriving others of lots of benefits) on others in order to benefit himself, and (ii) the pleasure/pain asymmetry, according to which pain is more bad than pleasure is good. I think questions can be raised about both parts, though that's a topic for another post!

