## Friday, April 01, 2011

### The Puzzle of the Self-Torturer

Warren Quinn's (1993) 'Puzzle of the Self-Torturer' can be described as follows:
Suppose someone — who, for reasons that will become apparent, Quinn calls the self-torturer — has a special electric device attached to him. The device has 1001 settings: 0, 1, 2, 3, …, 1000 and works as follows: moving up a setting raises, by a tiny increment, the amount of electric current applied to the self-torturer's body. The increments in current are so small that the self-torturer cannot tell the difference between adjacent settings. He can, however, tell the difference between settings that are far apart. And, in fact, there are settings at which the self-torturer would experience excruciating pain. Once a week, the self-torturer can compare all the different settings. He must then go back to the setting he was at and decide if he wants to move up a setting. If he does so, he gets \$10,000, but he can never permanently return to a lower setting. Like most of us, the self-torturer would like to increase his fortune but also cares about feeling well. Since the self-torturer cannot feel any difference in comfort between adjacent settings but gets \$10,000 at each advance, he prefers, for any two consecutive settings s and s+1, stopping at s+1 to stopping at s. But, since he does not want to live in excruciating pain, even for a great fortune, he also prefers stopping at a low setting, such as 0, over stopping at a high setting, such as 1000.

This is generally framed as a puzzle for rational choice: what setting should the self-torturer rationally select? But I think it is illuminating to first consider the question of objective value: which outcome would in fact be best for the agent?

The described scenario is one in which indiscriminability is intransitive. This implies that indiscriminability is not sufficient to establish phenomenal identity (since identity must be transitive). It cannot be that setting 0 feels the same as 1 which is the same as 2 ... 1000, if setting 0 does not feel the same as 1000. So it may well be that setting 1 is actually more painful than 0, and to that extent worse for the agent to experience, even if he cannot tell that this is so.
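To make the intransitivity concrete, here is a minimal numerical sketch. The linear pain function and the fixed discrimination threshold (a 'just-noticeable difference') are my own illustrative assumptions, not part of Quinn's case:

```python
# Toy model of the self-torturer's settings (hypothetical numbers).
# Assume pain grows linearly with the setting, and that two settings are
# pairwise indiscriminable when their pain levels differ by less than a
# fixed threshold (a 'just-noticeable difference', or JND).

PAIN_PER_STEP = 1.0   # pain added by each increment of the dial
JND = 5.0             # smallest pairwise-discriminable difference in pain

def pain(setting):
    return setting * PAIN_PER_STEP

def indiscriminable(a, b):
    return abs(pain(a) - pain(b)) < JND

# Every adjacent pair is indiscriminable...
assert all(indiscriminable(s, s + 1) for s in range(1000))

# ...yet the endpoints are plainly discriminable, so
# indiscriminability is not transitive:
assert not indiscriminable(0, 1000)
```

On this model each step genuinely adds pain; the agent simply cannot detect any single addition, which is exactly the gap between indiscriminability and phenomenal identity.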

So, supposing that pain and wealth are perfectly commensurable, there will be some setting s that is objectively best for the agent -- i.e., representing the ideal tradeoff between pain and wealth -- even though superficial introspection would lead the agent to consider s+1 a better option (offering more wealth and seemingly no more pain). So far so good: there's nothing particularly puzzling about the idea that non-omniscient agents may fail to recognize the best option.
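On the commensurability supposition, the existence of an objectively best stopping point can be illustrated with another toy calculation (the payment figure comes from the scenario; the quadratic money-equivalent cost of pain is purely a hypothetical assumption of mine):

```python
# Toy commensurability sketch: each step pays $10,000, and suppose the
# money-equivalent cost of the accumulated pain grows quadratically with
# the setting (hypothetical cost function). Then some setting maximizes
# net value, even though no single step's extra pain is introspectively
# detectable.

PAYMENT = 10_000.0

def pain_cost(s):
    return 10.0 * s ** 2   # convex: later increments hurt more than they pay

def net_value(s):
    return PAYMENT * s - pain_cost(s)

best = max(range(1001), key=net_value)
# On these assumed numbers the optimum falls at setting 500, where the
# marginal pain cost first overtakes the $10,000 marginal payment.
```

Nothing hangs on the particular numbers; the point is only that, given commensurability, a determinate optimum s exists, whether or not the agent can locate it by introspection.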

But once we've secured the result that there is some fact of the matter as to which stopping point would be best, this also dissolves the 'puzzle' about rational choice. It is just like any other case of decision-making under uncertainty. The agent shouldn't determinately prefer each s+1 to the previous s, since he knows that at some point along the way there will be an s that is the optimal choice, even though the next s+1 will superficially seem like a better option to him.

In short: the puzzle is dissolved by recognizing that the agent shouldn't take pairwise indiscriminability to show that two experiences are phenomenally just as good. The puzzle arises because we're tempted to think that if you can't distinguish two experiences then they must feel the same way. But if one's ability to distinguish between experiences isn't transitive, then the tempting principle must be false. (The temptation is amplified by the ambiguity of 'seeming'. We might say that two experiences 'seem' the same when (i) the agent judges them alike, i.e. cannot tell them apart, or (ii) when they are phenomenally identical, i.e. have the same phenomenal feel. These two criteria come apart in the case under consideration!)


* My thoughts here arose in the course of class discussion, and are especially indebted to John Hawthorne.

1. I'd like to hear more about phenomenal identity. I'm a bit puzzled since what you say here is false: "It cannot be that setting 0 feels the same as 1 which is the same as 2 ... 1000, if setting 0 does not feel the same as 1000. So it may well be that setting 1 is actually more painful than 0, and to that extent worse for the agent to experience, even if he cannot tell that this is so." but clearly you mean it to be true. It absolutely can be that 1 feels the same as 2, 2 feels the same as 3, ..., 999 feels the same as 1000, since 'feels the same as' is intransitive. It's a paradigm case of intransitivity, second only to 'looks the same as'. Or so I thought. Could you say what you have in mind?

2. This is the same ambiguity as that explained in my final paragraph. (1) There's an intransitive sense of 'looks/feels the same as' which simply means that the agent cannot identify any difference in the look/feeling. (2) Then there's the transitive sense of 'looks/feels the same as' which means something like has the same phenomenology.

These come apart since there might be differences in phenomenology which the agent simply isn't capable of discriminating. (Loosely speaking: imagine that there are "sense data" of slightly different shades, or different intensities of painfulness, but which the agent cannot distinguish.)

3. Richard: why do you think there are two senses here? The notion of a difference in phenomenology which is not discriminable needs to be explained.

4. What needs to be explained? There's the notion of what the phenomenology is, and that's distinct from the notion of what you judge the phenomenology to be. What would need explaining is the claim (which you seem to be presupposing) that agents are necessarily infallible when it comes to the phenomenal qualities of their experiences.

5. Hey Guys!

Nice post Richard!

Quinn's original example, however, is quite different from the one quoted at the top of your post. The author, as you’ve plausibly interpreted him, describes an agent with transitive preferences. Preferences range over outcomes, in this case, amounts of pain and profit. The preferences, along with the relevant probability judgments, determine the correct choice. In the example, it’s the agent’s series of choices that is intransitive. But this is just because the agent is bad at making decisions under conditions of uncertainty. The apparent paradox suggests a revision, not in decision theory, but in the confused terminology of the quoted author.

Quinn’s original example involves a person who is indifferent between outcome A and outcome B just in case there is no noticeable difference between the painfulness of outcome A and B. This person’s preferences are intransitive. In this case, we can’t "suppos[e] that pain and wealth are perfectly commensurable." In Quinn's example, the comparison you want to make is not well-defined. Only when the agent’s preferences are transitive can we make cardinal comparisons between the strength of his preferences for money and painlessness. According to his preferences, there is no best outcome.

Decision theory pronounces this person irrational. Fine, Quinn says, but how might we advise the irrational?

Thanks for prompting the interesting discussion Richard!

--Joe Rachiele

6. An intriguing post!

I don't dispute the moral you want to draw but I'm not sure the case gets you there.

When you say "the self-torturer cannot tell the difference between adjacent settings" I'm guessing you mean "right off". But it seems to me that tells us less than we need to know. There are differences you can tell right away and others only after a while. And there are some differences so small that you can tell only after a few of them have accumulated. So can our guy tell the difference after a week? If so, then he could sensibly ask himself at the end of each week if it is worth the money to feel that much worse again next week. And if it takes longer than a week, it does not follow that he cannot tell the difference made by multiple increments over longer periods of time. Thus there might come a week X when he recognizes that he now feels worse than when the setting was 0, so much worse in fact that it would not be worth the return to feel that much worse again. In that event he would have a rational reason -- and, you might say, a phenomenal reason -- to stop some time before week 2X. And X might be much less than 500.

7. Joe - Let's separate two possible sources of difficulties. One (which I take to be central to this puzzle) arises from the agent being indifferent over an intransitive relation (viz. 'not noticeably more painful than'). Here my solution is for the agent to change his preferences so that they track the transitive relation of objective painfulness, instead of discernible painfulness. Note that we don't need to make any mention at all of a second type of good (wealth) in stating this problem.

A second, more general complication arises when we are making tradeoffs between not-perfectly-commensurable goods: say wealth and the absence of pain. This strikes me as a very different issue, which is why I wanted to bracket it here. But we might plausibly think that there are a range of permissible relative weightings, such that an agent can count as rational if they consistently apply any one of the permissible weightings.

8. Hi Tomkow, we're to imagine that the agent is totally incapable of making pairwise discriminations between consecutive options. But he is well able to distinguish distant options (0 and 500, say). He may even be able to distinguish consecutive options via triangulation: suppose 3 is discernibly more painful than 1, but not discernibly more painful than 2; then 2 is indirectly shown to be more painful than 1, though the agent couldn't tell this by comparing the two directly.
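To make the triangulation inference explicit, here's a tiny sketch (the numeric pain levels and the fixed discrimination threshold are illustrative assumptions only, not part of the scenario):

```python
# Triangulation sketch with hypothetical numbers: pain levels 1.0, 2.0, 3.0
# for settings 1, 2, 3, and a discrimination threshold of 1.5. Then 3 is
# discernibly more painful than 1, but not discernibly more painful than 2,
# and 2 is not discernibly more painful than 1 in a direct comparison.

JND = 1.5
pain = {1: 1.0, 2: 2.0, 3: 3.0}

def discernibly_worse(a, b):
    return pain[a] - pain[b] >= JND

assert discernibly_worse(3, 1)       # 3 vs 1: discernible
assert not discernibly_worse(3, 2)   # 3 vs 2: not discernible
assert not discernibly_worse(2, 1)   # 2 vs 1: not discernible directly

# Indirect inference: if 2 felt exactly like 1, then 3 would be discernibly
# worse than 2 as well (since it is discernibly worse than 1). It isn't, so
# 2 must be more painful than 1, though no direct comparison reveals this.
```

The inference is valid on this fixed-threshold model: a discernible gap opens between 1 and 3 but not between 2 and 3, which 2 and 1 could not both do if they felt exactly alike.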

I agree that there might come a point well before 500 at which the agent already regrets having gone this far, or at least can tell that he wouldn't want to double his pain. But that alone isn't enough to solve the puzzle: we also need to explain why he shouldn't just go one step more, since that would not be discernibly more painful, and then one step more again. That is, we don't just need a reason to stop; we also need to defeat the apparent reasons to keep going.

9. Sure, but couldn't the guy have a reason not to go one week more that is consistent with everything you say? At week X the guy feels pain P(X) and decides he does not want to go on to week 2X, because the return wouldn't be worth it for P(2X). At week X+1 he realizes that he couldn't even take week 2X-1, and so on. By projecting the line he could have reason to think that some particular week in the future -- which at some point will be the next week -- is as far as he should go.

10. Hey Richard,

I don't think that you have solved the first difficulty you identified. Suppose the correct theory of objective value holds that only money is objectively valuable and it is always better to have more of it. Would the truth of such a theory solve Quinn’s puzzle? This seems to be exactly the solution you are proposing. (I merely substituted “money” for “pain” in your view of objective value and made the appropriate modifications.) But I don’t see how these claims about objective value are relevant to the original puzzle. Agents who have completely mistaken views about objective value can still make perfectly rational decisions. They can be perfectly rational even if their preferences fail ‘to track objective value.’

In the original Quinn example, the agent’s preferences are intransitive. Quinn asks how we can use these preferences to determine which choices are more rational than others. Your answer: the agent should prefer the objectively preferable outcomes. Sure. But this doesn’t answer the question. It changes the subject.

Imagine someone who claims to have solved Hume’s problem of induction as follows: We should believe whatever statements about the future are true. Does this reply to Hume tell us anything at all about what beliefs about the future are rational?

--Joe

11. Joe - if you desire undesirable things, that is a form of practical irrationality, analogous to the theoretical irrationality of having crazy priors that place (high) credence in the incredible. (Here I assume that there's more to rationality than just internal coherence.)

If you think that internal coherence is all that there is to rationality, then the advice for the agent is simply 'make your preferences coherent, however you like'. What's the puzzle in that?

Perhaps you have in mind an in-between position, whereby we have epistemic access to some (perhaps rough) objective value facts, and rationality requires us to conform our preferences to the evidence we have concerning objective value. But then it seems my proposed solution could be easily expanded to accommodate this. (Advice to agent: give up some of your pairwise preferences! Which ones? Well, take your best stab at working out where the threshold of 'too much pain' most likely lies. E.g. they might start by making some of the larger comparisons that Tomkow mentions, and work from there...)

12. Richard,

Nice post. But what's wrong with the following seemingly plausible principle? If there is no discernible difference in the amount of pain resulting from options A and B, and if A offers a discernibly greater amount of wealth than B does (and, as a result, a discernibly greater amount of pleasure than B does, stemming from the greater amount of wealth associated with A), then it is rational to choose/prefer option A over/to B.

13. Hi Doug, that principle seems to rest on the following assumption:

(Only discernible pain matters): Options A and B are equal in respect of their pain-based disvalue iff there is no [directly] discernible difference in the amount of pain resulting from A and B.

This principle seems plausible at first. But if discernibility is intransitive (as assumed in the self-torturer scenario), then this principle entails a contradiction (viz. that each consecutive pair in the self-torturer sequence is of equal pain-based disvalue, but also that some distant pairs are of unequal pain-based disvalue).

The lesson is that we cannot take value (or rational preference) to track an intransitive relation. 'Not discernibly more painful than' is an intransitive relation. So we cannot take value or rational preference to track the 'not discernibly more painful than' relation.

14. Hi Richard,

I don't think that my principle relies on your only-discernible-pain-matters principle. It seems to me that it instead relies on the intuitive idea that large discernible differences in pleasure matter more than small indiscernible differences in pain. Why do you think that my principle must rely on your implausible only-discernible-pain-matters principle?

15. Hi Doug, once we appreciate that indiscernible differences in pain can matter, the remaining question is 'how much'. It's built into the scenario that option 1000 is worse than option 0. Since the 'better than' relation is transitive, we can deduce that not every increment in the series is better than the previous option. That is, for some consecutive pair s, s+1, s is preferable to s+1 even though s+1 has a discernible benefit and no discernibly greater pain. So your principle is false.

That's not necessarily to deny that large differences in pleasure matter more than small differences in pain, given an appropriate measure. But I think any such appropriate measure will not apply as you assume to the self-torturer case, since the total pain experienced at setting 1000 is stipulated to greatly outweigh the pleasure obtainable through increased wealth. A couple of considerations to note in this regard: (1) An indiscernible increase in pain may yet be 'large' when one factors in its duration; and (2) a discernible increase in pleasure may, for all that, be fairly insignificant.

16. Hey Richard-

I can't see how your proposal is at all related to Quinn's example. Remember to distinguish the two very different types of intransitivity. For simplicity, I’ll stick to the pure case where the agent receives no money for increasing the dial setting.

In your example, you stipulate that the agent prefers less painful experiences. His preferences are transitive. These preferences determine an objective rational choice, i.e. a choice that the agent would adopt in conditions of certainty. This is the setting that results in the least amount of pain. Your agent simply adopts a bad decision procedure for estimating the objectively rational choice in conditions of uncertainty. It is his decision procedure that is intransitive here. You propose a better decision procedure.

In Quinn’s case, the agent has reasonable, but intransitive preferences. To avoid confusion, I’ll suppose he has misleading evidence about what is objectively valuable. Quinn’s agent is indifferent between painful experience A and painful experience B iff he could never, by any means whatsoever, tell the difference between the phenomenology of the two experiences. Quinn assumes we can make the increments small enough so that his agent will be indifferent between sequential settings. There is no threshold of pain or noticeable pain here.

Your proposal concerns decision-making under uncertainty about which setting is objectively rational. It assumes the usual connection between transitive preferences and the objectively rational act. Quinn’s agent has intransitive preferences that reflect his best evidence about what is valuable. How can he use this information to determine the objectively rational act? Your proposal is not relevant to this question.

Also, if you distinguish the two senses of “best” in your post—objectively valuable and objectively rational—you’ll see your discussion of objective value to be a red herring in solving this paradox.

--Sleepless in Seattle

17. Joe - I don't think you can just stipulate that the agent's original preferences are "reasonable". My argument is that they are not reasonable; that they are demonstrably unreasonable (since it is a priori knowable that such preferences couldn't possibly be tracking the objective values); and hence that the agent is rationally required to revise his preferences in the manner described in my previous response.

I'm not sure why you think this response is "not relevant", unless you are assuming -- I think falsely -- that we cannot be rationally required to revise our preferences. Aside from being false, such an assumption would seem to preclude any possibility of a solution, since it's doubtful whether one can get coherent norms of decision from incoherent (intransitive) preferences. I guess one might be independently interested in just that question, but it is not what I find distinctively interesting about Quinn's case. What makes the self-torturer case so interesting is that the agent's intransitive preferences seem so intelligible to us; they even seem plausibly correct, at least at first glance -- as previously noted, it seems quite plausible to think that 'only discernible pain matters'. What, we wonder, is the self-torturer's mistake? This is the puzzle that I sought to address.

P.S. It isn't true, in the self-torturer case, that consecutive pains cannot be distinguished "by any means whatsoever". They cannot be distinguished by direct comparison, but as previously noted (in my response to Tomkow) they can be distinguished by more indirect comparisons.

18. At the end of your post you draw the following moral from your discussion: "We might say that two experiences 'seem' the same when (i) the agent judges them alike, i.e. cannot tell them apart, or (ii) when they are phenomenally identical, i.e. have the same phenomenal feel. These two criteria come apart in the case under consideration!" But I'd have thought that what you show is something rather different. You show that though we might have thought that two experiences feel the same when they are indiscernible in terms of how they feel in a pairwise comparison with each other, this turns out not to be sufficient. Rather, two experiences feel the same only when there is no third experience such that one of them is discernible from it in a pairwise comparison while the other is not. Another way of putting the same point is this. Though we might have thought that two experiences that feel the same in a pairwise comparison with each other have the same phenomenal properties, this turns out to be a mistake. For one experience might have the property of feeling different from a third experience in a pairwise comparison, whereas the other feels the same as that experience in a pairwise comparison. Since the two experiences therefore have different phenomenal properties, we should conclude that they feel different from each other. How an experience feels is a function of all of its phenomenal properties, not just some of them.

[RC comment: I also like this way of putting it.]

19. Richard,

What you say in your postscript is false. I suspect that it is the source of your false belief that your solution applies to Quinn’s original example. For Quinn’s example is quite different from what you have been assuming.

Quinn is very explicit about his agent being unable to distinguish consecutive steps by triangulation. His agent cannot distinguish 0 from 1, even when he triangulates these settings with 500. If you dispute that such a sequence of 1000 settings exists, he claims that he can simply make the increments smaller. That such a sequence of settings exists is an empirical claim. So it is actually true, in Quinn’s example, that we can’t distinguish between consecutive steps even by triangulation. He needs this fact to get the puzzle going. From here on, I will assume that Quinn's agent has the preferences I ascribe to him in the previous post.

The only argument you give that such preferences could not be tracking objective value appears in a response to Doug. But that argument is unsound. Quinn asserts that such preferences are reasonable. Quinn might very well think that this simply shows that the “better than” of objective value is NOT TRANSITIVE. In order to show that such a view leads to contradiction you assume a false premise: that “just as good as” is not a relation but a disguised identity claim between amounts of value. Other than that unsound argument, you’ve given no reason to believe that “objectively better than” must be a transitive relation. (Temkin sometimes seems to conclude this from arguments nearly identical to Quinn’s.)

The only relation that all parties agree is transitive is “more objectively rational.” This relation obtains between actions.

I agree with you that we can sometimes be rationally required to change our preferences. A suitable solution to the self-torturer, then, may take two forms. One solution would be to identify a principled way for him to revise his evidentially valuable, but intransitive preferences. Another solution would be to provide a transitive “more rational than” ordering on the actions. (Quinn opts for the latter. He takes himself to have accomplished a task you take to be impossible: providing coherent norms for action from incoherent preferences.)

You keep asserting that you’ve given a solution of the first form. But now that we’ve distinguished what makes Quinn’s case so different from yours, it should be clear that you haven’t. I’ll even spot you the premise that “objectively better than” is transitive. Suppose I’m Quinn’s self-torturer. You tell me that I ought to care about differences in pain that are in principle indistinguishable. But why? What is the mistake I am making if I care about differences in pain that could, by any means whatsoever, never be detected by me? I have pointed out to you that I am not making the same mistake that your self-torturer makes. He reasons poorly about the phenomenal properties of experiences using the test of pair-wise discernibility. Not I. So you have yet to identify a principled way for me to reform my preferences. Without such a reason for changing my preferences, your solution is no better than the parody: don’t care about pain, care about money! But surely not just any old set of transitive preferences will do as a solution.

More familiar philosophical paradoxes arise when an agent believes a set of individually plausible, but mutually inconsistent beliefs. To point out that HE IS RATIONALLY REQUIRED TO REVISE HIS BELIEFS does not amount to a solution, the boldface notwithstanding.

20. Joe - thanks for the clarification! (My previous boldface was because you seemed to be interpreting me as simply "propos[ing] a better decision procedure" for an agent with the objectively right preferences, and not addressing the case of an agent who began with intransitive preferences.)

You're right about Quinn on triangulation -- I'd missed that. It leaves me very unsure that the scenario he's describing is actually coherent. But even supposing it is, the spirit of my response would suggest this much: for any setting s where the agent prefers 0 to s, he ought to give up at least one of his incremental preferences for i over i-1, for some i less than or equal to s.

I'm happy to just take it as an unargued premise that the 'better than' relation is transitive. Some deny this, sure, so those philosophers will not be satisfied with my solution. That's compatible with it being a reasonable (and even correct) solution.

"you have yet to indentify a principled way for me to reform my preferences. Without such a reason for changing my preferences, your solution is no better than the paraody: don’t care about pain, care about money!"

I don't think so. An acceptable solution to a paradox shows that one of the initially-plausible claims can plausibly be rejected. I've shown (at least to my own satisfaction, if not yours) that the rationality of the self-torturer's incremental preferences can plausibly be rejected. That's quite different in kind from your parody (which is clearly not a plausible account of what it's really rational to prefer).

21. Interesting. Just a quick note that Williamson's anti-luminosity argument in his Knowledge and Its Limits uses similar sorites series to argue that there must be evidence-transcendent phenomenal facts, which seems close to your conclusion. I'm tempted to draw from this the conclusion that there must therefore also be evidence-transcendent evaluative facts.

I would like to know more, though, about what sort of decision procedure under uncertainty would be appropriate here. It seems to me that the phenomenon we face here is vagueness rather than uncertainty, and that might be quite different from the perspective of rational choice.

22. Just discovered this blog and here's my first comment.

The original passage states that the increments to be increased are of the *device's settings* (i.e. the amount of electric current). But the "puzzle" seems to arise when we consider increments of degrees of pain.

But pain is dependent not simply on one factor such as external stimulation from electric currents. It is a function also of psychological factors. Psychological experiments over the years have shown that individuals' pain tolerance and subjective evaluation of the degree of pain vary considerably with the individual's state of mind at the time. People may experience more pain, or tolerate more pain (such as electric shocks or having their hand immersed in ice water), simply through a change in state of mind, whether self-induced, induced by the experimenter, or arising from some uninduced capricious shift. Thus the agent may experience two shocks with very small differences in electric current as drastically different in subjective pain.

So the agent may not be able to predict when his state of mind will change for the worse, making him vulnerable to experiencing a sudden and unbearable rise in pain within that smoothly graded series of electric currents. When it does happen, it will be experienced as a pain that is noticeably greater than the pain resulting from the shock administered immediately before.

23. Excellent post. I think you've dissolved a whole class of problems here.

24. If we adopt tomkow's mathematical approach, we should also bear in mind that in moving from 0 to 1 the man would potentially be viewing the \$10,000 as a considerable sum of money, whereas once he reaches the decision of whether to move from 99 to 100 it represents a mere 1% of the wealth he has already amassed.

Also by this point he will be aware of the cumulative negative effect that the device is having on him and will be able to weigh the cost/benefit of increasing his pain by even a slight amount (presumably for the rest of his life) against the benefit of (temporarily?) increasing his wealth by a small amount.

25. Is the person rational if he forgoes the 'wealth' but still endures the pain? Is the self-harmer irrational to choose pain to achieve a sense of being?
