Wednesday, June 21, 2006

Levels of Rationality (draft)

[If any readers feel like wading through this 5000 word
monster of an essay, any constructive comments/criticism would be greatly appreciated!]


Let us assume a broadly Consequentialist framework: Certain states of affairs have value, making them worthy of our pursuit. By failing to pursue the good, one thereby reveals some defect in one's rational awareness. Perhaps one is ignorant of the descriptive facts, or perhaps one fails to appreciate how those facts provide normative reasons for action. We would expect neither defect to beset a fully informed and perfectly rational agent. Ideal rationality entails being moved by reasons, or being motivated to pursue the good. But what of those goods that elude direct approach? Could one rationally aim at them in full knowledge that this would doom one to failure? Conversely, aiming wholeheartedly at an inherently worthless goal seems in some sense misguided or irrational. But what if doing so would better achieve the elusive good? Does rationality then recommend that we make ourselves irrational, blinding ourselves to the good in order to better achieve it?

Such questions may motivate a distinction between ‘global’ (holistic) and ‘local’ (atomistic) rationality: pitting the whole temporally extended person against their momentary stages. This essay explores the distinction and argues that the usual exclusive focus on local rationality is misguided. Global optimality may sometimes require us to do other than what seems optimific within the confines of a moment. Holistic rationality, as I envisage it, tells us to adopt a broader view, transcending the boundaries of the present and identifying with a timeless perspective instead. It further requires that we be willing to treat the dictates of this broader perspective as rationally authoritative, no matter how disadvantageous this may seem from the particular perspective of our local moment.[1] This amounts to an intrapersonal analogue of the ‘social contract’: each of our momentary stages abdicates some degree of rational autonomy, in order to enhance the rationality and autonomy of our person as a whole.

I will begin by sketching an understanding of reasons and rationality, or the objective and subjective normative modes. Next, the general problem of elusive goods and globally optimal indirect strategies will be introduced by way of indirect utilitarianism. After clarifying the Parfitian paradox of “blameless wrongdoing”, I will show how epistemic principles of meta-coherence may undermine this particular application of the local/global distinction, though there remain cases involving “essential byproducts” which escape this objection. These issues will be further clarified through an exploration of the distinction between object- and state-based modes of assessment. Finally, I present a class of game-theoretic cases that present a paradox for local rationality, which can be resolved by embracing the more holistic understanding sketched above.

Reasons and Rationality

Reasons are provided by facts that count in favour of an action. For example, if a large rock is about to hit the back of your head, then this is a reason for you to duck, even if you are unaware of it. As this example suggests, the objective notion I have in mind is largely independent of our beliefs.[2] As inquiring agents, we try to discover what reasons for action we have, and hence what we should do. Such inquiry would be redundant according to subjective accounts, which restrict reasons to things that an agent already believes. Instead, I use the term ‘reason’ in the sense that is closely tied to notions of value. In general, we will have reason to bring about good states of affairs, and to prevent bad ones from obtaining.[3] I take it as analytic that we have most reason to do what is best. We may also say that this is what one ought, in the reason-implying sense, to do.[4]

There is another sense of ‘ought’, tied to the subjective or evidence-based notion of rationality rather than the objective or fact-based notion of ‘reasons’. Sometimes the evidence can be misleading, so that what seems best is not really so. In such cases, we may say that one rationally ought to do what seems best, given the available evidence. But due to one's ignorance of the facts, one would not be doing what one actually has most reason to do. Though one couldn't know it, some alternative action would have been better in fact.

This raises the question of what to do when reasons and rationality diverge. Suppose that someone ought, in the reason-implying sense, to X, but that they rationally ought to Y. Which takes precedence? What is it that they really ought to do? There is some risk of turning this into a merely terminological dispute. I am not concerned with the meaning of the word ‘ought’, or which of the previous two senses has greater claim to being the “true meaning” of the word. But we can make some substantive observations here. In particular, I think that the reason-involving sense of ‘ought’ is arguably the more fundamental normative concept. This is because it indicates the deeper goal, or what agents are ultimately seeking.

The purpose of deliberation is to identify the best choice, or reach the correct conclusion. In practice, we do this by settling on what seems to us to be best. But we do not think that the appearances have any independent force, over and above the objective facts. We seek to perform the best action, not merely the best-seeming one.[5] Of course, from our first-personal perspective we cannot tell the two apart. That which seems best to us is what we take to truly be best. Belief is, in this sense, “transparent to truth”. Because our beliefs all seem true to us, the rational choice will always seem to us to also be the best one.[6] We can thus take ourselves to be complying with the demands of both rationality and fact-based reasons. Nevertheless, it is the latter that we really care about.

This is especially clear in epistemology. We seek true beliefs, not justified ones. Sure, we would usually take ourselves to be going wrong if our beliefs conflicted with the available evidence. Such conflict would indicate that our beliefs were likely false. But note that it is the falsity, and not the mere indication thereof, that we are ultimately concerned with. More generally, for any given goal, we will be interested in evidence that suggests to us how to attain the goal. We will tend to be guided by such evidence. But this does not make following the evidence itself our ultimate goal. Ends and evidence are intimately connected, but they are not the same thing. Normative significance accrues in the first instance to our ends, whereas evidence is merely a means: we follow it for the sake of the end, which we know not how else to achieve. Applied to the particular case of reasons and rationality, then, it becomes clear that the reasons provide the real goal, whereas rationality is the guiding process by which we aim to achieve it. Since arriving at the intended destination is ultimately more important than faithfully following the guide, we may conclude that the reason-implying sense of ‘ought’ takes normative precedence. I will use this as my default sense of ‘ought’ in what follows.

Indirect Utilitarianism and Blameless Wrongdoing

Act Utilitarianism is the thesis that we ought to act in such a way as to maximize the good. Paradoxically, it is likely the case that if people tried to act like good utilitarians, this would in fact have very bad consequences. For example, authorities might engage in torture or frame innocent persons whenever they believed that doing so would cause more good than harm. Such beliefs might often be mistaken, however, and with disastrous consequences. Let us suppose that attempting to directly maximize utility will generally backfire. Utilitarianism then seems to imply that it is wrong to be a utilitarian. But the conclusion that utilitarianism is self-defeating only follows if we fail to distinguish between criteria of rightness and decision procedures.[7]

We typically conceive of ethics as a practically oriented field: a good moral theory should be action-guiding, or tell us how to act. So when utilitarianism claims that the right action is that which maximizes utility, it is natural for us to read this as saying that we should try to maximize utility. But utilitarianism as defined above does not claim that we ought to try to maximize utility. Rather, it claims that we should achieve this end. If one were to try but fail, then one's action would be wrong, according to the act-utilitarian criterion. This seems to be in tension with the general principle, introduced above, that we rationally ought to aim at the good. The utilitarian criterion instead tells us to have whatever aims would be most successful at attaining the good. This is not necessarily the same thing. The distinction will be clarified in the section on ‘object- and state-based modes of assessment’, later in this essay. For now, simply note that the best consequences might result from a steadfast commitment to human rights, say, and a strong aversion to violating them even if doing so appears expedient. In this case, the utilitarian criterion tells us that we should inculcate such anti-utilitarian practical commitments.

This indicates a distinction between two levels of normative moral thought: the practical and the theoretical.[8] Our practical morality consists of those principles and commitments that guide us in our everyday moral thinking and engage our moral emotions and intuitions. This provides our moral decision procedure. It is often enough to note that an action would violate our commitment to honesty, for instance, to settle the question of whether we should perform it. This is not the place for cold calculation of expected utilities. They instead belong on the theoretical level. We wish to determine which of our intuitive practical principles and commitments are well-justified ones. And here we may appeal to indirect utilitarianism to ground our views. Honesty is good because (we may suppose) being honest will do a better job of making the world a better place than would being a scheming and opportunistic direct utilitarian. The general picture on offer is this: we use utility as a higher-order criterion for picking out the best practical morality, and then we live according to the latter. Maximizing utility is the ultimate goal, but we do well to adopt a more reliable indirect strategy – and even other first-order “goals” – in order to achieve it.[9]

What shall we say of those situations where the goal and the strategy conflict? Consider a rare case wherein torturing a terrorist suspect really would have the best consequences. Such an act would then be right, according to the utilitarian criterion. Yet our practical morality advises against it, and ex hypothesi we ought to live according to those principles. Does this imply a contradiction: the right action ought not to be done? Only if we assume a further principle of normative transmission:

(T) If you ought to accept a strategy S, and S tells you to X, then you ought to X.

This is plausible for the rational sense of ‘ought’, but not the reason-involving sense that I am using here. We might have most reason to adopt a strategy – because it will more likely see us right than any available alternative – without thereby implying that the strategy is perfect, i.e. that everything it prescribes really is the objectively best option. S might on occasion be misleading, and then we could have more reason to not do X, though we remain unaware of this fact. So we should reject (T), and accept the previously described scenario as consistent. To follow practical morality in such a case, and refrain from expedient torture, would constitute what Parfit calls “blameless wrongdoing”.[10] The agent fails to do what they have most moral reason to do, so the act is wrong. But the agent herself has the best possible motives and dispositions, and could say, “Since this is so, when I do act wrongly in this way, I need not regard myself as morally bad.”[11]

Parfit’s solution may be clarified by appealing to my earlier distinction between local and global rationality. Our ‘local’ assessment looks at the particular act, and condemns it for sub-optimality. The ‘global’ perspective considers the agent as a whole, with particular concern for the long-term outcomes obtained by consistent application of any given decision-procedure. From this perspective, the agent is (ex hypothesi) entirely praiseworthy. The apparently conflicting judgments are consistent because they are made in relation to different standards or modes of assessment. I have illustrated this with the example of indirect utilitarianism, but the general principle will apply whenever some end is best achieved by indirect means. More generally than “blameless wrongdoing”, we will have various forms of (globally) optimal (local) sub-optimality.

Meta-coherence and Essential Byproducts

The above discussion focuses on the reason-involving sense of ‘ought’. Let us now consider the problem in terms of what one rationally ought to do. Rationality demands that we aim at the good, or do what seems best, i.e. maximize expected utility. But the whole idea of the indirect strategy is to be guided by reliable rules rather than direct utility calculations. One effectively commits to occasionally acting irrationally (in the “local” sense), though it is rational – subjectively optimal – to make this commitment. Parfit thus calls it “rational irrationality”.[12] But we may question whether expected utility could really diverge from the reliable rules after all.

Sometimes we may be in a position to realize that our initial judgments should be revised. I may initially be taken in by a visual illusion, and falsely believe that the two lines I see are of different lengths. Learning how the illusion worked would undercut the evidence of my senses. I would come to see that the prima facie evidence was misleading, and the belief I formed on its basis likely false. Principles of meta-coherence suggest that it would be irrational to continue accepting the appearances after learning them to be deceptive, or more generally to hold a belief concurrently with the meta-belief that the former is unjustified or otherwise likely false.[13] This principle has important application to our current discussion.

We adopt the indirect strategy because we recognize that our direct first-order calculations are unreliable. The over-zealous sheriff might think that torturing a terrorist suspect would have high expected utility. But if he recalls his own unreliability on such matters, he should lower the expected utility accordingly. As a good indirect utilitarian, he believes that in situations subjectively indiscernible from his own, the best results will generally be obtained by respecting human rights and following a strict “no torture” policy. Taking this higher-order information into account, he should revise his earlier judgment and instead reach the all-things-considered conclusion that refraining from torture maximizes expected utility even for this particular act. This seems to collapse the distinction between local and global rationality. When all things are considered, the former will come to conform to the latter.[14]
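The sheriff's higher-order revision can be illustrated with a toy expected-utility calculation (all the numbers here are invented for illustration; the essay specifies only that such first-order judgments are unreliable and that mistakes are disastrous):

```python
# A sketch of meta-coherent revision. The sheriff's first-order
# calculation says torture would have positive expected utility, but his
# track record shows such judgments rarely pan out. Factoring in his own
# unreliability reverses the verdict.
p_calculation_correct = 0.1   # assumed: such judgments are right only 10% of the time
u_if_correct = 50             # assumed first-order estimated gain
u_if_mistaken = -500          # assumed disaster when the judgment is wrong

revised_eu = (p_calculation_correct * u_if_correct
              + (1 - p_calculation_correct) * u_if_mistaken)
print(revised_eu)   # negative: refraining now maximizes expected utility
```

Once the unreliability is priced in, the all-things-considered expected utility of torturing goes negative, so the locally rational verdict coincides with the indirect strategy's "no torture" rule.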

This will not always be the case, however. A crucial feature of the present example is that one can consciously recognize the ultimate goal at the back of their mind, even as they employ an indirect strategy in its pursuit. But what if the pursuit of some good required that we make ourselves more thoroughly insensitive to it? Jon Elster calls such goods “essential byproducts”, and examples might include spontaneity, sleep, acting unselfconsciously, and other such mental absences.[15] Such goods are not susceptible to momentary rational pursuit. Higher-order considerations are no help here: we cannot achieve these goods while intentionally following indirect strategies that we consider more reliable. Rather, to achieve them we must relinquish any conscious intention of doing so. As we relax and begin to drift off to sleep, we cannot concurrently conceive of our mental inactivity as a means to this end. One cannot achieve a mental absence by having it “in mind” in the way required for the means-ends reasoning I take to be constitutive of rationality. In the event of succeeding, one could no longer be locally rational in their pursuit of the essential byproduct, for they would not at that moment be intentionally pursuing it at all.

Nevertheless, there remains an important sense in which a person is perfectly rational to have their momentary selves abdicate deliberate pursuit of these ends. If we attribute the goal of nightly sleep to the whole temporally extended person, then this abdication is precisely what sensible pursuit of the goal entails. In this sense, we can understand the whole person as acting deliberately even when their momentary self does not. So the distinction is upheld: global rationality recommends that we simply give up on trying to remain locally rational when we want to get some rest.

Object- and State-based modes of assessment

Oddly enough, even local rationality recommends surrendering itself in such circumstances. From the local perspective of the moment, pursuit of the goal is best advanced by ensuring that one’s future self refrains from such deliberate pursuit. Does this mean that one rationally should cease to value and pursue the good? The puzzle arises because mental states are subject to two very different modes of assessment: one focusing on the object of the mental state, and the other focusing on the state itself.[16] Suppose an eccentric billionaire offers you a million dollars for believing that the world is flat. The object of belief, i.e. the proposition that the world is flat, does not merit belief. But this state of belief would, in such a case, be a worthwhile one to have. In this sense we might think there are reasons for (having the state of) believing, which are not reasons for (the truth of) the thing believed.[17] It seems plausible that desire aims at value in much the same way as belief aims at truth. Hence, indications of value could provide object-based reasons for intention or desire – much as indications of truth provide object-based reasons for belief – whereas the utility of having the desire in question could provide state-based reasons for it.[18] This is the difference between an object’s being worthy of desire, and a desire for the object being a state worth having.

There are various theories about what reasons we have for acting, and hence what objects merit our pursuit. For example, we may call “Instrumentalism” the claim that we have reason to fulfill our own present desires, whatever they may be. Egoism claims that we have most reason to advance our own self-interest. And Impartialism says that we have equal reason to advance each person’s interests. For any such account of reasons, we can pair it with a corresponding account of object-based local rationality, based on the following general schema:

(G) Rationality is a matter of pursuing the good, i.e. being moved by the appearance of those facts ____ that provide us with reasons for action.

Let us say that S has a quasi-reason to X, on the basis of some non-normative proposition p, when the following two conditions are satisfied: (i) S believes that p; and (ii) if p were true then this would provide a reason for S to X. We may then understand (G) as the claim that one rationally ought to do what one has most quasi-reason to do.
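The two-condition definition can be rendered as a simple predicate (the belief set and the conditional-reason relation below are invented stand-ins, reusing the essay's earlier rock example):

```python
# A minimal formalization of the quasi-reason schema: S has a
# quasi-reason to X on the basis of p iff (i) S believes that p, and
# (ii) p, if true, would provide S a reason to X.
def has_quasi_reason(agent_beliefs, would_provide_reason, p, act):
    return p in agent_beliefs and would_provide_reason(p, act)

# Hypothetical example: the agent believes a rock is incoming, and that
# fact (if true) would provide a reason to duck.
beliefs = {"a rock is flying at my head"}

def would_provide_reason(p, act):
    return (p, act) == ("a rock is flying at my head", "duck")

print(has_quasi_reason(beliefs, would_provide_reason,
                       "a rock is flying at my head", "duck"))   # True
# No belief, no quasi-reason -- even if the rock really is incoming:
print(has_quasi_reason(set(), would_provide_reason,
                       "a rock is flying at my head", "duck"))   # False
```

The second call illustrates how quasi-reasons track the subjective side: the objective reason (the incoming rock) can exist without any corresponding quasi-reason when condition (i) fails.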

The different theories posit different reasons, so different quasi-reasons, and hence different specifications of local rationality in this sense. For example, according to Egoism, agents are locally rational insofar as they seek to advance their own interests. There is a sense in which the theory thereby claims this to be the supremely rational aim.[19] But let us suppose that having such an aim would foreseeably cause one’s life to go worse, as per “the paradox of hedonism”.[20] Egoism then implies that we would be irrational to knowingly harm ourselves by having this aim. This conclusion seems to contradict the original claim that this aim is “supremely rational”. The theory seems to be not merely self-effacing, but downright inconsistent.[21]

The distinction between object- and state-based assessments may help resolve this problem. We might say that an aim embodies rationality in virtue of its object, in that it constitutes supreme sensitivity to one’s quasi-reasons. Or the aim might be recommended by rationality, in the sense that one’s quasi-reasons tell one to have this aim, in virtue of the mental state itself. As before, the apparent incoherence can be traced to the conflation of two distinct modes of assessment. The aforementioned theories should be interpreted as claiming that their associated aims supremely embody rationality, even though it might not be rationally recommended to embody rationality in such a way. This reflects the coherent possibility that something might be desirable – worthy of desire – even if the desire itself would, for extrinsic reasons, be a bad state to have.

It is worth noting that this distinction appears to hold independently of the local/global distinction. We might, for example, imagine a good that would be denied to anyone who ever entertained it as a goal. If one sought it via the standard “globally rational” method of preventing one’s future momentary selves from deliberate “locally rational” pursuit, it would already be too late. There is no rational way at all, on any level, to pursue the good. Still, being of value, the good might merit pursuit. It might even provide reasons of sorts, even if one could never recognize them as such. (For example, one would plausibly have reason not to entertain the good as a goal. But one could not recognize this reason without thereby violating it, for it would only move an agent who sought the very goal it warns against.) So although the object/state distinction may recommend a shift from local to global rationality, it further establishes that even the latter may, in special circumstances, be disadvantageous. Reasons and rationality may come apart, even when no ignorance is involved, because it may be best to achieve a good without ever recognizing it as such. This would provide reasons that elude our rational grasp, being such that we ought to act unwittingly rather than by grasping the reason that underlies this very ‘ought’-fact.

Global Rationality

We have seen how various distinctions, including that between the local and global levels of rationality, can help us make sense of the indirect pursuit of goods. If we know our first-order judgments to be unreliable, then meta-coherence will lead us to be skeptical of those judgments. Indirect utilitarianism stems from recognizing that expected utility is better served by instead following a more reliable – globally optimal – strategy, even if this at times conflicts with our first-order judgments of expedience. Global rationality paves the way for utilitarian respect for rights, and meta-coherence carries it over to the local level. Essential byproducts highlight the distinction, as we may understand such a goal as being rationally pursued at the level of the temporally extended person, but not at the level of every momentary stage or temporal part. Although the object/state distinction implies that even global rationality may be imperfect, the preceding cases suggest that we would do well at least to prize the global perspective over the local one. I now want to support this conclusion by considering a further class of problems that could be fruitfully analyzed as pitting the unified agent against their momentary selves.

Consider Newcomb’s Problem:[22] a highly reliable predictor presents you with two boxes, one containing $1000, and the other with contents unknown. You are offered the choice of either taking both boxes, or else just the unknown one. You are told that the predictor will have put $1,000,000 in the opaque box if she earlier predicted you would pick only that; otherwise she will have left it empty. Either way, the contents are now fixed. Should you take one box or both? From the momentary perspective of local rationality, the answer seems clear: the contents are fixed, it’s too late to change them now, so you might as well take both. Granted, one would do better to be the sort of person who would pick only one box. That is the rationally recommended dispositional state. But taking both is the choice that embodies rationality, from this perspective. This reasoning predictably leads to a mere $1000 prize. Suppose one instead adopted a more global perspective, giving weight to the kind of reasoning that, judging from the timeless perspective, one wants one’s momentary stages to employ. The globally rational agent is willing to commit to being a one-boxer, and so will make that choice even when it seems locally suboptimal. This predictably leads to the $1,000,000 prize, which was unattainable for the locally rational agent.
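The two dispositions' prospects can be made vivid with a quick expected-value sketch (the 99% predictor accuracy is an assumed figure; the puzzle requires only that the predictor be highly reliable):

```python
# Expected dollars for an agent whose disposition the predictor
# forecasts correctly with probability `accuracy`.
def expected_payoff(strategy, accuracy=0.99):
    if strategy == "one-box":
        # Opaque box contains $1,000,000 iff the predictor foresaw one-boxing.
        return accuracy * 1_000_000
    if strategy == "two-box":
        # Opaque box is usually empty; the visible $1,000 is guaranteed.
        return (1 - accuracy) * 1_000_000 + 1_000
    raise ValueError(strategy)

# One-boxers expect ~$990,000; two-boxers ~$11,000.
print(expected_payoff("one-box"), expected_payoff("two-box"))
```

The disposition, not the in-the-moment choice, is what the predictor tracks, which is why the globally rational agent's commitment to one-boxing pays off.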

Similar remarks apply to Kavka’s toxin puzzle.[23] Suppose that you would be immediately rewarded upon forming the intention to later drink a mild toxin that would cause you some discomfort. Since you will already have received your reward by then, there would seem no reason for the locally rational agent to carry out their intention. Recognizing this, they cannot even form the intention to begin with. (You cannot intend to do something that you know you will not do.) Again we find that local rationality disqualifies one from attaining something of value. The globally rational agent, in contrast, is willing to follow through on earlier commitments even in the absence of local reasons. He wishes to be the kind of person who can reap such rewards, so he behaves accordingly. As Julian Nida-Rumelin writes: “It is perfectly rational to refrain from point-wise optimization because you do not wish to live the life which would result.”[24]

In both these cases, the benefits of global rationality require that one be disposed to follow through on past commitments. One must tend to recognize one’s past reasons as also providing reasons for one’s present self. This allows one to overcome problems, such as the above, which are based on a localized object/state distinction.[25] But occasional violation of this disposition might allow one to receive the benefits without the associated cost. (One might receive the reward for forming the sincere intention to drink the toxin, only to later surprise oneself by refusing to drink it after all.) So let us now consider an even stronger sort of case that goes beyond the object/state distinction and hence demands more than the mere disposition of global rationality. Instead, the benefits will accrue only to those who follow through on their earlier resolutions.[26]

Pollock’s Ever Better Wine improves with age, without limit.[27] Suppose you possess a bottle, and are immortal. When should you drink the wine? Local rationality implies that you should never drink it, for at any given time you would do better to postpone it another day. But to never drink it at all is the worst possible result! Or consider Quinn’s Self-Torturer, who receives $10,000 each time he increases his pain level by an indiscernible increment.[28] It sounds like a deal worth taking. But suppose that the combined effect of a thousand increments would leave him in such agony that no amount of money could compensate. Because each individual increment is – from the local perspective of the moment – worth taking, local rationality will again lead one to the worst possible result. A good result is only possible for agents who are willing to let their global perspective override local calculations. The agent must make in advance a rational resolution to stop at some stage n, even though from the local perspective of stage n he would do better to continue on to stage n+1.
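A toy payoff model makes the Self-Torturer's predicament concrete (the cubic pain-cost curve is my own assumption for illustration; the essay fixes only the $10,000 per increment and the catastrophic endpoint):

```python
# Net welfare after k increments: money earned minus a convex pain cost
# that is negligible at first but ruinous near k = 1000.
def welfare(k):
    return 10_000 * k - 0.02 * k ** 3

# The resolute agent surveys the whole series in advance and picks the
# stage n that maximizes overall welfare.
best_n = max(range(1001), key=welfare)

# Point-wise reasoning never stops: each increment's pain is
# indiscernible while its $10,000 is not, so the local agent drifts
# all the way to k = 1000 -- the worst outcome on offer.
print(best_n, welfare(best_n), welfare(1000))
```

Under these assumed numbers the resolute agent stops around k = 408 with welfare in the millions, while the point-wise optimizer ends deep in the red; the qualitative contrast is all the argument needs.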

It seems clear that the global perspective is rationally superior. The agent can foresee the outcomes of his possible choices. If he endorses the local mode of reasoning then he will never have grounds to stop, and so will end up stuck with the worst possible outcome. It cannot be rational to accept this when other options are open to him. If he is instead resolute and holds firm to the choice – made from a global or timeless perspective – to stop at stage n, then he will do much better. Yet one might object that this merely pushes the problem back a step: how could one rationally resolve to choose n rather than n+1 in the first place?

The problem of Buridan’s ass, caught between two equally tempting piles of hay, shows that rational agents must be capable of making arbitrary decisions.[29] It cannot be rational for the indecisive ass to starve to death in its search for the perfect decision. Indeed, once cognitive costs are taken into account, it becomes clear that all-things-considered expected utility is better served by first-order satisficing than attempted optimizing.[30] (“The perfect is the enemy of the good,” as the saying goes.) Applying this to the above cases, we should settle on some n, any n, that is good enough. Once we have made such a resolution, we can reject the challenge, “why not n+1?” by noting that if we were to grant that increment then we would have no basis to reject the next ones, ad infinitum, and that would lead us to the worst outcome.
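The satisficing move can be sketched in the same spirit (the welfare model and the aspiration level are assumptions for illustration; the point is only that any good-enough n halts the regress):

```python
# Toy model: $10,000 per step minus a convex pain cost, as in a
# Self-Torturer-style series.
def welfare(k):
    return 10_000 * k - 0.02 * k ** 3

# A satisficer settles on the first stage whose welfare clears an
# (arbitrary) aspiration level, rather than hunting for the optimum.
def satisfice(aspiration=1_000_000):
    for k in range(1001):
        if welfare(k) >= aspiration:
            return k
    return 0

n = satisfice()
# Having resolved on n, the agent rejects "why not n+1?" wholesale:
# granting one increment would license granting them all.
print(n, welfare(n))
```

That the chosen n is arbitrary within the good-enough range is no objection; as with Buridan's ass, the capacity to settle on some such point is exactly what rationality requires here.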


The standard picture of rationality is thoroughly atomistic. It views agents as momentary entities, purely forward-looking from their localized temporal perspective. In this essay, I have presented and prescribed an alternative, more holistic view. I propose that we instead ascribe agency primarily to the whole temporally extended person, rather than to each temporal stage in isolation. This view allows us to make sense of the rational pursuit of essential byproducts, since we may ascribe deliberate purpose to a whole person even if it is absent from the minds of some individual stages. Moreover, global rationality sheds light on the insights of indirect utilitarianism, though meta-coherence allows that these conclusions may also become accessible from a temporally localized perspective. Finally, I have argued that there are cases where reasoning in the locally “rational” manner of point-wise optimization leads to disaster. Such disaster can be avoided if the agents embrace my holistic conception of rational agency, acting only in ways that they could endorse from a global or timeless perspective. Persons are more than the sum of their isolated temporal parts; if we start acting like it then we may do better than traditional decision theorists would think rationally possible.


[1] Harsanyi, p.122, seems to be getting at a similar idea in his discussion of the “normal mode” of playing a game.
[2] Of course we can imagine special circumstances whereby one’s holding of a belief would itself be the reason-giving ‘fact’ in question. If I am sworn to honesty, then the fact that I believe that P may provide a reason for me to assert that P.
[3] I leave open the question of whether such value is impersonal or agent-relative.
[4] Parfit (ms), p.21. I also follow Parfit’s use of the term “rationally ought”, below.
[5] Here I am indebted to discussion with Clayton Littlejohn.
[6] Cf. Kolodny’s “transparency account” of rationality’s apparent normativity.
[7] This is a familiar enough distinction, see e.g. the Stanford Encyclopedia entry on ‘Rule Consequentialism’, [accessed 21/6/06].
[8] The following is strongly influenced by R.M. Hare.
[9] Hare, p.38.
[10] Parfit (1987), pp.31-35. But cf. my section on ‘meta-coherence’ below.
[11] Ibid., p.32. (N.B. Here I quote the words that Parfit attributes to his fictional agent ‘Clare’.)
[12] Ibid., p.13.
[13] I owe the idea of “meta-coherence” to Michael Huemer. See, e.g.: (accessed 16/6/06)
[14] Two further points bear mentioning: (1) We might construct a new distinction in this vicinity, between prima facie and all things considered judgments, where the former allows only first-order evidence, and the latter includes meta-beliefs about reliability and such. This bears some relation to ‘local’ vs. ‘global’ considerations, and again I think the latter deserves to be more widely recognized. Nevertheless, I take it to be distinct from the “momentary act vs. temporally-extended agent” form of the local/global distinction, which this essay is more concerned with. (2) Even though a meta-coherent local calculation should ultimately reinforce the indirect strategy, that’s not to say that one should actually carry out such a decision procedure. The idea of indirect utilitarianism is instead that one acts on the dispositions recommended by our practical morality, rather than having one “always undertake a distinctively consequentialist deliberation” [Railton, p.166]. So my local/global distinction could apply to indirect utilitarianism after all.
[15] Elster, pp.43-52.
[16] Cf. Parfit (ms), p.30.
[17] Musgrave, p.21.
[18] This evidence-based notion is another common use of the term “reasons”. But in light of my earlier remarks, we should instead hold that objective reasons are provided by the ultimate facts, not mere “indications” thereof. The more subjective or evidence-based “reasons” might instead be conceptually tied to rationality, as per my “quasi-reasons” below.
[19] Indeed, Parfit (1987) takes this as the “central claim” of the self-interest theory.
[20] Railton, p.156.
[21] Cf. Dancy, p.11.
[22] Nozick, p.41.
[23] Kavka, pp.33-36.
[24] Nida-Rumelin, p.13.
[25] From a local perspective, the state of intending is worth having, even though the object (e.g. the act of drinking Kavka’s toxin) does not in itself merit intending. The problem arises because we regulate our mental states on the basis of object-based reasons alone (as seen by the impossibility of inducing belief at will). The global perspective overcomes this by treating past reasons for intending as present reasons for acting, and hence transforming state-based reasons into object-based ones.
[26] For more on the importance of rational resolutions, see McClennen, pp.24-25.
[27] Sorensen, p.261.
[28] Quinn, pp.79-90.
[29] Sorensen, p.270.
[30] Weirich, p.391.


Dancy, J. (1997) ‘Parfit and Indirectly Self-defeating Theories’ in J. Dancy (ed.) Reading Parfit. Oxford : Blackwell.

Elster, J. (1983) Sour Grapes. Cambridge : Cambridge University Press.

Hare, R.M. (1981) Moral Thinking. Oxford : Clarendon Press.

Harsanyi, J. (1980) ‘Rule Utilitarianism, Rights, Obligations and the Theory of Rational Behavior’ Theory and Decision 12.

Kavka, G. (1983) ‘The toxin puzzle’ Analysis, 43:1.

Kolodny, N. (2005) ‘Why Be Rational?’ Mind, 114:455.

McClennen, E. (2000) ‘The Rationality of Rules’ in J. Nida-Rumelin and W. Spohn (eds.) Rationality, Rules, and Structure. Boston : Kluwer.

Musgrave, A. (2004) ‘How Popper [Might Have] Solved the Problem of Induction’ Philosophy, 79.

Nida-Rumelin, J. (2000) ‘Rationality: Coherence and Structure’ in J. Nida-Rumelin and W. Spohn (eds.) Rationality, Rules, and Structure. Boston : Kluwer.

Nozick, R. (1993) The Nature of Rationality. Princeton, N.J. : Princeton University Press.

Parfit, D. (ms.) Climbing the Mountain [Version 7/6/06].

Parfit, D. (1987) Reasons and Persons. Oxford : Clarendon Press.

Quinn, W. (1990) ‘The puzzle of the self-torturer’ Philosophical Studies, 59:1.

Railton, P. (2003) ‘Alienation, Consequentialism, and the Demands of Morality’ in his Facts, Values and Norms. New York : Cambridge University Press.

Sorensen, R. (2004) ‘Paradoxes of Rationality’ in A. Mele and P. Rawling (eds.) The Oxford Handbook of Rationality. New York : Oxford University Press.

Weirich, P. (2004) ‘Economic Rationality’ in A. Mele and P. Rawling (eds.) The Oxford Handbook of Rationality. New York : Oxford University Press.


  1. What are the key assumptions here?

    I would guess

    1) that there is basically equal information at all stages:

    "no matter how disadvantageous this may seem from the particular perspective of our local moment."

    makes sense when comparing that moment to a time when you contemplated it for a long time, unless some new, extremely important information has come up (for example, "you will die if you follow the rule").

    2) each of our momentary stages abdicates some degree of rational autonomy.

    Is "the whole" a decision-making body? Or in practice will it be something else making the decision? (e.g. a past you, in a rule-based way; maybe someone else, in a consequentialist way?)

  2. Re: #1, Yeah, the cases I discuss involve no new information, to avoid unnecessary complication. For the record, I'd say the global perspective is flexible in that it can endorse local changes on the basis of new/superior information. So that's no problem. What my view rejects is the local overriding of an earlier resolution when there is no new reason for doing so. (The self-torturer knows all along that after n increments he will be tempted to progress to n+1, since the "extra pain" will be subjectively indiscernible: from the local perspective of n it will seem like a benefit with no cost. Nevertheless, reflection from a global perspective leads him to reject that reasoning, and insist that each of his momentary stages likewise rejects it. This leaves open the possibility that he might endorse a change of plans on the basis of some other, unforeseen, line of reasoning.)

    Your #2 raises some interesting questions. Our actual decisions are surely made within particular moments, by our momentary stages. We can't literally transcend time. Still, I think our momentary stages can make decisions from a timeless perspective, by adopting a "global point of view". (Compare talk of adopting the "moral point of view", wherein one abstracts from their self-identity and interests, instead considering the interests of all people equally. There's a certain kind of "transcendence" involved in such imaginative projects.) So our decisions are made in time, but on the basis of reasons which are not so constrained. Hence, strictly speaking, the decisions are made by our momentary stages. But it may be illuminating to interpret them as coming from the "whole person" more broadly (since that is the perspective which the decision-maker is attempting to embrace), so long as we're clear that such a way of thinking is ultimately merely metaphorical.

  3. 1) Have you considered what happens if you assume that attempting to directly maximize utility might result in maximizing utility? I.e. you have to neutralize the other half of the hypothetical as well as proving your half.

    2) Are you saying:
    A) Aiming at things generally backfires – in this case I am concerned that the fabric of how we view and understand the world falls apart, so I doubt you are saying that

    B) Something like "utility is an almost optimal system - as such it is possible (maybe likely) that doing anything will upset that system"

    C) Global reasoning is equal to local reasoning except in the cases where it is better. Surely there would be some cost of over-planning or some benefit to local reasoning.

    3) What can you define as a key indirect objective (as you have with torture)? Is not aiming at good things a primary objective?

    Regarding the visual trick scenario - what is it that clearly separates the "visual information" from the knowledge that that information is not reliable?

    Re sleeping - is, let's say, calming my thoughts and slowing my breathing an indirect strategy for trying to sleep? Or a direct one? How do you tell the difference?

    3) Does this relate to idealism vs. pragmatism – i.e. do you optimize your decisions for an ideal scenario or do you optimize them for a pragmatic scenario. You seem to be arguing for the former. Do you support that?

    4) "blameless wrongdoing"
    I think in utilitarianism the two terms should be even more separate than you seem to propose. "wrongdoing" should trump "blameless" at a moral level (i.e. morality is defined by utility) even if "blameless" trumps "wrongdoing" at a practical one (i.e. we set rules designed to have certain effects). Elevating blameless to a moral principle seems deontological but you seem in danger of doing that.

    Is that intentional?

    5) Re Newcomb's paradox:
    You could equally say you have given us a counterexample here, since you might say that your solution is the wrong one (that is why it is called a paradox!).
    Hidden in here seem to be some assumptions about how you can know certain things at different levels. I.e. that you could globally know something potentially impossible (such as the fact that the predictor is almost always right regardless of any other information), but can locally know that taking the two boxes is better than one.

  4. I posted mine before I had seen your reply, so apologies for any non sequiturs - I also tried to write more impartially, as I would going over a friend's paper as opposed to debating on a blog. I hope that shows!

    > What my view rejects is the local overriding of an earlier resolution when there is no new reason for doing so.

    Hmm, it makes a lot more sense that way! I can conceive of an exception (where for whatever reason you by default analyze much more deeply the second time) but it's getting less likely. I can even conceive that maybe one should resist rethinking concepts on principle (thus avoiding the exception), although that says some things about how you view the world.

    > So long as we're clear that such a way of thinking is ultimately merely metaphorical.

    OK I'm sympathetic with this response too.
    I think it detracts a little from the ability to just leap to practical conclusions but I can see how you might approach a more global perspective and that sounds like a reasonable goal.

  5. Yup, I appreciate the friendly feedback! To respond...

    1) The "other half" seems straightforward: if you can best achieve a goal by aiming straight for it, then do so! (I trust your timeless self will not object.)

    2) Yup, I meant option "C". If local optimization would also prove globally optimal (as one would often expect), then the global perspective will endorse it. That's not to say that one should always stop to think about such things -- most of the time in our everyday lives we shouldn't. But our dispositions should at least be potentially endorsable, were we to reflect on them.

    3a) By "indirect objective", do you mean what I called "rationally recommended aims"? I gave human rights as an example, but I'm not really concerned with the details here. That'd be an empirical question. This essay is more about general structural/conceptual issues.

    "what is it that clearly separates the "visual information" from the knowledge that that information is not reliable?"

    The former is first-order information, whereas the latter is second-order (info about info).

    Re: sleeping -- hard to say. If your actions are consciously performed as a means to the goal of getting to sleep, then that seems "direct" in an important sense. (But the act itself is "indirect" or non-basic: you are acting upon yourself to bring it about that you sleep.) I'm not sure that the terminology matters here though. Either way, note that you cannot achieve the goal of sleep while maintaining it in mind. (Even if your earlier stage was rationally pursuing sleep, the stage that actually succeeds surely isn't.) I think that's enough for my relevant arguments, since I mainly just used the example to point out that we may wish to "ascribe deliberate purpose to a whole person even if it is absent from the minds of some individual stages."

    3b) I'm not sure I understand this question. I'd say you should optimize your strategy/decisions for whatever scenario you're actually (expecting to be) faced with.

    4) Agreed. I was simply borrowing Parfit's term, which could just as well be "right wrongdoing" -- besides the label, I don't really talk about blame in the essay.

    5) Fair enough, "one man's modus ponens..." and all that. But I don't think knowledge differs between the levels. Say all of you knows that 2-boxing is locally rational. That needn't stop you from rationally choosing one-box because "you do not wish to live the life which would result" from being locally rational. (I can happily concede that example to anyone who remains unconvinced, however. It's a bit ambitious for me to try to overcome the state/object distinction as I do in footnote 25. The later examples avoid any such need, and are stronger in any case.)
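    To put some (purely illustrative) numbers on that choice: suppose the predictor is 99% accurate, with the standard $1,000,000 opaque-box and $1,000 transparent-box payoffs. (The accuracy figure is my stipulation, not part of the original puzzle.) A quick expected-value sketch:

```python
# Toy Newcomb expected-value sketch. The 0.99 predictor accuracy is an
# assumed figure for illustration; the payoffs are the standard ones.
ACCURACY = 0.99
MILLION = 1_000_000
THOUSAND = 1_000

def expected_payoff(one_box: bool) -> float:
    """Expected dollars, given how reliable the predictor is."""
    if one_box:
        # The opaque box contains the million iff the predictor
        # foresaw one-boxing, which happens with probability ACCURACY.
        return ACCURACY * MILLION
    # Two-boxing: the predictor usually foresaw it, so the million
    # is present only with probability (1 - ACCURACY).
    return (1 - ACCURACY) * MILLION + THOUSAND

# One-boxing expects about $990,000; two-boxing about $11,000.
```

    On those stipulated numbers the one-boxer expects roughly ninety times the two-boxer's payoff -- which is the cash value of not wanting "to live the life which would result" from local rationality.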

  6. By the way, I'd welcome any specific suggestions for parts of the essay that I should clarify. The two points in my first comment seem fairly vital, and perhaps should be more explicit in my essay. Anyway, thanks again for the probing questions!

  7. 2) As far as you take option C...

    Is it a reasonable assumption that it will be easy to substitute global reasoning for local reasoning when and only when it is superior? Will there either be transaction costs (you might argue the savings made by not engaging in local reasoning result in there being a net transaction-cost saving, of course) or a lack of information to make that call? (It may be hard to weigh global and local reasoning against each other in practice; is this true in none, some, most, or all cases?)

    Of course, pragmatically, it seems pretty irrational to ignore or entirely fail to utilize global reasoning in so far as one can conceive of a situation where it would be superior. So your position is pretty solid, although not making huge claims there.

    > 3b) I'm not sure I understand this question.

    To an extent it might miss the point in relation to your clarifications, but I would have thought that asking questions like "what should my consistent rules be" contemplated globally might, in practice, tend to result in idealistic (i.e. not so real world) positions.

    You might tend to simplify morality to make it easy to consider a wide range of possibilities consistently, and simplify assumptions about how others will act and so forth. The “you” who makes decisions locally might be less consistent and more pragmatic (all other things being equal). Not just because there is more information, but because you perform your analysis a little differently (which also could explain why you might torture etc.).

    This is in a sense a pragmatic question of psychology so this would be testable I presume and may be outside the scope of your discussion.


    I am aware at the moment that the essay is a little "bumpy" (I am sure there is a better term!) and if you try too hard to cover all the exits it might get too "bumpy".

  8. So, with Newcomb's problem, a globally optimal strategy would be to hire an assassin to kill you if you picked both boxes?

    I don't think that, by a force of will, people can commit to doing some particular action in the future, when at the time the action will *actually be* locally rational. It's a weakness of human psychology. If we could hack our brains, we could do that. Or, we can introduce other factors that make the globally optimal choice also be the locally rational one.

    As far as the wine example goes, I think it's really pretty silly. There is no such thing as an ever-improving wine, given human brain architecture. We can only register so much pleasure at once. Also, the problem really pushes buttons. Humans have a bias to discount future value in favor of present value. We would rather eat the ice cream now than ten minutes from now. So, if part of the problem is stipulating that the immortal doesn't have such a bias, then we're speculating on what's rational for a being who's more rational than us. Also, in any reasonable universe a certain amount of future value discounting will be rational, because of the chance of the wine being stolen, the immortal getting sucked into a black hole, etc.

  9. "I don't think that, by a force of will, people can commit to doing some particular action in the future, when at the time the action will *actually be* locally rational."

    I dispute that. We willfully bind ourselves with the sheer force of our past resolutions all the time. They're called "promises". So even if I'm not allowed to hire assassins or otherwise externally manipulate my incentives, I can beat the local paradoxes by having previously committed myself to reasoning only in ways that I could globally endorse. I will refrain from pointwise optimization, no brain-hacking required, because I have made a commitment to do just that. (Perhaps there is a sense in which you could take this to alter the locally optimal choice, by treating the "broken commitment" as a large cost. But it will only seem a cost to someone who embraces the holistic self-conception in the first place.) I think it's clear that someone really could reason as I recommend for the Self-Torturer and E-B Wine cases. My solutions are not psychologically impossible.

    As for your objections to E-B Wine, you appeal to contingencies which need not be shared by all possible rational agents. So an adequate general theory of rationality will need to cover such cases nonetheless. We can stipulate away such complications as risk of theft, etc. (This is standard fare for thought-experiments.) I don't see any problem for our "speculating" about what would be ideally rational for this immortal agent. I offered a specific answer, after all. If you think I'm mistaken there, you really need to engage my proposal on its merits, rather than simply asserting that it can't be done (else I'm inclined to respond "But look, I did it, right there!").

  10. "I dispute that. We willfully bind ourselves with the sheer force of our past resolutions all the time. They're called "promises"."

    It is rare that it is not locally rational to keep a promise, both because breaking promises leads to future problems dealing with the people to whom the promise was broken, and because human psychology makes it so people feel guilty about broken promises, and it's rational to avoid guilt. So, no, the binding force is *anything* but sheer force of will.

    In fact, sheer force of will is practically impotent in the face of other motivators. Only 5% of alcoholics can voluntarily quit (even in AA) without some sort of medication.

    Sure, you can make that sort of commitment. Why, just the other day, I committed myself to going to the gym every evening. I just talked to a friend who had committed to spend at least two hours each day on his dissertation. I believe he's averaging about half an hour.

    Now, I'm by no means saying that most commitments to global rationality would be defeated thusly. In fact, I'm sure there is quite a large number of them that are pretty easy to follow, once you have a good understanding of the costs and payoffs and such. (While it might be tempting, I think it's very reasonable to expect intelligence agents to be able to refrain from abusing civil liberties.) And so it's worth trying. But it's easily defeated in circumstances where stronger human desires (and there are many) come to bear. Humans are terrible at this.

    To the wine. Sure, we can stipulate no risk of theft. But it's not as immediately clear that there is a possible brain or software architecture that produces sentience and that can also meaningfully register an unbounded amount of pleasure. But we can also work around this: perhaps the wine quality approaches a finite limit as time approaches infinity. It's still ever-increasing; it's just not a linear increase.
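    For instance (with purely made-up numbers), a quality profile like the following is strictly increasing at every moment yet never exceeds a finite ceiling:

```python
import math

Q_MAX = 100.0  # an assumed finite ceiling on registrable quality

def quality(t: float) -> float:
    """Wine quality at time t: ever-improving, but bounded by Q_MAX."""
    # Exponential approach to the ceiling; the 1000-unit timescale
    # is arbitrary, chosen only for illustration.
    return Q_MAX * (1.0 - math.exp(-t / 1000.0))

# quality is monotonically increasing, yet quality(t) < Q_MAX
# for every finite t, so pleasure never needs to be unbounded.
```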

    But mainly, it's not clear that it's meaningful to talk about rational behavior in situations where some normally finite aspect is infinite. Perhaps rationality is only well defined for finite problems. I wouldn't have a problem stipulating this, because I don't believe that any rational agent will ever have to solve a problem involving concrete, infinite aspects.

  11. Put it this way. I think the wine paradox has about the same potential to shed light on philosophical problems as do the various time travel paradoxes. That is, minutely little.

    1. Is the main thrust of the essay to argue for Rule Utilitarianism over Act Utilitarianism? New arguments with an old purpose? Is Rule Utilitarianism pretty much what you mean by 'Global Rationality'?
    2. Which box are you saying the Globally Rational person will choose? I'm thinking from what you say that they pick only the opaque one, thus dealing themselves out of $1000. It would seem to me the best strategy would be to profess to be a Global Rationalist, but actually be lying, thus fooling her into putting the mil into the opaque box and getting the grand as a bonus. She will have thought of this, but this game is as much of a luck game for her as for you. From her point of view it is obviously to her advantage not to put the mil into the box. Then if you take it, she gets to save a mil *and* be right. But if you don't, and take either of the boxes on their own, she loses at most a grand and some face. The worst case for her is to put both the grand and the mil in there and you take both and she's both wrong and out of pocket to the maximum effect. The maximum psychological impact would probably be to take only the grand, and leave the empty box there for her to ponder, being both wrong and a grand out, despite having run the best strategy. Personally I would take both. As you say, the contents are fixed. Everything else is in the past.
    3. What is the toxin puzzle? Sounds like a drinking game. Personally I've drunk mild toxins with discomfort and eventually unconsciousness on many an occasion, with no reward other than the notoriety. Which was not inconsequential.
    4. The Even Better Wine example is not the least bit clear in its ideal outcome. The agent is, as you say, immortal. So they have the potential to own the best possible wine, which may be far more valuable than simply drinking it. But I imagine an immortal would have a pretty impressive cellar of vintage million-year-old wines to drink on special occasions, like their next 1000-year birthday. Why would they ever touch the absolutely oldest, most precious one? Only mortals would ever do something as crass as drinking it.
    5. The self torturer example sounds like daily life for most people. And I think the math is all wrong. A steady pain increment does not have a steady utility value. An increment that goes from bearable to unbearable has a lot more negative utility than $10,000. This is true from a global perspective *and* a local one. When the pain gets too much, you stop and take the cash. What's irrational about that, and how does it differ from the global strategy? The only out for you here I can see is that if you do your research you might be able to abstractly work out how much pain you can take before you get actual damage. And that might be more than you think. Then again, you could be very, horribly wrong, in which case the local strategy is actually better.

    Overall criticism:

    We have the exact same problem with Global Rationality as we have with Local Rationality. Any particular abstraction you care to define could actually turn out horribly wrong in some particular case. The abstractions are merely taking more cases into account, and the local ones are taking more factors into account. We still have no clear mechanism to make our abstractions other than countless examples, for which there are plenty of counterexamples.

    As an elucidation, consider chess. There are numerous abstractions of strategic principles, such as 'capture the centre' and 'avoid isolating or doubling pawns' and 'rapidly mobilize your forces'. But there are so many ways in which pursuit of these principles leads to defeat if you place the principle above the ultimate purpose of checkmating your opponent. Strong centres are defeated by strong flank attacks. King pressure alone can beat a massively outnumbered opponent, through forced moves. You can make further abstractions to save the grander ones, but you end up with a vast, vast web of them. And even then, you still get beaten by the chess computer which just counts through the moves and then violates every principle you've ever thought of to force a strange computer-like victory. And computers and their phenomenal counting ability can be beaten by chessmasters and their principles, coupled with inspired, imaginative moves. That is what makes chess interesting - it is analogous to life. Eventually it will not be, when a computer can actually examine every move.

    My point is that when choosing a chess move, the local considerations (what exact moves are possible here) are in conflict with global ones (what general chess principles are we following), and which should win is never clear. I think the same goes for moral choices. That is why both kinds of Utilitarianism, Rule and Act are not ideal systems. I do think they are very good systems, but they are not perfect since they have no clear mechanisms for conflict resolution. At any time, there are many choices possible, and even computer powered rationality can't guarantee optimal choice. It is too heavily reliant on estimations of probable outcomes that are really unknowns.

    Unless you can define the 'optimal strategy' by exhausting every move possible, you can never be sure your rules are good enough. With hindsight we can see where we went wrong, but that is no guarantee for future strategizing, since unknown events can occur.

    I may be misreading you. Let me know if I am. It seems to me that you equate the ultimate goal of Utilitarianism: "We should maximize utility" with one means to that goal - rule following. And while I can see that rule following is probably a better strategy than no strategy, it should not be confused with the goal itself. It is just another means, and individual cases can overturn rules at any moment.

  13. Yeah, I think I can agree with most of that. I hope the latter part of my section on "Reasons and Rationality" makes clear the distinction between the best strategy vs. the goal itself. I'm certainly not advocating blind rule-following. Rather, I presented two types of situations in which rule-following should be preferred:

    1) When we are in a poor epistemic position, so that the apparent "evidence" in individual cases is likely misleading. (Cf. my discussion of utilitarian torture.) We should not let individual cases "overturn rules at any moment" if we know in advance, or from a global perspective, that such an overturning is likely to be in error! (And if it's not likely in error, then the global perspective won't oppose it. It's not opposed to letting your judgment override rules of thumb in chess games, for instance. Stubborn rule-following is presumably NOT globally optimal at all in such a situation.)

    2) When local optimality logically precludes global optimality, as in the EBWine and Self-Torturer type cases.

    What the two types have in common is that if we consider our situation from a more objective or disengaged perspective, we will no longer endorse the form of reasoning that we were tempted to endorse from the local perspective. We will realise that if we were to endorse it, in that and subjectively relevantly similar circumstances, we would end up worse off. (Either due to evidential problems, or logical ones.) So to avoid that bad result, we must reject the tempting forms of reasoning that would lead to it.

    "An increment that goes from bearable to unbearable has a lot more negative utility than $10,000."

    The point of the case is that there is no "increment that goes from bearable to unbearable". Each individual increment is indiscernible. You simply can't tell the difference. It's only large collections of increments that are discernible.
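    A toy model may make the structure vivid (every number below is my own stipulation for illustration, not part of Quinn's case):

```python
STEPS = 1000          # dial settings available to the self-torturer
PAYMENT = 10_000      # dollars received per increment

def pain_cost(n: int) -> int:
    # Assumed convex aggregate pain: any single increment is
    # subjectively indiscernible, but the accumulated pain eventually
    # swamps the money. (The quadratic form is purely illustrative.)
    return 20 * n * n

def total_utility(n: int) -> int:
    """Money earned minus the full aggregate pain cost at setting n."""
    return n * PAYMENT - pain_cost(n)

# Local (point-wise) reasoning: each increment's extra pain falls below
# the discrimination threshold, so every step looks like $10,000 of
# pure gain -- the local optimizer turns the dial all the way up.
local_stop = STEPS

# Global reasoning: choose the stopping point with the best total outcome.
global_stop = max(range(STEPS + 1), key=total_utility)
```

    On these made-up numbers the point-wise optimizer ends with a heavily negative total, while the globally chosen stopping point leaves the agent well ahead. That is just the structure of the case: each step looks free, but the whole series is ruinous.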

    (As for the wine case, we're clearly meant to stipulate away any other possible values besides the taste.)

  14. Richard,

    1. I don't know when you could ever know you were in a poor epistemic position. Part of the misleading evidence is that the rule appears to have this exception which could overturn it. So how do you decide which to do? The rule says one thing and your local perception says another. Perhaps it is the rule that is wrong in this case? You talk as though the 'globally optimal' scenario is knowable. But even in simple, purely logical games like chess it isn't practically knowable (yet). At least not until we are near the end of the game. Then we can look back and think 'Hmmm, yes perhaps I should have violated my rule of never advancing the king's bishop's pawn during the opening. That led to a weakness on the king's side'. But even then you may be wrong unless you really can think through all the possibilities had you taken that course.

    2. I grasp that local optimality could be in error if you *define* it to be so, but that doesn't really convince me that it will be in any general cases. You really have to keep paring away reasonableness from both the EBWine and Self Torturer cases to get your conclusion.

    For instance, you say "The point of the case is that there is no "increment that goes from bearable to unbearable". Each individual increment is indiscernible.". That is probably true, in that you can't practically discern the exact moment that anything crosses any line, except in math. The *exact* moment. Which is a problem for global reasoning too. Even if you can work out a theoretically optimal amount of pain, how can you measure that you've reached it in practice? You're still going to have to use local sensation, measuring stuff or whatever your method is that is supposed to beat the subjective feeling of pain as a measure.

    In the EBWine case, you're saying that the choice of time to drink the wine is the solution to the problem of 'what is the largest finite number in an infinite set?'. Which is simply a malformed question. There is no such number. That is the definition of an infinite set, that for every finite number there is a larger one. So there is no best moment for the immortal to drink the wine. It's that simple. Any time you could pick, it would surely be even better a year later. The problem has no solution either from a local or a global perspective.

