1) Remember to submit a post to the Philosophers' Carnival by the end of this week.
2) If anyone reading this is going to be in Canberra next Tuesday morning, feel free to come along to ANU for my AAP talk on why the actual world is not a possible world. (It's less crazy than it sounds.) It's scheduled to be in room "Moran G009" from 10-10:55am. Kim Sterelny's talk is in a competing stream, so I don't expect too large an audience for mine. Since it's my first conference presentation, that may well be a good thing. Enter at your own risk ;-)
(Hmm, I wish that were purely a joke. Blogging's okay, since I can just pretend I'm talking to a general audience, and there's plenty that can fruitfully be said there. No-one expects much from the medium anyway. But claiming to have something worth saying to philosophers, who all know far more than I do? It suddenly seems awfully presumptuous...)
Thursday, June 29, 2006
Tuesday, June 27, 2006
Moral Philosophy Quiz
Here's another fun one (via Dr Freeride):
The list below is modified by your input. The results are scored on a curve. The highest score, 100, represents the closest philosophical match to your responses. This is not to say that you and the philosopher are in total agreement...
1. John Stuart Mill (100%)
2. Kant (93%)
3. Epicureans (80%)
4. Jeremy Bentham (76%)
5. Aquinas (73%)
6. Jean-Paul Sartre (69%)
7. Ayn Rand (67%)
8. Aristotle (67%)
9. Ockham (53%)
10. Prescriptivism (52%)
11. Stoics (37%)
12. St. Augustine (36%)
13. Spinoza (34%)
14. Nietzsche (28%)
15. Plato (27%)
16. Cynics (24%)
17. Thomas Hobbes (24%)
18. Nel Noddings (17%)
19. David Hume (16%)
Sunday, June 25, 2006
Boosting Blogger: two technical proposals
There are enough hacks out there that with a bit of tinkering you can actually turn Blogger into a half-decent platform. Nevertheless, there are two features in particular which I think could still use further improvement:
1) Recent Comments. The recent comments hack only picks up on comments from posts on the current page. It can also be a bit slow if there are a large number of comments to sort through. Both these problems would be avoided if we could create an appropriate RSS feed for comments. I have no idea why Blogger don't provide this themselves (surely it wouldn't be that hard for them to do?), but it seems like something external users could create for themselves. We receive email notification of each new comment, after all, and should be able to construct the needed RSS feed out of this.
Note that existing "email-to-RSS" services (e.g. Mailbucket) aren't good enough. Some further processing is required here. In particular, we need to extract from the end of the notification email two bits of data: (i) the commenter's name, and (ii) the post link; and use them as the title and link, respectively, for the syndicated entry. Then we could use any old RSS-to-JavaScript tool to display the recent comments list in our sidebars. (Without the link extraction, the list entries won't actually link to the original comments, and so would be useless. Readers would have to guess which post the quoted comments are from!) That sort of data extraction shouldn't be too difficult, should it?
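To give a sense of how simple that processing could be, here's a rough Python sketch. The regular expressions are guesses at the layout of Blogger's notification emails (I haven't checked the exact wording), so treat it as an illustration of the idea rather than a working tool:

```python
import re
from email import message_from_string
from xml.sax.saxutils import escape

def comment_to_rss_item(raw_email):
    """Turn one comment-notification email into an RSS <item> string.
    The patterns below assume a 'Posted by NAME to ...' line and a
    permalink ending in .html somewhere in the body -- adjust as needed."""
    body = message_from_string(raw_email).get_payload()

    name_match = re.search(r"Posted by (.+?) to ", body)
    link_match = re.search(r"(http://\S+\.html(?:#\S+)?)", body)

    name = name_match.group(1) if name_match else "Anonymous"
    link = link_match.group(1) if link_match else ""
    snippet = escape(body.strip()[:200])  # quote the start of the comment

    return ("<item><title>%s</title><link>%s</link>"
            "<description>%s</description></item>"
            % (escape(name), escape(link), snippet))
```

Wrap a list of such items in the standard <rss><channel> boilerplate and any of the existing RSS-to-JavaScript widgets should be able to display the result in a sidebar.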
[Apparently there's some Ning-based hack to create comment RSS feeds, but it involves more template-meddling and - worse - requires XML validity (which this blog apparently lacks). So it would be nice if someone could figure out how to implement my simpler suggestion.]
2) Categories. I'm generally pretty happy with the del.icio.us-based "FreshTags" system that I'm using. The main problem is that it's very difficult (time-consuming) to alter the category tags of an old post. You have to edit both the blog post itself and the corresponding del.icio.us entry. I would like to categorize my old posts which pre-date my installation of FreshTags, and I also often want to change the classifications of more recent posts (say, if I decide to introduce a new category). But at present it would just be too much work.
This needn't be the case, however. Someone with better programming skills than me could overcome the problem by designing an "editing program" which integrates control of both your blog and del.icio.us tags.
First, it would import a list of all your blog posts (if there's no more direct way to get the data out of Blogger, one could simply get the user to paste in the list that Blogger displays when you "republish all" of your blog. Though I guess it would also need the post ID# in addition to the URLs; someone else will need to figure out how to get those!). Second, you likewise import your relevant del.icio.us posts. The program then pairs them up, where appropriate, and may highlight discrepancies (e.g. if your del.icio.us post has stored different tags from those listed on the corresponding blog post itself).
Here's the crucial bit: the program would be capable of making edits to both Blogger and del.icio.us in one go. Let's say I scroll down my list of blog posts, and select one from last year which lacks any tags. I click "edit tags", and the program asks me to enter the new tags. It then automatically completes two tasks: (i) it logs into Blogger for me, and edits my post to append the provided list of "categories" (just like you see on this post), and (ii) it posts that post's permalink to my del.icio.us account, with the tags provided. (Or, if that's impossible, it brings up the del.icio.us posting page with the appropriate data fields already filled in.)
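For any programmer tempted to take this up, here's a very rough Python sketch of steps (i) and (ii) combined. The del.icio.us part uses its standard posts/add API method (which, if I recall correctly, takes url, description and tags parameters over basic authentication); the Blogger part is just a placeholder, since I don't know the cleanest way to log in and append the category links programmatically:

```python
import urllib.parse
import urllib.request

DELICIOUS_ADD = "https://api.del.icio.us/v1/posts/add"

def update_blogger_post(post_id, tags):
    """Hypothetical helper: log in to Blogger and append the visible
    'Categories: ...' links to the end of the post body."""
    raise NotImplementedError  # depends on the Blogger API or screen-scraping

def retag(post_url, post_id, title, tags, user, password):
    # (i) Edit the blog post itself so the new categories are displayed.
    update_blogger_post(post_id, tags)

    # (ii) Re-save the permalink to del.icio.us with the same tags.
    params = urllib.parse.urlencode({
        "url": post_url,
        "description": title,
        "tags": " ".join(tags),
        "replace": "yes",  # overwrite any existing bookmark for this URL
    })
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, DELICIOUS_ADD, user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    return opener.open("%s?%s" % (DELICIOUS_ADD, params)).read()
```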
You might even make multiple edits at once, say by selecting from the program's list several blog posts that you want to classify together, and then entering the tags you want them all to share.
Is such a program possible? plausible? soon-to-be actual? It would be incredibly convenient, in any case, so I hope someone sufficiently skillful might look into it...
Bootstrapping Possibility as Conceivability
I previously introduced epistemic modality as effectively involving the familiar space of possible worlds under a different mode of presentation. (I hope that's not too misleading a summary; do read the linked post if you're not familiar with it.) This view assumes a plenitude of metaphysical possibilities, which is very plausible but might be denied by some. For example, a theist who holds that God's existence is metaphysically necessary but not a priori thereby admits more epistemic scenarios than he does possible worlds, and so must insist that the two modal spaces are distinct. To accommodate such views, Dave Chalmers (see, e.g., section 5 of his epistemic space paper) describes how we could construct epistemic modal space from purely epistemic notions. Say, P is (deeply) epistemically possible iff P is ideally conceivable (i.e. ~P is not knowable a priori). But what kind of modality is involved in those "ables" (knowable, conceivable)?
Metaphysical modality won't do, since that would defeat the stated purpose. For example, one might hold that ideally rational agents are brutely impossible. But we require such an idealization for this epistemic modality nonetheless. ~P might be a priori even if there is no brute metaphysically possible world containing an agent who a priori knows ~P. So this is not the appropriate sense of "a priori knowable".
Given our independent grasp of apriority (as demonstrated in the previous paragraph), we might simply take as a modal primitive the kind of possibility involved in something's being "a priori knowable". That seems the safest option.
Intriguingly, Chalmers hopes that we might instead be able to take it to be epistemic possibility. This seems circular: P is epistemically possible iff it is not epistemically possible that an agent knows ~P a priori? Where has the idealization gone? Here Chalmers appeals to a kind of "bootstrapping" effect. The core idea is that although we're far from ideal reasoners ourselves, we can conceive of slightly more ideal reasoners, who in turn could conceive of even better reasoners, and so forth, until we reach ideal conceivability.
It's a neat idea, but still seems to rest pretty heavily on an independent primitive modality. We need it to fill out the claim that our imagined reasoners could do better than us, and to let their improved results be sufficiently "real" to contribute to modal reality. If our actual reasoning powers were bedrock, then it's hard to see how we could get beyond them. Second-order possibility (what we imagine our better reasoners could imagine) would seem to revert back to first-order modality, with all our limitations. The bootstrapping effect just can't get off the ground. It needs some independent modal element, so that the imagined agents could do more than we can conceive of them as being capable of.
We could then say that our non-ideal imaginings tap into this irreducible modal reality. It might even be traceable through the kind of bootstrapping procedure described above. Plausibly, we can imagine better reasoners, and they in turn also could imagine better yet, and tracing these conceptions through modal space would eventually map out the full idealization. But the "can" and "could" here presuppose the full idealization, and so cannot be used to reductively construct it. We can find our way to the end point only if we have its help right from the start.
(Disclaimer: I'm not entirely confident that I've understood Dave's position here. When I asked him about it after the conference, he suggested that we might be able to imagine a kind of general "blueprint" for a better reasoner, and that this would suffice to determine -- perhaps through some kind of mathematical necessity -- the stage-2 modal facts, even if the details go beyond what we can grasp in our ground position. So that doesn't sound entirely reductionist in any case. The modal properties of the blueprint must be grounded in something other than our actual epistemic capabilities. *shrug*)
P.S. This is all inspired by the recent Epistemic Modality conference -- Kenny offers a general overview.
Wednesday, June 21, 2006
Maverick Rational Holism
I don't often agree with the Maverick Philosopher, but this short post is a gem:
It is unwise to second-guess oneself. The later moment of doubt almost always lacks the clarity of the earlier moment of decision; the later moment has no right to judge.
Cf. Global Rationality. (Though I suppose local rationality could cope with this, with a little help from meta-coherence.)
Lying and Withholding Information
A reader emailed the following interesting question:
Is it moral to lie in order to preserve a secret, in a situation where refusing to answer cannot preserve the secret?
For instance, if a close friend comes up and asks me, "Have you ever used cocaine?", and I say "That's none of your business", they have a strong reason to believe that I have used cocaine, because if I hadn't I would have told them that I hadn't. (I'm sure there are better examples.) So the only way to really preserve the secret is to lie. But since I'm not trying to lie to the person, but only trying to withhold information, it seems like saying that isn't *really* lying, or, at least, doesn't have the same moral significance. What do you think?
Aside: such examples suggest that it would be a good public strategy for one to regularly assert one's right to privacy as a matter of principle, even when you could happily tell the truth. In other words, don't answer "no" to any question that you wouldn't also have been willing to answer "yes" to. That way, others can't employ those magical information-producing inferences when you try to withhold information from them. If you spurn intrusive questions even when you don't need to, others can no longer infer such a need from the mere fact of your silence. Conversely: an unprincipled privacy is no privacy at all, since your ad hoc silences will be revealing.
(For the record, I follow this principle myself. I noticed, when other blogs responded to my abstract discussions of sexual ethics, that some readers left comments speculating about my private life. Since I have a principled policy to "neither confirm nor deny" such invasions of privacy, they have no basis to draw any inference either way when I rebuff them for their rudeness.)
But that doesn't address the central question. Supposing you found yourself in a situation where honest withholding of information really was impossible, what should you do? Of course, sometimes lying is justified in any case, and sometimes withholding information isn't. But I take it the question here is whether lying would be more easily justified in such a case than usual. Perhaps being deprived of the option of neutral withholding counts as "extenuating circumstances". I'm not sure that counts for much. But at the very least, someone who would prefer to avoid lying is clearly less bad than someone with no compunction here at all (say someone who still would have lied even if withholding had been a live option). However, we also think a remorseful murderer is a better person than a remorseless one, yet that doesn't excuse the actual killing. Other circumstances might excuse it, but that's an independent issue. The justification may hold regardless of the killer's reluctance. Perhaps the reluctant liar is like that: either justified, or not, and his reluctance doesn't really have much to do with it?
Levels of Rationality (draft)
[If any readers feel like wading through this 5000-word monster of an essay, any constructive comments/criticism would be greatly appreciated!]
Introduction
Let us assume a broadly Consequentialist framework: Certain states of affairs have value, making them worthy of our pursuit. By failing to pursue the good, an agent thereby reveals some defect in their rational awareness. Perhaps they are ignorant of the descriptive facts, or perhaps they fail to appreciate how those facts provide them with normative reasons for action. We would expect neither defect to beset a fully informed and perfectly rational agent. Ideal rationality entails being moved by reasons, or being motivated to pursue the good. But what of those goods that elude direct approach? Could one rationally aim at them in full knowledge that this would doom one to failure? Conversely, aiming wholeheartedly at an inherently worthless goal seems in some sense misguided or irrational. But what if doing so would better achieve the elusive good? Does rationality then recommend that we make ourselves irrational, blinding ourselves to the good in order to better achieve it?
Such questions may motivate a distinction between ‘global’ (holistic) and ‘local’ (atomistic) rationality: pitting the whole temporally extended person against their momentary stages. This essay explores the distinction and argues that the usual exclusive focus on local rationality is misguided. Global optimality may sometimes require us to do other than what seems optimific within the confines of a moment. Holistic rationality, as I envisage it, tells us to adopt a broader view, transcending the boundaries of the present and identifying with a timeless perspective instead. It further requires that we be willing to treat the dictates of this broader perspective as rationally authoritative, no matter how disadvantageous this may seem from the particular perspective of our local moment.[1] This amounts to an intrapersonal analogue of the ‘social contract’: each of our momentary stages abdicates some degree of rational autonomy, in order to enhance the rationality and autonomy of our person as a whole.
I will begin by sketching an understanding of reasons and rationality, or the objective and subjective normative modes. Next, the general problem of elusive goods and globally optimal indirect strategies will be introduced by way of indirect utilitarianism. After clarifying the Parfitian paradox of “blameless wrongdoing”, I will show how epistemic principles of meta-coherence may undermine this particular application of the local/global distinction, though there remain cases involving “essential byproducts” which escape this objection. These issues will be further clarified through an exploration of the distinction between object- and state-based modes of assessment. Finally, I present a class of game-theoretic cases that pose a paradox for local rationality, which can be resolved by embracing the more holistic understanding sketched above.
Reasons and Rationality
Reasons are provided by facts that count in favour of an action. For example, if a large rock is about to hit the back of your head, then this is a reason for you to duck, even if you are unaware of it. As this example suggests, the objective notion I have in mind is largely independent of our beliefs.[2] As inquiring agents, we try to discover what reasons for action we have, and hence what we should do. Such inquiry would be redundant according to subjective accounts, which restrict reasons to things that an agent already believes. Instead, I use the term ‘reason’ in the sense that is closely tied to notions of value. In general, we will have reason to bring about good states of affairs, and to prevent bad ones from obtaining.[3] I take it as analytic that we have most reason to do what is best. We may also say that this is what one ought, in the reason-implying sense, to do.[4]
There is another sense of ‘ought’, tied to the subjective or evidence-based notion of rationality rather than the objective or fact-based notion of ‘reasons’. Sometimes the evidence can be misleading, so that what seems best is not really so. In such cases, we may say that one rationally ought to do what seems best, given the available evidence. But due to their ignorance of the facts, they would not be doing what they actually have most reason to do. Though they couldn’t know it, some alternative action would have been better in fact.
This raises the question of what to do when reasons and rationality diverge. Suppose that someone ought, in the reason-implying sense, to X, but that they rationally ought to Y. Which takes precedence? What is it that they really ought to do? There is some risk of turning this into a merely terminological dispute. I am not concerned with the meaning of the word ‘ought’, or which of the previous two senses has greater claim to being the “true meaning” of the word. But we can make some substantive observations here. In particular, I think that the reason-involving sense of ‘ought’ is arguably the more fundamental normative concept. This is because it indicates the deeper goal, or what agents are ultimately seeking.
The purpose of deliberation is to identify the best choice, or reach the correct conclusion. In practice, we do this by settling on what seems to us to be best. But we do not think that the appearances have any independent force, over and above the objective facts. We seek to perform the best action, not merely the best-seeming one.[5] Of course, from our first-personal perspective we cannot tell the two apart. That which seems best to us is what we take to truly be best. Belief is, in this sense, “transparent to truth”. Because our beliefs all seem true to us, the rational choice will always seem to us to also be the best one.[6] We can thus take ourselves to be complying with the demands of both rationality and fact-based reasons. Nevertheless, it is the latter that we really care about.
This is especially clear in epistemology. We seek true beliefs, not justified ones. Sure, we would usually take ourselves to be going wrong if our beliefs conflicted with the available evidence. Such conflict would indicate that our beliefs were likely false. But note that it is the falsity, and not the mere indication thereof, that we are ultimately concerned with. More generally, for any given goal, we will be interested in evidence that suggests to us how to attain the goal. We will tend to be guided by such evidence. But this does not make following the evidence itself our ultimate goal. Ends and evidence are intimately connected, but they are not the same thing. Normative significance accrues in the first instance to our ends, whereas evidence is merely a means: we follow it for the sake of the end, which we know not how else to achieve. Applied to the particular case of reasons and rationality, then, it becomes clear that the reasons provide the real goal, whereas rationality is the guiding process by which we aim to achieve it. Since arriving at the intended destination is ultimately more important than faithfully following the guide, we may conclude that the reason-implying sense of ‘ought’ takes normative precedence. I will use this as my default sense of ‘ought’ in what follows.
Indirect Utilitarianism and Blameless Wrongdoing
Act Utilitarianism is the thesis that we ought to act in such a way as to maximize the good. Paradoxically, it is likely the case that if people tried to act like good utilitarians, this would in fact have very bad consequences. For example, authorities might engage in torture or frame innocent persons whenever they believed that doing so would cause more good than harm. Such beliefs might often be mistaken, however, and with disastrous consequences. Let us suppose that attempting to directly maximize utility will generally backfire. Utilitarianism then seems to imply that it is wrong to be a utilitarian. But the conclusion that utilitarianism is self-defeating only follows if we fail to distinguish between criteria of rightness and decision procedures.[7]
We typically conceive of ethics as a practically oriented field: a good moral theory should be action-guiding, or tell us how to act. So when utilitarianism claims that the right action is that which maximizes utility, it is natural for us to read this as saying that we should try to maximize utility. But utilitarianism as defined above does not claim that we ought to try to maximize utility. Rather, it claims that we should achieve this end. If one were to try and fail, then their action would be wrong, according to the act-utilitarian criterion. This seems to be in tension with the general principle, introduced above, that we rationally ought to aim at the good. The utilitarian criterion instead tells us to have whatever aims would be most successful at attaining the good. This is not necessarily the same thing. The distinction will be clarified in the section on ‘object- and state-based modes of assessment’, later in this essay. For now, simply note that the best consequences might result from a steadfast commitment to human rights, say, and a strong aversion to violating them even if doing so appears expedient. In this case, the utilitarian criterion tells us that we should inculcate such anti-utilitarian practical commitments.
This indicates a distinction between two levels of normative moral thought: the practical and the theoretical.[8] Our practical morality consists of those principles and commitments that guide us in our everyday moral thinking and engage our moral emotions and intuitions. This provides our moral decision procedure. It is often enough to note that an action would violate our commitment to honesty, for instance, to settle the question of whether we should perform it. This is not the place for cold calculation of expected utilities. They instead belong on the theoretical level. We wish to determine which of our intuitive practical principles and commitments are well-justified ones. And here we may appeal to indirect utilitarianism to ground our views. Honesty is good because (we may suppose) being honest will do a better job of making the world a better place than would being a scheming and opportunistic direct utilitarian. The general picture on offer is this: we use utility as a higher-order criterion for picking out the best practical morality, and then we live according to the latter. Maximizing utility is the ultimate goal, but we do well to adopt a more reliable indirect strategy – and even other first-order “goals” – in order to achieve it.[9]
What shall we say of those situations where the goal and the strategy conflict? Consider a rare case wherein torturing a terrorist suspect really would have the best consequences. Such an act would then be right, according to the utilitarian criterion. Yet our practical morality advises against it, and ex hypothesi we ought to live according to those principles. Does this imply a contradiction: the right action ought not to be done? Only if we assume a further principle of normative transmission:
(T) If you ought to accept a strategy S, and S tells you to X, then you ought to X.
This is plausible for the rational sense of ‘ought’, but not the reason-involving sense that I am using here. We might have most reason to adopt a strategy – because it will more likely see us right than any available alternative – without thereby implying that the strategy is perfect, i.e. that everything it prescribes really is the objectively best option. S might on occasion be misleading, and then we could have more reason to not do X, though we remain unaware of this fact. So we should reject (T), and accept the previously described scenario as consistent. To follow practical morality in such a case, and refrain from expedient torture, would constitute what Parfit calls “blameless wrongdoing”.[10] The agent fails to do what they have most moral reason to do, so the act is wrong. But the agent herself has the best possible motives and dispositions, and could say, “Since this is so, when I do act wrongly in this way, I need not regard myself as morally bad.”[11]
Parfit’s solution may be clarified by appealing to my earlier distinction between local and global rationality. Our ‘local’ assessment looks at the particular act, and condemns it for sub-optimality. The ‘global’ perspective considers the agent as a whole, with particular concern for the long-term outcomes obtained by consistent application of any given decision-procedure. From this perspective, the agent is (ex hypothesi) entirely praiseworthy. The apparently conflicting judgments are consistent because they are made in relation to different standards or modes of assessment. I have illustrated this with the example of indirect utilitarianism, but the general principle will apply whenever some end is best achieved by indirect means. More generally than “blameless wrongdoing”, we will have various forms of (globally) optimal (local) sub-optimality.
Meta-coherence and Essential Byproducts
The above discussion focuses on the reason-involving sense of ‘ought’. Let us now consider the problem in terms of what one rationally ought to do. Rationality demands that we aim at the good, or do what seems best, i.e. maximize expected utility. But the whole idea of the indirect strategy is to be guided by reliable rules rather than direct utility calculations. One effectively commits to occasionally acting irrationally (in the “local” sense), though it is rational – subjectively optimal – to make this commitment. Parfit thus calls it “rational irrationality”.[12] But we may question whether expected utility could really diverge from the reliable rules after all.
Sometimes we may be in a position to realize that our initial judgments should be revised. I may initially be taken in by a visual illusion, and falsely believe that the two lines I see are of different lengths. Learning how the illusion worked would undercut the evidence of my senses. I would come to see that the prima facie evidence was misleading, and the belief I formed on its basis likely false. Principles of meta-coherence suggest that it would be irrational to continue accepting the appearances after learning them to be deceptive, or more generally to hold a belief concurrently with the meta-belief that the former is unjustified or otherwise likely false.[13] This principle has important application to our current discussion.
We adopt the indirect strategy because we recognize that our direct first-order calculations are unreliable. The over-zealous sheriff might think that torturing a terrorist suspect would have high expected utility. But if he recalls his own unreliability on such matters, he should lower the expected utility accordingly. As a good indirect utilitarian, he believes that in situations subjectively indiscernible from his own, the best results will generally be obtained by respecting human rights and following a strict “no torture” policy. Taking this higher-order information into account, he should revise his earlier judgment and instead reach the all-things-considered conclusion that refraining from torture maximizes expected utility even for this particular act. This seems to collapse the distinction between local and global rationality. When all things are considered, the former will come to conform to the latter.[14]
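To illustrate with made-up numbers (nothing hangs on the particular figures): suppose that in cases where torture strikes the sheriff as expedient, his judgment is right only one time in ten; that a correct call gains 50 units of utility over refraining; and that a mistaken one costs 100. Then the all-things-considered expectation already favours the rule, as the following toy calculation shows:

```python
# Illustrative figures only: how higher-order unreliability can reverse a
# first-order judgment of expediency.
p_right = 0.1          # how often "torture looks expedient" judgments are correct
gain_if_right = 50     # utility gained over refraining, when the call is correct
loss_if_wrong = 100    # utility lost relative to refraining, when it is not

expected_gain = p_right * gain_if_right - (1 - p_right) * loss_if_wrong
print(expected_gain)   # -85.0: refraining maximizes expected utility after all
```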
This collapse will not always occur, however. A crucial feature of the present example is that one can consciously recognize the ultimate goal at the back of their mind, even as they employ an indirect strategy in its pursuit. But what if the pursuit of some good required that we make ourselves more thoroughly insensitive to it? Jon Elster calls such goods “essential byproducts”, and examples might include spontaneity, sleep, acting unselfconsciously, and other such mental absences.[15] Such goods are not susceptible to momentary rational pursuit. Higher-order considerations are no help here: we cannot achieve these goods while intentionally following indirect strategies that we consider more reliable. Rather, to achieve them we must relinquish any conscious intention of doing so. As we relax and begin to drift off to sleep, we cannot concurrently conceive of our mental inactivity as a means to this end. One cannot achieve a mental absence by having it “in mind” in the way required for the means-ends reasoning I take to be constitutive of rationality. In the event of succeeding, one could no longer be locally rational in their pursuit of the essential byproduct, for they would not at that moment be intentionally pursuing it at all.
Nevertheless, there remains an important sense in which a person is perfectly rational to have their momentary selves abdicate deliberate pursuit of these ends. If we attribute the goal of nightly sleep to the whole temporally extended person, then this abdication is precisely what sensible pursuit of the goal entails. In this sense, we can understand the whole person as acting deliberately even when their momentary self does not. So the distinction is upheld: global rationality recommends that we simply give up on trying to remain locally rational when we want to get some rest.
Object- and State-Based Modes of Assessment
Oddly enough, even local rationality recommends surrendering itself in such circumstances. From the local perspective of the moment, pursuit of the goal is best advanced by ensuring that one’s future self refrains from such deliberate pursuit. Does this mean that one rationally should cease to value and pursue the good? The puzzle arises because mental states are subject to two very different modes of assessment: one focusing on the object of the mental state, and the other focusing on the state itself.[16] Suppose an eccentric billionaire offers you a million dollars for believing that the world is flat. The object of belief, i.e. the proposition that the world is flat, does not merit belief. But this state of belief would, in such a case, be a worthwhile one to have. In this sense we might think there are reasons for (having the state of) believing, which are not reasons for (the truth of) the thing believed.[17] It seems plausible that desire aims at value in much the same way as belief aims at truth. Hence, indications of value could provide object-based reasons for intention or desire – much as indications of truth provide object-based reasons for belief – whereas the utility of having the desire in question could provide state-based reasons for it.[18] This is the difference between an object’s being worthy of desire, and a desire for the object being a state worth having.
There are various theories about what reasons we have for acting, and hence what objects merit our pursuit. For example, we may call “Instrumentalism” the claim that we have reason to fulfill our own present desires, whatever they may be. Egoism claims that we have most reason to advance our own self-interest. And Impartialism says that we have equal reason to advance each person’s interests. For any such account of reasons, we can pair it with a corresponding account of object-based local rationality, based on the following general schema:
(G) Rationality is a matter of pursuing the good, i.e. being moved by the appearance of those facts ____ that provide us with reasons for action.
Let us say that S has a quasi-reason to X, on the basis of some non-normative proposition p, when the following two conditions are satisfied: (i) S believes that p; and (ii) if p were true then this would provide a reason for S to X. We may then understand (G) as the claim that one rationally ought to do what one has most quasi-reason to do.
The different theories posit different reasons, so different quasi-reasons, and hence different specifications of local rationality in this sense. For example, according to Egoism, agents are locally rational insofar as they seek to advance their own interests. There is a sense in which the theory thereby claims this to be the supremely rational aim.[19] But let us suppose that having such an aim would foreseeably cause one’s life to go worse, as per “the paradox of hedonism”.[20] Egoism then implies that we would be irrational to knowingly harm ourselves by having this aim. This conclusion seems to contradict the original claim that this aim is “supremely rational”. The theory seems to be not merely self-effacing, but downright inconsistent.[21]
The distinction between object- and state-based assessments may help resolve this problem. We might say that an aim embodies rationality in virtue of its object, in that it constitutes supreme sensitivity to one’s quasi-reasons. Or the aim might be recommended by rationality, in the sense that one’s quasi-reasons tell one to have this aim, in virtue of the mental state itself. As before, the apparent incoherence can be traced to the conflation of two distinct modes of assessment. The aforementioned theories should be interpreted as claiming that their associated aims supremely embody rationality, even though it might not be rationally recommended to embody rationality in such a way. This reflects the coherent possibility that something might be desirable – worthy of desire – even if the desire itself would, for extrinsic reasons, be a bad state to have.
It is worth noting that this distinction appears to hold independently of the local/global distinction. We might, for example, imagine a good that would be denied to anyone who ever entertained it as a goal. If one sought it via the standard “globally rational” method of preventing one’s future momentary selves from deliberate “locally rational” pursuit, it would already be too late. There is no rational way at all, on any level, to pursue the good. Still, being of value, the good might merit pursuit. It might even provide reasons of sorts, even if one could never recognize them as such. (For example, one would plausibly have reason not to entertain the good as a goal. But one could not recognize this reason without thereby violating it, for it would only move an agent who sought the very goal it warns against.) So although the object/state distinction may recommend a shift from local to global rationality, it further establishes that even the latter may, in special circumstances, be disadvantageous. Reasons and rationality may come apart, even when no ignorance is involved, because it may be best to achieve a good without ever recognizing it as such. This would provide reasons that elude our rational grasp, being such that we ought to act unwittingly rather than by grasping the reason that underlies this very ‘ought’-fact.
Global Rationality
We have seen how various distinctions, including that between the local and global levels of rationality, can help us make sense of the indirect pursuit of goods. If we know our first-order judgments to be unreliable, then meta-coherence will lead us to be skeptical of those judgments. Indirect utilitarianism stems from recognizing that expected utility is better served by instead following a more reliable – globally optimal – strategy, even if this at times conflicts with our first-order judgments of expedience. Global rationality paves the way for utilitarian respect for rights, and meta-coherence carries it over to the local level. Essential byproducts highlight the distinction, as we may understand such a goal as being rationally pursued at the level of the temporally extended person, but not at the level of every momentary stage or temporal part. Although the object/state distinction implies that even global rationality may be imperfect, the preceding cases suggest that we would do well at least to prize the global perspective over the local one. I now want to support this conclusion by considering a further class of problems that could be fruitfully analyzed as pitting the unified agent against their momentary selves.
Consider Newcomb’s Problem:[22] a highly reliable predictor presents you with two boxes, one containing $1000, and the other with contents unknown. You are offered the choice of either taking both boxes, or else just the unknown one. You are told that the predictor will have put $1,000,000 in the opaque box if she earlier predicted you would pick only that; otherwise she will have left it empty. Either way, the contents are now fixed. Should you take one box or both? From the momentary perspective of local rationality, the answer seems clear: the contents are fixed, it’s too late to change them now, so you might as well take both. Granted, one would do better to be the sort of person who would pick only one box. That is the rationally recommended dispositional state. But taking both is the choice that embodies rationality, from this perspective. This reasoning predictably leads to a mere $1000 prize. Suppose one instead adopted a more global perspective, giving weight to the kind of reasoning that, judging from the timeless perspective, one wants one’s momentary stages to employ. The globally rational agent is willing to commit to being a one-boxer, and so will make that choice even when it seems locally suboptimal. This predictably leads to the $1,000,000 prize, which was unattainable for the locally rational agent.
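A quick back-of-the-envelope calculation makes the point vivid. Supposing, purely for illustration, a predictor who is right 99% of the time:

```python
def newcomb_payoffs(accuracy=0.99):
    """Expected dollar payoffs for each disposition, given an assumed
    predictor accuracy (the exact figure is stipulated for illustration)."""
    one_box = accuracy * 1_000_000                 # paid iff correctly predicted
    two_box = 1_000 + (1 - accuracy) * 1_000_000   # $1000 plus the rare slip-up
    return one_box, two_box

print(newcomb_payoffs())  # (990000.0, 11000.0)
```

The figures do not by themselves answer the two-boxer's dominance reasoning; they merely dramatize the claim that the one-boxing disposition predictably does better.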
Similar remarks apply to Kavka’s toxin puzzle.[23] Suppose that you would be immediately rewarded upon forming the intention to later drink a mild toxin that would cause you some discomfort. Since you will already have received your reward by then, there would seem no reason for the locally rational agent to carry out their intention. Recognizing this, they cannot even form the intention to begin with. (You cannot intend to do something that you know you will not do.) Again we find that local rationality disqualifies one from attaining something of value. The globally rational agent, in contrast, is willing to follow through on earlier commitments even in the absence of local reasons. He wishes to be the kind of person who can reap such rewards, so he behaves accordingly. As Julian Nida-Rumelin writes: “It is perfectly rational to refrain from point-wise optimization because you do not wish to live the life which would result.”[24]
In both these cases, the benefits of global rationality require that one be disposed to follow through on past commitments. One must tend to recognize one’s past reasons as also providing reasons for one’s present self. This allows one to overcome problems, such as the above, which are based on a localized object/state distinction.[25] But occasional violation of this disposition might allow one to receive the benefits without the associated cost. (One might receive the reward for forming the sincere intention to drink the toxin, only to later surprise oneself by refusing to drink it after all.) So let us now consider an even stronger sort of case that goes beyond the object/state distinction and hence demands more than the mere disposition of global rationality. Instead, the benefits will accrue only to those who follow through on their earlier resolutions.[26]
Pollock’s Ever Better Wine improves with age, without limit.[27] Suppose you possess a bottle, and are immortal. When should you drink the wine? Local rationality implies that you should never drink it, for at any given time you would do better to postpone it another day. But to never drink it at all is the worst possible result! Or consider Quinn’s Self-Torturer, who receives $10,000 each time he increases his pain level by an indiscernible increment.[28] It sounds like a deal worth taking. But suppose that the combined effect of a thousand increments would leave him in such agony that no amount of money could compensate. Because each individual increment is – from the local perspective of the moment – worth taking, local rationality will again lead one to the worst possible result. A good result is only possible for agents who are willing to let their global perspective override local calculations. The agent must make in advance a rational resolution to stop at some stage n, even though from the local perspective of stage n he would do better to continue on to stage n+1.
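The structure of these cases can be made vivid with a toy simulation of the Self-Torturer (the payment figure and the ruinous threshold are stipulations of the example):

```python
def self_torturer(resolve_at=None, payment=10_000, ruinous_level=1_000):
    """Follow the Self-Torturer's choices increment by increment.
    With resolve_at=None the agent reasons purely locally (the next
    increment always looks worth taking); otherwise he holds to a prior
    resolution to stop at that level."""
    level, money = 0, 0
    while resolve_at is None or level < resolve_at:
        level += 1
        money += payment
        if level >= ruinous_level:
            return "ends in agony that no amount of money can compensate"
    return "stops at level %d with $%s" % (level, format(money, ","))

print(self_torturer())               # local reasoning: the worst possible result
print(self_torturer(resolve_at=50))  # a resolute agent stops while still ahead
```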
It seems clear that the global perspective is rationally superior. The agent can foresee the outcomes of his possible choices. If he endorses the local mode of reasoning then he will never have grounds to stop, and so will end up stuck with the worst possible outcome. It cannot be rational to accept this when other options are open to him. If he is instead resolute and holds firm to the choice – made from a global or timeless perspective – to stop at stage n, then he will do much better. Yet one might object that this merely pushes the problem back a step: how could one rationally resolve to choose n rather than n+1 in the first place?
The problem of Buridan’s ass, caught between two equally tempting piles of hay, shows that rational agents must be capable of making arbitrary decisions.[29] It cannot be rational for the indecisive ass to starve to death in its search for the perfect decision. Indeed, once cognitive costs are taken into account, it becomes clear that all-things-considered expected utility is better served by first-order satisficing than attempted optimizing.[30] (“The perfect is the enemy of the good,” as the saying goes.) Applying this to the above cases, we should settle on some n, any n, that is good enough. Once we have made such a resolution, we can reject the challenge, “why not n+1?” by noting that if we were to grant that increment then we would have no basis to reject the next ones, ad infinitum, and that would lead us to the worst outcome.
Conclusion
Introduction
Let us assume a broadly Consequentialist framework: certain states of affairs have value, making them worthy of our pursuit. An agent who fails to pursue the good thereby reveals some defect in their rational awareness. Perhaps they are ignorant of the descriptive facts, or perhaps they fail to appreciate how those facts provide them with normative reasons for action. We would expect neither defect to beset a fully informed and perfectly rational agent. Ideal rationality entails being moved by reasons, or being motivated to pursue the good. But what of those goods that elude direct approach? Could one rationally aim at them in full knowledge that this would doom one to failure? Conversely, aiming wholeheartedly at an inherently worthless goal seems in some sense misguided or irrational. But what if doing so would better achieve the elusive good? Does rationality then recommend that we make ourselves irrational, blinding ourselves to the good in order to better achieve it?
Such questions may motivate a distinction between ‘global’ (holistic) and ‘local’ (atomistic) rationality: pitting the whole temporally extended person against their momentary stages. This essay explores the distinction and argues that the usual exclusive focus on local rationality is misguided. Global optimality may sometimes require us to do other than what seems optimific within the confines of a moment. Holistic rationality, as I envisage it, tells us to adopt a broader view, transcending the boundaries of the present and identifying with a timeless perspective instead. It further requires that we be willing to treat the dictates of this broader perspective as rationally authoritative, no matter how disadvantageous this may seem from the particular perspective of our local moment.[1] This amounts to an intrapersonal analogue of the ‘social contract’: each of our momentary stages abdicates some degree of rational autonomy, in order to enhance the rationality and autonomy of our person as a whole.
I will begin by sketching an understanding of reasons and rationality, or the objective and subjective normative modes. Next, the general problem of elusive goods and globally optimal indirect strategies will be introduced by way of indirect utilitarianism. After clarifying the Parfitian paradox of “blameless wrongdoing”, I will show how epistemic principles of meta-coherence may undermine this particular application of the local/global distinction, though there remain cases involving “essential byproducts” which escape this objection. These issues will be further clarified through an exploration of the distinction between object- and state-based modes of assessment. Finally, I consider a class of game-theoretic cases that pose a paradox for local rationality, one that can be resolved by embracing the more holistic understanding sketched above.
Reasons and Rationality
Reasons are provided by facts that count in favour of an action. For example, if a large rock is about to hit the back of your head, then this is a reason for you to duck, even if you are unaware of it. As this example suggests, the objective notion I have in mind is largely independent of our beliefs.[2] As inquiring agents, we try to discover what reasons for action we have, and hence what we should do. Such inquiry would be redundant according to subjective accounts, which restrict reasons to things that an agent already believes. Instead, I use the term ‘reason’ in the sense that is closely tied to notions of value. In general, we will have reason to bring about good states of affairs, and to prevent bad ones from obtaining.[3] I take it as analytic that we have most reason to do what is best. We may also say that this is what one ought, in the reason-implying sense, to do.[4]
There is another sense of ‘ought’, tied to the subjective or evidence-based notion of rationality rather than the objective or fact-based notion of ‘reasons’. Sometimes the evidence can be misleading, so that what seems best is not really so. In such cases, we may say that one rationally ought to do what seems best, given the available evidence. But due to their ignorance of the facts, they would not be doing what they actually have most reason to do. Though they couldn’t know it, some alternative action would have been better in fact.
This raises the question of what to do when reasons and rationality diverge. Suppose that someone ought, in the reason-implying sense, to X, but that they rationally ought to Y. Which takes precedence? What is it that they really ought to do? There is some risk of turning this into a merely terminological dispute. I am not concerned with the meaning of the word ‘ought’, or which of the previous two senses has greater claim to being the “true meaning” of the word. But we can make some substantive observations here. In particular, I think that the reason-involving sense of ‘ought’ is arguably the more fundamental normative concept. This is because it indicates the deeper goal, or what agents are ultimately seeking.
The purpose of deliberation is to identify the best choice, or reach the correct conclusion. In practice, we do this by settling on what seems to us to be best. But we do not think that the appearances have any independent force, over and above the objective facts. We seek to perform the best action, not merely the best-seeming one.[5] Of course, from our first-personal perspective we cannot tell the two apart. That which seems best to us is what we take to truly be best. Belief is, in this sense, “transparent to truth”. Because our beliefs all seem true to us, the rational choice will always seem to us to also be the best one.[6] We can thus take ourselves to be complying with the demands of both rationality and fact-based reasons. Nevertheless, it is the latter that we really care about.
This is especially clear in epistemology. We seek true beliefs, not justified ones. Sure, we would usually take ourselves to be going wrong if our beliefs conflicted with the available evidence. Such conflict would indicate that our beliefs were likely false. But note that it is the falsity, and not the mere indication thereof, that we are ultimately concerned with. More generally, for any given goal, we will be interested in evidence that suggests to us how to attain the goal. We will tend to be guided by such evidence. But this does not make following the evidence itself our ultimate goal. Ends and evidence are intimately connected, but they are not the same thing. Normative significance accrues in the first instance to our ends, whereas evidence is merely a means: we follow it for the sake of the end, which we know not how else to achieve. Applied to the particular case of reasons and rationality, then, it becomes clear that the reasons provide the real goal, whereas rationality is the guiding process by which we aim to achieve it. Since arriving at the intended destination is ultimately more important than faithfully following the guide, we may conclude that the reason-implying sense of ‘ought’ takes normative precedence. I will use this as my default sense of ‘ought’ in what follows.
Indirect Utilitarianism and Blameless Wrongdoing
Act Utilitarianism is the thesis that we ought to act in such a way as to maximize the good. Paradoxically, it is likely the case that if people tried to act like good utilitarians, this would in fact have very bad consequences. For example, authorities might engage in torture or frame innocent persons whenever they believed that doing so would cause more good than harm. Such beliefs might often be mistaken, however, and with disastrous consequences. Let us suppose that attempting to directly maximize utility will generally backfire. Utilitarianism then seems to imply that it is wrong to be a utilitarian. But the conclusion that utilitarianism is self-defeating only follows if we fail to distinguish between criteria of rightness and decision procedures.[7]
We typically conceive of ethics as a practically oriented field: a good moral theory should be action-guiding, or tell us how to act. So when utilitarianism claims that the right action is that which maximizes utility, it is natural for us to read this as saying that we should try to maximize utility. But utilitarianism as defined above does not claim that we ought to try to maximize utility. Rather, it claims that we should achieve this end. If one were to try and fail, then their action would be wrong, according to the act-utilitarian criterion. This seems to be in tension with the general principle, introduced above, that we rationally ought to aim at the good. The utilitarian criterion instead tells us to have whatever aims would be most successful at attaining the good. This is not necessarily the same thing. The distinction will be clarified in the section on ‘object- and state-based modes of assessment’, later in this essay. For now, simply note that the best consequences might result from a steadfast commitment to human rights, say, and a strong aversion to violating them even if doing so appears expedient. In this case, the utilitarian criterion tells us that we should inculcate such anti-utilitarian practical commitments.
This indicates a distinction between two levels of normative moral thought: the practical and the theoretical.[8] Our practical morality consists of those principles and commitments that guide us in our everyday moral thinking and engage our moral emotions and intuitions. This provides our moral decision procedure. It is often enough to note that an action would violate our commitment to honesty, for instance, to settle the question of whether we should perform it. This is not the place for cold calculation of expected utilities. They instead belong on the theoretical level. We wish to determine which of our intuitive practical principles and commitments are well-justified ones. And here we may appeal to indirect utilitarianism to ground our views. Honesty is good because (we may suppose) being honest will do a better job of making the world a better place than would being a scheming and opportunistic direct utilitarian. The general picture on offer is this: we use utility as a higher-order criterion for picking out the best practical morality, and then we live according to the latter. Maximizing utility is the ultimate goal, but we do well to adopt a more reliable indirect strategy – and even other first-order “goals” – in order to achieve it.[9]
What shall we say of those situations where the goal and the strategy conflict? Consider a rare case wherein torturing a terrorist suspect really would have the best consequences. Such an act would then be right, according to the utilitarian criterion. Yet our practical morality advises against it, and ex hypothesi we ought to live according to those principles. Does this imply a contradiction: the right action ought not to be done? Only if we assume a further principle of normative transmission:
(T) If you ought to accept a strategy S, and S tells you to X, then you ought to X.
This is plausible for the rational sense of ‘ought’, but not the reason-involving sense that I am using here. We might have most reason to adopt a strategy – because it will more likely see us right than any available alternative – without thereby implying that the strategy is perfect, i.e. that everything it prescribes really is the objectively best option. S might on occasion be misleading, and then we could have more reason to not do X, though we remain unaware of this fact. So we should reject (T), and accept the previously described scenario as consistent. To follow practical morality in such a case, and refrain from expedient torture, would constitute what Parfit calls “blameless wrongdoing”.[10] The agent fails to do what they have most moral reason to do, so the act is wrong. But the agent herself has the best possible motives and dispositions, and could say, “Since this is so, when I do act wrongly in this way, I need not regard myself as morally bad.”[11]
Parfit’s solution may be clarified by appealing to my earlier distinction between local and global rationality. Our ‘local’ assessment looks at the particular act, and condemns it for sub-optimality. The ‘global’ perspective considers the agent as a whole, with particular concern for the long-term outcomes obtained by consistent application of any given decision-procedure. From this perspective, the agent is (ex hypothesi) entirely praiseworthy. The apparently conflicting judgments are consistent because they are made in relation to different standards or modes of assessment. I have illustrated this with the example of indirect utilitarianism, but the general principle will apply whenever some end is best achieved by indirect means. More generally than “blameless wrongdoing”, we will have various forms of (globally) optimal (local) sub-optimality.
Meta-coherence and Essential Byproducts
The above discussion focuses on the reason-involving sense of ‘ought’. Let us now consider the problem in terms of what one rationally ought to do. Rationality demands that we aim at the good, or do what seems best, i.e. maximize expected utility. But the whole idea of the indirect strategy is to be guided by reliable rules rather than direct utility calculations. One effectively commits to occasionally acting irrationally (in the “local” sense), though it is rational – subjectively optimal – to make this commitment. Parfit thus calls it “rational irrationality”.[12] But we may question whether expected utility could really diverge from the reliable rules after all.
Sometimes we may be in a position to realize that our initial judgments should be revised. I may initially be taken in by a visual illusion, and falsely believe that the two lines I see are of different lengths. Learning how the illusion worked would undercut the evidence of my senses. I would come to see that the prima facie evidence was misleading, and the belief I formed on its basis likely false. Principles of meta-coherence suggest that it would be irrational to continue accepting the appearances after learning them to be deceptive, or more generally to hold a belief concurrently with the meta-belief that the former is unjustified or otherwise likely false.[13] This principle has important application to our current discussion.
We adopt the indirect strategy because we recognize that our direct first-order calculations are unreliable. The over-zealous sheriff might think that torturing a terrorist suspect would have high expected utility. But if he recalls his own unreliability on such matters, he should lower the expected utility accordingly. As a good indirect utilitarian, he believes that in situations subjectively indiscernible from his own, the best results will generally be obtained by respecting human rights and following a strict “no torture” policy. Taking this higher-order information into account, he should revise his earlier judgment and instead reach the all-things-considered conclusion that refraining from torture maximizes expected utility even for this particular act. This seems to collapse the distinction between local and global rationality. When all things are considered, the former will come to conform to the latter.[14]
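To make this meta-coherent revision concrete, here is a minimal arithmetical sketch in Python. The reliability and payoff figures are illustrative assumptions of my own, not drawn from any of the sources cited here; the point is only that once the agent's known unreliability is factored in, the all-things-considered expected utility of the "expedient" act can fall below that of sticking to the rule.

```python
# Illustrative figures only (assumed for this sketch, not taken from the essay's sources).
p_correct = 0.2        # how often first-order "torture is expedient" judgments prove right
u_if_correct = 10.0    # utility gained if the judgment really was correct
u_if_mistaken = -50.0  # utility lost if an innocent person is tortured by mistake
u_refrain = 0.0        # baseline utility of following the strict "no torture" rule

# All-things-considered expected utility of acting on the first-order judgment:
eu_torture = p_correct * u_if_correct + (1 - p_correct) * u_if_mistaken
print(eu_torture)              # -38.0
print(eu_torture < u_refrain)  # True: the revised local calculation agrees with the indirect strategy
```

On these (assumed) numbers, the locally calculated expected utility already favours refraining, which is just the collapse of the local into the global verdict described above.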
This will not always be the case, however. A crucial feature of the present example is that one can consciously recognize the ultimate goal at the back of their mind, even as they employ an indirect strategy in its pursuit. But what if the pursuit of some good required that we make ourselves more thoroughly insensitive to it? Jon Elster calls such goods “essential byproducts”, and examples might include spontaneity, sleep, acting unselfconsciously, and other such mental absences.[15] Such goods are not susceptible to momentary rational pursuit. Higher-order considerations are no help here: we cannot achieve these goods while intentionally following indirect strategies that we consider more reliable. Rather, to achieve them we must relinquish any conscious intention of doing so. As we relax and begin to drift off to sleep, we cannot concurrently conceive of our mental inactivity as a means to this end. One cannot achieve a mental absence by having it “in mind” in the way required for the means-ends reasoning I take to be constitutive of rationality. In the event of succeeding, one could no longer be locally rational in their pursuit of the essential byproduct, for they would not at that moment be intentionally pursuing it at all.
Nevertheless, there remains an important sense in which a person is perfectly rational to have their momentary selves abdicate deliberate pursuit of these ends. If we attribute the goal of nightly sleep to the whole temporally extended person, then this abdication is precisely what sensible pursuit of the goal entails. In this sense, we can understand the whole person as acting deliberately even when their momentary self does not. So the distinction is upheld: global rationality recommends that we simply give up on trying to remain locally rational when we want to get some rest.
Object- and State-based modes of assessment
Oddly enough, even local rationality recommends surrendering itself in such circumstances. From the local perspective of the moment, pursuit of the goal is best advanced by ensuring that one’s future self refrains from such deliberate pursuit. Does this mean that one rationally should cease to value and pursue the good? The puzzle arises because mental states are subject to two very different modes of assessment: one focusing on the object of the mental state, and the other focusing on the state itself.[16] Suppose an eccentric billionaire offers you a million dollars for believing that the world is flat. The object of belief, i.e. the proposition that the world is flat, does not merit belief. But this state of belief would, in such a case, be a worthwhile one to have. In this sense we might think there are reasons for (having the state of) believing, which are not reasons for (the truth of) the thing believed.[17] It seems plausible that desire aims at value in much the same way as belief aims at truth. Hence, indications of value could provide object-based reasons for intention or desire – much as indications of truth provide object-based reasons for belief – whereas the utility of having the desire in question could provide state-based reasons for it.[18] This is the difference between an object’s being worthy of desire, and a desire for the object being a state worth having.
There are various theories about what reasons we have for acting, and hence what objects merit our pursuit. For example, we may call “Instrumentalism” the claim that we have reason to fulfill our own present desires, whatever they may be. Egoism claims that we have most reason to advance our own self-interest. And Impartialism says that we have equal reason to advance each person’s interests. For any such account of reasons, we can pair it with a corresponding account of object-based local rationality, based on the following general schema:
(G) Rationality is a matter of pursuing the good, i.e. being moved by the appearance of those facts ____ that provide us with reasons for action.
Let us say that S has a quasi-reason to X, on the basis of some non-normative proposition p, when the following two conditions are satisfied: (i) S believes that p; and (ii) if p were true then this would provide a reason for S to X. We may then understand (G) as the claim that one rationally ought to do what one has most quasi-reason to do.
The different theories posit different reasons, so different quasi-reasons, and hence different specifications of local rationality in this sense. For example, according to Egoism, agents are locally rational insofar as they seek to advance their own interests. There is a sense in which the theory thereby claims this to be the supremely rational aim.[19] But let us suppose that having such an aim would foreseeably cause one’s life to go worse, as per “the paradox of hedonism”.[20] Egoism then implies that we would be irrational to knowingly harm ourselves by having this aim. This conclusion seems to contradict the original claim that this aim is “supremely rational”. The theory seems to be not merely self-effacing, but downright inconsistent.[21]
The distinction between object- and state-based assessments may help resolve this problem. We might say that an aim embodies rationality in virtue of its object, in that it constitutes supreme sensitivity to one’s quasi-reasons. Or the aim might be recommended by rationality, in the sense that one’s quasi-reasons tell one to have this aim, in virtue of the mental state itself. As before, the apparent incoherence can be traced to the conflation of two distinct modes of assessment. The aforementioned theories should be interpreted as claiming that their associated aims supremely embody rationality, even though it might not be rationally recommended to embody rationality in such a way. This reflects the coherent possibility that something might be desirable – worthy of desire – even if the desire itself would, for extrinsic reasons, be a bad state to have.
It is worth noting that this distinction appears to hold independently of the local/global distinction. We might, for example, imagine a good that would be denied to anyone who ever entertained it as a goal. If one sought it via the standard “globally rational” method of preventing one’s future momentary selves from deliberate “locally rational” pursuit, it would already be too late. There is no rational way at all, on any level, to pursue the good. Still, being of value, the good might merit pursuit. It might even provide reasons of sorts, even if one could never recognize them as such. (For example, one would plausibly have reason not to entertain the good as a goal. But one could not recognize this reason without thereby violating it, for it would only move an agent who sought the very goal it warns against.) So although the object/state distinction may recommend a shift from local to global rationality, it further establishes that even the latter may, in special circumstances, be disadvantageous. Reasons and rationality may come apart, even when no ignorance is involved, because it may be best to achieve a good without ever recognizing it as such. This would provide reasons that elude our rational grasp, being such that we ought to act unwittingly rather than by grasping the reason that underlies this very ‘ought’-fact.
Global Rationality
We have seen how various distinctions, including that between the local and global levels of rationality, can help us make sense of the indirect pursuit of goods. If we know our first-order judgments to be unreliable, then meta-coherence will lead us to be skeptical of those judgments. Indirect utilitarianism stems from recognizing that expected utility is better served by instead following a more reliable – globally optimal – strategy, even if this at times conflicts with our first-order judgments of expedience. Global rationality paves the way for utilitarian respect for rights, and meta-coherence carries it over to the local level. Essential byproducts highlight the distinction, as we may understand such a goal as being rationally pursued at the level of the temporally extended person, but not at the level of every momentary stage or temporal part. Although the object/state distinction implies that even global rationality may be imperfect, the preceding cases suggest that we would do well at least to prize the global perspective over the local one. I now want to support this conclusion by considering a further class of problems that could be fruitfully analyzed as pitting the unified agent against their momentary selves.
Consider Newcomb’s Problem:[22] a highly reliable predictor presents you with two boxes, one containing $1000, and the other with contents unknown. You are offered the choice of either taking both boxes, or else just the unknown one. You are told that the predictor will have put $1,000,000 in the opaque box if she earlier predicted you would pick only that; otherwise she will have left it empty. Either way, the contents are now fixed. Should you take one box or both? From the momentary perspective of local rationality, the answer seems clear: the contents are fixed, it’s too late to change them now, so you might as well take both. Granted, one would do better to be the sort of person who would pick only one box. That is the rationally recommended dispositional state. But taking both is the choice that embodies rationality, from this perspective. This reasoning predictably leads to a mere $1000 prize. Suppose one instead adopted a more global perspective, giving weight to the kind of reasoning that, judging from the timeless perspective, one wants one’s momentary stages to employ. The globally rational agent is willing to commit to being a one-boxer, and so will make that choice even when it seems locally suboptimal. This predictably leads to the $1,000,000 prize, which was unattainable for the locally rational agent.
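The divergence between the two perspectives can be brought out with a toy expected-payoff calculation. This is only a sketch: the 0.99 reliability figure is an assumption added for illustration, and nothing here settles the dispute between causal and evidential decision theorists.

```python
# Toy Newcomb payoffs; the predictor's reliability r = 0.99 is an assumed figure.
r = 0.99

# Expected payoff of being (and so being predicted to be) a one-boxer:
ev_one_box = r * 1_000_000 + (1 - r) * 0

# Expected payoff of being (and so being predicted to be) a two-boxer:
ev_two_box = r * 1_000 + (1 - r) * (1_000_000 + 1_000)

print(ev_one_box)  # 990000.0
print(ev_two_box)  # ~11000.0
```

From the momentary perspective the dominance reasoning is untouched (taking both boxes gains $1000 whatever the boxes contain); the sketch only illustrates why the dispositional, global question comes out so differently.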
Similar remarks apply to Kavka’s toxin puzzle.[23] Suppose that you would be immediately rewarded upon forming the intention to later drink a mild toxin that would cause you some discomfort. Since you will already have received your reward by then, there would seem no reason for the locally rational agent to carry out their intention. Recognizing this, they cannot even form the intention to begin with. (You cannot intend to do something that you know you will not do.) Again we find that local rationality disqualifies one from attaining something of value. The globally rational agent, in contrast, is willing to follow through on earlier commitments even in the absence of local reasons. He wishes to be the kind of person who can reap such rewards, so he behaves accordingly. As Julian Nida-Rumelin writes: “It is perfectly rational to refrain from point-wise optimization because you do not wish to live the life which would result.”[24]
In both these cases, the benefits of global rationality require that one be disposed to follow through on past commitments. One must tend to recognize one’s past reasons as also providing reasons for one’s present self. This allows one to overcome problems, such as the above, which are based on a localized object/state distinction.[25] But occasional violation of this disposition might allow one to receive the benefits without the associated cost. (One might receive the reward for forming the sincere intention to drink the toxin, only to later surprise oneself by refusing to drink it after all.) So let us now consider an even stronger sort of case that goes beyond the object/state distinction and hence demands more than the mere disposition of global rationality. Instead, the benefits will accrue only to those who follow through on their earlier resolutions.[26]
Pollock’s Ever Better Wine improves with age, without limit.[27] Suppose you possess a bottle, and are immortal. When should you drink the wine? Local rationality implies that you should never drink it, for at any given time you would do better to postpone it another day. But to never drink it at all is the worst possible result! Or consider Quinn’s Self-Torturer, who receives $10,000 each time he increases his pain level by an indiscernible increment.[28] It sounds like a deal worth taking. But suppose that the combined effect of a thousand increments would leave him in such agony that no amount of money could compensate. Because each individual increment is – from the local perspective of the moment – worth taking, local rationality will again lead one to the worst possible result. A good result is only possible for agents who are willing to let their global perspective override local calculations. The agent must make in advance a rational resolution to stop at some stage n, even though from the local perspective of stage n he would do better to continue on to stage n+1.
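The Self-Torturer's structure can be given a stylized rendering in a few lines of Python. The numbers below are my own illustrative assumptions, not Quinn's; the structural point is just that each increment passes the local test (the pain change falls beneath the agent's discrimination threshold, while the money does not), even though the global ledger over a thousand such steps is ruinous.

```python
# Stylized Self-Torturer (illustrative numbers assumed for this sketch).
DISCRIMINATION_THRESHOLD = 0.5  # smallest pain difference the agent can actually notice
PAIN_PER_STEP = 0.3             # real, but individually unnoticeable, pain per increment
PAYMENT_PER_STEP = 10_000       # dollars per increment
STEPS = 1000

def locally_worth_taking():
    # The local comparison: a certain $10,000 against a pain change too small to discern.
    return PAIN_PER_STEP < DISCRIMINATION_THRESHOLD

print(all(locally_worth_taking() for _ in range(STEPS)))  # True: no step ever supplies a local reason to stop
print(STEPS * PAIN_PER_STEP)      # 300.0 units of accumulated agony
print(STEPS * PAYMENT_PER_STEP)   # 10000000 dollars which, by hypothesis, cannot compensate for it
```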
It seems clear that the global perspective is rationally superior. The agent can foresee the outcomes of his possible choices. If he endorses the local mode of reasoning then he will never have grounds to stop, and so will end up stuck with the worst possible outcome. It cannot be rational to accept this when other options are open to him. If he is instead resolute and holds firm to the choice – made from a global or timeless perspective – to stop at stage n, then he will do much better. Yet one might object that this merely pushes the problem back a step: how could one rationally resolve to choose n rather than n+1 in the first place?
The problem of Buridan’s ass, caught between two equally tempting piles of hay, shows that rational agents must be capable of making arbitrary decisions.[29] It cannot be rational for the indecisive ass to starve to death in its search for the perfect decision. Indeed, once cognitive costs are taken into account, it becomes clear that all-things-considered expected utility is better served by first-order satisficing than attempted optimizing.[30] (“The perfect is the enemy of the good,” as the saying goes.) Applying this to the above cases, we should settle on some n, any n, that is good enough. Once we have made such a resolution, we can reject the challenge, “why not n+1?” by noting that if we were to grant that increment then we would have no basis to reject the next ones, ad infinitum, and that would lead us to the worst outcome.
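Applied to Ever Better Wine, the upshot can be captured in a couple of lines (the quality figures are again made-up illustrations): any resolute "good enough" stopping day beats the point-wise policy of always deferring, even though each individual deferral looks better than stopping.

```python
# Ever Better Wine, stylized; quality figures are illustrative assumptions.
def quality(day):
    return 100 + day   # strictly increasing, without limit

NEVER_DRINK = 0        # the point-wise policy defers forever, so the wine is never enjoyed

for n in (1, 10, 10_000):                # any arbitrarily chosen "good enough" day
    assert quality(n + 1) > quality(n)   # locally, one more day's wait always looks better...
    assert quality(n) > NEVER_DRINK      # ...yet resolving to stop at n beats never stopping at all
```

Which n we resolve upon matters far less than that we resolve upon some n and stick to it.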
Conclusion
The standard picture of rationality is thoroughly atomistic. It views agents as momentary entities, purely forward-looking from their localized temporal perspective. In this essay, I have presented and prescribed an alternative, more holistic view. I propose that we instead ascribe agency primarily to the whole temporally extended person, rather than to each temporal stage in isolation. This view allows us to make sense of the rational pursuit of essential byproducts, since we may ascribe deliberate purpose to a whole person even if it is absent from the minds of some individual stages. Moreover, global rationality sheds light on the insights of indirect utilitarianism, though meta-coherence allows that these conclusions may also become accessible from a temporally localized perspective. Finally, I have argued that there are cases where reasoning in the locally “rational” manner of point-wise optimization leads to disaster. Such disaster can be avoided if the agents embrace my holistic conception of rational agency, acting only in ways that they could endorse from a global or timeless perspective. Persons are more than the sum of their isolated temporal parts; if we start acting like it then we may do better than traditional decision theorists would think rationally possible.
Endnotes
[1] Harsanyi, p.122, seems to be getting at a similar idea in his discussion of the “normal mode” of playing a game.
[2] Of course we can imagine special circumstances whereby one’s holding of a belief would itself be the reason-giving ‘fact’ in question. If I am sworn to honesty, then the fact that I believe that P may provide a reason for me to assert that P.
[3] I leave open the question of whether such value is impersonal or agent-relative.
[4] Parfit (ms), p.21. I also follow Parfit’s use of the term “rationally ought”, below.
[5] Here I am indebted to discussion with Clayton Littlejohn.
[6] Cf. Kolodny’s “transparency account” of rationality’s apparent normativity.
[7] This is a familiar enough distinction; see, e.g., the Stanford Encyclopedia entry on ‘Rule Consequentialism’, http://plato.stanford.edu/entries/consequentialism-rule/#4 [accessed 21/6/06].
[8] The following is strongly influenced by R.M. Hare.
[9] Hare, p.38.
[10] Parfit (1987), pp.31-35. But cf. my section on ‘meta-coherence’ below.
[11] Ibid., p.32. (N.B. Here I quote the words that Parfit attributes to his fictional agent ‘Clare’.)
[12] Ibid., p.13.
[13] I owe the idea of “meta-coherence” to Michael Huemer. See, e.g.: http://bengal-ng.missouri.edu/~kvanvigj/certain_doubts/?p=394 (accessed 16/6/06)
[14] Two further points bear mentioning: (1) We might construct a new distinction in this vicinity, between prima facie and all things considered judgments, where the former allows only first-order evidence, and the latter includes meta-beliefs about reliability and such. This bears some relation to ‘local’ vs. ‘global’ considerations, and again I think the latter deserves to be more widely recognized. Nevertheless, I take it to be distinct from the “momentary act vs. temporally-extended agent” form of the local/global distinction, which this essay is more concerned with. (2) Even though a meta-coherent local calculation should ultimately reinforce the indirect strategy, that’s not to say that one should actually carry out such a decision procedure. The idea of indirect utilitarianism is instead that one acts on the dispositions recommended by our practical morality, rather than having one “always undertake a distinctively consequentialist deliberation” [Railton, p.166]. So my local/global distinction could apply to indirect utilitarianism after all.
[15] Elster, pp.43-52.
[16] Cf. Parfit (ms), p.30.
[17] Musgrave, p.21.
[18] This evidence-based notion is another common use of the term “reasons”. But in light of my earlier remarks, we should instead hold that objective reasons are provided by the ultimate facts, not mere “indications” thereof. The more subjective or evidence-based “reasons” might instead be conceptually tied to rationality, as per my “quasi-reasons” below.
[19] Indeed, Parfit (1987) takes this as the “central claim” of the self-interest theory.
[20] Railton, p.156.
[21] Cf. Dancy, p.11.
[22] Nozick, p.41.
[23] Kavka, pp.33-36.
[24] Nida-Rumelin, p.13.
[25] From a local perspective, the state of intending is worth having, even though the object (e.g. the act of drinking Kavka’s toxin) does not in itself merit intending. The problem arises because we regulate our mental states on the basis of object-based reasons alone (as seen by the impossibility of inducing belief at will). The global perspective overcomes this by treating past reasons for intending as present reasons for acting, and hence transforming state-based reasons into object-based ones.
[26] For more on the importance of rational resolutions, see McClennen, pp.24-25.
[27] Sorensen, p.261.
[28] Quinn, pp.79-90.
[29] Sorensen, p.270.
[30] Weirich, p.391.
References
Dancy, J. (1997) ‘Parfit and Indirectly Self-defeating Theories’ in J. Dancy (ed.) Reading Parfit. Oxford : Blackwell.
Elster, J. (1983) Sour Grapes. Cambridge : Cambridge University Press.
Hare, R.M. (1981) Moral Thinking. Oxford : Clarendon Press.
Harsanyi, J. (1980) ‘Rule Utilitarianism, Rights, Obligations and the Theory of Rational Behavior’ Theory and Decision 12.
Kavka, G. (1983) ‘The toxin puzzle’ Analysis, 43:1.
Kolodny, N. (2005) ‘Why Be Rational?’ Mind, 114:455.
McClennen, E. (2000) ‘The Rationality of Rules’ in J. Nida-Rumelin and W. Spohn (eds.) Rationality, Rules, and Structure. Boston : Kluwer.
Musgrave, A. (2004) ‘How Popper [Might Have] Solved the Problem of Induction’ Philosophy, 79.
Nida-Rumelin, J. (2000) ‘Rationality: Coherence and Structure’ in J. Nida-Rumelin and W. Spohn (eds.) Rationality, Rules, and Structure. Boston : Kluwer.
Nozick, R. (1993) The Nature of Rationality. Princeton, N.J. : Princeton University Press.
Parfit, D. (ms.) Climbing the Mountain [Version 7/6/06].
Parfit, D. (1987) Reasons and Persons. Oxford : Clarendon Press.
Quinn, W. (1990) ‘The puzzle of the self-torturer’ Philosophical Studies, 59:1.
Railton, P. (2003) ‘Alienation, Consequentialism, and the Demands of Morality’ Facts, Values and Norms. New York : Cambridge University Press.
Sorensen, R. (2004) ‘Paradoxes of Rationality’ in Mele, A. and Rawling, P. (eds.) The Oxford Handbook of Rationality. New York : Oxford University Press.
Weirich, P. (2004) ‘Economic Rationality’ in Mele, A. and Rawling, P. (eds.) The Oxford Handbook of Rationality. New York : Oxford University Press.
Monday, June 19, 2006
Sen on Economic Rationality
A person is given one preference ordering, and as and when the need arises this is supposed to reflect his interests, represent his welfare, summarize his idea of what should be done, and describe his actual choices and behaviour. Can one preference ordering do all these things? A person thus described may be "rational" in the limited sense of revealing no inconsistencies in his choice behaviour, but if he has no use for these distinctions between quite different concepts, he must be a bit of a fool.
-- Amartya Sen (1977), 'Rational Fools', p.336.
Holistic Rationality
I'm currently working on an essay exploring my (perhaps slightly wacky) distinction between the 'local' (atomistic) and 'global' (holistic) levels of rationality. Here's the core idea:
Global optimality may sometimes require us to do other than what seems optimific within the confines of a moment. Holistic rationality, as I envisage it, tells us to adopt a broader view, transcending the boundaries of the present and identifying with a timeless perspective instead. It further requires that we be willing to treat the dictates of this broader perspective as rationally authoritative, no matter how disadvantageous this may seem from the particular perspective of the present moment. This amounts to an intrapersonal analogue of the ‘social contract’: each of our momentary stages abdicates some degree of rational autonomy, in order to enhance the rationality and autonomy of our person as a whole.
I initially thought this mapped onto the distinction between 'direct' and 'indirect' rationality, where the former perspective is effectively limited to considering only immediate evidence and first-order judgments. But that's not quite right. The local perspective of a momentary person-stage can (and should) let higher-order judgments of reliability influence their present expected utility judgments, and thus metacoherence should lead them to embrace an "all things considered" indirect strategy even within the confines of the moment.
The local/global distinction is better illustrated by the sorts of rational "paradoxes" which suggest trouble for the former perspective:
Pollock’s Ever Better Wine improves with age, without limit. Suppose you possess a bottle, and are also immortal. When should you drink the wine? Local rationality implies that you should never drink it, for at any given time you would do better to postpone it another day. But to never drink it at all is the worst possible result! Or consider Quinn’s Self-Torturer, who receives $10,000 each time he increases his pain level by an indiscernible increment. It sounds like a deal worth taking. But suppose that the combined effect of a thousand increments would leave him in such agony that no amount of money could compensate. Because each individual increment is - from the local perspective of the moment - worth taking, local rationality will again lead one to the worst possible result.
A good result is only possible for agents who are willing to let their global perspective override local calculations. The agent must make in advance a rational resolution to stop at some stage n, even though from the local perspective of stage n he would do better to continue on to stage n+1. It seems clear that the global perspective is rationally preferable. The agent can foresee the outcomes of his possible choices. If he endorses the local mode of reasoning then he will never stop, and so will end up with the worst possible outcome. It cannot be rational to accept this when other options are open to him. If he is instead resolute and holds firm to the choice - made from a global or timeless perspective - to stop at stage n, then he will do much better.
Convinced?
I find something intuitively appealing about the idea that persons can abstract away from their particular momentary stage, and make decisions from a 'timeless' perspective instead. This holistic approach seems to ascribe a greater unity to persons than one finds in the standard atomistic view which pits our past and future stages against each other, as if they were wholly distinct and independent agents. Better to identify the agent as the whole, temporally extended person, and have each of our momentary selves conceive of themselves as mere parts of the whole, contributing to this "higher cause" without the presumption of total momentary autonomy -- we should have greater respect for our past reasons than that. In turn, it enables a stronger trust in one's future stages.
Sunday, June 18, 2006
Synthetic Survival
Kevin T. Keith asks why sci-fi characters would want to undergo a "cognitive upload", creating an immortal synthetic copy of themselves:
[Y]ou’re not buying anything extra for “you”, you’re just creating a new person who happens to share all your memories up to that instant (and presumably your personality traits as well), but then becomes its own entity and lives its own life. A good deal for that new person, of course, but why would “you” care?
He further claims:
[I]t does not grant immortality to the entity actually choosing to undergo the copying process... his consciousness, as an artefact of his brain, stays with his brain; the new body gets its own consciousness, resident in its synthetic brain, which picks up with the exact memories and thoughts the old one had had and then branches off on its own path, but the old one just remains what it was.
But what's the basis for this claim? I don't deny that the copying process would lead to two distinct seats of consciousness, with their respective organic and synthetic physical substrates, but why think that the consciousness of his past self "stays with his brain"? (It's question-begging to hold this on the basis that "his consciousness [is] an artefact of his brain". Each future consciousness is the product of its respective "brain". The question is which, if either, is to be identified with the past self. The above proposal presupposes that the person endures through his organic brain. We're not given any reasons to accept this view.) I agree that it's silly of the character to think he had “a fifty-fifty shot” of ending up in either body. But that's not because the self endures in its organic body. Rather, it's because the self doesn't endure at all.
According to Parfit's reductionism, which I'm sympathetic to, there is no enduring Cartesian Ego of the sort that Kevin's remarks presuppose. I'm having some conscious experiences now, and a future person-stage will have his own experiences and remember mine, but we are two distinct subjects of experience. We exhaust the relevant facts once we state the relations of physical and psychological continuity between the person-stages. There is no "deep further fact" about whether another person-stage is really "me" or not. (Parfit's sorites-type cases help support this conclusion.)
Crucially, on this view, my present consciousness does not "stay" anywhere apart from this very moment. It does not endure into the future. Each moment gives rise to "a new person[-stage] who happens to share all your memories up to that instant". Our everyday persistence consists in no more than the sort of "copying" that Kevin dismisses. Since I value my persistence all the same (even if it merely consists in perdurance, not endurance), I'm led to value "copying" likewise. What matters for our persistence, insofar as our persistence matters at all, are relations of psychological continuity. Future "copies" will be psychologically continuous with our present stage, and this provides us with all the reason we could coherently ask for in order to care about them as a future 'self'. (That's not necessarily to say that it's very much reason.)
Kevin goes on to discuss the tricky problem of double-survival, and ends up accepting a position that sounds much like Parfit's. At least, he suggests that the best way to describe a case of double-survival is to say that neither resulting person is identical to their shared past self (even though each would have been, had it not been for the other survivor). Though I'm not sure whether he goes all the way to accepting that this shows that identity is merely superficial, and isn't what matters for survival. (I'm also not sure whether he means this as a retraction of his earlier claim that the person's consciousness "stays with" their organic brain, and if not, how he would deal with Parfit's more symmetrical split-brain case.)
Anyway, I very much share Kevin's ultimate conclusion about the topic: "Good fun." Perhaps I should conclude by returning to his original question: given that survival doesn't "buy anything extra" for your momentary self, "why would “you[-now]” care?"
Labels: ethics - good life, metaphysics - identity, mind, Parfit
Metaphysical Mayhem
Is metaphysics at all relevant to living? Here we may distinguish theoretical and practical relevance. The first concerns whether metaphysical conclusions can have any implications for ethics, agency, or otherwise affect the way we think about ourselves (and what we care about). In other words, the question of whether metaphysical inquiry takes hostages.* It is a further question whether those theoretical implications have any practical effect on how we live our lives. I'm rather more skeptical of that. But I invite readers to suggest what (if any) metaphysical debates they take to be more broadly "relevant" in either of these two senses.
I do think that some metaphysics has theoretical relevance to living. I've previously written about how I think modal realism would render ethics and choice meaningless. I've also worried about "narrow fatalism" and the lack of an open future. Although I think eternalism is probably true (since I can't make sense of its negation), this must deflate our self-conception as libertarian free agents. Yet libertarian free will seems necessary for the sort of strong moral responsibility required for retributive punishment. If a person is not the ultimate source of his evildoing, then vicious character is merely a mental illness which needs to be cured, not punished (unless perhaps for utilitarian reasons, e.g. deterrence). This conflicts with the common-sense view that bad people deserve to suffer.
I guess I'm more concerned about the implications for my own agency, though. If we see through the illusion of endurance, this makes matters even worse. I won't even really exist in the future! Someone might, who's very much like me, but not the entity ("myself") who exists wholly in this moment. *sigh* It can all be a bit unsettling.
Not that any of that has any practical influence on me. I'm not too sure what to make of this disconnect. Am I simply irrational? Or do I not really believe the problematic theoretical views, at least not with sufficient certainty? Or are they actually irrelevant to the kind of life I want to live? I'm inclined towards the latter, though that may be wishful thinking. No, it seems justified: fatalism and the like don't seem to give me any positive reason to live any differently from how I am. Which raises the question: what would?
The only obvious candidates I can think of are metaphysical views which extend our lives further than we would otherwise expect, say through an afterlife. If I could only get into Heaven by bribing the Catholic Church, I'd be more inclined to do just that. Or if I could believe my favourite theology, I might do a better job with the Plutonium Rule. (I think it'd be an appealing and rewarding way to live in any case. But easier said than done.)
Any other suggestions? (Bonus points to anyone who can find importance in the universals debate, heh.)
* = Hence the "mayhem" in this post's title. I think there's a conference which goes by the same name. I should clarify that this post has nothing to do with that.
Labels: ethics - good life, metaphysics, philosophy. Posted by Richard Y Chappell at 4:32 am. 3 comments.
Friday, June 16, 2006
Constructing Rightful Property
I recently argued that property is a contingent human institution, so we should consider the various possible implementations and pick the best one. We shouldn't simply assume from the start that the unrestricted system that libertarians favour would be best. We might instead build some restrictions, e.g. taxation, into the system. (To call taxation "theft" is a category error: it confuses internal violations of institutional rules with the higher-order question of what the rules themselves ought to be.) Or, as Hilzoy of Obsidian Wings puts it:
[N]o specific set of rules is the presumptively legitimate baseline from which we have to start. Property is a social construct, and when we construct it, all the different sets of rules that would define different systems of property are logically on a par.
One might object that this account gives too much weight (or "metaphysical force") to government or social institutions. Hilzoy responds with an analogy to sporting and other conventions: society "constructs" property like it does the rules of baseball, creating the possibilities of "theft" and "striking out", respectively. No great "metaphysical force" is required for either act of creation.
I think more needs to be said here. The rules of baseball are rather arbitrary, after all. They are merely conventional. Yet people typically think that there's more to property rights than that. Thieves aren't just breaking the law, they're also doing something wrong. Rights ought to be respected; they constitute moral requirements. So it seems that if institutions create rights, then they create moral requirements. That would be a distinctively significant form of "metaphysical force".
A related concern would be that the account gives the institutors too much discretion or normative power. If there is a wide range of possible "rights" that could be instituted, between which society is to decide, this seems to imply that society gets to decide what's morally required -- and that would be absurdly relativistic. (N.B. Hilzoy, to her credit, pre-empts this obvious objection by declaring: "I don't mean that we can make [property] any old way we want without criticism." But I learned the hard way that some critics simply ignore such caveats if they're not fully explained, so it's probably worth making clear exactly how the relativistic conclusion is to be avoided.)
I would answer the problem like this: society can (in the morally neutral sense of "has the ability to") institute a wide range of "property" frameworks and associated legal rules. But it shouldn't be assumed that the rules have moral or normative force. We don't get to decide which frameworks are good ones. (Morality is no "social construct" in this sense.) The normative status of internal violations is derivative upon the normative status of the institution itself. I explain all this further in my post on 'Institutional Rights'. The core point is that we can escape the above objections because institutional rights are not normatively fundamental, so they do not create moral requirements out of thin air. The state has neither the rightful discretion nor the "metaphysical force" to perform such a feat.
It won't do to have just any old system of property. We want to have rightful property: property such that its theft would be morally wrong. My claim, in short, is that society constructs only the "property" component of "rightful property". The normative component has an independent foundation (i.e. indirect utilitarianism).
That's my account. Hilzoy has promised the next post in her series will tackle the question of "which set of rules is just?", so she might offer her own version there. Though it sounds like it may instead focus on assessing particular proposed rules, rather than this "meta" issue of what determines the normative status of institutional rights. I guess we'll have to wait and see...
Monday, June 12, 2006
Philosophers' Carnival #31
...is here. Is it just me, or is the carnival getting better each time?
Update: I forgot to mention, we need a new volunteer for the next one! Check out the hosting guidelines, and email me if you'd like to give it a go. Just think how fun it would be to have your own carnival... and of philosophers, no less!
What is Existence?
We all have an intuitive grasp of what it is for entities to exist. My parents exist whereas Santa doesn't, and all that. But what of abstract objects? When philosophers argue about whether numbers truly exist, what is in dispute here? Even ontological debates about material entities seem dubious: does there exist an individual entity which is a table, or are there merely particles arranged table-wise? What's the difference? These don't seem to be debates about how the world is. Everyone agrees that there is table-ish stuff in the world. They merely dispute how to count or describe it.
Of particular concern are too-easy arguments like the following:
(P) There are nine planets.
(C1) So, nine is the number of planets.
(C2) So, there is a number that is the number of planets.
(C3) So, there exist numbers.
They start with some undisputed fact, and show that it trivially entails a (seemingly substantive) ontological conclusion. But surely that's cheating! Trivial entailments can't produce substantive new results. They merely serve to highlight what is already contained in the premise. But counting planets shouldn't commit us to the existence of numbers in any deep sense, should it? At least, if the above argument is sound, then it's a marvel that so many smart philosophers could make such a simple blunder. Ontology would be easy!
At this stage, many philosophers appeal to a distinction between kinds of existence claims -- I'll follow Cian Dorr in calling these "superficial" and "fundamental". The idea, then, is that the above argument is valid only if 'existence' is used in the same sense throughout. The Platonist conflates the two, invalidly jumping from premises about superficial existence to a conclusion about fundamental existence. We can all agree that numbers "exist" in the superficial sense that follows analytically from sentences like "there are nine planets". But that says nothing about fundamental existence, which is what philosophers (at least, ontologists) are interested in.
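To make the alleged equivocation explicit (the subscripted regimentation here is just my own rough shorthand, not anything from Dorr), we can tag the two readings of the quantifier, writing "there are-s" for the superficial sense and "there are-f" for the fundamental sense:
(P) There are-s nine planets.
(C1) So, nine is the number of planets.
(C2) So, there is-s a number that is the number of planets.
(C3-s) So, there are-s numbers.
(C3-f) So, there are-f numbers.
Every step down to (C3-s) is as trivial as the premise; but the conclusion the Platonist actually wants is (C3-f), and nothing in the argument licenses swapping the subscript at that final step. That, at any rate, is how I understand the diagnosis.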
So what do these two senses of 'exist' really amount to? I think the "superficial" sense is tolerably clear. It concerns those claims we can arrive at through conceptual analysis, analytic entailments from commonsense truths, and so forth. In this sense, the existence of abstract objects is an entirely trivial matter: to affirm it is not to claim anything substantive about how the world is. Rather, claims like "there exist numbers" are analytic: true simply in virtue of meaning, without needing any input from the world. They're more semantic than metaphysical in nature, telling us only about language and not reality. (Of course, they might be combined with a worldly component to form synthetic claims, e.g. "there are nine planets".) Scientists can look into the empirical component of such claims, but there's nothing of interest here for philosophers.
What of "fundamental" existence? This seems harder to characterize. It is meant to involve a substantive claim about how reality is. As such, claims of fundamental existence are never merely analytic. The trivial argument given above has no place in "serious ontology". Instead, perhaps, we may use inference to the best explanation, or Quinean indispensibility arguments, to conclude that we should posit some class or other of abstract objects. I follow that much. My worry is this: what, exactly, are we "positing" here? ("The existence of numbers." "Um, okay, and that means...?")
To restate the problem: how would a world with numbers be any different from a world without them? I take it the answer must be that my question is ill-formed. There are not two such possibilities to compare. Whether numbers exist or not, they have this status necessarily, and there simply is no sense to be made of the alternative proposal. But then it's still hard to see what the ontologists are disputing. (Perhaps "which of us is speaking nonsense"?)
Here's something I found helpful: In last week's reading group, Brendan pointed out that when we're speaking in the superficial sense (i.e. all the time in everyday life), we have a limited concern for the ways in which what we say might not be literally true. We're only interested in a restricted class of "relevant alternatives". His example: when I say "The board is white", we would consider it relevant if this turned out to be false because the board is really black or blue, or perhaps if I was merely hallucinating the board in the first place. But we are not concerned about the possibility that the claim is strictly false because colours don't objectively exist, say, or because there exists no fundamental entity that is a "board", but merely atoms arranged board-wise!
What I take away from this is that, in communication, we seek to narrow down the list of (epistemically) possible worlds which are candidates for actuality. When I say "the board is white", this serves to knock out those possible worlds which lack white board-ish presences. Think of it this way: we are given the various possible worlds, and we have to sort them into an "in" pile and an "out" pile. We all know how to do this, or what kind of instructions "the board is white" is meant to convey here, even if we don't know exactly how to define or describe the contents of the possible worlds that have been given to us. In particular, we don't know whether those white-boardish presences should be described fundamentally as individual objects that are white. Perhaps they shouldn't -- perhaps such macroscopic commonsense terms do not make it into "the final analysis". But we can still identify which worlds they are meant to pick out. We can tell which worlds contain stuff that fills the whiteboard role.
Another example: When I say that Santa Claus does not exist, I mean to reject those possible worlds which contain a certain qualitative character. This is the character we all associate with Santa: a red and white humanish presence usually located at the North Pole, which flies around the world delivering presents each Christmas Eve. I could draw a picture, if that'd help. ;-) Anyway, if you picture a possible world in your mind's eye, you can tell whether it contains the sorts of qualities I'm talking about here. (Or if you read a sufficiently thorough description in some idealized language.) It doesn't matter whether "Santa" is really just atoms arranged Santa-wise. So long as there's something(s) playing "the Santa role" in a world, then that counts as Santa "existing", for my purposes.
I welcome suggestions for how to express this notion more clearly, but hopefully you get the rough idea. (I feel like it's related to this post, but I can't say exactly how.) Substantive claims of superficial existence (say of concrete entities) serve to distinguish between various possible worlds. Fundamental existence claims are different. Ontologists don't discriminate between possibilities, telling us that world w1 is actual rather than w2. Rather, they fill out the possible worlds' contents, specifying what (exactly) we find in the given worlds w1 and w2.
If an ontologist says that Santa couldn't possibly exist because there are no composite objects but only arrangements of atoms, they haven't really said anything which narrows the space of possibilities. It's like Putnam's lesson from Twin Earth: the world we qualitatively imagine is still possible, it's just that we were misdescribing it. That watery presence shouldn't be called "water", and that Santa-Clausish presence shouldn't be called an individual.
Note that my analogy might be a little misleading, in that the ontologist isn't making a merely semantic claim about the meaning of our word "individual", or "exists". Rather, he's making a (purportedly) substantive claim about the contents of possible worlds. ("That Santa-ish presence is really not an individual entity! And there really are numbers -- and I'm talking about reality, not about our words!")
But we can grant that while recognizing my point that he's not really narrowing the possibilities in my sense. Since he's making claims about the necessary contents of possible worlds, if he's right then the alternative view doesn't invoke any genuine possibilities at all. It has the same kind of status as the "possibility" that there are finitely many prime numbers. This doesn't describe any coherent scenario -- it's just that not everyone realizes that.
Compositional nihilists don't really believe in a more restricted space of possible worlds than the rest of us, I take it. They simply dispute what those worlds contain. Pointing to a molecule of hydrogen gas in some possible world, they will deny that it is a third thing in addition to the two hydrogen atoms that compose it. (Let's pretend that our so-called "atoms" really are indivisible.) They don't deny that this world (*points to a spot in the North-East of modal space*), that this world we're discussing is a possible one. They simply think we mistake its contents when we say it contains molecules as well as atoms.
I'm not convinced the difference between these views really is a substantive one though, since I can't see what the difference is. Sure, one philosopher says the two atoms compose a molecule, and the other denies this. But again, what is the content of this disagreement? What difference does it make whether we say there are three things here or just two? It seems to come down to the arbitrary matter of how you choose to count! So I'm skeptical that existence in this "fundamental" sense really amounts to much. Perhaps we should be pluralists about it, allowing that adopting different ontological frameworks might be useful for different philosophical purposes, but there's not really any deep fact of the matter. The important (even if "superficial"!) existence questions concern the differences between possibilities, and the empirical question of which one is actual.
(Though cf. Chalmers' more moderate view, which allows that there might be some determinate ontological truths, as well as some indeterminate matters.)
Sunday, June 11, 2006
Snippets
I thought I'd offer some interesting snippets from this morning's blog reading. First on the Guantánamo suicides...
Velleman:
This protest was perhaps the only autonomous action that was left to them, the only end that they still had effective means to pursue. To be sure, their means of effecting this protest was to destroy themselves. But what they destroyed were not lives imbued with the value of rational autonomy; what they destroyed were lives whose value was already being desecrated daily, with no end in sight. Their personhood was already being used as mere means, by their jailers, and their suicides put an end to that abuse, as the abuse of slavery is ended when the slave dies attempting to escape.
What we would say of the slave is, not that he lacks regard for human life, but that he shows precisely that regard which is appropriate to it. I would say the same of the Guantánamo suicides.
Obsidian Wings:
I guess what Harris means by an "act of asymmetric warfare" is that this makes us look terrible, and may motivate people to commit acts of terrorism. That is possible. Although there's no evidence that the motivation of these suicides was to inspire attacks (rather than to increase pressure to free the other prisoners, or simply to die), they may have that effect. But if making the world think that the US mistreats prisoners is an act of war against the US, then it looks like George Bush, Dick Cheney, Donald Rumsfeld, John Yoo, David Addington and Geoffrey Miller (to name a few) are also "enemy combatants".
And more (otherwise unrelated) cutting criticisms:
He might have written a book on informal logic. If we thought it was any good, we might recommend that he read it.
Ouch!
Update: more fun, this time from Julian Sanchez:
The idea, insofar as I can make it out, seems to be that if countries where marriage is viewed as less important have been among the first to let gay people in, then any country that lets gay people into marriage will come to view it as less important. Why we might expect this to be the case is, alas, not explained. I notice that both this argument and the "procreative link" one appear to rely on the presumption that "If A, then B" entails "If B, then A." I think we may have discovered the real fountainhead of opposition to gay marriage: It's not homophobia, it's the inability to distinguish between a conditional and a biconditional. Which is a little odd, really: You'd think they'd like a logical operator that only swings one way.
And now Fafblog! weighs in on Guantanamo:
Run for your lives - America is under attack! Just days ago three prisoners at Guantanamo Bay committed suicide in a savage assault on America's freedom to not care about prisoner suicides! Oh sure, the "Blame Atrocities First" crowd will tell you these prisoners were "driven to despair," that they "had no rights," that they were "held and tortured without due process or judicial oversight in a nightmarish mockery of justice." But what they won't tell you is that they only committed suicide as part of a diabolical ruse to trick the world into thinking our secret torture camp is the kind of secret torture camp that drives its prisoners to commit suicide! This fiendish attempt to slander the great American institution of the gulag is nothing less than an act of asymmetrical warfare against the United States - a noose is just a suicide bomb with a very small blast radius, people! - and when faced with a terrorist attack, America must respond. Giblets demands immediate retaliatory airstrikes on depressed Muslim torture victims throughout the mideast!
Saturday, June 10, 2006
"All things aside"?
Hmm, I just read in the paper a Jordanian quoted as saying that he was proud of Al-Zarqawi, and that "all things aside" the terrorist was better than "treasonous Americans". Does "all things aside" mean the opposite of "all things considered"? I'm not sure how else one could come to his conclusion, except by setting aside all the facts. Heh. But seriously, I don't think I've come across that phrase before. I'm guessing it means something like "despite his faults" (or, more generally, "no matter all the problems")? But someone out there must know more: do enlighten me...
Public Sex, Privacy and Shame
The Volokh Conspiracy has an interesting series of posts on whether public sex and nudity should be legal. Assuming that no real harm is done, it's hard to see why not, unless you consider "yuck!" (or "I'm offended!") a legitimate reason for restricting others' liberty. I'm inclined to think that such matters should instead be regulated by informal social norms. But that's not really what I'm interested in here. Given the prevailing attitudes, having sex in public is extremely rude and inconsiderate, whether or not you consider that a criminal offence. What I'm wondering is: are the prevailing attitudes reasonable ones? Should we find public sex offensive? Is it wrong for reasons apart from any arbitrary offence it may cause?
An appeal to cultural liberalism could justify a general policy of reticence. As Nagel writes (HT: Velleman):
[B]oundaries between what is publicly exposed and what is not exist for a reason. We will never reach a point at which nothing that anyone does disgusts anyone else. We can expect to remain a sexual world deeply divided by various lines of imaginative incomprehension and disapproval. So conventions of reticence and privacy serve a valuable function in keeping us out of each other's faces.
Such compromise is pragmatically sensible. But, politics aside, it leaves open my questions about which stance is the ideally rational one. (The whole point of cultural liberalism is that we should tolerate potential irrationality through tactful non-acknowledgment, rather than violating others' privacy in attempts to enforce conformity to our own conception of perfection.) To draw any stronger conclusions, we will need to look more closely at the nature of privacy and shame.
The right to privacy is of monumental importance, for reasons explained in the latter half of my post 'Living as Storytelling' (with further reference to Nagel). The flourishing autonomous individual must not be constantly burdened with the weight of the public's gaze. He has a right to be free of it. But what if he (incomprehensibly, to me) chooses such exposure? Our rights are granted for our own sakes, and we may refrain from exercising them if we so please. A right to privacy in one's sex life does not by itself entail a duty to refrain from sex in public. So what must be established here is no mere right to privacy, but the more dubious claim that we have a duty to keep our business private. Where would such a duty come from?
Laurence Thomas writes:
Privacy is about two things that operate in tandem: what others have access to without seeking permission and what people can offer to others without seeking permission... Self-disclosure is not appropriate merely because a person want[s] to do so.
Presumably this is due to consideration for the listener, and particularly the desire to avoid causing offence. If both speaker and audience welcomed such disclosure, then it's surely unobjectionable. So this brings us back to my original question: are there any good reasons why we should be offended by another's self-disclosure? Or are our feelings here fairly arbitrary, and hence the need to respect them (i.e. the "duty of privacy") correspondingly contingent?
What of shame? Drawing on Velleman, my earlier post suggested that feelings of shame derive from awareness of one's failings as a self-presenting agent, due to unintentional self-disclosure. But if the disclosure is voluntary and intentional (cf. porn stars), then no shame results. We might say such people are "shameless". We feel that they shouldn't be so keen to expose themselves. But why not? That's the crucial question I haven't seen anyone address yet.
The closest is Laurence Thomas' claim that "a very clear indication that a person does not take himself sufficiently seriously is just the fact that the individual discloses way too much about himself." Is that true though? Why should such openness indicate a lack of self-respect (rather than, say, abundant self-confidence)? Perhaps the idea is that we need to have a restricted public persona, while holding something back, in order to be fully human. But again, it's easier to offer such proposals than to justify them. Perhaps excessive public openness precludes private intimacy: there's just nothing special left to share. Shamelessness might then be seen as a crime against one's intimates, or even against one's own humanity.
But all that sounds a little flimsy to me. Does anyone have any better ideas? In the absence of such, I have trouble seeing any wrongmaking features intrinsic to shamelessness. Perhaps the only real problem with it is the extrinsic worry about needlessly causing offence to others. (What do you think? Comments welcome.)
There are special cases, of course. In response to Sage's post on public masturbation, one person commented:
a person who is masturbating in public while looking at another person is making that person a part of their sex act, often without the other person's consent. that's why public masturbation makes me angry - if someone is watching me while they jack off, they're making me a sex object and they're including me in something sexual without my permission.
This bears clarification, however, for it risks implying "thoughtcrime". The problem cannot simply be that they've made you their "intentional object" (i.e. the object of their thoughts) -- I assume there's nothing wrong with sexual fantasy. One needn't ask another's permission merely to think about them, even in a sexual way. Rather, the problem here must involve the blatant disclosure of such thoughts. And this case plausibly goes beyond the mere risk of causing offence. The action seems to have overtones of aggression or disrespect. One supposes that the twisted individual's intention is not merely to enjoy the thought of you (which is surely innocent enough on its own), but rather to demean you, to announce to the world that he only cares about you as an instrument for achieving his own ends. One supposes that he might just as well spit on you when he's done.
The suppositions might not always apply, but they certainly indicate a class of public sexual activity that would be grievously immoral. The problem there derives not from general concerns about excessive self-disclosure, nor from sexual prudishness specifically, but rather from the vicious and degrading intentions that were expressed in that particular case. Because it turns on this social communication, I suspect that the moral status of public masturbation is highly sensitive to social context. The case described above depends heavily on the backdrop of a misogynistic culture, for instance. Without that cultural background, the intentions communicated by the action might be very different indeed, and perhaps entirely innocent. (We might imagine a culture where such behaviour was interpreted as a polite compliment on one's appearance, for instance!)
For a rather different case, we might also imagine a shameful creature so overcome by desire at seeing a topless woman walking down the street that he simply cannot restrain himself. If we stipulate that he feels no ill will towards those exposed to his self-gratification, then he seems more deserving of pity than moral outrage. The earlier discussion implies that he will feel great shame for his lack of self-control. The rest of us may disapprove of his sub-human failure, but in a very different way from the previous case. This guy's pitiful behaviour communicates his powerlessness before the Other. The earlier case involved deliberate action meant to communicate the actor's power over the Other. ("I can do what I want with you, and there's nothing you can do about it." -- I think Sage metaphorically dubbed this "rape at a distance" in her comment thread.) So, some important differences there, I think.
Right, I'm all thought out, curious though these issues are. Your turn...