Monday, October 31, 2005

Philosophers' Carnival #21

The latest Philosophers' Carnival is finally up at Prior Knowledge. I'd forgotten how much work hosting one of these can be. Fun though. (I thought the bats were a nice touch.) There are heaps of interesting entries this time around too. Go see for yourself!

The Purpose of Marriage

A year ago I despaired of finding an intelligent, plausible argument against gay marriage that wasn't merely a transparent rationalization of prejudice. I've finally found that argument, over in the archives of Gideon's blog. I think it ultimately fails, for reasons to be explained, but it's a coherent and well-articulated argument that warrants careful consideration. To provide the background context for his argument:
My own thinking on this topic has gotten, if anything, more conservative over time, and this disturbs me. I have a number of close gay friends; I know gay parents whose kids are wonderful, extremely well-adjusted people. I have no reason to believe that gay couples would be unable to form stable families. I'm convinced that for an irreducible core of individuals, homosexuality is not a choice but a destiny, and I think it is cruel to say to such people that they must hide who they are from shame. Believing all this, I should be an advocate of gay marriage. And I was, until fairly recently.

What changed my thinking had nothing to do with the nature of gay people or my sense of what was fair and just. What changed my view was thinking hard about the meaning of marriage, how that meaning has been debased, and how the case for gay marriage as currently articulated makes it extraordinarily difficult to restore what is essential about marriage; how it will, in fact, close the door on the possibility of restoration of what has been lost.

What has been lost, he suggests, is the expectation of marriage, i.e. the institution's status as a social norm. You'll need to read the original post to see why this is important, as I doubt my selected excerpts will do it justice. But here's the core idea:
Because marriage is a difficult good, we cannot count on young people to choose it on the merits. The principal way that a culture increases the short-term value of a difficult good, making it much more attractive to pursue, is by according it status... the way the culture sends the message that to be married is to achieve status is by saying that marriage is normal and that people who fail to marry are, in some sense less than whole people. Marriage is articulated not as an achievement, but as a stage in life that everyone, more or less, is expected to achieve; like learning to walk, learning to read, getting a driver's license, graduating high school, getting a job. Sure, some people will never learn to drive and some people will never marry. But they will be understood by all to be exceptions, in some sense, to a general rule.

He worries that marriage will never be the norm for gay people, and that "those who marry will do so because they chose to, not because they understood it was expected of them." He continues:
[A]ssuming that gay marriage really is taken seriously (and I give gays sufficient credit that this will be the case), gay male couples are likely to consider marriage in roughly the way that people consider entering the clergy. Marriage will be recognized as a meritorious lifestyle, one to be admired - one superior, perhaps, to the footloose ways more gays will follow. But there will, of course, be no censure for not marrying, any more than there is censure for not becoming a priest or minister. Even if the conservative case for gay marriage is fulfilled, and gay marriages are as stable as straight ones, and the existence of gay marriage as an institution makes such marriages more common and exerts a stabilizing influence on gay life generally, it seems very unlikely to me that marriage will ever become a norm among gay men.

He concludes: "Straights will learn from gays that while marriage may be rewarding for some, it requires extraordinary sacrifice and discipline, and really isn't for everyone."

The first thing to be said about all this is nicely articulated by Bob McGrew:
It's the alternatives to gay marriage that pose the most threat to heterosexual marriage... Living together is increasingly being defined not as a trial period before marriage but as an alternative to marriage itself. Most large companies offer benefits to unmarried couples just as they do to married couples, and almost all do so for opposite-sex partners as well as same-sex partners... Yet there is one way to get rid of domestic partnerships with a single stroke, saving marriage from its single greatest competitor. That way is making gay marriage legal.... By restricting benefits to marriage, [corporations] will still be able to attract gay employees without losing straight employees. Domestic partner legislation would wither on the vine after losing its most important constituency.

In other words, it is the current exclusion of gays from marriage that teaches the rest of us that marriage is not necessary -- even for couples in a committed monogamous relationship.

A big part of Gideon's complaint concerns the conception of marriage commonly found amongst advocates of SSM. He writes:
Gay marriage is discussed as a right, part of the right to freedom of sexual expression and equality of treatment. And if those are the terms of its acceptance, then I don't see how we can ever go back to talking about marriage as a norm.

This ties in with his insistence that "marriage is not all about love":
[M]any people I know did not marry for "love" in the sense that you see in the movies. They married because they were ready to get married. If they were in a "relationship" of one sort or another, they proposed to their girlfriend - or, in one case, ditched her and quickly found someone more marriageable. If they were not, they actively sought out the right sort of man or woman - the sort they could imagine living with even after they grew wrinkled or fat - and, if the other party was willing, married them... This is the unromantic perspective that marriage is made of, far more than of love, sex or romance - far more, even, than of friendship, which is a different thing; also precious, and one's wife or husband really ought to be one's friend, but not the same at all. But this is not how the advocates of gay marriage talk about marriage...

As with everything before, the assumption that marriage is fundamentally about love (with the corollary that if love fades, presumably so should the marriage - after all, there might still be time to actualize oneself through another, yet more thrilling love!) does not originate with the campaign for gay marriage; far from it. But again, acceptance of gay marriage entails explicitly understanding marriage in this way, and therefore bars the way back to a more realistic appraisal.

Jason Kuznicki is a clear counterexample to the posited "entailment". Jason compellingly argues that a deep sense of 'nurturing' is the true purpose of marriage:
It cheapens the covenant to say that marriage is just about sex, or just about rights, or just about children. Marriage is about all of this — and more. Marriage is a complete, all-encompassing, nurturing relationship. It’s about care for the whole person, so much so that no one else in all the world is quite as important.

In response to the crucial question of why government has an interest in marriage (if not for the babies), Jason explains:
Protecting the right to nurture requires more than merely looking the other way because the nurtured are vulnerable and because nurturers do things for them that non-nurturers must never be trusted to do. Our natural right to designate (or act as) a nurturer therefore leads directly to a contractual right wherein the government distinguishes between nurturers (who may make decisions for us) and non-nurturers (who must not be allowed to pose as something that they are not)...

To respect the desire of two individuals who wish to nurture one another, a government must make certain that its laws do not interfere with the types of behavior that a reasonable person might want a nurturing caregiver to perform:

–The government has an obligation to respect our determinations about who should make medical, legal, and financial choices for us when we are incapacitated; about how we wish to dispose of our property on death; and about our decision to share childrearing responsibilities.

–The government ought not to compel the separation of nurturing partners merely because one is a foreign national; the citizen in the relationship must be expected to help the alien adapt to our culture.

–The government ought not to expect testimony from one nurturing partner against another; having developed (or at least promised) the lifelong habit of looking out for one’s partner, impartial testimony cannot be expected.

–The government ought to institute a formal process for initiating a nurturing relationship, if only so that the above rights may be unambiguously secured. This should ideally be an act distinct from the various religious rites of marriage.

–The government ought to institute a formal process for ending a nurturing relationship; while marriage for life is generally recognized as the ideal, some mechanism should exist for those who have determined that they will never reach the ideal owing to insuperable obstacles...

This, to me, describes the heart of marriage, its reason for being, and its connections to sex, family, spirituality, and the state.

So far I've been highlighting some potential internal criticisms, responding to Gideon's post on its own terms. But from a liberal perspective, I'm also concerned about some of his assumptions. In particular, his claim that "we cannot count on young people to choose [marriage] on the merits" strikes me as disturbingly paternalistic. Once the benefits are clearly articulated, we should trust individuals to judge how best to live their own lives. No need to "censure" those who don't conform. Ick.

Saturday, October 29, 2005

Upcoming Carnival

[Update: Just one more day to get those entries in!]

The Philosophers' Carnival seems to have run fresh out of volunteer hosts, so I'll have to do this one myself. It'll be at Prior Knowledge, next Monday. Start sending in your submissions...

(And let me know if you'd like to host a future carnival.)

Know Show

It's commonly assumed that in order to know something, we must be in a position to show or establish its truth — or at least be aware of the reasons which ground our knowledge. This is what most skeptical arguments depend upon — a brain in a vat might have experiences subjectively indistinguishable from my own, so there's no way to establish that I'm not a BIV, and so I must not know it either. Externalists simply deny this last step. You can have 'knowing' without 'showing'. Perhaps all that really matters for knowledge is that my true belief is formed by a reliable process, or that it 'tracks the truth' through close possible worlds -- regardless of whether I'm aware of this fact. Now, sometimes people's immediate reaction is to think that such externalism simply begs the question against the skeptic. "After all, you can't really show that you're not a BIV. The externalist just assumes that we're not, and argues from there. But what entitles him to such an assumption?" In this post, I want to explain why such complaints are misguided.

(I'll be drawing upon the responses I've previously made to Blar and Rex Hubbard when they offered these objections.)

First, note that the skeptic's challenge is whether we can have actual knowledge in light of the counterfactual possibility of our being BIVs. The question is not whether our knowledge would survive were we actually massively deceived -- obviously it would not. False beliefs cannot constitute knowledge. So the only cases worth considering are ones where we have true beliefs.

The skeptic accepts this, but argues that even if we have true beliefs about the external world, these cannot constitute knowledge because our subjective uncertainties mean that our beliefs lack justification. The externalist response is to deny that knowledge requires subjective awareness of justification. Instead, they suggest, our beliefs must be reliable or 'justified' in a more objective sense. There must be an appropriate connection between the truth of the belief and our believing of it. What matters for knowledge is that this connection holds as a matter of fact. It does not matter whether we are aware of the connection.

To reiterate: the skeptic is willing to grant that my belief that "I am not a BIV" is true. So the externalist is not begging the question when he assumes that I'm not a BIV. (Besides, we can always discharge the assumption via conditional introduction, if you insist. That is, we conclude that if I am in fact not a BIV then I in fact have knowledge. Since it's certainly possible that I'm not a BIV, it follows that we can, possibly, have knowledge.) And the externalist will certainly agree that we cannot show that we're not BIVs. He grants the skeptic this much. The dispute is simply over whether I can know that I am not a BIV, given that my belief is true but undemonstrable. On an externalist conception of knowledge, I can have such knowledge.

(Of course, that doesn't show that externalism is the true theory of knowledge. Perhaps it isn't. But that's another issue. The current question is whether the truth of externalism would defeat skepticism. And the answer is clear: it would.)

Another side-issue is whether it is in fact true that these objective conditions required for knowledge hold. For example, is it true that I'm not a BIV? I obviously believe so, but this isn't something I can show. But the whole point of this post is to explain why this doesn't matter. I can't show that I have knowledge, because I can't show that I satisfy the objective conditions required for knowledge (which at a minimum include truth). But, contrary to the skeptic, this doesn't mean that I lack knowledge. A belief might be true even if we can't show it. Similarly, it might constitute knowledge, even if we can't show it.

Anyway, this all ties in rather nicely with a brilliant anti-skeptical move by Thomas Reid, recently quoted on the common sense philosophy blog:
Reason, says the skeptic, is the only judge of truth, and you ought to throw off every opinion and every belief that is not grounded on reason. Why, Sir, should I believe the faculty of reason more than that of perception; they came both out of the same shop, and were made by the same artist; and if he puts one piece of false ware into my hands, what should hinder him from putting another.

In other words, shouldn't the consistent skeptic also doubt the dictates of logic, trusting nothing — not even the conclusions of their own arguments? Such an uber-skeptic would have no chance of showing that we lack knowledge. At best, he might insist that we can't show that we have knowledge (though Reid would presumably question his certainty even of this). But even if we conclude that we can't show anything at all, that still leaves open the possibility that we know all sorts of things. Because, as already explained, knowing does not require showing. (The latter might be sufficient for knowledge, but it isn't necessary.)

(As an aside, I must say there seems something incredibly odd about being skeptical of the rational faculty. I've suggested such a skeptical scenario before, but I don't know quite what to make of it all. I guess it just goes to show how misguided is the Cartesian ideal of casting off "all assumptions" — as if it would still be possible to think at all after doing so! Hmph. Actually, this ties in with the stuff in my previous post about the need for innate constraints, the impossibility of extreme empiricism, etc. Anyway, I'll stop rambling now. Maybe the comments will shed more light on the matter...)

Friday, October 28, 2005

Native Empiricists

Why are we the way we are? Each of us currently possesses a wide range of dispositions, but where did they come from? The standard answer appeals to a combination of 'nature' and 'nurture' — genetic heritage and environmental influence. We may illuminate the problem by conceiving of it in terms of a 'search space'. Given all the possible actions open to an agent or animal, how does it decide what to do? Or, taking a further step back, how does evolution design organisms that achieve their biological goals? Put in this light, we can see that the solution involves knowledge of a (perhaps implicit) sort. The organism needs to be sensitive to means by which it can achieve these goals. This would seem to open up two broad options. The organism might have the necessary knowledge "built in" - the 'nativist' solution - or it might instead learn from experience, as the 'empiricists' would have it. Put into an evolutionary framework, the 'innate knowledge' posited by nativists would have a phylogenetic explanation, i.e. arising over evolutionary timescales. This can be contrasted with the more familiar ontogenetic explanations provided by empiricists, wherein knowledge acquisition occurs over individual lifespans.

Of course, we obviously do learn much from experience, so no-one would seriously propose that all of our knowledge is innate. The extreme nativist would have no sense organs at all, invariably performing one pre-programmed action after another, entirely insensitive to environmental contingencies. It's an absurd image. But the extreme empiricist would fare no better. A purely 'blank slate', lacking the drive of any prior dispositions whatsoever, would never do or learn anything. Even if provided with some innate desires and a general ability to learn (perhaps through trial and error, induction, etc.), such a creature still has no basis on which to choose one action over another. A random choice, much like a random mutation, is very unlikely to be beneficial. The sheer vastness of phylogenetic scales allows the latter strategy to ultimately yield results for species despite its unreliability. But an individual organism has no such luxury. It requires some guidance as to how to achieve its goals, what actions are worth 'trialling', which properties are projectible and thus apt for induction, etc.

In light of the multiplicity of natural properties, and the indefinitely varied gerrymandered objects one might identify, it seems doubtful whether the blank slate creature could even make sense of the preconceptual content delivered by its sensory organs. We need innate perceptual processes to cut up the world into identifiable chunks, drawing our attention to the properties and objects that matter, and neglecting the rest so as to avoid information overload. But even then, there are vastly many things in the world we could learn about. (How many blades of grass in this field? What does the inside of that lion's mouth look like?) How do we choose what to focus on? We clearly cannot learn what is worth learning about, in advance of learning it. Successful learning itself requires the guidance of prior (i.e. innate) knowledge and dispositions.

The empirical evidence backs up this theoretical result. For example, Garcia tested lab-raised rats (lacking any prior experience with such problems) with some novel food that caused nausea three hours later. The rats formed an immediate aversion to the food, despite the three hour separation, indicating that they were innately prepared to associate nausea with novel foods rather than more temporally proximate stimuli. Such 'innate constraints' aid learning by shrinking the search space, thus making it easier to find the solution. In general, innate biases can provide a scaffold for further learning, by drawing our attention to biologically significant features of the world that we might not otherwise appreciate. (For a human example, we might expect babies to be born with the pictorial knowledge of a rough 'face' template to guide their attention and enable them to learn their parents' faces.)
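The search-space point is easy to make concrete. Here's a minimal sketch (purely illustrative: the stimuli, counts, and function names are all invented, not a model of Garcia's actual experiment) of a guess-and-check learner trying to discover which stimulus predicts nausea. An innate constraint that prunes the candidate space to food-related stimuli lets it succeed in far fewer trials than a blank slate considering every stimulus equally:

```python
import random

def trials_to_learn(hypotheses, target, rng):
    """Guess-and-check search: how many trials until the target
    hypothesis is hit among the candidates considered?"""
    candidates = list(hypotheses)
    rng.shuffle(candidates)
    return candidates.index(target) + 1

rng = random.Random(0)

# The full space of stimuli a rat might associate with nausea.
all_stimuli = [f"stimulus_{i}" for i in range(1000)] + ["novel_food"]

# An innate constraint prunes the space to food-related candidates only.
food_stimuli = [s for s in all_stimuli if "food" in s]

blank_slate = trials_to_learn(all_stimuli, "novel_food", rng)
constrained = trials_to_learn(food_stimuli, "novel_food", rng)

# The constrained learner can do no worse, and will usually do far better.
assert constrained <= blank_slate
print(f"blank slate: {blank_slate} trials; constrained: {constrained} trial(s)")
```

The pruning step does all the work here: the constrained learner searches a space three orders of magnitude smaller, which is just the "shrinking the search space" effect described above.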

So, empiricists must accept this 'minimal nativism'. But we may still capture the spirit of their position by suggesting that "culture is our nature", that our innate dispositions are geared towards learning, and that we do not generally come 'hardwired' with knowledge that could instead be acquired from experience. This position can then be contrasted with that of the 'rich nativist' who suggests that we have extensive innate knowledge of specific facts, perhaps relating to the Pleistocene environment that our ancestors adapted to.

From an evolutionary perspective, we should expect innate encoding to be favoured in cases of slow or no environmental change, as the robustness of an innate and invariable disposition can be relied upon in such circumstances. Change over intermediate timescales — too rapid for genetic adaptation to track, but slow relative to an individual lifespan — should tend to favour social learning (i.e. cultural transmission) of adaptive information. And for extremely unpredictable environments, which change from one generation to the next, a heavier reliance on individual learning would make sense. Learning is of course the more flexible option, being sensitive to environmental contingencies in a way that innate "knowledge" or dispositions are not. It may also be more efficient for an organism to simply be equipped with the general cognitive tools to extract information from the world, rather than loading extensive and specific information into the mind right from the start.

Interestingly, humans evolved to have an extended juvenile stage of development, compared to other primates. As Dr. Sean Rice explains (HT: Fido), "we have not adapted to grow rapidly during adolescence, but rather to grow slowly before it; thus stretching out our childhood." This adaptation makes sense from an empiricist / minimal nativist viewpoint -- it creates room for a long period of learning and skill acquisition. It is less clear whether rich nativists can explain our long childhoods, and it certainly seems an uncomfortable result for those who would downplay the biological importance of individual development.

Humans seem to be uniquely well adapted for learning. Our best understanding of the matter thus leads us away from the old dichotomy of "nature vs. nurture", instead suggesting that our nature is to be nurtured.

Thursday, October 27, 2005

Reader Reciprocation Experiment

[Update: Yeah, that was getting too complicated. I've dumped the 'extended hat-tip' idea. Much simpler now.]

[Update 2: If you don't like link exchanges, feel free to add yourself to this Frappr map instead.]

Nova Spivack writes:
I am trying a little experiment to see who is reading this blog. Anyone who posts a link in their blog to the permalink of this blog entry will get a reciprocal link back to their blog from within the body of this blog entry... Why am I doing this? I'd like to form a network of reciprocal links with the blogs of my readers. This will help me understand who is reading my blog, and will also enable other readers to discover one anothers' blogs.

Most link exchanges are pretty lame, I think... especially those third-party ones which just get random people to read each other's blogs. (Why even bother? Is boosting visitor stats really something of intrinsic value, independently of whether readers actually enjoy or learn from your site, contribute to interesting discussions, etc.?) So I'm a little suspicious. But this one, being quite non-random, has the potential to be at least slightly less lame than most. For example, now Nova can learn that I read his blog, which he might not have known before. I'd be quite curious to learn of others out there that read my blog. (Lurker Day was fun for just this reason.)

So I'm going to do the same thing here. If you're an (at least) occasional reader of this blog, link directly to this entry, and I'll update it so you receive a link in return. If it's been a couple of days and I still haven't noticed your post, feel free to email me.

Reader list:
- The Uncredible Hallq
[to be updated]

The Mind's Boundaries

[Disclaimer: the following post is a slightly messy jumble of ideas - in part because I wrote the first half of it about a month ago, before returning to it today.]

It's commonly assumed -- at least by me -- that the brain is the seat of the mind. But now I wonder whether there's actually any principled basis on which to draw a strict delineation between the brain and other organs (e.g. the eye and optic nerve, etc.), insisting that our minds do not extend beyond the former. I'd previously supposed that the idea of a 'brain in a vat', or a body transplant, shows that the brain is all that intrinsically matters for mentality. But now I'm not so sure that this works after all.

On the standard picture, the brain is basically a computational device. It takes 'input' from various sensory nerves in our body, performs computations on this data (which amounts to 'thinking'), and then outputs behavioural instructions for the body to perform. At least, that's how I think of it. I'm just assuming this is 'standard'. Anyway, this picture seems to make the body rather superfluous: you could replace it with anything else that gives the same 'inputs' to the brain, and reacts appropriately to the resulting 'output'. Hence the possibility of Matrix-like illusions involving "brains in vats", where the body is replaced by a complicated computer simulation feeding input to our brains, resulting in mental lives indistinguishable from our own. This possibility suggests that the brain is all that matters for mentality. (Or so I assumed.)

But then, couldn't the same sort of "replacement" occur to portions of the brain itself? Suppose a small portion of my brain was removed, and replaced with a functional equivalent. That is, the replacement part would feed the exact same inputs as before into the various neurons that it's connected to. It would react exactly as my original brain-part had: taking in information from neighbouring neurons, running the appropriate computation, and then returning the appropriate result. If parts of my brain were replaced in such a way by these functionally identical "artificial neurons", I would never notice the difference. My mental life would be unchanged. So, by the same reasoning as above, it seems we are led to conclude that brain-parts are inessential to our minds, in exactly the same way that body parts are.
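The replacement argument can be put in explicitly computational terms. In this toy sketch (both "parts" are invented stand-ins, not models of any real neural computation), two different implementations compute the very same input-output mapping, and the rest of the system, which only ever sees the part's outputs, cannot tell them apart:

```python
# A stand-in for some brain region's computation...
def biological_part(signal):
    return sum(signal) % 7

# ...and a functionally identical artificial replacement: a different
# implementation, but the same mapping from inputs to outputs.
def artificial_part(signal):
    total = 0
    for s in signal:
        total += s
    return total - 7 * (total // 7)   # equivalent to total % 7 here

def whole_system(part, inputs):
    """The rest of the 'brain': it interacts with the part only
    through the part's outputs."""
    return [part(x) * 2 + 1 for x in inputs]

inputs = [(1, 2, 3), (4, 5), (10, 20, 30)]

# Swapping in the replacement leaves the system's behaviour unchanged.
assert whole_system(biological_part, inputs) == whole_system(artificial_part, inputs)
```

This is just the point about functional equivalence: if mentality is computation, nothing in the system's behaviour distinguishes the original part from its replacement.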

Of course, if you just remove a brain portion without replacement, then my resulting cognition will be completely different. But the same is true of my body (say if you remove my eyes), or even the external world -- take away my calculator and I won't be nearly so good at solving math problems!

So indispensability or the possibility of 'replacement' cannot be what delineates which physical parts are involved in mentality. The brain in a vat is a red herring, for we can replace even more than that; we could have a "frontal cortex in a vat", or even a single neuron in a vat, but that doesn't mean that the rest of the brain is non-mental. What this example shows us is that the mind can extend beyond what's in the vat. In the case of your neuron-in-a-vat, the single neuron certainly does not exhaustively comprise your mind. More plausibly, your mind also includes whatever has 'replaced' the rest of your brain -- perhaps part of the 'vat' architecture. But then, what's stopping us from saying the same thing in the original BIV scenario? The computers that have replaced our body (and even the external environment) might now be part of our minds.

Am I missing something here? If mentality is computation, and it doesn't matter how the computation is physically realized, then it seems arbitrary to restrict the mind to the brain -- or even the body, for that matter, as Clark & Chalmers argue in 'The Extended Mind'. They argue that our dispositional/'standing' beliefs consist in information stored in any source that we rely upon and access easily and regularly, e.g. a notebook carried around by an Alzheimer's patient, and not just internal memory. This could, in principle, even extend to other people, yielding the intriguing idea that an inseparable couple's minds might to some degree overlap! I'll quote a bit from C&C's fascinating conclusion:
In each of these cases, the major burden of the coupling between agents is carried by language. Without language, we might be much more akin to discrete Cartesian "inner" minds, in which high-level cognition relies largely on internal resources. But the advent of language has allowed us to spread this burden into the world. Language, thus construed, is not a mirror of our inner states but a complement to them. It serves as a tool whose role is to extend cognition in ways that on-board devices cannot. Indeed, it may be that the intellectual explosion in recent evolutionary time is due as much to this linguistically-enabled extension of cognition as to any independent development in our inner cognitive resources.

What, finally, of the self? Does the extended mind imply an extended self? It seems so. Most of us already accept that the self outstrips the boundaries of consciousness; my dispositional beliefs, for example, constitute in some deep sense part of who I am. If so, then these boundaries may also fall beyond the skin. The information in Otto's notebook, for example, is a central part of his identity as a cognitive agent. What this comes to is that Otto himself is best regarded as an extended system, a coupling of biological organism and external resources. To consistently resist this conclusion, we would have to shrink the self into a mere bundle of occurrent states, severely threatening its deep psychological continuity. Far better to take the broader view, and see agents themselves as spread into the world.

As with any reconception of ourselves, this view will have significant consequences. There are obvious consequences for philosophical views of the mind and for the methodology of research in cognitive science, but there will also be effects in the moral and social domains. It may be, for example, that in some cases interfering with someone's environment will have the same moral significance as interfering with their person. And if the view is taken seriously, certain forms of social activity might be reconceived as less akin to communication and action, and as more akin to thought. In any case, once the hegemony of skin and skull is usurped, we may be able to see ourselves more truly as creatures of the world.

Despite their other radical suggestions, C&C conservatively assume that consciousness is purely in-the-head. But again, what is the principled basis for such a boundary? Perhaps if consciousness were seen as a fundamentally biological or neurological process, essentially arising from neural interactions, then we could get this result. (Though it would seem to imply - implausibly, I think - that replacing each of my neurons with exact artificial replicas would rob me of my consciousness.) But on cognitive theories of consciousness (à la Dennett), extended cognition would seem to straightforwardly imply the possibility of extended consciousness. Maybe someone will figure out a way to use this to test the theories one day...
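Clark & Chalmers' parity intuition lends itself to a small illustration. In this sketch (the `Agent` class and its methods are invented for the example; the MoMA fact is the one from their Otto/Inga case), an agent's 'standing beliefs' live in whatever store it is coupled with, and recall works the same whether that store is internal memory or an external notebook:

```python
class Agent:
    """Toy agent whose 'standing beliefs' live in whatever dict-like
    store it is coupled to -- biological memory and an external
    notebook are treated on a par."""
    def __init__(self, store):
        self.store = store

    def learn(self, topic, fact):
        self.store[topic] = fact

    def recall(self, topic):
        return self.store.get(topic)

# Inga relies on internal memory; Otto relies on his notebook.
inga = Agent(store={})
notebook = {}              # stands in for Otto's external notebook
otto = Agent(store=notebook)

for agent in (inga, otto):
    agent.learn("MoMA", "on 53rd Street")

# Functionally, recall is identical regardless of where the store sits.
assert inga.recall("MoMA") == otto.recall("MoMA") == "on 53rd Street"
```

Nothing in the `recall` interface cares whether the store is inside the skull or out in the world, which is precisely the parity that the extended mind thesis trades on.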

The Business of Beneficence

It's generally accepted that private businesses competing in a free market tend to spend money more efficiently than their bureaucratic public-sector counterparts. This leads me to wonder why we don't have "charity companies" to investigate various charities and 'good causes', etc., and determine where aid money could be best spent to do the most good. They could make a profit by charging a small fee for their investigative services. Moreover, competition would keep them honest: the threat of a new charity company down the street that gives you 'more bang for your buck' would send their profits plummeting. To attract customers they would keep open records tracking the humanitarian benefits resulting from their recommended investments. (Competitors would no doubt keep an eye out for any duplicity which could be exposed for their own gain.) Governments would no longer sink millions of aid dollars into hopeless schemes or corrupt governments' coffers. They would subcontract out their aid budget to whoever could (be relied upon to) achieve the most good.

Has anyone tried this before? And if not, why not?

Tuesday, October 25, 2005

Better Browsing (and Blogging)

Building on some of the suggestions offered at Crooked Timber, here are all the internet tips I wish someone had told me a year ago:

1. Download Firefox.

Once you see how much better the alternatives are, you'll never want to touch Internet Explorer again.

2. Extend Firefox. I especially recommend:
  • Tab Mix - you can see the options it offers here. Two absolute essentials are 'undo close tab' and forcing links intended to open in new windows to open in a new tab instead.

  • Adblock - self-explanatory.

  • Flashblock - choose whether to load Flash media.

  • SessionSaver - if your computer crashes often.

  • TargetAlert - gives useful info about links before you click, e.g. whether a link will open in a new window, the file type, etc.

  • Greasemonkey allows you to further 'extend' your browsing experience with userscripts. I'm fairly new to this, so readers are invited to share recommended scripts in comments.

  • [Some might also appreciate Stealther. It allows you to "surf the web without leaving a trace in your local computer". Not that you have anything to hide, of course ;)]

3. Get some useful bookmarklets.

I especially recommend zap - it instantly converts a page into plain, black-on-white text. No more squinting at blogs with those horrid grey-on-black templates.

4. This section is just for bloggers. It lists some neat blog add-ons (most of which you can see in action on this blog already):
  • BloggerHacks lets you add a "Recent Comments" list to your Blogger main-page sidebar.

  • No Fancy Name has the code for Expandable Posts in Blogger (i.e. a "continue reading" button).

  • Learn how to edit comments in Blogger. This has saved me many embarrassing typos. (Plus it's good to be able to tidy up the graffiti if someone leaves an inappropriate comment but you don't want to delete the whole thing.)

  • ClustrMaps sounds fun. I might add it later.

Any other suggestions?

Blog family trees

Via Pharyngula, I learn that some guy has embarked on the ambitious project of "making a family tree of the blogosphere."

He asks for the following information:

1) your blogfather, or blogmother, as the case may be. Just one please - the one blog that, more than any other, inspired you to start blogging

Like PZ, I'd have to say Crooked Timber.

2) Include your blog-birth-month, the month that you started blogging, if you can.

March 2004.

3) If you are reasonably certain that you have spawned any blog-children, mention them, too.

There's Reuben's blog, I guess. Any other bastard children will have to speak up in the comments ;)

Update: You can view the tree here. That page also includes contact details if you want your blog to be added to it...

Monday, October 24, 2005

Ought we to be Rational?

The title question stands in need of clarification before it can be answered. First, we must specify what sense of ‘ought’ is involved. On various “adverbial” senses of the term, it simply highlights the requirements of some framework or other.[1] You morally ought to do that which is required by morality, prudentially ought to do that which is prudent, and so forth. In this vein, if Φ-ing is a rational requirement, then we might restate this fact by saying that you rationally ought to Φ. But this does not state anything new, over and above the fact that Φ-ing is a rational requirement. So it is not this sense of ‘ought’ that we are interested in. Nor does it seem entirely adequate to ask whether we ought to be rational according to some other framework of requirements, say those of morality, prudence, or etiquette. These might be interesting questions in their own right, but answering them does not necessarily bring us any closer to answering the original question. We are still left wondering whether we ought to respect those requirements.

Not every possible framework of requirements is a source of genuinely normative reasons for action. The mere fact that I am “required” by etiquette or convention to Φ does not guarantee that I ought to Φ, or even that I have any real reason at all to Φ. So a central problem for the philosophy of normativity is to distinguish which requirements have genuine normative force – i.e. which are the ‘reason-giving frameworks’ – and which demands we may rightly ignore. I will use the term ‘ought’, simpliciter, to mean what some call ‘ought all things considered’, that is, a binding normative claim on our actions. This is a fairly blunt term, however, so I will continue to employ the notion of a (pro tanto) reason as something that has genuine normative force, but that might be outweighed by other reasons. A conclusive reason is one that establishes an ‘ought’ fact.

Skeptics about normativity deny that there are any reasons or ‘ought’s in this normative sense. They hold that the most we can say is that an action is required according to the standards of morality, or etiquette, or rationality, and that there is no further sense in which we really ought to perform the action. On this view, it is straightforwardly false that we ought to do anything, and hence that we ought to be rational. I will disregard normative skepticism for the remainder of this essay, and instead assume that we really ought to do some things and not others.

This leaves two main positions, which I will call ‘normative non-cognitivism’ and ‘normative realism’. Normative non-cognitivism is the view that the reason-giving frameworks are those that we personally commit to, or accept as authoritative over ourselves. So, for example, if I accept the authority of prudential demands but not moral ones, then only the former requirements provide me with reasons for action. Given these non-cognitive commitments, if I could advance my self-interest through moral wrongdoing, then that’s what I ought to do. Since most of us accept the requirements of rationality as authoritative, normative non-cognitivism straightforwardly implies that this gives us reason to be rational – though what we ought to do must also take our other commitments into consideration. But even if we did not accept rational requirements for their own sake, they might provide indirect reasons through their tendency to promote other ends that we have committed to, such as moral action or true beliefs. I will return to this possibility later in the essay.

The other view, normative realism, holds that reasons exist and apply to us whether we like it or not. There is nothing in this definition which tells us how to distinguish which frameworks are genuinely normative. But I will not address this problem here. I will simply assume that there are some normative reasons, without concern for the details of what specific type of reason they might be. Although I will be assuming normative realism from here on, many of my arguments will also apply to normative non-cognitivism in cases where the individual has no intrinsic commitment to the framework of rational requirements. In either case, we have assumed that there are some reasons, some things we ought to do, and the question is whether the normativity of rationality might fall out of this.

I must now clarify what it means to be ‘rational’. Sometimes people use the term to denote ‘that which is best supported by reasons’. From this it would trivially follow that we ought to be rational. But that is not how the term is intended here. Rather, I will take rationality to be the purely internal matter of having one’s mind in good order, regardless of how this matches up to external facts. As Kolodny describes it, rationality is “about the relation between your attitudes, viewed in abstraction from the reasons for them.”[2]

Some requirements govern static relations between our mental states, prohibiting certain combinations of conflicting attitudes. Such ‘state-requirements’ have wide scope, as conflicts may be resolved by revising either one of the conflicting states. For example, consider the following principle:

(I+): “Rationality requires one to intend to X, if one believes that there is conclusive reason to X.”[3]

You can violate this requirement by simultaneously believing you ought to Φ but intending not to. There are two ways to avoid this internal conflict. You might intend to Φ, or else you might cease to believe that there is conclusive reason to Φ. A similar range of options will be available for meeting any other state requirement.

However, not all requirements of rationality are state requirements. There can also be ‘process requirements’, which govern transitions between mental states. We can see this because not all means to achieving state requirements are equally rational. Consider again the conflict state whereby you believe you ought to Φ, whilst intending not to. Further suppose that ‘all else is equal’, i.e. you have no other Φ-directed attitudes. In response to this conflict, rationality surely requires you to revise your intentions, not your beliefs about what action is best supported by reasons. Rationality requires us to go where our assessment of the evidence takes us, rather than revise our assessments to match the conclusions we’d like to reach. The latter sort of revision amounts to wishful thinking, not reasoning.[4] This leads us to the principle:

(I+NS): “If one believes that one has conclusive reason to X, then rationality requires one to intend to X.”[5]

This process requirement has narrow scope – the requirement attaches to the consequent rather than the whole conditional. We might add a ‘ceteris paribus’ clause to exclude more complicated cases whereby, for example, you have a second-order belief that your ‘belief that you have conclusive reason to X’ lacks sufficient evidence. In fact, Kolodny argues that (I+NS) holds even then, though one is also rationally required to revise beliefs that one judges to be insufficiently supported by the evidence. He suggests that you could be bound by both these ‘local’ rational requirements simultaneously.[6] But nothing of importance rests on this contention. We may simply exclude such cases from our consideration, and hold that if you believe you ought to Φ, and ‘all else is equal’ in the sense that you lack any conflicting beliefs relating to the normative status of Φ-ing, then you are rationally required to Φ.
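The scope distinction is easier to see in schematic notation. Writing R for 'rationality requires', B(·) for belief, and I(·) for intention (my own shorthand here, not Kolodny's), the two principles come apart as follows:

```latex
% Wide scope (I+): the requirement operator R governs the whole conditional.
% It can be satisfied either by forming the intention or by dropping the belief.
R\,\bigl( B(\text{conclusive reason to } X) \rightarrow I(X) \bigr)

% Narrow scope (I+NS): R governs only the consequent, so given the belief,
% the requirement fixes which attitude must change.
B(\text{conclusive reason to } X) \rightarrow R\,\bigl( I(X) \bigr)
```

Wide-scope requirements prohibit a combination of attitudes, so either revision satisfies them; narrow-scope requirements conditionally mandate a particular attitude, which is what allows the bootstrapping argument to get going.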

We are now in a position to prove, by appeal to the ‘bootstrapping argument’, that we do not in general have conclusive reason to be rational.[7] For suppose we always ought to do as rationality requires. This supposition entails the absurd result that many normative beliefs are self-justifying and thus infallible: believing that you ought to Φ would suffice to ensure that you truly ought to (intend to) Φ! For recall that we have already established that one rational requirement is the narrow scope principle (I+NS), at least in normal situations. If you believe that you ought to Φ – that the weight of reasons supports it – then you are rationally required to follow through on your assessment by intending to Φ. But then, supposing that we ought to do as rationality requires, it follows that we in fact ought to intend to Φ, simply in virtue of the prior belief. This absurd consequence must lead us to reject the supposition. Thus it is not the case that we always ought to do as rationality requires.

Perhaps rationality is normative in the weaker sense that we have pro tanto reason to respect rational requirements. This entails the weaker bootstrapping result that if you believe you ought to Φ, this creates a pro tanto reason to intend to Φ.[8] But this result is not so objectionable. In such a situation, even if you ought not to Φ, it does seem that at least one thing can be said in its favour: namely, that by Φ-ing you would be acting in accordance with the requirements of rationality. You would be following your best assessment of what you ought to do. One might deny that this could be a pro tanto reason for Φ-ing on the grounds that an agent’s mental states are strictly irrelevant to what they have reason to do, but such a position simply begs the question against the normativity of rationality.

Kolodny points out that we do not typically treat rational requirements as providing some further reason for action. If you already believe you have conclusive reason to Φ, it would be superfluous for someone to advise you to Φ by citing the further reason that rationality requires it. You already take yourself to have conclusive reason to Φ, and so do not stand in further need of convincing. This insight forms the core of Kolodny’s Transparency Account:[9] from the first-person perspective our beliefs seem transparent to truth, so what we believe we ought to do – and hence what is rationally required – will appear to us as what we ought to do, simpliciter. So even if we in fact have no reason to be rational, the rationally required action will always seem, from a first-personal perspective, to be the one we have conclusive reason to do. Kolodny thus explains the apparent normativity of rationality as a mere illusion. We need not go so far, however. By allowing the ‘bootstrapping’ of pro tanto reasons, we leave open the possibility that there are genuine reasons to be rational, in addition to the merely apparent normativity provided by the transparency account.

Kolodny further suggests that a reason must be something we can reason from, so the first-personal superfluity of rational requirements rules out their normativity.[10] But some fact may count in favour of an action, or help explain why we ought to do it – and thus be a ‘reason’ in the sense used in this essay – even if it could never be recognized as such within the context of first-personal deliberation.[11] The fact that an action is rationally required could be part of the explanation of why we ought to do it, even if it is not a fact that we could reason from in first-personal deliberative contexts.

Moreover, even if rational requirements do not themselves provide reasons, they might still be normative in the weaker sense that we necessarily have some (other) reason to do what rationality recommends.[12] Violations of state-requirements, at least, involve some sort of internal inconsistency, which guarantees that one has gone wrong in some respect. Such violations will thus necessarily be accompanied by a reason to get out of them – namely, that at least one of the conflicting attitudes must be in error. So we always have some reason to meet rational state-requirements. However, we earlier established that some rational requirements are process requirements. Due to their specificity, such narrow-scope requirements may ‘misfire’, telling us to revise one of the conflicting attitudes when in fact it is the other that is objectively in error. So process requirements, unlike state requirements, are not guaranteed to be accompanied by independent reasons. It remains an open question whether we always have a reason to meet rational process-requirements.

Such a reason might be instrumental to the realization of other ends, or it might be intrinsic, treating rationality as an end in itself. The case for intrinsic reasons might be supported by the idea that rationality is a virtue much like courage, the display of which is always admirable in some sense.[13] This most plausibly leads to the idea that we have reason to possess the dispositions constitutive of the rational faculty. Perhaps we ought to be rational in character, regardless of whether we have reason to do as rationality requires in any particular instance.

Such a claim may also be supported on instrumental grounds, since possession of the rational faculty is, plausibly, the most reliable means psychologically available to us for achieving our other goals.[14] Though it would be ideal to possess precisely those dispositions that would guarantee our doing what we ought in every particular case, I assume this is not a genuine psychological possibility for us. General rationality is, we may suppose, the closest approximation available to us. Granting that we ought to be rational, in this general sense, the question remains whether we have reason to abide by rational requirements in any particular case.

This global/local problem is familiar from other philosophical debates, most notably rule utilitarianism. Given that the overall best result will be obtained by following rules R, does this mean that we ought to follow R in each particular case – even those where it turns out to be locally suboptimal? Parfit thinks not, but suggests that so acting would, at worst, constitute “blameless wrongdoing”.[15] Such a view would allow one to deny that we have reason to follow rational requirements even though we have reason to possess the dispositions that would lead us to so act. But this position seems problematic because the only way to obtain the locally optimal result would be to violate the rules that lead to global optimality. Such a breach would have worse consequences overall.[16] So it seems short-sighted to say that we ought to breach the rules in such cases. Forsaking local gain for the sake of global optimality seems not just “blameless”, but also right. If the only way I could Φ, and thus achieve some local goals, would be to lose or weaken the rational dispositions that will see me right on many more future occasions, then surely this counts against Φ-ing. Thus we have reason, derived from the value of preserving rational dispositions, to abide by rational requirements.[17]

So, in sum, ought we to be rational? The quick answer is: ‘Yes in some senses, no in others, though all depending on what assumptions you’re willing to grant.’ It seems plausible that rationality is a kind of virtue – a fact which would provide at least some reason to be rational in character. If we add in the instrumental benefits that the rational faculty typically helps us to realise, this could plausibly support the stronger claim that we ought to be rational, in the global sense. As a character trait, rationality has both intrinsic and instrumental worth. But the local question is more difficult. We have seen that, in light of narrow-scope process requirements, the bootstrapping argument conclusively refutes the general claim that we always ought to do what is rationally required. This result should not be surprising; sometimes what we ought to do is not apparent to our rational faculties. Nevertheless, a slightly weaker claim – that rational requirements provide pro tanto reasons – can more plausibly survive the bootstrapping objection. My positive argument for this claim depends on our views about the transmission of normative warrant. I have suggested some grounds for thinking that local reasons can flow from the global ones granted above, and hence that we have reason to conform to rational requirements. Finally, I note that the complexities of this discussion will only seem relevant for normative realists or uncommitted non-cognitivists. Other cases are much simpler: non-cognitivism implies that a personal commitment to a framework of requirements suffices to give its demands normative force, whereas if the skeptic is correct then the entire discussion is moot.


Broome, J. (draft) Reasoning.

Dancy, J. (draft) ‘Reasons and Rationality’ in PHIL 471 Course Reader.

Kolodny, N. (2005) ‘Why Be Rational?’ in PHIL 471 Course Reader.

Parfit, D. (1984) Reasons and Persons. Oxford [Oxfordshire]: Clarendon Press.

[1] Broome, p.20.

[2] Kolodny, p.1, emphasis removed.

[3] Ibid, p.16.

[4] Ibid, p.28.

[5] Ibid, p.25. In what follows I will sometimes leave off the words “intend to”, and instead speak loosely of rationality requiring one to Φ.

[6] Ibid, pp.32-33.

[7] See, e.g., ibid, p.41.

[8] Ibid.

[9] Ibid, p.64.

[10] Ibid, p.52.

[11] An example was offered in my previous essay, ‘Reasons for Belief’: “Suppose that God will reward people who act from selfless motives. This is clearly a reason for them to be selfless. But it is not a reason that they can recognize or act upon, because in doing so they would be acting from self-interest instead. They would no longer qualify for the divine reward, so it would be self-defeating to act upon this reason. In effect, the reason disappears upon being recognized, so it cannot possibly be a reason for which one acts. Nevertheless, it seems clear that, so long as the agent is unaware of it, the divine reward is a reason for them to act selflessly. So internalism is false. Just as there can be unknowable truths, so there can be [counter-deliberative] reasons.”

[12] Broome, p.91, notes this possibility.

[13] Kolodny, pp.49, 59. Dancy, p.16, suggests that displays of rationality are admirable in the sense that onlookers have reason to approve of the agent, rather than that the agent actually had reason to so act. “Our reason for approving is just that, if things had been as [the agent] believed, this would have given him reason to act.” (emphasis added).

[14] Broome, p.104.

[15] Parfit, pp. 35-37.

[16] Otherwise this action would be part of the ‘globally optimal’ solution, contradicting the original description of the case under discussion.

[17] Another way to develop this idea, as suggested by Jack Copeland in discussion of Broome’s seminar, would be to suggest that reliably useful dispositions, such as the rational faculty, provide prima facie reasons for action. The fact that rationality requires us to Φ might justify a defeasible / non-monotonic (and in some sense inductive) inference to the conclusion that we ought to Φ, that further evidence could undermine. After all, if a disposition really will see you right in the majority of cases, then that provides a sort of statistical evidence that it probably will see you right in any particular (randomly chosen) case.

Sunday, October 23, 2005

Stop the Clock!

No, not that clock, this one:
Searching for a cure for aging is not just a nice thing that we should perhaps one day get around to. It is an urgent, screaming moral imperative. The sooner we start a focused research program, the sooner we will get results. It matters if we get the cure in 25 years rather than in 24 years: a population greater than that of Canada would die as a result. In this matter, time equals life, at a rate of approximately 70 lives per minute. With the meter ticking at such a furious rate, we need to stop faffing about.

Read the whole thing.

Update: More here:
"One hundred and fifty thousand people die every day, and two-thirds of those die of aging in one way or the other," [de Grey] says, while nursing a pint of fine English ale. "If I speed up the cure for aging by one day, then I've saved 100,000 people." He pauses thoughtfully for a moment. "Actually, I probably do that every week."

Saturday, October 22, 2005

Richer than I thought?

My blog is worth $127,586.04.
How much is your blog worth?

[Hat tip: Parableman]

Symptoms of a philosopher

I once heard it suggested that the sign of a potential philosopher is the ability to grasp the full force of the Euthyphro dilemma. I think that's a pretty good choice, actually, but I was wondering what other signals (or "symptoms") we might be able to come up with. Obviously understanding conditionals is a necessary prerequisite. More seriously, it helps to be able to abstract away superfluous details and take a thought experiment in the spirit in which it's intended. Fafblog had a wonderful post last year which poked fun at the pedantry typical of the literal-minded:
FAF.: Oh no Giblets! You have not been eatin pork to painful excess again have you?
GIBS.: Giblets does it... GLLGGLL... for national greatness. He stuffs himself with liquid ham... for the glory of the republic!
FAF.: But Giblets does the end always justify the means? For example say there is a man stuck in the opening of a mine shaft.
GIBS.: How would a man get stuck in a mine shaft? Mine shafts are huge.
FAF.: Well lets say he's a big fat man stuck in a mine shaft an there are like a dozen other people trapped in there because the fat man he is just so fat.
GIBS.: This is an improbably fat man we are talkin about.
FAF.: Maybe he has been eatin ham jello. For the glory of the republic.
GIBS.: Then he can stuff off. This is Giblets's ham jello.
FAF.: Anyway the question is should we blow up the fat man if there is no other way to get him out of the mine shaft to free the trapped an starving people inside when we know that blowin up the fat man is cruel murder?
GIBS.: Ha! I'd like to see you try! The explosives'll just make the mine shaft collapse an squish everyone inside.
FAF.: Giiiiblets, you're ruinin my moral dileeeema.

Well, okay, maybe they're also poking fun at the silly thought experiments we come up with. But all the same, it's a neat dialogue ;)

So, any other suggestions?

Thursday, October 20, 2005

Time for Inspiration

MelbournePhilosopher points to a fascinating article about what could be the next great 'wonder of the world' -- a massive, stable, self-sufficient clock designed to survive and continue functioning perfectly for ten thousand years:
Hillis's plan for the final clock, which he reserves the right to change, has it built inside a series of rooms carved into white limestone cliffs, 10,000 feet up the Snake Range's west side. A full day's walk from anything resembling a road will be required to reach what looks like a natural opening in the rock. Continuing inside, the cavern will become more and more obviously human made. Closest to vast natural time cycles, the clock's slowest parts, such as the zodiacal precession wheel that turns once every 260 centuries, will come into view first. Such parts will appear stock-still, and it will require a heroic mental exertion to imagine their movement. Each succeeding room will reveal a faster moving and more intricate part of the mechanism and/or display, until, at the end, the visitor comprehends, or is nudged a bit closer to comprehending, the whole vast, complex, slow/fast, cosmic/human, inexorable, mysterious, terrible, joyous sweep of time and feels kinship with all who live, or will live, in its embrace.

Or so Hillis hopes.

Some people will no doubt make a pilgrimage to the cavern, but for the next century at least, that will probably require some commitment, as the site is "as far as you can get from civilization within the continental United States," Hillis says. "That will help people forget about it and avoid the contempt of familiarity."

Most people, however, will never visit the clock, just as most people never visit the Eiffel Tower. They will only know that it exists. That knowledge alone will acquaint them with the Long Now, and that is part of the plan.

There seems something incredibly noble about the whole project. I love it. And it can't hurt to put things in perspective for us too, as the author notes:
Most humans are preoccupied with the here and now. Albert Einstein, echoing the sentiments of other deep thinkers of the modern era, argued that one of the biggest challenges facing humanity is to "widen our circle of compassion" across both space and time. Everything from ethnic discrimination to wars, such reasoning goes, would become impossible if our compassionate circles were wide enough...

Hillis, at first motivated by a vague desire to promote long-term thinking, has been transformed by his idea: "Now I think about people who will live 10,000 years from now as real people." His eyes take on a distant focus as he says this, as if he sees them massed on the horizon. "I had never thought that way before."

Tuesday, October 18, 2005

Two Senses of Intrinsic Value

I think there are two quite distinct senses of 'intrinsic value'. We might value something for its own sake, rather than as a means to something else. We would then value it "intrinsically". But the value comes from us, the evaluators, and there is no guarantee that the object would still have any value if we didn't value it. So this brings us to the stronger sense of 'intrinsic value', which is when an object has value simply in itself, quite independently of others' attitudes towards it. The mere existence of such 'intrinsically valuable' objects - if there are any - makes the world a better place, and their destruction is a (pro tanto) bad thing.

Now, I'm inclined to think that only sentient beings can have intrinsic value in the strong sense. We are the creators of value, so without us there simply would not be any value in the world. Nothing that happens in a consciousless (I would say 'material', but that's not quite right) universe matters at all, one way or another. Hence my skepticism about intrinsic environmental value.

Nevertheless, I think that we ought to value many things - and perhaps the environment among them - intrinsically, for their own sake. Even though a pristine forest in a far away galaxy doesn't itself make the world a better place, perhaps it's something that we ought to value nonetheless. (And the conjunction of the forest with our appreciation might very well contribute real value to the universe.)

So when Brandon comments: "In a sense, the way we usually phrase the problem really makes intrinsic value a matter of taste, and the question about whether something has intrinsic value is a question of whether having a taste for that thing is an instance of good taste." I think he's captured something quite important. I might even grant that it's good taste to value the existence of distant pristine forests that no-one could ever visit. (I'm not entirely sure about that though.) But this isn't the full story, for it might be appropriate for us to value certain objects, such as truth and beauty, which nevertheless don't have intrinsic value in the strong sense that they would make the world a better place even if there were no conscious beings around to appreciate them.

Another way to see this would be in terms of the state/content distinction. Let's say value is linked to desires in some complex fashion. Sentient beings are the source of all value, for they are the source of 'desire' states. But our desires or evaluations are directed at other worldly objects. If I desire ice-cream, then ice-cream is the content of my desire. The ice-cream might then be valuable in the sense that it is valued, even though it is not itself the source of value, and indeed it would have no value at all if it were not for the fact that I valued it.

New Philosophy Blog

It's taken some time, but it looks like I may have finally infected my friends with the blogging bug. There's been some very active discussion over at Prior Knowledge recently, and Reuben has gone ahead and started up a new blog of his own, Mapping out the Moral High Ground, with the intriguing description: "Each week or so I will ask a question concerning some aspect of my lifestyle. After it has been discussed and a conclusion reached I shall alter my life style accordingly."

The introduction invites commentators to suggest possible questions for discussion. (Sagar's suggestions sound especially interesting, I must say. I'll be looking forward to these discussions!) Reuben also invites "any comments on the merits of such an approach to life’s choices." It's certainly a novel idea, so head on over and let him know what you think!

Monday, October 17, 2005

'Idle Argument' Essay

There are a range of related arguments which fall under the heading of the “Idle Argument”. This essay will discuss those variations which argue from future truths to fatalistic conclusions, i.e. what David Buller calls “the standard argument for fatalism”.[1] The original argument, traceable back to the Stoics,[2] attempted to prove that there is no point seeing a doctor if you are unwell. After all, if you will get well, then there’s no need to see the doctor. And if you won’t get well, then the doctor can’t help you. Either way, action seems unnecessary. We can generalize this reasoning to yield the following argument-schema, for any future event E and a related action Φ:[3]

(P1) If E will occur, then E will occur whether or not you Φ.

(P2) If E will not occur, then E will not occur whether or not you Φ.

(P3) But either E will occur or it will not.

(C) Therefore, with regard to E, it is futile to Φ.

Intuitively, the problem with such fatalism is that it ignores the causal potency of your actions. Whether E occurs may well depend on whether or not you Φ. If the first two premises are interpreted in such a way as to deny this, then the problem surely lies with them. On this reading, we understand the premises as making modal claims. Suppose you did Φ, and subsequently brought about E. The first premise then claims that E would still have occurred even if you did not Φ. This claim is unmotivated – there is no reason to think that it will generally be true. The second premise will be faulty in the same respect. If we think that Φ-ing would have been sufficient to make E occur, then we will reject premise two on the modal reading of “whether or not you Φ”.

But this puzzle is not so easily solved, for there is another interpretation of these premises, according to which they are undeniably true. We obtain this result by interpreting the premises non-modally, as mere material conditionals. Given that E will occur, it follows trivially that E will occur. Indeed, this entailment is valid independently of any other propositions, including the proposition that you Φ. We may take the phrase “whether or not you Φ” to be parenthetical, and interpret the first two premises as instances of the general tautological form: “if p, then p (no matter q)”. Made more logically rigorous, we take this to be equivalent to the logical truth:

p → ((q v ~q) → p)

To fill in the details: ‘q’ is the proposition that you Φ; and we take ‘p’ = ‘E will occur’ for (P1), and ‘E will not occur’ for (P2). So, upon this interpretation, the first two premises are logical truths, and hence undeniable.
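This tautological reading can be checked mechanically. The following sketch (an illustration of mine, not part of the essay) encodes material implication as “not-a or b” and enumerates every truth-value assignment, confirming that the formula comes out true in all of them:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b false."""
    return (not a) or b

def tautology_form(p: bool, q: bool) -> bool:
    """The non-modal reading of (P1)/(P2): p -> ((q v ~q) -> p)."""
    return implies(p, implies(q or not q, p))

# True under every assignment of p and q, i.e. a tautology.
assert all(tautology_form(p, q) for p, q in product([True, False], repeat=2))
```

Since the formula holds no matter what ‘q’ (the proposition that you Φ) is, the premises so read carry no information about the relation between Φ-ing and E.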

This does not force us to accept the absurd conclusion, however, because such weakened premises no longer support the conclusion. That is, the argument becomes invalid. We can see that this must be so as a matter of form, for you cannot reason from mere tautologies to a substantive conclusion such as that action is futile. To explain this particular case, note that a goal-directed action is ‘futile’ only if the action has no influence on whether or not the goal-state is realized. That is a causal or modal claim, not a merely logical one. The intermediate conclusion that “E will occur, or not, whether or not you Φ” can be obtained only if we understand the phrase “whether or not you Φ” in the parenthetical sense described above, rather than the more intuitive modal sense which we rejected earlier. All that this conclusion really asserts is that E will either occur or not, and that whichever one is the case, other truths must be consistent with this. There are no modal claims being made, and so no basis on which to claim that action is ‘futile’ in the usual sense. In particular, note that the non-modal claim “E will occur (whether or not you Φ)”, as understood here, is entirely consistent with the modal claim that E would not occur unless you Φ-ed.[4]

So the Idle Argument is nothing to worry about. Although it can be interpreted so as to make either its premises true or its logic valid, it cannot achieve both at once. Its unsoundness can be illustrated by the obvious example whereby E will occur, but only because you will in fact Φ. Here it is quite clear that Φ-ing is not a futile action. The difficulty is in tracing this error back into the argument. The most intuitive way to apply the counterexample to the Idle Argument is to say that it shows (P1) to be false. But if the fatalist reinterprets (P1) non-modally, as a logical truth, then the same counterexample instead shows that the Idle Argument is invalid, as its tautological premises are entirely consistent with Φ-ing being a necessary means to achieving E, and thus not futile at all.
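The consistency claim at the heart of this diagnosis can be illustrated with a toy two-world model (my own illustration, with hypothetical world-labels): an actual world where you Φ and E occurs, and a closest alternative world where you do not Φ and E fails.

```python
# Toy model: the actual world, where you phi and E occurs, and the
# closest alternative world, where you do not phi and E fails.
actual = {"phi": True, "E": True}
closest_alternative = {"phi": False, "E": False}

# Non-modal premise: "E will occur" is simply the truth of E at the
# actual world -- trivially so, whatever else is true there.
non_modal_premise = actual["E"]

# Modal claim: at the closest world where you do not phi, E does not
# occur -- i.e. E would not occur unless you phi-ed.
e_depends_on_phi = (not closest_alternative["phi"]) and (not closest_alternative["E"])

# Both hold together: the tautological premise is consistent with
# phi-ing being a necessary means to E, so phi-ing is not futile.
assert non_modal_premise and e_depends_on_phi
```

Both claims hold in the one model, so the tautological premises cannot entail that Φ-ing is futile.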

The discussion so far, though hopefully helpful in clarifying the underlying problems, has not presented the Idle Argument in its most compelling form. I now want to consider a strengthened version:[5]

(P1’) If we will win the battle, then it is better to attack with a small force.
(P2’) If we will lose the battle, then it is better to attack with a small force.

(P3’) Either we will win the battle or we will lose the battle.
(C’) So, it is better to attack with a small force.

In this case the conclusion is no longer ‘idleness’, but there remains a clear analogy with the logic of the Idle Argument.[6] This version seems more difficult to refute, however. If we grant that it is more glorious to win with a small force, and that fewer casualties are suffered in losing with a small force, then the first two premises have significant prima facie plausibility. Moreover, as a simple instance of the disjunction-elimination rule, the logic seems clearly valid. But it would be incredible were it possible for us to establish the conclusion (C’) in such an a priori fashion. Surely it is not always better to attack with a small force. Yet here we have an apparently sound argument which claims to prove exactly that.

As before, the flaw in the argument can be highlighted by considering the obvious counterexample: a case whereby it happens to be true that we will win the battle, but only because we will in fact attack with a large force. Since a smaller force would have caused us to lose the battle, the conclusion (C’) is clearly false in this case. Moreover, the premise (P1’) is also false, if understood as a material conditional, for the antecedent is true and yet the consequent false. We will win the battle, as it happens, but it would not be better to attack with a small force, for that would cause us to lose instead.

That’s the simple diagnosis. But, as before, the defender of the argument might appeal to a different interpretation on which the premises come out true. Plausibly, (P1’) should not be understood as a material conditional, but as a more robust connection of some sort. I claimed above that (P1’) is false because attacking with a small force might cause us to lose. The argument’s defender might complain that this objection contradicts the antecedent assumption that we will win the battle. If it is given that we will win, then surely we need not worry about losing. So while it is better to win with a large force than to lose with a small one, this fact doesn’t refute (P1’) as properly understood. It remains true that it is better to win with a small force than with a large one, so if this is all we mean by (P1’), then the premise is true.

To capture this interpretation in a more logically rigorous way, we might say that (P1’) is equivalent to the following claim: of those possible worlds where we win the battle, the best are those where we attack with a small force. Similarly for the second premise: of those possible worlds where we lose the battle, the best are those where we attack with a small force. Given that these two options are exhaustive, as claimed in (P3’), it follows that the best possible worlds are those where we attack with a small force. Let’s call this claim “(B’)”, for ease of reference. The crucial question is now: can we get from (B’) to (C’), or is the battle argument invalid?

Here we are presumably to appeal to the general rule that if the best possible worlds are ones where you Φ, then it is better for you to Φ. But this inference is invalid. The mere fact that the best possible worlds are ones where you Φ is of little help if those worlds are not accessible to you, in the sense of being susceptible to realization. Further, it might also be the case that the worst possible worlds are ones where you Φ. You might find yourself in a situation where those ‘best possible worlds’ are not accessible to you, but the worst ones are. That is, you might face the option of either Φ-ing and ending up in a terrible situation, or not Φ-ing and remaining in a mediocre situation. Clearly, in such a case it is not better for you to Φ.

For a more concrete example, suppose that, so far as my finances are concerned, the best possible world is one where I win the lottery. That is, a world where I buy a lottery ticket. But in most worlds where I buy a lottery ticket, I lose, and so have wasted my money. So it would be a mistake to say categorically that it is ‘better’ for me to buy a ticket. The mere fact that I do so in the ‘best’ world is insufficient to reach that conclusion. In assessing a course of action, we must consider not only the possible benefits, but also the possible costs. So there is no straightforward inference from (B’) to (C’), and hence the battle argument is invalid. When we interpret the premises as other than material conditionals, though they might then be true, they fail to establish the conclusion.
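The lottery point can be made numerically. With made-up figures (the odds and prices here are hypothetical, chosen only for illustration), the best outcome requires buying a ticket, yet buying has the lower expected value:

```python
# Hypothetical figures: a $1 ticket, a $1,000,000 jackpot, and a
# one-in-ten-million chance of winning.
ticket_price = 1.0
jackpot = 1_000_000.0
p_win = 1e-7

# The single best outcome (winning) is only possible if I buy...
best_outcome_payoff = jackpot - ticket_price  # positive

# ...but on average, buying loses money relative to abstaining.
ev_buy = p_win * jackpot - ticket_price   # 0.1 - 1.0 = -0.9
ev_dont_buy = 0.0

assert best_outcome_payoff > 0
assert ev_buy < ev_dont_buy
```

The fact that I buy a ticket in the ‘best’ world says nothing about whether buying is the better option once the likely costs are weighed in.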

This might seem puzzling, as the argument appears to be a straightforward instance of the valid argument form:

(P → R)

(Q → R)

(P v Q)

(∴ R)

But this argument form involves material conditionals, which would render the first two premises of the battle argument false, as was earlier established. On the new interpretation, it is not clear in what sense those premises are really conditionals at all. Instead, they make comparative claims, to the effect that winning-with-a-small-force is better than winning-with-a-large-force, and that losing-with-a-small-force is better than losing-with-a-large-force. The argument then appears to take the form of a proof by cases, showing that in each possible case it is better to attack with a small force, and thus establishing the conclusion (C’). However, the argument fails to actually consider all possible cases. In particular, it fails to compare winning-with-a-large-force to losing-with-a-small-force. This is a serious oversight, given that these could very well be the options open to us. Since this possibility has not been accounted for, we cannot categorically conclude that it would be better to attack with a small force.
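For contrast, it may help to confirm that the material-conditional form really is valid. This sketch (my illustration, with material implication again encoded as “not-a or b”) checks that the conclusion R holds in every assignment satisfying the three premises:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: false only when a is true and b false."""
    return (not a) or b

# Disjunction elimination: from (P -> R), (Q -> R), and (P v Q),
# conclude R. Check R at every assignment where the premises hold.
valid = all(
    r
    for p, q, r in product([True, False], repeat=3)
    if implies(p, r) and implies(q, r) and (p or q)
)
assert valid
```

So the trouble with the battle argument is not the form itself: on the reading that makes its premises true, they are simply not material conditionals, and the form no longer applies.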

One might try to restore the valid form of the argument, whilst retaining true premises, by reinterpreting (P1’) to mean something like, “if it is guaranteed that we will win the battle, then it is better to attack with a small force”, and similarly for (P2’). Again, depending on how we interpret the ‘guarantee’ here, the argument can be made either logically valid or possessed of true premises, but not both at once. The conditional premises will be true if the ‘guarantee’ has modal implications, i.e. that we would win the battle even with a small force, or – in the case of (P2’) – lose it even with a larger one. But it is not true that we are either guaranteed to win or else guaranteed to lose, in this sense, so the argument fails to consider all cases. It fails to take into account those cases where we will win only if we take a large force, for example. So the argument is invalid.[7] Alternatively, if the ‘guarantee’ is non-modal, merely requiring that it be metaphysically fixed that we will win in the actual world, then (P1’) is simply false. It might be “guaranteed” that we will win only because it is also “guaranteed” that we will attack with a large force; and in those closest possible worlds where we do otherwise, we might well lose. If that were so, then it would be false to claim that it is better to attack with a small force. So, either way, the argument fails to establish its conclusion.

The key to understanding both the Idle Argument and its “Battle” variation is the ambiguity found in the first two premises. They might be interpreted so as to come out true, or else they might be interpreted so as to instantiate a valid argument form. If we equivocate between the two interpretations, then we would seem to have a valid argument with true premises, which would establish the truth of the absurd conclusion. But the puzzle can be dispelled by identifying this equivocation. The premises are false if interpreted one way, and the inference invalid otherwise. Either way, the argument is unsound.


Bobzien, S. (1998) Determinism and Freedom in Stoic Philosophy. Oxford: Clarendon Press.

Buller, D. (1995) ‘On the “Standard” Argument for Fatalism’, Philosophical Papers 24: 111-125.

Dreier, J., Fake Barn Country:

[1] Buller (1995).

[2] Bobzien (1998), chapter 5.

[3] Adapted from Bobzien (1998), p.190.

[4] Ibid., p.195.

[5] I owe this argument to a comment from Jamie Dreier at the Brown philosophy blog Fake Barn Country.

[6] It also isn’t difficult to think of more explicitly “idle” variants, based on premises like “if I will / will not pass my exam, then it is better not to bother studying”, etc.

[7] Validity could be restored by strengthening (P3’) to the claim that “we are either guaranteed to win or else guaranteed to lose”, in the modal sense. But then this will be a false premise.