Wednesday, April 30, 2008

Clay Shirky on Participatory Media

Watch the video, or read the transcript. First, the depressing:
Wikipedia... represents something like the cumulation of 100 million hours of human thought...

And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that's 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads.

Then, the hopeful:
I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, "Looking for the mouse."

Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for.

See also my review of Clay Shirky's book.

Monday, April 28, 2008

Meta-Coherence vs. Humble Convictions

G.A. Cohen points out that Oxford-trained philosophers tend to believe in the analytic/synthetic distinction, whereas Harvard-trained philosophers tend not to. As an Oxfordian himself, he believes in the distinction. But he could have gone to Harvard instead, and so ended up a Quinean. Should this fact undermine his belief?

More detail is required. Here are two possibilities:
(1) The reasons upon which the Harvardians' beliefs rest are, impartially considered, no less weighty than the reasons behind the Oxfordian view. (The difference in their beliefs is merely explained by the fact that each side is more familiar with their own reasons.) If you think this is the situation, then this should immediately undermine your belief. It's [meta-]incoherent to believe that P whilst also believing that the weight of evidence fails to support P, since this is just to judge both [on the lower level] that P is true and [on the higher level] that P is probably not true after all.

(2) Alternatively, you might judge that, all things considered, the weight of reasons really does support the Oxfordian view here. So you're lucky you didn't go to Harvard, in much the same way that it's lucky you weren't born into a society of Flat Earthers (or a religious cult). If you'd been raised and trained differently, you would have been less sensitive to where the weight of reasons truly lies.

As a reflective agent, to truly believe something you must consider it to be epistemically superior to its negation. You must therefore hold that anyone who believes otherwise is ipso facto your epistemic inferior in this respect. (They are failing to believe what is best supported by reasons.)

Conversely: humility + metacoherence = agnosticism.

One might initially be tempted to retain one's convictions even whilst modestly admitting that others' views are equally well supported (all things considered). Cohen thinks this is a fairly common stance (e.g. between Protestant and Catholic friends). But you cannot coherently maintain this combination of first-order belief and higher-order humility. Which you should give up will of course depend on the details of the case.

For example, it seems plausible to me that a proper appreciation of religious pluralism should undermine common grounds for religious belief (e.g. 'religious experience' -- why think that yours is any more reliable than Akbar's?). But this is arguably just because there are independent grounds for skepticism, which actual pluralism makes more salient. The existence of geocentrist cults, by contrast, does nothing whatsoever to undermine heliocentrism.

Culture is Biological

Ed Morrissey writes:
One of the stranger aspects of Jeremiah Wright’s speech came in the supposed neurological explanation of the differences between whites and blacks. Wright claims that the very structure of the brains of Africans differ from that of European-descent brains, which creates differences rooted in physiology and not culture...

Hilzoy responds by pointing out that Wright said no such thing. But this seems to me to miss the more fundamental error (which Hilzoy actually repeats in her post) of thinking that cultural and biological explanations are somehow alternative, competing, mutually exclusive explanations of behaviour.

Clearly, that's just plain mistaken. If culture influences our thoughts and behaviour, it must therefore affect our brains (that's where the thinking occurs, after all). All behavioural and psychological differences have a neuro-/biological explanation. The only question is whether this biological difference is in turn best explained by environmental or genetic differences.

Even this latter question of 'nature or nurture' is often confused. As I explain in my essay 'Native Empiricists', both inevitably play a role: we are equipped with innate capabilities to learn effectively from experience, and - on the other hand - one need only deprive a plant of sunlight to see how a nourishing environment is essential for the expression of genetic potentials (e.g. height).

But, if we are careful, we can find coherent questions in this vicinity. For example, faced with a difference between A and B, we might wonder whether a genetic clone of A raised in B's environment would have ended up in a condition more like A's or B's in the relevant respects. Unfortunately, people are rarely so careful.

Philosophers' Carnival #68

... is here!

Sunday, April 27, 2008

Metaethics Diavlog

Wow. This bloggingheads.tv interview between Will Wilkinson and UNC's Geoff Sayre-McCord is incredibly good. Sayre-McCord is a wonderfully clear and careful thinker, and Will asks him excellent, probing questions. (Can you imagine seeing such a philosophically astute discussion on regular TV? It's times like these that I really love the internet!)

One recurring issue concerns the extent to which moral statements are simply redescriptions of natural facts. Does 'Hitler was evil' say anything over and above the fact that he had a callous disregard for others' welfare, etc.? Does the goodness of our social institutions consist in anything more than the descriptive fact that they are conducive to [such-and-such specification of] human flourishing?

The problem with the negative (reductionist) answer is that it risks turning normative disputes into mere semantic disputes. Suppose one were to say: "I grant that Western freedoms are more conducive to personal development, happiness, and all that jazz, but nonetheless they are bad, because it is more important to promote obedience, piety, etc." We don't want to say they've contradicted themselves, as we must if 'good' just means 'conducive to [...]'. Their error is not linguistic. It seems there's a substantive moral question at stake here, viz. how we should organize society, or what is of ultimate value, or some such.

Granted, the tricky thing is to say what this further element of disagreement amounts to. I'm inclined to think it is the question of what moral viewpoint is most reasonable, or what all ideally rational agents would ultimately converge on at the end of inquiry. Depending on our theory of rationality, this might be further reduced to the question of what set of desires/evaluative beliefs is the most internally coherent, unified, and so forth. I think this is some sort of progress. At least it is difficult to re-raise the Open Question Argument at this level: "I grant that X is approved by the maximally coherent evaluative system, and indeed I would endorse it if I were more rational, but nonetheless I think X is wrong!" sounds pretty self-contradictory to me. But in some sense I've just passed the buck from meta-ethics to meta-epistemology, so this picture is still not entirely satisfactory.

Saturday, April 26, 2008

Excuses and Responsibility

John Gardner gave an interesting talk yesterday, arguing that excuses are not merely a means of avoiding responsibility (for an act), but also a way to claim responsibility (as a moral agent).

Compare other cases, e.g. insanity pleas, where one abdicates responsibility entirely. We hold each other to normative standards only insofar as we see each other as moral agents, capable of responding appropriately to reasons. But the insane are not even within the space of reasons. They are no longer considered persons at all. They thus escape legal responsibility, because it makes no more sense to hold them accountable than to hold a wild animal to account.

It is deeply shameful to be regarded as a non-agent, however. Some defendants have therefore sought to portray themselves as reasonable people, even though they admit they acted unjustifiably. This may at first seem paradoxical. Instead of offering either justifications or an abdication of agential responsibility, they try a third way: a reasonable excuse. A provocation defence, for example, might seek to establish that the defendant's emotional response (rage/anger) was a reasonable reaction to the situation. This reasonable anger then led them to act unreasonably -- killing the victim, say. But it is not as though their actions here are totally incomprehensible, such that we must regard them as a non-person, a mere force of nature. Rather, it is the response any reasonable person would have had to the situation. It is a kind of rational irrationality, or blameless wrongdoing.

I agree with Gardner that this 'third way' makes conceptual sense. (It's a further question whether the law should allow it, of course.) Indeed, it's a familiar point for consequentialists that a good character might on occasion lead one to perform bad actions. So we can make sense of the intransitivity of reasonableness if we say that a reasonable emotional response (or disposition) is one that will tend, in general, to lead to better (more reasonable) actions. This is clearly compatible with the disposition leading one astray in particular circumstances.

(I should note that Gardner wasn't entirely satisfied by this suggestion; he thinks there are some cases where it won't suffice. But I'm not sure what those cases are.)

A practical upshot: we may be able to determine what excuses should be allowed as legal defences, depending on which dispositions we wish to encourage in the general population. For example, there is nothing to be said for encouraging jealousy, so finding your lover in bed with another should not be considered a legitimate 'provocation' to murder. But perhaps it is good to feel righteous anger in response to domestic violence. So a battered spouse might have a legitimate excuse for perpetrating their revenge (even in cases where they lack the full-blown justification of self-defence). Food for thought.

Philosophical Journeys and Destinations

Andrew raised some interesting questions in this week's PhilSoc discussion: why do we care about truth? Is there a risk of 'truthaholism', taking a good thing too far (to the detriment of other values)? Is it perhaps the process of inquiry, the journey rather than the destination, that we find most valuable in philosophy?

Of course, the truth will sometimes have great instrumental importance, as in medical science. But it's less clear what really hangs on the outcome of certain abstract philosophical debates. It may be that the truth here doesn't really matter for any other purpose at all. The question then is whether it matters for its own sake.

Jack suggests a helpful thought experiment. Suppose you had a magic 8-ball that would tell you the true answer to any philosophical question. Would this be a good thing? Bracket any instrumental benefits that the truth might yield. Just so far as the intrinsic value of philosophy is concerned, would it be a good thing for philosophy to come to an end in this way? Intuitively, there seems something deeply appalling about this scenario. This suggests that it's really the process of philosophical inquiry, rather than the end-point of truth, that we most value.

But I wonder. Perhaps the thought experiment has the wrong end in mind. There does seem something cheap and superficial about the "truths" delivered by a magic 8-ball. But this is not all that we usually have at the end of inquiry. Our best philosophy does not culminate in a mere 'yes' or 'no' answer. Rather, it gives rise to a deeper level of understanding; an appreciation of why the answer is what it is. (Or perhaps not even that -- just a deeper understanding of the question, and the various possible answers, may be plenty valuable in itself.)

So suppose you could get a brain implant that would provide you with a full understanding of a philosophical topic, without the investment of time and effort that is usually required to obtain such learning. Is that a good thing?

I don't think the answer is entirely obvious. But I lean towards thinking that it would be good. I think it really is the end-point of understanding which I most value, and not the struggle of getting there. What do you think?

Friday, April 25, 2008

World Malaria Day

Go here, click 'skip the game and send net', and sign up to have a sponsor send a malaria net (worth $10) to Africa on your behalf. You can also donate money yourself, of course.

Shifting the Center

Matthew Yglesias translates Clinton's apparent support for McCain's proposed "summer gas tax holiday":
Clinton doesn't agree with McCain's idea. She'll do it only "if we could make up the lost revenues from the Highway Trust Fund." But we can't make up the lost revenues from the Highway Trust Fund, so she won't do it. And that's the right answer, but she's successfully confused most of the audience into thinking she does favor the holiday.

Matt thinks this duplicity is "pretty neat". But does it really help the Democrats to pretend to support stupid Republican ideas? [Once again: Bad Means Have Consequences.] I would have thought a better long-term political strategy would be to try to convince the public that those are actually bad ideas. But it's kind of hard to do that when you're too spineless to publicly admit that you disagree with them.

Imagine if Clinton were instead to vigorously denounce McCain as "economically illiterate" for proposing such a stupid policy, and thus "incompetent to be president". Such strong words might lead to further probing and investigation as to whether the charges were justified; economic experts would be called in to offer their opinion, and to explain in plain terms precisely why subsidizing gas is an idiotic policy (and perhaps propose more efficient forms of financial aid instead).

It's not impossible to change public opinion, especially when you have the truth on your side. I mean, to an uneducated layperson, the idea of printing money and making everyone a millionaire overnight probably sounds even more tempting than cheap gas. But I assume if a politician tried pandering to this ignorance, they would pretty soon be called on it, and ridiculed mercilessly. Why doesn't the same happen here?

Wednesday, April 23, 2008

Libertarian Parables

Many libertarians are fond of parables in which private actors perform immoral coercive acts, inviting us to conclude that government is essentially no different. Arnold Kling offers a succinct setup:
One child describes his parent, and the teacher concludes that this parent is a fine philanthropist. Eventually, however, the teacher learns that the parent is giving away other people's property. So the teacher concludes that her student's parent is a thief. The punch line is that the parent is the mayor.

To which I responded with a succinct comment:
Mayor:thief :: juror:vigilante

Institutions make a difference.

He never did reply. Perhaps the thought requires further unpacking. See my discussion of three conceptual errors of libertarian ideology:
(3) It conflates personal and institutional action. This is the difference between vigilantes and magistrates. Just because it would be illegitimate for your neighbour to do something in their role as an ordinary citizen, doesn't necessarily mean there's no legitimate way it could be done.

A well-ordered society is governed by the rule of law. This means that there are institutional processes to govern certain classes of action. The outcome of a just institutional process -- whether it be a guilty verdict, or minimum wage legislation -- has a different normative status than the corresponding action of a neighbour who takes it upon himself to unilaterally impose his will on others.

There are good pragmatic reasons to favour some libertarian policies. But the moral ideology ("taxation is theft") is obtuse.

Swinburne on Desire

"There is a natural contrast often made in ordinary language between the actions which we do because we want or desire to do them, and the actions which we do although we do not want to do them. It is a contrast which has been ignored by much modern philosophy of mind which has seen desire as a component of all actions, and the reasons for all actions as involving desires of various kinds. The ignoring of the distinction between desire and the active component in every action (call it 'trying' or 'seeking' or 'having a volition') leads a man to suppose that he can no more help doing what he does than he can help his desires. But 'desires', in the normal ordinary language sense of the word, are natural inclinations to actions of certain sorts with which we find ourselves. We cannot (immediately) help our natural inclinations but what we can do is choose whether to yield to them, or resist them and do what we are not naturally inclined to do. When we resist our natural inclinations, we do so because we have reasons for action quite other than ones naturally described as the satisfaction of desire -- e.g. we do the action because we believe that we ought to, or believe it to be in our long-term interest."

-- Swinburne (1985) 'Desire', Philosophy vol. 60, p.429.

See also: Agency and the Will

Monday, April 21, 2008

Zombie Review

Bring out your [un]dead! After all my narrowly focused posts on the topic, it's time for a "big picture" review of the zombie argument against physicalism. Recall that physicalists think the physical facts exhaust the base facts: just as the arrangement of particles suffices to settle whether there are tables, so it suffices to settle whether there is conscious experience. So let 'P' denote their complete microphysical description of the world, which makes no explicit reference to phenomenal experience or qualia. Let 'Q' be a statement explicitly about my qualia. The classic argument for dualism then runs as follows:
1. (P & ~Q) is ideally conceivable [can't be ruled out a priori]
2. If (P & ~Q) is ideally conceivable then (P & ~Q) is possible.
3. If (P & ~Q) is possible then physicalism is false.
Therefore, physicalism is false.
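For the formally inclined, the argument's shape can be displayed schematically (a sketch only; 'C' abbreviates ideal conceivability, the diamond metaphysical possibility):

```latex
% Zombie argument, schematically. C = ideally conceivable,
% \Diamond = metaphysically possible, Phys = physicalism is true.
\begin{align*}
&\text{(1) } C(P \land \lnot Q)\\
&\text{(2) } C(P \land \lnot Q) \rightarrow \Diamond(P \land \lnot Q)\\
&\text{(3) } \Diamond(P \land \lnot Q) \rightarrow \lnot\mathrm{Phys}\\
&\therefore\ \lnot\mathrm{Phys} \qquad \text{(modus ponens, twice)}
\end{align*}
```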

(3) is analytic: if you can have P without Q, then P does not suffice for Q, contrary to the physicalist's claims.

(2) is the premise most philosophers [as "type B materialists"] have traditionally questioned. It raises complicated issues in the metaphysics of modality and philosophy of language which I addressed in depth for my ANU honours thesis: 'Modal Rationalism'. (But you can get the short version here.) The upshot is that denying this step is ad hoc and ultimately commits you to the unmotivated claim that there are coherent scenarios which do not correspond to any possible world. I won't address it further here.

The blogospheric discussion has instead focused on premise (1). I think the intuitive force of the premise is made especially vivid by the zombie thought-experiment, whereby we imagine a world physically like ours but lacking in consciousness. That sure seems conceivable, but type-A materialists are committed to denying this, and claiming instead that there is some implicit contradiction which renders the zombie scenario incoherent. Unfortunately, nobody seems to have any idea what this elusive contradiction might be. (Unless you count Eliezer's suggestion, but that was based on a demonstrably false premise in the philosophy of language.)

Brandon made the reasonable point that we're not in a position to assess (1) with total confidence because we don't yet know what the statement P of completed microphysics says. That's a fair point; I certainly don't think this is a knock-down argument. But we do the best we can from our position of uncertainty, and it seems to me that we have more reason to believe (1) than its negation. So we should lean more towards property dualism, pending further evidence.

Some of the other objections that have been raised are, I think, simply confused. For example, Tanasije worries that epiphenomenalists will have no high-level explanation of our 'consciousness'-related behaviour (e.g. my writing this blog post). But we have no fewer resources than the physicalist; we just use different words to describe them. So while I would deny that zombies have beliefs about consciousness, there is a functionalist analogue (or physical component) of belief -- call it 'z-belief' -- which can be cited by third parties and will do all the same scientific/explanatory work. (This raises more interesting worries about whether zombie brains are somehow 'malfunctioning' by z-believing in consciousness, which I address in my post: Zombie Rationality.)

Then there's Richard Brown's attempt at constructing an analogous "non-physical zombie" argument (replacing 'P' with 'NP' above) against dualism. But that won't work for the following reason: (i) Either 'NP' explicitly states the qualia facts Q, or it does not. (ii) If it does, then (NP & ~Q) is straightforwardly contradictory, so the first premise fails. (iii) Otherwise, the third premise fails. The possibility of (NP & ~Q) is compatible with dualism, because the dualist never claimed that those other non-physical facts NP suffice for consciousness. So, either way, it's a terrible argument.

Conclusion

As noted in my original post on the current blogospheric dispute, there are some bullets to bite either way.

The [type-A] materialist must simply have faith that there is an implicit contradiction somewhere in the zombie scenario, even though it shows no sign of such incoherence. They must also trust that third-personal scientific inquiry into non-experiential facts will somehow turn out to imply first-personal experiential facts in the same way that it implies the facts about ordinary macro objects like tables and chairs.

The epiphenomenalist, on the other hand, must explain how we can know that we're conscious if it has no causal effect. This will naturally lead to certain views about belief content and epistemology that others might balk at.

I don't think it's obvious how to weigh these various considerations. Personally, I lean towards epiphenomenalism -- the implications don't strike me as particularly worrisome. But your mileage may vary.

[There are also other views [PDF] on the table, e.g. interactionist dualism and panprotopsychism, but I won't address them here.]

P.S. Don't miss the zombie song - 're: your brains' [ht: Chris].

Non-causal Talk

Eliezer's anti-zombie argument was based on the premise that words refer to whatever generally causes us to utter them. (So 'consciousness' refers to whatever cognitive process causes us to utter this word, which is also present in the zombie world, thus contradicting the stipulation that the zombie world lacks consciousness.)

It's worth highlighting that this premise can't be right, for we can talk about things that do not causally affect us. We can even talk about things that don't exist, like unicorns, or God. (Or does Eliezer think that 'God' refers to the religious module of the brain, so that God exists after all?)

I'm reminded of Hilary Putnam's a priori semantic response to the skeptical scenario that I might be a brain in a vat (BIV). The idea is that if I were really a BIV then my terms like 'brain' and 'vat' would instead refer to the objects in the hallucinated 'image' (i.e. in the phenomenal world). So, the argument goes, whatever the words end up meaning, 'I am a brain in a vat' is guaranteed to come out false. I can "know" that I'm in the fundamentally real world, just because it's (allegedly) impossible for my term 'fundamentally real world' to refer to anything other than the phenomenal world that I'm presented with.

Clearly, something has gone wrong in these arguments. The appropriate response, I think, is to say "Stop redefining my words!" I can understand the BIV scenario perfectly well, and it's a scenario I cannot rule out with absolute certainty; hence 'I am a BIV' is not guaranteed to be false at all. Putnam's argument to the contrary is a mere semantic trick, playing with words.

The same is true of Eliezer. We know perfectly well what we mean by the term 'phenomenal consciousness'. We most certainly do not just mean 'whatever fills the role of causing me to make such-and-such utterances'. By suggesting this, he is playing Humpty Dumpty, redefining words to mean whatever he wants them to mean. It's simply changing the subject. (I never claimed that 'whatever fills the role of causing me to make such-and-such utterances' is physically irreducible. So to argue against this claim is not to address my claim that 'consciousness is irreducible'. It's just like arguing against atheism by redefining 'God' to mean 'the universe', or some such silliness.)

Eliezer has recently repeated the mistake in arguing for reductionism about identity (a view I actually share, though for different reasons). He writes:
Whatever-it-is which makes me feel that I have a consciousness that continues through time, that whatever-it-is was physically potent enough to make me type this sentence. Should I try to make the phrase "consciousness continuing through time" refer to something that has nothing to do with the cause of my typing those selfsame words, I will have problems with the meaning of my arguments, not just their plausibility.

Whatever it is that makes me say, aloud, that I have a personal identity, a causally closed world physically identical to our own, has captured that source - if there is any source at all.

And we can proceed, again by an exactly analogous argument, to a Generalized Anti-Swapping Principle: Flicking a disconnected light switch shouldn't switch your personal identity, even though the motion of the switch has an in-principle detectable gravitational effect on your brain, because the switch flick can't disturb the true cause of your talking about "the experience of subjective continuity".

This is a terrible argument, for the simple reason that whatever it is that makes me say 'X' is not necessarily what I mean by 'X'. (Again: unicorns, God, etc.)

Cf. Linguistic Anti-Paternalism: each individual is the final authority on what they are using their words to express. To offer a brief sketch of how this might work: We may have a particular idea in mind, a bunch of descriptive platitudes about what it is to be X (which need not invoke causal relations at all, though it often will). Arguably, this is what decides the meaning of 'X', though there's no guarantee that it will successfully refer to anything in the world; that instead depends on whether there is any worldly thing which satisfies the platitudes.

Anyway, that's just a very rough and inadequate sketch of how an alternative view might go. But the purpose of this post is not to give a fully-fledged theory of meaning. It is simply to highlight one adequacy constraint on any such theory: it must make it possible for us to talk about, e.g., abstract or non-existent things, i.e. things which are not themselves the cause of our talk about them. (It needn't make epiphenomenalism true, but it had better be expressible!)

Saturday, April 19, 2008

Categorizing Blog Posts

I'm having trouble working out how to carve this blog at its joints -- or, if it doesn't have natural "joints", how to categorize the posts most usefully. A couple of principles suggest themselves:

1. Avoid vertical redundancy, i.e. between general and specific categories. If a post is on 'modality', don't bother to tag it as 'metaphysics' in addition. Leave the broader category so that my other (non-modal) metaphysics posts are easier to find. But cross-categorize horizontally, e.g. with 'language' or 'epistemology', if appropriate.

2. Avoid overcrowding a category. The sidebar navigation only loads 8 posts at a time, so it'll take a painfully long time to find an early post in a category of 80+. Better to split it into more specific sub-categories. Though there is some trade-off with the annoyance of an excessively long list of categories. I think the ideal balance would be to have as few categories as possible whilst maintaining a maximum category size of about 40 posts.
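A quick sketch of the arithmetic behind that balance, using the figures just mentioned (back-of-envelope Python, not anything the blog software itself computes):

```python
import math

PAGE_SIZE = 8        # the sidebar loads 8 posts at a time
MAX_CATEGORY = 40    # proposed ceiling on posts per category

def clicks_to_oldest(n_posts):
    """Pages you must load to reach the earliest post in a category."""
    return math.ceil(n_posts / PAGE_SIZE)

def min_subcategories(n_posts):
    """Fewest subcategories needed to respect the 40-post ceiling."""
    return math.ceil(n_posts / MAX_CATEGORY)

print(clicks_to_oldest(80))     # 10 -- painfully long, as noted
print(min_subcategories(80))    # 2
```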

Question - would the navigation be easier if subcategories were bundled nearer together? E.g. have 'ethics - applied', 'ethics - metaethics', etc., rather than 'applied' and 'metaethics' distantly separated on the alphabetized list.

Difficult cases. It's quite hard to know how to split things up, especially some of my advocacy (applied ethics or politics) type posts.

- I previously had posts on the Internet and copyfight issues categorized under 'media', though I've now made a new 'Internet' category, and shifted some intellectual property stuff over to the 'property' category, leaving the 'media' category more for howling at journalists. Does that sound like a more intuitive and user-friendly categorization?

- Another recent change was to dismantle the 'democracy' category, shifting the more theoretical posts into 'political theory', and the advocacy/metapolitics stuff into a new category: 'civics'. This also helped empty out my 'politics' category slightly. I might further subdivide out 'electoral politics'.

- A new category on '2-Dism' could clear a lot of room out of 'modality' and 'language'.

- I'm not sure whether the 'links' posts are worth including at all. Maybe I should just let them melt into the archives uncategorized, only preserving any especially good ones under the 'quotes' category?

- Maybe I should split up 'favourite posts' too, perhaps according to whether they're aimed at a general or advanced academic audience?

- 'Applied ethics' is a mess. I'm not entirely clear on whether policy prescriptions belong here or under 'politics' -- maybe I need a new 'policy' category, and leave the 'ethics' stuff for ethical questions relating to private conduct? In addition, there's 'the good life', which typically concerns theories of well-being, and the ironically titled 'family values', which concerns the ethics of sex, relationships, children, etc. If I'm not careful, the latter may overlap with 'identity politics' on gender/feminist issues, though 'identity politics' also addresses racism and such.

I think I just need to get a clearer idea of what sorts of posts go where. (E.g. is abortion a matter for 'family values', or general 'applied ethics'? If I made a new 'bioethics' category, would it belong there instead?)

- 'Moral theory' needs to be split up. 'Action theory' currently contains posts on agency / moral psychology. I wonder whether stuff on 'reasons' belongs there too, or maybe in a whole new category of its own. 'Utilitarianism' probably deserves its own category too. What about 'rationality'? (I've posted a bit on global rationality and indirect utilitarianism, how should that be categorized?)

Suggestions welcome!

Grasping Normativity

Some people (most commonly, economists) appear to have lost their grasp of the concept of normativity. Their use of the word 'should' is tempered by scare quotes, and appears to refer to mere conventional morality (i.e. whatever norms happen to actually be accepted by society) rather than philosophical morality (i.e. the norms we really ought to have). What do you think is the most effective way to help them reclaim their grasp of the latter concept? Here are a few possibilities...

(1) Reassure them that nothing 'spooky' is required. (I assume such fears are what motivated them to banish the concept in the first place.)

(2) Offer cognate terms. Scrap 'morality'. Try talking about what you really ought, all things considered, to do. What matters. Or consider what would be best, or what you have most reason to do; what's reasonable, or rational.

(3) Quote Sidgwick (1890: 39)
Even, finally, if we discard the belief, that any end of action is unconditionally or "categorically" prescribed by reason, the notion 'ought' as above explained is not thereby eliminated from our practical reasonings: it still remains in the "hypothetical imperative" which prescribes the fittest means to any end that we may have determined to aim at. When (e.g.) a physician says, "If you wish to be healthy you ought to rise early," this is not the same thing as saying "early rising is an indispensable condition of the attainment of health." This latter proposition expresses the relation of physiological facts on which the former is founded; but it is not merely this relation of facts that the word 'ought' imports: it also implies the unreasonableness of adopting an end and refusing to adopt the means indispensable to its attainment.

(4) Go procedural. Consider the possibility that a reflective equilibrium process might lead one to change their values/preferences in ways that could only be described as an improvement. For example, one might iron out any inconsistencies, reduce the number of ad hoc/arbitrary distinctions, add more general principles that enhance the overall coherence and unity of one's desire set, etc.

(5) Offer examples of irrational preferences, e.g. future-Tuesday indifference, only caring about your future self's interests up until 1/1/09, altruistically caring about all and only persons whose names begin with the letter 'A', etc. Intransitive preferences. Failure to pursue the necessary means (ceteris paribus) to some endorsed end. Preferring the acknowledged lesser good to the greater. And so on.

(6) Draw attention to their agency. These skeptics usually presuppose a kind of naive Humeanism, according to which preferences are 'given' and automatically combine with beliefs to yield action. But that can't possibly be right, because it leaves no room for the familiar phenomenon of deliberation. We are agents with the capacity for practical reasoning, i.e. the assessment of reasons that count for or against various courses of action. This is a self-consciously normative process of decision: just as theoretical reasoning addresses the question what should I believe?, so practical reasoning addresses the question what should I do? Insofar as you think of yourself as a rational agent at all, you must be engaging with these normative questions; the alternative is to be a mere automaton, a reflexive stimulus-response machine. Most of us are more deliberative; but deliberation is inherently normative: it addresses a question for which there may be better or worse answers.

(7) Compare epistemic normativity. For some reason, people seem to be more skeptical of practical reason than theoretical (epistemic) reason. Even the most hard-nosed science-cheering skeptic usually thinks that Creationists, say, are going wrong in their beliefs. This is not just to say that their beliefs are likely false, or that they are unsupported by evidence (though this is part of it); in response to someone who invokes practical reasons for belief (say religion makes them happy), the skeptic may make the further claim that they are being unreasonable. (Cf. Sidgwick in #3 above.) Practical normativity is like that, only applied to actions rather than beliefs. Performing bad actions is kind of like believing contradictory things. People manage it all the time, but they're cognitively malfunctioning in doing so.

An additional point of analogy: skeptics may initially be inclined to an inadequately narrow conception of rationality. Deductive logic gives us no reason to believe that the Sun will rise tomorrow, as per the famous problem of induction. But we clearly do have good reason to believe this, and indeed it would be unreasonable not to. This points the way to a broader conception of rationality which invokes considerations of coherence, etc., much as Future Tuesday Indifference and similar examples (#5 above) show the need to go beyond mere instrumental rationality in the practical sphere.

Any other suggestions?

Revealed Preference?

(1) Quoting an Open Letter to ABC:
The debate was a revolting descent into tabloid journalism and a gross disservice to Americans concerned about the great issues facing the nation and the world.... For 53 minutes, we heard no question about public policy from either moderator. ABC seemed less interested in provoking serious discussion than in trying to generate cheap shot sound-bites for later rebroadcast.

Alex Tabarrok responds with a lame 'revealed preference' argument: "who would want to rebroadcast something the public didn't want?" He really needs to read, um, Alex Tabarrok:
Early on Slee makes a good point about preferences and outcomes:

"The prisoner's dilemma shows how, as soon as one person's choice alters the outcome for another person... choices do not reveal preferences... instead of thinking about choices as revealing preferences, it pays to think of choices as 'replies' to the actions or likely actions of others. The best choice you can make is the best reply to the likely actions of others."

Given that public debate is so degraded and sub-rational, partisans will reach for every weapon in their rhetorical arsenal, including 'gaffe bombardment'. No one dares risk unilateral disarmament. But it obviously doesn't follow that they prefer this situation to some alternative where gaffe bombardment was safely off the table. Political discourse could very easily be a prisoner's dilemma in this way.
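To spell out that game-theoretic structure, here's a toy payoff matrix (all numbers invented for illustration):

```python
# payoffs[(mine, theirs)] = my payoff, given my move and my rival's.
payoffs = {
    ("restrain", "restrain"): 3,   # substantive debate: mutually best
    ("restrain", "bombard"):  0,   # unilateral disarmament: worst
    ("bombard",  "restrain"): 4,   # my cheap shots land unanswered
    ("bombard",  "bombard"):  1,   # degraded discourse: mutually worse
}

for theirs in ("restrain", "bombard"):
    best = max(("restrain", "bombard"), key=lambda mine: payoffs[(mine, theirs)])
    print(f"If they {theirs}, my best reply is to {best}")

# 'bombard' dominates either way, yet (bombard, bombard) leaves both
# sides worse off than (restrain, restrain) -- so choosing it doesn't
# reveal a preference for degraded discourse.
```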

(2) Another problem I'd like to consider for sloppy appeals to 'revealed preference' is that of intra-personal conflict. Consider the unwilling addict, who is compulsively moved to seek drugs, even though he would prefer (on reflection) to be able to withstand this compulsion. Robin Hanson denies that there is any normatively relevant structure to our preferences. Instead, in a case like this, we can simply watch the addict's behaviour to see which preference is the more weighty. 'Might is right.'

But this is plainly misguided. Mere behavioural drive has no normative impact; the mere fact that my body is disposed to move in such-and-such a fashion is at most defeasible evidence of -- and does not strictly entail anything about -- my mental states, or what I really desire in any normatively significant sense (viz. reflective endorsement). You can see this by imagining a brain implant that grants a mad scientist remote control over my body. It's not so different in principle if the source of my unfreedom is internal -- an inner demon such as mental illness, addiction, etc., may be every bit as constraining as an external obstacle. My resulting action may be no more an instance of my "getting what I most want" than when the mad scientist was controlling me.

Friday, April 18, 2008

Sockpuppet Journalism

The dismal ABC debate raises many questions about journalistic ethics and the civic obligations of the media. One thing I want to explore is the use of quotes from "ordinary voters", in light of revelations that Nash McCabe - the woman used by ABC to ask whether Obama "believes in the flag" - has had past media exposure:
Presumably, a researcher for ABC or Gibson saw the piece in the Times, figured, hey, this lady hates Obama and is seriously ginned up about the lapel issue. Let's send a camera crew and film her slamming Obama to his face. It'll be great in the debate.

A TPM reader adds:
In [certain other debates], citizens asked questions that weren't obvious or oriented toward sound bytes. They were the kinds of questions that would not, for whatever reason, be asked by these tv moderators. Moreover, these were their questions. In this case, the producers put the producers' question into the mouth of a voter, because it made the question seem more authentic, as if people care in large numbers about the flag pin question. That is, the woman was used to legitimize the traditional media's focus on these frankly trivial and, yes, distracting issues.

So it's not just bad that they sought out someone to ask the question, but that they did it in order to avoid asking the question themselves because, you know, it's sort of embarrassing. It's not about content; it's about TV content and TV optics. There's no way for Gibson to ask that without looking petty and stupid. So they used this woman.

Let's define journalistic sockpuppetry as the practice of starting with a preconceived statement 'X' in mind, and purposefully searching for someone (anyone) to echo it, so that you can present your desired statement 'X' under the guise of neutrally reporting someone else's words.

I suspect that journalistic sockpuppetry is fairly common. Such gratuitous dishonesty also seems pretty clearly unethical. (Any counterarguments?) Journalists shouldn't use others merely as a mouthpiece. If you've stacked the deck in such a way as to ensure that your final quote matches some preconceived content, the quoted person is no more the author of that content than my keyboard is the author of this post. They're merely a mechanism through which you've expressed yourself, and it's deceitful to pretend otherwise.

Wednesday, April 16, 2008

Non-spooky Moral Realism

I suspect that many people are tempted towards moral skepticism/nihilism (i.e. the meta-ethical view that there are no objective moral truths) because they don't want to be committed to the existence of 'queer' moral entities (as per Mackie's famous objection).

But I think that's a misleading way to frame the issue. The central question of meta-ethics is not about the world and whether it contains entities of a special sort. Instead, it concerns our practical reasoning, and whether some answers to normative ethical questions ('What am I to do?' 'How to live?') are better than others. So we do best to approach meta-ethics from an epistemic, rather than ontic, angle.

Put most simply, the question is whether our moral judgments can be improved. There's nothing particularly 'spooky' about answering in the affirmative. On the contrary, it seems entirely plausible that my evaluative beliefs (just like the rest of my beliefs) are not as coherent and unified as they possibly could be. My idealized self would see room for improvement -- inconsistencies to iron out, etc. So we can make sense of there being a gap here between belief and truth, i.e. between what my actual moral views are, and what they ought to be -- what they would be if I were to reflect more carefully.

So, don't worry about whether moral entities "exist". We don't need any such things in order to secure the kind of objectivity or 'moral realism' that matters. All we need is for there to be more or less reasonable answers that could be given to moral questions. As I like to say:
Philosophical truth just is the ideal limit of a priori inquiry; it does not answer to the sort of independent reality that might sensibly be considered beyond all epistemic reach. Whereas physical facts are made true by existing things in the world, philosophical facts are made true simply by the fact that they are what ideally rational agents would believe.

Vaccination: Compulsion vs. Incentives

Thomas Pogge defends mandatory vaccination policies on the grounds that this is "the only way to overcome the collective action problem." It is not:
Let's distinguish two forms of regulation, reflecting the statist vs. Hayekian distinction. One option is to make the undesirable activity illegal. The alternative is to make it costly. More generally, we can regulate activities either by using the blunt instrument of the law, or else by the more subtle manipulation of market forces. I think the latter will often be preferable.

Let's suppose that $1000 is more than enough to counterbalance the public health costs of a non-vaccinated person. In that case, what possible reason could we have for denying people the right to opt out of getting vaccinated if they care so much that they're willing to pay this cost?
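A sketch of the underlying pricing logic, using the $1000 figure (the willingness-to-pay values are invented for illustration):

```python
OPT_OUT_FEE = 1000   # stipulated to exceed the public health cost of one opt-out

def should_opt_out(value_of_opting_out):
    """Opting out is welfare-positive only if someone values it above
    the full social cost, which the fee makes them internalize."""
    return value_of_opting_out >= OPT_OUT_FEE

print(should_opt_out(1500))  # True: they pay the fee, society is compensated
print(should_opt_out(200))   # False: cheaper to just get vaccinated
```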

Problems for Decision Theories

Andy Egan has a great paper, 'Some Counterexamples to Causal Decision Theory', which effectively makes the case that we (currently) have no adequate formalization of the intuitive principle do what's most likely to bring about the best results.

We begin with Evidential Decision Theory and the injunction to maximize expected value, but it turns out this is really a formalization of the subtly different principle, do what will give you the best evidence that the best things are happening. Suppose you believe that watching TV is very strongly correlated with idiocy (but does not cause it). You want to watch TV, but you really don't want to be an idiot. We can set up the numbers so that the expected value of watching TV is lower, because then it's most likely you're (already) an idiot. So EDT says it's "irrational" for you to decide to watch TV. But that's ridiculous -- whether you decide to watch TV or not won't affect your intelligence (ex hypothesi). That's already fixed, for better or worse, so all you can change now is whether you get the pleasure (such as it is) of watching TV. Clearly the thing to do is to go ahead and do so.
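To make the structure vivid, here's a minimal numerical sketch (all figures invented for illustration):

```python
# Watching TV correlates with, but does not cause, idiocy.
U_IDIOCY = -100   # disutility of being an idiot
U_TV = 10         # pleasure of watching TV

# For EDT, the act is evidence about what you already are:
p_idiot_if_watch = 0.9
p_idiot_if_abstain = 0.3

edt_watch = p_idiot_if_watch * U_IDIOCY + U_TV    # -80.0
edt_abstain = p_idiot_if_abstain * U_IDIOCY       # -30.0
print(edt_watch, edt_abstain)
# EDT ranks abstaining higher, even though watching changes nothing --
# it merely gives you bad *news* about your (already fixed) intelligence.
```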

Causal Decision Theory (à la David Lewis) tries to get around this by holding fixed your current views about the causal structure of the world (i.e. ignoring the fact that choosing to watch TV would be evidence that you instantiate the common cause of idiocy and TV-watching). This solves the previous problem, but introduces new ones. Suppose that instead of correlating with idiocy, TV-watching correlates with a condition X that makes one vulnerable to having TV turn your brain to mush. If I don't watch TV, I'm probably fine and could watch TV without harm. If I initially assign high credence to this causal structure, then - holding it fixed - CDT advises me to watch TV. But that's nuts. Most people who end up deciding to watch TV have condition X. So if I decide to watch TV, that's new evidence that I'm susceptible to having my brain turned to mush. That is, if I make that decision, I'm probably seriously harming myself by doing so.
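Continuing the sketch (numbers again invented), CDT goes wrong here in just the opposite way:

```python
# Deciding to watch is strong *evidence* of condition X, but makes no
# causal difference to whether you have it.
U_MUSH = -50   # brain turned to mush (if you have X and watch)
U_TV = 10      # harmless pleasure (if you lack X and watch)

p_X_prior = 0.1            # beforehand, you probably don't have X
p_X_if_decide_watch = 0.9  # most who decide to watch turn out to have X

# CDT holds the prior causal picture fixed:
cdt_watch = p_X_prior * U_MUSH + (1 - p_X_prior) * U_TV                      # +4.0
# EDT conditions on the decision itself:
edt_watch = p_X_if_decide_watch * U_MUSH + (1 - p_X_if_decide_watch) * U_TV  # -44.0

print(cdt_watch, edt_watch)
# Abstaining is worth 0 either way: CDT says watch, EDT says don't.
```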

So, neither evidential nor causal decision theory is adequate. Though I should note a proviso: all these objections assume that the agent has imperfect introspective access to his own mental states. Otherwise, he could discern whatever states (e.g. beliefs and desires) will cause him to reach a certain decision, and those mental states will provide all the relevant evidence (as to whether he is an idiot, or has condition X, or whatever). The decision itself will provide no further evidence, so these problems will not arise. (Once you have the evidence that you're an idiot or not, you can go ahead and watch TV. In the second case, whether you should watch TV will be settled by the evidence as to whether you have condition X.) But a fully general decision theory should apply even to agents with introspective blocks.

Andy proposes and then rejects a view he calls lexical ratificationism. The idea is that some decisions are ratifiable (i.e. conditional on your choosing it, it has the highest expected value). You should never choose an unratifiable option (e.g. refraining from watching TV in the first 'idiocy' case) if some ratifiable alternative is available. But sometimes there are no self-ratifying options (as in the 'condition X' case), in which case you should simply follow EDT.

The objection to this view comes from Anil Gupta's 3-option cases. Suppose that most people who smoke cigars have some background medical condition such that they would benefit from smoking cigarettes instead, but suffer great harm if they chose to not smoke at all. Similarly for cigarette smokers -- they would likely benefit from changing to smoking cigars, but suffer harm if they did neither. So neither option is ratifiable (each recommends the other instead). Non-smokers, on the other hand, do best to refrain from smoking, so this option is ratifiable. Still, the thought goes, if you're initially leaning towards cigar smoking, you may have some reason to switch to cigarettes instead, but the one thing you can be sure of is that you shouldn't be a non-smoker. So ratificationism, too, yields the wrong results.
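To make the ratifiability test concrete, here is a sketch with invented numbers for the three options:

```python
# ev[chosen][option] = expected value of `option`, conditional on your
# ultimately choosing `chosen`.
ev = {
    "cigars":     {"cigars": 0,  "cigarettes": 5,  "none": -10},
    "cigarettes": {"cigars": 5,  "cigarettes": 0,  "none": -10},
    "none":       {"cigars": -5, "cigarettes": -5, "none": 10},
}

def ratifiable(option):
    """Ratifiable iff, conditional on choosing it, no alternative
    has higher expected value."""
    return ev[option][option] >= max(ev[option].values())

for o in ev:
    print(o, ratifiable(o))
# cigars False, cigarettes False, none True: only non-smoking is
# ratifiable, so lexical ratificationism recommends it -- which is
# precisely the verdict the objection finds suspect.
```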

I'm not sure about this objection, for reasons Helen brought to my attention. Note that it can't just be the initial inclination towards one option that is the evidence here -- otherwise you could note your inclination for cigars and decide to smoke cigarettes, no problem. Instead, it must only be your ultimate decision (post-deliberation) that's evidence of the relevant medical condition (never mind how radically implausible this is). But then there's nothing wrong with the ratificationist answer after all. If you're susceptible to being persuaded by ratificationism not to smoke, then (ex hypothesi) that's very strong evidence that you don't have the other medical conditions, and so not-smoking really is most likely to be best for you. A mere initial inclination towards cigars is no evidence to the contrary.

An interesting point Andy made in response is that this might work for first-personal guidance, but we also want a decision theory to apply third-personally, i.e. to tell us when others' decisions are rationally criticizable. And it would be bizarre, in this case, to tell a cigar smoker that they should have chosen to be a non-smoker instead, when (given that they ultimately chose to smoke cigars) they probably have a medical condition that would make non-smoking harmful to them.

I think the upshot of all this is that we can't give any third-personal advice in these problem cases until we see what decision the person themselves made. Until then, the only normative guidance on offer is first-personal, and lexical ratificationism gets that exactly right.

What do you think?

Monday, April 14, 2008

Philosophers' Carnival #67

... is here (with a neat introduction to metaphysical idealism).

Speaking of links, I've added some more to my 'favourite blogs' list on the sidebar, but also set it so that only the 10 most recently updated will display. (So if you find that you've disappeared, write a new post!)

P.S. I hope more people eventually pick up on the 'Useful Meme'. Don't you have 5 useful tips to share?

Saturday, April 12, 2008

Rationality and Reflective Endorsement

The following has the ring of a truism: belief aims at truth, and preference/desire aims at the good. But Liz Harman argues against the latter claim on the grounds that loving a person as they actually are may lead us to prefer this actual state even to an alternative that would be better in every important respect. (Suppose you have a disabled child, now grown up. You love this person as they are, and so do not wish that their disability had been cured at birth -- even assuming this would have been better for them and everyone else too -- because then they would have grown up to be a radically different person from the one you actually know and love.)

My initial response is to simply bite the bullet and insist that love makes us irrational in these respects. It is not, strictly speaking, rationally warranted to be biased towards the actual, worse state. The better state is always preferable (it warrants preferring). It's just that the contrary bias is one we may be happy to have. It's fortunate that we prefer our loved ones as they actually are. Indeed, these biases are reflectively endorsable in a way that is familiar to sophisticated consequentialists: we do better to be biased in these ways rather than single-mindedly pursuing the good. But that doesn't mean the attitudes in question are rationally warranted (in the strictest sense). It just shows that sometimes we foreseeably do well to be irrational.

One conclusion you might draw from this is that we shouldn't be too concerned about having warranted attitudes. I think that's exactly right. My writing sample defended a form of 'rational holism', according to which we are instead advised to reason according to those norms which we can reflectively endorse. I thus agree with Julian Nida-Rümelin that it would be perfectly advisable to "refrain from point-wise optimization because you do not wish to live the life which would result." (He actually says it would be 'perfectly rational', which I earlier endorsed too, but for now I want to draw a sharper distinction between advisability and rational warrant.)

Further: if rationality and warrant and so forth are meant to be action-guiding -- concerned with reasons we can follow -- then perhaps we should simply identify these with 'advisability', and give up on the notion that preference aims at the good, as Liz suggests. That's compatible with a continued insistence that it's the impartial good which is the ultimate normative source. It's just that we end up with a neater and more coherent framework overall if we tie rationality more closely to reflective endorsement, rather than identifying it with the strict and alienating impartial ideal.

Anyway, I'm basically just thinking aloud here, and I'm not even too sure what my question is. (Perhaps: "What is the best way to think about rationality?") But if anyone else can shed some light on the issue, that would be immensely helpful...

Thought Experiments and Begging Questions

Adam Rawlings has an interesting post complaining that common philosophical thought experiments are "hopelessly question-begging":
The zombie scenario just assumes, without argument, that a fully-specified physical world contains no consciousness. The amoralist case just assumes, without argument, that full knowledge of morality will not give sufficient motivation to act. The Frankfurt-style examples just assume, without argument, that being under the control of a counterfactual demon is still a case of genuinely free choice in action. And the direction of fit metaphor just assumes, without argument, that these two directions of fit must both exist. In other words, the cases are structured in such a way as to appeal solely to intuitions favourable to one side of a philosophical dispute, bypassing the inconvenient "giving an argument" stage completely.

All very true, I grant, but there's nothing wrong with this. To see why, suppose I were to claim that 'bachelor' is defined to mean 'unmarried man'. You might respond with putative counterexamples, e.g. pointing out that the Pope is an unmarried man, but (intuitively) not a bachelor. It would be odd for me to complain, "You're just assuming, without argument, that the Pope is not a bachelor!" True, you say, but I'm missing the point. You merely wanted to draw my attention to a possibility that I may have neglected. Do I really deny that the Pope is no bachelor, you ask? If so, we must look elsewhere to resolve our disagreement. But it was not unreasonable for you to offer the suggestion that you did: you had every reason to expect that it might inform my view, even though I did not antecedently agree with your conclusion that there could be unmarried men who aren't bachelors.

Compare Stalnaker's wonderful insight:
With riddles and puzzles as well as with many more serious intellectual problems, often all one needs to see that a certain solution is correct is to think of it--to see it as one of the possibilities.

Sometimes a vivid illustration is all we need to advance our understanding, and so make philosophical progress.

With this in mind, I think it helps to understand 'begging the question' in dialogical (rather than purely logical) terms. Every valid argument contains its conclusion in its premises, after all. So that alone can't be grounds for complaint -- there's nothing wrong with drawing out implicit commitments which one hadn't previously appreciated. Rather, what's problematic is if an argument is not going to be rationally persuasive to anyone who doesn't already accept the conclusion. Fruitful debate merely calls for arguments that are dialectically effective for logically non-omniscient agents such as ourselves. Some arguments aren't going to help advance the dialectic at all, so it's those which I would call 'question begging'.

The thought experiments Adam discusses aren't like that, though. Many people are persuaded upon learning of zombies and Frankfurt cases. These thought experiments make vivid to us certain conceptual interrelations which we had not fully appreciated beforehand. That said, further argument will be required if someone sincerely disputes the proffered description of the scenario in question (as Adam suggests). But that merely shows that these thought experiments are not guaranteed to be dialectically effective. That's fine; there's plenty of space between 'knock-down' and 'mistaken' for these arguments to occupy.

Constraining Qualia

[See 'Zombie Rationality' for background.]

A correspondent writes:
Consider the brain of a human looking at three colored balls, which are red, blue, and green respectively. Then a trustworthy logically omniscient agent comes along and truthfully says the following:

1. Brains like this brain, faced with balls like these in the absence of logically omniscient intervenors, engage in information processing that results in inner z-representations to the effect that the red, blue and green balls are inducing different qualia. This typically results in utterances like "when I see these three different balls they look phenomenally red, blue, and green."
2. Across possible non-zombie worlds, in 10% there are indeed different qualia or phenomenal colors for each of the balls, while in 90% the red and blue balls have the same phenomenal color, which is distinct from the phenomenal color of the green ball.

In this case, and using your account, wouldn't the brain maximize the likelihood that its z-beliefs would result in true beliefs by z-believing that it sees only two phenomenal colors rather than three?

Sure (assuming that the measure of a priori credibility over these worlds mirrors the 10/90 split). But this is like a case where an angel tells you that experiences like yours are most likely non-veridical, because an 'evil demon' or BIV world is objectively more probable than true Earth. If that claim really is true, then I guess you should disbelieve your senses in such a case. But the claim itself is highly implausible.
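To make the correspondent's arithmetic explicit, here's a minimal sketch (the 10/90 numbers come straight from the quoted case; the labels are just illustrative):

```python
# Minimal sketch of the correspondent's arithmetic, assuming the a priori
# measure over non-zombie worlds mirrors the stated 10/90 split.
P_THREE_DISTINCT = 0.10  # worlds where each ball induces a distinct quale
P_TWO_DISTINCT = 0.90    # worlds where red and blue share a phenomenal colour

def expected_accuracy(z_belief: str) -> float:
    """Chance that a z-belief is true in a randomly drawn non-zombie world."""
    return P_THREE_DISTINCT if z_belief == "three colours" else P_TWO_DISTINCT

print(expected_accuracy("three colours"))  # 0.1
print(expected_accuracy("two colours"))    # 0.9 -- hence the puzzle
```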

The thought experiment raises a more interesting issue though: might there be constraints on the bridging laws that convert physical configurations into qualia? Although you can have any physical arrangement without corresponding qualia (cognition without mentality), it's not so clear that you can have any old qualia without corresponding computations (mentality without cognition). That is, we might doubt whether 'raw phenomenal feel' can hold entirely independently from the rest of our mental economy. Instead, it may be that in order to see two balls as the same colour, one must also be disposed to judge that they are the same colour, etc.

On this picture, then, you can either have a zombie world with no qualia at all, or else qualia that correspond appropriately to physical structures (which may still allow for some variation, e.g. colour inversion). But there's no sense to be made of free-floating qualia that are thoroughly out of sync with the physical computations that are going on in the world.

(I don't have any arguments for this view. I'm not even sure whether it's well-motivated. I just thought I'd throw the idea out there and see what others think...)

Thursday, April 10, 2008

Rational Recovery

It's tempting to interpret the Equal Weight View (EWV) as offering positive normative advice: 'when you disagree with someone you take to be an epistemic peer, you should split your credence equally between the two conclusions.' But this would lead to implausibly easy bootstrapping. (Two creationists don't become reasonable by splitting the difference with each other. It's just not true that what they (epistemically) ought to do is give equal credence to both their unreasonable views. Rather, they ought to revise/reject their prior beliefs altogether. Cf. wide-scope oughts.) To avoid this problem, Adam Elga restates EWV merely as a constraint on perfect rationality. That is: if you fail to split your credence in this way, then you're making some rational error. But even if you satisfy the EWV constraint, you might be making some other, more egregious, error. So it doesn't follow that, all things considered, you ought to follow EWV.
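To see the bootstrapping worry in miniature, here's a toy sketch of EWV understood as simple credence-averaging (the numbers are purely illustrative):

```python
def equal_weight(mine: float, peers: float) -> float:
    """Split the difference between two peers' credences in a proposition."""
    return (mine + peers) / 2

# Ordinary peer disagreement: credences of 0.8 and 0.4 both move to 0.6.
print(equal_weight(0.8, 0.4))    # 0.6

# Bootstrapping: two creationists at 0.95 'split the difference' and stay
# at 0.95 -- satisfying the advice without becoming any more reasonable.
print(equal_weight(0.95, 0.95))  # 0.95
```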

Or consider Roger White's argument against imprecise credence. It shows that we're "irrational" (i.e. imperfectly rational) if we have anything other than a perfectly precise credence in any given proposition. But given our cognitive limitations, I expect we'd do even worse if we tried to give a precise credence to every proposition under the sun.

The fact is, we're not ideal agents. We have no hope whatsoever of being perfectly rational. And this leads to the problem of second best. That is, attempts to conform to norms of ideal rationality may end up leading us even further away from that goal. What we really need are norms of non-ideal ("second-best") rationality that recognize we will make rational errors, and so incorporate strategies for recovering from such errors. In other words, we need to know what to do when we are in an irrational position to start with -- how can we revise our beliefs so as to make them less irrational? Bayesian updating and other rationality-preserving rules are no help at all when your initial belief state has no rationality to preserve.
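That last point can be made vivid with a toy example of Bayesian conditionalization (hypothetical numbers; the point is just that updating preserves whatever the prior hands it, rational or not):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) by conditionalization."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# A sane prior responds to strong evidence as it should:
print(bayes_update(0.5, 0.99, 0.01))  # 0.99

# But a dogmatic prior of 0 is untouched by any evidence whatsoever --
# updating offers no route back from an irrational starting point:
print(bayes_update(0.0, 0.99, 0.01))  # 0.0
```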

[I'm sure this isn't an original observation. I know many moral and political philosophers are interested in non-ideal theory. I'm just less familiar with epistemology. Can any readers point me in the direction of epistemologists who work on non-ideal theory?]

How to start a philosophy blog

I've already had a couple of classmates ask about starting their own blog, which is an encouraging sign. (More philosophy blogs = more interesting conversations, more helpful summaries of interesting books or lectures that I didn't have time to read/attend myself, etc. Every grad student should have one!) In hopes of encouraging yet more people to join in, I thought I'd offer this 'Getting Started' guide.

(Step 1) Create a blog. Go to www.blogger.com and follow their instructions. It really couldn't be easier: select a pre-made template and you'll be up and running within five minutes.

(Step 2) Start writing posts. If you're unsure where to start, see whether any of the following three post types appeals to you:
I think there are three kinds of philosophical activity to which blogs are especially well suited. First is the exploration of half-baked ideas, to get some early feedback and test their potential for further development. Secondly, blogs are a great study and teaching tool, as students can attempt to summarize an issue, and their readers may respond to help correct any misunderstandings. (A good summary may also benefit the readers' knowledge, of course.) Finally, a tightly focused blog post can make technical contributions in response to other work, perhaps critiquing a particular step in an argument, or offering an alleged counterexample.
(I must admit I'd especially appreciate seeing more posts in the second category, e.g. distilling out and sharing the most valuable new insights you've come across in classes or readings, etc.)

(Step 3) Enhance your blog.
- You may wish to add a hit counter so you can see how many visitors you're getting, and where they're coming from. (You can also do a Technorati search for your blog URL, to see if anyone has linked to it.)
- Add a recent comments widget to your sidebar, if you wish. (I recently removed mine due to technical problems, but I may reinstate it soon, since they're handy things to have.)
- Sign in to draft.blogger.com and navigate to your blog's 'Layout' page. Here you can add new gadgets to your sidebar, e.g. polls, subscription links, and blog lists. I especially recommend the latter two.

(Step 4) Join the community! So, you have a sparkling new blog, with groundbreaking and insightful posts, but nobody else seems to notice. That's not the end of the world: there's plenty of benefit in simply writing your thoughts down. But there's plenty more benefit to be gained by attracting an intelligent audience with whom to engage in discussion. There are several things you can do here.

The simplest is to submit posts to the Philosophers' Carnival, or even sign up to host a future edition yourself.

But it's probably more effective to interact with other bloggers that you like. (Hopefully there are some!) At the very least, add a 'blog list' to your sidebar, as mentioned above. Most bloggers regularly check who's linking to them, so this is an easy way to attract their attention (at least for a moment) and gratitude. That's a very minimal form of interaction, of course. Better: leave (intelligent) comments on their blog. They'll be more likely to reciprocate. Participate in silly memes and other forms of community-building -- any excuse to link, however trivial, will bring you closer together. Best of all: write a substantive post responding to one of theirs (and link to it, of course). You'll find yourself engaged in a fruitful back-and-forth discussion in no time.

[Any other tips? Add them in the comments below...]

Wednesday, April 09, 2008

Useful Meme

Now for something completely different...
Instructions

1. Copy these instructions.
2. Link to the original 'useful meme' post.
3. Share 5+ things that may be of benefit to your readers -- useful facts, advice, product recommendations, etc.

(If others follow these instructions, it should be easy to track responses simply by searching for links to this post.)

I've a bunch of recommendations, so rather than just five items I'll split them into five categories.

(1) Amazon food. I hate shopping (and spending time cooking), but I'm also not a huge fan of starving, so this seems like a decent compromise.
- Clif Bars are my favourite snack, especially the 'cool mint chocolate'. I don't know how they manage to make something so nutritious taste so good. Seriously. (The variety pack flavours are also good. But avoid apricot and blueberry crisp.) Has anyone tried the Peanut Toffee Buzz or Iced Gingerbread? I'd be curious to hear what they're like.
- Clif Nectar bars are also good, especially the dark choc raspberry flavour.
- Healthy Choice Country Vegetable Soup is the best canned soup I've tried. (Much better than their 'Chicken Noodle' one.)

(2) Favourite Fiction (philosophy books are discussed here.)
- The Truth Machine, for fun and thought-provoking tech utopianism.
- The Sparrow explores liberal religion, cultural misunderstandings, and much more.
- Best fantasy world: Stephen Donaldson's Mordant's Need (2-book series). This is also runner-up in the 'best plot twists' category, second only to Donaldson's Gap saga.

(3) Classic (freeware) Video Games
- Liquid War is the greatest multiplayer game ever invented. (Yes, even better than Liero.)
- Dungeon Crawl is the ultimate classic RPG. (I've linked to a graphics version, because gameplay trumps all only once you've attained a minimal level of aesthetic acceptability, and ASCII characters violate this minimal requirement!)
- The broader category of 'greatest games' is discussed here.

(4) Facebook philanthropy
- I'm a fan of the Hunger Site app. It's much easier to remember to click each day when there's a counter right there in your Facebook profile. For no trouble at all, you get to transfer money from sponsoring advertisers to the third world, to the value of 1.1 cups of food each day.
- It's also fun and easy to participate in Peter Unger's UNICEF facebook chain (just join the group here, donate $10 or more, and invite your friends to do likewise). Note that the downstream effects of your participation may be exponentially greater than your personal donation considered in isolation -- see the rough sketch after this list. So it's a great opportunity.

(5) Music on the web
- Again, I must say Don Skoog's 'Attendance to Ritual' is the greatest marimba piece ever. (That linked performance by my little brother ain't half bad either, though I may be biased here!)
- Incidentally, this YouTube to mp3 converter is handy.
- Last.fm is a neat way to discover new music.
- Project Playlist lets you share playable lists of music, as I've mentioned before. I'm really surprised that bloggers haven't made greater use of this yet (e.g. so that readers can actually listen to their 'Friday random ten' song lists).
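As promised, here's a back-of-the-envelope sketch of the 'exponential downstream effects' point. Everything beyond the suggested $10 donation is a hypothetical assumption (an average of 2 recruits per participant, a chain running 5 generations deep):

```python
DONATION = 10    # dollars per participant (the suggested minimum)
BRANCHING = 2    # assumed average number of friends each participant recruits
GENERATIONS = 5  # assumed depth of the chain your invitations set off

# Total raised downstream of you: $10 x (2 + 4 + 8 + 16 + 32) = $620.
downstream = sum(DONATION * BRANCHING ** g for g in range(1, GENERATIONS + 1))
print(downstream)  # 620 -- far more than your own $10 considered in isolation
```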

Okay, that's it from me. Feel free to write up your own "useful" post. Or -- if you lack a blog of your own -- share your recommendations, etc., in the comments section below.

P.S. I'm tempted to get a Kindle e-book reader, to read online papers (PDFs etc.) more comfortably. Have any philosophers tried it? There was some encouraging discussion at Crooked Timber recently...

Update: I should tag a few people to help get this thing started. How about: Brandon, Chris, SteveG, Hallq, and you, whoever you are.

Tuesday, April 08, 2008

Standard Reasons, Adaptive Reasons

[I wrote the following in an exam for Michael Smith's class last semester. It explains some helpful distinctions that I want to be able to refer back to in future posts...]

In 'Reasons: Practical and Adaptive' Raz makes distinctions between, on the one hand, practical and adaptive reasons, and on the other, standard and non-standard reasons. Explain these distinctions using examples.

Imagine a biology student whose parents threaten to disown her should she ever come to believe in evolution. This situation exposes her to what look to be two very different kinds of reasons regarding her belief. From her biology class, the student receives epistemic reasons, i.e. reasons which speak to the truth of the thing believed. From her parents, she receives practical reasons, i.e. reasons which speak to the (dis)value of holding the belief in question. There are a couple of noteworthy differences revealed by this scenario, which form the bases of Raz’s two distinctions.

First, consider how reflecting on the various reasons will affect the student’s beliefs. Faced with compelling evidence that evolution has in fact occurred, she may - as a rational agent - come to believe it. That is, her rational faculties may respond to her apprehension of epistemic reasons for a belief by directly producing the recommended belief. This marks epistemic reasons as instances of what Raz calls standard reasons, or reasons that “we can follow directly”. Practical reasons for belief, by contrast, are non-standard in that they cannot be directly followed. Much as the student might wish to please her parents, no amount of reflection on their threat will suffice by itself to change her scientific beliefs.

What if people could respond directly to practical reasons for belief by changing their belief? It seems like this should be possible. At least, we can imagine a scenario in which reflecting on the practical benefits of holding a belief has a neurological effect similar to what actually happens when we reflect on evidence suggesting the truth of a belief. One might argue that the resulting neurological state, being sensitive to non-epistemic reasons, no longer qualifies as ‘belief’. But this seems implausible so long as enough of the functional role of belief remains intact: the person still sincerely asserts the proposition when asked what they believe, draws inferences from it, and behaves in ways that could be expected to fulfill their desires if the proposition were true, etc. So I think we must allow that this scenario is properly described as involving belief. But does it involve following a reason? This seems more questionable. Raz suggests, of a similar case, that the agent merely deceives themselves into believing that they followed the reason. They have not really done so, for that would be impossible -- it is not the kind of reason that can genuinely be followed in such a fashion. Of course, to assert this without argument risks begging the question, as Raz well recognizes. What we need is some independent basis for determining which reasons can be followed and hence qualify as standard reasons.

One thing we can tell right away is that this is not simply an empirical matter, to be ‘read off’ the neuro-psychological data. Not all forms of influence qualify as rational influence, and information may make its way into our heads without doing so under the guise of a reason. The other lesson from the above scenario is that, as Raz puts it, “whether one follows a reason is not purely a matter of how the agent understands his situation.” Combining these: the agent may cite a practical reason why he holds his belief, and it may indeed have played a central causal role in his neuro-psychology, but this still does not count as following the practical reason, in the normative sense we’re interested in here.

But why not? Raz appeals to “the nature of that reason” to settle the matter. This works most clearly in the case of reasons that are such that it would be self-defeating to try to follow them. For example, I may offer you $100 to hop on one leg for non-pecuniary motives. The prize-money is a reason to hop, but not one you could follow directly without thereby disqualifying yourself. The self-effacing nature of the reason is a logical fact which explains why it cannot be successfully followed, and thus why it is non-standard. But the previous case of practical reasons for belief is less clear. Raz claims that “the fact that non-epistemic reasons cannot serve to warrant belief shows that they cannot be followed.” It is not entirely transparent why this should be so. But I think it is most plausibly understood in reference to the normative character of reason-following, where this is taken to essentially involve a response on the part of our rational faculties (rather than just any old psychological process). Standard reasons are thus understood to be those that rationally justify or warrant the attitude they recommend. Or, if we are willing to take rationality itself as a primitive: standard reasons are those that our rational capacities respond to (insofar as they are functioning properly). Of course, even non-standard reasons may be rationally responded to in a different way: they warrant acting so as to bring about their target attitude, for example. This confirms Raz’s point that non-standard reasons for one thing are standard reasons for something else.

(Aside: there may be some exceptions to this claim. Suppose that God will reward those who are saintly, but to qualify as a saint you must never act from self-interest. This sounds a lot like the other non-standard reasons we’ve discussed, so it would seem ad hoc to deny that it really is a reason. But it cannot be redescribed as a standard reason for anything. However indirectly you bring about your sainthood, if you do it for the reason of the heavenly reward, then you’re no saint after all. So this looks like a non-standard reason without any corresponding standard reason. To hold onto his view that “the fact that they can be followed is what makes reasons into reasons”, Raz had best deny that “non-standard reasons” are really reasons at all. There are no practical reasons for belief. There are just standard reasons for acting to bring about a belief.)

So much for Raz’s first distinction. What of the second? Harking back to our original case of the biology student, notice that only her practical reasons derived from the value of holding the belief. Epistemic reasons instead indicate that the belief would be warranted or appropriate to the way things are, but this does not depend on whether believing the truth would be in any way beneficial. This renders epistemic reasons a subset of what Raz calls adaptive reasons. The adaptive/practical distinction arises whenever we have states whose internal norms of correctness may diverge from their practical value. Emotions are another obvious example. Given that fear is meant to be a response to danger, evidence that we are in danger provides an adaptive reason for this emotion; fear is warranted in such circumstances, regardless of whether it would be beneficial (a question which instead concerns the practical reasons for and against it).

Raz offers what we may take to be three tests for the dependence of reasons on value: (i) the possibility of akrasia, (ii) shaping the world to fit the attitude, and (iii) presumptive sufficiency. Here I will discuss only the second, as it is most vivid. If there’s value in the state of affairs of your having warranted attitudes, then this should be so whether this state of affairs came about as a result of shifting your attitudes to match the world, or by changing the world to match your attitudes. But this is absurd: if you feel fear, for example, there is nothing at all to be said for manipulating your situation to match your emotion by gratuitously exposing yourself to danger. Danger is a reason for fear, but fear is not a reason for (bringing about) danger. This asymmetry demonstrates that the reasons we have for feeling fear when in danger are adaptive reasons -- they do not assume that there is necessarily value in the combination of fear and danger.

Now that I have introduced Raz’s two distinctions, one might wonder about the degree to which they overlap. From my original example, we saw that epistemic reasons are standard and adaptive, whereas the non-epistemic reasons for belief are non-standard and practical. But not all standard reasons are adaptive reasons: sometimes warrant derives from value, as we find for example in reasons for action. If leaping into the air would produce great benefits, then I may follow this reason and rationally decide to leap. So that is an example of a standard practical reason. There may also be non-standard reasons for action, as we saw earlier in the case of prize money given to those who hop from non-pecuniary motives. (Note that this would also be a standard reason to bring it about that you hop, say by stabbing yourself in the foot. The latter is a reason you can follow without self-defeat.)

There is at least some overlap between the two distinctions, however, for there is no possibility of a non-standard adaptive reason. Non-standard reasons for an attitude are really just standard reasons for bringing about the attitude, and this places them firmly in the practical domain. We have seen that the other combinations are all possible, however:

(i) standard adaptive reasons, e.g. scientific evidence as a reason for belief, or evidence of danger as a reason for fear;
(ii) standard practical reasons, e.g. ordinary monetary rewards as a reason for action;
(iii) non-standard practical reasons, e.g. self-effacing rewards as a reason for action, or threat of parental disownment as a reason for belief.
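For a compact view of the resulting grid, here is a purely illustrative tabulation, using only the examples just listed (the fourth cell is the one ruled out above):

```python
from itertools import product

# The three possible combinations, with examples from the summary above;
# the fourth cell (non-standard adaptive) is ruled out by Raz's argument.
examples = {
    ("standard", "adaptive"): "scientific evidence as a reason for belief",
    ("standard", "practical"): "monetary rewards as a reason for action",
    ("non-standard", "practical"): "self-effacing rewards; threats as reasons for belief",
}

for cell in product(["standard", "non-standard"], ["adaptive", "practical"]):
    print(cell, "->", examples.get(cell, "impossible"))
```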

How To Imagine Zombies

Some of the recent discussion on other blogs has assumed a sloppy version of the zombie argument, whereby we are to imagine a world just like ours, but with consciousness subtracted. Hence Eliezer complains:
The epiphenomenalist imagines eliminating an effectless phenomenon, and that separately, a distinct phenomenon makes Chalmers go on writing philosophy papers. A substance dualist, or reductionist, imagines eliminating the very phenomenon that causes Chalmers to write philosophy papers.

Right, so that's a bad way to present the argument. The better way to imagine the zombie world is not by subtraction, but by building it up. Give a complete microphysical description of the world, and specify "that's all". A Laplacean demon can infer that the world contains tables, brain states, and a book entitled 'The Conscious Mind'. That is, the world contains particles arranged table-wise, brain-wise, book-wise, etc.

The Laplacean demon knows all that there is to know about this world. Does he know that it contains phenomenal consciousness, that there is something it is like to be the particles-arranged-humanwise in this world? Seems not. There's nothing in the microphysics that entails the presence of such subjectivity. So we've successfully imagined the zombie world. Not by subtracting, but by building up from the physics alone and noting that more needs to be added in order to obtain our (consciousness-containing) world.

Richard Brown makes a similar mistake in his attempted parody:
I am conceiving of a world that is just like this one in all non-physical respects except that it lacks consciousness. Therefore dualism is false.

The zombie argument begins by providing an undisputed specification of the "physical respects" of the world. It then asks whether phenomenal consciousness logically follows from the specification. Our answer is 'no'. That's why physicalism is false.

A proper analogy, then, would require building up the "non-physical zombie" world from an undisputed non-physical specification, just as we earlier built up a physical zombie world from an undisputed physical specification. But of course RB cannot do this. So that's why the zombie argument cannot be turned against dualism in this way.

Saturday, April 05, 2008

Zombie Rationality

'Zombie' writes:
On Chalmers' view, wherein the 'psychophysical laws' are contingent, it seems that across possible worlds most brains like ours will be zombies or at least have 'associated' qualia that don't 'match' the information processing in the brain. So sophisticated brains proceeding according to ordinary standards of rationality should zombie-conclude that they probably are not conscious (as they don't have access to any non-material qualia), despite their zombie-perceptions of being conscious (shared by both zombie and non-zombie brains). Yet Chalmers thinks that in our actual world the psychophysical laws lead to conscious experience mirroring the information processing in the brain. So, upon hearing the argument, shouldn't Chalmers' brain zombie-conclude that it is probably a zombie brain, and 'phenomenal Chalmers' consciously think the same?

No. Conclusions are drawn by people, not brains. Standards of rationality likewise apply to agents and their beliefs, not to their physical components (brains and neural states) in isolation.

On my view, beliefs are partly constituted by phenomenal properties -- that's what gives them their representational content. Zombies don't have beliefs like we do. They exhibit all the same behaviour, and make all the same noises, but there's no meaning in it. It's not really about anything.

One might define a 'z-belief' as the functional (physical, dispositional) component of a belief. It's not so clear how to assign pseudo-contents to these z-beliefs, but I guess a reductionist may offer a stipulation of some kind: S has a z-belief that P iff S has such-and-such physical dispositions [e.g. 'S behaves as though P were true', or 'S has a brain state which covaries with evidence of P', or some such. See my essay 'What Behaviour is About' for a more sophisticated empirical approach to attributing "content".]

Presumably we're to suppose that whenever I really have the belief that P, my brain has the z-belief that P. But I doubt whether any such reduction can be given that perfectly mirrors my actual belief contents. (If epiphenomenalism is true, and qualia are partly determinative of belief content, then the physical facts underdetermine what it is that I believe. My inverted-spectrum duplicate has the same brain -- hence z-beliefs -- as me, but our phenomenal beliefs are very different. My 'red' is his 'blue', or whatever.)

There's a more fundamental problem, even if we grant the reductionist his impossibly fine-grained z-content. Let's grant - per impossibile - that my brain (and zombie twin) "z-believes that P" iff I believe that P. However, my brain (understood as a purely physical system, i.e. excluding its phenomenal properties) is in possession of only a subset of my total evidence. Qualia - the contents of experience - are among my evidence if anything is. But these phenomenal properties are not causally accessible to my neural processes. So the conclusion 'I am conscious' follows from my evidence, but not from the "information" available to my brain. One can be a rational person, or have a "rational" brain, but not both.

Now, it's pretty obvious that being a rational person is better than having a "rational brain" (insofar as the latter attribution is even meaningful). Brains are parts of people, and like any body part, we really care about them only insofar as they serve the whole person. If quick feet didn't make for a quick person, we wouldn't much care for the former. Similarly, a rationally desirable brain is one that makes for a rational person, with justified beliefs.

One could imagine a brain that is instead built in such a way that it tends to produce "z-justified" z-beliefs. What this means is that it tends to end up in physical states such that a conscious person in that physical state would have beliefs in line with the physically accessible subset of their evidence. When put like that, it becomes clearer that what we've really described here is a defective brain. Let's call it "z-rational", and reserve the term 'rational' for brains that give rise to rational people -- people whose beliefs are in line with their total evidence.

Here are two implications:
(1) A z-rational brain can be expected to have more true z-beliefs (across all possible worlds).
(2) A rational brain can be expected to yield more true beliefs.

Fortunately, my brain is rational rather than z-rational. Hopefully yours is too (otherwise, you're a defective agent). One might try to argue that there's something "wrong" with a brain that isn't z-rational, but I don't think that'll work. For one thing, since you're really just describing a physical state, it's not clear that brains or z-beliefs are even open to this sort of normative assessment. Norms apply primarily to people, and to our organs only derivatively. What a well-functioning agent really needs is a brain that will make them rational, not z-rational. As suggested above, a z-rational brain is defective from the standpoint of contributing to the functioning of the whole person (which is the relevant standpoint against which to assess brains). Further, when you stop to think about what it really means to have 'z-rational z-beliefs', you see that there's not really anything significant (worth caring about) there.