Wednesday, September 28, 2005

Fair Grapes

Stupid blog software wouldn't let me comment on an interesting post over at the Cardinal Collective, so I'll just write up my response here instead, after quoting the relevant section:
Let's suppose that person A and person B are sitting together in a room when an angel appears in a burst of light and says "Be not afraid! Here are 100 grapes for your enjoyment - divide them fairly between the two of you, and then consume them." The angel vanishes.

How should A and B divide the grapes? As it happens, person A likes grapes twice as much as person B does. (For the purists, assume that each of them has constant marginal utility and that these facts are commonly known.) The following dialogue ensues:

A: In order to divide them fairly, we should split the grapes evenly - 50 to you, 50 to me.

B: That's not a fair division: I like grapes half as much as you, so if we split 50-50, I only end up half as happy as you are. A truly fair division would make us equally happy. Therefore, I should get 66 grapes, and you should get 34.

A: That's ridiculous! It can't be fair to give fewer grapes to the person who likes them more. Our individual happiness is irrelevant - fairness means splitting the grapes evenly, nothing more.

B: How can our individual happiness be irrelevant? The whole point of having the grapes is to make ourselves happier. It's the quantity of grapes that is irrelevant here - neither of us cares about how many grapes we get except through how much utility we will gain from eating them. Our happiness is the relevant concept to equalize here.

A: [turns to a commentator] What should we do?

C: The fair thing to do is show equal concern for the interests of each of you. And B is quite correct that individual utility is what matters.

B: I knew it!

C: And since A would receive so much more utility from each grape, it would be unfair to give any at all to B. That would be to treat a small increment to B's welfare as more important than a larger increment to A's welfare. Clearly that is mistaken. You ought to give all 100 grapes to A.

B: Doh!

Red Pill: Ethics & Rationality

Is it irrational to be selfish or evil? You might think not, as people typically separate ethics and rationality, but I think this is a mistake. From genocidal tyrants to inconsiderate neighbours, wrongdoers are not just mean and nasty, they’re also making intellectual errors. They fail to draw the conclusions and perform the actions that they rationally should.

This might sound surprising. It’s widely assumed that the only form of rationality is instrumental rationality – that is, taking effective means to achieving your desired ends, whatever they might be. If you want to be rich, and could achieve this by exploiting other people who you don’t care about, then the “rational” thing to do is exploit them – or so economists and game theorists would have us believe. They assume that there is no possible basis for assessing ultimate goals. The fact that you care more about money than people might make you mean, but it’s no failure of rationality on your part.

Although one can see why this claim might appeal to economists (ha, sorry, cheap shot), we need not accept such an impoverished conception of rationality. We can go beyond mere means-ends reasoning, and assess a set of values for internal consistency and coherence. That is, we can assess a set of ‘ends’ as being more or less rational to desire. We are not limited to merely assessing the efficiency of various ‘means’ to achieving them.

For example, we tend to consider it irrational for an agent to disregard their future interests. Suppose Larry only cares about what will happen to him during the present year, and has no concern whatsoever for how he fares after that. He might get into huge debt, seeking immediate gratification without regard for the costs he’ll suffer later. Wouldn’t you consider such behaviour to be irrational? But note that Larry exhibits no flaw in his means-ends reasoning: he’s successfully achieving precisely what he wants; the problem is that he wants the wrong things. Larry might not care about his long-term interests, but he ought to, and a more reasonable person would.

We can bring out the inconsistency in Larry’s desires by noticing that he draws arbitrary distinctions. He cares about what happens to him on New Year’s Eve of this year, but not what happens the day after. But there is no relevant difference between these two cases that would justify his taking such different attitudes towards them. If he cares about one then he rationally ought to care about the other, for they are similar in all relevant respects.

So we see that rationality requires us to treat like cases alike, and not to draw arbitrary distinctions. We can apply this to morality by examining the distinctions we draw between people that we think do or don’t “matter”. All of us think at least some people matter: at a minimum, our selves, friends, and family. But sometimes we disregard others’ interests. The most rationally coherent value set would contain a general principle explaining why we care about the welfare of the first group of people but not the others. Otherwise we are just like Larry, inconsistently caring about some cases but not others, when there is no principled basis for distinguishing between the two.

The question now arises: is there any such principle? We might illuminate this by considering a visual metaphor. Imagine yourself at the centre of the universe (too easy!), with everybody else arranged around you based on how similar you are according to the relevant criteria, whatever they might turn out to be. So all the "like cases" are clumped around close to you, with people differing more and more as they get more distant.

Now imagine that you hold a powerful light above your head – representing your moral consideration – so that the light reaches everyone that matters to you. The light is clearly going to touch many other people too, since they will be relevantly similar to yourself or people you care about, and thus will be positioned nearby and within the light’s reach. What this shows is that you rationally ought to think that the welfare of these others matters too. You should take their interests into moral consideration.

That’s not to say that you must care about all people equally. The fact that someone is your close friend gives you reason to care more about them than a mere acquaintance. There are some relevant differences, and they will eventually add up. Returning to the ‘light’ metaphor, we can note that the light will get progressively dimmer the further away it extends. Those that are most important according to the relevant criteria will be near you in the centre, thus receiving the most light – the strongest weight in your moral consideration. This will get weaker as minor relevant differences build up into more and more significant distinctions as people get “further away” from you in our imagined ordering.

Perhaps you ought to have most concern for friends and family, a bit less for acquaintances and neighbours, less again for fellow citizens, then foreigners, etc. At each step you can identify some relevant differences, but none so significant as to justify a clear-cut distinction of saying that the first person matters and the second one doesn't matter at all. To draw such a distinction would be arbitrary, and so leave you open to rational criticism for your inconsistency.
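The 'dimming light' metaphor can be put in toy numerical form. The particular fall-off function below is an arbitrary choice for illustration; the point it captures is just that moral weight decreases smoothly with relational distance but never hits exactly zero, so there is no principled cut-off past which someone stops mattering altogether.

```python
# Toy model of the 'dimming light' metaphor: moral weight falls off
# smoothly with relational distance, but never reaches exactly zero.
# The inverse-square form (and the example distances) are invented
# purely for illustration.

def moral_weight(distance):
    """Weight given to someone's interests at a given 'distance'."""
    return 1.0 / (1.0 + distance) ** 2

for person, d in [("self", 0), ("friend", 1),
                  ("neighbour", 3), ("stranger", 10)]:
    print(f"{person}: {moral_weight(d):.4f}")
```

However the fall-off is specified, a sharp step down to zero at any particular distance would be exactly the sort of arbitrary distinction the argument rules out.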

We’re now in a position to see why immorality is irrational. Ethics is essentially a matter of taking others’ interests into account. It is wrong to cause undue harm to another person. Is it also irrational? Given that we already take some people’s interests into account, consistency requires that we expand our sphere of moral consideration to encompass others that are relevantly similar. In order to avoid drawing arbitrary distinctions, we must recognize that other people matter too. In other words, yes, we are rationally required to be ethical. Contrary to common assumptions, the evil man is not just nasty, he’s downright irrational.

Monday, September 26, 2005

Humans, Matter, and Mattering

There's a confused discussion going on at Crooked Timber about whether there is a "radical difference... between human beings and all other terrestrial species". It's patently obvious that the answer is "yes", so I'm surprised that the majority of CT commenters can seriously deny this. Obviously the human mind is leagues beyond all others in important respects, as evidenced by the fact that no non-humans have yet joined in the debate. Sure, dolphins or chimps might have some capacity to exhibit crude precursors of language and culture, but they pale in comparison to human capabilities. To deny our vast rational superiority would be as ridiculous as denying the vast aquatic superiority of dolphins on the basis that "humans can learn to swim a bit too".

Several commenters seemed to buy the old fundamentalist's canard that only an immaterial soul could explain human exceptionalism. To avoid metaphysical extravagance, then, they seem forced into the absurd claim that humans "really" aren't that different from other animals. But this is silly -- the real problem lies with the first premise. The fundamentalists are mistaken: naturalism is entirely consistent with human exceptionalism.

Fundies object to physicalism because it implies that humans and rocks are made of the same "stuff" (i.e. matter), and thus "must" be the same in kind. The anti-exceptionalist argument, that humans must be like chimps for sharing so much DNA, is similarly foolish. The great fallacy in both is to overlook the importance of arrangement or internal relations between the constitutive 'stuff'. A great poem is obviously of far greater value than a page of random words, even though both are made of the same "stuff" (namely, words). The arrangement matters. Similarly, a chair is vastly different in kind from a newspaper, even if both are made from tree products. And a human being is vastly different in kind and value from rocks, even if both have purely physical constituents. For full knowledge of an object, it does not suffice to learn merely what materials it is composed of, in ignorance of their compositional arrangement. Contrary to the "fallacy of composition", the whole may have properties that differ (and cannot be inferred) from those of its isolated individual parts. This point is so patently obvious that I have to wonder if the fundies who make the "how can 'mere matter' matter?" objection have some kind of mental defect. It's as silly as asking how "dry", "colourless" atoms could suffice to make green grass or wet water.

So when Richard Cownie suggests that the purported human difference is "obviously nothing physical," I respond that he is obviously mistaken. There are hugely important physical differences between the human brain and the brains of other animals. Otherwise we wouldn't be capable of having this debate. Though, if it makes the anti-exceptionalists feel any better, I'd be happy to grant that human infants are similarly miles away, in terms of cognitive development, from the rest of us. (From what I've heard, chimps perform comparably to three-year-old children on several cognitive measures, so it might not be a bad analogy to bear in mind.) You won't find infants, any more than dolphins, starting philosophical academies or building spaceships. There is a vast and undeniable difference here.

Of course, it isn't an entirely unbridgeable gulf. Infants do develop into (variously) rational adults, after all. And the human species did evolve from a common ancestor shared with other animals. So one can point to important similarities, I certainly wouldn't deny that. But none of that changes the fact that an adult human being is a very different kind of creature from an adult chimp or dolphin, which in turn are vastly different from ants and earthworms. There is something quite exceptional about the human capacity for general intelligence, or rational thought, which is not found in any other animal. Animals can be cognitively skilled in various domain-specific ways, but they don't have anything remotely comparable to the incredible cognitive plasticity of most humans. We don't need to posit anything so extravagant as an immortal soul in order to be able to accommodate this undeniable fact. Instead, we may simply recognize that the human brain is a remarkable piece of matter.

Sunday, September 25, 2005

Digital Auras

I remember reading once that a group of special (I think autistic) school kids wore colour-coded badges to inform others whether they were in the mood to be approached, or if they'd rather be left alone. The writer wryly joked that the rest of us could probably benefit from such a system also. In fact, I'm inclined to take the suggestion seriously. With advancing technology, it might soon be possible for us to outfit ourselves with 'digital auras', short-range wireless transmitters that contain information about ourselves that we want to make accessible (in various possible ways, which I'll expand upon below). These would come with "aura-readers", which allow us to receive the information contained in others' auras.

The aura might have a 'surface' interface which immediately presents itself to anyone who tunes in, perhaps containing basic information such as your name and your current 'receptivity level' (e.g. "do not disturb", "happy to chat", etc., much like you find on MSN Messenger). Name tags would be a thing of the past. I imagine most people would not bother monitoring who accesses this level of their aura - it's the sort of thing which would become absolutely commonplace. This surface level might then offer the analogue of a 'hyperlink' to further details, offering (perhaps restricted) access for those who want to "dig deeper".

This second level of your aura might include some superficially personal info about your hobbies, interests, personality, etc. It might also be a good place to mention whether you're single or not. I imagine that could come in handy, potentially preventing some awkward moments. The curious might want to monitor who accesses their second level, though I expect most would not bother restricting access to it. I conceive of this level as presenting your public persona.

You might then have a further level of yet more personal information. I'm not really sure what people might end up using these for. Perhaps they'd say a bit about their emotional state, explaining what they're happy or upset about at the moment, what their long term goals are, what they really value in life, etc. They might have a 'personals' section where singles can say what they're looking for in a partner, like the sort of thing you might find in an online dating profile or personals ad. These sections are for stuff that you wouldn't normally talk to random strangers about, but that you nevertheless want people who are genuinely interested to be able to find out.

I'm not sure what the best way to restrict this information would be. At the very least you would likely want to monitor who accesses the information. If there were strong (and well-respected) cultural norms which forbade violations of privacy, this -- along with the threat of denouncing anyone who inappropriately impinges on your privacy -- might suffice. But that does seem a bit too open, still. You might instead require your individual authorization for any attempts to probe these deeper levels of your aura. Or, perhaps for less sensitive information, it would suffice to rely on a general 'reputation score'.

Suppose that aura-readers develop a 'reputation', whereby the targeted individuals get to judge how appropriate the "intrusion" was. You might give someone a low score for callous or inappropriate intrusions, especially if they went on to misuse the information revealed to them. Conversely, more helpful and empathetic individuals would receive boosts to their reputation score from those who appreciate their concern and subsequent appropriate action. Automatic screeners could then restrict medium-level aura access to those with relatively high reputation scores. In a sense, it would track who is "trustworthy", letting them learn more about you whilst blocking out the scoundrels.
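The screening idea above can be sketched as a simple access-control rule: open access to the surface and public levels, with deeper probes gated by the reader's reputation score and logged for the owner to review. Every name, level, and threshold below is invented for illustration; nothing here is meant as a definitive design.

```python
# Hypothetical sketch of reputation-gated aura access, as described
# above. Levels, class names, and the 0.8 threshold are all invented
# for illustration.

SURFACE, PUBLIC, PERSONAL = 1, 2, 3

class Aura:
    def __init__(self, owner, min_rep_for_personal=0.8):
        self.owner = owner
        self.min_rep = min_rep_for_personal
        self.access_log = []          # record who probed which level

    def request(self, reader_name, reader_reputation, level):
        """Return True if the reader may access the given level."""
        self.access_log.append((reader_name, level))
        if level <= PUBLIC:
            return True               # surface/public levels are open
        # deeper levels require a sufficiently high reputation score
        return reader_reputation >= self.min_rep

aura = Aura("Richard")
print(aura.request("stranger", 0.3, PERSONAL))  # False: blocked
print(aura.request("friend", 0.9, PERSONAL))    # True: trusted reader
```

The reputation score itself would presumably be aggregated from the judgments of previously-probed individuals, as suggested above; how to make such a score resistant to gaming is left entirely open here.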

Obviously this is all extremely speculative, and the precise details are flexible. So the important question is: what do you think of the general idea? Is it appealing? Do you think it could help people communicate and get along better in an age of increasing alienation? (There's certainly something appealing about it, for people as introverted and socially reclusive as myself.) Or do you think it is somehow "cheating", or too artificial? Would it prove just another barrier to meaningful communication, much like some see in the brevity and superficiality of txt messaging?

Suppose the technology developed to the point where you could open up 'private channels' between auras, effectively enabling a sort of telepathy (once we get to the point where computational devices are so unobtrusive as to be practically continuous with our unaided thoughts -- e.g. through brain-computer interface). What would the implications of this be? Better or worse than before? Is it necessarily better to communicate vocally, face to face? What if some people just aren't that comfortable speaking -- wouldn't it be helpful to develop alternative modes of communication? Or, for a more mundane example, is email bad for developing personal connections? It does seem quite limited, but as I suggested in my post on Transphysicalism, it may be a merely contingent fact that our technology is often inadequate to mediate meaningful human connections. I'm not sure that there's any reason to doubt that future advances could, at least in principle, be more conducive to emotional nourishment.

Getting back to the original idea of the no-frills digital aura -- i.e. a "surface-level only" version -- it seems to me that it would have two small but significant benefits in everyday life. Firstly, a "do not disturb" sign could be really nice at times. I guess you could just scowl fiercely instead, but that tends to get tiring after a while. Second, a "happy to chat" sign could make public spaces, e.g. bus trips, a lot more enjoyable. Strangers tend to keep to themselves, sometimes just because it's difficult to tell whether others want to be spoken to, and one doesn't want to risk causing offence or being rejected. Some of us just aren't that good at unaided mindreading. But with the right aids... well, things might become a whole lot easier, don't you think?

P.S. I'd really like to hear what others think about all this. Consider the comments section a "two cents" collection plate. Don't be stingy now! ;)

Saturday, September 24, 2005

Anselm and the Perfect Reductio

Anselm's infamous ontological argument effectively defines God into existence: We can conceive of the perfect being (that which none greater can be conceived) -- let's call it 'God'. Suppose (for reductio) that God does not exist. Then we can conceive of a greater being yet -- namely, one just like God but with the added virtue of existing. That would contradict our premise that we're conceiving of the greatest conceivable being (see our definition of 'God' above). So, on pain of contradiction, we must reject the supposition that God does not exist. Hence, God exists. So argues Anselm.

Now, this argument is obviously problematic. This is brought out by the fact that it would seem to commit us to the existence of all sorts of perfect entities, as per the following schema:

1) We can conceive of the perfect X, for which no greater X can be conceived.
2) It is greater for an X to exist than not.
3) Suppose for reductio that the perfect X does not exist.
4) Then we can conceive of a greater X, namely, a twin of the perfect X that has the further virtue of existing.
5) This is a contradiction; thus (3) is false, i.e. the perfect X exists.

We can plug anything we want into the X placeholder. Anselm's argument goes through with X = 'being' (which then concludes with the existence of "the perfect being" = 'God'), but we might as well follow Gaunilo in putting X = 'island', and thus conclude, absurdly, that the perfect island must exist. So, we might think, the schema must fail, and Anselm's argument with it.

It isn't quite so simple, however. As was noted by Andrew in the comments at FQI, it isn't obvious that the (single, unique) perfect island is conceivable. Perhaps we can always imagine one slightly better, thus forming an infinite series with no upper bound. But that merely shows that 'island' is a poor choice for X -- it doesn't satisfy premise 1 -- but perhaps some other choice would do the trick. I'll return to this question in a moment.

First, I should explain what work this reductio is doing. Brandon has offered a couple of posts wherein he objects that Gaunilo's island objection (and my more general schema) fail to properly parallel Anselm's own argument. I think he's rather missing the point, and suggested as much in an unusually frustrating exchange in his comments section. Anyway, to clarify, here's my meta-argument:

(1') Absurd consequences follow from the conjunction of (1) and (2) in the schema above, for any X other than 'being'.
(2') Thus, for each such X, either (1) or (2) in the schema must be false.
(3') It seems implausibly strong to claim that 'the perfect X' is inconceivable for all such X.
(4') So it's most likely that (2) must be false instead, for some such X.
(5') If (4') is true, then it seems simplest and most plausible to take (2) as being universally false, i.e. false for all X.
(6') If (2) is false for all X, then Anselm's argument is unsound.
(C') Anselm's argument is probably unsound.

Put more loosely, the point of my reductio schema is to show that Anselm's argument can only survive if "the perfect X" is inconceivable for every X other than 'being'. (Okay, it's logically possible for (2) to be non-universally false, i.e. false for X = those conceivable perfect other-than-beings, and true for X = Anselm's perfect being. But this is implausibly ad hoc. Hence my premise (5').) That is a very strong claim, so Anselm's argument looks to be in trouble.

I should add that if you don't like (4') and (5'), we can replace them with the following:

(4") If (2) is true for X = 'being', then, given (3'), (2) is probably going to be true for some X other than 'being' for which 'the perfect X' is conceivable. By (1'), this will yield absurdities.
(5") Thus (2) is probably false for X = 'being'.
(6") Anselm's argument depends upon the truth of (2) when X = 'being'.
(C") Thus Anselm's argument is (probably) unsound.

Note that my meta-argument does not depend upon the sort of strict parallel that Brandon is criticising, so his objections are quite irrelevant. Indeed, the only point where I really depend upon the analogy is my premise (6'/"), but that one is surely uncontroversial, and unaffected by the sorts of nit-picky differences Brandon highlights. Anselm clearly relies on the claim that it's greater for a being to exist than not. Otherwise he wouldn't be able to reach the conclusion that the perfect being must exist.

Anyway, my main purpose here is to explain why the reductio has some significant rational force, contrary to Brandon's claim that it is merely "a clever bit of philosophical sleight-of-hand, useful for fooling those who don't take the trouble to analyze it, and nothing more."

It is illuminating because we can now see that the core issue is whether there is any other conceivable 'perfect X', for any X other than 'being'. We discussed this in the later comments over at FQI, and I presented some plausible candidates, e.g. X = "malevolent being". If the Anselmian responds by taking 'perfection' to be domain-general (so that the most perfect anything will always tend towards God's attributes: omnipotence, omniscience, and benevolence) then we can stipulatively define a domain-specific evaluative term to take its place in the reductio. Let "sperfect" =df "perfect according to the appropriate domain-specific criteria". Let us say that the evaluative criteria for malevolent beings are just the same as those for beings generally except that the criterion of goodness is replaced by that of evilness. It then follows, by the Anselmian logic of my reductio schema, that the perfectly evil being exists. This alteration won't affect my meta-argument, because Anselm still assumes that it's greater for a being to exist than not, and so it's sgreater for a being qua being to exist than not, and hence (2) must be true when X = 'being'. And the whole point of the reductio is to cast doubt on this latter claim.

As a final point, I'd note that you can have a lot of fun using Anselmian logic with domain-specific perfection. Over at FQI I discussed the conceivability of 'the perfect melody', evaluated against criteria which include its being actually accessible to me whenever I want to hear it. It follows from my reductio schema that the perfect melody really is actually accessible to me right now (given that I want to hear it)! If only...

Readers are welcome to leave a comment suggesting other interesting possibilities for defining perfection into existence. Extra credit if you use Anselm's own logic to prove that there is a perfect counterargument that will negate his own attempts! ;)

Friday, September 23, 2005

Googling Gratification

According to Google, this is the "Top blog" on philosophy. How cool is that? ;)

Thursday, September 22, 2005

Lurker Day

Via Pharyngula, the idea is to invite any 'lurkers' to step out of the shadows and say "hello". If you're a semi-regular reader of this blog, but you've never left a comment here before, then please feel free to do so here!

Wednesday, September 21, 2005


There have been some interesting new comments made recently to my old post on conceptual nominalism (more about philosophy in general than the specific topic of the original post). These don't show up on the main page "recent comments" list, so I thought I'd mention it here instead in case anyone else wants to join the discussion. Update: Same for moral goals vs. side-constraints.

Why Be Rational?

Kolodny argues that we don't have conclusive reason to be rational. He appeals to the "boot-strapping objection": rationality requires us to do what we have apparent conclusive reason to do. Suppose (for reductio) that we have conclusive reason to be rational. Then we have conclusive reason to do whatever we have apparent conclusive reason to do. But that's absurd -- you can't just drop the 'apparent' like that. Apparent reasons need not be real reasons. From the mere fact that you take yourself to have a reason, it doesn't follow that you really do have a reason - it's possible to be mistaken about such things! So the supposition is false: we do not have conclusive reason to be rational.

Still, it seems to me that we have a reason of sorts to be rational. And not just instrumental ones (e.g. a rule-utilitarian-style suggestion that rationality is the most reliable strategy in the long run). Here's why:

Suppose an angel comes along and offers you the following choice. She will alter your brain in one of two ways. (1) She will make you perfectly rational: you will be able to reason perfectly well, and never suffer from weakness of the will, so you will always intend to do what you have apparent conclusive reason to do, and so on. Or (2) she will make you totally irrational but perfectly lucky. Knowing the future, the angel will set up your brain so that you always pick the option that you in fact have conclusive reason to choose. But this "choice" will not be because you consciously reflected on these reasons. Instead, you will engage in terribly bad reasoning, and make fallacious inferences, all of which just happens by good fortune (and angelic tampering) to yield true conclusions.

Which option would you choose? It seems to me that there is something very much preferable about option #1. This shows that we have reason to be rational. It's a good disposition or characteristic to have. I wouldn't want to be incapable of rational reflection, that would suck much of the value out of life. It would be like losing your free will. (Indeed, on some conceptions, it would be to lose your free will!)

Now suppose the angel adds that, whatever you pick, she'll re-offer these options in one week's time. Now you should pick option #2. Why? Because you can't be sure which is the best permanent option, but if you temporarily pick option #2 then the angel will guarantee that your later decisions -- including next week's decision about which of these two options to keep -- are always right.

So, what will the angel make you pick for your permanent choice? Like I said above, it seems that option #1 is the better. By restoring (and improving!) your rationality and free will, this option provides you with the most worthwhile life. Sure, you'll make some mistakes later on, but that's better than being a coincidentally perfect robot. Suppose I'm right about this, so that when you temporarily choose option #2, the angel would set you to pick option #1 the second time around. What does this show? Well, the angel sets you up to pick whatever you have conclusive reason to do. So it would show that you have conclusive reason to pick #1 - to make yourself rational.

Does this mean you have conclusive reason to be rational? This looks like the same thing, but perhaps it isn't. Making yourself rational seems like a kind of acting upon yourself. It might be like the distinction between believing P and bringing it about that you believe P. These are two distinct types of action, as I explain in my essay on reasons for belief. So we might grant the above argument and still deny that we have reason to be rational. On the other hand, if we accept that there can be 'external' or 'inaccessible' reasons, then we might say that we have reason to be rational (there is something to be said in favour of it), even if this is not a reason that we could actually recognize and act upon.

(Kolodny thinks it's "fetishistic" to value rationality for its own sake. But I don't know about that. It seems to me that there really is something intrinsically valuable about being rational in general, or having rational capacities, at least. Though perhaps I'd agree that it's fetishistic to care intrinsically about doing what is rational in any particular case.)

So I guess I haven't really made much progress here. Maybe you should go read Clayton instead. Then come back and leave me some helpful comments :)

How Objective is Rationality?

There is something subjective about rationality. Given that belief aims at truth, there is a sense in which we "fail" whenever we have false beliefs. But it would be far too harsh, too "objective", to call this a failure of rationality. On the other hand, rationality cannot be completely subjective: the mere fact that I believe myself to be rational does not guarantee that I in fact am! We might do something unreasonable without realizing it. So, where do we draw the line?

John Broome has suggested various rational requirements. We ought not to believe contradictions, we should intend to do what we believe we have conclusive reason to do, etc. If we violate one of these rules, then we are irrational in doing so. This seems straightforward enough when applied to those of us who agree with the rules Broome suggests. But what if someone disputes that non-contradiction is a rational requirement? Let us assume that they are mistaken to dispute it. But nonetheless, is such a mistake necessarily irrational? Surely we cannot say that Priest and other paraconsistent logicians are irrational. Even if they are mistaken about the possibility of true contradictions, they have reasons justifying their position, which would seem to put it on a par with any other reasonably held false belief. They are not irrational in holding that the liar sentence is both false and true. (Broome grants this, and suggests that his initial non-contradiction requirement should be loosened somewhat to allow for this.)

Still, we do require some sort of objective requirements, whatever they turn out to be. Some (e.g. Scanlon) have suggested, contrarily, that rationality is just a matter of abiding by those rules that you accept. That is, being rational "by your own lights". But this is clearly far too weak -- if someone rejects all rules of logic and reasoning, they are irrational, even if they refuse to acknowledge it. Or suppose someone denies the meta-rule that it is irrational to break a rule that you yourself accept. What then? Should we go uber-subjectivist and say that even the meta-rule only applies to those who accept it? We soon descend into absurdity.

Kolodny has similarly suggested that rationality involves acting on those reasons that it seems to you that you have. (He suggests we don't really have reason to be rational. But because of the transparency of beliefs -- the way our beliefs seem to us to be true -- if we believe we have a reason, then this will have the appearance of normative force.) But again, this is insufficient for the same reason as above - it's just far too subjective. If you arbitrarily cease to believe in any rational requirements, then it won't seem to you that you have any reasons at all. So you will never fail to act on reasons that it seems to you that you have. So, on Kolodny's account (if I've understood him correctly), you would never be irrational. But that is surely a mistake. It is irrational to ignore rational requirements like the law of non-contradiction (with special exceptions for paraconsistent logicians and other principled objectors), and you cannot escape the charge of irrationality by saying that you don't see any reason to abide by the rule. The fact that you don't see these reasons is precisely why you are irrational!

But again, there is definitely something to the idea that rationality is about "apparent reasons" rather than fact-based reasons. There can be reasons that we are not yet in a position to know, and we certainly cannot be blamed for failing to act on those. So it's certainly wrong to say that rationality is about doing what you have most reason to do. That's just far too objective. But the subjective extreme -- that rationality is about doing what you believe you have most reason to do -- is equally implausible. So what's left?

I'd suggest that rationality is about doing what you have most apparent reason to do, where 'apparent reason' is a semi-objective, evidence-based notion. I do not just mean whatever you believe you have reason to do. Beliefs can be wildly mistaken, after all. Rather, it must be a justified belief. I also mean to include reasons that are (objectively) apparent but that you fail to recognize nonetheless. That is, if the available evidence indicates that R is a reason for you to X, then R is an "apparent reason" for you to X.

So I'm taking (fact-based) reasons and (evidence-based) justification as foundational notions, and defining rationality in terms of those. Rationality is a matter of acting on what the evidence indicates to be the best reasons. These apparent reasons might not actually be the best reasons -- we're not omniscient, and might reasonably make mistakes if the evidence has misled us. So this isn't a purely objective notion. But it's not purely subjective either, because not just any old belief about reasons will do. It's irrational to ignore apparent reasons, even if you do not realize that they are apparent reasons. (Indeed, as noted earlier, your failure to recognize apparent reasons is precisely what makes you irrational.) Sound like a good compromise?

Monday, September 19, 2005

Self-Recognition and Awareness

According to Gallup's famous "mirror test", you can test whether an animal is self-aware by whether it can learn to recognize itself in a mirror. In particular, if you secretly mark the animal's forehead, and it sees this mark in the mirror and responds by touching its own forehead, then this shows that the animal recognizes itself in the mirror. So, the argument goes, the animal must have a concept of self, and be self-aware. But despite the initial plausibility, this conclusion doesn't follow at all.

All living organisms draw some sort of distinction between self and other. For example, the job of our immune system is to identify and destroy alien cells, whilst leaving our own ones untouched. But no-one would thereupon conclude that our immune system is self-aware. Building on this sort of idea, our lecturer pointed out what I think is a devastating criticism of Gallup's test.

Suppose experimenters marked the animal's arm instead, in a clearly visible spot. Now, we surely wouldn't be surprised if the animal then touched the mark on its arm. It can identify its own arm, but that doesn't entail full-blown self-awareness. But why does bringing a mirror into the picture make any difference? A mirror is merely a perceptual tool - it allows you to see things that you might not otherwise see. In particular, it allows an animal to see its own forehead, in addition to its arms and such. Most animals can't make use of this -- they can't work out that the mirror gives them a view of themselves (they either think the image is another animal, or else ignore it entirely). Some apes are more intelligent in this sense -- they can make use of the mirror as a perceptual tool. But it still doesn't imply anything new about psychological self-awareness.

What we really want to know is whether animals know that they have minds. Do they have a concept of self (as opposed to simply being able to recognize themselves)? Can they think about their own beliefs and desires, recognizing that they have thoughts, and that they have a life, and are the same creature today that they were yesterday? These are the important questions, but they're questions that the mirror test is silent on. Despite all the hype, it doesn't really say anything interesting about self-awareness at all.

Philosophers' Carnival #19

Mathetes has published the 19th Philosophers' Carnival. There are several really interesting entries this time around -- I especially recommend Jason's post on Egoism (where I chip in, in comments), and The Frozen Texan on time's arrow. I should add that the carnival host was pretty selective, so entrants shouldn't worry too much if their submission wasn't included this time around. There'll be another carnival in three weeks' time, so give it another shot then!

P.S. I note that the Mathetes blog has one of those awful templates with grey text against a black background, which is practically impossible to read in Firefox (at least on my computer). If you find the same problem, you can use the Aardvark Firefox Extension (or Platypus) to convert the page back into black-on-white.

Sunday, September 18, 2005

Academic Blog Survey

The following survey is for bloggers who are actual or aspiring academics (thus including students). It takes the form of a go-meme to provide bloggers a strong incentive to join in: the 'Link List' means that you will receive links from all those who pick up the survey 'downstream' from you. The aim is to create open-source data about academic blogs that is publicly available for further analysis. Analysts can find the data by searching for the tracking identifier-code: "acb109m3m3". Further details, and eventual updates with results, can be found on the original posting:

Simply copy and paste this post to your own blog, replacing my survey answers with your own, as appropriate, and adding your blog to the Link List.

Important (1) Your post must include the four sections: Overview, Instructions, Link List, and Survey. (2) Remember to link to every blog in the Link List. (3) For tracking purposes, your post must include the following code: acb109m3m3

Link List (or 'extended hat-tip'):
1. Philosophy, et cetera
2. Add a link to your blog here


Age - 20
Gender - Male
Location - Christchurch, New Zealand
Religion - None
Began blogging - March 2004
Academic field - Philosophy (not yet specialized)
Academic position [tenured?] - undergraduate student [so, no]

Approximate blog stats
Rate of posting - daily
Average no. hits - 250/day
Average no. comments - 10/day
Blog content - 90% academic, 10% political, 0% personal.

Other Questions
1) Do you blog under your real name? Why / why not?
- Yes. I'm generally proud to claim authorship of my blog's content.

2) Do colleagues or others in your department know that you blog? If so, has anyone reacted positively or negatively?
- Yes. I've received some positive feedback.

3) Are you on the job market?
- No, not for a few years yet.

4) Do you mention your blog on your CV or other job application material?
- Yes.

5) Has your blog been mentioned at all in interviews, tenure reviews, etc.? If so, provide details.
- n/a.

6) Why do you blog?
- Mainly for the intellectual benefits: it's fun, allows me to develop ideas, have interesting discussions, etc. (Of course, if the potential name recognition, etc., proves beneficial as I progress in academia, then that would be an added bonus. But I don't know how likely that is.)

Additional remarks:
Thanks to Jonas for the original idea of investigating pseudonymity in academic blogs, and Rebecca for many of the questions about the career impact of blogging.

If you have an academic blog, please do consider participating in this go-meme. I think the results could prove quite valuable and interesting. To get the ball rolling, I'm going to nominate Brandon, The Little Professor, and (since he did so well in spreading the last one) P.Z. Myers. But others are most welcome to join in too!

Finally, if you end up analyzing the results of this memetic survey, let me know and I'll update this post with a link to your analysis.

Friday, September 16, 2005

Voting Green

With the elections tomorrow, I thought I'd outline my reasons for voting Green. But to break the one-eyed partisan patterns one sees on most NZ blogs these days, I'm going to balance this with some criticism too.

Why the Greens suck:

My main problem with the Greens is their tendency to let ideology trump reality. Our aim should be to enable humanity, and we should do whatever the evidence suggests is the best way to achieve this. That means we must promote science and rational inquiry rather than hiding behind romantic appeals to "nature", hyperbolic fears of "playing God" through biotechnology, emotional opposition to all things nuclear, and so forth. That's not necessarily to say we should be building nuclear plants or employing genetic engineering. But these issues should be open to rational debate. The Green movement is far too prone to dogmatic romanticism, and this can only harm its chances of actually doing good in the world.

The Greens would do better to become hard-headed empiricists: remaining committed to their goal of enabling humanity (including future generations), but remaining open minded as to the question of how best to achieve it. If the evidence suggests that private prisons are better run than public ones, then take note! Don't keep pretending that the public sector must always be best. Such dogmatism simply shows that the Greens care more about their statist ideology than achieving what's really best for our country. It's despicable. (See also David Farrar's Ideology vs. Common Sense, condemning Labour for the same vice.)

I also strongly disagree with the Greens on minority issues relating to "positive discrimination", as explained in my recent posts: Why Discrimination is Wrong, and The Human Race. The latter post grants that, in the short term, National would be even worse for our race relations and national unity. But still, the Left needs to give up its racial separatism and recognize the goal of a colourblind future (even if they think it's too early to dispense with our concept of race quite yet).

Another general problem with the Greens is their paternalism. They are far too quick to impose coercive measures and regulations without adequate justification. As a general rule, we should trust individuals to make their own decisions about what's best for them. If smoking bans in bars are to be justified, proponents first need to explain: where, exactly, is the market failure? Are bar owners mistaken about what would best satisfy their customers? Have the workers been misinformed about the health risks, or not adequately compensated for them? Some explanation is required, at least, to justify such blatant paternalism. Many Leftists, in their arrogance, seem not to recognize this requirement.

My final complaint against the Greens is more specific, concerning the S59 "anti-smacking" bill. In what can only be described as an act of rank idiocy, the Greens want to shift discretionary powers from jurors to the police. (As always, click the link for details.) This really does seem transparently stupid. I just don't know what the Greens are thinking -- or, indeed, whether they're thinking at all. Bloody idiots.


Okay, so that's why the Greens suck. Most of those criticisms apply to Labour too, and indeed leftists generally. A bunch of unthinking, reflexive statists, the lot of them. (Hell, me too most of the time.) It's most unfortunate. So why am I voting for them? Simply enough, it's because the alternatives are so much worse.

Why the Greens rule:

As far as I'm aware, they're the only party to explicitly take well-being and quality of life (rather than GDP) as a primary goal. They may not be perfect utilitarians, but they're the closest thing on offer.

They have the best tax policy, which shows signs of putting market mechanisms to good use after all. (So, in fairness, they aren't always blinded by ideology.) They are serious about protecting the environment, promoting renewable energy sources, improving public transport, etc. The unprincipled right-wing parties will pollute and pillage for a quick buck. We need the Greens to keep New Zealand clean, and protect our future. This point alone is sufficiently important to outweigh all the criticisms noted above. The other parties are irredeemably irresponsible.

Further, the Greens are the only party with a remotely sensible drug policy. They recognize that the alcohol problem lies in our national culture, and is not amenable to any 'quick fix' in the form of raising the drinking age. And Nandor's suggestion to make personal cannabis possession subject only to a minor fine, on a par with speeding and similarly trivial offenses, is eminently sensible.

The Greens were the only party to suggest that we modernize our sexist rape laws. I respect them a lot for that. All the other parties were too cowardly to talk about this uncomfortable issue.

While all the other parties are competing to see who can appear most "tough on crime", only the Greens can be relied upon to focus on the real issues, e.g. how to reduce crime in the first place. They recognize that prisons are inefficient and should be a last resort. They want to bring the victim back into the justice process, and promote restorative justice in appropriate circumstances.

Finally, the Greens have real family values. They support paid parental leave, greater flexibility in working hours for parents, and other such policies that will help enable parents to raise their children. They're supportive of all families, even those that don't fit the Church's restrictive mould, e.g. de facto couples, gay couples, etc. They're more interested in protecting prostitutes than condemning them. Unlike conservatives, they recognize that ethics and sexual prudishness are not the same thing.


There's a lot I don't like about the Greens. I wish they were more rational and less romantic. Nevertheless, on many of the most important issues, they show themselves to be the most rational party of them all. When conservatives discount the future, refuse to discuss uncomfortable issues, or care more about condemning people than helping them, the Greens can be relied upon to speak up for what really matters. And for that, they have my vote.

Thursday, September 15, 2005

The Human Race

The debate over race relations in New Zealand has gotten quite polarized. As a result, I find myself thinking that everyone has got it wrong. The racist Right wish Maori didn't exist. The racist Left want to elevate Maori to the superior status of a natural aristocracy, on the basis that their ancestors have been here longer than everyone else's. I don't know which is worse. Fortunately I don't have to decide, because there is an alternative, and that is to overcome these pernicious and restrictive binaries.

Both sides assume that Maori and Pakeha are two separate peoples. The multicultural/separatist Left wants to entrench this divide, whereas the assimilationist Right wants to blot half of it out. I think both positions are harmful to our country. As I've argued before, we need to recognize the third way, of reciprocal cultural integration, which is naturally evolving in our society. No Right Turn notes that we are becoming a Pacific nation, and quotes Colin James:
Maori culture, supplemented by Pacific Polynesian culture, has begun over the past five years to alter "mainstream" culture and the way we live our lives.

We are moving beyond the tokenism of the past 160 years. The new All Blacks' haka - which could do with an English phrase or two to be truly "national" - is a prime example.

This transformation is still in the very early stages, and in any case will modify, not blanket out, European cultural traditions and ways of life. But over the next 25 years, in part driven by demographics, it will make us a Pacific nation, not just dwellers in the Pacific - it will Pacific-ate us.

The inadequacy of the binary model is reinforced by Anne Salmond:
We have had 200 years of swapping with each other, genes, language, and so by now, the binary model is fictional... People get married and end up with a foot in both camps - that should be a basis on which we go forward together, rather than seeing the Treaty as an instrument which cuts us in half as a nation.

My position has two core implications for policy:

1) We should not discriminate between individuals based on their race. That's racism, pure and simple. It doesn't suddenly become okay when a pure-hearted liberal does it.

2) We should acknowledge and promote Maori culture, but in a non-exclusive fashion, which sees the culture as belonging to all New Zealanders, no matter their genetic makeup.

We should look forward to the day when the word 'race' can be expunged from our vocabulary. It is a useless concept, an utterly arbitrary way of categorizing people. We are a nation of individuals, bound together in various ways. The common focus on race blinds us to the complexity of individuals, and the multiplicity of ways in which they could be categorized. I am young, middle-class, white, male, a student, philosopher, atheist, blogger, brother, son, and so forth. Why focus on race? Why is that an especially important category? Whenever I come across a race-based distinction (such as separate Maori seats in parliament), I wonder: why not make similar distinctions based on sex, or age, or religion? Hell, why not hair colour?

Imagine two individuals who are almost exactly alike, but for their genes. One has some Maori ancestry, and the other doesn't. Apart from that, they are practically indistinguishable: same social class, same skills and abilities, same values. How could you possibly justify treating the two differently in any respect whatsoever? Race is a difference that makes no difference.

Leftists commonly appeal to historical injustices. Maori people were harmed then, so Maori people should be compensated now, the argument goes. But this presupposes that race is a relevant category. We might just as well say that black-haired people were harmed then, so black-haired people should be compensated now. That's clearly absurd reasoning. The link between past victims and present claimants is too tenuous: the mere fact that they share hair-colour just isn't relevant. But why is biological race - mere genes - any more significant than hair colour?

So, we should aim to overcome racial separatism, especially in our laws. I agree with the National party about that much, at least. Unfortunately, National's actual policies - e.g. using white votes to abolish Maori seats - are likely to be disastrously counterproductive, and simply serve to aggravate racial tensions. What we really need is for Maori to voluntarily abandon the path of separatism. If I may wax poetic for a moment: We should invite them back into the mainstream -- or, rather, the "braided river" that we've become. Let our waters mingle, and our people too. When the wounds of history have healed, we will greet the sunrise together: not as two peoples, but one.

Only the Left could credibly make such an offer. The question is whether they want to. Romantics want to preserve 'the Other' as such, which requires separatism and special recognition for the chosen ones of indigenous blood. They are to be set apart from the decadent colonial 'West', and recognized for their moral purity, historical priority, and harmonious connection to the Land.

This is the pernicious ideology we have to overcome. Leftists need to recognize that race has no intrinsic impact on who you are as an individual. We need to embrace Maori culture and make it accessible to all New Zealanders, rather than standing it on a pedestal to idolize from afar. Finally, we need to recognize that conservatives are "culturally insecure" for a reason: the separatists among us keep implying that Pakeha are somehow less authentic New Zealanders! This offensive rhetoric has got to stop. We are not British colonialists. We are native New Zealanders too. So please, show some respect.

Tuesday, September 13, 2005

Upcoming Carnival

The next Philosophers' Carnival is coming up Monday. If you have a philosophy blog, then pick your favourite recent post and send it in by the end of the week! If you read philosophy blogs, feel free to make a nomination or two using the system.

Red Pill: Freedom to Starve

There’s much that’s misleading in politics. But perhaps the worst offender is the common claim that Right-wing “libertarians” (e.g. ACT) champion the value of individual freedom. They stand for non-interference, but this “negative freedom” is only half the story. The more important aspect of freedom is opportunity.

Imagine you find yourself stuck down a well. Libertarians claim that you are perfectly free so long as everybody else leaves you alone, since that way you suffer no interference. But surely we can see that this is mistaken. If left alone, you would dwindle and die. That’s not any sort of freedom worth having. Real freedom requires that you be rescued from the well. Until that happens, you lack any opportunities to act and achieve your goals. And that is clearly what really matters.

Of course, most of us aren’t stuck down wells. But the example proves an important point. If you agree that the person stuck down the well lacks freedom, then you are committed to the view that freedom requires more than mere non-interference, for they suffer no lack of that!

For a more politically relevant example, consider the consequences of poverty. It is not enough to leave poor children alone: by letting them starve, we do not thereby make them “free” in any worthwhile sense. The fulfillment of basic needs is a prerequisite to any form of freedom worth having.

When right-wingers claim to stand for “freedom”, they conceal this crucial point. What they really stand for is non-interference, and that only means freedom for those who have the means to take advantage of it – freedom for the rich, in other words. Non-interference won’t free children from poverty any more than it will free the person stuck down the well. Sometimes freedom requires positive action.

Further, sometimes achieving an important freedom requires us to sacrifice a less important one. Do traffic laws count as “government interference”? Clearly the laws do restrict us, removing our right to drive wherever we please. But in return, we get functioning roads that enable us to actually get where we want to go. Some interference is justified for the sake of improving our real opportunities. This sacrifice yields a net benefit to our real freedom.

So how about poverty, then? Could tax similarly be justified on the grounds of freedom itself? Sure, the rich might have to give up their caviar. But if this enables the poor to meet their basic needs, get a decent education, and so forth, then this too looks like a net gain for freedom. More opportunities have been gained than lost. And that’s what really matters.

Non-interference is utterly worthless in the absence of opportunity. Ford used to say, “You can have any colour you want, so long as it’s black.” Some choice that is! But for people who lack options to begin with, that’s the only “freedom” that the Right wing has to offer. Don’t let the common rhetoric mislead you. They promote non-interference, whereas the Left promotes opportunity – thus enabling people to lead the lives they want to live. If you agree that it’s the latter sort of freedom that matters, you might find it better championed by the Greens than ACT.

Racial Profiling

I would like to thank Prof. Thomas for inviting me to respond to his interesting post on profiling and moral progress. He began by contrasting the standard liberal and conservative views on what counts as pernicious racial discrimination. I am afraid that by his classifications, I stand as a poor spokesman for liberals, for I'm wary of affirmative action as well as racial profiling. I take the following anti-prejudice principle to apply universally:

(P) It is morally obnoxious to draw conclusions about an individual person on the basis of their race (or any other involuntary group characteristic for that matter).

Monday, September 12, 2005

Liberal Shaming in Action

I recently suggested that liberals need to reclaim the power of shame ("for good instead of evil"). There's a neat example of this mentioned over at Positive Liberty, of a website that will publicly expose those who sign a petition to ban gay marriage and civil unions in Massachusetts. Let's see them explain that signature to their neighbours -- and in a few years' time, their grandchildren.

Sunday, September 11, 2005

Abstract and Concrete Probabilities

There seems to be an important distinction between two types of probability. Let's say there's some object O with a property P1, but you don't know what other properties it has. Further suppose that what you're really interested in is whether O has some other property P2. What's the probability that O has P2? We could interpret this as an epistemic question, whereby we abstract away all the unknown details of O and just ask what proportion of P1-objects also have P2. Here we aren't really talking about the specific object O at all. We treat it merely as a token of type "P1-object", without regard for the multiplicity of other (unknown) ways O might be categorized. That's one option. Alternatively, we might focus on the concrete object O, and ask: of this specific object, including its many other properties that we are not yet aware of, how likely is O to have P2? This is a question about the real or metaphysical modal properties of O, rather than being a purely epistemic question. Let's illustrate the distinction with a couple of real-world examples.

The first example came up in the comments to my recent post on ad hominems. Suppose a notoriously unreliable person makes an argument. You are wondering whether the argument (O) is likely to be a sound one (P2), given that it is being made by this notoriously unreliable person (P1). One way to answer this would be to abstract away from the details of the argument itself, and just use your background knowledge of the advocate's unreliability - in particular, that most of his arguments are unsound - to conclude that this argument is therefore likely unsound. This is a classic ad hominem fallacy. Although such abstraction may be rational in a sense, it isn't really fair on the argument in question. After all, you've completely ignored the argument itself, when the dialectical norms of civil discourse recommend that we consider each argument on its own merits.

This latter suggestion is taken seriously by the concrete probabilist. He focuses on the argument (object O) rather than who makes it (property P1). In particular, he recognizes that the validity of the argument is metaphysically independent of the person making it. So he ignores the person, and instead assesses the argument on its own merits, trying to determine whether it is logically valid, and how plausible the premises are. It is from these considerations that he estimates the soundness of the argument.

The second example is provided by stereotyping individuals (e.g. racial profiling). Say you want to know whether Jack (O) committed the crime (P2), when the only other information you have about him is that he is a black male (P1). You can probably tell what comes next. The abstract probabilist ignores all the complexities of Jack the individual, and makes his judgment in light of how many other black males have been known to commit crimes. As with the ad hominems, we can see that this prejudice is rational in a sense, but also incredibly unfair to Jack as an individual. Your judgment of him is based on what you know of others who share the same property as him (i.e. of being a black male), rather than anything concrete and intrinsic to Jack himself. I argue elsewhere that this is morally problematic.

Consider a pair of cross-racial "twins", Alan and Bill, who are identical in all respects except that Alan is white and Bill is black. Clearly this is a difference that makes no difference. As a matter of fact, each is equally likely to make any particular decision, or have any other property or characteristic. These facts are what determine concrete probabilities, and they are unaffected by whether we know about them. But abstract probability is an epistemic notion. If all you knew was that Alan was white and Bill black, then you would judge their abstract probabilities very differently. You might think Alan is likely to be wealthier, better educated, and so forth. But these "probabilities" are metaphysically superficial -- merely reflecting facts about group frequencies and correlations. Concrete probability is much deeper, being grounded in causal and explanatory facts about the concrete individuals themselves.
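For the more formally inclined, here is a toy numerical sketch of how the two estimates can come apart in the unreliable-arguer case. All the numbers are invented for illustration; nothing hangs on them:

```python
# Toy illustration (all figures invented) of abstract vs. concrete
# probability judgments about the soundness of an argument.

# Abstract probabilist: ignore the argument itself and use the base
# rate for the reference class "arguments made by this unreliable
# person" (property P1).
arguments_by_author = 50
unsound_by_author = 40
p_unsound_abstract = unsound_by_author / arguments_by_author

# Concrete probabilist: assess this particular argument (object O) on
# its own merits -- check its validity, then estimate how plausible
# the premises are.
is_valid = True
p_premises_true = 0.9
p_sound_concrete = p_premises_true if is_valid else 0.0

print(f"Abstract estimate of soundness: {1 - p_unsound_abstract:.2f}")
print(f"Concrete estimate of soundness: {p_sound_concrete:.2f}")
```

The abstract estimate tracks only group frequencies (here, a 0.20 chance of soundness), while the concrete estimate is driven by features of the argument itself (0.90) -- which is just the divergence described above.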

So, like I said, this strikes me as an important distinction. (If someone with more formal training in statistics can go beyond the intuitive graspings I've offered here, please do leave a comment.) I think the close analogy between ad hominems and stereotyping is also interesting to note.

Saturday, September 10, 2005

Unknowable Truths

You might have thought that all truths are, in principle, knowable, even if it turns out that nobody ever actually manages to know them. But, surprisingly enough, this is provably false. For consider some such truth P that will never actually be known. Then the statement Q: "P is true but will never be known" is true. But Q is clearly unknowable. Knowledge distributes over conjunctions, so if you knew Q you would thereby know the first conjunct (P). But that would falsify the second conjunct (that P is never known), and so falsify Q itself -- yet knowledge is factive, so anything known must be true. Since knowing Q would thus yield a contradiction, and is therefore impossible, Q is an unknowable truth.
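The proof can be laid out step by step. Here is a standard reconstruction, writing $Kp$ for "$p$ is known (at some time, by someone)" and $\Diamond$ for possibility:

```latex
\begin{align*}
1.\quad & K(p \land \neg Kp) && \text{assumption: } Q \text{ is known} \\
2.\quad & Kp \land K\neg Kp && \text{knowledge distributes over conjunction} \\
3.\quad & \neg Kp && \text{from } K\neg Kp \text{, since knowledge is factive} \\
4.\quad & Kp \land \neg Kp && \text{from 2 and 3: contradiction} \\
5.\quad & \neg\Diamond K(p \land \neg Kp) && \text{so } Q \text{ cannot possibly be known}
\end{align*}
```

Contraposing: if every truth were knowable, there could be no truth of the form "P and P is never known" -- i.e. every truth would eventually be known.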

So we've managed to prove a priori that either there are no never-known truths P, or else there are unknowable truths Q. It's a neat argument. I just came across it here. "Fitch's knowability paradox", I think they call it. Maybe epistemology isn't so bad after all.

"But that would justify sodomy!"

I've long been meaning to write about Max Goss' hilarious comment on this old Right Reason post. Goss objects to a proposed moral theory on the grounds that "It would justify sodomy." I couldn't believe my eyes: it's so mind-bogglingly wrongheaded. One must be completely morally obtuse to think that the "wrongness" of sodomy is somehow a central and unrevisable moral truth. And just think what would happen if Goss' objection were legitimate: why, we could instantly refute every non-arbitrary moral theory in one fell swoop! Just imagine it...
  • Utilitarianism? Well, on the one hand it is pretty plausible to think that only human wellbeing matters. But on the other hand, "it would justify sodomy!"

  • Kantianism? It's all well and good to treat other people always as an end in themselves, and never as a means only. Then again, once the universalizability constraint is properly understood, that too "would justify sodomy!"

  • Contractarianism? It tends libertarian, allowing people to do what they will so long as they don't harm others. But, gosh, you know what? That too would justify sodomy! Oh no!

  • How about ethical egoism? Surely that can be relied upon to give ridiculous results. Alas, not in this case. Given that gay individuals are going to be better off having loving sexual relationships than being lonely and miserable their whole lives, even this poor excuse for a moral theory is going to justify sodomy.

And so forth. Try a few others for yourself -- you'll be amazed at how easy it is to refute otherwise plausible theories using this one simple objection. Who ever said there's no progress in philosophy, eh? In a few short minutes we've managed to overturn centuries of moral philosophy! All that's left are those few moral theories that are completely arbitrary.

Divine command theory might work -- God can command whatever he likes, so he could stipulate that sodomy is wrong, and rock music too. That's gotta count for something. Otherwise we might try moral relativism: if conservatives think that sodomy is wrong, then it really is "wrong for them", no rational reasons required. Or we could ask the magic 8-ball -- at least there's a chance it'll answer "no" when we ask it whether sodomy is justified, and we're really running out of options since we ruled out all the non-arbitrary moral theories already. How about flipping a coin to decide?

I guess we might resort to natural law theory, and claim that sodomy somehow violates the natural "purpose" or "function" of the sex organs. But good luck explaining why clapping isn't immoral, then, given that hands are for grasping. The aforementioned Right Reason post tries to draw a distinction between using an organ for "other than" its purpose vs. "contrary" to its purpose, where only the latter is "immoral". But then -- though the author refuses to recognize it -- that would justify sodomy, because sodomy only temporarily disables the sex organs, just as clapping temporarily disables my hands, and either can be put back to their "proper" use soon afterwards. I can see no principled difference between these two "unnatural" practices, insofar as their relation to natural purposes is concerned. (More here.)

Devastating though the "but it would justify sodomy!" objection no doubt is, there's more to be said against natural law theory. For one thing, it's patently false. I've talked before about "Creator's rights" and the illegitimacy of externally-imposed purposes. Quick refutation: suppose humans had a scorpion-like tail, the (evolved or God-given) purpose of which was to kill our enemies. Further suppose that modern scientists could remove the tail and extract useful medical substances from it. The idiot's law theory implies that it would be wrong to use the tail to save lives in such a way, for that would be contrary to its "true purpose" of killing!

Clearly, such externally imposed "purposes" have no intrinsic bearing on morality. They're morally arbitrary. What little plausibility this theory has arises because evolution has tended to provide us with organs whose natural functions promote our wellbeing. And of course it's our wellbeing that really matters. But a few conservative dullards get confused by the correlation and decide that it's the natural functions that really matter. How depressing.

Anyway, I was supposed to be ranting about morally obtuse homophobes, not natural law theorists (though I suppose the two categories do tend to overlap -- at least when the homophobes can think of no other rationalization besides "it's unnatural!").

I must say, one good thing to come from all of this is the sheer entertainment value. We've already had wingnuts arguing for their (purportedly) factual claims by asking: "If you doubt this is possible, how is it there are PYGMIES + DWARFS??" Now we can complement this with the similarly inane moral objection: "But that would justify sodomy!" As with the "pygmies + dwarfs" remark, it can be applied to just about anything. That's sure to come in handy.

Friday, September 09, 2005

Attacks and Arguments

Sometimes people make false accusations. Let's define the "fallacy" fallacy as occurring when someone mistakenly rejects an argument on the basis of an alleged fallacy that it does not in fact exhibit. (I suppose this process could iterate: if I falsely accuse you of making the "fallacy" fallacy, then I thereby commit the ""fallacy" fallacy" fallacy. And so on.) Anyway, I suspect the most common form of the "fallacy" fallacy involves a false accusation of argumentum ad hominem. In a delicious twist of irony, many of these false accusations actually amount to genuine ad hominems themselves. Here's the typical pattern I have in mind: person A makes an argument sprinkled with insults. Person B objects, "That's an ad hominem!", and refuses to address the substance of A's arguments.

The problem is that people commonly object to any form of insult as an ad hominem. But this is mistaken. An ad hominem fallacy occurs when you reject your opponent's argument because of some characteristic of the advocate that is irrelevant to the content of the argument. In general, what matters is the argument, not who makes it. (I will mention some exceptions below.) But not all "personal attacks" take this fallacious form. Rather than saying "you suck, therefore your argument does", one might instead provide an adequate counterargument, then append: "your argument sucks, therefore you do". Such gratuitous insults may be unwise, but the counterargument doesn't depend upon them, so it's a mistake to object to the counterargument (and ignore its substance) on that basis.

Further, I think insults aren't always inappropriate. Creationists and homophobes, for example, tend to make revealingly bad arguments. In critiquing these arguments, we first aim to show why the conclusion doesn't follow. But we are also in a position to draw a conclusion about the character of the person advancing the argument. Many arguments are so bad that they could not be honestly made by any informed rational person. Thus anyone who makes them must be either stupid, ignorant, or dishonest. In the course of criticising such an argument, a partisan may wish to point this out, just to emphasize how incredibly bad the argument really is. I don't think it's necessarily wrong to do so. Some positions are so lacking in rational warrant that they deserve our scorn.

For an example of the "fallacy" fallacy in action, just look at political blogs. Liberals tend to insult Bush a lot in the course of their arguments. Conservatives then respond by dismissing them as "Bush-haters". But in fact it isn't the "Bush-haters" who are committing ad hominems -- after all, they have arguments to back up their insults. On the contrary, the conservatives are effectively arguing, "You hate Bush, therefore I don't need to address your substantive argument" -- and that is a clear case of argumentum ad hominem. As Leiter notes:
This guy sure is pissed, not without reason. The Bush shills and loyalists have coined a phrase as a defense mechanism against this kind of increasingly common reaction: they refer to "Bush haters," as though this kind of response is irrational and inexplicable. It doesn't occur to them that there are reasons people hate Bush, that people are responding to events, to facts, to stupid things the man and his Administration have done.

(I note that Leiter himself is commonly targeted by people committing the "fallacy" fallacy. People are put off by his abrasive tone and plain-spoken insults, and thereby conclude that he's committing "ad hominems", no matter the substance of his arguments. Of course, one may question whether it's the best way for him to persuade anyone who doesn't already agree with him -- but Leiter has explained that that is not his purpose anyway.)

Okay, so far we've distinguished "personal attacks" based on whether they form premises or conclusions of counterarguments. Only the former are fallacious -- and not even all of them are. I can think of two general exceptions. The first type is when the person being attacked will be responsible for carrying out the action being debated, and hence their character is relevant to assessing the likely consequences of the action -- I explain this in more detail over at Prior Knowledge, in response to a "fallacy" fallacy from Simon Clarke.

The second type is more complicated, and it was brought up in our political philosophy class last semester, where G.A. Cohen (I think it was him) uses it in relation to the incentives debate over tax. Anyway, the basic structure is this: a kidnapper has stolen your child; you will only get your child back if you pay the ransom, therefore you should pay the ransom. This seems like a fair enough argument, e.g. if made by the police who are helping you. But the normative force of the argument is completely different if it is instead made by the kidnapper himself. After all, he is responsible for making the unfortunate premises true. As such, his use of the argument is reprehensible blackmail, rather than helpful advice. That doesn't necessarily make it unsound. But it does mean that the person making the argument (i.e. the kidnapper) can be blamed for doing so. Cohen argues that when capitalists argue that they will work less hard if taxes are raised, this is a similarly reprehensible "threat" of an argument.

Anyway, the point to note here is that arguments are not simply the free-floating propositional structures that philosophers are trained to see them as. Sometimes it makes an important difference who is making the argument. As such, it is not always illegitimate to draw attention to this fact.

Thursday, September 08, 2005

Reasons for Belief

“[T]he term ‘belief’ is ambiguous. It can refer to the thing believed, and it can refer to the act or state of believing that thing. Talk of ‘reasons for beliefs’ inherits this ambiguity.” – Alan Musgrave.[1]

“[O]ne cannot settle on an answer to the question whether to believe p without taking oneself to have answered the question whether p is true.” – Nishi Shah.[2]

Can there be reasons for belief that are not reasons for the truth of the thing believed? The negative response has some intuitive appeal, and fits well with our epistemic practices: the way we generally justify believing p is to point to evidence that p is true. Nevertheless, I will argue that there can be non-truth-indicative reasons for belief. Evidentialism can be challenged on two general fronts. First, there can be non-truth-indicative epistemic reasons for belief, that is, reasons which derive from distinctively epistemic norms other than that of truth. Second, there can be non-epistemic or practical reasons for belief, such as when holding the belief would be morally or prudentially advantageous.

We typically take truth to be the normative standard against which to assess beliefs. We say “belief aims at truth,” and assess beliefs according to whether they achieve this goal. By this standard, reasons for belief will be indicators of truth. We have reason to believe p when we have reasons for the truth of p. This is the standard picture. But there are other normative frameworks we might adopt instead, or in addition. For example, we might assess a set of beliefs for its internal coherence, rather than applying an external standard such as truth. Despite making no explicit reference to truth, the standard of internal coherence is nevertheless clearly an epistemic standard, rather than a practical one. And, indeed, it is quite a plausible one. I thus propose that we have epistemic reason to believe p if doing so would yield a more coherent belief set. The goal of internal coherence and the practice of good reasoning both have normative force, independently of their relation to the external goal of obtaining true beliefs.

Suppose that you believe that p, and believe that if p then q, and on this basis you rationally form the belief that q. The two former beliefs provide you with a reason to adopt the latter belief. In doing so, you make your belief-set more coherent. This is epistemically rational. So you have reasons to believe q. But are there any reasons for the truth of q? If asked, you would likely point to the propositions that p, and that if p then q, as your reasons for taking q to be true. However, let us suppose that your former beliefs are mistaken: p is false, and the conditional "if p then q" (understood as asserting a genuine connection between p and q, rather than the merely material conditional, which would be trivially true given the falsity of p) fails to hold as well. So while you took these to be reasons for the truth of q, in fact they are not. You were mistaken about whether there are reasons for the truth of q. There are no such reasons in this case.

One might object that this conclusion is overly hasty. While the false propositions that p and that if p then q cannot be reasons for the truth of q, perhaps there are other reasons that we have overlooked. In particular, since modus ponens is a truth-preserving rule of inference, the conclusion will inherit whatever justification the premises have. Whatever truth-indicative reasons you have for your former beliefs will be transmitted to the conclusion, thus serving as (inconclusive) reasons for the truth of q. That, the evidentialist argues, is why you have some reason to believe q. But let us suppose that your former beliefs were entirely unjustified according to external standards of evidence or truth-indicativeness. We stipulate that there are no such reasons for q to inherit. Nevertheless, you do have a reason to believe that q, simply because of the coherence of this conclusion with your other beliefs that p and that if p then q. The internal standard of coherence provides normative force quite independently of external considerations. Of course, it does not force you to believe q. Perhaps you ought to reject the premises instead. But this is consistent with there being some pro tanto reason to believe q; it may just be that you have even more reason to avoid all three beliefs. It would be implausible to suggest that there are no reasons whatsoever for believing q. The mere fact that doing so would boost the coherence of your belief set is surely a consideration in favour of the belief, even if there are other, more strongly favoured, ways to achieve this goal. Thus we find that the evidentialist is mistaken: it is possible to have a reason for believing q which is not also a reason for the truth of q.

This vindicates Musgrave’s distinction, though he himself employs it in a different fashion. Musgrave wants to accept inductive scepticism – the view that there can be no inductive evidence for the truth of a belief – but hold that our everyday beliefs can be reasonable nonetheless. In particular, he claims that if an evidence-transcending hypothesis H survives our best attempts to falsify it, then “this fact is a good reason to believe H, tentatively and for the time being, though it is not a reason for the hypothesis H itself.”[3]

This claim is not well-supported, however. It is not enough to point to the possibility of non-truth-indicative reasons. Musgrave needs to explain precisely what sort of reason we have to believe H. If it is not a truth-indicative reason, it must be a reason of some other sort. But it isn’t clear what other sorts of reasons there might be in this case. He doesn’t seem to be pointing to pragmatic reasons of any sort. But nor does he appeal to any alternative epistemic norms, such as the ‘internal coherence’ approach I advocated above. Instead, Musgrave appeals to the fact that the hypothesis is not yet known to be false. This is the sort of appeal one would expect from someone who was still working within a truth-focused normative framework. The reason we feel an intuitive pull towards accepting falsification-resistant hypotheses is presumably that we take such resistance to be evidence that the hypothesis is true. If this were not the case, then it is no longer clear why we should care about resistance to falsification. Why believe the hypothesis if we have no reason to think it more likely true than not? It seems doubtful whether Musgrave has pointed to a genuine example of non-truth-indicative reasons for belief.

A more intuitively compelling instance of the distinction is found by appeal to pragmatic reasons. Suppose a demon threatens to torture your family unless you believe (N) that the number of stars is even. Clearly this threat is in no way evidence for the truth of N. But it does seem a very strong reason to believe N, nonetheless! Granted, this is not something you would be capable of doing through willpower alone. But suppose that an angel offered you a magic pill that would instil this belief if swallowed. It is obvious that you ought to swallow the pill. But the pill is purely instrumental to the end of obtaining the belief in N. You would have no reason to take the pill unless you had reason to believe N. So you must have reason to believe N. In this scenario, it appears that the continued wellbeing of your family is a reason for the act of believing, but not a reason for the truth of the thing believed.

It is undeniable that there is a practical reason of some sort relating to believing N in the above scenario. But perhaps we misdescribe it in calling it a reason for the act of believing N. We might instead say it is a reason for the act of getting oneself to believe N. After all, taking the pill cannot be described as an act of belief per se. Rather, it is an act of getting oneself to believe. The belief is a consequence of the intentional action; it is not the action itself. Indeed, there is no ‘act of belief’ occurring here at all. The belief is something that happens to you, rather than something you do.

In this case, belief is treated as a mere state of feeling, like hunger or tiredness, rather than an intentional attitude. Clearly, hunger is not a rational action, or something that stems directly from your agency. Rather, it is something that happens to you. It is also something you can make happen. But it is clear that in doing so your action is not being hungry, but rather, getting yourself to be hungry. It would be nonsensical – a category error – to claim that you have ‘reasons for hunger’. Hunger is a mere state, occurring outside the scope of your agency. While we can act so as to bring such states about, and have reasons for doing so, this action is distinct from the state itself. There can be no normative reasons for the latter.[4]

We are now in a position to prove that the magic pill scenario fails as a counterexample to evidentialism. Practical reasons are reasons for action. I am here using the word ‘action’ in a broad sense, to cover all intentional occurrences that fall within the scope of our agency; intuitively, something that we do. It thus includes deliberative judgments. Now, only actions can be supported by reasons for action. So if X is not an action, then there can be no practical reasons for X. In the described scenario, believing N is not an action, but a mere state. The belief is brought about by acting upon yourself as if you were an alien object. The resulting belief is thus a consequence of your action, not an action itself; something you make happen, not something you do. Thus, there can be no practical reasons for believing N. Contrary to initial appearances, this is not really a case of reasons for belief that are not reasons for the truth of the thing believed. Instead, we have found reasons for getting oneself to believe, as distinct from reasons for the act of believing (since there is no such act here).

Although believing N in the above scenario turned out not to be an action, this is a very rare case. Most often, believing is something we do, not something that happens to us. Beliefs usually develop from our internal mechanisms in a manner which allows them to be attributed to our agency, rather than some external source such as a magic pill. Doxastic deliberation, where one reflects on the evidence and thereby comes to a judgment about what to believe, is the clearest case of such rational influence. So even if the particular believing mentioned above was not an action, and thus not something we could have reasons for, there are acts of believing for which the possibility of practical reasons is still open. Perhaps the agent in the magic pill scenario later reflects on their induced belief N, and makes a judgment about whether to retain the belief. This is a genuine ‘action’ (in my broad sense), and thus potentially open to support from practical reasons. So let us now address the general question whether practical reasons can apply to doxastic deliberation. Is it possible for practical considerations to influence what one ought to conclude when deliberating over whether to believe p?

Shah thinks not. He begins by noting a phenomenon which he calls ‘transparency’: within the first-personal perspective of doxastic deliberation, “the question whether to believe p seems to collapse into the question whether p is true.”[5] At first glance, the magic pill scenario appears to be a counterexample to this claim. The agent was deliberating whether to believe N (by taking the pill), and paid greater heed to practical concerns than evidential ones. But in fact this was not a case of doxastic deliberation at all. We have already noted that the resulting action was not an act of belief, but an act of getting to believe. As the prior deliberation concerned the decision to thus act, it was deliberation over whether to get oneself to believe N, rather than deliberation over whether to believe N per se. More generally, it is not doxastic deliberation when one deliberates about whether to manipulate oneself into having a belief. You can only deliberate about what to do (or judge), and forced beliefs are not things you do; they are things that happen to you -- perhaps as a result of some other action that you do. In any case, one can no more deliberate about whether to have a forced belief than one can deliberate about whether to be hungry. In either case, what one is really deliberating about is whether to act in such a way as to bring the state about.

Once this is clarified, transparency does seem undeniable. The question we then face is how to explain it. Shah’s answer is that “it is analytic of belief that it ought to be true”.[6] Recognition of this normative requirement prevents us from deliberately believing for non-truth-indicative reasons. This is not to say that our beliefs always do track the evidence. Shah emphasizes that transparency only holds in deliberative contexts, and this is part of what needs explaining. Our other belief-forming mechanisms are not so pure. We can be influenced by wishful thinking, confirmation bias, and so forth. It is a virtue of Shah’s account that he can explain this. By locating evidential norms in the concept of belief, this explains why transparency is only found in the context of doxastic deliberation, where we conceive of our beliefs as such, and not in sub-personal mechanisms that do not exercise the concept of belief.[7]

But Shah’s answer is still too strong. If it were analytic that beliefs ought to be true, then it would be incoherent to assert “p is false but S ought to believe it”. But this statement does not seem incoherent. For one thing, not all false beliefs are irrational. If we know that S has been exposed to a great deal of compelling but misleading evidence, then we might well judge that S ought, as a matter of epistemic rationality, to have the false belief. Shah might accommodate this objection by tweaking his norm slightly. Still, the false accusation of incoherence will remain for examples involving purported practical reasons for belief. Many of us would judge that an agent ought to have a false belief if the fate of the world depended on their doing so. The agent in question might even agree, and wistfully respond, “Yes, it would be best, all things considered, if I were to believe p. How unfortunate that I know it to be false!” Evidentialists will insist that we are mistaken in our judgments here, but to accuse us of self-contradiction is an implausibly strong claim for them to make.

This problem can be avoided by changing the proposed norm of belief. Rather than aiming at the external goal of truth, the appropriate norm might instead appeal to internal coherence, as described earlier in this essay. This would equally forbid what Foley calls “near-contradictions”, i.e. believing p whilst also believing the evidence to indicate that p is likely false.[8] But while it forbids believing near-contradictions, it does not forbid straight falsehoods. So this alternative has all the advantages of Shah’s account, without the above drawback. It can explain transparency without denying the coherence of the statement “p is false but S ought to believe it.”

Even worse, in light of these examples, Shah’s account can no longer explain transparency at all. It is possible, if perhaps mistaken, for us to take practical reasons as bearing on what another deliberator ought to believe. But the concept of belief is engaged in third-personal judgments just as in first-personal ones. Indeed, as noted above, there may be many cases in which we exercise our concept of belief without necessarily feeling bound by evidential norms in judging what ought to be believed. So it cannot be concept-mediated recognition of the normative hegemony of truth that explains why transparency occurs in first-personal doxastic deliberation. Third-personal deliberative judgments may also involve the concept of belief, but no such norm is necessarily recognized. So Shah’s account fails. An adequate explanation for transparency must rest on a feature that is unique to first-personal doxastic deliberation, and his does not.

Focusing on the first-personal aspect of transparency can also serve to bring related phenomena to our attention. Consider the Moore-paradoxical incoherence of asserting “p, but I lack sufficient evidence that p is true.”[9] Or consider the related principle: It is impossible to believe p whilst recognizing that one lacks adequate evidence for the truth of p.[10] Just as we cannot knowingly believe falsehoods, so we cannot knowingly believe what could very well be false for all that the evidence shows.

All three phenomena have the same root explanation. To deliberately believe p – that is, to believe p as a result of doxastic deliberation or rational reflection – is to judge that p is true. If one recognizes that the evidence is against p, then one cannot coherently judge that p is true, and so cannot deliberately believe p. As for practical reasons, while they may lead one to judge that p would be good to believe, they have no bearing on rational judgments of whether p is true. But again, it is this latter judgment that is constitutive of deliberative belief, and this explains why practical reasons can have no influence over our doxastic deliberations. This is no merely contingent fact about human psychology. Rather, it is a conceptual fact about deliberate belief that it arises through settling the question of what is true. Even if we imagine creatures that could respond to practical reasons and bring themselves to have a belief through sheer force of will (immediately forgetting about its origin so as not to undermine the new belief), this would be no different in principle from taking a magic pill.[11] Their intentional action is still getting to believe, rather than believing itself, for they never made an intentional judgment that p is true. Instead, after judging that p would be good to believe, they acted on themselves – through sheer force of will – to bring it about that they held this belief.

We are led to conclude that we cannot deliberately believe for practical reasons. But what then? Evidentialism only follows if we assume the internalist claim that a reason for S to Φ must be capable of being a reason for which S Φs.[12] But this claim is mistaken. A consideration may count in favour of Φ-ing even if that consideration is necessarily inaccessible to the agent herself. For example: Suppose that God will reward people who act from selfless motives. This is clearly a reason for them to be selfless. But it is not a reason that they can recognize or act upon, because in doing so they would be acting from self-interest instead. They would no longer qualify for the divine reward, so it would be self-defeating to act upon this reason. In effect, the reason disappears upon being recognized. Nevertheless, it seems clear that, so long as the agent is unaware of it, the divine reward is a reason for them to act selflessly. So internalism is false. Just as there can be unknowable truths, so there can be inaccessible reasons. This doesn’t seem to change when we modify the example so that God rewards people who believe what the evidence indicates to be true. The practical reward counts in favour of these beliefs, even if the agents could never recognize or act upon this reason in their doxastic deliberation. No doubt there is much more to be said here, but the debate over internalism goes beyond the scope of this essay. Let us simply note that Shah’s internalist premise is certainly open to contention.

If we reject internalism then there is no longer any conflict between Musgrave’s distinction and Shah’s phenomenon of transparency. We may grant that agents cannot deliberately believe p without taking themselves to have settled that p is true, yet still hold that there can be other reasons for belief besides those that are potentially accessible to the agent. More generally, it may help to expand Musgrave’s distinction to include a third type of belief-related reason. The core controversy is over reasons for (the act of) believing. We have clearer-cut cases at either extreme: reasons for the truth of the thing believed are clearly just truth-indicative reasons, and reasons for getting oneself to believe can clearly include practical reasons. Whether one thinks that any of these practical reasons can also count as reasons for believing per se will probably depend upon whether one rejects internalism. But even if one rejects pragmatic reasons for belief, I have also pointed to the possibility of epistemic yet non-truth-indicative reasons for belief. These can arise from the normative force of internal coherence, a force which applies independently of external goals such as truth. Thus it seems that the weight of evidence is against evidentialism, and that we can indeed have reasons for belief that are not reasons for the truth of the thing believed.

[1] Musgrave, p.21.

[2] Shah, ‘How Truth Governs Belief’, p.2.

[3] Musgrave, p.24, original italics.

[4] Scanlon, p.20.

[5] Shah, ‘How Truth Governs Belief’, p.1.

[6] Shah, ‘How Truth Governs Belief’, p.44.

[7] Ibid, pp.25, 34.

[8] Foley, p.215.

[9] Adler, p.272.

[10] Ibid, p.273.

[11] Hieronymi, p.24.

[12] Shah, ‘A New Argument for Evidentialism’, p.5, uses this assumption as an undefended premise in his argument.


Adler, J. (1999) ‘The Ethics of Belief: Off the Wrong Track’ Midwest Studies in Philosophy, 23: 267-285.

Foley, R. (1987) The Theory of Epistemic Rationality. Harvard University Press.

Hieronymi, P. (forthcoming) ‘Controlling Attitudes’ Pacific Philosophical Quarterly.

Musgrave, A. (2004) ‘How Popper [Might Have] Solved the Problem of Induction’ Philosophy, 79: 19-31.

Scanlon, T. (1998) What We Owe to Each Other. Harvard University Press.

Shah, N. (2003) ‘How Truth Governs Belief’ in PHIL 471 Course Reader (also published in Philosophical Review).

Shah, N. (forthcoming) ‘A New Argument for Evidentialism’.