Wednesday, June 30, 2004

This sentence is false

I've always found self-referential paradoxes fascinating. The most obvious one is the Liar's paradox, like the title of this post. Another I like is "this sentance has threee errors" (the third error being that it has only two. But wait, that means...).

Then there are the Godel-style ones which demonstrate the limits of logic. Consider the theorem G: There is no proof of G*. Is G true? Then it's a true statement with no proof (and with a bit of extra work we end up having to conclude that mathematics is incomplete). Is G false? But then a proof of G exists - a proof of a falsehood (so mathematics is inconsistent)!

* = I've simplified that a bit, what we really want to consider is the theorem G: It is not the case that there exists x and y such that (x is a proof of y, and y is the Godel number of G), expressed in the language of arithmetic.

Or in metaphysics, realists about 'properties' have to face various problems regarding exemplification. [Quick background: an object is said to 'exemplify' a property if that property is possessed by the object - e.g. red objects exemplify the property of redness. A property is said to be 'self-exemplifying' (SE) if it exemplifies itself. For example, the property of 'Being a property' is SE, but 'being triangular' is not, since properties themselves have no shape.]

For consider the property of Being Non-Self-Exemplifying (BNSE). Does this property exemplify itself? If so, then it has this property: namely, being not SE - so it does not. If not, then it's not SE, so it has the property of BNSE, so it is! Either way, we have a contradiction.
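For the computationally inclined, the BNSE paradox can be mimicked in a few lines of Python (a playful sketch of my own, not part of the argument above): model a property of properties as a function that takes a property and returns a truth value, and define BNSE accordingly.

```python
def bnse(p):
    # The property "being non-self-exemplifying": p applied to itself is false.
    return not p(p)

# Does BNSE exemplify itself? Evaluating bnse(bnse) unfolds to
# not bnse(bnse), then to not not bnse(bnse), and so on without end:
# where the logic has no consistent answer, Python's stack gives out.
try:
    verdict = bnse(bnse)
except RecursionError:
    verdict = "no stable answer"
```

The infinite regress in the evaluation is the computational shadow of the contradiction: neither "yes" nor "no" is a stable answer to whether BNSE exemplifies itself.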

Feel free to mention your own favourite paradoxes of this sort in the comments.

What I've just been thinking about though, is the Rochester philosophy blog, whose name is "This is Not the Name of this Blog". At first I thought it was another nice paradox, but now I don't think it works at all. Instead, it's just plain false. For it is the name of the blog, and in asserting that it's not, it asserts something which is false. But there's no paradox there - its falseness does not imply its trueness, or anything interesting like that. So that's a pity.

Culture Wars

The Enlightenment Project has a fascinating cultural critique suggesting liberals are to blame for conservative populism in the US (thankfully the masses are more liberal here in New Zealand!)
We despised and lionized the working class by turn, and always patronized them... Working class people recognized that they were being trashed and fought back.

[As a solution,] I would favor genocide--cultural rather than material. We could let them in and by doing that dismantle their culture--we could wipe out the working class by assimilation. If, as the author suggests, class has become a matter of ethos rather than economics then anyone can join the elite. Ideas are free: anyone can, and should, be a liberal "intellectual."

But we excluded the working class by romanticizing their "culture" and despising their religion, by adopting passes and signs to keep them out and by consuming positional goods to set ourselves apart from them. We despised their food preferences and body fat, their fundamentalism, their leisure activities, their grammar and the little boxes made of ticky-tacky in which they lived because those were the things that set them off from us and gave us claim to elite status. We didn't want them to slim down, repudiate fundamentalism, speak grammatically or develop a preference for microbrews over Budweiser because then our status symbols would be tarnished: there's no point in eating whole grains or practicing Wicca if everyone else is.
This is all interesting stuff - and I do find the central idea (that "anyone can, and should, be a liberal 'intellectual'") quite attractive - but I'd like to hear a bit more about precisely how we are to entice the working class to adopt our culture. I am especially skeptical of the suggestion that the problem is with our motivation: I would have thought that most liberals would be overjoyed to see more people "repudiate fundamentalism" - the difficulty is in convincing them to do so! We can open the door as wide as we like, it won't do any good if nobody wants to walk through it.

The question of how middle-class liberals should relate to working class culture is a tricky one. Should we respect their (conservative) values? Try to convert them to ours? Is education the key? Is assimilation or multi-culturalism preferable? I'd be very interested to hear others' thoughts on this issue.

Tuesday, June 29, 2004

On Liberty

J.S. Mill is famous for setting the limits of individual liberty through his harm principle: "the only purpose for which power may be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not sufficient warrant". (On Liberty, Ch. 1: Introductory)

Mill distinguished between 'other-regarding' actions, which fall within the public sphere of potential state interference, and 'self-regarding' actions, which fall within the private sphere and are no business of anyone else. The difficulty is in deciding where to draw the line, for it is clear that everything we do affects others in some (perhaps indirect) way. The decisive factor for Mill is not whether we merely affect others (for we always do), but rather, whether our actions affect the objective interests of anyone else. This appeal to objectivity mirrors Bentham's characterization of pleasure and pain as "real entities", rather than the purely subjective judgments of people (this move being necessary to prevent people's "offense" at others' opinions from counting in the utilitarian calculus). For Mill, an objective interest was long term, observable to outsiders, and concerned with the relationship between means and ends. To thwart a fickle or arbitrary desire, then, would not count as a genuine 'harm'.

Still, it seems possible that one might have a long-term 'interest' which nevertheless does not justify impinging upon the liberty of others. A businessman, for example, would be right to consider the new competition down the road to be against his interests. Nevertheless, it is not a harm which he has any right to prevent. Mill suggested that a genuine harm involves the violation of "certain interests which... ought to be considered as rights". It must be emphasised that this is not an appeal to anything as absurd as natural rights, but rather, is simply an extension of utilitarianism.

Mill was a Rule Utilitarian. Rather than judging the morality of acts directly against the utility principle (the greatest happiness for the greatest number), he opted for a more subtle, indirect approach. Mill thought acts should be assessed against a set of secondary principles (or rules) which are in turn derived from the ultimate principle of utility. He recognised that if everyone tried to maximise happiness, with no concern for human rights or justice, the inevitable result would be much unhappiness. An indirect approach is much better: identify those general rules which would (if universally followed) tend to maximise utility, and get people to follow those rules instead. This, then, is what he meant by 'rights' - those "certain interests" whose protection would help maximise the general happiness. (Clearly, immunity from competition would not thus qualify!)

This raises the question of whether granting so much liberty would actually serve to maximise utility. Mill focused on two aspects in particular, freedom of speech, and freedom of action.

Freedom of Speech:
He advocated nearly limitless freedom of speech. Censorship, he argued, assumes infallibility - which history demonstrates is often mistaken. Furthermore, even if we suppose that the view of your opponents is false, Mill argued that society is still better off not to suppress it. This is because such challenges will help us to think critically and re-affirm/strengthen the foundations of our own ideas, avoiding their degradation into "a dead dogma". Mill pointed out that "He who knows only his own side of the case, knows little of that".

This argument is based on a sort of conceptual Darwinism: the conviction that in a 'free market of ideas', the best will come to the fore and survive. Some argue for censorship not because an idea is false, but because they say it will be harmful to society (atheism is a common target here). However, as Mill points out, this too assumes infallibility - the question of whether any specific view is helpful or harmful to society, is itself a proposition which might be true or false, and so, as argued above, should be open to debate: "the assumption of infallibility is merely shifted... The usefulness of an opinion is itself a matter of opinion".

There are some limits to free speech however. Mill was sensitive to the modern idea of a 'speech act' (that speech is action, and should be treated as such). If a man gives an inflammatory speech rousing a mob to violence, then he has clearly violated the harm principle. But as always, it is far from clear where to draw the line, and Mill tended to err on the side of liberty.

Freedom of Action:
It might seem that the harm principle is a flagrant violation of utilitarianism, for we can easily think of cases when the general happiness could be best served by paternalistically imposing our will on others 'for their own good' (e.g. limiting the availability of harmful drugs). So Mill's insistence that "his own good... is not sufficient warrant" (for intervention) will require some justification.

The crucial matter here is the way in which Mill's utilitarianism varies from Bentham's. As already discussed, he was a Rule, not Act, utilitarian. But he also went further, and made qualitative (not merely quantitative) distinctions about utility. That is, he considered some pleasures to be intrinsically better than others - better to be a discontented man than a happy pig, and all that. In a crucial passage at the start of On Liberty, Mill tells us: "I regard utility as the ultimate appeal on all ethical questions; but it must be utility in the largest sense, grounded on the permanent interests of man as a progressive being" (emphasis added). This can help us to understand how he came to the conclusion that a rule in favour of much personal liberty would tend to maximise utility.

Freedom, Mill thought, was beneficial both to the individual and to society. Society would benefit from getting to observe various different "experiments of living". The diversity which results from freedom is the only sure way to guarantee long-term progress in society, as people can observe and learn from each other, in a semi-Darwinian process of trial & error.

From an individual's perspective, freedom is necessary to develop their individuality as a human being - an essential feature, for Mill, of a good (high-utility) life. To this, James Fitzjames Stephen objected that Mill was simply mistaken: people are lazy creatures, and given a choice they will tend to choose idleness and passivity. Their personal qualities and individuality could be better furthered by forcing people into activities which they otherwise wouldn't bother with.

Mill anticipated this objection, and rebutted it by appealing to the 'best judge' view, i.e. individuals are the best judges of what is best for them. Here Mill appeals to Bentham's notion of individuals having privileged information about what makes them happy. He also goes further, pointing out that individuals are the most motivated to look out for their own interests. So although there might be rare occasions when paternalism would indeed have the best results, in the vast majority of cases it would prove mistaken. As a general rule, we ought to respect individual liberty, for that is what will tend, overall, to maximise utility.

Being based on the principle of utility, Mill recognises that freedom should not be granted to those who are incapable of benefiting from it. So paternalism is well justified (indeed, morally required!) in the case of children and "barbarians". However, we should aim to impart to them the skills and knowledge necessary to become free agents in their own right.

Mill aimed to justify a limited sphere of personal liberty of action, and an almost limitless freedom of expression. His utilitarian justification focused on the benefits of individual spontaneity, both to the individual and society at large. It is to be valued for expressing the highest in human life itself (the qualitatively 'best' pleasures), and for promoting the development of civilization.

Monday, June 28, 2004

Political Fictions

Natural Law
I've just been reading Jeremy Bentham's rather good refutation of natural law/rights. A major problem for the whole 'natural law' approach to morality & politics is that there doesn't seem to be the slightest bit of evidence to suggest that any such laws exist. They're mere inventions, made up to support the prejudices of their advocate. Despite their supposed 'self-evidence', such 'laws' end up conflicting with those proposed by others. As Bentham notes, "the systems are as numerous as authors".

The whole approach seems to involve a confusion of physical (natural) and human laws. Natural laws cannot be broken, they simply describe the way our universe works - there are no normative notions involved, for they do not allow themselves to be broken. "If there were a law of nature which directed all men toward their common good, [human] laws would be useless... it would be kindling a torch to add light to the sun."

The notion of natural rights is similarly incoherent: "nonsense on stilts", according to Bentham. "Right is with me the child of law... A natural right is a son that never had a father".

One point I particularly liked was that "those who cite [natural law] with so much confidence... must suppose that nature had some reasons for her law. Would it not be surer, shorter and more persuasive, to give us those reasons directly?"

Quite apart from being false, natural law should also be opposed because of its negative consequences. These fictions also encourage dogmatism: "There is no reasoning with fanatics, armed with natural rights, which each one understands as he pleases". Further, there would be anarchy if each individual rebelled whenever the state offended his perception of Divine or Natural law (as apparently recommended by Blackstone, a contemporary of Bentham).

Social Contract Theory
I don't mind social contract theory as much as natural law, since (at least in my experience) it tends to support liberalism or libertarianism rather than conservatism. But while we're on the topic of political fictions (and since I've been revising Hume a bit too), it's surely worth a mention...

Original Contract:
Some of the early social contract theorists wrote as if the social contract were a historical fact - as if men in the 'state of nature' had come together and explicitly consented to submit to a government for their mutual benefit. This consent was supposed to be the basis of state legitimacy. The problem, as Hume pointed out, was that no such events ever took place - and even if they had, this would be insufficient for modern legitimacy.

As a historical fact, the origin of states tended to be a violent seizure of power (whether through conquest or usurpation); the consent of the populace was never a serious consideration. In any case, such consent would only bind those who were party to it. New generations never got any say in the matter. We surely would not consider them bound by their ancestors' choices.

Tacit Consent:
A more promising move is to suggest that citizens have somehow given their tacit consent by living in society. As Socrates argued in the Crito, he had lived in Athens his whole life, despite numerous opportunities to leave, and so he had given his implicit consent to be bound by the laws of the city.

The problem here, Hume suggested, is that a poor peasant doesn't have any real choice about what land he lives in. We might as well say a sleeping man put on board a ship has freely consented to obey the captain, for he could jump overboard (and drown) if he didn't like it.

Further, Social Contract theory analyses political obligation in terms of the obligations arising from contractual promises. But this begs the question of the basis of contracts - why are we obligated to fulfill our promises?

Hume thought there were two kinds of moral duty: those arising from natural instinct (e.g. sympathy), and artificial ones arising from "a sense of obligation, when we consider the necessities of human society". Examples falling in the second class include "fidelity" (keeping promises), and also "allegiance" to the state.

But then, Hume asks, what point is there in attempting to reduce allegiance to fidelity, when both rest on the same foundation? There is no need for this elaborate fiction, simple utility is enough: "The general interests or necessities of society are sufficient to establish both".

I don't find this objection decisive, however. It seems vulnerable to a Kantian view of humans as free agents, subject to the categorical imperative of always treating other people as ends, and never as a means only. Such a view would seem to provide a deontological (rather than consequentialist) justification of promise-keeping, but not government. On such a view, some sort of consent would be necessary to legitimise the State after all.

The first objection does seem decisive though: it simply isn't true that everyone can choose their country of residence. So this cannot be considered grounds for tacit consent.

Hypothetical Consent:
More sophisticated accounts are based not on real consent (whether explicit or tacit), but rather, mere hypothetical consent. That is, we merely ask whether people would consent to the government over anarchy, if asked.

As commentators here have noticed, Hume's arguments simply don't apply to Rawls' style of social contract theory. [Well, that's not entirely accurate. The 'same foundations' objection discussed above still seems relevant. But as I said, it's not decisive, at least not for deontologists.] Instead, we may return to Bentham and his distaste for fictions.

In response to those who suggested contractual theorising was justified on the basis of being a useful fiction, Bentham made the obvious rejoinder that if you're going to assert false premises, why not cut to the chase and just assert the conclusion? "Indulge yourself in the license of supposing that to be true which is not, and as well may you suppose that proposition itself to be true, which you wish to prove, as that other whereby you hope to prove it."

The wittiest objection I've heard is that "a hypothetical contract isn't worth the paper it's not written on"! I forget who said that though (leave a comment if you know).

Hypothetical consent cannot create a real obligation (if one results, it's because of reasons other than the 'consent'). There's also the problem of anarchists who wouldn't consent anyway.

A more promising avenue might be to ask whether a government is worthy of our consent. This is closer to utilitarianism. The whole notion of the social contract would then be nothing more than a heuristic, or mental shortcut. It cannot serve as the real foundation of state legitimacy.

Update: Siris defends natural law from Bentham's criticisms.

Creator's Rights

Some people argue that if humans were created by God, then we have a moral duty to obey him. I've never understood how that conclusion is supposed to be justified. I've heard analogies based around the idea of people getting to use their (inanimate) creations however they please, but there are obvious problems with the analogy - not least that it dehumanises us into mere 'objects' or playthings of God.

Anyway, I was just thinking about how the general principle here (that creations have a moral duty to serve the ends of their creator) would apply within an atheistic framework. For suppose Richard Dawkins is right, and we are 'lumbering machines' built by our genes for the purpose of enhancing their replicative abilities. (Perhaps 'purpose' has misleadingly anthropomorphic connotations; feel free to use the more neutral word 'function' instead.)

If we are created by our genes, does that mean we have a moral duty to serve them? That is, should we be dedicating our lives to reproduction, looking after relatives, and just generally spreading our genes? Surely not. But then this seems to imply the falsity of the Creator's Rights principle. Though perhaps one could object that it does not apply to creators who lack intentions (and thus genuine 'purposes').

But we can imagine biotechnology advancing to such a point that humans could create a new species of intelligent lifeform. Would they be morally bound to obey us? Again, surely not! If anything, we would be the ones with a moral duty towards them!

Saturday, June 26, 2004


Two exams down, one to go. That one being on political theory, I'll probably write up some blog posts as I revise over the next couple of days. Possible topics include Tocqueville and Mill on democracy, Mill on liberty & utility, Hume on the social contract, and Bentham on natural rights (and possibly utilitarianism).

For a couple of interesting links:
The Grey Shade recommends a "2-stage process for criminal proceedings", where juries first decide between proven/not-proven, and if 'not proven', they choose whether or not to give a "positive finding of innocence" through a 'not guilty' verdict.

David Farrar has a useful summary of NZ bloggers' reactions to the early success of the Civil Unions Bill.

For my own two cents, I must say I'm surprised and disappointed that John Tamihere voted against it - I wouldn't've picked him as a social conservative. Also surprising was that some good can come out of NZ First after all, with the following quote from Brian Donnelly: "You don't make your own candle glow brighter by blowing out someone else's". Very nicely put.

Update: Tamihere explains his vote:

The biggest (or at least most controversial) event of the week as far as politics was concerned was the introduction of the Civil Union Bill. I voted against – timing and take-up of social moves are important and I judged my constituents' appetite to be against it. Frankly, I've got a whole list of higher priorities. (Emphasis added)

He's representing bigots so he thought he'd better go along with them. Charming.

Wednesday, June 23, 2004

Freedom & Moral Responsibility

I've argued before that I think freedom & determinism are compatible. Now I want to look at it from the vantage point of moral responsibility.

Suppose morality is an essentially social phenomenon. Its pragmatic societal function is the socialisation of individuals - the manipulation of their desires through the mechanisms of blame and praise. (I will try to defend this view in a future post, for now just bear with me. I hope it sounds at least vaguely plausible.)

Then, presumably there would be no point in holding someone morally responsible if blame/praise would have no effect on them (e.g. addicts, the mentally ill, or the coerced). This is consistent with our intuitions, I think.

This moral perspective can help to clarify some aspects of a compatibilist freedom, supposing S is free iff S is morally responsible. One compatibilist conception of freedom is based around the idea of freedom from coercion, i.e. we are free only if we are not coerced. The difficulty here is in identifying precisely what coercion consists of, and where to draw the line. Presumably we are coerced if someone points a gun to our head. But what about if they merely threatened to kick us in the shins? Or splash us with a water pistol? Presumably the latter, at least, is not real coercion!

I think the consideration of counterfactuals, or close possible worlds, can help us here. Coercion (and thus moral responsibility) is a sliding scale, whereby we are more responsible the more open we are to the influence of praise/blame (recall the pragmatic basis of morality we are using here).

Consider those close possible worlds where S had been socialised slightly differently. We then ask, does S behave differently in those worlds? If so, then S is clearly receptive to socialisation (blame & praise), and so we can consider S morally responsible. If not, however, then it would seem that it is inappropriate to hold S responsible. For either the blame/praise did not alter S's desires (e.g. if S is a mentally ill sociopath), or S was coerced such that his own desires were not the causes of his behaviour. Either way, the social exercise of moral dis/approval would have no impact on S. S, in this case, is not to be considered a moral agent.
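The counterfactual test can be made vivid with a toy model (purely illustrative - the agents, worlds, and degrees of blame here are my own invention, not anything from the compatibilist literature): represent each close possible world by a slightly different socialisation history, and ask whether the agent's behaviour varies across them.

```python
def morally_responsible(agent, close_worlds):
    # Toy test: S is responsible iff S's behaviour varies across close
    # worlds in which S was socialised (blamed/praised) differently.
    behaviours = {agent(world) for world in close_worlds}
    return len(behaviours) > 1

def receptive(blame_level):
    # Responds to socialisation: refrains from stealing once blamed.
    return "refrains" if blame_level > 0 else "steals"

def unreceptive(blame_level):
    # A sociopath, or a coerced agent: same behaviour however socialised.
    return "steals"

worlds = [0, 1, 2, 3]  # degrees of blame received during upbringing
responsible_a = morally_responsible(receptive, worlds)    # True
responsible_b = morally_responsible(unreceptive, worlds)  # False
```

The receptive agent's behaviour depends on how he was socialised, so blame and praise have a point; the unreceptive agent behaves identically in every close world, and on this picture is not a moral agent at all.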

Tuesday, June 22, 2004

Slimy Equivocation

Schools Urged to Teach Religion:
"A lot of New Zealanders, I think, are very nervous of the word 'religion' because they think it's indoctrination, but the danger is if you miss that whole dimension of intellectual debate out, you deprive young people of the opportunity to engage with some of these really important issues, such as genetics, or the war in Iraq."

Ugh. Just look at it. The blatant idiocy just jumps right out of the page at you: that unquestioned assumption that religion and values are the same thing. Has nobody heard of secular philosophy?

There's a common bit of intellectual dishonesty on display here, whereby someone defines 'religion' to mean one thing (usually 'thoughtful reflection', 'values', or some other aspect of philosophy), and then uses that to advocate a different sort of 'religion' altogether (i.e. belief in a supernatural deity). Ophelia Benson was talking about this sort of slimy equivocation not so long ago, so I won't repeat her.

I just want to make one (surely obvious?) point: If we want more young people to think about important moral issues, why not cut straight to the point and encourage the discussion of those issues? There is no reason here to be teaching religion in schools. Just teach philosophy. It's infinitely better.

Objects of Perception

What are the objects of perceptual awareness? The intuitive answer is probably "physical objects" - we think our perceptions are perceptions of the world (Direct Realism). The Shallow Foundationalist relies on direct realism when he claims that perception can directly justify our beliefs about the external world.

The Argument from Illusion is meant to cast doubt on this. There are many ways our perceptions can play tricks on us - for example, objects half-immersed in water look bent. The real object is not bent though, so whatever it is that we are perceiving, cannot be the actual object. Instead, we merely perceive a mental object, or sense datum. The Deep Foundationalist insists that only beliefs about sense data (not the external world) can serve as foundational beliefs (though we might be able to infer from these to beliefs about the world).

This highlights the gap between appearance and reality, between evidence and truth, between what we experience and what is actually there. Our senses clearly do not provide us with infallible knowledge of the world. But it gets worse. For if all we perceive is sense data, not the world itself, then it would seem that our perceptions don't tell us anything about the outside world at all. We cannot justify beliefs about the external world solely from sense data.

We seem to have the following options:
1) Phenomenalism (anti-realism) - deny that the physical world exists. Minds and sense-data are all there is. The realist structures described by science are just a "useful fiction" which help us to predict the order within our experiences.

2) Representational Realism - Suppose we add in the additional assumption that our sense data are usually caused by, and resemble, real objects. That would then allow us to make justified inferences about the external world (not infallible ones, but at least generally reliable).

[Either way, by basing all justification on our subjective sense impressions, the deep foundationalist seems committed to the existence of a Wittgensteinian private language, which could be a problem.]

#2 seems preferable, but how could we possibly know whether the assumption it relies upon is true or not? This is really just the problem of skepticism all over again. We would seem to have to resort to some form of externalism, whereby justification for our beliefs is independent of our awareness of the justification (e.g. possible worlds externalism, or perhaps reliabilism - i.e. the belief must be caused by a reliable process in fact, whether we realise it or not).

Alternatively, the internalist could accept that perception/experience does not justify our beliefs after all. Instead, it causes beliefs to form in the first place. Where does the justification come from, then? We would have to give up on foundationalism, and accept coherentism instead: no beliefs are intrinsically or categorically justified. Instead, a belief is justified to the extent that it coheres with our other beliefs.

Monday, June 21, 2004

Bundles of Joy

Most of my metaphysics posts so far have concentrated on the universals debate, so now I'll move on to considering particulars instead. (I've an exam on Friday, so I'm basically just using this opportunity to revise, plus it's kinda fun anyway.)

Concrete particulars are simply those things which we call "objects" in everyday life. Now, the big question is whether these objects can be analysed in terms of anything simpler (their constituent parts). Austere nominalists suggest not, but the metaphysical realist will want to analyse objects in terms of their attributes/properties/universals.

The (reductionist) Bundle Theory suggests that particulars are simply "bundles" of attributes which happen to be co-present. For example, if you have a ball with properties: {is spherical, has a 10 cm diameter, is made of rubber, is red, etc...}, then the ball simply is the bundle of those properties. Those are its only constituents, there's nothing more to it than that. (But note that any object probably has an infinite number of such properties, e.g. "being green or not-green", or "not being the number 42", etc.)
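To make the picture concrete, here's a minimal sketch (my own toy model, nothing from the literature) treating a bundle as a set of attributes, with exemplification as simple set membership:

```python
# Bundle theory in miniature: the ball just IS its set of attributes -
# there is no further "thing" over and above the bundle.
ball = frozenset({
    "is spherical",
    "has a 10 cm diameter",
    "is made of rubber",
    "is red",
})

# Predication ("the ball is red") becomes a membership claim.
is_red = "is red" in ball
```

(The infinite disjunctive and negative properties mentioned above are quietly omitted, of course - a set with infinitely many members doesn't fit in a code listing.)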

Substratum Theory, by contrast, insists that there is something else, something which possesses those properties, a special something at the core of an object. They call this special something a 'bare particular', or 'substratum'. It functions as a sort of empty shell, which gets filled in by all the properties the object possesses.

Now, many philosophers have objected to the substratum theory on the grounds that it is bizarre, stupid, and borderline incoherent. All good reasons to reject a theory, I suppose. (Actually, the main reason is just good old Ockham's Razor - why invent substrata if we don't need to?) Anyhoo, Bundle Theory is certainly the default choice, so let's see if it will work.

Objections to Bundle Theory:
(1) It renders subject-predicate discourse tautological.
Recall the ball described above, and consider the sentence "the ball is red". According to bundle theory, this sentence is really saying "the bundle of attributes {is spherical, has a 10 cm diameter, is made of rubber, is red, etc...} includes the attribute of being red". In other words, "this red thing is red" - an obviously empty claim!

However, this objection is easily overcome if we distinguish metaphysics from epistemology, i.e. the difference between what exists, and what we know to exist. It is entirely possible to refer to an object, despite being in a position of ignorance regarding its attributes - rather like it's possible for Lois to refer to Clark Kent, without knowing that he is Superman. So although referring to the ball means referring (metaphysically) to the property of redness, the speaker does not necessarily know this. He can refer to the ball, without knowingly (i.e. epistemologically) referring to redness. So the sentence "the ball is red" will indeed contain epistemologically (though not metaphysically) meaningful information.

(2) It is ultra-essentialist.
Bundle Theory implies that objects have the properties they do essentially - i.e. necessarily, if the object had any different properties, it would be a different object. This seems counterintuitive. A book is sitting on my computer desk. But if instead it were sitting on a bookshelf, surely it would still be the same particular book, not a different one. Ultimately, the bundle theorist just has to bite the bullet and insist that our intuitions there are, strictly speaking, mistaken. I plan to post more about strict vs common-sense identity soon.

I should mention, though, that the opposite extreme of substratum theory is similarly implausible in this regard. Substratum theory is anti-essentialist: all that matters for identity is the substratum; you could change any of the properties and it wouldn't matter. You would still be the same object, even if you were a beetle, or a rock, or even the number 7. Riight...

(3) It implies a false principle - the Identity of Indiscernibles
This is the strongest objection against Bundle Theory, since it actually, erm... works. Here's the problem.

According to the Principle of Constituent Identity (PCI): If any two objects are made of all the same constituent parts, then they are the same object. But, according to Bundle Theory (BT): The only constituents are attributes. These two principles together imply a third, the Identity of Indiscernibles (II): If any two objects have all the same attributes, then they are the same object.
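The inference can be made explicit in first-order terms (my own symbolization, writing $C(x)$ for the set of $x$'s constituents and $A(x)$ for the set of its attributes):

```latex
\begin{align*}
\text{(PCI)}\quad & \forall x\,\forall y\,\big(C(x) = C(y) \rightarrow x = y\big) \\
\text{(BT)}\quad  & \forall x\,\big(C(x) = A(x)\big) \\
\text{(II)}\quad  & \forall x\,\forall y\,\big(A(x) = A(y) \rightarrow x = y\big)
\end{align*}
```

Substituting (BT) into (PCI) yields (II) immediately, so whoever accepts the first two principles is stuck with the third.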

However, II seems to be a false principle. We can imagine a perfectly symmetrical universe with exactly 2 objects in it, e.g. two golden spheres, which exactly resemble each other. They have all the same attributes. But then II implies that they are the same object, which they're not! So II is false. Thus BT must be false.

There are a couple of ways to argue that the spheres actually have different attributes:

a) The spheres occupy different areas of space. This requires that we conceive of space as being absolute, an object in itself, rather than a mere relation between objects. (For clearly the spheres are in the same relation to each other!) But then we are in danger of regress, for how are we to identify one portion of space as different from the other? This response then begs the question, by assuming what we have set out to prove.

b) Haecceity / Impure Properties. Call one sphere 'A', and the other one 'B'. Then we can say that A has the property of "being identical to A", which B does not! However, recall that we are attempting an analysis of what particular objects consist of. It would clearly be circular to appeal to the concept of a particular object as part of this analysis/definition. Yet that is precisely what this response does, by appealing to 'impure properties' (properties which already involve the notion of a particular object). Thus, we really should update the BT and II principles mentioned above, by replacing each mention of "attributes" with "pure attributes" instead.

c) Trope theory. This is the Bundle Theorist's only real option. If you understand attributes/properties as being multiply-exemplifiable universals, then counterexamples to II are possible. However, this can be avoided by adopting Trope theory instead. If we understand attributes as tropes, then no two distinct objects (not even the golden spheres) will ever have any (let alone all) of the same attributes - they can have exactly resembling tropes, but they are nevertheless numerically distinct.

I have to say, I think properties are a load of bunk, so trying to analyse concrete objects in terms of these fictional entities isn't gonna do a lot of good. But for those who think differently, the best option is definitely bundle theory with tropes. Substratum Theory is ontologically bloated, but Bundle Theory won't work with universals, because of the II objection. Alternatively, there's Aristotelian 'substance theory', which I haven't mentioned here, but that's pretty lame anyway. (Ha, yeah, not much of an argument, I know, but I'm lazy. Sue me.)

Artificial Empatelligence

Jonathan Ichikawa recently noticed an interesting contrast between how different businesses want to portray the mechanical/computational aspects of their service:
[A]n amusing message from Amazon: "Dear Customer, We've noticed that many customers who've purchased albums by Various are also interested in music by John Williams."...

Consider the extremely similar phenomenon of Google's Gmail ad-targeting... Gmail wisely does not follow Amazon in anthropomorphizing its pattern-analyzing self... They're having enough privacy trouble as it is. Apparently there's an important difference between Amazon and Gmail: Amazon seems to thrive on selling itself as a smart person who watches us very carefully and anticipates our desires, while Gmail has to work very hard to avoid that impression.

Now what I found interesting about this, is its implications when considering the growing importance of emotional labour in the modern economy (thanks to Just Left for the link):
Two ways of measuring the demands of a job have defined industrial relations since the beginning of the Industrial Revolution - time and effort - but a third has emerged in the past few decades: emotional labour. It's not just your physical stamina and analytical capabilities that are required to do a good job, but your personality and emotional skills as well. From a customer services representative in a call centre to a teacher or manager, the emotional demands of the job have immeasurably increased.

...Empathy has become big business, according to consultancy Harding & Yorke, which claims to be able to measure every aspect of the emotional interaction between customer and company. If a company wants its employees to sound warmer or more natural, it turns to the likes of Bob Hughes at Harding & Yorke. Delight your customers and they'll be back, is his watchword: empathy makes money.

...This kind of cognitive restructuring of employees' responses is required to pamper the customer's every whim. Such self-control can be very hard work, as management theorist Irena Grugulis points out: "Expressing warmth towards and establishing rapport with customers may provide a genuine source of pleasure for workers. Yet in practice, emotions are incorporated into organisations within strict limits. Emotion work does not necessarily legitimise the expression of human feelings in a way that supports the development of healthy individuals, instead it offers these feelings for sale."

I feel that there's an important link to be made here, but I'm having trouble putting my finger on it. I guess it's to do with the future presentation of computational processes - will they tend to get 'personalised', or "dressed up", to more and more try to simulate a real person? Or will they instead remain cold and impersonal, merely mechanical, to assure us that there's no threat to our own humanity?

One interesting issue regards the possibility of utilising mechanical processes (e.g. complex computer programs) capable of easing the emotional workload. As Jonathan mentioned, some people find it disconcerting when computer programs mimic a personal touch. But I find the "friendliness" of telemarketers (for example) to be no less artificial and off-putting. In effect, we're currently asking real people to pretend to be machines pretending to be real people. The Guardian article suggests that this puts an enormous strain on the workers involved, who are instructed to "Think of yourself as a trash can. Take everyone's little bits of anger all day, put it inside you, and at the end of the day, just pour it into the dumpster on your way out of the door". So why not skip the outer layer of deception, and just use machines pretending to be real people?

Of course, given current levels of technology, this simply isn't practical. Machines aren't versatile enough to respond - let alone respond appropriately - to all the different possible concerns a customer might have. But I see no reason why advances in artificial intelligence couldn't make improved simulations viable, perhaps within the next few decades.

So I guess the questions I'm raising here are:
1) How will computational processes be presented in future - as personal or impersonal? (Or perhaps both, with a growing divide between the Amazon type vs the Gmail type?)
2) Would it be possible (in future) to use advanced computer programs for emotional labour?
3) If so, would this be desirable?

Sunday, June 20, 2004

Properties, Troperties

(If you're new to metaphysics, you may want to have a quick read of my background to the universals debate.)

Recall that the metaphysical realist thinks that universals exist: when two objects agree in attribute, that is because there is some one thing they have in common. For example, if two objects are both red, then that is because they both possess the same property (universal) - that of "redness".

By contrast, the austere nominalist denies that universals exist - only particular objects exist. If two objects agree in attribute, then that is just a basic unanalysable fact. There is no property of "redness" floating out there in the world. When people refer to "redness", they're really just referring to the set of all the red objects in the world.

Now, there is a nice compromise between these two extremes, called Trope theory. The Trope theorist basically takes universals, and de-universalises them. That is, he agrees with the realist that we can analyse objects in terms of the properties they possess, BUT he denies that it is possible for two objects to share the same property. Properties are understood not as being multiply-exemplifiable 'universals', but rather, as being individual, singular 'tropes'.

If two objects are both red, then that just means that they both possess red tropes. It is important to note that each trope is distinct - even if they are exactly resembling (e.g. the exact same colour), they nevertheless are numerically distinct tropes. In contrast to realism, then, the two objects do not share the same constituent property (trope). Instead, they are made up of two distinct tropes, two different constituents, though the two happen to be exactly resembling.

Trope theory was portrayed rather unsympathetically in class, since both our lecturer and tutor find it counterintuitive ("crazy" might have been the word they used). But I disagree - for although I'm more inclined towards nominalism regarding properties (they don't really exist, do they?), I at least think Trope theory is better than metaphysical realism. And it's not really as counter-intuitive as they suggested.

Consider two jerseys churned out by a factory, which look exactly alike. Are they made of the same material? Well, no, not exactly. They're made of the same type of material (cotton, wool, or whatever), but if you pull out a thread from each jersey, you will nevertheless agree that they are two different pieces (or 'tokens') of material. So trope theory is just like that, but with all the other properties too. Are the jerseys made of the same colour? The same shape? The same size? No, in each case, the shape/size/colour is exactly resembling between the two jerseys, but they nevertheless are distinct tropes.

Problems with sets:
Now, according to the trope theorist, "redness" does not refer to some ethereal universal (unlike the realist), nor the set of all red objects (unlike the austere nominalist), but instead, it refers to the set of all red tropes. But, as I've discussed before, set-based accounts raise certain problems:
One obvious problem with set-based accounts is that properties which are exemplified by all the same objects would refer to the same set, and so, according to this account, would mean the same thing. For example, austere nominalism considers "being a featherless biped" and "being human" to be the same property, which is obviously an unacceptable result. Possible worlds nominalism improves this, since obviously the sets will differ in other possible worlds (where, say, chickens have no feathers). But there will still be some properties which are co-exemplified (i.e. exemplified by the same set of objects) in all possible worlds - Loux gives the example of being triangular and being trilateral (all objects with three angles also have three sides, and vice versa).
Tropes overcome this to some degree (it doesn't matter if properties are possessed by all the same objects, for we are comparing sets of tropes, not sets of objects). But what about non-existent tropes? For example, in the real world, no tropes of either "elven" or "dwarven" actually exist. Now, supposedly "being a dwarf" refers to the set of all "dwarven" tropes, which is the empty set. Yet "being an elf" refers to the set of all "elven" tropes, which is also the empty set. Hence, on the trope-theoretic account, being an elf and being a dwarf are actually the same thing. This, of course, is an unacceptable result.
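The underlying difficulty is just set-theoretic extensionality: a set's identity is fixed entirely by its members, so there is only one empty set. A toy sketch (the trope labels are my own illustrative inventions):

```python
# Sets are extensional: two sets with exactly the same members
# are one and the same set.
red_tropes = frozenset({"red-trope-1", "red-trope-2"})

# No dwarven or elven tropes exist in the actual world,
# so both properties get modelled as the empty set...
dwarven_tropes = frozenset()
elven_tropes = frozenset()

# ...and by extensionality they come out as the very same set -
# i.e. "the same property", which is the unwelcome result.
assert dwarven_tropes == elven_tropes
assert red_tropes != dwarven_tropes
```

The two fixes below amount to changing what goes in the sets (adding merely possible tropes) or refusing to assign empty properties a referent at all.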

There are two ways to solve this problem:
1) Expand the set of tropes to cover those in all possible worlds - an analogous solution to that of the austere nominalist in the quoted paragraph above.
2) Deny that non-exemplified properties refer to anything meaningful. I don't much like this answer, but it's acceptable enough, since it's precisely what the Aristotelian realist believes about universals anyway (in contrast to the Platonist).

Another set-based objection is that mathematical sets have their members necessarily. So if there was one more red trope in the world, that would be a different set - "redness" would refer to something different from what it does now! And that seems odd. This is a powerful objection, but I think it can be overcome by distinguishing strict (philosophical) identity from loose (common-sense) uses of the word. I'll discuss that more in a future post.

Saturday, June 19, 2004

Category: Politics & Society

History of Political Thought:


Justice, Equality, and Desert:

General Political Philosophy:

Education and Civics:
  • Education - What should schools be teaching? I argue that a greater emphasis on skills and understanding over mere facts/information would be preferable.

  • The Role of Schools - Seems to be both to educate and socialize. Would we do better to separate these two goals? Could homeschooling achieve this?

  • Education and Civic Engagement - Researching "engaged universities", and looking at whether students should be encouraged to get more involved in their communities.

  • Opposing ID - Should "Intelligent Design" Creationism be taught in schools?

Social Issues / Applied Ethics:

Race & Ethnicity:

Family Values:

Law and Crime:


Affirmative Market Action

As promised, I'll now describe the 'argument from market forces' which could be used to justify affirmative action in the workplace.

The core idea is that sometimes members of a particular group (eg a specific race or gender) will be better suited for a job, for the intrinsic reason of having such membership. So I'm not talking about hiring men cos they're stronger (just hire strong people, whether they're men or not is irrelevant), or anything like that. Rather, I'm talking about hiring an Asian, a female, a white person, etc, for the particular reason that they belong to that group. But it must be emphasised that the reason for doing this will not be for the sake of the employee (unlike the arguments from inferiority & superiority), but rather, for the sake of the job/customers.

A couple of examples might make this clearer. A typical example would be customer relations. The Christchurch City Council wants to hire more ethnic minorities (particularly Asians), for the reason that such people would be better able to communicate with, and serve the needs of, (e.g.) Asian clients who contact the CCC with enquiries.

Probably the most pressing example I can think of would be male school teachers - of which there is a disastrously short supply in New Zealand. Some boys are going through primary school without ever having a single male teacher. I don't know exactly what all the psychological & sociological implications of this are, but I hear it's bad news, in any case. Ya know, lack of role models, boys' educational needs aren't being met, etc etc.

Anyway, the point is, we desperately need more male teachers. (A similar argument might be made for Maori teachers, though it is probably not so pressing a problem. At least according to the statistics I'm aware of, the gender gap in school performance is much more significant than the race gap.) The answer: entice more males into the profession. A pay raise, special scholarships, it doesn't matter how it's done, so long as it works.

The rationale behind such discrimination is pragmatic, not ideological. Hence the name: "argument from market forces". It's simple supply & demand. In the ideal 'free market' (ya know, that one which exists next to 'perfect circles' in Plato's world of forms), if some skill is in short supply (relative to the demand), then its worth increases. In our case, that "skill" - odd though this sounds - is membership in some particular group. In heavily regulated markets such as the civil service, the government might need to actively advocate such pragmatic discrimination, for the sake of market efficiency.

There are some dangers with this line of argument though. Market forces are driven by customer preferences, and these preferences might not always be justified. In the case of Asian customer relations personnel, or male or Maori school teachers, I think it probably is.

But suppose some business's customers were all "black power" racists, and they didn't like to be served by white people. There is clearly a pragmatic market incentive for the managers to hire only black people in such a case. But this seems quite patently unjust. This is one case in which we would probably want the government to step in and prevent any discrimination by market forces. (This intuition is even stronger if you reverse the scenario so that it involves traditional white-supremacist racism, but that would weaken the link to "affirmative action".)

So there must be some reasonable justification for thinking that someone of a particular race or gender would be intrinsically preferable for a job. Mere client preference is not always enough. Unfortunately, it's not immediately clear where to draw the line. As always, any suggestions/comments would be most welcome.

Friday, June 18, 2004

Affirmative Aristocracy

No Right Turn is appalled that anyone could have the gall to oppose legislation which gives Maori tribes (iwi) preferential treatment in the multimillion-dollar sea farming industry.

I won't comment on the specifics, because I don't know enough. But the general principle at issue here is well worth examining:
I'll spell it out: the reason Maori "get it for free" is because we stole it from them. Giving iwi a mere 20% of what they're entitled to seems to be quite a good deal for us. The reason we have a settlements process is because massive injustices were committed in the past, which stripped an entire people of their economic base and relegated them to dispossessed poverty in their own country.
Now, my heart bleeds as much as the next liberal's, but it never fails to irritate me when my fellow lefties start spouting the old 'white people are all imperialist/colonialist thieves!' rhetoric. There are several problems with it:

1) Individuals are not morally responsible for the actions of their ancestors. "We" didn't steal anything.

2) I have a general distaste for legalistic obsession with 'property rights'. There exists no 'natural law' which gives first occupants an enduring metaphysical right to their land. It's irritating when libertarians get hung up over this, and no less so with lefty anti-colonialists who hijack these right-wing ideals. There are other considerations besides history.

3) Dubious use of corporate entities. It's philosophically suspect to personify a race and then attach moral attributes such as victimhood to this fictional entity. This point may well be controversial, but I don't think the category "Maori" (or "Pakeha" for that matter) actually refers to a meaningful entity. Persons can be harmed, groups cannot. The sentence "Maori were wronged" is only meaningful (and true) if it's understood as a rough translation of something like "Some Maori individuals were wronged". The corporate individual "Maori" does not itself exist. As such, trying to compensate this fictional entity does not make any sense. Nor does it make sense to compensate those Maori individuals who were not wronged.

4) Due to generations of interbreeding, there is no longer a clear-cut distinction between Pakeha and Maori. All Maori now have at least some European blood in them. Thus a) they presumably should be counted among the allegedly guilty "we"? b) Separatist rhetoric (e.g. "iwi" vs "the Crown") is highly misleading, and c) the assumption of two distinct races does not accurately reflect reality.

5) Why favour the tribal elite, rather than urban Maori?

More generally, there are two broad ways to attempt justification of affirmative action: the argument from superiority, and the argument from inferiority.* Both are misguided, and rather racist in essence.

The argument from superiority (NRT's approach) suggests that Maori people have special rights (or 'entitlements') which people of other races do not have. In NRT's case, those 'special rights' appear to be a form of property rights inherited by modern Maori (or at least iwi) from their ancestors, marking them out as a sort of natural aristocracy. I outlined five objections to this above.

The argument from inferiority, by contrast, suggests that Maori are somehow disadvantaged by virtue of their race, and so are in need of welfare. Now, good little socialist that I am, I'm all for helping those who need it. But poverty is the appropriate indicator of need, not race.** To suggest otherwise would be patronising and racist in the extreme.

So, to conclude, I really don't see any justification whatsoever for race-based legislation, preferential treatment, 'affirmative action', or whatever else you want to call it. We should be helping those people who need it, but not those who don't. Racial generalisations will not help us to differentiate between those two classes. Most importantly, in response to NRT's argument, today's Maori are not "entitled" to the entire country, or indeed any 'special treatment' at all. They are New Zealand citizens just like the rest of us, and ought to be treated as such.

* = Within the limited domain of employment, there is also the argument from market forces (I'm just making up all these names, by the way), which I have a bit more sympathy for.

** = If a disproportionate number of Maori happen to be poor (for whatever reason), then a fair, poverty-targeting welfare system will (as an innocuous side-effect) result in that same disproportionate number of Maori being helped. And that is as it should be. However, the welfare system should not be actively targeting any particular race.

Update: With regard to point #4, see also this article by Denis Dutton, which mentions one of Tremain's memorable cartoons:
He portrayed an exasperated, potbellied European-looking bloke with scraggy beard and a bone carving around his neck, exclaiming to the reader, "Oh boy, I've got a grievance all right! The despicable way my Maori ancestors were diddled and hoodwinked by my Pakeha ancestors".

Wednesday, June 16, 2004

Matrix Metaphysics

David Chalmers has a wonderful paper: The Matrix as Metaphysics, which perfectly captures my own ideas about skepticism - though in a much more coherent and convincing way than I've ever managed to express.
I think that even if I am in a matrix, my world is perfectly real. A brain in a vat is not massively deluded (at least if it has always been in the vat). Neo does not have massively false beliefs about the external world. Instead, envatted beings have largely correct beliefs about their world. If so, the Matrix Hypothesis is not a skeptical hypothesis, and its possibility does not undercut everything that I think I know. [Emphasis added]
The core idea is that we should understand the matrix as being another form of reality, another universe. Thus, the things which happen in it aren't lies, rather, they're true in that world.
I think the Matrix Hypothesis is equivalent to a version of the following three-part Metaphysical Hypothesis. First, physical processes are fundamentally computational. Second, our cognitive systems are separate from physical processes, but interact with these processes. Third, physical reality was created by beings outside physical space-time.
Interesting stuff. Read the whole thing for the rest of the details...

Tuesday, June 15, 2004

Political Survey

Via No Right Turn, I notice there's an improved version of the 'political compass' about these days. Their methodology does sound a lot better, though it has resulted in a slightly strange axis called "pragmatism", which seems to measure some sort of combination of utilitarianism and atheism. The other axis, of course, is the standard left/right political divide.

Anyway, I was 6 points to the left, and 4 towards pragmatism (16 is the maximum in each direction).

Try it yourself and see what you get.

Oh, and my political compass results are (out of a maximum of 10 in each direction):
Economic Left/Right: -5.38
Social Libertarian/Authoritarian: -5.74  

Update 25/10/04: I just retook the survey, and this time got:
left/right -7.3251
pragmatism +3.5051

Monday, June 14, 2004

The Universe's Birth-Song

This is cool.
You can listen to the sound from the first million years after the big bang here (0.5 Mb .wav file). The sound has been compressed to five seconds, with the volume held constant.

Whittle played the soundtrack at the American Astronomical Society meeting in Denver last week. Contrary to its name, the big bang began in absolute silence. But the sound soon built up into a roar whose broad-peaked notes corresponded, in musical terms, to a "majestic" major third chord, evolving slowly into a "sadder" minor third, Whittle explained.

For those worried that there can be no sound in space: that is true today, but it was not so in the Universe's infancy. For perhaps its first million years, the Universe was small and dense enough that sound waves could indeed travel through it - so efficiently, in fact, that they moved at about half the speed of light.

1000 Visits!

Quite a milestone, especially since this blog is less than three months old (though only just - my first post was on March 19). I'm surprised at how well it's turned out actually. When I first started blogging, I expected to have hardly any readers at all. Though I still haven't got a clue how many regular readers I've got - probably a fair chunk of the visitors are people who just followed a link here once then never returned.

Anyway, it's good fun, blogging. I've been especially productive recently because I'm (supposedly) on study leave for exams, and it provides me with a handy excuse for procrastination :)

So, to all those reading this: thanks for visiting, hope you return often (or at least occasionally), and feel free to discuss things in the comments!

Your Counterpart is Going to Hell!

I was thinking of appending "P.S. Don't plagiarize or you'll go to Hell" to the post where I link to my essays. Alas, being an atheist, I'm not really allowed to make such fiery threats. A pity. Though I could always go with, say, "if you [enter damnable sin here], then in the closest possible world where Hell exists, your counterpart will be sent there!". But that threat doesn't have quite the same ring to it. What I'm wondering is: does it have any force at all?

Assume modal realism is true, so all possible worlds are real in just the same way as our actual world is. There was a recent post at FBC which suggested that we ought to care about real evils, even if they're non-actual ones - much like we intuitively feel that we ought to care about (some) fictional evils. Here I want to go one step further, and ask whether we ought to feel personally responsible for what happens to our counterparts in other possible worlds.

This probably seems like an absurd suggestion. After all, we're completely isolated from them spatio-temporally - it's not even logically possible for us to affect their world. So how could we be responsible?

Well, as a general rule we have a greater moral responsibility for those who are 'close' (in any sense of the term, not just spatially) to us, right? But our behaviour will affect which other possible worlds are 'closest' (conceptually, not spatially of course) to ours. That is, we have the power to affect whether the possible world closest to ours where heaven & hell exist, is one where our counterpart goes to heaven, or one where he goes to hell. Don't you feel powerful now? ;)

So, the thought goes, perhaps we have some (very faint) moral duty to try to ensure that our close counterparts are better off, rather than our distant ones?

Actually, no. That's a load of nonsense. Our actions there wouldn't make anyone "better off" (we can't affect other worlds, remember?). All we'd be doing is moving ourselves so that we were 'closer' to the well-off people rather than the suffering ones. And that hardly seems like a virtuous move (by my intuitions anyway).

So much for that idea. Still, who would've thought pursuing such a flippant thought would have any philosophical merit at all? ;)

Category: Mind

Posts about psychology, philosophy of mind, cognitive science, etc.

Personhood & Personal Identity:

Consciousness and Subjectivity:
  • Dreams and Sensations - on whether the latter occur in the former

  • Sensations, Beliefs, & Subjectivity - are sensations purely subjective things? How about beliefs? Could you have one without realising it? Could you think you have one, when really you don't?

  • Private Languages - Do purely subjective phrases have any meaning? (My version of) Wittgenstein's argument suggests not.

  • Public Minds - Are the contents of our minds truly - in principle - private?

  • Illusions and Zombies - on whether non-conscious people ('zombies') would be fooled by optical illusions

  • The Cartesian Theatre - outlines Dennett's arguments against the common-sense view that there is a place in the brain where "it all comes together" and consciousness 'occurs'.

  • Representing Time - We can represent time using a medium other than time itself, so the temporal order in subjective experience may differ from the objective order of events.

  • Sensory Substitution - Tactile stimuli can give rise to visual experiences. The blind shall see...

  • "Filling in" for presentation - Explaining the blind spot without positing extravagant mental 'paint'.

  • Multiple Drafts - Overview of Dennett's theory of consciousness.

Free Will:

Related Topics: Freedom is discussed further in some of my political philosophy, religion, and metaphysics posts.
Posts on Fiction (including our emotional response to fiction) are in the Semantics category.

Private Languages

The concerns about subjective truths expressed in my previous post lead naturally to Wittgenstein's 'Private Languages' argument (or at least that version of it which is discussed in Everitt & Fisher's "Modern Epistemology" textbook).

I've tried to formalise the argument as follows:
1) In any genuine language, its words must have genuine meanings.
2) The meaning of a word or sentence is determined by the use which is made of it.
3) Therefore, in any genuine language, there must be a distinction between the correct and incorrect uses of that language. (Else the language would be meaningless)

But suppose there was a private language, one which no-one else could ever possibly understand. Then:
4) Necessarily, the speaker of a private language would be the only person capable of telling the correct from incorrect use.
5) Yet, when evaluating a sentence of the private language, this person would (necessarily) be unable to distinguish between "it seems to me to be so, but really it isn't" and "it really is so".
6) Hence, necessarily, no-one would be able to distinguish between the correct and incorrect uses of a private language. (From 4 & 5)
7) If it is necessarily impossible for anyone to distinguish between some particular things, then there is no distinction to be made. [premise]
8) Hence, there would be no distinction between the correct and incorrect uses of a private language. (From 6 & 7)
9) Therefore, there could be no genuine private language. (From 3 & 8)
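The logical skeleton of steps 3 and 6-9 can be sketched schematically as follows. (The predicate letters are my own abbreviations, not part of the original argument: G = 'is a genuine language', P = 'is a private language', D = 'a correct/incorrect distinction exists for it', K = 'someone could tell its correct from incorrect uses'.)

```latex
% Requires amsmath. \Box expresses the 'necessarily' of steps 4-7.
\begin{align*}
(3)\;& \forall L\,[G(L) \rightarrow D(L)] \\
(6)\;& \forall L\,[P(L) \rightarrow \Box\neg K(L)] \\
(7)\;& \forall L\,[\Box\neg K(L) \rightarrow \neg D(L)] \\
(8)\;& \forall L\,[P(L) \rightarrow \neg D(L)] && \text{from (6), (7)} \\
(9)\;& \forall L\,[P(L) \rightarrow \neg G(L)] && \text{from (3), (8), by contraposition}
\end{align*}
```

Laid out this way, it's clear that the controversial work is done by the verificationist-flavoured premise (7); the rest is just modus ponens and contraposition.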

Now, talk about sensations seems to be an attempt at a private language. Your conscious experiences are genuinely private in the sense that nobody else could possibly have access to them. But then, the above argument suggests that you cannot meaningfully refer to those conscious experiences. We can say that 'red' refers to the colour of blood, roses, etc., rather than to our experience of the colour. But how about the word 'sensation' itself (as used in these contexts)? Is there any way we can save it from pure subjectivity? If not, does that mean that the word 'sensation' is actually meaningless?

That does seem an odd conclusion, since we surely all have at least a rough idea of what 'sensation' means. Presumably we never could have learnt this if it were a truly 'private' word. So it must have some more objective, public meaning. I just can't think what. (Any ideas?)

(Though I'm not entirely certain of premise 2. Maybe we could deny that, to get out of this mess? Dunno. Hopefully I'll be able to add to this after taking Semantics next semester!)

Sensations, Beliefs, & Subjectivity

In my dreams & sensations post, I raised the question of whether sensations were purely subjective things - a question I'd like to pursue further.

Is it possible to have a sensation, yet not be aware of it? I would think not - surely if you're unaware of it, then it's not conscious, and therefore not what I understand 'sensation' to mean. How about the reverse? Could you falsely believe you're experiencing a sensation, when in fact you are not? Again, this seems problematic, unless the word 'sensation' is being used to refer to something beyond subjective conscious experiences. But if so, it's hard to imagine what that something is supposed to be. I'll come back to this in a moment.

How about beliefs? I think I'd take a more objective approach to them. Of course we have a plethora of subconscious beliefs beyond those we're aware of at any particular moment. You believe that your 27th paternal great-great-...-grandfather is no longer alive, though you've probably never consciously considered that exact proposition before. But how about once a proposition is made salient - is it possible for you to actively deny a belief that you really have? That is, can you believe that P, whilst simultaneously believing that 'I do not believe that P'? That seems trickier. I guess you could go either way, depending on which understanding of 'belief' you're going by:

1) S believes1 that P iff (S would consciously affirm P, were P salient).
2) S believes2 that P iff (S behaves as though P were true).

Those are just rough formulations, of course, but hopefully the general idea is clear enough. We can differentiate between (subjective) belief as affirmation of a proposition, and (objective) belief as an influence on behaviour. You might believe1 that P, yet not believe2 that P, and that is no contradiction (just compare the words and actions of the nominally religious).

I don't think we can divide sensations that way, however, because sensations themselves are just a form of mental event. They only influence our behaviour insofar as they give rise to beliefs. (Your seeing an apple gives rise to your belief that there is an apple, which then - in conjunction with your desires and various other beliefs - causes you to pick it up and eat it.)

Yet, despite what I said earlier, there is a gap between sensation/experience and beliefs. After all, we can imagine a mad scientist or evil demon messing with your mind in such a way that it gives you the false belief that 'I just experienced sensation X'. Thus, it is conceivable that conscious introspection could mislead you, even with regard to your own conscious states and sense data.

I'm not sure what to do about all that. I've almost argued myself into a form of sense-data skepticism. Whether S has experienced sensation X or not is a matter of objective fact, but it seems completely unknowable by anyone, including S himself! What a useless concept that would be! Perhaps we're best to stick with the subjective version: S experienced sensation X iff S believes that this is so. A fairly empty definition then - a somewhat dishonest one, even - but perhaps more practical?

Well, no, I don't like skepticism. Having the belief is (in most cases) pretty good evidence that you did in fact have the sensation. Since we're all forced to be fallibilists about knowledge anyway, there's not really any good reason to deny that we really do know that we've had sensations. It's just that sometimes, we could be mistaken. Sometimes we know, but other times, we only think we know, though in fact we are wrong. And, so far as I can tell, there's no possible way of telling the two apart.

So, I suppose, if Jonathan wants to argue that we're all mistaken when we think we have sensations while dreaming... well, he could just be right. We could also be mistaken that we have them when awake. [That's quite possibly the most insane thought I've ever had.]

P.S. I don't mean to conflate the two hypotheses. As Jonathan points out, there are actually some good reasons to deny that we have sensations whilst dreaming. To deny that we have them even when awake, by contrast, would just be crazy.