Sunday, April 30, 2006

Rehabilitating Lust

It's typically assumed that lust is a "shallow" emotion, in contrast to the platonic desires for companionship, affirmation, and so forth. Consider the old stereotype about guys "using" women for sex. Such situations are certainly possible, but isn't it just as possible to "use" a romantic partner for companionship or to boost one's ego? In either case, the "user" has only self-regarding desires, to which their partner is merely instrumental. Why the double standard?

Indeed, it seems that genuine lust is, properly speaking, other-directed. It is a form of aesthetic appreciation, a recognition of -- and hence attraction towards -- another's physical beauty. It is genuinely about them and their qualities. It thus seems as "deep" and appropriately flattering as any other form of romantic appreciation.

Perhaps what critics of lust really have in mind is the self-directed state of feeling horny. There the feeling is all about oneself. One wants sexual release, and doesn't much care where it's found. One's partner is then treated as a mere masturbatory tool, a "sex object" in the most derogatory sense. The other is merely incidental to satisfaction of horniness. But for lust, they are centre stage. This is a crucial difference, and one that makes lust rather more admirable, to my mind.

Even granting that lust is about the other, one might still worry that it is for oneself, and hence in some sense "selfish". This strikes me as doubly mistaken. First, I think there is an important sense of lust which drives one to seek not just one's own sexual pleasure, but also the other's. We might call this "unified lust", as the value it seeks inheres in the whole sexual union, not just the part of one lover alone. Secondly, we should not confuse selfishness with self-concern. Selfishness consists in an inappropriate disregard for others. But one can seek things for oneself whilst also caring about others and seeking their good too, so there is nothing necessarily selfish about this.

But the possibility is there, and perhaps this is the real complaint. It is certainly possible to lust after someone without genuinely caring for the person themselves. So lust may indeed lead one to "use" another for sex without having any intrinsic concern for them. This too is to treat the other as a "sex object", albeit in a slightly less derogatory sense than that previously described. (At least with lust they really are the focal object of one's desire. In the earlier case, they weren't even that. Perhaps "sex instrument" would've been a more accurate term!)

However, this possibility is also present with regard to the platonic attitudes. It is possible to "use" one's friends, after all. In the same way, a selfish agent might enjoy his partner for the way she brightens his life, without thereby caring for her or wanting to advance her interests or happiness. We might say that this is to treat her as a "platonic object".

(We can make a similar distinction to that noted above, between platonic "objects" and "instruments", depending on whether the agent's self-interested desire is directed at the other or merely at themselves. Note that instruments are entirely replaceable, whereas objects are not. For example, one could use one's partner as a mere instrument for boosting one's own ego. Here the partner is merely incidental to the desire's satisfaction; anyone else might satisfy the desire just as well. Alternatively, one might have a self-interested "objectual" desire for the companionship of that particular person, in which case no replacement could satisfy that particular desire, and that person is centre stage rather than oneself.)

I think it is plainly more degrading to be treated as an instrument than an object (though neither is very appealing!), but I see no basis for the double standard between platonic and sexual objectification. Am I missing something here?

Open-Mindedness

It's generally acknowledged that open-mindedness is a virtue. But there is some confusion as to what it actually involves. Too often, people confuse open-mindedness with indecisiveness. They think that open-mindedness requires that one abstain from drawing conclusions; hence the absurd tendency to claim that agnosticism is the most reasonable religious stance, solely on the basis that God's existence can be neither proved nor disproved with absolute certainty. I've previously explained why such a stance is misguided. We have ample reason to disbelieve in gods and faeries, and the virtue of "open-mindedness", properly understood, shouldn't ask us to pretend otherwise.

The trait of open-mindedness is best understood as a disposition, rather than an occurrent state of mind. It's not about what beliefs you actually have, but how open you are to revising them in appropriate circumstances. It requires the true humility of self-acknowledged fallibility. It requires that our minds be open to new evidence. But this is something very different from suggesting that we should be equally accepting of nonsense as we are of sense. That's not open-mindedness; it's gullibility, or perhaps stupidity.

The virtuously open mind is not wide open, indiscriminately accepting of any and all viewpoints. Rationality must remain as a filter. We should be open to accepting good reasons of which we are currently unaware. But this doesn't require us to take recognizably bad reasons seriously. If we judge that the weight of reasons favours P over not-P, then we should (tentatively) believe that P. Open-mindedness means that we will acknowledge the possibility that new evidence could in future lead us to change our mind. But it doesn't preclude our drawing reasonable conclusions in the present.
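One way to illustrate the dispositional picture is with a toy model of belief revision (this is my gloss, not anything from the post itself): an agent can hold a definite, tentative belief right now, while remaining disposed to revise that credence if strong new evidence arrives. The `update` function and the numbers below are purely illustrative assumptions.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: posterior probability of P after one piece of evidence,
    given how likely that evidence is if P is true vs. if P is false."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# An open-minded agent can firmly (if tentatively) believe P now...
credence = 0.9           # the weight of current reasons favours P
assert credence > 0.5    # ...so she believes P, not "suspends judgment"

# ...while remaining disposed to change her mind on strong contrary evidence:
credence = update(credence, likelihood_if_true=0.1, likelihood_if_false=0.9)
print(round(credence, 3))
```

The point of the sketch is that open-mindedness lives in the *function* (the disposition to update), not in keeping one's credence pinned at 0.5.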

Saturday, April 29, 2006

The Fake Humility of the Religious

It really bugs me when religious dogmatists claim to be "humble", in contrast to us "arrogant atheists", because they "trust in God" rather than their own judgments. What bollocks. These deluded hypocrites deride human reason for its fallibility, all the while failing to realize that religious belief is simply their - fallible - choice. Consider the central insight of existentialism: that we are "doomed to freedom". One may decide to defer to another's judgments, or even to do nothing at all, but this remains one's decision all the same. You can decide to treat a musty book (or, just as plausibly, a magic 8-ball) as the infallible word of God. But don't pretend it wasn't your fallible choice to do so.

Stop That Crow! makes a related point:
The problems of positions which are taken to be infallible should be largely obvious. We would typically call any person who takes their position to be infallible to be insufferably arrogant, but religious infallibility is taken in such a way as to almost reverse the situation. The infallibility lies not in our religious neighbor, but in the God which is supposed to be omniscient, thus allowing the religionist to not feel at all arrogant in their claims. Furthermore, any person which disagrees with the position of an all-knowing God is thus by very definition wrong, and inasmuch as they cling to their views THEY are the ones who are seen as arrogant.

Of course to claim to know anything with absolutely infallible knowledge is arrogant, however. Even if God does accept something with infallible certainty, the idea that the religionist knows God’s mind does seem to be less than humble to put it mildly.

Further, once we recognize our own fallibility, the idea of committing ourselves absolutely and inflexibly to some particular set of guidelines has little to recommend it. What if we pick the wrong guide? That's surely very possible, given our fallibility. An absolute commitment presupposes a confidence amounting to certainty; thus the "faithful" religionist is the very epitome of arrogance, as they must hold that their initial decision to commit to their religion was an infallible one. True humility entails a degree of skepticism, i.e. the acknowledgment that one's beliefs and commitments might possibly be mistaken, and hence a willingness to revise one's positions in light of new evidence.

This is the very opposite of religion, which lauds the false certainty of blind faith over reasoned doubt and sensitivity to evidence. Thus, contrary to the propaganda, religion is essentially a call for arrogance, not humility. It asks us to forsake all future opportunities for learning or self-correction, and instead pretend that we are now in a position of such perfect knowledge that we could justifiably make the absolute commitment religion demands. Needless to say, we are not in such a position, we do not have perfect knowledge, and so we are not justified in making such a reckless decision. It is sheer arrogance for the religious to think otherwise, and outrageous hypocrisy for them to claim "humility" in doing so.

Monkey Talk

I just found an old draft post dating from last year's research into animal minds. I can't remember how I was wanting to finish it off, so I'll simply reproduce it as is. (I think it's interesting enough, despite the feel of incompleteness.)

--------

To take a neat example from Dennett's The Intentional Stance (p.270): Experimenters managed to "frame" a particular vervet monkey as a "boy who cried wolf". They recorded his warning cry, and then played it back to the group in inappropriate situations, until they became 'habituated' to it and no longer paid attention. This much can be explained behaviouristically. But, importantly, the habituated call is apparently "synonymous" with another (quite different sounding) call. And the other monkeys also ignored the framed monkey when he tried to give calls of this other type. As Dennett explains it, the framed vervet can "lose credibility with the group on a particular topic, thanks to being 'framed' by experimenters."

One natural explanation of this behaviour appeals to higher-order intentionality, i.e. that the other vervets believe the framed monkey is intending to mislead them on the particular topic. Though perhaps that's still too rich an interpretation. They might merely believe he is unreliable on that topic.



Political Classifications

This quiz (via stumbling and mumbling) seems more thoughtful than most. I'd classify myself roughly as PDRW, i.e. Pelagian Digger ("hippie") and Right-Hegelian Whig ("reformer"). I think. It's a bit hard to tell really, as I favour the institution of an unconditional basic income, and a more deliberative democracy, both of which are fairly significant departures from the status quo. But I think of them as natural reforms to improve the current system, rather than a radical or "revolutionary" upheaval. Still, I imagine that some might instead call me a "Left-Hegelian Whig" on this basis.


Archive: Philosophers' Carnival #22

[Hmm, Ian Olasov seems to have deleted his blog "For those of you at home". I wish people wouldn't do that. Anyhow, I managed to rescue his Nov 2005 presentation of the 22nd Philosophers' Carnival via Google's cache, and so will reproduce it below, for archival purposes...]

----------

Welcome one and all to the 22nd every-few-weekly Philosopher's Carnival, a collection of the cream of the crop from your favorite blogs! Here you will find a variety of high-quality, durable posts on a variety of high-quality, durable topics. A number of the offerings come from blogs heretofore unbeknownst to yours truly. Without further ado...
__________

In a smart, down-to-earth, and original post over at Bricks Without Clay (which I'd never heard of) on The Future of Intelligent Design, Dan Kurtz mulls over the possible consequences of taking Intelligent Design seriously as a scientific research program. One interesting conclusion:
If ID were to succeed, it would change biology from a physical science to a social science. In ID, all us terrestrial lifeforms are just totems designed by God, the agent. The question “how was the eye formed?” becomes easy. God did it. We’ve answered that. But science must go on. Knowing the “how” of the formation of the eye tells us nothing about why God made it. God’s intentions, his thought processes, his psychological makeup—those become the interesting research questions. So that’s the new field of inquiry, isn’t it? That’s the next logical place to look, the subject of innumerable theses in the Departments of Intelligent Design across the country. Cognitive theology. Ecclesiastical anthropology.

To my imagination, the ID crew can fall back on only one of two responses:

(1) Right now we don't have enough data to know God's "psychological makeup", but maybe one Day the Creator will reveal himself to us. ID is a legitimate research program, but the experimentum crucis is revelation.

(2) There are ways of doing "cognitive theology" scientifically - we just have to take a closer look at what "scientifically" means.

Still, there might be a great third response that I haven't thought of yet. Hopefully some ID sympathizers are reading this and will take this opportunity to move the conversation forward.
________________

Congratulations are in order for Jonathan Ichikawa at Fake Barn Country, who has won a prize for his favorite picture of a logical impossibility. The comments address the complexities of the task rigorously and creatively. I should say that there is still a lot of important surveying work to do on the logical geography of the key concepts of the discussion - creation, conceptual impossibility, depiction, entailment (of a depiction). Hop to it fellas!
________________

I read "The Impossibly Conceivable Counteractual", by Richard of Philosophy, Etc.. He argues that he has discovered a counterexample to the claim that conceivability entails possibility. Some of the comments are extraordinarily lucid and productive. If you're interested in the concept of actuality, this is the post for you. (Ba-dum bum!)*

* - Think: "Reading Rainbow".
________________

At another blog new on my radar screen, Chris Hallquist of The Uncredible Hallq eloquently argues from memory skepticism that "demands for justification lead to incoherence". Despite a weakness to my favorite Rylean objections to memory skepticism, the Hallq's post calls attention to the interesting and oft-overlooked intersection of memory and justification. Check out the other stuff on his blog while you're there - it's quite good.
________________

Here's a piece of philosophically charged fiction from a site (also new to me) called "The Science Creative Quarterly". It reads oddly like a really intense mystery. I don't think I've figured it out yet. Those of you who know more about information theory than me (which is to say, all of you) ought to take a look. The Carnival should get more submissions like this.
________________

Mathetes has a post out on the concept of logos in St. John and Origen. This is pretty far afield from my own thinking, but it contains the clearest and most open-minded reasoning that I've ever seen about a confusing, deep mystical idea. Keep an eye open for the startling serendipity between this post and Mr. Ichikawa's.
________________

At yet another blog new to me, Kenny Pearce argues for an interpretation of "judicial activist". His interpretation is quite clear and sensible; it preserves the negative associations of the word while keeping the definition purely descriptive, and makes able use of a number of interesting ideas about the law. For me, Mr. Pearce's post raises the interesting question of what a "comprehensive theory of legal interpretation" would be. (How can a hermeneutics comprehend the whole law, including laws which have not yet been made?) Now all we need to do is force our elected officials to speak the way Mr. Pearce does...
________________

And as if in response to Mr. Pearce, Max Goss at Right Reason has put up a post defending textualism against intentionalism in hermeneutics. This is the single best post I've read yet at Right Reason. Mr. Goss's criticism is clear, concise, and devastating above all. Blue, red, green, and the rest would all do well to take a look.
________________

Clark Goble has put up a discussion of the relationship between phenomenology and theory. As usual, he brings Analysis and Continental philosophy together seamlessly. I'll keep his remarks from the comments section in mind next time I decide to open the copy of Being and Time staring at me from my bookshelf. You should also take a look at the comments if, like me, you need some advice about how to begin reading (Continental) phenomenology.
________________

One last blog I didn't know about: The Skwib. Mark Rayner has unearthed the lost Camus powerpoints. My favorite:

...
- Human life is precious
- but still meaningless

(By the by, if you like Mr. Rayner's "discoveries", you ought to check this out. (There's some respect in which hyperlinks are analogous to indexicals, isn't there...?) Hat tip: The MP.)
________________

Lastly, from the Socratic Gadfly, we have an eminently readable, thought-provoking argument against Dr. Dennett's stance on... the intentional stance (among other things). An interesting conclusion:
If there’s no “I” at the core, in the sense of a master controller, there’s no “intentional stance.”

Actually, that’s not quite right.

There’s no single intentional stance. Instead, there are several sub-intentional stances, some stronger, some weaker, some more permanent, some fleeting.

It's a good conversation-starter. The whole issue seems highly dependent on what kind of "stance" we want to talk about. Some of the meat here - the depth and character of the analogy between intentionality and "free will", the implicit premises - could use a little cooking. Enter commenters...

Friday, April 28, 2006

Dead Organs

It seems to me that we don't make good enough use of corpses. Many people fail to register as organ donors, and so fewer lives are saved than could be. This is a terrible waste. It would be an obvious improvement to shift from our 'opt in' system to an 'opt out' one. At the very least, we should favour organ-donation as the default position. The more interesting question is whether we should let individuals opt out at all.

I think it would be immoral to opt out. Donating your corpse's organs does you no harm, and it benefits others. You don't get many surer chances to improve the world than that. One might feel a bit squeamish at the idea, but that's simply irrational. In any case, surely one's discomfort is not more important than another person's life. To opt out of organ-donation would thus be an act of extraordinary selfishness.

Despite this, we might think that individuals should have the (legal) right to dispose of their body as they please. For example, it could be argued that to grant the state such powers of bodily violation would set too dangerous a precedent. But there seems a clear enough line between living and dead bodies to avoid any "slippery slope" here.

Alternatively, one might point out that the policy is disrespectful of irrational minority beliefs, e.g. religious nutters who believe that they'll need their organs intact for the afterlife. While I personally think that saving lives is more important than respecting nutty beliefs, I admit that it's best for the liberal state to be as accommodating as possible. Promotes a greater sense of legitimacy and all that. (And I suppose it's always possible that some individuals would find the whole idea so incredibly traumatic that the psychological harm to them would actually outweigh the benefits to others, and hence be problematic even on direct utilitarian grounds.)

A tempting solution would be to offer an inconvenient opt-out option. It should be enough to deter those who are only mildly opposed to saving another's life. But the option is there and attainable (after some inconvenience) for those who feel very strongly about the issue.

A more principled liberal option would be to promote individual choice (rather than the common good) by ensuring that everyone can easily opt out if they so prefer. Thinking that it's not the government's role to place hurdles down the road of vice, we might instead rely on social norms to discourage people from making the wrong decision. People tend to go along with what's expected of them anyway. If organ-donation was widely accepted as the norm, squeamishness or opposition to the idea might disappear almost entirely.

So: which would be best? (Or are there other options that I've missed?)


Thursday, April 27, 2006

Introduction

Hi, welcome to Philosophy, et cetera. Here you'll find discussion of analytic philosophy, logic, ethics, politics, religion, and other items of intellectual interest.

If this is your first visit, have a browse through my favourite posts. My 'Web of Beliefs' gives a broad outline of my various philosophical views, and links to more specific posts on central topics. You can find my latest posts on the main page, but since I mostly tackle timeless questions (and sometimes even time itself), you might do just as well to browse through my archives -- simply select a topical "label" or category of interest from the list at the bottom of the page. See also my old categories for posts from my first year of blogging.

If you think it sounds interesting but would like a second opinion, you can click here to read some nice things that other people have said about me.


Comments Policy:

Some of the best features of this blog are the civil tone and respectful disagreements commonly hashed out in the comment threads below each post. Feel free to join in!

Note that I reserve the right to moderate or delete comments which I judge to detract from this civil atmosphere. (Fortunately I've rarely had need to exercise this right.) Free speech means that you may create your own blog and post to it whatever you like. (Note that I may also elect to delete contentless or lengthy tangential comments, and invite the commenter to instead repost the comment on their own blog. You can email me for a copy of your deleted comment.)

Finally, please bear in mind that the purpose of the comment threads is to continue the specific conversation started in the main post. If you want your own soapbox, get your own blog.


About me:

This blog is more about the thoughts themselves than the person behind them. But I'll tell you this much: my name is Richard Chappell, and I was born in New Zealand in 1985. The years passed, and in 2005 I completed my BA in philosophy at the University of Canterbury. In 2006, I moved to Canberra to do my honours year at the ANU, where I thoroughly enjoyed my study of modality and possible worlds under the supervision of Dave Chalmers. As of 2007, I'm working towards a Ph.D. in philosophy at Princeton University. My dream is to become an academic philosopher, make some progress in understanding the world, and help others to do likewise. Other wishy-washy values are mentioned here.

Aside from this blog, I also organize the world-famous Philosophers' Carnival. They say it's a small world. But you should definitely check it out.


About you:

Regular commenters are invited to introduce themselves in the comments to this post, and link to their own blog if they have one. (I figure it might be nice for newcomers to get at least a vague idea of who they're arguing with!)

Wednesday, April 26, 2006

Three Things

In a reckless bout of philosophevangelism, I submitted the following brief article to my hall of residence newsletter. I reprint it here in case any fellow residents want to respond in comments, e.g. to complain about my being a pompous ass. The final sentence is pure provocation, but that may be appropriate in light of my pseudonym.

(Note: regular readers may find this redundant after my previously posted "ten things", though I've made a few revisions in light of helpful suggestions from Clark and Duck.)

3 Things everyone should know about Philosophy

1) Philosophy (and that includes ethics!) isn’t just a matter of opinion. Some opinions are better justified, or more reasonable, than others. We should aim to hold those judgments that are best supported by reasons.

2) Philosophy is an academic discipline, and hence something you do, not something you have. (No-one would go up to a scientist and ask “What’s your physics?” Similarly, to ask “What’s your philosophy?” is to misunderstand the term.)

More specifically: philosophy is a form of inquiry, not rhetoric or apologetics. One should be open to the possibility of changing one’s mind, and so view “opposing” arguments as opportunities for learning, rather than threats to be dismissed at all costs.

Corollary: The aim of argument is not to convince others to your point of view regardless of its true merits, but rather to adduce rational evidence that the view is most likely correct.

3) Philosophy is inescapable. If you dismiss it as worthless, you’re making a claim about ethics or value theory, which are sub-fields of philosophy. If you think it’s an unreliable source of knowledge, that’s epistemology.

All your “common sense” beliefs rest on philosophical assumptions. Most people prefer not to examine them, but that doesn’t mean they aren’t there. It just means everything you think and do could be completely misguided and you wouldn’t even realize it.

-- By Socratic Shadow - pixnaps.blogspot.com

Abortion, Hubris, and Moral Trust

I'm generally a fan of Bitch, Ph.D., but the first half of her featured post Do you trust women? seems awfully misguided. Her core claim is that unless you're a pro-life absolutist, "there is no ground whatsoever for saying that there should be laws or limitations on abortion other than that you do not trust women." After all, non-absolutists recognize that moral discernment and careful judgment is called for here. The only question is whose judgment should be relied upon, and the only reason not to trust the discretion of the pregnant individual herself would be rank sexism, right?

Obviously not. For example, one might worry that self-interested considerations would cloud the judgment of the pregnant individual. Or one might simply be very confident in one's own moral discernment. Either way, the mistrusted individual's gender has nothing to do with it. Perhaps there is a sense in which this would mean that one did not "trust women": since one would not trust any individual who found themselves in that situation, and some women find themselves in that situation, it follows that one would not trust those women. But the mistrust has nothing to do with their gender; it is not that one mistrusts women per se. Rather, one mistrusts people, and of course some people happen to be women.

Now, silly cries of sexism aside, one might question the appropriateness of this more universal mistrust. I will return to this issue shortly. First though, another howler from Dr.B:
When pro-choice feminists like Wolf, or liberal men, or a lot of women, even, say things like, "I'm pro-choice, but I am uncomfortable with... [third-trimester abortion / sex-selection / women who have multiple abortions / women who have abortions for "convenience" / etc.]" then what you are saying is that your discomfort matters more than an individual woman's ability to assess her own circumstances.

That's just stupid. When I say "I have doubts about the morality of X, for reasons Y", what I am actually saying is that X is morally dubious because of Y, not because of my doubts. (Duh.) If you feel discomfort about abortion because you think issues Y matter, then perhaps you are implying that issues Y matter more than the individual woman's discretion. But this is (obviously!) completely different from claiming that "your discomfort" is what matters here. Dr. B's conflation wrongfully maligns reluctant pro-choicers as selfish. This move may have rhetorical force, but it's intellectually dishonest.

(I've discussed a similar point before: when one asserts that P is true, this is not to assert that P is true because of one's assertion. This gets the order of explanation wrong. One asserts P because of an antecedent judgment of its truth. Similarly, one feels discomfort about abortion because of an antecedent judgment about what matters. One's response is a consequence of the judgment, not the basis for it.)

However, Dr.B. doesn't elaborate on this point (and perhaps doesn't really mean what she says), and instead goes on to discuss the 'hubris' point I hinted at above:
In short, [you're saying] that your judgment is better than hers. Think about the hubris of that. Your judgment of some hypothetical scenario is more reliable than some woman's judgment about her own, very real, life situation?

And you think that's not sexist? That that doesn't demonstrate, at bottom, a distrust of women? A blindness to their equality? A reluctance to give up control over someone else's decision?

Again, this is confusing a whole bunch of separate issues. In particular, the latter question is independent of the earlier ones about sexism. It's quite obviously consistent with anti-sexism to hold that the masses can't be trusted to make good moral decisions. There's nothing misogynistic or gender-biased about that. This is equal-opportunity cynicism. So again, I'll ignore the silly cries of sexism and focus on the more general question of "hubris".

The problem here is that the objection proves too much. Any kind of moral judgment or legal imposition involves this sort of "hubris". Should the rich pay tax to help those in need? Wait! That shows a distrust of the wealthy, and a "reluctance to give up control over someone else's decision"! We should trust them to decide for themselves how much they ought to give to charity. (We're not absolutists about this, after all; it's clearly an issue which calls for careful judgment and discretion.) What about involuntary euthanasia: should family members be able to pull the cord on their comatose grandpa? Maybe, I don't know, but it's clearly a proper issue for public debate. Should reluctant fathers be able to pack up and leave, without providing any sort of child support? (If you think not, does that show that you're a sexist misandrist who doesn't trust men?)

As these examples should make clear, Dr. B.'s position here is rather unprincipled. We don't leave all moral judgments to the individual's discretion. Libertarians may think that we should, but that's clearly not the kind of society we live in. Assuming that Dr. B. is not a radical libertarian, the question arises: why make a special exception for abortion? Why is public debate (including the arguments and moral judgments of - shock horror - men) "hubris" for this moral issue and no other?

For the record: I am quite thoroughly pro-abortion, as should be clear from my previous post. Like I said there: if anything, I think that abortion is probably under-utilized in our society, and that too many people remain pregnant when really they shouldn't. (Of course, it would be much better still to avoid the unwanted pregnancy in the first place.) Though either way, political or legal interference would likely just make things worse, so I'm all for individual choice here.

But this is a meta-political matter of civic discourse. Dr. B. wants to shut down debate. (She says: "The fact that abortion is even a debate in this country demonstrates that we do not trust women.") And she supported this position with bad arguments. As a procedural liberal and aspiring philosopher, I am strongly opposed to both shutting down debate, and to bad arguments. Hence my opposition to Dr. B's post. (At least, the first part, as discussed here. The second half is more about how women have as much right as men to be assertive and "insist on having their arguments acknowledged". Of course I don't disagree with that. Who would?)

Tuesday, April 25, 2006

The Actual World is not a Possible World

As I understand the standard picture of modality accepted by contemporary philosophers, there's a space of possible worlds, and this - the actual world - is one of them. It has some special properties, being concrete or realized in a way that the other, merely possible worlds aren't. But this picture seems to run into trouble, for reasons that I've previously mentioned in passing.

In brief: the modal "multiverse" is seen as static and necessary. Contingent facts vary from world to world, but the worlds themselves remain constant, and hence world-indexed facts (e.g. "P is true at w") hold necessarily. But then, if it's a property of our world '@' that it is actual, then it seems that "@ is the actual world" comes out as a necessary truth. We're stuck with narrow fatalism. That's bad. We should be able to make sense of the idea that our world's actuality is a merely contingent fact, and that other possibilities could have been actualized instead. But to achieve this, we must deny that actuality is an intrinsic property of any possible world. That is: there's no red flag built into modal space to specify actuality.

That's not to deny the existence of the actual world, of course. I simply think we should deny that the actual world is a possible world. Instead, I think it is a fundamentally different kind of thing, existing quite separately from modal space. This allows it to have genuine contingency, by escaping the bounds of the static and necessary "modalverse".

We can motivate this idea from another direction too. Possible worlds are typically characterized as "ways a world might be" (see, e.g., Stalnaker). There's a possible world to represent each such way, including the way the world actually is. But the representation is clearly not identical to the thing itself. There's a possible world representing "the way the world is", and then there is the actual world, that concrete thing which contains us - flesh and blood - and not mere abstract representations of "the way we are". As someone (van Inwagen?) noted, we wouldn't dream of identifying "Socrates" with "the way Socrates is". It doesn't even make grammatical sense. So why make the same mistake with the world?

Despite the misleading name, "possible worlds" aren't really worlds. (We're not Lewisian realists here.) They're just abstract representations. Maybe they're primitive entities, or maximal properties or states of affairs, or sets of sentences in an idealized language; the details don't much matter. In any case, the actual world is clearly a very different kind of thing. It really is a world -- a concrete thing, filled with other stuff, real entities, and not mere representations of those entities. Since possible worlds aren't really worlds at all, it follows that the actual world is not numerically identical to any possible world. Rather, it corresponds to that possible world which represents "the way things actually are". There's a representational relation between them, but representation is not identity. (Compare: a photograph doesn't really contain you - the concrete person - as a part. I could cut it up without thereby decapitating you.)

So, I see three major advantages to denying that the actual world is numerically identical to a possible world:

1) Consistency. All possible worlds remain on an ontological par. You don't have one special one made out of concrete stuff while all the others are abstract things. Instead, we can hold that all possible worlds are the same kind of thing, and the actual world is simply a different kind of thing.

2) Grammar. "Ways things are" are distinct from the things themselves.

3) Genuine contingency. We can accept that the "modalverse" is static whilst avoiding fatalism. The actual world is contingent, and can be so because it is outside and separate from the modalverse. No possible world has the property of actuality intrinsically. Instead, it is a relation that holds between the static possible world and the contingent actual world. This makes the relation itself contingent, of course. If the actual world had been different, then it would correspond to a different possible world. None of this requires any change in the modalverse itself. The changes all occur in the fundamentally separate space of concrete actuality.
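As a toy illustration only (none of this machinery is in the post itself, and the sea-battle worlds, the matching predicate, and the `actualized` function are all my own illustrative assumptions), the "actuality as a contingent relation" picture might be sketched like so:

```python
# Possible worlds as static, abstract representations: each just
# records whether there is a sea battle. Crucially, no world in
# this set carries an "actual" flag of its own.
possible_worlds = [
    {"name": "w1", "sea_battle": True},
    {"name": "w2", "sea_battle": False},
]

# The concrete actual world is a different kind of thing, living
# outside the static space of representations.
concrete_world = {"sea_battle": True}

# Actuality as a relation: the concrete world corresponds to
# whichever representation matches it. Nothing in possible_worlds
# changes if the concrete facts differ -- only the pairing does.
def actualized(concrete, worlds):
    return next(w for w in worlds
                if w["sea_battle"] == concrete["sea_battle"])

assert actualized(concrete_world, possible_worlds)["name"] == "w1"
# Had the concrete world been different, a different (unchanged!)
# representation would have been the actualized one:
assert actualized({"sea_battle": False}, possible_worlds)["name"] == "w2"
```

The point of the sketch is just that varying `concrete_world` changes which member of the fixed set gets picked out, without any change to the set itself -- the "modalverse" stays static while actuality varies.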

Update: Ah, it turns out Kripke beat me to it.



Upcoming Carnival

For all the philosophy bloggers out there: be sure to send in an entry for the upcoming Philosophers' Carnival by the end of the week!

Abortive Virtues

Peter Thurley argues that circumstances of rape or incest are irrelevant to the (im)morality of abortion. Almost everyone else in the world disagrees. Peter argues that the circumstances shouldn't register on either of the two extreme ideologies, "pro-life" and "pro-choice". But he neglects to note that most people hold a more moderate view. Indeed, I think that the common-sense view is best reflected in theory by some form of virtue ethics.

It's certainly true that "pro-life" extremists who can't tell the difference between a zygote and a person must deny that such circumstances can be relevant. (We wouldn't allow an amnesiac mother to kill her ten-year-old son after uncovering the suppressed memory that he was a product of rape. The child's genesis is not his fault, after all, so it would still be an instance of killing an innocent person.) But since no-one in their right mind would deny a rape victim the right to an abortion, this just goes to show that no-one in their right mind is a pro-life extremist.

At the other extreme, you have those who think that people can do what they like with their bodies, for even the most fickle of reasons, and so the extra reasons for abortion provided by rape or incest would be entirely superfluous. As a consequentialist who can tell the difference between zygotes and persons, I'm reasonably sympathetic to this view. No harm is done, even by fickle abortions, so there are no grounds for thinking it's fundamentally immoral. (Perhaps we can derive reasons from indirect utilitarian reasoning, however. It's plausible that we really ought to be virtue ethicists in our everyday lives, so I can potentially endorse the arguments that emerge below.)

But again, most people aren't consequentialists. (And just as well, I say.) Most people probably don't have any coherent theoretical views which systematize their various moral intuitions. But we might charitably see them as virtue ethicists of a sort. They want to be good people, and having abortions for no good reason isn't the sort of thing a virtuous person would do. Special circumstances - such as rape or incest - can obviously have an impact here, then.

The ever-insightful Hilzoy of Obsidian Wings puts the matter thus:
I think that we have an obligation to treat any form of human life, from conception onwards, with a certain sort of respect. What respect requires changes as a blastocyst develops into an embryo, a fetus, and then an infant, but I think it's there from the start. (And this doesn't require that I think that an embryo is "really" a person: I also think we ought to treat corpses with respect.)
...
At this stage, what respect seems to me to require is just this: that whatever we do to a blastocyst, we should have good reasons for doing, reasons that are not just a matter of our own pleasure or entertainment. Just as I would think it wrong to carve up a corpse for fun, I would think it wrong to kill a blastocyst for one's own amusement or convenience. (This is why, though I am in favor of legalized abortion, I would think that someone who decided, on a lark, to try to have abortions in each of the fifty states, or who had an abortion just so that she could go to a really great party in her favorite dress (which, let's suppose, wouldn't fit if she were pregnant), was doing something abhorrent. I think you just shouldn't kill embryos for these sorts of reasons. I wouldn't be in favor of making such abortions illegal, but that's partly due to the same sorts of reasons that lead me not to favor a legal ban on being a complete insensitive jerk to one's significant other: doubts not about whether it's wrong, but about whether it would be a good idea to have the state policing our motives in the ways that would be required if it were criminalized.)

But while I think it's wrong to kill a blastocyst for no good reason, I don't think it's wrong to kill one period. I just think that respect requires that one recognize that one is doing something that should not be done lightly. I feel similarly about corpses: I do not think that carving them up for fun is OK, but I do think that a person who thinks her corpse should be treated with respect can appropriately leave her body to science, where it will be carved up for a non-frivolous purpose.

I suspect that what really upsets most people about abortion is the idea that it might be done frivolously. We may be more confident that rape victims are not being frivolous in this respect. Hence, on the common-sense (or virtue-ethical) view, rape can be relevant to the morality of abortion.

As it happens, I think people's concerns here are probably empirically misguided. The frivolous hypothetical abortions described by Hilzoy are clearly not everyday occurrences. Surely no-one could seriously believe that most women who seek abortions are so fickle about it. Abortion is a serious choice, and I have trouble believing that many women would fail to see it as such. Conservatives often whine about "abortions of convenience", but I find it incredible that the massive responsibility inherent in bringing a child into the world could be dismissed as mere "inconvenience".

(Perhaps some people are reckless about getting pregnant in the first place, and hence blameworthy for that. But once pregnant, abortion will often be the best and most responsible option. While it would certainly be preferable if there were no unwanted pregnancies, and hence no need for abortion, as things stand I rather suspect that people don't have abortions enough. This is for the general reasons discussed in my post Badness Without Harm, and illustrated in Parfit's "14 year old mother" example. The world would be a better place if people only had children when they were most capable of raising and providing for them. There's nothing "frivolous" about such concerns, and so I think it is misguided to decry such "abortions of convenience". In saying this, I grant that it might have been wrong for someone to get into this position in the first place. So, by all means, do feel free to condemn "unsafe sex of convenience", and advocate better sex education, contraceptive provision, etc. But leave off the irresponsible anti-abortion diatribes already.)



Monday, April 24, 2006

Significant Negative Duties

Libertarians often attempt to defend their unequal privilege by insisting that they are subject only to negative duties, i.e. duties of non-interference. They have no positive duty to help the poor. Equivalently: the poor have no right to their help. I've previously argued that our concept of freedom (of the type worth having) requires more than mere non-interference. But even if we grant the libertarian their impoverished moral conception, consisting only in negative duties, in practice this makes little difference. They are still (potentially) committed to significant redistribution. Let me explain why.

The core problem is that any action will interfere with others to some extent. So appeals to merely "negative" duties won't really restrict the scope of our obligations all that much. An example of this can be found in the problem of initial acquisition: whenever you claim a property right over previously common goods, you are peremptorily excluding others from its use. Your action is thus a kind of harmful "interference", and -- by the libertarian's own lights -- you ought to recompense others appropriately. (The implications of our actual history are explored here.)

Even after the establishment of "property rights", note that their enforcement is itself a form of coercive interference. So there is a conflict between (i) the liberty of the rich to use their surplus resources for luxury purposes without interference, and (ii) the liberty of the poor "not to be interfered with in taking from the rich what they require to meet their basic needs." If we take liberty as fundamental, the only reasonable resolution of this conflict is to recognize that liberty (ii) morally trumps (i). Hence the rich have the (still merely "negative"!) duty not to interfere with others' appropriation of their surplus resources for the sake of those in need.

That's obviously a very significant negative duty. Right-wing libertarians won't like it one bit. Which just goes to show the insincerity of any claims to be fundamentally concerned with liberty. They're more interested in preserving the privilege of the rich, and by force if need be. Freedom (for anyone but themselves) and justice have nothing to do with it.

Another application of the "core problem" is found in Thomas Pogge's work on human rights. He argues that we should conceive of rights in institutional terms:
By postulating a human right to X, one is asserting that any society or other social system, insofar as this is reasonably possible, ought to be so (re)organized that all its members have secure access to X, with 'security' always understood as especially sensitive to persons' risk of being denied X or deprived of X officially: by the government or its agents or officials. Avoidable insecurity of access, beyond certain plausibly attainable thresholds, constitutes official disrespect and stains the society's human rights record. Human rights are then moral claims on the organization of one's society.... Persons share responsibility for official disrespect of human rights within any coercive institutional order they are involved in upholding.

-- Thomas Pogge, 'How Should Human Rights Be Conceived?', p.64, emphasis added.

He further explains:
The most remarkable feature of this institutional understanding is that it can go well beyond minimalist libertarianism without denying its central tenet: that human rights entail only negative duties. The normative force of others' human rights for me is that I must not uphold and impose upon them coercive social institutions under which they do not have secure access to the objects of their human rights... Even if I owned no slaves or employed no servants myself, I would still share responsibility: by contributing my labour to the society's economy, my taxes to its governments, and so forth. I might honor my negative duty, perhaps, by becoming a hermit or an emigrant, but I could honor it more plausibly by working with others toward shielding the victims of injustice from the harms I help produce or, if this is possible, toward establishing secure access through institutional reform. (p.66, emphasis added)

By constraining ourselves to only negative duties, we find that "human rights give you claims not against all other human beings, but specifically against those who impose a coercive institutional order upon you." (p.67) But this is close enough to be practically the same thing. We are all participants, contributors, upholders, and hence imposers of the current global institutional order. Our actions thus cause harms to those who suffer unjustly under this order. It is wrong to impose such harms -- a violation even of merely negative duties -- so we have a corresponding duty to recompense the victims accordingly.

So we see that even negative duties make significant demands on us. Right-wingers may wish to ignore this. But they can no longer pretend to have the support of any half-way plausible theory of justice.

Sunday, April 23, 2006

Reasons and Misinformation

I take (objective) "reasons" to be facts which count in favour of an action. If a large rock is about to hit the back of your head, then this is a reason for you to move, even if you don't know about it. There's a sense in which one "should" do what one has most reason to do. As inquiring agents, we try to discover what reasons for action we have, and hence what we should do. Such inquiry would be redundant according to subjective accounts which restrict reasons to things that an agent already believes.

Nevertheless, that's a very objective sense of 'should', and the concept is perhaps better captured by the term 'desirable', or the idea of which action would be best. As inquiring agents, we attempt to uncover which action would be best, and act accordingly. But we only have limited information available to us. In the end, our decisions must be based on subjective reasons, i.e. what the agent takes (or believes) their reasons to be. We may also speak of "apparent reasons" which I define in semi-objective evidential terms: i.e. what the accessible evidence suggests the reasons (most likely) are.

Subjective rationality is a matter of being rational by one's own lights, i.e. acting on one's subjective reasons. I take rationality, simpliciter, to be a matter of acting on the apparent reasons. (After all, this tracks the advice of an ideally rational agent -- a perfect reasoner who responds appropriately to available evidence.) Sometimes people speak of "objective rationality" as a matter of performing the best action, i.e. that which one has most reason to do. But I find that unhelpful. It is not a failure of rationality in any usual sense of the term to fail to duck when a rock is secretly about to hit the back of one's head. (The failure is nevertheless unfortunate, or not for the best.)

For example: suppose I am attacked by an angry bear. Let's say that in actual fact, the best way to respond is to lie still and play dead. So that's what I have most reason to do. But I'm not aware of this fact, and the bear looks rather large and cumbersome, so it is rational (recommended by apparent reasons) for me to flee. But further suppose that I have the deluded belief that I am much stronger than the bear. Then it is "subjectively rational" for me to fight the bear.

I think this makes it clear that "subjective reasons" are empty and lacking in normative force. The interesting notions are what I have called (objective) "reasons", and (evidence-based) "rationality". When distinguished in this way, it seems that we won't necessarily have reason to be rational. That connection would have the wrong direction of fit. It's not that we ought (in the objective sense) to be rational, but rather, that rationality aims to discover what we ought to do. Put another way, the proper aim is surely to do what's best, not what merely seems to be. (Cf. Hare's quote about winning backgammon.)

(See also my old taxonomy of reasons, which neglects the evidence-based option, but distinguishes between different levels of subjectivity.)

Saturday, April 22, 2006

Harean Nuggets

Select quotes from R.M. Hare's Moral Thinking:
Nothing is so difficult in philosophical writing as to get people to be sympathetic enough to what one is saying to understand what it is. (p.65)

[I]t is a misuse of the word 'ought' to say 'You ought, but I can conceive of another situation, identical in all its properties to this one, except that the corresponding person ought not'. (p.10)

The winner of a game of backgammon is the player who first bears off all his pieces in accordance with the rules of the game, not the one who follows the best strategies. Similarly in morals, the principles which we have to follow if we are to give ourselves the best chance of acting rightly are not definitive of 'the right act'; but if we wish to act rightly we shall do well, all the same, to follow them. (p.38)

[To directly employ act-utilitarian reasoning] is, as we have seen, a dangerous procedure; but sometimes we may be driven to it [e.g. if our prima facie principles conflict]. Anti-utilitarians make it their business to produce examples in which this is the only recourse, and then charge utilitarians with taking it (which is unavoidable) and with taking it light-heartedly (which is a slander). The good utilitarian will reach such decisions, but reach them with great reluctance because of his ingrained good principles; and he may agonize, and will certainly reflect, about them till he has sorted out by critical thinking, not only what he ought to have done in the particular case, but what his prima facie principles ought to be. (pp.51-52)

If we want to find out what ordinary people mean, it is seldom safe just to ask them. They will come out with a variety of answers, few of which, perhaps, will withstand a philosophical scrutiny or elenchus, conducted in the light of the ordinary people's own linguistic behaviour (for example what they treat as self-contradictory). (p.80)

Since this is a problem which has to be faced by any theory of rational choice, and not merely by utilitarianism, those who clutch at it as an argument against utilitarianism in particular reveal only their own lack of interest in rational choice between alternatives. But it has to be faced all the same. (p.95, fn.4)

He wasn't talking about the Infinite Spheres of Utility puzzle, or the population paradox. But the point has broad application. And again:
It is worth saying right at the beginning that this is not a problem peculiarly for utilitarians... The fact, if it is one, that there are other independent virtues and duties as well [as beneficence] makes no difference to this requirement. Only a theory which allowed no place at all to beneficence... could escape this demand. Anybody, therefore, who is tempted to bring up this objection against utilitarians should ask himself whether he is himself attracted by a theory which leaves out such considerations entirely. (p.118)

Limited Omniscience

I think I've come up with a simpler way to characterize Chalmers' two-dimensionalism. A core idea is modal rationalism: the contents of possible worlds are a priori knowable (on ideal rational reflection). The only thing the ideal agent doesn't know a priori is which world is hers. Her only fundamental lack is this self-locating knowledge; from that, she could know all. (It's like a multiversal version of Lewis' Two Gods.)

The ideal agent can know all world-indexed qualitative facts, i.e. facts of the form "P is true at w", given in a semantically neutral "qualitative" language. That is, expressions must have identical primary and secondary intensions. Intuitively: descriptions good, rigid designators bad! The idea is to ensure that one can know the full meaning of the terms without needing to know which world is actual. Thus "watery stuff" is okay, but "water" is not. We can know that the former picks out both H2O and XYZ, without needing to know whether we live on Earth or Twin Earth. The term 'water', by contrast, requires this empirical knowledge in order to determine which of H2O or XYZ it rigidly designates.
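The contrast between 'watery stuff' and 'water' can be caricatured in a toy model. (This is my own illustrative sketch, not Chalmers' formalism; the two-world setup and the function names are assumptions for the example.)

```python
# Toy model of two-dimensionalism: worlds are labelled by which
# chemical plays the "watery stuff" role there.
worlds = {"Earth": "H2O", "TwinEarth": "XYZ"}

# 'Watery stuff' behaves like a primary intension: it can be
# evaluated a priori at any world of evaluation, without knowing
# which world is actual -- it picks out whatever fills the role.
def watery_stuff(w):
    return worlds[w]

# 'Water' behaves like a secondary intension: rigid. Its referent
# is fixed by whichever world turns out to be actual (the a
# posteriori step), then held constant across all worlds.
def water(actual_world):
    referent = worlds[actual_world]   # requires empirical knowledge
    return lambda w: referent         # rigid designation

# 'Watery stuff' varies across worlds of evaluation...
assert watery_stuff("Earth") == "H2O"
assert watery_stuff("TwinEarth") == "XYZ"

# ...whereas 'water', once the actual world is fixed, does not:
water_if_earth_actual = water("Earth")
assert water_if_earth_actual("TwinEarth") == "H2O"
```

The sketch makes the point in miniature: the semi-omniscient agent can compute `watery_stuff` everywhere, but cannot evaluate `water` until she learns which world is hers.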

Okay, so our semi-omniscient being knows pretty much everything except for what world she's in (and hence what rigid designators like 'water' refer to). And she knows all this a priori. So all a posteriori sentences must in some sense depend on one's location in modal space. They depend on which world is 'actual' (in the indexical sense, i.e. which world is ours).

We can now clearly see the problem diagnosed in my post Misusing Kripke; Misdescribing Worlds, i.e. why a posteriori necessities are generally uninteresting. Our ideal agent can see all the possible worlds, she knows just what they're like, in qualitative terms. So she knows what is "necessary" in any interesting sense. She just doesn't know how to describe it using rigidly designating terms. (You might think of her as a practising Descriptivist!)

Moreover, this ideal agent would surely know any ethical truths there are to know. (She doesn't need to know which world she's in for this. Her location is quite irrelevant to the question of whether some particular action was wrong.) And this conclusively refutes Synthetic Ethical Naturalism (or any other a posteriori meta-ethic).

Must-read philosophy books

While I'm in the mood for lists, let me ask: what "must read" books would you most highly recommend to all advancing philosophy students? Here are my picks...

1. Derek Parfit, Reasons and Persons
2. Frank Jackson, From Metaphysics to Ethics : A Defence of Conceptual Analysis
3. Saul Kripke, Naming and Necessity
4. Dave Chalmers, The Conscious Mind : In Search of a Fundamental Theory
5. David Lewis, On the Plurality of Worlds

Honourable mention: Michael Smith, The Moral Problem

And what philosophy books would you recommend to non-philosophers? J.S. Mill's On Liberty stands out for me, and perhaps Dennett's Consciousness Explained (so long as the reader remains sufficiently skeptical of the title's claim). And the lessons of Plato's Euthyphro haven't been sufficiently recognized in our broader culture, of course. (Though modern readers might get more out of a secondary text, e.g. James Rachels' The Elements of Moral Philosophy.) Any other suggestions?

(For some related lists, see last year's Top 5 Philosophers and Book Meme.)

Open Thread

I don't think I've seen a philosophy blog with an 'open thread' before. I don't know if I get enough commenters for this to work, but hey, it's worth a try.

Discuss whatever philosophical topics you feel like. Special invite to "lurkers" (readers who don't usually comment), and readers who don't have blogs of their own: what would you post about if you had a blog? Feel free to post it here!

You're also welcome to complain about this blog, call me names, request a topic for me to post about in future, or whatever else tickles your fancy. Just don't let the open thread stay forever commentless. That would be lame.

Skydiving

I jumped out of a plane this morning. 'Twas great fun -- highly recommended! I found the Aerial Skydiving team very friendly and helpful, so any readers in Canberra are hereby encouraged to give Graeme and Chris Windsor a call. :-)

Friday, April 21, 2006

Multiversal Ethics

If modal realism is true (i.e. all possible worlds are equally real) then choice is meaningless. You can't change what happens, but only who it happens to. It is already fixed that every possibility will occur in some world or other. If we have choice at all, it is merely that we get to choose which world is ours.

Perhaps I can make it so that I am the good guy rather than the bad guy. But this makes no broader difference. Others will be helped and harmed (respectively) either way. I just get to choose which of 'me' and 'my counterpart' gets assigned to play which role. Put another way, I have various counterparts performing various actions, and I get to choose which one of these guys to locate myself within. This cannot be a morally significant decision. It makes no difference to the worlds. It's like choosing which movie to watch. We don't really influence the events therein. We merely choose which events to view, which pairs of eyes to see out of. The spectator's self-locating decision affects no-one but themselves.

David Lewis denies that his modal realism has such ethically repugnant implications. He claims that we should only care about our own world, and making this a better place. But apart from the crude tribalism, it's not as if we can change the world itself. Each world is what it is, and according to modal realism they are all equally real. So all we can do is change which world is ours. We can make "this world" a better place de dicto, by changing which world 'this' refers to. We don't thereby change the world itself (de re). As I wrote once before:
All we'd be doing is moving ourselves so that we were 'closer' to the well-off people rather than the suffering ones. And that hardly seems like a virtuous move (by my intuitions anyway).

Jeremy doesn't see it this way though:
If my action causes the bad, then I am to blame. If it causes the good, I'm to be evaluated positively. So if I'm the one who does the bad thing, and some duplicate of me in another part of the multiverse does the good thing, then I'm to blame and he is to be congratulated. So my not doing an action might logically (but not causally) entail someone else doing it, and it would lead to a result that's exactly similar to what happens if I do the other action. But that doesn't mean my action isn't bad if it's the bad one. In the world where I do the bad action, my action is bad. In the world where my duplicate does the bad action, his action is bad. The total amount of good or bad in the universe is irrelevant to whether my action right here is bad.

That means we should blame the one who does the bad thing, even if any choice leads to the same resulting future when you factor in the whole multiverse. This means that, even if this view that we have no evidence for is true, moral evaluation still makes sense.

By appealing to our standard moral practices of praise and blame, I think Jeremy fails to fully take on board the radical implications of modal realism here. Sure, we can still distinguish the good and bad actions. But an agent's choice between them has no significance, for the reasons explained above. The actions will all take place regardless of the agent's choice. So it makes no sense to say that he "should have done otherwise". He did! (At least, his counterpart did.) At worst you might criticize an agent for the life he chooses to locate himself within. Perhaps he has poor aesthetic taste, preferring to experience the bad actions rather than the good ones. But again, it's not as if his self-locating/spectatorial decision impacts upon anyone else.

Worse, you might have suspicions about these "locative powers" according to which agents have the semi-magical ability to influence in which world their consciousness resides. Perhaps the most coherent interpretation of modal realism would simply deny that the "agents" in each world have any real choice at all. And then moral evaluation goes right out the window. (We can't even justify blame on pragmatic grounds, since nothing can influence multiversal consequences.)

Finally, Jeremy suggests that non-consequentialists are unaffected by the argument:
The argument is that the net result wouldn't be any different if I did an action usually considered better [or] worse, and therefore the action isn't really better or worse because the consequence of either action would be the same. If consequentialism is true, then the only morally relevant features of the action are its consequences, but anyone who denies consequentialism isn't going to buy this.

Again, this seems to underestimate what's going on here. The most plausible ethical views allow that actions have other morally relevant features besides consequences, but nevertheless recognize that these other features are ultimately grounded in consequential concerns. Ethics is important because (we assume) our choices can affect others and change the world. If this assumption is false, as modal realism would have it, then our decisions - and hence the norms governing them, i.e. ethics - are inconsequential, in the most derogatory sense. To care about ethics even when it makes no difference would be arbitrary and fetishistic.

Thursday, April 20, 2006

What Mightn't Have Been

It's sometimes held that, though there are no necessary beings, yet it is impossible for there to have been nothing at all. Each possibility is represented by a possible world, and some accounts (e.g. Lewis' identification of worlds with spatio-temporal regions) leave no conceptual room for an empty possible world, so total emptiness is -- according to such accounts -- not a possibility.

I find such reasoning suspicious. It's generally the case that you can (conceptually) subtract a contingent entity from the world, and nothing need replace it. The result is still possible. (Another "possible world", if you want to call it that.) This fits nicely with Humean views banning necessary connections between distinct existences, and all that. Not to mention Lewis' own combinatorial principle. So why should this suddenly change when we're down to the last object? Why would its subtraction necessarily require replacement by something else? The suggestion seems awfully ad hoc.

Thinking modally, it seems clear that its subtraction does not require replacement. It is merely Lewis' definition of a possible world that leads to his anti-void conclusion. So we find that his account fails to track fundamental modal matters in this respect. (I use Lewis as an example; the same will be true of any account of possible worlds which leaves no room for nothingness.)

We might conceive of possible worlds as representing only intrinsic existences. By this I mean that they don't make claims about anything 'external' to the world itself. On Lewis' picture, for example, worlds are simply spatio-temporally isolated regions -- universes, in other words.

Each possible world may or may not have the additional property of obtaining, or being made concrete or "actual" (in the absolute, non-indexical sense). Our world has this property, for example. We typically assume that no others do. But if worlds are characterized intrinsically, then there's nothing to stop multiple worlds from being actualized. And of course Lewis himself holds that all possible worlds are concrete. But on such a picture, why couldn't it be that none of the worlds are actualized?

This picture is puzzling because it leaves meta-modal facts of 'actualization' unaccounted for. Which worlds get this "special property" is, it seems, no simple first-order fact internal to the worlds themselves. But on the standard view [see below], extra-worldly meta-modal facts are static and necessary. This entails what I call "narrow fatalism".

It might make more sense to instead conceive of worlds as maximal compossibilities, and thus containing within themselves some 'external' or meta-modal facts. Then, for example, the existence of multiple independent spatio-temporal universes would constitute a single possible world. After specifying all of the "intrinsic" or positive facts, the world would need to add an extra claim to the effect of "that's all". For example, our possible world might consist of our universe and that's all. Then it would be impossible for any other universe to be actualized alongside our own, for that would contradict the "that's all" clause.

But then why can't you have a possibility that consists in nothing but the "that's all" clause? It seems perfectly coherent, given our assumption that there are no necessary beings. Insofar as an account of possible worlds rules out this possibility, I'm inclined to think that the account fails to accommodate all the possibilities. For this is surely one of them!
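To make the "that's all" proposal a little more explicit (this formalization is my own gloss, writing $Cx$ for "$x$ is concrete" and $U$ for our universe): an ordinary world conjoins its positive facts with a totality clause, whereas the empty world just is the bare totality clause:

$$w_{@} \;=\; U \text{ exists} \;\wedge\; \neg\exists x\,(Cx \wedge x \text{ is not part of } U)$$
$$w_{\varnothing} \;=\; \neg\exists x\, Cx$$

On this rendering there is no obvious incoherence in $w_{\varnothing}$: it is simply the "that's all" clause with nothing preceding it.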

Aside: note that we no longer need to think of 'actualization' as a meta-modal property which attaches to worlds. Instead, we might give a deflationary account, according to which a world is actual just in case the possibility it uniquely describes is realized. In this way, the meta-modal fact can "piggy-back" on the worldly facts. A world's actuality merely consists in its claims all being true.

A final point: we might think that various abstract or 'Platonic' objects exist necessarily. Possible worlds themselves might be an example. Presumably even if nothing (concrete) existed, still our universe would have been possible. That's certainly the case on the standard S5 modal-logical picture, which sees all modal facts as static and necessary. (If p is possible then it is necessary that p is possible. The contingent facts may vary from world to world, but the modal facts are "extra-worldly", concerning a static modal-multiverse which remains unaffected by your world's location within it.)
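In axiomatic terms (my gloss, not anything from the original discussion), the static picture corresponds to the characteristic S5 principles, on which every modal fact is itself non-contingent:

$$\Diamond p \rightarrow \Box\Diamond p \qquad \text{(axiom 5)}$$
$$\Box p \rightarrow \Box\Box p \qquad \text{(axiom 4)}$$

Whatever is possible is necessarily possible, and whatever is necessary is necessarily necessary: the modal facts never vary from world to world.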

In that case, the above discussion should be reinterpreted as concerning the question whether there could exist nothing concrete. (Perhaps nothing internal to a world, if we think of worlds as containing the contingent concrete stuff, and Platonic entities as genuinely "other-worldly", i.e. existing outside of worlds.)

We might instead consider a dynamist meta-modal view, according to which even modal facts can vary from world to world. On this view (which is of dubious coherence), some things are metaphysically possible which might not have been. And other things aren't possible, but they could have been possible if things had turned out differently. (They just couldn't have been actual.) So we might try to defend the possibility of total nothingness by suggesting that our necessities might not have been necessary. But note that they still couldn't have failed to exist (since they are actually necessary, after all), so that doesn't help after all. At best, a totally empty world might be possibly possible, but not actually possible, if we accept that some things are contingently necessary. But this is getting messy and borderline nonsensical, so I'll leave it at that.
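The contrast between the static and dynamist pictures can be made vivid with a toy Kripke-semantics check. This is purely my own illustrative sketch (the frames, world labels, and valuation are all invented for the example): with universal accessibility (S5), whatever is possible is necessarily possible; on a non-transitive accessibility relation, something can be possibly possible at a world without being possible there, i.e. axiom 4 fails.

```python
# Toy Kripke semantics: worlds are labelled integers, R is the accessibility
# relation as a set of pairs, and truths maps each world to a truth-value.

def poss(w, R, truths):
    """Diamond: the sentence holds at some world accessible from w."""
    return any(truths[v] for (u, v) in R if u == w)

def nec(w, R, truths):
    """Box: the sentence holds at every world accessible from w."""
    return all(truths[v] for (u, v) in R if u == w)

worlds = [0, 1, 2]

# S5-style frame: accessibility is universal (an equivalence relation).
R_s5 = {(u, v) for u in worlds for v in worlds}

# "Dynamist" frame: 0 sees 1, and 1 sees 2, but 0 does not see 2.
R_dyn = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2)}

p = {0: False, 1: False, 2: True}  # p holds only at world 2

# S5: wherever Diamond-p holds, Box-Diamond-p holds too (axiom 5).
diamond_p_s5 = {w: poss(w, R_s5, p) for w in worlds}
print(all(nec(w, R_s5, diamond_p_s5) for w in worlds if diamond_p_s5[w]))
# prints True

# Dynamist frame: at world 0, p is possibly possible (via world 1),
# yet p is not possible -- a failure of axiom 4.
diamond_p = {w: poss(w, R_dyn, p) for w in worlds}
diamond_diamond_p = {w: poss(w, R_dyn, diamond_p) for w in worlds}
print(diamond_diamond_p[0] and not diamond_p[0])
# prints True
```

World 0 in the second frame thus models the dynamist's "possibly possible but not possible" status that the post finds borderline nonsensical under S5 assumptions.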


Redundant Contrastives

BV claims that the question "Why is it that P rather than ~P?" presupposes that ~P is possible. This strikes me as mistaken. The question could be perfectly well answered with: "because ~P is impossible."

Contra BV, I think the questions "Why is it that P?" and, for that matter, "Why isn't it that ~P?" are clearly equivalent to the above question. To ask why something is, is to ask why it isn't not. The question is always implicitly contrastive in this broad sense. Another way to put it is that the contrastive version is redundant: to ask "why P rather than ~P?" is to ask nothing beyond "why P?".

See also: Why does the universe exist?

Tuesday, April 18, 2006

Suicide and the End of Persons

Velleman gave a great talk today on his paper Beyond Price, arguing for (amongst other things) the immorality of escapist suicide. The initial argument runs like this: welfare or happiness is only worth caring about for the sake of the person. So to sacrifice the latter for the sake of the former is a kind of practical irrationality. It's like sacrificing happiness for the sake of money, an 'end' for the 'means' with which to achieve it: the thing attained is only valuable for the sake of that which is given up. So you end up with nothing of value at all.

To separate a person from their interests in this way doesn't make a lot of sense to me. Rather, it strikes me as analytic that sacrifice involves harms, not benefits, so you cannot sacrifice someone by benefitting them. Hence, if death is in a person's best interests, then their suicide cannot plausibly be described as self-sacrifice. Sure, they sacrificed their continued living; but they did that for the sake of their interests, which is to say, for them.

One might worry here that the person no longer exists afterwards, so the action can't achieve anything for their sake. (To extend the title pun: the ending of a person's life means that the person can no longer be an end.) Some people claim that death can't harm you, for just the same reason. Both claims are shortsighted. We should look at the person's life as a whole, and ask what would make that life go best. In the latter case, premature death makes the life go worse than it could have, and so is a harm. In the former case, death makes their life as a whole go better than if they were forced to live on in aimless misery; thus death comes as a boon to the person, and can properly be welcomed for their sake.

Again, the Kantian idea that we could want a person's life to go less well "for their sake", strikes me as simply nonsensical. When considering the idea of a person as an end in themselves, we should consider them in terms of their welfare, not their pulse. It's perfectly rational to sacrifice the latter for the former. Indeed, it's the opposite that would be irrational: to harm the person, i.e. make their life go less well, merely in order that their life might extend further in time? That sounds far more wrongheaded to me.

Besides these external disagreements, I also have an 'internal' query: Velleman allows that suicide can be permissible if done for reasons other than self-interest. In particular, it's okay to do for the sake of one's dignity. But then why isn't this too an instance of sacrificing the 'end' (person) for a 'mere means' (the person's dignity)? Aren't dignity and interests alike valued for the sake of the person? But perhaps this has something to do with the technical Kantian use of 'dignity', in contrast with 'price', as representing some kind of incommensurability? I'm not too familiar with any of that. Explanatory comments would be most welcome!

Despite the above, I found myself agreeing with just about everything Velleman said (which I don't think really depends too heavily on the bits I've criticized here, though Velleman might not see it that way). He emphasized the importance of having projects and caring about things, which I think is central to a person's well-being. Indeed, it seemed like the real reason behind his opposing escapist suicide is the idea that the person is thwarting their own capacity to create meaning and value in their life. In response to the example of a depressed widow contemplating suicide because she doesn't care about anything else after losing her spouse, Velleman condemned her attitude: maybe she doesn't want to find new meaning and goals in life, but she ought to, as anyone who loved her would want this for her. The widow's attitude thus betrays her lack of self-love.

Velleman claims that suicide is only ethical in cases where it's compatible with self-love (in his sense). I can agree with this, because I think self-love tracks one's true interests. The widow would be better off enduring her grief and then finding new meaning in her life. To pursue and achieve other goals or projects would make her life better. So it seems to me that the real force behind Velleman's anti-suicide argument is that escapists are neglecting their own welfare. (This contrasts with Velleman's own characterization, which sees escapists as advancing their [hedonistic?] welfare at the expense of their person.)

Of course, suicide isn't always bad for you, e.g. if one is satisfied that one's life story has reached its conclusion, and further extending it would be entirely miserable and lacking in meaningful direction. But Velleman didn't seem inclined to count such cases as bad or 'escapist'. I gather he wants to attribute the difference to the 'dignity' of personhood rather than 'welfare', but I now wonder if this is a merely terminological difference between us. Indeed, while I earlier suggested that his conception of the person as an end-in-themselves should be tied more closely to (my concept of) welfare, perhaps there wasn't really any gap here to begin with. Put the other way: perhaps my concept of welfare is near enough to his conception of the 'end' of persons, that he needn't object to it after all. While I initially thought we had a disagreement here about what it is to value persons as ends in themselves, perhaps we're actually only disagreeing about how to describe it? (Though I guess we'd have to have very different conceptions of welfare in order for such a confusion to arise in the first place.)

Monday, April 17, 2006

Fictionalist Necessitation

According to modal fictionalism, some modal sentence p is true iff according to PW, p*; where PW is a fiction based on Lewis' theory of possible worlds, and p* is the Lewisian translation of p. Now, Chihara's The Worlds of Possibility (p.181) claims that this analysis makes the rule □φ→φ (which he calls 'necessitation'; more standardly, the T axiom) come out invalid. Chihara writes:
The antecedent tells us that, according to PW, (□φ)*. Why should that enable us to conclude that φ is true? After all, PW is acknowledged to be a piece of fiction... how can it be legitimate to infer the truth of φ from what this wildly implausible story says about (□φ)*?

But back on p.170 he quoted Rosen's exposition of modal fictionalism, explaining that PW includes an encyclopedia consisting in a list of the "non-modal truths about the intrinsic character of the universe". That is, PW includes all non-modal truths about our world, in addition to a whole bunch of recombinant falsehoods. On the Lewisian story that PW tells, □φ is true iff φ is true in all possible worlds; so (□φ)* says that φ holds at every world of PW. Now, note that one of the worlds in the fiction of PW is an accurate representation of the actual world. So if (□φ)* is true according to PW, then this entails that φ is actually true, as required.
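Spelling out the inference step by step (my reconstruction of the reply, writing $@$ for the fiction's duplicate of the actual world):

\begin{align*}
&1.\ \text{According to } PW\!:\ (\Box\varphi)^* & \text{(antecedent)}\\
&2.\ (\Box\varphi)^* \;=\; \text{``}\varphi^* \text{ holds at every world of } PW\text{''} & \text{(Lewisian translation)}\\
&3.\ PW \text{ contains a world } @ \text{ matching all actual non-modal truths} & \text{(Rosen's encyclopedia)}\\
&4.\ \text{So } \varphi^* \text{ holds at } @\text{; hence } \varphi \text{ is actually true.}
\end{align*}

The fiction's wildly implausible recombinant content never enters the inference; only the encyclopedia component does the work.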

Am I missing something here?


The Conceptual Development of 'Rights'

Thomas Pogge's 'How Should Human Rights Be Conceived?' traces the interesting development from "natural law" through "natural rights" to the modern notion of "human rights" -- a development that largely consists in narrowing the content of morality.

The idea of a "natural moral law" doesn't come with any obvious restrictions on content built in. (So it is a favourite of religious homophobes and others with arbitrary ethical views.) But as Pogge explains:
Expressing moral demands in the natural-rights rather than natural-law idiom involves a significant narrowing of content possibilities by introducing the idea that the relevant moral demands are based on moral concern for certain subjects: rightholders.

This undermines the notion of religious duties, since God surely has no need of rights. More generally, it excludes all the arbitrary concerns that have nothing to do with any individual's interests, or harms and benefits. So that's progress of sorts. But it already excludes too much. For instance, the non-identity problem shows that one can harm humanity (or "people in general") without harming any particular person. And we might also reasonably hold that individuals have a moral duty to develop their talents, etc., whereas there's little sense to be made of the notion of a "right" against oneself.

The notion of 'human rights' is even more restricted. It is an essentially political notion, whereby the violators "must be in some sense official". Victims of theft aren't said to have had their "human rights" violated; not unless this arbitrary confiscation of property was undertaken by government agents acting in an official capacity, or some such. Human rights thus offer protection "only against violations from certain sources". (Though Pogge goes on to argue that the relevant sort of "official disrespect" may be manifested in a wider range of situations than we might at first expect -- including, for example, official inaction when protection is needed, etc.)

Sunday, April 16, 2006

Commonplace Confusions

The other week I proposed 10 things everyone should know in or about philosophy. Judging by some people's reactions, perhaps I should've kept the two types separate. If you want a purely metaphilosophical list, let me recommend DuckRabbit. But I also wanted to explore the question of which commonplace assumptions or 'folk beliefs' philosophical reflection should lead us to reject as false. In addition to those mentioned in my earlier post, a few other lessons spring to mind:

1) Not all actions stem from selfish motives. Sure, you can define "selfish" so that doing whatever you choose (hence, "want") counts as "selfish". But that trivializes the thesis -- it says no more than that actions stem from motives. So shut up with the knee-jerk cynicism already.

2) Rationality does not require always and only acting to advance your self-interest. I think this is pretty conclusively established by Parfit's arguments about incomplete relativity. We should either be instrumentalists or universalists about reasons. Rational egoism is an unstable and ad hoc intermediary position. (I suspect that most people who unthinkingly espouse this view are really instrumentalists who simply assume that people are purely self-interested in their aims. But see #1 above.)

3) Knowledge does not require certainty. We learn this from considering Matrix-like skeptical scenarios.

4) Whether "a tree falling in an empty forest makes a sound" is not a profound problem. It's simple once you disambiguate the term 'sound'. The falling tree causes soundwaves, but no conscious experiences or sound qualia. End of story.

5) Deontological absolutism and moral objectivity are not the same thing! Most people seem to believe that if you deny that lying is always wrong, then you should be a relativist. That's stupid. One can (and should!) have context-sensitive moral principles, as in consequentialism, whilst still insisting that these principles yield the objectively correct answer in any particular situation.

(When you add this to the points made in my earlier post, it looks like most people are simply thoroughly confused about what objectivity actually involves. Most unfortunate.)

6) Pure self-creation is incoherent. Choice requires a prior basis which underlies judgment. Corollary: 'Nurture' requires 'nature' to guide it. A genuine 'blank slate' could do no more than a rock.

What other commonplace confusions and rational remedies can you think of?
