Tuesday, January 27, 2009

Is Sophia a Jealous Goddess?

Peter Fosl laments that "philosophers are more often than not pre-occupied with status and acquisition", just like everybody else. But is an interest in status and acquisition necessarily objectionable? The life of the mind is commonly associated with an ascetic lifestyle (since at least Socrates, I suppose), and it can seem very appealing to imagine valuing the intellect so highly that other "petty" concerns fade into the background. But in other moods, this strikes me as a kind of snobbishness. There's no reason to expect that philosophers should value their art exclusively. If, in addition to thinking and theorizing, one also wishes to acquire the latest gadgets, and vacation (or, indeed, conference) in pleasant locations, what's the harm in that?

Perhaps it's a matter of "guilt by association". Most folks seem deplorably anti-intellectual, and so I grew up defining myself partly in opposition to them: not only was I a person who loved reading, thinking, etc., but further, I wasn't one of those "others" who cared (only?) about sports, celebrities, cars, partying, and so on. Human minds being what they are, our contempt for one trait (e.g. anti-intellectualism) can easily spread to others that have been correlated with it in our experience. And, in fairness, I guess they can serve as a kind of indirect, Bayesian evidence, if these signs are all that we know about a person. But that doesn't make such interests intrinsically repugnant, so if some people combine a genuine love of philosophy with more conventional concerns, it's hard to see why this should be objectionable.

(It would be more of a worry if one became obsessed with such trivialities to the point of ceasing to care about what really matters. But I don't see much reason to think that's what's going on here. Fosl asserts that most philosophers no longer care about wisdom, but all he really shows is that they care about other things in addition.)

In another odd part of the article, Fosl rails against Brian Leiter's PGR rankings:
Philosophers have even taken to ranking their programs in a linear, hierarchical way – or rather deferring to the barely-informed rankings of others. That these rankings are actually taken seriously in a profession so diverse, so pervaded by idiosyncrasy, so flush with an overabundance of talent, and so thoroughly populated by people sophisticated enough to know better, is simply breathtaking.

Is it really so "breathtaking" for someone to "take seriously" the idea that, say, a budding young philosopher of language would do well to look at Rutgers, or that Arizona is a powerhouse of political philosophy? Does the existence of "idiosyncrasy" and widespread talent really preclude our making any useful generalizations whatsoever about the comparative strengths of philosophy departments? Fools may be led into error if they attempt to apply their knowledge of the rankings too crudely, of course. But to entirely dismiss the value of such rankings is itself a... well, not exactly sophisticated position.

Fosl concludes:
Academics now flee or aspire to flee to institutions where status and money pool. Finding philosophers devoted principally to the love of wisdom and to sharing it broadly has become, as Spinoza said of all excellent things, as difficult as it is rare.

But again, these strike me as compatible (depending, perhaps, on how heavily one stresses 'sharing it broadly'). Quite apart from any material benefits, sheer love of philosophy provides ample reason to want to work in a top institution, with outstanding colleagues and intelligent, motivated students. I guess an egalitarian might prefer to reach out to less able students, and the best of luck to them. But there's nothing anti-philosophical about wanting to nourish and develop the very best.

PhilPapers public launch

The wonderful PhilPapers website is now open to all. As David Chalmers introduces it:
The core of PhilPapers is a database of close to 200,000 articles and books in philosophy. Around this database, the site has all sorts of tools for accessing the articles and books online wherever possible, for discussing them in discussion forums, for classifying them in relevant areas of philosophy, for searching and browsing in many different ways, for creating personal bibliographies and personal content alerts, and much more.

The best way to get an idea of what PhilPapers can do is to go to the site and try it yourself (we've compiled a basic introduction to some of the features). Even a casual browser can browse listings for new and old papers, search for papers in a given area or by a specific author, read the discussion forums, and so on. However, we encourage you to create a user account, which enables many more sophisticated features. If you do this, you'll have a profile page from which you can set up personal research tools such as bibliographies, filters, and content alerts (via RSS or email). Your profile page will include a list of your own work (compiled via name matching), which you can edit where appropriate. With a user account, you can also submit new entries (giving publication information and/or a link, and optionally uploading a paper to our repository), edit and categorize existing entries, and contribute to discussion forums...

This is the future of philosophy on the internet. Check it out.

Wednesday, January 21, 2009

Are Prudential Reasons Special?

Alex writes:
It seems that prudential reasons are intimately tied to what we want in some manner that moral reasons are not. I hope to show that this claim is false.

Sounds false to me. I guess the putative "tie" could go in either direction: from 'wanting' to reasons, or from appreciating reasons to 'wanting'. But in neither case does there seem anything special about prudential reasons in particular.

From 'wanting' to 'reasons': A desire theorist may think that prudential reasons are given by what we would ideally want for our own sakes, or something along those lines. But then they will presumably give a similar analysis for moral reasons, only with a different restriction (or no restriction at all). Some idealization is necessary, of course, because our actual present desires may not suffice to ground either morality or prudence -- cf. future Tuesday indifference.

From 'reasons' to 'wanting': We can imagine an 'aprudentialist' who doesn't care about his own future welfare, just as we can imagine an 'amoralist' who cares nothing for the welfare of others. Normative reasons are never guaranteed to be motivating -- agents may be irrational, after all, or simply act on desires which aren't guaranteed to align with either prudence or morality, as noted above.

Moreover, the impassioned may care more about loved ones and valued projects than they do about their own welfare, in which case (some) moral reasons may have greater motivational force than prudential reasons. In this case, the motivationally significant distinction is not between prudential and moral reasons, but between internalized and merely recognized values. The latter are experienced as 'alienating', insofar as they place demands on us that conflict with our internalized 'wants' or preferences. Now, it happens that most folks have internalized their own interests but not everyone else's, and hence it's only moral reasons that they experience as alienating. But don't forget that some moral reasons may also stem from internalized values, so not all will be alienating in this way. And even this limited connection is a merely psychological fact, of no great philosophical interest.

Overall, I'm pretty skeptical of the distinctive philosophical interest of prudential reasons, or of any similarly restricted class of practical reasons, for that matter. Whatever interest the prudential-moral content distinction appears to have may be better captured by the distinction between internalized and uninternalized (alienating) values.

See also: Moral Roots and Alienating Aspirations.

Saturday, January 17, 2009

Only Action is Practical

'Which action, of those available to me, would be best?' -- or, in other words, what ought I to do? -- may be considered the basic question of practical moral deliberation. There are of course other questions we may ask -- e.g. 'what sequence of actions, across my remaining lifetime, would be best?' -- but it's important to note that these are different questions and may fail to 'integrate' in any simple way. (In my SaintOrSinner case, for example, one is obliged to reject a divine deal, even though the ideal sequence of actions would begin by accepting the deal.) In this post I want to argue that my above 'basic question' (i.e. concerning an action) is central to practical morality in a way that the other questions (e.g. concerning sequences of acts) are not.

To begin, note that the standard question 'What ought I to do?' is a question about an individual act. For example, "Should I accept the divine deal or not?" is an important moral question that someone might deliberate over. If they reason correctly, they will reach the conclusion "no, reject it". Morality is action-guiding, and this is the guidance it offers in this case.

What about the other questions, and their divergent answers? In particular, what are we to make of the question "what sequence of actions would be best?" and its answer "taking the deal, then doing P, Q, R..."? Doesn't this suggest we should take the deal? Simply, no. We've already answered - negatively - the question whether to take the deal. Now we're considering a different question, and the answer to it is completely irrelevant to our earlier practical question. We are only tempted to think otherwise because we expect, erroneously, that the questions 'integrate', i.e. that the desirability of {X, P, Q, R, ...} implies the desirability of X, simpliciter. But it doesn't -- that's simply a fallacy, as proven previously. So, while it may be of theoretical interest to learn that taking the deal is part of the best lifetime sequence of acts available to you, this has no practical implications. In particular, it does not imply that you should take the deal.

The above discussion suffices to convey my key point. But it may be helpful to go into a bit more depth. So I now want to explain why the 'best sequence' question is a matter of theoretical rather than practical reason.

Practical reasoning concludes in action (or, to put it more neutrally, let's say implementation). We can reason about how to act, and then do so. But while we can think about what the best sequence of acts would be, we can't implement this as the conclusion of our reasoning -- at most, we can implement but a part of it. (If we could implement it "all in one go", so to speak, then it would count as a single act - perhaps with many parts - rather than a sequence of distinct acts. This is a telling point, which I'll return to below.) The reasoning instead concludes in mere belief, and so is really theoretical reasoning, albeit about a topic of moral interest.

You might respond: "no, once I realize that the best sequence of acts would be {X, P, Q, R...}, I can immediately begin to implement it, by doing X." But again, 'do X' was not your conclusion. It's something distinct. So it must be the conclusion of a separate piece of (bad!) practical reasoning from your new belief to the doing of X. So forget this fallacious follow-up and return to the (good) reasoning that concluded in the desirability of the whole X-sequence. Now, because what you've actually implemented here is something distinct from what your reasoning licensed, it remains to be seen whether you've actually done as you should, i.e. whether there exists a chain of good practical reasoning for you that concludes in doing X (simpliciter).

Note that the performing of X is the performing of an action, so what it answers to (normatively speaking) is not the question 'is this part of the best sequence of actions', but the basic practical question: 'how should I act?'. And we've already seen that it's a fallacy to move from the one to the other. It may be that you ought not to do X (or take the deal) -- that it would be morally wrong, the wrong action, the wrong thing to do, however you want to say it. Being part of a good possible sequence doesn't change any of this. It doesn't preclude the action's being inadvisable.

Morality is supposed to be action-guiding, not action-sequence-guiding. There's a reason for this: we can only implement actions, not whole sequences thereof. This isn't really a substantive claim -- actions may be effectively defined as implementations of agency, or that which we can 'do'. So it's a truism that only action is practical (in contrast to the other evaluative questions I've mentioned). Still, it can help to remind ourselves of truisms, if only to refocus our attention. The substantive question in this vicinity is 'what counts as an action', or more precisely, what can an agent directly implement as a conclusion of practical reasoning?

"An individual action" is a true, but trivial answer. The fact that it's true explains why 'what should I do?' is, as I said, the basic question of practical moral deliberation, and why the best-sequence question isn't really 'practical' at all. On the other hand, the fact that it's trivial suggests that, philosophically, we would do better to refocus on the question, 'what can I do?'

Thursday, January 15, 2009

Ignoring Reality Ain't So Ideal Either

Paul is unimpressed with non-ideal theory that takes into account agents' own moral failings or unreliability when prescribing what they ought to do:
Moral obligations ought not to depend on an agent’s character. We ought to insist that an agent with a bad character act entirely like an agent with a good character...

That seems inadvisable. Cases like the bad squash loser show that we ignore our moral failings at our peril. For a more high-stakes example, suppose God offers me a deal that's a freebie for a saint, but insanely risky for the rest of us:

(SaintOrSinner) God will save one innocent life now, but if I ever fail to meet a moral obligation for the rest of my life then a million innocents will be tortured and killed.

Should I take the deal? If I'm a saint (and know it), then it's clearly obligatory: I can save a life at no real cost, since - as a saint - I'm sure to meet my later obligations whatever they may be. But since I'm not a saint, it's very clearly impermissible -- I'd almost certainly be condemning a million innocent people to torture and premature death.
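To make the stakes concrete, here's a toy expected-value sketch. The compliance probabilities, the number of remaining obligations, and the independence assumption are all mine, invented purely for illustration:

```python
# Toy model of the SaintOrSinner deal (all numbers hypothetical).
# Assume the agent faces n future obligations, meets each one
# independently with probability p, and a million innocents die iff
# any obligation is ever breached; one life is saved for sure.

def expected_net_deaths(p: float, n: int, at_stake: int = 1_000_000) -> float:
    """Expected innocent deaths from taking the deal, net of the life saved."""
    prob_perfect_record = p ** n          # chance of never slipping up
    return (1 - prob_perfect_record) * at_stake - 1

for p in (1.0, 0.9999, 0.99):
    print(p, round(expected_net_deaths(p, n=10_000)))
# p = 1.0    -> -1          (a saint: the deal simply saves a life)
# p = 0.9999 -> ~632,000    (a near-saint: expected catastrophe)
# p = 0.99   -> ~1,000,000  (the rest of us: all but guaranteed disaster)
```

Even an agent who slips only once in ten thousand obligations should expect to cause hundreds of thousands of deaths by accepting: the verdict flips entirely on facts about one's own (un)reliability.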

More generally, it's fallacious to move from the fact that it would be good to do P and Q to the conclusion that it would be good to do P. Remember that evaluations of act-aggregations and of individual acts may diverge:
If my future self cannot necessarily be trusted to do the ideal thing, this could radically alter what current decision would be for the best. Suppose my currently φ-ing could lead to (i) the best possible outcome if I were to follow this up with a series of acts S which I could, but actually won't, perform; and (ii) the worst possible outcome otherwise. In those circumstances, it is not best for me to φ -- doing so would have very bad consequences, due to my subsequent failure to S -- even though it's part of the best possible life. The divergence arises because at any given time I can only choose how to act then; I cannot perform a lifetime's aggregation of acts with a single decision.

So we need to be clear about what normative question is being asked. It may be advisable both (i) to do P-and-then-Q, and yet (ii) not to do P (because one is unlikely to follow up with Q, and the consequences of P without Q would be disastrous). This is perfectly consistent, because the two answers address different questions: one concerning the aggregation of acts that would be best, and the other concerning what present individual act would be best. It's just a basic fact that in non-ideal agents, these may diverge.

Two postscripts:
1. Although we will typically be more interested in deliberating over individual acts than collections thereof (since we can really only perform the former), there may be exceptions. See, e.g., my complaints about illegitimate appeals to 'Political Reality': they may be fine from an individual perspective, but sometimes we want to deliberate from an explicitly collective perspective, about the question of what 'we' (rather than just 'I now') should do. This shift in focus can be vital for breaking out of a bad equilibrium.

2. My old post on 'Accommodating Unreason' offers some reasons why, at least in low-stakes contexts, it may be better to treat people as if they are morally more reliable than they really are.

Person-Centered Objections to Value Holism

Total utilitarians, and other 'atomists', hold that the contributory value of a life depends only on its intrinsic features, and not on what else there is. 'Value holists' (e.g. average utilitarians) think that "big picture" relational features, e.g. how the life affects the shape of the world as a whole, also matter. Recall that holists may avoid Parfit's repugnant conclusion by rejecting the 'Mere Addition' premise that adding worthwhile lives can't (all else equal) make the world worse. The atomist asks, 'Who is harmed?' But to a holist, there is nothing contradictory about the idea that adding an intrinsically good part may make the whole worse. Is there anything deeply objectionable about this move?  
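To see the disagreement in miniature, consider a toy comparison (the welfare numbers are hypothetical, chosen only to make the divergence vivid):

```python
# Hypothetical worlds: ten lives at welfare 10, plus the 'mere addition'
# of one worthwhile (positive-welfare) life at welfare 2.

def total_value(world):
    return sum(world)                 # the total utilitarian's (atomist) measure

def average_value(world):
    return sum(world) / len(world)    # the average utilitarian's (holist) measure

base_world = [10] * 10
added_world = base_world + [2]        # add an intrinsically good life

print(total_value(base_world), total_value(added_world))      # 100 -> 102: better
print(average_value(base_world), average_value(added_world))  # 10.0 -> ~9.3: worse
```

The added life is good in itself on either view; but for the average utilitarian its relational feature (being below average) makes the whole worse, and no one need be harmed for that to be so.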

As a first step, atomists might seek to fortify Mere Addition by appeal to Huemer's Modal Pareto Principle ('In Defence of Repugnance', p.903):

For any possible worlds x and y, if, from the standpoint of self-interest, x would rationally be preferred to y by every being who would exist in either x or y, then x is better than y with respect to utility.

Like 'Mere Addition', this principle is clearly atomistic: it considers each life in isolation ("from the standpoint of self-interest"), and leaves no room for holistic 'big picture' considerations. I don't mean to deny that it is an intuitively appealing principle. It serves well to highlight an intuitive feature of the atomistic view. But because it is so transparently atomistic, it won't have much dialectical force against those of us who find value holism antecedently plausible. We may simply ask: "why restrict our attention to self-interested standpoints?"

This claim may be given a more principled grounding by appeal to an independently appealing conception of ethics as fundamentally person-centered -- call this "metaethical individualism". (N.B. This is not merely to reiterate the first-order normative claim that promoting individual welfare is what's desirable. Rather, it is to further specify why - or on whose behalf - it is desirable.) The metaethical individualist objects: holists concern themselves with 'the world as a whole', when really the only entities worth ultimately caring about are particular individuals: Tom, Harry, Sally, etc. "Morality is made for man, not man for morality," they insist. Ethical acts are ultimately called for on behalf of particular persons. To aim at anything "larger" is to miss the point.

If this individualistic conception of ethics were correct, it would provide a strong, principled objection to value holism. But my previous post argued that it is not correct. Assuming the arguments in that post succeed, it follows that even atomistic total utilitarians should ultimately conceive of themselves as what I call 'world consequentialists'. They should reject metaethical individualism, even though the content of their axiology is "individualistic" in the sense that they claim that what's good for the world is just to promote individual welfare.

Rather than a deep, principled divide between person-centered atomists and world-centered holists, we find that both atomists and holists are ultimately concerned to improve the world. (So we might say that both are "metaethical holists", in this sense.) They simply have different first-order views about what this involves.

Wednesday, January 14, 2009

Against person-affecting views

Define the 'person-affecting view' as the claim that particular persons (or sentient beings) are what ultimately matter, the ultimate source from which practical reasons stem, or that for the sake of which the reasons exert their normative force. The view has prima facie appeal, but I think the non-identity problem - where a bad event doesn't harm anyone in particular - shows that we should reject this in favour of the 'world-affecting view' that it is the world as a whole, not [just] its particular constituents, that is of ultimate import.

As an extreme example, compare these two worlds:
(w1) Mass sterilization, extinction: everyone alive is sterilized, without exception. There will be no future generations. Most folks are pretty unhappy about this (as you might expect), but life goes on - for them, at least - and their personal welfare is not catastrophically affected.

(w2) Temporary hardship, future utopia: everyone alive suffers some hardship, equivalent (in terms of personal welfare) to the harms they suffer in w1. So everyone in w1 does just as poorly in w2. But then they spawn future generations, all of whom have blissfully wonderful lives.

Obviously, w2 is better than w1. We have reason to prefer future utopia to mass sterilization. If someone implements the latter, we have reason to lament this fact. But why, or for whose sake? No-one was made worse-off by the choice to realize w1 rather than w2. And it would hardly make sense to lament it for the sake of those who will now never get to exist. (There are no such individuals, remember! No Platonic souls waiting by the river Lethe to be born into our world.) Instead, I propose, we must be lamenting the loss suffered by the world as a whole (which exists well enough).

I assume here a general principle of 'Actual Reasons': reasons that actually exist can only stem from - and exert their force on behalf of - entities that (likewise) actually exist.

This forces us to reject the person-affecting restriction, if we do not think that actually existing people are the only people that matter. Reasons can also invoke the welfare of merely possible people as being of moral significance. But the principle of Actual Reasons implies that this invocation must ultimately be on behalf of some actually existing entity. The entity in question cannot be a person, because the interests of actual people give us no reason to favour w2 over w1. So we must appeal to some larger, supra-personal entity, such as 'the world'.

Is there any way to escape this argument (without denying the truism that we have moral reason to prefer w2 to w1)? One of my professors suggests that the 'Actual Reasons' principle should be amended to also allow entities that "will exist if we choose one of the options available to us". But this strikes me as metaphysically suspect. Firstly, it is unclear whether there is any determinate possible entity that we can really refer to here. Secondly: notice that the reason will exist whichever choice we make, so its source must likewise exist in either case -- otherwise, where has the reason come from? It would seem a kind of magic, for possible future persons to reach out and create reasons in the present, even if they turn out to never exist at all!

To preempt any possible misunderstandings, I should emphasize that it is not at all mysterious why facts about possible people's welfare could suffice to give us reasons. (These facts reveal ways we could make the world a better place, and there's nothing especially mysterious about the desirability of that.) What I consider mysterious is not that it's desirable to bring happy people into existence, nor even that this outcome is non-instrumentally desirable or 'good in itself', but the further claim that this is desirable for the sake of those possible people. Note that in the case of retrospective lamentation (e.g. post-sterilization, above), "they" will even be definitely non-existent. So for "them" to really do anything -- including providing the ultimate metaphysical grounds of a reason to lament how things turned out -- seems strictly nonsensical, no?

Tuesday, January 13, 2009

Reifying Possibilia

Sally could have had a child, but didn't. Does that mean that there is some possible child, of Sally's, who doesn't actually exist? Can we give him or her a name - 'Kim', say - and lament for Kim's own sake that s/he wasn't brought into existence?

I think such talk is metaphysically confused. But it can sometimes be useful to talk of mere possibilia (e.g. possible people), so long as we take care to understand what is really being said. Suppose that, if Sally had had a child, it would have had a wonderful, flourishing life. This modal fact might give us a reason to prefer that Sally had the child. We might express this in shorthand by saying, "Kim's welfare gives us reason to want Kim to exist". But we are not really talking about Sally's child Kim - there is no such person to talk about. Rather, we are talking about the world and how it could have turned out. It could have turned out that Sally had a child with a flourishing life, and this fact about the world gives us a reason to wish things had turned out that way.

Compare our talk of fictional characters: it is convenient to reify them, and talk "about" Frodo and Sauron and the rest. We can even say true things using such talk: e.g. Frodo destroyed Sauron's ring of power. This is a truth, not about what actually happened, but about the fiction. There is not really an entity, 'Frodo', and another, 'Sauron', such that the former destroyed the latter's ring of power. You can't reify intentional objects so. There aren't really any such people or things. But it is sometimes convenient to talk as if there were, since that can help us talk about particular features of the fiction.

In the same way, we can talk "about" possible people in this loose, 'de dicto' sense. But it's just words. It would be a mistake to reify them, or to think that there are some particular possible people (de re) of which we speak. Talk of possible people should instead be understood as shorthand for talking about particular features of a way the world could be -- i.e. such that more people exist than currently do.

This theoretical point has at least two important applications: (i) in understanding what's wrong with the ontological argument, and (ii) in seeing that possible people cannot really be the ultimate source of the (even non-instrumental) reasons to bring them into existence.* More on this in a future post...

* = [Of course, there isn't really any 'them' to refer to. So that should read: "reasons to bring it about that the world contains additional people."]

Disincentivizing Discipline

We should want to catch and hold students accountable for acts of academic dishonesty -- plagiarism, copying exam answers, etc. However, at my university at least (I don't know how common this is) reporting any such incident entails a significant burden for the instructor. To begin with, there's the several hours of work involved in filing a formal incident report, with multiple copies of the suspected sources attached and the plagiarised sections highlighted, etc. Then one must put aside another afternoon to attend an official "hearing", present a statement, hear the student's defense, and - if they are found guilty - see them suspended from the university for a year (as seems to be the minimum punishment for such infractions).

Hearing of this (and seeing how burdensome the whole process feels for those who are forced to go through it), I can't help but feel extremely relieved that I haven't detected any such academic dishonesty in the work of my own students. When the process is burdensome, it makes instructors want very much not to go through it. That is, it makes instructors want not to discover any evidence of academic dishonesty in their students' work (whether it exists or not). So, in all likelihood, they won't look too hard, and their wish will be granted.

This is obviously a stupid and counterproductive system. Even if minor violations are suspected, there are strong disincentives against pursuing the matter further. (Imagine: the formal procedures are dreaded by the instructor, and the promised punishment seems unreasonably disproportionate anyhow, for what may be a first time offense. So the instructor reasonably decides to do nothing. As does the next one, and the next. And so a serial offender can get away with cheating their way through college.)

If we actually want to prevent cheating, we need to make it easier to enforce discipline. Here's one possibility: let departments deal with it mostly "in house", with an 'F' grade and a simple explanatory note for administrators to attach to the student's internal academic record. Only bring out the big guns -- the 'hearings', etc. -- if (i) the student wishes to appeal, or (ii) prior notes on their internal record reveal that they have become a serial offender. Indeed, even the latter might be dealt with by the appropriate administrative committee unilaterally imposing some punishment (say 1 year suspension for a third minor offense), with instructors only being called to the hearing if necessary -- e.g. if the student wishes to appeal. In short: we need more plea bargains, fewer trials.

Three (or Four) Distinctions in Goodness

Korsgaard famously distinguished two senses of 'intrinsic value': (1) final (non-instrumental) value, and (2) value held in virtue of intrinsic properties. Mementos or other objects with symbolic significance, for example, may be valued non-instrumentally precisely for their relational properties: they connect us to someone or something we care about, and we value this connection in its own right, not as a means to any other end. So far so good. But I also want to distinguish the question of an object's final value from a further question:

(3) Whether the object is of ultimate concern (a source of reasons, on whose behalf the reason exerts its normative force), or whether it instead contributes value to some 'larger' entity that is our real concern (that for the sake of which we seek value).

Even if a memento has final value, it surely isn't of ultimate concern -- it is not an entity whose interests we serve. So mementos might be said to lack the 'intrinsic value' of persons in the sense that they are valuable for our sakes, not their own. (World-consequentialists would go even further and say that the reason to make persons better off is for the sake of making the world better. It's the world as a whole, not its particular constituents, that is of ultimate concern or 'intrinsic value' in this third sense.)

P.S. It's an interesting question whether final value is the same thing as contributory value. Perhaps we need to add a further question yet:

(4) Whether, holding all else strictly equal, the addition of the object makes the world a better place.

Suppose average utilitarianism is true. Then a life might have positive welfare value but negative contributory value (because below average). Since the life has positive welfare value, we can say that it is non-instrumentally good in respect of its intrinsic features. Does that mean the life has final value but not contributory value? Well, not necessarily. After all, the life also has extrinsic features in virtue of which it is (ex hypothesi) non-instrumentally bad. So it looks like this fourth question is redundant in light of the first, i.e. whether the object is non-instrumentally desirable. We simply need to take care to distinguish what's desirable all things considered from more abstracted questions, e.g. what's desirable in some respect, or for Bob's sake, etc.
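For concreteness, here's the same point in miniature (all numbers invented for illustration):

```python
# Welfare value vs. contributory value under average utilitarianism
# (hypothetical numbers).

def average_value(world):
    return sum(world) / len(world)

world_without = [10, 10, 10]            # three lives at welfare 10
life_welfare = 4                        # a positive-welfare (worthwhile) life
world_with = world_without + [life_welfare]

contributory = average_value(world_with) - average_value(world_without)
print(life_welfare, contributory)       # 4, -1.5: intrinsically good, yet the
                                        # addition makes the world worse
```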

Update: further discussion of this 4th possibility here.