Saturday, March 21, 2009

Five Years Old

Happy birthday, blog! (I'm a couple of days late.)

Information Architecture: email overuse

Any computer scientists in the audience? I often find myself pondering what kinds of communications/IT tools (e.g. email, blogs, forums, wikis) are best suited for various purposes. I'm sure computer scientists and software engineers must have an official name for this field of inquiry, but for now I'll just call it 'information architecture'. Anyway, I'm especially struck by how organizations typically over-rely on email communications, even when it should be clear that it isn't the right tool for the job.

For example, a work team (or academic department) will typically use mass emailing as their primary (or even sole) form of digital communication. But email is very poorly suited as a medium for large-group discussion. I see two major reasons for this: (1) The discussion is scattered across myriad individual emails, making it more difficult to refer back to past installments, and easier to lose replies. (2) Not everyone in the group will be interested in every such discussion, and so may not appreciate having their inbox cluttered by the constant stream of babble. A discussion board or 'forum' solves both these problems. So I think any such team (or department, etc.) that's likely to benefit from such discussions should ensure that they have a dedicated forum for this purpose.

A second example that jumps out at me is notification. Don't get me wrong: email is great for issuing one-off, instant notifications that may be of interest to others in one's team (department). But for serial or regular use, greater customization is called for. It's just plain rude for (say) the Italian Studies department to spam me every week about their upcoming public lectures. Rather than forcing such notification on students in other departments against our will, they should simply offer an RSS feed (or similar) which we may choose to subscribe to or not.* Similar lessons apply to trans-institutional announcements, e.g. conference announcements, "calls for papers", etc. It's completely backwards to rely on ad hoc email forwarding (all those "please distribute" emails sent to department secretaries) for this sort of thing. Better communications infrastructure should be put in place. For example, the 'PhilosophyCFP blog', if sufficiently well-implemented (I haven't looked closely), could render all those annoying CFP emails redundant.
* (Indeed, an optimally organized university would centralize such offerings, letting us pick and choose which departmental -- or even sub-field -- public notification lists we want to opt in or out of.)
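The opt-in model sketched above is trivial to support on the reader's side: any feed reader (or a few lines of code) can consume a department's announcement feed. Here's a minimal sketch; the department name, URLs, and lecture titles are invented purely for illustration, and a real feed would carry more metadata (publication dates, unique item IDs, etc.):

```python
# Sketch: a department publishes announcements as an RSS 2.0 feed;
# readers subscribe only if they choose to. All names/URLs below are
# hypothetical examples, not real feeds.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Italian Studies: Public Lectures</title>
    <link>http://example.edu/italian-studies/feed</link>
    <description>Weekly lecture announcements</description>
    <item>
      <title>Dante and the Moderns</title>
      <link>http://example.edu/italian-studies/lectures/1</link>
    </item>
    <item>
      <title>Petrarch's Letters</title>
      <link>http://example.edu/italian-studies/lectures/2</link>
    </item>
  </channel>
</rss>"""

def lecture_titles(feed_xml):
    """Return the title of each announcement item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

if __name__ == "__main__":
    for title in lecture_titles(SAMPLE_FEED):
        print(title)
```

The point of the sketch is just that the subscription decision sits with the reader, not the sender: nobody's inbox is touched unless they've pointed their reader at the feed.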

These lessons may even apply to regular intra-departmental announcements: though I don't mind these as much, it wouldn't hurt to set things up so that recipients can pick and choose which regular departmental notifications they wish to receive. (But in this case, at least, the benefits might be modest enough as to not be worth the bother of setting up a better communications infrastructure.)

Of course, email overuse is a small crime in the grand scheme of things -- i.e. compared to the inexcusable overuse of snail mail. But don't get me started on the absurdity of requiring (e.g.) job applicants to transmit their information on dead trees...

P.S. Is there a standard administrative support position with this job description, i.e. an 'information architect' to investigate ways the organization could streamline its communications? There should be.

Wednesday, March 18, 2009

Relativism and Genuine Disagreement

Sometimes people think they disagree when really they are making perfectly compatible claims. What criteria determine whether an apparent disagreement is genuine? In particular, might we genuinely disagree over a matter of 'relative' truth (such that we're "both right")?

Here's a natural picture which suggests not: genuine disagreement requires a dispute over the (objective, "God's eye view") state of the world. This explains why, if tomorrow I say "It's raining", I don't really disagree with your present claim that it isn't raining. Both can be right, because there's no objective matter of fact about which we disagree. (We can both agree that it rains on March 19, 2009, but not March 18.) The situation is similar, we may think, for the cultural relativists who disagree about whether abortion is wrong. Both may agree that it is condoned by Sally's moral standards but not by Anne's, and at that point it is difficult to see what is left for them to disagree about. (They might fight, or have conflicting desires, but that is not the same thing as disagreeing in their propositional beliefs. It's more like the dinnertime 'disagreement' between a predator and its prey.)

The relativist might insist that they can still disagree over the relativistic proposition that abortion is wrong. But sophisticated interlocutors will recognize that this effectively amounts to disagreement over whether abortion is wrong relative to the assessor's moral standards, which again seems to reduce to a non-cognitive conflict between the two moral standards. (It's not as though either party is making any kind of rational/epistemic error -- a fact, incidentally, that should give the moralist pause.)

So it's unsatisfying for the relativist to just appeal to formalities, e.g. "we disagree over the truth of this proposition!" That seems too cheap. But perhaps they can say more. On p.19 of 'Relativism and Disagreement' [pdf], MacFarlane suggests:
Accuracy [i.e. truth at the relevant context of assessment] is the property we must show assertions to have in order to vindicate them in the face of challenges, and it is the property we must show others’ assertions not to have if our challenges are to be justified.

This suggests the following account of genuine disagreement: two people genuinely disagree if they can appropriately challenge each other's assertions.

Sometimes people challenge assertions inappropriately. For example, if tomorrow I were to look back and say, "You said it wasn't raining, but now look: it clearly is!" my challenge would be inappropriate. Even given my beliefs, your past assertion was perfectly accurate, since the relevant context of assessment, in this case, is the context of utterance -- a time at which (I know) the statement was indeed true. So this account suffices to explain standard cases of merely apparent disagreement. But it does so in a way which is (at least potentially) compatible with maintaining genuine disagreement in case of relative truths.

The big question, now, concerns what is the "relevant" context of assessment when evaluating assertions about a relativistic domain. Again, suppose moral relativism is true. When we assess Anne's claim that abortion is wrong, is it Anne's moral standards or our (the assessor's) own that are relevant? To preserve moral disagreement, the relativist will want to insist on the latter. I argue elsewhere that this is a mistake: the relevant context for moral assessment is always the actor's. But, in any case, this is the crux of the dispute, and I'm more sympathetic to MacFarlane's line in other domains, e.g. matters of taste, humour, etc.

P.S. It seems to me that there's something to be said for both criteria of genuine disagreement discussed above. I think MacFarlane's is plausibly the primary notion: philosophical inquiry into disagreement is initially motivated by concern about our assertoric practices, which is what he focuses on. Yet there seems an important sense in which (from a philosopher's rationalistic perspective) relativistic disagreement is defective. So even if we grant that MacFarlane has correctly diagnosed what constitutes "genuine disagreement", we might still want to reserve a related term -- "objective" or "substantive disagreement", say -- for disagreements that meet the first criterion, of disputing how things are objectively.

Saturday, March 14, 2009

Blame and Schadenfreude

I want to say that envy and schadenfreude are essentially irrational, in that these emotions involve implicit normative misjudgments. Envy involves responding to something good (for another person) as though it were bad, and schadenfreude involves responding to something bad as though it were good. In general, I hold, emotions involve implicit judgments, and can be rationally assessed according to whether those judgments are themselves rationally warranted. But might this approach be used, by moral responsibility skeptics, to argue that blame is never warranted?

Someone might suggest that the reactive emotions associated with blame (resentment, guilt/shame, etc.) involve an implicit claim to the effect that the target deserves to be punished, or that it would be a good thing if this were to occur. But many people are skeptical of such retributivist claims. We may instead think that for a person to suffer is never good in itself (even if the person in question is bad). Does this then commit us to the view that people are never blameworthy?

Well, only if we accept their initial suggestion about what implicit judgment is contained in the emotion of blame. And I don't see any reason to accept that. Of course, many folk are retributivists, and so would be happy to see the people they consider blameworthy suffer. But this seems a merely contingent connection. As I see things, moral emotions like resentment or moral indignation merely involve, in the first instance, a kind of negative judgment or moral disapproval. Roughly, as per Nomy Arpaly's account, to hold someone blameworthy for an action is to hold that they manifested a "deficiency of good will" in so acting. It is this normative judgment that is implicit in the negative moral emotions, or so it seems to me, not any stronger claim to the effect that it would be a good thing for this bad person to suffer. How we should respond to the identified deficiency is a strictly further question.

It's important to distinguish here between blame and outward expressions of blame (e.g. vocally berating the person). To express blame is an action -- a form of punishment -- that may be advisable or not as the practical reasons dictate. All else equal, we may think there is a pro tanto reason not to so act, namely that it's unpleasant for the recipient. We shouldn't want people to suffer unpleasantness (unless it's instrumental to some greater good, e.g. deterring future wrongdoing, that can outweigh this consideration). This is the standard anti-retributivist point. But even if expressing blame is inadvisable, it doesn't follow that blame itself is unwarranted. (Just as one may be warranted in feeling fear, though it may be inadvisable to scream or otherwise express this emotion if that may trigger panic in others.)

This is the crucial point. While expressing one's moral disapproval may be an act of 'punishment', and (pro tanto) inadvisable for that reason, the same simply isn't true of feeling moral disapproval in the first place. Emotions aren't actions, and so a fortiori aren't acts of punishment. They may be practically fortunate or unfortunate, but those aren't the kinds of considerations that determine whether a given emotion is rationally warranted (just as practical considerations aren't relevant to rational belief). Rather, what matters is whether the implicit judgment in the emotion is warranted. So it all comes down to the question whether the moral emotions involve implicitly judging punishment to be desirable. Do they? (How, exactly, could we go about settling this question one way or another?)

P.S. For a somewhat different take on what blame consists in (which I'm also sympathetic to), see my old discussion of Scanlon's view.

Tuesday, March 10, 2009

Browsing / Blogging Hacks

I previously wrote a very simple guide to starting a philosophy blog. But I figure it might also be helpful to collate some more advanced tips and tweaks. So, here goes!

Sunday, March 08, 2009

Desire-based Objective Value

Desire-based theories of value (or reasons) are sometimes called 'subjectivist', and contrasted with 'objective' theories. But I think this classification fails to cut logical space at its joints. According to a more natural way of dividing 'subjective' and 'objective' theories of value, many desire-based accounts will turn out to fall on the latter side.

One way to highlight the arbitrariness of the standard classification is on formal grounds. Note that the mental state of desire is part of objective reality, and whether it qualifies as 'fulfilled' is a matter of fact no less objective than the question whether some belief qualifies as 'knowledge'. But a theory according to which knowledge is intrinsically valuable is typically considered a form of objectivism. That is: there can be 'objectivist' theories of value according to which mental states X are intrinsically valuable. The theory might even be monistic, and claim that X is the only thing that's intrinsically valuable. (This wouldn't be very plausible in case of knowledge, but for all its implausibility, such a view would clearly be objectivist in nature.) It would seem inexplicably strange if objectivists were barred from substituting "fulfilled desires" for X -- a mere change in content. Presumably we want the distinction between 'objective' and 'subjective' theories to track some deep difference in form.

Note that someone might hold that fulfilled desires are objectively valuable in just the same way that other objectivists hold that knowledge (say) is objectively valuable. This seems especially clear in the case of preference utilitarians who claim that we have reason to satisfy other people's preferences even if we don't want to. (The fact that we don't want to is itself a reason that counts for something. But so is the other person's desire, and it might outweigh our own.) The relevant feature here seems to be that we are normatively bound by a higher authority than our contingent, immediate perspective. This may be true even on desire-based accounts, if they impel us to respect other desires besides our present ones -- i.e. other people's desires, or even just our own future desires, for that matter.

A sure sign of value objectivism, as I've defined it, is when a theory imposes on us alienating aspirations -- a condition certainly satisfied by any form of utilitarianism, including desire-based versions.

Value subjectivism, by contrast, insists that the values or normative reasons to which one is beholden must be firmly rooted in one's current sentiments or 'deliberative standpoint'. Subjectivism thus entails some kind of 'present aim theory' (in Parfit's terminology), allowing only what Bernard Williams calls 'internal reasons' -- reasons that can gain traction and motivate us given our actual psychologies.*

* (But what if our actual psychologies are irrational? Isn't some idealization needed here? But then, if we build enough into our understanding of 'rationality' -- cf. Kantians -- then the results may be just as alienating as before. I'll put aside such complications for now, but comments are welcome.)

I guess one could, as a matter of form, hold even the present aim theory in an 'objectivist' way. That is, one might just insist that objectively, "from the point of view of the universe", what's advisable for each person is that they fulfill their own present aims. So what's distinctive about subjectivism is not the content of our normative reasons, but the underlying explanation why we have the reasons that we do. Subjectivism appeals to our subjective perspectives as bedrock. Objectivists, if they appeal to our subjective perspectives at all, do so on the basis of some further, underlying consideration: e.g. that subjectively satisfied people constitute a better world.

So, even if satisfying desires is what finally matters (practically speaking), it is an open question why this is what matters, or what the source of this normativity is. Desire-based subjectivism will ground the putative normativity of desire in the agent's own (unalienated) perspective, whereas desire-based objectivism will ground it in something larger than the agent -- possibly, the universe itself. This, I propose, is the most theoretically interesting way to divide 'subjective' and 'objective' theories of value.

Blog 'Follow' Gadget

In case anyone was doubting whether bloggers are megalomaniacs at heart, the folks at Blogger.com are now encouraging us to prominently display a list of "followers" on our blog sidebars. Heh.

Amusing word choice aside, a closer look suggests that there's more to the gadget than I'd initially thought. For example, you can click on a reader's portrait to see their public profile [if they've filled it in], including a list of other blogs they read/"follow". So, at a minimum, it offers a new way to find possibly-related blogs to check out. (In the same way, I suppose, it could bring readers of other sites to find this one, if you sign up.)

The gadget also makes it extremely easy to actively recommend this blog to your friends, should anyone feel so inspired.

Finally, on a more symbolic level, it might serve to develop the blog's "community", by introducing regular readers and commenters to each other, and explicitly recognizing them as part of the blog itself. (And I expect Google will continue to add more social-networking features.)

Anyway, I figure I may as well try it out. Feel free to join in, and let me know what you think.

Thursday, March 05, 2009

Illusions and Practical Competence

Eric Schwitzgebel has a fascinating post on illusions, and whether they still count as such when the viewer is so familiar with them as to no longer be disposed towards misjudgment:
If one knows enough about the world, one should know that an oar partly submerged in water (seen from a particular viewing angle) should look bent just like that. If it looked straight, I suppose, a longtime oarsman or a person very familiar with the laws of refraction might think the oar looked strange, might even think that it looked like an oar that is actually bent (bent in such a way as to exactly compensate for the bend a straight oar would seem to have at that angle)...

So is the skilled oarsman experiencing a visual illusion as he looks at the oar? If we say no, then I'm worried we're off onto a slippery slope to entirely denying the possibility of illusions that are known to be such.

Note, though, that there's a fair gap between mere propositional knowledge and the kind of internalized know-how or "fluency" possessed by the skilled oarsman. I am still very much 'taken in' by the Müller-Lyer illusion, despite - intellectually - 'knowing' that it is an illusion. For it to cease to count as an illusion to me, I would need to internalize this knowledge in such a way as to render me fluent in interacting with Müller-Lyer lines. (Were I to come across a worldly instance, I should automatically respond in ways appropriate to the lines being of equal length; just as the skilled oarsman automatically responds appropriately to the shape of his oar in the water.)

djc adds:
Another nice case is that of mirrors. Does looking at a mirror create an illusion that twin-you is in front of you? When one doesn't know it's a mirror, the "yes" answer seems plausible, but this answer rings less true for the familiar case when one knows it's a mirror. But if one says "no" in this case, then the question is whether there's a principled distinction between this case and the bent stick case, or any other case of a "known illusion".

I suppose that for there to be a principled distinction, one will have to make the case that in some cases (e.g. the mirror case), knowledge of the relevant effect penetrates the experience and changes its content, while in other cases (e.g. the stick case) knowledge does not penetrate the experience in this way. Perhaps there is some intuitive plausibility to the claim that the relevant spatial phenomenology is changed by mirror knowledge in a way that it's not changed by bent-stick knowledge. But it's not clear what the source of this difference is.

Could practical competence vs. merely intellectual knowledge ground this difference? (Imagine someone who has just learned what mirrors are, but who is not yet sufficiently practiced to be competent at using them. It seems plausible that a mirror could still create a sense of 'illusion' for them, even though they know full well - intellectually, at least - what is going on.)

[Related posts: What is illusion?]

Wednesday, March 04, 2009

Favouritism and Peer Review

Brian Leiter links to a fascinating article on 'The 'Black Box' of Peer Review', which includes the following mind-bender:
[W]hen it comes to an affinity for work that is similar to their own or that reflects personal interests having nothing to do with scholarship, many applicants benefit in a significant way. In a passage that may be one of the most damning of the book, Lamont writes: "[A]n anthropologist explains her support for a proposal on songbirds by noting that she had just come back from Tucson, where she had been charmed by songbirds. An English scholar supports a proposal on the body, tying her interest to the fact that she was an elite tennis player in high school. A historian doing cross-cultural, comparative work explicitly states that he favors proposals with a similar emphasis. ... Yet another panelist ties her opposition to a proposal on Viagra to the fact that she is a lesbian....

Seriously? There are academics who consider such autobiographical facts to provide a (fundamental) reason for favouring one grant proposal over another? That would be insane. But perhaps that's too uncharitable an interpretation of what's going on here.

Perhaps they are instead taking their personal experiences as providing them with special epistemic access to an independently existing reason. The lesbian, for example, believes that medical research is excessively "focus[ed] on men" -- which, if true, would provide a perfectly objective reason for shifting funding elsewhere. The fact that she's a lesbian might make her more sensitive to this reason (or possibly even oversensitive, if her perceptions turn out to be inaccurate); but that's not at all the same thing as claiming that her being a lesbian is itself the reason to oppose the grant.

Similarly in the other cases: it would be insane to take the mere autobiographical fact that you personally were recently charmed by songbirds as a reason for supporting research on songbirds. But perhaps one could reasonably hold that one's experience put one in a position to recognize a more objective fact: that songbirds are charming, and thus worthy of further study. The autobiography gives a causal explanation of how it is that one came to be aware of this reason. It is not itself the "reason" or consideration that counts in favour of the grant. (One might say, "If it weren't for my recent trip to Tucson, I would never have appreciated how objectively worthy this grant proposal is!")

I'm especially suspicious of the blanket assumption that it's some kind of disreputable "favouritism" for academic evaluators to prefer work that is similar to their own (in topic or methodology). It's possible for such preferences to be disreputable, and perhaps even likely that they involve some degree of motivated reasoning or bias. But let's not forget that academics' own work is shaped by their prior evaluations. I work on X in part because I think it's one of the most pressing and interesting issues around. I use methodology Y in large part because I think it is the most rigorous, fruitful, or reliable way to solve the problem. Other methodologies (consulting a magic 8-ball, say) I avoid because of my prior judgment that they're not good methods of inquiry. Is it thereby "favouritism" to prefer a grant proposal that uses methodology Y rather than the method of magic 8-ball consultation? Surely not.

To be "favouritism" of the odious kind, it must be that one's judgments are tracking one's personal interests independently of their merit. Here's a test: any fair-minded academic can presumably identify additional topics or methodologies that they judge to be roughly on a par with their own in terms of objective merit/interest. We can't do it all, so what we end up specializing in is presumably just a small subset of the areas we take to legitimately merit such interest. A fair-minded evaluator will thus be just as receptive to proposals in those other meritorious areas as they are to proposals in their own meritorious area. But it is no part of fair-mindedness that one must also be receptive to work one considers intellectually bankrupt. (At least if that opinion is itself reasonable. One should, of course, be wary of adopting such a dismissive view on insufficient evidence!)

Sunday, March 01, 2009

Implicit Bias vs. Implicit Malice

It's worth noting that the ordinary sort of implicit (e.g.) racial bias is not the same thing as being subconsciously "racist" (in the ordinary, vicious sense). We should take care to distinguish two forms of implicit bias. One might harbor some subconscious ill-will towards people of other races, which would clearly be a moral defect of character. But there is another possibility, which would involve a kind of defect or bias merely in cognition rather than in values or desires.

It's conceivable that you might have the best will in the world (even at the deepest depths of your subconscious), and yet - through some cognitive quirk - end up processing information in ways that lead to systematically biased judgments. For example, common cultural stereotypes might influence the 'schemas' that our minds use in categorizing and remembering information, and in generally making sense of the world. Stereotypes presumably also influence what "associations" are most salient or easily 'primed' in our minds when thinking about different groups (and individuals we implicitly classify as members of those groups). It isn't difficult to see how this could conceivably cause one to, say, (i) be more likely to interpret a black student's work negatively, or (ii) be less likely to think of a top female academic when selecting a keynote speaker, etc. -- without any hint of malice or ill-will coloring the explanation.

We may draw a couple of conclusions:

(1) Ordinary implicit bias, if purely 'cognitive' as described above, is best understood as merely unfortunate rather than inherently blameworthy. It's not necessarily a sign of hidden racial animus, or any kind of "racism" in the ordinary sense. So people needn't feel too defensive about it, the way they might if their good character were in question. Still, insofar as the cognitive disposition is unfortunate, and leads to people being treated unfairly, it is certainly something we should want to mitigate upon learning of it.

(2) The mere fact that ordinary implicit bias isn't blameworthy does not suffice to show that no subconscious attitudes are. We can imagine a character that really is subconsciously racist in the most deplorable sense (i.e. they harbor deep-rooted animosity towards people of other races), and such a character is surely bad in this respect -- however tolerant and egalitarian their consciously professed attitudes might be. If the subconscious malicious desire leads them to akratically (i.e. against their better judgment) perform racist acts, the person is surely blameworthy for this -- just as Huck Finn is praiseworthy for the good acts he performs, against his (dopey) "best judgment", in helping Jim escape.