Wednesday, February 25, 2009

Culpable Ignorance and Double Blame

[Significantly updated and moved-to-front from Feb 7.]

Suppose Adam wrongly - i.e. without sufficient reason - kills Bob. Though there's no real justification for killing Bob, such an action may nonetheless be excused if it was done unwittingly, at least if the agent's ignorance was itself non-culpable. But what if Adam was previously negligent -- i.e. his ignorance was culpable* -- does this mean he's blameworthy for the action after all (at least to some degree)? I think the literature on this topic has been plagued by some serious conceptual confusions. Let me explain.

Holly Smith, in her seminal (1983) paper 'Culpable Ignorance' [pdf], considers cases in which an earlier, "benighting act" foreseeably causes our agent to later be deprived of evidence that would have led him to refrain from committing the bad "unwitting act". In one example, the agent impermissibly leaves behind his driving glasses, and so unwittingly runs over a pedestrian that he otherwise would have seen. (As will become clear, I think these are deviant cases of 'culpability' in ignorance, but more on that later.)

As HS points out, all can agree that Adam is blameworthy for the benighting act. (This is meant to be built into the specification of the case.) Further, there's a clear sense in which this makes him responsible for the ultimate outcome: "the unwitting act is a risked upshot of the benighting act", and we're responsible, roughly speaking, for the foreseeable risks that eventuate from our reckless actions (compare Russian roulette). HS thus diagnoses the debate over whether culpable ignorance is exculpating as just an instance of the moral luck debate, i.e. whether agents become more blameworthy if the foreseeable risks of their reckless or negligent acts actually eventuate.

This seems a misdiagnosis.** Let's distinguish two ways in which Adam - already culpable for the benighting act - might come to be doubly blameworthy due to the occurrence of the unwitting act. It could be that the eventuating harms render him more blameworthy in his earlier act of negligence. That's 'moral luck'. Or it could be that he is now blameworthy on more occasions: not just in the benighting act, but in the unwitting act (qua act) as well. That would be 'culpable ignorance'* failing to excuse the unwitting act as a second occasion of blameworthy agency.

Note that one might think that (even culpable*) ignorance excuses the unwitting act, in the sense that we cannot blame Adam qua agent of the unwitting act, whilst nevertheless blaming him (qua agent of the benighting act) more because of this bad outcome. So the question whether culpable ignorance excuses the unwitting act is orthogonal to the 'moral luck' issue of whether the unwitting outcome makes the (earlier) agent more blameworthy.

[N.B. If you're suspicious of this talk of "earlier" and "later" agents, you can easily translate it into talk of how the later event makes the enduring agent more blameworthy "in" acting as they did earlier. That is, I'm not really making any metaphysical claims about persistence through time here. I'm just using such talk as a convenient shorthand for locating the agent's responsibility. We may speak of that exercise of agency in virtue of which one is eligible for moral assessment; in which responsibility resides; or (equivalently) to which our moral assessment attaches. Call this the focal act. The crucial point is that one may be responsible in acting, in a completely different sense from how one may be responsible for an outcome or event (or even 'for an action', considered as a mere outcome or event, as opposed to a focal act). HS runs together 'focal actions' and 'acts-as-events' in a way which causes her to seriously misconstrue the issue at hand, or so I claim.]

Another way to see this point is to note that it doesn't seem to make any difference for HS's cases who ends up performing the unwitting bad act. She is talking about blaming Adam for Bob's death in the exact same way that we would blame Adam for Bob's death if he had negligently misplaced Cid's glasses, and Cid subsequently crashed into Bob. That is, we're blaming Adam for the outcome of Bob's death, and not (in any further sense) the act of killing him. In either case, the act of Adam's to which our blame attaches - the focal act - is the reckless 'benighting' act of depriving a driver of his glasses. This is clearly to change the subject, if what we're interested in is whether ignorance excuses (blocks blame from "attaching to") the unwitting act (as a locus). On the other hand, if we're interested in whether Adam is blameworthy, in his negligence, for the event of Bob's death, any ignorance in the "later" actor -- whether Cid or Adam himself -- is just obviously irrelevant.

Simply put: we are responsible, through (or 'in') a focal action, for various outcomes. If those outcomes are bad, there's a prima facie case for thinking the agent, in acting as they did, blameworthy. We may then ask whether there are any 'excuses' or defeaters for this presumption. These are things that may excuse the agent's action, rendering them no longer liable (again, in so acting) for various consequences. One possible excuse is ignorance. But note: by this it is not meant that if the agent becomes ignorant as a consequence of their act, this somehow blocks the agent from liability for any further downstream consequences of their focal act. That would be daft. No, the relevant 'ignorance' (ignorance that is eligible to excuse an agent's action) must be present in the act itself -- i.e. the act must have been done "from ignorance". It is ignorance concurrent with the focal act -- the exercise of agency in virtue of which the agent is prima facie eligible for moral assessment (positive or negative depending on, amongst other things, the consequences resulting from the act).

So, if we are considering complicated examples in which one action has as its causal upshot another act's performance, we need to be clear whether the former is being treated as the focal act, and the latter as a mere event. Because if we're interested in whether ignorance (e.g. of some feature or outcome) excuses the agent's action, we had better be talking about ignorance that is concurrent with the focal action, and not ignorance that is concurrent with the mere downstream event. But this is precisely what HS fails to do. Her examples all concern cases of ignorance downstream of the focal act. As such, they are not even formally eligible to serve as excuses for the agent's action. Her very question is ill-formed.

P.S. My follow-up post considers more appropriate cases, i.e. of culpable ignorance concurrent with a focal act.

* I should note that 'culpable ignorance' is being used here in a potentially misleading way. HS really means something more like "self-inflicted ignorance", or "ignorance for which one is responsible in virtue of a prior culpable act". But that's just not the same thing as ignorance that is culpable or unreasonable in itself -- as I emphasize in my follow-up post. A 'snapshot theorist' may hold, as I do, that self-inflicted but intrinsically reasonable ignorance is indeed wholly exculpatory in the relevant sense, whilst intrinsic or 'synchronically culpable' ignorance is no excuse.

** [I owe the basic 'misdiagnosis' idea to Liz. The arguments - and broader conclusions - are my own.]

Sunday, February 22, 2009

Philosophical Training

One of my best students recently asked how he could further improve his philosophical work. It's something I occasionally wonder for myself, too. So I thought I'd throw the question out there for the collective wisdom of the Internets to answer: how should one "train" in philosophy?

Most obviously, one can learn from expert feedback on one's written papers. One's instructor may pick up on the main flaws or possible objections that should be addressed, and suggest potential avenues for further developing and extending the core ideas of the paper, which should eventually help one to develop a "sense" of what makes for good philosophy. However, there may be only limited opportunities for obtaining such formal written feedback (since professors are likely busy with their own work, and perhaps reluctant to do anything that feels like additional "grading"), in which case more general, informal discussions -- e.g. in office hours, or by email -- may be more welcome.

More generally, simply doing philosophy -- writing, and arguing with others -- is presumably always good practice. My advice: start a philosophy blog, and use it to (i) practice clearly summarizing any interesting ideas you come across in readings and lectures; and (ii) test out some of your own arguments and ideas. (At least, I find this invaluable. YMMV.)

Then there's the method of simply exposing oneself to a lot of professional-level philosophy: read - or skim - the top journals, attend departmental talks or 'colloquia' by visiting professors, etc. See if you absorb their philosophical skills by osmosis; or, better yet, combine with the previous suggestion to actively engage with the new ideas and methodologies you encounter.

(These procedural questions aside, there's also the question of what philosophical content to focus on. I would strongly advise undergrads, especially, to obtain a broad familiarity with the various sub-fields and methodologies: from ethics to formal logic, and from armchair analyses to more empirically-minded approaches. Further, I think almost any philosopher can benefit from thinking about the connections between their field and others -- e.g. between ethical and epistemological normativity, or between meta-ethics and meta-metaphysics.)

Have I missed anything important? And how would you balance the various recommendations -- is a marginal hour better spent reading more journal articles or developing more of one's own ideas, say?

[Cf. Getting the Most Out of Grad School]

Saturday, February 21, 2009

Skepticism, Rationality and Default Trust

Intellectual virtue lies between the two extremes of dogmatism and radical skepticism. The irrationality of dogmatism is generally appreciated; less so the irrationality of radical skepticism (at least among my intro M&E students!). Let's attempt a remedy.

The global skeptic insists that we should question everything, trusting nothing: apparent perception, memory, "rational intuition", even methods of reasoning -- anything that can be doubted must be suspended from our minds and considered 'guilty until proven innocent'. The obvious problem with this stance - if followed assiduously - is that it entails rejecting everything, leaving one with no mind at all. This is the crucial point: just as a pure 'blank slate' cannot learn from experience, so must we rely on various assumptions if we are to learn, reason, and act rationally at all.

That's not to say we should dogmatically refuse to question our beliefs and practices. It's just that we cannot question them all at once. Any individual belief or assumption may be tested in light of the other things we (provisionally) take to be true, and revised if some incoherence is found. But we have to reason from the beliefs we have -- as Neurath famously put it, our mind is like a ship at sea, and even as we replace a faulty plank we must trust our weight to others.

The mere fact that those other beliefs can likewise be questioned does not suffice to show that we can't reasonably accept them (again, provisionally) in the meantime. The skeptic's claim to the contrary is itself a questionable belief, and not one we have any reason to accept. (It may seem intuitive at first, but once we fully understand the implications of this claim, it would be crazy to accept it -- or so I argue.)

To begin with, it's worth emphasizing that global skepticism is self-defeating, insofar as it implies that we should not trust its recommendation to trust nothing. Or, as Railton writes (in 'How to Engage Reason', Reason and Value, p.186):
Hume observed that [the sceptic] displays a touchingly unsceptical attitude towards the power of argumentation and his own powers of thought and memory. We might add: toward his own command of language and the content of his thoughts. Remove this default confidence, and he can no more declare his words to be 'giving an argument for scepticism'--an intentional action--than 'giving a recipe for haggis' or 'scat-singing without a tune'.

Hence my earlier claim that one who truly internalizes global skepticism is no longer capable of intentional thought or action at all. To accept global skepticism is to forsake any hope of rationality. It is the ultimate intellectual black hole. Railton (p.187) draws an important lesson:
Default trust, however 'blind', is not inherently blinding. On the contrary. If the sceptic trusts his ability to speak English and draw the conclusions demanded of his premisses, and I trust my own appreciation of his argument, we will both see (no longer be blind to) a problem that I have in defending my beliefs: Where I previously had hoped to be able always to have a reason for whatever I believe, taking nothing 'blindly' or 'without reason', I now realize that this hope is impossible--some things cannot, without regress or circularity, be argued for.

I trust the reader will by now agree that global skepticism is a non-starter, as any rational agent must take some things 'on trust' if they are to be capable of reasoning at all. Still, one might ask, what of slightly restricted forms of skepticism? Couldn't one consistently hold the more traditional skeptical view that it's just our sensory experiences and/or inductive practices that shouldn't be trusted? The traditional skeptic is willing to trust in reasoning as much as anybody is. They simply have different expectations about the external world (or "affirm a different prior") from the rest of us. In particular, they hold that all possible worlds are (a priori) equally probable, whereas we anti-skeptics consider some (perhaps simpler or more regular-seeming) distributions of properties across space and time to be more probable than others.

Sure, there may not be anything inconsistent about traditional skepticism, so defined. (Though such a skeptic has no reason to expect that they will continue to exist long enough to finish their thought.) But nor is there much reason to accept it, or indeed to find it any more credible than the claim that the world just came into existence 5 minutes ago. Admittedly, traditional skepticism is motivated by a premise that seems plausible at first glance, namely: "the reasonable 'default' view is to start off by assigning each possible world an equal probability of being actual." That sounds fairer and more reasonable than an a priori bias in favour of, say, worlds where memories are typically true--representing times and events that really did happen. But I suspect the intuitive plausibility of this is an instance of us being misled by overly-abstract principles. When we consider every more particular judgment or knowledge-attribution we are inclined to make, is it really plausible to think that the one abstract principle is more credible than all our conflicting particular judgments - and practices - combined? Colour me (cough) skeptical.

Upon reflection - in light of our actual beliefs - it seems we have most reason to reject the skeptic's principle. Though it seemed plausible at first, it is inconsistent with other claims that most of us find much more plausible. Here's another: rational agents should learn from experience. Skeptics -- even of the merely 'traditional' variety -- can't. As Railton writes (in 'Rational Desire and Rationality in Desire'):
It is well known that in order to learn about one's environment (whether one be human, animal, or teachable computer) it is not enough to have ample sensory input and plenty of memory registers to fill. The learner must also bring some expectations--such as expected dimensions of similarity (the "implicit quality space"). Otherwise, experience will simply accumulate in its infinite diversity, and all experiences will be equally relevant or irrelevant to one another. No lessons will be extracted.

Carnap gave an elegant demonstration of this point within the theory of logical probability. He asked us to consider a confirmation function that began (sensibly, it would seem) with no "prior bias", i.e., that assigned the same non-zero probability to every possible state of the world (what he called "state descriptions"). This function would, even given indefinitely large amounts of information about past states of the world, still assign the same probability to every logically possible way of extending this history into the future. In a fundamental sense, it could not learn from experience.
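(To see Carnap's point in miniature, here is a small sketch of my own devising -- a toy 'language' with a single predicate F and a handful of individuals, not Carnap's actual apparatus. Treat every assignment of F/not-F to the individuals as a 'state description', put a uniform prior over them, and conditioning on any run of uniformly F-ish evidence leaves the prediction for the next individual exactly where it started:

    from itertools import product
    from fractions import Fraction

    def prob_next_is_F(num_individuals, k):
        """Uniform prior over 'state descriptions' (all assignments of F/not-F
        to the individuals): the probability that individual k+1 has F, given
        that individuals 1..k have all been observed to have F."""
        states = list(product([True, False], repeat=num_individuals))
        consistent_with_evidence = [s for s in states if all(s[:k])]
        also_next_F = [s for s in consistent_with_evidence if s[k]]
        return Fraction(len(also_next_F), len(consistent_with_evidence))

    # However many F-observations we condition on, the answer never moves from 1/2.
    for k in range(6):
        print(k, prob_next_is_F(8, k))

Brute enumeration and exact fractions keep the illustration transparent; the same pattern holds however many individuals or observations we plug in.)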

So I think it's pretty safe to conclude that (even the traditional form of) radical skepticism isn't rational. There's no guarantee that the rest of us are any better off, of course, but at least we have a chance. We do the best we can -- and we're yet to see any good reason to think that some other way is better.

One may be left feeling unsatisfied: the best we can do may not seem good enough. But since turning to skepticism is even worse, it seems we will just have to learn to live with tentatively trusting in the reliability of our perceptions, despite the uncertainty of it all. At least we can learn such things -- and that is something to be thankful for.

Friday, February 20, 2009

Harmlessly Manifesting Vice

Is it wrong to manifest a vice (or bad character trait) in transparently harmless ways? For example, consider the repressed racist, who reins in his ill-will towards people of other races so that he would never intentionally do harm or visibly express disrespect, but who secretly fantasizes about slavery and mouths racist epithets when nobody is around to hear him. He's clearly a worse person for his secret racism (though not as bad as a whole-hearted racist, assuming he's motivated to rein in his ill-will for moral and not just prudential reasons). But assuming this is a stable fact about his character, does it make any moral difference whether or not he goes ahead and secretly manifests his racism in these inconsequential ways? Does it make him more blameworthy, say, than having the exact same feelings but refraining from (even secretly) acting on them?

One can multiply examples: at the most despicable end of the spectrum, there's stuff like virtual child porn and virtual rape. For a more ordinary case, consider someone who fantasizes about punching someone they're angry with. Somewhere in between, perhaps, we find the fundamentalists who enjoy the torture-porn of Jesus boiling the blood of atheists in the Left Behind novels.

In all these cases we can imagine the agent deliberating about whether or not to secretly and harmlessly express their vicious feelings. They might acknowledge that it's bad (or at least morally imperfect) of them to have the desires that they do. But assuming that they can't change their desires, what should they conclude about the permissibility of harmlessly acting on them -- by daydreaming, simulating the vicious acts through video games, etc.?

I guess one thing to emphasize is the possibility of indirect harm: perhaps 'indulging' in such fantasies strengthens the vicious disposition of character in undesirable ways. (We are what we do?) Then again, for all we know the reverse might be true: perhaps expressing a vice this way serves as a kind of cathartic "release" that will help the agent behave better afterwards?

Either consideration, if true, could be morally decisive. But suppose neither is true, so that secretly manifesting vice in these ways would have no further consequences whatsoever. Is it bad just in itself, at least a little bit? Or is the badness exhausted by the vice itself -- which persists equally in either case -- so that it doesn't matter whether or not one secretly acts on it?

(Take care not to be confused by the merely epistemic factor that people's actions may serve as evidence about their underlying character. We may expect that a person who acts on a vicious desire likely has a stronger such desire than someone who doesn't, and so think worse of them for that reason. But to assess my question, we must hold the agent's character and desires fixed.)

Thursday, February 19, 2009

Structures of Dynamic Desire

Railton ('Rational Desire and Rationality in Desire') argues that "desire is a compound, dynamic state", containing implicit expectations about the desired object, which serve to regulate the desire over time -- e.g. weakening the desire if the actual experience of its object disappoints our expectations. For example: one may desire to try some exotic fruit, with the expectation that it will offer a novel and pleasant taste. But this sounds to me like a purely instrumental desire, where the so-called "favorable expectation" represents what the person really wants or hopes for. So can Railton's account apply more generally, and make sense even of non-instrumental desires?

Let's distinguish five desire 'structures', or ways the object of desire may be related to the regulating favorable expectation:

(1) Instrumental desire: here what you ultimately desire just is the implicit 'expectation' (a novel and pleasant taste, say), and you desire the immediate object (the fruit) merely as a means to this end. Note that mere instruments are replaceable: you could just as well substitute any equivalent means in place of this particular object. [I typically find it unhelpful to count these as real desires at all.]

(2) [Pure] Ultimate desire: if I ultimately desire X, in the pure case this shouldn't come with any further "favorable expectations" besides the desired object itself. (Otherwise, one might think, it starts to look like what I really desire is the expected outcome, rather than the object X itself. But compare the "impure" possibilities below.) Pure ultimate desires thus aren't 'regulated' by any expectation beyond themselves, and hence don't really fit into Railton's account. I'm not sure how serious an objection this is: he might consider such 'pure' ultimate desires to be a theorist's invention that aren't really found in complex human psychologies. (More on this, below.)

(3) Conjunctive desire: Here one desires the object to be as one expects it. E.g. one desires that [one eats the fruit and it tastes novel and pleasant]. This differs from purely instrumental desire because this time you really do want this particular fruit, and not just any source of novel pleasant taste sensations. But it has to be tasty, or your desire is thwarted. [It seems a little odd to model this by separating the desired duo into an 'object' and an 'expectation', though. Why isn't it just a simple desire with a conjunctive object?]

(4) Conditional desire: Here one desires the object (in itself) conditional on its fulfilling one's expectations. This is much like the conjunctive case, insofar as it blocks instrumental substitution. The difference here is that if the expectation (e.g. tastiness) isn't satisfied, the desire as a whole is cancelled rather than thwarted.

(5) Contingent desire: Here one genuinely desires the object for its own sake. (So the desire is straightforwardly satisfied if you get the fruit, regardless of whether it turns out to be tasty.) But the desire just happens to be causally dependent on the expectation. That is, you have some brute (non-rational) psychological mechanism that will cause you to lose this intrinsic desire if the experienced object fails to live up to your expectations. Alternatively, the process might be rationalized by means of a higher-order desire to only have first-order desires that meet their expectations. (I've discussed this possibility before in relation to Railton's sophisticated hedonist.)

[I guess this last proposal is actually compatible with previous variations. E.g. one may have a conditional desire for A given B that is causally contingent on some third condition C.]

What's the best way to interpret Railton, or to understand how most of our ordinary non-instrumental desires fit his model? My best guess is that our ultimate desires are supposed to be mutually regulating, as complex #5-type structures. That sounds psychologically realistic too, since it does seem that (even) our ultimate ends do change over time, perhaps by becoming associated with other ultimate desires/aversions. Any thoughts?

Tuesday, February 17, 2009

Rational Akrasia

I must say, I'm a huge fan of Nomy Arpaly's book Unprincipled Virtue. In the second chapter she argues against the common assumption that our deliberative judgments about what's rational (for us to do or believe) have any special normative authority or significance in determining what would in fact be rational for us. That is:
[S]ometimes an agent is more rational for acting against her better judgment than she would be if she acted in accordance with her best judgment... there are cases where following her best judgment would make the agent significantly irrational, while acting akratically would make her only trivially so. (p.36)

This is so in cases where one's "best judgment" is itself completely irrational -- a point I previously made in my post on 'Subjective Oughts'. Though this fact is often neglected, I don't think there's any sane view of rationality on which 'S believes that φ-ing is rational' implies that S rationally ought to φ. Even on the most 'internalist' views, e.g. coherentism, it's possible to have false beliefs about what would be most coherent. That is, I might think that believing P is rational (and would increase the coherence of my belief-set), when in fact this is false and my belief set would be more coherent on the whole were it to contain not-P instead. Coherentism then straightforwardly entails that I ought to believe not-P, despite the fact that I believe that I rationally ought to believe P. The higher-order judgment is just wrong. And the same may be true in the case of practical rationality, i.e. I may hold false views about what I rationally ought to do or desire. Following my judgment, in such a case, would not in fact be rational.

Once stated explicitly like this, the point seems very obviously correct. I guess one possible point of resistance could arise in readers who fail to distinguish what Arpaly calls an "account" of rationality from an "instruction manual". (Cf. the traditional distinction between the 'criteria of rightness' and a 'decision procedure'.) Obviously an instruction manual can't contain advice of the form, "Go against your all-things-considered better judgment", or "Do X, for a reason other than that this rule advises it", or "Don't think of an elephant". None of this is advice you can follow. Nonetheless, they may be true statements of what it would in fact be most rational for you to do in the circumstances. As I keep stressing, trying hard is no guarantee of rationality. Sometimes (though hopefully not often) one's efforts may even be counterproductive* -- this is perhaps most familiar in the case of neurotic "over-analyzing", but another example would be overriding one's reliable (reasons-responsive) gut instincts with bad reasoning or rationalizations.

One virtue of Arpaly's discussion is that she highlights how (implicitly) familiar this point is in everyday life. We're all familiar with the idea of "a man who has some 'crazy notions' sometimes but whose common sense prevails 'in real life.'" (p.49) Most people aren't philosophers, or even particularly competent reasoners, so their explicit judgments may end up being downright dopey. If they fully internalized and acted on these dopey explicit beliefs, we might consider these folks fanatics. But because Uncle Bob doesn't really 'live out' or act on his dopey "best judgment" -- being instead restrained by his implicit "common sense", practical wisdom, and basic human decency (though he doesn't consciously realize it) -- we may judge that he is rational enough on the whole. His irrationality is restricted to his explicit beliefs, and he's otherwise ("in real life") not so bad. Akrasia -- failing to act on his explicit 'better judgment' -- thus makes him comparatively more rational than the fanatic he otherwise would have been. (Though of course he'd be even more rational if he didn't have such dopey explicit beliefs in the first place.)

* Stronger still: sometimes any attempt at explicit deliberation might prove to be essentially less-than-optimally rational. This point is more familiar in the context of ethics, where we may be required to respond directly to a person in need rather than mediating our response by any kind of deliberate moral theorizing. The person who considers the permissibility of saving his drowning wife before jumping in clearly has "one thought too many", as Bernard Williams puts it. We want people to be sensitive to moral considerations, but that doesn't require -- and sometimes precludes -- consciously deliberating about such things.

Sunday, February 15, 2009

First Principles and False "Primafication"

Let's say you "primafy" a general principle when you seek to elevate it to the level of a first principle. For example, one might start from the general principle that stealing is wrong, and foolishly primafy it (abstracting away all the essential details of institutional context) to yield propertarianism.

Any systematic moral theorizing is likely to involve some 'primafication' -- for a more sympathetic example, utilitarians elevate the principle that human welfare matters -- but we should at least be wary of overly hasty instances of this move. It's one thing to determine a first principle after careful and systematic reflection, and quite another to leap to the conclusion that something is a first principle just because it's a general principle we recognize in everyday life. It's the latter mistake that I want to warn against.

One common mistake in this vicinity is to primafy rights, perhaps assuming that their practical priority translates into theoretical priority. But this is just a mistake. What rights we ought to institute depends on contingent facts about our situation (in particular, which 'rights' would actually serve to make people better off -- obviously a contingent matter). For all their practical importance, rights are merely surface-level moral phenomena that emerge from (rather than ground) our fundamental moral theorizing.

More generally, people commonly take their everyday moral sensibility and seek to directly apply it in radically different contexts -- contexts to which their sensibilities are not at all attuned. They seek to evaluate economic policies, for example, according to whether they seem "fair", when consequentialist evaluation is far more fitting. Some make silly claims about "deserving" their pre-tax income in some strong sense which is supposed to render taxation a form of "theft". And much political philosophy has traditionally taken as its starting point the general principle that we shouldn't coerce each other, and so -- by elevating this, out of context, to the level of a first principle -- concluded that any government action requires some special justification (saying it may only do things that are required by justice, for example).

All of this, it seems to me, results from a simple failure to distinguish 'internal' and 'external' moral questions. There are certain norms or principles that are appropriate within the context of our everyday lives, but not necessarily in other contexts (e.g. when it comes to assessing the institutional structure that stands behind our everyday interactions). So it would be a mistake to 'primafy' a merely internal principle -- pulling it out of context and treating it as though it were a universally applicable 'first principle'. Moral and political thought would be much improved by bearing this in mind.

Saturday, February 14, 2009

Chancy Elections

What is the best electoral process? Universal suffrage and 'one person, one vote' seems like a good start. But granting that, there's a further question, namely, what to do with all those votes. Our present 'majoritarian' system simply tallies the votes, and awards the election to whoever receives the most. Consider an alternative that Jack and I have been thinking about: use the votes as the basis of a lottery. A randomly chosen vote then determines the outcome of the election.

As stated, this sounds too risky. (If 1% of the population votes for fringe nutters, we wouldn't want to be landed with a 1% chance of a fringe nutter becoming president.) But, as Jack pointed out to me, there's an easy solution: let each vote transform into n lottery tickets, and keep randomly drawing tickets until we have n for a single candidate, who is thereby declared the winner. As n increases, this exponentially reduces the risk of any single voter's (or small minority's) preferences deciding the outcome. That is, for large n, this lottery system comes to approximate our present system of guaranteeing the election to whoever receives more votes. Of course, that would defeat the whole purpose of the proposal. But we can ask what an appropriate balance of randomness would be. Let me now argue the case for introducing some degree of chance (and so perhaps preferring a low-to-moderate n -- maybe just n=2, even).
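(For the curious, here's a rough simulation sketch of the n-ticket variant -- my own toy code, with the simplifying assumption that tickets are drawn with replacement in proportion to vote share, which approximates drawing tickets from a very large electorate:

    import random

    def lottery_winner(vote_shares, n, rng=random):
        """One chancy election: keep drawing tickets until some candidate
        has n of them. Draws are with replacement, in proportion to vote
        share, approximating a draw from a very large pool of tickets."""
        candidates = list(vote_shares)
        weights = [vote_shares[c] for c in candidates]
        counts = {c: 0 for c in candidates}
        while True:
            drawn = rng.choices(candidates, weights=weights)[0]
            counts[drawn] += 1
            if counts[drawn] == n:
                return drawn

    def win_probabilities(vote_shares, n, trials=50_000):
        """Estimate each candidate's chance of victory under the n-ticket rule."""
        wins = {c: 0 for c in vote_shares}
        for _ in range(trials):
            wins[lottery_winner(vote_shares, n)] += 1
        return {c: round(wins[c] / trials, 3) for c in vote_shares}

    # A 55/45 two-way race: n=1 simply reproduces the vote shares, while
    # larger n increasingly (though never absolutely) favours the front-runner.
    shares = {'A': 0.55, 'B': 0.45}
    for n in (1, 2, 5, 20):
        print(n, win_probabilities(shares, n))

At n=2, for instance, the 55% candidate wins only about 57.5% of the time, yet every marginal vote still shifts the probabilities -- which, as I'll argue below, is the attraction of keeping n low.)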

The current system exhibits a sharp 'critical level' (e.g. "50% +1", for a two-party race). Everything hangs on increasing one's vote share to this critical level, e.g. from 49.9 to 50.1. Further increases -- e.g. from 50.1% to 60% -- don't matter in the slightest for determining the election's outcome. Nor do increases that fall short of this mark: third party candidates have no real chance, and even if they increase their vote share from 1% to 10%, this doesn't do them any good. This system thus gives candidates a strong incentive to gamble: 30% of the vote share is worthless (in a 2-way race), so if an irresponsible gimmick offers some non-zero chance of doubling their popularity, this may well be worth it for them, even if they're far more likely to end up (deservedly) losing their base of support. This seems undesirable: better to have risky elections than risky candidates, we may think.

Chancy elections would mean that every vote makes a difference. Even if a candidate is way ahead (or behind) in the polls, a mere majority is no guarantee: each extra vote would contribute to making their victory more likely still. Incumbents and popular politicians could no longer risk complacency, but would instead have an incentive to pursue every single extra vote that they possibly can (without writing off those that they already have). Third party candidates could no longer be written off. A major source of voter apathy and low turnout would be averted. In sum, elections would be made more competitive.

The risk, of course, is that misfortune may deliver an election to a less popular candidate. But it may be worth tolerating some (slight) risk of this in order to obtain the benefits listed above. What do you think? (Any political theorists out there know whether this has been discussed much before?)

Saturday, February 07, 2009

Negligence and Culpable Ignorance

Let's distinguish two kinds of culpable ignorance: 'diachronic' negligence, as discussed before, and what I'll call 'synchronic' negligence or plain epistemic irrationality.

In case of synchronic negligence, Adam at the time of acting has evidence that should bring him to realize that he's endangering Bob, but simply fails to appreciate this risk. I'm inclined to think that Adam is blameworthy for his action in this case (to some degree). So at least synchronic culpable ignorance fails to fully exculpate. He really - i.e. in the strictest sense - should have known better. We owe it to each other to be more careful than that.

Holly Smith, in 'Culpable Ignorance' [pdf], seems to dismiss these as poor test cases of whether ignorance always exculpates. The closest she comes to explaining why is when she writes, "if the agent's act is not justified relative to his actual beliefs, and he realizes this fact, then of course he is to blame for performing the act, but we cannot infer that any blameworthiness arises from his culpable ignorance." (p.545)

This is a puzzling remark. For one thing, talk of blameworthiness "arising" from culpable ignorance risks changing the subject -- a point I'll return to shortly. What's more immediately relevant is that the clause "and he realizes this fact" unnecessarily confounds the issue -- it's true that if he realizes his act is unjustified, then it's not really a case of full "ignorance", and so a bad test case. So cut that clause. What if the agent doesn't realize his act isn't justified (even given his actual beliefs or background knowledge) -- wouldn't that, at least, be a legitimate test case for whether culpable ignorance exculpates?

Consider Smith's example of a driver who fails to check his rear-view mirror before backing out of his driveway, and so crashes into an oncoming vehicle. Smith claims (pp.545-6):
The driver's ignorance is undoubtedly culpable, and it leads to his performing the wrong act, since if he had checked he would have seen the car and avoided the collision. But we cannot infer from this that his culpable ignorance makes him blameworthy for colliding with the car. For the driver is blameworthy quite independently of his culpability in failing to check his mirror: his background knowledge that the street often has traffic on it makes his decision to take the risk of backing up quite unjustified.

This seems to change the subject, since the original question was whether culpable ignorance excuses us from blame, not whether it's what makes us blameworthy. The relevant structure is this: the driver does something bad (namely crash into an oncoming car); the driver didn't realize that his action would have this bad consequence; but the driver is still blameworthy for the outcome because his ignorance was culpable rather than reasonable. That is, he should have realized (given his background knowledge) that his action was unacceptably risky, even if in fact he did not realize this.

[It's true that we cannot infer from this that "culpable ignorance makes him blameworthy", if anyone were to care about that question, but we sure can infer that culpable ignorance does not (fully) excuse him from blame for the crash -- which was, I thought, the question under consideration.]

Why does Smith switch questions like this? She argues for her approach as follows (p.545):
The fact that a single act can be culpable on more than one count means that we can decisively settle the question of whether or not culpable ignorance excuses only if we focus exclusively on cases in which the agent's culpable ignorance is the only possible source of blameworthiness.

But this just seems confused. If we are considering whether culpable ignorance excuses, presumably we are considering whether it excuses some other wrong act -- killing someone, say -- which is the presumed source of blameworthiness. It's an additional, very different (and seemingly analytic)* question whether culpable ignorance is itself a source of blameworthiness.

* (Aside: perhaps it's not really analytic, e.g. if we're asking whether epistemically culpable ignorance is also morally culpable, or some such. Though the 'diachronic' cases arguably invoke morally culpable negligence right from the start, so there it seems that Smith must be concerned with some yet further question -- e.g. the question of moral luck, applied to negligence cases, which I previously argued was an entirely independent question.)

Now, I take it that what really follows from the possibility of normative overdetermination is that "we can decisively settle the question of whether or not culpable ignorance excuses" only if we focus exclusively on cases in which the agent is culpably ignorant of all the decisive reasons against his action. This suffices to avoid the problem cases wherein the agent is partially ignorant, but knows he should investigate further before acting, and hence "knows himself to be performing an act less good than its alternative, namely conducting further enquiry." (p.546) There the agent might be blameworthy for the failure to investigate, rather than the unwitting killing, and so we would attribute blameworthiness to the agent even if their ignorance excused the latter. A fair test must avoid such merely partial ignorance, then.

But this condition is easy enough to meet, and we're still left with many possible synchronic cases of the sort that Smith too-hastily dismissed. Again, just take the example of the driver who should know to check his rearview mirrors before reversing, but one rushed day it simply doesn't cross his mind that he has any reason to do so. Here, we stipulate, there are no confounding wrong-making features that he remains aware of. So it's a genuine test of whether (culpable/irrational) ignorance exculpates. And, intuitively, the answer seems clear: the agent is still blameworthy for his unwittingly reckless behaviour (even though he would, of course, be even more blameworthy if he had knowingly endangered others' lives).

Perhaps Smith is simply interested in a different question, which is fine, so long as we're clear about what the discussion is meant to establish. The paper presents itself as addressing the question whether culpable ignorance exculpates. If that's the aim, then it seems to me to end up wide of the mark.

Moral Luck: vice and culpability

I think Holly Smith's 'Culpable Ignorance' also mistakes what culpability itself consists in. She claims (p.568) that believers in moral luck hold that an agent "is more culpable -- a worse person -- for [the foreseeable risk's] occurrence."

But that's not right: the recklessly drunk driver who narrowly escapes an accident is obviously just as bad a person as his intrinsic duplicate who (due to slightly different external circumstances) kills a pedestrian. It's just that, through a stroke of luck, he's not responsible for any deaths. This means there's less we can blame him for.* It doesn't mean he's a better person.

So the upshot of moral luck is that judgments of culpability and of character may diverge. Two equally bad people may differ in what they've done (or caused), and hence what we can blame or hold them responsible for. To be more culpable than another is hence not the same thing as to be "a worse person".

(Smith actually makes the point earlier in her paper that we blame people for their actions, not their character, so it's odd to see her make this mistake later on.)

* [I think Nate first pointed this out to me.]

Thursday, February 05, 2009

Against a Defense of Future Tuesday Indifference

How should we understand Parfit's example of the hedonist with future Tuesday indifference? Sharon Street ('In Defense of Future Tuesday Indifference') distinguishes two possible interpretations, but I want to urge a third.

Suppose first that the agent is only indifferent to future Tuesdays, and on the stroke of midnight his preferences change so that he regrets his earlier decision to schedule an agonizing operation for this day. Such fundamental preference changes complicate the case. As Street points out, the different temporal stages of the agent would effectively be at war with each other -- the earlier ones plotting to ensure that the Tuesday-stage suffers agony (supposing this is necessary to spare their later stages, which they care more about, from some lesser pain), and then the Tuesday-stage trying to undo this plot against himself. That is, the Tuesday-agent has a diverging deliberative standpoint, from which he won't endorse or carry out the intentions of his earlier stages. In this sense, it's almost like a new agent temporarily takes over the body each Tuesday, which raises complications regarding whether the prior, Tuesday-indifferent stages are doing something morally objectionable in imposing suffering on the Tuesday stage against "his" will.

To avoid such complications, we may instead suppose the agent is tenselessly Tuesday-indifferent. His reflective preferences remain the same, even on Tuesdays themselves. However, Street argues that when we imagine this agent in vivid detail, he is not so obviously irrational. For during his painful experience, he maintains his "meta-hedonic" indifference to the pain, and so we might think that he achieves a state of emotional calm, and so doesn't really "suffer" from his pain in the ordinary way. (Compare, e.g., a Buddhist monk distancing himself from the searing pain of hot coals.)

However, this scenario too strikes me as containing some potentially confounding complications. In particular, it's no longer clear that the Tuesday experiences we've described are really as painful (hedonically bad) as the experiences felt on other days, and so rather than an agent whose preferences make arbitrary references to Tuesday as such, it seems we've instead described an agent with the more ordinary preference to experience "pains" on days when they won't cause him suffering (and given his odd constitution, this happens to be Tuesdays).

To clarify this, we need to distinguish two further versions of (tenseless) Tuesday-indifference. Street offers a non-conceptual interpretation, whereby the agent ('Indy') is simply constituted (perhaps due to some bizarre evolutionary story) such that he undergoes regular cycles of psychological transformation. In particular, Indy feels a Buddhist-like 'detachment' from any pain inflicted during special periods (that happen to coincide with Tuesdays), and this indifference naturally carries over to his prospective and retrospective evaluations of "pain" experienced during the special period.

Note that Indy's pain-indifference is prior to his beliefs about what day it is. Locked in a dungeon and deprived of any other temporal cues, he might one day notice a psychological change in himself ("I no longer care about present pain experiences"), and thereby infer that it's Tuesday. This clearly isn't the kind of agent we normally have in mind when talking about Future Tuesday Indifference. Most importantly, Indy's changing psychology corrupts the thought experiment. His phenomenal experience of pain-on-Tuesday is qualitatively different from how he experiences pain on other days, so it could be this qualitative difference, rather than the purely temporal difference, that his preferences are tracking.

So I think it is more worthwhile to consider a psychologically uniform agent with explicitly conceptualized Tuesday-indifference. That is, we should imagine an agent whose psychology is consistent across time, and whose preferences make special reference to "Tuesday" as such (under that description). This means that his beliefs about what day it is will affect his behaviour: in particular, if he falsely believes that today isn't Tuesday, his subjective experiences of pain will feel as agonizing as they do on other days.

This strikes me as the 'pure' version of the thought experiment. After all, to assess Future Tuesday Indifference, we need to hold all else equal, and that means ensuring that the experiences in question are qualitatively identical -- not differing in any phenomenally discernible, and hence potentially hedonically relevant, respect from what he'd suffer on any other day. So let's ask the agent to choose between the following options:
(1) We will wait until Tuesday, and then hook him up to an experience machine that will give him the total phenomenal experience of intense suffering. (This may include brainwashing him into thinking it's Wednesday, or otherwise ensuring that he doesn't "dissociate" himself from the pain in any way that introduces subjective differences.)

(2) We wait until Wednesday, and merely inflict moderate suffering with the experience machine.

The genuinely Future-Tuesday Indifferent agent will choose option (1), by definition. Will Indy?

This poses a dilemma for Street. Indy previously seemed less irrational precisely because we could interpret his Future-Tuesday Indifference as largely compatible with a kind of impartial hedonism: he wasn't really suffering on Tuesdays, after all. But this new choice prises the two motivations apart. He can sanely choose to minimize his suffering, by opting for (2), but then we see that he's not really indifferent to Tuesday suffering after all. (It was just never an issue before, due to his quirky ability to ignore pain on Tuesdays.) Or he can choose option (1), but then we see that his apparent reasonableness was an illusion.

In any case, genuine Future-Tuesday Indifference seems (intuitively) as irrational as ever.

Wednesday, February 04, 2009

Norm Enforcement

There's a lot of blameworthy behaviour around (especially on the internet). In what circumstances should we kick up a fuss?

Some might suggest 'never'. It's impolite to chastise others, after all. Better just to mind one's own business, live and let live, etc.

As stated, this categorical denial is surely too strong: we shouldn't just "turn a blind eye" if our associates engage in acts of cruelty or bullying, for example. The social enforcement of (at least some) moral norms seems essential for maintaining a decent society, especially given that not everything bad can be legally prohibited. So some moral failings, at least, may be other people's business. The question is where to draw the line. (I tend to favour more confrontation than most people, at least in theory, but that's probably due to my optimism about how constructive disagreement can be.)

At the other extreme, one might argue that wrongdoing is always others' concern (at least in principle): that's just what makes something a moral - rather than merely personal - failing. Here we need to clarify whether we mean it is the business of some or all others -- only the former seems plausible to me. But in any case, it doesn't follow that the appropriate way to respect this concern is always to voice it.

Note that it would seem ridiculous to actively seek out people to remonstrate with. And even if I accidentally happen across a forum of noxious homophobes, or a news story about some evil dictator on the other side of the globe, it may not seem worth engaging my reactive attitudes. Although their blameworthiness renders them fitting or legitimate targets for censure, some further, practical reason is arguably required to make it actually worth doing.

This suggests two key factors: the importance of the violated norm, and the likely efficacy of our remonstrations -- whether at convincing the wrongdoer to repent, or strengthening the norm for others in our moral community. This latter goal suggests that our affiliations (if any) with the violator may strengthen the case for public censure, in hopes of counteracting the human tendency to show less respect for norms that others in their "group" have violated. (On the other hand, increased exposure may prove counterproductive for the same reason.) In any case, some kind of 'division of moral labour' may be appealed to here also: it'll generally be easiest if groups police their own -- not to mention more effective, as we're generally more responsive to the opinions of people "like us". (Widespread non-compliance renders this an imperfect rule, however.)

I also feel like more prominent, well-respected, or influential people ought to be subject to increased "vetting" and criticism. I guess it does more good to counteract the influence, so far as we can, of those that actually have some in the first place. They might also serve as evidence that a view is more widespread than we'd realized.

For example, it's one thing for nationalistic wingnuts to blithely disregard the interests of the global poor, but it's something else altogether to see Richard Posner advocating that foreign aid "that goes to fight malaria... or promote agriculture or family planning there could be redirected [in part]... to the United States to help get us out of our economic predicament". He chillingly concludes:
I grant that poor countries may be harder hit by what is a global depression than the United States, but I consider Americans' obligations to be primarily to Americans rather than to the inhabitants, however worthy, of foreign countries. I am also inclined to think that charitable giving abroad is so closely entwined with the nation's foreign policy objectives that it should be regulated by the State Department rather than left entirely to private choice.

*shudder*

But I digress. Any other suggestions regarding when censure is called for? (I guess in practice we can generally just follow our inclinations here, as in most matters, but they might always be refined. Besides, I'm curious.)

Tuesday, February 03, 2009

Epistemic Supererogation

It's commonly thought that epistemic normativity lacks the structure of moral normativity. We should believe in proportion to the evidence, and that's that. But most moral theorists will carve things up further, e.g. into the impermissible, the (minimally) permissible, and the supererogatory.

I'm not convinced there's any great difference here, however. In both cases, we ought to believe/do whatever we have most reason to believe/do. Nevertheless, we can distinguish between reasons that are more or less stringent, and failures or imperfections that are more or less forgivable. This is so even in the epistemic sphere. Some biases or errors in reasoning are worse - more blameworthy - than others. Young-earth creationists, holocaust deniers, etc., might be considered especially egregious offenders, for example. Moral and epistemic offenders alike may warrant censure -- blame in the one case, and ridicule or scorn in the other, perhaps.

Further, it seems pretty clear, in both ethics and epistemology, that not all imperfections (or failures to believe/do as we ought) call for such censure. There is a significant range of 'permissible', or tolerable, belief and action -- even though within this range, some options may be determinately superior to others.

I think we can even extend the notion of supererogation, or going "above and beyond the call of duty", to the epistemic sphere. Consider such ordinary optimistic biases as that of parents exaggerating the virtues of their children. What should Mother believe about the aptitude of her little Joe-Average? She certainly shouldn't consider him a genius -- that would be a bit too unhinged. But she may be forgiven for considering him a little above average, even if he's not (and the evidence is sufficiently clear about this). Indeed, we probably can't reasonably expect or demand much more from her than that.

But now suppose Mother is unusually dedicated to overcoming her biases, and reasons thusly: "Joe seems above-average to me, but mothers are known to overestimate such things, so I really should compensate for my own bias here. This suggests that Joe is really about average." Here it seems to me that Mother is going above and beyond the call of (epistemic) duty. A normal, adequately reasonable agent would probably have just stuck with their conveniently over-optimistic initial judgment. But she really went the extra mile. Doesn't that sound like supererogation?

Monday, February 02, 2009

About this blog

[My old 'introduction' was getting a bit outdated; here's a replacement.]

Welcome to Philosophy, et cetera. My name is Richard Chappell, and this blog contains my thoughts on academic philosophy and - occasionally - other stuff.

My 'web of beliefs' offers an overview of the past year's blogging, and of my various philosophical views. The main page sports my latest posts, or you can browse my monthly archives in the sidebar. Alternatively - and perhaps more usefully - you can scroll to the bottom of any page to see a list of 'categories', and select a topic of interest to peruse.  You might also want to check out my favourite posts, as well as my 'lessons' diagnosing common philosophical mistakes (which I hope may impart some useful insights to budding young philosophers).

Comments Policy

If you're an academic philosopher or grad student, go right ahead. Non-philosophers: read carefully. This blog is a place for me to explore my ideas. I welcome the opportunity to improve them, so intelligent critical feedback is greatly appreciated. To that end, here are some ground rules.
(1) Keep it focused. The comments section is emphatically not an invitation to rant or "express yourself" however you please. The purpose of the comment threads is to continue the specific conversation started in the main post. [Exception: open threads.] If you want your own soapbox, get your own blog.

(2) Add value to the conversation. Don't just assert your disagreement, offer reasons why I should change my mind. (Questions are also welcome, of course.)

(3) No trolls allowed. Maintaining this blog is my hobby; receiving rude or abusive comments is not.

In general, I reserve the right to delete any comment that I judge to detract from the discussion. (Its author may email me to request a copy of the deleted comment.)

On a more positive note:
* You're welcome to comment on older posts. As always, I can't promise to respond, but I often will. Links to 'Recent Comments' are listed in the sidebar.

* I recommend signing in with a Google/Blogger account, so you can check the 'email follow-up comments' box to be notified of any responses.

* If you'd like to discuss a new topic of interest (while respecting rule #1 above), feel free to email me the topic request, and I'll often be happy to start a new thread for that purpose. I'm also very open to the possibility of publishing 'guest posts' by other academic philosophers. Again, email me if you're interested.

Moral Principles, Objective Generalizations

Folk often confuse the question whether morality is objective with the very different question of whether general moral principles, e.g. 'Stealing is wrong', are exceptionless. This is pretty silly, when you think about it. After all, nobody would think that the general biological principle "cats have four legs" is refuted -- or rendered merely 'subjective' -- by the existence of a three-legged cat. General principles may merely claim to describe normal cases, in which case they are not threatened by the odd exception.

So there's nothing to prevent a moral objectivist from affirming that normally, stealing is wrong, but in special cases it may be justified. Indeed, 'particularists' go even further, and claim that there are no (useful, informative) moral principles at all. One can still be a moral objectivist, and hold that each particular ('token') action is either objectively permissible or impermissible, depending on the precise details of the situation in question. They're merely claiming that we can't usefully generalize from the moral status of token actions to general act types. This suggests that morality is complicated, but complexity is obviously compatible with objectivity. As I put it in an old post:
Any adequate theory must be sensitive to the morally relevant features of a situation. It would be morally obtuse to claim that lying (for example) is always wrong, no matter the specific context. But this isn't relativism, so long as we agree that there's an objective fact of the matter in any particular case. Some lies are permissible and others aren't; but there's no one particular (token) act of lying that is at once both right and wrong, "relative" to different observers.

One might object, "If you can't give a straight answer to the question whether lying is wrong -- an answer that's true for everyone, in all times and places, then isn't that practically the definition of relativism?" But no, that's just as stupid as demanding a straight and universal answer to the question whether cats are tabby. The fact is: some are, some aren't. In each particular case, there's a perfectly objective fact of the matter. It may vary from case to case, however, rendering the general question underspecified. (Which cat? Which instance of lying?) Point to a cat, and I'll tell you whether that one is tabby. Likewise, point to a particular instance of lying, and we may determine whether that action was wrong or not. Any more general questions may lack answers. This isn't "relativism", in any interesting sense. It's just the perfectly ordinary point that (to put it in slogan form) variation precludes generalization.

General lesson: to think clearly about such fundamental issues, focus on tokens, not types.