Sunday, September 30, 2007

Measuring Time

Is time immanent in the world - reducible, perhaps, to the ticking of an atomic clock - or transcendent, somehow beyond the physical universe? (One of my old Canterbury lecturers gave a great talk on this a couple of years back.) We seem pressed towards a kind of middle ground. No mere clock can be the ultimate standard of time, for a clock may slow down, and that does not mean that the rest of the universe has sped up! No, we take clocks to be measuring something beyond themselves. The same will be true of any local standard (e.g. the movement of the sun).

Markosian (1993) suggests:
[The change in the sun's position] is also meant to be a stand-in for a more important change, namely, the pure passage of time. Indeed, it seems that our assumption that the sun's position changes at a constant rate amounts to the assumption that the sun's position changes at the rate of fifteen degrees per hour, i.e., that every time the sun moves fifteen degrees across the sky, one hour of pure time passes. So it at least appears that what we are after in trying to determine the rates of various physical processes, such as Bikila's running of the marathon, are the rates at which those processes occur in comparison to the rate of the pure passage of time. (pp.840-1)

I hope we can come up with a better account of this appearance, since "the rate of the pure passage of time" is gibberish. But why should we interpret "fifteen degrees per hour" as relating two changes (the sun through space vs. the present through time)? It seems on the face of it to just be reporting a single change, i.e., that the sun moves fifteen degrees across the sky in the space of one hour. The hour doesn't have to move. Just the sun.

Perhaps the worry is that if time doesn't pass, then the standard of an 'hour' must be defined in terms of immanent physical changes (like the sun's movement, or a clock's ticking). But all measurement is like this. A clock is to time as a ruler is to space. Nobody takes this to mean that we need an objective 'here', extending over space at a rate of one meter per meter, to tell us how long a meter really is in case all our rulers suddenly shrink. Yet Markosian writes (p.841):
suppose that the pure passage of time thesis is false... if it should turn out one day that the motion of the sun in the sky appears to speed up drastically relative to other changes, then we should say, not that the motion of the sun has sped up drastically relative to the pure passage of time, while every other change has maintained its rate, but, rather, simply that the sun's motion has sped up relative to the other normal change.

Why can't we say that the sun has sped up drastically, not relative to any other rate, but just simpliciter? It is moving a greater distance in space for the same interval of time. Simple.

It seems like the real issue here is substantival vs. relational conceptions of space-time. If space-time is like a container, an objective thing in its own right, then universal shrinkage - or slowing - of its contents might be a coherent possibility (even if we couldn't recognize such an event from the inside). If it's merely relational, on the other hand, and so fundamentally about relative proportions, then the idea of all distances or durations universally increasing may make no sense, since to double each component is to leave the ratios the same. (Note that while this is a curious issue in its own right, it's nothing to do with the passage of time.)

In any case, if immanent relations are all that we have access to, we may wonder whether substantive, transcendent space-time could really matter. So it is worth seeking a plausible immanentist theory. We noted at the start that no local standard would do. But perhaps a global generalization would serve better. Plausibly, we seek a frame of reference that yields the greatest amount of stability in our general region. Relative to my heartbeat, the world is in a crazy flux. But my clock, and the sun's movement, and a whole cluster of other natural processes, can be interpreted as each holding a constant rate relative to each other. So we take this general cluster as our standard of time. Any one component may become out of sync with the rest, in which case we will judge it to have changed its pace. The stability of the cluster thus transcends each of its parts (considered individually), whilst remaining wholly immanent. That strikes me as providing as good a basis for measurement as one could hope for.

Now You're Talking

Markosian (1993) defends the suspicious move from tensed language to tensed reality, by claiming that if we cannot paraphrase away talk of 'presentness' into B-theoretic language (e.g. 'being contemporaneous with this utterance'), "this must be because [the former] expresses something that cannot be expressed by anything like [the latter]." (p.833) But why should this matter? Perhaps the assumption is that sentences express world-involving propositions, so that the difference in expression reflects a difference in the world. But that would seem question-begging in this context. We might do better to skip straight to the question of how the world has to be in order to make our tensed sentences true. And, as noted here,
the sentences U: "The enemy is now approaching." and V: "The enemy [is] approaching simultaneously with U." are presumably made true by one and the same fact -- the tenseless fact of the enemy's approaching at some time t which is also U's time of utterance -- despite their lack of synonymity.

So the inference from language to reality seems thoroughly unmotivated. (Am I missing something?)

One way to bring this out is to consider the analogy between 'now' and other indexicals, e.g. 'I'. As Lewis and others have pointed out, there seems something special about attitudes de se, which refer to oneself under the indexical guise. They cannot simply be paraphrased into objective worldly descriptions. But I take it no-one is thus tempted to infer that the world itself contains a special property of "I-ness", held by me alone. So why does tensed talk tempt anyone into inferring that the world itself contains a special property of "presentness", held by the current moment alone?

Could God Pause Time?

Markosian (1993) defends A-theorists (who believe in the objective 'moving present') against the objection that they cannot answer 'How fast does time pass?' He suggests three possible answers:

(1) Measure time against itself. Time thus passes at the trivial rate of one second per second.

(2) Measure time against a non-temporal standard:
If I tell you that Bikila is running at the rate of twelve miles per hour of the pure passage of time, for example, then I have also told you that the pure passage of time is flowing at the rate of one hour for every twelve miles run by Bikila.

(3) Claim that the question involves a category error. Perhaps rate talk essentially "involves a comparison between some normal change and the pure passage of time." (p.843) The pure passage of time itself then has no rate to speak of. But it passes all the same.

These responses all seem woefully inadequate. Especially the third - what does it mean to speak of movement that occurs at no rate? Surely this is just to say that it doesn't literally move after all. The second seems similarly senseless: to move just is to move through time, i.e. to be in different positions at different times. And the first says nothing of substance.

Things that flow may speed up or slow down. Not only does movement entail a rate of movement (contra 3), but it must be possible for this rate to change. If time passes, it must be possible for God to alter its rate of passage - to 'fast forward', 'rewind', or 'pause' the flow of history. But that is incoherent. So time does not pass.

Why is it incoherent? Well, suppose God decides to pause the flow of time for five minutes. How much time has passed? None, for time is frozen. But ex hypothesi five minutes have passed, so time is not frozen. This is a contradiction. (Similarly for the other manipulations, which all involve changing the rate of time's passage to something other than 1 second per second.)

Saturday, September 29, 2007

Quote of the Day

The partisans of time often take it with such Spartan seriousness that they deny existence to virtually all of it.

- Donald Williams (1951), 'The Myth of Passage', p.458.

Update: or for a little more substance, p.464:
"Taking place" is not a formality to which an event incidentally submits - it is the event's very being. World history consists of actual concrete happenings in a temporal sequence; it is not necessary or possible that happening should happen to them all over again. The system of the manifold is thus "complete" in something like the technical logical sense, and any attempted addition to it is bound to be either contradictory or supererogatory.

See also: unchanging time and the infinite past.

Affect, Drive, and Evaluation

One worry for simple belief-desire psychology is that it potentially conflates several importantly different kinds of motivation or pro-attitude:

(1) Affect - i.e. positive emotional valence, or pleasure.
(2) Drive - sheer behavioural/motivational force, e.g. compulsion.
(3) Evaluation - degree of reflective endorsement.

(Any others?) It seems at least logically possible that these could come apart. For example, one might compulsively act in a way which feels emotionally neutral to the actor, but which they rationally judge to be bad. Does the agent "desire" to so act? It depends which of the three meanings we have in mind.

It seems clear that mere behavioural drive has no normative significance. Affect is more promising, since we generally like to have positive emotions. But in that case, it is arguably deriving its value from #3: evaluation.

I wonder whether these distinctions can help us to make sense of so-called 'conditional desires', e.g. the desire for ice-cream, which seem in some sense to be conditional on their own persistence. (Shieva has a neat paper [doc] explaining why the notion is so problematic.) We feel a transient affective pull towards ice-cream, and we positively evaluate the goal of obtaining ice-cream while the feeling persists. This allows us to avoid self-reference, as the evaluation is instead conditional on the persistence of affect. Or something like that.

Literalism and Automatic Interpretation

Some interesting remarks from Jason Kuznicki:
Fundamentalism is an interpretive strategy. Fundamentalism is not a divine command; it is a human decision about how to read a text, and it should be made to prove itself against all of the other equally human approaches to reading. No one has a magical hermeneutic key descended from Heaven, and there is no reason whatsoever to believe from the outset that fundamentalist readings are any closer to God than any other. The fundamentalist interprets his text just like anyone else does. The only difference is that he claims not to interpret, and the sacredness of the text causes many people to believe what would in any other context be an obvious imposture.

It is tempting to claim that a literal interpretation is somehow the most 'natural', or the 'default' option. But I think this is simply because it comes most easily to us literal-minded folk. Some past cultures were, I gather, not nearly so literal-minded. I vaguely recall reading an ancient Roman historical text, calmly relating the role that the gods and sea monsters played in the day's events. Even if my memory misleads me, we can certainly imagine a culture whose talk is infused with mythological references, which have more poetic than literal significance. (They may treat religion as a cultural practice, rather than a collection of metaphysical beliefs, and so be puzzled if an outsider were to ask whether they thought it was "really true". They wouldn't take themselves to be making such assertions.)

The point is this: given our cultural background, we tend to automatically interpret text literally. (There are some exceptions, e.g. idiomatic expressions.) It may not even occur to us to interpret it any other way - or if it does, it may seem forced or artificial. But this is a wholly contingent fact about us. We could have been different. In the imagined culture, one automatically interprets text poetically. It may not even occur to them to interpret it any other way. No more than we are tempted to think that a man needs a wheelchair upon making a purchase that "cost him an arm and a leg."

Does that sound right? I've heard of similar views in aesthetics, i.e. that there is no natural distinction between "realistic" vs. "abstract" art or representation. There are only signs that are more or less conventionally familiar. The more familiar ones are recognized automatically, and so no conscious interpretive effort is required, which misleads us into thinking that there is no interpretation involved at all. Contingent ease is thus mistaken for essential naturalness. There's surely something right about this, though the leap to full-blown interpretive relativism seems a bit suspicious. Any thoughts?

Apoorcalypse

$5000 for every U.S. baby? Maybe Clinton is not so bad after all. This is the very best kind of left-liberal policy, (a) being universal rather than means-tested, and (b) offering cash rather than specifically delimited goods, and thus ceding control over the spending decision to the recipients. It's really wonderful to see this idea floated in mainstream politics. (See here for why it is such a good idea.)

It's strange to read the objections from right-wingers in the comments here. There are some real head-scratchers. (Some appear to confuse the end of poverty with the end of the world.) 'Justin', for example, mockingly asks:
If she's serious, why not pay for all basic food products? You shouldn't have to pay for things like flour!

But one of the major arguments in favour of general (cash) redistribution is that it doesn't distort incentives and price signals the way specific interventions (flour) would. We're talking about redistribution whilst maintaining a market economy. That's a pretty important difference.

'The Ghost' adds: "there's nothing you can do that would aggravate American poverty more than promise every poor kid $18,000 when they turn 18."

Yeah, there's nothing like an unconditional cash injection to keep people poor. I guess we ought to ban trust funds for rich kids too. We shouldn't want them to be disadvantaged by all that money waiting for them when they grow up, just because they were unfortunate enough to have wealthy parents. They should enjoy the same freedom from resources as everyone else.

Friday, September 28, 2007

Normative Explanations

Railton (1986) proposes a naturalistic conception of normativity: "facts about what ought to be the case are facts of a special kind about the ways things are. As a result, it may be possible for them to have a function within an explanatory theory." (p.185) To capture this, he introduces the notion of criterial explanation: "we explain why something happened by reference to a relevant criterion, given the existence of a process that in effect selects for (or against) phenomena that more (or less) closely approximate this criterion." The bridge collapsed because it failed to meet certain engineering criteria; it was not up to scratch; it should have been reinforced. (There is nothing irreducibly normative about this explanation.)

There is a similar explanatory role, Railton thinks, for morality (the "comprehensive" criteria given by the impartial "moral point of view" -- roughly, preference utilitarianism). We may think it is not a coincidence that we have abolished slavery. Abolitionism is partly explained by the fact that slavery is unjust. You may suggest that we can just as well appeal to the mere beliefs of the abolitionists. But Railton points out that even unrecognized injustice may be conducive to social unrest. The mere fact that the interests of some group are being systematically discounted will tend to foster dissatisfaction among those group members, even if they don't consciously appreciate what's going on.
An individual whose wants do not reflect his interests or who fails to be instrumentally rational may, I argued, experience feedback of a kind that promotes learning about his good and development of more rational strategies. Similarly, the discontent produced by departures from social rationality may produce feedback that, at a social level, promotes the development of norms that better approximate social rationality. (p.193)

Hence Railton predicts that "over time, and in some circumstances more than others, we should expect pressure to be exerted on behalf of practices that more adequately satisfy a criterion of rationality." (pp.196-7) Justice is thus seen as a selection pressure in cultural evolution.

But, as others noted in class, this explanatory role fits ill with Railton's utilitarianism. Some form of egalitarianism - which places greater weight on the 'separateness of persons' - would seem to better predict social stability (given human selfishness). That's no reason to favour egalitarianism as a normative theory; but it does raise the question of what distinctive explanatory role a utilitarian moral theory is thought to have.

I prefer a different route. We shouldn't expect normative facts to have any direct impact on society. But they'd better influence us as rational agents!

Often (we may think), we hold the beliefs we do precisely because they are the most reasonable ones to hold. So if the normative fact of slavery's immorality simply consists in this being the most reasonable view, then the normative fact itself explains why we believe it (given our general capacity for reasonable belief). And that belief in turn influences our actions.

This account takes us far away from the kind of naturalism that would see ethics as just another branch of scientific inquiry (sociology, say). But I think that is a good move.

Thursday, September 27, 2007

Can Railton Avoid the Conditional Fallacy?

In 'Moral Realism' (1986), Railton suggests a form of ideal agent theory (of one's non-moral good) designed to avoid the conditional fallacy:
Give to an actual individual A unqualified cognitive and imaginative powers, and full factual and nomological information about his physical and psychological constitution, capacities, circumstances, history, and so on. A will have become A+, who has complete and vivid knowledge of himself and his environment, and whose instrumental rationality is in no way defective. We now ask A+ to tell us not what he currently wants, but what he would want his non-idealized self A to want - or, more generally, to seek - were he in the actual condition and circumstances of A. (pp.173-4, bold added.)

In class yesterday, Dave came up with a wonderful example to suggest that even this double-counterfactual creates interference. Suppose that A's strongest desire is that his cognitive capacities never decline. He desires that, if at any future moment he becomes stupider than he previously was, he dies. (This is just a more extreme version of the common preference many of us have to die rather than succumbing to Alzheimer's or similar mental degeneration.) Given Railton's merely instrumental conception of rationality, there's no reason why this desire couldn't survive idealization, and so be shared by A+. But now the indexical character of the desire is latching on to a new content, given by A+'s context rather than A's. Given that A+'s strongest desire is to be no stupider, what he would want were he to find himself "in the actual condition and circumstances of A" is simply to die! This clearly does not reflect what is in A's objective interest at all, since A has not actually suffered any degeneration. The problem is merely an artifact of the counterfactual scenario.

Liz Harman then suggested a couple of clever solutions. The problem, recall, is that the counterfactual context changes the content of A's indexical desires. So one solution would be to construct the idealization according to the (actual) content rather than character (meaning) of A's desires. That is, even in the idealized context, we treat the desires as referring to A and his actual circumstances. Then A+'s strongest desire is merely to be no stupider than A.

A second option, which I like even more (though I'm not sure how much of it is a reconstruction on my part) would be to bring A's context over to A+. That is, ask A+ to assess an indicative rather than subjunctive conditional: not "what would you want if you were to find yourself in A's condition", but "under the hypothesis that you are in A's actual condition, what do you want?" (Very 2-D!) I think that should work, right?

(Mind you, it's a bit of a mystery why Railton appeals to this idealization process at all. Given that he only builds in full information + instrumental rationality, it doesn't seem that A+ is allowed to revise any of A's ultimate ends. So what work is he doing? Why not just directly identify A's objective interest with whatever would best fulfill his ultimate desires in fact? Presumably that's what is supposed to be guiding A+'s decision. Smith mentioned Railton's "wants/interests mechanism" as going beyond mere instrumental rationality, by tending to bring our motivations more into line with our affective responses, but this alignment does not seem to be included in the idealization process quoted above. Can anyone think of a case where A+ would appropriately choose something other than what would best fulfill A's ultimate desires? Divergence - as in the 'degeneration' case above - seems to indicate precisely that the idealization has gone wrong!)

Tuesday, September 25, 2007

Rational Force: science vs. ethics

It's a widespread view that science, but not ethics, has rational force. Creationists are irrational, whereas fascists are merely nasty. Is this alleged asymmetry defensible? I'd recommend rejecting the instrumental conception of rationality, so I think both have rational force. But Railton suggests the opposite approach in his 'Moral Realism' (1986, pp.166-7):
From the standpoint of instrumental reason, belief-formation is but one activity among others: to the extent that we have reasons for engaging in it, or for doing it one way rather than another, these are at bottom a matter of its contribution to our ends. What it would be rational for an individual to believe on the basis of a given experience will vary not only with respect to his other beliefs, but also with respect to what he desires. From this it follows that no amount of mere argumentation or experience could force one on pain of irrationality to accept even the factual claims of empirical science... Unfortunately for the contrast Ayer wished to make, we find that argument is possible on scientific questions only if some system of values is presupposed.

This need not imply epistemic relativism, since "epistemic warrant may be tied to an external criterion - as it is for example by causal or reliabilist theories of knowledge." (p.171) Still, on this account we cannot say that creationists are irrational. They merely fail to adhere to the objective, external norms we would hold them to -- same as the fascists. In either case, we (generally) have plenty of good reasons to care about those external norms. So what grounds are there for thinking the rational status of ethics and science differ?

Monday, September 24, 2007

Maximizing over Infinite Time

Mathew Wilder asks:
Consequentialism aims at maximizing the good in the long term, or on the whole. But what if the universe is infinite, temporally speaking? Then it seems that there are no actions that maximize the good (or that every action does so) because there will always be an infinite amount of good (and bad) in the future (and in the past as well, if the universe is truly infinite).

I recall reading once about how the notion of a multiverse where every action/decision results in another universe seems to make moral choices worthless, from a consequentialist view-from-nowhere, since every good and bad possibility is an actuality. [Yup - RC.] However, it seems plausible that we could focus the scope of our consideration on the universe in which we live without being open to an accusation of arbitrariness.

But, even if we keep our focus on the only universe which we experience, if it is infinite, then how are we to non-arbitrarily judge what maximizes the good? Should it be what maximizes the good in ten years, or one hundred, or a million? Why should the tenth year matter, but not next year, or all of the infinite years to come?

Now, it is clearly disputable that the universe will continue infinitely, but it certainly seems plausible. Do you think I have hit on an interesting problem, or has this been dealt with before?

Sounds interesting. (If anyone is familiar with the literature on this topic, feel free to provide references in the comments!) Cf. my post on the infinite spheres of utility paradox.

My initial thought is to clarify that what the consequentialist wants to do is to bring about the best world practically possible. And it seems that even when comparing worlds that contain (equally) infinite value, we can judge that some are better than others. For a simple example, consider a case of 'domination', i.e. where one world is (finitely) better than another at each of the infinite moments in time. Clearly, this world is also better overall, even though we cannot attribute a higher quantity of value to it (since both are just countably infinite).
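For a concrete (made-up) illustration of domination, suppose world A delivers two units of value at every moment and world B delivers one:

```latex
\sum_{t=1}^{\infty} a_t \;=\; \sum_{t=1}^{\infty} b_t \;=\; \infty,
\qquad \text{yet} \qquad
a_t = 2 \;>\; 1 = b_t \quad \text{for every } t.
```

Both totals diverge, so neither world can be assigned a greater quantity of value; but since A beats B at every single moment, the pointwise comparison still licenses the judgment that A is the better world.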

[N.B. This is a puzzle for value theory generally, not anything peculiar to consequentialism -- cf. R.M. Hare.]

Anyway, I'll throw open the comments for anyone else who wants to chip in...

Temporal Neutrality: can we still care?

In Reasons and Persons, Parfit invites us to imagine a character called 'Proximus' who cares more about the nearer future. This 'bias towards the near' means that he would wholeheartedly prefer to undergo intense pain later rather than a mild pain now. This seems irrational. We may advocate temporal neutrality in general, and so think that "mere differences in timing... cannot have rational significance." I agree with this, so in this post I want to address a couple of objections Parfit makes to this view.

Parfit's basic strategy is to compare the bias towards the near with two other temporal biases we have. The bias towards the future is seen in our tendency to be relieved when bad things are past. And the bias towards the present is our tendency to care especially about the pain we are now experiencing. Parfit claims that temporal neutrality requires us to denounce both these tendencies, but such denunciation seems crazy. So we should conclude that mere differences in timing can be significant after all. His most compelling example is of "the mounting excitement that we feel as some good event approaches the present -- as in the moment in the theatre when the house-lights dim." Such excitement seems perfectly reasonable, yet - Parfit claims - it is an instance of the bias towards the near. Shouldn't temporal neutrality require us to be just as excited about distant pleasures?

Well, no. We should be more careful in understanding the scope of the temporal neutralist's claims. Obviously the claim is not that timing never matters for anything: if a bomb is about to explode, better to start running sooner rather than later! For similar reasons, it makes more sense to fear the explosion when the risk is in the future rather than the past. It's perfectly reasonable to attend more to present events, and to be excited about those that will very soon be present, etc. The neutralist need not deny any of this. Their claim is simply that one's preferences should not involve any temporal bias, or time-inconsistency. Note that temporally responsive feelings are perfectly endorsable from a timeless perspective. I do not later regret feeling excitement before the show, but I would regret choosing a lesser nearby good over a greater distant one. That's a revealing difference.

If we recast Parfit's proposed biases in terms of preferences (e.g. for the bias towards the present, say you would forsake greater future benefits in order to obtain a small boost of pleasure right now), they no longer seem any more defensible than Proximus' bias towards the near. So I don't think Parfit has any good objection to temporal neutrality after all.

Against Dressing Up

Fashionable or fancy appearance has no intrinsic value. The only value is to make yourself look comparatively better than others. If everyone dressed up, no-one would be any better off than if all had stuck to casual attire. In fact, everyone would be worse off, because less comfortable, not to mention the wasted time and effort. It's a mere rat race, so those who put effort into their appearance impose externalities on those who don't (by making them look worse). That's obviously bad, and there is no general benefit to justify imposing this costly transfer of status. So, we may conclude, it is immoral to 'dress up', follow fashion, put much effort into your appearance, etc.

N.B. This complaint does not extend to basic hygiene, since that is a non-comparative value: a world full of smelly people really is worse, in a way that a world of unkempt people is not.

Possible objections: I see two ways one might rebut the claim of 'no general benefit' here:

1. Insist that mere appearance is a non-comparative value after all. (Apparently cosmetic surgery gives lasting satisfaction, unlike most luxury purchases which people soon adjust to. This at least suggests it isn't comparison to one's recently past self that one values here. But it may still be the comparison to other people.) I remain skeptical.

2. Appeal to status pluralism. If some people care more about appearances than others, then maybe those who care can obtain great subjective benefits while the rest of us don't much care about the imposed "cost" of looking worse in comparison. I'm sympathetic to this line of thought -- the only flaw is that it ignores the run-on consequences: other people think appearances matter even if we don't, and so may treat us worse, and we certainly care about that.

Absent any more convincing objections, we seem led to the conclusion that caring for appearances is indeed a mere 'rat race', or Prisoner's Dilemma, such that deliberators in the Original Position (behind the veil of ignorance) would make a collective agreement not to start down that track. Is there anything to stop me drawing the convenient conclusion that dressing up is not just tiresome, but unjust?

Saturday, September 22, 2007

Evaluating (and Enumerating) Pains

What matters: the objective quality and duration of a pain, or our subjective conception of it? Suppose you undergo (1) 30 seconds of intense pain; or (2) 35 seconds of intense pain, followed by 5 seconds of milder pain. It turns out that most people prefer the second type of experience; afterwards, it seems to them to have been more bearable. Does that make it less bad for them in fact, or are they simply irrational/mistaken?

I'm drawn to the subjectivist option. (Plausibly, if anything is subjective, pleasure and pain most surely are.) What matters is subjective suffering, not the objective qualities of pain. It just turns out to be a curious fact about human psychology that you can make us suffer less by inflicting additional (if attenuated) pain.

Parfit seems to assume the contrary view in arguing against temporal neutrality in Chp 8 of Reasons and Persons. In his 'Case Two', you wake up the day after a painful operation, though you cannot remember exactly how long it was. A nurse tells you there are two possibilities: (1) You had 5 hrs of pain, but the operation is now over; or (2) You had just 2 hrs of pain yesterday, and have another hour still to come. Parfit suggests that the first seems preferable, despite being worse for your life as a whole. But, it seems to me, one episode of extended pain may have a roughly constant disvalue no matter its actual duration, at least if you cannot subjectively tell the difference. If this is so, then the first option is actually better for your life as a whole. It contains merely one episode of pain, whereas the alternative contains two.

One objection to my position is that memory flaws may distort our retrospective understanding of how bad a pain really was at the time. (This seems to be what Daniel Gilbert thinks of the attenuated pain case, based on his remarks in a talk last week.) I'm not sure what to make of this suggestion. But even if we're drawn to a more objective theory of hedonistic evaluation, we may only wish to count as distinct experiences those that are qualitatively distinct. (We would then think that duplicate universes, or Nietzschean eternal recurrence, would make no difference to the value of the world.) Most of the time, even intrinsically identical pains are embedded in discernibly different experiences, and so count as recognizably distinct. But in Parfit's hospital case, it seems like the duration doesn't introduce sufficient qualitative differences. After a while, many moments of hospitalized agony all blur together, and we may think the reason for this is precisely that there is truly nothing in the experiences to distinguish them. And so they count for just one.

My intuitions on these cases are all over the place, so I'd love to hear what others think. For any who prefer a more practical example, consider Michael Vassar's past comment:
One potentially important example of experiences that may be identical enough not to stack and may not be comes from factory farms. It's plausible that factory farms aren't all that bad, but also plausible that they are good candidates for "worst thing ever". I'd definitely like to know which is true.

What do you think? Is it true that (a) qualitatively (sufficiently) identical experiences only count for one? and (b) many animal pains are (sufficiently) qualitatively identical?

Wednesday, September 19, 2007

Regulating Aims

Railton offers some interesting thoughts on the paradox of hedonism (and self-defeating consequentialist aims more generally) in his 'Alienation' paper (Facts, Values, and Norms, p.156):
However, it is important to notice that even though adopting a hedonistic life project may tend to interfere with realizing that very project, there is no such natural exclusion between acting for the sake of another or a cause as such and recognizing how important this is to one's happiness... while the pursuit of happiness may not be the reason he entered or sustains the relationship, he may also recognize that if it had not seemed likely to make him happy he would not have entered it, and that if it proved over time to be inconsistent with his happiness he would consider ending it.

So the sophisticated hedonist (SH) may take the goal of hedonism to regulate his other ends, but nevertheless regards those first-order contingent desires as non-instrumental for as long as he retains them. Railton continues (p.157):

It might be objected that one cannot really regard a person or project as an end as such if one's commitment is in this way contingent or overridable. But were this so, we would be able to have very few commitments to ends as such. For example, one could not be committed to both one's spouse and one's child as ends as such, since at most one of these commitments could be overriding in cases of conflict. It is easy to confuse the commitment to an end as such (or for its own sake) with that of an overriding commitment, but strength is not structure.

I don't think that is an "easy" confusion to make at all. It would be downright silly for someone to think that a desire must be instrumental merely because it was overridable. As I see it, the worry here is not that hedonistic concerns may outweigh SH's other desires; it is that they may extinguish them utterly. (Non-hedonistic ends seem to be treated as in some sense providing merely prima facie rather than pro tanto reasons.) We do not find this in ordinary cases of conflict: a parent will still care about their spouse, even as they favour their child. But SH, on my favoured reading, would cease to recognize an end if it proved clearly detrimental to his long-term happiness. So the structural relations among these desires are unusual, and this should be recognized.

We do not here have two first-order desires (on a structural par) weighing against each other. Nor - it is argued - do we have derived desires that are merely instrumental to one's hegemonic "true aim" of hedonism. Rather, in the case of SH we have first-order desires that are largely non-hedonistic, yet - despite being non-instrumental - they are contingent on the 'regulating aim' (let us call it) of hedonism. Hedonism is treated as a higher-order desire. It does not guide one's actions directly, but it guides the acquisition and maintenance of one's first-order desires. That's how I would want to explicate the idea, at least.

Railton's most vivid explication comes on p.159:

An individual could realize that his instrumental attitude towards his friends prevents him from achieving the fullest happiness friendship affords. He could then attempt to focus more on his friends as such, doing this somewhat deliberately, perhaps, until it comes more naturally. He might then find his friendships improved and himself happier. If he found instead that his relationships were deteriorating or his happiness declining, he would reconsider the idea. None of this need be hidden from himself: the external goal of happiness reinforces the internal goals of his relationships. The sophisticated hedonist's motivational structure should therefore meet a counterfactual condition: he need not always act for the sake of happiness, since he may do various things for their own sake or for the sake of others, but he would not act as he does if it were not compatible with his leading an objectively hedonistic [i.e. maximally happy] life. Of course, a sophisticated hedonist cannot guarantee that he will meet this counterfactual condition, but only attempt to meet it as fully as possible.

When we discussed this in Michael Smith's seminar today, it was initially suggested that this 'counterfactual condition' - by implying that SH would never act on his non-hedonistic desires when doing so would be suboptimal - required a kind of overdetermination: although actually motivated by concern for others, SH's stronger hedonistic desire is waiting there in the background, ready to override the others just in case they fail to fall into line. But that structure looks much like the simple hedonist's. So I think we do better to interpret the sophisticated hedonistic desire as a purely higher-order one, which does not have any direct motivational force at all. (Note that the counterfactual condition may still be true due to finkish dispositions. Though it probably calls for a slightly looser interpretation in any case, i.e. SH may act suboptimally at times, so long as this doesn't too greatly undermine the happiness of his life as a whole.)

This is vital for seeing the difference between Railton's characters of John and Juan. Both feel great affection for their respective wives, and recognize the impersonal demands of utilitarianism as having ultimate moral weight in some sense. But John justifies his good treatment of his wife in directly utilitarian terms: "I've always thought that people should help each other when they're in a specially good position to do so. I know Anne better than anyone else does, so I know better what she wants and needs." (p.152) He thus seems troublingly 'alienated' from his personal concerns and relationships. Juan, on the other hand, responds simply: "I love Linda... So it means a lot to me to do things for her." (p.163) He adds the utilitarian justification only when further asked how in principle his marriage fits into the greater scheme of things.

Due to the assumption of structural parity, and thus motivational overdetermination, many in our class concluded that John and Juan were much the same, differing only in which of their two aligned desires was causally operative. But I don't think that's the right way to look at it. Juan isn't directly motivated by impersonal utilitarian considerations at all (we may say) -- not even waiting inoperative in the background. He has a quite different motivational structure, full of deeply personal and non-alienated concerns; it is just that these concerns are regulated by (or contingent on) a higher-order requirement that they align with the impersonal goals of morality.

Sound plausible?

Tuesday, September 18, 2007

An obligation to have sex?

There's an interesting exchange going on at Ask Philosophers about whether one can be morally obliged to have sex with one's partner. (Ignoring outlandish cases where aliens will otherwise blow up the world, etc.) Sally Haslanger argues not, appealing to one's inalienable right to one's own body. Alan Soble responds that we may also want to take into account the virtue of benevolence, considerations of "orgasmic (distributive) justice", and prudence / maintaining the relationship. Interesting stuff. It reminds me of some of the old comments here. (My initial intuitions are more in line with Haslanger's, but I don't know how justifiable they are at the end of the day. I mean, the idea of reluctant sexual relations seems simply awful. But if one is tossing up various possible options - all enjoyable - and it just happens that one would prefer reading even more than sex from a purely self-interested viewpoint, then in that case I can see that love of one's partner might reasonably motivate a good person to have sex instead. A less favoured option may still be viewed favourably, after all. But if the act were engaged in reluctantly, that would seem to change its very nature -- much for the worse!) Any thoughts?

Aside: one curious implication of utilitarianism seems to be that women ought to be a lot more promiscuous. Could this ground an especially strong 'demandingness objection' to the theory?

Motivating Descriptivism

Here's a quick and dirty argument (inspired by Frank Jackson's seminar last night):

Premise: Signs are no use to us if we don't know (have access to) what they mean.
Conclusion: Reference requires mediating descriptions.

Convinced?

The Maxim of Mirroring

Jonathan Ichikawa discusses a curious puzzle, 'It must be so (but I wouldn’t say that it is so)':
“it must be that X” can be [used] to express high confidence in X. What’s striking me is that this is so even when, intuitively, my epistemic position isn’t strong enough to just assert X. Is that weird? ...

I know that sometimes, pragmatic rules prohibit me from asserting things that are entailed by other things that I can assert, like when I say “she’s more than four feet tall” when someone asks me how tall she is and I know that she’s 5′4″. But this seems worse: I’m prohibited by something like the maxim of quality from asserting something that is strictly stronger than something I am permitted to assert.

This strikes me as a special use of the modal 'must'. In saying, "it must be that X", we are not merely communicating the claim "it is the case that X" with additional modal force ("it is - indeed, must be - the case that X"). Rather, we seem to be saying something more like, "General considerations force me to conclude that X." We are indicating a certain indirectness of thought; a need for inference from the general to the specific. That is how we reached the conclusion ourselves, after all, and by prompting the listener to undergo a similar inference, we communicate as much. (We may need to add this "maxim of mirroring" to Grice's list!)

On this account, then, direct assertion ("X is the case") implicates possession of direct evidence. Note that introducing the 'must' - though semantically stronger - requires us to make an inferential step before we reach the simple conclusion that X is the case. By drawing out the listener's thoughts in this roundabout way, one mirrors the thoughts of the speaker. That is, you communicate that your grounds for believing X are indirect (and so perhaps insufficient for knowledge). In this way, it can be felicitous to make the strong claim "X must be the case" - even if you lack sufficient evidence for the weaker claim that X is actually true.

A problem remains. Suppose you don't know whether X is true, but may permissibly say 'it must be that X.' Assuming that knowledge is the norm of assertion, Jonathan points out that we must either (1) deny that 'it must be that X' entails X on the above use of 'must', or (2) claim that 'it must be that X' is not an assertion but "some more tentative thing." (I guess a third option would be to deny that knowledge is closed over entailment.) Which of these options most naturally fits the story I've told above?

Reporting "morals"

*sigh*. Not this old equivocation again:
Where do moral rules come from? From reason, some philosophers say. From God, say believers. Seldom considered is a source now being advocated by some biologists, that of evolution.

It seems to be the great new trope for ignorant reporters. Conflate descriptive inquiry into sociological norms with normative inquiry proper, and then marvel at how scientists are breaking new ground in contrast to those doddery old philosophers and theologians and whatnot. Please stop.

Sunday, September 16, 2007

Examples of Irrational Desires

More from Reasons and Persons. I love this one (pp.123-4):
A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday... This indifference is a bare fact. When he is planning his future, it is simply true that he always prefers the prospect of great suffering on a Tuesday to the mildest pain on any other day.

We can judge such a preference to be irrational because it makes arbitrary discriminations. It is ad hoc, and fails to treat like cases alike. A more coherent desire-set would value pleasure on a future Tuesday just as it values pleasure on any other day.

Parfit also discusses "Within-a-Mile-Altruism". Rather than caring about the welfare of others in his general community, the Within-a-Mile Altruist cares only about those who are located within one mile of him. One step further, and he feels indifferent to their suffering.

I've discussed similar arguments from Michael Smith here. This leads to the core argument of my essay, 'Why be moral?':
We have already established that self-interested reasons would force the amoralist to develop an intrinsic appreciation of at least some other people as ends in themselves. But it would seem arbitrary to recognize only some people as having intrinsic worth or even agent-relative worth to him. We can ask the relativistic amoralist why others do not also have worth to him. It seems plausible to hold that his overall desire set could be made more unified and coherent by adding in a more general desire for human well-being. This would contribute to explaining and justifying the more specific values the amoralist holds in valuing himself and his friends. We thus have rational grounds to criticize his desire set, in that it fails to exhibit such a degree of internal coherence. Given the rational pressure towards coherence, we may thus conclude that even the amoralist has reason to care about morality.

Reflecting on Irrational Desires

In Chapter 6 of Reasons and Persons, Parfit contrasts three desire-based versions of the Present-aim theory of what we have reason to do. The Instrumental version (IP) claims that we have most reason to do whatever would best fulfill our present desires. The Deliberative version (DP) instead appeals to the desires we would have on ideal reflection. Finally, the Critical version (CP) claims that some desires may be "intrinsically irrational", and others may be rationally required, and that we have reason to fulfill only the latter. But it isn't clear to me how DP and CP differ.

After suggesting that we may consider apocalyptic desires (etc.) to be irrational and to provide no reason for acting, Parfit writes (p.119):
[The Deliberative theorist] might insist that his theory is adequate, since those who were thinking clearly and knew the facts would not have such desires.

Whether this is true is hard to predict. And even if it were true, our objection would not be fully met. If certain kinds of desire are intrinsically irrational, any complete theory about rationality ought to claim this. We should not ignore the question of whether there are such desires simply because we hope that, if we are thinking clearly, we shall never have them. If we believe that there can be such desires, we should move from the Deliberative to the Critical version of the Present-aim Theory.

Bizarre. It may be hard to predict whether a desire would be rejected on ideal deliberation, but that is just to say that it is hard to tell whether the desire is truly irrational. For surely persistence on ideal reflection provides the very criterion of what it is for a desire to be rational. (DP then tells us how to derive reasons from rationality. Parfit seems to be doing the opposite: taking reasons as given, and defining 'irrationality' as the objective absence of these.)

Parfit is thus using 'intrinsic rationality' as a term of dogmatic endorsement. He demands that a "complete theory of rationality" be more like the Ten Commandments: an explicit list of which desires are to be deemed 'good' or 'bad', without any need for actual reasoning. But that would be a theory of revelation, not rationality. What we want from our complete theory of rationality is a process of inquiry by which we can discover which desires are ir/rational. We shouldn't expect to be given the answers right at the outset, at least not explicitly. Instead, the theory has implications for what conclusions are or are not rational. This is the only sense in which it "ought to claim" such things.

It's especially strange when Parfit talks about how "[w]e should not ignore the question of whether there are [irrational] desires..." Of course we shouldn't! (Does he really take the Deliberative Theorist to disagree with this?) We should inquire as best we can - undergo a process of rational deliberation, do philosophy - and see where we end up.

Saturday, September 15, 2007

Desires and Preferences

Desires address a single goal or object, and grant it an absolute weight on the scale of utility: "X is worth 5 utils to me." (We may write "|X| = 5" for short.) Preferences, in contrast, are purely comparative: "I prefer X to Y." These two types of states are presumably interdependent: in particular, I should prefer X to Y iff |X| > |Y|. But which is more fundamental?

Desires seem more basic, being monadic (taking just one object) and scalar. So I'm inclined to see comparative preferences as merely a fancy way to report relative desire weightings. But that would render intransitive preferences not just irrational, but strictly impossible. To prefer X to Y, Y to Z, and Z to X, is not possible for any combination of desire weights |X|, |Y|, and |Z|, for the 'greater than' relation is transitive. But isn't it possible that someone really might be disposed to pick X when offered X or Y, pick Y when offered Y or Z, and to pick Z when offered Z or X? Our motivational systems are messy and context-dependent in all sorts of ways that the simple desire model fails to capture.
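To make the formal point vivid, here is a minimal sketch (my own illustration, not anything from the post) of why a preference cycle resists the desire-weight model: a brute-force search over rankings confirms that no assignment of scalar weights can realize cyclic preferences over three options, while a transitive set is easily accommodated.

```python
from itertools import permutations

def representable(prefs, options):
    """Check whether a set of strict preferences ('a over b') can be realized
    by *any* assignment of scalar desire weights, i.e. whether some ranking of
    the options makes every preferred item outrank its rival. Brute force over
    all orderings, which suffices for toy cases like this one."""
    for ranking in permutations(options):
        weight = {opt: len(options) - i for i, opt in enumerate(ranking)}
        if all(weight[a] > weight[b] for a, b in prefs):
            return True
    return False

options = ["X", "Y", "Z"]
transitive = [("X", "Y"), ("Y", "Z"), ("X", "Z")]
cyclic     = [("X", "Y"), ("Y", "Z"), ("Z", "X")]

print(representable(transitive, options))  # True: e.g. |X|=3, |Y|=2, |Z|=1
print(representable(cyclic, options))      # False: no weights |X|, |Y|, |Z| will do
```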

Might we instead define or construct desires out of preferences? This would be a lot messier. We've already seen that in case of intransitive preferences there would be no coherent way to assign absolute weights to each individual desire. But so long as the preferences satisfy the standard formal requirements (transitivity, asymmetry, etc.) it might work out better. Though it's not too clear how to assign scalar values to a mere ordering of more and less preferred outcomes, one might suggest that the scale of "utils" never had any clear meaning in the first place.

Most likely neither option is wholly adequate, as it seems unlikely that either desires or preferences have exact neural correlates. They are what Dennett calls "real patterns": useful abstractions (like centers of gravity). It's unsurprising, then, that such models break down when we push them too far. I expect that desire talk will usually be more useful than preference talk, at least. Are there any other competitors for modelling human motivation?

Are reasons in the head?

I'm thirsty - I desire that my thirst be quenched - so I drink some water. What is my reason for drinking the water? Is my desire itself the reason, or is the reason instead whatever qualities of the object inspire my desire, e.g. the water's thirst-quenching quality? Does it matter?

As a terminological point, it seems that a 'reason' is what answers a call for explanation. Why did/should I drink the water? "Because I wanted to" is not much of an answer. "Because it quenches thirst (and I was thirsty)" seems much better.

On the other hand, we want reasons to be the causes of our actions, and it seems more natural to say that our actions are caused by our beliefs and desires than by external qualities. But these are not really competing causal explanations. Features of the external world explain why we have the particular mental states we do. The world causes our behaviour by means of our beliefs and desires.

So there's a quick reason for favouring the view that reasons are things out in the world, rather than just in the head. I'm still not sure I grasp the significance of the debate, though. It's very much like the objects of perception debate (i.e. whether we perceive the external world or just internal sense-data). It all strikes me as mere wordplay and verbal confusion. The base facts are not in dispute: we have mental states that represent the external world. The only question seems to be how to talk about them sensibly. Or am I missing something?

Thursday, September 13, 2007

Value, Alienation and Choice

The ever-calculating consequentialist is incapable of commitment, and is thus deprived of a good life (which arguably requires stable life projects, relationships, etc.). So we shouldn't want people to be ever-calculating. Still, even when consequentialism is understood only as providing a criterion of rightness, rather than a decision procedure, the threat of alienation remains. According to maximizing utilitarianism, we are obliged to sacrifice our own lives and interests entirely, becoming (say) an aid worker in Africa if doing so would in fact bring about the greatest net benefits.

But perhaps it wouldn't. Worries about excessive demands aside, it seems plausible that some greater degree of self-investment is required for many to maintain their capacity to create value in the long term. Pure altruism is unsustainable; as human beings we need our personal projects and relationships. Like they say on airplanes: it's only once you have your own oxygen mask secured that you're in a position to help others. Moreover, it is largely through these personal commitments that we succeed in bringing value into the world!

Still, as Vanessa points out to me, this is not enough to get one off the hook. Even granting the necessity of having life projects, we can go back a step and question one's initial choice of vocation. Maybe it would do more harm than good to wrench a person off their established path, but we might still say that they ought to have chosen a different path in the first place. (Compare Unger's claim that young philosophers should go off and become rich lawyers instead -- effectively farming themselves for charitable donations. Ben Miller is similarly skeptical of the value of philosophy.) If one could form a stable and meaningful commitment to aid work, that would be pretty great.

Even so, we shouldn't want everyone to do the same thing. One thing that bothers me about these kinds of discussions is that the alleged obligations sometimes seem to be herding everyone into the same restrictive mold -- as though there was only one way for a person to lead a good life. Such a world doesn't sound at all appealing to me. I think a world with artists, clowns, and all the rest, is actually better - more valuable - than one where everyone is working directly to relieve suffering. Partly this is because these other vocations indirectly enrich the lives of many people, and it's hard to estimate the run-on effects of this for creating a happier future. But it's not just instrumental: perfectionist values and "civilization" are arguably the greatest goods we could hope to see advanced in the world -- and this is more important, I think, than merely removing the bad.

So, given that the world I want to live in is one that contains such vocational plurality, ethical holism then implies that it's morally permissible to pursue any one of them. (An individual may fill any role that is part of the best structure. You can't promote a general structure without permitting its specific implementation.) The globally optimal framework will not necessarily demand optimality from each individual who is part of it. So it is enough to fill a needed or desirable role, even if it is not the most important role in the system. Still, not everything is permitted: there may be some roles (producing manipulative advertising springs to mind) that have no place in a good world. But my position is less demanding than some consequentialists would have it.

This fits in with my general view that the most demanding work of morality should be done at the level of politics and institutional structure, leaving individuals with a very broad space of moral autonomy within which they may shape their personal lives. Such freedom is likely the best way to encourage the passionate pursuit of the diverse values we would hope to see realized in the world, and also seems good in its own right.

So I'm not at all sympathetic to claims that people are morally obliged to pursue one or other particular vocation "for the greater good". Such alienating, impersonal motivations cannot sustain us for long, and the best world has room for our diverse passions in any case. It is important that our chosen vocations have a legitimate place in the good world (or play some role in bringing such a world about); but beyond that requirement, the choice is rightly ours.

Ego-Depletion and Moral Demands

Despite my consequentialism, I have a fairly lax view of our moral obligations. I don't think we're obliged to directly help the less fortunate, or anything like that. If someone lives a basically decent life, I'm not about to criticize them for failing to do more. (It'd be great if they did, but I don't think it's reasonable to expect or demand it.) If a moral theory implies otherwise, then I think that's a count against it. The "demandingness objection" is, I think, a fair one.

But it's important to be clear on precisely what the complaint is against. It is not that large costs are being imposed on the wealthy, or anything so object-centered as that. (There's nothing inviolate about the advantages held by the most fortunate, and nothing intrinsically problematic about redistributing these advantages.) Systematic redistribution is just fine. What's problematic, to my mind, is the very act of demanding action from another, and the psychological burden this imposes.

Humans have limited executive cognitive control or 'willpower' (cf. the psychological literature on ego-depletion). Decision-making and conscious action are draining. It's hard work. The immediate concerns of everyday life can be burdensome enough without adding all the world's ills to one's plate. Again, so long as one is leading a basically decent life, it just doesn't seem reasonable to condemn them or demand that they attend to more pressing concerns elsewhere. Most people have more than enough to attend to already!

It's worth noting the contingency of this concern. If we can make it cognitively easier for people to do good, then we could reasonably expect more from them. Habitual behaviours are less demanding, for example. Best of all would be to free them of the burden entirely: replace opt-in schemes with opt-out ones, automate charitable redistribution via taxation, etc. Don't demand, just take. (Liberty concerns may be mitigated by the opportunity to exercise one's agency in the deliberative-democratic processes behind this policy decision.)

My account of the demandingness objection thus leads to the rejection of Liam Murphy's constraint against imposing unrequired sacrifice. Brian Berkey introduces it:
The intuitive idea behind such a constraint is that if a person is not herself required to make a sacrifice, then it would be inappropriate for others to force her to make it.

This only makes sense on an object-centered view of demands. On my psyche-centered version, we see that it is less burdensome to dispose of another's material holdings appropriately than to demand that they do so themselves. The latter involves both material sacrifice and ego-depletion. If you can instead find your way to my wallet without bothering my mind, then that's just fine. (Unless you're acting within a context where this would qualify as 'theft', of course.)

Wednesday, September 12, 2007

Imperfectly Right

Maximizing consequentialists claim that the right action is that which maximizes the good (e.g. aggregate human welfare). So it's impermissible - morally wrong - to do anything less than what's perfectly optimal. Probably, then, everything that everyone has ever done was immoral. That seems bizarre. Railton puts it nicely in his 'How Thinking about Character and Utilitarianism Might Lead to Rethinking the Character of Utilitarianism' (Midwest Studies in Philosophy, 1988):
Now it seems inconsistent with anything like our ordinary understanding of 'morally right' to say that the boundary separating the right from the wrong is to be sharply drawn infinitesimally below the very best action possible. 'Wrong' does mark a kind of discontinuity in moral evaluation, but one associated with unacceptability. For this reason 'right', though not itself a matter of degree, covers actions that are entirely acceptable given reasonable expectations as well as those that are optimal. 'Wrong' comes into clear application only when we reach actions far enough below normal expectations to warrant real criticism or censure. (p.407)

So I've always preferred satisficing consequentialism: an action is right if it is good enough. This view has its own problems, though. Start with any action that is good enough. If we modify it so as to increase the net benefits, the result is better and so (a fortiori) also good enough. So suppose I modify my initial action by, in addition, gratuitously murdering one person but saving two others (by giving to Oxfam, say). Surely I have not then on balance acted rightly!
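To make the shape of the problem explicit, here is a minimal sketch with invented numbers (the 'good enough' threshold and the particular values are illustrative assumptions only, not anything the satisficing view itself fixes):

# A toy illustration of the satisficing worry, with made-up numbers.
# Assumption: acts are scored by net value, and an act counts as "right"
# (on the satisficing view) iff its net value meets a fixed threshold.

GOOD_ENOUGH = 5  # hypothetical threshold for "good enough"

def is_right(net_value):
    return net_value >= GOOD_ENOUGH

initial_act = 10   # an act that is clearly good enough
murder = -50       # gratuitously killing one person
giving = 60        # saving two others via charitable donation

combined_act = initial_act + murder + giving  # = 20, better than before

print(is_right(initial_act))   # True
print(is_right(combined_act))  # True: on a coarse-grained individuation the
                               # combined act clears the bar, murder and all

The fine-grained response below amounts to scoring the murder separately, where it plainly fails the test on its own.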

But can we really say that this was all one act? On a more fine-grained individuation, we can say that my initial act and the charitable giving were both right acts (good enough), whereas the murder plainly wasn't. So there are responses available to the satisficer. But I'm not too sure how to assess them.

In any case, I'm skeptical that obligations and the like are fundamental to moral theory. At the base, there are only relations of value: better and worse states of affairs. From there we can ask about what 'practical morality' (dispositions of character, etc.) would tend to best promote the good. Moral obligation is constructed at this level: it is that minimal baseline against which individuals are properly subject to blame and social censure if they fall short. In this sense I see deontic assessments ('right' and 'wrong') as akin to rights talk. An important part of our moral practice, perhaps, but not so deep in theory.

That's a rough outline, anyway. Which parts of this picture do you think stand most in need of further attention?

Thursday, September 06, 2007

Non-deceptive Lies

Brian Weatherson writes:
In the middle of a post on Larry Craig, Mark Schmidt interestingly says “[I]n my world, if something’s none of my business, it’s o.k. for you to lie about it, in order to protect your privacy.” That would allow a much broader sphere of permissible lying than many philosophers would (I think) allow. Still, it sounds like a pretty plausible principle to me.

I'm more sympathetic to this idea now than I was last year -- in part because I'm less inclined to assume that (even intentionally) false statements are properly considered "deceptive". Cf. Nagel: "The point of polite formulae and broad abstentions from expression is to leave a great range of potentially disruptive material unacknowledged and therefore out of play... this is not a form of deception because it is meant to be understood by everyone."

I think it is always (pro tanto) wrong to intentionally deceive others, i.e. to take as your goal the manipulation of their beliefs so as to introduce falsehood. Ideally, we would have social norms such that "lying" to protect privacy is recognized as standard practice, so nobody would actually be deceived. But even failing this, we may introduce a kind of 'doctrine of double effect': if you say something false in order to protect your privacy, which has as an undesired (albeit predictable) side-effect that it gives rise to false beliefs in the listener, then that's morally okay. One did not intend to be deceptive; the listener's beliefs are mere "collateral damage".

One worry here is that the deception is no mere "side effect", but the essential means by which one's goal of privacy protection is achieved. This will depend on the case. I mean to defend those lies that serve a purely deflective purpose (i.e. "where refusing to answer cannot preserve the secret"). In other cases, one might lie for remedial purposes, i.e. to actively undo some damage that has already been done. If your privacy has already been violated, you might wish to manipulate others' beliefs back towards the state they were in before the violation. I'm not sure whether I have any general views on the im/permissibility of this (as opposed to case-by-case judgments), but such cases would count as genuinely deceptive lies, and so be at least pro tanto wrong (even if possibly justified in light of other reasons).

Sunday, September 02, 2007

Efficiency and Value

I hate shopping. So I was delighted by my first ever visit to Wal-Mart yesterday. Very efficient, very cheap, and hopefully I won't need to go again anytime soon.

Basically, I think the aim of such shops should be to minimize the amount of time we have to waste in them (or working to pay for such material things). I'm skeptical that commerce can have any deep value, so efficiency is all that's left for it. Give me Wal-Mart, then get me away from the blighted cityscape.

The alternative view, I suppose, would be to try to rescue commerce from the dull glint of the bottom dollar. Close down the factories, imbue production with a human touch, buy custom-made goods direct from the craftsman, and all that. The local market is certainly far more attractive than the mall, so all else being equal I'd jump at the replacement. But what are the opportunity costs? Their inefficiency means more time and effort must be invested to produce these material goods -- time and effort that might be better spent on non-commercial pursuits.

So my question is this: should we "invest" in improving the commercial sphere of society, or simply try to minimize it?

Ideal Rulers

One often hears that the ideal government would be a "benevolent dictatorship" - the wise ruler would make the right decision every time, and implement it with a minimum of fuss. But if we are going to engage in such wishful thinking, why stop at one perfect person? Why not have an ideal democracy, where the populace would make the right decision every time, and implement it with a minimum of fuss? How is the perfect autocrat any more ideal than the perfectly united demos? Or how about a perfect anarchy, where everyone simply does what they ought, without need for legal coercion? So long as we're guaranteed our perfect outcomes in any case, why favour the most repulsive (dictatorial) process? (Is it because the wish is really to be the dictator oneself?)