It's tempting to interpret the Equal Weight View (EWV) as offering positive normative advice: 'when you disagree with someone you take to be an epistemic peer, you should split your credence equally between your two conclusions.' But this would lead to implausibly easy bootstrapping. (Two creationists don't become reasonable after splitting the difference with each other. It's just not true that what they (epistemically) ought to do is to give equal credence to both their unreasonable views. Rather, they ought to revise/reject their prior beliefs altogether. Cf. wide-scope oughts.) To avoid this problem, Adam Elga restates EWV merely as a constraint on perfect rationality. That is: if you fail to split your credence in this way, then you're making some rational error. But even if you satisfy the EWV constraint, you might be making some other, more egregious, error. So it doesn't follow that, all things considered, you ought to follow EWV.
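For concreteness, here's the arithmetic behind the 'positive advice' reading, in a toy sketch (my own illustration; the numbers are made up, and this crude averaging rule shouldn't be attributed to Elga):

```python
def equal_weight(my_credence: float, peer_credence: float) -> float:
    """Naive 'positive advice' reading of EWV: just average the two credences."""
    return (my_credence + peer_credence) / 2

# The bootstrapping worry: two creationists, confident at 0.9 and 0.8,
# 'split the difference' and land on 0.85 -- no more reasonable than before.
print(equal_weight(0.9, 0.8))  # 0.85
```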
Or consider Roger White's argument against imprecise credence. It shows that we're "irrational" (i.e. imperfectly rational) to have other than perfectly precise credence in any proposition. But given our cognitive limitations, I expect we'd do even worse if we tried to give a precise credence to every proposition under the sun.
The fact is, we're not ideal agents. We have no hope whatsoever of being perfectly rational. And this gives rise to the problem of second best: attempts to conform to norms of ideal rationality may end up leading us even further away from that goal. What we really need are norms of non-ideal ("second best") rationality: norms that recognize that we will make rational errors, and so incorporate strategies for recovering from such errors. In other words, we need to know what to do if we find ourselves in an irrational position to start with -- how can we revise our beliefs so as to make them less irrational? Bayesian updating and other rationality-preserving rules are no help at all when your initial belief state has no rationality to preserve.
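To illustrate that last point, here's a minimal sketch (my own, with made-up numbers) of how conditionalization merely propagates whatever the prior hands it:

```python
def conditionalize(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Identical evidence, strongly favouring H, fed through the same flawless rule:
print(conditionalize(0.5, 0.9, 0.1))  # 0.9 -- a sensible prior updates sensibly
print(conditionalize(0.0, 0.9, 0.1))  # 0.0 -- a dogmatic prior of zero never recovers
# The updating was rationality-preserving in both cases; only the starting point differed.
```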
[I'm sure this isn't an original observation. I know many moral and political philosophers are interested in non-ideal theory. I'm just less familiar with epistemology. Can any readers point me in the direction of epistemologists who work on non-ideal theory?]
Thursday, April 10, 2008
How to start a philosophy blog
I've already had a couple of classmates ask about starting their own blog, which is an encouraging sign. (More philosophy blogs = more interesting conversations, more helpful summaries of interesting books or lectures that I didn't have time to read/attend myself, etc. Every grad student should have one!) In hopes of encouraging yet more people to join in, I thought I'd offer this 'Getting Started' guide.
(Step 1) Create a blog. Go to www.blogger.com and follow their instructions. It really couldn't be easier: select a pre-made template and you'll be up and running within five minutes.
(Step 2) Start writing posts. If you're unsure where to start, see whether any of the following three post types appeals to you:
I think there are three kinds of philosophical activity to which blogs are especially well suited.
- Exploring half-baked ideas, to get some early feedback and test their potential for further development.
- Study and teaching: students can attempt to summarize an issue, and their readers may respond to help correct any misunderstandings. (A good summary may also benefit the readers' knowledge, of course.)
- Technical contributions: a tightly focused post responding to other work, perhaps critiquing a particular step in an argument, or offering an alleged counterexample.
(I must admit I'd especially appreciate seeing more posts in the second category, e.g. distilling out and sharing the most valuable new insights you've come across in classes or readings, etc.)
(Step 3) Enhance your blog.
- You may wish to add a hit counter so you can see how many visitors you're getting, and where they're coming from. (You can also do a Technorati search for your blog URL, to see if anyone has linked to it.)
- Add a recent comments widget to your sidebar, if you wish. (I recently removed mine due to technical problems. But may reinstate it soon, since they're handy things to have.)
- Sign in to draft.blogger.com and navigate to your blog's 'Layout' page. Here you can add new gadgets to your sidebar, e.g. polls, subscription links, and blog lists. I especially recommend the latter two.
(Step 4) Join the community! So, you have a sparkling new blog, with groundbreaking and insightful posts, but nobody else seems to notice. That's not the end of the world: there's plenty of benefit in simply writing your thoughts down. But there's plenty more benefit to be gained by attracting an intelligent audience with whom to engage in discussion. There are several things you can do here.
The simplest is to submit posts to the Philosophers' Carnival, or even sign up to host a future edition yourself.
But it's probably more effective to interact with other bloggers that you like. (Hopefully there are some!) At the very least, add a 'blog list' to your sidebar, as mentioned above. Most bloggers regularly check who's linking to them, so this is an easy way to attract their attention (at least for a moment) and gratitude. That's a very minimal form of interaction, of course. Better: leave (intelligent) comments on their blog. They'll be more likely to reciprocate. Participate in silly memes and other forms of community-building -- any excuse to link, however trivial, will bring you closer together. Best of all: write a substantive post responding to one of theirs (and link to it, of course). You'll find yourself engaged in a fruitful back-and-forth discussion in no time.
[Any other tips? Add them in the comments below...]
Wednesday, April 09, 2008
Useful Meme
Now for something completely different...
Instructions
1. Copy these instructions.
2. Link to the original 'useful meme' post.
3. Share 5+ things that may be of benefit to your readers -- useful facts, advice, product recommendations, etc.
(If others follow these instructions, it should be easy to track responses simply by searching for links to this post.)
I've a whole bunch of recommendations, so instead of just five items I'll split them into five categories.
(1) Amazon food. I hate shopping (and spending time cooking), but I'm also not a huge fan of starving, so this seems like a decent compromise.
- Clif Bars are my favourite snack, especially the 'cool mint chocolate'. I don't know how they manage to make something so nutritious taste so good. Seriously. (The variety pack flavours are also good. But avoid apricot and blueberry crisp.) Has anyone tried the Peanut Toffee Buzz or Iced Gingerbread? I'd be curious to hear what they're like.
- Clif Nectar bars are also good, especially the dark choc raspberry flavour.
- Healthy Choice Country Vegetable Soup is the best canned soup I've tried. (Much better than their 'Chicken Noodle' one.)
(2) Favourite Fiction (philosophy books are discussed here.)
- The Truth Machine, for fun and thought-provoking tech utopianism.
- The Sparrow explores liberal religion, cultural misunderstandings, and much more.
- Best fantasy world: Stephen Donaldson's Mordant's Need (2-book series). This is also runner-up in the 'best plot twists' category, second only to Donaldson's Gap saga.
(3) Classic (freeware) Video Games
- Liquid War is the greatest multiplayer game ever invented. (Yes, even better than Liero.)
- Dungeon Crawl is the ultimate classic RPG. (I've linked to a graphics version, because gameplay trumps all only once you've attained a minimal level of aesthetic acceptability, and ASCII characters violate this minimal requirement!)
- The broader category of 'greatest games' is discussed here.
(4) Facebook philanthropy
- I'm a fan of the Hunger Site app. It's much easier to remember to click each day when there's a counter right there in your Facebook profile. For no trouble at all, you get to transfer money from sponsoring advertisers to the third world, worth 1.1 cups of food each day.
- It's also fun and easy to participate in Peter Unger's UNICEF facebook chain (just join the group here, donate $10 or more, and invite your friends to do likewise). Note that the downstream effects of your participation may be exponentially greater than your personal donation considered in isolation. So it's a great opportunity.
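To put a toy number on that 'exponential' claim, here's a back-of-the-envelope model (entirely my own made-up figures, not anything from Unger's group):

```python
# Suppose each participant donates $10 and recruits two friends who do likewise.
def chain_total(donation: float = 10.0, branching: int = 2, generations: int = 5) -> float:
    """Total donations in a recruitment tree: a geometric series."""
    return donation * sum(branching ** k for k in range(generations + 1))

print(chain_total())  # 630.0 -- $630 traceable downstream of a single $10 donation
```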
(5) Music on the web
- Again, I must say Don Skoog's 'Attendance to Ritual' is the greatest marimba piece ever. (That linked performance by my little brother ain't half bad either, though I may be biased here!)
- Incidentally, this YouTube to mp3 converter is handy.
- Last.fm is a neat way to discover new music.
- Project Playlist lets you share playable lists of music, as I've mentioned before. I'm really surprised that bloggers haven't made greater use of this yet (e.g. so that readers can actually listen to their 'Friday random ten' song lists).
Okay, that's it from me. Feel free to write up your own "useful" post. Or -- if you lack a blog of your own -- share your recommendations, etc., in the comments section below.
P.S. I'm tempted to get a Kindle e-book reader, to read online papers (PDFs etc.) more comfortably. Have any philosophers tried it? There was some encouraging discussion at Crooked Timber recently...
Update: I should tag a few people to help get this thing started. How about: Brandon, Chris, SteveG, Hallq, and you, whoever you are.
Tuesday, April 08, 2008
Standard Reasons, Adaptive Reasons
[I wrote the following in an exam for Michael Smith's class last semester. It explains some helpful distinctions that I want to be able to refer back to in future posts...]
In 'Reasons: Practical and Adaptive' Raz makes distinctions between, on the one hand, practical and adaptive reasons, and on the other, standard and non-standard reasons. Explain these distinctions using examples.
Imagine a biology student whose parents threaten to disown her should she ever come to believe in evolution. This situation exposes her to what look to be two very different kinds of reasons regarding her belief. From her biology class, the student receives epistemic reasons, i.e. reasons which speak to the truth of the thing believed. From her parents, she receives practical reasons, i.e. reasons which speak to the (dis)value of holding the belief in question. There are a couple of noteworthy differences revealed by this scenario, which form the bases of Raz’s two distinctions.
First, consider how reflecting on the various reasons will affect the student’s beliefs. Faced with compelling evidence that evolution has in fact occurred, she may - as a rational agent - come to believe it. That is, her rational faculties may respond to her apprehension of epistemic reasons for a belief by directly producing the recommended belief. This marks epistemic reasons as instances of what Raz calls standard reasons, or reasons that “we can follow directly”. Practical reasons for belief, by contrast, are non-standard in that they cannot be directly followed. Much as the student might wish to please her parents, no amount of reflection on their threat will suffice by itself to change her scientific beliefs.
What if people could respond directly to practical reasons for belief by changing their belief? It seems like this should be possible. At least, we can imagine a scenario in which reflecting on the practical benefits of holding a belief has a neurological effect similar to what actually happens when we reflect on evidence suggesting the truth of a belief. One might argue that the resulting neurological state, being sensitive to non-epistemic reasons, no longer qualifies as ‘belief’. But this seems implausible so long as enough of the functional role of belief remains intact: the person still sincerely asserts the proposition when asked what they believe, draws inferences from it, and behaves in ways that could be expected to fulfill their desires if the proposition were true, etc. So I think we must allow that this scenario is properly described as involving belief. But does it involve following a reason? This seems more questionable. Raz suggests, of a similar case, that the agent merely deceives themselves into believing that they followed the reason. They have not really done so, for that would be impossible -- it is not the kind of reason that can genuinely be followed in such a fashion. Of course, to assert this without argument risks begging the question, as Raz well recognizes. What we need is some independent basis for determining which reasons can be followed and hence qualify as standard reasons.
One thing we can tell right away is that this is not simply an empirical matter, to be ‘read off’ the neuro-psychological data. Not all forms of influence qualify as rational influence, and information may make its way into our heads without doing so under the guise of a reason. The other lesson from the above scenario is that, as Raz puts it, “whether one follows a reason is not purely a matter of how the agent understands his situation.” Combining these: the agent may cite a practical reason why he holds his belief, and it may indeed have played a central causal role in his neuro-psychology, but this still does not count as following the practical reason, in the normative sense we’re interested in here.
But why not? Raz appeals to “the nature of that reason” to settle the matter. This works most clearly in the case of reasons that are such that it would be self-defeating to try to follow them. For example, I may offer you $100 to hop on one leg for non-pecuniary motives. The prize-money is a reason to hop, but not one you could follow directly without thereby disqualifying yourself. The self-effacing nature of the reason is a logical fact which explains why it cannot be successfully followed, and thus why it is non-standard. But the previous case of practical reasons for belief is less clear. Raz claims that “the fact that non-epistemic reasons cannot serve to warrant belief shows that they cannot be followed.” It is not entirely transparent why this should be so. But I think it is most plausibly understood in reference to the normative character of reason-following, where this is taken to essentially involve a response on the part of our rational faculties (rather than just any old psychological process). Standard reasons are thus understood to be those that rationally justify or warrant the attitude they recommend. Or, if we are willing to take rationality itself as a primitive: standard reasons are those that our rational capacities respond to (insofar as they are functioning properly). Of course, even non-standard reasons may be rationally responded to in a different way: they warrant acting so as to bring about their target attitude, for example. This confirms Raz’s point that non-standard reasons for one thing are standard reasons for something else.
(Aside: there may be some exceptions to this claim. Suppose that God will reward those who are saintly, but to qualify as a saint you must never act from self-interest. This sounds a lot like the other non-standard reasons we’ve discussed, so it would seem ad hoc to deny that it really is a reason. But it cannot be redescribed as a standard reason for anything. However indirectly you bring about your sainthood, if you do it for the reason of the heavenly reward, then you’re no saint after all. So this looks like a non-standard reason without any corresponding standard reason. To hold onto his view that “the fact that they can be followed is what makes reasons into reasons”, Raz had best deny that “non-standard reasons” are really reasons at all. There are no practical reasons for belief. There are just standard reasons for acting to bring about a belief.)
So much for Raz’s first distinction. What of the second? Harking back to our original case of the biology student, notice that only her practical reasons derived from the value of holding the belief. Epistemic reasons instead indicate that the belief would be warranted or appropriate to the way things are, but this does not depend on whether believing the truth would be in any way beneficial. This renders epistemic reasons a subset of what Raz calls adaptive reasons. The adaptive/practical distinction arises whenever we have states whose internal norms of correctness may diverge from their practical value. Emotions are another obvious example. Given that fear is meant to be a response to danger, evidence that we are in danger provides an adaptive reason for this emotion; fear is warranted in such circumstances, regardless of whether it would be beneficial (a question which instead concerns the practical reasons for and against it).
Raz offers what we may take to be three tests for the dependence of reasons on value: (i) the possibility of akrasia, (ii) shaping the world to fit the attitude, and (iii) presumptive sufficiency. Here I will discuss only the second, as it is most vivid. If there’s value in the state of affairs of your having warranted attitudes, then this should be so whether this state of affairs came about as a result of shifting your attitudes to match the world, or by changing the world to match your attitudes. But this is absurd: if you feel fear, for example, there is nothing at all to be said for manipulating your situation to match your emotion by gratuitously exposing yourself to danger. Danger is a reason for fear, but fear is not a reason for (bringing about) danger. This asymmetry demonstrates that the reasons we have for feeling fear when in danger are adaptive reasons -- they do not assume that there is necessarily value in the combination of fear and danger.
Now that I have introduced Raz’s two distinctions, one might wonder about the degree to which they overlap. From my original example, we saw that epistemic reasons are standard and adaptive, whereas the non-epistemic reasons for belief are non-standard and practical. But not all standard reasons are adaptive reasons: sometimes warrant derives from value, as we find for example in reasons for action. If leaping into the air would produce great benefits, then I may follow this reason and rationally decide to leap. So that is an example of a standard practical reason. There may also be non-standard reasons for action, as we saw earlier in the case of prize money given to those who hop from non-pecuniary motives. (Note that this would also be a standard reason to bring it about that you hop, say by stabbing yourself in the foot. The latter is a reason you can follow without self-defeat.)
There is at least some overlap between the two distinctions, however, for there is no possibility of a non-standard adaptive reason. Non-standard reasons for an attitude are really just standard reasons for bringing about the attitude, and this places them firmly in the practical domain. We have seen that the other combinations are all possible, however:
(i) standard adaptive reasons, e.g. scientific evidence as a reason for belief, or evidence of danger as a reason for fear;
(ii) standard practical reasons, e.g. ordinary monetary rewards as a reason for action;
(iii) non-standard practical reasons, e.g. self-effacing rewards as a reason for action, or threat of parental disownment as a reason for belief.
How To Imagine Zombies
Some of the recent discussion on other blogs has assumed a sloppy version of the zombie argument, whereby we are to imagine a world just like ours, but with consciousness subtracted. Hence Eliezer complains:
The epiphenomenalist imagines eliminating an effectless phenomenon, and that separately, a distinct phenomenon makes Chalmers go on writing philosophy papers. A substance dualist, or reductionist, imagines eliminating the very phenomenon that causes Chalmers to write philosophy papers.
Right, so that's a bad way to present the argument. The better way to imagine the zombie world is not by subtraction, but by building it up. Give a complete microphysical description of the world, and specify "that's all". A Laplacean demon can infer that the world contains tables, brain states, and a book entitled 'The Conscious Mind'. That is, the world contains particles arranged table-wise, brain-wise, book-wise, etc.
The Laplacean demon knows all that there is to know about this world. Does he know that it contains phenomenal consciousness, that there is something it is like to be the particles-arranged-humanwise in this world? Seems not. There's nothing in the microphysics that entails the presence of such subjectivity. So we've successfully imagined the zombie world. Not by subtracting, but by building up from the physics alone and noting that more needs to be added in order to obtain our (consciousness-containing) world.
Richard Brown makes a similar mistake in his attempted parody:
I am conceiving of a world that is just like this one in all non-physical respects except that it lacks consciousness. Therefore dualism is false.
The zombie argument begins by providing an undisputed specification of the "physical respects" of the world. It then asks whether phenomenal consciousness logically follows from the specification. Our answer is 'no'. That's why physicalism is false.
A proper analogy, then, would require building up the "non-physical zombie" world from an undisputed non-physical specification, just as we earlier built up a physical zombie world from an undisputed physical specification. But of course RB cannot do this. So that's why the zombie argument cannot be turned against dualism in this way.
Saturday, April 05, 2008
Zombie Rationality
'Zombie' writes:
On Chalmers' view, wherein the 'psychophysical laws' are contingent, it seems that across possible worlds most brains like ours will be zombies or at least have 'associated' qualia that don't 'match' the information processing in the brain. So sophisticated brains proceeding according to ordinary standards of rationality should zombie-conclude that they probably are not conscious (as they don't have access to any non-material qualia), despite their zombie-perceptions of being conscious (shared by both zombie and non-zombie brains). Yet Chalmers thinks that in our actual world the psychophysical laws lead to conscious experience mirroring the information processing in the brain. So, upon hearing the argument, shouldn't Chalmers' brain zombie-conclude that it is probably a zombie brain, and 'phenomenal Chalmers' consciously think the same?
No. Conclusions are drawn by people, not brains. Standards of rationality likewise apply to agents and their beliefs, not to their physical components (brains and neural states) in isolation.
On my view, beliefs are partly constituted by phenomenal properties -- that's what gives them their representational content. Zombies don't have beliefs like we do. They exhibit all the same behaviour, and make all the same noises, but there's no meaning in it. It's not really about anything.
One might define a 'z-belief' as the functional (physical, dispositional) component of a belief. It's not so clear how to assign pseudo-contents to these z-beliefs, but I guess a reductionist may offer a stipulation of some kind: S has a z-belief that P iff S has such-and-such physical dispositions [e.g. 'S behaves as though P were true', or 'S has a brain state which covaries with evidence of P', or some such. See my essay 'What Behaviour is About' for a more sophisticated empirical approach to attributing "content".]
Presumably we're to suppose that whenever I really have the belief that P, my brain has the z-belief that P. But I doubt whether any such reduction can be given that perfectly mirrors my actual belief contents. (If epiphenomenalism is true, and qualia are partly determinative of belief content, then the physical facts underdetermine what it is that I believe. My inverted-spectrum duplicate has the same brain -- hence z-beliefs -- as me, but our phenomenal beliefs are very different. My 'red' is his 'blue', or whatever.)
There's a more fundamental problem, even if we grant the reductionist his impossibly fine-grained z-content. Let's grant - per impossibile - that my brain (and zombie twin) "z-believes that P" iff I believe that P. However, my brain (understood as a purely physical system, i.e. excluding its phenomenal properties) is in possession of only a subset of my total evidence. Qualia - the contents of experience - are among my evidence if anything is. But these phenomenal properties are not causally accessible to my neural processes. So the conclusion 'I am conscious' follows from my evidence, but not from the "information" available to my brain. One can be a rational person, or have a "rational" brain, but not both.
Now, it's pretty obvious that being a rational person is better than having a "rational brain" (insofar as the latter attribution is even meaningful). Brains are parts of people, and like any body part we really only care about it for how it can serve the whole person. If quick feet didn't make for a quick person, we wouldn't much care for the former. Similarly, a rationally desirable brain is one that makes for a rational person, with justified beliefs.
One could imagine a brain that is instead built in such a way that it tends to produce "z-justified" z-beliefs. What this means is that it tends to end up in physical states such that a conscious person in that physical state would have beliefs in line with the physically accessible subset of their evidence. When put like that, it becomes clearer that what we've really described here is a defective brain. Let's call it "z-rational", and reserve the term 'rational' for brains that give rise to rational people -- people whose beliefs are in line with their total evidence.
Here are two implications:
(1) A z-rational brain can be expected to have more true z-beliefs (across all possible worlds).
(2) A rational brain can be expected to yield more true beliefs.
Fortunately, my brain is rational rather than z-rational. Hopefully yours is too (otherwise, you're a defective agent). One might try to argue that there's something "wrong" with a brain that isn't z-rational, but I don't think that'll work. For one thing, since you're really just describing a physical state it's not clear that brains or z-beliefs are even open to this sort of normative assessment. Norms apply primarily to people, and to our organs only derivatively. What a well-functioning agent really needs is a brain that will make them rational, not z-rational. As suggested above, a z-rational brain is defective from the standpoint of contributing to the functioning of the whole person (which is the relevant standpoint against which to assess brains). Further, when you stop to think about what it really means to have 'z-rational z-beliefs', you see that there's not really anything significant (worth caring about) there.
Friday, April 04, 2008
Procreative Duties
Bryan Caplan:
In 1996, the GSS asked:
If the husband in a family wants children, but the wife decides that she does not want any children, is it all right for the wife to refuse to have children?
and
If the wife in a family wants children, but the husband decides that he does not want any children, is it all right for the husband to refuse to have children?
Survey says: 82% affirmed the wife's right to refuse, but only 61% affirmed the same right for husbands. Other than a simple men's rights story, anyone got an explanation?
In traditional households, the mother shoulders most of the costs of childrearing (not to mention childbearing!). If we can assume this background context, then we have 82% of people affirming one's right to back out of a massive burden, compared to only 61% affirming one's right to back out of a not-quite-so-massive burden.
Further: sexist gender norms mean that women are more likely to be stigmatized for being childless. If we can assume this as background, it means that not only is the alleged "duty" less burdensome for the husband, his reneging would impose greater costs on his spouse (compared to the costs to him if the wife were to back out).
These assumptions won't hold in every particular case, of course. But they seem plausible enough in general, so the asymmetry in the survey results seems perfectly sensible -- certainly not evidence of anti-male bias.
Wednesday, April 02, 2008
Subtracting Self-Reference
Brandon recently linked his old post on the Propositional Depth Response to the Liar Paradox:
Every meaningful statement must be assumed to have a determinate propositional depth...
L: [L] is false.
This has no determinate propositional depth. If we assume that L has a propositional depth of n, we find that, since L embeds itself, it must have a propositional depth of n+1.
I like this sort of account. (Alex recently suggested that one might arbitrarily choose whether or not to believe the proposition P: I believe [P]. My immediate response was to doubt that there really is any proposition here. The 'depth' account can explain why.)
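To make the depth regress vivid, here's a toy sketch (entirely my own construction; the Sentence type and depth function are illustrative stand-ins, not Brandon's formalism):

```python
from dataclasses import dataclass, field

@dataclass
class Sentence:
    text: str
    embedded: list = field(default_factory=list)  # the sentences this one is about

def depth(s: Sentence) -> int:
    """Depth 0 for atomic sentences; otherwise 1 + the depth of what's embedded."""
    if not s.embedded:
        return 0
    return 1 + max(depth(e) for e in s.embedded)

grass = Sentence("grass is green")
claim = Sentence('"grass is green" is true', embedded=[grass])
print(depth(grass))  # 0
print(depth(claim))  # 1

liar = Sentence("this sentence is false")
liar.embedded.append(liar)  # [L] embeds itself
try:
    depth(liar)
except RecursionError:
    print("no determinate depth: any assumed depth n forces depth n+1")
```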
A standard objection to this sort of view is that there would seem to be some true self-referential propositions. In an old guest post, Rad Geek suggested:
EM: [EM] is true or [EM] is false.
Now, it's not entirely clear to me that the above constitutes a wholly meaningful claim. (What is this '[EM]' it speaks of? I get stuck in an infinite loop if I try to fill it out.) But perhaps we can apply a lesson from Yablo and say that it is partly true. (I doubt his truthmaker account can actually accommodate this, but never mind that for now.)
Subtract out the self-reference, and what remains ("__ is true or __ is false") is true about logical forms, i.e. insofar as it claims that the law of excluded middle holds. The particular application is meaningless, but we can abstract away from that part of what's said.
Another puzzle case:
M: [M] is true and grass is green.
We certainly don't want to say that this is wholly meaningless. It's partly true: grass is green. Again, it seems that the thing to do is simply to subtract away the meaningless self-referential component.
P.S. Towards the end of his post, Brandon worries that the propositional depth solution commits us to the view that "whether the sentence has the same meaning, or any meaning at all, depends on purely contingent facts about the world that we may not be aware of." We should embrace this result, though, as Michael Sprague once pointed out to me:
It's worth noting that all liar sentences are dependent on context. For example, an instance of the liar sentence next to an arrow pointing to another sentence (like, say, "All ravens are orange") may be true. Context determines the sentence to which "this sentence" refers; the truth or falsity of that sentence then determines the truth of the liar.
Overcoming Scientism
I take 'Scientism' to be the view that empirical inquiry is the only form of rational inquiry, perhaps coupled with the even stronger claim that only "scientific"/testable claims are meaningful, or candidates for truth or falsity. In other words, it is to dismiss the entire field of philosophy (and arguably logic and mathematics too, though this is less often acknowledged). Indeed, a primary symptom of scientism is that sufferers are incapable of distinguishing philosophical arguments from religious assertions. They claim not to comprehend any non-empirical claim; it is mere 'gobbledygook' to them.
It's worth noting right away that Scientism is self-defeating, for it is not itself an empirically verifiable thesis. Insofar as its proponents have any reasons at all for advancing the view, they are engaging in (bad) philosophical, not scientific, reasoning. This is the familiar point that one cannot assess philosophy (even negatively) without thereby engaging in philosophy oneself. [For a more positive argument in support of the a priori, see my post on conditionalizing out all empirical evidence.]
This bias against philosophy is unfortunate for the obvious reason that there are a lot of interesting and important philosophical truths, which the scientismist would never think to look for. (My original 'scientism' post quoted some ignorant dismissals of Nick Bostrom's very interesting 'Simulation argument'. Not that I think his conclusion is true; but it is eye-opening just to consider.) Moreover, as I once wrote:
All your “common sense” beliefs rest on philosophical assumptions. Most people prefer not to examine them, but that doesn’t mean they aren’t there. It just means that everything you think and do could be completely misguided and you wouldn’t even realize it.
The scientismist will no doubt have many false philosophical beliefs in addition to their scientism. (We all do.) But if they are unaware of the rational tools that allow us to identify and correct such errors, then they will be stuck with them -- not a situation that any dedicated truth-seeker would consider desirable.
I think it's especially unfortunate that most folk seem unaware that reasoned inquiry into normative questions -- e.g. ethics and political philosophy -- is possible. This is at least part of the explanation why public discourse on these matters is so impoverished and sub-rational. So I think it's very important for more people to appreciate that we can go beyond mere instrumental rationality and also assess one's ultimate ends in terms of rational coherence.
Scientism also leads to more mundane mistakes. For example, in a recent 'Overcoming Bias' thread, one commenter defended the common-sense view that different observers experience the same colour qualia (rather than my 'red' being your 'yellow'), on the grounds that the alternative claim is "purely metaphysical with no implications for reality". But that can't be the right reason, because the same could be said of the eminently reasonable -- and presumably true -- view that he was defending. Whether we experience the same qualia or different, either answer is "purely metaphysical" with no scientific implications. So the right justification for the former view must lie elsewhere (e.g. in philosophical principles of parsimony that count against drawing unmotivated ad hoc distinctions).
Fortunately, this bias is easily overcome. Accept no substitutes: Think!
[See also: The Problem with Non-Philosophers.]
It's worth noting right away that Scientism is self-defeating, for it is not itself an empirically verifiable thesis. Insofar as its proponents have any reasons at all for advancing the view, they are engaging in (bad) philosophical, not scientific, reasoning. This is the familiar point that one cannot assess philosophy (even negatively) without thereby engaging in philosophy oneself. [For a more positive argument in support of the a priori, see my post on conditionalizing out all empirical evidence.]
This bias against philosophy is unfortunate for the obvious reason that there are a lot of interesting and important philosophical truths, which the scientismist would never think to look for. (My original 'scientism' post quoted some ignorant dismissals of Nick Bostrom's very interesting 'Simulation argument'. Not that I think his conclusion is true; but it is eye-opening just to consider.) Moreover, as I once wrote:
All your “common sense” beliefs rest on philosophical assumptions. Most people prefer not to examine them, but that doesn’t mean they aren’t there. It just means that everything you think and do could be completely misguided and you wouldn’t even realize it.
The scientismist will no doubt have many false philosophical beliefs in addition to their scientism. (We all do.) But if they are unaware of the rational tools that allow us to identify and correct such errors, then they will be stuck with them -- not a situation that any dedicated truth-seeker would consider desirable.
I think it's especially unfortunate that most folk seem unaware that reasoned inquiry into normative questions -- e.g. ethics and political philosophy -- is possible. This is at least part of the explanation why public discourse on these matters is so impoverished and sub-rational. So I think it's very important for more people to appreciate that we can go beyond mere instrumental rationality and also assess our ultimate ends in terms of their rational coherence.
Scientism also leads to more mundane mistakes. For example, in a recent 'Overcoming Bias' thread, one commenter defended the common-sense view that different observers experience the same colour qualia (rather than my 'red' being your 'yellow'), on the grounds that the alternative claim is "purely metaphysical with no implications for reality". But that can't be the right reason, because the same could be said of the eminently reasonable -- and presumably true -- view that he was defending. Whether we experience the same qualia or different ones, either answer is "purely metaphysical" with no scientific implications. So the right justification for the former view must lie elsewhere (e.g. in philosophical principles of parsimony that count against drawing unmotivated ad hoc distinctions).
Fortunately, this bias is easily overcome. Accept no substitutes: Think!
[See also: The Problem with Non-Philosophers.]
Tuesday, April 01, 2008
Thought Substitutes
I've recently lamented how logical formalisms and inductive meta-arguments risk being misused as substitutes for careful thought. It's also a major worry I have about (some) "experimental philosophy". Consider, for example, the following:
Does common sense morality assume objectivity? According to a recent study by Goodwin and Darley, most folk actually don't believe that their moral judgments are objectively true.
It's not clear how this fact is any response to the question. The question, recall, is not whether most folk believe in moral objectivism (which is all the survey can tell us). That's quite irrelevant. The real question is whether our moral practices rationally commit us to objectivism, and that is not a purely empirical question. It's a normative question, so there is simply no way to answer it without actually doing philosophy.
I don't mean to bash all attempts at empirically informed philosophy. If we are concerned with analyzing actual moral practices, it's an empirical question just what those are, i.e. what moralizing behaviours people in our society engage in. It's entirely appropriate to use empirical data as a starting point for philosophical inquiry, especially if that data is precisely what we're wanting to analyze. My point is simply that empirical work cannot substitute for philosophical analysis.
Further, it will rarely be worthwhile to ask folk for their theoretical opinions. Surveys like the above merely tell us what pop-philosophical theories are most prevalent in our public culture at present. Many people will profess a belief in relativism, for example, even if further probing would eventually reveal that they don't actually accept the implications of this view. As R.M. Hare once wrote:
If we want to find out what ordinary people mean, it is seldom safe just to ask them. They will come out with a variety of answers, few of which, perhaps, will withstand a philosophical scrutiny or elenchus, conducted in the light of the ordinary people's own linguistic behaviour (for example what they treat as self-contradictory).
So, next time you come across a study reporting the philosophical beliefs of non-philosophers, just remind yourself of the classic Onion study: