Monday, June 20, 2016

Carroll on Zombies

Zombies are back in the news!  Via the DN Heap of Links, I see physicist Sean Carroll defending what appears to be a kind of analytical functionalism:
What do we mean when we say “I am experiencing the redness of red?” We mean something like this: There is a part of the universe I choose to call “me,” a collection of atoms interacting and evolving in certain ways. I attribute to “myself” a number of properties, some straightforwardly physical, and others inward and mental. There are certain processes that can transpire within the neurons and synapses of my brain, such that when they occur I say, “I am experiencing redness.” This is a useful thing to say, since it correlates in predictable ways with other features of the universe. For example, a person who knows I am having that experience might reliably infer the existence of red‐wavelength photons entering my eyes, and perhaps some object emitting or reflecting them. They could also ask me further questions such as “What shade of red are you seeing?” and expect a certain spectrum of sensible answers.
There may also be correlations with other inner mental states, such as “seeing red always makes me feel melancholy.” Because of the coherence and reliability of these correlations, I judge the concept of “seeing red” to be one that plays a useful role in my way of talking about the universe as described on human scales. Therefore the “experience of redness” is a real thing.

This is manifestly not what many of us mean by our qualia-talk.  Just speaking for myself: I am not trying to describe my behavioural dispositions or internal states that "correlate [...] with other features of the universe" in "useful" ways.  I have other concepts to do that work, concepts that feature in the behavioural sciences (e.g. psychology).  Those concepts transparently apply just as well to my imagined zombie twin as to myself.  We could ask the zombie 'further questions such as "What shade of red are you seeing?" and expect a certain spectrum of sensible answers.'  But this behaviouristic concept is not nearly as philosophically interesting as our first-personal concept of what it is like to see red -- a phenomenal concept that does not properly apply to my zombie twin.

So I worry that Carroll is simply changing the subject.  Sure, behavioural dispositions and internal cognitive states (of the sort that are transparently shared by zombies) are "real things".  Who would ever deny it?  But redefining our mentalistic vocabulary to talk about these (Dennettian patterns in) physical phenomena is no more philosophically productive than "proving" theism by redefining 'God' to mean love.
This diagnosis of the debate leads to a rather different dialogue from the one Carroll imagines:
P: What I’m suggesting is that the statement “I have a feeling ...” is part of an emergent way of talking about those signals appearing in your brain. There is one way of talking that speaks in a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences. And there is a map between these ways: When the neurons do a certain thing, the person feels a certain way. And that’s all there is.
M: Except that it’s manifestly not all there is! Because if it were, I wouldn’t have any conscious experiences at all. Atoms don’t have experiences. You can give a functional explanation of what’s going on, which will correctly account for how I actually behave, but such an explanation will always leave out the subjective aspect.
P: Why? I’m not “leaving out” the subjective aspect, I’m suggesting that all of this talk of our inner experiences is a useful way of bundling up the collective behavior of a complex collection of atoms. Individual atoms don’t have experiences, but macroscopic agglomerations of them might very well, without invoking any additional ingredients.
M: No, they won’t. No matter how many non‐feeling atoms you pile together, they will never start having experiences.
P: Yes, they will.
M: No, they won’t.
P: Yes, they will.

Carroll's imagined dualist, in claiming that "non-feeling atoms... will never start having experiences", doesn't sound much like a property dualist to me.  Most property dualists think the psycho-physical bridging laws require "agglomerations" of the right sort (i.e., as found in brains, but not in rocks or individual "non-feeling atoms") in order to give rise to phenomenal experiences. So those last lines don't ring true at all: non-feeling atoms will, thanks to the bridging laws, give rise to experiences if piled together in the right way (though in other possible worlds -- e.g. zombie worlds -- they might fail to do so, so it's true that the right arrangement of atoms doesn't strictly suffice for consciousness).

The larger point being missed here is that on P's view, there is nothing more to the idea of having experiences than "bundling up the collective behavior of a complex collection of atoms".  P thus cannot even make sense of there being a real question about which entities are conscious.  M should instead object as follows:
I agree that the behavioural dispositions you describe exist.  Indeed, we know full well that certain agglomerations of atoms exemplify such properties -- there is no "might" about it.  But insofar as consciousness remains an open question, over and above the question of whether certain behavioural dispositions are exhibited, I don't see that you have actually said anything about it at all.  You're just talking about behavioural dispositions.  I'm not interested in those. I want to know whether those dispositions strictly suffice for the further phenomenon of first-personal conscious experience. 
You say that "When the neurons do a certain thing, the person feels a certain way. And that’s all there is."  That sounds more like my view, insofar as you have listed two distinct phenomena here: (i) the neurons doing a certain thing, and (ii) the person feeling a certain way.  Perhaps you instead meant to say, "When the neurons do a certain thing, this physical event can be stipulatively redescribed as 'the person feeling a certain way'."  This would make clearer that you have implicitly eliminated any further phenomenon of first-personal conscious experience, and instead use mentalistic language to talk about something else.  (Though, admittedly, the careful reader should realize that you're really engaging in semantics rather than philosophy of mind when you go on to take as your explanatory target "all of this talk of our inner experiences", rather than directly addressing our inner experiences.)

P.S. On the challenge that "You don’t think you’re a zombie, but that’s just what a zombie would say," I take it that your reason for thinking yourself conscious is not that you say so to yourself, but that -- as Carroll initially put it -- "you have access to your own mental experiences."  A zombie doesn't have such access, since a zombie has no such experiences.  They walk the walk and talk the talk, but we know ourselves to do more than that.  [See also: Why do you think you're conscious?]

7 comments:

  1. A zombie sincerely thinks it is not a zombie, right? Thinking does not require consciousness, and sincerity is just a kind of lack of duplicity, which we simply have to assume of any zombie that is like me in all behavioral dispositions. I say this as a rejoinder to your response to the "that's exactly what a zombie would say" passage.

    Since I know that I would sincerely think I'm not a zombie whether I was one or not, my first-person experience makes it impossible for me to distinguish which of these I am. But if *I* can't tell that I have qualia from *first-person evidence*, then nobody can tell from any evidence. This is Wittgenstein's beetle argument, and it applies here.

    Replies
    1. I'd be reluctant to attribute any real mental states to a zombie at all. When I think, there's something that it's like to have the thought. There may be a faint mental image to go along with the idea, or an auditory 'image' as of the sound of the spoken words, or simply a rough sense of what it is that my thought is about. A zombie has none of that; just information processing in the brain, as per a biological computer. We should be no more inclined to attribute real thoughts or desires to it than to (e.g.) a chess-playing computer.

      But the argument of your second paragraph seems mistaken in any case. Just because I would (let's suppose) sincerely believe myself to be awake while dreaming, it doesn't follow that I cannot now tell that I am awake. After all, it might be that the reason why I'd have this belief while dreaming is that I simply wouldn't be thinking all that clearly, and that while (a) I cannot (concurrently) identify such fuzziness in my thoughts, nonetheless (b) when I am awake and thinking clearly, I can identify this.

      The epistemic asymmetry is all the more extreme in the zombie case. Although a zombie cannot tell that they lack experience altogether (supposing one can make sense of them being able to know anything, which again I take to be a mistaken supposition), that doesn't prevent a conscious being from knowing that they are conscious. Indeed, the suggestion that your first-person experience "makes it impossible" to tell whether you have first-person experiences at all is epistemically absurd on its face. We are intimately acquainted with our first-personal experiences -- arguably, we know nothing better.

      To deny this on the grounds that our zombie twins couldn't identify their deficiency makes as little sense as denying that we know we're alive on the grounds that if we were dead we wouldn't know that to be so.

    2. If you have information that you are not a zombie, that info is not encoded in any of the cells in your brain. But if it's not there, where else could it possibly be encoded?

    3. What do you mean by 'information'? How does it relate to standard concepts in epistemology like evidence or knowledge? If "having info that p" requires a physical state that differentiates p-worlds from not-p worlds, then info is not necessary for knowledge or justified belief (on pain of radical skepticism: you don't have info that you're not a brain in a vat).

      On my view, we have (conclusive) evidence that we are not zombies, provided by our first-personal experiences. Our beliefs that we are truly conscious are of course not caused by these experiences, but rather by the underlying brain states (which happen to reliably give rise to consciousness, though it's logically possible that they fail to do so). Such beliefs are highly reliable nonetheless, since if we were not conscious we would not have real beliefs at all. (Compare Descartes' "I think, therefore I am".)

      You may worry that our brains and zombie twins are condemned to being "irrational" if we reason in this way. I discuss such concerns in this old post.

    4. We should be VERY careful in thinking we are justified in believing things for which we have no information as evidence. Maybe there are cases where we just can't practically do otherwise, but this doesn't seem to me such a case.

    5. It's par for the course for philosophical questions, which tend not to be empirical in nature. (You have no information as evidence for the normative thesis you just asserted, for example.)

    6. Hello "unknown", how would a non-conscious thought differ from a mere physical process?

