Monday, December 01, 2008

The Homunculus in the Chinese Room

Searle's Chinese Room argument:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese....

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.

I think this is a very misleading thought experiment. It's true that a homunculus implementing a program won't necessarily understand what's being implemented, but who ever would have thought otherwise? We may not actually have homunculi running around in our heads, passing electrical charges along from neuron to neuron; but if we did, they wouldn't share our understanding either. The mental states of the imagined homunculi don't limit the mental states that their efforts can give rise to (i.e. in us), and so it is with the Chinese Room. The homunculus' lack of understanding has no implications for the real question of whether there is understanding created by the Chinese Room. So by asking us to focus on the homunculus, Searle introduces a red herring -- and worse, an invitation to 'level confusion' and category mistakes. The SEP has a quote from Margaret Boden that perfectly captures my objection here (see further 4.1.1 The Virtual Mind Reply):
Computational psychology does not credit the brain with seeing bean-sprouts or understanding English: intentional states such as these are properties of people, not of brains... Searle's description of [the symbol-manipulating homunculus] involves a category-mistake comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence.

Technically, Searle's above conclusion is true: computers, as implementers of programs, aren't in the category of things of which 'understanding' may be predicated. But neither are brains. So this tells us absolutely nothing of interest. The real question is whether computational processes give rise to conscious and/or intentional mental states, as neuronal processes do. Seen in this light, the Chinese Room seems an entirely unhelpful distraction.
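
To lay the structure bare: here is a minimal sketch, in Python, of the set-up Searle describes (the 'rulebook' below is a made-up toy, of course, nothing remotely like a program that could pass the Turing Test). The loop that applies the rules has no access to the meanings of the symbols it shuffles, which is all the homunculus observation amounts to. Whether the system as a whole could understand anything is precisely the question the sketch leaves open.

# A toy 'Chinese Room': the rulebook is a bare symbol-to-symbol mapping.
# (The entries are invented for illustration only.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "会。",
}

def operator(input_symbols):
    """Follow the rulebook mechanically, with no access to what the symbols mean."""
    return RULEBOOK.get(input_symbols, "对不起，我不明白。")

print(operator("你好吗？"))  # the room passes out an apt reply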

17 comments:

  1. "This tells us absolutely nothing of interest."

    You're plain wrong here, I think. First, the majority reaction to Searle is that, had the man in the room been more functionally isomorphic to the brain, the system of which the man is a part actually would understand Mandarin, the way that brains actually understand Mandarin. You say he has established a conclusion that, I know, most people think is false; that's interesting in its own right.

    Second, what kinds of things is "understanding" predicated of, on your view? Are they material things? If so, what are their parts? Consider this argument that if people understand Mandarin, then their brains do: (i) a person retains their understanding of Mandarin even after they are turned into a brain in a vat; (ii) all the parts of a person are material; (iii) so, either the brain or a part of the brain understands Mandarin.

    The question isn't what gives rise to understanding. Maybe that's a question. I think the VERY interesting question is what kinds of things can be thinking things.

    Here's another way of putting it. Suppose that if my brain instantiates functional role F, then I understand Mandarin. HERE is the million dollar question: what do we have to add to a creature that already has a mechanized brain that instantiates functional role F to make it a thinking, understanding thing?

  2. I think you get to the core of the issue and that your thinking on the Chinese Room Experiment is more or less the way I've always felt about it. What confuses me is that the objection seems so obvious that I find it hard to see why the example is cited so often (unless I'm missing something).

  3. Jack - your "million dollar question" looks like a mere rewording of my question. I claim that Searle's argument is not of interest because it tells us absolutely nothing about the answer.

    Consider the 'robot' version of the thought experiment, wherein the robot's "brain" consists of a homunculus pushing symbols around. The question here is: is the robot a thinking, understanding thing? The question is absolutely not whether the homunculus understands what the robot is doing. Searle invites us to confuse these two questions, and that is the problem.

    (As an aside, your first paragraph misunderstands my claim. I agree with the majority view that the "system" understands Mandarin in the same way that brains do. What 'way' is that? It is that the brain/system is the basis of understanding. Again, see the quote from Margaret Boden -- she is an advocate of the systems reply. And I see the Virtual Mind reply as an offshoot or elaboration of this -- much in the same spirit.)

    (Even further aside: I think the questions in your second paragraph are terminological. I think it's bad "grammar" (so to speak) to say that brains are the bearers of understanding. But when I say the person or the mind understands things, this is just a higher-level description that can ultimately be reduced to fundamental facts about the brain and its properties. See, e.g., Dennett on 'Real Patterns'.)

    I think the Chinese Room argument is so convincing to some people because it confirms their intuitions about free will, about our not being like computers, and so on, so they don't think too hard about it. In my head, it's basically a specialised case of the problem of other minds, designed to feed this particular intuition. If you replace the computer with simple brain processes you'll come to the same conclusion.

    But in my head Searle's argument works specifically as a criticism of 1960s/70s cognitive psychology. Cognitive psychology of that time did ultimately try to explain our minds through homunculi; it saw the mind as made up of computer-ish modules, without reference to the whole system. So when you show, as Searle did, that a computerised homunculus is not sufficient, it causes problems for the enterprise. This is why Searle mentions that stuff about thought/consciousness being something that living things do: he's cautioning against going too far with the mind/computer analogy of cognitive psychology.

  5. Hillsong - my claim is that Searle's argument doesn't achieve even that. It shows that the homunculi themselves lack understanding, but it does nothing whatsoever to show that such (non-understanding) homunculi couldn't suffice to give rise to the robot's understanding.

  6. If I recall correctly, Searle's goal was to argue against the view that intelligence is just formal information processing that can in principle be simulated on any sufficiently complex machine (i.e. "strong AI"). Searle is of the opinion that intelligence and consciousness depend on specific physical properties and processes in the brain, and in coming up with the Chinese Room Argument he just wanted to show that strong AI is extremely unlikely, if not impossible. Your idea that understanding somehow 'emerges' from the Chinese room as a whole is a common reaction to the thought experiment, but you have to agree that this idea - although not completely inconceivable - sounds very far-fetched.

    Sure, but one can better make that point by describing any number of odd-looking instances of "formal information processing" -- e.g. a Turing Machine made from rolls of toilet paper, or one built out of buckets of water.

    The Chinese Room is objectively inferior because it builds in the distracting element of the homunculus. Philosophizing on this topic would be much improved if people threw away the homunculus and got back to arguing about whether understanding could emerge from rolls of toilet paper.
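
    To make the substrate-neutrality of "formal information processing" vivid, here is a minimal Turing-machine sketch (in Python, with a made-up transition table that increments a number in unary). Nothing in the loop cares whether the tape cells are squares of toilet paper, buckets of water, or entries in a dictionary.

    # Minimal Turing machine simulator: the 'tape' is a dict, but it could
    # just as well be toilet paper squares or buckets of water.
    def run_tm(transitions, tape, state="start", head=0, blank="_"):
        """Run until no rule applies; transitions maps (state, symbol) to
        (new_state, symbol_to_write, head_move)."""
        while (state, tape.get(head, blank)) in transitions:
            state, write, move = transitions[(state, tape.get(head, blank))]
            tape[head] = write
            head += move
        return tape

    # Illustrative machine: append a '1' to a unary-encoded number.
    increment = {
        ("start", "1"): ("start", "1", +1),  # scan right over the 1s
        ("start", "_"): ("halt", "1", +1),   # write a final 1, then halt
    }
    print(run_tm(increment, {0: "1", 1: "1", 2: "1"}))  # unary 3 -> unary 4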

  8. Philosophizing on this topic would be much improved if people threw away the homunculus and got back to arguing about whether understanding could emerge from rolls of toilet paper.

    I think you're right; but even if you were wrong, this whole discussion was worth it just to have this quote.

    Searle's Chinese Room argument makes more sense if you take into consideration his arguments against ontological emergence (which in The Rediscovery of the Mind he calls radical emergence).

    His point is more akin to the zombie argument. That is, you have to be able to identify a knower somewhere. The argument that knowledge could be emergent, while plausible, only works if consciousness is emergent. So to me the whole Chinese Room argument is a variation on the same intuition that Chalmers uses in the zombie argument. And it is just as unpersuasive to those who don't already agree with his conclusions.

  10. Clark -- the two arguments (and their employed intuitions) actually differ drastically in structure.

    The zombie intuition merely sets up a logically possible state of affairs. But Searle is not merely claiming that there's a possible world out there where computation fails to give rise to intentionality. (I'd agree with that.) He's making that claim about the actual world -- a far stronger claim, and one that's far less defensible by mere intuition. Indeed, I'd say the logical difference here is so great as to blow any attempted analogy right out of the water.

    If memory serves, I think Chalmers himself thinks that Strong AI is possible. This doesn't require any kind of weird "ontological emergence" (I don't even know what that means). It's compatible with the perfectly ordinary sense of contingent 'emergence' the zombie argument leads us to, i.e. whereby the physical properties, in conjunction with the contingent psycho-physical bridging laws, give rise to the mental properties.

    The big question (on this Chalmersian view) is whether the 'physical' side of the bridging laws invoke generic higher-level functional/computational properties of the sort that brains share with computers, or whether the laws invoke specifically those "causal properties of the brain" that Searle harps on about. But it's a merely contingent matter -- either way is possible.

    Well, I think the zombie argument ends up being about the actual world as well.

    Note that I'm not defending intuition, but then I don't defend it with the zombie presentation either. Indeed, I'm pretty skeptical about all such thought experiments, which just seem like intuition pumps to me. And I don't trust intuition.

    Ontological emergence (or radical emergence, in Searle's terminology) is the idea that the whole is greater than the sum of its parts, not just in the sense that its operations or properties go beyond the properties of the parts. One form holds that the parts together form a new ontological entity. Obviously most people are skeptical of this sort of thing, but it's a big deal in some free will theories where one doesn't want to embrace traditional substance dualism (O'Connor and Clarke being two big proponents). A similar idea pops up in discussions of consciousness, although I'm less familiar with the literature there. (I suspect it ends up being the same sort of thing: one's intuitions pointing towards Cartesian dualism but one's skepticism rejecting dualism.)

    As to Chalmers's view of consciousness, I just don't recall, so I can't speak there. I'm skeptical, as I thought he was arguing for a dualism not reducible in the sense you suggest. But I just don't have any of his writings handy to look up. (Anyone want to chime in here?)

    I just don't recall Chalmers arguing for the weak sense of emergence enabling zombies. But I may just be completely wrong here. And I'm too lazy to look it up tonight so as to appear intelligent. (grin)

    While I find Searle's logic typically problematic (the Chinese Room being an example), and it took me forever to actually figure out what he was arguing for in The Rediscovery of the Mind, I think he is on to something. He just hasn't clarified it enough, in my view.

    I do agree with what someone else wrote about his attacks being aimed more at the kind of naive AI that developed from the 60s through the 80s. I'm more partial to Dreyfus there, although I'll not drag you into that discussion.

  12. The homunculus' lack of understanding has no implications for the real question of whether there is understanding created by the Chinese Room.

    I think Searle has a pretty good response to this: imagine he leaves the room and the room burns down or vanishes or whatever ... so there's NO room, NO book, JUST him. He can keep doing the exact same thing, but he doesn't understand Chinese -- i.e. the conclusion is unchanged. So to say that the "homunculus" is just a "distraction" seems plainly false.

    Some of the comments in this thread seem to assume that Searle was broadly attacking the idea that AI is possible. Searle doesn't deny the possibility of AI. Hence the portion of the first blockquote: "if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis..." I know the distinction between weak AI and strong AI has been mentioned in this thread, but the overall discussion seems to blur that distinction and make a straw-man argument against the CRA.

    I know the CRA has tons of critics, to the point where there's a whole book of these criticisms. But I'm sorry, it strikes me as a convincing argument, and I'm not seeing a sober, lucid refutation. I know my opinion won't convince anyone, but neither does indulging in all this: "It's a terrible argument!" "It's just a distraction!" "It has no implications!" "It's too obvious to even point out!" Why should any of that huffing and puffing carry any weight with anyone? Of course, I might be missing something -- maybe the CRA is dead wrong -- but I don't feel that this blog post has satisfactorily explained why.

    If the CRA is wrong, there should be a clear, straightforward explanation of why it's wrong. Throwing around a flurry of philosophy buzzwords like "homunculus," "category mistake," "zombies," and "Turing" isn't a substitute for a convincing argument, at least for those who aren't already inclined to agree with you.

  13. John - if you're going to accuse someone of "huffing and puffing", at least have the decency to quote things they actually said. (It's ironic that your last two paragraphs are themselves pure puffery.)

    I argued for the claim that the homunculus is a distraction. The clear, simple explanation is that the CRA confuses the level of implementation (where symbol-manipulation occurs) and the level of realization (where understanding emerges). We should not expect the two to coincide. That is, we should not expect the implementer to also be the realizer.

    Normally, implementers are brains, which give rise to understanding minds. I can understand English, even though my neurons don't. Clear enough so far? Now, in the CRA (and especially in the variation you mention, whereby the homunculus internalizes the rulebooks), Searle invites us to imagine a case where the implementation takes place within what's already a mind. This tempts us to conflate the two levels of operation, and to suppose that any new "understanding" of Chinese must emerge (if at all) at the same 'level' at which the homunculus understands English and manipulates the symbols. But this is not so.

    Again, the claim at issue is that appropriate symbol manipulation implements (or creates) a newly-realized mind. Normally the implementer is a brain, but in principle it could be another mind. This latter possibility is just what we find in the case of the Internalized Chinese Room. All the symbol manipulation takes place in the English-speaking mind of the homunculus. If Strong AI is true, we should expect this implementation to 'realize' (or give rise to) a new mind, distinct from the first, that understands Chinese.

    Searle invites us to intuit that the English-speaking homunculus does not himself understand Chinese by implementing the requisite computation. This is true, but irrelevant. The question is not whether the implementer acquires understanding, but whether the realizer does. Lack of understanding in the implementer simply does not imply lack of understanding in the realizer. That is why the CRA "has no implications". Of course, Searle misses this logical point because he assumes that the realizer in this case must just be the implementer (the homunculus). And that is why the homunculus is a distraction: it tempts us to conflate the implementing and realizing levels of cognition.

    (Of course, one might question whether symbol-manipulation -- whether in our minds, or on some more directly physical substrate e.g. toilet paper -- is really enough to give rise to a new level of mind. Such claims are certainly "odd looking". But this is to move beyond the CRA. See my response to LvB upthread.)
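
    If a programming analogy helps make the two-levels point concrete, here is a minimal sketch (in Python, with an invented toy instruction set) of a host 'implementer' running a guest program. The count it keeps is a feature of the realized guest-level system, not a property of the host loop that implements it.

    # A host 'implementer' that mechanically executes guest instructions.
    # (The instruction set and names are invented purely for illustration.)
    def host_run(program):
        """Execute a guest program given as a list of 'inc'/'dec'/'print' ops."""
        guest_state = {"count": 0}          # state of the realized (guest) system
        for op in program:
            if op == "inc":
                guest_state["count"] += 1
            elif op == "dec":
                guest_state["count"] -= 1
            elif op == "print":
                print(guest_state["count"])
        return guest_state

    # 'Counting to three' is a feature of the guest-level system being realized;
    # it is not a property of the host_run loop that implements it.
    host_run(["inc", "inc", "inc", "print"])  # prints 3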

  14. John - if you're going to accuse someone of "huffing and puffing", at least have the decency to quote things they actually said.

    In retrospect, yes, I could have added "(paraphrasing)," or used italics instead of quotes, to show that I was paraphrasing for effect.

    Of course, one might question whether symbol-manipulation -- whether in our minds, or on some more directly physical substrate e.g. toilet paper -- is really enough to give rise to a new level of mind. Such claims are certainly "odd looking". But this is to move beyond the CRA.

    If the CRA rubs you the wrong way and you prefer looking at the problem a different way, I have no problem with that. But you seem to have no fundamental disagreement with Searle in finding strong AI "odd," to say the least. If I find the Chinese room a more useful way to think about it, while you find toilet paper a more useful way to think about it, then don't we simply prefer different imagery to arrive at the same conclusion? You find the man in the argument "distracting"; I find it useful.

    Once you admit that "one might question whether symbol-manipulation is really enough to give rise to a new level of mind," you're not "moving beyond" the CRA; you're recognizing that there's a pretty good point to the CRA! I mean, you practically admit this when you respond to LvB's comment, which is basically supportive of the CRA, by saying: "Sure, but one can better make that point by describing..."

    BTW, you can say I'm "puffing," and maybe I am -- but I'm not claiming to advance a substantive argument. I simply agree with Searle (and for that matter LvB's comment) on this point. I assume everyone reading this thread has also read the argument in Searle's words, so I feel no need to duplicate it here.

  15. "I think Searle has a pretty good response to this: imagine he leaves the room and the room burns down or vanishes or whatever ... so there's NO room, NO book, JUST him. He can keep doing the exact same thing, but he doesn't understand Chinese -- i.e. the conclusion is unchanged. So to say that the "homonculus" is just a "distraction" seem plainly false."

    This is laughable: if the book is destroyed he can't "keep doing the same thing". How is he supposed to look up the rules? OK, you probably mean he memorized the book, in which case the book has NOT been destroyed (duh, he created a copy by memorization before burning the book). Doesn't Searle defeat his own argument here (and all others like it, vitalism included)? Intelligence is some pattern, an emergent phenomenon where the "implementation" doesn't really matter; it can be a book or a memorization.

    The whole argument is silly. Searle says nothing about whether or not it is possible to build a machine which is extensionally indistinguishable from something with a mind; his position is that, however it behaves, a machine cannot have a mind. If we ever build such a thing, then the terms of the debate will become real, and will change radically, and Searle's paper, I suspect, will not be much cited.

    I suppose if David Bowman had pointed out Searle's paper to HAL, then HAL would have just said 'well, that's OK then - feel free with the off switch'.

  17. Hi. Sorry for resurrecting a seven-year-old thread, but I just came across it and couldn't resist.

    I quite agree with your analysis of the argument. We can also describe the argument as committing a fallacy of equivocation. We're asked to accept the premise that, when executing the program, "Searle still doesn't understand Chinese". Under normal circumstances such a statement would be unambiguous, but Searle has created a weird scenario in which it becomes ambiguous. On the one hand, "Searle" could refer to the English-speaking sub-system (or homunculus) with all Searle's native abilities, memories, personality, etc. In that case it's trivially true that "Searle" doesn't understand Chinese, but it's also irrelevant, since it ignores the execution of the AI program, which is what we were supposed to be thinking about. If, on the other hand, we take "Searle" to mean the entire system, including the execution of the AI program, then asking us to accept as a premise that "Searle" doesn't understand Chinese is just begging the question. By equivocating between the two possible readings of the premise, Searle gets us to accept a trivially true but irrelevant premise, and then reinterprets it as a question-begging one.

    Unfortunately fallacies of equivocation can be very seductive and hard to see, especially if people consider the argument only in a sequential way, one step at a time. It can help to step back and consider the argument as a whole, asking what resources it uses to achieve its goals. Then we may see that the argument is trying to get something for nothing. What was the point of replacing an ordinary computer with a human computer, called Searle? Why didn't Searle make the same argument in respect of an ordinary computer? Because then it wouldn't work. Asking us to accept the premise that "the ordinary computer doesn't understand Chinese" would obviously beg the question. But, if the argument doesn't work for the ordinary computer, why should it work for the human computer, when replacing one type of computer with another is an irrelevant (truth-preserving) move? Both types of computer do exactly the same thing, and the internal workings of the computer are irrelevant. Even Searle treats the difference as irrelevant when he extends his conclusion from the human computer back to ordinary computers. The switch from an ordinary computer to a human one is logically irrelevant, but rhetorically effective. It introduces a superfluous extra element, the homunculus or English-speaking sub-system, which can be used to distract our attention from the Chinese-speaking sub-system that we should be attending to.

    Since its original presentation in 1980, Searle has compounded the fallacy by conflating this argument with a second argument for the same conclusion, an argument based on syntax and semantics, referring to them both collectively as the CRA. Then, when the Systems Reply is pressed against the original argument, he avoids it by switching attention to the second argument, complaining that the Systems Reply doesn't address that. And the second argument (from syntax and semantics) is just another case of question-begging, dressed up in misleading language to look like something more. I'm afraid, to me, this is philosophy at its worst.

