Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese....
The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.
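To make the purely syntactic character of the room concrete, here is a minimal sketch -- my own illustration, not anything in Searle's text; the symbols and rules are invented. A lookup table stands in for the rule book, and the "operator" does nothing but match shapes and copy out the prescribed reply:

```python
# A toy "Chinese Room": the operator matches the input's shape against a
# rule book and passes back the prescribed output shape. At no step does
# the meaning of any symbol enter the process. (Symbols and rules are
# invented for illustration; Searle's rule book is, of course, imaginary.)

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
    "你会说中文吗？": "我会说中文。",  # "Can you speak Chinese?" -> "I can speak Chinese."
}

def operator(input_symbols: str) -> str:
    """Match the input shape in the rule book; copy out the listed reply."""
    return RULE_BOOK.get(input_symbols, "我不明白。")  # fallback: "I don't understand."

if __name__ == "__main__":
    print(operator("你会说中文吗？"))  # prints 我会说中文。 with zero comprehension
```

A rule book that could actually pass the Turing Test would need to handle unboundedly many inputs, but the structure is the same: the operator consults rules defined over symbol shapes alone.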
I think this is a very misleading thought experiment. It's true that a homunculus implementing a program won't necessarily understand what's being implemented, but who ever would have thought otherwise? We may not actually have homunculi running around in our heads, passing electrical charges from neuron to neuron; but if we did, they wouldn't share our understanding either. The mental states of the imagined homunculi don't limit the mental states their efforts can give rise to (i.e., in us), and so it is with the Chinese Room. The homunculus's lack of understanding has no bearing on the real question: whether the Chinese Room as a whole gives rise to understanding. So by asking us to focus on the homunculus, Searle introduces a red herring -- and worse, an invitation to 'level confusion' and category mistakes. The Stanford Encyclopedia of Philosophy (SEP) has a quote from Margaret Boden that perfectly captures my objection (see further its section 4.1.1, "The Virtual Mind Reply"):
Computational psychology does not credit the brain with seeing bean-sprouts or understanding English: intentional states such as these are properties of people, not of brains... Searle's description of [the symbol-manipulating homunculus] involves a category-mistake comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence.
Technically, Searle's conclusion above is true: computers, as implementers of programs, aren't the kind of thing of which 'understanding' may be predicated. But neither are brains. So this tells us absolutely nothing of interest. The real question is whether computational processes give rise to conscious and/or intentional mental states, as neuronal processes do. Seen in this light, the Chinese Room seems an entirely unhelpful distraction.