Would you still be conscious if your neurons were replaced by (functionally identical) silicon chips?
It seems like this is an open question. But how can physicalists accommodate this datum? We know (by stipulation) all the physical facts of the scenario: we know that the resulting "brain" is functionally/computationally no different, but that the matter it's made of is different. If the physical facts exhaust the facts, then it doesn't seem that there's anything left for us to wonder about.
But clearly there is something more we can wonder about. We can wonder whether silicon brains would still give rise to qualia (phenomenal consciousness), as biological brains do. We can similarly wonder whether Block's "Chinese Nation" (a functional analogue of a brain in which individual humans communicating via walkie-talkies play the role of neurons) is really conscious. There's no physical fact we're ignorant of here. So if there's a substantial fact we remain ignorant of, it must concern a matter over and above the physical facts. That is, it must be a matter of non-physical fact.
Update: Brad DeLong is puzzled. The following may help. We can distinguish two kinds of questions. Semantic questions concern which words pick out which properties. Substantive questions concern what properties are instantiated by various worldly entities. The question whether my cyborg twin is conscious or not is surely a substantive question: I'm picking out a distinctive mental property, and asking whether he has it.

Now, the problem for physicalists is that they can't really make sense of this. They can ask the semantic question whether the word 'consciousness' picks out functional property P1 or biological property P2. But given that we already know all the physical properties of my cyborg twin (say he has P1 but not P2), there's no substantive matter of fact left open for us to wonder about if physicalism is true. It becomes mere semantics.