What do we mean when we say “I am experiencing the redness of red?” We mean something like this: There is a part of the universe I choose to call “me,” a collection of atoms interacting and evolving in certain ways. I attribute to “myself” a number of properties, some straightforwardly physical, and others inward and mental. There are certain processes that can transpire within the neurons and synapses of my brain, such that when they occur I say, “I am experiencing redness.” This is a useful thing to say, since it correlates in predictable ways with other features of the universe. For example, a person who knows I am having that experience might reliably infer the existence of red‐wavelength photons entering my eyes, and perhaps some object emitting or reflecting them. They could also ask me further questions such as “What shade of red are you seeing?” and expect a certain spectrum of sensible answers.
There may also be correlations with other inner mental states, such as “seeing red always makes me feel melancholy.” Because of the coherence and reliability of these correlations, I judge the concept of “seeing red” to be one that plays a useful role in my way of talking about the universe as described on human scales. Therefore the “experience of redness” is a real thing.
This is manifestly not what many of us mean by our qualia-talk. Just speaking for myself: I am not trying to describe my behavioural dispositions or internal states that "correlate [...] with other features of the universe" in "useful" ways. I have other concepts to do that work, concepts that feature in the behavioural sciences (e.g. psychology). Those concepts transparently apply just as well to my imagined zombie twin as to myself. We could ask the zombie 'further questions such as "What shade of red are you seeing?" and expect a certain spectrum of sensible answers.' But this behaviouristic concept is not as philosophically interesting as our first-personal concept of what it is like to see red -- a phenomenal concept that is not properly applied to my zombie twin.
So I worry that Carroll is simply changing the subject. Sure, behavioural dispositions and internal cognitive states (of the sort that are transparently shared by zombies) are "real things". Who would ever deny it? But redefining our mentalistic vocabulary to talk about these (Dennettian patterns in) physical phenomena is no more philosophically productive than "proving" theism by redefining 'God' to mean love.
This diagnosis of the debate suggests a rather different dialogue from the one Carroll imagines:
P: What I’m suggesting is that the statement “I have a feeling ...” is part of an emergent way of talking about those signals appearing in your brain. There is one way of talking that speaks in a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences. And there is a map between these ways: When the neurons do a certain thing, the person feels a certain way. And that’s all there is.
M: Except that it’s manifestly not all there is! Because if it were, I wouldn’t have any conscious experiences at all. Atoms don’t have experiences. You can give a functional explanation of what’s going on, which will correctly account for how I actually behave, but such an explanation will always leave out the subjective aspect.
P: Why? I’m not “leaving out” the subjective aspect, I’m suggesting that all of this talk of our inner experiences is a useful way of bundling up the collective behavior of a complex collection of atoms. Individual atoms don’t have experiences, but macroscopic agglomerations of them might very well, without invoking any additional ingredients.
M: No, they won’t. No matter how many non‐feeling atoms you pile together, they will never start having experiences.
P: Yes, they will.
M: No, they won’t.
P: Yes, they will.
Carroll's imagined dualist, in claiming that "non-feeling atoms... will never start having experiences", doesn't sound much like a property dualist to me. Most property dualists think the psycho-physical bridging laws require "agglomerations" of the right sort (i.e., as found in brains, but not in rocks or individual "non-feeling atoms") in order to give rise to phenomenal experiences. So those last lines don't ring true at all: non-feeling atoms will, thanks to the bridging laws, give rise to experiences if piled together in the right way (though in other possible worlds -- e.g. zombie worlds -- they might fail to do so, so it's true that the right arrangement of atoms doesn't strictly suffice for consciousness).
The larger point missed here is that on P's view, there is nothing more to the idea of having experiences than "bundling up the collective behavior of a complex collection of atoms". P thus cannot even make sense of there being a real question about which entities are conscious. M should instead object as follows:
I agree that the behavioural dispositions you describe exist. Indeed, we know full well that certain agglomerations of atoms exemplify such properties -- there is no "might" about it. But insofar as consciousness remains an open question, over and above the question of whether certain behavioural dispositions are exhibited, I don't see that you have actually said anything about it at all. You're just talking about behavioural dispositions. I'm not interested in those. I want to know whether those dispositions strictly suffice for the further phenomenon of first-personal conscious experience.
You say that "When the neurons do a certain thing, the person feels a certain way. And that’s all there is." That sounds more like my view, insofar as you have listed two distinct phenomena here: (i) the neurons doing a certain thing, and (ii) the person feeling a certain way. Perhaps you instead meant to say, "When the neurons do a certain thing, this physical event can be stipulatively redescribed as 'the person feeling a certain way'." This would make clearer that you have implicitly eliminated any further phenomenon of first-personal conscious experience, and instead use mentalistic language to talk about something else. (Though, admittedly, the careful reader should realize that you're really engaging in semantics rather than philosophy of mind when you go on to take as your explanatory target "all of this talk of our inner experiences", rather than directly addressing our inner experiences.)
P.S. On the challenge that "You don’t think you’re a zombie, but that’s just what a zombie would say," I take it that your reason for thinking yourself conscious is not that you say so to yourself, but that -- as Carroll initially put it -- "you have access to your own mental experiences." A zombie doesn't have such access, since a zombie has no such experiences. They walk the walk and talk the talk, but we know ourselves to do more than that. [See also: Why do you think you're conscious?]