Sunday, July 11, 2010

Non-Physical Questions

Would you still be conscious if your neurons were replaced by (functionally identical) silicon chips?

It seems like this is an open question. But how do physicalists accommodate this datum? We know (by stipulation) all the physical facts of the story: we know that the resulting "brain" is functionally/computationally no different, but that the matter it's made of is different. If the physical facts exhaust the facts, then it doesn't seem that there's anything left for us to wonder about the situation.

But clearly there is something more we can wonder about. We can wonder whether silicon brains would still give rise to qualia (phenomenal consciousness), as biological brains do. We can similarly wonder whether Block's "Chinese Nation" (a functional analogue where individual humans communicating via walkie-talkies play the role of neurons) is really conscious. There's not any physical fact we're ignorant of here. So if there's a substantial fact we remain ignorant of, it must concern a matter over and above the physical facts. That is, it must be a matter of non-physical fact.

Update: Brad DeLong is puzzled. The following may help. We can distinguish two kinds of questions. Semantic questions concern which words pick out which properties. Substantive questions concern what properties are instantiated by various worldly entities. The question whether my cyborg twin is conscious or not is surely a substantive question: I'm picking out a distinctive mental property, and asking whether he has it. Now, the problem for physicalists is that they can't really make sense of this. They can ask the semantic question whether the word 'consciousness' picks out functional property P1 or biological property P2. But given that we already know all the physical properties of my cyborg twin (say he has P1 but not P2), there's no substantive matter of fact left open for us to wonder about if physicalism is true. It becomes mere semantics.

19 comments:

  1. Maybe I'm not understanding your argument, in which case the following is an irrelevant point, but it seems that the ignorance about whether the resulting silicon-brain is conscious stems from ignorance about whether the structures on which conscious mental states supervene have been left intact by the procedure, and/or from ignorance about what the right supervenience base is.

    If you replace all phospholipids in someone's neural cell membranes with some artificial lipid with near-identical physical properties (I guess you could do this with dietary changes and let turnover take care of the rest), few would doubt that the person is still conscious (assuming no behavioral changes, of course). On the other hand, if you gradually replace certain connections, ending up with a Blockhead-type computer hooked up to sensory and motor neurons, only behaviorists will maintain that the person is still conscious. It seems that disagreement/uncertainty in intermediate cases reduces to disagreement/uncertainty about what the right supervenience base is.

  2. Hi Alexander, right: we wonder precisely what kinds of physical states give rise to [the distinct phenomenon of] conscious states. Do behavioural dispositions suffice, or internal functional states of the right kind, or must we further limit it to just certain (biological/neuronal) realizers of those functional states?

    Note that this question is only substantial if conscious states are distinct from the physical/functional states on which they (nomologically) supervene. For if physicalism is true, and the physical states are all that there is, then there just isn't any further question to ask. There would not be any further thing, "consciousness", over and above the physical states that we already know about.

    Compare: there is no deep question about whether my silicon twin is "really alive". Once we know all about my twin's physical makeup and functioning, there's just no further question we can ask about his biological status -- whether he is "alive" or not. It's a merely terminological question, and nothing hangs on which way we decide to use the word.

    Intuitively, questions of consciousness are not like this. It's not an arbitrary matter whether or not we call cyborgs 'conscious': we think there's a real, further fact of the matter about their mental status. And it's a question that remains unanswered, even after we know all the physical facts.

  3. Here is a very nice discussion of the same topic:
    http://bloggingheads.tv/diavlogs/28165 (Consciousness, intelligence, and computation)

  4. "Functionally identical" is pretty strong. This doesn't really seem like an open question to me. But you probably expected that from me.

  5. OK: suppose you are a physicalist. How might you define consciousness?

    Possibly a physicalist might define consciousness as "the capacity of a complex, dynamic system to contain within itself a real-time model of the totality of itself."

    It is in this sense, says the physicalist, that I am conscious, as my mind has created within itself a model of itself.

    And in this sense the putative cyborg would also be conscious: if the cyborg genuinely were functionally equivalent to a human being, then it would contain a model of its own mind within its mind.

    I do not see how there is a *problem* for a physicalist in this case. The problem only emerges when a meta-physicalist comes along and claims there is something intrinsically "special" about the consciousness of human beings, without actually explaining what this specialness might be; in that case, the burden falls on the meta-physicalist to show that there is some "special" or metaphysical component to human consciousness.

    If the commonly held intuition is that there *is* some kind of specialness in human consciousness then the physicalist would simply point out that the commonly held intuition is wrong.

  6. Aaron - you don't even think there's an open question regarding the Chinese nation? Wow! How about animals -- do you think there's an open question about how complex their brains need to be before they really feel pain (as opposed to just instinctively flinching away, etc.)?

    TJ - indeed, there's no open question at all about whether the cyborg (or Chinese nation) is capable of a kind of self-modelling or 'access consciousness'. So self-modelling is obviously not what we are talking about when we wonder what kinds of things are conscious. (We're wondering about 'phenomenal consciousness', i.e. whether there is something it is like -- a conscious feel -- to it all.)

    For a physicalist to simply redefine '[phenomenal] consciousness' as 'self-modelling' is to change the subject. Again, the question is whether there is something that it's like to be a cyborg, etc. (You might hypothesize that any sufficiently complex self-model will give rise to phenomenal feels, but it sure isn't true just by definition.)

    I would think that it doesn't matter whether silicon chips or neurons are used, as long as all of the connections are maintained. The resulting cyborg brain would be just as conscious as the human brain. I don't think there are any non-physical traits to worry about, because the entire functionality and consciousness of the brain are determined by its physical nature. I don't believe that there is anything above that: how could a non-physical property affect anything, or matter at all, if it neither influences nor is based in the physical properties of the brain? And as soon as it influences or is based in a physical property of the brain, isn't it itself a physical property?

  8. Property dualists agree that mental properties are "based in" (or arise from) physical properties of the brain. The crucial point is that there is a new, further property here. (We can imagine a "zombie world" which has only physical properties, and where these do not give rise to consciousness like they do in our universe.)

    This dualist view is most naturally understood as a kind of "epiphenomenalism", whereby conscious qualia have no causal effect. This leads to some puzzles, as discussed here. But note that even if phenomenal consciousness doesn't have any causal influence, it might still "matter" for other reasons. (E.g. it might be morally significant.)

  9. Do you have the same worry about reductive identifications of chemical properties with underlying physical properties?

  10. No, how were you imagining the analogous argument would go? (As I explained in response to Alexander, it's clear that no such "further question" arises for biological properties like being alive. And the same should apply to chemical properties. The microphysical facts entail everything substantial there; anything left open seems merely terminological, and not something we could really wonder about like we can wonder about the presence or absence of consciousness.)

  11. First, apologies; I didn't read your reply to Alexander carefully enough, or I would have framed my question in those terms. My question was motivated by the thought: "Surely there are substantive open questions in the field of physical chemistry".

    Let me switch to biology, so as to stick with the property already discussed (i.e. being alive):
    If I remember correctly from high school biology (and it is possible I'm mis-remembering this): there had been, at some point, a dispute about whether viruses were alive. They shared some distinctive features with bacteria and other life forms, but also lacked some of the features common to all other life forms. Let F be the rich functional property (i.e. the larger set of features) lacked by viruses, and let F- be the weaker functional property that the virus possesses. So, it's known that viruses are F-, and known that they are not F, but there is an open question as to whether they are alive. Is this question merely terminological? I'm inclined to think not (or at least, that it isn't obviously merely terminological). It is true that we could decide to use "alive" to pick out anything with the property F-, or to pick out the smaller group of things with property F, but that doesn't automatically render the question of whether viruses are alive merely terminological.

  12. It sure looks terminological to me. At the very least, it's clear that there aren't really two different possibilities here -- at best, we might interpret it as a question about what is the best (most natural or principled) way of describing this particular situation. (There obviously isn't any more to 'being alive' than satisfying the appropriate functional property. There's not some further qualitative property here that may or may not be instantiated. We're merely wondering which of F and F- has the higher-order property of being more natural, or some such.)

    I guess that's not merely terminological, if we think there are important metaphysical facts about natural categories / similarity, etc.; but it's still a far cry from the sort of "substantial" questions we have about consciousness. The latter don't (on the face of it) just concern the most natural way to categorize or cluster lower-level functional properties. Rather, they seem to suggest qualitatively distinct possibilities.

    In short: it doesn't make a difference to the world whether viruses are alive or not (though this classificatory question might legitimately interest philosophers of science and metaphysicians). In contrast, it surely does make a difference whether dogs and cyborgs are conscious or not!

  13. I don't know, it seems to me that the question of whether or not viruses are alive is obviously a terminological question.

    The question of whether or not a thing is conscious is an example of a general problem of determining whether anything apart from ourselves is conscious, the problem of other minds. There is no such skeptical problem of other lives.

    The question of whether a silicon duplicate is conscious is so clearly an open question that it has traditionally been seen as an example of unknowability. I don't think anyone would claim that it is unknowable whether a virus is alive or not. It is what we say it is.

  14. So, if you grant that the question of whether viruses are alive can be more than merely terminological (whether or not you think it falls under the heading of "especially substantive"), I think I can make the case that your silicon duplicate case is parallel to it, though I'll have to state things in a slightly simplified manner:

    Say that A and B have "strict" functional identity just in case, for any input x, if A receives x and B receives x, then, for any output y, A outputs y iff B outputs y.

    Say that A and B have "loose" functional identity just in case, for any input x from a relevant class C1 of inputs, if A receives x and B receives x, then, for any output y, from a relevant class C2 of outputs, A outputs y iff B outputs y.

    (Both of these definitions need to be refined so that there can be functional duplicates in different functional states, but they should be sufficient for present purposes.)
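
    Roughly, in symbols (just a sketch of the two definitions above; the shorthand O_A(x,y), meaning "A yields output y when given input x", is my own notation, introduced for compactness):

    \[ \mathrm{Strict}(A,B) \iff \forall x\, \forall y\, \bigl( O_A(x,y) \leftrightarrow O_B(x,y) \bigr) \]
    \[ \mathrm{Loose}(A,B) \iff \forall x \in C_1\; \forall y \in C_2\, \bigl( O_A(x,y) \leftrightarrow O_B(x,y) \bigr) \]

    On this rendering, strict identity is simply the limiting case of loose identity where C1 and C2 are left unrestricted.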

    First: I maintain that a silicon duplicate of a human brain could only achieve loose functional identity with that brain. This is because, unless we restrict our attention to a certain subset of inputs, the fact that individual bits of silicon respond differently to certain stimuli than individual neurons do ensures that some possible inputs will provoke different responses from the silicon system than from the original brain.

    Second: If the silicon brain and the original brain exhibit only loose functional identity, then there is a pair of functional properties, F and F-, such that both brains share F-, but only the original brain instantiates F.

    So: There is a question of whether F- is enough to constitute consciousness, even though we know that F is. Of course, the functional property F isn't instantiated by all the brains we know to be conscious. So, we would generate a class of known-to-be-sufficient functional properties, F1...Fn. Likewise, there is a class of silicon duplicates of those brains, permitting us to generate the broader class F1-...Fn-. We could then judge the relative naturalness of the two classes. If it looks like we have to gerrymander things to keep the F- properties out, we'd have good reason to think that the silicon brains are conscious.

  15. Lewis - this seems to miss the central point of my previous comment.

    In the virus case, the question (to the extent that there is one at all) is entirely exhausted by the question of which functional property is more natural. There's obviously no further property of 'being alive' for us to wonder about.

    The case of consciousness is crucially different. We don't just wonder which of F and F- is more natural. We use this as a basis for extrapolating the distinct property of 'being conscious'. (It's reasonable to expect that consciousness will correlate with a natural functional property in this vicinity. But it's clearly a further question, over and above the question of naturalness, and as such it could - in principle - go either way.)

    In short: deciding the naturalness facts conclusively settles the virus question, which shows the latter to concern nothing over and above the former. By contrast, settling the naturalness facts would not conclusively settle questions about consciousness; it would merely give us "good reason" to posit one theory of mind rather than another. But it still remains an open question whether consciousness actually correlates with the more natural functional properties, or whether it turns out to be instantiated in a physically more 'gerrymandered' pattern. So consciousness is something over and above those settled physical facts.

    (Shorter still, as Soluman points out, there's a problem of other minds, but no such 'problem of other lives'!)

    I think the reasonable response is the Dennett-ish response that phenomenal consciousness (as something logically separate from access consciousness) is just an incoherent concept. The idea that there's an actual qualitative property of redness, over and above the access-consciousness event of my thinking that I am experiencing red, seems like mere reification.

    I'm not a philosopher though, so I don't know how many physicalists actually take that direction.

    Richard, I think you are failing to understand Lewis, and I think you are doing so because you have been educated to believe that your answer on this point is officially correct, and to believe that when sources you see as official disagree, there is always an open question and thus there must always be some reasonable disagreement.

    The open question regarding animals is about what sorts of patterns we would care about if we could understand those patterns more clearly.

  18. What do you mean by "official"? I think lots of philosophers have gotten sucked into merely terminological disagreements.

    "The open question regarding animals is about what sorts of patterns we would care about if we could understand those patterns more clearly"

    That may be an open question. But it sure isn't the one that I wonder about. (I'm wondering about the psychologies of animals, not about my own!)

  19. "Would you still be conscious if your neurons were replaced by (functionally identical) silicon chips?

    It seems like this is an open question."

    Well I think "Type-A" materialism is correct, and the question is closed: yes, you would still be conscious. (And yes, Block's Chinese Nation is conscious.)

    "it doesn't seem that there's anything left for us to wonder about the situation."

    Indeed.

    You asked Aaron Boyden "do you think there's an open question about how complex their brains need to be before they really feel pain (as opposed to just instinctively flinching away, etc.)?"

    Here, I think you're doing my work for me - this is my favourite line of attack against dualism!

    If you think that for every animal that lives and has ever lived, there's a well-defined yes/no answer to the question "Did this animal have any experiences?", then you must believe that the 'tree of life' is severed into two disjoint pieces - the conscious animals and the unconscious ones. And because evolution proceeds incrementally, these two pieces must approach each other extremely closely. You end up having to believe that, many times in Earth's history, there has been a conscious 'lizard' (fish, mouse, amoeba, whatever) with unconscious parents, despite those parents being (broadly speaking) as similar to it as your own parents are to you.

    [Note: This argument can be made even sharper by considering the development of a fetus, and asking whether there must be an 'exact moment' when its mind 'switches on'.]

    Thus, the dualist is forced to paint an unparsimonious picture of the world, with 'jagged edges' where minds suddenly begin, whereas a Type-A materialist is quite content to say "there is no fact of the matter as to whether this lizard is conscious. It has some of the features we associate with consciousness but not others."

