Friday, August 12, 2005

Conscious Causation

Derek asked why I think zombies are nomologically impossible. Basically, it's because if they weren't, then I'd have no reason to think that anybody else is conscious. Perhaps everyone I meet is a zombie. Sure, they say they're conscious: but non-conscious physical duplicates would have all the same causal properties, and so would say all the same things, in that exact same sincere-sounding tone of voice, but without any lights on "inside". Now, the reason I'm justified in thinking that other people are conscious is that I know I'm conscious, and that other people have a relevantly similar constitution to me, so presumably they're conscious too.

The universe is such that entities composed as I am somehow end up being conscious. If that's true, then there can't be any zombies, given the natural laws that actually govern our universe: being physically identical to me is nomologically sufficient for consciousness. That's my assumption. Without it, it becomes utterly mysterious why I'm conscious. What special features do I have, beyond my physical ones, that make me conscious? It sounds like we'd be getting into substance dualism, ghosts, and fairytales. So that's that.

The zombie identification issue, described in the first paragraph, raises a more general problem. To illustrate it, imagine a universe physically identical to ours, but lacking the psycho-physical laws (whatever they are) that bestow consciousness upon us. In particular, consider my zombie counterpart in that world, and call him 'Zichard'. Now, Zichard is physically identical to me, and so he interacts with the physical world in all the same ways. He goes where I go, says what I say, and types what I type. To the outside observer, we are entirely indistinguishable. But I'm conscious, and he isn't. Right now, he's typing up a blog post identical to this one. He writes about how the 'publish post' button looks red to him. But it doesn't, not really, because he isn't actually conscious, so nothing looks like anything to him. He's just a robot made of flesh and blood -- and he doesn't even realise it.

Now let's think about me. I'm physically identical to Zichard. When my fingers hit the keyboard, they have the same physical causes behind them as do Zichard's fingers in his zombie world. So consider the following statement of mine: "The 'publish post' button looks red to me." Zichard typed that exact same sentence, mind. There's nothing non-physical in the zombie world, so Zichard's physical situation must have been sufficient to cause him to type the sentence. But we're physically identical beings, living in physically identical worlds. So my physical situation must also have been sufficient for my typing that statement. The fact of my consciousness doesn't seem to have entered into it at all. The fact that the button really does look red to me doesn't seem to be any part of the causal explanation of why I typed that sentence.

That seems a real worry. Indeed, it strikes me as a powerful motivation for some form of reductionism about consciousness. Whatever those physical facts were that led me to type that sentence, my consciousness must consist in those facts, in order for it to be true that my description of the button's redness was caused by my experience of that redness. In other words, the following claims form an inconsistent triad:

(1) Consciousness is non-physical.
(2) Physical actions have physical causes.
(3) My description of the button's redness was caused by my conscious experience of that redness.
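
To make the clash fully explicit, one extra assumption is needed that the prose leaves implicit: (2) must be read as a causal-exclusion claim, i.e. whatever causes a physical action is itself physical. Granting that reading, here is a minimal sketch in Lean of the inconsistency; all the names in it (Event, physical, causes, typing, experience) are illustrative placeholders of my own, not standard notation.

    -- A toy formalization of the triad, assuming the exclusion reading of (2).
    theorem triad_inconsistent
        (Event : Type)                       -- events in a world
        (physical : Event → Prop)            -- which events are physical
        (causes : Event → Event → Prop)      -- 'causes c a' : c causes a
        (typing experience : Event)          -- my typing; my experience of redness
        (h_act : physical typing)            -- background fact: my typing is a physical action
        (h1 : ¬ physical experience)         -- (1) the experience is non-physical
        (h2 : ∀ c a, physical a → causes c a → physical c)
                                             -- (2) causes of physical actions are physical
        (h3 : causes experience typing)      -- (3) the experience caused the typing
        : False :=
      h1 (h2 experience typing h_act h3)     -- (2)+(3) make the experience physical, contradicting (1)

Dropping any one of the three premises blocks the derivation; the sketch takes no stand on which to drop.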

I'm inclined to reject (1). So I should say that zombies are metaphysically impossible after all. I guess dualists are going to reject (2) instead. But then, in light of my 'Zichard' scenario, to retain (3) they're also going to have to deny the following:

(2') If physical causes C lead zombie Z to perform act A in the zombie world, then the corresponding physical causes C' lead the physically identical non-zombie Z' to perform the corresponding act A' in the physically identical non-zombie world.

And how can you deny that?

16 comments:

  1. But to impute that you are conscious while Zichard is, hypothetically, non-conscious does not compute in the first place, because your 'consciousness' has no reference other than yourself.

    If you cannot define, grasp or identify 'consciousness' or 'what it is to be conscious' without self-reference, then you're never really going to be in a position to imagine a Zichard, much less another human being, as conscious or non-conscious.

    In which case, for all you know, Zichard could be conscious or non-conscious, or you could be your very own Zichard, but you wouldn't be able to tell the difference.

  2. Hi Richard. I agree with you that zombies are impossible for the same reason. But this is not an argument for physicalism, IMO. The problem with physicalism is a failure to explain why this is so. I don't think it is satisfying to say that a physical system put together the way I am "just is" conscious. Nowhere in the third person methodologies of present-day physics do we find an entity or process which would explain why first person experience exists in the world.

    I also agree with your saying that a reductionist argument makes sense. But reductionist approaches which eliminate the explanandum aren't explanations at all. There must be some fundamental entity and/or process in the world we can reduce our experience to.
    - Steve Esser

  3. I'm sorry that I didn't read your earlier post before commenting. I see you addressed my point there.

  4. Regarding 2', perhaps a dualist might argue that a non-zombie's non-physical consciousness is capable of overriding, but may choose not to override, the physical processes that translate physical causes into physical actions. In other words, where consciousness is involved, physical actions are *not* caused solely by physical causes, but have an additional, non-physical cause. This is exactly the negation of assertion 2.

    What I'm saying, perhaps not very well, is that I don't see how a dualist in rejecting 2 is necessarily forced to accept 2'. Of course, IANAD(ualist), so maybe I'm missing something.

    Fun reading, though. :)

  5. Seems to me you are a zombie in one sense and not in another.
    For example, insofar as you define what you have as consciousness, you are not a zombie -- HOWEVER, in reality what you have is far less than what we might philosophically define as consciousness or free will, etc.

  6. Paul, I don't think the rejection of (1) commits one to functionalism. We might still hold that it matters which physical causes were behind the action. For example, we can look inside chat-bots like 'Eliza', and see straight away that they are merely cleverly programmed so as to give the false impression of understanding, when there is no deep parsing of semantics going on. So we can hold mere behaviour (even passing the Turing test) to be insufficient to establish consciousness, for the behaviour must also have the right kinds of causes.

    (I don't find the Chinese Room objection very compelling though. I'm inclined to simply insist that the system does understand Chinese, even if the conscious agent doesn't.)

  7. Covaithe - I guess you're right that the dualist could say something along those lines. Interesting.

    Pulp - I have no idea what you're trying to say. It seems fairly clear that we are able to imagine other beings as conscious or not, even if the only consciousness we have first-hand experience of is our own. I just don't see the problem here.

  8. I don't quite follow your previous comment. The denial of (1) is physicalism, which makes zombies impossible. Anything physically identical to myself must be just as conscious as I am, because my consciousness consists in some of those physical facts.

    Now, physicalism does not entail behaviourist functionalism. Let me illustrate why.

    Suppose I had a behavioural twin 'bot' called Bichard. Bichard exhibits identical behaviour to myself, but if you look inside his head you find not a brain, but rather, a clever yet superficial Eliza-like bot program. (Okay, it seems unlikely that a superficial program could exhibit such complex behaviour, but never mind that for the moment.)

    I take it functionalism is the claim that if I am conscious, then my behavioural duplicate Bichard must also be conscious. But clearly the denial of (1) does not entail any such thing, for Bichard and I are physically very different. Our behaviours both have physical causes, but they are different physical causes. Maybe mine are of the right type to give rise to consciousness, whereas Bichard's aren't.

  9. Richard,
    Not sure if your argument is very strong. You are saying that the robot does all the same things as you, and thus we have all the same evidence that it is conscious as we have for you, except that it is a machine.

    For an external actor to deny it the status of consciousness, they would also have to deny you that status -- unless you use the "understanding" argument, which I will detail as follows.

    The "understanding" argument is an implied argument one that one often finds in the justice system and in religion.

    There is a complex system and you declare that if you can understand it is let say "not conscious" and if you can understand it is "conscious".

    Similarly in religion if you can’t understand it is god if you can it is nature

    And in law if you can understand it is OK if you cant it is "evil"

    In this case you might say - I can see why the computer comes up with those answers as a simple function of some inputs.

    The problem is that understanding is in internal to you - so how can it be valid for judging?

    The other possibility is that you define consciousness arbitrarily as "what I have" or "what I and every natural born human has" in which case you can deny a conscious robot that can exceed you on every test as a result of him just not being identical (in every aspect) to you.

    It is valid for you to exclude the robot because it can’t do something you can do (at the risk of being arbitrary again) but that implies the robot does something different to you.

    Otherwise when you say
    "Our behaviours both have physical causes, but they are different physical causes. Maybe mine are of the right type to give rise to consciousness, whereas Richard’s aren't. "

    You are using the device of defining a hypothetical that is potentially impossible, and then marvelling at what turns out to be possible as a result of your hypothetical.

    For example, I might say: imagine two things being pulled towards the earth, but one of them is pulled by a different yet absolutely identical force to gravity, on the same trajectory. You look and find that one is being affected by gravity while the other is being affected by another force -- therefore you cannot say that if something obeys gravity, it is being affected by gravity.

    Now I guess you can say "yeah, that's true", but few people are using that hypothetical to dispute gravity.

    I think you quite often use such arguments, which emerge naturally from the impossibility of some hypotheticals.

  10. Genius, I think you've misunderstood my argument. I wasn't suggesting that robots cannot be conscious. Rather, I was suggesting that it's possible for a robot to behave identically to me without being conscious.

    "we have all the evidence that it is conscious as we have for you except that it is a machine."

    No, the crucial difference isn't that it's a machine, but rather, the exact details of how the mechanism works. Functionalists want to treat agents as a "black box", ascribing consciousness purely on the basis of behaviour. I was suggesting that we need to look inside the black box, at the causal mechanisms that underlie the behaviour. If the robot's mechanisms are of sufficient complexity, we might be justified in ascribing genuine understanding and even consciousness to it. But if we find the behaviour is arising from simplistic chatbot-style heuristics, then we can conclude that it's just a big fake. It's not just behaviour that matters. The underlying "programming" is crucial too.

  11. (This point is made by Jack Copeland in his excellent introduction to Artificial Intelligence -- highly recommended reading if you're interested in this stuff.)

  12. Richard: If you look inside the robot and find that its behavior is the result of a physical system that is really simpler than your own, then it should in principle be possible to find a test to demonstrate this fact. It should be possible to design a test where the robot behaves physically differently than you do. If, on the other hand, you claim that the robot is totally behaviorally identical to you under every conceivable test, I fail to see in what meaningful way you can describe its internal mechanism as "simpler" than your own, no matter what it is.

  13. Covaithe, I was thinking of a robot that merely managed to match my actual behaviour. If it was impossible in principle to tell us apart, then it would seem odd to say that we might nevertheless have different mental states. Hmm. Maybe functionalism isn't so bad after all? ;)

  14. I don't find determinism problematic -- see here.

    But I don't think it matters for present purposes anyway. Nothing I've said here assumes that any physical replica of the universe must turn out the same way. Rather, the dualist is committed to saying that there could be a physical replica of our universe, lacking consciousness. More pressingly, my physical actions have physical causes. So if consciousness is non-physical, it seems superfluous when explaining my actions. Indeterminism doesn't change any of this.

  15. "you say that Zichard having same physical structure as yours cannot perform same tasks as you coz you are performing them being conscious."

    Actually, I was suggesting the opposite. Zichard would (in some possible world) perform the exact same actions as me, simply in virtue of our physical identity, whether or not he was conscious. But our actions have the same, physical, causes. So if my action was a result of my consciousness, then my consciousness must consist in my physical facts -- so Zichard must be conscious too (since he's physically identical to me). Hence the conclusion that zombies are impossible.

  16. You're right that it's possible for similar actions to have different causes, but I don't think that's what's going on in this case. For note that Zichard is behaviourally indistinguishable from myself (by the definition of a 'zombie'), so when you ask him why he ran away, he'll say "because I saw a lion!" (He'll even philosophize about consciousness, completely unaware that he doesn't have any.)

    Assuming Zichard has no consciousness, the real reason he ran away must be some physical fact (perhaps about light from the lion impacting on his retina, being processed by his brain in certain ways, computing that there is danger, etc.).

    But I'm physically identical to him, so all those exact same events take place in my brain too. So it just seems unnecessary to add consciousness into the mix. The physical facts alone are sufficient to cause my action.

