It's commonly assumed -- at least by me -- that the brain is the seat of the mind. But now I wonder whether there's actually any principled basis on which to draw a strict delineation between the brain and other organs (e.g. the eye and optic nerve, etc.), insisting that our minds do not extend beyond the former. I'd previously supposed that the idea of a 'brain in a vat', or a body transplant, shows that the brain is all that intrinsically matters for mentality. But now I'm not so sure that this works after all.
On the standard picture, the brain is basically a computational device. It takes 'input' from various sensory nerves in our body, performs computations on this data (which amounts to 'thinking'), and then outputs behavioural instructions for the body to perform. At least, that's how I think of it. I'm just assuming this is 'standard'. Anyway, this picture seems to make the body rather superfluous: you could replace it with anything else that gives the same 'inputs' to the brain, and reacts appropriately to the resulting 'output'. Hence the possibility of Matrix-like illusions involving "brains in vats", where the body is replaced by a complicated computer simulation feeding input to our brains, resulting in mental lives indistinguishable from our own. This possibility suggests that the brain is all that matters for mentality. (Or so I assumed.)
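To make that picture vivid, here's a toy sketch (the function names and the one-line 'computation' are purely my own illustrative assumptions, not a serious model of cognition): if the brain is just a function from sensory inputs to behavioural outputs, then any source that supplies the same inputs -- real body or vat simulation -- yields the same outputs, and so (on this picture) the same mental life.

```python
# A toy rendering of the 'brain as computational device' picture.
# Everything here (the names, the trivial 'computation') is an
# illustrative assumption, nothing more.

def brain(sensory_input: str) -> str:
    """Take sensory 'input', 'think', and return a behavioural instruction."""
    if "tiger" in sensory_input:
        return "run away"
    return "carry on"

# Inputs as delivered by a real body...
body_input = "visual signal: tiger ahead"

# ...and the very same inputs as delivered by a vat's simulation.
vat_input = "visual signal: tiger ahead"

# The brain can't tell the difference: same inputs, same outputs,
# and (on this picture) the same mental life.
assert brain(body_input) == brain(vat_input)
print(brain(vat_input))  # -> "run away", whether or not the tiger is real
```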
But then, couldn't the same sort of "replacement" occur to portions of the brain itself? Suppose a small portion of my brain was removed, and replaced with a functional equivalent. That is, the replacement part would feed the exact same inputs as before into the various neurons that it's connected to. It would react exactly as my original brain-part had: taking in information from neighbouring neurons, running the appropriate computation, and then returning the appropriate result. If parts of my brain were replaced in such a way by these functionally identical "artificial neurons", I would never notice the difference. My mental life would be unchanged. So, by the same reasoning as above, it seems we are led to conclude that brain-parts are inessential to our minds, in exactly the same way that body parts are.
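As a toy illustration of such a replacement (the class names and the stand-in threshold computation are my assumptions, nothing anatomically serious): two physically different parts that compute the same input-output function are indistinguishable to everything connected to them.

```python
# A toy sketch of functional replacement. BiologicalNeuron and
# ArtificialNeuron are hypothetical stand-ins; all that matters is
# that they compute the same input-output function.

class BiologicalNeuron:
    def fire(self, inputs: list[float]) -> float:
        # Stand-in for whatever the real neuron computes:
        # fire (1.0) iff the summed input crosses a threshold.
        return 1.0 if sum(inputs) > 0.5 else 0.0

class ArtificialNeuron:
    def fire(self, inputs: list[float]) -> float:
        # A different physical realisation of the very same function.
        return float(sum(inputs) > 0.5)

# Neighbouring neurons only ever see the outputs, so no pattern of
# inputs can reveal which realisation is installed.
for inputs in ([0.2, 0.1], [0.4, 0.3], [0.9], []):
    assert BiologicalNeuron().fire(inputs) == ArtificialNeuron().fire(inputs)
```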
Of course, if you just remove a brain portion without replacement, then my resulting cognition will be completely different. But the same is true of my body (say if you remove my eyes), or even the external world -- take away my calculator and I won't be nearly so good at solving math problems!
So indispensability or the possibility of 'replacement' cannot be what delineates which physical parts are involved in mentality. The brain in a vat is a red herring, for we can replace even more than that; we could have a "frontal cortex in a vat", or even a single neuron in a vat, but that doesn't mean that the rest of the brain is non-mental. What this example shows us is that the mind can extend beyond what's in the vat. In the case of your neuron-in-a-vat, the single neuron certainly does not exhaustively comprise your mind. More plausibly, your mind also includes whatever has 'replaced' the rest of your brain -- perhaps part of the 'vat' architecture. But then, what's stopping us from saying the same thing in the original BIV scenario? The computers that have replaced our body (and even the external environment) might now be part of our minds.
Am I missing something here? If mentality is computation, and it doesn't matter how the computation is physically realized, then it seems arbitrary to restrict the mind to the brain -- or even the body, for that matter, as Clark & Chalmers argue in 'The Extended Mind'. They argue that our dispositional/'standing' beliefs consist in information stored in any source that we rely upon and access easily and regularly, e.g. a notebook carried around by an Alzheimer's patient, and not just internal memory. This could, in principle, even extend to other people, yielding the intriguing idea that an inseparable couple's minds might to some degree overlap! I'll quote a bit from C&C's fascinating conclusion:
In each of these cases, the major burden of the coupling between agents is carried by language. Without language, we might be much more akin to discrete Cartesian "inner" minds, in which high-level cognition relies largely on internal resources. But the advent of language has allowed us to spread this burden into the world. Language, thus construed, is not a mirror of our inner states but a complement to them. It serves as a tool whose role is to extend cognition in ways that on-board devices cannot. Indeed, it may be that the intellectual explosion in recent evolutionary time is due as much to this linguistically-enabled extension of cognition as to any independent development in our inner cognitive resources.
What, finally, of the self? Does the extended mind imply an extended self? It seems so. Most of us already accept that the self outstrips the boundaries of consciousness; my dispositional beliefs, for example, constitute in some deep sense part of who I am. If so, then these boundaries may also fall beyond the skin. The information in Otto's notebook, for example, is a central part of his identity as a cognitive agent. What this comes to is that Otto himself is best regarded as an extended system, a coupling of biological organism and external resources. To consistently resist this conclusion, we would have to shrink the self into a mere bundle of occurrent states, severely threatening its deep psychological continuity. Far better to take the broader view, and see agents themselves as spread into the world.
As with any reconception of ourselves, this view will have significant consequences. There are obvious consequences for philosophical views of the mind and for the methodology of research in cognitive science, but there will also be effects in the moral and social domains. It may be, for example, that in some cases interfering with someone's environment will have the same moral significance as interfering with their person. And if the view is taken seriously, certain forms of social activity might be reconceived as less akin to communication and action, and as more akin to thought. In any case, once the hegemony of skin and skull is usurped, we may be able to see ourselves more truly as creatures of the world.
Despite their other radical suggestions, C&C conservatively assume that consciousness is purely in-the-head. But again, what is the principled basis for such a boundary? Perhaps if consciousness were seen as a fundamentally biological or neurological process, essentially arising from neural interactions, then we could get this result. (Though it would seem to imply -- implausibly, I think -- that replacing each of my neurons with exact artificial replicas would rob me of my consciousness.) But on cognitive theories of consciousness (à la Dennett), extended cognition would seem to straightforwardly imply the possibility of extended consciousness. Maybe someone will figure out a way to use this to test the theories one day...
"The brain in a vat is a red herring, for we can replace even more than that; we could have a "frontal cortex in a vat", or even a single neuron in a vat"
Why not take it the final distance? You don't even need the single neuron - all you need is the vat! Of course the vat ends up being a fantastically complicated bit of wiring that functionally duplicates the brain (+ external cognition) anyway...
Ha, yeah, well spotted. Some would say that it's no longer you any more, but I guess that stage would occur long before the single-neuron version too. This brings up all those crazy questions about personal identity discussed in this old post.
hmmm my analysis is "yes"
more detailed - I guess you can define "yourself" as anything you want - a bit like forming a club and defining yourself as "a labourite" or a "nationalist".
Heck, go one step further and break the mind away from the body entirely. I don't mean the way dualists do, but the way externalists do. Thus the mind is simply a different sort of thing that is related holistically to your environment. That's the externalist perspective. The brain is still what is most important, but our thoughts about things can't easily be separated from the things themselves. So mind-talk can't be finitely translated into brain-talk (to adopt a more Davidson-like language).
You could define yourself as a certain chain of events that happen to flow into a human body and out of it. That would be much harder to define than just "a body", but in a sense potentially even more true if it were defined tightly enough.
After all, in a few years you will have almost no atoms in common with the original you.
Anyway, I put some more on this on my blog - guess it will make a link here?
Genius, that's largely how the process thought folks do it (although they don't just limit it to a process like that; there are some even more expansive notions that make it more powerful).
I'm not a big process thought guy, if only because every time I reread Whitehead I get told that I still have him wrong.
I think that within analytic philosophy externalism is the most useful. I like Davidson's approach using translation and have argued it's largely Peirce's view as well. There are lots of meanings to externalism though in analytic philosophy. It's one of those confusing terms until you narrow down how it is being used.
1) Can you reference something on process theory regarding what you mean? (e.g. a web page or something)
2) Not sure I understand you correctly. Reading around Davidson...
I guess you mean the swampman (duplicate of you).
I guess the fundamental difference here would be asserting that the swampman doesn't have a soul and is not you? It sounds like a statement of faith to me...
I don't know of a good page. The closest I can think of is something I wrote last year comparing Peirce's and Davidson's views of mind. Davidson's anomalous monism can be captured in the following three statements:
(1) All mental events are causally related to physical events. For example, beliefs and desires cause agents to act, and actions cause changes in the physical world. Events in the physical world often cause us to alter our beliefs, intentions, and desires.
(2) If two events are related as cause and effect, there is a strict law under which they may be subsumed. This means: cause and effect have descriptions which instantiate a strict law. A 'strict' law is one which makes no use of open-ended escape clauses such as 'other things being equal.' Thus such laws must belong to a closed system: whatever can affect the system must be included in it.
(3) There are no strict psychophysical laws (laws connecting mental events under their mental descriptions with physical events under their physical descriptions).
Hey Richard,
Nice! I like this post. It's very similar to a thought experiment you can play around with in response to Searle's (crap) 'Chinese room argument': We ask Searle if he's conscious. He replies yes. We snip out one of his neurons, and I start following the instructions in my 'neuron simulation manual', taking the various inputs, running calculations and feeding the outputs back into the downstream neurons, etc., etc., as with your thought experiment, until there's only one (or no) original neuron left. (In order to give me a chance to keep up with his neural processes, he'd have to start out on a pretty powerful sedative or something!)
Since I'm not directly conscious of his thoughts (or so Searle's original argument seems to imply), his mind must no longer be associated with this system. I just have this sneaky suspicion that the simulated Searle wouldn't be willing to agree with this conclusion! I kind of suspect that if we asked him what he thought, he'd say: ''DON'T STOP SIMULATING!'' (I assume that his nervous system has some of the same self-preservation instincts hard-wired into it that mine does!)
I think it's natural to believe (as you apparently do?) that there's an isomorphism between mind and brain, in the sense that when one thought CAUSES a subsequent thought -- e.g. when an initial belief inevitably gives rise to belief in its logical consequence -- then the physical state associated with the first thought equivalently CAUSES the subsequent brain state associated with the second thought. In other words, a structural description of the evolution of our mind matches perfectly with a structural description of the evolution of our brain.
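Here's a minimal sketch of that structure-matching claim (the toy thoughts, brain states, and mapping are all illustrative assumptions of mine): a mapping from thoughts to brain states 'preserves structure' when evolving-then-mapping gives the same result as mapping-then-evolving.

```python
# A toy check of the proposed mind/brain isomorphism. All names here
# are illustrative stand-ins, not anyone's actual model.

# Mind-level dynamics: each thought causes a successor thought.
next_thought = {
    "it is raining": "the ground is wet",   # belief -> logical consequence
    "the ground is wet": "wear boots",
}

# Brain-level dynamics: each physical state causes a successor state.
next_state = {
    "N1": "N2",
    "N2": "N3",
}

# The proposed mind-to-brain mapping.
phi = {
    "it is raining": "N1",
    "the ground is wet": "N2",
    "wear boots": "N3",
}

# Structure preservation: mapping a thought to its brain state and then
# evolving the brain gives the same result as evolving the thought and
# then mapping it -- the two descriptions evolve in lockstep.
for thought, successor in next_thought.items():
    assert phi[successor] == next_state[phi[thought]]

print("The two structural descriptions match perfectly.")
```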
If this is a sensible approach, then perhaps there is some artificiality in drawing a boundary between 'mind' and 'what lies outside', but it starts looking like it might be able to link in with Platonic forms somehow: the mind becomes ''The form in the arrangement of things that necessitates (essentially) THIS subsequent evolution'', and the environment becomes those elements whose exact configuration ideally wouldn't matter.
It's interesting to think about the way our neurological activity is physiologically isolated from its environment (by the skull, by ion pumps on neuron cell membranes, etc) except for certain very narrow channels through which external influence is allowed to flow.
(Note that this way of looking at things would lead us to think, for instance, of thermal fluctuations inside the head as being part of the brain's environment - a part from which we have evolved to insulate the activity of our mind.)
Also, imagine two people talking in a room. We've been conditioned to talk of there being ''two separate minds'' associated with the room. But the ideas bouncing back and forth are a part of the system's NECESSARY evolution: it makes sense to think of the ''room's awareness'' as constituting a SINGLE mind of which either person's mind is just a part... :)
I don't think I'm doing justice to my ideas here - might leave it for another time :) probably better do some work!
Ciao!