Monday, November 01, 2004

Are Brains Computers?

Is the human brain a computer (i.e. a Turing Machine)?

That's the question we looked at in our final Logic lecture this year. I'm inclined to say "yes, it must be", because I can understand (roughly) how computers can work things out, but I can't imagine how a non-computational process could do the same. That might just be ignorance on my part - if anyone else out there has an explanation of non-computational 'thinking', do tell me about it!

Anyway, our logic lecture was about the implications of the Halting Theorem, that is, the proven fact that no Turing Machine can possibly solve the Halting problem. In brief, the Halting problem is: given any Turing Machine's instruction table and initial input, determine whether that machine will eventually halt, or whether it will run forever (i.e. get stuck in an infinite loop). No computer can solve this in the general case (though of course specific instances can be solved).
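
To see why, here is the standard proof sketch recast as Python (a sketch only - halts() below is the hypothetical oracle whose impossibility is being demonstrated, and all the names are mine, not anything from the lecture):

    # Suppose, for contradiction, that someone hands us a halting oracle.
    def halts(program_source, input_data):
        """Hypothetical: True iff the program halts on that input."""
        raise NotImplementedError  # no such function can exist - see below

    def troublemaker(program_source):
        # Ask the oracle about a program run on its own source code...
        if halts(program_source, program_source):
            while True:  # oracle says "halts"? Then loop forever.
                pass
        else:
            return  # oracle says "runs forever"? Then halt at once.

    # Feed troublemaker its own source. If halts() answers True, then
    # troublemaker loops forever (so the answer was wrong); if it answers
    # False, troublemaker halts (wrong again). Either way the oracle
    # errs, so no correct halts() can be written.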

Yet there seems no reason to doubt that the human mind is capable of solving the Halting problem. It seems possible that there could be some human uber-Einstein such that you could give him any Turing Machine whatsoever, and he could (eventually) work out whether it would halt on a given input. (Of course, if it's an excessively complicated program, he might require a million years to solve it, but if we allow for such idealised conditions, there seems no reason to think that anything is in principle impossible for our uber-Einstein.)

So, since we know that some mathematical problems cannot be solved computationally, yet there seems no reason to think that they cannot be solved by humans, it would seem to follow that the human mind must not be a computer. At least, that's the argument our lecturer gave, and apparently it's quite a popular one, made most recently by Roger Penrose (though I haven't read him yet).

I don't think it's a very good argument though. It strikes me as entirely question-begging. I would dispute that there is "no reason" to doubt that humans can solve any mathematical problem. I think the Halting Theorem provides us with good reason to doubt it. If something cannot possibly be achieved (even in principle) by computational processes, then that provides us with reason to think that humans cannot do it either. After all, there seems no reason to doubt that the human brain could be simulated by a computer! (I guess it's the old case of "one man's modus ponens is another man's modus tollens".)

I did suggest that counterargument to my lecturer, and he wasn't convinced. He assumed the anti-computationalists had the more intuitive position, so the "burden of proof" rested on us. I disagree, of course. What we have here is an inconsistent triad, and all that is agreed upon is the first claim: that a Turing Machine cannot solve the Halting problem. Of the other two, either humans also cannot solve it, or the human brain is not a Turing Machine; but I don't think that either of these positions is more intuitive than (or privileged above) the other. So I don't think you can use either to argue against the other - not without begging the question.

But could it be computers, plural?

Here's the really interesting part. Even if we accept that the Halting Problem shows that the human brain is not an (individual) Turing Machine, that still leaves open the possibility that it could be several Turing Machines (in sequence). Indeed, this is the position Turing himself advocated.

Mathematicians cast about for new methods of proof when necessary. Perhaps a Turing Machine could do the same thing? That is, faced by a problem it cannot solve, it might rewrite its own code to become a new Turing Machine. Such a sequence of TMs could (in principle) solve the Halting problem, so long as the sequence could not be generated by a single TM (since otherwise that single TM could solve the Halting problem itself, and we know no single TM can do that). In other words, the sequence must be uncomputable; the transformation from one TM into another must be partly random.
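
A toy way to picture this (my own illustration, under the assumption that we somehow had access to the uncomputable part): think of each rewrite as granting one more bit of uncomputable "advice", where the n-th bit records whether the n-th Turing Machine halts. Given such a sequence, answering the Halting problem is trivial; all the uncomputability is hidden in producing the sequence:

    # Enumerate all Turing Machines as M_0, M_1, M_2, ...
    # Imagine each rewrite in the sequence supplies one more advice bit,
    # with advice[n] == True iff M_n halts on input n. No single program
    # can generate this advice string (that would solve the Halting
    # problem outright), but *given* it, the answer is a mere lookup:

    def halts_with_advice(n, advice):
        # All the real work is hidden in how 'advice' came to exist.
        return advice[n]

    # A lone Turing Machine cannot produce 'advice'; an uncomputable
    # (e.g. partly random) sequence of machines is not so constrained.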

So we can overcome the inconsistent triad by suggesting that the human mind could be an uncomputable sequence of Turing Machines. This would seem to require that the world be indeterministic on some fundamental level (in order to yield the 'randomness' necessary to ensure that the sequence is uncomputable). Given quantum physics, this is likely the case. But it does seem an interesting consequence of this view that if determinism is true, then human mathematicians can only ever hope to solve that which is computable. If the world is indeterministic, however, we may be able to transcend such constraints.

9 comments:

  1. The human brain is not a mechanistic "computer" but a complex "neural net". If it is fully deterministic then it must have a complete "table" and by definition is a Turing machine. If there is some level of indeterministic behaviour it is most unlikely the brain would "halt" (it will presumably branch "randomly" to one of several possible new states) and this would mean it wasn't a Turing Machine.

    If the human brain can solve the halting problem then it must be a non-Turing machine and the halting problem is obviously computable by such a non-Turing machine. On the other hand the halting problem may be non-computable by any finite machine and the human brain may be a fully deterministic Turing machine. Stating that there is no reason to believe that the human brain cannot solve the halting problem is a long way from proving that it CAN.  

    Posted by Greyshade

  2. I took a philosophy of mind course last year and one thing that always struck me as odd was the claim that "the brain" is structurally/functionally akin to a computer. Now, I realize there are obvious parallels between the two, but I also think there are obvious (and important) dissimilarities. For example, if our 'mental life' (i.e. conscious and unconscious experience) is taken to be either identical to or supervenient upon 'the brain' (as many would suggest), I find it striking that even the most sophisticated computers would seem to lack the affective/evaluative nature of our mental awareness. This is obviously no minor difference. How would a proponent of the view that brains = computers try to account for it? 

    Posted by Daniel

  3. A "Von Neumann Architecture" computer running a fixed program is going to be quite different to a massively parallel network running a self-modifying, learning AI program designed to simulate a human brain. The problem with the question of whether such an AI can exhibit "real" intelligence, emotions, self-awareness, etc is defining them and coming up with experiments by which they can be tested. We might, for instance, have a contest between a human an an AI to compose a poem or a piece of music on the subject "Who am I?" but would we judge the results on the basis of human or AI response to them. It would seem to me that a fair appraisal would be possible only if a hypothetical silicon-chemistried (or whatever) space alien was available as a judge.

    For a long time we thought that the "art" of playing grand-master level chess would be beyond any computer because of the limited search depth. It turns out that computer programs can beat human grand masters rather easily. This is not because they play grand master chess but because they play a simpler, error-free game, and human chess players (even at the highest levels) make far more errors than we realised but get away with most of them because their opponents don't spot the error either - unless the opponent is a computer.

    Posted by Greyshade

  4. Daniel - Note that by 'computer' I really just mean an abstract symbol-manipulator. The 'Von Neumann' computers we're used to are just one form of this, as Greyshade notes, and are very different from the brain. So that is not what I meant by 'computer'.

    Taking computation to mean the manipulation of representations (i.e. symbols), it does seem to me that this is how our minds must work. We represent the world through our beliefs, desires, and other mental states, and these interact through various computations. I'm not sure how else it could be. (Though as I said, I would like to hear about the alternatives.)

    As for consciousness, I'll be posting a bit on Dennett's views of that over the next few days. In short, he basically thinks it's a sort of "virtual machine", i.e. special software which is run on our neural hardware (brain). No existing computers have software anything like it, but it's not in principle impossible for them to do so.

    Greyshade - I agree that it's not something we can judge objectively, but I don't really have a problem with judging AIs from our biased human standpoint. 'Intelligence', 'emotion', etc. are human concepts, so I'm happy to let us decide where they apply. If a robot behaves in such a way that it strikes us as useful and appropriate to attribute these abilities to it, then we should do so. If not, then we shouldn't. It's less than ideal, I suppose, but I'm not sure what else we can do here.

    P.S. Are non-Turing machines still 'computers' (in the symbol-manipulation sense), or are they something else? (I really don't know anything much about them.) 

    Posted by Richard

  5. Richard - In my post I was assuming that what you meant by 'computer' was something like an "abstract symbol manipulator" and not my PC :) However, it seems that my conscious experience is not *merely* a matter of symbol manipulation - much of it is, I agree, but it's hard to see how *desires* and *emotions* fit the category. Though affections and evaluative states are (often) intentional and are "directed" at an object (or symbol), they are not merely such. Obviously I need (and want) to study some cognitive science and AI work, and this may help alter my deeply embedded "intuitions" about conscious experience. But then again, does the cognitive scientist/philosopher really know more about what it is *like* to be a human being and have conscious experience than I do? ...I guess my affinity with some of Chalmers' work is becoming obvious :) 

    Posted by Daniel

  6. Greyshade - A couple thoughts... First, are not emotions, desires, evaluative sensibilities, etc simply *different kinds of things* from anything that would go on in a massively parallel network running a self-modifying, "learning" program? My point is just that (e.g.) learning (or being taught) how to play chess seems to be a very different *kind of thing* from having affective and evaluative conscious experience. Second, how would we *test* other human beings? Is there a (realistic) chance we will be able to know what a person *believes* and is phenomenologically experiencing?

    I would be very interested in such developments. Some links or article/book recommendations would be greatly appreciated. 

    Posted by Daniel

  7. Daniel
    We can only speculate. The problem is we have to understand both the brain and the hypothetical AI before we can address the problem theoretically. Obviously playing chess (the way a computer plays) is a highly mechanical activity. Human chess players use quite different strategies which probably include semi-intuitive rules for evaluating a game position. Top chess players have been shown to have exceptional ability to remember a chess game position when shown it briefly, but do no better than the general population at remembering positions where the pieces are placed randomly. I included it as a cautionary tale. A lot of knowledgeable experts recognised that the human chess game is much more complex than finite depth minimaxing and therefore concluded that computers (at least the conventional computer chess program) would not be able to play top-level chess. The premise that human and computer chess are different was (probably) correct but the conclusion that computers could not play at Grand Master level was wrong. The "dumb but reliable" searching computer algorithm outperforms the much more elegant but error-prone human "art of chess".
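
    (For concreteness, here is roughly what "finite depth minimaxing" means, as a Python sketch - the chess-specific helpers moves(), make_move() and evaluate() are left abstract and would need real implementations:)

        def minimax(position, depth, maximising):
            # Fixed-depth search: recurse 'depth' plies, then fall back
            # on a static evaluation of the position.
            if depth == 0 or not moves(position):
                return evaluate(position)
            scores = [minimax(make_move(position, m), depth - 1, not maximising)
                      for m in moves(position)]
            return max(scores) if maximising else min(scores)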

    As far as emotions, desires and evaluative sensibilities are concerned, we can certainly postulate that they can run on the right sort of computer architecture. If we accept that the human brain IS a massively parallel network then the fact that they exist in the human brain proves that they can run on such networks. The only alternative is to invoke some sort of vitalism. Of course, the "real" nature of emotions, etc may be quite different to what we think they are.

    Richard - I don't know any formal stuff about non-Turing machines but assume that the essential feature would be that state transitions are not (all) strictly determinable. This can happen with networks of computers with unsynchronised clocks and is more or less assumed in parallel systems even with synchronised clocks. Typically subtasks are assigned to processing units as they become available and we assume that the exact time taken by each subtask is not known in advance. A computer program running on a single PC would also be a non-Turing machine if it responds to an analog input such as a temperature probe.

    Posted by Greyshade

  8. A more efficient model of modern computers would be an abacus machine anyway.

  9. It is most useful to use idealized concepts like Turing machines and ideal minds. The ideal mind concept was hinted at by "an uber genius who had a million years to solve the problem" but it should really be a person with an infinite IQ, and maximized {whatever it is that humans have} as well as infinite time etc. Then compare that to a Turing machine. Von Neumann computers versus neural nets would be identical in this idealized sense if we are talking about neural nets simulated on Turing machines. If we mean neural nets with infinite precision, then we get into hypercomputation, which cannot be fully simulated on a computer.

    Many hypercomputation devices (going beyond Turing machines) have been formulated theoretically, but they cannot be built in the real world. People are "built" in the real world, but I imagine that if people have a way to get outside of themselves even in the most subtle way to get answers for things that they cannot compute (true intuition), then that may explain why we are non-computational.

    If, however, we are Turing machines, then the vast transfinite infinity of problems unsolvable by machines would apply to us as well... only an infinitesimal fraction of problems are decidable by machines, and the same would be true of humans.

    I do not buy arguments that humans cannot solve undecidable problems because we would need an algorithm to do it and that is by definition impossible. We clearly have the ability to perform semantic reasoning and that is, in an ideal sense, uncomputable...so the question becomes: can non-algorithmic language (for example the language we are all using to make our cases) have precise meaning to humans?

    This gets into Berry's paradox: "A phrase under 1000 characters that denotes the smallest number, k, that cannot be denoted with 1000 characters or less." The quoted phrase denotes such a number, k, despite itself being under 1000 characters - and it manages this, even when we are limited to a finite combination of symbols, because the process of denoting is non-algorithmic. Is such a word as "denote" merely ambiguous or is it truly uncomputable? The same question applies to "meaning" and understanding. I am absolutely convinced that idealized minds cannot be simulated by Turing machines...what is not so clear is whether physical humans have such abilities. But perhaps humans are to idealized minds what physical computers are to Turing machines.

    -Gary Geck

