August 21, 1983

Exploring the Labyrinth of the Mind

By JAMES GLEICK
From The New York Times Magazine

YOU'RE LOOKING AT A NEWSPAPER comic page and your eye falls on today's jumble. It's an anagram puzzle. You have to turn a few scrambled letters into a word. LOONDERK. A tough one. KRONDOLE. KNOODLER. Close. Patterns form and re-form in your mind. Actually, in this case there isn't even a word there, but at least the patterns look like words. Implausible combinations like EOKDNLRO and NRDOEOKL never leap to mind. This isn't like doing arithmetic - there are no rules to tell you how to make these patterns. No conscious logic decides how to tear the letters apart and put them back together. It just happens, with a delicacy that belies the power of the decision making. The regrouping is fast, subtle and fluid.

Or so it is when Douglas R. Hofstadter, computer scientist and Pulitzer Prize-winning author, does Jumbles. ''I have an unbelievably rapid way of exploring the space,'' he says, writing KNOODLER on his blackboard at Indiana University. ''These words just appear in front of me. Then something else appears, or two or three things, over and over again, new possibilities and new combinations - and always English sounds or close to them.'' He looks at the letters. A grin appears and disappears under a mass of unruly black hair. ''I don't make any conscious decisions - I don't say, well let me try this, let me try that. Instead, instantly, the whole word is built in my mind - like that.''

In the blossoming field of artificial intelligence, where scientists are trying to make computers simulate sophisticated human abilities, Hofstadter's colleagues have little interest in trivia like Jumbles. Many of them are working on expert programs that can prospect for oil or diagnose diseases, and they have made great and well-publicized strides that just a few years ago would have seemed inconceivable. It's no longer astounding to hear about computers imitating anything from a psychiatrist to a schizophrenic. Yet some of the abilities that add up to intelligence - abilities as simple as recognizing the letter A, or predicting the next number in a sequence (1-2-2-3-3-3-?), or doing Jumbles - have stayed as mysterious as ever. Generally, what people can do without thinking, computers cannot do at all.

So Hofstadter is writing a computer program that will try to unscramble Jumbles. In one way, it's a trivial problem. It would be easy to let a computer solve Jumbles by mechanically listing every possible permutation of the letters and checking the results against a dictionary of English words. A program like that, relying on raw, stupid computing power, wouldn't even qualify as artificial intelligence - it would be like untying shoelaces with a buzz saw. Hofstadter wants his program to do its thinking the same way he does, deep below the level of consciousness, without logic but with fluidity. He wants a program with an understanding of how words are put together - a program that won't waste a millisecond on ODKNRLEO, but will pause seriously to consider KNOODLER. Above all, he wants the mental juggling and the flash of inspiration.
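The brute-force method he dismisses is easy enough to sketch in today's terms. Here is a minimal illustration; the tiny WORDS set is only a stand-in for a full English dictionary, and the function name is invented for the example:

```python
from itertools import permutations

# The "buzz saw" approach: generate every permutation of the scrambled
# letters and keep the ones that appear in a word list. WORDS here is a
# toy stand-in for a real dictionary.
WORDS = {"listen", "silent", "enlist", "tinsel"}

def solve_jumble(scrambled):
    """Return every dictionary word that uses all the scrambled letters."""
    candidates = {"".join(p) for p in permutations(scrambled.lower())}
    return sorted(candidates & WORDS)

print(solve_jumble("NETSIL"))  # → ['enlist', 'listen', 'silent', 'tinsel']
```

Note the cost: eight letters already mean 8! = 40,320 permutations, and nothing in the loop ever prefers KNOODLER over ODKNRLEO - exactly the stupidity Hofstadter wants to avoid.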

With his Jumbles and with some other programs, equally unprepossessing, Hofstadter is reaching for the very heart of thinking. He is trying to simulate not the most sophisticated thought processes, but the most basic. His developing theory of the mind - and it is a real theory, although his conversation and his writings often seem as wild and multidirectional as a fireworks display - has begun to draw a wide following through two successful books, including ''Gödel, Escher, Bach,'' and a monthly magazine column, with a collection of essays to come in late 1984. And Hofstadter, in lectures around the country and in an especially provocative paper not yet published, has also begun causing a stir in the academic world - not in his own field, where his ideas are far from popular, but among some philosophers of mind, who believe he is claiming a territory all his own at the increasingly busy crossroads of artificial intelligence, neuroscience and philosophy.

The three fields are coming together as never before, providing new approaches to some of the most fundamental questions of how the mind works. When you think about yourself, what is being thought about and what is doing the thinking? Can machines be taught the most human of human traits - creativity, inspiration, imagination? How does a brain of neurons and synapses come to be aware of itself as a mind? In seeking answers to such questions, Hofstadter - a 38-year-old associate professor of computer science with a background in mathematics and physics, a love of music and language and a weakness for puns - is an unlikely philosopher. But in his own modern way, he is reinventing the human soul.

''A lot of people believe that there is nothing going on when you perceive,'' Hofstadter said, stepping away from his blackboard to a desk covered with papers and unanswered mail. ''They say: 'I see a book there. It's instant! It doesn't take any time at all! There can't be any processing or computing going on there. It's just obvious.' '' Yet whatever unconscious process manages such tasks is so subtle and so powerful that it has eluded the best efforts of artificial intelligence. ''Memory, too,'' he said. ''To see something you haven't thought of for years float effortlessly to the surface of the mind is a great mystery.''

When Hofstadter looks at KNOODLER, so frustratingly close to a word and so unexpectedly elegant that he is reluctant to take it apart and try something else, it reminds him of a summer day in 1966 when he looked in all the streets and shops of a small Danish fishing town and then stood waiting for a friend on a long and beautiful pier - reluctant to give up and look elsewhere. Somehow the two moments, separated by 17 years, had exactly the same emotional flavor. ''But when you try to describe the connection,'' he said, ''it's unbelievably abstract - so abstract you couldn't in 10 years get a program to see the connection.'' There are no conscious rules for it, just as there are no conscious rules for retrieving a tip-of-the-tongue piece of memory that is just out of reach. But some part of the mind is doing some very hard work.

Perception. Memory. Analogy. Regrouping. ''Abilities to do very simple things, to take things apart and put them back together again in new ways, that's so much at the root of creativity,'' Hofstadter said. ''When a composer like Bach composed fugues, you can practically see the wheels churning. You can see Bach taking things apart and putting them back together - you can see that incredible fluidity.'' He is trying to build that same fluidity into his computer programs, by organizing them the way he believes the human mind must be organized.

''THESE ARE VERY EXCITING times in philosophy of mind and artificial intelligence and the neurosciences, which are also exploding,'' said Paul M. Churchland, professor of philosophy at the University of Manitoba. He has been working at the Institute for Advanced Study in Princeton, N.J., where Hofstadter addressed a conference early this spring.

''In the last 10 years, maybe only in the last two years, they've seriously discovered one another, and Hofstadter is doing a great deal to make it one great interactive ball of activity. So much that the suspicion is that there are three Doug Hofstadters working in cycles to produce all the work he does.''

The challenge presented by the revolution in artificial intelligence is to show how one might create a mechanical model for the mind - and not just any model, but one that expresses all our wonder at the spark of human inspiration and the power of human will. Although that is a distant goal for computer scientists, many philosophers are intrigued by theories that have already begun to develop out of their work. For anyone, in fact, who thinks about thinking, the ferment is turning these fields into a lively spectator sport.

There are plenty of spectators. Hofstadter's first book on these matters, ''Gödel, Escher, Bach: An Eternal Golden Braid,'' published by Basic Books, won the 1980 Pulitzer Prize for general nonfiction and then went on to indisputable distinction as the hardest-to-read book ever to spend five months on the trade paperback best-seller list. The book was initially turned down by several publishers, including Indiana University Press, where a reader called it ''a formidable hodgepodge,'' and it was widely misunderstood even by some reviewers who admired it. It's no wonder. The book is a richly woven enigma, exploring Bach's fugues, M. C. Escher's drawings and Kurt Gödel's notorious Incompleteness Theorem, the ultimate spoilsport of modern logic, which declares that any attempt to build a complete and consistent logical system will inevitably be ruined by undecidable propositions. Riddled with wordplay, mixing mathematical discursions with fanciful dialogues, ''Gödel, Escher, Bach'' carries readers through the central problems of contemporary philosophy of mind.

The book has sold well over 300,000 copies in hard cover and paperback, and is now bringing sweat to the brows of translators into Japanese and a half-dozen European languages. Hofstadter has also continued to reach thousands of readers in an unusual Scientific American column devoted, apparently, to nothing more or less than the idiosyncratic world view of Douglas Hofstadter. His publisher, Martin Kessler of Basic Books, plans to bring out a collection of his essays next year - ''precisely,'' as Kessler put it, ''in order to define what might be called the Hofstadter perspective.''

Despite his enormous popular appeal - and in some cases, because of it - Hofstadter has put himself at odds with the more practical segment of his own field of artificial intelligence. He is well aware that academics tend to mistrust new ideas arriving in splashy guises. But he persistently plays up to that prejudice by filling his writings with talking animals, bizarre coinages and more than any man's fair share of bad puns. It doesn't help that he writes scholarly articles with titles like ''Who Shoves Whom Around Inside the Careenium?'' in the form of dialogues between Achilles and Tortoise - favorite Hofstadter characters borrowed, via Lewis Carroll, from an ancient Greek paradox of infinity. Nor did it help that he wrote Scientific American's cover story on Rubik's Cube, just before it became a pop culture craze.

Hofstadter has also become a player in a profound debate that is taking shape in artificial intelligence, between technologists and scientists, between the pragmatic and the theoretical. Some programmers have managed to keep a foot in each door. But many have turned from pure science to programs that accomplish a specific high-level task, successfully simulating some piece of intelligent behavior. These ''expert'' programs are best suited to meeting the immediate needs of industry, and many computer scientists have left the universities to join private companies with names like Symbolics and Teknowledge.

In the meantime, there is much disagreement inside and outside of the technical community about just what computers can do. ''These are days of hype about computers,'' Hofstadter said. ''People are being asked to change overnight from a view of computers as basically stupid to the idea that computers are our partners in evolution. Not enough people are saying, wait a minute, how do we really think, what is consciousness, where does our sense of self come from.''

None of which is to dispute the value of the less theoretical contributions in the field. After all, the development of the jet plane may not have contributed much to our theoretical understanding of how birds fly, but it hasn't been without a certain practical utility.

If any one person was responsible for the growth of artificial intelligence over the last decade, it was Marvin Minsky, Donner Professor of Science at the Massachusetts Institute of Technology, where the computer science department has suffered heavily from the lure of industry. Minsky, a pioneer in the field since before it had a name, describes the new debate in terms of what might be called the Bird's Nest Problem.

Like Hofstadter, Minsky has a vision of artificial intelligence striving to simulate the mind in all its richness. He regrets the preference of many in his field for programs that can do commercially useful ''performance'' tasks that seem complicated but sidestep the most fundamental problems. Such programs can be refined endlessly and taught ever-fancier tasks, but they remain too stolid to adjust to the kind of unpredictability that the real world is so fond of throwing in our paths. An inspired chess-playing program might be able to trounce a good player, but change the rules a little - let knights move twice, or let pawns move backward, or make the board 9 by 9 - and the machine will be at sea, while the human player will manage to cope.

Expert systems are by no means the whole story of artificial intelligence these days. Many scientists have continued to work on issues of language, of learning, of planning. But none have managed to teach a machine to handle the variation and complexity of real life. That's the Bird's Nest Problem.

''Nobody's ever tried to make a machine that could build a bird's nest,'' Minsky said. ''Instead they're all out there in factories assembling motors. People say, oh yes, the bird gets straws and it sticks them in the nest and glues them in. But a motor is designed to be put together. The debris lying around on the floor of a forest isn't designed to be made into nests.''

Industry needs programs that work now, not programs that point the way to a cognitive science of the future. And the quickest way to make a program accomplish a sophisticated task is to write a sophisticated set of instructions for the computer to follow, step by step, one after another. An expert system takes a particular kind of expert behavior - medical diagnosis, for example - and imitates it, with the help of rules abstracted from the way people perform the same tasks. A diagnosis program might tell the computer to begin with certain questions, just as a doctor would. Then a particular response might guide the computer to a particular set of follow-up questions. The paths of questions and answers can go in an intricate variety of directions.

The patient's responses can be analyzed against a data base containing information about vast numbers of diseases. A world of medical experience can be written in, and the results can be truly impressive. They are leagues beyond the routine programs used by scientists and businesses for calculating, sorting or filing. But they don't match the creativity of even the most pedestrian doctor.
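The rule-following style described here can be caricatured in a few lines of code. Everything below - the rules, the symptoms, the conclusions - is invented purely for illustration, not drawn from any real diagnostic system:

```python
# A toy expert system in the style the article describes: each rule
# pairs a set of required symptoms with a conclusion, and the program
# simply fires every rule whose conditions are all present. The rules
# are made up for the example and are not medical knowledge.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "stiff neck"}, "possible meningitis - seek care"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are a subset of the symptoms."""
    findings = [conclusion for conditions, conclusion in RULES
                if conditions <= symptoms]
    return findings or ["no rule matched"]

print(diagnose({"fever", "cough", "headache"}))  # → ['possible flu']
```

The limitation Schank complains about below is visible at a glance: the program can never conclude anything a programmer did not write into RULES, and it has no way of forming a rule of its own.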

''The problem is,'' said Roger C. Schank, head of Yale University's artificial intelligence laboratory, ''what you've done at that point is just written down a set of rules. You haven't got a system that can then form its own rules. What you get now are machines that are intelligent enough to do some stuff, but not intelligent enough ever to surprise you.''

Schank, who has also formed a private company, Cognitive Systems, agrees that expert systems are leaving the most important issues of intelligence untouched. He believes the answer is to keep writing rules, but more flexible ones - rules that will tell the computer how to learn and change. As long as programmers provide the rules, however, the problem of predictability seems inescapable. All the initiative, all the goal setting, all the things that resemble free will come from outside the machine.

Hofstadter describes a different approach, based on his view of the subconscious processes of our own minds. Reasoning comes not first but last. Instead of beginning with an overall algorithm, or set of rules, he begins with many small pieces of computer code acting almost independently. ''You don't write the thinking algorithm. It's not that you dictate from on high, first this will happen, then this will happen and so on. What you do is, you write a lot of algorithms for little teeny structures and then you allow them to interact in a certain way. You also write the algorithm for how they interact, but you let them, in some sense, swim and interact together. In essence, you let them nondeterministically interact with each other, and it's the sum total of how they work together that creates intelligence.''

In the anagram program, which he calls ''Jumbo,'' one tiny part - a ''spark'' - might pick a couple of letters and put them together. Simultaneously another spark might be looking at other letters, or groups of letters.

Meanwhile, a higher-level part - a ''flash'' - might be checking a couple of sparks. ''There are quick tests for affinity and slightly longer tests for affinity. You can imagine such tests at all levels.''

All the time, groups of letters might be formed and broken apart again until gradually, out of the simultaneous swimming together of the many parts, a pattern begins to emerge. No one is telling the computer to create a certain kind of pattern. No one knows exactly what kind of pattern will be created. It just happens.
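The article gives no code for Jumbo itself, but the flavor of the architecture - many tiny independent actions run in random order, with affinity tests guiding which groups merge - can be loosely sketched. Everything below, from the consonant-vowel affinity rule to the probabilities, is an invented illustration, not Hofstadter's program:

```python
import random

# A loose sketch of the "sparks" idea: each spark is one tiny action
# that either glues two letter groups together or breaks one apart.
# A crude consonant-vowel alternation test stands in for Hofstadter's
# "quick tests for affinity". All parameters are invented.
VOWELS = set("aeiou")

def affinity(a, b):
    """Prefer pairings that alternate consonants and vowels."""
    return 1.0 if (a[-1] in VOWELS) != (b[0] in VOWELS) else 0.2

def spark(groups, rng):
    """One tiny action: maybe merge two groups, maybe split one."""
    if len(groups) > 1 and rng.random() < 0.7:
        a, b = rng.sample(groups, 2)
        if rng.random() < affinity(a, b):   # merge if they "like" each other
            groups.remove(a)
            groups.remove(b)
            groups.append(a + b)
    elif groups and rng.random() < 0.3:
        g = rng.choice(groups)
        if len(g) > 1:                      # break a group back apart
            cut = rng.randrange(1, len(g))
            groups.remove(g)
            groups.extend([g[:cut], g[cut:]])

def jumble(letters, steps=2000, seed=0):
    """Let many sparks act on the letters; no global plan is imposed."""
    rng = random.Random(seed)
    groups = list(letters)
    for _ in range(steps):
        spark(groups, rng)
    return groups

print(jumble("oknodler"))  # the letters end up regrouped into larger chunks
```

Nothing in the loop dictates what pattern will emerge; the groupings are the residue of many small, locally motivated merges and splits - which is the point of the design, however crude the sketch.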

Or would, if the program worked as well as Hofstadter hopes it will. So far, it doesn't, and most of his colleagues - including Schank - believe that they are on firm ground in viewing his approach with skepticism.

But some, like Minsky, believe that history will be on Hofstadter's side. They argue that expert systems, no matter how impressive they seem, will be a dead end, never learning to find the deep connections between concepts, recognize patterns, carry tasks beyond the instructions set for them.

''Somebody's got to spend a few years asking just what does it take to do the things we take for granted,'' Minsky said. ''That's what Hofstadter is doing. He's one of those people of whom, 50 years from now, they'll say he was on the right track and they should have listened more.

''Hofstadter's philosophical ideas on how the mind works are just about the best in the world today,'' Minsky said. ''He's laying out the future - and people are not reacting because there are too many details to do first.''

WHEN HOFSTADTER IS home in Bloomington, he rides his bicycle between a cluttered Indiana University office and his even more cluttered house, where he lives alone. As any Hofstadter reader would instantly see, the clutter is the same stuff that spills off every page of his writing. Escher reproductions on the walls. Bach and Chopin scores on the piano. Dozens of Rubik's Cubes and cube spinoffs, including a 5-by-5-by-5 version that he is not quite ready to scramble. Tortoises everywhere: porcelain tortoises, metal tortoises, wooden tortoises, even a tortoise footstool on which Hofstadter perched happily when I went to see him. ''It's a friendly beast,'' he said.

His living-room floor was piled with letters from readers. He gets 40 to 60 a week, mostly in packets forwarded by Scientific American. A minister wrote enclosing a sermon he had delivered about ''Gödel, Escher, Bach.'' A photographer said he had bought a Chopin recording after reading Hofstadter's column on Chopin and the spirit of Poland. But most were just people asking for guidance on questions as simple and as profound as one at the top of the pile, from a man suddenly struck with helpless curiosity about the nature of the difference between right and left.

''They all want me to say something to them - why is the universe the way it is,'' Hofstadter said. He cannot begin to answer them all, but he has been in love with such mysteries all his life.

Douglas Richard Hofstadter was born in New York City on Feb. 15, 1945, and grew up near Stanford University, where his father, the Nobel Prize-winning physicist Robert Hofstadter, still teaches. ''As a child growing up in a family where physics was being done, I was incredibly absorbed by words like photon and neutrino,'' Hofstadter said. ''I thought the most exciting thing in the world would be to be an antineutrino - the idea was too mysterious for words. And from the beginning I was fascinated by numbers - each number, it seemed, had some kind of magical property, and there was a sense of mysticism about it all, of being in tune and in touch with God.''

He threw himself into mathematics and languages and later the piano. When he was still in high school, and computers were two decades away from becoming a household item, he managed to get access to Stanford's. ''I was a cocky young kid, and if I could push the button, I would sneak in there and do it. I'd have my program on cards, put them in the hopper and watch the line printer chunking along. Chunk, chunk.''

Hofstadter went to Stanford, graduating in 1965, and eventually began a miserable struggle with his father's specialty, particle physics, at the University of Oregon. By the time he got his doctorate, he knew he was no physicist. ''I was a recursion-crazed mathematician who happened to have found the right problem in physics, and I began training myself to be an A.I. person: what I had always been if I had been paying attention.'' He was also already writing out in pen what eventually became ''Gödel, Escher, Bach.''

The book was published in 1979, and its huge popular success astonished both Hofstadter and Basic Books, which had brought out a modest first printing of 5,000. It did little for Hofstadter's relations with the orthodox academic world. ''When 'G.E.B.' started appearing on the paperback best seller list, I was pleased and appalled - appalled because it was making my name mud in academia. My chief rivals were 'The Joy of Sex' and a book on how to have thinner thighs. Then there was a Tom Robbins book, 'Still Life With Woodpecker' - for weeks he and I jockeyed back and forth. It was a terrible case of mixed feelings. I wanted to get ahead of him, but at the same time I was ashamed of being on the list.''

''I never expected the cult reaction,'' he said. ''The reaction from fans, fanatics. And yet I never expected that it would be so ignored by A.I. people. I've felt very cold-shouldered by the A.I. world.

''A lot of people in A.I. come from a mathematics or logic background. They're interested in deduction, and they're bewitched by the glitter of a fancy expert system.''

For philosophers who believe that intelligence can be mechanized, a stumbling block has always been the question of who will be doing the programming for an intelligent computer - when a machine is thinking, who is telling it what to think? Where is the ''I''? It is a machine version of one of the most ancient philosophical conundrums - a mind-body problem in the tradition of Plato and Descartes.

Some philosophers of mind have lately become fascinated with a possible road to an answer, beginning with the fundamental problems that have persistently defied programmers. Take the letter A. Hofstadter has hundreds of them on his office wall, in a poster cataloguing a variety of typefaces. They are all different, but they are all A's. How to recognize letters in any of the many shapes they can take was one of the classic early problems of artificial intelligence. But it was largely abandoned in favor of programs that could read a narrow range of specially designed characters, like the numerals on bank checks. The computer program has not yet been written that can tell any recognizable A from any B.

What is an A, anyway? The basic form buried in most people's minds seems to be a pair of slanting uprights and a crossbar. Yet people identify A's in limitless incarnations, with curved lines or broken lines or double lines, with flourishes and curlicues, upside-down or sideways, black-on-white or white-on-black, with or without uprights and crossbar. The process is instantaneous, and it is easy to suppose there is nothing to it - until you try to teach a machine to do it. In a real sense, to solve the letter-recognition problem would be to solve the whole problem of perception.

More than that, the exploration of processes below the level of conscious thought may begin a path to the deeper problems of mechanizing inspiration and self-awareness. That is the prospect that so tantalizes philosophers.

Some of these issues are raised in a new paper by Hofstadter with the uncharacteristically forgettable title ''Artificial Intelligence: Subcognition as Computation.'' It will not be published until fall, but copies have already percolated through several layers of the academic world, stirring special interest among some philosophers of mind.

''It impressed me enormously,'' said Churchland of the University of Manitoba. ''It turned around in one fell swoop any tendency I might have had to think of Doug as a popularizer. He's standing back a few steps and taking a large look at the course of A.I., remarking on where its successes have come and where the frustrating failures have come. And he has offered a suggestion on how the barrier might be broken through.''

Hofstadter argues that artificial intelligence has been caught up with mimicking logic and deduction, at the expense of the more mysterious processes of subcognition. It is a sharp critique. ''It is my belief,'' Hofstadter says, ''that until A.I. has been stood on its head and is 100 percent bottom-up, it won't achieve the same level or type of intelligence as humans have.''

Needless to say, in the artificial intelligence world, the paper's reception has not been warm.

The response from the technical community generally runs something like this: Hofstadter has demonstrated no useful working program. He makes strong claims about where true intelligence will and will not be found, but does not back them up with technical work. He offers theories that appeal to philosophers, but philosophers do not have the same need for scientific proofs.

One particularly successful scholar of artificial intelligence is Allen Newell of Carnegie-Mellon University. ''He's trying to make the case that intelligence is somehow emergent out of the lower-level stuff,'' Newell said. ''But I don't think he has produced a technical sort of proposal there to support the rhetoric.''

In Newell's view, the current approach to his field is succeeding - it is where almost all the progress has been made. Hofstadter's approach, he says, is plausible, but only plausible. ''One can certainly have the hypothesis,'' he said. ''And in fact that may be right. But I don't know of any evidence for it in the way Doug talks about it in that paper. He wasn't actually providing enough technical stuff.

''There's a coin of the realm in A.I. with respect to running programs that demonstrate things. And of course that attitude doesn't exist in philosophy - philosophy has its own ethos.''

It is true that philosophers, even those drawn to the ideas coming out of computer science and artificial intelligence, do not place a high premium on experimental proofs. ''Psychology never has been and never will be like physics,'' said Judson C. Webb, a philosopher and logician at Boston University. ''Most questions you don't have a ghost of a chance of ever settling by experiment.''

Still, to scientists like Newell and like Schank at Yale, a working demonstration would be more convincing than mere theorizing, and the undeniable fact is that Hofstadter is not putting forward working programs. ''Maybe Hofstadter is a philosopher,'' Schank said, ''but you can't say such a person is an A.I. person. He's a popularist.''

''Actually,'' said Webb, ''other people think that about Schank. Artificial intelligence is a curious field, I must say - it's often difficult to distinguish the cranks and the geniuses. But Hofstadter has a fertile, seminal mind, and the ideas he deals with have attracted the attention of philosophers more and more.''

Even so, the criticism within his chosen field has rankled Hofstadter ever since ''Gödel, Escher, Bach,'' which he believes was misunderstood by laymen and professionals alike.

''It does not seem like a technical contribution,'' he said. ''It does not seem like a working program. It does not seem like a set of theorems. It does not seem like a set of proposals for how a program should be organized. What saddens me is that so many A.I. people seem trapped in their already-formed modes of thought and their preconceptions. They tend to eschew the whole question of what consciousness means. They avoid the questions of philosophy of mind.''

The book drew the attention of a few philosophers early on. Raymond Smullyan, a philosopher and logician at the City University of New York, shares Hofstadter's delight with paradox and puzzle-making, and he believes the book will have a lasting effect on the way people think about the mind. ''It may not have been academically influential,'' he said, ''but it is culturally influential.''

Several of the major philosophical journals have now reviewed the book. ''It weighs in to a very juicy area,'' said Boston University's Webb, who has prepared a long and appreciative review for the Journal of Symbolic Logic.

The academic grapevine that brought Churchland the subcognition paper went by way of Daniel C. Dennett, professor of philosophy at Tufts University and a former president of the Society for Philosophy and Psychology. He first met Hofstadter in 1980, in Stanford, Calif., where they were both studying artificial intelligence - Hofstadter as a Simon F. Guggenheim fellow at Stanford University, Dennett as a fellow of the Center for Advanced Study in the Behavioral Sciences. By then, of course, Dennett knew of ''Gödel, Escher, Bach.''

''My initial bias,'' he recalled, ''was that a book with that title and that subtitle couldn't possibly be any good, that it would be a sort of West Coast, oh wow book. But of course it isn't anything of the kind. It's an amazingly rich and intricate book.''

Out of their conversations at Stanford came a collection, called ''The Mind's I: Fantasies and Reflections on Self and Soul,'' of pieces by novelists, scientists and logicians with commentaries by Hofstadter and Dennett. It has sold well over 100,000 copies in hard cover and paperback. Since then Dennett has followed Hofstadter's work closely, most recently citing it this spring in a series of lectures at Oxford University on free will.

''He develops ideas that have been bandied about by philosophers for years, but nobody has done it with the depth and the care and the detail, and nobody has exploited the idea as richly as Hofstadter,'' said Dennett. ''He has found a way of characterizing the contribution of the computer metaphor to the understanding of the mind which is realistic and flexible and not ideological and programmatic.

''To philosophers who think that's the way to go in solving the mysteries of the mind, Hofstadter's work is as sophisticated as anything anybody's done. In fact, that's something of a bandwagon these days, and to get on that bandwagon you've got to pay attention to Hofstadter.''

Hofstadter has no shortage of metaphors for the mind. An ant colony. A labyrinth of rooms, with endless rows of doors flinging open and slamming shut. A network of intricate domino chains, branching apart and rejoining, with little timed springs to stand the dominoes back up. Velcro-covered marbles bashing around inside a ''careenium.'' A wind chime, with myriad glass tinklers fluttering in the cross-breezes of its slowly twisting strands.

Most educated people today accept the idea that the brain is purely a thing of flesh and blood, neurons and axons and synapses. For most, religious faith in a noncorporeal soul is no longer the answer it may have been a century ago. The problem is to reach a modern understanding of how the glories of the mind might spring from pure matter. For anyone with a view of the mind as creative and self-aware - anyone, that is, with the vista on the soul that comes from looking inward - it is extremely unsatisfying to think of it as nothing but electrical impulses and biological tissue.

''Tissue isn't quite the right word,'' Hofstadter remarked. ''Pattern, I would say.''

Hofstadter's sense of the soul as pattern is the core of his view of how thoughts and symbols might be built up from the physical structures that neuroscientists see in their microscopes. It hardly matters whether the pattern is rooted in the firing of neurons or the marching of ants. Or the switching of silicon chips. ''The medium is different,'' as Achilles says in one of Hofstadter's recent dialogues, ''but the abstract phenomenon it supports is the same.''

Whatever the medium, Hofstadter's path to consciousness begins, not with reasoning, but with a level of stupidity and randomness. In an ant colony, to use the example Hofstadter develops at length in a key section of ''Gödel, Escher, Bach,'' we begin with ants. Consider a team of ants - a ''signal,'' Hofstadter names it - carrying a piece of food from one part of a colony to another. No ant knows where the food is going. In fact, what with all the random comings and goings of the individual ants, the whole original team may have long since scattered by the time the signal arrives at its destination. For ants, of course, we may substitute neurons - or some mechanical equivalent. In Hofstadter's Jumbo program, he substitutes ''sparks'' and ''flashes.''

Signals interact, too, creating patterns at still higher levels - Hofstadter calls them ''symbols'' - and eventually, out of the food-carrying, trail-building and so forth, a genuine orderliness emerges. Of course, the ant colony doesn't exhibit anything like consciousness. Symbols in the brain represent pieces of the outside world; the ant-symbols don't. No matter how organized they get, they're still just ants, after all.

But the ant colony as a whole does have a kind of knowledge - how to grow, how to move, how to build - that is nowhere to be found in individual ants. The colony is a metaphor, meant to illustrate how a pattern of intelligence can emerge from the intertwining of different levels of activity, instead of being imposed from above. And like thoughts, the ant-symbols are sometimes orderly, sometimes erratic, always changeable and fluid.

A piece of food moves two feet across a colony. An entomologist watching it can describe that bit of behavior just that way, without any reference to the complicated underlying activity of scurrying ants. In the same way, some abilities at the top of human consciousness can be described with rules - the ability to manipulate numbers, to reason logically - and the rules are easy for computers to handle. But the rules are the end of the story, not the beginning. To focus on them exclusively is to sacrifice the potential richness of true intelligence.

''What guarantee is there,'' Hofstadter asks in his subcognition paper, ''that we can skim off the full fluidity of the top-level activity of a brain, and encapsulate it - without any lower substrate - in the form of computational rules?'' If symbols could be built up the hard way, he argues - from the interaction of many small processes, as in the Jumbo program - they would also be able to do the hard things that have so far eluded artificial intelligence.

Computers need to get bored. They need to know when they have fallen into a repetitious, machinelike rut. In solving problems that lack definite answers - in doing Jumbles, for example, without the help of an English dictionary - a machine needs to develop a sense of when it has got close enough to stop. That requires an ability that people have, the ability to watch oneself. And that, in turn, means a program must have a symbol system that includes a symbol for the program itself, that in effect makes it conscious of itself.

That isn't the kind of symbol that can easily be written into a program, the way numbers can be written in. But after all, in our own minds, numbers are not just tiny units moved from one place to another by a program following logical rules. They are huge, rich symbols floating at the top of a bubbling stew of subcognitive activity. A number like three, for example, has affinities with symbols like tricycle, numeral, tripod, four, waltz and countless others with or without names. That is one reason that a $10 calculator manipulates numbers more deftly than we do. A truly intelligent machine, with human-like symbols for numbers, might be humanly mediocre at arithmetic, because it might not have access to the computational power at the lowest level, any more than we can reach inside to the firing of our own neurons.

And as Hofstadter points out, we shouldn't be able to reach them. ''The world is not sufficiently mathematical for that to be useful in survival,'' he notes. ''What good would it do a spear thrower to be able to calculate parabolic orbits when in reality there is wind and drag and the spear is not a point mass - and so on. It's quite the contrary: a spear-thrower does best by being able to imagine a cluster of approximations of what may happen, and anticipating some plausible consequences of them.'' The loss for calculability is a gain for poetry.

This view of symbols is at the heart of the dispute in artificial intelligence. ''The crux of the matter,'' Hofstadter argues, ''is that these people see symbols as lifeless, dead, passive objects - things to be manipulated by some overlying program. I see symbols - representational structures in the brain (or perhaps someday in a computer) - as active, like the imaginary hyperhyperteams in the ant colony. That is the level at which denotation takes place, not at the level of the single ant.'' The way symbols change and interact is not programmed in from above. It is determined not by formal rules, but by the churning of the entire system. ''We cannot decide what we will next think of,'' as Hofstadter says, ''or how our thought will progress.''

Your own symbol system now has an element you might label Douglas Hofstadter. It may have begun as a mere bump on your computer scientist or philosopher symbol, but now it has a life of its own. It has affinities with symbols like computer, bicycle, Indiana, thought and soul. It may interact with some of these symbols in ways that refine and deepen them. Hofstadter argues that another such symbol - the most powerful and pushable of them all - is the one you might label I.

When you think about yourself, what is doing the thinking? When you think about thinking about yourself, what are you thinking about? These questions, with their implications of a hall-of-mirrors sort of infinite regress, seem to many to be a barrier to any possibility of mechanizing self-awareness.

The British philosopher J. R. Lucas expressed this view in an often-quoted passage: ''The concept of a conscious being is, implicitly, realized to be different from that of an unconscious object. In saying that a conscious being knows something, we are saying not only that he knows it, but that he knows that he knows it, and that he knows that he knows that he knows it, and so on, as long as we care to pose the question. . . . A machine can be made in a manner of speaking to 'consider' its performance, but it cannot take this 'into account' without thereby becoming a different machine, namely the old machine with a 'new part' added. But it is inherent in our idea of a conscious mind that it can reflect upon itself and criticize its own performance, and no extra part is required to do this.''

This is a common view even among strong partisans of artificial intelligence. But the trap, perhaps, is in thinking of the self-symbol as a 'part' designed from the surface down. It may be far more fruitful to see the act of perceiving, the act of self-watching, in the pattern as a whole, not in any one layer of it.

On Hofstadter's office wall is a somewhat tattered reproduction of one of his favorite mind-twisting Escher prints, ''Print Gallery.'' In it, a boy stands looking at a print depicting a town where a woman looks down from her window at the roof of a gallery in which - yes - the boy stands looking at the print. We appreciate the paradox without being thrown by it, because we are outside looking in. Something like that creates our own unfathomable feelings of self. The self, subject and object, perceiver and perceived, is everywhere in the paradox.

It is a ''circling back,'' the Tortoise tells Achilles, ''of a complex representation of the system together with its representations of all the rest of the world.''

''It is just so hard, emotionally,'' Achilles tells the Tortoise, ''to acknowledge that a 'soul' emerges from so physical a system.''

Indeed, until quite recently, ''soul'' has been something of a dirty word in philosophy, and even now it suggests a reactionary throwback to mind-body dualism of a Cartesian sort. It suggests something mysterious and unapproachable. Some prefer it that way. Like Arthur Koestler, who felt that creativity was a divine, mystical spark, they consider the human spirit beyond the scope of the natural sciences. But philosophers like Dennett believe, with Hofstadter, that scientific answers can be found without cheating by reducing the question to a manageable scale.

''They want to revel in the mystery, protect the mystery at all cost,'' Dennett said. ''Doug is not afraid to take seriously the most extravagant claims about the mind. He glorifies the phenomenon he's trying to explain. But he is a solver, and a very scrupulous one.'' Synapses and souls are hard to reconcile. But for many philosophers, and perhaps for the many nonspecialists drawn to Hofstadter's writing, the outline of a bridge from one to the other is emerging. The most valued kinds of behavior seem to depend on a willingness to recognize the soul in ourselves and others - the danger of looking only at the lowest biological level is in losing sight of the essential humanity that, in Hofstadter's view, exists in the pattern and in the paradox. ''There seems to be no alternative to accepting some sort of incomprehensible quality to existence,'' as Hofstadter puts it. ''Take your pick.''


Copyright 1997 The New York Times Company