Feigenbaum wasn't as interested in the chemistry per se as he was in how a scientist develops a hypothesis to explain unprecedented data, in other words, how a scientist thinks–and, by extension, how a machine could be taught to think. To duplicate that process Feigenbaum had to program in all the "hard" knowledge about chemistry a chemist with a Ph.D. would know, plus the far less quantifiable knowledge about how the scientist proceeds to make decisions when he or she isn't really sure or the data seem ambiguous.
It rapidly became clear that acquiring this second sort of knowledge–the "soft" knowledge–was the bottleneck for AI research. "Essentially we are miners," Feigenbaum has said. "We extract the gemstones of knowledge that are the private reserve of expert practitioners in each field."
The knowledge of an expert is largely rule of thumb. Coming from experience and often the result of hunches, good guesses, intuition and creative leaps of the mind, it includes items the expert may not be able to explain logically. An everyday example of this kind of knowledge is the rule of thumb about warming up the car first before driving away on a cold morning. Most car owners may not understand precisely what warming up does to the internal-combustion engine, but they know from experience that it works. This kind of knowledge is also called "heuristic," from the Greek heuriskein, to discover, the same root as for "eureka."
Knowledge engineering, an entirely new Ph.D. specialty, has grown out of the attempt to create expert systems. Knowledge engineers ask all sorts of what-if questions, attempting to foresee any and every problem a machine will be asked to tackle. Then they try to understand the human expert's method of thinking through the problems and to translate each of these separate pieces of information into a symbol a computer can recognize and process with extraordinary speed.
Most expert systems are based on if-then rules: If this is true, then that is also true; if A, B and C are true, then D is also true. Given the complexities of first reducing these pieces of information to symbols that can be encoded in the zeros and ones of the binary system, the basic on-off switches of every digital computer, expert systems require immense amounts of computer coding and powerful machines to run them.
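To make the mechanism concrete, here is a minimal sketch, in modern Python, of how such if-then rules can be represented and repeatedly applied. The rules and facts are invented stand-ins, not drawn from any actual expert system:

```python
# Invented if-then ("production") rules. Each rule says: if all of its
# premises are already known facts, then its conclusion is a fact too.
RULES = [
    ({"A", "B", "C"}, "D"),                    # if A, B and C, then D
    ({"organism is gram-positive",
      "organism grows in clumps"}, "organism may be staphylococcus"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"A", "B", "C"}, RULES))   # D is inferred
```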
An example that recurs again and again in AI history is chess. Even before the modern-day computer was realized, mathematicians had theorized about and tried to build machines to play chess. Because there are a fixed number of chess pieces and a similarly limited number of legal moves, there is a finite number of possible chess plays. A digital computer can faithfully slog along, searching one by one through all these moves in response to its human opponent's play. The computer might win by attrition–its human opponent would likely die before the game was over. By examining every possible move, however, the computer would always come up with the best move; a human could come up with a very effective move, but not always the one best move for a given situation.
Computers have been playing challenging chess for a decade or more now because scientists have learned varying systems of organizing effective strategies much as chess masters do, abandoning less likely directions of search, or "pruning the search tree," as AI researchers describe it. (Going back to everyday heuristics, if on a below-zero morning your car's engine doesn't turn over, you don't respond by checking the tires or the windshield wipers; you have pruned the search tree.)
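The exhaustive search and the pruning described above can be sketched with the standard alpha-beta refinement of minimax search; the miniature game tree and its scores below are invented for illustration:

```python
# A standard alpha-beta pruning sketch over a hypothetical game tree.
# Leaves hold invented position scores; branches whose outcome cannot
# affect the final choice are abandoned ("pruned") without being searched.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):        # a leaf: a scored position
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                     # prune the remaining branches
            break
    return best

tree = [[3, 5], [2, [9, 1]], [0, -4]]         # invented miniature game tree
print(alphabeta(tree, maximizing=True))       # best guaranteed score: 3
```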
DENDRAL performs so well it is now used by chemists all over the world and has spawned other expert systems. The methodology used to pick an expert's brain and transform the results into computer code could be used to develop other expert-level inference systems, and in the early 1970s doctors at the Stanford Medical School began work on a computer program that would attempt to advise physicians on antibiotic selection for infectious diseases.
Edward Shortliffe, the young physician directing this project, met once a week with Stanley Cohen, the physician who later became famous as the codeveloper of the techniques for recombinant-DNA research, and another doctor, Stanton Axline. "We sat around, went over patient cases and tried to understand how Axline and Cohen would decide to treat those cases," Shortliffe has said. "We'd stop them–those of us who knew little medicine and were more computer scientists–and ask, 'Well, why do you say that?'"
In the week between sessions, Shortliffe would put the two doctors' rules of thumb into his emerging computer program, then "we'd all have a good laugh the following week when I would show them how the computer had tried to handle the same case. What we did was discover the great simplifications they made in explaining the rules from the previous week, where they'd go wrong if you tried to run it on a different case."
One of the lessons was that human knowledge is much more complex than it seems. Shortliffe's computer was quite capable of asking if a male patient was pregnant. It was just one example of the critical if-thens the programmers hadn't thought to include: If the patient is male, then the patient cannot be pregnant, so skip that question.
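That missing rule amounts to a screening check run before each question. A toy sketch, with an invented question list, might look like this:

```python
# Invented illustration of a screening rule of the kind described above:
# before the consultation program asks a question, it checks whether
# already-known facts make the question inapplicable.

def applicable(question, facts):
    # If the patient is male, then the patient cannot be pregnant,
    # so skip that question.
    if question == "Is the patient pregnant?" and facts.get("sex") == "male":
        return False
    return True

facts = {"sex": "male"}
questions = ["Does the patient have a fever?", "Is the patient pregnant?"]
for q in questions:
    print(("ASK:" if applicable(q, facts) else "SKIP:"), q)
```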
By the time MYCIN, as the program was named, was working, investigators realized they had accomplished far more than they had set out to do. In addition to the knowledge base–all the facts about infectious diseases and the antibiotic treatments of choice–there was a separate logical part of the program, which had so crystallized medical reasoning that it could be used separately from the infectious-disease knowledge base and paired with other medical specialties. They called this logical part the "inference engine"; its job is to reason as a doctor would about whatever problem is at hand. As a result, MYCIN has led to several other programs, among them PUFF, a diagnostic tool for lung disease in use at the Pacific Medical Center in San Francisco, and ONCOCIN, which helps Stanford cancer specialists manage complex chemotherapy for their patients.
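The split MYCIN's builders discovered can be sketched as one domain-neutral inference routine paired with interchangeable rule sets. Both miniature knowledge bases here are invented stand-ins, not MYCIN's or PUFF's actual rules:

```python
# Sketch of the knowledge-base / inference-engine split: one reasoning
# routine, two interchangeable (and invented) rule sets.

def infer(facts, knowledge_base):
    """A domain-neutral inference engine: fire rules until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in knowledge_base:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

INFECTION_KB = [({"gram-positive", "grows in clumps"}, "possible staph")]
LUNG_KB = [({"reduced airflow", "smoker"}, "possible obstructive disease")]

# The same engine, paired with either specialty's knowledge base:
print(infer({"gram-positive", "grows in clumps"}, INFECTION_KB))
print(infer({"reduced airflow", "smoker"}, LUNG_KB))
```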
"The word 'reasoning' means one thing to the general public and another to a computer scientist," explains ONCOCIN researcher Lawrence Fagan, an M.D. who also has a Ph.D. in computer science. "What we mean by reasoning is following a chain of evidence or facts to a conclusion. One typical chain in ONCOCIN is to calculate what dose of treatment would be expected at a particular point in a patient's progress based on standard treatment, and then to determine if that needs to be modified if the patient, for example, was experiencing some toxicity. ONCOCIN could order tests and determine dosages." The program is merely a tool, he stresses. "Just as humans use books and other resources, we can provide them information in a new, more useful way... ONCOCIN is like an intelligent textbook," he says. "If you had a problem patient and you wanted to look at a textbook–'Well, does this sound like my patient... or this?'–what if you could say to a textbook, 'My patient is like this. What do you think?'"
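Fagan's example chain can be caricatured in a few lines. Every number and threshold below is invented purely for illustration; this is not medical guidance and not ONCOCIN's actual logic:

```python
# A toy version of the reasoning chain Fagan describes: compute the
# standard dose expected at this point in treatment, then modify it if
# the patient shows toxicity. All values are invented.

STANDARD_DOSE_MG = 100                 # hypothetical protocol dose

def recommend_dose(toxicity_grade):
    if toxicity_grade >= 4:
        return 0                       # hypothetical: hold treatment entirely
    dose = STANDARD_DOSE_MG            # the expected standard dose...
    if toxicity_grade >= 2:
        dose *= 0.75                   # ...modified for observed toxicity
    return dose

print(recommend_dose(toxicity_grade=0))   # 100: standard treatment
print(recommend_dose(toxicity_grade=2))   # 75.0: reduced for toxicity
```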
So far ONCOCIN has been available only to physicians at Stanford, partly because of the very expensive, high-powered machines needed to operate it, but the cost of hardware is plummeting, and tests are planned at centers outside the university. Stanford is not the only AI center where computer scientists and medical researchers are working together. The nationwide interest in biomedical applications of AI is so great that several years ago the National Institutes of Health agreed to fund a national computer resource at Stanford that is now accessible to an estimated 300 to 400 other researchers around the country.
A number of computer programs are now being offered commercially that claim to be expert systems or to deliver AI to their users. In reality, says one expert, most of these programs are merely "clever programming," refinements of existing computer software that use "AI tricks." At this point any program that really is an expert system requires more power to run than is available in any but the most expensive computers. True expert systems number only around 50 (some 500 are under development), and not all of them are fully on line. Among their users are telephone companies, for debugging their lines, and computer firms, for refining programs used in the design of both software and hardware.
These expert systems, the state of the art of AI, are nonetheless designed to run on existing computers. The hardware of the future–the fifth-generation machines–remains elusive. Although the way expert systems are programmed to reason may be patterned on what is known or believed about the human mind, existing expert systems do not actually mimic human intelligence as such. Instead they reproduce the knowledge and judgment of the humans whose expertise was programmed into them.
Likewise, the AI robots that exist today have very limited task capabilities. The primitive speech-recognition systems have limited vocabularies–150 words typically–and are frustratingly slow. As one expert in the field quips, "It's hard to wreck a nice beach." (It's one way a computer might hear "It's hard to recognize speech" said aloud and fast.)
Similarly, a computer's ability to see and read–or recognize patterns–is so early "Sesame Street" that merely changing the type font from a plain sans-serif face to curlicued Old English can baffle it.
Among all the human capabilities programmers must give a computer to make it truly intelligent, perhaps the most crucial is common sense. John McCarthy and others continue to believe in and work toward machines that will have this most elusive quality of human beings.
There are, of course, critics who doubt this can be done. Cal/Berkeley's Dreyfus, an existential philosopher who, with his brother Stuart, a professor of computer science at Berkeley, has dogged AI proponents for years, recently came out with their second book on the subject, "Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer" (Macmillan). Studies the brothers have done for the U.S. Air Force indicate, they say, that even the most highly touted expert system merely codifies "novice rules." Once a pilot or any other specialist proceeds beyond the beginner stage of a skill, he or she can be shown to be breaking the "rules"–flying, or whatever, by the seat of the pants.
"Most of us are able to ride a bicycle because we possess something called 'know-how,' which we have acquired from practice and sometimes painful experience," the brothers write. "That know-how is not accessible to us in the form of facts and rules. If it were, we could say we 'know' that certain rules produce proficient bicycle riding."
Other critics question whether computer scientists should attempt to replicate human intelligence, pointing to HAL, the intelligent machine of the movie "2001: A Space Odyssey" that eventually runs amok. One MIT researcher has joked that he's concerned not about the U.S. using AI to take over the world but about the Stanford AI lab taking over the world. MIT's Minsky has joked that once true AI systems exist, perhaps they'll be nice enough to keep us humans as pets.
Pamela McCorduck, author of what is considered the best text on artificial intelligence, "Machines Who Think" (W. H. Freeman), sets forth various theories about why we humans are so fearful of being replaced by a machine yet so fascinated by the possibility. Perhaps the wildest theory was suggested with apparent seriousness by a British scientist who wondered if man's determination to create intelligent computers wasn't really a deep-seated psychological desire to re-create the species without the help of women.
"The horrible possibilities of success (were) the kernel of Frankenstein's horror for us," McCorduck theorizes, "and I think they are the kernel of the horror that characterizes most people's reaction to the idea of artificial intelligence itself."
Mary Wollstonecraft Shelley, McCorduck reminds her readers, wrote "Frankenstein" after long conversations during a dreary, rainy summer with Lord Byron and her husband, the poet Percy Bysshe Shelley, about the nature of life and whether it could ever be reproduced. The weather was so bad the housebound vacationers decided to write ghost stories to entertain one another. The Frankenstein story was meant to address how frightful its author felt any attempt to "mock creation" could be, but McCorduck says it stands as a paradigm for all science: It's the story of a scientist "who yearns to know and drives madly ahead without real thought to the consequences."
McCorduck herself is not uncomfortable with the efforts toward a thinking machine and dismisses the controversy: "Future historians will probably consider the whole shebang to be prehistory, in the same way we regard the Agricultural Revolution."
Stanford professor Terry Winograd is another AI skeptic, though he also is one of the researchers whose early successes stirred hopes and dreams. As an MIT doctoral student he created SHRDLU, a simulated robot represented by an arm on a videoscreen that lived in a microworld composed of a tabletop filled with blocks of various shapes. SHRDLU's task was to move the blocks as commanded, remembering what it had done and using its limited abilities to communicate with its operator about various building options available to it. "It could handle certain kinds of complexities of English structure but only within the context of its tiny world," Winograd says. "If you didn't know the program in detail, it was very easy to use a sentence a human would easily understand but SHRDLU could not. It certainly moved away from what had been done, but it couldn't do what a person could do."
The problem for Winograd's videoscreen robot wasn't simply its lack of vocabulary. "It became clear the problem was the contextual open-endedness of ordinary words," he says. One computer expert, for example, has compiled 17 possible interpretations for the sentence "Mary had a little lamb." Others cite such computer-stumpers as "The duck is ready to eat," and wonder if a computer would ever understand the difference, say, between "John shot the girl with the red dress" and "John shot the girl with the gun."
Winograd is one of a number of researchers, at Stanford and elsewhere (most notably at SRI, Yale, Berkeley and Illinois), who are studying natural language, that is, language as it occurs in ordinary human conversation, and some of its philosophical and psychological underpinnings.
One researcher at SRI has transcripts of conversations during which lapses of as much as half an hour may occur between a pronoun and the noun to which it refers. For example, two men at work assembling a machine talk about other things as they work and half an hour later say, "Let's plug it in."
One researcher says, "Imagine a computer grinding backward through its careful records of everything that was said, looking for a match. The screwdriver? The pliers? The toolbox? All the stuff that's been mentioned in between?" It's something humans obviously can do; the researchers want to figure out how we do it so they can program that ability into a machine.
Context in everyday speech is so complex a problem that the computer cannot just rely on words or patterns of words and react. A notorious program called ELIZA, which imitated the passive responses of a psychotherapist, could produce a hilarious conversation. If the patient mentioned "family," ELIZA would ask him to tell it about his family. If a patient said, "I am depressed," the program would respond, "How long have you been depressed?" The patient could as easily have said, "I am taking poison," and could have been asked, "How long have you been taking poison?" because the computer was responding to the pattern, not the content, of the words.
Hubert Dreyfus recalls accidentally befuddling ELIZA when in response to a question he said, "I am feeling happy," then corrected himself by typing in, "No, I am elated." ELIZA then rebuked him, "Don't be so negative." It had been programmed to object to the word "no."
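Because ELIZA's trick is so simple, it can be reproduced in a few lines. This sketch uses invented patterns, but it shows the behavior described above, including the blind objection to the word "no":

```python
# A minimal ELIZA-style sketch: respond to surface patterns, not meaning.
import re

def eliza(utterance):
    if re.search(r"\bno\b", utterance, re.I):      # blind rule on "no"
        return "Don't be so negative."
    m = re.match(r"i am (.+?)[.!?]*$", utterance, re.I)
    if m:                                          # echo whatever followed "I am"
        return f"How long have you been {m.group(1)}?"
    if re.search(r"\bfamily\b", utterance, re.I):
        return "Tell me about your family."
    return "Please go on."

print(eliza("I am depressed"))       # How long have you been depressed?
print(eliza("I am taking poison"))   # How long have you been taking poison?
print(eliza("No, I am elated."))     # Don't be so negative.
```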
There are all sorts of schemes to give computers frames of reference to keep them from making these mistakes. Yale's Roger Schank, who did his graduate work at Stanford, is working with scripts for behaviors people expect in certain situations. For example, the script for what one would expect at a McDonald's would be different from that at a fancy French restaurant.
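A Schank-style script is essentially a stored sequence of expected events. This sketch, with two invented and much-simplified scripts, shows the idea:

```python
# Invented, simplified "scripts": the stereotyped event sequences a
# program can assume in a familiar situation.
SCRIPTS = {
    "fast-food restaurant": [
        "enter", "read menu board", "order at counter", "pay",
        "carry tray to table", "eat", "clear own tray", "leave",
    ],
    "fancy French restaurant": [
        "enter", "wait to be seated", "read menu", "order with waiter",
        "eat several courses", "receive check", "pay and tip", "leave",
    ],
}

def expected_next(setting, event):
    """What does the script lead us to expect after this event?"""
    script = SCRIPTS[setting]
    i = script.index(event)
    return script[i + 1] if i + 1 < len(script) else None

print(expected_next("fast-food restaurant", "pay"))  # carry tray to table
print(expected_next("fancy French restaurant", "pay and tip"))  # leave
```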
"The meat of AI, the glamor of AI is (the belief) that we're going eventually to do HAL," Winograd says. "But all you really have is a set of particular computational techniques. The assumption is if you expand those and put in enough data and knowledge, you'll gradually build up to where you are doing everything a person does and that this will be the best way to get computers to be useful–that whole sort of fifth-generation promotion. I don't see that assumption as being valid."
Winograd does not rule out an entirely unexpected breakthrough and likes to remind critics that although the alchemists were entirely wrong in their attempts to turn lead into gold, "right up here on the hill at Stanford, we have this machine (a linear accelerator) that can bombard lead with particles and theoretically create gold. The alchemists could never have imagined nuclear physics. Maybe AI will have its nuclear physics."
He doesn't believe it likely that the kinds of existing programming, "even given brilliant ideas in the next 10 years," will lead to machines that really understand language. Furthermore, he doesn't think that is necessarily the correct approach.
"You can step back and go in the other direction. Make computers more friendly, more useful, not turn them into surrogate humans. Humans are wonderful at doing certain things and terrible at other things. Computers are wonderful at some things and terrible at different things. We should try to find the right fit.
"The computer can never have the full contextual understanding that a human being has," he says flatly. "Would you want a computer handling a nuclear incident such as the one at Chernobyl?" he asks. "The answer is you don't, because one mishap, and you are in trouble. On the other hand you might be perfectly willing to have a computer controlling your furnace."
He thinks one answer is to get rid of the notion that because a computer is "intelligent," it can handle everything. He even suggests calling expert systems "opinion systems" to keep our expectations of them in perspective.
"There are ways computers will be able to do the right things, even given situations the programmer has not foreseen, but on the other hand there is a non-ignorable chance it will do the wrong thing. And it will do wrong things that a human being won't do, because the human being has the common-sense back-up."
One scientist who seems to agree with Winograd about the practical limits of AI is John Seely Brown, head of Xerox PARC's intelligent-systems labs. He talks about AI systems as "empowering tools" to amplify a human's ability to think creatively.
He uses as a metaphor the difference between a secretary who is good and one who is "relatively not so swift, the one to whom, whenever you want a job done, you have to explain in painstaking detail, in fact to whom you end up explaining every conceivable detail, and you get incredibly bogged down." His own secretary understands him and his peculiarities so well, Seely Brown says, that if he asks for a memo he wrote a year ago, she'll know it could be as long as 18 months or more "because I always err in that particular direction." And if Seely Brown says the memo is to one person concerning one topic and she doesn't find it, she knows the lab personnel and the subject matter well enough to know he may have confused two researchers who work in the same area. Because she is in ongoing contact with him, she also understands why he needs the particular piece of information at this time. Given all that, even though he asked for X, she is going to give him Y–and be right.
In a way his secretary combines the best of existing expert systems–those that reason from experience–and the sort he hopes to see in the future–those that also can reason from a basic knowledge of how an operation works, be it the Xerox PARC labs or a nuclear power plant.
"To me an expert system is a system that can handle the unusual," Seely Brown says. "The more experientially based systems are wonderful at handling the problems the programmer has in mind, but they collapse in a twitching heap when you present them with a problem that hasn't already been thought through." A further problem is that existing expert systems almost never have the capacity to examine the information in their knowledge bases–the "facts" as they were "teased" from the experts at the time the system was built–and to invalidate flawed presuppositions, as Seely Brown's secretary is able to do with his request. "Having such self-knowledge would lead to building bootstrap systems. When you know what you don't know, sometimes you can pose interesting questions or change what you don't know."
Right now what AI experts do know they don't know is just how the human mind works, and they are asking some very intriguing questions. Interestingly, some researchers have turned back to earlier concepts of AI and have been re-examining ideas about actually mimicking the physical structure of the brain–what some irreverently call "the meat machine"–instead of the behaviors of the mind. The Japanese fifth-generation effort, for example, includes a study of the simple "brain," or neural net, of the nematode worm. "We take our metaphors from many places," Seely Brown admits. Some researchers, who are called "connectionists," are examining what happens when vast numbers of computers are linked together. Others are looking at physical sense organs to try to learn how information from the outside world gets into the brain, what fires the nerve endings in the nose or eye, and the possibility that there is no one physical structure in the brain where a particular piece of knowledge is situated.
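The connectionist intuition, that knowledge can live in the strengths of many simple connections rather than in explicit rules, can be suggested with a single artificial "neuron"; the weights and inputs below are invented:

```python
# A minimal connectionist sketch: one artificial "neuron" whose knowledge
# lives in numeric connection weights rather than in if-then rules.

def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # fire or stay quiet

# Two input "nerve endings"; these weights make the unit fire only when
# both inputs are active (a learned AND, in this toy setting).
print(neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1: fires
print(neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0: quiet
```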
One group in Seely Brown's lab is interested in the mathematics of phase transitions–as when water boils and becomes a gas. "There are some very interesting mathematics underlying phase transitions, which seem to be governed by local interactions between all these particles," Seely Brown says. "Then suddenly something changes, and you get a quantitatively global change." It is too simple an explanation, but perhaps understanding the basic physics of phase transitions may help explain what happens with the billions of neurons in our brains. (The University of Illinois is one center for the study of qualitative physics as it relates to AI theories.)
No wonder, then, that the science–or art–of AI has attracted philosophers and neurophysiologists, linguists and cultural anthropologists as well as computer experts. And it's the questions they are asking, not the answers they have so far provided, that are the most fascinating. John McCarthy has a whole new way of looking at the ways humans acquire and begin to use intelligence. After decades of trying to think the problems through, McCarthy is watching the process in his own young son, Timothy.
"You see the behavior of the baby, and you can't help but be interested in the simplest mechanisms that account for this behavior and the development of mechanisms. When he was younger, he would reach for something, but if it disappeared, he couldn't maintain his purpose. Now he does. Now at what stage could Timothy's behavior be accounted for by a rule-based expert system, and when did he get beyond that stage?"
At this point the proud father stops and chuckles ruefully as he parallels the "mental" development of his two offspring, the 30-year-old AI and his little son. "Well, in certain respects, I would say Timothy got beyond the expert-system stage in the last month or so," he says. Timothy was then 6 months old.




