Like a very patient teacher supervising a really dense pupil, Kornelija Zgonc sits before a computer screen in her laboratory on Northwestern University’s Evanston campus, teaching the machine to recognize cracks on a metal surface.
It can take as many as 31,000 repetitions of the lesson, but eventually the machine learns to tell when the metal is cracked and when it isn’t by reading signals sent from ultrasonic sensors that scan metal surfaces.
Research by Zgonc, a post-doctoral scientist in Northwestern’s Center for Quality Engineering and Failure Prevention, and others could automate the tedious task of inspecting aging airplane components for potentially dangerous flaws.
Zgonc’s computer pupil may take a lot of training to learn to recognize ultrasonic signal patterns that correspond to cracks, but the lessons go fast. Each iteration takes only a few seconds and is seen only as a blink on the computer screen and some changes in several numbers appearing there.
What is most attractive about Zgonc’s work is that she didn’t program the computer with a lot of specific instructions for it to follow so that it will recognize signal patterns from cracks. That would be nearly impossible. Because no two cracks are identical, their signal patterns can vary greatly.
Instead, Zgonc programmed the computer to analyze each signal pattern in much the way a human might. Each time the machine reads a pattern and decides whether it represents a crack, it is told whether its choice was correct.
For each erroneous choice, the machine readjusts mathematical values within its decision program to try to reduce its error rate.
As it scans more and more patterns and makes more decisions and error adjustments, the machine’s accuracy rises.
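The guess-check-adjust loop described above can be sketched as a simple perceptron-style classifier. This is a minimal illustration under assumed toy data, not Zgonc’s actual program: the made-up “patterns” stand in for ultrasonic signals, and the weights stand in for the “mathematical values within its decision program.”

```python
import random

random.seed(0)

# Toy stand-ins for ultrasonic signal patterns: each pattern is a list of
# numbers, with label 1 meaning "crack" and 0 meaning "no crack".
# These are fabricated data for illustration, not real sensor readings.
def make_example(crack):
    base = 1.0 if crack else -1.0
    return [base + random.gauss(0, 0.3) for _ in range(4)], int(crack)

training_data = [make_example(random.random() < 0.5) for _ in range(200)]

# The adjustable "mathematical values": one weight per signal feature,
# plus a bias term, all starting at zero.
weights = [0.0] * 4
bias = 0.0
rate = 0.1  # how strongly each mistake nudges the weights

def predict(pattern):
    total = bias + sum(w * x for w, x in zip(weights, pattern))
    return 1 if total > 0 else 0

# Each iteration: read a pattern, guess, and readjust only when wrong --
# the same correct/incorrect feedback loop the article describes.
for _ in range(20):  # a few passes over the data, far fewer than 31,000
    for pattern, label in training_data:
        error = label - predict(pattern)
        if error != 0:
            bias += rate * error
            for i, x in enumerate(pattern):
                weights[i] += rate * error * x

accuracy = sum(predict(p) == y for p, y in training_data) / len(training_data)
```

As the loop repeats, the error adjustments shrink and the accuracy climbs, just as the article describes; a trained classifier like this will also label new, unseen patterns that resemble its training examples, which is the generalization Zgonc relies on.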
Zgonc’s machine mimics human learning, and it can generalize from that learning, recognizing as cracks signal patterns it never saw before, but which resemble the patterns it learned.
“The beauty of this is that I don’t have to have extensive knowledge of cracks and metals and a complex theory to program the computer,” said Zgonc. “I just have to have data to train it. Once you program it, it’s a black box that makes its own error adjustments.”
Zgonc has programmed the computer to work as a neural network, a machine that processes information in a way comparable to the human brain.
Since the first computers were built, people have referred to these collections of wires and switches as thinking machines.
But in the real world, computers are mostly electronic idiot savants that can calculate millions of complex mathematical operations in a few seconds but cannot figure out how to tie a shoelace.
Neural network computing, which has been around for more than a decade, is an effort to make computers work more like human brains, with software that ties many decision centers together the way neurons are connected in the brain.
Some of the most impressive benefits of neural net programming have been in pattern recognition, the sort of task Zgonc has undertaken.
Northwestern and Iowa State University are working with the federal government and the airline industry to devise new ways to test aging airplanes for flaws and defects in ways that are quick, cheap and accurate.
Several technologies are available to scan airplane parts for problems. Many, such as X-rays and ultrasound, have already been developed into sophisticated tools for medical diagnosis and can readily be adapted for non-destructive evaluation of metal parts. But these technologies tend to turn out tons of information beyond human ability to digest.
It isn’t surprising, then, that researchers are trying to train machines to scan mountains of data and correctly decide what few probable defects should be called to the attention of human inspectors.
Another Northwestern scientist, Michael Peshkin, associate professor of mechanical engineering, is training machines to spot flaws in airplane wheels. That work may be a year or so away from being used in actual safety inspections, he said.
One strength of neural networks is that once they are trained, they are robust in making decisions, Peshkin said.
“If you put in a few wrong answers, it doesn’t kill you,” said Peshkin. “You can have some cases where noise in the signal is thought to be a crack or where a crack is thought to be noise. But the machine won’t say ‘This Does Not Compute’ and shut down. It’ll still get it mostly right, just like a person would.”
It is possible to get carried away with the analogy between neural networks and human brains, said Peshkin. The fact is that for the most part, scientists don’t really understand the details of how the brain works, so it would be impossible to design a computer that mimicked a brain precisely.
“Neural networks are philosophical models of the brain,” Peshkin said. “They’re inspired by brain models but aren’t meant to be accurate reproductions of human thinking processes.”
Even so, some scientists think that neural network computers are enough like human thinking processes to offer a new means of studying how the brain itself works.
Mathematically, some neural network programs have been compared to a flat surface with many troughs or wells in it.
When marbles are dumped onto the surface, the goal is to have the surface tilted so that the marbles fall into the appropriate wells as quickly as possible. Each time a new batch of marbles is dumped on the surface, it readjusts itself to guide each marble into its well as quickly as possible.
Some researchers say that this is also the way that humans learn to do things.
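The tilting-surface analogy corresponds to what is often called gradient descent: training repeatedly nudges a value downhill on an error surface until it settles in a trough, or minimum. A minimal sketch, using an assumed one-variable toy surface rather than any real network’s error function:

```python
# Toy error surface with a single trough at w = 3.0.
def error(w):
    return (w - 3.0) ** 2

def slope(w):
    # The tilt of the surface at w; rolling downhill means
    # moving against this slope.
    return 2.0 * (w - 3.0)

w = 10.0     # the "marble" starts somewhere on the surface
step = 0.1   # how far it rolls downhill each iteration
for _ in range(100):
    w -= step * slope(w)
# After enough iterations, w has settled into the trough near 3.0.
```

Each batch of training data effectively re-tilts the surface, and the adjustable values roll toward whatever settings produce the least error.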
Lina Massone, who holds faculty appointments in Northwestern’s departments of electrical engineering and computer science as well as biomedical engineering, finds connections between the way computer neural networks operate and the way the human brain functions.
Massone specializes in using neural net programs to simulate control of human eye and arm movement. She has found in her simulations that neural nets provide much smoother movement control than other strategies.
She also has found that neural net programs provide a flexibility similar to the brain’s. When a computer-controlled arm is instructed to reach for a target and that target is changed the moment the movement starts, the arm’s simulated movement also changes to head for the new target, almost exactly as a person’s arm would under the same circumstances, Massone has found.
“We’re trying to understand the mechanisms the brain might use to coordinate movements,” Massone said. “Some of these are known, but mostly they aren’t. There is a lot of controversy among scientists on this question.”
Massone said computers she programs to simulate an arm’s movement will make initial attempts to reach for a target and sometimes succeed, but often fail. As they try again and again, they discard erroneous strategies and succeed more and more often.
“It’s very similar to the way a baby learns to control limbs, starting with random movements and improving with practice,” she said.
For the moment, Massone’s work is mostly pure research intended to provide insights into human brain processes by using computer simulations. But it may one day find practical application.
Other researchers are working to make robotic controls that mimic human muscles and tendons, creating robots whose motions are smoother and more humanlike than anything available today.
As such robots are built, they will need control systems that mimic the human brain, Massone said, and her work is leading in that direction.
“It’s very exciting,” she said. “I’ll be able to connect my neural network controls to an artificial arm that’s built like a human arm and be able to test these controls. This could be important in making better prosthetic arms for people.”