Chicago Tribune

As more and more of the nation's defense becomes dependent on computers, new kinds of electronic sabotage could be devastating to our national security.

In 1971 a sophisticated scam was uncovered in South Korea involving a U.S. Army supply computer. Through insider access, a group of South Korean black marketeers and U.S. personnel had a lucrative racket going. By using the computer, they were able to siphon off as much as $18 million worth of U.S. military supplies a year, and even as they resold the stolen items–sometimes back to the U.S. Army–they manipulated computer files to conceal traces of the fraud.

When this classic case of computer crime by insiders finally came to light, the moral seemed clear: Software–the detailed instructions that tell a computer how to function and what operations to perform–is the ultimate medium for anyone who, for whatever purpose, seeks to engage in deception. Yet from that time to the present, a sometimes-touching trust in computer software has become a hallmark of ever more of our nation's business and defense establishments, from banks transferring funds electronically to the Strategic Defense Initiative (SDI).

In recent years Americans often have been entertained by stories of youthful "hackers" breaking into corporate or government computers and toying with the data or programs contained there. Amusement has sometimes turned into alarm, as it did in 1983, when some young people in Wisconsin penetrated part of a computer at the U.S. government research center in Los Alamos, N.M., or when, in July of 1985, New Jersey teenagers were found to have developed the capability, through their home computers, to alter the orbits of commercial communications satellites.

More ominously, such break-ins need not be by teenage hackers alone. On Nov. 18, 1985, concerned members of Congress made public a letter they had written to President Reagan expressing broad fears for U.S. communications security. They were motivated by reports that drug smugglers with alleged ties to the Colombian M-19 terrorist movement had gained access to sensitive U.S. communications frequencies, including that used by the President's Air Force One.

Under the best of circumstances, software is a tricky medium. In March, 1987, the National Weather Service issued a warning that a tornado had demolished Rockford, Ill., and was headed in all its fury toward Chicago. Actually, Rockford was quite intact and Chicago in no danger; there was no tornado at all. The erroneous warning–and a number of others like it–was traced to botched computer software.

In 1982 U.S. and allied military brass assembled to see the "Sergeant York" gun, the U.S. Army's first new divisional air defense system in years (DIVAD), put through its paces. When the gun's electronic brain was turned on to seek its target, the gun promptly started to swivel–and came uncomfortably close to firing on the reviewing stand instead of its designated target.

DIVAD was canceled in 1985 as snafus continued in its software.

At a time when computer software is part of 80 percent of U.S. weapon systems now in development, software warfare–attacking the software that controls or operates these weapons–may be the most effective, cheapest and simplest way to cripple vital U.S. defenses. Software warfare in fact is coming of age as a new type of systematic, offensive warfare, one that can be waged far removed in space and time from any battlefield to influence not only combat outcomes but also peacetime balances of power. And in a world where computers are an integral part of often overloaded telecommunications, financial and air-traffic-control systems, software warfare can also strike at civilian targets that are crucial to national security.

In this new type of warfare, the points of attack (and, conversely, of defense) are all the nonhardware elements of computer systems, including people. If a John Walker family or a Jerry Whitworth can be persuaded to sell out to the Soviets and do so for almost 20 years, it is easy to envision a computer programmer offering, if the price is right, to add or modify critical lines in a software program to benefit an enemy country. And as anyone who has ever struggled to write a major computer program or decipher a virtually unintelligible software manual can readily appreciate, the resulting change may not be caught for years, if ever.

The weapons of software warfare are best understood through their effects. In the spring of 1985 a software saboteur–who has never been caught–managed to plant a particular kind of malicious software bug known as a "logic bomb" in a computer system of the Los Angeles Department of Water and Power. At a point dictated in advance by its creator, this invisible logic bomb went active. Its effect: creating a denial-of-access situation, in which some of the utility's important internal files were made inaccessible for a week. Although the files were eventually unscrambled and normal access to them restored, any reader of Tom Clancy's thrillers "The Hunt for Red October" and "Red Storm Rising" knows that modern military encounters are commonly won or lost in far less time than a week.

As this Los Angeles example suggests, a logic bomb is a (typically very small) set of instructions surreptitiously entered into a computer's software, where it remains undetected and inactive until a specific date or set of conditions occurs. The bomb then activates for a malicious purpose, such as denying access to legitimate users.
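
What such a bomb might look like is easier to grasp with a concrete sketch. The fragment below, written in Python purely for illustration (the article describes no actual code), shows the shape of the trick: the trigger date, the routine it hides in and the "payload" are all invented, and the payload here only prints a message rather than doing harm.

```python
import datetime

TRIGGER_DATE = datetime.date(1987, 7, 4)   # hypothetical activation date chosen by the saboteur

def routine_update(records):
    """Ordinary-looking maintenance code that the bomb hides inside."""
    # ...normal processing of the records would go here...
    _check_trigger()   # the buried bomb: one extra, innocuous-looking call
    return records

def _check_trigger():
    # Dormant until the chosen date arrives, then it activates.
    if datetime.date.today() >= TRIGGER_DATE:
        # A real logic bomb might scramble or lock files (denial of access);
        # this sketch only prints a message so the example stays harmless.
        print("logic bomb activated: access to key files would now be denied")

if __name__ == "__main__":
    routine_update([])
```

The point is how little there is to see: a few extra lines buried among thousands of legitimate ones, silent until the date their author chose.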

The logic bomb is just one type of software attack. Another is the "software virus." This is a snippet of software having some malicious purpose (such as creating a logic bomb) whose special hallmark is its ability to spread copies of itself far and wide through computer files or parts of a computer network as normal computing operations take place. The software virus and still other species of malicious "bugs" are the warfare counterparts of the ordinary computer "bugs" familiar to anyone who has ever been on intimate terms with a computer. The first computer bug was a real one, discovered on that famous day in 1947 when Grace Hopper (since retired as a Navy rear admiral) and her computer team pulled a four-inch moth out of a malfunctioning Harvard Mark II computer.
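
The spreading behavior, too, can be illustrated without touching any real files. The following Python sketch is a toy simulation under invented assumptions: a handful of "documents" held in memory stand in for a computer's files, and a marker string stands in for the virus's own code.

```python
# Toy simulation of viral spread through "files" held in memory.
# No real files are read or written; the "infection" is just a marker string.

MARKER = "<<parasitic-snippet>>"   # stands in for the virus's own code

def open_and_use(files, name):
    """Simulates one routine operation on a file; an infected file spreads the marker."""
    if MARKER in files[name]:
        # Riding along on normal computing operations, the snippet copies
        # itself into every other file it can reach.
        for other in files:
            if MARKER not in files[other]:
                files[other] += " " + MARKER
    return files[name]

files = {
    "memo.txt": "quarterly memo " + MARKER,   # one file starts out infected
    "report.txt": "annual report",
    "orders.txt": "supply orders",
}

open_and_use(files, "memo.txt")   # a single ordinary access...
print([name for name, body in files.items() if MARKER in body])   # ...and every file now carries a copy
```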

Names like "bug" and "virus" can suggest an almost biological complexity that belies how very easily many software warfare bugs can be created, even by a novice at programming. One well-documented case of software attack needed just six lines of additional Basic-language statements to subvert a financial program and pull off a scam.
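
The article gives no particulars of that case, and the sketch below is not it. It is a hypothetical Python rendering (the documented case used Basic) of the sort of "salami" trick often described in that era, in which fractions of a cent lost to rounding are quietly redirected to an account the insider controls; the account names, rate and amounts are invented.

```python
# Hypothetical "salami" scam: interest credits are rounded down and the
# shaved remainders accumulate in an account the insider controls.
from decimal import Decimal, ROUND_DOWN

HIDDEN_ACCOUNT = "X-9999"   # invented account identifier

def post_interest(balances, rate):
    skimmed = Decimal("0")
    for account in list(balances):
        exact = balances[account] * rate
        credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        balances[account] += credited
        skimmed += exact - credited                 # the extra lines: keep the shavings...
    balances[HIDDEN_ACCOUNT] = balances.get(HIDDEN_ACCOUNT, Decimal("0")) + skimmed
    return balances                                 # ...and post them to the hidden account

accounts = {"A-1001": Decimal("1234.56"), "A-1002": Decimal("987.65")}
print(post_interest(accounts, Decimal("0.0525")))
```

A handful of lines of that kind, added to a legitimate interest-posting routine, would be easy to overlook in a listing hundreds of pages long.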

Such deliberate bugs are not the same as unplanned program errors. In trying to exterminate unplanned bugs, sometimes also known as "glitches," man is pitted against nature; in combating the planned and planted bugs of software warfare, he is battling human opponents. Like mines in naval warfare, all software warfare bugs are carefully designed to be small and hidden and to leave few telltale traces, even after being activated with devastating effects, such as causing repeated crashes of a major computer system.

Accidental bugs and warfare bugs may sometimes overlap in their dire consequences. Even accidental software errors have killed people, such as the 1986 error that caused a radiation-therapy machine for cancer patients to go haywire. The frequency of truly accidental glitches–the bane of every programmer's existence–may afford easy opportunity as well as excellent camouflage for deliberate sabotage.

The late Rear Adm. Henry E. Eccles, in whose honor the Library of the U.S. Naval War College was named in 1985, foreshadowed America's need to plan broadly against software sabotage. Such sabotage, Eccles suggested in a terse statement in the June, 1986, issue of the Naval War College Review, could successfully target an extremely wide range of our country's most prized and expensive advanced military systems. At the time, the military was plagued with software snafus of the unplanned variety.

Given its scale and mission, however, it is SDI, or what is now popularly known as "Star Wars," that merits special scrutiny in the light of Eccles' concerns. Like Eccles', this article is based entirely on open, unclassified sources.

It has been estimated that a fully operational SDI may require more than 10 million lines of software code, as compared to at most a few hundred thousand lines of instructions to run, for example, SAGE (for Semi-Automatic Ground Environment), an air defense system of the late 1950s. SDI's extremely large and complex software system would take on the entire responsibility for mechanical conduct of an antimissile battle that would be occurring too swiftly for human intervention or correction. For example, it would take less than 30 minutes for intercontinental ballistic missiles (ICBMs) launched from the Soviet Union to reach their targets in the continental U.S., and that is far too little time for a human programmer to rectify the effects of any logic bomb activated in the U.S. defense system.

In turn, the effort to develop and coordinate all the necessary SDI software seems destined to involve several thousand software professionals alone, working over many years, possibly decades. The true head count of persons gaining some access to SDI software during all this time will inevitably be larger still, once one includes hardware technicians, logistics specialists, security people, foreign and domestic liaison personnel, people at all levels of management and oversight, secretarial, janitorial and contractor support and so forth. Efforts to speed up SDI`s development to achieve early deployment may further expand this population.

The years required to design, build, test and deploy SDI are certainly time enough for the Soviets or other U.S. adversaries to move heaven and earth to achieve a significant hostile penetration of its large work force. The series of recent U.S. spy scandals does not, of course, guarantee an adversary's future success. But the fact that U.S. Air Force security test groups known as "Tiger Teams" some years ago penetrated a supposedly tightly secure "Multics" computer system, among others, foreshadows some of the risks even a peripheral penetration of SDI personnel might create. And only recently, 1,400 documents were reported lost from a U.S. defense plant said to be making a highly classified Stealth fighter plane.

As these and other examples indicate, military and industrial security throughout the development, construction and maintenance of SDI`s far-flung component systems seems unlikely to be uniformly tight at all times. Moreover, the interval before full SDI deployment is long enough for security procedures and enforcement climates to wax and wane, affording software saboteurs the tremendous advantage of picking their own time and place to achieve strategic surprise.

Any weapon system that pushes the state of the art as far as SDI must do on many fronts has another prime area of vulnerability: algorithm development. This is the creation of the fundamental mathematical rules later expressed in the software code to guide the system's performance. Indeed, algorithm sabotage may be one of the most potent weapons of software warfare. This is the deliberate embodying of defects in intricate mathematical-logical rules even before they are translated into software code and entered into the computer. In a strategic defense battle, for example, the U.S. might be stymied by sabotage of the algorithms designed to discriminate Soviet missiles from other space objects and debris, including the many decoys designed specifically to confuse SDI computers.
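
How small such an embedded defect could be is suggested by the following purely hypothetical sketch, again in Python. The discrimination rule, the threshold and the object measurements are all invented; the point is only that a defect planted at the algorithm stage, before any code exists to review, can be as slight as a single reversed comparison.

```python
# Invented discrimination rule for illustration: flag an object as a likely
# warhead if it decelerates LESS than a cutoff (heavy re-entry vehicles slow
# down less in the upper atmosphere than light decoys). Numbers are made up.

DECEL_CUTOFF = 40.0   # m/s^2, hypothetical threshold between warheads and decoys

def is_threat(deceleration):
    # Rule as intended by the algorithm's designers.
    return deceleration < DECEL_CUTOFF

def is_threat_sabotaged(deceleration):
    # The same rule with one comparison quietly reversed at the design stage:
    # the system now tracks the light decoys and ignores the warheads.
    return deceleration > DECEL_CUTOFF

objects = {"warhead": 25.0, "decoy": 90.0, "debris": 140.0}
print({name: is_threat(d) for name, d in objects.items()})             # warhead flagged
print({name: is_threat_sabotaged(d) for name, d in objects.items()})   # warhead passed over
```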