Although tactical combat in software warfare is waged in the new man-made medium of software code and mathematical algorithm design, this new type of warfare remains rooted in sound and tested military principles.
As a fundamental strategic concept, software warfare is only the latest major expression of the "indirect approach" school of strategy pioneered by the great British military theorist Sir Basil Liddell Hart early in the 20th century. Drawing on battlefield experience during World War I, Liddell Hart formulated the "indirect approach," whose essence is to plant in an enemy's unprotected areas, often behind the lines, the basis of a decisive surprise combat blow.
Liddell Hart's historical documentation of the effectiveness of the indirect approach to strategy has long been broadly influential in Soviet General Staff quarters. For this reason alone, software warfare deserves recognition and scrutiny as a powerful, coherent extension of some of Soviet military thought's most effective strands.
Software warfare may in fact harmonize much better with Soviet strategic principles and ways of war than with those of our own nation. Such warfare merges longstanding Soviet investment in "human intelligence" capabilities with the emphasis on Western high technology that was a legacy of Yuri Andropov. Ever since Stalin's time, generations of Soviet leaders have been brought up to be acutely conscious of the havoc that sabotage–very broadly conceived in Soviet law to include even some types of environmental pollution–can wreak upon the workings of Soviet-style programmed economies. For this same cultural reason, Soviet leaders are well-prepared to think creatively and effectively about extending software warfare to target not only military but also a range of civilian computer systems vital to U.S. national security.
For example, the voting process at the heart of democratic societies may itself come under software attack, through interference with the software used increasingly to tally election outcomes.
It has been publicly reported that last January the CIA visited the computer operation run by the Clearing House Interbank Payments System (CHIPS), operated by 140 banks, to determine whether the Soviets could penetrate it. As the Soviets know, amid other signs of the growing fragility of worldwide financial systems, an unplanned software bug in a bank's computer program in late 1985 forced the Federal Reserve to make a $23.6 billion emergency loan to stabilize the U.S. government-securities market. Wire transfer systems like CHIPS and the Federal Reserve's Fedwire are estimated to move more than $1.2 trillion daily.
Software warfare also meshes with Chinese and other Asian military traditions. "All warfare is based on deception," wrote the ancient Chinese military philosopher Sun Tzu, in a famous passage to which Chinese as well as Soviet military and intelligence services have long paid near-scriptural heed. For example, the way in which a logic bomb in a computer program may lie dormant for many years, if indeed it ever surfaces at all, fuses both this emphasis on deception and the Chinese sense of planning for protracted conflict.
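To make the dormancy mechanism concrete, consider a minimal sketch in Python. This is purely illustrative and not drawn from any actual incident; the function name, trigger date, and corruption scheme are all hypothetical. The point is that the hidden branch is simply never exercised during ordinary testing:

```python
import datetime

# Hypothetical "logic bomb": a routine that behaves normally until a
# hidden trigger condition is met, possibly years after it was planted.
TRIGGER_DATE = datetime.date(1999, 1, 1)  # dormant activation condition

def compute_checksum(data, today=None):
    """Looks like an ordinary checksum routine."""
    today = today or datetime.date.today()
    checksum = sum(data) % 256
    if today >= TRIGGER_DATE:            # branch never taken during testing
        checksum = (checksum + 1) % 256  # subtle corruption after activation
    return checksum

# Before the trigger date the routine passes every test...
assert compute_checksum([1, 2, 3], today=datetime.date(1988, 6, 1)) == 6
# ...but after activation it silently returns corrupted results.
assert compute_checksum([1, 2, 3], today=datetime.date(2000, 1, 1)) == 7
```

Because the trigger depends on a condition absent from the test environment, no amount of pre-deployment exercise of the "normal" path will reveal the planted behavior.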
If Western strategy often tends to be structured in ways suggested by either chess or poker, much Chinese strategy can be patterned along lines suggested by go, the ancient Asian board game of strategy. In go terms, software warfare's key strategic principle is one of achieving "encirclement from within," attained by planting just a few of one's own men on the inside of a major enemy position. Mao Tse-tung himself explicitly invoked this important go maneuver on more than one occasion to explain his strategy for waging guerrilla war. Software warfare now holds the promise of carrying this same "encirclement from within" concept right into the heart of Western military high-tech resources.
Above all, software warfare is dirt cheap by any yardstick of modern weapon costs, as evidenced by publicized IRS tax liens and other financial records on convicted Soviet agents. Software warfare rests primarily on ingenuity, not on the acquisition of costly military hardware by the attacker. For this reason, software warfare may come to be the next great military equalizer. In the Falklands War an Exocet missile costing a few hundred thousand dollars sank a warship costing some $50 million. A software saboteur, even a highly paid one, could be a still more cost-effective weapon capable of disabling a multibillion-dollar military system. Software warfare certainly lies well within the grasp of any number of aggressive lesser military powers with the means to buy insiders to plant crippling bugs in major U.S. or allied military systems.
In devising software warfare defenses–and even turning the threat around for potential U.S. offensive gains–constructive lessons can be drawn from the American experience in the Vietnam War. In 1982 Army Col. Harry G. Summers Jr. published "On Strategy," a powerful analysis of the Vietnam experience. His message was simple: It was flawed U.S. military concepts, not deficient battlefield performance, that determined the outcome in that conflict.
Failure to grasp software warfare's fundamental nature as warfare similarly can be a prescription for U.S. defeat on some future high-tech battlefield. Building on Summers' analysis, in software warfare as in the Vietnam War the crucial challenge is an intellectual one: how to develop sound concepts to lay the groundwork for future U.S. military success. In the case of software warfare, such conceptual unity and clarity must take precedence over "technical countermeasures"–computer security methods, etc.–vital as these countermeasures are. Here we can hit just a few high points.
Software testing techniques offer one way of trying to block software attack. Backstopping the limitations of feasible testing is a potpourri of other countermeasures that are still essentially technical because they are focused on machines, not on people. As holes show up in these countermeasures in turn, one must recognize that the problem has an inescapable human dimension.
In understanding the role and limitations of software testing, software warfare may be usefully seen as not just a single threat but an entire range of threats that can be laid out on a scale of software complexity. At the simpler end of this scale, some software warfare threats should in fact be reasonably manageable if carefully addressed. For example, while one must never lose sight of the Falklands experience, extremely thorough steps may in principle be taken to test weapons with sufficiently simple software. Of course, just because testing is possible doesn't guarantee that it can be done adequately in all cases. In particular, budget constraints can be an important factor in deciding how much testing is feasible.
But simple software in military systems may be a vanishing breed. Already a proposed aircraft "smart cockpit" is said to require a program that is more than 500,000 lines long.
Moving up the scale of software complexity–from systems like those for the DIVAD gun to those for the Army's Forward-Area Air-Defense (FAAD) system or the Navy's carrier-protecting AEGIS–the vulnerability of the software and the difficulty of testing it increase very rapidly. Testing problems become increasingly and exceptionally severe as the software being tested seeks to factor in more and more combinations of possible enemy countermoves.
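The arithmetic behind this explosion is worth making explicit. The following back-of-the-envelope sketch (my illustration, not drawn from the article; the variable counts are hypothetical) shows how the number of distinct combat scenarios grows exponentially with the number of independent engagement variables the software must handle:

```python
# If each of n independent engagement variables (threat type, bearing,
# speed, countermeasure employed, etc.) can take k values, the number of
# distinct scenarios the software must handle is k**n.

def scenario_count(variables, values_per_variable):
    return values_per_variable ** variables

# A "simple" weapon: 5 variables, 4 values each -> 1,024 scenarios,
# still within reach of exhaustive testing.
assert scenario_count(5, 4) == 1024

# A complex air-defense system: 20 variables, 4 values each ->
# roughly 1.1 trillion scenarios, far beyond exhaustive testing.
assert scenario_count(20, 4) == 1_099_511_627_776
```

Merely quadrupling the number of variables turns a testable system into an untestable one, which is why confidence in exhaustive testing evaporates toward the upper end of the complexity scale.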
At the extreme upper end of the complexity scale is SDI. As many people have recognized, testing the software for SDI–which must work the very first time it is set in motion and is vastly more complex than even AEGIS–is not likely ever to be fully feasible.
Widespread recognition of testing limitations for SDI's software has triggered much interest in automated software-writing and -testing "tools," which are themselves software. Such automated programming aids, possibly incorporating future advances in artificial intelligence (AI), would ideally ease SDI programming burdens, reducing the number of human beings needed to carry out the work–and thereby limiting SDI's human targets for hostile penetration. These AI measures may, however, generate their own vulnerabilities to software sabotage. They expand the community of those involved with SDI software. They likewise insert new technical layers between human planners and operational software, layers that a software saboteur might manipulate.
Advances in other software engineering areas offer assistance in software warfare defense but provide no comprehensive solution. Continued U.S. technological superiority alone will not solve all the national-security problems that software warfare poses. For example, conventional computer security does little to eliminate the extremely basic problem of algorithm sabotage striking at the "idea stage" of software development, before the resulting software gets anywhere near a computer.
Most crucially, as Clausewitz wrote long ago and as Summers emphasizes in "On Strategy," "move from the abstract to the real world, and the whole thing looks quite different." A review of the highly technical and often controversy-fraught open literature of computer security makes clear the immense chasm between ideal, abstract security plans and the practical realities of implementing security for the thousands of weapons-related computer systems and networks.
Thus, because of the many limitations of purely technical countermeasures, a sizable share of the burden of software warfare defense inevitably must fall back upon its human aspect: deterring and–if deterrence fails–catching human traitors. Here again the problems remain formidable.
Much relevant expertise and information for waging software warfare come in intangible forms, making their flow across national boundaries exceptionally difficult to control by conventional import-export restrictions. Unlike classic sabotage, even the most active software attacker has no need for access to illegal explosives or deadly weapons.
Software saboteurs may be much easier to recruit than traditional saboteurs. Because they do not need to work with hazardous chemical, biological or nuclear materials, software saboteurs won't have much fear of causing immediate physical injury to themselves or others. Because logic bombs and other malicious bugs can be activated long after they are planted, or may never actually surface, the saboteur's act against his or her country may stir few pangs of guilt. And because these bugs can be so cleverly programmed that they leave frustratingly few traces, chances of getting caught are greatly lessened if not totally removed. Should a trail lead back to the saboteur, it might still be particularly hard to convict him or her of deliberate sabotage; in many cases the criminal act and intent may be hard to distinguish from mere sloppiness or oversight.
Certainly, more up-to-date laws specifically drafted with software warfare in view could assist in deterring and prosecuting those who would engage in this type of sabotage. These new laws could build upon the experience gained from recent legislation designed to tackle the related though less formidable problems of civilian computer crime. Yet it is safe to say that even the most deftly drafted statute cannot eliminate totally the basic threat that software warfare now presents U.S. and Western security.
Military training and doctrine, too, have a crucial role to play. Units like the Army's Training and Doctrine Command (TRADOC) can alert and educate commanders who must directly face the software warfare threat. Some of this training could carry over from training in classic unconventional warfare, like that waged by U.S. Special Forces. (A Soviet counterpart is SPETSNAZ, Special Purpose Forces, in part trained as sabotage troops.) But software warfare is so different from most unconventional warfare missions that new military doctrine is also needed. Training for defense against software attack ideally calls for a new breed of military and civilian experts who command both the necessary mathematical skills and a deep knowledge of Soviet and Chinese tenets of warfare.
In framing this doctrine and other countermeasures, it is important to understand that each of the U.S. armed forces faces its own unique software warfare problems.
The CALS computerized logistics proposal highlights the difference between the military goal of absolute software security and the more modest, cost-effective security standards typical of business. Between the two is a key fault line that future software saboteurs will undoubtedly attempt to exploit.
Defense planners also need to take a hard look at software warfare's multinational aspects. In Department of Defense circles, serious concerns are already being voiced about increasing U.S. dependence on Japanese and other foreign suppliers of microelectronics parts, including computer chips used for vital U.S. defense equipment. Eyebrows also are being raised at defense data-processing contracts let to foreign companies, as well as U.S. software being written abroad. Of course, not all such arrangements are necessarily bad. Simultaneously, SDI plans call for participation by countries as diverse as Britain, Israel, Italy, Japan and West Germany, and CALS-related planning documents discuss putting Japanese-language characters in digital code. The challenge of software warfare is therefore starting to impinge directly on policy debates seemingly as far afield as those concerning international alliances and economic and trade matters.
In the end, as effective defense against software warfare clearly demands a synthesis of many separate countermeasures, the greatest risk is allowing this threat to remain unstated and unanalyzed, even as ever more ambitious and all-inclusive computerized military systems are put in place.
In trying to gauge software warfare`s future role and the urgency of planning for it across a broad spectrum of conflict, a specific type of generation gap may prove to be the most important factor of all. Software warfare is a concept born of several still very young technologies: computer, telecommunications, satellite and network. All over the world, a fresh generation of military officers steeped in the relevant technical ideas is now approaching senior rank in the armed forces of many nations. These younger officers are well-prepared–as an older generation on both our own and the other side may not be–to seize and act decisively on software warfare`s true potential.
The forerunner of some members of this new generation may be "Captain Midnight," one of the first high-tech saboteurs to carry out an operation with dramatic nationwide impact. On April 27, 1986, Captain Midnight, as he called himself, interfered with a satellite transmission to Home Box Office viewers in the U.S., interrupting a movie to run his own message–and producing widespread consternation in civilian and military circles. With some irony, the movie that Captain Midnight chose to interrupt was "The Falcon and the Snowman," which was based on one of the great U.S. spy scandals of the 1970s.
As a new generation comes of age, it is vital to bring software warfare into focus in broad arenas of U.S. national security planning. Until an equilibrium favorable to U.S. security and interests is credibly established, we should heed Adm. Eccles' warning. Technological ingenuity, he said, is always a two-edged sword, and the U.S. must be concerned about serious vulnerabilities in overlapping fields of weapons development, logistics, information security–and sabotage.