Computer Security in Aviation:
Vulnerabilities, Threats, and Risks

Peter G. Neumann
Principal Scientist, Computer Science Laboratory, SRI International, Menlo Park CA 94025-3493
Telephone 1-415-859-2375, valid until March 1998 (1-650-859-2375 after 1 Aug 1997)
E-mail Neumann@CSL.SRI.com ; WorldWideWeb http://www.csl.sri.com/neumann.html

International Conference on Aviation Safety and Security in the 21st Century, 13-15 January 1997; White House Commission on Safety and Security, and George Washington University

Abstract. Concerning systems that depend on computers and communications, we define security to involve the prevention of intentional and -- to a considerable extent -- accidental misuse whose occurrence could compromise desired system behavior. This position paper addresses some of the fundamental security-related risks that arise in the context of aviation safety and reliability. We observe that many of the past accidents could alternatively have been caused intentionally -- and in some cases could be recreated maliciously today.

We first examine characteristic security vulnerabilities and risks with respect to aviation and its supporting infrastructure, and recall some previous incidents. We consider primarily commercial air travel, but also note some related problems in military applications. We then consider what crises are possible or indeed likely, and what we might do proactively to prevent disasters in the future.

Brief Summary of Security-Relevant Problems

An overall system perspective is essential. Security is tightly coupled with safety and reliability, and must not be ignored or relegated to incidental concerns. We take a broad view here of the problems of attaining security and safety, and consider these problems as a unified global system/network/enterprise problem. (See References 2 and 3 for extensive background, with considerable emphasis on safety as well as security, together with an integrative view that encompasses both. See also Reference 4 for some specific recommendations relating to the computer-communication security infrastructure, the role of cryptography, and the system development process.)

Security vulnerabilities are ubiquitous. Most computer operating systems have weak authentication and are relatively easy to penetrate. Most such systems also have weak access controls and tend to be poorly configured, and as a result are relatively easy to misuse once initial access has been attained. These systems often have monitoring facilities that are ill suited to determining when threats are mounting and what damage may have occurred. Consequently, misuse by outsiders and insiders is potentially easy to achieve and sometimes very difficult to detect.
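
To make the authentication point concrete, here is a minimal Python sketch -- added purely for illustration, with entirely hypothetical names and values -- contrasting a weak scheme that stores and compares plaintext passwords with a stronger one that stores only a random salt and an iterated hash:

```python
# Minimal sketch (illustrative only; all names and values are hypothetical).
import hashlib
import hmac
import os

def weak_check(stored_password: str, attempt: str) -> bool:
    # Weak scheme: passwords stored in the clear, so any disclosure of the
    # stored file reveals every account.
    return stored_password == attempt

def make_record(password: str):
    # Stronger scheme: store only a random salt and an iterated hash, so a
    # stolen file does not directly yield passwords.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def strong_check(salt: bytes, digest: bytes, attempt: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = make_record("correct horse")
assert strong_check(salt, digest, "correct horse")
assert not strong_check(salt, digest, "wrong guess")
```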

System safety depends on many factors. Safety typically depends upon adequate system security and adequate system reliability, among many other factors. It can be impaired by hardware and software problems, as well as by human fallibility and nonbenevolent operating environments. As a consequence, in many of the cases discussed here, an event that occurred accidentally could alternatively have been triggered intentionally, with or without malice. A conclusion from that observation is that a sensible approach to security must encompass a sensible approach to system safety and overall system reliability.

Threats to security and safety are ubiquitous. The range of threats that can exploit these vulnerabilities is enormous, stemming from possible terrorist activities, sabotage, espionage, industrial or national competition, copycat crimes, mechanical malfunctions, and human error. Attacks may involve Trojan-horse insertion and physical tampering, including retributive acts or harassment by disgruntled employees or former employees. Denial-of-service attacks are particularly insidious, because they are so difficult to defend against and because their effects can be devastating. Systems connected to the Internet or reachable by dial-up lines are potential victims of external penetrations. Even systems that appear to be completely isolated are subject to internal misuse. In addition, many of those seemingly isolated systems can be compromised remotely through their facilities for remote diagnostics and remote maintenance. Electromagnetic interference is a particularly complex type of threat. Unanticipated acts of God are also a source of threat -- for example, lightning or extreme weather conditions. Of increasing concern in aviation is the omnipresent threat of terrorism. With respect to safety, References 2 and 3 provide a chilling history of relevant computer-related problems.

The risks are ubiquitous. These vulnerabilities and the associated threats imply that the risks are considerable. Computer-related misuse may (for example) result in loss of confidentiality, loss of system integrity when systems are corrupted, loss of data integrity when data is altered, denials of service that render resources unavailable, or seemingly innocuous thefts of service. Such misuse may be intentional or accidental. It may be very difficult to detect, as in the case of a latent Trojan horse, or blatantly obvious, as in the case of a complete system wipeout -- with the usual spectrum of difficulty in between. More broadly, overall system risks include major air-traffic-control outages, airport closures, loss of aircraft, deaths of many passengers, and other major disturbances.

The interrelationships are complex. As stated above, security, safety, and reliability are closely interrelated, and the interrelationships can be subtle. In general, if a system is not adequately secure, it cannot be dependably reliable and cannot have any predictable availability; misuse could happen at any time. Similarly, if a system is not adequately reliable, it cannot be dependably secure; the security controls could be vitiated at any time. A simple example of a security-related reliability flaw occurred when MIT's CTSS (the first time-sharing system) spewed out the entire password file as the logon message of the day. (See Reference 3 for a more detailed discussion of the interrelationships.)
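
The following Python sketch is a loose, simplified reconstruction of that class of flaw (the CTSS details differed): an editor that always uses one fixed scratch-file name, so that two concurrent editing sessions silently share a single buffer:

```python
# Loose reconstruction of the failure class; paths and contents hypothetical.
import pathlib

SCRATCH = pathlib.Path("/tmp/editor.scratch")  # single fixed name: the flaw

def edit(buffer_text: str) -> None:
    # Every concurrent editor instance flushes to the same scratch file.
    SCRATCH.write_text(buffer_text)

def save(destination: pathlib.Path) -> None:
    # ...and publishes whatever the shared scratch file holds right now.
    destination.write_text(SCRATCH.read_text())

motd = pathlib.Path("/tmp/motd")
edit("Welcome to the system!\n")      # operator 2 drafts the login banner
edit("alice:secret1\nbob:secret2\n")  # operator 1's password edit then lands
                                      # in the SAME scratch file
save(motd)                            # operator 2 saves the banner -- and
                                      # publishes the password file instead
print(motd.read_text())               # passwords appear as the banner
```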

A Review of Past Incidents

Among a large collection of riskful events, Reference 2 includes many aviation-related cases -- with a wide variety of causes and an enormous range of effects. Two sections in that list are of particular interest here, namely, those relating to commercial aviation and to military aviation. We consider here just a few cases from that list. (The sections of that list on space and defense are also instructive, as are the lengthy sections relating to security and privacy.)

Radio-frequency spoofing of air-traffic control. Several people have masqueraded as air-traffic controllers on designated radio frequencies (in Miami; in Manchester, England; and in Virginia -- the ``Roanoke Phantom''), altering flight courses and causing serious confusion. (Some form of communication authentication might help mitigate problems of this type.)
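
As an illustration of what such authentication might look like on a digital datalink (voice radio would need a different mechanism), the following Python sketch appends a keyed message-authentication code to each instruction; the message format, key, and key-distribution story are hypothetical:

```python
# Sketch of keyed message authentication; all particulars are hypothetical.
import hashlib
import hmac

KEY = b"shared-secret-distributed-out-of-band"

def tag(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    return hmac.compare_digest(tag(message), received_tag)

clearance = b"AA123 descend and maintain FL240"
t = tag(clearance)
assert verify(clearance, t)                      # authentic controller
assert not verify(b"AA123 descend to FL100", t)  # spoofed message rejected
```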

Power and telecommunication infrastructural problems. Vulnerabilities of the power infrastructure and other computer problems have seriously affected air-traffic control (Chicago, Oakland, Miami, Washington DC, Dallas-Fort Worth, Cleveland, all three New York airports, Pittsburgh, etc.). An FAA report listed 114 major telecommunication outages in a 12-month period in 1990-91. Twenty air-traffic control centers were downed by a fiber-optic cable inadvertently cut by a farmer burying his cow (4 May 1991). The Kansas City ATC was brought down by a beaver-chewed cable (1990); other outages were due to lightning strikes, misplaced backhoe buckets, blown fuses, and various computer problems, as well as a 3-hour outage and airport delays in Boston that resulted from unmarked electronic components being switched. The AT&T outage of 17 September 1991 blocked 5 million calls and crippled air travel, with 1,174 flights cancelled or delayed. Many such cases have been recorded. (Much greater recognition is needed of the intricate ways in which air-traffic control depends on the power and telecommunication infrastructures.)

Fatal aircraft incidents. The list of computer-related aircraft accidents is not encouraging. Undeserved faith in the infallibility of computer systems and the people who use them played a role in the Korean Air Lines 007 shootdown, the Vincennes' Aegis shootdown of the Iranian Airbus, the F-15 shootdowns of two U.S. Black Hawks over Iraq, the Air New Zealand crash into Mt. Erebus, the Lauda Air thrust-reverser problem, Northwest flight 255, the British Midland 737 crash, several Airbus A320 crashes, the American Airlines Cali crash, and the Ilyushin Il-114 crash -- to name just a few.

Near-misses and near-accidents. Numerous near-misses have also been reported, and probably many more have not. The recent missile observed passing AA 1170 over Wallops Island reminds us that accidents can also be caused by friendly fire (as was indeed the case with the two UH-60 Black Hawks shot down by our own F-15Cs over Iraq). The sections in References 2 and 3 on commercial and military aviation are particularly well worth reviewing.

Electromagnetic interference. Interference seems to be a particularly difficult type of threat, although its effects on aircraft computers and communications are still inadequately understood. Passenger laptops with cable-attached devices appear to be a particularly risky source of in-flight radiation. EMI was considered one possible explanation for a U.S. Air Force F-16 accidentally dropping a bomb on rural west Georgia on 4 May 1989. EMI was the cited cause of several UH-60 Black Hawk helicopter hydraulic failures. Australia's Melbourne Airport reported serious effects on its RF communications, which were finally traced to a radiating video cassette recorder near the airport.

Risks inherent in developing complex systems. Computer-communication system difficulties associated with air-traffic control are of particular concern. Significant problems have arisen in computer-communication systems for air-traffic control and in procurements for military and commercial aviation and defense systems. Unfortunately, these problems are not unique to the aviation industry. There have been real fiascos elsewhere in attempts to develop large infrastructural computer-communication systems, which are increasingly dominated by their software complexity. For example, the experiences of system development efforts for the Social Security Administration, the IRS Tax Systems Modernization effort, and law enforcement merely reinforce the conclusion that the development of large systems can be a risky business. Another example is provided by the C-17 software and hardware problems; this case was cited by a GAO report as ``a good example of how not to approach software development when procuring a major weapons system.'' Unfortunately, we have too many such horrible ``good'' examples of what not to do, and very few examples of how systems can be developed successfully. In general, efforts to develop and operate complex computer-based systems and networks that must meet critical requirements have been monumentally unsuccessful -- particularly with respect to security, reliability, and survivability. We desperately need the ability to develop complex systems within budget, on schedule, and with high assurance of compliance with their stated requirements. (References 2 and 3 provide numerous examples of development fiascos.)

In some aircraft incidents, system design and implementation were problematic; in other cases, the human-computer interface design was implicated; in further cases, human error was involved. In some cases, there were multiple causes and the blame can be distributed. Unfortunately, catastrophes are often attributed to ``human error'' (on the part of pilots or traffic controllers) for problems that really originated within the systems or that can be attributed to poor interface design (which, ultimately, should be attributed to human problems -- on the part of designers, system developers, maintainers, operators, and users!).

There are many common threads among these cases (as well as many dissimilarities), which makes a careful study of causes and effects imperative. In particular, although most of the cases seem to have had some accidental contributing factors (except for the masqueraders and various terrorist incidents such as the Pan Am Lockerbie disaster), and some cases appear not to be computer related (TWA 800), there is much that can be learned concerning the potential security risks. As we discuss in the following section, many of the accidentally caused cases could alternatively have been triggered intentionally.

Possible Future Incidents

If accidental outages and unintended computer-related problems can cause this much trouble, just think what maliciously conceived coordinated attacks could do -- particularly well-conceived attacks striking at weak links in the critical infrastructure! On one hand, attacks need not be very high-tech -- under various scenarios, bribes, blackmail, explosives, and other strong-arm techniques may be sufficient; well-aimed backhoes can evidently have devastating effects. On the other hand, once a high-tech attack is conceived, its methods can be posted on underground bulletin boards and then exploited by others without extensive knowledge or understanding. Thus, a high level of expertise is no longer a prerequisite.

It is perhaps unwise in this written statement to be too explicit about scenarios for bringing down major components of the aviation infrastructure. There are always people who might want to try those scenarios, and one incident can rapidly be replicated; the copycat has at least nine lives (virtually). Instead, we consider here some of the factors that must be considered in assessing future risks to security, in assessing the safety and reliability that in turn depend upon adequate security, and in efforts to avoid future disasters.

Targets. The air-traffic-control system is itself a huge target. Physical and logical attacks on computers, communications, and radars are all possible. Any use of the Internet for intercommunications could create further risks. Many airports represent vital targets, and the disruptions caused by outages at any major airport are typically felt worldwide. Individual aircraft of course also present highly vulnerable targets. In principle, sectoring of the en-route air-traffic facilities provides some redundancy in the event that only a single ATC center is affected; however, that is not a sufficient defense against coordinated simultaneous attacks. Overall, the entire gamut of security threats noted above is potentially relevant.

Attack modes. We must anticipate misuse by insiders and attacks by outsiders, including privacy violations, Trojan horses and other integrity attacks, extensive denials of service, physical attacks such as cable cuts and bombs, and electromagnetic and other forms of interference -- to name just a few. There are also more passive attacks, such as wiretaps and electronic eavesdropping -- perhaps gathering information useful for subsequent attacks.

Weak links. Many of the illustrative-risks cases cited in Reference 2 required a confluence of several causes rather than just a single-point failure. The 1980 ARPAnet collapse resulted from bits dropped in a memory that had no error checking, combined with an overly lazy garbage-collection algorithm. The 1986 separation of New England from the rest of the ARPAnet occurred because seven trunk lines all went through the same cable, which was cut in White Plains, NY. Security is a weak-link problem, but compromises of security often involve exploitation of multiple vulnerabilities, and in many instances multiple exploitations are not significantly more difficult to perpetrate than single-point exploitations. Consequently, trying to avoid single weak links is not enough to ensure the absence of security risks. The basic difficulty is that there are too many weak links, and in some cases -- it would seem -- nothing but weak links. Indeed, the situation is not generally improving, and we can expect systems in the future to continue to have many vulnerabilities -- although some defenses may be locally stronger.
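
A small Python sketch of the first ingredient -- memory or messages carried with no error checking -- shows how even cheap redundancy exposes the kind of single dropped bit that triggered the 1980 collapse; the CRC here merely stands in for whatever checking the 1980-era hardware lacked, and the status message is invented:

```python
# Sketch: a single flipped bit in an unchecked status word goes unnoticed,
# but a simple CRC exposes it. (Message and figures are illustrative.)
import binascii

status = bytearray(b"node 50 up, sequence 8")
crc = binascii.crc32(status)           # redundancy computed at the sender

status[5] ^= 0x01                      # one flipped bit: "node 50" -> "node 40"
assert binascii.crc32(status) != crc   # the receiver detects the corruption
print("corruption detected")
```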

Global problems with local causes. Global problems can result from seemingly isolated events, as exhibited by the 1960s power-grid collapses, the 1980 ARPAnet collapse that began at a single node and soon brought down every node, the self-propagating 1990 AT&T long-distance collapse, and a new flurry of widespread west-coast power outages in the summer of 1996 -- all of which seemingly began with single-point failures.

Malicious intent versus accidents. In many cases, air-traffic control and aviation are dependent on our critical infrastructures (e.g., telecommunications, power distribution, and many associated control systems). As noted above, some of the types of situations that did or could occur accidentally could also have been or could still be triggered intentionally. Many of the far-reaching single-point failures that involve cable cuts could have been triggered maliciously. In addition, there are various application areas in which intentional illegal acts can masquerade as apparent accidents.

Terrorism and sabotage. Incentives for terrorist and other information-warfare activities seem to be on the rise. The potential for massive widespread disruption or for intense local disruption is ever greater -- especially via denial-of-service attacks. Increasingly, the widespread availability of system-cracking software tools suggests that certain types of attacks may become more frequent as the attack techniques become widely known and adequate defenses fail to materialize. For example, the SYN-flooding denial-of-service attack on the Internet service provider PANIX recently inspired an even more aggressive and more damaging attack on WebCom that affected some 3,000 websites over an outage of about 40 hours on a very busy pre-Christmas business weekend. (See the on-line Risks Forum, volume 18, issues 45, 48, and 69 for further details.)
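
The mechanism behind such SYN-flooding attacks can be simulated in a few lines of Python: each spoofed connection request reserves a slot in a fixed-size table of half-open connections, and legitimate clients are refused once the table fills. The table size and addresses below are invented for illustration:

```python
# Simulated SYN-flood mechanism; sizes and addresses are illustrative.
BACKLOG_SIZE = 128
half_open = set()

def receive_syn(source_address: str) -> str:
    if len(half_open) >= BACKLOG_SIZE:
        return "refused"              # table exhausted: denial of service
    half_open.add(source_address)     # slot held awaiting an ACK that the
    return "syn-ack sent"             # forged sender will never send

for i in range(BACKLOG_SIZE):         # attacker: forged, unreachable sources
    receive_syn(f"10.0.{i // 256}.{i % 256}")

print(receive_syn("legitimate.client"))  # -> refused
```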

The feasibility and likelihood of coordinated attacks. Because of the increased use of the Internet, information exchange is very easy and inexpensive. Furthermore, even a single individual can develop simultaneous attacks, launched from many different sites, that strike globally wherever vulnerabilities exist. We must recognize that our computer-communication infrastructure is badly flawed and that our electronic defenses are fundamentally inadequate. Not surprisingly, our ability to resist well-conceived coordinated attacks is even worse. Consequently, we must expect to see large-scale coordinated attacks that will be very difficult to detect, diagnose, and prevent -- and difficult to contain once they are initiated. We must plan our defenses accordingly.

Effects of system and operational complexity. Systems with critical requirements tend to have substantial amounts of software devoted to attaining security, safety, and reliability. Attempts to develop large and very complex systems that are really dependable tend to introduce new risks, particularly when the additional defensive software is exercised only under extreme and often unpredictable circumstances. In many critical systems, as much as half of the software may be dedicated to techniques for attempting to increase security, reliability, and safety.
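
As a simple example of such defensive software, consider a majority voter of the kind used in redundant flight-control channels; the Python sketch below (with hypothetical values, and exact-equality voting standing in for the tolerance bands a real system would need for sensor data) masks a single faulty channel:

```python
# Majority voter masking one faulty channel; values are hypothetical.
from collections import Counter

def vote(a: float, b: float, c: float) -> float:
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count >= 2:
        return value                   # at least two channels agree
    raise RuntimeError("no majority -- fall back to a safe/manual mode")

print(vote(10000.0, 10000.0, 10975.0))  # two healthy channels outvote one
```

Note that the voter itself is exactly the kind of rarely exercised defensive code the paragraph describes: a bug in it surfaces only when a channel has already failed.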

Increasingly widespread opportunities for misuse. Everyone seems to be jumping on the Internet and the WorldWideWeb, with their inherent dependence on software of completely unknown trustworthiness. The ease with which web pages of the CIA, DoJ, NASA, and the U.S. Air Force have been altered by intruders merely hints at the depth of the problem. Furthermore, those intruders typically acquired the privileges necessary to do much greater damage than was actually observed. As air-industry-related activities become more dependent on the Internet and on computer systems, the risks become ever greater. One example relevant to public transportation is provided by the breakdown of the Amtrak ticket and information system on 29 November 1996, which brought the rail system to its knees; employees had to resort to manual ticketing operations, but with no on-line schedules and no hardcopy backups.

International scope. The problems of the Internet are worldwide, just as are the problems of ensuring the safety and security of air travel. We are increasingly confronted with problems that are potentially worldwide in scope -- and in some cases beyond our control.

There are no easy answers. Security, safety, and reliability are each, by themselves, very difficult problems. The combination of all three is seemingly even more complicated. But that combination cries out for a much more fundamental approach -- one that characterizes the overall system requirements a priori, carefully controls system procurements and developments, enforces compliance with the requirements, and continues that control throughout system operation. Simplistic solutions are very risky.

Conclusions

Total integration. Security, safety, and reliability of the aviation infrastructure must be thoroughly integrated throughout the entire infrastructure, addressing computer systems, computer networks, public-switched networks, power-transmission and -distribution facilities, the air-traffic-control infrastructure, and all of the interactions and interdependencies among them.

Technology. Potentially useful technology is emerging from the R&D community, but is typically lacking in robustness. The desired functionality is difficult to attain using only commercially available systems. Further research and prototype development are fundamentally needed, particularly with respect to composing dependable systems out of less dependable components in a way that leads to predictable results. However, greater incentives are needed to stimulate the development of much more robust infrastructures.
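
One way to make ``predictable results'' concrete is the elementary reliability arithmetic of majority voting: given n independent replicas, each working with probability p, the composed system works whenever a majority does. A short Python calculation (with illustrative figures) shows both the promise and the limits of composing dependable systems from less dependable components:

```python
# Probability that a majority of n independent replicas works, given
# per-replica reliability p. Figures are illustrative.
from math import comb

def majority_reliability(p: float, n: int) -> float:
    k = n // 2 + 1                     # smallest majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(majority_reliability(0.95, 1))   # 0.95    -- a single replica
print(majority_reliability(0.95, 3))   # ~0.993  -- 2-of-3 voting helps...
print(majority_reliability(0.70, 3))   # ~0.784  -- ...only if replicas are
                                       # fairly good (and truly independent)
```

The arithmetic assumes independent failures; common-mode flaws, such as a shared software bug, can erase the apparent gain entirely.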

Products. The public, our Government, and indeed our entire public infrastructure depend vitally on commercial technological developments -- particularly our computer-communication systems -- for the dependability of that infrastructure. The Government must encourage developers to provide better security as a part of their normal product lines, and to address safety and reliability much more consistently. Operating systems, networking, and cryptographic policy all play a role.

People. People are always a potential source of risk, even when they are well meaning. Much greater awareness of the threats, vulnerabilities, and risks is essential on the part of everyone involved. Better education and training are absolutely essential with respect to all of the attributes of security, safety, and reliability. Computer literacy is increasingly necessary for all of us.

Historical perspective. This is not a new topic. The author worked with Alex Blumenstiel in 1987 in developing an analysis of the threats perceived at that time. Those threats are still with us -- perhaps even more intensely than before -- and have been a continuing source of study. (See Reference 1.)

We have been fortunate thus far, in that security-relevant attacks have been relatively limited in their effects. However, the fact that so many reliability and safety violations have occurred reminds us that comparable intentional attacks could have been mounted. Moreover, the potential for enormous damage remains. We must not be complacent. Proactive prevention of serious consequences requires foresight and a commitment to the challenge ahead. The technology is ready for much better security than we have at present, although there will always be some risks. The Government has a strong role to play in ensuring that the information infrastructure is ready for prime time.

Perhaps the most fundamental question today is this: How much security is enough? The answer in any particular application must rely on a realistic consideration of all of the significant risks. In general, security is not a positive contributor to the bottom line, although it can be a devastating negative contributor following a real crisis. As a consequence, organizations tend not to devote adequate attention to security until after they have been burned. However, the potential risks in aviation are enormous, and are generally much worse than imagined. Above all, there is a serious risk of ignoring risks that are difficult to deal with -- unknown, unanticipated, or seemingly unlikely but with very serious consequences. For situations with potentially very high risks, as is the case in commercial aviation, significantly greater attention to security is prudent.
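
A back-of-the-envelope Python calculation illustrates one common way of framing the ``how much is enough'' question -- comparing annualized expected losses with and without a countermeasure against the countermeasure's cost. All figures below are invented, and such expected-value arithmetic systematically understates exactly the rare, catastrophic risks warned about above:

```python
# Back-of-the-envelope comparison of annualized expected losses with and
# without a countermeasure. All figures are invented for illustration.
def annual_expected_loss(incidents_per_year: float,
                         loss_per_incident: float) -> float:
    return incidents_per_year * loss_per_incident

baseline = annual_expected_loss(0.02, 50_000_000)   # rare but catastrophic
defended = annual_expected_loss(0.002, 50_000_000)  # 10x likelihood reduction
countermeasure_cost = 300_000

benefit = baseline - defended                       # $900,000 per year
print("worthwhile" if benefit > countermeasure_cost else "not worthwhile")
```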

References

1. Alexander D. Blumenstiel, Guidelines for National Airspace System Electronic Security, DOT/RSPA/Volpe Center, 1987. This report considers the electronic security of NAS Plan and other FAA ADP systems. See also Alex D. Blumenstiel and Paul E. Manning, Advanced Automation System Vulnerabilities to Electronic Attack, DoT/RSPA/TSC, 11 July 1986, and an almost annual subsequent series of reports -- for example, addressing accreditation (1990, 1991, 1992), certification (1992), air-to-ground communications (1993), ATC security (1993), and communications, navigation, and surveillance (1994). For further information, contact Alex at 1-617-494-2391 (Blumenstie@volpe1.dot.gov) or Darryl Robbins, FAA Office of Civil Aviation Security Operations, Internal and AIS Branch.

2. Peter G. Neumann, Illustrative Risks to the Public in the Use of Computer Systems and Related Technology. [This document is updated at least eight times a year, and is available for anonymous ftp as a PostScript file at ftp://ftp.csl.sri.com/illustrative.PS or ftp://ftp.sri.com/risks/illustrative.PS . If you cannot print PostScript, I would be delighted to send you a hardcopy. The compilation of mostly one-line summaries is currently 19 pages, double-columned, in 8-point type. It grows continually.]

3. Peter G. Neumann, Computer-Related Risks, Addison-Wesley, 1995.

4. Peter G. Neumann, Security Risks in the Emerging Infrastructure, Testimony for the U.S. Senate Permanent Subcommittee on Investigations of the Senate Committee on Governmental Affairs, 25 June 1996. [http://www.csl.sri.com/neumann.html/neumannSenate96.html , or browsable from within my web page http://www.csl.sri.com/neumann.html .]

5. Computers at Risk: Safe Computing in the Information Age, National Academy Press, 5 December 1990. [Final report of the National Research Council System Security Study Committee.]

6. Information Security: Computer Attacks at Department of Defense Pose Increasing Risks, U.S. General Accounting Office, May 1996, GAO/AIMD-96-84.

7. Cryptography's Role In Securing the Information Society, National Academy Press, prepublication copy, 30 May 1996; bound version in early August 1996. [Final report of the National Research Council System Cryptographic Policy Committee. The executive summary is on the World-Wide Web at http://www2.nas.edu/cstbweb .]

8. The Unpredictable Certainty: Information Infrastructure Through 2000, National Academy Press, 1996. [Final report of the NII 2000 Steering Committee.]

Personal Background

In my 43 and one-half years in various capacities as a computer professional, I have long been concerned with security, reliability, human safety, system survivability, and privacy in computer-communication systems and networks, and with how to develop systems that can dependably do what is expected of them. For example, I have been involved in designing operating systems (Multics) and networks, secure database-management systems (SeaView), and systems that can monitor system behavior and seek to identify suspicious or otherwise abnormal behavior (IDES/NIDES/EMERALD). I have also been seriously involved in identifying and preventing risks. Some of this experience is distilled into my recent book, Computer-Related Risks (Reference 3).

In addition to projects involving computer science and systems, I have worked in many application areas -- including (for example) national security, law enforcement, banking, process control, air-traffic control, aviation, and secure space communications (CSOC). I participated in SRI projects for NASA, one in the early 1970s on a prototype ultrareliable fly-by-wire system, and another in 1985 in which I provided preliminary computer-communication security requirements for the space station. Perhaps most relevant here, the 1987 study I did for Alex Blumenstiel and Bob Wiseman (Department of Transportation, Cambridge, Mass.) specifically addressed computer-communication security risks in aviation. Alex has continued to refine that analysis. (See Reference 1.)

I was a member of the 1994-96 National Research Council committee study of U.S. cryptographic policy, Cryptography's Role In Securing the Information Society (Reference 7), and the 1988-90 National Research Council study report, Computers at Risk (Reference 5). I am chairman of the Association for Computing (ACM) Committee on Computers and Public Policy, and Moderator of its widely read Internet newsgroup Risks Forum (comp.risks). (Send one-line message ``subscribe'' to risks-request@CSL.sri.com for automated subscription to the on-line newsgroup.)

I am a Fellow of the American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers, and the Association for Computing (ACM). My present title is Principal Scientist in the Computer Science Laboratory at SRI International (not-for-profit, formerly Stanford Research Institute), where I have been since 1971 -- after ten years at Bell Telephone Laboratories in Murray Hill, New Jersey, and a year teaching at the University of California at Berkeley. I have doctorates from Harvard (1961) and the Technische Hochschule, Darmstadt, Germany (1960, obtained while I was on a Fulbright from 1958 to 1960).