CDRL A001 Final Report December 28, 2004

Principled Assuredly Trustworthy
Composable Architectures
 
Final Report
Contract number N66001-01-C-8040
DARPA Order No. M132
SRI Project P11459

Submitted by: Peter G. Neumann, Principal Investigator
Principal Scientist, Computer Science Laboratory
SRI International EL-243, 333 Ravenswood Ave
Menlo Park, California 94025-3493, USA
Neumann@csl.sri.com; http://www.csl.sri.com/neumann
Phone: 1-650-859-2375; Fax: 1-650-859-2844

Prepared for:
Contracting Officer, Code D4121
SPAWAR Systems Center
San Diego, California

Approved:
Patrick Lincoln, Director
Computer Science Laboratory
William Mark, Vice President
Information and Computing Sciences Division

This report is available on-line for browsing

http://www.csl.sri.com/neumann/chats4.html
and also for printing or displaying

http://www.csl.sri.com/neumann/chats4.pdf

http://www.csl.sri.com/neumann/chats4.ps

Contents

  • Contents
  • Preface
  • Abstract
  • Executive Summary
  • Roadmap of This Report
  • 1 The Foundations of This Report
  • 2 Fundamental Principles of Trustworthiness
  • Synopsis
  • 2.1 Introduction
  • 2.2 Risks Resulting from Untrustworthiness
  • 2.3 Trustworthiness Principles
  • 2.3.1 Saltzer-Schroeder Security Principles, 1975
  • 2.3.2 Related Principles, 1969 and Later
  • 2.3.3 Principles of Secure Design (NSA, 1993)
  • 2.3.4 Generally Accepted Systems Security Principles (I2F, 1997)
  • 2.3.5 TCSEC, ITSEC, CTCPEC, and the Common Criteria (1985 to date)
  • 2.3.6 Extreme Programming, 1999
  • 2.3.7 Other Approaches to Principled Development
  • 2.4 Design and Implementation Flaws, and Their Avoidance
  • 2.5 Roles of Assurance and Formalism
  • 2.6 Caveats on Applying the Principles
  • 2.7 Summary
  • 3 Realistic Composability
  • Synopsis
  • 3.1 Introduction
  • 3.2 Obstacles to Seamless Composability
  • 3.3 System Decomposition
  • 3.4 Attaining Facile Composability
  • 3.5 Paradigmatic Mechanisms for Enhancing Trustworthiness
  • 3.6 Enhancing Trustworthiness in Real Systems
  • 3.7 Challenges
  • 3.8 Summary
  • 4 Principled Composable Trustworthy Architectures
  • Synopsis
  • 4.1 Introduction
  • 4.2 Realistic Application of Principles
  • 4.3 Principled Architecture
  • 4.4 Examples of Principled Architectures
  • 4.5 Openness Paradigms
  • 4.6 Summary
  • 5 Principled Interface Design
  • Synopsis
  • 5.1 Introduction
  • 5.2 Fundamentals
  • 5.2.1 Motivations for Focusing on Perspicuity
  • 5.2.2 Risks of Bad Interfaces
  • 5.2.3 Desirable Characteristics of Perspicuous Interfaces
  • 5.2.4 Basic Approaches
  • 5.2.5 Perspicuity Based on Behavioral Specifications
  • 5.2.6 System Modularity, Visibility, Control, and Correctness
  • 5.3 Perspicuity through Synthesis
  • 5.3.1 System Architecture
  • 5.3.2 Software Engineering
  • 5.3.3 Programming Languages and Compilers
  • 5.3.4 Administration and System Operation
  • 5.3.5 No More and No Less
  • 5.3.6 Multilevel Security and Capabilities
  • 5.4 Perspicuity through Analysis
  • 5.4.1 General Needs
  • 5.4.2 Formal Methods
  • 5.4.3 Ad-Hoc Methods
  • 5.4.4 Hybrid Approaches
  • 5.4.5 Inadequacies of Existing Techniques
  • 5.5 Pragmatics
  • 5.5.1 Illustrative Worked Examples
  • 5.5.2 Contemplation of a Specific Example
  • 5.6 Conclusions
  • 6 Assurance
  • Synopsis
  • 6.1 Introduction
  • 6.2 Foundations of Assurance
  • 6.3 Approaches to Increasing Assurance
  • 6.4 Formalizing System Design and Development
  • 6.5 Implementation Consistency with Design
  • 6.6 Static Code Analysis
  • 6.7 Real-Time Code Analysis
  • 6.8 Metrics for Assurance
  • 6.9 Assurance-Based Risk Reduction
  • 6.10 Conclusions on Assurance
  • 7 Practical Considerations
  • Synopsis
  • 7.1 Risks of Short-Sighted Optimization
  • 7.2 The Importance of Up-Front Efforts
  • 7.3 The Importance of Whole-System Perspectives
  • 7.4 The Development Process
  • 7.4.1 Disciplined Requirements
  • 7.4.2 Disciplined Architectures
  • 7.4.3 Disciplined Implementation
  • 7.5 Disciplined Operational Practice
  • 7.5.1 Today's Overreliance on Patch Management
  • 7.5.2 Architecturally Motivated System Administration
  • 7.6 Practical Priorities for Perspicuity
  • 7.7 Assurance Throughout Development
  • 7.7.1 Disciplined Analysis of Requirements
  • 7.7.2 Disciplined Analysis of Design and Implementation
  • 7.8 Assurance in Operational Practice
  • 7.9 Certification
  • 7.10 Management Practice
  • 7.10.1 Leadership Issues
  • 7.10.2 Pros and Cons of Outsourcing
  • 7.11 A Forward-Looking Retrospective
  • 8 Recommendations for the Future
  • 8.1 Introduction
  • 8.2 General R&D Recommendations
  • 8.3 Some Specific Recommendations
  • 8.4 Architectures with Perspicuous Interfaces
  • 8.5 Other Recommendations
  • 9 Conclusions
  • 9.1 Summary of This Report
  • 9.2 Summary of R&D Recommendations
  • 9.3 Risks
  • 9.4 Concluding Remarks
  • Acknowledgments
  • A Formally Based Static Analysis (Hao Chen)
  • A.1 Goals of the Berkeley Subcontract
  • A.2 Results of the Berkeley Subcontract
  • A.3 Recent Results
  • A.4 Integration of Static Checking into EMERALD
  • B System Modularity (Virgil Gligor)
  • B.1 Introduction
  • B.2 Modularity
  • B.2.1 A Definition of "Module" for a Software System
  • B.2.2 System Decomposition into Modules
  • B.2.3 The "Contains" Relation
  • B.2.4 The "Uses" Relation
  • B.2.5 Correctness Dependencies Among System Modules
  • B.2.6 Using Dependencies for Structural Analysis of Software Systems
  • B.3 Module Packaging
  • B.4 Visibility of System Structure Using Modules
  • B.4.1 Design Abstractions within Modules
  • B.4.2 Information Hiding as a Design Abstraction for Modules
  • B.4.3 Layering as a Design Abstraction Using Modules
  • B.5 Measures of Modularity and Module Packaging
  • B.5.1 Replacement Dependence Measures
  • B.5.2 Global Variable Measures
  • B.5.3 Module Reusability Measures
  • B.5.4 Component-Packaging Measures
  • B.6 Cost Estimates for Modular Design
  • B.7 Tools for Modular Decomposition and Evaluation
  • B.7.1 Modularity Analysis Tools Based on Clustering
  • B.7.2 Modularity Analysis Tools based on Concept Analysis
  • B.8 Virgil Gligor's Acknowledgments
  • References
  • Index
  • Preface

    This document is the final report for Task 1 of SRI Project 11459, Architectural Frameworks for Composable Survivability and Security, under DARPA Contract No. N66001-01-C-8040 as part of DARPA's Composable High-Assurance Trustworthy Systems (CHATS) program. Douglas Maughan was the DARPA Program Manager through the first two years of the project. He has been succeeded by Tim Gibson. 

    Acknowledgments are given at the end of the body of this report. However, the author would like to give special mention to the significant contributions of Drew Dean and Virgil Gligor. 

    This report contains no proprietary or sensitive information. Its contents may be freely disseminated. All product and company names mentioned in this report are trademarks of their respective holders.

    Abstract

    This report presents the results of our DARPA CHATS project. We characterize problems in and approaches to attaining trustworthy computer system and network architectures. The overall goal is to be better able to develop, and more rapidly configure, highly trustworthy systems and networks that can satisfy critical requirements (including security, reliability, survivability, performance, and other vital characteristics). We consider ways to enable effective systems to be predictably composed out of interoperable subsystems, so that they provide the required trustworthiness -- with reasonably high assurance that the critical requirements will be met under the specified operational conditions -- and (we hope) so that they do something sensible outside of that range of operational conditions. This work thus spans the entire set of goals of the DARPA CHATS program -- trustworthiness, composability, and assurance -- and much more.

    By trustworthiness, we mean simply worthy of being trusted to fulfill whatever critical requirements may be needed for a particular component, subsystem, system, network, application, mission, enterprise, or other entity. Trustworthiness requirements might typically involve (for example) attributes of security, reliability, performance, and survivability under a wide range of potential adversities. Measures of trustworthiness are meaningful only to the extent that (a) the requirements are sufficiently complete and well defined, and (b) satisfaction of those requirements can be accurately evaluated.

    This report should be particularly valuable to system developers who have the need and/or the desire to build systems and networks that are significantly better than today's conventional mass-market and custom software. The conclusions of the report can also be useful to government organizations that fund research and development efforts, and to procurers of systems that must be trustworthy.

    Executive Summary

    Anyone will renovate his science who will steadily look after the irregular phenomena. And when the science is renewed, its new formulas often have more of the voice of the exceptions in them than of what were supposed to be the rules. William James 

    In this report, we confront an extremely difficult problem -- namely, how to attain demonstrably trustworthy systems and networks that must operate under stringent requirements for security, reliability, survivability, and other critical attributes, and that must be able to evolve gracefully and predictably over time -- despite changes in requirements, hardware, communications technologies, and radically new applications. In particular, we seek to establish a sound basis for the creation of trustworthy systems and networks that can be easily composed out of subsystems and components, with predictably high assurance, and also do something sensible when forced to operate predictably outside of the expected normal range of operational conditions. Toward this end, we examine a set of principles for achieving trustworthiness, consider constraints that might enhance composability, pursue architectures and trustworthy subsystems that are inherently likely to result in trustworthy systems and networks, define constraints on administrative practices that reduce operational risks, and seek approaches that can significantly increase assurance. The approach is intended to be theoretically sound as well as practical and realistic. We also outline directions for new research and development that could significantly improve the future for high-assurance trustworthy systems.

    With respect to the future of trustworthy systems and networks, perhaps the most important recommendations involve the urgent establishment and use of soundly based, highly disciplined, and principle-driven architectures, as well as development practices that systematically encompass trustworthiness and assurance as integral parts of what must become coherent development processes and sound subsequent operational practices. Only then can we have any realistic assurances that our computer-communication infrastructures -- and indeed our critical national infrastructures -- will be able to behave as needed, in times of crisis as well as in normal operation. The challenges do not have easy turn-the-crank solutions. Addressing them requires considerable skills, understanding, experience, education, and enlightened management. Success can be greatly increased in many ways, including the availability of reliable hardware components, robust and resilient network architectures and systems, consistent use of good software engineering practices, careful attention to human-oriented interface design, well-conceived and sensibly used programming languages, compilers that are capable of enhancing the trustworthiness of source code, techniques for increasing interoperability among heterogeneous distributed systems and subsystems, methods and tools for analysis and assurance, design and development of systems that are inherently easier to administer and that provide better support for operational personnel, and many other factors. The absence or relative inadequacy with respect to each of these factors today represents a potential weak link in a process that is currently riddled with too many weak links. On the other hand, much greater emphasis on these factors can result in substantially greater trustworthiness, with predictable results. 

    The approach taken here is strongly motivated by historical perspectives of promising research efforts and extensive development experience (both positive and negative) relating to the development of trustworthy systems. It is also motivated by the practical needs and limitations of commercial developments as well as some initial successes in inserting significantly greater discipline into the open-source world. It provides useful guidelines for disciplined system developments and future research.

    This report cannot be everything for everyone, although it should have some appeal to a relatively broad range of readers. As a consequence of the inherent complexity associated with the challenges of developing and operating trustworthy systems and networks, we urge readers with experience in software development to read this report thoroughly, to see what resonates nicely with their experience. However, to the inexperienced developer or to the experienced developer who believes in seat-of-the-pants software creation, we offer a few words of caution. Many of the individual concepts should be well known to many of you. However, if you are looking for easy answers, you may be disappointed; indeed, each chapter should in turn convince you that there are no easy answers. On the other hand, if you are looking for some practical advice on how to develop systems that are substantially more trustworthy than what is commercially available today, you may find many encouraging directions to pursue.

    Although there are some novel concepts in this report, our main thrust involves various approaches that can make better use of what we have learned over the past many years in the research community and that can be used to better advantage in production systems. Many of the lessons relating to serious trustworthiness can be drawn from past research and prototype development. However, those lessons have been largely ignored in commercial development communities, and perhaps have also been insufficiently observed by the developers of source-available software. There are many directions herein -- both new and old -- for fruitful research and development that can help to fill in the gaps.

    We believe that observance of the approaches described here would greatly improve the present situation. The opportunities for this within the open-source community are considerable, although they are also applicable to closed-source proprietary systems (despite various caveats).

    Roadmap of This Report

    The outline of this report is as follows.

    1 The Foundations of This Report

    We essay a difficult task; but there is no merit save in difficult tasks.
    Ovid 

    In the context of this report, the term "trustworthy" is used in a broad sense that is meaningful with respect to any given set of requirements, policies, properties, or other definitional entities. It represents the extent to which those requirements are likely to be satisfied, under specified conditions. That is, trustworthiness means worthy of being trusted to satisfy the given expectations. For example, typical requirements might relate to attributes of security, reliability, performance, and survivability under a wide range of potential adversities. Each of these attributes has expectations that are specific to each layer of abstraction (and differing from one layer to another) -- for example, with respect to hardware, operating systems, applications, systems, networks, and enterprise layers.

    Note that these concepts are sometimes interrelated. Achieving survivability in turn requires security, reliability, and some measures of guaranteed performance (among other requirements). Human safety typically does as well. Many of these properties are meaningful in different ways at various layers of abstraction. At the highest layers, they tend to be emergent properties of systems in the large, or indeed entire enterprises -- that is, they are meaningful only in terms of the entire system complex rather than as lower-layer properties.

    The concept of trustworthiness is essentially indistinguishable from what is termed dependability [24, 25, 26, 202, 309], particularly within the IEEE and European communities. In its very early days, dependability was focused primarily on hardware faults, subsequently extended to software faults, and then generalized to a notion of faults that includes security threats. In that framework, dependability's generalized notions of fault prevention, fault tolerance, fault removal, and fault forecasting (the last of which has some of the elements of assurance) seem to encompass everything that trustworthiness does, albeit with occasionally different terminology. However, a recent paper, Basic Concepts and Taxonomy of Dependable and Secure Computing, by Avizienis, Laprie, Randell, Landwehr [26] (which is a gold mine for serious researchers) attempts to distinguish security as a specifically identifiable subset of dependability, rather than more generally treating it as one of various sets of application-relevant requirements subsumed under trustworthiness, as we do in this report. (Their new reformulation of security encompasses primarily confidentiality, integrity, and availability -- which in this report are only part of the necessary trustworthiness aspects that are required for security -- although it also alludes to other attributes of security. However, any differences between their paper and this report are largely academic -- we are each approaching the same basic problems.)

    We make a careful distinction throughout this report between trust and trustworthiness. Trustworthiness implies that something is worthy of being trusted. Trust merely implies that you trust something whether it is trustworthy or not, perhaps because you have no alternative, or because you are naive, or perhaps because you do not even realize that trustworthiness is necessary, or because of some other reason. We generally eschew the terms trust and trusted unless we specifically mean trust rather than trustworthiness. (The slogan on an old T-shirt worn around the National Security Agency was "In trust we trust.")

    A prophetic definition long ago due to Leslie Lamport can be paraphrased in the context of this report as follows: a distributed system is a system in which you have to trust components whose mere existence may be unknown to you. This is increasingly a problem on the World Wide Web, which is today's ultimate distributed system.

    There are many R&D directions that we believe are important for the short- and long-term futures -- for the computer and network communities at large, for DARPA developers, and for system and network developers generally. (We outline some recommendations for future R&D in Chapter 9.) The basis of our project is the exploration and exploitation of a few of the potentially most timely and significant research and development directions.

    Throughout the history of efforts to develop trustworthy systems and networks, there has been an unfortunate shortage of observable long-term progress relating specifically to the multitude of requirements for security. (See, for example, an interview with Richard Clarke [149] in IEEE Security and Privacy.) Blame can be widely distributed among governments, industries, and users -- both personal and corporate. Significant research and development results are typically soon forgotten or else deprecated in practice. Systems have come and gone, programming languages have come and (sometimes) gone, and certain specific systemic vulnerabilities have come and gone. However, many generic classes of vulnerabilities seem to persist forever -- such as buffer overflows, race conditions, off-by-one errors, mismatched types, divide-by-zero crashes, and unchecked procedure-call arguments, to name just a few. Overall, it is primarily only the principles that have remained inviolable -- at least in principle -- despite their having been widely ignored in practice. It is time to change that unfortunate situation, and honor the principles.
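    To make one of these persistent classes concrete, the following C sketch (invented solely for illustration, and not drawn from any particular system) shows a classic off-by-one error in a bounded copy, together with a corrected version whose bound accounts for the string terminator. Static checkers and careful code review catch exactly this kind of discrepancy.

      #include <stdio.h>
      #include <string.h>

      #define BUF_LEN 8

      /* Flawed: an input of exactly BUF_LEN characters causes the terminating
       * NUL to be written one byte past the end of buf (an off-by-one error). */
      static void copy_name_flawed(char *buf, const char *src) {
          size_t len = strlen(src);
          if (len > BUF_LEN)            /* should be >= BUF_LEN */
              len = BUF_LEN;
          memcpy(buf, src, len);
          buf[len] = '\0';              /* writes buf[BUF_LEN] when len == BUF_LEN */
      }

      /* Safer: the bound leaves room for the terminator, so every write stays
       * inside the buffer regardless of the input length. */
      static void copy_name_bounded(char *buf, const char *src) {
          size_t len = strlen(src);
          if (len >= BUF_LEN)
              len = BUF_LEN - 1;
          memcpy(buf, src, len);
          buf[len] = '\0';
      }

      int main(void) {
          char buf[BUF_LEN];
          copy_name_flawed(buf, "short");                /* safe only by luck */
          copy_name_bounded(buf, "much-too-long-input"); /* safely truncated */
          printf("stored: %s\n", buf);
          return 0;
      }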

    There is an unfortunate shortage of fundamental books that provide useful background for the material discussed in this report. Two recent books by Matt Bishop,  Computer Security: Art and Science [47] and Introduction to Computer Security [48], are worthy of particular note -- the former for its rather comprehensive and somewhat rigorous computer-science-based treatment of security, and the latter for its less formal approach that should be more accessible to others who are not computer scientists. Chuck Pfleeger's  Security in Computing [303], Ross Anderson's  Security Engineering [14], and Morrie Gasser's  Building a Secure Computer System [127] are also worthy sources. A recent book by Ian Sommerville [359] provides extensive background on software engineering.

    A paper [266] summarizing our conclusions as of early 2003 is part of the DISCEX3 proceedings, from the April 2003 DARPA Information Survivability Conference and Exposition.

    2 Fundamental Principles of Trustworthiness

    Synopsis

    Enormous benefits can result from basing requirements, architectures, implementations, and operational practices on well-defined and well-understood generally accepted principles.

    In this chapter, we itemize, review, and interpret various design and development principles that, if properly observed, can advance composability, trustworthiness, assurance, and other attributes of systems and networks, within the context of the CHATS effort. We consider the relative applicability of those principles, as well as some of the problems they may introduce.

    2.1 Introduction

    Everything should be made as simple as possible -- but no simpler.
    Albert Einstein 

    A fundamental hypothesis motivating this report is that achieving assurable trustworthiness requires much greater observance of certain underlying principles. We assert that careful attention to such principles can greatly facilitate the following efforts.

    The benefits of disciplined and principled system development cannot be overestimated, especially in the early stages of the development cycle. Principled design and software development can stave off many problems later on in implementation, maintenance, and operation. Huge potential cost savings can result from diligently observing relevant principles throughout the development cycle and maintaining discipline. But the primary concept involved is that of disciplined development; there are many methodologies that provide some kind of discipline, and all of those can be useful in some cases.

    In concept, most of the principles discussed here are fairly well known and understood by system cognoscenti. However, their relevance is often not generally appreciated by people with little development or operational experience. Not wishing to preach to the choir, we do not dwell on elaborating the principles themselves, which have been extensively covered elsewhere (see Section 2.3). Instead, we concentrate on the importance and applicability of these principles in the development of systems with critical requirements -- and especially secure systems and networks. The clear implication is that disciplined understanding and observance of the most effective of these principles can have enormous benefits to developers and system administrators, and also can aid user communities. However, we also explore various potential conflicts within and among these principles, and emphasize that those conflicts must be thoroughly understood and respected. System development is intrinsically complicated in the face of critical requirements. For example, it is important to find ways to manage that complexity, rather than to mistakenly believe that intrinsic complexity is avoidable by pretending to practice "simplicity". 

    2.2 Risks Resulting from Untrustworthiness

    As noted above, trustworthiness is a concept that encompasses being worthy of trust with respect to whatever critical requirements are in effect, often relating to security, reliability, guarantees of real-time performance and resource availability, survivability in spite of a wide range of adversities, and so on. Trustworthiness depends on hardware, software, communications media, power supplies, physical environments, and ultimately people in many capacities -- requirements specifiers, designers, implementers, users, operators, maintenance personnel, administrators, and so on.

    There are numerous examples of untrustworthy systems, networks, computer-related applications, and people. We indicate the extensive diversity of cases reported in the past with just a few tidbits relevant to each of various categories. See Computer-Related Risks [260] and the Illustrative Risks index [267] for numerous further examples and references involving many different types of system applications. (In the Illustrative Risks document, descriptors indicate relevance to loss of life, system survivability, various aspects of security, privacy, development problems, human interface confusions, and so on.) Some of these examples are revisited in Section 6.9, in considering how principled architectures and assurance-based risk reduction might have avoided the particular problems.

     

    Many systems actually have critical requirements that span multiple areas such as security, reliability, safety, and survivability. Although the cases listed above generally result from a problem in primarily one of these areas, there are many cases in which a maliciously induced security problem could alternatively have resulted from an accidentally triggered reliability problem, or -- similarly -- where a reliability/availability failure could also have been triggered intentionally. (For example, see Chapter 4 of [260].)

    One such application area with critical multidisciplinary requirements has become of particular interest since the November 2000 election, resulting from the emerging desire for completely electronic voting systems that ideally should have stringent requirements for system integrity, voter privacy, and accountability, and -- perhaps most important -- the prevention of uncontrolled human intervention during elections. Some of today's major all-electronic systems permit unmonitored human intervention (to recover from election-day glitches and to "fix" problems -- including during the voting and vote-counting procedures!), with no meaningful accountability. Some systems even routinely undergo code changes after the software has been certified! Thus, we are confronted with all-electronic paperless voting systems that have no independent audit record of what has happened in the system internals, with no real assurance that your vote was correctly recorded and counted, with no alternative means of recount, no systemic way of determining the presence of internal errors and fraud, and no evidence in case of protests. The design specs and code are almost always proprietary, and the system has typically been certified against very weak voluntary standards that do not adequately detect fraud and internal accidents, with evaluations that are commissioned and paid for by the vendors. In contrast, gambling machines are regulated with extreme care (for example, by the Nevada Gaming Commission), and held to extremely high standards.

    For a partial enumeration of recorded cases of voting-system irregularities over more than the past twenty years, see the online HTML version of [267] (click on Election Problems), or see the corresponding section in the .pdf and .ps versions.

    Section 5.2.2 reconsiders some of the above cases as well as others in which problems arose specifically because of problems involving the human interfaces.

    2.3 Trustworthiness Principles

    Willpower is always more efficient than mechanical enforcement, when it works. But there is always a size of system beyond which willpower will be inadequate.
    Butler Lampson  

    Developing and operating complex systems and networks with critical requirements demands a different kind of thinking from that used in routine programming. We begin here by considering various sets of principles, their applicability, and their limitations.

    We first consider the historically significant Saltzer-Schroeder principles, followed by several other approaches.

    2.3.1 Saltzer-Schroeder Security Principles, 1975

    The ten basic security principles formulated by Saltzer and Schroeder [334] in 1975 are all still relevant today, in a wide range of circumstances. In essence, these principles are summarized with a CHATS-relevant paraphrased explanation, as follows:

    * Economy of mechanism: Seek design simplicity (wherever and to whatever extent it is effective).
    * Fail-safe defaults: Deny accesses unless explicitly authorized (rather than permitting accesses unless explicitly denied).
    * Complete mediation: Check every access, without exception.
    * Open design: Do not assume that design secrecy will enhance security.
    * Separation of privileges: Use separate privileges or even multiparty authorization (e.g., two keys) to reduce misplaced trust.
    * Least privilege: Allocate minimal (separate) privileges according to need-to-know, need-to-modify, need-to-delete, need-to-use, and so on. The existence of overly powerful mechanisms such as superuser is inherently dangerous.
    * Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users. Avoid sharing of trusted multipurpose mechanisms, including executables and data -- in particular, minimizing the need for and use of overly powerful mechanisms such as superuser and FORTRAN common. As one example of the flouting of this principle, exhaustion of shared resources provides a huge source of covert storage channels, whereas the natural sharing of real calendar-clock time provides a source of covert timing channels. 
    * Psychological acceptability: Strive for ease of use and operation -- for example, with easily understandable and forgiving interfaces.
    * Work factor: Make cost-to-protect commensurate with threats and expected risks.
    * Recording of compromises: Provide nonbypassable tamperproof trails of evidence.

    Remember that these are principles, not hard-and-fast rules. By no means should they be interpreted as ironclad, especially in light of some of their potential mutual contradictions that require development tradeoffs. (See Section 2.6.)
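    As a small illustration of two of these principles, the following hedged C sketch (with invented subjects, objects, and policy) shows the structural shape of complete mediation and fail-safe defaults: every access request is routed through a single checking function, and that function denies unless it finds an explicit grant. In a real system the mediation point would of course be a protected reference monitor rather than an ordinary library function.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical access-control entry: subject, object, permitted mode. */
      struct ace {
          const char *subject;
          const char *object;
          char        mode;            /* 'r' or 'w' */
      };

      static const struct ace acl[] = {
          { "alice", "payroll.db", 'r' },
          { "bob",   "payroll.db", 'w' },
      };

      /* Complete mediation: every request passes through this one function.
       * Fail-safe defaults: the answer is "deny" unless an explicit grant exists. */
      static bool access_permitted(const char *subject, const char *object, char mode) {
          for (size_t i = 0; i < sizeof acl / sizeof acl[0]; i++) {
              if (strcmp(acl[i].subject, subject) == 0 &&
                  strcmp(acl[i].object, object) == 0 &&
                  acl[i].mode == mode)
                  return true;
          }
          return false;                /* default: deny */
      }

      int main(void) {
          printf("alice read payroll.db:  %s\n",
                 access_permitted("alice", "payroll.db", 'r') ? "granted" : "denied");
          printf("alice write payroll.db: %s\n",
                 access_permitted("alice", "payroll.db", 'w') ? "granted" : "denied");
          return 0;
      }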

    The Saltzer-Schroeder principles grew directly out of the Multics experience (e.g., [277]), discussed further at the end of this section. Each of these principles has taken on almost mythic proportions among the security elite, and to some extent buzzword cult status among many fringe parties. Therefore, perhaps it is not necessary to explain each principle in detail -- although there is considerable depth of discussion underlying each principle. Careful reading of the Saltzer-Schroeder paper [334] is recommended if it is not already a part of your library. Matt Bishop's security books [47, 48] are also useful in this regard, placing the principles in a more general context. In addition, Chapter 6 of Matt Curtin's book [89] on "developing trust" -- by which he might really hope to be "developing trustworthiness" -- provides some useful further discussion of these principles.

    There are two fundamental caveats regarding these principles. First, each principle by itself may be useful in some cases and not in others. The second is that when taken in combinations, groups of principles are not necessarily all reinforcing; indeed, they may seem to be mutually in conflict. Consequently, any sensible development must consider appropriate use of each principle in the context of the overall effort. Examples of a principle being both good and bad -- as well as examples of interprinciple interference -- are scattered through the following discussion. Various caveats are considered in the penultimate section.

    Table 1 examines the applicability of each of the Saltzer-Schroeder principles to the CHATS goals of composability, trustworthiness, and assurance (particularly with respect to security, reliability, and other survivability-relevant requirements).


    Table 1: Relevance of the Saltzer-Schroeder Principles to the CHATS Goals

    * Economy of mechanism
      Composability: Beneficial within a sound architecture; requires proactive design effort.
      Trustworthiness: Vital aid to sound design; exceptions must be completely handled.
      Assurance: Can simplify analysis.
    * Fail-safe defaults
      Composability: Some help, but not fundamental.
      Trustworthiness: Simplifies design, use, and operation.
      Assurance: Can simplify analysis.
    * Complete mediation
      Composability: Very beneficial with disjoint object types.
      Trustworthiness: Vital, but hard to achieve with no compromisibility.
      Assurance: Can simplify analysis.
    * Open design
      Composability: Design documentation is very beneficial among multiple developers.
      Trustworthiness: Secrecy of design is a bad assumption; open design requires strong system security.
      Assurance: Assurance is mostly irrelevant in badly designed systems; open design enables open analysis (+/-).
    * Separation of privileges
      Composability: Very beneficial if preserved by composition.
      Trustworthiness: Avoids many common flaws.
      Assurance: Focuses analysis more precisely.
    * Least privilege
      Composability: Very beneficial if preserved by composition.
      Trustworthiness: Limits flaw effects; simplifies operation.
      Assurance: Focuses analysis more precisely.
    * Least common mechanism
      Composability: Beneficial unless there is natural polymorphism.
      Trustworthiness: Finesses some common flaws.
      Assurance: Modularizes analysis.
    * Psychological acceptability
      Composability: Could help a little -- if not subvertible.
      Trustworthiness: Affects mostly usability and operations.
      Assurance: Ease of use can contribute.
    * Work factor
      Composability: Relevant especially for crypto algorithms, but not their implementations; may not be composable.
      Trustworthiness: Misguided if the system is easily compromised from below, spoofed, bypassed, etc.
      Assurance: Gives a false sense of security under nonalgorithmic compromises.
    * Compromise recording
      Composability: Not an impediment if distributed; real-time detection/response needs must be anticipated.
      Trustworthiness: After-the-fact, but useful.
      Assurance: Not a primary contributor.

    In particular, complete mediation, separation of privileges, and allocation of least privilege are enormously helpful to composability and trustworthiness. Open design can contribute significantly to composability, when subjected to internal review and external criticism. However, there is considerable debate about the importance of open design with respect to trustworthiness, with some people still clinging tenaciously to the notion that security by obscurity is sensible -- despite risks of many flaws being so obvious as to be easily detected externally, even without reverse engineering. Indeed, the recent emergence of very good decompilers for C and Java, along with the likelihood of similar reverse engineering tools for other languages, both suggest that such attacks are becoming steadily more practical. Overall, the assumption of design secrecy and the supposed unavailability of source code is often not a deterrent, especially with ever-increasing skills among black-box system analysts. However, there are of course cases in which security by obscurity is unavoidable -- as in the hiding of private and secret cryptographic keys, even where the cryptographic algorithms and implementations are public.
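    In day-to-day practice, separation of privileges and least privilege often reduce to the discipline of relinquishing elevated rights as soon as they are no longer needed -- and of verifying that the relinquishment actually took effect. The following POSIX C sketch is only illustrative (the target user and group IDs are hypothetical, and real systems must also handle supplementary groups, capabilities, and the like).

      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/types.h>

      /* Drop from elevated privileges to an unprivileged identity, and verify
       * that the drop cannot be undone.  Neglecting to check these return
       * values is itself a well-known class of flaw. */
      static void drop_privileges(uid_t run_uid, gid_t run_gid) {
          if (setgid(run_gid) != 0) {
              perror("setgid");
              exit(EXIT_FAILURE);
          }
          if (setuid(run_uid) != 0) {
              perror("setuid");
              exit(EXIT_FAILURE);
          }
          if (setuid(0) != -1) {       /* regaining root must now fail */
              fprintf(stderr, "privilege drop did not stick\n");
              exit(EXIT_FAILURE);
          }
      }

      int main(void) {
          /* Hypothetical unprivileged identity; real code would look it up.
           * The program must be started with sufficient privilege for the
           * drop to succeed. */
          drop_privileges((uid_t)65534, (gid_t)65534);
          printf("running with uid=%d gid=%d\n", (int)getuid(), (int)getgid());
          return 0;
      }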

    Fundamental to trustworthiness is the extent to which systems and networks can avoid being compromised by malicious or accidental human behavior and by events such as hardware malfunctions and so-called acts of God. In [264], we consider compromise from outside, compromise from within, and compromise from below, with fairly intuitive meanings. These notions appear throughout this report.

    There are other cases in theory where weak links can be avoided (e.g., zero-knowledge protocols that can establish a shared key without any part of the protocol requiring secrecy), although in practice they may be undermined by compromises from below (e.g., involving trusted and supposedly trustworthy insiders subverting the underlying operating systems) or from outside (e.g., involving penetrations of the operating systems and masquerading as legitimate users). For a fascinating collection of papers on vulnerabilities and ways to exploit weak links, see Ross Anderson's website:  
    http://www.cl.cam.ac.uk/users/rja14/

    From its beginning, the Multics development was strongly motivated by a set of principles -- some of which were originally stated by Ted Glaser and Peter Neumann in the first section of the very first edition of the Multics Programmers' Manual in 1965. (See http://multicians.org.) It was also driven by extremely disciplined development. For example, with almost no exceptions, no coding effort was begun until a written specification had been approved by the Multics advisory board; also with almost no exceptions, all of the code was written in a subset of PL/I just sufficient for the initial needs of Multics, for which the first compiler (Early PL/I, or EPL) had been developed by Doug McIlroy and Bob Morris.

    In addition to the Saltzer-Schroeder principles, further insights on principles and discipline relating to Multics can be found in a paper by Fernando Corbató, Saltzer, and Charlie Clingen [85] and in Corbató's Turing lecture [84].

    2.3.2 Related Principles, 1969 and Later

    Another view of principled system development was given by Neumann in 1969 [255], relating to what is often dismissed as merely "motherhood" -- but which in reality is both very profound and difficult to observe in practice. The motherhood principles under consideration in that paper (alternatively, you might consider them just as desirable system attributes) included automatedness, availability, convenience, debuggability, documentedness, efficiency, evolvability, flexibility, forgivingness, generality, maintainability, modularity, monitorability, portability, reliability, simplicity, and uniformity. Some of those attributes indirectly affect security and trustworthiness, whereas others affect the acceptability, utility, and future life of the systems in question. Considerable discussion in [255] was also devoted to (1) the risks of local optimization and the need for a more global awareness of less obvious downstream costs of development (e.g., writing code for bad -- or nonexistent -- specifications, and having to debug really bad code), operation, and maintenance (see Section 7.1 of this report); and (2) the benefits of higher-level implementation languages (which prior to Multics were rarely used for the development of operating systems [84, 85]).  

    In later work and more recently in [264], Neumann considered some extensions of the Saltzer-Schroeder principles. Although most of those principles might seem more or less obvious, they are of course full of interpretations and hidden issues. We summarize an extended set of principles here, particularly as they might be interpreted in the CHATS context.

    Table 2 summarizes the utility of the extended-set principles with respect to the three goals of the CHATS program acronym, as in Table 1.


    Table 2: Relevance of the Extended-Set Principles to the CHATS Goals

    * Sound architecture
      Composability: Can considerably facilitate composition.
      Trustworthiness: Can greatly increase trustworthiness.
      Assurance: Can increase assurance of design and simplify implementation analysis.
    * Minimization of trustworthiness
      Composability: Beneficial, but not fundamental.
      Trustworthiness: Very beneficial with a sound architecture.
      Assurance: Simplifies design and implementation analysis.
    * Abstraction
      Composability: Very beneficial with suitable independence.
      Trustworthiness: Very beneficial if composable.
      Assurance: Simplifies analysis by decoupling it.
    * Encapsulation
      Composability: Very beneficial if properly done; enhances integration.
      Trustworthiness: Very beneficial if composable; avoids certain types of bugs.
      Assurance: Localizes analysis to abstractions and their interactions.
    * Modularity
      Composability: Very beneficial if interfaces and specifications are well defined.
      Trustworthiness: Very beneficial if well specified; overmodularization impairs performance.
      Assurance: Simplifies analysis by decoupling it, if modules are well specified.
    * Layered protection
      Composability: Very beneficial, but may impair performance.
      Trustworthiness: Very beneficial if noncompromisible from above/within/below.
      Assurance: Structures analysis according to layers and their interactions.
    * Robust dependency
      Composability: Beneficial; can avoid compositional conflicts.
      Trustworthiness: Beneficial; can obviate design flaws based on misplaced trust.
      Assurance: Robust architectural structure simplifies analysis.
    * Object orientation
      Composability: Beneficial, but labor-intensive; can be inefficient.
      Trustworthiness: Can be beneficial, but complicates coding and debugging.
      Assurance: Can simplify analysis of design, and possibly of implementation as well.
    * Separation of policy and mechanism
      Composability: Beneficial, but both must compose.
      Trustworthiness: Increases flexibility and evolution.
      Assurance: Simplifies analysis.
    * Separation of duties
      Composability: Helpful indirectly, as a precursor.
      Trustworthiness: Beneficial if well defined.
      Assurance: Can simplify analysis if well defined.
    * Separation of roles
      Composability: Beneficial if roles are nonoverlapping.
      Trustworthiness: Beneficial if properly enforced.
      Assurance: Partitions analysis of design and operation.
    * Separation of domains
      Composability: Can simplify composition and reduce side effects.
      Trustworthiness: Allows finer-grain enforcement and self-protection.
      Assurance: Partitions analysis of implementation and operation.
    * Sound authentication
      Composability: Helps if uniformly invoked.
      Trustworthiness: Huge security benefits; aids accountability.
      Assurance: Can simplify analysis and improve assurance.
    * Sound authorization
      Composability: Helps if uniformly invoked.
      Trustworthiness: Controls use; aids accountability.
      Assurance: Can simplify analysis and improve assurance.
    * Administrative controllability
      Composability: Composability helps controllability.
      Trustworthiness: Good architecture helps controllability.
      Assurance: Controllability enhances operational assurance.
    * Comprehensive accountability
      Composability: Composability helps accountability.
      Trustworthiness: Beneficial for post-hoc analysis.
      Assurance: Can provide feedback for improved assurance.

    At this point in our analysis, it should be no surprise that all of these principles can contribute in varying ways to security, reliability, survivability, and other -ilities. Furthermore, many of the principles and -ilities are linked. We cite just a few of the interdependencies that must be considered.  

    For example, authorization is of limited use without authentication,  whenever identity is important. Similarly, authentication may be of questionable use without authorization. In some cases, authorization requires fine-grained access controls. Least privilege requires some sort of separation of roles, duties, and domains. Separation of duties is difficult to achieve if there is no separation of roles. Separation of roles, duties, and domains each must rely on a supporting architecture.
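    A hedged sketch of the first of these interdependencies: in the following illustrative C fragment (the session structure, principal names, and policy are all invented), the authorization check refuses even to consult policy for a principal that has not been authenticated, so the two mechanisms fail closed together.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical session state: who the principal claims to be, and
       * whether that claim has actually been verified. */
      struct session {
          const char *principal;
          bool        authenticated;
      };

      /* Authorization is meaningless for an unauthenticated identity, so the
       * check fails closed rather than consulting any policy at all. */
      static bool authorize(const struct session *s, const char *operation) {
          if (s == NULL || !s->authenticated)
              return false;            /* no authentication, no authorization */
          if (strcmp(operation, "read-audit-trail") == 0)
              return strcmp(s->principal, "auditor") == 0;
          return false;                /* fail-safe default */
      }

      int main(void) {
          struct session anon = { "auditor", false };
          struct session auth = { "auditor", true  };
          printf("unauthenticated: %s\n",
                 authorize(&anon, "read-audit-trail") ? "granted" : "denied");
          printf("authenticated:   %s\n",
                 authorize(&auth, "read-audit-trail") ? "granted" : "denied");
          return 0;
      }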

    The comprehensive accountability principle is particularly intricate, as it depends critically on many other principles being invoked. For example, accountability is inherently incomplete without authentication and authorization. In many cases, monitoring may be in conflict with privacy requirements and other social considerations [101], unless extremely stringent controls are enforceable. Separation of duties and least privilege are particularly important here. All accountability procedures are subject to security attacks, and are typically prone to covert channels as well. Furthermore, the procedures themselves must be carefully monitored. Who monitors the monitors? (Quis auditiet ipsos audites?)
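    One common mechanism underlying tamper-evident accountability is to chain each audit record to a digest of its predecessor, so that after-the-fact alteration of any record invalidates the remainder of the trail. The C sketch below is purely illustrative: it uses a toy 64-bit FNV-1a hash, which is not cryptographically strong, and it provides none of the protected storage or nonbypassability that a real audit facility would require.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Toy 64-bit FNV-1a hash -- illustrative only, NOT cryptographically strong. */
      static uint64_t fnv1a64(uint64_t h, const void *data, size_t len) {
          const unsigned char *p = data;
          for (size_t i = 0; i < len; i++) {
              h ^= p[i];
              h *= 1099511628211ULL;
          }
          return h;
      }

      struct audit_record {
          char     event[64];
          uint64_t chain;     /* digest of this event plus the previous record's digest */
      };

      /* Append an event, chaining it to the digest of the previous record. */
      static uint64_t audit_append(struct audit_record *rec, uint64_t prev_chain,
                                   const char *event) {
          snprintf(rec->event, sizeof rec->event, "%s", event);
          uint64_t h = fnv1a64(14695981039346656037ULL, &prev_chain, sizeof prev_chain);
          rec->chain = fnv1a64(h, rec->event, strlen(rec->event));
          return rec->chain;
      }

      int main(void) {
          struct audit_record log[3];
          uint64_t chain = 0;
          chain = audit_append(&log[0], chain, "login alice");
          chain = audit_append(&log[1], chain, "read payroll.db");
          chain = audit_append(&log[2], chain, "logout alice");

          /* A later verifier recomputes the chain; any edited record breaks it. */
          uint64_t check = 0;
          for (int i = 0; i < 3; i++) {
              uint64_t h = fnv1a64(14695981039346656037ULL, &check, sizeof check);
              check = fnv1a64(h, log[i].event, strlen(log[i].event));
              printf("record %d %s\n", i, check == log[i].chain ? "verified" : "TAMPERED");
          }
          return 0;
      }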

    2.3.3 Principles of Secure Design (NSA, 1993)

    Also of interest here is the 1993 set of principles (or perhaps metaprinciples?) of secure design [56], which emerged from an NSA ISSO INFOSEC Systems Engineering study on rules of system composition. The study was presented not as a finished effort, but rather as something that needed to stand the test of practice. Although there is some overlap with the previously noted principles, the NSA principles are enumerated here as they were originally documented. Some of these principles are equivalent to "the system should satisfy certain security requirements" -- but they are nevertheless relevant. Others might sound like motherhood. Overall, they represent some collective wisdom -- even if they are fairly abstract and incompletely defined.

    Considerable discussion of these metaprinciples is warranted. For example, "Every component in a system must operate in a security environment that is a subset of its specified environment" implies iteratively that maximum trust is required throughout design and implementation of the other components, which is a gross violation of our notion of minimization of what must be trustworthy. It would be preferable to require that each component check that the environment in which it executes is a subset of its specified environment -- which is closely related to Schroeder's notion of mutual suspicion [343], noted further down the list.
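    A hedged sketch of that alternative follows. Rather than assuming a benign environment, a component can verify a few properties of the environment it actually finds itself in before doing any real work, and refuse to proceed otherwise. The particular checks shown (not running as root, the standard descriptors being open, a restrictive umask) are merely illustrative of the idea of mutual suspicion, not a complete or recommended set.

      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <sys/stat.h>

      /* Verify a few illustrative properties of the execution environment;
       * the appropriate set of checks is of course application-specific. */
      static void check_environment(void) {
          /* Refuse to run with superuser privileges (least privilege). */
          if (geteuid() == 0) {
              fprintf(stderr, "refusing to run as root\n");
              exit(EXIT_FAILURE);
          }
          /* Require that stdin, stdout, and stderr are actually open, so that
           * files opened later cannot silently alias them. */
          for (int fd = 0; fd <= 2; fd++) {
              if (fcntl(fd, F_GETFD) == -1) {
                  exit(EXIT_FAILURE);
              }
          }
          /* Impose a restrictive umask rather than trusting the inherited one. */
          umask(077);
      }

      int main(void) {
          check_environment();
          printf("environment checks passed; proceeding\n");
          return 0;
      }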

    "A system is only as strong as its weakest link" is generally a meaningful statement. However, some weak links may be more devastating than others, so this statement is overly simplistic. In combination with least privilege, separation of domains, and some of the other principles noted previously, the effects of a particular weak link might be contained or controlled. But then, you might say, the weak link was not really a weak link. However, to a first approximation, as we noted above, weak links should be avoided where possible, and restricted in their effects otherwise, through sound architecture and sound implementation practice.

    2.3.4 Generally Accepted Systems Security Principles (I2F, 1997)

    The 1990 report of the National Research Council study group that produced Computers at Risk [83] included a recommendation that a serious effort be made to develop and promulgate a set of Generally Accepted Systems Security Principles (GASSP). That led to the creation of the International Information Security Foundation (I2SF). A draft of its GASSP document [279] is available online. A successor effort is under way, after a long pause.

    The proposed GASSP consists of three layers of abstraction: nine Pervasive Principles (relating to confidentiality, integrity, and availability), a set of 14 Broad Functional Principles, and a set of Detailed Principles (yet to be developed, because the largely volunteer project ran out of steam, in what Jim Horning refers to as a last gassp!). The GASSP effort thus far actually represents a very worthy beginning, and one more approach for those interested in future efforts. The top two layers of the GASSP principle hierarchy are summarized as follows.

    Pervasive Principles
    * PP-1. Accountability
    * PP-2. Awareness
    * PP-3. Ethics
    * PP-4. Multidisciplinary
    * PP-5. Proportionality
    * PP-6. Integration
    * PP-7. Timeliness
    * PP-8. Assessment
    * PP-9. Equity
    Broad Functional Principles
    * BFP-1. Information Security
    * BFP-2. Education and Awareness
    * BFP-3. Accountability
    * BFP-4. Information Management
    * BFP-5. Environmental Management
    * BFP-6. Personnel Qualifications
    * BFP-7. System Integrity
    * BFP-8. Information Systems Life Cycle
    * BFP-9. Access Control
    * BFP-10. Operational Continuity and Contingency Planning
    * BFP-11. Information Risk Management
    * BFP-12. Network and Infrastructure Security
    * BFP-13. Legal, Regulatory, and Contractual Requirements of Info Security
    * BFP-14. Ethical Practices

    The GASSP document gives a table showing the relationships between the 14 Broad Functional Principles and the 9 Pervasive Principles. That table is reproduced here as Table 3.


    Table 3: GASSP Cross-Impact Matrix
     
    PP:    PP-1 PP-2 PP-3 PP-4 PP-5 PP-6 PP-7 PP-8 PP-9
    BFP-1 X X X X X X X X X
    BFP-2 X X X X X
    BFP-3 X X X X X
    BFP-4 X X X X
    BFP-5 X X X X X X
    BFP-6 X X X X
    BFP-7 X X X X X X
    BFP-8 X X X X X X
    BFP-9 X X X X X X
    BFP-10 X X X X X
    BFP-11 X X X X X X X
    BFP-12 X X X X X
    BFP-13 X X X X X
    BFP-14 X X X X

    2.3.5 TCSEC, ITSEC, CTCPEC, and the Common Criteria (1985 to date)

    Any enumeration of relevant principles must note the historical evolution of evaluation criteria over the past decades -- from the 1985 DoD Trusted Computer System Evaluation Criteria (TCSEC, a.k.a. The Orange Book [249]) and the ensuing Rainbow Books, to the 1990 Canadian Trusted Computer Product Evaluation Criteria (CTCPEC, [64]), and the 1991 Information Technology Security Evaluation Criteria (ITSEC, [116]). These efforts have culminated in the international Common Criteria framework (ISO 15408 [172]), which represents the current state of the art in that particular evolutionary process. (Applicability to multilevel security is also addressed within the Common Criteria framework, although it is much more deeply embedded in the higher-assurance levels of the TCSEC.)

    2.3.6 Extreme Programming, 1999

    A seemingly radical approach to software development is found in the Extreme Programming (XP)  movement [33]. (Its use of "XP" considerably predates Microsoft's.) Although XP appears to run counter to most conventional programming practices, it is indeed highly disciplined. XP might be thought of as very small chief programmer teams somewhat in the spirit of a Harlan Mills'  Clean-Room approach, although it has no traces of formalism and is termed a lightweight methodology. It involves considerable emphasis on disciplined planning throughout (documented user stories, scheduling of relatively frequent small releases, extensive iteration planning, and quickly fixing XP whenever necessary), designing and redesigning throughout (with simplicity as a driving force, the selection of a system metaphor, and continual iteration), coding and recoding as needed (paired programmers working closely together, continual close coordination with the customer, adherence to agreed-upon standards, only one programmer pair may integrate at one time, frequent integration, deferred optimization, and no overtime pay), and testing repeatedly throughout (code must pass unit tests before release, tests must be created for each bug found, acceptance tests are run often, and the results are published).

    In essence, Extreme Programming seeks to have something running at the end of each period (e.g., each week) by deferring less important concepts until later. There is a stated desire to let business folks decide which features to implement, based on the experience with the ongoing development.
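    As a small illustration of the test-first flavor of these practices, the following C sketch (the function and its former bug are invented for illustration) shows a regression test written for a previously discovered failure and kept in the suite so that the bug cannot silently reappear.

      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical function that once mishandled the empty string; under a
       * test-first discipline, a regression test capturing that failure is
       * written before the fix and retained in the suite thereafter. */
      static int count_words(const char *s) {
          int count = 0, in_word = 0;
          for (; *s != '\0'; s++) {
              if (*s == ' ') {
                  in_word = 0;
              } else if (!in_word) {
                  in_word = 1;
                  count++;
              }
          }
          return count;
      }

      int main(void) {
          assert(count_words("") == 0);              /* regression test for the old bug */
          assert(count_words("one") == 1);
          assert(count_words("  two  words ") == 2);
          printf("all unit tests passed\n");
          return 0;
      }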

    Questions of how to address architecture in the large seem not to be adequately addressed within Extreme Programming (these questions are absolutely fundamental to the approach that we are taking in this report, but perhaps are considered extraneous to XP). The concept of deferring architectural design until later in the process may work well in small systems (where dynamic changes tend to be relatively local), but can seriously complicate development of highly complex systems. Perhaps if coupled with the principled architectures recommended here, Extreme Programming could be effective for larger development efforts. See the Web site noted in [33] for considerable background on the XP movement, including a remarkably lucid Frequently Asked Questions document contrasting XP with several other approaches (UML, RUP, CMM, Scrum, and FDD -- although this is a little like comparing apples and oranges). Wikipedia also has a useful analysis of so-called agile or lightweight methodologies, with relevant references (http://en.wikipedia.org/wiki/Agile_software_development).

    2.3.7 Other Approaches to Principled Development

    There are too many other design and development methodologies to enumerate here, ranging from very simple to quite elaborate. In some sense, it does not matter which methodology is adopted, as long as it provides some structure and discipline, and is relatively compatible with the abilities of the particular design and development team. For example, Dick Karpinski hands out a business card containing his favorite, Tom Gilb's Project Management Rules: (1) Manage critical goals by defining direct measures and specific targets; (2) Assure accuracy and quality with systematic project document inspections; (3) Control major risks by limiting the size of each testable delivery. These are nice goals, but depend on the skills and experience of the developers -- with only subjective evaluation criteria. Harlan Mills' "Clean-Room" technology has some elements of formalism that are of interest with respect to increasing assurance, although not specifically oriented toward security. In general, good development practice is a necessary prerequisite for trustworthy systems, as are means for evaluating that practice.

    2.4 Design and Implementation Flaws, and Their Avoidance

    Nothing is as simple as we hope it will be. Jim Horning  

    Some characteristic sources of security flaws in system design and implementation are noted in [260], elaborating on earlier formulations and refinements (e.g., [5, 271]). There are various techniques for avoiding those flaws, including sound architectures, defensively oriented programming languages, defensively oriented compilers, better runtime environments, and generally better software engineering practice.

     

    Useful techniques for detecting some of these vulnerabilities include defensive programming language design, compiler checks, and formal methods analyzing consistency of programs with specifications. Of particular interest is the use of static checking. Such an approach may be formally based, as in the use of model checking by Hao Chen, Dave Wagner, and Drew Dean (in the MOPS system, developed in part under our CHATS project). (See Appendix A.) Alternatively, there are numerous other static-checking approaches that require little or no formal specification, ranging in sophistication from lint to LCLint (Evans) to Extended Static Checking (Nelson, Leino, et al., DEC/Compaq SRC). Note that ESC is itself completely formally based, including its use of a theorem prover; indeed, it is a formal method that has some utility even in the absence of formal software specifications.

    Jim Horning notes that even partial specifications increase the power of the latter two, and provide a relatively gentle way to incorporate additional formalism into development. Strong type checking and model checking tend to expose various flaws, some of which are likely to be consequential to security and reliability. For example, Purify and similar tools are useful in catching memory leaks, array-bounds violations, and related memory problems. These and other analytic techniques can be very helpful in improving design soundness and code quality -- as long as they are not relied on by themselves as silver bullets.
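    As one concrete instance of the kind of flaw that such tools can flag, consider a time-of-check-to-time-of-use race, sketched below in C (the file name is hypothetical). Checkers that model sequences of security-relevant calls can warn that the check and the use refer to the same path name but not necessarily to the same file.

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Flawed pattern: between access() and open(), the path can be rebound
       * (e.g., replaced by a symbolic link), so the check and the use may
       * apply to different files -- a time-of-check-to-time-of-use race. */
      static int open_report_flawed(const char *path) {
          if (access(path, R_OK) != 0)     /* check, by path name */
              return -1;
          return open(path, O_RDONLY);     /* use, by path name, later */
      }

      /* Less fragile pattern: open first, then interrogate the object that was
       * actually opened (via its descriptor), so decision and use coincide. */
      static int open_report_safer(const char *path) {
          int fd = open(path, O_RDONLY | O_NOFOLLOW);
          if (fd < 0)
              return -1;
          return fd;                       /* further checks would use fstat(fd, ...) */
      }

      int main(void) {
          int fd = open_report_safer("/tmp/report.txt");   /* hypothetical path */
          if (fd >= 0) {
              printf("opened\n");
              close(fd);
          } else {
              printf("not available\n");
          }
          (void)open_report_flawed;        /* retained only for comparison */
          return 0;
      }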

    All of the principles have some bearing on avoiding these classes of vulnerabilities. Several of these concepts in combination -- notably modularity,  abstraction,  encapsulation, device independence where advantageous, information hiding, complete mediation, separation of policy and mechanism, separation of privilege, least privilege, and least common mechanism -- are relevant to the notion of virtual interfaces and virtual machines. The basic notion of virtualization is that it can mask many of the underlying details, and makes it possible to change the implementation without changing the interface. In this respect, several of these attributes are found in the object-oriented paradigm.

    Several examples of virtual mechanisms and virtualized interfaces are worth noting. Virtual memory masks physical memory locations and paging. A virtual machine masks the representation of process state information and processor multiplexing. Virtualized input-output masks device multiplexing, device dependence, formatting, and timing. Virtual multiprocessing masks the scheduling of tasks within a collection of seemingly simultaneous processes. The Multics operating system [277] provides early illustrations of virtual memory and virtual secondary storage management (with demand paging hidden from the programs), virtualized input-output (with symbolic stream names and device independence where commonalities exist), and virtual multiprogramming (with scheduling typically hidden from the programming interfaces). The GLU environment [177] is an elegant illustration of virtual multiprocessing. GLU allows programs to be distributed dynamically among different processing resources without explicitly programmed processor allocation, based on precompiling of embedded guidance in the programs.
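    The same idea can be shown in miniature at the programming-language level. In the following hedged C sketch (all names invented), clients see only an opaque handle and a small set of operations, so the representation can be changed without changing the interface -- the essence of a virtual interface, and of information hiding more generally.

      #include <stdio.h>
      #include <stdlib.h>

      /* ---- Interface (what a client would see, e.g., in counter.h) ---- */
      /* The representation is hidden behind an opaque handle; clients may use
       * only these operations, so the implementation can change freely. */
      struct counter;                                   /* opaque */
      struct counter *counter_create(void);
      void            counter_increment(struct counter *c);
      long            counter_value(const struct counter *c);
      void            counter_destroy(struct counter *c);

      /* ---- Implementation (hidden, e.g., in counter.c) ---- */
      struct counter {
          long value;       /* could later become saturating, persistent, etc. */
      };

      struct counter *counter_create(void) {
          return calloc(1, sizeof(struct counter));
      }
      void counter_increment(struct counter *c) { if (c) c->value++; }
      long counter_value(const struct counter *c) { return c ? c->value : 0; }
      void counter_destroy(struct counter *c) { free(c); }

      /* ---- Client ---- */
      int main(void) {
          struct counter *c = counter_create();
          counter_increment(c);
          counter_increment(c);
          printf("count = %ld\n", counter_value(c));
          counter_destroy(c);
          return 0;
      }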

     

    2.5 Roles of Assurance and Formalism

    In principle, everything should be simple.
    In reality, things are typically not so simple.

    (Note: The SRI CSL Principal Scientist is evidently both a Principle Scientist and a Principled Scientist, as well as Principal Scientist. PGN) 

    In general, the task of providing some meaningful assurances that a system is likely to do what is expected of it can be enhanced by any techniques that simplify or narrow the analysis -- for example, by increasing the discipline applied to system architecture, software design, specifications, code style, and configuration management. Most of the cited principles tend to do exactly that -- if they are applied wisely. Techniques for increasing assurance are considered in greater detail in Chapter 6, including the potential roles of formal methods.

    2.6 Caveats on Applying the Principles

    For every complex problem, there is a simple solution. And it's always wrong.
    H.L. Mencken 

    As we noted above, the principles referred to here may be in conflict with one another if each is applied independently; in certain cases, the principles are not composable. In general, each principle must be applied in the context of the overall development. Ideally, greater effort should be devoted to reformulating the principles to make them more readily composable, or at least to making their potential tradeoffs or incompatibilities more explicit.

    There are also various potential pitfalls that must be considered -- for example, overuse, underuse, or misapplication of these principles, and certain limitations inherent in the principles themselves. Merely paying lip service to a principle is clearly a bad idea; principles must be sensibly applied to the extent that they are appropriate to the given purpose. Similarly, all of the criteria-based methodologies have many systemic limitations (e.g., [257, 372]); for example, formulaic application of evaluation criteria is always subject to incompleteness and misinterpretation of requirements, oversimplification in analysis, and sloppy evaluations. However, when carefully applied, such methodologies can be useful and add discipline to the development process. Thus, we stress here the importance of fully understanding the given requirements and of creating an overall architecture that is appropriate for realizing those requirements, before trying to conduct any assessments of compliance with principles or criteria. And then, the assessments must be taken for what they are worth -- just one piece of the puzzle -- rather than overendowed as definitive results out of context. Overall, there is absolutely no substitute for human intelligence, experience, and foresight.

    The Saltzer-Schroeder principle of keeping things simple is one of the most popular and commonly cited. However, it can be extremely misleading when espoused (as it commonly is) in reference to systems with critical requirements for security, reliability, survivability, real-time performance, and high assurance -- especially when all of these requirements are necessary within the same system environment. Simplicity is a very important concept in principle (in the small), but complexity is often unavoidable in practice (in the large). For example, serious attempts to achieve fault-tolerant behavior often result in roughly doubling the size of the overall subsystem or even the entire system. As a result, the principle of simplicity should really be one of managing complexity rather than trying to eliminate it, particularly where complexity is in fact inherent in the combination of requirements. Keeping things simple is indeed a conceptually wonderful principle, but often not achievable in reality. Nevertheless, unnecessary complexity should of course be avoided. The latter half of the Einstein quote at the beginning of Section 2.1 ("but no simpler") is indeed both profound and relevant, yet often overlooked in the overzealous quest for perceived simplicity.

    An extremely effective approach to dealing with intrinsic complexity is to combine several of the principles discussed here -- particularly abstraction, modularity, encapsulation, careful hierarchical separation that does not architecturally incur serious performance penalties, well-conceived virtualized interfaces that allow implementations to evolve without changes to the interfaces (or allow designs to evolve with minimal disruption), and far-sighted optimization. In particular, hierarchical abstraction can result in relative simplicity at the interfaces of each abstraction and each layer, in relative simplicity of the interconnections, and perhaps even relative simplicity in the implementation of each module. By keeping the components and their interconnections conceptually simple, it is possible to achieve conceptual simplicity of the overall system or networks of systems despite inherent complexity. Furthermore, simplicity can sometimes be achieved through design generality, recognizing that several seemingly different problems can be solved symmetrically at the same time, rather than creating different (and perhaps incompatible) solutions. Such approaches are considered further in Chapter 4.

    Note that such solutions might appear to be a violation of the principle of least common mechanism, but not when the common mechanism is fundamental -- as in the use of a single uniform naming convention or the use of a uniform addressing mode that transcends different subtypes of typed objects. In general, it is riskful to have multiple procedures managing the same data structure for the same purposes. However, it can be very beneficial to separate reading from writing -- as in the case of one process that updates and another process that uses the data. It can also be beneficial to reuse the same code on different data structures, although strong typing is then important.

    Of considerable interest here is David Musser's notion of Generic Programming, or programming with concepts. His Web site defines a concept as "a family of abstractions that are all related by a common set of requirements. A large part of the activity of generic programming, particularly in the design of generic software components, consists of concept development -- identifying sets of requirements that are general enough to be met by a large family of abstractions but still restrictive enough that programs can be written that work efficiently with all members of the family. The importance of the C++ Standard Template Library, STL, lies more in its concepts than in the actual code or the details of its interfaces." (http://www.cs.rpi.edu/~musser/gp/)
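    As a small, hedged illustration of programming against a concept rather than a concrete type (written in the STL style, with the requirements stated informally in comments; it is not STL source code), the following algorithm is written once and works with any iterator family that meets its requirements:

      // Concept (informal): InputIterator over values supporting operator< .
      #include <iostream>
      #include <list>
      #include <vector>

      template <typename InputIterator>
      InputIterator least_element(InputIterator first, InputIterator last) {
          if (first == last) return last;       // empty range: nothing to return
          InputIterator smallest = first;
          for (++first; first != last; ++first) {
              if (*first < *smallest) smallest = first;
          }
          return smallest;
      }

      int main() {
          std::vector<int> v{5, 2, 9, 1};
          std::list<double> l{3.5, 0.25, 7.0};
          std::cout << *least_element(v.begin(), v.end()) << '\n';  // prints 1
          std::cout << *least_element(l.begin(), l.end()) << '\n';  // prints 0.25
          return 0;
      }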

    One of our primary goals in this project is to make system interfaces conceptually simple while masking complexity so that the complexities of the design process and the implementation itself can be hidden by the interfaces. This may in fact increase the complexity of the design process, the architecture, and the implementation. However, the resulting system complexity need be no greater than that required to satisfy the critical requirements such as those for security, reliability, and survivability. It is essential that tendencies toward bloatware be strongly resisted. (They seem to arise largely from the desire for bells and whistles -- extra features -- and fancy graphics, but also from a lack of enlightened management of program development.)

    A networking example of the constructive use of highly principled hierarchical abstraction is given by the protocol layers of TCP/IP (e.g., [169]). An operating system example is given by the capability-based Provably Secure Operating System (PSOS) [120, 268, 269], in which the functionality at each of more than a dozen layers was specified formally in only a few pages each, with at least the bottom seven layers intended to be implemented in hardware. The underlying addressing is based on a capability mechanism (layer 0) that uniformly encompasses and protects objects of arbitrary types -- including files, directories, processes, and other system- and user-defined types. The PSOS design is particularly noteworthy because a single capability-based operation at layer 12 (user processes) could be executed as a single machine instruction at layer 6 (system processes), with no iterative interpretation required unless there were missing pages or unlinked files requiring operating system intervention (e.g., for dynamic linking of symbolic names, à la Multics). To many people, hierarchical layering instantly brings to mind inefficiency. However, the PSOS architecture is an example in which the hierarchical design could be implemented extremely efficiently -- because of the power of the capability mechanism, strong typing, and abstraction, and its intended hardware implementation.
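    The following hedged sketch illustrates only the general idea behind such capabilities -- unforgeable tokens naming typed objects, obtainable solely by creating a new object or by copying an existing capability with access rights that can be narrowed but never amplified. The representation and names are invented for illustration and are not the PSOS design.

      #include <cstdint>
      #include <iostream>
      #include <string>

      using Rights = std::uint8_t;
      constexpr Rights READ  = 0x1;
      constexpr Rights WRITE = 0x2;

      class Capability {
      public:
          // Operation 1: mint a capability for a newly created object of a type.
          static Capability create(std::string type, std::uint64_t object_id,
                                   Rights rights) {
              return Capability(std::move(type), object_id, rights);
          }
          // Operation 2: restricted copy -- the copy can never hold more
          // rights than the original (rights may only be masked off).
          Capability restricted_copy(Rights mask) const {
              return Capability(type_, object_id_, rights_ & mask);
          }
          bool permits(Rights r) const { return (rights_ & r) == r; }
      private:
          Capability(std::string type, std::uint64_t id, Rights rights)
              : type_(std::move(type)), object_id_(id), rights_(rights) {}
          std::string type_;
          std::uint64_t object_id_;
          Rights rights_;
      };

      int main() {
          Capability file = Capability::create("file", 42, READ | WRITE);
          Capability read_only = file.restricted_copy(READ);  // weaker, never stronger
          std::cout << read_only.permits(WRITE) << '\n';      // prints 0
          return 0;
      }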

    We note that formalism for its own sake is generally counterproductive. Formal methods are not likely to reduce the overall cost of software development, but can be helpful in decreasing the cost of software quality and assurance. They can be very effective in carefully chosen applications, such as evaluation of requirements, specifications, critical algorithms, and particularly critical code. Once again, we should be optimizing not just the cost of writing and debugging code, but rather optimizing more broadly over the life cycle. 

    There are many other common pitfalls that can result from the unprincipled use of principles. Blind acceptance of a set of principles without understanding their implications is clearly inappropriate. (Blind rejection of principles is also observed occasionally, particularly among people who establish firm requirements with no understanding of whether those requirements are realistically implementable -- and among strong-willed developers with a serious lack of foresight.)

    Lack of discipline is clearly inappropriate in design and development. For example, we have noted elsewhere [264, 265] that the open-source paradigm by itself is not likely to produce secure, reliable, survivable systems in the absence of considerable discipline throughout development, operation, and maintenance. However, with such discipline, there can be many benefits. (See also [126] on the many meanings of open source, as well as a Newcastle Dependable Interdisciplinary Research Collaboration (DIRC) final report [125] on dependability issues in open source, part of ongoing work.)

    Any principle can typically be carried too far. For example, excessive abstraction can result in overmodularization, with enormous overhead resulting from intermodule communication and nonlocal control flow. On the other hand, conceptual abstraction through modularization that provides appropriate isolation and separation can sometimes be collapsed (e.g., for efficiency reasons) in the implementation -- as long as the essential isolation and protection boundaries are not undermined. Thus, modularity should be considered where it is advantageous, but not merely for its own sake.

    Application of each principle is typically somewhat context dependent, and in particular dependent on specific architectures. In general, principles should always be applied relative to the integrity of the architecture.

    One of the severest risks in system development involves local optimization with respect to components or individual functions, rather than global optimization over the entire architecture, its implementation, and its operational characteristics. Radically different conclusions can be reached depending on whether or not you consider the long-term complexities and costs introduced by bad design, sloppy implementation, increased maintenance necessitated by hundreds of patches, incompatibilities between upgrades, noninteroperability among different components with or without upgrades, and general lack of foresight. Furthermore, unwise optimization (whether local or global) must not collapse -- perhaps in the name of improved performance -- abstraction boundaries that are essential for security or reliability. As one example, real-time checks (such as bounds checks, type checking, and argument validation generally) should be kept close to the operations involved, for obvious reasons. This topic is pursued further in Sections 7.1, 7.2, and 7.3. As another example, the Risks Forum archives include several cases in which multiple alternative communication paths were specified, but were implemented in the same or parallel conduits -- which were then all wiped out by a single backhoe!
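    To illustrate the first example above -- keeping argument validation and bounds checks adjacent to the operation they protect, rather than trusting distant callers to have checked -- here is a minimal hedged sketch; the class, limits, and names are hypothetical.

      #include <cstddef>
      #include <iostream>
      #include <stdexcept>
      #include <vector>

      class MessageQueue {
      public:
          explicit MessageQueue(std::size_t capacity) : capacity_(capacity) {}

          // The checks live inside the operation itself, so no "optimized"
          // call path can bypass them without changing this code.
          void enqueue(const std::vector<char>& msg) {
              if (msg.empty() || msg.size() > kMaxMessageBytes) {
                  throw std::invalid_argument("message size out of range");
              }
              if (queue_.size() >= capacity_) {
                  throw std::length_error("queue full");
              }
              queue_.push_back(msg);
          }
          std::size_t size() const { return queue_.size(); }

      private:
          static constexpr std::size_t kMaxMessageBytes = 4096;
          std::size_t capacity_;
          std::vector<std::vector<char>> queue_;
      };

      int main() {
          MessageQueue q(8);
          q.enqueue({'o', 'k'});
          std::cout << q.size() << '\n';   // prints 1
          return 0;
      }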

    Perhaps most insidious is the a priori lack of attention to critical requirements, such as any that might involve the motherhood attributes noted in [255] and listed above. Particularly in dealing with security, reliability, and survivability in the face of arbitrary adversities, there are few if any easy answers. But if those requirements are not dealt with from the beginning of a development, they can be extremely difficult to retrofit later. One particularly appealing survivability requirement would be that systems and networks should be able to reboot, reconfigure, and revalidate their soundness following arbitrary outages, without human intervention. That requirement has numerous architectural implications that are considered in Chapter 4.

    Once again, everything should be made as simple as possible, but no simpler. Careful adherence to principles that are deemed effective is likely to help achieve that goal.

    2.7 Summary

    In theory, there is no difference between theory and practice. In practice, there is an enormous difference. (Many variants of this concept are attributed to various people. This is a personal adaptation.)

    What would be extremely desirable in our quest for trustworthy systems and networks is theory that is practical and practice that is sufficiently theoretical. Thoughtful and judiciously applied adherence to sensible principles appropriate for a particular development can greatly enhance the security, reliability, and overall survivability of the resulting systems and networks. These principles can also contribute greatly to operational interoperability, maintainability, operational flexibility, long-term evolvability, higher assurance, and many other desirable characteristics.

    To illustrate some of these concepts, we have given a few examples of systems and system components whose design and implementation are strongly principled. The omission of other examples does not in any way imply that they are less relevant. We have also given some examples of just a few of the potential difficulties in trying to apply these principles.

    What are generally called "best practices" are often rather lowest-common-denominator techniques that have found their way into practice, rather than what might otherwise be the best practices that would be useful. Furthermore, the supposedly best practices can be manhandled or womanhandled by very good programmers, and bad programming languages can still be used wisely. Unfortunately, spaghetti code is seemingly always on the menu, and bloatware tends to win out over elegance. Overall, there are no easy answers. However, having sensible system and network architectures is generally a good starting point, as discussed in Chapter 4, where we specifically consider classes of system and network architectures that are consistent with the principles noted here, and that are highly likely to be effective in fulfilling the CHATS goals. In particular, we seek to approach inherently complex problems architecturally, structuring the solutions to those problems as conceptually simple compositions of relatively simple components, with emphasis on the predictable behavior of the resulting systems and networks -- which is the essence of Chapter 3.

     

    3 Realistic Composability

    Synopsis

    One of the biggest obstacles to software development -- and particularly system integration -- is the difficulty of predictably composing subsystems out of modules, systems out of subsystems, and networks of systems out of systems and networking technology.

    In this chapter, we outline some of the obstacles to achieving facile composability as well as some of the approaches that can contribute to the development of significantly greater composability in systems with critical requirements.

    3.1 Introduction

    The basic challenge confronting us is to be able to develop, configure, and operate systems and networks of systems with high levels of trustworthiness with respect to critical requirements for security, reliability, fault tolerance, survivability, performance, and other behavioral criteria, without too seriously sacrificing the desired functionality. As noted in Chapter 1, both compatibility and interoperability are important. Inherently high assurance that those systems will perform dependably as expected is also extremely desirable. These attributes can be greatly aided by taking pains to constrain architectures and the software development process.

    To these ends, one of the most fundamental problems involves assuring the ability to compose subsystems to form dependable systems and to compose component systems to form dependable networks -- without violating the desired requirements, and without diminishing the resulting trustworthiness. Composability problems are very old, relative to the youth of the computer field. They exist throughout the life cycle, involving composability (and noncomposability) of requirements, policies, specifications, protocols, hardware subsystems, and software components (with respect to their source code, compilers, object code, and runtime libraries), as well as arising in system and network reconfiguration, upgrades, and maintenance (for example). Analogous problems also arise with respect to the compositionality of assurance measures (including formal methods and component testing) and their evaluations, and even more so to the evolution of evaluations over time as systems change. Ultimately, the degree to which composability is attainable depends strongly on the system and network architectures, but is also influenced by many other factors. Unfortunately, many seemingly sound compositions can actually compromise the desired overall requirements, as noted in Section 3.2.  

    Various approaches to decomposing systems into components are examined in Section 3.3, and approaches to enhancing composability are considered in Section 3.4. Of additional interest is the concept of combining subsystems in ways that can actually increase the resulting trustworthiness. This is explored in Sections 3.5 and 3.6, along with the relevance of concepts of software engineering discipline, programming-language constructs, structural compatibility, execution interoperability, and development tools -- all of which can considerably improve the likelihood of achieving seamless composability.

    We include many references here and intentionally try to balance important early efforts that deserve not to be forgotten with more recent efforts that continue toward the ultimately desired research and development results.

    3.2 Obstacles to Seamless Composability

    A modular system is one that falls apart easily! E.L. (Ted) Glaser, 1965  

    Seamless composability implies that a composition will have the desired beneficial properties, with no uncontrollable or unpredictable side effects. That is, the composed system will do exactly what it is expected to do -- no more and no less. (More and less can both create potentially serious problems.) In practice, many pitfalls are relevant to the composition of subsystems into systems -- often involving unanticipated effects (colloquially, "side effects") that impede the ideal goal of unencumbered composition and interoperability among the subsystems.

    In common usage, there is considerable confusion surrounding the relative roles of composability, intercompatibility, and interoperability (see Chapter 1). Because it is easy to conceive of examples in which composability implies neither intercompatibility nor interoperability, or in which neither intercompatibility nor interoperability implies composability, we avoid any attempts to taxonomize these three concepts. By avoiding the semantic distinctions, we focus primarily on seeking a strong sense of composability, recognizing that interoperability and intercompatibility may impose further constraints. From a practical point of view, what matters most is that the resulting composed systems and networks must satisfy their desired requirements. If that is the case, then we can simply say that the compositions satisfy whatever requirements exist for composability, interoperability, and intercompatibility.

    3.3 System Decomposition

    Decomposition into smaller pieces is a fundamental approach to mastering complexity. The trick is to decompose a system in such a way that the globally important decisions can be made at the abstract level, and the pieces can be implemented separately with confidence that they will collectively achieve the intended result. (Much of the art of system design is captured by the bumper sticker "Think globally, act locally.") Jim Horning [259] 

    Given a conceptual understanding of a set of system requirements, or even a detailed set of requirements, one of the most important architectural problems is to establish a workable structure of the system that can evolve into a successful implementation. The architectural decomposition of a network into subnetworks, a system into subsystems, or a subsystem into components, can benefit greatly from the principles enumerated in Chapter 2. In particular, modularity together with encapsulation, hierarchical layering, constructive uses of redundancy, and separation of concerns are examples of design principles that can pervasively affect the decomposability of a system design -- and thereby the modular composability.

    The work of Edsger Dijkstra (for example, [105, 107]) and David Parnas (for example, [281, 283, 284, 290, 295]) has contributed significantly to the constructive structural decomposition of system architectures and system designs. In addition, Parnas [86, 282, 285, 287, 291, 292, 293, 294, 296, 297] provided definitive advances toward the formal specification and analysis of real and complex systems, beginning in the early 1970s. Of particular importance is Parnas's enumeration of various notions of a uses b, and especially the concept of dependence [283] embodied in the relation a depends on b for its correctness. Appendix B elaborates on the uses relations.

    Decomposition can take on several forms. Horizontal decomposition (modularization) is often useful at each design layer, identifying functionally distinct components at that layer. Horizontal decomposition can be achieved in various ways -- for example, through coordination from higher layers, local message passing, or networked interconnections. In addition, the development process entails various temporal decompositions, such as abstraction refinement, in which the representation of a particular function, module, layer, or system interface undergoes successively increased specificity -- for example, evolving from a requirements specification to a functional specification to an implementation. If any additional functionality is added along the way, vulnerabilities may arise whenever development discipline is not maintained.

    Vertical decomposition recognizes different layers of hierarchical abstraction and distinguishes them from one another. A very simple layering of abstractions (from the bottom up) might be hardware, operating system, middleware, application software, and users. Each of these conceptual layers can in turn be split into multiple layers, according to the needs of an architectural design, its implementation, and its assurance considerations.

    Several important examples of vertical and horizontal system decomposition are found in Multics, the THE system, the Software Implemented Fault-Tolerant (SIFT) system, the Provably Secure Operating System (PSOS), the type-oriented protection of the Honeywell and Secure Computing Corporation lineage, multilevel secure (MLS) kernel-based architectures with trusted computing bases (TCBs), and the MLS database management system SeaView. These systems and others are considered in Chapter 4.

    Ideally, it should be relatively easy to remove all unneeded software from a broadly supported general-purpose system, to achieve a minimal system configuration that is free of bloatware and its accompanying risks. (In some of the server-oriented architectures considered in Chapter 4, there is a fundamental need for highly trustworthy servers that are permitted to perform only a stark subset of the functionality of a general-purpose system, with everything else stripped out.) In practice, monolithic mass-market workstation software and conventional mainframe operating systems tend to defy, or at least greatly hinder, the subsetting of functionality. There are typically many unrecognized interdependencies -- especially in the areas of device drivers and GUIs. (Somewhat intriguingly, real-time operating system developers seem to have done a much better job in realizing the benefits that can be obtained from stark subsetting, partly for reasons of optimizing performance, partly because of historical memory limitations, but perhaps mostly because of the importance of reducing per-unit hardware costs. However, their systems do not yet have adequate security for many critical applications.)

    If a system has been designed to be readily composable out of its components, then it is also likely to be readily decomposable -- either by removal of the unnecessary subsystems, or by the generation of the minimal system directly from its constituent parts. Thus, if composability is dealt with appropriately (e.g., in system design, programming language design, and compiler design), the decomposition problem can be solved as a by-product of composition. On the other hand, the decomposition problem is potentially very difficult for complex conceptual systems that are just being designed and for legacy software that was not designed to be either composable or decomposable.

    And then we have a wonderful quote from Microsoft's Steve Ballmer, who said -- in his 8 February 2002 deposition relating to the nine recalcitrant U.S. states -- that it would be impossible to get the operating system to run properly and still meet the states' demands.

    "That's the way good software gets designed. So if you pull out a piece, it won't run." Steve Ballmer, Reuters, 4 March 2002.
    (Modular, schmodular. That might be why many people consider "software engineering" to be an oxymoron. But what is missing from much mass-market software is not modularity, but rather clean abstraction and encapsulation.)

    This is in contrast to a poignant e-mail quote from Cem Kaner, April 4, 2002: "The problem with installing these [...] patches is that, as lightly tested patches will, they can break one thing while fixing another. Last week I installed yet another Microsoft security patch for Win 2000, along with driver patches they recommended. As a result, my modem no longer works, my screen was screwed up until I reloaded the Dell driver, and my sound now works differently (and less well). I accepted patches for MS Office and Acrobat and now I get messages asking me to enable Word macros when I exit Word, not just when I open a document. (Given the widespread nature of Word macro viruses, I disable the feature.) It wasn't so long ago that it was common knowledge that patching systems reflects poor engineering and is risk prone. We should not be advocating a structure like this or making a standard of it."

    3.4 Attaining Facile Composability

    Ideally, we would like the development of complex hardware/software systems to be like snapping Lego pieces together! Instead, we have a situation in which each component piece can transmogrify its modular interface and its physical appearance -- thereby continually disrupting the existing structure and hindering future composability. An appropriate analog would be if civil engineering were as undisciplined as software engineering. PGN

    Ideally, it should be possible to constrain hardware and software subsystems -- and their compositions -- so that the subsystems can be readily integrated together with predictable consequences. This goal is surprisingly difficult. However, several approaches can help improve the likelihood that composition will not create negative effects. (Note that Brad Cox achieved something like this in the mid-1980s with what he called software integrated circuits; see http://en.wikipedia.org/wiki/Software_component.)

    In hardware, address relocation, segmentation, paging, multiprocessing, and coprocessors have helped. In software, virtual memory, virtual machines, distributed operating systems, modern software engineering and programming languages better enforcing the principles of good software-engineering practice, sound distributed programming, network-centered virtualized multiprocessing, and advancing compiler technology can contribute to increased composability -- if they are properly used. In particular, virtual memory techniques have considerably increased the composability of both hardware and software. (It is lamentable that software engineering practice is so seldom good!)

    The above approaches are in a sense all part of what should be commonly known as good software-engineering practice. Unfortunately, system architecture and program development seldom observe good software-engineering practice. However, the constructive aspects of software engineering -- including establishment of requirements, careful specifications, modularity and encapsulation, clean hierarchical and vertical abstraction, separation of policy and mechanism, object orientation, strong typing, adherence to the basic security principles (e.g., separation of privilege, allocation of least privilege, least common mechanism, assumptions of open source rather than reliance on security by obscurity), suitable choice of programming language and development tools, and (above all) sensible programming practice -- can all make positive contributions to composability.

    The potential importance of formal methods is largely underappreciated, including formal statements of requirements and specifications, and formal demonstrations (e.g., rigorous proofs and model checking) of consistency of specifications with requirements, consistency of source code with specifications, correctness of a compiler, and so on. The formal methods community has for many years dealt with consistency between specifications and requirements, and with consistency between code and specifications, although that work is seldom applied to real systems. The specifications tend to idealize behavior by concentrating on only the obviously relevant behavioral properties. Formal approaches can provide enormous forcing functions on composability and system correctness, even if applied only in limited ways -- such as model checking for certain properties relating to composability or security. They can also be extremely valuable in efforts to attain high assurance. However, because of their labor-intensive nature, they should generally be applied particularly where they can be most effective. (See Chapter 6.) Once again, architectures that minimize the extent of necessary trustworthiness are important.

    The set of assumptions as to what threats must be defended against is itself almost always inherently incomplete, with respect to what might actually happen. Nominal security requirements often ignore reliability and survivability issues (for example, see [264], which seeks to address relevant requirements within a common and more comprehensive architectural framework). Even detailed security requirements often tend to ignore the effects of buffer overflows, residues, and -- even more obscurely -- emanations such as those exploitable by Paul Kocher's differential power analysis [192, 193] (whereby cryptographic keys can be derived from the behavior of hardware devices such as smart cards) and external interference that can result in internal state changes (or even the ability to derive cryptographic keys, as in Dan Boneh's RSA fault injection attack [54] -- which resulted in a faulted version that when subtracted from the correct version allowed a linear search instead of an exponential search for the private key!). Attempting to enumerate everything that is not supposed to happen is almost always futile, although relatively comprehensive canonical checklists of potential threats and characteristic flaws to be avoided can be very useful to system architects and software developers. Various partial vulnerability and threat taxonomies exist (e.g., [201, 260, 271]), although a major effort would be worthwhile to define broad equivalence classes that at least give extensive coverage, and for which effective countermeasures would be available or incorporated into new system and network architectures. It is important in considering composability that all meaningful requirements and functional behaviors be adequately specified and assured.

    3.5 Paradigmatic Mechanisms for Enhancing Trustworthiness

    You can't make a silk purse out of a sow's ear.
    But in a sense, maybe we can -- in certain cases!

    It is clear that the ideal goals of unencumbered composability and easy interoperability are rather abstract and potentially unrealistic in many practical applications. Indeed, much of the research on properties that compose in some sense (e.g., strict lattice-based multilevel security) is extremely narrow and not generally applicable to common real-world situations. Consequently, we seek a more realistic notion that enables us to characterize the consequences of compositions, essentially seeking to anticipate what would otherwise be unanticipated. That is, we seek a discipline of composition. 

    On one hand, we would like to be able to compose subsystems in such a way that the resulting system does not lose any of the positive properties of its subsystems -- in some sense, a weak compositional monotonicity property in which trustworthiness cannot decrease with respect to certain attributes. (We refer to this as nondecreasing-trustworthiness monotonicity.) This is indeed theoretically possible if there is suitable independence or isolation among the subsystems. In the literature of multilevel security, we are familiar with an architectural abstraction hierarchy beginning with a security kernel that enforces a basic multilevel separation property, then trustworthy extensions that are privileged in certain respects, then application software that need not be trusted with respect to multilevel security, and finally user code that cannot compromise the multilevel security that is enforced by the underlying mechanisms. However, this hierarchy assumes that the kernel is absolutely nonsubvertible and nonbypassable. In the real world of conventional operating systems, such an assumption is totally unrealistic -- because the underlying operating system is typically easily subverted.
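    As a concrete (and deliberately oversimplified) illustration of the kind of separation property such a kernel is expected to enforce, the following hedged sketch encodes the classical multilevel-security rules: a subject may read only at or below its own level ("no read up") and write only at or above it ("no write down"). Real policies also involve compartments and trusted subjects, which are omitted here; the names and levels are purely illustrative.

      #include <iostream>

      enum class Level { Unclassified = 0, Confidential = 1, Secret = 2, TopSecret = 3 };

      // Simple-security property: no read up.
      bool may_read(Level subject, Level object)  { return subject >= object; }
      // Confinement (*-property): no write down.
      bool may_write(Level subject, Level object) { return subject <= object; }

      int main() {
          std::cout << may_read(Level::Secret, Level::TopSecret) << '\n';   // prints 0
          std::cout << may_write(Level::Secret, Level::TopSecret) << '\n';  // prints 1
          return 0;
      }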

    On the other hand, a fundamental hope in designing and implementing systems is that it should be possible to build systems with greater trustworthiness out of less trustworthy concepts -- that is, making the proverbial silk purse out of the sow's ear, as noted above in the discussion on guarded dependence in Section 3.4. This also suggests a stronger kind of monotonicity property in which trustworthiness can actually increase with respect to certain attributes under composition and layered abstraction. (We refer to this as cumulative-trustworthiness monotonicity.) To this end, it is in some cases possible to relax certain assumptions of noncompromisibility of the underlying mechanisms -- assumptions that are absolutely essential to the nondecreasing-trustworthiness monotonicity typified by multilevel security noted in the preceding paragraph. On the other other hand, desire for some sort of compositional monotonicity must be tempered by the existence of emergent properties that cannot be fully characterized in terms of lower-layer properties. That is, the properties at one layer must be superseded by related but different properties at higher layers.

    What is perhaps most important in this context is the ability to make the dependencies explicit rather than allowing them to be unidentified and latent.

    Fundamentally, trustworthiness is multidimensional. Increasing trustworthiness with respect to one set of attributes does not necessarily imply increasing trustworthiness with respect to other attributes. For example, increasing fault tolerance may decrease security and performance; increasing security may decrease reliability, interoperability, and performance. Furthermore, emergent properties must also be considered -- particularly those related to trustworthiness. Once again, we must be very explicit as to the properties under consideration, and cognizant of how those properties relate to other system properties.

    Approaches to increasing trustworthiness are explored next. The following list is the outgrowth of earlier work by Neumann in Practical Architectures for Survivable Systems and Networks [264], which enumerates paradigmatic mechanisms by which trustworthiness and the ensuing assurance can be enhanced by horizontal compositions at the same layer of abstraction and by vertical compositions from one layer of abstraction to the next. Each of these paradigms for guarded dependence demonstrates techniques whereby trustworthiness can be enhanced above what can be expected of the constituent subsystems or transmission media. (References given in the following enumeration are suggestive, and by no means exhaustive.)

    1. Error-correcting codes. Hamming's early paper on single-error-correcting codes [158] inspired a large body of work on error-correcting codes, with several books providing useful overviews (for example, [9, 302, 354]) of an extensive literature. Most of the advances are based on solid mathematics (abstract algebra) -- as is also the case with public-key cryptographic algorithms (e.g., abstract algebra and number theory). The constructive use of redundancy can enable correct communications despite certain tolerable patterns of errors (e.g., not only single errors, but also random multiple errors, bursts of errors, or otherwise correlated patterns, as well as codes that are optimized for asymmetric errors such as 1-to-0 bit-dropping errors only or erasure errors), in block communications or even in variable-length or sequential encoding schemes, as long as any required redundancy does not cause the available channel capacity to be exceeded (following the guidance of Shannon's information theory). In addition, error-correcting coding can also be used in arithmetic operations (e.g., [274, 310]). Gilbert et al. [131] also considered the problem of detecting intentional deception. With suitable choices of redundancy and mathematically based code construction, error-detecting and error-correcting codes can permit arbitrarily reliable communications over unreliable communication media. (A minimal illustration of single-error correction appears in the sketch following this list.)
    2. Fault-tolerance mechanisms. Traditional fault-tolerance algorithms and system concepts can tolerate certain specific types of hardware and software failures as a result of constructive use of redundancy [16, 87, 183, 212, 232, 247, 270, 378]. There is an extensive literature on fault-tolerance algorithms that permit systems to withstand arbitrary failures up to the maximum intended fault-tolerance design coverage, with various modes of operation such as fail-safe, fail-soft, fail-fast, and fail-secure. Indeed, many of the fault-tolerance concepts have been around for many years, as for example the 1973 report [270] that discusses the advantages of distributing appropriate techniques according to hierarchical layers of abstraction and different functionality. However, failures beyond that coverage may result in unspecified failure modes. This in turn can be addressed by progressively invoking different fault tolerance techniques, for diagnosis, rollback, forward recovery, repair, reconfiguration, and so on. In terms of communications and processing, error-detecting codes and other forms of error detection combined with possible retransmittal, instruction retry in hardware, or other remediation, can be effective whenever it is not already too late. The early work of John von Neumann [370] and of Ed Moore and Claude Shannon [242] showed how reliable subsystems in general (von Neumann) and reliable relay circuits in particular (Moore-Shannon) can be built out of unreliable components -- as long as the probability of failure of each component is not precisely one-half and as long as those probabilities are independent from one another. With suitable configurations of components (e.g., "crummy relays" in the case of the Moore-Shannon paper), high reliability can be achieved out of low-reliability components. Also relevant is the 1960 paper of Paul Baran [29] on making reliable communications despite unreliable network nodes, which was influential in the early days of the ARPANET. For a recent highly relevant work, see the Guruswami-Sudan approach to achieving a significant improvement in decoding techniques for Reed-Solomon  codes [152] and a subsequent system-theoretic formulation by Kuijper and Polderman [196]. 
    3. Byzantine fault tolerance. Byzantine faults are those in which no assumptions or constraints are placed on the nature of the faults locally. In contrast to conventional fault tolerance, Byzantine fault tolerance architecturally enables a system to be able to withstand Byzantine fault modes [198, 331, 337], providing successful operation despite the arbitrary and completely unpredictable behavior (maliciously or accidentally) of up to some ratio of its component subsystems (for example, k out of 3k+1 in various cases). Thus, with no limitations on the failure modes of individual component subsystems, Byzantine systems can perform correctly even if the up to k bad subsystems fail in optimally contrived and malicious ways. Examples in the literature include Byzantine clocks and Byzantine network-layer protocols [299]. 
    4. Redundancy-based total-system reliability. SRI's Software-Implemented Fault Tolerance (SIFT) fly-by-wire avionics system from the 1970s is an early example of achieving a total-system (including application software) probability of failure of 10^-10 per hour out of seven off-the-shelf avionics processors, each with a probability of failure of 10^-5 per hour [60, 63, 232, 247, 248, 377, 378]. SIFT is discussed further in Section 4.3.
    5. Self-synchronization. Self-synchronizing techniques can result in rapid resynchronization following nontolerated errors that cause loss of synchronization, including intrinsic resynchronizability of sequentially streamed codes. Common approaches involve adding explicit framing bits. Also found in the early literature are redundant serial codes with implicit synchronization properties that are decodable only if in the correct block synchronization -- as in the case of comma-free codes (block codes that can have only one correct framing boundary when strung together), and even error-correcting comma-free codes. A rather different approach uses inherent self-synchronizing properties of finite-state machines that are used to generate variable-length and sequential codes [252, 253, 254], enabling eventual resynchronization without having to add any redundancy to the codes. This approach applies to variable-length Huffman codes [167] (as in [252, 253]) as well as Huffman-style information-lossless sequential machines [168] (as in [254]). In both of these schemes, it is typically possible to recover from arbitrarily horrible errors, after a period of time that depends on the resynchronizing properties of the generating sequential machine. Yet another example of self-stabilization is given by Dolev [110]. Cipher-block chaining (CBC) cryptographic modes are another example in which synchronization can be a serious problem.
    6. Robust synchronization algorithms and atomic transactions. Various approaches exist for robust synchronization, including hierarchically prioritized locking strategies as in the T.H. Eindhoven THE system [106], two-phase commitments [367], stable storage abstractions [200], nonblocking atomic commitments [312], and fulfillment transactions [231] such as fair-exchange protocols guaranteeing that payment is made if and only if goods have been delivered.
    7. Alternative-computation architectural structures. When failure in a computation can be detected, satisfactory but nonequivalent results can be achieved (with possibly degraded performance), despite failures of hardware and software components and failure modes that exceed planned fault coverage. For example, the Newcastle Recovery Blocks approach [15, 16, 166] provided recursively for explicit alternative procedures in case the primary procedures failed their acceptance tests.
    8. Alternative-routing schemes. The early ARPANET routing protocols  (e.g., [8]) introduced the notion of dynamic reconfiguration in packet-switched networks, with good performance and eventual successful communications despite major outages among intermediate nodes and disturbances in the communications media. Much earlier work at Bell Laboratories on nonblocking telephone switching networks was an intellectual precursor of this concept (although dynamic routing did not appear in telephone networks until the 1980s), and also led to the 1960s work at SRI on the butterfly design for fast Fourier transforms and other applications.
    9. Cryptographic secrecy. Encryption can be applied in many ways -- for example, to an open transmission medium [340] or to specific applications such as e-mail [390], or to a storage medium [373]. It can result in content that is arbitrarily difficult to interpret, even if the communications are intercepted or the stored data acquired. Note that cryptography and cryptographic protocols by themselves do not provide complete solutions, and are indeed subject to numerous attacks [13, 184, 192, 193, 258, 341] including subversions of the underlying operating systems.
    10. Cryptographic integrity checks. Secret- and public-key encryption can both be used for cryptographic checksums that have a very high probability of detecting alterations to software, data, messages, and other content [171, 340], assuming no subversions of the underlying mechanisms (e.g., operating systems).
    11. Cryptographic authentication. Public-key and secret-key encryption can both be used to verify the authenticity of the alleged identity of a user, subsystem, system, or other entity, and can greatly enhance overall security and integrity [157, 171, 340], once again assuming no subversions of the underlying mechanisms (e.g., operating systems).
    12. Fair public-key and secret-sharing cryptographic schemes and robust crypto. Examples include [51, 236]. Various multi-key crypto schemes require different parties to cooperate via the simultaneous presentation of multiple keys -- allowing cryptographically based operations to require the presence of multiple authorities for encryption, sealing, verification of authenticity, access controls, and so on. These might be called n-out-of-n schemes, where all of the n entities must participate. Closely related are multiperson access-control schemes that do not require cryptography, and two-person business procedures. Multi-agent schemes are intended to increase the trustworthiness and integrity of the resulting action, although there can be additional risks involved as in the case of potential misuses of key escrow [6]. See also recent work on self-healing key distribution with revocation [362] and multicast packet authentication [280].  
    13. Threshold multi-key-cryptography schemes. A generalization of the n-out-of-n multi-agent schemes requires the presence of a sufficient proportion of trustworthy entities -- perhaps in which at least k out of n keys are required. This is applicable to conventional symmetric-key cryptography, public-key cryptography, authentication, and escrowed retrieval (sometimes euphemistically called "key recovery"). Examples include a Byzantine digital-signature  system [102]; a Byzantine key-escrow system [313] that can function successfully despite the presence of some parties that may be untrustworthy or unavailable; a signature scheme that can function correctly despite the presence of malicious verifiers [300]; and Byzantine-style authentication protocols that can work properly despite the presence of some untrustworthy user workstations, compromised authentication servers, and other questionable components (see Chapter 7 of [264]). 
    14. Security kernels and Trusted Computing Bases. The so-called "trusted" computing bases (TCBs) should ideally be trustworthy computing bases. Constructive use of kernels and TCBs in multilevel-secure (MLS) systems can lead to nonsubvertible MLS application properties, such as the MLS database security in SeaView [100, 213, 214], which demonstrated how a multilevel-secure database management system can be implemented on top of a multilevel-secure kernel -- with absolutely no requirement for multilevel-security trustworthiness in the Oracle database management system. (This is the notion of balanced assurance, which requires composability of policies and of components.) Another approach was the tagged capabilities of PSOS [120, 268, 269] (see Section 4.3), in which the hardware has only two instructions creating capabilities -- creating a new capability for a new object of a particular type, and creating a restricted copy with access privileges that would be at most as powerful, but never more powerful. This design rather simply avoided the ability to manipulate capabilities in hardware and software. Distributed systems need special care, especially MLS systems (e.g., [108]). 
    15. Architecturally reduced dependence on trustworthiness. Closely related to kernelized systems in principle, but radically different in their practical implications, are architectural approaches that starkly reduce the extent to which subsystems must be trusted, or the extent to which all phases of the development process must be trusted. Instead, the focus is on certain critical properties of selected subsystems or critical stages of the development process. In many cases, trustworthiness can be judiciously isolated within centralized systems or among distributed subsystems. In this way, the perimeters around what must be trustworthy can be reduced, which in turn reduces what is sometimes referred to as the attack surface.

      Several quite different examples are worth mentioning to illustrate this concept:

      • Layered protection. Kernels (noted above), rings of protection, and properly implemented capability-based addressing can protect themselves against compromise from above, as in the Multics operating system [91, 150, 277, 333, 344] and various capability-based architectures [117, 143, 186, 120, 268, 269].
      • MLS enforcement. Multilevel-secure systems and networks can be designed and implemented in which critical security properties such as MLS are enforced in selected servers, but in which there is no MLS dependence in the end-user systems [264, 308].
      • PCC. Proof-carrying code [250] can enable the detection of unexpected alterations to systems or data and thus hinder tampering with data and programs, and the resulting contamination -- irrespective of where in the development process malicious code is introduced prior to the establishment of the proof obligations.
      • Proof checking. Proof-checkers can provide assurance that theorem provers have arrived at correct proofs, without having to trust the provers. Proof-checkers tend to be orders of magnitude simpler to develop, to assure, and to use than theorem provers.
      • Independent accountability. There is enormous contention regarding the integrity of closed-source proprietary electronic systems for casting ballots, recording votes, and determining the results of elections, particularly those systems that are self-auditing with no external assurance. These systems reflect the need for a collection of critical requirements for security (e.g., system integrity, vote integrity, vote confidentiality, voter privacy, anonymity, accountability, nondenial of service, and overall system verifiability) as well as reliability and other -ilities. Unfortunately, existing touch-screen direct-recording self-auditing electronic voting systems provide no assurances whatsoever that the vote that is cast is identical to the vote that is subsequently recorded and counted, and no meaningful recount is possible because there is no truly independent audit trail. Rebecca Mercuri's PhD thesis [233, 234] suggests the incorporation of a voter-verified electronically readable independent hard-copy image of the ballot as cast. This relatively simple mechanism almost by itself can surmount numerous potential weak links in the electronic stages of the election process, and could thereby enable detection and prevention of many kinds of internal fraud. This is a seemingly rare example of where the highly distributed weak-link nature of security and reliability can be overcome by a relatively simple conceptual mechanism. An entirely different approach has been proposed by David Chaum [73] that has a similar result -- providing a small mechanism relative to the overall voting process that provides for each voter's ability to verify that a private ballot was correctly recorded, despite the potential untrustworthiness of any front-end system for vote casting. (Chaum's approach is applicable to a variety of voting-machine types.) Mercuri's and Chaum's methods both allow far greater trustworthiness of the overall voting systems despite potential untrustworthiness of the voting machines themselves.
    16. Mutual suspicion. The ability to operate properly despite a mutual lack of trust among various entities was explored in 1972 in Mike Schroeder's doctoral thesis [343]. There seems to have been relatively little work along those lines since. Unfortunately, in practice, implementations typically tend to implicitly assume that some or all of the participating entities are trusted, irrespective of whether they are actually trustworthy.
    17. Interposition of trustworthy intermediation. In principle, interposing cross-domain protection mechanisms such as firewalls  (e.g., [80]), guards (which are generally much simpler than firewalls -- perhaps only preventing content with undesirable keywords from being disseminated; for example, see [40, 66, 307]) and proxies (which also act as trusted intermediaries) can supposedly mediate between regions of potentially unequal trustworthiness -- for example, ensuring that sensitive information does not leak out and that Trojan horses and other harmful effects do not sneak in, despite the presence of untrustworthy subsystems or mutually suspicious adversaries. For example, intermediation of network connectivity can increase the trustworthiness of internal secrecy (controlling the outbound direction) and internal integrity (controlling the inbound direction). However, care must be taken not to allow unrestricted riskful traffic, such as Java- and JavaScript-enabled Web content, PostScript, ActiveX, and other executable content that might execute if there are flaws in the underlying systems or in the virtual-machine environments.
    18. Type enforcement, object-oriented domain enforcement, and advanced access-control techniques. Architecturally integrated access controls can effectively mediate or otherwise modify the intent of certain attempted operations, depending on the execution context [120, 268, 269, 343] -- for example, the confined environment of the Java Virtual Machine [146, 148] and related work on formal specification [95, 142] for the analysis of the security of such environments. Such enforcement can be implemented in a combination of hardware and system software (as in the strong type enforcement of PSOS and Secure Computing Corporation systems), programming languages, and compilers. Other forms of static analysis are of course also relevant -- particularly when embedded in the compilation process (including language pre- and post-processors) -- and are valuable in enhancing the trustworthiness of architectures, implementations, and system operation.
    19. Integrated internal checks. A combination of static (e.g., design-time and compile-time) and dynamic (runtime) analysis can prevent or mediate execution in questionable circumstances -- for example, embedded in programming languages and compilers within the development process, and in the resulting operating-system software and application programs, as in the cases of argument validation, bounds checks, strong typing and rigorous type checking, consistency checks, redundancy checks, and independent cross-checks. Static and dynamic checks can be used to significantly increase the trustworthiness of a subsystem or system with respect to security, reliability, and performance. Bill Arbaugh's trustworthy bootload protection [18, 19] is an example of a bootload-time check. (Argument validation and bounds checks of this kind also appear in the wrapper sketch following this enumeration.)
    20. External runtime checks. Addition of wrappers (e.g., [381]) without modifying the source or object code of the wrapped module can in principle enhance survivability, security, and reliability, and otherwise compensate for deficient components -- such as adding a "trusted path" to an inherently untrustworthy system, enabling monitoring of otherwise unmonitorable functionality, or providing compatibility of wrapped legacy programs with other programs where none previously existed. However, the utility of the wrapper approach may be subverted if the wrapper does not completely encapsulate the underlying mechanisms (e.g., operating systems). (A minimal wrapper of this kind is sketched after this enumeration.)
    21. Real-time analysis. Anomaly and misuse detection to diagnose real-time threats (e.g., from insiders, outsiders, internal malfunctions, and external environmental failures) can provide rapid analyses of actual failures and potential misuses in progress. As one example of such a system for anomaly and misuse detection, the EMERALD  system [208, 207, 272, 304, 306] represents the most recent results of more than two decades of research at SRI. (See http://www.csl.sri.com/intrusion for extensive background.)
    22. Real-time response. Given the results of real-time analysis as noted above, it is possible to trigger automated or semiautomated rapid responses -- including dynamic alterations of system and network configurations, carefully controlled automated software upgrades in response to detected flaws, and enforced alterations in certain user processes, based on evaluations of the perceived real-time events. (Note that the alternative-computation architectures of technique 7 and the alternative-routing schemes of technique 8 have a similar flavor, except that the alternatives tend to be more closely integrated into the architecture, rather than dynamically variable.)
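
    To make the flavor of technique 18 more concrete, the following sketch shows domain-type mediation in its simplest form: every attempted operation is checked against an explicit table of (domain, object type) permissions, with default deny. This is only an illustrative sketch in Python; the domains, types, and table entries are hypothetical, and the sketch is not drawn from PSOS, the Secure Computing Corporation systems, or the Java Virtual Machine.

        # Illustrative sketch of type enforcement: subjects execute in domains,
        # objects carry types, and a domain-type table mediates every attempted
        # operation. The table below is hypothetical.

        ALLOWED = {
            ("editor_domain",  "user_file"):   {"read", "write"},
            ("editor_domain",  "config_file"): {"read"},
            ("updater_domain", "config_file"): {"read", "write"},
            # No entry means no access: default deny.
        }

        class AccessDenied(Exception):
            pass

        def mediate(domain, object_type, operation):
            """Permit the operation only if the domain-type pair explicitly allows it."""
            if operation not in ALLOWED.get((domain, object_type), set()):
                raise AccessDenied(f"{domain} may not {operation} {object_type} objects")

        # An editor may read, but not rewrite, its configuration.
        mediate("editor_domain", "config_file", "read")        # permitted
        try:
            mediate("editor_domain", "config_file", "write")   # blocked
        except AccessDenied as e:
            print(e)

    In a real architecture the table would of course be enforced below the application -- in hardware, the kernel, or the language runtime -- rather than in code that the mediated subjects could themselves modify.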

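    The wrapper of technique 20 can similarly be sketched in a few lines, here combined with the argument validation and bounds checks of technique 19. The wrapped routine, the size bound, and the anomaly threshold are hypothetical; this is a minimal sketch of the idea, not a production wrapper, and it obviously does not encapsulate anything below the wrapped function itself.

        # Minimal sketch of an external runtime check: a wrapper that validates
        # arguments, enforces a bound, and logs unusual (but legal) inputs to an
        # untrusted routine, without modifying the routine's own code.

        import functools, logging

        logging.basicConfig(level=logging.WARNING)

        def checked(max_len=1024):
            def wrap(func):
                @functools.wraps(func)
                def guarded(data):
                    if not isinstance(data, (bytes, bytearray)):
                        raise TypeError("expected a byte string")      # argument validation
                    if len(data) > max_len:                            # bounds check
                        raise ValueError("input exceeds declared bound")
                    if len(data) > max_len // 2:                       # crude anomaly flag
                        logging.warning("unusually large input (%d bytes) to %s",
                                        len(data), func.__name__)
                    return func(data)
                return guarded
            return wrap

        @checked(max_len=1024)
        def parse_record(data):
            # Stand-in for legacy code that performs no checking of its own.
            return data.split(b",")

        parse_record(b"alpha,beta")    # passes silently
        parse_record(b"x" * 600)       # accepted, but logged for later analysis
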
    This enumeration is undoubtedly not exhaustive; it is intended to be representative of a wide variety of trustworthiness-enhancing types of mechanisms. Furthermore, these techniques do not necessarily compose with one another, and may in fact interfere with one another -- especially if used unwisely. On the other hand, many efforts to attain trustworthy systems for security and reliability need to rely on a combination of the above techniques -- as is the case with IBM's concept of autonomic systems that can continue to operate largely without system administration. (For example, see IBM's Architectural Blueprint for Autonomic Computing, http://www-3.ibm.com/autonomic/index.shtml.)

    It is clear that reliability enhancement is often based on solid theoretical foundations, and that the resulting enhancement is quite tangible -- but only if we assume that the underlying infrastructures are themselves not compromisible, for example, as a result of security violations or uncovered hardware malfunctions. In the case of mechanisms for would-be security enhancement, the dependence on the assumption of noncompromisibility of the underlying infrastructures is even more evident: if we are trying to create something more secure on top of something that might be totally compromisible, we are indeed trying to build sandcastles in the wet sand below the high-water mark. Thus, security enhancement may critically depend on certain measures of noncompromisibility in the underlying hardware and software on which the enhancement mechanisms depend.

    So far, little has been said here about the relevance of these techniques to open-source software. In principle, all these techniques could be applied to closed-source proprietary software as well as open-source software. However, in practice, relatively few of these techniques have found their ways into commercial products -- error-correcting codes, atomic transactions, some fault tolerance, alternative routing, cryptography, and some dynamic checking are obvious examples. The opportunities are perhaps greater for appropriate techniques to be incorporated into open-source systems, although the incentives may be lacking thus far. (The relevance of open-source paradigms is considered in Section 4.5.)

    Although we suggest that the above techniques can actually enhance trustworthiness through composition, there are still issues to be resolved as to the embedding of these techniques into systems in the large -- for example, whether one of these trustworthiness-enhancing mechanisms might actually compose with another such mechanism. Even more important is the question of whether the underlying infrastructures can be compromised from above, within, or below -- in which case we have just another example of building sandcastles in the wet sand below the high-tide level.

    3.6 Enhancing Trustworthiness in Real Systems

    Bad software lives forever. Good software gets updated until it goes bad, in which form it lives forever. Casey Schaufler 

    Several conclusions can be drawn from consideration of the paradigmatic approaches for enhancing trustworthiness enumerated in Section 3.5.

    In general, the above discussion illustrates that composition -- of different specifications, policies, subsystems, techniques, and so on -- must be done with great care. We have begun to characterize some of the pitfalls and some of the approaches that might result in greater compositional predictability. However, the problem is deceptively open ended.  

    3.7 Challenges

    The components that are cheapest, lightest, and most reliable are the ones that are not there. Gordon Bell 

    Efforts to achieve much greater composability present many opportunities for future work. 

    3.8 Summary

    This chapter outlines various techniques for enhancing compositionality, and for enhancing the resulting trustworthiness that can be achieved by various forms of composition. Interoperable composability is a pervasive problem; achieving it depends on many factors throughout the entire life cycle. It clearly requires much more consistent future efforts in system design, language design, system development, system configuration, and system administration. It requires a highly disciplined development process that intelligently uses sound principles, and it cries out for the use of good software engineering practice. Sound system architecture can also play a huge role. Designing for composability throughout can also have significant payoffs in simplifying system integration, maintenance, and operations. However, seamless composability may be too much to expect in the short term. In the absence of a major cultural revolution in the software development communities, perhaps we must begin by establishing techniques and processes that can provide composability sufficient to meet the most fundamental trustworthiness requirements.

    Overall, we believe that the approaches outlined here can have significant potential benefits, in commercial software developments as well as in open-source software -- in which the specifications and code are available for scrutiny and evolution and in which collaborations among different developers can benefit directly from the resulting composability. Ultimately, however, there is no substitute for intelligent, experienced, farsighted developers who anticipate the pitfalls and are able to surmount them.

     

    4 Principled Composable Trustworthy Architectures

    Synopsis

    Virtue is praised, but is left to starve. Juvenal,  Satires, i.74. (Note: The original Latin is Probitas laudatur et alget; "probitas" (probity) is literally rendered as "adherence to the highest principles and ideals".)

    Many system developments have stumbled from the outset because of the lack of a well-defined set of requirements, the lack of a well-conceived and well-defined flexible composable architecture that is well suited to satisfy the hoped-for requirements, the lack of adherence to principles, and the lack of a development approach that could evolve along with changing technologies and increased understanding of the intended system uses.

    In this chapter, we draw on the principles of Chapter 2 and the desire for predictable composability discussed in Chapter 3, and consider attributes of highly principled composable architectures suitable for system and network developments, appropriately addressing composability, trustworthiness, and assurance within the context of the CHATS program goals.

    4.1 Introduction

    It ain't gonna be trustworthy if it don't have a sensible architecture.
    (With kudos to Yogi Berra's good sense of large systems)

    The following goals are appropriate for trustworthy architectures.

    Thus, we seek principled composable architectures that can satisfy the trustworthiness goals, with some meaningful assurance that the resulting systems will behave as expected.

    4.2 Realistic Application of Principles

    A system is not likely to be trustworthy if its development and operation are not based on well-defined expectations and sound principles.

    We next examine combinations of the principles discussed in Chapter 2 that can be most effective in establishing robust architectures and trustworthy implementations, and consider some priorities among the different principles.

    From the perspective of achieving a sound overall system architecture, the principle of minimizing what must be trustworthy (Section 2.3) should certainly be considered as a potential driving force. Security issues are inherently widespread, especially in distributed systems. We are confronted with potential questions of trustworthiness relating to processing, multiple processors, primary memory and secondary storage, backup and recovery mechanisms, communications within and across different systems, power supplies, local operating environments, and network communication facilities -- including the public carriers, private networks, wireless, optical, and so on. Because different media have differing vulnerabilities and threats, a trustworthy architecture must recognize those differences and accommodate them.

    As noted in Section 2.6, systems and networks should be able to reboot or reconstruct, reconfigure, and revalidate their soundness following arbitrary outages without violating the trustworthiness requirements -- and, insofar as possible, without human intervention. For example, automated and semiautomated recovery have long been goals of telephone network switches. In the early Electronic Switching Systems, an elaborate diagnostic dictionary enabled rapid human-aided recovery; the goal of automated recovery has been realistically approached only in the past two decades. The Plan 9 directory structure provides an interesting example of the ability to restore a local file system to its exact state as of any particular specified time -- essentially a virtualized rollback to any desired file-system state. In addition, two recent efforts are particularly noteworthy: the IBM Enterprise Workload Manager, and the Recovery-Oriented Computing (ROC) project of David Patterson and John Hennessy (as an outgrowth of their earlier work on computer architectures -- e.g., [161, 298]).
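
    As a rough illustration of the virtualized-rollback idea (and emphatically not the Plan 9 dump file system itself), the following sketch keeps an immutable snapshot of a file map at each commit, so that the state as of any earlier commit can be reconstructed on demand. Plan 9 keys its snapshots by time of day; for simplicity this sketch keys them by commit number.

        # Sketch of virtualized rollback: each commit freezes the current file
        # map, and any earlier committed state can later be viewed or restored.

        import copy

        class SnapshottingStore:
            def __init__(self):
                self.current = {}      # path -> contents
                self.snapshots = []    # snapshot i = frozen file map after commit i

            def write(self, path, contents):
                self.current[path] = contents

            def commit(self):
                self.snapshots.append(copy.deepcopy(self.current))
                return len(self.snapshots) - 1    # commit number usable for rollback

            def as_of(self, commit_no):
                return self.snapshots[commit_no]

        store = SnapshottingStore()
        store.write("/etc/conf", "v1"); c1 = store.commit()
        store.write("/etc/conf", "v2"); store.commit()
        assert store.as_of(c1)["/etc/conf"] == "v1"   # view the state as of commit c1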

    There are of course serious risks that the desired autonomous operation may fail to restore a sound local system, distributed system, or network state -- perhaps because the design had not anticipated the particular failure mode that had resulted in a configuration that was beyond repair. Developing systems for autonomous operation thus seriously raises the ante on the critical importance of system architecture, development methodology, and operational practice.

    This of course works for malware as well! For example, Brian Randell forwarded an observation from Peter Ryan, who noted that the Lazarus virus places two small files into the memory of any machine that it infects. If either one of these files is manually deleted, its partner will resurrect the missing file (ergo, the symbolism of rising from the dead). Ryan added, "Now there's fault-tolerance and resilience through redundancy and self-healing (and autonomic design!?)!"

    A system in which essentially everything needs to be trusted (whether it is trustworthy or not) is inherently less likely to satisfy stringent requirements; it is also more difficult to analyze, and less likely to have any significant assurance. With respect to security requirements, we see in Section 4.3 that trustworthiness concerns for integrity, confidentiality, guaranteed availability, and so on, may differ from one subsystem to another, and even within different functions in the same subsystem. Similarly, with respect to reliability and survivability requirements, the trustworthiness concerns may vary. Furthermore, the trustworthiness requirements typically will differ from one layer of abstraction to another (e.g., [270]), depending on the objects of interest. Trustworthiness is therefore not a monolithic concept, and is generally context dependent in its details -- although there are many common principles and techniques.

    Many of the principles enumerated in Chapter 2 fit together fairly nicely with the principle of minimizing the need for trustworthiness. For example, if they are sensibly invoked, abstraction, encapsulation, layered protection, robust dependencies, separation of policy and mechanism, separation of privileges, allocation of least privilege, least common mechanism, sound authentication, and sound authorization all can contribute to reducing what must be trusted and to increasing the trustworthiness of the overall system or network. However, as noted in Section 2.6, we must beware of mutually contradictory applications of those principles, or limitations in their applicability. For example, the Saltzer-Schroeder principle of least common mechanism is a valuable guiding concept; however, when combined with strong typing and polymorphism that are properly conceived and properly implemented, this principle may be worth downplaying in the case of provably trustworthy shared mechanisms -- except for the creation of a weak link with respect to untrustworthy insiders. For example, sharing of authentication information and use of single signon both create new risks. Thus, this Saltzer-Schroeder principle could be reworded to imply avoidance of untrustworthy or speculative common mechanisms. Similarly, use of an object-oriented programming language may backfire if programmers are not extremely competent; it may also slow down development and debugging, and complicate maintenance. As a further example, separation of privilege may lead to a more trustworthy design and implementation, but may add operational complexity -- and indeed often leads to the uniform operational allocation of maximum privilege just to overcome that complexity. As noted in Section 2.3, the concept of a single sign-on certainly can contribute to ease of use, but can actually be a colossal security disaster waiting to happen, as a serious violation of the principles of separation of privilege and least common mechanism (because it makes everything accessible essentially equivalent to a single mechanism!). In general, poorly invoked security design principles may seriously impede the user-critical principle of psychological acceptability (e.g., ease of use). (See Chapter 2 for discussion of further pitfalls.)

    From an assurance perspective, many of the arguments relating to trustworthiness are based on models in which inductive proofs are applicable. One important case is that of finite-state machines in which the initial state is assumed to be secure (or, more precisely, consistent with the specifications) and in which all subsequent transitions are security preserving. This is very nice theoretically. However, there are several practical challenges. First of all, determining the soundness of an arbitrary initial state is not easy, and some of the assumptions may not be explicit or even verifiable. Second, it may be difficult to force the presence of a known secure state, especially after a malfunction or attack that has not previously been analyzed -- and even more difficult in highly distributed environments. Third, the transitions may not be executed correctly, particularly in the presence of hardware faults, software flaws, and environmental hazards. Fourth, system reboots, software upgrades, maintenance, installation of new system versions, incompatible retrievals from backup, and surreptitious insertion of Trojan horses are examples of events that can invalidate the integrity of the finite-state model assumptions. Indeed, software upgrades -- and, in particular, automated remote upgrades -- must be looked on as serious threats. Under malfunctions, attacks, and environmental threats, the desired assurance is always likely to be limited by realistic considerations. In particular, adversaries have a significant advantage in being able to identify just those assumptions that can be maliciously compromised -- from above, from within, and from below. Above all, many failures to comply with the assumptions of the finite-state model result from a failure to adequately comprehend the assumptions and limitations of the would-be assurance measures. Therefore, it is important that these considerations be addressed within the architecture as well as throughout the development cycle and operation, both in anticipating the pitfalls and in detecting limitations of the inherently incomplete assurance processes.
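
    The inductive argument itself is easy to illustrate on a toy model: begin from an initial state assumed to satisfy the security invariant, and check that every reachable transition preserves it. The sketch below does this by exhaustive search over a hypothetical two-component state (session level, possession of a secret); it is intended only to show where the assumptions enter -- the soundness of the initial state, the completeness of the transition relation, and the faithfulness of the model to the implementation are exactly the points at which the practical challenges above arise.

        # Toy sketch of the inductive assurance argument over a finite-state model.
        # The states, transitions, and invariant are hypothetical.

        def secure(state):
            # Invariant: an unclassified session never holds the secret resource.
            level, holds_secret = state
            return not (level == "unclassified" and holds_secret)

        def transitions(state):
            level, holds_secret = state
            moves = [("downgrade", ("unclassified", False)),   # downgrading revokes the secret
                     ("upgrade",   ("classified", holds_secret)),
                     ("release",   (level, False))]
            if level == "classified":
                moves.append(("grant_secret", ("classified", True)))
            return moves

        def check(initial):
            assert secure(initial), "initial state violates the invariant"
            seen, frontier = {initial}, [initial]
            while frontier:
                state = frontier.pop()
                for _, nxt in transitions(state):
                    assert secure(nxt), f"{state} reaches insecure state {nxt}"
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return len(seen)

        print(check(("unclassified", False)), "reachable states; invariant preserved")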

    4.3 Principled Architecture

    There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
    Sir Charles Anthony Robert Hoare

    Tony Hoare's comment is obviously somewhat facetious, especially when confronted with complex requirements; he has chosen two extremes, whereas the kind of realistic system designs for inherently complex system requirements that we consider in this report obviously must lie somewhere in between. Nevertheless, Hoare's two extremes are both prevalent -- the former in theory and the latter in practice.

    We next seek to wisely apply the principles to the establishment of robust architectures capable of satisfying such complex requirements. Our conceptual approach is outlined roughly as follows, in a rather idealized form. 

    1. For each representatively comprehensive and realistic range of related system requirements, and with some working understanding of the relative importance of the desired principles relevant to those requirements, establish a spanning set of predictably composable trustworthy components from which systems of varying complexity, varying trustworthiness, and varying assurance can be developed, configured, administered, and maintained. Where generality is not naturally achievable -- for example, within a particularly narrow range of requirements and applicable principles -- separate architectural families should be considered instead of trying to lump everything into a common family. That is, we seek to establish some mainline families of architectures capable of attaining high security, reliability, survivability, and other critical attributes, as desired, but also to allow the general architectural framework to be adapted to special-purpose dedicated uses.
    2. For a particular set of requirements within any particular architecture or family of architectures, seek to minimize what functionality must be trustworthy with respect to each of various criticalities (reliability, integrity, nondenial of service, guaranteed real-time performance, etc.).
    3. Determine a minimal subset of components necessary for each specific set of requirements, and analyze it for consistency with the given requirements. (We use "minimal" to imply "minimum-like" but not necessarily the absolute minimum.) This is the notion of stark subsetting introduced in Section 3.3 -- that is, avoiding or eliminating unneeded functionality and (hopefully) unneeded complexity and bloatware.
    4. Examine the extent to which the chosen principles are satisfied, and consider the consequences. As appropriate, recycle through the previous steps for various families of architectures, further splitting families as suggested in the first step, reexamining the attainable trustworthiness as in the second step, and refining the minimal subset as in the third step, with corresponding refinements of the priorities for the principles and the architectures themselves.
    5. In parallel, evaluate the extent to which the desired trustworthiness might be achieved. If high assurance is required, the requisite approaches and evaluations should be applied throughout each iteration through the development stages, and at various layers of abstraction, as appropriate. However, whereas formal analysis can be especially valuable in the development of high-assurance critical systems (and then particularly in the early development stages), it is not likely to be fruitful in the absence of a principled development. Thus, it is often unwise to attempt to apply formalisms to badly conceived designs and developments. That would be throwing good money after bad, unless it adds significantly to the awareness of how bad the architecture might be -- which can usually be realized much more economically. (Formal analysis of the software implementation in such cases is generally much less rewarding than analyses of requirements and architectures, especially if it is the design that is flawed.)

    The notion of stark subsetting relates to the paired notions of composability and decomposability discussed in Section 3.3. The primary motivation for stark subsetting is to achieve minimization of the need for trustworthiness -- and, perhaps more important, minimization of the need for unjustifiable (unassured) trust. Stark subsetting can also dramatically simplify the effort involved in development, analysis, evaluation, maintenance, and operation.

    For a meaningfully complete stark subset to exist for a particular set of requirements, it is desirable that the stark subset originate from a set of composable components. As we note in Section 3.3, "If a system has been designed to be readily composable out of its components, then it is also likely to be readily decomposable -- either by removal of the unnecessary subsystems, or by the generation of the minimal system directly from its constituent parts. Thus, if composability is attainable, the decomposition problem can be considered as a by-product of composition ..."

    One of the architectural challenges is to attempt to capture the fundamental property of the multilevel-integrity concept, namely, that an application must not be able to compromise the integrity of the underlying mechanisms. However, there is an inherent difficulty in the above five-step formulation, namely, that satisfaction of overall system properties such as survivability and human safety depends on application software and users, not just on the integrity of the operating systems. Therefore, it is not enough to be concerned only with the architecture of the underlying infrastructure; it is also necessary to consider the entire system. (The Clark-Wilson application integrity model [82] is an example that requires such an analysis.)

    Primarily for discussion purposes, we next consider two extreme subspaces in a highly multidimensional space of architectures, each with its own ranges of trustworthiness and corresponding ranges of trustedness, and with associated ranges of assurance, composability, evolvability, principle adherence, and so on. (Note that these dimensions are not necessarily orthogonal, although that is unimportant here.) Of course, there are many interesting subspaces somewhere in between these two extremes, although it is not useful to attempt to itemize them here. Within each subspace in the overall multidimensional space, there are wide variations in what properties are relevant within the concept of trustworthiness, whether the implied trust (if any) is explicit or implicit, in what kinds of assurance might be provided, and so on.

    The two illustrative extremes are as follows:

    Minimal trust is generally compatible with the notions of judicious modularity and stark subsetting. On the other hand, maximal trust is usually a consequence of badly designed systems -- in which it is very difficult to achieve trustworthy subsets, let alone to remove large amounts of bloatware that in a well-designed system would conceptually not have to be trustworthy. (Compare this with the quote from Steve Ballmer given in Section 3.3.)

    What might at first seem to be a hybrid minimax approach to trust and trustworthiness is given by Byzantine agreement, discussed in this context in Section 3.5: even if at most k out of n subsystems may misbehave arbitrarily badly, the overall system still behaves correctly (for suitable k and n). Byzantine agreement makes a negative assumption that some portion of the components may be completely untrustworthy -- that is, arbitrarily bad, maliciously or otherwise -- and a positive assumption that the remaining components must be completely trustworthy. However, Byzantine agreement is in a strict mathematical sense an example of minimum (rather than minimal) trust: in the case of Byzantine clocks, the basic algorithm [198, 331, 337] provably minimizes the number 2k+1 of trustworthy clock subsystems for any given number k of arbitrarily untrustworthy clock subsystems, with the resulting 3k+1 subsystems forming a trustworthy clock system. (Note that the assumption that at most k of the clocks may be arbitrarily untrustworthy is explicit, although the nature of the untrustworthiness can be completely unspecified.)
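
    A simplified numerical sketch conveys why 3k+1 readings suffice to tolerate k arbitrarily bad ones. The sketch below uses fault-tolerant trimmed averaging -- discard the k lowest and k highest readings and average the rest -- which illustrates only the flavor of such algorithms, not the cited Byzantine clock algorithms themselves; the readings are invented.

        # With n >= 3k+1 readings, at most k of them arbitrarily bad, discarding
        # the k lowest and k highest leaves values that faulty clocks cannot drag
        # arbitrarily far from the correct ones.

        def fault_tolerant_average(readings, k):
            assert len(readings) >= 3 * k + 1, "need at least 3k+1 readings to tolerate k faults"
            trimmed = sorted(readings)[k:len(readings) - k]   # drop k lowest and k highest
            return sum(trimmed) / len(trimmed)

        # Four clocks (k = 1): three agree closely, one is arbitrarily wrong.
        print(fault_tolerant_average([100.02, 99.98, 100.01, 5_000_000.0], k=1))   # about 100.015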

    Purely for purposes of discussion, we next consider two extreme alternatives with respect to homogeneity versus heterogeneity of architecture, centralization and decentralization of physical configurations, logical control, trust, and trustworthiness. There are many combinations of these aspects of centralization versus decentralization, but for descriptive simplicity we highlight only two extreme cases. For example, we temporarily ignore the fact that centralized control could be exerted over highly distributed systems -- primarily because that is generally very unrealistic in the face of events such as unexpected outages and denial-of-service attacks. Similarly, centralized systems could have distributed control.

    Note that a collection of centralized subsystems may be coordinated into a decentralized system, so the boundaries of our simplified descriptive dichotomy are not always sharp. However, the trustworthiness issues in heterogeneous systems and networks (particularly with respect to security, reliability, and survivability) are significantly more critical than in homogeneous systems and networks, even though the generic problems seem to be very similar. In fact, the vulnerabilities, threats, and risks are greatly intensified in the presence of highly diverse heterogeneity.

    With respect to the principled approach of minimizing what must be trustworthy, the next set of bulleted items provides some motivating concepts for structuring robust systems with noncentralized trustworthiness, irrespective of whether the actual systems have centralized or decentralized control. (For example, fault tolerance and multilevel security are meaningful in centralized as well as distributed systems.)

    Each of these concepts can potentially be useful by itself or in combination with other approaches. However, it is important to realize that one approach by itself may be compromisible in the absence of other approaches, and that multiple approaches may not compose properly and instead interfere with one another. Thus, an architecture (or a family of architectures) must have considerable effort devoted to combining elements of multiple concepts into the effective development of trustworthy systems, with sufficiently trustworthy networking as needed. However, although these concepts are not necessarily disjoint and may potentially interfere with one another, each of these concepts is generally compatible with the notion of stark subsetting -- which of course itself benefits greatly from extensive composability (and its consequence, facile decomposability). 

    Many other system properties of course can also contribute to achieving our desired goals. A few of these are discussed next. These items are somewhat second-order in nature, because they rely on the trustworthiness of the design and implementation of the architectural concepts in the above list -- although they also can each contribute to increased trustworthiness.

    Appropriate architectures are then likely to be some sort of combination of the above approaches, encompassing (for example) heterogeneous subsystems and subnetworks, trustworthy servers and controlled interfaces (TS&CI) that ensure satisfaction of cross-domain security and integrity, dramatic improvements in system and network-wide authentication, trustworthy bootloads, trusted paths, traceback, trustworthy code distribution, and other concepts included in the above enumeration, particularly in observance of the principles of Chapters 2 and 3. Such architectures (referred to herein as the Enlightened Architecture Concept) would provide a basis for a wide class of systems, networks, and applications that can heterogeneously accommodate high-assurance security. This is particularly relevant concerning desires for multilevel security, which realistically are likely to involve collections of MLS clients, MSL clients, and MILS clients, all controllably networked together with a combination of servers, subject to the multilevel constraints, and with a similar assortment of assurance techniques for both conventional security and multilevel security. Because true multilevel security is overkill for many applications, this vision would provide that functionality only where it is essential.

    4.4 Examples of Principled Architectures

    In our experience, software exhibits weak-link behavior; failures in even the unimportant parts of the code can have unexpected repercussions elsewhere. David Parnas et al. [292] 

    The pervasive nature of weak links is considered in Section 2.3.1 in connection with principles for avoiding them, and again in Section 3.5 in connection with the desire for reduced dependence on trustworthiness. In concept, we like to espouse strength in depth; however, in practice, we find weakness in depth in many poorly architected systems -- where essentially every component may be a weak link. Even well-designed systems are likely to have multiple weak links, especially in the context of insider misuse. As a result, there is a fundamental asymmetry between defenders and attackers. The defenders need to avoid or protect all of the vital weak links; on the other hand, the attackers need to find only one or just a few of the weak links.

    Several historically relevant systems are particularly illustrative of the concept of principled architectures that have sought to avoid weak links in one way or another. These are discussed next.

    4.5 Openness Paradigms

    Closed-source paradigms often result in accidental open-sesames.
    Can open kimonos inspire better software?

    [This section is adapted from Neumann's paper for the 2000 IEEE Symposium on Security and Privacy, entitled "Robust Nonproprietary Software" [265].]

    Various alternatives in a spectrum between "open" and "closed" arise with respect to many aspects of the system development process, including the availability of documentation, design, architecture, algorithms, protocols, and source code. Many of the differences arise from the wide variety of licensing agreements. The relative merits of various paradigms of open documentation, open design, open architecture, open software development, and available source code are the source of frequent debate, and would benefit greatly from some incontrovertible and well-documented analyses. (For example, see [210, 227, 263, 265, 338] for a debate on open source-code availability. See also [126] on the many meanings of open-source.) The projects in the DARPA CHATS program
    (http://www.darpa.mil/ipto/research/chats/index.html) provided strong justification not only for the possibilities of openness paradigms, but also for some realistic successes.

    As noted throughout this report, our ultimate goal is to be able to develop robust systems and applications that are capable of satisfying critical requirements, not merely for security but also for reliability, fault tolerance, human safety, survivability, interoperability, and other vital attributes in the face of a wide range of realistic adversities -- including hardware malfunctions, software glitches, inadvertent human actions, massive coordinated attacks, and acts of God. Also relevant are additional operational requirements such as interoperability, evolvability and maintainability, as well as discipline in the software development process and assurance associated with the resulting systems.

    Despite extensive past research and many years of system experience, commercial development of computer-communication systems is decidedly suboptimal with respect to its ability to meet stringent requirements. This section examines the applicability of some alternative paradigms to conventional system development.

    To be precise about our terminology, we distinguish here between black-box (that is, closed-box) systems in which source code is not available, and open-box systems in which source code is available (although possibly only under certain specified conditions). Black-box software is often considered advantageous by vendors and believers in security by obscurity. However, black-box software makes it much more difficult for anyone other than the original developers to discover vulnerabilities and provide fixes therefor. It also hinders open analysis of the development process itself (which, because of extremely bad attention to principled development in many cases, is something developers are often happy to hide). Overall, black-box software can be a serious obstacle to having any objective confidence in the ability of a system to fulfill its requirements (security, reliability, safety, interoperability, and so on, as applicable). In contrast, our use of the term open-box software suggests not only that the source code is visible (as in glass-box software), but also that it is possible to reach inside the box and make modifications to the software. In some cases, such as today's all-electronic (e.g., paperless) voting systems -- in which there is no meaningful assurance that votes are correctly recorded and counted, and no useful audit trails that can be used for a recount in the case of errors or system failures (for example, see [194, 233, 235]) -- black-box software presents a significant obstacle to confidence in the integrity of the entire application. On the other hand, completely open-box software would also provide opportunities for arbitrary software changes -- which, in the case of electronic voting systems, could enable elections to be rigged by malicious manipulators (primarily insiders). Thus, there is a need for controls on the provenance of the software in both open-box and closed-box cases -- tracking the history of changes and providing evidence as to where the code actually came from.
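
    The provenance controls mentioned above can take many forms; one simple illustration is a hash-chained change log, in which each accepted change records a digest of the code and of the preceding entry, so that the recorded history cannot be silently rewritten without detection. The sketch below is hypothetical and deliberately minimal -- real provenance regimes would also involve signatures, authenticated identities, and protected storage of the log itself.

        # Sketch of provenance tracking via a hash chain over accepted changes.

        import hashlib, json

        def record_change(log, author, description, code):
            prev = log[-1]["digest"] if log else "0" * 64
            entry = {"author": author, "description": description,
                     "code_sha256": hashlib.sha256(code).hexdigest(), "prev": prev}
            entry["digest"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            log.append(entry)

        def verify(log):
            prev = "0" * 64
            for entry in log:
                body = {k: v for k, v in entry.items() if k != "digest"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if entry["prev"] != prev or entry["digest"] != expected:
                    return False
                prev = entry["digest"]
            return True

        log = []
        record_change(log, "alice", "initial tally module", b"def tally(): ...")
        record_change(log, "bob", "bounds check on ballot index", b"def tally(): ...  # checked")
        print(verify(log))              # True
        log[0]["author"] = "mallory"    # an attempt to rewrite history
        print(verify(log))              # False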

    We also distinguish here between proprietary and nonproprietary software. Note that open-box software can come in various proprietary and nonproprietary flavors, with widely varying licensing agreements regarding copyright, its supplemental concept of copyleft, reuse with or without the ability to remain within the original open-source conditions, and so on. 

    Examples of nonproprietary open-box software are increasingly found in the Free Software Movement (such as the Free Software Foundation's GNU system with Linux) and the Open Source Movement, although discussions of the distinctions between those two movements and their respective nonrestrictive licensing policies are beyond the current scope. In essence, both movements believe in and actively promote unconstrained rights to modification and redistribution of open-box software. (The Free Software Foundation Web site is
    http://www.gnu.org, and contains software, projects, licensing procedures, and background information. The Open Source Movement Web site is http://www.opensource.org/, which includes Eric Raymond's "The Cathedral and the Bazaar" and the Open Source Definition.)

    The potential benefits of nonproprietary open-box software include the ability of good-guy outsiders to carry out peer reviews, add new functionality, identify flaws, and fix them rapidly -- for example, through collaborative efforts involving geographically dispersed people. Of course, the risks include increased opportunities for evil-doers to discover flaws that can be exploited, or to insert Trojan horses and trap doors into the code.

    Open-box software becomes particularly interesting in the context of developing robust systems, in light of the general flakiness of our information system infrastructures: for example, the Internet, typically flawed operating systems, vulnerable system embeddings of strong cryptography, and the presence of mobile code. Our underlying question of where to place trustworthiness in order to minimize the amount of critical code and to achieve robustness in the presence of the specified adversities becomes particularly relevant.

    Can open-box software really improve system trustworthiness? The answer might seem somewhat evasive, but is nevertheless realistic: Not by itself, although the potential is considerable. Many factors must be considered. Indeed, many of the problems of black-box software can also be present in open-box software, and vice versa. For example, flawed designs, the risks of mobile code, a shortage of gifted system developers and intelligent administrators, and so on, all apply in both cases. In the absence of significant discipline and inherently better system architectures, opportunities may be even more widespread in open-box software for insertion of malicious code in the development process, and for uncontrolled subversions of the operational process. However, in essence, many of the underlying developmental problems tend to be very similar in both cases.

    Ultimately, we face a basic conflict between (1) security by obscurity to slow down the adversaries, and (2) openness to allow for more thorough analysis and collaborative improvement of critical systems -- as well as providing a forcing function to inspire improvements in the face of discovered attack scenarios. Ideally, if a system is meaningfully secure, open specifications and open-box source should not be a significant benefit to attackers, and the defenders might be able to maintain a competitive advantage! For example, this is the principle behind using strong openly published cryptographic algorithms, protocols, and implementations -- whose open analysis is very constructive, and where only the private and/or secret keys need to be protected. Other examples of obscurity include tamperproofing and obfuscation, both of which have very serious realistic limitations. Unfortunately, many existing systems tend to be poorly designed and poorly implemented, and often inherently limited by incomplete and inadequately specified requirements. Developers are then at a decided disadvantage, even with black-box systems. Besides, research initiated in a 1956 paper by Ed Moore [241] reminds us that purely external Gedanken experiments on black-box systems can often determine internal state details. Furthermore, reverse engineering is becoming quite feasible, and if done intelligently can result in the adversaries having a much better understanding of the software than the original developers.

    Static analysis is a vital contributor to increasing assurance, and is considered in Section 6.6.

    Behavioral application requirements such as safety, survivability, and real-time control cannot be realistically achieved unless the underlying systems are adequately trustworthy. It is very difficult to build robust applications on either proprietary closed-box software or nonproprietary open-box software that is not sufficiently trustworthy -- once again this is like building castles in the sand. However, it may be even more difficult for closed-box proprietary systems.

    Unless the fantasy of achieving security by obscurity is predominant, there seem to be some compelling arguments for open-box software that encourages open review of requirements, designs, specifications, and code. Even when obscurity may be deemed necessary in certain respects, some wider-community open-box approach may be desirable. For system software and applications in which security can be assured by other means and is not compromisible within the application itself, the open-box approach has particularly great appeal. In any event, it is always unwise to rely primarily on security by obscurity.

    So, what else is needed to achieve trustworthy robust systems that are predictably dependable? The first-level answer is the same for open-box systems as well as closed-box systems: serious discipline throughout the development cycle and operational practice, use of good software engineering, rigorous repeated evaluations of systems in their entirety, and enlightened management, for starters.

    A second-level answer involves inherently robust and secure evolvable composable interoperable architectures that avoid excessive dependence on untrustworthy components. One such architecture is noted in Section 4.3, namely, thin-client user platforms with minimal operating systems, in which trustworthiness is bestowed where it is essential -- typically, in starkly subsetted servers and firewalls, code distribution paths, nonspoofable provenance for critical software, cryptographic coprocessors, tamperproof embeddings, prevention of denial-of-service attacks, runtime detection of malicious code and deviant misuse, and so on.

    A third-level answer is that there is still much research yet to be done (such as on techniques and development practices that enable realistic predictable compositionality, inherently robust architectures, and sound open-box business models), as well as more efforts to bring that research into practice. Effective technology transfer seems much more likely to happen in open-box systems.

    Above all, nonproprietary open-box systems are not in themselves a panacea. However, they have potential benefits throughout the process of developing and operating critical systems. Nevertheless, much effort remains in providing the necessary development discipline, adequate controls over the integrity of the emerging software, system architectures that can satisfy critical requirements, and well-documented demonstrations of the benefits of open-box systems in the real world. If nothing else, open-box successes may have an inspirational effect on commercial developers, who can rapidly adopt the best of the results. We are already observing some of the major commercial system developers exploring some of the alternatives for open-box source-code distribution. The possibilities for coherent community cooperation are almost open-ended (although ultimately limited in scale and controllability), and offer considerable hope for nonproprietary open-box software -- if the open-box community adopts some concepts of principled architectures such as those discussed here.

    Of course, any serious analysis of open-box versus closed-box and proprietary versus nonproprietary must also take into account the various business models and legal implications. The effects of the federal Digital Millennium Copyright Act (DMCA), the state Uniform Computer Information Transactions Act (UCITA), shrink-wrap restrictions, and other constraints must also be considered. However, these considerations are beyond the present scope.

    A recent report [163] of the Carnegie-Mellon Software Engineering Institute provides a useful survey of the history and motivations for open-source software.

    4.6 Summary

    If carpenters built the way programmers program, the arrival of the first woodpecker would mean the end of civilization as we know it. Gerald Weinberg

    In summarizing the conclusions of this chapter, we revisit and extend the quasi-Yogi Berra quote at the beginning of Section 4.1. A system is unlikely to be trustworthy if it does not have a sufficient supply of good designers, good programmers, good managers, and good system administrators. However, it is also not likely to be secure, reliable, generally trustworthy, evolvable, interoperable, and operationally manageable if the development does not begin with feasible requirements that are well specified and realistically representative of what is actually needed, and if it does not involve good specifications and good documentation, and if it does not use good compilers, good development tools, and lots more. Note that if a set of requirements is trivial or seriously incomplete, the fact that a system satisfies those requirements is of very little help in the real world.

    Thus, appropriately well defined and meaningful requirements for trustworthiness are essential. Good system and network architecture is perhaps the most fundamental aspect of any efforts to develop trustworthy systems, irrespective of the particular set of requirements whose satisfaction is necessary. Wise adherence to a relevant set of principles can be extremely helpful. Architectural composability and implementation composability are of enormous importance, to facilitate development and future evolution. Policy composability is also useful if multiple policies are to be enforced. Good software engineering practice and the proper use of suitable programming languages are also vital. The absence or inadequacies of some of these ideals can sometimes be overcome. However, sloppy requirements and a fundamentally deficient architecture represent huge impediments, and will typically result in increased development costs, increased delays, increased operational costs, and future incompatibilities.

    As we note at the end of Chapter 3, seamless composability is probably too much to expect overall, particularly in the presence of legacy software that was not designed and implemented to be composable; instead, we need to establish techniques that can provide composability sufficient to meet the given requirements. If that happens to be seamless in the particular case, so much the better.

    We believe that the approaches considered in this report have almost open-ended potential for the future of trustworthy information systems. They are particularly well suited to the development of systems and networking that are not hidebound by compatibility with legacy software (and, to some extent, legacy hardware), but many of the concepts are applicable even then. We hope that these concepts will be adopted much more widely in the future by both open-box and closed-box communities. In any case, much greater discipline is needed in design, development, and operation.

     

    5 Principled Interface Design

    Perspicuous: plain to the understanding, especially because of clarity and precision of presentation. (Webster's International Dictionary)

    Synopsis

    This chapter considers system architecture from the viewpoint of external and internal system interfaces, and applies a principled approach to interface design.

    5.1 Introduction

    Interfaces exist at different layers of abstraction (hardware configuration, operating systems, system configurations, networking, databases, applications, control system complexes such as SCADA systems and air-traffic control, each with both distributed and local control) and should reflect the abstractions of those layers and any security issues peculiar to each layer, suitable for the specific types of users. In general, security considerations should be hidden where possible, except where it is necessary for control and understandability of the interfaces. In addition, some sort of automated (or at least semiautomated) intelligent assistance is essential, according to specific user needs.

    Operators, administrators, and users normally have different needs. Those needs must be reflected in the various interfaces -- some of which must not be accessible to unprivileged users. In particular, operators of control systems, enterprises, and other large-system applications need to be able to see the big picture at an easily understood layer of abstraction (e.g., dynamic status updates, configuration management, power-system error messages), with the ability on demand to drill down to arbitrarily fine-grained details. As a consequence, it is generally necessary that greater detail must be available to certain privileged users (for example, system and network administrators or system operators), according to their needs -- either through a separate interface or through a refinement mechanism associated with the standard interface.

    In general, it is important that the different interfaces for different roles at different layers be consistent with one another, except where that is prevented by security concerns. (This is a somewhat subtle point: in order to minimize covert channels in multilevel secure systems, it may be deemed advisable that different, potentially inconsistent, versions of the same information content must be accorded to users with different security levels. This multiplicity of content for seemingly the same information is known as polyinstantiation.) Most important is that the interfaces truly reflect the necessary trustworthiness issues.
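
    As a small illustration of polyinstantiation (hypothetical schema and data, not any particular multilevel database), the sketch below keys tuples on both the apparent primary key and the security level, and returns to each requester only the most sensitive instance that the requester is cleared to see.

        # Sketch of a polyinstantiated relation: the same key may have distinct
        # tuples at different security levels.

        LEVELS = {"unclassified": 0, "secret": 1}

        class PolyinstantiatedTable:
            def __init__(self):
                self.rows = {}    # (key, level) -> value

            def insert(self, key, level, value):
                self.rows[(key, level)] = value

            def select(self, key, user_level):
                """Return the most sensitive instance visible at the user's level."""
                visible = [(LEVELS[lvl], val) for (k, lvl), val in self.rows.items()
                           if k == key and LEVELS[lvl] <= LEVELS[user_level]]
                return max(visible)[1] if visible else None

        cargo = PolyinstantiatedTable()
        cargo.insert("flight-17", "unclassified", "relief supplies")        # cover story
        cargo.insert("flight-17", "secret", "surveillance equipment")
        print(cargo.select("flight-17", "unclassified"))   # relief supplies
        print(cargo.select("flight-17", "secret"))         # surveillance equipment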

    Requirements must address the interface needs at each layer, and architectures must satisfy those requirements. This is very important, and should be mirrored in the requirements and architecture statements. In general, good requirements and good architectures can avoid many otherwise nasty administrative and user woes -- viruses, malcode, patch management, overdependence on and potential misuse of superuser privileges. As an example, the Trusted Xenix system requirements demanded a partitioning of privileged administrator functions rather than allowing a single superuser role. This illustrates the principles of separation of duties and a corresponding separation of roles.
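
    The role-partitioning idea can be sketched very simply; the roles and operations below are hypothetical and are not drawn from the Trusted Xenix design. The point is merely that no single role is granted every privileged operation, so compromising (or misusing) one administrative account does not yield total control.

        # Sketch of separation of administrative duties: privileged operations are
        # partitioned among distinct roles instead of a single superuser.

        ROLE_OPERATIONS = {
            "security_officer": {"set_label", "review_audit_log"},
            "system_operator":  {"mount_volume", "shutdown"},
            "account_admin":    {"add_user", "disable_user"},
        }

        class NotAuthorized(Exception):
            pass

        def invoke(user_roles, operation):
            if not any(operation in ROLE_OPERATIONS.get(role, set()) for role in user_roles):
                raise NotAuthorized(f"operation {operation!r} requires a role that grants it")
            print(f"performing {operation}")

        invoke({"system_operator"}, "shutdown")        # allowed
        try:
            invoke({"system_operator"}, "add_user")    # no single role can do everything
        except NotAuthorized as e:
            print(e)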

    In attempting to simplify the roles of administrators and operators, automated vendor-enforced updates are becoming popular, but they represent a huge source of security risks. Their use must be considered very carefully -- commensurate with the criticality of the intended applications. Remote maintenance interfaces are vital, especially in unmanned environments, but they also represent considerable security risks that must be guarded against.

    The rest of this chapter as well as Sections 7.6 and 8.4, and some of Section 7.11 are adapted from the body of a self-contained report, "Perspicuous Interfaces", authored by Peter Neumann, Drew Dean, and Virgil Gligor as part of a seedling study done for Lee Badger at DARPA under his initiative to develop a program relating to Visibly Controllable Computing. That seedling study was funded as an option task associated with SRI's CHATS project. Its report also included an appendix written by Virgil Gligor, entitled "System Modularity: Basis for the Visibility and Control of System Structural and Correctness Properties", which is the basis for Appendix B of this report, courtesy of Virgil Gligor.

    5.2 Fundamentals

    The Internet is arguably the largest man-made information system ever deployed, as measured by the number of users and the amount of data sent over it, as well as in terms of the heterogeneity it accommodates, the number of state transitions that are possible, and the number of autonomous domains it permits. What's more, it is only going to grow in size and coverage as sensors, embedded devices, and consumer electronics equipment become connected. Although there have certainly been stresses on the architecture, in every case so far the keepers of the Internet have been able to change the implementation while leaving the architecture and interfaces virtually unchanged. This is a testament to the soundness of the architecture, which at its core defines a "universal network machine". By locking down the right interfaces, but leaving the rest of the requirements underspecified, the Internet has evolved in ways never imagined. Larry Peterson and David Clark [301]

    This chapter seeks to provide guidelines for endowing system interfaces and their administrative environments with greater perspicuity, so that designers, developers, debuggers, administrators, system operators, and end users can have a much clearer understanding of system functionality and system behavior than is typically possible today. Although the primary concern is for interfaces that are visible at particular layers of abstraction, the approach is immediately also applicable to internal interfaces.

    As is true with security in general, the notion of perspicuity is meaningful primarily only with respect to well-defined criteria (assuming suitable definitions). Some desirable perspicuity criteria and characteristics are considered in Section 5.2.3.

    The approach here considers the traditional problems of design, implementation, operation, and analysis, and suggests ways to achieve the basic goal of perspicuity. It spans source-code analysis, the effects of subsystem composition, debugging, upgrades and other program enhancements, system maintenance, code generation, and new directions. It addresses the relevance of specification languages, programming languages, software engineering development methodologies, and analysis tools. It is applicable to multiple layers of abstraction, including hardware, operating systems, networks, and applications. It considers formal methods, ad-hoc techniques, and combinations of both. Other relevant architectural and system-oriented considerations are characterized in Chapter 4.

    The main emphasis here is on the understandability of the interfaces and the functionality that they represent. Toward that end, we first seek evaluation criteria for and constraints on relevant interfaces (and on development processes themselves) that can help avoid many of the commonly experienced problems relating to security and reliability. We then explore a range of tools that might help detect and eliminate many of the remaining problems and that might also improve the perspicuity of critical software. It is clear that this problem is ultimately undecidable in a strict sense, but nevertheless much can be done to advance the developmental and operational processes.

    This report is not intended as a detailed treatise on the subject of perspicuous interfaces. Instead, it provides an enumeration of the basic issues and some consideration of relative importance of possible approaches, as well as an understanding of how interface design fits into the overall goal of principled assuredly trustworthy composable architectures.

    5.2.1 Motivations for Focusing on Perspicuity

    There are several reasons for expending efforts on enhancing perspicuity.

    5.2.2 Risks of Bad Interfaces

    The archives of the Risks Forum are replete with examples of badly conceived and badly implemented interfaces, with consequential losses of life, injuries, impairment of human well-being, financial losses, lawsuits, and so on. A few examples are summarized here; references and further details can be found in the RISKS archives at http://www.risks.org, for which a topical index appears in the ever-growing Illustrative Risks document [267]:
    http://www.csl.sri.com/neumann/illustrative.html. (Some of the pre-1994 incidents are also described in [260].)

    Neumann's Inside Risks column from the March 1991 Communications of the ACM ("Putting Your Best Interface Forward") includes more detailed discussions of several examples, and is the basis for [260], pp. 206-209.

    There are many other emerging applications that will have serious risks associated with nonperspicuity of their human interfaces, especially in systems intended to be largely autonomic. One critical application involves adaptive automobile cruise-control that adjusts to the behavior of the preceding car(s) (including speed and acceleration/deceleration, lane changes, and so on). Some of this functionality is beginning to emerge in certain new cars. For example, BMW advertises an automobile with an 802.11 access point that would enable downloading of new software (presumably by the factory or mechanic, but perhaps even while you are driving?). The concept of a completely automated highway in the future will create some extraordinary dependencies on the technology, especially if the human interfaces provide for emergency overrides. Would you be comfortable on a completely automated networked highway system alleged to be safe, secure, and infallible, where your cruise-control chip is supposedly tamperproof, is supposed to be replaced only by approved dealers, is remotely reprogrammable and upgradeable, and can be monitored and controlled remotely by law enforcement -- which can alter its operation in a chase among many other vehicles?

    5.2.3 Desirable Characteristics of Perspicuous Interfaces

    The major issues underlying our main goal require a characterization of the requirements that must be met by system architectures and by their visible and hidden interfaces, as well as constraints that might be considered essential.

    A popular belief is that highly trustworthy systems with nontrivial requirements are inherently complex. However, we observe in Chapter 4 that -- in a well-designed system -- complexity can be addressed structurally, yielding apparent simplicity locally even when the overall system is complex. To this end, abstractional simplicity is highly desirable. It can be achieved as a by-product of sound system design (e.g., abstraction with strong typing and strong encapsulation), well-conceived external and internal interfaces, proactive control of module interactions, and clean overall control flow. For existing legacy systems in which abstractional simplicity may not be attainable directly, it may still sometimes be attainable through wrappers whose interfaces provide appropriate abstraction. In any case, aids to analysis can help significantly. Thus, a sensible approach to perspicuous computing needs to address the system design structure, all of the relevant interfaces (visible or not), and the implementation. Accordingly, techniques for analyzing interfaces for perspicuity and other characteristics would be very valuable.
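
    As a minimal illustration of the wrapper idea (purely hypothetical; the legacy routine, its mode codes, and its flag bits are invented for this sketch), a confusing legacy interface can be hidden behind operations whose names and signatures say exactly what they do:

        #include <stddef.h>
        #include <string.h>

        /* Imagined legacy routine: one entry point whose behavior depends on a
         * mode code and poorly documented flag bits.  This stub merely stands in
         * for the real thing so that the sketch is self-contained. */
        static int legacy_xfer(int mode, int flags, void *buf, size_t len)
        {
            (void)flags;
            if (mode == 0)
                memset(buf, 0, len);   /* pretend to "read" by zero-filling */
            return 0;                  /* pretend success */
        }

        /* Wrapped interface: abstractional simplicity at the point of use. */
        int device_read(void *buf, size_t len)
        {
            return legacy_xfer(0, 0x04 /* blocking */, buf, len);
        }

        int device_write(const void *buf, size_t len)
        {
            return legacy_xfer(1, 0x04 /* blocking */, (void *)buf, len);
        }

    The legacy code itself is untouched; only the interface presented to its callers becomes perspicuous.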

    We begin with a consideration of desirable interface characteristics:

    Desired properties of specifications, architectures, and implementations are considered in subsequent sections.

    5.2.4 Basic Approaches

    There are several different approaches to increasing perspicuity. Ideally, a combination of some of the following might be most effective, but each by itself can sometimes be helpful.

    First, consider proactive efforts. Ideally, it would be most appropriate to develop new systems that satisfy all of the above desirable characteristics -- and much more. However, suppose you have an existing system that fails to satisfy these characteristics, or is in some ways difficult to understand. Let us assume that you have identified an interface that is seriously confusing.

    For the most part in this study, we assume that source code is available. However, we also include some approaches that apply to object code, whether or not the corresponding source code is available.

    Analytic efforts at enhancing the perspicuity of software interfaces can also be useful even when they require no modification of either the source code or the object code implementing those interfaces.

    5.2.5 Perspicuity Based on Behavioral Specifications

    At the IBM Almaden Institute conference on human interfaces in autonomic systems, on June 18, 2003, Daniel M. Russell stressed the importance of shared experience between users and system developers. The following speaker then continued that chain of thought:

    People and systems are not separate, but are interwoven into a distributed system that performs cognitive work in context. David D. Woods

    An enormous burden thus rests on the human interfaces. As noted in Section 5.2.1, perspicuous interfaces offer their greatest advantage when something has gone wrong, and the system is not working as intended. To really gain leverage from perspicuous interfaces, we need three primary areas of support:

    Together, these capabilities could revolutionize the system debugging experience, by combining tool support with machine-usable documentation of what is supposed to happen, enabling comparisons of theory and practice.

    5.2.6 System Modularity, Visibility, Control, and Correctness

    To establish a baseline for investigating system modularity as a basis for making a system's structural and correctness properties visible, Virgil Gligor performed a brief analysis of prior art, appraising which methodologies and tools have (and have not) been effective in defining and analyzing system modularity, and to what extent. That analysis is the basis for Appendix B of this report.

    Gligor's analysis investigates the following topics related to modular system structures:

    1. A generally accepted definition of a module (and of module instances such as subsystem, submodule, service, layer, and type manager)
    2. The separation of a module's interface from its implementation
    3. The replacement-independence property of modules
    4. Structural relations among modules (e.g., the contains and uses relations)
    5. The correctness dependencies among modules and their manifestation as causal relations among module interfaces (e.g., "service", "data", and "environment" dependencies, among others), which could lead to some sort of calculus of dependencies

    Gligor's analysis also presents the relationship between a module's definition and its packaging within a programming and configuration-management system, and outlines measures (i.e., metrics) of modularity based on the extent of replacement independence and the extent of global-variable use, as well as measures of module packaging defects.

    The intent of this analysis is to identify pragmatic tools and techniques for modularity analysis that can be used in practice. Of particular interest are tools that can be used to produce tangible results in the short term and that can be extended to produce incrementally more complex dependency analyses in the future.

    Virgil Gligor notes that Butler Lampson [199] argues that module reusability has failed and will continue to fail, and that "only giant modules will survive." If we believe Butler's arguments (and they are usually hard to dismiss), this means that "visibility into giants" is more important than ever. [Thanks to Virgil Gligor for that gem.]

    5.3 Perspicuity through Synthesis

    We summarize here the main concepts and issues relating to system architecture, software engineering, programming languages, and operational concerns. No single one of those areas is sufficient for ensuring adequate perspicuity, security, reliability, and so on; indeed, all of them should be important contributors to the overall approach.

    From the synthesis perspective, there are two different manifestations of perspicuity: (1) making interfaces understandable when they are to be used under normal operation, and (2) making the handling of exceptional conditions understandable when remediation is required (e.g., recovery, reconfiguration, debugging, aggressive responses). Both of these are considered, although the most significant payoffs may relate to the second case. Note that perspicuity can also be greatly aided during development by the appropriate use of static analysis tools.

    5.3.1 System Architecture

    Issues: Hardware protection and domain isolation, software abstraction, modularity, encapsulation, objects, types, object naming and search strategies, multiprogramming, processes, domains, threads, context changes, concurrency, interprocess communication, multiprocessing, interprocessor communication, networking, wrappers, and so on.

    Useful historical examples: Two system architectures are noted in which great emphasis was devoted to interface design within a hierarchical structure.

    Other systems also reflected the importance of proactive interface design -- for example, some other capability-based architectures, Secure Computing Corp.'s strongly typed systems, and to some extent others such as SE-Linux, Plan-9, and some of the multilevel-secure kernels.

    The following concepts are relevant to the development of perspicuous interfaces.

    Relevance for perspicuity: All of these issues can seriously affect interface perspicuity.

    5.3.2 Software Engineering

    Unfortunately, "software engineering" is a term applied to an art form, not to an engineering discipline. Nevertheless, there are many principles (such as those in Chapter 2) of sound architectures, good software engineering, and good development practice, which -- if they were followed wisely -- can result in systems with much greater security, reliability, and so on, much greater assurance that those properties are indeed satisfied statically, and much greater perspicuity when something goes wrong.

    Issues: architecture, distributed systems, real-time systems, requirements, specification, software development methodologies, abstract implementations, composability, abstraction, modularity, encapsulation, information hiding, uniform handling of objects, object-oriented approaches, development practices, integration, debugging, testing, modeling, simulation, fault injection, formal methods for specification and analysis of functional and nonfunctional properties, formal verification and model checking, performance analysis, tools for static and dynamic analysis, software process technology, Clean Rooms, Extreme Programming, and so on. Development environments, component technologies, and related approaches such as the Common Object Request Broker Architecture (CORBA), CORBA Component Model, the Component Object Model (COM), DCOM, ActiveX, Enterprise Java Beans (EJB), Java Remote Method Invocation (RMI), and so on.

    For example, CORBA provides some basic help in dealing with the interface definitions of proprietary closed-source components without requiring access to the source code. CORBA defined the Interface Definition Language (IDL) as a means of providing language-independent interface definitions. IDL types are then mapped into corresponding types in each language; there are standard mappings for some languages (C++, Java, Smalltalk). Although IDL greatly aids cross-language interoperability, to date it has not been widely applied to COTS software. (Note: Netscape based much of its architecture in the mid-to-late 1990s on the goal of being a "platform" on CORBA. There are rumors that a good bit of custom, in-house software in large corporations uses CORBA.) In the open-source world, its greatest success has been in the GNOME project. Like other existing technologies, IDL does not support behavioral specifications. Although the CORBA community discusses using IDL to structure the interfaces of a monolithic program, this does not appear to be very popular. CORBA's success, rather, has been in providing object-oriented RPC services, where IDL is used as the RPC specification language.
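
    As a purely illustrative sketch (not the OMG-standard language mapping; the names are invented), the interface/implementation separation that IDL encourages can be approximated even in plain C by a header that exposes only opaque handles and operations, never the implementing data structures:

        /* account.h -- hypothetical interface-only header, in the spirit of an
         * IDL-generated stub.  Clients compile against this header without ever
         * seeing, or depending on, the implementation's representation. */
        #ifndef ACCOUNT_H
        #define ACCOUNT_H

        typedef struct Account Account;   /* opaque: layout is hidden from clients */

        Account *account_open(const char *owner);
        int      account_deposit(Account *a, long cents);   /* returns 0 on success */
        long     account_balance(const Account *a);
        void     account_close(Account *a);

        #endif /* ACCOUNT_H */

    As the text notes, such a definition still says nothing about behavior; a behavioral specification would additionally constrain, for example, how the balance must change after each deposit.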

    Relevance for perspicuity: All of these issues can seriously affect interface perspicuity. In particular, bad software engineering practice can result in systems that are extremely difficult to understand at all layers of abstraction (if there are any!). On the other hand, intelligently applied good software engineering practice can greatly enhance perspicuity, particularly for software for which human interface design is an integral part of the system architecture. However, even the best program analysis tools cannot overcome inherently bad architectures, bad software engineering practice, and sloppy testing.

    5.3.3 Programming Languages and Compilers

    Issues: We begin with a brief enumeration of the most relevant issues that affect interface perspicuity. (Some of the items -- particularly those relating to programming languages -- are merely collections of thoughts for further discussion.)

    Here are several guidelines for increasing perspicuity through good programming languages and compiler-related tools, as well as good programming practice.

    Analysis tools that can aid in determining the perspicuity of interfaces are considered in Section 5.4.

    5.3.4 Administration and System Operation

    Administrative-interface issues include ease of maintenance, autonomic system behavior and what happens when the autonomic mechanisms fail, self-diagnosing systems, configuration consistency analysis, and many other topics.

    User-interface issues include ease of diagnosing system failures, ease of debugging application code, analysis tools, and so on. Of particular concern are the users who have critical responsibilities -- for example, operators of SCADA systems and other critical infrastructure components, control systems, financial systems, and so on. In these cases, real-time monitoring and analysis for anomalous system behavior become part of the interface purview.

    Relevance for perspicuity: Today's system administrator interfaces tend to put an enormous burden on the administrators. Simplistic would-be solutions that attempt to interpret and explain what has gone wrong are likely to be inadequate in critical situations that go beyond the low-hanging fruit.

    5.3.5 No More and No Less

    "What you see is what you get" might be considered as a basic mantra of perspicuity -- especially if it is taken seriously enough and assuredly implies that what you get is no more and no less than what you see. (Recall this dictum at the beginning of Section 3.2, in the context of the effects of composition.) The extent of typical exceptions to no more and no less is astounding.

    There are many examples of more, many of which can be very damaging: hidden side effects, Trojan horses, undocumented and unadvertised hardware instructions and software primitives (sometimes with powerful override abilities), lurking race conditions and deadly embraces, blue screens of death, frozen windows, misleading URLs (for example, a Cyrillic o instead of a Roman o, or a zero in MICROS0FT waiting to take you somewhere else), and so on, ad infinitum. Les Lamport's definition of a distributed system noted in Chapter 1 suggests that what you might have expected to happen won't.

    There are also various examples of less, many of which are likely to be frustrating or debilitating: expected resources that do not exist or are temporarily unavailable, such as URLs that point nowhere, even though they worked previously.

    Perhaps the most insidious cases are those in which something more and something less both occur at the same time.

    5.3.6 Multilevel Security and Capabilities

    Several of the system architecture approaches in Chapter 4 provide elegant ways of achieving "what you see is exactly what you get": multilevel-secure (MLS) systems and capability-based addressing.

    In particular, if a multilevel-secure object is at a security level higher than, or in a compartment inaccessible to, the would-be user, then the user simply is not supposed to know of the existence of that object; any attempt to name it or to list a directory in which it exists is greeted with a single, relatively neutral, undifferentiated standard exception condition such as "no such object" that conveys no information. Note that any exception condition indicator that provides a variety of possible context-dependent error messages is likely to be subject to exploitable covert channels through which information can be signaled.
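
    A minimal sketch of that convention follows (the types, levels, and helper functions are invented for illustration; no actual MLS kernel interface is being quoted). The point is simply that the unauthorized case and the nonexistent case return exactly the same answer:

        #include <stddef.h>
        #include <string.h>

        typedef struct { int level; } subject;                    /* toy clearance      */
        typedef struct { int level; const char *name; } object;   /* toy labeled object */

        typedef enum { OBJ_OK, OBJ_NO_SUCH_OBJECT } obj_status;

        static object store[] = { { 2, "payroll" }, { 0, "motd" } };   /* toy directory */

        static object *directory_find(const char *name)
        {
            for (size_t i = 0; i < sizeof store / sizeof store[0]; i++)
                if (strcmp(store[i].name, name) == 0)
                    return &store[i];
            return NULL;
        }

        /* Simple dominance check: the subject's level must be >= the object's level. */
        static int dominates(int subj_level, int obj_level)
        {
            return subj_level >= obj_level;
        }

        obj_status object_lookup(const subject *s, const char *name, object **out)
        {
            object *obj = directory_find(name);
            *out = NULL;
            if (obj == NULL || !dominates(s->level, obj->level))
                return OBJ_NO_SUCH_OBJECT;   /* one neutral answer for both cases */
            *out = obj;
            return OBJ_OK;
        }

    Any richer reporting at this boundary (e.g., distinguishing "nonexistent" from "access denied") reopens exactly the covert channel mentioned above.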

    Similarly in capability-based addressing, if a user does not have a proper capability for an object, that object is logically equivalent to being nonexistent.

    In a sense, this is a very satisfactory goal in terms of perspicuity of naming and accessing system resources. On the other hand, if anything goes wrong, life can become quite complicated. From a user's perspective, everything that supposedly needs to be visible is visible -- except when it isn't. From an application developer's perspective, simply plunking a legacy software system into the multilevel environment may cause the application to break, perhaps as a result of short-sighted assumptions in the legacy code or a configuration problem in the installation of that code. From a system administrator's perspective, access across security levels may be necessary to determine what went wrong -- unless the system is well designed and single-level analysis tools can suffice. Otherwise, there is a risk of violating the MLS properties. Thus, MLS and capabilities can improve perspicuity when things go well, and can decrease it when things go wrong -- unless the architecture and implementation are well conceived in the first place and the analysis tools are effective. Furthermore, in the absence of some sort of multilevel integrity in an MLS system, hidden dependencies on untrustworthy components can undermine the integrity of the MLS levels.

    5.4 Perspicuity through Analysis

    5.4.1 General Needs

    From the dynamic analysis perspective, there are again two different manifestations of perspicuity: (1) using static and dynamic analysis of a given interface to provide greater understandability as the interface is being used under normal operation, and (2) interpreting real-time exceptional conditions and making them understandable contextually -- for example, whenever remediation is urgently required (as in the cases of recovery, reconfiguration, debugging, and aggressive automatic responses). Both of these cases are considered in this section, where we seek to identify, characterize, and exploit analysis techniques for defining and analyzing system interfaces so that the behavior of the systems and the dependencies among those systems can be more easily understood and controlled.

    This is a multidimensional challenge. Some of the dimensions are outlined as follows.

    In general, it is advantageous to address the problem of interface perspicuity up front, and then consistently follow through. This suggests an approach that encompasses the entire development cycle and operations, which can make the analysis challenges much more accessible.

    5.4.2 Formal Methods

    Issues: Methods and tools for demonstrating consistency of specifications with requirements, and consistency of code with specifications; formal verification and model checking; analysis tools for detecting characteristic security flaws, buffer overflows, and so on.

    Examples: HDM hierarchical abstraction, formal specifications, state mapping functions, and abstract implementations; PVS and PVS interpretations; CCS, pi-calculus, and so on.

    Of particular recent interest are Drew Dean's PhD thesis [95], the Wagner-Dean paper [98] on static analysis of C source code, the work of Giffin, Jha, and Miller [130] on analyzing binaries for mobile code (extending the Wagner-Dean approach), and Hao Chen's effort [75, 77, 78] at formal model checking to search for characteristic flaws. (Of course, it is much easier to do analysis on source code, if it is available.) Also, see Mitchell and Plotkin [240] for a highly readable paper on theoretical foundations of abstract data types with existential types; it is of particular interest to type theorists. See Chapter 6.

    5.4.3 Ad-Hoc Methods

    Issues: Informal methods and tools for testing for the inconsistency of specifications with requirements and the inconsistency of code with specifications; other tools.

    5.4.4 Hybrid Approaches

    Purely formal tools tend to be difficult to use for ordinary mortals. Purely ad-hoc tools are limited in what they can achieve. Semiformal tools may provide a bridge between these two approaches. Examples include formally based testing (e.g., mechanically deriving test conditions) and machine-assisted code inspections.

    5.4.5 Inadequacies of Existing Techniques

    Some of the existing techniques noted above can have significant effect in the near term, if applied wisely. Over the longer term, however, those techniques are not nearly adequate. Thus, in this section we consider several areas in which there are serious gaps in existing approaches.

    Many problems are made worse by a lack of perspicuity:

    5.5 Pragmatics

    5.5.1 Illustrative Worked Examples

    We foresee various possibilities for worked examples that can be described conceptually without much implementation, or whose implementation could be outlined and then pursued in detail. Some of these examples can demonstrably enhance perspicuity both statically and dynamically. It might also be possible to characterize some measures of perspicuity that could be determined analytically, although this is deemed less likely and probably less realistic.

    One possible overarching approach is the following. Given a combination of specifications, source code, and perhaps some knowledge of the possible operating environments, statically analyze them and transform them into a body of knowledge that can be interrogated dynamically, for example, when an environment is under stress. A combination of dynamically interpreted pre- and post-conditions could then directly produce analysis results that would facilitate the understanding of attacks and malfunctions, based on which conditions fail. Such an approach would provide help in recommending autonomic responses and human-aided responses, as appropriate. Note that this is not really a new concept. For example, the ESS Number 2 telephone switching systems had a diagnostic dictionary that covered almost all possible failure modes and suggested appropriate remedies. However, in the context of more modern programming language and operating system technologies, such an approach could now be significantly more effective -- albeit significantly more complicated.
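
    As a small hedged sketch of the dynamically interpreted pre- and postconditions suggested above (the operation, the conditions, and the reporting hook are all invented for illustration), a wrapper can evaluate the conditions around a critical operation and record which one failed, giving a monitoring or diagnostic layer something concrete to reason about:

        #include <stdio.h>

        typedef struct { long balance; } account;   /* toy example operation: a transfer */

        static void report(const char *what)
        {
            fprintf(stderr, "condition failed: %s\n", what);   /* hook for a monitor */
        }

        /* Precondition:  amount positive and the source sufficiently funded.
         * Postcondition: total funds are conserved across the transfer. */
        int checked_transfer(account *from, account *to, long amount)
        {
            long before = from->balance + to->balance;

            if (!(amount > 0 && from->balance >= amount)) {
                report("pre: amount > 0 && from->balance >= amount");
                return -1;                     /* refuse rather than proceed blindly */
            }

            from->balance -= amount;           /* the "real" operation */
            to->balance   += amount;

            if (from->balance + to->balance != before) {
                report("post: conservation of funds");
                return -2;                     /* flag for remediation or diagnosis */
            }
            return 0;
        }

    In the approach envisioned above, the conditions (and the reporting hooks) would be derived from the specifications and source code by static analysis, rather than written by hand as in this sketch.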

    Several specific examples come to mind as candidates for worked examples.

    5.5.2 Contemplation of a Specific Example

    When we went looking for examples where behavioral specifications would be useful, the BSD TCP/IP stack seemed like a logical place to start: not only is the software open-source, but there is excellent documentation as well [229, 382]. Unfortunately, this plan did not succeed as originally hoped. Our first idea was to examine the implementation of the Address Resolution Protocol (ARP). Up through 4.3BSD, the ARP implementation was a small module with a simple interface to the rest of the TCP/IP stack. In 4.4BSD, a new, generalized routing table structure that integrated ARP was introduced. The ARP implementation no longer has a simple, clean interface to the rest of the kernel -- it is now part and parcel of the routing code, a much larger and more complicated piece of the TCP/IP stack. (Of course, it is conceptually nice to deal with ARP-resolved Ethernet addresses in the routing framework, and eliminate the special handling of Ethernet addresses for machines on the local network.)

    Our next target was the UDP implementation. UDP is a nice simple protocol, and would appear to be an ideal example. The networking code in the kernel uses an object-oriented design similar to that of the file system code, although the actual implementation is in plain C. The implementation combines both a message-passing style, à la naïve objects in Scheme, and a record-of-functions style more similar to C++. The message-passing style is used on output, and the record-of-functions style on input. With better language support, these paradigms could result in an extremely clean implementation, but with C requiring manual implementation of all the details, some generally difficult layering issues explicitly raise their ugly heads.

    On output, the socket layer hands off to the udp_usrreq function, which takes as arguments a socket, a command (i.e., message), and three mbuf chains: the data to be sent, the address to send it to, and some control information that is not used by UDP and will not be discussed further. If the command is PRU_SEND, then udp_output is called. udp_output is where things start to get ugly, making a behavioral specification less elegant than one would desire: either the socket has been connected to a destination (Yes, this makes sense for UDP!), or a destination address has been supplied -- but not both. The code, most unfortunately, knows details about all of the data structures, and peeks around inside them to enforce this addressing invariant. With better language support, including either method overloading on argument type, as in Java, or full multiple dispatch, as in CLOS, this could be very elegant: whether or not a socket is connected to a destination, as well as whether or not a destination address is supplied, could easily be encoded in the type system. Then there would be four separate implementations, three of which simply signal an error, with the fourth function prepending the UDP header, generating the UDP checksum, and eventually calling ip_output. The main implementation would not need explicit code to check the addressing invariant, as everything would be guaranteed correct by the programming language.
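
    The addressing invariant described above can be summarized in a deliberately simplified sketch (this is not the 4.4BSD code; the types and names are invented, and mbuf handling, header construction, and checksumming are elided):

        #include <stddef.h>

        struct sketch_socket { int connected; };   /* destination details omitted */

        enum send_status { SEND_OK, SEND_EISCONN, SEND_ENOTCONN };

        /* A datagram may be sent on a connected socket with no explicit address,
         * or on an unconnected socket with an explicit address -- never both and
         * never neither. */
        enum send_status
        udp_send_sketch(const struct sketch_socket *so, const void *dst_addr)
        {
            if (so->connected && dst_addr != NULL)
                return SEND_EISCONN;    /* both supplied: reject */
            if (!so->connected && dst_addr == NULL)
                return SEND_ENOTCONN;   /* neither supplied: reject */
            /* ... prepend UDP header, compute checksum, hand off to IP output ... */
            return SEND_OK;
        }

    In the language-supported version imagined above, these four combinations would instead be four separate entry points, three of which merely signal errors.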

    On input, things are much simpler. The code checks various validity conditions on the input packet, and assuming the packet is valid, then checks whether the packet is destined for a unicast or broad/multicast address. If the packet is destined for a unicast address, the code searches for a socket to deliver the packet to. Assuming that an appropriate socket is found, the data is appended to its receive queue, and the process is woken up. For broad/multicast packets, the data can be delivered to more than one socket, and the appropriate process(es) are woken up. If no socket is found, an ICMP error packet is sent back to the source of the packet.

    5.6 Conclusions

    This chapter is perhaps the most speculative in the report: it rests more on hopes for the future and less on experience from the past than do the chapters on principles, composability, and architectures -- all of which have long histories in the research and development communities. Interface architectures have seemingly been neglected, relegated to an afterthought of system design and implementation.

     

    6 Assurance

    Synopsis

    Even if requirements and architectures have been created composably and with serious observance of the most important principles, questions must be considered as to the trustworthiness of the resulting systems and their uses in applications. However, such analysis can be extremely difficult unless assurance has been an integral consideration throughout the development.

    Thus far, we have considered how to achieve principled composable architectures and how to informally maintain the integrity of an architecture and its implementation throughout the system development process, in attempting to develop, configure, and maintain trustworthy systems and networks. In this chapter, we consider assurance aspects associated with the development process and with its artifacts and end products. We seek a collection of assurance techniques and measures of assurance that can be associated with requirements, specifications, architectures, detailed software designs, implementations, maintenance, and operation, as appropriate.

    6.1 Introduction

    Regarding trustworthiness of critical systems, assurance is in the eye of the beholder. However, it is better to depend on systems worthy of being trusted rather than to be beholden to seriously flawed software and unknown components. PGN

    We seek to achieve trustworthy systems and networks, with some demonstrably sound measures of assurance -- that is, rigorously addressing the question of how worthy the intended trustworthiness really is. Measures of assurance can be sought in a variety of ways, throughout the development cycle -- and thereafter as well. For example, they might involve analyses applied to requirements, architectures and detailed system designs of operating system and application software, compilers, hardware, and operational practices. With respect to software developments, thorough formal analyses throughout the development cycle can provide some significant levels of assurance, although less formal techniques such as code inspection, testing, and red-teaming are complementary techniques that can also be very useful. Generally much less satisfying, if not unworthy from a serious assurance point of view, are measures of institutional goodness (as in the Capability Maturity Model) and individual programmer competence (as in certification of software engineers). Overall, no one assurance technique is adequate by itself; each -- including those that are formally based -- has inherent limitations that must be recognized and surmounted.

    Perhaps the most important conclusion of this report in our efforts to attain sound and robust systems and networks is that the assurance associated with trustworthiness must be a pervasive and integral part of the development cycle and the subsequent operational use and long-term evolution of the resulting systems and networks. We repeat this conclusion emphatically, referring to it as the notion of Pervasively Integrated Assurance (PIA). 

    Attaining some nontrivial measures of assurance is seemingly a labor-intensive process, but then so is conventional software development -- including testing, debugging, integration, red-teaming, maintenance, and evolution. Ideally, assurance techniques should be incorporated into existing tools for software and hardware development. Furthermore, new tools for enhancing assurance should also be added to the development process. On the other hand, there are grave dangers in believing in the infallibility of development tools. Once again, we must depend on the intelligence, training, and experience of our system architects, designers, implementers, application system operators, administrators, and -- in many cases -- end users themselves.

    Typically, there are enormous benefits from techniques that can be applied upfront in the development process, such as formal specifications for critical requirements, principled architectures, and formal or semiformal design specifications. It is clearly preferable to prevent flaws early that would otherwise be detected only later on in the development. However, there are many flaws that cannot be detected early -- for example, those introduced during implementation, debugging, and maintenance that can nullify earlier assurance techniques. Consequently, assurance must be a distributed and temporal concept throughout development, maintenance, and operation, where constituent assurance techniques and isolated analyses must themselves be consistent, composable, and carefully coordinated. For example, careful documentation, disciplined development methodologies, coding standards, and thoughtful code inspection all have a place in helping increase assurance -- as well as having secondary effects such as reducing downstream remediation costs, and improving interoperability, system flexibility, and maintainability. However, when it comes to providing meaningful assurance, the usual dictum applies: There are no easy answers.

    6.2 Foundations of Assurance

    "If a program has not been specified, it cannot be incorrect; it can only be surprising." W.D. Young, W.E. Boebert, and R.Y. Kain [386]  

    Several basic issues immediately come to mind in seeking increased assurance. (See also a report by Rushby [326] on providing assurance relating to reliability and safety in airborne systems, whose conclusions are also applicable here.)

    There have been many advances in assurance techniques, and particularly in formal methods, over the past thirty years. However, major successes are still awaited in the fruitful application of these methods. We conclude that, whereas considerable potential remains untapped for formal methods applied to security, we are now actually much closer to realizing that potential than previously. Many of the pieces of the puzzle -- theory, methods, and tools -- are now in place. It is unwise to put all your eggs in one basket (such as testing or penetrate-and-patch efforts). Thus, a more comprehensive combination of approaches is recommended, especially if the desired paradigm shifts are taken and if the considerations of the following section are observed.

    6.3 Approaches to Increasing Assurance

    Providing meaningful assurance of trustworthiness is itself a very complex problem, and needs to be spread out across the development process as well as into operational practice. Various approaches can be used in combination with one another to enhance assurance.

    Judicious use of formalisms and formal methods can add significantly to development and operation, but can also add complexity, delays, and cost overruns if not used wisely. Although formal models and formal specifications may seem to complicate the design process (with delays, increased costs, and greater intellectual demands), they can substantively improve assurance and lead to earlier identification of problems that might otherwise be uncovered only in late stages of the development and use cycles. However, they need to be used with considerable care, primarily where they can accomplish things that design reviews, testing, and operational discipline cannot. Because errors in requirements formulation, design, and specification are perhaps the most difficult and costly to repair, formalisms can be particularly valuable in the early stages of development. Although some readers will consider assurance issues to be pie in the sky and unrealistic from the perspective of increased costs, project delays, and increased needs for education and training, the spectrum of assurance techniques does have something for everyone.

    6.4 Formalizing System Design and Development

    Historically, early examples of the use of formalism in system design and implementation are found in two SRI efforts during the 1970s. These rather early uses of formal methods are revisited here because they represent some significant advances in the ability to analyze systems in the large that seem to have been otherwise ignored in recent years. (Please excuse a little duplication for contextual ease of reading.)

    A general argument against such efforts seems to be that it is too difficult to deal with big-system issues, and much easier to focus on components. However, it is often the analysis of compositions and system integration that in the long run can be most revealing.

    Incidentally, HDM's 1970s ability to analyze vertical compositions of hierarchical abstractions has been incorporated in SRI's PVS (beginning with version 3.0), in the form of PVS theory interpretations [278]. See http://pvs.csl.sri.com for PVS documentation, status, software downloads, FAQ, etc. See also http://fm.csl.sri.com for further background on SRI's formal methods work, including SAL (the Symbolic Analysis Laboratory, which includes three model checkers) and ICS (the Integrated Canonizer and Solver, a decision procedure). Symbolic analysis involves automated deduction on abstract models of systems couched in formal logic, and is the basis for much of CSL's formal methods work.

    Some further early work on formal methods and verification applied to security is summarized in the proceedings of three VERkshops [275, 276, 205], from 1980, 1981, and 1985, respectively. (The titles of all of the papers in those three VERkshop proceedings are given in the appendix of [259].)

    Considerable benefit can accrue from rigorous specifications -- even if they are not formally checked, although the benefit is clearly greater if they are. Specifications of what is and is not needed are generally more succinct than literal descriptions of how something should be implemented. Specifications can provide an abstraction between requirements and code that enables early identification of inconsistencies -- between specifications and requirements, and between code and specifications. Furthermore, specifications can be more readable and understandable than code, especially if they can be shown to mirror the requirements explicitly early in the development process, before any code is written.
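
    As a one-line illustration (invented here, not drawn from any of the cited efforts), a specification states what an operation must achieve without saying how. In Hoare-triple style, with T' denoting the table after the operation, an insertion into a bounded table might be specified as:

        \{\ \neg full(T)\ \}\quad insert(T, x)\quad \{\ x \in T' \ \wedge\ T \subseteq T'\ \}

    Several quite different implementations (a linked list, a hash table, a balanced tree) could satisfy this specification, which is precisely what makes it a useful abstraction between the requirement and the code.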

    The long history of fault-tolerant computing has put significant effort into fault prevention (relative to whatever scope of faults was intended -- from hardware faults to software faults to faults that include security misuse). Clearly, all of those assurance efforts relating to the avoidance of bad designs and bad implementations are relevant here, including the assurance that can result from inherently sound system and network architectures and good software-engineering practice.

    With respect to the importance of programming languages in security, see Drew Dean's paper on The Impact of Programming Language Theory on Computer Security [96]. As a further useful reference, Chander, Dean, and Mitchell [69, 70] have some interesting recent work on formalizing the modeling and analysis of access-control lists, capabilities, and trust management.

    6.5 Implementation Consistency with Design

    The HDM approach noted in Section 6.4 is one methodology in which formal proofs could be carried out demonstrating the consistency of a software component with its formal specifications. The intent is that such proofs would be carried out only after proofs had shown that the specifications were consistent with the stated requirements (possibly subject to certain exceptions that would have to be tolerated or monitored, as in the case of unavoidable covert channels).

    6.6 Static Code Analysis

    The up-front philosophy suggests that discipline embedded in the software development process can have considerable payoff. For example, programming languages that inherently enforce greater discipline would be very beneficial. Compilers and related pre- and post-processor tools that provide rigorous checking would also be useful. However, the integrity that can be provided by the best methodologies, programming languages, and compiler tools is potentially compromisible by the people involved in design and implementation, debugging, integration, maintenance, and evolution.

    Early efforts in the 1970s by Abbott [5] and the ISI team of Bisbey, Carlstedt, Hollingworth, and Popek [44, 45, 46, 67, 68, 165] attempted to identify a few characteristic flaws noted in Section 2.4 and to devise means of detecting their presence in source code. The conclusions at that time were generally rather discouraging, except in very constrained circumstances.

    Contemporary analytic techniques and tools are much more promising. They are particularly appropriate for open-box source code, but they are of course also applicable to closed-box software -- even if only by the proprietors. Examples include the following (among others), with varying degrees of effectiveness and coverage; a simple instance of the kind of flaw several of these tools target is sketched just after the list:
    * Crispin Cowan's StackGuard (http://immunix.org)
    * David Wagner's buffer overflow analyzer (http://www.cs.berkeley.edu/~daw/papers/)
    * @Stake's L0pht security review analyzer slint
    * Cigital's ITS4 function-call analyzer for C and C++ code
    (http://www.cigital.com/its4/)
    * Ken Ashcraft and Dawson Engler's system-specific approach [20]
    * Brian Chess's extended static checking [79]
    * Purify
    * Yuan Yu and Tom Rodeheffer's RaceTrack, for detecting race conditions in multi-threaded code (Microsoft Research)
    * Hao Chen's MOPS (with some assistance from Dave Wagner and Drew Dean, whose earlier joint work [98, 371] provided a starting point); MOPS takes a formally based approach to static code analysis (see Appendix A), in which formal models of undesirable vulnerability characteristics are the basis for formal model checking of the software, thus identifying software flaws.
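
    For concreteness, the following deliberately simple sketch (hypothetical code, not taken from any of the tools' documentation) shows the kind of flaw several of the tools above are designed to flag -- an unchecked copy into a fixed-size stack buffer -- together with a bounded alternative:

        #include <stdio.h>
        #include <string.h>

        void greet_unsafe(const char *name)
        {
            char buf[16];
            strcpy(buf, name);                      /* no bound: overflows if name >= 16 bytes */
            printf("hello, %s\n", buf);
        }

        void greet_safer(const char *name)
        {
            char buf[16];
            snprintf(buf, sizeof buf, "%s", name);  /* bounded copy, always NUL-terminated */
            printf("hello, %s\n", buf);
        }

    Static analyzers such as Wagner's attempt to find the first pattern before the code ever runs; StackGuard-style defenses instead aim to detect the resulting stack corruption at run time.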

    There has also been some effort on formally based testing. (This work is particularly interesting when applied to hardware implementations.) However, the early results of Boyer, Elspas, and Levitt [57] suggest that formal testing is in some sense essentially equivalent to theorem proving in complexity. Nothing since that paper has fundamentally altered their conclusion, although formal derivation of test cases can be extremely effective in increasing the assurance that testing will cover a realistic span of cases. In particular, formal test-case generation has become increasingly popular in the past few years. (As just one example, see [49].)
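
    A tiny sketch of what formal derivation of test cases buys (the function and its acceptance condition are invented for illustration): when the condition is an explicit formula, the complete set of cases falls mechanically out of its truth table, rather than depending on a tester's imagination:

        #include <assert.h>
        #include <stdio.h>

        /* Toy function under test: accept exactly when one, and only one, of
         * "connected" and "address supplied" holds (cf. the UDP discussion in
         * Section 5.5.2). */
        static int accepts(int connected, int addr_supplied)
        {
            return connected != addr_supplied;
        }

        int main(void)
        {
            assert(accepts(1, 0) == 1);   /* connected, no address      */
            assert(accepts(0, 1) == 1);   /* unconnected, address given */
            assert(accepts(1, 1) == 0);   /* both supplied: reject      */
            assert(accepts(0, 0) == 0);   /* neither supplied: reject   */
            puts("all derived cases pass");
            return 0;
        }

    Deriving the cases from the condition itself is what gives confidence that the suite spans a realistic span of cases, rather than only those a tester happened to think of.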

    6.7 Real-Time Code Analysis

    There has been relatively little exploitation of formalism relating to real-time analysis in the past, but this area represents a potentially fertile ground for the future. One example might involve run-time checks derived from formally based analyses of potential vulnerabilities in source code, above and beyond what might take place in a compiler, or in a preprocessor -- such as buffer-overflow checks and Trojan-horse scans that cannot be done prior to execution. Proof-carrying code [250] and checking of cryptographic integrity seals are two specific examples. Many other concepts remain to be considered.

    6.8 Metrics for Assurance

    In order to have any concrete measures of assurance, it is necessary to establish well-defined metrics against which requirements, architectures, specifications, software, tools, and operational practice can be measured. This is a very complicated area. We believe that it is unwise to do research on metrics for the sake of the metrics themselves, although it is important to establish parameterizable metrics with general applicability. The various metrics then need to be tailored specifically to the development stage in which they are invoked, and applied explicitly to those development efforts.

    6.9 Assurance-Based Risk Reduction

    The assurance techniques summarized in the previous sections of this chapter can have significant effects in reducing risks, particularly with respect to the extent to which critical system requirements are likely to be satisfied by system designs and implementations. These techniques may be applicable in many different ways, all of which are potentially relevant here. In particular, analysis at all development stages and all layers of abstraction within a development can contribute. (See Section 6.3.)

    Several examples may help to illustrate how assurance techniques might be applied. In particular, we examine some of the cases summarized in Section 2.2 and Section 5.2.2, and consider what might have been done to prevent the effects that actually resulted. This is intended not as an exercise in hindsight, but rather as an explicit representation of what types of assurance might be applicable in future developments of a similar nature.

    The above illustrative enumeration suggests that, among the wide variety of assurance techniques (some almost obvious, some more subtle), each potential system risk can benefit from the application of some subset of the total collection of approaches to increasing assurance. Establishment of sound requirements, sensible architectures, and good software development practice would undoubtedly avoid many of the problems discussed throughout this report, and could be significantly aided by formal or even semiformal requirements analysis, model-based design, model checking, formal test-case generation, static analysis, and so on. Of course, there is no one size that fits all; the particular techniques must be used in various coherent combinations, according to the circumstances, the development challenges and risks, and the competence of the developers and analysts. Once again, it is clear that there is a significant need for pervasively integrated assurance, throughout development and operation. However, the amount of resources and effort to be devoted to assurance needs to be commensurate with the overall long-term and nonlocal risks. Unfortunately, most risk assessments relating to how much effort to devote to assurance tend to be short-term and local. (The risks of short-sighted optimization are considered further in Section 7.1, and the importance of up-front efforts is discussed in Section 7.2.)

     

    6.10 Conclusions on Assurance

    Opportunities for seriously increasing the assurance associated with software development and system operations are abundant, but largely unfulfilled. Much greater commitment is needed to providing assurance of trustworthiness. Assurance techniques seem to have greater use and greater payoffs in hardware development than in software development, with heavier emphasis on the use of formalisms. However, assurance applied to operational practice lags far behind either hardware or software assurance.

    The potential benefits of formal methods remain undiminished, particularly with respect to hardware and software, but perhaps also integrated into operational practice. The need for formal methods in the specification and analysis of critical systems and system components remains enormous. In the light of past events -- including rampant system flaws and detected vulnerabilities, system failures, experienced penetrations, and flagrant system misuses -- formal methods remain a potentially important part of the system development and assurance process. Their systematic use at appropriate places throughout the system life cycle can be extremely productive, if used wisely.

    Recommendations for future research and development encompassing increased assurance for trustworthy systems and networks are discussed in Chapter 8.

     

    7 Practical Considerations

    Synopsis

    There's many a road 'twixt the need and the code.
    (It's an especially rough road in the absence of requirements, design specifications, careful programming, sensible use of good development tools, documentation, and so on!)

    The previous chapters pursue approaches that have significant potential to enable the development and operation of useful meaningfully trustworthy systems -- if these approaches are applied wisely. This chapter considers various potential obstacles to the application of these approaches, and explores how they might be overcome. Some of the apparent obstacles are merely perceived problems, and can be readily avoided. Other potential obstacles present genuine concerns that can be circumvented with some degree of knowledge, experience, discipline, and commitment.

    In this chapter, we address such topics as how an architecture can accommodate its relevant requirements (including requirements to be able to adapt to changing requirements!); whether inherently robust architectures are possible given today's mainstream hardware platforms and computer-communications infrastructures; the extent to which discipline can be effectively and pervasively introduced into the development process -- for example, through methodologies, programming languages, and supporting tools; the relative effectiveness of various methodologies; problems peculiar to legacy systems; the practical applicability of formal methods; various alternative paradigms; management issues; relevant pros and cons of outsourcing and offshoring; and so on.

    7.1 Risks of Short-Sighted Optimization

    Many people (for example, system procurers, developers, implementers, and managers) continue to ignore the long-term implications of decisions made for short-term gains, often based on overly optimistic or fallacious assumptions. In principle, much greater benefits can result from far-sighted vision based on realistic assumptions. For example, serious environmental effects (including global warming, water and air pollution, pesticide toxicity, and adverse genetic engineering) are generally ignored in pursuit of short-term profits. However, conservation, alternative energy sources, and environmental protection appear more relevant when considered in the context of long-term costs and benefits. Similarly, the long-term consequences of dumbed-down education are typically ignored -- such as diminution of scientific, engineering, and general technical expertise, poor system development practices, and many social consequences such as higher crime rates, increased reliance on incarceration, and so on. Governments tend to be besieged by intense short-sighted lobbying from special-interest groups. Insider financial manipulations have serious long-term economic effects. Research funding has been increasingly focusing on short-term returns, seemingly to the detriment of the future. Overall, short-sightedness is a widespread problem.

    Conventional computer system development is a particularly frustrating example of this problem. Most system developers are unable or unwilling to confront life-cycle issues up front and in the large, although it should by now be obvious to experienced system developers that up-front investments can yield enormous benefits later in the life cycle. As described in earlier chapters, defining requirements carefully and wisely at the beginning of a development effort can greatly enhance the entire subsequent life cycle and reduce its costs. This process should ideally anticipate all essential requirements explicitly, including (for example) security, reliability, scalability, and relevant application-specific needs such as enterprise survivability, evolvability, maintainability, usability, and interoperability. Many such requirements are typically extremely difficult to satisfy once system development is far advanced, unless they have been included in early planning. Furthermore, requirements tend to change; thus, system architectures and interfaces should be designed to be relatively flaw-free and inherently adaptable without introducing further flaws. Insisting on principled software engineering (such as modular abstraction, encapsulation, and type safety), sensible use of sound programming languages, and use of appropriate support tools can significantly reduce the frequency of software bugs. All of these up-front investments can also reduce the subsequent costs of debugging, integration, system administration, and long-term evolution -- if sensibly invoked. (Note that a few of the current crop of software development methodologies do address the entire software life cycle fairly comprehensively, such as the Unified Software Development Process (USDP) [174], whose three basic principles are use-case driven, architecture centric, and iterative and incremental; USDP is based on the Unified Modeling Language (UML).) 

    Although the potential fruitfulness of up-front efforts and long-term optimization is a decades-old concept, a fundamental question remains: Why has the sagest system development wisdom of the past half-century not been more widely and effectively used in practice? Would-be answers are very diverse, but generally unsatisfactory. These concepts are often ignored or poorly observed, for a variety of offered reasons -- such as short-term profitability that ignores the long term; rush to market for competitive reasons; the forcing functions of legacy system compatibility; lack of commitment to quality, because developers can get away with it, and because customers either don't know any better or are not sufficiently organized to demand it; lack of liability concerns, because developers are not accountable (shrink-wrap license agreements typically waive all liability, and in some cases warn against using the product for critical applications); the ability to shift late life-cycle costs to customers; inadequate education, experience, and training; and unwillingness to pursue anything other than seemingly easy answers. Still other reasons are offered as well.

    Overly optimistic development plans that ignore these issues tend to win out over more realistic plans, but can lead to difficulties later on -- for developers, system users, and even innocent bystanders. The annals of the Risks Forum (http://www.risks.org; see [267]) are replete with examples of systems that did not work properly and people who did not perform according to the assumptions embedded in the development and operational life-cycles. (One illustration of this is seen in the mad rush to paperless electronic voting systems with essentially no operational accountability and no real assurances of system integrity.) The lessons of past failures and unresolved problems are widely ignored. Instead, we have a caveat emptor culture, with developers and vendors disclaiming all warranties and liability, and users who are at risk. (In the case of electronic voting systems, the users include election officials and voters.)

    We need better incentives to optimize over the long term (see Section 7.2) and over whole-system contexts (see Section 7.3), with realistic assumptions, appropriate architectural flexibility to adapt to changing requirements (Chapter 4), and sufficient attention paid to assurance (Section 6.9). Achieving this will require some substantive changes in our research and development agendas, our software and system development cultures, our educational programs, our laws, our economies, and our commitments -- and, perhaps most important, well-documented success stories to show the way for others. Particularly in critical applications, if it's not worth doing right, perhaps it's not worth doing at all -- or at least not worth doing without rethinking whatever might be problematic with the requirements, architecture, implementation, and/or operational practice. As an example, the essence of Extreme Programming (Section 2.3.6) seems interesting in achieving working partial systems throughout development, but would be applicable to critical systems only if it converges on products that truly satisfy the critical requirements. Once again, the emphasis must be on having well-defined requirements.

    David Parnas has said, let's not just preach motherhood -- let's teach people how to be good mothers. Indeed, the report you are reading seems to be preaching applicable motherhood. (Although its author wrote in 1969 about the risks of overly narrow optimization and the importance of diligently applying generally accepted motherhood principles [255], the basic problems still remain today.)

    One of the most ambitious efforts currently in progress is the U.S. Department of Defense Global Information Grid (GIG), which envisions a globally interconnected, completely integrated, large-scale, fully interoperable, end-to-end, multilevel-secure networking of computer systems by 2020, capable of providing certain measures of guaranteed services despite malicious adversaries, unintentional human errors, and malfunctions. The planning and development necessary to attain the desired requirements suggest the need for long-term vision, nonlocal optimization, and whole-system perspectives (see Sections 7.1, 7.2, and 7.3, respectively) -- without which we realistically cannot get there from where we are today. The desirability of observing the principled and disciplined developments described in this report becomes almost self-evident, but it is still not easy to satisfy, especially given the desire to use extensive legacy software. However, the Enlightened Architecture concept noted at the end of Section 4.3 is fundamental to the success of any environment with the ambitious goals of the GIG.

    7.2 The Importance of Up-Front Efforts

    Perhaps the most important observation here is that if systems and applications are developed without an up-front commitment to and investment in the principles discussed here, very little that is discussed in this report is likely to be applied effectively. The commitment and investment must be both intellectual and tangible -- in terms of people, funding, and perseverance. Looking at the recommended approaches as an investment is a vital notion, as opposed to merely relying on the expenditure of money as a would-be solution. Admittedly, the long-term arguments for up-front investment are not well understood and not well documented in successful developments -- for example, with respect to the positive return on investment of such efforts compared with the adverse back-end costs of not doing it better in the first place: budget overruns, schedule delays, inadequacy of resulting system behavior, lack of interoperability, and lack of evolvability, to cite just a few deleterious results.

    It would seem self-evident that the long history of system failures suggests the need for some radical changes in the development culture. For example, this report strongly advocates realistically taking advantage of the potential benefits of up-front efforts (e.g., careful a priori establishment of requirements, architectures, and specifications). Certainly, this is not a new message. It was a fundamental part of the Multics development beginning in 1965 [84, 85], and it was fundamental to the PSOS design specifications from 1973 to 1980 [268, 269]. Nevertheless, it is a message that is still valid today, as for example in a new series of articles in IEEE Security & Privacy [228] on building security into the development process, edited by Gary McGraw. Unfortunately, the fact that this is not a new message is in part a condemnation of our education and development processes, and in part a sign that our marketplace is not fulfilling certain fundamental needs.

    A recent global survey of software development practices (Cusumano et al. [90]) strongly supports the wisdom and cost benefits of up-front development. Their survey includes some rather startling conclusions based on a sampling of software projects. For example, detailed design specifications were reportedly used in only 32% of the U.S. projects studied, as opposed to 100% of the projects in India. Furthermore, 100% of the Indian projects reported doing design reviews, and all but one of those projects did code reviews; this was characteristically untrue of the U.S. projects studied. Although it is unwise to draw sweeping generalizations from this survey, the issues considered and the results drawn therefrom are extremely relevant to our report. Besides, if the effectiveness of resulting foreign software developments is actually significantly better, then the rush to outsource software development might in some cases also be motivated by quality considerations, not just cost savings. This has very significant long-term implications -- for the U.S. and for other nations with rapidly developing technology bases.

    7.3 The Importance of Whole-System Perspectives

    If you believe that cryptography is the answer to your problems, then you don't understand cryptography and you don't understand your problems.
    Attributed by Butler Lampson to Roger Needham and by Roger Needham to Butler Lampson

    Unfortunately, up-front effort is not enough by itself. Perhaps equally important is a system-oriented perspective that considers all of the pieces and their interactions in the large, with respect to the necessary requirements. Such a perspective should include (for example) the ability to have an overall conceptual understanding of all relevant requirements and how they relate to particular operational needs; an overall view of the entire development process and how it demands the ability to carry out cyclical iterations; and an overall view of any particular system-network architecture as representing a single virtual system in the large as well as being a composition of systems with predictable properties relating to their interconnections and interoperability. The challenge from the perspective of composability is then to understand the big picture as well as to understand the components and their interrelationships, and to be able to reason from the small to the large -- and from the large to the small. Purely top-down developments are typically limited by inadequate anticipation of the underlying mechanisms, and purely bottom-up developments are typically limited by inadequate anticipation of the big picture.

    There are many would-be short-term "solutions" that emerge in part from the lack of big-picture understanding, but that then take on lives of their own. For example, trusted guards, firewalls, virus checkers, spam filters, and cryptography all have benefits, but also have many problems (some intrinsic, some operational).

    The quote at the beginning of this section is symptomatic of the problem that the best cryptography in the world can still be compromised if it is not properly embedded and properly used. Indeed, this entire section can be summed up by polymorphizing that quote, as symptomatic of the risks of overly simplistic solutions: for many different instantiations of X, if you believe that X is the answer to your problems, then you don't understand X and you don't understand your problems.

    On the other hand, total systems awareness is a very rare phenomenon. It is not taught in most universities. Perhaps systems are considered to be lacking in theory, or uninteresting, or unwieldy, or dirty, or too difficult to teach, or perhaps just frustrating, or a combination of all of these and other excuses. As a result, system-oriented perspectives are slow to find their way into practice.

    As a historical note, Edsger Dijkstra provides an example of a true pioneer who apparently lost interest in trying to deal with the big picture. In his earlier years, he was particularly concerned with the scalability of his elegant analytic methods to larger systems (for example, his work on structured programming [107],   CSP [105], and the THE system [106] noted in previous chapters). Perhaps out of frustration that practitioners were not heeding his advice, he later became increasingly focused only on very elegant small examples (cf. [121]), trying to teach his beliefs from those examples in the hopes that others would try to extrapolate them to systems in the large. The essential thrust of this report is that systems in the large can be effectively developed and analyzed as compositions of smaller components, but only if you can see and comprehend the big picture.

    One of the frequently heard arguments against spending more effort up front and optimizing over a longer term relates to situations in which there has never previously been an attack of such a magnitude that the need for extraordinary actions became totally obvious. This is the mentality that suggests that, because we have not had a Pearl Harbor or 9/11 equivalent in cybersecurity, there is no real urgency to take proactive action against hypothetical possibilities. This mentality is compounded by the use of statistical arguments that attempt to demonstrate that everything is just fine. Unfortunately, events that might occur with very low probability but with extremely serious consequences tend to be very difficult to comprehend. In such cases, quantitative risk assessments are particularly riskful, because of the uncertainty of the assumptions. For example, see Neumann's Computer-Related Risks book [260]. The entire book suggests a much greater need for realistic risk assessments and corresponding proactive actions. More specifically, pages 255-257 of the book provide a discussion of the risks of risk analysis (contributed by Robert N. Charette), and pages 257-259 consider the importance of considering risks in the large.

    7.4 The Development Process

    I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity. Oliver Wendell Holmes 
     

    Returning once again to the Einstein quote at the beginning of Section 2.1, we note that the common tendency to oversimplify complex entities is perverse and usually counterproductive. The ability to clearly represent complexity in a simpler way is an art form, and usually very instructive -- but difficult to achieve.

    This section considers perceived and real difficulties with trying to use the concepts of the previous chapters, relating to requirements, architectures, and implementation. It suggests how the development process can be made much more effective, and how it can give the appearance of local simplicity while dealing with complexity more globally.

    7.4.1 Disciplined Requirements

    Well-understood and well-defined requirements are absolutely vital to any system development, and most particularly to those systems that must satisfy critical requirements such as security, reliability, safety, and survivability. They are also useful in evaluating the effects of would-be subsequent changes. Unfortunately, such requirements are seldom precisely defined a priori. Even more difficult are somewhat more subtle requirements, such as pervasive ease of use, interoperability, maintainability, and long-term evolvability -- of the requirements as well as of the architectures and implementations. Jim Horning suggests that evolvability is to requirements as specification is to code, although at a higher level of abstraction. That is, if you don't delineate the space of possible future changes to requirements, you are likely to wind up with requirements that are as difficult to evolve as is code for which there are no specifications or specifications that do not anticipate change. However, well-understood and well-defined requirements are not common.

    Even less common are explicit requirements for required software engineering sophistication, operational constraints, and specified assurance (such as the EAL levels of the Common Criteria). Requirements engineering should play a more prominent role in computer system development, which would necessarily entail adding discipline to both the process of defining requirements and to the statement of requirements themselves.

    For example, the archives of the Risks Forum are littered with cases attributable to requirements problems that propagated throughout the development process into operational use. (See particularly the items denoted by the descriptor r in the Illustrative Risks compendium index [267]. Noteworthy examples include the Vincennes Aegis system shootdown of an Iranian Airbus, the Patriot missile clock-drift problem, and even the Yorktown Aegis missile cruiser dead in the water. See Section 6.9 for these and other cases.) Many lessons need to be learned from those cases. It is generally agreed that efforts to define and systematically enforce meaningful requirements early in system developments can have enormous practical payoffs; however, there seems to be enormous resistance to carrying that out in practice, because it increases up-front costs and requires greater understanding (as noted in Section 7.1).

    7.4.2 Disciplined Architectures

    The material in the foregoing chapters is basic to sound system architectures for trustworthy systems and their implementation. As a reminder of what we have thus far, Section 2.6 summarizes some of the primary caveats that must be observed in applying the principles of Chapter 2; these principles are not absolute, and must be used intelligently. Chapter 3 discusses constraints on subsystems and other components that can enhance composability, with Section 3.2 outlining obstacles that must be avoided. Chapter 4 considers further directions that can contribute to principled composable architectures. Chapter 5 stresses the importance of interface design. Chapter 6 discusses techniques for achieving higher assurance.

    In this section we consider how to translate the approaches of the previous chapters into architectures that are inherently more likely to lead to trustworthy implementations. For example, realistic architectures should proactively avoid many problems such as the following:

    Topics whose consideration might make critical system developments more realistic include the following.

    From a practical point of view, it may seem unrealistic to expect rigorous specifications -- especially formal specifications -- to be used in developments that are not considered to have critical requirements. However, even the informal English-language specification documents that were required in the Multics development (for example) had a very significant effect on the security, reliability, modular interoperability, and maintainability of the software -- and indeed on the discipline of the implementation.

    7.4.3 Disciplined Implementation

    Technique is a means, not an end, but a means that is indispensable. Maurice Allard, renowned French bassoonist in the Paris Opera from 1949 to 1983

    The best architectures and the best system designs are of little value if they are not properly implemented. Furthermore, properly implemented systems are of little value if they are not properly administered. In each case, "proper" is a term that implies that the relevant requirements are satisfied. Thus, risks abound throughout development and operation. However, the notion of principled composable architectures espoused here can contribute significantly to proper implementation and administration. The notion of stark subsetting discussed in previous chapters can aid significantly in simplifying implementation, configuration, and administration.

    Many security flaws that typically arise in design and/or implementation (such as those enumerated in Section 2.4) lend themselves to exploitation. Indeed, each of the enumerated problem areas tends to represent opportunities for design flaws and for implementation bugs (in hardware as well as in software). Buffer overflows represent just one very common example. For some additional background on buffer overflows and how to prevent them, see the discussion in the Risks Forum, volume 21, numbers 83 through 86, culminating in Earl Boebert's provocative contributions in volume 21, numbers 87 and 89. Boebert refers to Richard Kain's 1988 book on software and hardware architecture [185], which provides considerable discussion of unconventional system architectures for security -- including the need for unconventional hardware platforms. Furthermore, the Multics operating system architecture constructively avoided most stack buffer overflows. The combination of hardware, the PL/I language subset, the language runtime environment, the stack discipline (nonexecutable stack frames; also, the stack grew toward higher addresses, making it unlikely that an overflowed buffer would clobber the return address in the stack frame), and good software engineering discipline helped prevent most buffer overflows in Multics. (See Tom Van Vleck's comments in the Risks Forum, volume 23, issue 20, and a follow-up in issue 22.) For other background, see also Bass [31] for architecture generally, Gong [145, 146] for the Java JDK architecture intended to provide secure virtual machines, and Neumann [264] for survivable architectures.
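
    To make this class of flaw concrete, the following minimal C sketch contrasts an unchecked copy into a fixed-size stack buffer with a bounded alternative. It is purely illustrative and hypothetical -- it is not code from Multics or from any system cited above -- but it suggests why conventional C environments (writable, downward-growing stacks and unchecked library copies) remain so exposed.

        /* Illustrative sketch only: a classic C stack-buffer overflow and a
           bounded alternative.  Hypothetical code, not taken from Multics or
           any system cited in this report. */
        #include <stdio.h>
        #include <string.h>

        /* Unsafe: copies a caller-supplied string into a fixed-size stack
           buffer with no bounds check.  On conventional platforms, a long
           input can overwrite the saved return address. */
        void greet_unsafe(const char *name) {
            char buf[16];
            strcpy(buf, name);                /* no length check */
            printf("Hello, %s\n", buf);
        }

        /* Safer: the copy is explicitly bounded and always NUL-terminated,
           so the buffer cannot be overrun regardless of the input length. */
        void greet_bounded(const char *name) {
            char buf[16];
            strncpy(buf, name, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';
            printf("Hello, %s\n", buf);
        }

        int main(void) {
            const char *long_input = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
            greet_unsafe("world");        /* happens to fit, so this call is safe */
            /* greet_unsafe(long_input);     would smash the stack: undefined behavior */
            greet_bounded(long_input);    /* truncates safely */
            return 0;
        }

    Bounded copying by itself does not make a program trustworthy, but it illustrates how a modest coding discipline (loosely analogous, at the language level, to the protections Multics obtained from its hardware and runtime) removes one well-known avenue of attack.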

    Many implementation issues create serious problems. Establishing sensible policies and sound configurations is an enormously complicated task, and the consequences to security, reliability, functionality, and trustworthiness generally are very difficult to predict. We need better abstractions to control and monitor these policies and configurations, and to understand them better.

    Various popular myths need to be considered and debunked -- for example, the fantasy that a perfect programming language could prevent all security bugs, or that precompilation and postcompilation tools can detect and remove most classes of bugs. In general, for nontrivial programming languages, both claims hold at best for certain types of bugs -- and even the best programmers still seem to be able to write buggy code.
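
    As a hypothetical illustration of why even a "perfect" language cannot prevent security bugs, consider the following small C fragment: it is well typed and passes typical precompile checkers cleanly, yet it is wrong, because the sense of an authorization test is inverted. The function and policy here are invented solely for illustration.

        /* Hypothetical illustration: well-typed, checker-clean C with a logic
           flaw that no compiler or generic static tool can be expected to catch. */
        #include <stdio.h>
        #include <string.h>

        /* Intended policy: anyone may read; only "admin" may write. */
        static int access_allowed(const char *role, const char *operation) {
            if (strcmp(operation, "read") == 0)
                return 1;
            /* Logic flaw: this grants "write" to every role EXCEPT admin --
               the sense of the comparison is inverted. */
            return strcmp(role, "admin") != 0;
        }

        int main(void) {
            printf("guest write: %s\n",
                   access_allowed("guest", "write") ? "ALLOWED" : "denied");
            printf("admin write: %s\n",
                   access_allowed("admin", "write") ? "ALLOWED" : "denied");
            return 0;
        }

    Catching this kind of error requires a statement of the intended policy and analysis or testing against it -- which is precisely this report's argument for well-defined requirements and specifications.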

    7.5 Disciplined Operational Practice

    System programming is like mountain climbing: It's not a good idea to react to surprises by jumping -- that might not improve the situation. Jim Morris

    Principled composable architectures can contribute not only to trustworthy implementation (as noted at the beginning of Section 7.4.3), but also to sound operational practice -- particularly if considerable attention is paid to system interface design that addresses the needs of system administrators. However, for existing (e.g., legacy) systems that have resulted from inadequate attention to human operational interfaces, other approaches must be taken -- even if only better education and training.

    Operational issues represent enormous potential problems, such as considerable operational costs, shortages of readily available in-house staff, risks of excessive complexity, poorly defined human interfaces, and systems that demand ever-present system administrators -- especially in crisis situations. This last concern may be escalated by increasing pressures to outsource operations and administration personnel.

    One concept that in principle would greatly improve operational practice and operational assurance would be the notion of automatic recovery, mentioned in Section 4.2. The ability to recover from most deleterious state-altering events (whether malicious or accidental) without human intervention would be an enormous benefit. Autorecovery requirements have serious implications for system architectures, and would be greatly simplified by the principle of minimizing the need for trustworthiness. Assurance associated with that recovery (e.g., based on the soundness of the architecture itself and on real-time revalidation of the soundness of the system state) would also be valuable. However, making autonomic systems realistic will require further research and extremely disciplined development.
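
    The flavor of such automatic recovery can be suggested by the following toy C sketch, under the simplifying assumption that soundness of the state can be checked by a simple integrity test (real systems would revalidate far more). The monitor detects a corruption and rolls back to the last known-good snapshot without human intervention; all names and data structures here are hypothetical.

        /* Toy sketch of automatic recovery: a monitor revalidates state and
           rolls back to the last known-good snapshot when validation fails. */
        #include <stdio.h>

        struct state {
            int config[4];          /* stand-in for configuration data */
            unsigned checksum;      /* integrity check over config */
        };

        static unsigned compute_checksum(const struct state *s) {
            unsigned sum = 0;
            for (int i = 0; i < 4; i++)
                sum = sum * 31u + (unsigned)s->config[i];
            return sum;
        }

        static int state_is_sound(const struct state *s) {
            return compute_checksum(s) == s->checksum;
        }

        int main(void) {
            struct state current = { {1, 2, 3, 4}, 0 };
            current.checksum = compute_checksum(&current);
            struct state snapshot = current;          /* last known-good state */

            for (int step = 0; step < 3; step++) {
                if (step == 1)
                    current.config[2] = 99;           /* simulate accidental or malicious corruption */

                if (!state_is_sound(&current)) {
                    printf("step %d: validation failed; rolling back\n", step);
                    current = snapshot;               /* recover with no human intervention */
                } else {
                    printf("step %d: state validated\n", step);
                    snapshot = current;               /* refresh the known-good snapshot */
                }
            }
            return 0;
        }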

    7.5.1 Today's Overreliance on Patch Management

    Dilbert: We still have too many software faults. We'll miss our ship date.
    Pointy-Haired Manager: Move the list of faults to the "future development" column and ship it.
    PHM, aside: 90% of this job is figuring out what to call stuff.
    Scott Adams, three-panel Dilbert comic strip, 4 May 2004

    Mass-market software as delivered in the past and in the present tends to have many flaws that can compromise the trustworthiness of systems, networks, and applications. As a result, system purveyors and system administrators are heavily dependent on patch management -- that is, developers must continually identify vulnerabilities, create would-be fixes, test them, make those fixes available, and hope that further flaws will not be created thereby. Operational installations must install the patches in the correct order in a timely fashion, at the risk of breaking or otherwise affecting existing applications.

    Patch management is an example of a slippery-slope rathole. Systems should be designed much more carefully and implemented with much greater care and attention to good software engineering practice, easily usable operational and system administration interfaces, and composable upgrade procedures that are integral to the architecture, applications, and user software. Better design and implementation must also be coupled with comprehensive testing, evaluations, and other analyses (including advanced tools to detect serious vulnerabilities); developers should do this before release, rather than simply foisting buggy software on unsuspecting customers who then become the beta testers. However, in the commercial rush to marketplace, essentially none of this happens. Thus, pouring palliative efforts into improving patch management completely misses the much more fundamental point that patches should ideally be minimized through better design and implementation, so that they become rare exceptions rather than frequent necessities.

    Putting the burden on patch management is somewhat akin to believing in better management of fixed reusable passwords -- that is, believing that merely increasing password length, including nonalphabetic characters, and changing passwords often will improve authentication. Such simplistic approaches totally ignore the risks of fixed passwords that transit networks unencrypted or are otherwise exposed, as well as the risks of exploitable vulnerabilities that allow the password system to be bypassed altogether. A better solution is of course not to rely on conventional fixed passwords as the primary means of authentication, but instead to move to trustworthy systems and trustworthy networking, with cryptographically protected tokens or smartcards used within the context of trustworthy systems, combined with layered protection, separation of privileges, and judicious observance of the applicable principles noted in Chapter 2 -- plus a much greater commitment to system security and reliability throughout development and operation.

    Although it may be a necessary evil, dependence on patch management as a major component of security defenses seems too much like micromanaging the rearranging of deckchairs on the Titanic. The barn door is already wide open, and the barn is empty of more fundamental ideas. See [375] for another view of patch management.

    7.5.2 Architecturally Motivated System Administration

    Clearly alternative approaches are needed that simplify system administration and minimize the downsides of patch management. Perhaps we need not just better software engineering in general, but also a methodology that encompasses "design for patching" when "design for avoiding patches" fails -- just as hardware vendors have moved to "design for test" and "design for verification" methodologies. Design for patching should encompass system architecture (e.g., modularity and encapsulation) as well as operational characteristics (e.g., bilateral trusted paths for upgrades). Inherently sound architectures can minimize the need for patching -- as for example in carefully designed autonomic systems and fault-tolerant systems that anticipate the need for rollback, hot standbys, or other alternative measures in response to detected anomalies. Greater attention to the human interfaces (see Chapter 5 and the next section) is also essential.

    According to some reports, patch management is on the order of a $5 billion per year problem. It is probably responsible for much more than that if hidden costs are included, such as downtime and lost work resulting from failed patches. Jim Horning notes that all automobile drivers once had to know how to patch an inner tube (or at least how to change a tire in order to drive someplace and get one patched). Today inner tubes are gone, and we go years between flat tires. That seems preferable to a highly efficient patching system.

     

    7.6 Practical Priorities for Perspicuity

    Returning to the notion of perspicuous interfaces considered in Chapter 5, this section considers some of the practical issues relating to interface design. Given the range of material addressed in this report, one important question remains: Where are the biggest potential payoffs, and what priorities should be allocated to possible future efforts, with respect to dramatically increasing the understandability of systems and their interfaces -- especially under crisis conditions? The same question also applies to subsystem interfaces that may be invisible to end users but vital to application developers, integrators, and debuggers. It is important to note that good interface design is essential not only to human users, but also internally to systems themselves -- especially autonomic systems.

    One of the most important challenges relates to the roles that administrators play in configuring and maintaining operating systems, application software, networks, control systems, and so on. Even with the hoped-for advent of autonomic systems and networks, significant burdens will rest on admins when something fails or is under attack. Thus, perspicuity for admins must be a high-priority concern. This concern must be twofold: (1) System interfaces must be better designed with admins in mind. (2) Analysis tools must greatly facilitate the critical roles of admins. The potential payoffs for better perspicuity for admins are enormous, in terms of reducing operational costs, increasing speed of remediation, minimizing dependence on critical human resources, increasing job satisfaction, and -- above all -- improving system security and survivability.

    A second challenge has to do with dealing with legacy systems that were not designed with adequate security, reliability, robustness, and interface perspicuity, and that therefore cannot easily be retrofitted with such facilities. This is an unfortunate consequence of many factors, including the inability of the marketplace to drive needed progress, generally suboptimal software development practices, and constraints inherent in closed-source proprietary software -- such as a desire on the part of system developers to keep internal interfaces hidden, making it more difficult for competitors to build compatible applications. In this situation, much more perceptive analysis methods and tools are needed, although those tools would be applicable to closed-source as well as source-available software. To the extent that analysis tools can be applied to available source code (whether proprietary or not) rather than object code, they are likely to be more effective.

    A third challenge is that, whichever approaches are taken, they must include criteria and techniques for measuring and evaluating their effectiveness. This again suggests the need for better analysis methods, but in the long run also necessitates system developments that anticipate the needs of improved measurability of success.

    Thus, our suggestions for realistic priorities are as follows, in several dimensions:

    Prioritized Approaches for Achieving Greater Perspicuity

    1. A combination of constructive interface design and analysis tools for newly developed software, recognizing a leverage advantage for available source code (the most effective alternative)
    2. Analysis tools that rely on system source code to enhance interface perspicuity for legacy systems, but not on any substantially new or modified interfaces (an intermediate alternative)
    3. Analysis tools that are restricted to see only object code to enhance interface perspicuity (a less desirable and often less effective alternative, although possibly useful when source code is not available)

    Prioritized Human Targets for Enhanced Perspicuity

    1. System developers, debuggers, and integrators (with highest payoff)
    2. System administrators (with very high payoff)
    3. Conventional application developers (with very high payoff) and users (with considerable payoff)

    Potential System Targets for Enhancing Perspicuity

    1. Linux or one of the BSD operating systems (attractive because of availability of source code)
    2. TCP/IP related behavior (complex, but potentially very useful)
    3. A realistic multilevel security system (less accessible, but with considerable potential use)

    7.7 Assurance Throughout Development

    The whole is greater than the sum of its parts. This can be true particularly in the presence of effort devoted to sensible architectures, interface design, principled development, pervasive attention to assurance, and generally wise adherence to the contents of this report. In this case, emergent properties tend to be positive, providing evidence of trustworthiness.

    The whole is significantly less than the sum of its parts. This can be true whenever there is inadequate attention devoted to architecture, interface design, principled development, assurance, or foresight -- for example, resulting in serious integration difficulties and the lack of interoperability, delays, cost overruns, design flaws, implementation bugs, overly complex operations, deadly embraces, race conditions, hazards, inadequate security and reliability, and so on. In this case, emergent properties tend to be negative, providing evidence of untrustworthiness.

    This section reassesses the approaches of Chapter 6 with respect to the practical thrust of the present chapter. In particular, Section 7.7.1 considers assurance related to the establishment and analysis of requirements. Section 7.7.2 reconsiders assurance related to system development, for example, potentially fruitful techniques for assuring the consistency of software and hardware with their respective specifications (Section 6.5). Section 7.8 then considers the practicality of assurance techniques applied to operational practice.

    7.7.1 Disciplined Analysis of Requirements

    It is a common misconception that establishing requirements carefully is generally not worth the effort. Further evidence would be useful in dispelling that myth, especially concerning formal requirements and formal analyses thereof, and particularly in cases of critical systems and outsourcing/offshoring of software development (see Section 7.10.2).

    From a practical point of view, it is immediately obvious that the disciplined use of formal or semiformal analysis methods and supporting tools would have a significant up-front effect that would greatly reduce the subsequent costs of software development, debugging, integration, and continual upgrades. There is a slowly growing literature of such approaches, although there are still relatively few demonstrated successes. One example is provided by the use of formal methods for NASA Space Shuttle requirements [109] -- where the mission is critical and the implications of failure are considerable.

    7.7.2 Disciplined Analysis of Design and Implementation

    Existing analysis techniques and supporting tools for system architectures and for software and hardware implementations tend to be fairly narrowly focused -- for example, on specific attributes, on certain types of design flaws, on specific classes of source-code and object-code bugs (e.g., the U.C. Berkeley MOPS analyzer, Purify, trace tools, debuggers), on security vulnerabilities (e.g., attack-graph analysis using symbolic model checking [180, 352]), and on hardware mask-layout properties. Most of these approaches are limited to static analysis -- although they may sometimes be helpful in understanding dynamic problems.

    One of the most important problems raised in this report is the ability to determine analytically the extent to which systems, modules, and other components can be composed -- that is, identifying all possible deleterious interactions. As discussed in Section 6.2, providing a set of analytic tools to support the practical analysis of the composability of requirements, specifications, protocols, and subsystems would be extremely valuable. For example, analysis should consider the interference effects of improper compositions, or else demonstrate the invariance of basic properties and the appropriateness of emergent properties under composition. 

    Static checking tools along the lines of lint, splint, ESC, Aspect, and Alloy [173] (and, in general, what are referred to as "80/20 verifiers") can be extremely helpful. However, they should never be assumed to be infallible or complete. Although all low-hanging fruit should certainly be harvested, what is harder to reach may have even more devastating effects.
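
    As a reminder of what such low-hanging fruit looks like, the following short C fragment (written for illustration; its defects are intentional) contains three errors of the kind that lint- and splint-class checkers routinely flag. The deeper flaws discussed elsewhere in this report -- improper composition, missing access checks, flawed requirements -- are precisely what such tools do not see.

        /* Illustration only: three intentional defects of the kind that
           lint/splint-class checkers routinely flag. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char *duplicate(const char *src) {
            char *copy = malloc(strlen(src) + 1);
            /* Defect 1: malloc's result is not checked; strcpy through a
               NULL pointer is undefined behavior. */
            strcpy(copy, src);
            return copy;
        }

        int main(int argc, char *argv[]) {
            int count;                         /* Defect 2: may be read uninitialized */
            if (argc > 1)
                count = atoi(argv[1]);
            printf("count = %d\n", count);

            char *name = duplicate(argc > 2 ? argv[2] : "anonymous");
            printf(name);                      /* Defect 3: user-controlled format string */
            printf("\n");
            free(name);
            /* Fixes: test malloc's result, initialize count, and use a
               constant format string such as "%s". */
            return 0;
        }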

    A set of tools for the analysis of safety specifications [311] has been sponsored by NASA Langley, and is also worth considering -- not only for safety, but for its potential application to other elements of trustworthiness.

    7.8 Assurance in Operational Practice

    Operational practice -- for example, system administration, routine maintenance, and long-term system evolution -- represents an area in which assurance techniques have not been used much in the past. There are various approaches that might be taken, some fairly ad hoc, and some formally based.  

    In addition, there are also many system architectural concepts that can contribute to assurance aspects of operations.

    Significant effort is needed to harness existing analysis tools and to pursue new analysis techniques and tools that support dynamic understanding of systems in execution. For example, such effort would be valuable in responding to anomalous real-time system behavior and in evaluating the likely effects of possible system changes, particularly regarding flawed systems and complications in operation and administration.

    Configuring security policies into applicable mechanisms is a particularly important problem. To this end, Deborah Shands et al. are developing the SPiCE translation system [350] at McAfee Research. SPiCE automatically translates high-level access policies into configurations of low-level enforcers.
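
    The general flavor of such policy translation can be suggested by the following deliberately simplified C sketch, which expands a small table of high-level role-based rules into per-user entries for a low-level enforcer. It is a hypothetical illustration of the general idea only; it does not represent the SPiCE policy language, format, or implementation.

        /* Hypothetical sketch of policy translation in general: high-level
           role-based rules are expanded into per-user enforcement entries. */
        #include <stdio.h>
        #include <string.h>

        struct policy_rule {                 /* high-level statement */
            const char *role;
            const char *resource;
            const char *permission;          /* "read" or "write" */
        };

        struct user_binding {                /* who currently holds which role */
            const char *user;
            const char *role;
        };

        int main(void) {
            struct policy_rule policy[] = {
                { "payroll-clerk", "/data/payroll", "read"  },
                { "payroll-admin", "/data/payroll", "write" },
            };
            struct user_binding users[] = {
                { "alice", "payroll-clerk" },
                { "bob",   "payroll-admin" },
            };

            /* Expand each high-level rule into an ACL-like entry that a
               low-level enforcer could consume (here, simply printed). */
            for (size_t i = 0; i < sizeof(policy) / sizeof(policy[0]); i++)
                for (size_t j = 0; j < sizeof(users) / sizeof(users[0]); j++)
                    if (strcmp(policy[i].role, users[j].role) == 0)
                        printf("allow %s %s on %s\n", users[j].user,
                               policy[i].permission, policy[i].resource);
            return 0;
        }

    The hard problems, of course, lie in the richness of realistic policies and in assuring that the translation preserves their intent.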

    7.9 Certification

    Cer·ti·tude: the state of being or feeling certain;
    Rec·ti·tude: correctness of judgment or procedure

    (Abstracted from Webster's International Dictionary)

    Certification is generally considered as the process of applying a kind of blessing to a system or application, implying some kind of seal of approval. The meaning of that certification varies wildly from one environment to another, as noted in the following two paragraphs (which are adapted from [262], along with the definitions noted above).

    There is a fundamental difference between certification (which is intended to give you the feeling that someone or something is doing the right thing) and trustworthiness (for which you would need to have some well-founded reasons for trusting that someone or something is doing the right thing -- always interpreted with respect to appropriate definitions of what is right). Certification is typically nowhere near enough; an estimate of trustworthiness is somewhat closer to what is needed, although ideal trustworthiness is generally unattainable in the large -- that is, with respect to the entire system in operation. Formal demonstrations that something is consistent with expectations are potentially much more valuable than loosely based certification. (Recall the discussion of consistency versus correctness in Section 6.2.) So, a challenge confronting us here is to endow the process and the meaning of certification -- of systems and possibly of people (see below) -- with a greater sense of rigor and credibility.

    Numerous system failures (e.g., [260]) demonstrate the vital importance of people. Many cases are clearly attributable to human shortsightedness, incompetence, ignorance, carelessness, or other foibles. Ironically, accidents resulting from badly designed human interfaces are often blamed on operators (e.g., pilots, system administrators, and users) rather than developers. Unfortunately, software engineering as practiced in much of the world is merely a buzzword rather than an engineering profession [288, 289]. This is particularly painful with respect to systems with life-critical, mission-critical, or otherwise stringent requirements. Consequently, some of the alternatives discussed in this report deserve extensive exploration, such as these:

    Software certification is a slippery slope that can raise false hopes. However, its usefulness can be greatly enhanced if all of the following are present: (a) well-defined detailed requirements; (b) architectures that anticipate the full set of requirements and that can be predictably composed out of well-conceived subsystems; (c) highly principled development techniques, including good software engineering discipline, serious observance of important principles such as layered abstraction with encapsulation and least privilege, use of defensive analytic tools, and so on; (d) judiciously applied assurance measures, pervasively invoked throughout development and evolution, including formal methods where applicable and effective; and (e) meaningful evaluations, such as of consistency between specifications and requirements, consistency between software and specifications, and dynamic operational sanity checks. In this way, certification might have some real value. However, in practice, certification falls far short of implying trustworthiness.

    One horrible example of the inadequacy of certification in practice is provided by the currently marketed fully electronic voting machines without a voter-verified audit trail (for example, a paper record of the ballot choices, which remains within the system and is not kept by the voter); all of today's all-electronic paperless voting machines lack any meaningful trustworthiness with respect to system integrity, accountability, auditability, or assurance that your vote goes in correctly. These proprietary closed-source systems are certified against very weak voluntary criteria by a closed process that is funded by the developers. In addition, recent disclosures demonstrate that software used in the 2002 and 2003 elections was not the software that was certified; in many cases, potentially significant changes were introduced subsequent to certification.

    However, simplistic strategies for institutional certification (such as the Capability Maturity Model) and personnel certification (such as the Certified Information Systems Security Professional -- CISSP -- examination and personal designation) are also slippery slopes. Reviews by Rob Slade of numerous books on the limitations of the CISSP exam can be found in the Risks Forum at http://www.risks.org; for example, see volume 21, issues 79 and 90, and volume 22, issues 08, 10, 36, 49, and 57 (the last of these covering four different books!). (Note: The Risks Forum moderator stopped running Slade's reviews on this subject after all of the above-mentioned books seemed to have similar flaws reflecting difficulties inherent in the CISSP process itself; there are many other books on CISSP besides these.)

    Although there is some merit in raising the bar, unmitigated belief in these simplistic approaches is likely to induce a false sense of security -- particularly in the absence of principled development and operation. In the case of the CMM, the highest-rated institutions can still develop very bad systems. In the case of the CISSP, the most experienced programmers can write bad code, and sometimes the least experienced programmers can write good code.

    7.10 Management Practice

    7.10.1 Leadership Issues

    Some of the biggest practical problems relate to the role of Chief Information Officers (CIOs) in corporations, and of their equivalents in government institutions. (Note: There is still no Federal CIO for the U.S. Government, which is increasingly causing certain problems.)

    CIOs are generally business driven, with cost often considered to be the primary, secondary, and tertiary motivating force. The advice of Chief Technology Officers (CTOs) is often treated as close to irrelevant. The business issues generally motivate everything, and may override sound technological arguments. This has some unfortunate effects on the development and procurement of trustworthy systems and networks, which tend to be reinforced by short-sighted optimization and bottom-up implementations.

    7.10.2 Pros and Cons of Outsourcing

    Outsourcing is a real double-edged sword, with many benefits and risks, and with many problems that result from trying to optimize costs and productivity -- both in the short term and in the long term (e.g., as suggested by the last paragraph of Section 7.2). It is seemingly particularly cost-advantageous where cheaper labor can be effectively employed without adverse consequences -- for example, for software development, hardware fabrication, operations and administration, maintenance, documentation, business process work, and other labor-intensive services (such as call centers). However, there are many hidden costs; indeed, several recent studies suggest that the case for overall cost savings is much less clear-cut. Furthermore, other considerations may also be important, such as the ability to innovate compatibly, integrated workforce development, planning, coordination, intellectual property, security, and privacy. These tend to be less tangible and less easily represented in cost metrics.

    From the perspective of a would-be controlling enterprise, we consider two orthogonal dimensions that relate to the extent of outsourcing and offshoring. Outsourcing typically involves contracts, subcontracts, or other forms of agreements for work performed by entities outside of the immediate controlling enterprise. Offshoring involves some degree of work performed by nondomestic organizational entities such as foreign subsidiaries, foreign companies, or foreign individuals. Thus, we can have widely varying degrees of both outsourcing and offshoring, with a wide range of hybrid strategies. The situation is simplified here by considering four basic cases:


    Table 4: Pros and Cons of Outsourcing

    DI: Domestic In-House Control
    Pros:
    - Closer access to business knowledge
    - Tighter reins on intellectual property
    - Tighter control of employees and development efforts
    Cons:
    - U.S. education often inadequate for system engineering, security, reliability, and trustworthiness
    - Bad Government records in managing developments (large corporations are sometimes not much better!)

    DO: Domestic Outsourcing
    Pros:
    - Resource balancing
    - Potential cost savings, particularly for labor
    - Offloading less desirable jobs
    Cons:
    - Loss of business sense
    - Increased burden on contracting
    - Potential loss of control
    - Bad records in managing contracted procurements
    - Possible hidden offshore subcontracts (as in the ATC Y2K remediation)
    - Greater security/privacy concerns

    FI: Foreign Subsidiaries
    Pros:
    - Potential cost savings (esp. labor)
    - In-house control largely retained
    - Resource/labor balancing
    - Choices exist for well-educated and disciplined labor
    - Up-front emphasis on requirements/specs can increase product quality
    Cons:
    - Some loss of direct control
    - More difficult to change requirements/specs/code/operations
    - More risks of Trojan horses
    - Possible language problems
    - Hidden long-term costs
    - Domestic job losses
    - Loss of GNP and tax revenues
    - Foreign laws generally apply, in addition to domestic laws
    - Potential political risks
    - Privacy problems and other risks
    - Risks of hidden subcontracts
    - Some intellectual property concerns

    FO: Foreign (Offshore) Outsourcing
    Pros:
    - Potential cost savings (esp. labor), at least in the short term
    - Resource/labor balancing
    - Potential pockets of good education and disciplined labor in some cases
    - Up-front emphasis on requirements/specs can increase product quality
    Cons:
    - Considerable loss of direct control
    - Even more difficult to change requirements/specs/code/operations
    - Greater risks of Trojan horses
    - Possibly severer language problems
    - Possibly more hidden long-term costs
    - Domestic job losses
    - Loss of GNP and tax revenues
    - Foreign laws may cause conflicts with domestic laws
    - Greater potential political risks
    - More privacy problems and other risks
    - Further loss of control of subcontracts
    - Intellectual property control degradation
    - Hidden indirected (nth-party) outsourcing

    Table 4 outlines some of the issues that arise in each of these four cases. The "I" cases (DI and FI) involve in-house top-level control, whereas the "O" cases (DO and FO) involve some degree of outsourcing. The "D" cases (DI and DO) represent wholly domestic efforts, whereas the "F" cases (FI and FO) involve some degree of foreign offshoring.

    The pros and cons summarized in the table are intended to be suggestive of concerns that should be raised before engaging in outsourcing and/or offshoring, rather than being dichotomous black-and-white alternatives. Indeed, the pros and cons for all quadrants other than the upper left tend to vary depending on the degree of outsourcing and/or offshoring, as well as such factors as relative physical locations, ease of communications, language barriers, standard-of-living differentials, job marketplaces, government regulation, and so on. Even the upper-left quadrant has variations that depend on management strength, centralization versus distributed control, employee abilities, and so on.

    Several conclusions are suggested by the table.

     

    7.11 A Forward-Looking Retrospective

    Pandora's cat is out of the barn, and the genie won't go back in the closet.
    Peter G. Neumann

    In the same way in which the quote at the beginning of Section 7.3 can be parameterized to apply to many narrow would-be "solutions" for complex problems, the above polymorphic Pandoran multiply-mixed metaphor can be variously applied to cryptography, export controls, viruses, spam, terrorism, outsourcing, and many other issues.

    Over the past forty years, many important research and development results have been specifically aimed at achieving trustworthy systems and networks. However, from the perspective of applications and enterprises in need of high trustworthiness, those results have mostly not been finding their way into commercial developments. Reasons given variously include increased development costs, longer delays in development, extreme complexity of adding significant levels of assurance, lack of customer interest, and so on. Perhaps even more important are factors such as the inadequacy of educational curricula and training programs that minimize or ignore altogether such issues as highly principled system engineering and system development, software engineering, system architecture, security, reliability, safety, survivability, formal methods, and so on. A lack of knowledge and experience among educators fosters a similar lack in their students, and is particularly riskful when also found among managers, contracting agents, legislators, and system developers. Perhaps the most important challenge raised by this report is finding ways of bringing the concepts discussed here realistically and practically into mainstream developments.

    A strong sense of history is not inconsequential, particularly in understanding how badly computer software development has slid down a slippery slope away from perspicuity. Much of the work done in the 1960s to 1980s still has great relevance today, although that work is largely ignored by commercial developments and by quite a few contemporary researchers.

    Hence, we conclude that greater awareness of that earlier work on trustworthiness would be of considerable benefit to developers today.

    Voltaire's famous quotation (see Dictionnaire Philosophique: Art Dramatique), "Le mieux est l'ennemi du bien," is customarily translated as "The best is the enemy of the good." (However, the French language uses mieux for both of the corresponding English words, best and better; thus, in a choice between just two alternatives, a correct English translation might be "The better is the enemy of the good.") This quotation is often popularly cited as a justification for avoiding attempts to create trustworthy systems. However, that reasoning seems to represent another nasty slippery slope. Whenever what is accepted as merely good is in reality not good enough, the situation may be untenable. Realistically speaking, the best we can do is seldom close to the theoretical best. Perfect security and perfect reliability are inherently unattainable in the real world, although they can occasionally be postulated in the abstract under very tightly constrained environments in which all possible threats can be completely enumerated and prevented (which is almost always unrealistic), or else simply assumed out of existence (as in supposedly perfect cryptographic solutions that are implemented on top of an unsecure operating system, through which the integrity of those solutions can be completely compromised from below). Thus, we come full circle back to the definition of trustworthiness in the abstract at the beginning of this report. In critical applications, the generally accepted "good" may well be nowhere near good enough, and "better" is certainly not the enemy. Regarding such short-sighted thinking, we quote Walt Kelly's Pogo: "We have met the enemy, and he is us."

    The need for Information Assurance in the Global Information Grid (GIG) (noted at the end of Section 7.1) -- for example, see [52] -- provides a fascinating example of an environment with a very large collection of critical needs, and extremely difficult challenges for the long-term development of an enormous extensively interoperable trustworthy network environment that far transcends today's Internet. Considerable effort remains to flesh out the GIG requirements and architectural concepts. The principled and disciplined approach of this report would seem to be highly relevant to the GIG effort.

     

    8 Recommendations for the Future

    The future isn't what it used to be. Arthur Clarke 

    8.1 Introduction

    In this chapter, we consider some potentially important areas for future research, development, and system operation, with direct relevance to CHATS-like efforts, to DoD more broadly, and to various information-system communities at large. The recommendations concern the critical needs for high-assurance trustworthy systems, networks, and distributed application environments that can be substantially more secure, more reliable, and more survivable than those that are commercially available today or that are likely to become available in the foreseeable future (given the present trajectories).

    One of the biggest challenges results from the reality that the best R&D efforts have been very slow to find their way into commercial practice and into production systems. Unfortunately, corporate demands for short-term profits seem to have stifled progress in trustworthiness, in favor of rush-to-market featuritis. Furthermore, government incentives for commercial development have been of limited help, although research funding has been a very important contributor to the potential state of the art. We need to find ways to improve that unfortunate history.

    8.2 General R&D Recommendations

    The whole of science is nothing more than the refinement of everyday thinking. Albert Einstein,  Ideas and Opinions, page 290

    This section provides a collection of broad recommendations for future R&D applicable to the development, operation, maintenance, and evolution of trustworthy systems and networks, relating to composability, assurance, system architectures, software engineering practice, and metrics. It also addresses the use of formal methods applicable to system and network architectures intended to satisfy critical security requirements. These recommendations take an overall system approach, and typically have both short-term and long-term manifestations. Each recommendation would benefit considerably from observance of the previous chapters.

    8.3 Some Specific Recommendations

    The issues discussed in Section 6.2 and the general recommendations of Section 8.2 suggest various opportunities for the future. Each typically has both short-term and long-term implications, although some require greater vision and farsight than others. The typical myopia of short-term efforts almost always seems to hinder long-term evolvability, as discussed in Section 7.1. Incremental attempts to make relatively small changes to systems that are already poorly designed are less likely to converge acceptably in the long term.

    We suggest in this report that considerable gains can be achieved by taking a fresh view of secure-system architectures, while also applying formal methods selectively -- particularly up front, where the potential payoffs are thought to be greatest. We also suggest that the choices of methodologies, formal methods, and languages are important, but somewhat less so than the architectures and the emphasis on up-front uses of common sense, knowledge, experience, and even formal methods, if appropriate. However, there is still much worthwhile long-term research to be done, particularly where it can reduce the risks of system developments (which include cost overruns, schedule slippages, and in some cases total project failure) and increase the overall chances of success.

    In those efforts, an appropriate mix of experience is recommended -- spanning systems, development, formal methods, and analysis tools.

    8.4 Architectures with Perspicuous Interfaces

    Section 5.4.5 lists several gaps in existing analysis techniques and tools. Each of those gaps suggests various research and development areas in which new techniques and significant enhancements of existing techniques could advantageously be pursued, developed, and explored experimentally, with the goal of achieving analysis aids that can significantly improve the understandability of interfaces and system behavior.

    Fundamentally, a combination of techniques is essential, encompassing better system architectures, better use of good software engineering practice, better choices of programming languages, better analysis techniques for identifying deficient perspicuity, a willingness to iterate through the development cycle to improve perspicuity, and greater discipline throughout development and operation. Here are a few specific suggestions.

    8.5 Other Recommendations

    The recommendations of the previous sections focus primarily on research and development with potential impact on the development, operation, and maintenance of trustworthy systems and networks. Some other issues with less R&D content could also be extremely effective.

    All in all, the existence of systems and networks that are inherently more trustworthy -- through sound architectures, better development practice, and other approaches discussed in this report -- would greatly simplify the vicissitudes of system operation and administration. By reducing the labor-intensive efforts required today, we could thereby greatly improve the overall trustworthiness of our operational systems and networks.

    A very useful recent assessment of future research directions [339] has been compiled for DARPA by Fred Schneider on behalf of Jay Lala, as a set of nicely annotated slides. It provides a complementary view to that presented here, although there are (not surprisingly) many areas of overlap. In particular, it outlines several approaches to robustness: runtime diversity (as opposed to computational monocultures), scalable redundancy (especially asynchronous), self-stabilization, and natural inherent robustness (as is found in various biological metaphors).

    9 Conclusions

    The merit of virtue is consolidated in action. Cicero 

    9.1 Summary of This Report

    This report addresses the main elements of the DARPA CHATS program -- composable high-assurance trustworthy systems -- with emphasis on providing a fundamental basis for new and ongoing developments having stringent requirements for trustworthiness. We believe that significant benefits can result from emphasizing the importance of the judicious use of principles, the fundamental need for inherently composable architectures as the basis for development, and the underlying need for a highly principled development process. We believe that principled development can also contribute to improved operation, maintenance, and long-term evolution. However, these benefits depend largely on the education, training, and experience of the practitioners involved, and on a continuing flow of relevant research and development that is suitably motivated by well-defined and realistic requirements.

    9.2 Summary of R&D Recommendations

    If the road to hell is paved with good intentions, then by duality, the road to heaven must be paved with bad intentions. However, the road to good systems development and good management practice is evidently infinitely precarious, no matter which route is taken. PGN

    Chapter 8 summarizes some of the potentially far-reaching areas for future R&D, at a relatively high layer of abstraction. Of these recommendations, the most important are perhaps the following.

    9.3 Risks

    "The essence of risk management lies in maximizing the areas where we have some control over the outcome while minimizing the areas in which we have absolutely no control over the outcome and the linkage between effect and cause is hidden from us." Peter L. Bernstein [38], p. 197

    There are many risks that need to be considered. Some risks are intrinsic in the development process, while others arise during operation and administration. Some relate to technology, whereas many others arise as a result of human action or inaction, and even environmental causes in some cases. Some involve risks of systems that fail to do what they are expected to do, whereas others involve risks that arise because an entirely unexpected behavior has occurred that transcends the normal expectations.

    The Bernstein book quoted above is slanted largely toward a perspective of financial risk management, but an understanding of the nearly millennium-long historical evolution that it presents is also quite appropriate in the context of trustworthy systems and networks. Indeed, that quote echoes our view of the importance of carefully stated comprehensive requirements, sound architectures, principled developments, and disciplined operations as strong approaches to avoiding risks that can be avoided, and to better managing those that cannot.

    Neumann's book, Computer-Related Risks [260], provides a complementary view of the origins of those risks and some constructive ways to combat them. Various articles in the ACM Risks Forum, the IEEE Spectrum, and the monthly Inside Risks columns in the Communications of the ACM have documented selected failures and a few successes. However, Henry Petroski has often remarked that we seldom learn much from what appear to be successes, and that we have a better opportunity to learn from our mistakes -- if we are willing to do so. This report attempts to do exactly that -- learn from past mistakes and dig more deeply into approaches that can reduce the risks related to trustworthiness in critical systems and networks.

    9.4 Concluding Remarks

    Hindsight is useful only when it improves our foresight. William Safire (The New York Times, 6 June 2002)  

    There are many lessons to be learned from our past attempts to confront the obstacles to developing and consistently operating systems with stringent requirements for trustworthiness. This report is yet another step in that direction, in the hopes that it is time for constructive action.

    We began Chapter 1 of this report quoting Ovid: 

    We essay a difficult task; but there is no merit save in difficult tasks.

    We began Chapter 4 on principled architectures quoting Juvenal:  

    Virtue is praised, but is left to starve.

    We began Chapter 9 quoting Cicero: 

    The merit of virtue is consolidated in action.

    Each of these three two-millennium-old quotes is still extremely apt today.

    With regard to Ovid, the design, development, operation, and maintenance of trustworthy systems and networks represent some incredibly difficult tasks; however, we really must more assiduously confront those tasks, rather urgently. Today's commercially available systems, subsystems, and applications fall significantly short -- for example, with respect to trustworthiness, predictable composability and facile interoperability, assurance, ease of maintenance and operation, and long-term evolvability.

    With regard to Juvenal, it is easy to pay lip service to virtuous principles and good development methodologies, but those principles are seldom observed seriously in today's system and network developments.

    With regard to Cicero, we recognize that it is extremely challenging to practice what we preach here. For example, incompatibility problems with legacy systems tend to make exceedingly difficult the kind of cultural revolution that is likely to be necessary to achieve trustworthy systems in the future. However, it is our sincere hope that this report will help consolidate some of its readers into action toward much more disciplined and principled design and development of composable trustworthy systems and networks, with nontrivial measures of assurance. The alternative of not doing so is likely to resemble something conceptually approaching the decline and fall of the Roman Empire.

    The U.S. DoD Global Information Grid (GIG) (discussed briefly at the end of Section 7.1) is a work in progress that illustrates the importance of far-sighted thinking, principles, predictable composability, and a viable system-network architecture concept. As noted earlier, the planning and development necessary to attain the desired requirements also strongly suggest the need for long-term vision, nonlocal optimization, and whole-system perspectives (see Sections 7.1, 7.2, and 7.3, respectively). Given the very considerable difficulties in achieving high-assurance trustworthiness over the past four decades, and the dismal record noted in this report, the challenges of finally overcoming the lurking hurdles in the next 16 years are indeed daunting. As noted at the end of Section 7.1, the content of this report is fundamental to such efforts as the GIG.

    Once again, we reiterate a mantra that implicitly and explicitly runs through this report: In attempting to deal with complex requirements and complex operational environments, there are no easy answers. Those who put their faith in supposedly simple solutions to complex problems are doomed to be disappointed, and -- worse yet -- are likely to seriously disrupt the lives of others as well. If the principles discussed here are judiciously applied with a pervasive sense of discipline, systems and networks can be developed, administered, and operated that are significantly more robust and secure than today's commercial proprietary mass-market software and large-scale custom applications. Perhaps most important, complexity must be addressed through architectures that are composed of well-understood components whose interactions are also well understood, and through compositions that demonstrably do not compromise trustworthiness in the presence of certain untrustworthy components. The approaches offered herein are particularly relevant to developers of open-source software, although they are equally important to mass-market developments. Those approaches may seem difficult to follow in their entirety, but selective application of whatever is appropriate to a given development should be considered.

    In concluding this report on how we might develop systems and networks that are practical and realistically more trustworthy, the author recognizes that he has given his readers a considerable amount of seemingly repetitive evangelizing. Although such arguments by many authors seem to have fallen on deaf ears in the past, hope springs eternal. Besides, the risks of not taking this report to heart are greater now than they ever have been.

    Acknowledgments

    I am especially grateful to Doug Maughan, who sponsored the CHATS program and was its Program Manager for the first two years of our project (when he was at DARPA). His vision and determination made the CHATS program possible, and his inspiration and encouragement have been instrumental in this project. In addition, Lee Badger (also at DARPA) provided the impetus for the work on perspicuous interfaces on which Chapter 5 is based.

    I enormously appreciate various suggestions from members of our project advisory group (Blaine Burnham, Fernando Corbató (Corby), Drew Dean, George Dinolt, Virgil Gligor, Jim Horning, Cliff Jones, Brian Randell, John Rushby, Jerry Saltzer, Sami Saydjari, Olin Sibert, David Wagner) and other individuals whose comments have been very helpful directly or indirectly in guiding the progress of this report.

    In particular, Drew Dean suggested several examples of conflicts within and among the principles, and exposed me to Extreme Programming; we had many ongoing discussions on composability, architecture, and other subjects. He was instrumental in our joint work on perspicuous interfaces (Chapter 5). Virgil Gligor early on reminded me of several important papers of his on composability; his contributions to the seedling effort on visible interfaces for Lee Badger strongly resonated with those of Drew Dean and me. Virgil also generously contributed the material on which Appendix B is based.

    Sami Saydjari offered numerous valuable comments during the first year of the project. Blaine Burnham drew my attention to the documents on composability from the 1992 time frame noted in the bulleted item on other past research on composition in Section 3.4. Jim Horning offered wonderful suggestions based on his long experience -- including the quote from Butler Lampson at the beginning of Section 2.3, and profound thoughts on Chapter 7, which I gladly incorporated. Eugene Miya offered the quote from Gordon Bell at the beginning of Section 3.7. Tom Van Vleck expressed various doubts as to the efficacy of the object-oriented paradigm. Many years of interactions with Brian Randell have resulted in our pursuing similar research directions; his insights have influenced this report both directly and indirectly. Some detailed comments from Fred Cohen on an early draft of the composability chapter gave me considerable food for thought.

    I am delighted with the results of the subcontract to the University of California at Berkeley and thankful to David Wagner for his excellent leadership, to Hao Chen for carrying out most of the work, and to Drew Dean for his vital participation in that effort. The material in Appendix A summarizes the results of the Berkeley subcontract, plus some further work by Hao Chen conducted during the summer of 2003 at SRI and subsequently at Berkeley. Appropriately, Chen's work uses an approach to static analysis of would-be robust programs that itself contributes significantly to the composability of the analysis tool components.

    A Formally Based Static Analysis (Hao Chen)

    Formally Based Static Analysis for Detecting Flaws

    This appendix summarizes the results of the first-year project subcontract to the University of California at Berkeley and some subsequent related work, culminating in the thesis work of Hao Chen.

    A.1 Goals of the Berkeley Subcontract

    The one-year CHATS project subcontract task involved a short-term, potentially high-payoff approach: static analysis capable of detecting characteristic, commonly occurring security vulnerabilities in source code. The approach combines models of the vulnerabilities with model checking related to the source code. The approach is intentionally open-ended, with linearly increasing complexity of composability as various new vulnerability types are accommodated. The team for this task included Professor David Wagner and his graduate student Hao Chen in the Computer Science Department at the University of California at Berkeley, with the participation of Drew Dean at SRI, under the supervision of Peter Neumann.

    A.2 Results of the Berkeley Subcontract

    One of the things that makes computer security challenging is that there are many unwritten rules of prudent programming: "Don't do X when running with root privileges." "Always call Z after any call to Y." And so on. These issues contribute to the prevalence of implementation errors in security-critical systems.

    In this project, our goal was to help reduce the incidence of implementation vulnerabilities in open source software by developing an automated tool to warn when programmers violate these implicit rules of thumb. We have done so. Our hypothesis was that new ideas in software model checking could prove very helpful in this problem, and our research goal was to experimentally assess the utility of our methods. Our studies give strong evidence in favor of the benefits of this style of automated vulnerability detection and avoidance. This project was undeniably a high-risk, high-payoff, novel combination of theory and practice, but we feel that it has already been very successful.

    In this appendix, we give some details on our progress during the year. We have also written research papers [77, 78] on our work, which provide further technical details. Here we give a high-level overview of our results and experimental methodology.

    First, we developed a general platform for automatically scanning C source code and verifying whether it follows these rules. We developed new techniques, based on model checking of pushdown automata, for this problem, and we built a prototype implementation of our algorithms. Our tool, called MOPS, supports compile-time checking of large C programs against temporal safety properties. Note that the latter two sentences conceal a significant amount of investment and implementation work, but, as we argue next, that work has paid off nicely.

    Next, we selected several examples of implicit rules of defensive coding. Several of these rules concern proper usage of the privilege management API in Unix, namely, the setuid()-like calls. The specific guidelines selected were as follows (brief illustrative C sketches appear below):

    1. Programs that drop privilege should do so correctly: They should call setgid() before setuid(). Moreover, they should avoid the Linux capability bug: they should be aware that, on some older versions of Linux, the setuid()-like calls may fail to drop privilege in certain special situations.
    2. Programmers should avoid situations where a setuid()-like call may fail. Such situations are dangerous, because these failure modes are often not adequately tested.
    3. Programmers should avoid so-called "tractorbeaming attacks". In a tractorbeaming attack, unexpected interactions between signal handlers, setjmp()/longjmp(), and Unix uid's can create security vulnerabilities. To avoid this, programmers should ensure that every call to longjmp() will be done in exactly the same security context as the preceding call to setjmp(), no matter what intervening code path may be followed between the two.
    4. When writing a setuid program, one should avoid making any assumptions about the environment inherited from the parent process. In particular, file descriptors 0, 1, and 2 are usually bound to an input/output device (e.g., the user's terminal) in normal operation, but for setuid programs, there is no guarantee that the parent process will abide by this convention. If this fact is overlooked, there is a specific class of vulnerabilities that can ensue: for instance, if a setuid program calls open("/etc/passwd", O_RDWR) and then calls printf() to display some output to the user, it may be possible for an attacker to corrupt the password file by calling the setuid program with file descriptor 1 closed, so that the open() call binds the password file to fd 1 and the printf() unintentionally writes to the password file rather than to the screen.
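    As a concrete (and purely illustrative) sketch of rules (1) and (2), the following C fragment drops privileges in the required order -- setgid() before setuid() -- and checks both the return values and the resulting uids, so that a silent failure of a setuid()-like call (including the older-Linux capability bug) cannot go unnoticed. The function name and error handling are our own and are not taken from any of the analyzed programs.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Illustrative only: permanently drop privileges to (uid, gid). */
        static void drop_privileges(uid_t uid, gid_t gid)
        {
            /* Rule (1): drop the group first; once the uid is no longer
             * root, setgid() would no longer be permitted. */
            if (setgid(gid) != 0) {
                perror("setgid");
                exit(EXIT_FAILURE);
            }
            if (setuid(uid) != 0) {       /* Rule (2): check for failure */
                perror("setuid");
                exit(EXIT_FAILURE);
            }
            /* Guard against the capability bug: verify the drop happened. */
            if (getuid() != uid || geteuid() != uid) {
                fprintf(stderr, "privileges were not fully dropped\n");
                exit(EXIT_FAILURE);
            }
        }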

    This is by no means an exhaustive list. Rather, the rules listed above were selected to be representative, of interest to open-source practitioners, and theoretically challenging to automatically check.
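    Rule (4) also has a standard defensive counterpart, sketched below under the same caveat that this is our own illustration rather than code from the analyzed programs: a setuid program can ensure that descriptors 0, 1, and 2 are open (on /dev/null if necessary) before it touches any sensitive file, so that a later open() cannot silently bind such a file to what printf() believes is the terminal.

        #include <fcntl.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* Illustrative only: make sure fds 0, 1, and 2 are all occupied. */
        static void sanitize_std_fds(void)
        {
            int fd;
            do {
                fd = open("/dev/null", O_RDWR);
                if (fd < 0)
                    exit(EXIT_FAILURE);   /* cannot even open /dev/null */
            } while (fd < 3);             /* fds 0-2 are now guaranteed open */
            close(fd);                    /* the extra descriptor (>= 3) */
        }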

    Then, we devoted effort to experimentally assessing the power of our technique. We chose several large, security-critical programs of interest to the open source community as targets for our analysis. In several cases, we were able to find older versions of these programs that contained security vulnerabilities arising from violations of the above rules. The selected programs include wu-ftpd, sendmail, and OpenSSH. We set out to apply our tool to check these programs against the above rules.

    We started by codifying the above rules in a form understandable by our modelchecker, MOPS. We described them as finite state automata on the traces of the program. Along the way, we discovered that we needed to solve an unanticipated research challenge: What are the exact semantics of the Unix setuid()-like system calls? We realized that these semantics are complex, poorly documented, and yet critical to our effort. To reason about the privileges an application might acquire, we must be able to predict how these system calls will affect the application's state. We spent some time working on this problem, because it does not seem to have been addressed before.

    We also developed new techniques for automatically constructing a formal model of the operating system's semantics with respect to the setuid()-like system calls. In particular, our algorithm extracts a finite-state automaton (FSA) model of the relevant part of the OS. This FSA enables us to answer questions like "If a process calls setuid(100) while its effective userid is root, how will this affect its userids?" and "For a process in such-and-such a state, can seteuid(0) ever fail?".
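    To convey the flavor of such a model (and only the flavor -- the automatically extracted FSAs are operating-system specific, which is precisely the point of the work), the following C sketch treats a (real, effective, saved) uid triple as a state and a simplified, POSIX-style setuid() as a transition function. The structure, names, and simplified semantics here are ours, not those of the actual extracted models.

        #include <stdbool.h>
        #include <stdio.h>

        typedef struct { int ruid, euid, suid; } uid_state;

        /* Simplified POSIX-style transition: returns true if the modeled
         * call succeeds, updating the state in place. */
        static bool model_setuid(uid_state *s, int uid)
        {
            if (s->euid == 0) {                      /* privileged process  */
                s->ruid = s->euid = s->suid = uid;   /* sets all three uids */
                return true;
            }
            if (uid == s->ruid || uid == s->suid) {  /* unprivileged case   */
                s->euid = uid;
                return true;
            }
            return false;                            /* call fails          */
        }

        int main(void)
        {
            uid_state s = { 0, 0, 0 };               /* running as root     */
            bool ok = model_setuid(&s, 100);
            printf("setuid(100): %s -> (%d,%d,%d)\n",
                   ok ? "succeeds" : "fails", s.ruid, s.euid, s.suid);
            /* From (100,100,100), no transition in this model can restore
             * euid 0 -- the kind of question the extracted FSA answers. */
            return 0;
        }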

    Our new techniques, and the FSA models they produce, are useful in several ways. First, they form one of the foundations of our tool for static analysis of applications. Because we have an accurate model of both the application and the operating system, we can now predict how the application will behave when run on that operating system. Second, they enable us to document precisely the semantics of the setuid API on various operating systems, which we expect will help open-source programmers as they develop new applications. Third, they enable us to pinpoint potential portability issues: we have constructed FSA models for Linux, Solaris, and FreeBSD, and each difference in the respective FSAs indicates nonportable behavior of the setuid API that application programmers should be aware of.

    Our paper [78] on constructing formal models of the operating system also documents several subtle pitfalls associated with privilege management. We expect that this work will help developers of open-source applications and maintainers of open-source operating systems to improve the quality and security of their software.

    With this research challenge tackled, we were now able to encode rules (1) to (4) in a form readable by MOPS, and we used MOPS to check whether the applications we selected follow the rules. MOPS found several (previously known) security vulnerabilities in these programs, as follows:

    • MOPS found security holes in earlier versions of sendmail. In sendmail 8.10.1, MOPS found an instance of the Linux capabilities bug. In sendmail 8.12.0, MOPS found that sendmail can fail to drop group privileges properly, a violation of rule (1).
    • We used MOPS to verify that OpenSSH 2.5.2 properly uses the setuid()-like system calls in the sense that no uid-setting system call can fail.
    • MOPS found a tractorbeaming bug in wu-ftpd version 2.4 beta 12. This in fact was a source of a security hole in this older version of wu-ftpd, and was later fixed. MOPS also confirmed that the latest version of wu-ftpd correctly obeys our rule regarding setuid(), longjmp(), and signal handlers.
    • Experiments are still under way with respect to rule (4), but we found security bugs in several programs, including login and crontab on Linux.

    In each case, MOPS ran efficiently, taking at most a minute or two to scan the source code. Since each of these application programs is of nontrivial size, this is a very positive result.

    This experimental evidence indicates that MOPS is a powerful tool for finding security bugs, for verifying their absence, and for ensuring that various principles of good coding practice are observed. We have publicly released the MOPS tool under the GPL license at
    http://www.cs.berkeley.edu/~daw/mops/. Our current prototype includes the compiler front end, the modelchecker, and a preliminary user interface. However, we should warn that there are several known limitations: the current release does not include an extensive database of rules to check, and the user interface is rather primitive, intended primarily for the expert programmer rather than for the novice. We hope to address these limitations in the future.

    Along the way, we developed several theoretical and algorithmic techniques that may be of general interest. First, we extended known modelchecking algorithms to allow backtracking: when the modelchecker finds a violation of the rule, our algorithm allows finding an explicit path where the rule is violated, to help the programmer understand where she went wrong.

    Second, we developed a compaction algorithm for speeding up modelchecking. Our observation is that, if we focus on any one rule, most of the program is usually irrelevant to that rule. Our compaction algorithm prunes away irrelevant parts of the program -- our experience is that compaction reduces the size of the program by a factor of 50 to 500 -- and this makes modelchecking run much more efficiently.

    Our compaction algorithm gives MOPS very good scalability properties. In principle, the time complexity of pushdown modelchecking scales as the cube of the size of the program (expressed as a pushdown automaton) and the square of the size of the rule (expressed as a finite state automaton). However, in practice, the running time is much better than this would indicate, because our compaction eliminates all irrelevant states of the program. With compaction, the running time now depends only on the cube of the size of the relevant parts of the program, and as argued above, this is generally a very small figure.
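    Taking the stated asymptotics at face value, the effect of compaction can be summarized as follows (the notation is ours): for a program P, its compacted relevant part P_rel, and a rule expressed as an automaton A,

        T_{\mathrm{uncompacted}} = O\!\left(|P|^{3}\,|A|^{2}\right), \qquad
        T_{\mathrm{compacted}}   = O\!\left(|P_{\mathrm{rel}}|^{3}\,|A|^{2}\right), \qquad
        |P_{\mathrm{rel}}| \approx |P|/50 \ \text{to}\ |P|/500,

    so even a 50-fold reduction in program size already shrinks the dominant cubic term by a factor of roughly 50^3 = 125,000.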

    As a result, MOPS is expected to scale well to very large programs. We have already shown that it runs very fast on fairly large programs (on programs with 50,000 lines of code or so, modelchecking runs faster than parsing). Moreover, MOPS enables programmers to verify global properties on the entire system, even though each programmer may know only local information about one part of the system. Thus, our approach is very friendly to composition of large systems from smaller modules.

    In summary, we have developed, implemented, and validated new techniques for improving the quality of security-critical software. Our tool is freely available. This points the way to improvements in security for a broad array of open-source applications.

    The relevant papers are "Setuid Demystified" [78], by Hao Chen, David Wagner, and Drew Dean (http://www.cs.berkeley.edu/~daw/papers/setuid-usenix02.ps), and "MOPS: An Infrastructure for Examining Security Properties of Software" [77], by Hao Chen and David Wagner (http://www.cs.berkeley.edu/~daw/papers/draft-mops.ps).

    A.3 Recent Results

    Subsequent to the first-year subcontract, Hao Chen continued to work on MOPS and its applications for his Berkeley doctoral dissertation. MOPS acquired its first external user, the Extremely Reliable Operating System (EROS) project at Johns Hopkins University [351]. The EROS project has already uncovered multiple, previously unknown coding errors by using MOPS to analyze the EROS kernel. Based on user feedback, we are working on tuning the performance of the tool. Work has focused on some minor modifications to key data structures to reduce memory pressure on the garbage collector (MOPS is implemented in Java). A small amount of work produces a very large payback: our initial tests indicate a 300%-400% speed improvement over the earlier version. This improvement has recently been completed, and was shipped to Johns Hopkins. These results enhance MOPS's already impressive scalability for analyzing real-world software such as Sendmail and OpenSSH.

    Hao Chen spent the summer of 2003 at SRI, funded by SRI project 11679, under Contract N00014-02-1-0109 from the Office of Naval Research. Building on the prior work on modeling the setuid family of system calls in Unix-like operating systems, the above-mentioned programs were examined for security problems relating to uid handling, concentrating on global properties of the programs. The concentration on global properties was chosen for two reasons: (1) Local properties can easily be checked with less sophisticated tools. Why swat a fly with a sledgehammer? (2) Global properties, being more difficult to check, for both humans and machines, have had poorer tool support, so the probability of interesting discoveries is higher. The experience gained using MOPS to check more properties of more software also uncovered areas in which MOPS needed further improvement.

    In addition to the above-mentioned improvements in MOPS, Hao Chen applied MOPS to study selected security properties of widely used open source software. The programs studied included BIND, Sendmail, Postfix, Apache, and OpenSSH. To demonstrate the power and utility of MOPS, these programs were model checked for five properties: (a) proper dropping of privileges, (b) secure creation of chroot jails, (c) avoidance of file system race conditions, (d) avoidance of attacks on standard error file descriptors, and (e) secure creation of temporary files. (A brief illustration of property (c) appears below.)
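    As a purely illustrative example of property (c), the following C sketch shows the classic access()/open() time-of-check-to-time-of-use race and one common way to avoid it; the function names and error handling are ours and are not drawn from the programs listed above.

        #include <fcntl.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Racy: the file named by path can be replaced (e.g., via a symbolic
         * link) between the access() check and the open() call. */
        int open_as_user_racy(const char *path)
        {
            if (access(path, R_OK) != 0)
                return -1;
            return open(path, O_RDONLY);
        }

        /* Safer sketch: temporarily assume the real user's identity so that
         * the kernel's own permission check inside open() applies atomically. */
        int open_as_user(const char *path)
        {
            uid_t saved = geteuid();
            int fd;
            if (seteuid(getuid()) != 0)
                return -1;                 /* could not shed privileges      */
            fd = open(path, O_RDONLY);
            if (seteuid(saved) != 0 && fd >= 0) {
                close(fd);                 /* refuse to continue if the      */
                fd = -1;                   /* original identity is lost      */
            }
            return fd;
        }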

    Hao Chen's work at SRI during the summer 2003, under the guidance of Drew Dean, resulted in the discovery of several hitherto undetected security problems in these programs, as well as the identification of other flaws that had been previously discovered elsewhere. The results of this application of MOPS to real programs are summarized in [75].

    This work provides key capabilities for progress in information assurance. It provides a principled foundation for analyzing the behavior of programs based on traces of system calls, or, for that matter, any functions of interest. This approach to program analysis can directly take advantage of research in both model checking and static analysis to become more precise over time, something that is not directly true of ad-hoc approaches to analyzing programs for security vulnerabilities. Future improvements to underlying technology, in addition to more engineering improvements to MOPS, should allow MOPS to scale from today's ability to handle 100KLOC comfortably (substantially more than competing tools), to 1MLOC. Such scalability will be necessary for DARPA to provide an assured future for the network-centric warfighter.

    Hao Chen's doctoral thesis [74] is now finished and available. Also, a recent paper by Hao Chen and Jonathan Shapiro [76] describes their experience running MOPS on EROS. In addition, a group of students in Professor Wagner's group ran MOPS on all 839 packages in RedHat Linux 9 and found many security bugs and weaknesses, which are described in a new paper.

    A.4 Integration of Static Checking into EMERALD

    It is useful to contemplate how the software developments of the Berkeley effort could subsequently be integrated into an anomaly and misuse detection system such as that provided by the EMERALD framework and its successor technologies. Several different approaches are potentially of interest:

    • Apply the static analysis techniques and tools to the EMERALD modules to determine EMERALD's compliance with the existing and subsequently emerging Chen-Wagner formal models.
    • Establish new models to be specifically suitable to analysis of the EMERALD software, and apply them to EMERALD.
    • Develop a means for automatically coupling the vulnerability models with EMERALD rule bases, or otherwise incorporating the results of the analyses into EMERALD.
    • Develop a coherent environment that encompasses static and dynamic checking and real-time analysis.

    B System Modularity (Virgil Gligor)

    Basis for the Visibility and Control of System
    Structural and Correctness Properties

    This appendix is based on material written by Virgil D. Gligor under DARPA Contract number MDA 972-03-P-0012 through VDG Inc, 6009 Brookside Drive, Chevy Chase, MD. 20815, telephone 1-301-657-1959, fax 1-301-657-9021, in connection with Lee Badger's Visibly Controllable Computing initiative at DARPA. Gligor's original text appeared as the appendix to an unpublished report, "Perspicuous Interfaces", written by Peter Neumann, Drew Dean, and Virgil Gligor, as part of a seedling study for Lee Badger; it is adapted as an appendix to this report with the permission of Virgil Gligor, with the explicit intent of increasing its availability to the R&D and academic communities. The earlier work of David Parnas on module decomposition [281] and on module dependence [283] (e.g., the various forms of the uses relation) is particularly relevant here.

    B.1 Introduction

    The study of Visibly Controllable Computing has the goals of reducing systems complexity and applying automated reasoning and learning techniques to create systems that can not only explain their current state but also adapt to new environments by

    (1) Connecting their self-knowledge to knowledge of external-environment entities and constraints.

    (2) Warning users if new mission demands cannot be satisfied.

    (3) Exploring alternative configurations and reconfiguring to fit changing needs.

    In general, by establishing the visibility of a system's structural and correctness properties we mean the identification of a system's components and their relationships, and reasoning about properties such as correctness, fault tolerance, and performance. A first step toward this goal is investigating system modularity. This step is necessary if knowledge of system structure and state is to be gained and if systems are to reconfigure on the fly to satisfy changing mission requirements. Of particular interest is the investigation of properties that help (1) reconfigure systems by module replacement, and (2) establish causal correctness dependencies among modules (e.g., correctness of module interface A implies correctness of module interface B) in addition to structural visibility and reconfigurability. Of additional interest is the investigation of the properties that help reuse extant modules for different missions. Finally, of significant interest is the identification of a set of simple, practical tools for establishing and verifying system modularity.

    Software systems that employ modular design, and that use data abstraction and information hiding to achieve layering [179], offer the following advantages:

    (a) Allow an incremental, divide-and-conquer approach to reasoning about correctness and other important system properties (e.g., fault tolerance, performance).

    (b) Support replacement independence of system components based on well-defined interfaces and uniform reference (i.e., references to modules need not change when the modules change).

    (c) Provide an intuitive packaging of system components with ease of navigation through the system, layer by layer, module by module.

    (d) Allow an incremental, divide-and-conquer approach to system development, with many individuals per development team possible.

    (e) Enable the reuse of software modules in different environments.

    Note that Clark [81], and later Atkins [22], suggest that layering may sometimes be a potentially undesirable form of system structuring because it can lead to poor performance. Also, Nelson suggests the use of protocol "delayering" (i.e., combining protocol layers) to achieve an efficient remote procedure call mechanism [251]. Thus, while layering is a generally useful technique for system structuring, the extent of system layering depends on specific criteria, such as correctness, fault tolerance, and performance. Lampson [199] argues that the reuse of software modules is and will remain an unrealistic goal, in practice.

    Early uses of layered abstraction include Multics [91, 92, 277] (with rings of protection, layering of system survivability and recovery, and directory hierarchies), Dijkstra's THE system [106] (with layers of object locking), and SRI's Provably Secure Operating System [120, 268, 269]. The PSOS hardware-software architecture provided numerous layers of abstraction for different types of objects, and distinguished between objects and their type managers. The architecture explicitly contradicts the above-mentioned Clark and Atkins claim that layering inherently leads to poor performance. For example, the PSOS layering enabled user-process operations (layer 12) to execute as single capability hardware instructions (layer 0) whenever appropriate dynamic linkage of symbolically named objects had been previously established. (The bottom 7 layers were conceived to be implemented directly in hardware, although the hardware could also encompass all or part of higher layers as well.) Thus, repeated layers of nested interpretation are not necessarily a consequence of layered abstraction, given a suitable architecture. (Also, see Section 3.4 for further background on PSOS relevant to composability.)

    B.2 Modularity

    In this section we define the term "module," illustrate system decomposition into modules, and present several correctness dependencies among modules. The following key notions are required to define and implement modular systems:

    * Module and module synonyms

    * Interface versus implementation

    * Replacement independence

    * Reusability

    * "Contains" relation

    * Module hierarchy

    * "Uses" relation

    * Correctness dependency among modules

    B.2.1 A Definition of "Module" for a Software System

    In general, a module is a system component (part, unit, building block). Synonyms for "module" include "system," "platform," "layer," "subsystem," "submodule," "service," and "(abstract) type manager." A software module is part of a software system and has the following six properties:

    P1. Role. A module has a well-defined unique purpose or role (responsibility, contract) that describes its effect as a relation among inputs, outputs, and retained state.

    P2. Set of Related Functions. A module contains all and only the functions (procedures, subroutines) necessary to satisfy its role. Each function has well-defined inputs, outputs, and effects.

    P3. Well-Defined Interface. A module has an interface (external specification) that consists of the public (visible) items that the module exports:

    * declarations of its public functions (i.e., those invocable from outside the module) and their formal parameters;

    * definitions of its exported types and exported manifest constants;

    * declarations of global variables associated with the module;

    * declarations of its signaled exceptions and handled exceptions;

    * definition of the necessary resources and privileges;

    * rules (discipline, restrictions) for using the above public functions, types, constants, and global variables.

    P4. Implementation. A module has an implementation (internal design) that details how its interface is satisfied. It should be possible to understand the interface (and role) of a module without understanding its implementation.

    P5. Replacement Independence. A module implementation can be replaced without also replacing any other module implementation in the system.

    P6. Reusability. A module implementation can be reused in different software systems with little or no added code.

    The role of a module describes its effects or behavior on inputs. The effects of a module can be reflected in the values of outputs, the state of the module, or the state of the system. With software, for example, the state of the module or system can be represented by a set of variables (e.g., simple variables, structures). A well-defined role should have a short and clear description, preferably one sentence. A module should have a simple name that reflects its role. Typically, module roles are system unique; no two modules in a system have the same role (no duplication of role). However, the system may intentionally duplicate modules to achieve other system goals (e.g., performance, reliability).

    For a module function to be well-defined, its inputs and outputs and effects should be well-defined. The name of a function should reflect its purpose. Functions should, but need not, be named. In software, for example, some functions are expanded in-line for performance reasons; also, the programming language may not have a way to express in-line expansion of named functions. Continuing the software example, the inputs and outputs of a function can be formal parameters or informal (global, environment) parameters or (request-response) messages. It should be simple to distinguish the public from the private functions (if any) in a module. It is desirable, but not necessary, that the functions of a module be nonredundant; function redundancy is undesirable but at the discretion of the designer of the system or module. Regarding the all and only nature of a module's functions, certain functions typically have a complementary twin: get-set, read-write, lock-unlock, do-undo, reserve-unreserve, allocate-deallocate, and so on.

    A module interface is well-defined if it contains all and only the module assumptions that a module user needs to know. The discipline of an interface, if any, may explain a legal order in which to use the public functions. For software, a well-defined interface contains declarations of exported (public) functions, data, types, manifest constants, exceptions raised, exceptions handled, exception handlers, and the associated restrictions or discipline [387]. It may be inappropriate or impossible to capture certain programming restrictions or discipline within programming language constructs, in which case they should be provided in an associated specification or commentary. Note that a module interface includes variables that are global to that module.
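    As a small, hypothetical illustration of such an interface in C (the module, names, and discipline below are invented for this purpose), a header file can carry exactly the public items enumerated above -- exported functions, an opaque type, a manifest constant, and the usage discipline -- while the corresponding implementation file keeps the representation secret:

        /* stack.h -- the interface (external specification) of a module.  */
        #ifndef STACK_H
        #define STACK_H

        #include <stdbool.h>
        #include <stddef.h>

        #define STACK_MAX_DEPTH 1024        /* exported manifest constant   */

        typedef struct stack stack;         /* opaque exported type         */

        /* Discipline: stack_create() must precede any other call, and every
         * created stack must eventually be released with stack_destroy().  */
        stack *stack_create(void);
        void   stack_destroy(stack *s);
        bool   stack_push(stack *s, int value);   /* false if the stack is full  */
        bool   stack_pop(stack *s, int *value);   /* false if the stack is empty */
        size_t stack_size(const stack *s);

        #endif /* STACK_H */

    The matching stack.c would define struct stack and the function bodies; replacing that file with a different representation (property P5) would require no change to any client, because clients see only the interface above.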

    A module implementation contains module-construction assumptions and programming details that a module user should not have to know; for example, order of resource use, algorithms employed.

    The typical notion of replacement independence for a module is that, if the module breaks or no longer functions correctly, and a new module with the same interface is available, we can replace the original module with the new one without replacing any other modules. However, in software systems, the notion of replacement independence has a somewhat different meaning. While replacement independence is implied by "information hiding" [281, 59], and information hiding disallows global variables, replacement independence does not necessarily rule out the use of global variables in modules, provided that the global variables are explicitly defined in the module's interface and that the dependencies among the modules using those global variables are known.

    The typical notion of module reuse requires that a module be (1) general in its purpose or role, so that it is useful to users other than the few people working on the same project; (2) fully specified, so that others can use it; (3) tested, so that general expectations of quality are met; and (4) stable, in the sense that the module's behavior remains unchanged for the lifetime of the user system [199]. Other related properties of module reusability include simplicity of interface (i.e., foreign users should understand the module's interface with little effort) and customization (i.e., foreign users should be able to tailor module use by using special parameters or a special-purpose programming language) [199].

    B.2.2 System Decomposition into Modules

    The decomposition of any system into modules relies on two intermodule relations, namely, (1) the "contains" relation, and (2) the "uses" relation. These relations imply certain correctness dependencies among modules that are fundamental to defining the module structure of a system.

    B.2.3 The "Contains" Relation

    Internally, a module may (but need not) contain component submodules. If it is necessary or desirable to identify a set of component parts of a module as submodules, then that set of submodules partitions (i.e., is collectively exhaustive and mutually exclusive) the parent module. The decision as to when to stop partitioning a system into modules is generally based on designer discretion and economics -- partitioning stops when it is no longer necessary or economically desirable to identify, package, and replace a subpart separately. Other than this, no generally accepted criterion exists for when to stop partitioning a software system into additional modules.

    Applied system-wide, the "contains" relation yields a module hierarchy (i.e., a tree). Nodes of the tree represent modules; arc (A, B) means that module A directly contains submodule B. The root of the tree, the whole system, is the 0-th level of the tree; that is, the system itself is the 0-th-level module. The (n+1)-th level consists of the children (direct submodules) of the n-th level. Modules with no submodules are called leaf modules. We can call a system organized as such a part hierarchy modular if the system itself, and recursively each of its subparts that we identify as should-be modules, satisfies the definition of a module.


    Figure B.1: Example of the Contains Relation

    EXAMPLE. The UNIX kernel is a software module; the system calls compose its set of related functions. The manual pages for the system calls describe the role, set of functions, and interface of the kernel. Figure B.1 shows an example of the "contains" relation, the major subsystems of the Unix kernel.

    Figure B.2: Example of the Contains Relation and Module Hierarchy

    EXAMPLE. Figure B.2 shows another example of the "contains" relation, a decomposition of the File Subsystem of the Secure XENIX kernel [115] into a module hierarchy. In Figure B.2, ACL means Access Control List. The darkened boxes identify the files of source code in this design. The Superblock Service manages the attributes of a file system as a whole object. In this decomposition, the Mount Service is part of Flat File Service and not part of Directory (pathname) Service. The Mount Service maintains a short association list for substituting one (device, i-node number) pair, a "file handle," for another. The mounted handle represents a file system, and the mounted-on handle represents a directory. The Mount Service knows nothing about directories and pathnames; it knows about pairs of file handles.

    B.2.4 The "Uses" Relation

    In software, if a module uses another module, then the using module imports (all or part of) the interface of the used module to obtain needed declarations and definitions. We define the "uses" relation between functions and modules as follows. Function A uses function B if and only if (a) A invokes B and uses results or side effects of that invocation, and (b) there must be a correct version of B present for A to work (run, operate) correctly. A function uses a module if and only if it uses at least one function from that module. A module uses another module if and only if at least one of its functions uses that module. The "uses" relation is well-defined. From the "uses" relation we can draw a directed graph for a given level, where the nodes are same-level modules, and arc (A, B) means that module A uses module B. Also, we can draw a "uses" graph of the leaf modules.


    Figure B.3: Example of Refining the Uses Relation 1


    Figure B.4: Example of Refining the Uses Relation 2


    Figure B.5: Example of Refining the Uses Relation 3

    EXAMPLE. Figures B.3, B.4, and B.5 show an example of an intrasubsystem "uses" graph for the File Subsystem of the Secure XENIX kernel. They show a progression of versions of a "uses" graph. Version 0 (Figure B.3) shows the entire subsystem. Version 1 (Figure B.3) shows all File Subsystem system calls in one box to simplify the picture, and shows how this layer uses all three top-level services of the File Subsystem. The lines from the Flat File Service to the ACL Service (and back) show a "circular dependency" between the two; each uses the other. Version 2 (Figure B.4) replaces the Flat File Service with its three component services. Version 3 (Figure B.5) shows another level of detail of the "uses" graph. (Note the circular dependencies in Figure B.5. For approaches that help eliminate such dependencies, see the PSOS abstraction hierarchy [120, 268, 269] and [179]. PSOS inherently removed such circular dependencies as a fundamental part of the architectural abstraction hierarchy.)

    B.2.5 Correctness Dependencies Among System Modules

    Correctness dependencies between modules are basic to describing, evaluating, and simplifying the connectivity of modules, and thus basic to system restructuring and evolution. For modules P and Q, P depends on Q, or P has a correctness dependency on Q (or "the correctness of P depends on the correctness of Q"), if and only if there must be a correct version of Q present for P to work correctly. Based on earlier work by Parnas, Janson et al. [344, 179] identify several types of correctness dependencies, which were later combined into the following three classes by Snider and Hays [358]: service, data, and environmental dependencies.

    Service Dependency: "P invokes a service in Q and uses results or side effects of that service. The service may be invoked through a function call, message, or signal (e.g., a semaphore operation), or through hardware, such as via a trap." [358]

    It is important to point out that not all invocations are service dependencies. "Note that if P transfers control or sends a message to Q and neither expects to regain control, nor counts on observing consequences or results from its interaction with Q, then P does not depend on Q. It is said simply to notify Q of an event (without caring about what Q does once it is notified)" [179]. In layered systems, certain upcalls [81] that provide advice or notifications can be viewed as not violating the downcall-only layering discipline if such upcalls do not correspond to correctness dependencies.

    Data Dependency: "P shares a data structure with Q and relies upon Q to maintain the integrity of that structure." [358]

    Modules that are either readers or writers of shared data depend on other modules that are writers of the same shared data. Thus, shared data with multiple writer modules produce mutual dependencies and increase module connectivity.

    Environmental Dependency: "P does not directly invoke Q and does not share a data structure with Q but nevertheless depends upon Q's correct functioning." [358]

    "One example is the dependency of most of the system on the interrupt handling subsystem. Although this is not generally called directly from the kernel, much of the kernel depends on its correct operation (e.g., properly restoring processor state at the end of an interrupt service routine) in order for the kernel to fulfill its specifications. ... In practice, we did not find that environmental dependencies presented many structural problems." [358]

    Service dependencies are more desirable than data dependencies because service dependencies are explicit; if all dependencies are service dependencies, then the system call graph (the graph of what invokes what), which is usually explicit and easy to compute, represents all dependencies. By introducing information-hiding modules [281, 286, 59] throughout a system, where system data is partitioned among modules and accessed only via function (subroutine, procedure) calls, each data dependency can be converted into a service dependency.
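    A minimal, hypothetical before/after sketch in C of this conversion (the variable and function names are invented): a mutual data dependency through a shared global becomes a one-way service dependency once the datum is made the secret of an information-hiding module.

        /* Before: modules P and Q both read and write a shared global,
         * so each depends on the other to maintain its integrity.          */
        int shared_counter;                 /* visible and writable everywhere */

        /* After: the counter is the secret of one module (counter.c); P and
         * Q now have only service dependencies on its public functions.    */
        static int counter;                 /* hidden representation */

        void counter_increment(void) { counter++; }
        int  counter_value(void)     { return counter; }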

    B.2.6 Using Dependencies for Structural Analysis of Software Systems

    For structural analysis, it is desirable to represent correctness dependencies between system modules with the "contains" and the "uses" relations (and graphs). As seen above, the "contains" relation among modules is unambiguously defined by syntactic analysis. In contrast, the "uses" relation can be defined in three ways: (1) as representing all correctness dependencies; (2) as representing only service and data dependencies; or (3) as representing only service dependencies.

    Fundamentally, there is no difference between service and data dependencies, since both are correctness dependencies. Further, data dependencies can (and should) be converted to service dependencies if we drive the structure toward desirable information hiding. To simplify system structure, we need to minimize correctness dependencies and eliminate all circular dependencies. To do this, we first minimize data dependencies, because they contribute to circular dependencies, and then we remove the remaining circular dependencies. The resulting measurable goals are eliminating global variables, achieving an acyclic structure, and minimizing the cardinality of the "uses" relation. If this "uses" relation represents all system correctness dependencies and if its graph is cycle-free, then showing correctness of the system parts in a bottom-up order (the reverse of a topological sort of the "uses" graph) leads to correctness of the system.
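    The bottom-up ordering mentioned above is mechanical to compute. The following self-contained C sketch (with an invented four-module example loosely patterned on the File Subsystem figures) takes the "uses" relation as an adjacency matrix, reports any cycle, and otherwise prints a bottom-up verification order.

        #include <stdbool.h>
        #include <stdio.h>

        #define N 4   /* number of leaf modules in this invented example */

        /* uses[a][b] is true when module a uses module b. */
        static const bool uses[N][N] = {
            { false, true,  true,  true  },   /* syscalls uses dir, flat, acl */
            { false, false, true,  false },   /* dir uses flat                */
            { false, false, false, true  },   /* flat uses acl                */
            { false, false, false, false },   /* acl uses nothing             */
        };
        static const char *name[N] = { "syscalls", "dir", "flat", "acl" };

        int main(void)
        {
            int remaining[N];          /* modules used but not yet verified */
            bool done[N] = { false };
            for (int a = 0; a < N; a++) {
                remaining[a] = 0;
                for (int b = 0; b < N; b++)
                    remaining[a] += uses[a][b];
            }
            printf("bottom-up verification order:");
            for (int emitted = 0; emitted < N; emitted++) {
                int pick = -1;         /* a module that uses nothing unverified */
                for (int a = 0; a < N; a++)
                    if (!done[a] && remaining[a] == 0) { pick = a; break; }
                if (pick < 0) {
                    printf("\ncycle detected in the \"uses\" graph\n");
                    return 1;
                }
                done[pick] = true;
                printf(" %s", name[pick]);
                for (int a = 0; a < N; a++)        /* retire edges into pick */
                    if (!done[a] && uses[a][pick])
                        remaining[a]--;
            }
            printf("\n");
            return 0;
        }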

    In practice, it is not necessarily possible, nor desirable [22], to eliminate all structural imperfection (i.e., all globals, some cyclic structure). Cycle-freedom of the "uses" graph is not a precondition of system correctness; we can scrutinize each cycle on a case-by-case basis to understand and explain correctness, rather than removing cycles by rethinking system structure or by duplicating certain code (e.g., by "sandwiching"). Also, explicit function calls may not represent all correctness dependencies. Implicit correctness dependencies, which include shared memory and sharing through globals and timing dependencies, may or may not be problematical.

    B.3 Module Packaging

    A module is separable from the whole and packageable. We distinguish between "module" and "package"; a module is a logical container of a system part, whereas a package is a physical container of a system part. If there is not a strong reason to the contrary, each module should have a separate package. The modules of a system should be manifest (i.e., obvious) from the packaging. For a software system, the module interface and module implementation should be in separate packages, or there should be a well-defined reason why not.

    EXAMPLE: Packaging the Secure Xenix(TM) Kernel

    To make the nature of each function in the Secure XENIX kernel more conspicuous, one can add the following descriptive prefixes before function names (a brief sketch of one way to realize them in C follows the list):

    SYSTEM_CALL,

    PUBLIC, and

    PRIVATE.
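    One hypothetical way to realize these markers in C (the macro definitions and function names below are invented for illustration) is as empty or near-empty macros, which make the intended visibility of each function conspicuous at its definition site, with PRIVATE additionally enforced by the compiler:

        #define SYSTEM_CALL            /* entry point invocable from user space */
        #define PUBLIC                 /* invocable from other kernel modules   */
        #define PRIVATE static         /* confined to this file; enforced by C  */

        SYSTEM_CALL int fs_open(const char *path, int flags);
        PUBLIC      int fs_lookup_inode(const char *path);
        PRIVATE     int fs_hash_name(const char *name) { return name[0] % 64; }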

    Also, to make more explicit the modules of the Secure Xenix(TM) kernel, one can add a subsystem-identifying prefix to each module name, for example, fs_dir.c would indicate that the directory manager is part of the File Subsystem ("fs"). Another way to make more explicit the modules of a kernel is to represent each major module (or subsystem) as a subdirectory. For example, the modules of the Secure Xenix(TM) kernel can be packaged as subdirectories of the directory kernel/, as follows:

    * kernel/conf Configuration Subsystem

    * kernel/dd Device Drivers (part of I/O Subsystem)

    * kernel/fp Floating Point Support

    * kernel/file File Subsystem

    * kernel/i Interfaces (.i files) to Kernel Modules

    * kernel/init Initialization Subsystem

    * kernel/io Low-Level I/O Support (part of I/O Subsystem)

    * kernel/ipc IPC Subsystem

    * kernel/memory Memory (Management) Subsystem

    * kernel/misc Miscellaneous Functions Subsystem

    * kernel/process Process Subsystem

    * kernel/syscall System Call (Processing) Subsystem

    * kernel/security Security Subsystem

    Examples of Modularity and System Packaging Defects

    The definition of a module in Section B.2.1 allows us to derive certain measures of modularity, or of modularity defects. Most modularity defects arise from unrestricted use of global variables, which makes the understanding of system structure difficult [383] and makes module replacement dependent on other modules.

    B.4 Visibility of System Structure Using Modules

    System modular structure also becomes visible by examining (1) the design abstractions used within would-be modules, (2) the hiding of information (i.e., data) within the would-be modules, and (3) the use of the would-be modules within systems layers.

    B.4.1 Design Abstractions within Modules

    Data abstraction, together with the use of other design abstractions, such as functional and control abstractions, significantly enhances the ability to identify a system's modules and to structure a system into sets of (ordered) layers. As a result, the visibility of system properties and their formal analysis become possible.

    For illustration, we differentiate six forms of abstraction that are typically implemented by a system's modules:

    * functional abstraction,

    * data abstraction,

    * control abstraction,

    * synchronization abstraction,

    * interface abstraction, and

    * implementation abstraction.

    A module implements a functional abstraction if the output of the module is a pure mapping of the input to the module. That is, the module maintains no state. The module always produces the same output, if given identical input. The primary secret of a functional abstraction is the algorithm used to compute the mapping.

    A data abstraction is "a description of a set of objects that applies equally well to any one of them. Each object is an instance of the abstraction" (cf. Britton and Parnas [59]). A module implements a data abstraction if it hides properties of an internal data structure. The interface of a data abstraction module can export a transparent type or an opaque type. A transparent type allows visibility (reading) of its internal fields, whereas an opaque type does not. A transparent type is typically represented as a dereferenceable pointer to the object, whereas an opaque type is typically represented by a "handle" on the object, a nondereferenceable "pointer" like a row number in a private table or a "capability."
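    A small, hypothetical C contrast of the two kinds of exported types (the type and function names are invented): a transparent type exposes its fields for direct reading, whereas an opaque type exports only a handle and keeps the representation a secret of the implementing module.

        /* Transparent type: clients may read the fields directly. */
        typedef struct {
            int x;
            int y;
        } point;

        /* Opaque type: only the implementation file defines struct file_obj;
         * clients manipulate it solely through the module's public functions. */
        typedef struct file_obj file_obj;

        file_obj *file_open(const char *path);
        int       file_read(file_obj *f, void *buf, int nbytes);
        void      file_close(file_obj *f);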

    A module implements a control abstraction if it hides the order in which events occur. The primary secret of a control abstraction is the order in which events occur and the algorithms used to determine the order (e.g., scheduling algorithms).

    A module implements a synchronization abstraction if it encapsulates all synchronization primitives necessary for its concurrent execution [164]. The primary role of the module as a synchronization abstraction is to hide the details of these primitives (e.g., mutual exclusion, conditional wait, signals) from the module user and to restrict the scope of correctness proofs to the module definition (to the largest possible extent).

    A module implements an interface abstraction if, in the words of [59], it "represents more than one interface; it consists of the assumptions that are included in all of the interfaces that it represents." An operating system may contain an X-interface table, where each row is an interface (e.g., pointers to public functions) to an object of type X. As examples, the operating system may have an I/O device-interface table, a communication protocol-interface table, and a filesystem-interface table.

    A module includes an implementation abstraction if it represents the implementation of more than one module. For example, if an operating system contains a number of similarly structured tables with similar public functions, then it may be possible to represent all such tables with one implementation schema (or abstract program).

    In [179], Janson defines the concept of "type extension" as a hierarchy of data abstractions. The idea is to build abstract data types atop one another by defining the operations of a higher-level type in terms of the operations of lower-level types.

    B.4.2 Information Hiding as a Design Abstraction for Modules

    Information hiding [59, 281] is a software decomposition criterion. Britton and Parnas in [59] give the following description of information hiding.

    "According to this principle, system details that are likely to change independently should be the secrets of separate modules; the only assumptions that should appear in the interfaces between modules are those that are considered unlikely to change. Every data structure is private to one module; it may be directly accessed by one or more programs within the module but not by programs outside the module. Any other [external] program that requires information stored in a module's data structures must obtain it by calling module programs [public functions]." ...

    "Three ways to describe a module structure based on information-hiding are (1) by the roles played by the individual modules in the overall system operation; (2) by the secrets associated with each module; and (3) by the facilities [public functions] provided by each module. ..."

    "For some modules we find it useful to distinguish between a primary secret, which is hidden information that was specified to the software designer, and a secondary secret, which refers to implementation decisions made by the designer when implementing the module designed to hide the primary secret."

    In general, each module should hide an independent system-design decision. If a table (with related data) is involved, for example, a table manager module mediates all access to that table and it hides the representation of that table. The module secrets are the facts about the module that are not included in its interface -- that is, the assumptions that client programs are not allowed to make about the module. The correctness of other modules must not depend on these facts.

    Note that a system entirely based on "information hiding" is always modular.

    B.4.3 Layering as a Design Abstraction Using Modules

    A layer is a module. We say that a system is layered if the "uses" graph of its leaf modules is a linear order (a reflexive, antisymmetric, transitive, and total relation). (If the layer is actually a collection of modules, then the linear order is on the layers rather than on the individual modules, although a lattice ordering could be used instead.)

    Pictorially, we represent a layered system with horizontal stripes or bands, with one stripe per layer. Also, we typically show only the transitive reduction (i.e., remove all transitively implied arcs) of the "uses" graph. "Traditionally, a layer is thought of as providing services to the layer above, or the user layer. The user has some mechanism for invoking the layer, such as a procedure call. The layer performs the service for its user and then returns. In other words, service invocation occurs from the top down." [81] We define layering as a system-structuring (organizing) principle in which system modules are partitioned into groups such that the "uses" graph of the system module groups (the layers) is a linear order (although it could also be viewed as a partial order, e.g., in the form of a lattice). Classical layering permits only downcalls, not upcalls. Some experience suggests that upcalls can be valuable (as long as security and integrity are not violated).

    "In classical layering, a lower layer performs a service without much knowledge of the way in which that service is being used by the layers above. Excessive contamination of the lower layers with knowledge about upper layers is considered inappropriate, because it can create the upward dependency which layering is attempting to eliminate. ... It is our experience, both with Swift and in other upcall experiments that we have done, that the ability to upcall in order to ask advice permits a substantial simplification in the internal algorithms implemented by each layer." [81]

    We understand the phrase "layers of abstraction" to mean simply type extension in which the hierarchy is a layering.

    Two popular types of layering in an operating system are policy/mechanism layering and hardware-independent/hardware-dependent (HI/HD) layering, also called device-independent/device-dependent (DI/DD) layering; the latter was a fundamental part of the Multics input-output system architecture (see also [358]). The idea of policy/mechanism layering is that design decisions about system policies (or mechanisms) should tend to reside in higher (or lower) layers. The rationale for HI/HD layering, with the HI layer above the HD layer, is to localize machine-dependent modules and thereby simplify porting. In practice, these two layering criteria may not be compatible, since "some design decisions which one would be tempted to label as policies rather than mechanisms can be machine dependent" [358].

    B.5 Measures of Modularity and Module Packaging

    The goal of identifying simple and practical tools for establishing and verifying modularity properties requires that we define simple and practical measures for modularity. Below we define four classes of such measures, namely, for (1) replacement dependence, (2) global variables, (3) module reusability, and (4) component packaging. While we believe that these classes are important for modularity assessments, the specific examples of measures are provided only for illustrative purposes. Other specific measures may be equally acceptable in each class.

    B.5.1 Replacement Dependence Measures

    One way to define a modularity defect is by replacement dependence, a violation of our property P5 of a module. We define two replacement dependence measures below.

    Measure M1. We define modularity measure M1 on an "almost-module" m (satisfying properties P1-P4 but not P5 of the module definition of Section B.2.1.1) as the number of files that must be edited to replace the implementation of module m, less one (the minimum).

    Measure M2. We define modularity measure M2 on an "almost-module" m as the number of lines of source code, not in the "primary" implementation file, that must be edited to replace the implementation of module m.
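    To make M1 and M2 concrete, consider the following hypothetical C fragment (the file and variable names are assumptions), in which a client reaches past a queue module's interface into its representation:

        /* queue.c -- the representation, intended to be the module's secret */
        int queue_slots[64];
        int queue_head = 0;

        /* client.c -- violates replacement independence by naming the representation */
        extern int queue_slots[];
        extern int queue_head;
        int peek_next(void) { return queue_slots[queue_head]; }

    Replacing the array-based queue implementation with, say, a linked list would force edits to client.c as well, so M1(queue) = 1 (one file beyond the primary implementation file) and M2(queue) = 3 (the three client lines that must be rewritten).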

    B.5.2 Global Variable Measures

    Although global variables can be useful for system performance, they should be avoided whenever they can produce replacement dependence and extra-module correctness dependencies that are not represented by explicit service dependencies. It is tempting, albeit impractical, to argue that correctness should be determined only from explicit service dependencies and thus from module interfaces, not from data dependencies, and to conclude that all globals should be eliminated.

    We have defined two software modules as data dependent if they share a common global variable. A data dependency is sometimes harmless (safe and easy to understand) and sometimes harmful (unsafe and difficult to understand) [383]. We can define a hierarchy of module dependency (coupling), from very safe to very unsafe, with the following types of variables:

    (a) local to a function,

    (b) formal of a function,

    (c) global (but private) to one module,

    (d) global with one writer module (and many reader modules),

    (e) global with a few writer modules (and many reader modules) and a well-defined use discipline,

    (f) global with many writer modules (and many reader modules) and a well-defined use discipline, and

    (g) global with many writer modules (and many reader modules) and an ill-defined (or undefined) use discipline.

    Variables of types (a) and (b) are safe, while a global variable of type (g) is unsafe. In general, (a) is safer than (b), which in turn is safer than (c), and so on; (g) is the most unsafe. Module independence is valuable because one can understand (and thus replace, fix, or evolve) a module by understanding only its interface and what it uses. Conversely, a module dependency is undesirable when one cannot understand (and thus replace, fix, or evolve) a module without understanding the implementations of other modules. In this sense, a module including global variables of type (d) is easier to understand than one including a global variable of type (e); a module including global variables of type (e) is easier to understand than one including a global variable of type (f); and a module including global variables of type (g) is virtually impossible to understand. By "use discipline" we mean "correctness rule."

    If a global variable can and should be converted to a local of one module, a formal of one or more public functions, or a local of a public function, then this new scope is generally better than its old scope as a global. In general, use of formals is a better programming discipline than use of informals (globals, environment variables). For one thing, it makes the function parameters explicit, which makes the functions simpler to understand and simpler to evolve (e.g., into a remote procedure call). For another, under recursion the use of formals is less error-prone than the use of informals: care must be taken to save the current informal parameters before a recursive call and to restore them after the call.
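    A small hedged C sketch (hypothetical names) of the conversion argued for above, turning a global "informal" parameter into a formal one:

        /* Before: the depth is communicated through a global (an "informal" parameter). */
        int search_depth;                                /* written by callers, read here */
        int search_global(int key) { return key % (search_depth + 1); }

        /* After: the same value passed as a formal; the dependence is explicit in the
           interface, and recursion needs no save/restore of a shared variable. */
        int search(int key, int depth) {
            if (depth == 0) return key;
            return search(key % (depth + 1), depth - 1); /* each activation has its own depth */
        }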

    Measure M3. We define measure M3 on an "almost-module" m as the number of globals that it writes that are also written by other modules or almost-modules.

    B.5.3 Module Reusability Measures

    Another way to define a modularity defect is as a reuse impediment, or a violation of our property P6 of a module. The major technical source of module-reuse impediments is the violation of compatibility [134] between a module's interface and its new environment of use. Whenever this impediment materializes, we say that the module cannot be composed within its environment of use. In particular, we are interested in the amount of extra (correct) code that has to be written and the amount of administrative effort that has to be expended to remove interface incompatibility. We define four measures of reuse impediments below. These measures can also be viewed as simple estimates of the ability to compose modules.

    For all the measures below we assume the module being reused satisfies properties P1-P5 but not P6 of the module definition in Section B.2.1.1.

    Measure M4. We define modularity measure M4 on an "almost-reusable" module m as the number of exception handlers that must be written to cover all the exceptions signaled by the reused module.

    Measure M5. We define modularity measure M5 on an "almost-reusable" module m as the number of resource allocation and deallocation instructions that must be written to enable the execution of the reused module, less two (the minimum).

    Measure M6. We define modularity measure M6 on an "almost-reusable" module m as the number of lines of code that must be written to satisfy the module invocation discipline (i.e., type matching and coercion, if necessary, and setting constants and global variables) to enable the execution of the reused module, less the number of formal parameters (the minimum).

    Measure M7. We define modularity measure M7 on an "almost-reusable" module m as the number of permissions/system privileges that must be granted to enable the execution of the reused module, less one (the minimum required to invoke the module).

    Clearly, zero extra code and zero administrative effort are best for all measures, but a small number of extra programs (e.g., one to three) and a modest administrative effort are acceptable, provided that this is a one-time requirement.
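    The following hedged C sketch (the reused module codec_transform and all other names are hypothetical) shows the kind of glue code counted by M4-M6 when a reused module is dropped into a new environment; M7 would correspond to any privileges that must additionally be granted.

        #include <stdlib.h>

        /* Assumed interface of the reused module; negative return values signal exceptions. */
        extern int codec_transform(char *buf, long len);

        int use_codec(const unsigned char *in, unsigned len) {
            char *buf = malloc(len);                  /* M5: one allocation ...                */
            if (buf == NULL) return -1;               /* M4: handler for the allocation fault  */
            for (unsigned i = 0; i < len; i++)        /* M6: coercion demanded by the reused   */
                buf[i] = (char)in[i];                 /*     module's invocation discipline    */
            int rc = codec_transform(buf, (long)len);
            if (rc < 0) { free(buf); return -2; }     /* M4: handler for a signaled exception  */
            free(buf);                                /* M5: ... and one deallocation          */
            return rc;
        }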

    B.5.4 Component-Packaging Measures

    Component-packaging defects can also be evaluated using measures of how closely the module package reflects the module definition. Since packaging is not a modularity property by our definition, packaging-defect measures are not necessarily modularity measures. However, packaging-defect measures are useful in evaluating visibility of system structures by standard tools (e.g., browsers). The examples of packaging-defect measures provided below are only for illustrative purposes. Other measures may be equally acceptable.

    Measure M8: Assume that a leaf module has one implementation file and one interface file. We define packaging measure M8 to be the number of files (e.g., .c files, assuming C) over which the implementation of a leaf module is spread, plus the number of files over which the interface of a leaf module is spread, less two (the minimum).

    M8 is a measure of packaging defects with granularity of a per-file count. For a module m with exactly one implementation file and exactly one interface file, M8(m) = 0. In general, for system s, and measure M, M(s) is the summation of M(m) for each leaf module m.

    Measure M9: We define packaging measure M9 similarly to measure M8, except that we count the number of source lines of code (SLOC) that are in neither the "primary" implementation file nor the "primary" interface file.

    Additional packaging-defect measures can be defined as a class of approximate measures of modularity.

    B.6 Cost Estimates for Modular Design

    Different system designers may argue over the specific costs of different features of modular design, but few would disagree with the following "less expensive than" (<) and "included in" (<<) orders:

    "Just Code It," < Module with Properties P1-P4 < Replacement Independence (P5) < Reusability (P6); and

    "Just Code It," << Module with Properties P1-P4 << Replacement Independence (P5) << Reusability (P6).

    In fact, the first seven modularity measures presented above illustrate the different types of complexity involved in modular design and provide intuition for the above cost and feature-inclusion ordering. Specific cost figures for modularity are hard to come by, as insufficient experience is available with systems designs where modularity is an important requirement. However, based on (1) experience with Secure Xenix(tm) [115] (aka Trusted Xenix(tm)), the only Unix(tm) system to achieve a security level where modularity was required (i.e., TCSEC level B2), and (2) experience reported by Lampson [199], we can estimate the relative costs using a "Just Code It" (JCI) unit of modularity currency.

    Lampson estimates the costs for a "good module for your system" and "reusable component." We approximate a "good module for your system" with modularity properties P1-P4 and "reusable component" with property P6. Using these estimates and our approximation, we obtain the following cost ranges:

    Cost of Module with Properties P1-P4 (i.e., "good module for your system") = 0.5 JCI - 2 JCI,
    depending on how many of the properties P1 - P4 are desired (or "how lucky you are" [199]) and

    Cost of Module Reusability (P6) = 3 JCI - 5 JCI.

    For the purposes of this study, it seems important to be able to estimate the cost of the module "replacement independence" property (P5). The cost ordering presented above allows us to interpolate the cost estimate for this property. Since

    Module with Properties P1-P4 < Replacement Independence (P5) < Reusability (P6),
    the above estimates suggest that

    Cost of a Module's Replacement Independence (P5) = 2 JCI - 3 JCI.

    However, is our approximation of a "good module for your system" with module properties P1-P4 valid? We attempt to validate this approximation using cost estimates from Secure Xenix(tm), where modularity properties P1-P4 were satisfied. The Secure Xenix(tm) costs can be split roughly as follows: 1/3 design cost (including modularity properties P1-P4, but excluding the testing and documentation required for P6), 1/3 assurance cost (including testing to the level required for P6), and 1/3 documentation cost (including everything that a user and an evaluator might want for level B2, and hence including the documentation required for P6). These estimates suggest the following approximate relationship:

    Cost of Module with Properties P1-P4 = 0.33 of Cost of Module with Properties P1-P6.
    Hence, using the above cost estimates, we obtain:

    Cost of a Module with Properties P1 - P4 = 1 JCI - 1.67 JCI,
    which is consistent with Lampson's estimate that

    "good module for your system" = 0.5 JCI - 2 JCI.

    We stress that the above estimates of modularity costs are very rough. However, they appear to be consistent with each other for modularity properties P1 - P4. Further, the requirements of property P6 seem to be consistent with modularity requirements of TCSEC level B2-B3. Hence these cost estimates could be further validated using examples of systems rated at those levels.

    B.7 Tools for Modular Decomposition and Evaluation

    A variety of tools (e.g., algorithms, methods, programs) for modular system decomposition have been proposed during the past dozen years. The motivation for the development of these tools has been driven primarily by the need to improve the understandability and maintainability of legacy software, and to a lesser extent to enable module reusability. Few of these tools were motivated directly by concerns of module replacement independence and correctness, and consequently few support formal dependency analyses among identified modules.

    In general, most tools for modularization can be divided into two broad classes, namely, those based on (1) clustering of functions and data structures according to different modularity criteria, and (2) concept analysis, an application of a lattice-theoretic notion to groups of functions and function attributes (e.g., access to global variables, formal parameters, and returned types). The primary difference between the two classes is that the former uses metrics of function cohesion and coupling directly, whereas the latter relies mostly on semantic grouping of functions that share a set of attributes. Both approaches have advantages and disadvantages. For example, although clustering is based on well-defined metrics (which overlap with, but are not necessarily identical to, M1-M7 above) and always produces modular structures, it does not necessarily provide a semantic characterization of the modules produced, and it often reveals only a few module-design characteristics. In contrast, concept analysis helps characterize the modules recovered from source code semantically, but it does not always lead to easily identifiable modules (e.g., nonoverlapping groupings of program entities that cover all functions of a program). Neither approach is designed to characterize correctness dependencies among modules (e.g., for systems analysis), and neither is intended to address the properties of systems obtained by modular composition (e.g., for systems synthesis). We note that measures M1-M7 suggested above could be applied equally well to the modular structures produced by either approach. In the balance of this appendix, we provide representative examples of tools developed for modularity analysis using each approach.

    B.7.1 Modularity Analysis Tools Based on Clustering

    The first class of clustering tools comprises those developed for the identification of abstract types and objects, and their modules, in source code written in a non-object-oriented language such as C. The modular structure produced by these tools is partial, as it represents only the use of data abstractions in source code. Other modularity structures manifest in the use of other abstractions (e.g., functional, control, synchronization) are not addressed. All tools of this class recognize abstract-object instances by clustering functions that access common global variables and formal parameters, whether returned [211] or received [65]. Relations among the identified components (e.g., call and type relations [65], relations of procedures to internal fields of structures [384]) are used to build dominance trees or graphs, and dominance analysis is used to organize the abstract type/object modules hierarchically into subsystems [132]. A typical problem with this class of tools is that of "coincidental links," caused by procedures that implement more than one function, and "spurious links," caused by functions that access data structures implementing more than one object type. Both types of links lead to larger-than-desirable clusterings of functions.

    A second class of clustering tools is based on measuring high internal cohesion and low external coupling (e.g., few inter-module global variables and functions). Tools of this class define cohesion principles and derive metrics based on those principles. For example, the measure of "similarity" among procedures defined for the ARCH tool [346] is derived from the "information hiding" principle and used to extract modules with high internal cohesion and low external coupling. Further, genetic algorithms have been used by other tools (e.g., BUNCH [217]) to produce hierarchical clusterings of modules identified using cohesion and coupling metrics [216]. Clustering techniques have also been used to group module interconnections into "tube edges" between multiple modules that form subsystems [215].

    B.7.2 Modularity Analysis Tools Based on Concept Analysis

    The notion of concept analysis is defined in lattice theory as follows. Let X be a set of objects (an extent), Y a set of attributes (an intent), and R a binary relation between objects and attributes. A concept is a maximal collection of objects sharing common attributes. Formally, a concept is a pair of sets (X,Y) such that X = tau(Y) and Y = sigma(X), where sigma and tau are anti-monotone and extensive mappings over R (i.e., they form a Galois connection). A mapping such as sigma is said to be anti-monotone if X1 ⊆ X2 implies sigma(X2) ⊆ sigma(X1). The mappings sigma and tau are said to be extensive if X ⊆ tau(sigma(X)) and Y ⊆ sigma(tau(Y)). Further, a concept (X0, Y0) is a subconcept of a concept (X1, Y1) if X0 ⊆ X1 or, equivalently, Y1 ⊆ Y0. The subconcept relation forms a partial order over the set of concepts, leading to the notion of a concept (Galois) lattice once an appropriate top and bottom are defined. A fundamental theorem of concept lattices relates subconcepts and superconcepts, and allows the least common superconcept of a set of concepts to be computed by intersecting the intents and finding the common objects of the resulting intersection.
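    As a small worked example (the functions f1, f2, f3 and attributes a1, a2 are hypothetical), let a1 mean "accesses global g" and a2 mean "returns type t", with f1 having a1, f2 having a1 and a2, and f3 having a2. Then the concepts of this relation are

        \[
        R:\quad f_1 \mapsto \{a_1\}, \qquad f_2 \mapsto \{a_1, a_2\}, \qquad f_3 \mapsto \{a_2\}
        \]
        \[
        (\{f_1,f_2,f_3\},\,\emptyset),\quad (\{f_1,f_2\},\,\{a_1\}),\quad (\{f_2,f_3\},\,\{a_2\}),\quad (\{f_2\},\,\{a_1,a_2\})
        \]

    For instance, tau({a1}) = {f1, f2} and sigma({f1, f2}) = {a1}, so ({f1, f2}, {a1}) is a concept. The least common superconcept of ({f1, f2}, {a1}) and ({f2, f3}, {a2}) is obtained by intersecting their intents (which yields the empty set) and taking the objects common to that intersection, giving the top concept ({f1, f2, f3}, empty intent).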

    Concept lattices have been used in tools that identify conflicts in software-configuration information (e.g., as in the RECS tool [356]). During the past half a dozen years, they have been used to analyze modularity structures of programs written in programming languages that do not support a syntactic notion of a module (e.g., Fortran, Cobol, C) [206, 353]. More recently, they have also been used to identify hierarchical relationships among classes in object-oriented programming  [136, 137, 357].

    The use of concept analysis in identifying modular structures of source code requires (1) the selection of an object set (i.e., a function set) and an attribute set (i.e., function characteristics such as a function's access to global variables, the types of its formal parameters, its returned types, and so on); (2) the construction of the concept lattice; and (3) the definition of concept partitions and subpartitions as nonoverlapping groupings of program entities (that is, functions and attributes). Note that if concept partitions are used to represent modules, they must be complete; that is, they must cover the entire object (i.e., function) set [353]. The notion of subpartitions was introduced to remove the completeness restriction, which sometimes leads to module overlaps caused by artificial module enlargements (to cover the function set) [366].

    Concept analysis applied to modularity analysis has the advantage of flexible determination of modules. That is, if the proposed modularization is too fine-grained, moving up the (sub)partition lattice allows a coarser granularity of modules to be found. Conversely, if the proposed modularization is too coarse, additional attributes can be added to identify finer-granularity modules. Similar flexibility in clustering can be achieved in a more complex manner, namely, by adding metrics to, or removing metrics from, the set used by clustering analysis. Concept analysis also has the advantage that the resulting modularity has a fairly precise semantic characterization.

    B.8 Virgil Gligor's Acknowledgments

    Much of my understanding of "modularity" is based on joint work with Matthew S. Hecht on defining practical requirements for modular structuring of Trusted Computing Bases in the late 1980s. Figures 1-5 were generated by the Secure Xenix modularity study led by Matthew.

    Virgil D. Gligor

     

    References

     [1]
    M. Abadi, A. Banerjee, N. Heintze, and J.G. Riecke. A core calculus of dependency. In POPL '99, Proceedings of the 26th SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 147-160, San Antonio, Texas, January 20-22 1999.
     [2]
    M. Abadi and A.D. Gordon. A calculus for cryptographic protocols: The Spi calculus. Technical report, Digital Equipment Corporation, SRC Research Report 149, Palo Alto, California, January 1998.
     [3]
    M. Abadi and L. Lamport. Composing specifications. In J.W. de Bakker, W.-P. de Roever, and G. Rozenberg, editors, Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness, pages 1-41, REX Workshop, Mook, The Netherlands, May-June 1989. Springer-Verlag, Berlin, Lecture Notes in Computer Science, vol. 230.
     [4]
    M. Abadi and R. Needham. Prudent engineering practice for cryptographic protocols. Technical report, Digital Equipment Corporation, SRC Research Report, Palo Alto, California, June 1994.
     [5]
    R.P. Abbott et al. Security analysis and enhancements of computer operating systems. Technical report, National Bureau of Standards, 1974. Order No. S-413558-74.
     [6]
    H. Abelson, R. Anderson, S.M. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, P.G. Neumann, R.L. Rivest, J.I. Schiller, and B. Schneier. The risks of key recovery, key escrow, and trusted third-party encryption. (http://www.cdt.org/crypto/risks98/), June 1998. This is a reissue of the May 27, 1997 report, with a new preface evaluating what happened in the intervening year.
     [7]
    M.D. Abrams and M.V. Joyce. Composition of trusted IT systems. Technical report, MITRE, September 1992. Draft.
     [8]
    N. Abramson and F.F. Kuo (editors.). Computer-Communication Networks. Prentice-Hall, 1971.
     [9]
    J. Adamek. Foundations of Coding: Theory and Applications of Error-Correcting Codes with an Introduction to Cryptography and Information Theory. Wiley-Interscience, 1991.
     [10]
    M. Adler. Tradeoffs in probabilistic packet marking for IP traceback. In Proceedings of the Thirty-fourth Annual ACM Symposium on Theory of Computing, pages 407-418, 2002.
     [11]
    P.E. Agre and M. Rotenberg, editors. Technology and Privacy: The New Landscape. MIT Press, Cambridge, Massachusetts, 1997.
     [12]
    J.H. An, Y. Dodis, and T. Rabin. On the security of joint signature and encryption. In Advances in Cryptology, EUROCRYPT 2002, Amsterdam, The Netherlands, Springer-Verlag, Berlin, Lecture Notes in Computer Science, pages 83-107, May 2002.
     [13]
    R. Anderson and M. Kuhn. Tamper resistance -- a cautionary note. In Proceedings of the Second Usenix Workshop on Electronic Commerce, pages 1-11. USENIX, November 1996.
     [14]
    R.J. Anderson. Security Engineering: A guide to Building Dependable Distributed Systems. John Wiley and Sons, New York, 2001.
     [15]
    T. Anderson and J.C. Knight. A framework for software fault tolerance in real-time systems. IEEE Transactions on Software Engineering, SE-9(3):355-364, May 1983.
     [16]
    T. Anderson and P.A. Lee. Fault-Tolerance: Principles and Practice. Prentice-Hall International, Englewood Cliffs, New Jersey, 1981.
     [17]
    A.W. Appel and D.B. MacQueen. Standard ML of New Jersey. In Programming Language Implementation and Logic Programming, Lecture Notes in Computer Science vol. 528, pages 1-26, Berlin, 1991. Springer-Verlag.
     [18]
    W.A. Arbaugh, D.J. Farber, and J.M. Smith. A secure and reliable bootstrap architecture. In Proceedings of the 1997 Symposium on Security and Privacy, pages 65-71, Oakland, California, May 1997. IEEE Computer Society.
     [19]
    W.A. Arbaugh, A.D. Keromytis, D.J. Farber, and J.M. Smith. Automated recovery in a secure bootstrap process. In Proceedings of the 1998 Network and Distributed System Security Symposium, San Diego, California, March 1998. Internet Society.
     [20]
    K. Ashcraft and D. Engler. Detecting lots of security holes using system-specific static analysis. In Proceedings of the 2002 Symposium on Security and Privacy, pages 143-159, Oakland, California, May 2002. IEEE Computer Society.
     [21]
    D. Asonov and R. Agrawal. Keyboard acoustic emanations. In Proceedings of the 2004 Symposium on Security and Privacy, pages 3-11, Oakland, California, May 2004. IEEE Computer Society.
     [22]
    M.S. Atkins. Experiments in SR with different upcall program structures. ACM Transactions on Computer Systems, 6(4):365-392, November 1988.
     [23]
    Numerous authors. Automated software engineering (special section). ERCIM News, (58):12-51, July 2004.
     [24]
    A. Avizienis and J-C. Laprie. Dependable computing: From concepts to design diversity. Proceedings of the IEEE, 74(5):629-638, May 1986.
     [25]
    A. Avizienis and J. C. Laprie, editors. Dependable Computing for Critical Applications, volume 4 of Dependable Computing and Fault-Tolerant Systems, Santa Barbara, California, August 1989. Springer-Verlag, Vienna, Austria.
     [26]
    A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE Transactions on Dependable and Secure Computing, 1(1):11-33, January-March 2004.
     [27]
    M. Backes and B. Pfitzmann. A cryptographically sound security proof of the Needham-Schroeder-Lowe public-key protocol. In 23rd Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), Mumbai, India, December 2003.
     [28]
    M. Backes, B. Pfitzmann, and M. Waidner. A universally composable cryptographic library with nested operations. In Tenth ACM Conference on Computer and Communications Security, Washington, D.C., October 2003. ACM.
     [29]
    P. Baran. Reliable digital communications systems using unreliable network repeater nodes. Technical Report P-1995, The RAND Corporation, May 27 1960.
     [30]
    J. Barnes. High Integrity Software: The SPARK Approach to Safety and Security. Addison-Wesley, Reading, Massachusetts, 2003. Reviewed in RISKS-23.01.
     [31]
    L. Bass, P. Clements, and R. Kazman. Software Architecture in Practice. Addison-Wesley, Reading, Massachusetts, 1998.
     [32]
    L. Bauer, A.W. Appel, and E.W. Felten. Mechanisms for secure modular programming in Java. Software-Practice and Experience, 33:461-480, 2003.
     [33]
    K. Beck. Extreme Programming Explained: Embrace Change. Addison-Wesley, Reading, Massachusetts, 1999. (http://www.extremeprogramming.org).
     [34]
    R. Bejtlich. The Tao of Network Security Monitoring. Addison-Wesley, Reading, Massachusetts, 2004.
     [35]
    D.E. Bell and L.J. La Padula. Secure computer system: Unified exposition and Multics interpretation. Technical Report ESD-TR-75-306, The Mitre Corporation, Bedford, Massachusetts, March 1976.
     [36]
    L.A. Benzinger, G.W. Dinolt, and M.G. Yatabe. Combining components and policies. In Proceedings of the Computer Security Foundations Workshop VII, J. Guttman, editor, June 1994.
     [37]
    L.A. Benzinger, G.W. Dinolt, and M.G. Yatabe. Final report: A distributed system multiple security policy model. Technical report, Loral Western Development Laboratories, report WDL-TR00777, San Jose, California, October 1994.
     [38]
    P.L. Bernstein. Against the Gods: The Remarkable Story of Risk. John Wiley & Sons, New York, 1996.
     [39]
    T.A. Berson and G.L. Barksdale Jr. KSOS: Development methodology for a secure operating system. In National Computer Conference, pages 365-371. AFIPS Conference Proceedings, 1979. Vol. 48.
     [40]
    T.A. Berson, R.J. Feiertag, and R.K. Bauer. Processor-per-domain guard architecture. In Proceedings of the 1983 IEEE Symposium on Security and Privacy, page 120, Oakland, California, April 1983. IEEE Computer Society. (Abstract only).
     [41]
    W.R. Bevier. Kit and the short stack. Journal of Automated Reasoning, 5(4):519-30, December 1989.
     [42]
    W.R. Bevier, W.A. Hunt, Jr., J S. Moore, and W.D. Young. An approach to systems verification. Journal of Automated Reasoning, 5(4):411-428, December 1989.
     [43]
    K.J. Biba. Integrity considerations for secure computer systems. Technical Report MTR 3153, The Mitre Corporation, Bedford, Massachusetts, June 1975. Also available from USAF Electronic Systems Division, Bedford, Massachusetts, as ESD-TR-76-372, April 1977.
     [44]
    R. Bisbey II, J. Carlstedt, and D. Chase. Data dependency analysis. Technical Report ISI/SR-76-45, USC Information Sciences Institute (ISI), Marina Del Rey, California, February 1976.
     [45]
    R. Bisbey II and D. Hollingworth. Protection analysis: Project final report. Technical report, USC Information Sciences Institute (ISI), Marina Del Rey, California, 1978.
     [46]
    R. Bisbey II, G. Popek, and J. Carlstedt. Protection errors in operating systems: Inconsistency of a single data value over time. Technical Report ISI/SR-75-4, USC Information Sciences Institute (ISI), Marina Del Rey, California, December 1975.
     [47]
    M. Bishop. Computer Security: Art and Science. Addison-Wesley, Reading, Massachusetts, 2002.
     [48]
    M. Bishop. Introduction to Computer Security. Addison-Wesley, Reading, Massachusetts, 2004.
     [49]
    B. Blanc. GATeL: Automatic test generation from Lustre descriptions. ERCIM News, (58):29-30, July 2004.
     [50]
    M. Blume and A.W. Appel. Hierarchical modularity. ACM Transactions on Programming Languages and Systems, 21(4):813-847, 1999.
     [51]
    C. Blundo, A. De Santis, G. Di Crescenzo, A.G. Gaggia, and U. Vaccaro. Multi-secret sharing schemes. In Advances in Cryptology: Proceedings of CRYPTO '94 (Y.G. Desmedt, editor), pages 150-163. Springer-Verlag, Berlin, LNCS 839, 1994.
     [52]
    Defense Science Board. Protecting the homeland, volume ii. Technical report, Defense Science Board Task Force on Defensive Information Operations 2000 Summer Study, March 2001.
     [53]
    W.E. Boebert and R.Y. Kain. A practical alternative to hierarchical integrity policies. In Proceedings of the Eighth DoD/NBS Computer Security Initiative Conference, Gaithersburg, Maryland, 1-3 October 1985.
     [54]
    D. Boneh, R.A. DeMillo, and R.J. Lipton. On the importance of checking cryptographic protocols for faults. Journal of Cryptology, 14(2):101-119, 1997.
     [55]
    P. Boudra, Jr. Minutes of the meetings of the system composition working group, volume 1. Technical report, National Security Agency, Information Systems Security Organization, Office of Infosec Systems Engineering, S9 Technical Report 6-92, Library No. S-239, 646, October 1992. For Official Use Only.
     [56]
    P. Boudra, Jr. Report on rules of system composition: Principles of secure system design. Technical report, National Security Agency, Information Systems Security Organization, Office of Infosec Systems Engineering, I9 Technical Report 1-93, Library No. S-240, 330, March 1993. For Official Use Only.
     [57]
    R.S. Boyer, B. Elspas, and K.N. Levitt. SELECT: A formal system for testing and debugging programs by symbolic execution. In Proc. Int. Conf. Reliable Software, pages 234-244. IEEE, April 1975.
     [58]
    R.S. Boyer and J S. Moore. A Computational Logic. Academic Press, New York, 1979.
     [59]
    K.H. Britton and D.L. Parnas. A-7E software module guide. Technical report, NRL Memorandum Report 4702, Naval Research Laboratory, Washington, D.C., December 1981.
     [60]
    J.E. Brunelle and D.E. Eckhardt, Jr. Fault-tolerant software: An experiment with the SIFT operating system. In Proceedings of the Fifth AIAA Computers in Aerospace Conference, pages 355-360, October 1985.
     [61]
    M. Burrows, M. Abadi, and R. Needham. A logic of authentication. ACM Transactions on Computer Systems, 8(1):18-36, February 1990.
     [62]
    R.W. Butler. An elementary tutorial on formal specification and verification using PVS. Technical report, NASA Langley Research Center, Hampton, Virginia, June 1993.
     [63]
    R.W. Butler, D.L. Palumbo, and S.C. Johnson. Application of a clock synchronization validation methodology to the SIFT computer system. In Digest of Papers, FTCS 15, pages 194-199, Ann Arbor, Michigan, June 1985. IEEE Computer Society.
     [64]
    Canadian Systems Security Centre, Communications Security Establishment, Government of Canada. Canadian Trusted Computer Product Evaluation Criteria, December 1990. Final Draft, version 2.0.
     [65]
    G. Canfora, A. Cimitile, M. Munro, and C. Taylor. Extracting abstract data types from C programs: A case study. In Proceedings of the International Conference on Software Maintenance, pages 200-209, September 1993.
     [66]
    R.A. Carlson and T.F. Lunt. The trusted domain machine: A secure communication device for security guard applications. In Proceedings of the 1986 Symposium on Security and Privacy, pages 182-186, Oakland, California, April 1986. IEEE Computer Society.
     [67]
    J. Carlstedt. Protection errors in operating systems: Validation of critical conditions. Technical Report ISI/SR-76-5, USC Information Sciences Institute (ISI), Marina Del Rey, California, May 1976.
     [68]
    J. Carlstedt, R. Bisbey II, and G. Popek. Pattern-directed protection evaluation. Technical Report ISI/SR-75-31, USC Information Sciences Institute (ISI), Marina Del Rey, California, June 1975.
     [69]
    A. Chander, D. Dean, and J.C. Mitchell. A state-transition model of trust management. In Proceedings of the 14th IEEE Computer Security Foundations Workshop, pages 27-43, Cape Breton, Nova Scotia, Canada, June 2001. IEEE Computer Society Technical Committee on Security and Privacy.
     [70]
    A. Chander, D. Dean, and J.C. Mitchell. Deconstructing trust management. In Proceedings of the 2002 Workshop on Issues in the Theory of Security, Portland, Oregon, January 2002. IFIP Working Group 1.7.
     [71]
    A. Chander, D. Dean, and J.C. Mitchell. A distributed high assurance reference monitor. In Proceedings of the Seventh Information Security Conference Lecture Notes in Computer Science vol. 3225, pages 231-244, Berlin, September 2004. Springer-Verlag.
     [72]
    A. Chander, D. Dean, and J.C. Mitchell. Reconstructing trust management. Journal of Computer Security, 12(1):131-164, January 2004.
     [73]
    D. Chaum. Secret-ballot receipts: True voter-verifiable elections. IEEE Security and Privacy, 2(1):38-47, January-February 2004.
     [74]
    H. Chen. Lightweight Model Checking for Improving Software Security. PhD thesis, University of California, Berkeley, 2004. http://www.cs.ucdavis.edu/~hchen/paper/phddis.ps.
     [75]
    H. Chen, D. Dean, and D. Wagner. Model checking one million lines of code. In Proceedings of the Symposium on Network and Distributed System Security, pages 171-185, San Diego, California, February 2004. Internet Society.
     [76]
    H. Chen and J. Shapiro. Using build-integrated static checking to preserve correctness invariants. In Proceedings of the Eleventh ACM Conference on Computer and Communications Security (CCS), Washington, D.C., November 2004.
     [77]
    H. Chen and D. Wagner. MOPS: An infrastructure for examining security properties of software. In Ninth ACM Conference on Computer and Communications Security, Washington, D.C., November 2002. ACM.
     [78]
    H. Chen, D. Wagner, and D. Dean. Setuid demystified. In Proceedings of the 11th USENIX Security 2002, pages 171-190, San Francisco, California, August 2002. USENIX.
     [79]
    B.V. Chess. Improving computer security using extended static checking. In Proceedings of the 2002 Symposium on Security and Privacy, pages 160-173, Oakland, California, May 2002. IEEE Computer Society.
     [80]
    W.R. Cheswick, S.M. Bellovin, and A.D. Rubin. Firewalls and Internet Security: Repelling the Wily Hacker, Second Edition. Addison-Wesley, Reading, Massachusetts, 2003.
     [81]
    D.D. Clark. The structuring of systems using upcalls. Operating Systems Review, pages 171-180, 1985.
     [82]
    D.D. Clark and D.R. Wilson. A comparison of commercial and military computer security policies. In Proceedings of the 1987 Symposium on Security and Privacy, pages 184-194, Oakland, California, April 1987. IEEE Computer Society.
     [83]
    D.D. Clark et al. Computers at Risk: Safe Computing in the Information Age. National Research Council, National Academy Press, 2101 Constitution Ave., Washington, D.C., 5 December 1990. Final report of the System Security Study Committee.
     [84]
    F.J. Corbató. On building systems that will fail (1990 Turing Award Lecture, with a following interview by Karen Frenkel). Communications of the ACM, 34(9):72-90, September 1991.
     [85]
    F.J. Corbató, J. Saltzer, and C.T. Clingen. Multics: The first seven years. In Proceedings of the Spring Joint Computer Conference, volume 40, Montvale, New Jersey, 1972. AFIPS Press.
     [86]
    P.J. Courtois, F. Heymans, and D.L. Parnas. Concurrent control with readers and writers. Communications of the ACM, 14(10):667-668, October 1971.
     [87]
    F. Cristian. Understanding fault-tolerant distributed systems. Communications of the ACM, 34(2):56-78, February 1991.
     [88]
    I. Crnkovic and M. Larsson. Classification of quality attributes for predictability in component-based systems. In Workshop on Architecting Dependable Systems (DSN WADS 2004), Florence, Italy, June 2004. http://www.cs.kent.ac.uk/events/conf/2004/wads/DSN-WADS2004/indexProgDSN2004.html.
     [89]
    M. Curtin. Developing Trust: Online Security and Privacy. Apress, Berkeley, California, and Springer-Verlag, Berlin, 2002.
     [90]
    M. Cusumano, A. MacCormack, C.F. Kemerer, and W. Crandall. A global survey of software development practices. Technical report, MIT Sloan School of Management, Cambridge, Massachusetts, June 2003.
     [91]
    R.C. Daley and J.B. Dennis. Virtual memory, processes, and sharing in Multics. Communications of the ACM, 11(5), May 1968.
     [92]
    R.C. Daley and P.G. Neumann. A general-purpose file system for secondary storage. In AFIPS Conference Proceedings, Fall Joint Computer Conference, pages 213-229. Spartan Books, November 1965.
     [93]
    A. Datta, R. Küsters, J.C. Mitchell, A. Ramanathan, and V. Shmatikov. Unifying equivalence-based definitions of protocol security. In Proceedings of the ACM SIGPLAN and IFIP WG 1.7 Fourth Workshop on Issues in the Theory of Security, Oakland, California, April 2004. IEEE Computer Society.
     [94]
    W.-P. de Roever, F. de Boer, U. Hanneman, J. Hooman, Y. Lakhnech, M. Poel, and J. Zwiers. Concurrency Verification: Introduction to Compositional and Noncompositional Methods. Cambridge University Press, New York, NY, 2001. Cambridge Tracts in Theoretical Computer Science no. 54.
     [95]
    D. Dean. Formal Aspects of Mobile Code Security. PhD thesis, Computer Science Department, Princeton University, January 1999. (http://www.cs.princeton.edu/sip/pub/ddean-dissertation.php3).
     [96]
    D. Dean. The impact of programming language theory on computer security. In Proceedings of the Mathematical Foundations of Programming Semantics (MFPS), New Orleans, Louisiana, March 2002. Slides at http://www.csl.sri.com/neumann/ddean-MFPS02.ppt.
     [97]
    D. Dean, M. Franklin, and A. Stubblefield. An algebraic approach to IP traceback. ACM Transactions on Information and System Security, 5(2):119-137, May 2002.
     [98]
    D. Dean and D. Wagner. Intrusion detection via static analysis. In Proceedings of the 2001 Symposium on Security and Privacy, Oakland, California, May 2001. IEEE Computer Society.
     [99]
    G. Denker and J. Millen. CAPSL integrated protocol environment. In DARPA Information Survivability Conference (DISCEX 2000), pages 207-221. IEEE Computer Society, 2000.
     [100]
    D.E. Denning, S.G. Akl, M. Heckman, T.F. Lunt, M. Morgenstern, P.G. Neumann, and R.R. Schell. Views for multilevel database security. IEEE Transactions on Software Engineering, 13(2), February 1987.
     [101]
    D.E. Denning, P.G. Neumann, and Donn B. Parker. Social aspects of computer security. In Proceedings of the 10th National Computer Security Conference, September 1987.
     [102]
    Y. Desmedt, Y. Frankel, and M. Yung. Multi-receiver/multi-sender network security: Efficient authenticated multicast/feedback. In Proceedings of IEEE INFOCOM. IEEE, 1992.
     [103]
    Y. Deswarte, L. Blain, and J.-C. Fabre. Intrusion tolerance in distributed computing systems. In Proceedings of the 1991 Symposium on Research in Security and Privacy, pages 110-121, Oakland, California, April 1991. IEEE Computer Society.
     [104]
    W. Diffie and S. Landau. Privacy on the Line: The Politics of Wiretapping and Encryption. MIT Press, 1998.
     [105]
    E.W. Dijkstra. Co-operating sequential processes. In Programming Languages, F. Genuys (editor), pages 43-112. Academic Press, 1968.
     [106]
    E.W. Dijkstra. The structure of the THE multiprogramming system. Communications of the ACM, 11(5), May 1968.
     [107]
    E.W. Dijkstra. A Discipline of Programming. Prentice-Hall, Englewood Cliffs, New Jersey, 1976.
     [108]
    G.W. Dinolt and J.C. Williams. A Graph-Theoretic Formulation of Multilevel Secure Distributed Systems: An overview. In 1987 IEEE Symposium on Security and Privacy, pages 99-103, 1730 Massachusetts Avenue, N.W., Washington, D.C. 20036-1903, April 1987. The Computer Society of the IEEE, IEEE Computer Society Press.
     [109]
    B.L. DiVito and L.W. Roberts. Using formal methods to assist in the requirements analysis of the space shuttle GPS change request. Technical Report NASA Contractor Report 4652, NASA Langley Research Center, Hampton, Virginia, August 1996.
     [110]
    S. Dolev. Self-Stabilization. MIT Press, Cambridge, Massachusetts, 2000.
     [111]
    B. Dutertre, V. Crettaz, and V. Stavridou. Intrusion-tolerant enclaves. In Proceedings of the 2002 Symposium on Security and Privacy, pages 216-224, Oakland, California, May 2002. IEEE Computer Society.
     [112]
    C.M. Ellison et al. SPKI certificate theory. Technical report, Internet Engineering Task Force, September 1999. http://www.ietf.org/rfc/rfc2693.txt).
     [113]
    D.R. Engler. The Exokernel Operating System Architecture. Technical report, Ph.D. Thesis, M.I.T., Cambridge, Massachusetts, October 1998.
     [114]
    D.R. Engler, M.F. Kaashoek, and J. O'Toole Jr. Exokernel: An operating system architecture for application-level resource management. Operating Systems Review, 29:251-266, December 1995. Proceedings of the Fifteenth Symposium on Operating Systems Principles (SOSP '95).
     [115]
    V.D. Gligor et al. Design and implementation of Secure Xenix[TM]. In Proceedings of the 1986 Symposium on Security and Privacy, Oakland, California, April 1986. IEEE Computer Society. Also in IEEE Transactions on Software Engineering, SE-13(2):208-221, February 1987.
     [116]
    European Communities Commission. Information Technology Security Evaluation Criteria (ITSEC), Provisional Harmonised Criteria (of France, Germany, the Netherlands, and the United Kingdom), June 1991. Version 1.2. Available from the Office for Official Publications of the European Communities, L-2985 Luxembourg, item CD-71-91-502-EN-C. Also available from U.K. CLEF, CESG Room 2/0805, Fiddlers Green Lane, Cheltenham U.K. GLOS GL52 5AJ, or GSA/GISA, Am Nippenkreuz 19, D 5300 Bonn 2, Germany.
     [117]
    R.S. Fabry. Capability-based addressing. Communications of the ACM, 17(7):403-412, July 1974.
     [118]
    A.W. Faughn. Interoperability: Is it achievable? Technical report, Harvard University PIRP report, 2001.
     [119]
    R.J. Feiertag, K.N. Levitt, and L. Robinson. Proving multilevel security of a system design. In Proceedings of the Sixth ACM Symposium on Operating System Principles, pages 57-65, November 1977.
     [120]
    R.J. Feiertag and P.G. Neumann. The foundations of a Provably Secure Operating System (PSOS). In Proceedings of the National Computer Conference, pages 329-334. AFIPS Press, 1979. http://www.csl.sri.com/neumann/psos.pdf.
     [121]
    W.H.J. Feijen, A.J.M. van Gasteren, D. Gries, and J. Misra, editors. Beauty is our Business, A Birthday Salute to Edsger W. Dijkstra. Springer-Verlag, Berlin, 11 May 1990.
     [122]
    J. Feller, B. Fitzgerald, S.A. Hissam, and K.R. Lakhani, editors. Perspectives on Free and Open Source Software. MIT Press, Cambridge, Massachusetts, 2005.
     [123]
    T. Fine, J.T. Haigh, R.C. O'Brien, and D.L. Toups. An overview of the LOCK FTLS. Technical report, Honeywell, 1988.
     [124]
    J.-M. Fray, Y. Deswarte, and D. Powell. Intrusion tolerance using fine-grain fragmentation-scattering. In Proceedings of the 1986 Symposium on Security and Privacy, pages 194-201, Oakland, California, April 1986. IEEE Computer Society.
     [125]
    C. Gacek and C. Jones. Dependability issues in open source software. Technical report, Department of Computing Science, Dependable Interdisciplinary Research Collaboration, University of Newcastle upon Tyne, Newcastle, England, 2001. Final report for PA5, part of ongoing related work.
     [126]
    C. Gacek, T. Lawrie, and B. Arief. The many meanings of open source. Technical report, Department of Computing Science, University of Newcastle upon Tyne, Newcastle, England, August 2001. Technical Report CS-TR-737.
     [127]
    M. Gasser. Building a Secure Computer System. Van Nostrand Reinhold Company, New York, 1988.
     [128]
    M. Gasser, A. Goldstein, C. Kaufman, and B. Lampson. The Digital distributed system security architecture. In Proceedings of the Twelfth National Computer Security Conference, pages 305-319, Baltimore, Maryland, 10-13 October 1989. NIST/NCSC.
     [129]
    S.L. Gerhart and L. Yelowitz. Observations of fallibility in modern programming methodologies. IEEE Transactions on Software Engineering, SE-2(3):195-207, September 1976.
     [130]
    J.T. Giffin, S. Jha, and B.P. Miller. Detecting manipulated remote call streams. In Proceedings of the 11th USENIX Security 2002, pages 61-79, San Francisco, California, August 2002. USENIX.
     [131]
    E. Gilbert, J. MacWilliams, and N. Sloane. Codes which detect deception. Bell System Technical Journal, 53(3):405-424, 1974.
     [132]
    J.F. Girard and R. Koschke. Finding components in a hierarchy of modules: A step towards architectural understanding. In Proceedings of the International Conference on Software Maintenance, pages 72-81, October 1997.
     [133]
    V.D. Gligor. A note on the denial-of-service problem. In Proceedings of the 1983 Symposium on Security and Privacy, pages 139-149, Oakland, California, April 1983. IEEE Computer Society.
     [134]
    V.D. Gligor and S.I. Gavrila. Application-oriented security policies and their composition. In Proceedings of the 1998 Workshop on Security Paradigms, Cambridge, England, 1998.
     [135]
    V.D. Gligor, S.I. Gavrila, and D. Ferraiolo. On the formal definition of separation-of-duty policies and their composition. In Proceedings of the 1998 Symposium on Security and Privacy, Oakland, California, May 1998. IEEE Computer Society.
     [136]
    R. Godin and H. Mili. Building and maintaining analysis-level class hierarchies using Galois lattices. In Proceedings of the 8th Annual Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA '93), SIGPLAN Notices, 28, 10, pages 394-410, 1993.
     [137]
    R. Godin, H. Mili, G.W. Mineau, R. Missaoui, A. Arfi, and T.-T. Chau. Design of class hierarchies based on concept (Galois) lattices. Theory and Practice of Object Systems, 4(2):117-134, 1998.
     [138]
    W. Goerigk. Compiler verification revisited. In M. Kaufmann, P. Maniolis, and J S. Moore, editors, Computer Aided Reasoning: ACL2 Case Studies. Kluwer Academic Publishers, 2000. Chapter 15.
     [139]
    J.A. Goguen and J. Meseguer. Security policies and security models. In Proceedings of the 1982 Symposium on Security and Privacy, pages 11-20, Oakland, California, April 1982. IEEE Computer Society.
     [140]
    J.A. Goguen and J. Meseguer. Unwinding and inference control. In Proceedings of the 1984 Symposium on Security and Privacy, pages 75-86, Oakland, California, April 1984. IEEE Computer Society.
     [141]
    B.D. Gold, R.R. Linde, and P.F. Cudney. KVM/370 in retrospect. In Proceedings of the 1984 Symposium on Security and Privacy, pages 13-23, Oakland, California, April 1984. IEEE Computer Society.
     [142]
    A. Goldberg. A specification of Java loading and bytecode verification. In Fifth ACM Conference on Computer and Communications Security, pages 49-58, San Francisco, California, November 1998. ACM SIGSAC.
     [143]
    L. Gong. A secure identity-based capability system. In Proceedings of the 1989 Symposium on Research in Security and Privacy, pages 56-63, Oakland, California, May 1989. IEEE Computer Society.
     [144]
    L. Gong. An overview of Enclaves 1.0. Technical report, SRI International, Menlo Park, California, SRI-CSL-96-01, January 1996. (http://www.csl.sri.com/papers/346/).
     [145]
    L. Gong. Inside Java(TM) 2 Platform Security: Architecture, API Design, and Implementation. Addison-Wesley, Reading, Massachusetts, 1999.
     [146]
    L. Gong, M. Mueller, H. Prafullchandra, and R. Schemers. Going beyond the sandbox: An overview of the new security architecture in the Java Development Kit 1.2. In Proceedings of the USENIX Symposium on Internet Technologies and Systems, Monterey, California, December 1997.
     [147]
    L. Gong and X. Qian. The complexibility and composability of secure interoperation. In Proceedings of the 1994 Symposium on Research in Security and Privacy, pages 190-200, Oakland, California, May 1994. IEEE Computer Society.
     [148]
    L. Gong and R. Schemers. Implementing protection domains in the Java Development Kit 1.2. In Proceedings of the Internet Society Symposium on Network and Distributed System Security, San Diego, California, March 1998.
     [149]
    G. Goth. Richard Clarke talks cybersecurity and JELL-O. IEEE Security and Privacy, 2(3):11-15, May-June 2004.
     [150]
    R.M. Graham. Protection in an information processing utility. Communications of the ACM, 11(5), May 1968.
     [151]
    C. Gunter, S. Weeks, and A. Wright. Models and languages for digital rights. In Proceedings of the 2001 Hawaii International Conference on Systems Science, Honolulu, Hawaii, March 2001. (http://www.star-lab.com/tr/star-tr-01-04.html).
     [152]
    V. Guruswami and M. Sudan. List decoding algorithms for certain concatenated codes. In Proceedings of the Thirty-second Annual ACM Symposium on Theory of Computing, pages 181-190, April 2000.
     [153]
    J.T. Haigh. Top level security properties for the LOCK system. Technical report, Honeywell, 1988.
     [154]
    J.T. Haigh et al. Assured service concepts and models, final technical report, vol. 3: Security in distributed systems. Technical report, Secure Computing Technology Corporation, July 1991.
     [155]
    J.T. Haigh et al. Assured service concepts and models, final technical report, vol. 4: Availability in distributed MLS systems. Technical report, Secure Computing Technology Corporation, July 1991.
     [156]
    J.T. Haigh et al. Assured service concepts and models, final technical report, volume 1: Summary. Technical report, Secure Computing Technology Corporation, July 1991.
     [157]
    S. Halevi and H. Krawczyk. Public-key cryptography and password protocols. In Fifth ACM Conference on Computer and Communications Security, pages 122-131, San Francisco, California, November 1998. ACM SIGSAC.
     [158]
    R.W. Hamming. Error detecting and error correcting codes. Bell System Technical Journal, 29:147-60, 1950.
     [159]
    R. Harper and M. Lillibridge. A type-theoretic approach to higher-order modules with sharing. In Conference Record of POPL '94: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 123-137, Portland, Oregon, January 1994.
     [160]
    J. Hemenway and D. Gambel. Issues in the specification of composite trustworthy systems. In Fourth Annual Canadian Computer Security Symposium, May 1992.
     [161]
    J.L. Hennessy and D.A. Patterson. Computer Architecture: A Quantitative Approach, Second Edition. Morgan Kaufmann, 1996.
     [162]
    H.M. Hinton. Composing partially-specified systems. In Proceedings of the 1998 Symposium on Security and Privacy, Oakland, California, May 1998. IEEE Computer Society.
     [163]
    S. Hissam, C.B. Weinstock, D. Plakosh, and J. Asundi. Perspectives on open source software. Technical report, Carnegie-Mellon Software Engineering Institute, Pittsburgh, Pennsylvania 15213-3890, November 2001. CMU/SEI-2001-TR-019
    (http://www.sei.cmu.edu/publications/pubweb.html).
     [164]
    C.A.R. Hoare. Monitors: An operating system structuring concept. Communications of the ACM, 17(10), October 1974.
     [165]
    D. Hollingworth and R. Bisbey II. Protection errors in operating systems: Allocation/deallocation residuals. Technical report, USC Information Sciences Institute (ISI), Marina Del Rey, California, June 1976.
     [166]
    J.J. Horning, H.C. Lauer, P.M. Melliar-Smith, and B. Randell. A program structure for error detection and recovery. In Operating Systems, Proceedings of an International Symposium, Notes in Computer Science 16, pages 171-187. Springer-Verlag, Berlin, 1974.
     [167]
    D.A. Huffman. A method for the construction of minimum redundancy codes. Proceedings of the IRE, 40, 1952.
     [168]
    D.A. Huffman. Canonical forms for information-lossless finite-state machines. IRE Transactions on Circuit Theory (special supplement) and IRE Transactions on Information Theory (special supplement), CT-6 and IT-5:41-59, May 1959. A slightly revised version appeared in E.F. Moore, Editor, Sequential Machines: Selected Papers, Addison-Wesley, Reading, Massachusetts.
     [169]
    C. Hunt. TCP/IP Network Administration, 3rd Edition. O'Reilly & Associates, Sebastopol, California, 2002.
     [170]
    W.A. Hunt Jr. Microprocessor design verification. Journal of Automated Reasoning, 5(4):429-460, December 1989.
     [171]
    IEEE. Standard specifications for public key cryptography. Technical report, IEEE Standards Department, 445 Hoes Lane, P.O. Box 1331, Piscataway, New Jersey 08855-1331, 2000 and ongoing. (http://grouper.ieee.org/groups/1363/).
     [172]
    International Standards Organization. The Common Criteria for Information Technology Security Evaluation, Version 2.1, ISO 15408. ISO/NIST/CCIB, 19 September 2000. (http://csrc.nist.gov/cc).
     [173]
    D. Jackson. Alloy: A lightweight object modelling notation. ACM Transactions on Software Engineering and Methodology, 11(2):256-290, April 2002.
     [174]
    I. Jacobson, G. Booch, and J. Rumbaugh. The Unified Software Development Process. Addison-Wesley, Reading, Massachusetts, 1999.
     [175]
    R. Jagannathan. Transparent multiprocessing in the presence of fail-stop faults. In Proceedings of the 3rd Workshop on Large-Grain Parallelism, Pittsburgh, Pennsylvania, October 1989.
     [176]
    R. Jagannathan. Coarse-grain dataflow programming of conventional parallel computers. In Advanced Topics in Dataflow Computing and Multithreading (edited by L. Bic, J-L. Gaudiot, and G. Gao). IEEE Computer Society, April 1995.
     [177]
    R. Jagannathan and C. Dodd. GLU programmer's guide v0.9. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, November 1994. CSL Technical Report CSL-94-06.
     [178]
    R. Jagannathan and A.A. Faustini. The GLU programming language. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, November 1990. CSL Technical Report CSL-90-11.
     [179]
    P.A. Janson. Using type extension to organize virtual memory mechanisms. ACM Operating Systems Review, 15(4):6-38, October 1981.
     [180]
    S. Jha, O. Sheyner, and J. Wing. Two formal analyses of attack graphs. In Proceedings of the 15th IEEE Computer Security Foundations Workshop, pages 49-64, Cape Breton, Nova Scotia, Canada, June 2002. IEEE Computer Society Technical Committee on Security and Privacy.
     [181]
    D.R. Johnson, F.F. Saydjari, and J.P. Van Tassel. MISSI security policy: A formal approach. Technical report, NSA R2SPO-TR001-95, 18 August 1995.
     [182]
    C. Jones. Providing a formal basis for dependability notions. Technical report, Department of Computing Science, Dependable Interdisciplinary Research Collaboration, University of Newcastle upon Tyne, Newcastle, England, 2002. UNU/IIST Anniversary Colloquium.
     [183]
    M.F. Kaashoek and A.S. Tanenbaum. Fault tolerance using group communication. ACM SIGOPS Operating Systems Review, 25(2):71-74, April 1991.
     [184]
    R. Kailar, V.D. Gligor, and L. Gong. On the security effectiveness of cryptographic protocols. In Proceedings of the 1994 Conference on Dependable Computing for Critical Applications, pages 90-101, San Diego, California, January 1994.
     [185]
    R.Y. Kain. Computer Architecture: Software and Hardware. Prentice-Hall, 1988.
     [186]
    R.Y. Kain and C.E. Landwehr. On access checking in capability-based systems. In Proceedings of the 1986 IEEE Symposium on Security and Privacy, April 1986.
     [187]
    P.A. Karger and H. Kurth. Increased information flow needs for high-assurance composite evaluations. In Proceedings of the Second International Information Assurance Workshop (IWIA 2004), pages 129-140, Charlotte, North Carolina, May 2004. IEEE Computer Society.
     [188]
    P.A. Karger and R.R. Schell. Multics security evaluation: Vulnerability analysis. In Proceedings of the 18th Annual Computer Security Applications Conference (ACSAC), Classic Papers section, Las Vegas, Nevada, December 2002. Originally available as U.S. Air Force report ESD-TR-74-193, Vol. II, Hanscom Air Force Base, Massachusetts.
     [189]
    P.A. Karger and R.R. Schell. Thirty years later: Lessons from the Multics security evaluation. In Proceedings of the 18th Annual Computer Security Applications Conference (ACSAC), Classic Papers section, Las Vegas, Nevada, December 2002. (http://www.acsac.org/).
     [190]
    M. Kaufmann, J S. Moore, and P. Manolios. Computer-Aided Reasoning: An Approach. Kluwer Academic Publishers, Norwell, Massachusetts, 2000.
     [191]
    S. Keung and L. Gong. Enclaves in Java: APIs and Implementations. Technical Report SRI-CSL-96-07, SRI International, Computer Science Laboratory, 333 Ravenswood Avenue, Menlo Park, California 94025, July 1996.
     [192]
    P. Kocher. Cryptanalysis of Diffie-Hellman, RSA, DSS, and other systems using timing attacks (extended abstract). Technical report, Cryptography Research Inc., 607 Market St, San Francisco, California 94105, December 7 1995.
     [193]
    P.C. Kocher. Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In Springer-Verlag, Berlin, Lecture Notes in Computer Science, Advances in Cryptology, Proceedings of Crypto '96, pages 104-113, Santa Barbara, California, August 1996.
     [194]
    T. Kohno, A. Stubblefield, A.D. Rubin, and D.S. Wallach. Analysis of an electronic voting system. In Proceedings of the 2004 Symposium on Security and Privacy, pages 27-40, Oakland, California, May 2004. IEEE Computer Society.
     [195]
    H. Kopetz. Composability in the time-triggered architecture. In Proceedings of the SAE World Congress, pages 1-8, Detroit, Michigan, 2000. SAE Press.
     [196]
    M. Kuijper and J.W. Polderman. Reed-Solomon list decoding from a system-theoretic perspective. IEEE Transactions on Information Theory, 50(2):259-271, February 2004.
     [197]
    L. Lamport. A simple approach to specifying concurrent systems. Communications of the ACM, 32(1):32-45, January 1989.
     [198]
    L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3):382-401, July 1982.
     [199]
    B.W. Lampson. Software components: Only the giants survive. In Computer Systems: Papers for Roger Needham, K. Spärck Jones and A. Herbert (editors), pages 113-120. Microsoft Research, Cambridge, U.K., February 2003.
     [200]
    B.W. Lampson and H. Sturgis. Reflections on an operating system design. Communications of the ACM, 19(5):251-265, May 1976.
     [201]
    C.E. Landwehr, A.R. Bull, J.P. McDermott, and W.S. Choi. A taxonomy of computer program security flaws, with examples. Technical report, Center for Secure Information Technology, Information Technology Division, Naval Research Laboratory, Washington, D.C., November 1993.
     [202]
    J.C. Laprie, editor. Dependability: A Unifying Concept for Reliable Computing and Fault Tolerance. Springer-Verlag, 1990.
     [203]
    E.S. Lee, P.I.P. Boulton, B.W. Thompson, and R.E. Soper. Composable trusted systems. Technical report, Computer Systems Research Institute, University of Toronto, Technical Report CSRI-272, May 1992.
     [204]
    E.S. Lee, P.I.P. Boulton, B.W. Thomson, and R.E. Soper. Composable trusted systems. Technical report, Computer Systems Research Institute, University of Toronto, Ontario, 31 May 1992. CSRI-272.
     [205]
    K.N. Levitt, S. Crocker, and D. Craigen, editors. VERkshop III: Verification workshop. ACM SIGSOFT Software Engineering Notes, 10(4):1-136, August 1985.
     [206]
    C. Lindig and G. Snelting. Assessing modular structure of legacy code based on mathematical concept analysis. In Proceedings of the International Conference on Software Engineering, pages 349-359, 1997.
     [207]
    U. Lindqvist and P.A. Porras. Detecting computer and network misuse through the Production-Based Expert System Toolset (P-BEST). In Proceedings of the 1999 Symposium on Security and Privacy, Oakland, California, May 1999. IEEE Computer Society.
     [208]
    U. Lindqvist and P.A. Porras. eXpert-BSM: A host-based intrusion-detection solution for Sun Solaris. In Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC 2001), New Orleans, Louisiana, 10-14 December 2001.
     [209]
    S.B. Lipner. Non-discretionary controls for commercial applications. In Proceedings of the 1982 Symposium on Security and Privacy, pages 2-10. IEEE, 1982. Oakland, California, 26-28 April 1982.
     [210]
    S.B. Lipner. Security and source code access: Issues and realities. In Proceedings of the 2000 Symposium on Security and Privacy, pages 124-125, Oakland, California, May 2000. IEEE Computer Society.
     [211]
    P.E. Livadas and T. Johnson. A new approach to finding objects in programs. Software Maintenance: Research and Practice, 6:249-260, 1994.
     [212]
    M. Lubaszewski and B. Courtois. A reliable fail-safe system. IEEE Transactions on Computers, C-47(2):236-241, February 1998.
     [213]
    T.F. Lunt, R.R. Schell, W.R. Shockley, M. Heckman, and D. Warren. A near-term design for the SeaView multilevel database system. In Proceedings of the 1988 Symposium on Security and Privacy, pages 234-244, Oakland, California, April 1988. IEEE Computer Society.
     [214]
    T.F. Lunt and R.A. Whitehurst. The SeaView formal top level specifications and proofs. Final report, Computer Science Laboratory, SRI International, Menlo Park, California, January/February 1989. Volumes 3A and 3B of "Secure Distributed Data Views," SRI Project 1143.
     [215]
    S. Mancoridis and R.C. Holt. Recovering the structure of software systems using tube graph interconnection clustering. In Proceedings of the International Conference on Software Maintenance, pages 23-32, 1996.
     [216]
    S. Mancoridis, B.S. Mitchell, Y. Chen, and E.R. Gansner. Using automatic clustering to produce high-level system organization of source code. In Proceedings of the International Workshop on Program Comprehension, pages 42-52, 1998.
     [217]
    S. Mancoridis, B.S. Mitchell, Y. Chen, and E.R. Gansner. Bunch: A clustering tool for the recovery and maintenance of software system structures. In Proceedings of the International Conference on Software Maintenance, pages 50-59, 1999.
     [218]
    A.P. Maneki. Algebraic properties of system composition in the Loral, Ulysses and McLean trace models. In Proceedings of the 8th IEEE Computer Security Foundations Workshop, Kenmare, County Kerry, Ireland, June 1995.
     [219]
    H. Mantel. Preserving information flow properties under refinement. In Proceedings of the 2001 Symposium on Security and Privacy, pages 78-91, Oakland, California, May 2001. IEEE Computer Society.
     [220]
    H. Mantel. On the composition of secure systems. In Proceedings of the 2002 Symposium on Security and Privacy, pages 88-101, Oakland, California, May 2002. IEEE Computer Society.
     [221]
    E.J. McCauley and P.J. Drongowski. KSOS: The design of a secure operating system. In National Computer Conference, pages 345-353. AFIPS Conference Proceedings, 1979. Vol. 48.
     [222]
    D. McCullough. Specifications for multi-level security and a hook-up property. In Proceedings of the 1987 Symposium on Security and Privacy, pages 161-166, Oakland, California, April 1987. IEEE Computer Society.
     [223]
    D. McCullough. Noninterference and composability of security properties. In Proceedings of the 1988 Symposium on Security and Privacy, pages 177-186, Oakland, California, April 1988. IEEE Computer Society.
     [224]
    D. McCullough. Ulysses security properties modeling environment: The theory of security. Technical report, Odyssey Research Associates, Ithaca, New York, July 1988.
     [225]
    D. McCullough. A hookup theorem for multilevel security. IEEE Transactions on Software Engineering, 16(6), June 1990.
     [226]
    P. McDaniel and A. Prakash. Methods and limitations of security policy reconciliation. In Proceedings of the 2002 Symposium on Security and Privacy, pages 73-87, Oakland, California, May 2002. IEEE Computer Society.
     [227]
    G. McGraw. Will openish source really improve security? In Proceedings of the 2000 Symposium on Security and Privacy, pages 128-129, Oakland, California, May 2000. IEEE Computer Society.
     [228]
    G. McGraw. Software security. IEEE Security and Privacy, 2(2):80-83, March-April 2004.
     [229]
    M.K. McKusick, K. Bostic, M.J. Karels, and J.S. Quarterman. The Design and Implementation of the 4.4 BSD Operating System. Addison-Wesley, Reading, Massachusetts, 1996.
     [230]
    J. McLean. A general theory of composition for trace sets closed under selective interleaving functions. In Proceedings of the 1994 Symposium on Research in Security and Privacy, pages 79-93, Oakland, California, May 1994. IEEE Computer Society.
     [231]
    P.M. Melliar-Smith and L.E. Moser. Surviving network partitioning. Computer, 31(3):62-68, March 1998.
     [232]
    P.M. Melliar-Smith and R.L. Schwartz. Formal specification and verification of SIFT: A fault-tolerant flight control system. IEEE Transactions on Computers, C-31(7):616-630, July 1982.
     [233]
    R. Mercuri. Electronic Vote Tabulation Checks and Balances. PhD thesis, Department of Computer Science, University of Pennsylvania, 2001. (http://www.notablesoftware.com/evote.html).
     [234]
    R. Mercuri. A better ballot box: New electronic voting systems pose risks as well as solutions. IEEE Spectrum, pages 46-50, October 2002.
     [235]
    R. Mercuri and P.G. Neumann. Verification for electronic balloting systems. In Secure Electronic Voting, Advances in Information Security, Volume 7. Kluwer Academic Publishers, Boston, Massachusetts, 2002.
     [236]
    S. Micali. Fair public-key cryptosystems. In Advances in Cryptology: Proceedings of CRYPTO '92 (E.F. Brickell, editor), pages 512-517. Springer-Verlag, Berlin, LNCS 740, 1992.
     [237]
    J. Millen and G. Denker. CAPSL and MuCAPSL. Journal of Telecommunications and Information Technology, (4):16-27, 2002.
     [238]
    J.K. Millen. Hookup security for synchronous machines. In Proceedings of the IEEE Computer Security Foundations Workshop VII, pages 2-10, Franconia, New Hampshire, June 1994. IEEE Computer Society.
     [239]
    R. Milner, M. Tofte, R. Harper, and D. MacQueen. The Definition of Standard ML. MIT Press, Cambridge, Massachusetts, 1997.
     [240]
    J.C. Mitchell and G.D. Plotkin. Abstract types have existential type. ACM Transactions on Programming Languages and Systems, 10(3):470-502, July 1988.
     [241]
    E.F. Moore. Gedanken experiments on sequential machines. In C.E. Shannon and J. McCarthy, editors, Automata Studies, Annals of Mathematics Studies 34, pages 129-153. Princeton University Press, Princeton, New Jersey, 1956.
     [242]
    E.F. Moore and C.E. Shannon. Reliable circuits using less reliable relays. Journal of the Franklin Institute, 262:191-208, 281-297, September, October 1956.
     [243]
    J S. Moore. A mechanically verified language implementation. Journal of Automated Reasoning, 5(4):461-492, December 1989.
     [244]
    J S. Moore. System verification. Journal of Automated Reasoning, 5(4):409-410, December 1989.
     [245]
    J S. Moore, editor. System verification. Journal of Automated Reasoning, 5(4):409-530, December 1989. Includes five papers by Moore, W.R. Bevier, W.A. Hunt, Jr, and W.D. Young.
     [246]
    M. Moriconi. A designer/verifier's assistant. IEEE Transactions on Software Engineering, SE-5(4):387-401, July 1979. Reprinted in Artificial Intelligence and Software Engineering, edited by C. Rich and R. Waters, Morgan Kaufmann Publishers, Inc., 1986. Also reprinted in Tutorial on Software Maintenance, edited by G. Parikh and N. Zvegintzov, IEEE Computer Society Press, 1983.
     [247]
    L. Moser, P.M. Melliar-Smith, and R. Schwartz. Design verification of SIFT. Contractor Report 4097, NASA Langley Research Center, Hampton, Virginia, September 1987.
     [248]
    NASA Conference Publication 2377. Peer Review of a Formal Verification/Design Proof Methodology, July 1983.
     [249]
    NCSC. Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). National Computer Security Center, December 1985. DOD-5200.28-STD, Orange Book.
     [250]
    G.C. Necula. Compiling with Proofs. PhD thesis, Computer Science Department, Carnegie-Mellon University, 1998.
     [251]
    B.J. Nelson. Remote procedure call. Technical report, Research Report CSL-79-9, XEROX Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, California, May 1981.
     [252]
    P.G. Neumann. Efficient error-limiting variable-length codes. IRE Transactions on Information Theory, IT-8:292-304, July 1962.
     [253]
    P.G. Neumann. On a class of efficient error-limiting variable-length codes. IRE Transactions on Information Theory, IT-8:S260-266, September 1962.
     [254]
    P.G. Neumann. Error-limiting coding using information-lossless sequential machines. IEEE Transactions on Information Theory, IT-10:108-115, April 1964.
     [255]
    P.G. Neumann. The role of motherhood in the pop art of system programming. In Proceedings of the ACM Second Symposium on Operating Systems Principles, Princeton, New Jersey, pages 13-18. ACM, October 1969.
     [256]
    P.G. Neumann. System design for computer networks. In Computer-Communication Networks (Chapter 2), pages 29-81. Prentice-Hall, 1971. N. Abramson and F.F. Kuo (editors).
     [257]
    P.G. Neumann. Rainbows and arrows: How the security criteria address computer misuse. In Proceedings of the Thirteenth National Computer Security Conference, pages 414-422, Washington, D.C., 1-4 October 1990. NIST/NCSC.
     [258]
    P.G. Neumann. Can systems be trustworthy with software-implemented crypto? Technical report, Final Report, Project 6402, SRI International, Menlo Park, California, October 1994. For Official Use Only, NOFORN.
     [259]
    P.G. Neumann. Architectures and formal representations for secure systems. Technical report, Final Report, Project 6401, SRI International, Menlo Park, California, October 1995. CSL report 96-05.
     [260]
    P.G. Neumann. Computer-Related Risks. ACM Press, New York, and Addison-Wesley, Reading, Massachusetts, 1995.
     [261]
    P.G. Neumann. Practical architectures for survivable systems and networks. Technical report, Final Report, Phase One, Project 1688, SRI International, Menlo Park, California, January 1999. http://www.csl.sri.com/neumann/arl-one.html, also available in .ps and .pdf form.
     [262]
    P.G. Neumann. Certitude and rectitude. In Proceedings of the 2000 International Conference on Requirements Engineering, page 153, Schaumburg, Illinois, June 2000. IEEE Computer Society.
     [263]
    P.G. Neumann. The potentials of open-box source code in developing robust systems. In Proceedings of the NATO Conference on Commercial Off-The-Shelf Products in Defence Applications: The Ruthless Pursuit of COTS, Brussels, Belgium, April 2000. NATO.
     [264]
    P.G. Neumann. Practical architectures for survivable systems and networks. Technical report, Final Report, Phase Two, Project 1688, SRI International, Menlo Park, California, June 2000. (http://www.csl.sri.com/neumann/survivability.html).
     [265]
    P.G. Neumann. Robust nonproprietary software. In Proceedings of the 2000 Symposium on Security and Privacy, pages 122-123, Oakland, California, May 2000. IEEE Computer Society. (http://www.csl.sri.com/neumann/ieee00.ps and http://www.csl.sri.com/neumann/ieee00.pdf).
     [266]
    P.G. Neumann. Achieving principled assuredly trustworthy composable systems and networks. In Proceedings of the DARPA Information Survivability Conference and Exhibition, DISCEX3, volume 2, pages 182-187. DARPA and IEEE Computer Society, April 2003.
     [267]
    P.G. Neumann. Illustrative risks to the public in the use of computer systems and related technology, index to RISKS cases. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, 2004. The most recent version is available online in html form for browsing at http://www.csl.sri.com/neumann/illustrative.html, and also in .ps and .pdf form for printing in a much denser format.
     [268]
    P.G. Neumann, R.S. Boyer, R.J. Feiertag, K.N. Levitt, and L. Robinson. A Provably Secure Operating System: The system, its applications, and proofs. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, May 1980. 2nd edition, Report CSL-116.
     [269]
    P.G. Neumann and R.J. Feiertag. PSOS revisited. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003), Classic Papers section, pages 208-216, Las Vegas, Nevada, December 2003. IEEE Computer Society. http://www.acsac.org/ and http://www.csl.sri.com/neumann/psos03.pdf.
     [270]
    P.G. Neumann, J. Goldberg, K.N. Levitt, and J.H. Wensley. A study of fault-tolerant computing. Final report for ARPA, AD 766 974, Stanford Research Institute, Menlo Park, California, July 1973.
     [271]
    P.G. Neumann and D.B. Parker. A summary of computer misuse techniques. In Proceedings of the Twelfth National Computer Security Conference, pages 396-407, Baltimore, Maryland, 10-13 October 1989. NIST/NCSC.
     [272]
    P.G. Neumann and P.A. Porras. Experience with EMERALD to date. In Proceedings of the First USENIX Workshop on Intrusion Detection and Network Monitoring, pages 73-80, Santa Clara, California, April 1999. USENIX. http://www.csl.sri.com/neumann/det99.html.
     [273]
    P.G. Neumann, N.E. Proctor, and T.F. Lunt. Preventing security misuse in distributed systems. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, June 1992. Issued as Rome Laboratory report RL-TR-92-152, Rome Laboratory C3AB, Griffiss AFB NY 13441-5700. For Official Use Only.
     [274]
    P.G. Neumann and T.R.N. Rao. Error correction codes for byte-organized arithmetic processors. IEEE Transactions on Computers, C-24(3):226-232, March 1975.
     [275]
    P.G. Neumann, editor. VERkshop I: Verification Workshop. ACM SIGSOFT Software Engineering Notes, 5(3):4-47, July 1980.
     [276]
    P.G. Neumann, editor. VERkshop II: Verification Workshop. ACM SIGSOFT Software Engineering Notes, 6(3):1-63, July 1981.
     [277]
    E.I. Organick. The Multics System: An Examination of Its Structure. MIT Press, Cambridge, Massachusetts, 1972.
     [278]
    S. Owre and N. Shankar. Theory interpretations in PVS. Technical Report SRI-CSL-01-01, Computer Science Laboratory, SRI International, Menlo Park, CA, April 2001. http://www.csl.sri.com/~owre.
     [279]
    W. Ozier. GASSP: Generally Accepted Systems Security Principles. Technical report, International Information Security Foundation, June 1997. (http://web.mit.edu/security/www/gassp1.html).
     [280]
    J.M. Park, E.K.P. Chong, and H.J. Siegel. Efficient multicast packet authentication using signature amortization. In Proceedings of the 2002 Symposium on Security and Privacy, pages 227-240, Oakland, California, May 2002. IEEE Computer Society.
     [281]
    D.L. Parnas. On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), December 1972.
     [282]
    D.L. Parnas. A technique for software module specification with examples. Communications of the ACM, 15(5), May 1972.
     [283]
    D.L. Parnas. On a "buzzword": Hierarchical structure. In Information Processing 74 (Proceedings of the IFIP Congress 1974), volume Software, pages 336-339. North-Holland, Amsterdam, 1974.
     [284]
    D.L. Parnas. The influence of software structure on reliability. In Proceedings of the International Conference on Reliable Software, pages 358-362, April 1975. Reprinted with improvements in R. Yeh, Current Trends in Programming Methodology I, Prentice-Hall, 1977, 111-119.
     [285]
    D.L. Parnas. On the design and development of program families. IEEE Transactions on Software Engineering, SE-2(1):1-9, March 1976.
     [286]
    D.L. Parnas. Designing software for ease of extension and contraction. IEEE Transactions on Software Engineering, SE-5(2):128-138, March 1979.
     [287]
    D.L. Parnas. Mathematical descriptions and specification of software. In Proceedings of the IFIP World Congress 1994, Volume I, pages 354-359. IFIP, August 1994.
     [288]
    D.L. Parnas. Software engineering: An unconsummated marriage. Communications of the ACM, 40(9):128, September 1997. Inside Risks column.
     [289]
    D.L. Parnas. Computer science and software engineering: Filing for divorce? Communications of the ACM, 41(8), August 1998. Inside Risks column.
     [290]
    D.L. Parnas, P.C. Clements, and D.M. Weiss. The modular structure of complex systems. IEEE Transactions on Software Engineering, SE-11(3):259-266, March 1985.
     [291]
    D.L. Parnas and G. Handzel. More on specification techniques for software modules. Technical report, Fachbereich Informatik, Technische Hochschule Darmstadt, Research Report BS I 75/1, Germany, April 1975.
     [292]
    D.L. Parnas, J. Madey, and M. Iglewski. Precise documentation of well-structured programs. IEEE Transactions on Software Engineering, 20(12):948-976, December 1994.
     [293]
    D.L. Parnas and W.R. Price. The design of the virtual memory aspects of a virtual machine. In Proceedings of the ACM SIGARCH-SIGOPS Workshop on Virtual Computer Systems. ACM, March 1973.
     [294]
    D.L. Parnas and W.R. Price. Design of a non-random access virtual memory machine. In Proceedings of the International Workshop On Protection in Operating Systems, pages 177-181, August 1974.
     [295]
    D.L. Parnas and D.L. Siewiorek. Use of the concept of transparency in the design of hierarchically structured systems. Communications of the ACM, 18(7):401-408, July 1975.
     [296]
    D.L. Parnas, A.J. van Schouwen, and S.P. Kwan. Evaluation of safety-critical software. Communications of the ACM, 33(6):636-648, June 1990.
     [297]
    D.L. Parnas and Y. Wang. Simulating the behaviour of software modules by trace rewriting systems. IEEE Transactions on Software Engineering, 19(10):750-759, October 1994.
     [298]
    D.A. Patterson and J.L. Hennessy. Computer Organization and Design: The Hardware/Software Interface, Second Edition. Morgan Kaufmann, 1997.
     [299]
    R. Perlman. Network Layer Protocols with Byzantine Robustness. PhD thesis, MIT, Cambridge, Massachusetts, 1988.
     [300]
    H. Petersen and M. Michels. On signature schemes with threshold verification detecting malicious verifiers. In Springer-Verlag, Berlin, Lecture Notes in Computer Science, Security Protocols, Proceedings of 5th International Workshop, pages 67-77, Paris, France, April 1997.
     [301]
    L. Peterson and D. Clark. The Internet: An experiment that escaped from the lab. In Computer Science: Reflections on the Field, Reflections from the Field, pages 129-133. National Research Council, National Academy Press, 500 Fifth Ave., Washington, D.C. 20001, 2004.
     [302]
    W.W. Peterson and E.J. Weldon, Jr. Error-Correcting Codes, 2nd ed. MIT Press, Cambridge, Massachusetts, 1972.
     [303]
    C.P. Pfleeger. Security in Computing. Prentice-Hall, Englewood Cliffs, New Jersey, 1989.
     [304]
    P.A. Porras and P.G. Neumann. EMERALD: Event Monitoring Enabling Responses to Anomalous Live Disturbances. In Proceedings of the Nineteenth National Computer Security Conference, pages 353-365, Baltimore, Maryland, 22-25 October 1997. NIST/NCSC.
     [305]
    P.A. Porras, K. Nitz, U. Lindqvist, M. Fong, and P.G. Neumann. Discerning attacker intent. Technical report, Computer Science Laboratory, SRI International, Project 10779, Menlo Park, California, April 2003.
     [306]
    P.A. Porras and A. Valdes. Live traffic analysis of TCP/IP gateways. In Proceedings of the Symposium on Network and Distributed System Security. Internet Society, March 1998.
     [307]
    N.E. Proctor. The restricted access processor: An example of formal verification. In Proceedings of the 1985 Symposium on Security and Privacy, pages 49-55, Oakland, California, April 1985. IEEE Computer Society.
     [308]
    N.E. Proctor and P.G. Neumann. Architectural implications of covert channels. In Proceedings of the Fifteenth National Computer Security Conference, pages 28-43, Baltimore, Maryland, 13-16 October 1992. (http://www.csl.sri.com/neumann/ncs92.html).
     [309]
    B. Randell, J.-C. Laprie, H. Kopetz, and B. Littlewood, editors. Predictably Dependable Computing Systems. Basic Research Series. Springer-Verlag, Berlin, 1995.
     [310]
    T.R.N. Rao. Error-Control Coding for Computer Systems. Prentice-Hall, Englewood Cliffs, New Jersey, 1989.
     [311]
    V. Ratan, K. Partridge, J. Reese, and N. Leveson. Safety analysis tools for requirements specification. In Proceedings of the Eleventh Annual Conference on Computer Assurance, COMPASS '96, pages 149-160. IEEE Computer Society, 1996.
     [312]
    M. Raynal. A case study of agreement problems in distributed systems: Non-blocking atomic commitment. In Proceedings of the 1997 High-Assurance Systems Engineering Workshop, pages 209-214, Washington, D.C., August 1997. IEEE Computer Society.
     [313]
    M. Reiter and K. Birman. How to securely replicate services. ACM Transactions on Programming Languages and Systems, 16(3):986-1009, May 1994.
     [314]
    J.H. Reppy. Concurrent Programming in ML. Cambridge University Press, Cambridge, U.K., 1999.
     [315]
    R. Rivest and B. Lampson. SDSI - a simple distributed security infrastructure. Technical report, MIT Laboratory for Computer Science, 2000. Version 2.0 is available online (http://theory.lcs.mit.edu/~cis/sdsi.html) along with other documentation and source code.
     [316]
    L. Robinson and K.N. Levitt. Proof techniques for hierarchically structured programs. Communications of the ACM, 20(4):271-283, April 1977.
     [317]
    L. Robinson, K.N. Levitt, P.G. Neumann, and A.R. Saxena. A formal methodology for the design of operating system software. In R. Yeh (editor), Current Trends in Programming Methodology I, pages 61-110. Prentice-Hall, 1977.
     [318]
    L. Robinson, K.N. Levitt, and B.A. Silverberg. The HDM Handbook. Computer Science Laboratory, SRI International, Menlo Park, California, June 1979. Three Volumes.
     [319]
    A.W. Roscoe and L. Wulf. Composing and decomposing systems under security properties. In Proceedings of the 8th IEEE Computer Security Foundations Workshop, Kenmare, County Kerry, Ireland, June 1995.
     [320]
    E. Rosen. Vulnerabilities of network control protocols. ACM SIGSOFT Software Engineering Notes, 6(1):6-8, January 1981.
     [321]
    J. Rushby. Partitioning for avionics architectures: Requirements, mechanisms, and assurance. Technical report, NASA Langley Research Center, June 1999. Contractor Report CR-1999-209347; also issued as FAA DOT/FAA/AR-99/58.
     [322]
    J. Rushby. Modular certification. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, June 2002.
     [323]
    J.M. Rushby. A trusted computing base for embedded systems. In Proceedings of the Seventh DoD/NBS Computer Security Initiative Conference, pages 294-311, Gaithersburg, Maryland, September 1984.
     [324]
    J.M. Rushby. Kernels for safety? In T. Anderson, editor, Safe and Secure Computing Systems, chapter 13, pages 210-220. Blackwell Scientific Publications, 1989. Proceedings of a Symposium held in Glasgow, October 1986.
     [325]
    J.M. Rushby. Composing trustworthy systems. Technical report, Computer Science Laboratory, SRI International, Menlo Park, California, July 1991.
     [326]
    J.M. Rushby. Formal methods and their role in digital systems validation for airborne systems. Technical report, SRI International, Menlo Park, California, CSL-95-01, March 1995.
     [327]
    J.M. Rushby and B. Randell. A distributed secure system. IEEE Computer, 16(7):55-67, July 1983.
     [328]
    J.M. Rushby and B. Randell. A distributed secure system. Technical Report 182, Computing Laboratory, University of Newcastle upon Tyne, May 1983.
     [329]
    J.M. Rushby and B. Randell. A distributed secure system (extended abstract). In Proceedings of the 1983 IEEE Symposium on Security and Privacy, pages 127-135, Oakland, California, April 1983. IEEE Computer Society.
     [330]
    J.M. Rushby and D.W.J. Stringer-Calvert. A less elementary tutorial for the PVS specification and verification system. Technical report, SRI International, Menlo Park, California, CSL-95-10, October 1995.
     [331]
    J.M. Rushby and F. von Henke. Formal verification of the interactive convergence clock synchronization algorithm using EHDM. Technical Report SRI-CSL-89-3, Computer Science Laboratory, SRI International, Menlo Park, California, February 1989. Also available as NASA Contractor Report 4239.
     [332]
    T.T. Russell and M. Schaefer. Toward a high B level security architecture for the IBM ES/3090 processor resource/systems manager (PR/SM). In Proceedings of the Twelfth National Computer Security Conference, pages 184-196, Baltimore, Maryland, 10-13 October 1989. NIST/NCSC.
     [333]
    J.H. Saltzer. Protection and the control of information sharing in Multics. Communications of the ACM, 17(7):388-402, July 1974.
     [334]
    J.H. Saltzer and M.D. Schroeder. The protection of information in computer systems. Proceedings of the IEEE, 63(9):1278-1308, September 1975. (http://www.multicians.org).
     [335]
    O.S. Saydjari, J.M. Beckman, and J.R. Leaman. LOCKing computers securely. In 10th National Computer Security Conference, Baltimore, Maryland, pages 129-141, 21-24 September 1987. Reprinted in Rein Turn, editor, Advances in Computer System Security, Vol. 3, Artech House, Dedham, Massachusetts, 1988.
     [336]
    W.L. Schiller. The design and specification of a security kernel for the PDP-11/45. Technical Report MTR-2934, Mitre Corporation, Bedford, Massachusetts, March 1975.
     [337]
    F.B. Schneider. Understanding protocols for Byzantine clock synchronization. Technical Report 87-859, Department of Computer Science, Cornell University, Ithaca, New York, August 1987.
     [338]
    F.B. Schneider. Open source in security: Visiting the bizarre. In Proceedings of the 2000 Symposium on Security and Privacy, pages 126-127, Oakland, California, May 2000. IEEE Computer Society.
     [339]
    F.B. Schneider, editor. Research to support robust cyber defense. Technical report, Study Committee for J. Lala, DARPA, May 2000. Slides only.
     [340]
    B. Schneier. Applied Cryptography: Protocols, Algorithms, and Source Code in C: Second Edition. John Wiley and Sons, New York, 1996.
     [341]
    B. Schneier. Secrets and Lies: Digital Security in a Networked World. John Wiley and Sons, New York, 2000.
     [342]
    B. Schneier and D. Banisar. The Electronic Privacy Papers. John Wiley and Sons, New York, 1997.
     [343]
    M.D. Schroeder. Cooperation of mutually suspicious subsystems in a computer utility. Technical report, Ph.D. Thesis, M.I.T., Cambridge, Massachusetts, September 1972.
     [344]
    M.D. Schroeder, D.D. Clark, and J.H. Saltzer. The Multics kernel design project. In Proceedings of the Sixth Symposium on Operating System Principles, November 1977. ACM Operating Systems Review 11(5).
     [345]
    M.D. Schroeder and J.H. Saltzer. A hardware architecture for implementing protection rings. Communications of the ACM, 15(3), March 1972.
     [346]
    R.W. Schwanke. An intelligent tool for re-engineering software modularity. In Proceedings of the International Conference On Software Engineering, pages 83-92, 1991.
     [347]
    Secure Computing Technology Center. LOCK formal top level specification, volumes 1-6. Technical report, SCTC, 1988.
     [348]
    Secure Computing Technology Center. LOCK software B-specification, vol. 2. Technical report, SCTC, 1988.
     [349]
    A. Shamir and E. Tromer. Acoustic cryptanalysis: On nosy people and noisy machines. Preliminary proof-of-concept presentation, 2004.
     [350]
    D. Shands, E. Wu, J. Horning, and S. Weeks. Spice: Configuration synthesis for policy enforcement. Technical report, McAfee Research Technical Report 04-018, June 2004.
     [351]
    J.S. Shapiro and N. Hardy. EROS: a principle-driven operating system from the ground up. IEEE Software, 19(1):26-33, January/February 2002.
     [352]
    O. Sheyner, J. Haines, S. Jha, R. Lippmann, and J.M. Wing. Automated generation and analysis of attack graphs. In Proceedings of the 2003 Symposium on Security and Privacy, pages 273-284, Oakland, California, May 2003. IEEE Computer Society.
     [353]
    M. Siff and T. Reps. Identifying modules via concept analysis. IEEE Transactions on Software Engineering, SE-25(6):749-768, 1999.
     [354]
    N.J.A. Sloane and F.J. MacWilliams. The Theory of Error-Correcting Codes, 9th reprint. North-Holland, 1998.
     [355]
    M.A. Smith. Portals: Toward an application framework for interoperability. Communications of the ACM, 47(10):93-97, October 2004.
     [356]
    G. Snelting. Reengineering of configurations based on mathematical concept analysis. ACM Transactions on Software Engineering and Methodology, 5(2):146-189, 1996.
     [357]
    G. Snelting and F. Tip. Reengineering class hierarchies using concept analysis. In Proceedings of the International Symposium on Foundations of Software Engineering, 1998.
     [358]
    G. Snider and J. Hays. The modix kernel. In 1989 Winter USENIX Conference Proceedings, pages 377-392, San Diego, California, February 1989.
     [359]
    I. Sommerville. Software Engineering. Addison-Wesley, Reading, Massachusetts, 2001. Sixth Edition.
     [360]
    D. Song, D. Zuckerman, and J.D. Tygar. Expander graphs for digital stream authentication and robust overlay networks. In Proceedings of the 2002 Symposium on Security and Privacy, pages 258-270, Oakland, California, May 2002. IEEE Computer Society.
     [361]
    SRI-CSL. HDM Verification Environment Enhancements, Interim Report on Language Definition. Computer Science Laboratory, SRI International, Menlo Park, California, 1983. SRI Project No. 5727, Contract No. MDA904-83-C-0461.
     [362]
    J. Staddon, S. Miner, M. Franklin, D. Balfanz, M. Malkin, and D. Dean. Self-healing key distribution with revocation. In Proceedings of the 2002 Symposium on Security and Privacy, pages 241-257, Oakland, California, May 2002. IEEE Computer Society.
     [363]
    D.I. Sutherland. A model of information flow. In Proceedings of the Ninth National Computer Security Conference, pages 175-183, September 1986.
     [364]
    K.L. Thompson. Reflections on trusting trust. Communications of the ACM, 27(8):761-763, August 1984.
     [365]
    M. Tinto. The design and evaluation of INFOSEC systems: The computer security contribution to the composition discussion. Technical report, National Computer Security Center, June 1992. C Technical Report 32-92.
     [366]
    P. Tonella. Concept analysis for module restructuring. IEEE Transactions on Software Engineering, SE-27(4):351-363, 2001.
     [367]
    I.L. Traiger, J. Gray, C.A. Galtieri, and B.G. Lindsay. Transactions and consistency in distributed database systems. ACM Transactions on Database Systems, 7(3):323-342, September 1982.
     [368]
    Unspecified. Composability constraints of multilevel systems. Technical report, Integrated Computer Systems, Inc., 215 South Rutgers Ave., Oak Ridge, Tennessee, June 1994.
     [369]
    USGAO. Defense acquisitions: Knowledge of software suppliers needed to manage risks. Technical report, U.S. General Accounting Office, GAO-04-078, Washington, D.C., May 2004.
     [370]
    J. von Neumann. Probabilistic logics and the synthesis of reliable organisms from unreliable components. In Automata Studies, pages 43-98. Princeton University Press, Princeton, New Jersey, 1956.
     [371]
    D. Wagner. Static Analysis and Computer Security: New Techniques for Software Assurance. PhD thesis, Division of Computer Science, University of California, Berkeley, December 2000. (http://www.cs.berkeley.edu/~daw).
     [372]
    W.H. Ware. A retrospective of the criteria movement. In Proceedings of the Eighteenth National Information Systems Security Conference, pages 582-588, Baltimore, Maryland, 10-13 October 1995. NIST/NCSC.
     [373]
    P. Wayner. Translucent Databases. Flyzone Press, Baltimore, Maryland, 2002.
     [374]
    S. Weeks. Understanding trust management systems. In Proceedings of the 2001 Symposium on Security and Privacy, Oakland, California, May 2001. IEEE Computer Society. (http://www.star-lab.com/tr/star-tr-01-02.html).
     [375]
    L. Weinstein. The devil you know. Communications of the ACM, 46(12):144, December 2003.
     [376]
    L. Weinstein. TRIPOLI: An Empowered E-Mail Environment. Technical report, People for Internet Responsibility, January 2004.
     [377]
    J.H. Wensley et al. SIFT: Design and analysis of a fault-tolerant computer for aircraft control. Proceedings of the IEEE, 66(10):1240-1255, October 1978.
     [378]
    J.H. Wensley et al. Design study of software-implemented fault-tolerance (SIFT) computer. NASA contractor report 3011, Computer Science Laboratory, SRI International, Menlo Park, California, June 1982.
     [379]
    D.A. Wheeler. Secure Programming for Linux and Unix HOWTO. 2003.
     [380]
    D.A. Wheeler. Secure programmer: Minimizing privileges; taking the fangs out of bugs. May 2004.
     [381]
    I. White. Wrapping the COTS dilemma. In Proceedings of the NATO Conference on Commercial Off-The-Shelf Products in Defence Applications: The Ruthless Pursuit of COTS, Brussels, Belgium, April 2000. NATO.
     [382]
    G.R. Wright and W.R. Stevens. TCP/IP Illustrated, Volume 2. Addison-Wesley, Reading, Massachusetts, 1995.
     [383]
    W. Wulf and M. Shaw. Global variable considered harmful. SIGPLAN Notices, 8(2):28-34, February 1973.
     [384]
    A. Yeh, D. Harris, and H. Reubenstein. Recovering abstract data types and object instances from a conventional procedural language. In Proceedings of the Working Conference on Reverse Engineering, pages 227-236, 1995.
     [385]
    W.D. Young. A mechanically verified code generator. Journal of Automated Reasoning, 5(4):493-518, December 1989.
     [386]
    W.D. Young, W.E. Boebert, and R.Y. Kain. Proving a computer system secure. Scientific Honeyweller, 6(2):18-27, July 1985. Reprinted in Tutorial: Computer and Network Security, M.D. Abrams and H.J. Podell, editors, IEEE Computer Society Press, 1987, pp. 142-157.
     [387]
    C.-F. Yu and V.D. Gligor. A formal specification and verification method for the prevention of denial of service. In Proceedings of the 1988 Symposium on Security and Privacy, pages 187-202, Oakland, California, April 1988. IEEE Computer Society. Also in IEEE Transactions on Software Engineering, SE-16(6):581-592, June 1990.
     [388]
    A. Zakinthinos and E.S. Lee. The composability of non-interference. In Proceedings of the 8th IEEE Computer Security Foundations Workshop, Kenmare, County Kerry, Ireland, June 1995.
     [389]
    A. Zakinthinos and E.S. Lee. Composing secure systems that have emergent properties. In Proceedings of the 11th IEEE Computer Security Foundations Workshop, pages 117-122, Rockport, Massachusetts, June 1998.
     [390]
    P.R. Zimmermann. The Official PGP User's Guide. MIT Press, Cambridge, Massachusetts, 1995.
     [391]
    L. Zuck, A. Pnueli, Y. Fang, and B. Goldberg. VOC: a translation validator for optimizing compilers. In Electronic Notes in Theoretical Computer Science, 2002. Preliminary version at www.cs.nyu.edu/~zuck/pubs/; final version at www.elsevier.nl/locate/entcs.

    Index

    A
    Abstraction
    Abstraction!excessive
    Abstraction!refinement
    Abstraction!TCP/IP
    Accountability
    Administration
    Administration!controllability
    Administration!operational assurance
    Administration!system
    Airgaps
    Anderson, Ross
    Anderson, Ross!exploitations of vulnerabilities
    Arbaugh, William
    Architecture
    Architecture!assurance
    Architecture!autonomous
    Architecture!centralized
    Architecture!composable
    Architecture!conceptual approach
    Architecture!decentralized
    Architecture!enlightened
    Architecture!heterogeneous
    Architecture!homogeneous
    Architecture!network-centric
    Architecture!network-oriented
    Architecture!openness paradigms
    Architecture!practical considerations
    Architecture!principled
    Architecture!principled!examples
    Architecture!stark subsetting
    Architecture!trustworthy
    Architecture!TS&CI
    ARPANET!1980 collapse
    Ashcraft-Engler!static analysis
    Assurance
    Assurance!analytic tools for
    Assurance!code inspection
    Assurance!composability analysis
    Assurance!composability of
    Assurance!correctness versus consistency
    Assurance!debugging
    Assurance!dependency analysis
    Assurance!dynamic analysis
    Assurance!enhancement
    Assurance!in architecture
    Assurance!in design
    Assurance!in development
    Assurance!in implementation
    Assurance!in interfaces
    Assurance!methodologies
    Assurance!metrics for
    Assurance!operational
    Assurance!Pervasively Integrated (PIA)
    Assurance!preserved by transformations
    Assurance!principles for
    Assurance!red-teaming
    Assurance!requirements
    Assurance!risk mitigation
    Assurance!role of development tools
    Assurance!software engineering
    Assurance!static analysis
    Assurance!testing
    Assurance!voting systems
    Assurance!vulnerability detection
    Assurance!vulnerability elimination
    Attacks!denial-of-service
    Attacks!denial-of-service!prevention
    Attacks!denial-of-service!traceback
    Attacks!spoofing
    Attacks!"man-in-the-middle"
    AT&T!1990 long-distance collapse
    Authentication!Byzantine
    Authentication!cryptographic
    Authentication!in subnetworks
    Authentication!inadequacy of fixed passwords
    Authentication!message
    Authentication!multicast
    Authentication!need for authorization
    Authentication!nonbypassable
    Authentication!nonspoofable
    Authentication!servers
    Authentication!vulnerabilities
    Authorization
    Authorization!fine-grained
    Authorization!need for authentication
    Authorization!vulnerabilities
    Autonomous operation
    Autonomous operation!interface design
    Autonomous operation!risks in administration
    Autonomous operation!risks of failure
    Availability!assurance
    Availability!multilevel
    Availability!risks
    B
    Badger, Lee
    Ballmer, Steve
    Baran, Paul
    Bell and LaPadula!multilevel security
    Bell, Gordon
    Bernstein, Peter L.
    Biba, Ken!multilevel integrity (MLI)
    Bishop, Matt
    Blade computers
    Boebert, W.E.
    Boebert, W.E.!on buffer overflows
    Boneh, Dan!fault injection
    Bootload!trustworthy
    Burnham, Blaine
    Byzantine!agreement
    Byzantine!authentication protocols
    Byzantine!digital signature
    Byzantine!fault tolerance
    Byzantine!faults
    Byzantine!key escrow
    Byzantine!protocols
    C
    Capabilities
    Capabilities!and perspicuity
    Capabilities!modeling
    Capabilities!PSOS
    Certification
    Certification!composability
    Chaum, David
    Chen, Hao
    Chen, Hao!MOPS
    Chess, Brian!static analysis
    Cicero
    Clark-Wilson integrity model
    Clarke, Arthur
    Clean-Room development
    CLInc stack
    Cohen, Fred
    Commitment!nonblocking
    Commitment!two-phase
    Common Criteria
    Common Criteria!assurance
    Common Criteria!composite evaluation
    Communications!optical
    Communications!wireless
    Compatibility
    Compatibility!among requirements
    Compatibility!among policies
    Compatibility!in heterogeneous systems
    Compatibility!of legacy software
    Compatibility!structural
    Compilers!correctness of
    Compilers!dynamic checking
    Compilers!object-oriented
    Compilers!research directions
    Compilers!risks of optimization
    Compilers!role in security
    Compilers!static analysis
    Compilers!static checking
    Compilers!subversion of by Trojan horse
    Complexity!and simplicity
    Complexity!Einstein quote
    Complexity!interfaces masking ...
    Complexity!managing ...
    Complexity!O.W. Holmes quote
    Composability
    Composability!analysis
    Composability!and stark subsetting
    Composability!approaches
    Composability!decomposition
    Composability!future challenges
    Composability!horizontal
    Composability!independence
    Composability!information hiding
    Composability!interoperability
    Composability!noncomposability
    Composability!obstacles to
    Composability!of assurance measures
    Composability!of certification
    Composability!of evaluations
    Composability!of policies
    Composability!of proofs
    Composability!of protocols
    Composability!predictable
    Composability!reasoning about
    Composability!seamless
    Composability!statelessness
    Composability!vertical
    Composable High-Assurance Trustworthy Systems (CHATS)
    Compromise!accidental
    Compromise!by adversaries
    Compromise!Byzantine avoidance
    Compromise!emergency
    Compromise!from below
    Compromise!from outside
    Compromise!from within
    Compromise!malicious
    Compromise!of compositions
    Compromise!of MLS
    Compromise!of security
    Compromise!of trustworthiness enhancement
    Compromise!total
    Concurrency Workbench
    Configuration control
    Configuration control!analysis of changes
    Configuration control!assurance
    Configuration control!discipline
    Configuration control!of networks
    Consistency!of code
    Consistency!of hardware
    Consistency!of interface specs
    Consistency!of software
    Consistency!of specifications
    Contains relation
    Control!centralized
    Control!decentralized
    Copyleft
    Corbató, Fernando
    Corbató, Fernando!Turing lecture
    Correctness!...-preserving transformations
    Correctness!deprecated
    Covert channels!avoidance
    Covert channels!storage
    Covert channels!timing
    Cowan, Crispin!StackGuard
    Crnkovic, Ivica
    Cross-domain mechanisms
    Cryptography!attacks
    Cryptography!embedding
    Cryptography!fair public-key
    Cryptography!for authentication
    Cryptography!for integrity
    Cryptography!for secrecy
    Cryptography!multikey
    Cryptography!secret-sharing
    Cryptography!threshold
    Cryptography!trustworthy embeddings
    CTCPEC
    D
    Dean, Drew
    Dean, Drew!MOPS
    Debuggability
    Decomposability
    Decomposition!Dijkstra
    Decomposition!horizontal
    Decomposition!Parnas
    Decomposition!temporal
    Decomposition!vertical
    Denials of service
    Denials of service!prevention
    Denials of service!prevention!in distributed systems
    Denials of service!prevention!role of hierarchy
    Denials of service!remediation
    Denials of service!self-induced
    Dependability
    Dependence!generalized
    Dependence!guarded
    Dependence!Parnas
    Dependencies!among principles
    Dependencies!among specifications
    Dependencies!analysis
    Dependencies!analysis of
    Dependencies!causing vulnerabilities
    Dependencies!constrained
    Dependencies!explicit
    Dependencies!explicit!interlayer ... in LOCK
    Dependencies!explicit!interlayer ... in PSOS
    Dependencies!on less trustworthiness
    Dependencies!order
    Dependencies!reduced ... on trustworthiness
    Dependencies!timing
    Detection!of anomalies
    Detection!of misuse
    Development methodology!Clean-Room
    Development methodology!HDM
    Development methodology!USDP
    Development methodology!XP
    Development!discipline
    Development!of trustworthy systems
    Development!principles
    Differential power analysis
    Digital Distributed System Security Architecture (DDSA)
    Dijkstra, Edsger W.
    Dijkstra, Edsger W.!Discipline of programming
    Dijkstra, Edsger W.!THE system
    Dinolt, George
    Discipline!in development
    Discipline!in methodology
    Discipline!in Multics
    Discipline!in XP
    Discipline!lack of
    Discipline!needed for open-box software
    Discipline!of composition
    Distributed systems!composable trustworthiness
    Distributed systems!denials of service
    Distributed systems!distributed protection
    Distributed systems!distributed trustworthiness
    Distributed systems!Lamport's definition
    Distributed systems!MLS in
    Distributed systems!network oriented
    Distributed systems!networked trustworthiness
    Distributed systems!parameterizable
    Distributed systems!reduced need for trustworthiness
    Distributed systems!risks of weak links
    Distributed systems!trustworthiness
    Diversity!in heterogeneous systems
    Diversity!of design
    DMCA
    Domains!enforcement
    Domains!for constraining software
    Domains!Multics
    Domains!separation
    E
    Eiffel
    Einstein, Albert!science
    Einstein, Albert!simplicity
    Electronic Switching Systems (ESSs)
    EMERALD
    EMERALD!integration of static checking
    Emergent properties
    Emergent properties!reasoning about
    Empowered E-Mail Environment (Tripoli)
    Encapsulation
    Encapsulation!vulnerabilities
    Enclaves
    Enlightened Architecture Concept
    Enlightened Architecture Concept!needed for the GIG
    Error!correction
    Error!correction!for human errors
    Error!correction!Guruswami-Sudan
    Error!correction!Kuijper-Polderman
    Error!correction!Reed-Solomon
    Error!detection
    Error!detection!for human errors
    Euclid
    Evaluations!composability of
    Evaluations!continuity despite changes
    Evolvability!of architectures
    Evolvability!of implementations
    Evolvability!of requirements
    Exokernel Operating System
    Extreme Programming
    F
    Fault!forecasting
    Fault!injection
    Fault!prevention
    Fault!removal
    Fault!tolerance
    Fault!tolerance!hierarchical
    Fault!tolerance!literature
    Finalization!vulnerabilities
    Firewalls
    Firewalls
    Flaws!design
    Flaws!implementation
    Formal!analysis
    Formal!analysis!of changes
    Formal!basis of languages
    Formal!basis of static checking
    Formal!basis of tools
    Formal!development
    Formal!mappings
    Formal!methods
    Formal!methods!for hardware
    Formal!methods!potential benefits
    Formal!operational practice
    Formal!proofs
    Formal!real-time analysis
    Formal!requirements
    Formal!specifications
    Formal!specifications!for JVM
    Formal!specifications!in HDM
    Formal!specifications!Parnas
    Formal!static analysis
    Formal!test-case generation
    Formal!testing
    G
    Gasser, Morrie
    GASSP
    Generalized!dependence
    Generally Accepted Systems Security Principles (GASSP)
    Gibson, Tim
    GIG!see Global Information Grid
    Gilb, Tom!Project Management Rules
    Glaser, Edward L.!modularity
    Glaser, Edward L.!principles
    Gligor, Virgil
    Gligor, Virgil!composability
    Gligor, Virgil!system modularity
    Gligor, Virgil!system modularity|(bold
    Gligor, Virgil!system modularity|)
    Global Information Grid (GIG)
    Global Information Grid (GIG)!assurance
    Global Information Grid (GIG)!development
    Global Information Grid (GIG)!vision
    GLU
    GNU system with Linux
    Goguen-Meseguer
    Gong, Li!enclaves
    GOVNET
    Guarded!dependence
    Guards
    Guards!trusted
    Guruswami-Sudan decoding
    H
    Hamming, Richard
    Handheld devices!constrained
    Handheld devices!unconstrained
    Hardware!research directions
    Hennessy, John L.
    Hierarchical Development Methodology (HDM)
    Hierarchical Development Methodology (HDM)!hierarchical abstractions
    Hierarchy!for correlation in misuse detection
    Hierarchy!HDM mapping functions
    Hierarchy!of abstractions
    Hierarchy!of directories
    Hierarchy!of locking protocols
    Hierarchy!of policies
    Hierarchy!of PSOS layers
    Hierarchy!of SeaView
    Hierarchy!of SIFT layers
    Hierarchy!of trustworthiness
    Holmes, Oliver Wendell
    Horning, Jim
    Horning, Jim!decomposition
    Horning, Jim!evolvability and requirements
    Horning, Jim!last gassp
    Horning, Jim!object orientation
    Horning, Jim!partial specifications
    Horning, Jim!patching
    Horning, Jim!policy composition
    Horning, Jim!simplicity
    HP!blade computers
    I
    IBM!blade computers
    IBM!Enterprise Workload Manager
    ICS: Integrated Canonizer and Solver
    Illustrative Risks
    Implementation!analysis of
    Implementation!practical considerations
    Initialization!vulnerabilities
    Integrity!checks for
    Integrity!multilevel
    Integrity!multilevel!Biba
    Intel!LaGrande
    Interfaces!assurance
    Interfaces!constrained
    Interfaces!human
    Interfaces!human!assurance
    Interfaces!human!risks
    Interfaces!incompatibility
    Interfaces!perspicuous
    Interfaces!perspicuous|(bold
    Interfaces!perspicuous|)
    Interfaces!risks
    Interfaces!RISSC architectures
    Interoperability
    Interoperability!cross-language
    Interoperability!impairments
    Interoperability!in composability
    Interoperability!of tools
    IP Version 6 (IPv6)
    IPSEC
    ITS4
    ITSEC
    J
    James, William!exceptions
    Java
    Java!... Virtual Machine (JVM)
    Jones, Cliff
    Juvenal
    K
    Kain, R.Y.
    Kain, R.Y.!on architecture
    Kaner, Cem
    Karger, Paul A.!composite evaluation intercommunication
    Karger, Paul A.!Multics security evaluation
    Karpinski, Richard
    Kernel!MLS
    Kernel!operating system ...
    Kernel!separation
    Kocher, Paul
    Kurth, Helmut!composite evaluation intercommunication
    L
    LaGrande
    Lala, Jay
    Lamport, Leslie!distributed systems
    Lamport, Leslie!liveness
    Lamport, Leslie!safety
    Lampson, Butler!capability systems
    Lampson, Butler!cryptography
    Lampson, Butler!reusability of components
    Lampson, Butler!willpower
    Larsson, Magnus
    Lazarus virus
    Least privilege
    Least privilege!David Wheeler
    Legacy software!incompatibility
    Lego modularity
    Liveness (Lamport)
    Locking!hierarchical
    LOgical Coprocessor Kernel (LOCK)
    Longhorn
    Lynch, Nancy!protocol composability
    M
    Maintenance
    Maintenance!risks
    Mantel, Heiko
    Mapping!between layers
    Maughan, Douglas
    Medical!assurance
    Medical!risks
    Mencken, H.L.
    Mercuri, Rebecca
    Methodology!for development!Clean-Room
    Methodology!for development!HDM
    Methodology!for development!USDP
    Methodology!for development!XP
    Metrics
    Microsoft!Longhorn
    Mills, Harlan!Clean-Room
    MILS!see multiple independent levels of security
    MISSI!security policy
    Misuse!real-time detection
    Mitchell, John
    Miya, Eugene
    ML
    MLA: see multilevel availability
    MLI: see multilevel integrity
    MLS: see multilevel security
    MLX: see multilevel survivability
    Modula 3
    Modularity
    Modularity!and interoperability
    Modularity!and stark subsetting
    Modularity!as in Lego pieces
    Modularity!Cem Kaner quote
    Modularity!compiler enforced
    Modularity!excessive
    Modularity!facilitates evaluation
    Modularity!of requirements
    Modularity!of tools
    Modularity!programming-language driven
    Modularity!Steve Ballmer quote
    Modularity!system|(bold
    Modularity!system|)
    Modularity!Ted Glaser quote
    Modularity!with abstraction and encapsulation
    Monitoring!real-time
    Monotonicity!compositional!stronger
    Monotonicity!compositional!weak
    Monotonicity!cumulative-trustworthiness
    Monotonicity!nondecreasing-trustworthiness
    Moore, Edward F.
    MOPS!recent results
    MOPS|(bold
    MOPS|)
    MSL!see multiple single-level security
    Multics
    Multics!architecture
    Multics!avoiding stack buffer overflows
    Multics!development
    Multics!directory hierarchy
    Multics!discipline
    Multics!domains
    Multics!dynamic linking
    Multics!interfaces
    Multics!multilevel security retrofit
    Multics!principles
    Multics!ring structure
    Multics!security evaluation
    Multics!virtual input-output
    Multics!virtual memory
    Multics!virtual multiprogramming
    Multilevel availability
    Multilevel integrity
    Multilevel integrity!policy
    Multilevel security
    Multilevel security!and perspicuity
    Multilevel security!Distributed Secure System (DSS)
    Multilevel security!noncompromisibility from above
    Multilevel security!policy
    Multilevel security!Proctor-Neumann
    Multilevel security!TS&CI architectures
    Multilevel survivability
    Multiple!independent levels of security (MILS)
    Multiple!single-level security (MSL)
    Multiprocessing!network-centric
    Multiprocessing!virtual
    Mutual suspicion
    N
    Naming!vulnerabilities
    Navy Marine Corps Intranet (NMCI)
    Needham, Roger!cryptography
    NetTop
    Network-centric!architecture
    Networks!alternative routing
    Networks!as backplanes
    Networks!authentication
    Networks!Byzantine protocols
    Networks!configuration management
    Networks!dependable
    Networks!firewalls
    Networks!guards
    Networks!heterogeneous
    Networks!multilevel secure
    Networks!packet authentication
    Networks!protocols
    Networks!reliable despite unreliable nodes
    Networks!subnetworks
    Networks!survivable
    Networks!testbeds
    Networks!trustworthy
    Networks!trustworthy interface units
    Networks!virtualized multiprocessing
    Networks!with traceback
    Next Generation Secure Computing Base (NGSCB)
    NGSCB!see Next Generation Secure Computing Base
    NMCI!see Navy Marine Corps Intranet
    O
    Object-oriented paradigm
    Object-oriented paradigm!domain enforcement
    Object-oriented paradigm!downsides
    Object-oriented paradigm!in PSOS
    Object-oriented paradigm!Objective Caml
    Object-oriented paradigm!strong typing
    Objective Caml
    Offshoring!pros and cons|(bold
    Offshoring!pros and cons|)
    Openness!and perspicuity
    Openness!composability in
    Openness!Free Software Foundation
    Openness!licensing agreements
    Openness!Open Source Movement
    Openness!open-box software
    OpenSSH
    Operations
    Operations!analysis of changes
    Operations!practical considerations
    Operations!privacy implications
    Optical communications
    Optimization!code translation validation
    Optimization!deferred, in Extreme Programming
    Optimization!nonlocal
    Optimization!risks of short-sighted ...
    Optimization!risks of short-sighted ...|(bold
    Optimization!risks of short-sighted ...|)
    Orthogonality theorem
    Outsourcing!pros and cons|(bold
    Outsourcing!pros and cons|)
    Outsourcing!system administration
    Ovid
    Owicki-Gries
    P
    Parnas, David L.
    Parnas, David L.!decomposition
    Parnas, David L.!dependence
    Parnas, David L.!motherhood
    Parnas, David L.!specifications
    Parnas, David L.!weak-link quote
    Patch management|(bold
    Patch management|)
    Patterson, David A.
    Pavlovic, Dushko
    Performance
    Performance!acceptable degradation
    Perspicuity
    Perspicuity!risks of bad interfaces
    Perspicuity!through analysis
    Perspicuity!through synthesis
    Perspicuity|(bold
    Perspicuity|)
    Pervasively Integrated Assurance (PIA)
    Petroski, Henry
    Pfleeger, Charles
    Plan 9
    Polyinstantiation
    Portals
    Practical Considerations|(bold
    Practical Considerations|)
    Predictability!for certification
    Predictability!of assurance
    Predictability!of composition
    Predictability!of evolvability
    Predictability!of trustworthiness
    Principles
    Principles!abstraction
    Principles!architectural
    Principles!constrained dependency
    Principles!encapsulation
    Principles!for security
    Principles!for system development|(bold
    Principles!for system development|)
    Principles!for trustworthiness|(bold
    Principles!for trustworthiness|)
    Principles!layered protection
    Principles!modularity
    Principles!motherhood as of 1969
    Principles!object orientation
    Principles!of secure design (NSA)
    Principles!reduced need for trustworthiness
    Principles!Saltzer-Schroeder
    Principles!separation of domains
    Principles!separation of duties
    Principles!separation of policy/mechanism
    Principles!separation of roles
    Principles!throughout R&D
    Privacy!in conflict with monitoring
    Privacy!policies
    Privacy!risks
    Programming languages!and composability
    Programming languages!enhancing modularity
    Programming languages!for system development
    Programming languages!for trustworthiness
    Programming languages!object-oriented
    Programming languages!research directions
    Programming languages!static checking
    Programming languages!supporting software engineering
    Proof-carrying code
    Proofs!composability
    Propagation of errors
    Protocols!ARPANET routing
    Protocols!Byzantine
    Protocols!trustworthy
    Provably Secure Operating System (PSOS)
    Provably Secure Operating System (PSOS)!alternative MLS hierarchy
    Provably Secure Operating System (PSOS)!architecture
    Provably Secure Operating System (PSOS)!composability
    Provably Secure Operating System (PSOS)!HDM methodology
    Provably Secure Operating System (PSOS)!hierarchy
    Provably Secure Operating System (PSOS)!interface design
    Provably Secure Operating System (PSOS)!object-oriented
    Provably Secure Operating System (PSOS)!reduced need for trustworthiness
    Provably Secure Operating System (PSOS)!types
    Provenance
    Provenance!nonspoofable
    Proxies
    PSOS (see Provably Secure Operating System)
    Purify
    PVS
    PVS!theory interpretations
    R
    RaceTrack
    Randell, Brian
    Randell, Brian!Distributed Secure System
    Randell, Brian!location of checking
    Recovery!... Blocks
    Recovery!...-Oriented Computing (ROC)
    Recovery!automatic
    Recovery!semiautomatic
    Redundancy!cyclic ... checks
    Redundancy!for error correction
    Redundancy!for fault tolerance
    Redundancy!for integrity
    Redundancy!for reliability
    Redundancy!not needed for resynchronization
    Refinement
    Reliability
    Reliability!and security
    Reliability!assurance
    Reliability!out of unreliable components
    Reliability!risks
    Requirements!analysis of
    Requirements!critical
    Requirements!engineering
    Requirements!for autorecovery
    Requirements!for composition
    Requirements!for decomposition
    Requirements!for reliability
    Requirements!for security
    Requirements!for trustworthiness
    Requirements!formal
    Requirements!increasing assurance
    Requirements!lack of attention to
    Requirements!practical considerations
    Response!automated
    Response!real-time
    Reusability!of architectures
    Reusability!of components
    Reusability!of components!Butler Lampson
    Reusability!of components!with high assurance
    Reusability!of requirements
    Risk
    Risks
    Risks!reduction via assurance
    Risks!reduction via assurance|(bold
    Risks!reduction via assurance|)
    Risks|(bold
    Risks|)
    Robinson-Levitt hierarchies
    Routers!trustworthy
    Routing!alternative
    Runtime checks
    Rushby, John M.
    Rushby, John M.!Distributed Secure System
    Rushby, John M.!separation kernels
    Rushby-Randell
    Ryan, Peter!self-healing example
    S
    Safety!human
    Safety!human!assurance
    Safety!human!risks
    Safety!Lamport-style
    Safire, William!hindsight and foresight
    SAL: Symbolic Analysis Laboratory
    Saltzer, Jerome H.
    Saltzer, Jerome H.!principles
    Saltzer-Schroeder principles
    Sandcastles
    Saydjari, Sami
    Schaufler, Casey
    Schell, Roger R.!Multics security evaluation
    Schneider, Fred B.
    Schroeder, Michael D.!mutual suspicion
    Schroeder, Michael D.!principles
    SDSI/SPKI
    SeaView
    Security
    Security!and reliability
    Security!by obscurity
    Security!in distributed systems
    Security!in distributed systems!MLS
    Security!kernels
    Security!multilevel
    Security!multilevel!Bell and LaPadula
    Security!multilevel!compartments
    Security!multilevel!databases
    Security!principles|(bold
    Security!principles|)
    Security!risks
    Security!Trusted Computing Bases (TCBs)
    Self-diagnosing
    Self-healing
    Self-healing!key distribution
    Self-optimizing
    Self-reconfiguring
    Self-recovering
    Self-reprotecting
    Self-stabilizing
    Self-synchronizing
    Separation!kernels
    Separation!of domains
    Separation!of duties
    Separation!of policy and mechanism
    Separation!of roles
    setuid
    Shands, Deborah!SPiCE
    Shannon, Claude
    Sibert, Olin
    SIFT (see Software-Implemented Fault-Tolerant System)
    Simplicity
    Simplicity!abstractional
    Simplicity!Einstein quote
    Simplicity!Horning quote
    Simplicity!Mencken quote
    Simplicity!O.W. Holmes quote
    Simplicity!Saltzer-Schroeder
    Single sign-on!risks of
    slint
    Sneaker-net
    Software!black-box
    Software!closed-box
    Software!nonproprietary
    Software!open-box
    Software!proprietary
    Software-Implemented Fault-Tolerant System (SIFT)
    Spam!filters
    Spam!Tripoli: defense against
    SPARK
    SPECIfication and Assertion Language (SPECIAL)
    SPiCE
    ssh
    StackGuard
    Stark subsetting
    Stark subsetting!in real-time operating systems
    Strength in depth
    Subnetworks
    Subnetworks!trustworthy
    Subnetworks!trustworthy!virtual
    Subsystems!composability!assurance
    Subsystems!composability!functionality
    Subsystems!decomposability
    Subsystems!diversity among
    Subsystems!parameterizable
    Subsystems!trustworthiness!enhancement
    Survivability
    Survivability!multilevel
    Survivability!risks
    Synchronization!robust
    Synchronization!self-...
    Synchronization!vulnerabilities
    System!administration
    System!administration!assurance
    System!composed of subsystems
    System!distributed ... trustworthiness
    System!handheld
    System!heterogeneous
    System!wireless
    T
    TCP/IP
    TCSEC
    Testbeds
    THE system
    Thin-client!architectures
    Thin-client!user systems
    Time-of-check to time-of-use flaws (TOCTTOU)
    TOCTTOU flaws
    Traceback
    Transactions!fulfillment
    Tripoli: Empowered E-Mail Environment
    Trust
    Trust!layered
    Trust!maximal
    Trust!minimal
    Trust!partitioned
    Trusted (i.e., Trustworthy) Paths
    Trusted (i.e., Trustworthy) Paths!for upgrades
    Trusted Computer System Evaluation Criteria (TCSEC)
    Trusted Computing Group (TCG)
    Trusted Xenix
    Trustworthiness
    Trustworthiness!enhancement of
    Trustworthiness!enhancement of|(bold
    Trustworthiness!enhancement of|)
    Trustworthiness!enhancement!paradigms
    Trustworthiness!enhancement!reliability
    Trustworthiness!enhancement!sandcastles
    Trustworthiness!enhancement!security
    Trustworthiness!in distributed systems
    Trustworthiness!in distributed systems!reduced dependence
    Trustworthiness!layered
    Trustworthiness!need for discipline
    Trustworthiness!of bootloads
    Trustworthiness!of code distribution
    Trustworthiness!of code provenance
    Trustworthiness!of networks
    Trustworthiness!of protocols
    Trustworthiness!of servers
    Trustworthiness!of subnetworks
    Trustworthiness!of traceback
    Trustworthiness!of "trusted paths"
    Trustworthiness!partitioned
    Trustworthiness!principles for|(bold
    Trustworthiness!principles for|)
    Trustworthiness!system development
    Trustworthiness!where needed
    Trustworthy Servers and Controlled Interfaces (see TS&CI)
    TS&CI
    TS&CI!in heterogeneous architectures
    TS&CI: Trustworthy Servers and Controlled Interfaces
    Type!enforcement
    Type!enforcement!PSOS
    Type!enforcement!SCC
    U
    UCITA
    Unified Modeling Language (UML)
    Unified Software Development Process (USDP)
    Uses relation
    V
    Validation vulnerabilities
    Van Vleck, Tom
    Venema, Wietse
    VERkshops
    Virtual!input-output
    Virtual!machine
    Virtual!machine monitors
    Virtual!memory
    Virtual!multiprocessing
    Virtual!multiprocessing!in GLU
    Virtual!Private Networks (VPNs)
    Visibility|(bold
    Visibility|)
    VMWare
    Voltaire
    von Neumann, John
    Voting!electronic systems
    Voting!electronic systems!assurance
    Voting!electronic systems!Chaum
    Voting!electronic systems!integrity
    Voting!electronic systems!Mercuri
    Voting!electronic systems!privacy problems
    Voting!electronic systems!security problems
    Voting!majority ... for enhancing reliability
    Vulnerabilities!security|(bold
    Vulnerabilities!security|)
    W
    Wagner, David
    Wagner, David!buffer overflow analyzer
    Wagner, David!MOPS
    Weak-link!avoidance
    Weak-link!hindering trustworthiness
    Weak-link!phenomena
    Weak-link!targets
    Weakness in depth
    Web portal
    Web services!universal
    Wheeler, David A.!least privilege
    Wheeler, David A.!secure programming
    Wireless!communications
    Wireless!devices
    Wireless!networks
    Wrappers
    Wrappers
    Y
    Young, W.D.