ENPM 808s
Information Systems Survivability:
7. Overcoming These Deficiencies 2
- - - - - - - - - - - - - - - - - - -
Role of security and fault tolerance, software engineering, good development practice, principles, good experience, selective use of formal methods

This lecture is in memory of John Gannon of the Maryland Computer Science Department.
NOTE: Proposals for your final projects are due by e-mail the day after the NEXT class (in ASCII, PostScript, PDF, HTML, or another format that I can read easily on a Unix-based system).
Reminder of Serious Obstacles
- - - - - - - - - - - - - - - - - - -
Existing system development practice is generally terrible.

The tendency toward COTS products is a self-defeating prophecy unless dramatic improvements occur.
Development Tools Can Help
- - - - - - - - - - - - - - - - - - -
Record design decisions
Enforce desired design structure
Check syntax of specifications
Record/control development status
Provide programming environment (compilers, optimizers, debuggers, test data generators, editors, symbolic executors, simulators, etc.)
Prove design fulfills requirements
Prove code consistent with specs
Perform refining transformations
Generate and configure systems
Good software engineering practice (examples)
- - - - - - - - - - - - - - - - - - -
Modular design
Functional abstraction
Well-specified functionality
Reusable and interoperable interfaces
Information hiding to mask implementation detail
Constructive use of analysis techniques and tools, selective use of formal methods
Higher-level programming languages that are less susceptible to characteristic errors (missing bounds checks, mismatched types, mismatched pointers, off-by-one errors, missing exception conditions)
Good documentation
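To make the point about characteristic errors concrete, here is a minimal sketch (in Python, chosen for brevity; function and variable names are illustrative) of how a higher-level language with checked bounds turns a silent out-of-range read into a detectable exception:

```python
# Illustrative sketch: with checked array bounds, an off-by-one or
# missing bounds check surfaces as an exception rather than silently
# reading adjacent memory.

def sum_first_n(values, n):
    """Sum the first n elements of values."""
    total = 0
    for i in range(n):        # range(n) yields 0..n-1, avoiding the
        total += values[i]    # classic off-by-one; a bad index raises
    return total              # IndexError instead of corrupting data

data = [1, 2, 3]
print(sum_first_n(data, 3))   # 6

try:
    sum_first_n(data, 4)      # one element past the end
except IndexError:
    print("bounds violation detected")
```

In a language without such checks, the same mistake typically reads past the end of the buffer undetected, which is exactly the class of error the list above warns against.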
Desired Characteristics of Survivable Systems and Networks
- - - - - - - - - - - - - - - - - - -
secure, reliable, available, responsive, flexible, monitorable, evolvable, reusable, portable, convenient, understandable, simple/realistic, uniform, forgiving, efficient, cost-effective, compatible, usefully documented, interoperable, rapidly prototypeable
Schopenhauer: Experience and Principles
- - - - - - - - - - - - - - - - - - -
... our heads are full of general ideas that we are now trying to turn to some use, but that we hardly ever apply rightly. This is the result of acting in direct opposition to the natural development of the mind by obtaining general ideas first, and particular observations last; it is putting the cart before the horse. ...

The mistaken views ... that spring from a false application of general ideas have afterwards to be corrected by long years of experience; and it is seldom that they are wholly corrected. That is why so few men of learning are possessed of common sense, such as is often to be met with in people who have had no instruction at all.

Arthur Schopenhauer, Parerga and Paralipomena, 1851.
Einstein: Thinking and Judgement
- - - - - - - - - - - - - - - - - - -
The development of general ability for independent thinking and judgment should always be placed foremost, not the acquisition of special knowledge. If a person masters the fundamentals of his subject and has learned to think and work independently, he will surely find his way and besides will better be able to adapt himself to progress and changes than the person whose training principally consists in the acquiring of detailed knowledge.

Albert Einstein, Out of My Later Years, 1950.
Security Design Principles (after Schroeder-Saltzer 1972)
- - - - - - - - - - - - - - - - - - -
0. Methodological basis
1. Simplicity
2. Fail-safe defaults
3. Complete mediation
4. Open design
5. Separation of privilege
6. Principle of least privilege
7. Principle of least common mechanism
8. Psychological acceptability
9. Sufficiently high work factor
10. Noncompromisible auditing
11. Principle of least surprise (especially in interfaces)
0. Unifying Methodological View
- - - - - - - - - - - - - - - - - - -

Development Stages     | Analysis: Functional          | Analysis: Performance
-----------------------+-------------------------------+------------------------------
Requirements           | Consistency, adequacy,        | Consistency, adequacy,
(functional +          | realizability                 | realizability
performance)           |                               |
-----------------------+-------------------------------+------------------------------
Design: hierarchy,     | Design proofs (spec-req       | Design feasibility (spec-req
specifications         | consistency); design reviews  | consistency)
(func + perf)          |                               |
-----------------------+-------------------------------+------------------------------
Implementation: code,  | Code proofs (code-spec        | Performance test and
linkage                | consistency, portability      | measurement; formal analysis
                       | checks); tiger teams          | as appropriate
-----------------------+-------------------------------+------------------------------
Maintenance            | Incremental closure after     | Check for adequacy of
                       | change                        | performance of revised system
-----------------------+-------------------------------+------------------------------
Documentation          | Consistency of documentation  | Consistency of documentation
                       | with design, code             | with design, code
1. SIMPLICITY
- - - - - - - - - - - - - - - - - - -
Einstein: Everything should be made as simple as possible, but no simpler.
* Conceptual simplicity, economy of mechanism: Keep it simple, keep it small. (But complex systems are not simple.)
* Comprehensibility of design
* Controllability of implementation
* Comprehensibility of validation
* Constrained code development
* In-house expertise
All of these suggest the use of high-level design languages and programming languages. WE NEED VIRTUALIZED SIMPLICITY.
2. ACCESS-CONTROL DEFAULT
* Protection scheme permits access unless denied (typically uncontrollably dangerous)
* Protection permits no access unless specifically authorized (typically too restrictive and causes many error messages)
* Multilevel security with preassigned levels (conceptually simple, but full of pitfalls)

3. COMPLETE MEDIATION
* EVERY access must be checked
* Must be system-wide
* ``Foolproof'' identification
* Audit changes in authority
* Reference monitors
The Reference Monitor Concept (Anderson, 1972)
- - - - - - - - - - - - - - - - - - -
A reference monitor mediates security on every access by a subject to an object. It may use hardware and software. It must be small enough to be analyzed thoroughly (formal verification?), protected from tampering, and nonbypassable.
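The reference-monitor idea can be sketched in a few lines. The following is a toy illustration only (names are hypothetical, and the tamperproofing and protected-invocation requirements are not modeled); it shows complete mediation with a fail-safe (deny) default and an audit record of each decision:

```python
# Toy reference monitor: every access by a subject to an object passes
# through a single check, with a fail-safe (deny) default and an audit
# record of each decision.  Hypothetical names; tamperproofing and
# nonbypassability are not modeled here.

class ReferenceMonitor:
    def __init__(self):
        self._acl = {}          # (subject, object) -> set of rights
        self.audit_log = []     # every mediation decision is recorded

    def grant(self, subject, obj, right):
        self._acl.setdefault((subject, obj), set()).add(right)

    def check(self, subject, obj, right):
        # Deny unless explicitly authorized (fail-safe default).
        allowed = right in self._acl.get((subject, obj), set())
        self.audit_log.append((subject, obj, right, allowed))
        return allowed

rm = ReferenceMonitor()
rm.grant("alice", "payroll.db", "read")
print(rm.check("alice", "payroll.db", "read"))    # True
print(rm.check("mallory", "payroll.db", "read"))  # False (denied by default)
```

The single choke point is what makes the mechanism analyzable: because every access goes through one small `check`, that one function is the part that must be verified and protected.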
4. OPEN DESIGN
* Design should not be a secret.
* Protection mechanisms separate from protection keys
  - Pins and tumblers vs. combinations
  - Security design vs. passwords
* Provision for open review of protection mechanism
* Offer bounty for penetrations
5. SEPARATION OF DUTIES
* Enforce distinct roles; firewalls of protection
* Require multiple authorizations, not all in one (avoid superuser)
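The multiple-authorization requirement can be sketched as a small policy check (a hypothetical illustration; role and function names are invented): a sensitive action proceeds only when every required role has signed off and no single person covers more than one of those roles.

```python
# Sketch of separation of duties: a sensitive action needs sign-offs
# covering every required role, and no single person may hold more
# than one required role (no "superuser").  Names are illustrative.

def authorize(approvals, required_roles):
    """approvals: list of (person, role) sign-offs."""
    by_role = {}
    for person, role in approvals:
        by_role.setdefault(role, set()).add(person)
    if not required_roles <= set(by_role):
        return False            # some required role never signed off
    # Count distinct people across the required roles; fewer people
    # than roles means one person is wearing two hats.
    people = [p for role in required_roles for p in by_role[role]]
    return len(set(people)) >= len(required_roles)

roles = {"initiator", "approver"}
print(authorize([("ann", "initiator"), ("bob", "approver")], roles))  # True
print(authorize([("ann", "initiator"), ("ann", "approver")], roles))  # False
```

The second call fails precisely because one person holds both roles, which is the superuser pattern the principle is meant to avoid.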
6. LEAST PRIVILEGE
* Require need-to-know
* Allocate just what is needed

7. LEAST COMMON MECHANISM
* Minimize amount of mechanism common to more than one user.
* Minimize pervasiveness of shared variables.

8. PSYCHOLOGICAL ACCEPTABILITY
* Easy to use and understand; no incentive to circumvent.
9. WORK FACTOR
* Related to degree of difficulty of cracking
* Increased possibility of exposure

10. NONCOMPROMISIBLE AUDITING
* Need to record all interesting events, particularly potential compromises (survivability/security/reliability)
* Thresholds depending on context
* Must be tamperproof: nonbypassable, nonalterable
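One common way to make an audit trail tamper-evident, a step toward the nonalterable requirement, is hash chaining: each entry includes a hash covering the previous entry, so altering any recorded event invalidates every later link. The sketch below is minimal and illustrative; a deployed log would also need write-once storage or a trusted remote collector to be nonbypassable.

```python
import hashlib

# Sketch of a tamper-evident audit trail via hash chaining.  Each
# entry's hash covers the previous entry's hash, so any alteration of
# a recorded event breaks the chain from that point onward.

class AuditLog:
    def __init__(self):
        self.entries = []       # list of (event, chained_hash)
        self._prev = b"genesis"

    def record(self, event):
        h = hashlib.sha256(self._prev + event.encode()).hexdigest()
        self.entries.append((event, h))
        self._prev = h.encode()

    def verify(self):
        prev = b"genesis"
        for event, h in self.entries:
            if hashlib.sha256(prev + event.encode()).hexdigest() != h:
                return False    # chain broken: tampering detected
            prev = h.encode()
        return True

log = AuditLog()
log.record("login alice")
log.record("read payroll.db")
print(log.verify())             # True: chain intact
log.entries[0] = ("login mallory", log.entries[0][1])   # tamper
print(log.verify())             # False: tampering detected
```

Note what this does and does not give you: an attacker who rewrites an old entry is detected, but one who can truncate the log or suppress recording entirely is not, which is why nonbypassability is listed as a separate requirement.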
11. PRINCIPLE OF LEAST SURPRISE
* Relevant especially to systems in the large
* Vital in human interfaces (user-interface coherence), and also in accessible internal program interfaces
Implementation Principles
- - - - - - - - - - - - - - - - - - -
Superb design can be compromised by shoddy implementation. Good implementation is vital.
* Implementation structure should relate to design.
* Use appropriate higher-level languages.
* Use appropriate hardware.
* Defer binding until essential.
* Defer optimization until needed.
* Anticipate later problems.
Good Programming Language Features
- - - - - - - - - - - - - - - - - - -
Compatibility with methodology
Ease of programming and understanding
Compilable into efficient code
Support for separate compilation
Encapsulation of data types
Strong typing and type safety
Symmetry (built-in, extensions)
Dynamically declarable attributes
Dynamic creation and deletion
Ease of expressing concurrency
Sound synchronization primitives
Clean handling of exceptions
Initialization and finalization
Controlled argument passing
No aliases
Riskful Programming Language Problems
- - - - - - - - - - - - - - - - - - -
Nonisolated dynamic memory management
Multitasking (synchronization, process isolation)
Dependence on untrustworthy runtime libraries
Do-it-yourself exception handling
GoTos and nonnested calling semantics
Unchecked arguments, bounds, arrays; buffer overflows
Lack of strong typing
Unchecked escapes into machine code
Leaky garbage collection
Undetectable Trojan horses
Roles of Formal Methods in Survivability
Formal methods can play an important role in the attainment of systems and networks that must achieve generalized survivability, in specification, design, and execution. Great improvements in system behavior can be realized when the requirements (for survivability, security, reliability, etc.) have a formal basis. Similarly, there are enormous benefits whenever design specifications have a formal basis, especially if they are derived from well specified requirements rather than the common practice of being established after the fact to represent an ad hoc assembly of already developed software (sometimes referred to as putting the cart before the horse). Formal design verification then involves formal demonstrations that the specifications are consistent with their requirements, providing no less than required, and to the extent that the absence of Trojan horses can be demonstrated, nothing unexpected that might be harmful. Verification of designs is difficult for systems that were not designed to be readily analyzed, but can nevertheless be valuable in legacy systems (as in analyses of the risks associated with the Year-2000 problem). Finally, although it is less commonly practiced in software, formal code verification can demonstrate that a given implementation is consistent with its specifications. Formal hardware verification is being used increasingly, and demonstrates the potential effectiveness of formal methods where there are considerable risks (financial or otherwise) of improper design and implementation.
Various formal methods can be valuable in specifying and analyzing requirements, designs, and implementations. Of particular importance in connection with survivability are techniques that can provide formal relationships between different layers of abstraction, with respect to requirements and specifications alike. The use of formal methods is recommended in particularly critical applications, and can help move the current highly unpredictable ad hoc development process into a much more predictable formal development process. In the long run, use of such techniques can dramatically decrease the risks of system failure. Contrary to popular myth, judicious use of formal methods can also decrease the overall development and operating costs, especially when the costs of aborted developments (e.g., cancellations of IRS, FAA, FBI systems) are considered, along with the costs of overruns, delays in delivery, and subsequent maintenance.
Judicious use of formal methods can have a very high payoff, particularly in requirements, specifications, algorithms, and programs concerned with especially critical functionality, such as concurrency, synchronization, avoidance of deadlocks and race conditions in the small, and perhaps even network stability and survivability in a larger context, derived on the basis of more detailed analyses of components. There is no substitute for using demonstrably sound algorithms.
Of particular importance is the formal analysis of requirements: for example, determining whether a given set of requirements at a particular layer of abstraction is internally consistent, whether different sets of requirements at a lower layer are fundamentally incompatible with one another, and whether the requirements at a lower layer are consistent with those at the layer above. Once such an analysis is done, it is also beneficial to determine whether system specifications and implementations are consistent with the relevant requirements.
It must be emphasized that the most valuable uses of formal methods are in finding flaws and inconsistencies, not in attempting to prove that something is completely correct. However, formal methods are not absolute guarantees, because problems can exist outside their scope of analysis. For example, suppose that a given analysis detects no flaws or inconsistencies in a specification or implementation. It is still possible that the requirements are inadequate (for example, the specifications could fail to prevent a problem not covered by the requirements), or that the analysis methods themselves are flawed. For these reasons, extensive testing of developed systems is also important. Unfortunately, testing is itself inherently incomplete and incapable of discovering many types of problems, for example, those stemming from distributed-system interactions and concurrency failures, subtle timing problems, unanticipated hardware failures, and environmental effects. Exhaustive testing over all possible scenarios is basically impossible in any complex system.
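The contrast between sampling a few executions and analyzing all of them can be illustrated on a toy concurrency example. The sketch below (hypothetical, in the spirit of explicit-state model checking) enumerates every interleaving of two processes that use a naive check-then-set lock, and finds the interleavings in which both processes enter the critical section; a handful of random test runs could easily miss them.

```python
from itertools import permutations

# Toy illustration of exhaustive interleaving analysis applied to a
# naive "check then set" lock.  Each of two processes executes check,
# set, enter in order; because check and set are not atomic, some
# interleavings let both processes enter the critical section.

STEPS = ["check", "set", "enter"]

def violates_mutex(schedule):
    """schedule: tuple of process ids giving the interleaving order."""
    lock = False
    pc = [0, 0]                 # per-process program counter
    in_cs = [False, False]      # who reached the critical section
    for p in schedule:
        if pc[p] >= len(STEPS):
            continue            # this process already finished
        step = STEPS[pc[p]]
        if step == "check" and lock:
            continue            # lock held: retry at a later slot
        if step == "set":
            lock = True
        if step == "enter":
            in_cs[p] = True
        pc[p] += 1
    return in_cs[0] and in_cs[1]

# Exhaustively explore every distinct interleaving of the six steps.
schedules = set(permutations([0, 0, 0, 1, 1, 1]))
bad = [s for s in schedules if violates_mutex(s)]
print(len(bad) > 0)             # True: the race is found exhaustively
```

A serial schedule such as (0, 0, 0, 1, 1, 1) is safe; only certain interleavings expose the flaw, which is why a few test runs are so much weaker than exploring the whole (here, tiny) state space, and why real systems, whose state spaces cannot be enumerated, need sound algorithms rather than testing alone.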
Past Uses of Formal Methods
- - - - - - - - - - - - - - - - - - -
Fault tolerance and Byzantine algorithms; SRI's 1970s Software Implemented Fault Tolerance (SIFT) and Provably Secure Operating System (PSOS); security and cryptographic protocols; network protocol development; hardware verification and computer-aided design; compiler correctness; and human safety; but not yet for survivability, or for security and fault tolerance in combination.

The European notion of dependability attempts to encompass various requirements.
Belief logics for cryptographic protocols (e.g., Burrows-Abadi-Needham BAN logic)
Formal methods can also be used in execution: proof-carrying code provides evidence that a component has not been tampered with. See references to George Necula and others.
Directions for the Future
- - - - - - - - - - - - - - - - - - -
Establishment of survivable heterogeneous open-system and network architectures, designed for the full set of requirements, with robust information infrastructure, exploiting robust mobile code; focus on selected trustworthy critical components; minimizing trustedness in dumb terminals and hand-held wireless communicators; real-time detection of anomalies; robust configuration management

Dramatically improved system development practice and procurement practice
Better use of research and development, sensible fault tolerance, security; formal methods for specification, selected proofs of critical properties of critical modules, proof-carrying code, safe reconfiguration, ...
Establishment of explicit requirements and understanding of the interdependencies

Better system/software engineering practice: disciplined use of structural organizing principles for survivable systems, such as abstraction, composition, encapsulation, reusability, separation, least privilege; domains; baseline survivable architectures; enriched IPSec/IPv6 for realistic survivability requirements; adherence to the Generally Accepted System Security Principles (GASSP)
(http://web.mit.edu/security/www.gassp1.html)

Practical development and use of real-time analysis and response subsystems coupled with dynamic configuration control
Find ways to develop the missing pieces of the computer-communication infrastructure, and ruggedize the best R&D available. We need sound operating systems for boundary controllers, crypto, trustworthy servers, pervasive authentication, and architectures that localize critical trustworthiness needs.

Provide better architectural and implementation guidance to avoid characteristic weaknesses.
Improve the evaluation criteria framework (Common Criteria!) and establish specific requirements for evaluation of survivability
General References
- - - - - - - - - - - - - - - - - - -
Computer-Related Risks and the arl-one report provide an enormous number of references. For example:

Software engineering: many references to works by Dijkstra beginning in the late 1960s, Parnas in the 1970s, F.P. Brooks, ...
Robust systems: Multics (1965 and on, http://www.multicians.org), T.H.E. system (Dijkstra 1968), domains of protection (Schroeder 1972), confined execution (Lampson 1973), PSOS (1975, 1977, 1980), Rushby's isolation kernels (Rushby 1984).
Early efforts linking security and reliability: Lampson 1974, Dobson and Randell 1986, Randell and Dobson 1986.
Reading for the Class Periods on Architectures
- - - - - - - - - - - - - - - - - - -
Read Chapter 7 of arl-one:
http://www.csl.sri.com/neumann/arl-one.html

Look at the Feiertag-Neumann PSOS paper for its hierarchical layering and object-oriented architecture:
http://www.csl.sri.com/neumann/psos.pdf

Scan the Proctor-Neumann paper for its architectural content, not for the covert channel analysis:
http://www.csl.sri.com/neumann/ncs92.html