6 Approaches for Overcoming Deficiencies 1

ENPM 808s
Information Systems Survivability:
6. Approaches for Overcoming These Deficiencies
- - - - - - - - - - - - - - - - - - -
What exists (not enough) and what is needed: architecture, components, configuration management, infrastructures, protocols, standards and criteria, and above all a coherent collection of defensive measures that can be seamlessly integrated

NOTE: Lectures 6 and 7 are paired. The exact split as to what goes in which lecture is not important.

Typical Defensive Measures
Approach Reliability     Security       Performance
- - - - - - - - - - - - - - - - - - - - - - - - - - -
Prevent  Architectural   Architectural  Architectural
           robustness:     robustness:    robustness:
         Redundancy,     Isolation,     Spare capacity, 
         Error correct'n Authentication Reconfigurability
         Fault tolerance Authorization  Multiprocessing
           in HW and SW  Differential     and highly 
         Synchronization   access ctls    distributed
         Generalized     Generalized      systems
           dependence      dependence
- - - - - - - - - - - - - - - - - - - - - - - - - - -
Detect   Redundancy:     Integrity      Performance 
         Error detection   checking;      monitoring
           in HW and SW  Anomaly+misuse 
                           detection         
- - - - - - - - - - - - - - - - - - - - - - - - - - -
React    Forward/backw'd Security-      Reconfiguration
           recovery in     preserving
           HW and SW       reconfiguration  
- - - - - - - - - - - - - - - - - - - - - - - - - - -
Iterate  Redesign,       Redesign,      Redesign,
           reimplement:    reimplement:   reimplement:
         Fault removal   Exploratory    Experimental
           or tolerance    code patches   tradeoffs 
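
As an illustration of the Detect and React rows, here is a minimal sketch (in Python; the component names and thresholds are invented for illustration, not taken from any particular system) of a monitoring step that flags anomalies and suggests reactive measures:

  # Minimal sketch of a detect-then-react cycle (hypothetical names and thresholds).
  from dataclasses import dataclass

  @dataclass
  class HealthSample:
      error_rate: float    # detected errors per second (error detection in HW/SW)
      auth_failures: int   # failed authentications in the sampling window
      load: float          # fraction of capacity in use (performance monitoring)

  def react(sample: HealthSample) -> list[str]:
      """Return the reactive measures suggested by one monitoring sample."""
      actions = []
      if sample.error_rate > 0.01:       # illustrative threshold
          actions.append("initiate forward/backward recovery")
      if sample.auth_failures > 5:       # illustrative threshold
          actions.append("security-preserving reconfiguration")
      if sample.load > 0.9:              # illustrative threshold
          actions.append("reconfigure onto spare capacity")
      return actions

  print(react(HealthSample(error_rate=0.05, auth_failures=2, load=0.95)))
  # ['initiate forward/backward recovery', 'reconfigure onto spare capacity']
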
Protocols
- - - - - - - - - - - - - - - - - - -
The existing Internet protocols (TCP, IP, FTP, Telnet, UDP, SMTP, etc.) must be extended, revised, augmented, and/or replaced to foster greater robustness.

IPSEC and IP Version 6 are seriously incomplete. They provide end-to-end encryption, but are inadequate with respect to user authentication, integrity, prevention of denials of service, key management, and other issues that are fundamental to achieving meaningfully robust systems and networks. They treat reliability and survivability issues only minimally.

Evaluation Criteria
- - - - - - - - - - - - - - - - - - -
U.S. evaluation criteria for secure systems (TCSEC, Orange Book, Rainbow Series) are fragmentary and seriously deficient for networks and distributed systems: incomplete, very narrow in their view of security, and overly constraining. Very weak on authentication and integrity, insider misuse, and Trojan horses. Tolerates fixed passwords! Static. Functionality and assurance are poorly matched. Very outdated. MLS oriented. Coverage of software engineering and reliability is almost nonexistent. (See PGN, Rainbows and Arrows: How the Security Criteria Address Computer Misuse, National Computer Security Conference, 1990.)

Evaluation Criteria (more)
- - - - - - - - - - - - - - - - - - -
The Common Criteria are emerging for security (http://csrc.nist.gov/cc), following the Canadian (CTCPEC) and European (ITSEC) criteria. However, they are merely frameworks, with roll-your-own specifics against which systems are evaluated. Functional and assurance requirements are separated (good).

There are as yet no adequate criteria for survivable systems and networks. Interoperability, configurability, and many other requirements are absent.

Standards and criteria are important, but beware of bad ones and good ones misapplied.

Types of Computer Misuse (review)
- - - - - - - - - - - - - - - - - - -
External misuse (EX)
1. Visual spying
2. Misrepresentation
3. Physical scavenging
Hardware misuse (HW)
4. Logical scavenging
5. Eavesdropping
6. Interference
7. Physical attack
8. Physical removal
Masquerading (MQ)
9. Impersonation
10. Piggybacking attacks
11. Playback and spoofing
12. Network weaving
Pest programs (PP)
13. Trojan-horse attacks
14. Logic bombs
15. Malevolent worms
16. Virus attacks
Bypassing authentication/authoriz'n (BY)
17. Trapdoor attacks
18. Authorization attacks
Active misuse of authority (AM)
19. Creation, modification, use, denials
20. Incremental attacks
21. Denials of service
Passive misuse of authority (PM)
22. Browsing randomly or searching
23. Inference and aggregation
24. Covert channel exploitation
25. Misuse through inaction (IM)
26. Use as an indirect aid (IN)
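
The numbered techniques above group into classes keyed by the abbreviations used in the matrix that follows. A minimal Python sketch of that grouping (purely illustrative; the class/number correspondence is taken directly from the list and the matrix header):

  # Misuse classes and their technique numbers, as used in the matrix below.
  MISUSE_CLASSES = {
      "EX": range(1, 4),    # External misuse
      "HW": range(4, 9),    # Hardware misuse
      "MQ": range(9, 13),   # Masquerading
      "PP": range(13, 17),  # Pest programs
      "BY": range(17, 19),  # Bypassing authentication/authorization
      "AM": range(19, 22),  # Active misuse of authority
      "PM": range(22, 25),  # Passive misuse of authority
      "IM": range(25, 26),  # Misuse through inaction
      "IN": range(26, 27),  # Use as an indirect aid
  }

  def classify(technique_number: int) -> str:
      """Map a technique number (1-26) to its misuse-class abbreviation."""
      for abbrev, numbers in MISUSE_CLASSES.items():
          if technique_number in numbers:
              return abbrev
      raise ValueError(f"unknown technique number: {technique_number}")

  assert classify(14) == "PP"   # 14 = logic bombs, a pest program
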
Orange Book\ Misuse      EX  HW  MQ  PP,BY   AM  PM PM PM IM IN
Criterion   \Technique  1-3 4-8 9-12 13-18 19-21 22 23 24 25 26
---------------------------------------------------------------
TCSEC Security Policy:
Discretionary access control          (*)    *    *
Object reuse                         17b,c
Labels & label integrity               *     *    *  *  *
Exportation (3 criteria)    (4)        *     *    *  *  *
Labeling human output   (3)            *     *    *  *  *
Mandatory access control               *     *    *  *  *
Subject sensitivity labels             *     *    *  *  *
Device labels                          *     *    *  *  *
TCSEC Accountability:
Identification/authentication    (9)   *     *    * (*)(*)
Audit                            (*)   *     *    *  *  * (*)(*)
Trusted path                      *    * 
TCSEC Assurance:
System architecture              (*)   *     *    *  *  *
System integrity                       * 
Security testing                 (*)  (*)   
Design spec/verification               *     *    * (*) *
Covert channel analysis                          (*)(*) *
Trusted facility mgmt   (3)           (*)    *    * (*)
Configuration management         (*)   *
Trusted recovery                 (*)   *
Trusted distribution                   *
TCSEC Documentation:   
Security features user's guide         *     *    *  *  *
Trusted facility manual                *     *    *  *  *
Test documentation                     *     *    *  *  *
Design documentation                   *     *    *  *  *

Future Needs
- - - - - - - - - - - - - - - - - - -
Detailed functional requirements encompassing realistically complete sets of survivability-relevant requirements (security, reliability, fault tolerance, etc.) that could be used for a wide range of procurements, presumably parametrically rather than generically

Detailed guidance on how to handle the interactions among the various constituent subtended requirements for survivability

Design frameworks and alternative families of logical architectures and architectural structures that facilitate the composition of survivable systems and networks out of subsystems

Guidance on good system engineering and software engineering practice, and far-reaching approaches toward system development

Future Needs, continued
- - - - - - - - - - - - - - - - - - -
Guidance on good procurement practice

Guidance on achieving interconnectivity and reusability

Guidance on hardware-software tradeoffs, particularly with respect to crypto implementations. (There is a great need for hardware-based authentication as in Fortezza, but hardware-based crypto is by no means a panacea.)

Future Needs, continued
- - - - - - - - - - - - - - - - - - -
Identification of interoperable development environments and constructive tools. However, it must be recognized that tools are not a panacea; their use may in fact be counterproductive.

Identification of fundamental gaps in the existing standards, and identification of which standards are or are not truly compatible with which others

Identification of fundamental components that are absent or inadequate in the existing products, and guidance on what is actually commercially available rather than merely a hopefully emerging product.

Missing Components
- - - - - - - - - - - - - - - - - - -
Secure and robust boundary controllers (not just firewalls based on weak operating systems that pass executable content), network controllers, and server systems. To some extent, we can do without really secure and robust client systems (workstations, laptops, palmtops) if we have strong servers.

Cryptographic co-processors and distributed authentication servers using nontrivial authentication rather than fixed passwords (e.g., trustworthy certificates, smart cards, cryptographically generated tokens)
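
As one hedged illustration of a cryptographically generated token (a generic sketch, not any particular product or standard), a time-windowed HMAC over a shared secret can replace a fixed password; the window length and truncation below are arbitrary choices:

  # Sketch of a time-based, HMAC-derived one-time token (illustrative parameters).
  import hmac, hashlib, time

  def one_time_token(shared_secret: bytes, window_seconds: int = 30) -> str:
      """Derive a short-lived token from a shared secret and the current time window."""
      window = int(time.time()) // window_seconds
      digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).hexdigest()
      return digest[:8]   # truncated for display; a deployed scheme would follow a standard

  def verify(shared_secret: bytes, presented: str, window_seconds: int = 30) -> bool:
      return hmac.compare_digest(presented, one_time_token(shared_secret, window_seconds))

  secret = b"provisioned out of band, e.g., on a smart card"
  assert verify(secret, one_time_token(secret))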

Bilateral "trusted path" mechanisms (must be trustworthy)

Multilevel-secure servers when MLS is a necessity

See subsequent lectures on architecture.

Defenses Against Compromise
- - - - - - - - - - - - - - - - - - -
Exogirding vs. compromise from outside: Firewalls, authentication, encryption, electromagnetic shielding

Endogirding vs. compromise from within: Compile-time and run-time checks, fault tolerance, cryptographic authentication and integrity checks, monitoring and real-time misuse/anomaly detection

Undergirding vs. compromise from below: security kernels and trusted computing bases, trustworthy operating systems, special-purpose hardware and co-processors, integrity checks
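
As a minimal sketch of one endogirding measure, here is a run-time cryptographic integrity check on stored data (in Python; key handling is deliberately simplified for illustration):

  # Run-time integrity check: detect modification of data from within the system.
  import hmac, hashlib

  INTEGRITY_KEY = b"would be held in protected storage; simplified for this sketch"

  def seal(data: bytes) -> bytes:
      """Compute an integrity tag to be stored alongside the data."""
      return hmac.new(INTEGRITY_KEY, data, hashlib.sha256).digest()

  def check(data: bytes, tag: bytes) -> bool:
      """Recompute the tag at run time and compare in constant time."""
      return hmac.compare_digest(tag, seal(data))

  record = b"configuration entry"
  tag = seal(record)
  assert check(record, tag)
  assert not check(record + b" tampered", tag)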

Multilevel Security (MLS)
- - - - - - - - - - - - - - - - - - -
Information may not flow from one entity at a particular security level to another entity at a lower security level.

An entity could be a process, a file, a server, an entire system or network, or any other computer resource.

As a mandatory policy, MLS confines information flow to within levels and compartments. Ideally, if properly implemented in a security kernel or trusted computing base, it cannot be subverted by users or by any higher-layer entities.
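
A minimal sketch of the flow rule, with levels and compartments modeled explicitly (the labels and level ordering shown are illustrative):

  # MLS flow check: information may flow from src to dst only if dst dominates src.
  from dataclasses import dataclass

  LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

  @dataclass(frozen=True)
  class Label:
      level: str
      compartments: frozenset

  def dominates(a: Label, b: Label) -> bool:
      """True if a's level is at least b's and a includes all of b's compartments."""
      return LEVELS[a.level] >= LEVELS[b.level] and a.compartments >= b.compartments

  def flow_allowed(src: Label, dst: Label) -> bool:
      """Flow from src to dst is permitted only when dst dominates src."""
      return dominates(dst, src)

  secret_nato = Label("SECRET", frozenset({"NATO"}))
  confidential = Label("CONFIDENTIAL", frozenset())
  assert flow_allowed(confidential, secret_nato)       # upward flow is permitted
  assert not flow_allowed(secret_nato, confidential)   # downward flow is blocked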

Problems with MLS
- - - - - - - - - - - - - - - - - - -
Mandatory security policies are not well supported in practical commercial systems; they are difficult to use, difficult to configure properly, and operationally require many exception mechanisms that must be trusted. High-watermark policies tend to drive information to higher classification levels.
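
The high-watermark effect can be seen directly: each observation raises a subject's label to the least upper bound of everything it has read, so labels only ratchet upward. A self-contained sketch (levels only, compartments omitted for brevity):

  # High-watermark: a subject's label rises to cover everything it has read.
  LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

  def lub(a: str, b: str) -> str:
      """Least upper bound of two levels."""
      return a if LEVELS[a] >= LEVELS[b] else b

  subject_level = "UNCLASSIFIED"
  for observed in ["CONFIDENTIAL", "UNCLASSIFIED", "SECRET"]:
      subject_level = lub(subject_level, observed)
  print(subject_level)   # SECRET -- anything the subject now writes is labeled SECRET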

The Orange Book MLS criteria are incomplete, and MLS is only a small part of the problem.

Multilevel Integrity (MLI)
- - - - - - - - - - - - - - - - - - -
An entity at a particular integrity level may not depend for its correctness (the Parnas "uses" relation) on another entity at a lower integrity level.

An entity could be a process, a file, a server, an entire system or network, or any other computer resource.

As a mandatory policy, it restricts the damaging effects of Trojan horses to within levels and compartments. However, it seems awkward to use without many trusted exceptions, because of the vast dependence on untrusted editors, compilers, and other utilities. Web-of-trust arguments tend to lead to low-watermark integrity.
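
A minimal sketch of checking a "uses" relation against integrity levels; the same shape applies to the availability and survivability analogues on the next two slides (the components and levels here are invented for illustration):

  # MLI check: flag any component that "uses" a component of lower integrity.
  INTEGRITY = {"kernel": 3, "build_tool": 2, "editor": 1, "compiler": 1}   # illustrative

  USES = {                       # Parnas "uses" relation: key depends on each listed value
      "kernel": ["compiler"],    # e.g., the kernel is built with an untrusted compiler
      "build_tool": ["editor", "compiler"],
  }

  def violations(uses, integrity):
      """Yield (user, used) pairs where a component depends on lower-integrity code."""
      for user, used_list in uses.items():
          for used in used_list:
              if integrity[used] < integrity[user]:
                  yield user, used

  print(list(violations(USES, INTEGRITY)))
  # [('kernel', 'compiler'), ('build_tool', 'editor'), ('build_tool', 'compiler')]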

Multilevel Availability (conceptual)
- - - - - - - - - - - - - - - - - - -
An entity at a particular availability level may not depend on another entity at a lower availability level.

An entity could be a process, a file, a server, an entire system or network, or other computer resource.

As with MLS and MLI, MLA would have availability levels associated with each entity.

Multilevel Survivability (conceptual)
- - - - - - - - - - - - - - - - - - -
An entity at a particular survivability level may not depend on another entity at a lower survivability level.

An entity could be a process, a file, a server, an entire system or network, or any other computer resource.

Combined with generalized dependence, this could be a useful architectural organizing concept, although it is probably too restrictive for entire systems and networks.

A Few Comments
- - - - - - - - - - - - - - - - - - -
An Orange Book reference monitor is supposed to be nonbypassable, tamperproof, and small enough to be analyzable. Typical kernels and TCBs are not.

The Java Security Manager for mobile code is not consistent with those three requirements, which hinders mobile-code survivability.

Windows NT is reportedly 48 million lines of source code (with lots of memory required to load and run it), plus 7.5 million lines of test code.

The metaphor that you cannot make a silk purse out of a sow's ear seems quite apt, although there are a few exceptions in fault tolerance.

