Architectural Implications of Covert Channels

Norman E. Proctor and Peter G. Neumann
Computer Science Lab, SRI International, Menlo Park CA 94025
Proceedings of the Fifteenth National Computer Security Conference
Baltimore, Maryland, 13-16 October 1992, pages 28-43.

Copyright 1992, Norman E. Proctor and Peter G. Neumann.
This paper is based on work performed under Contract F30602-90-C-0038 from the U.S. Air Force Rome Laboratory, Computer Systems Branch, Griffiss Air Force Base, NY 13441 [10].

Abstract

This paper presents an analysis of covert channels that challenges several popular assumptions and suggests fundamental changes in multilevel architectures. Many applications could benefit from a practical multilevel implementation but should not tolerate any compromise of multilevel security, not even through covert channels of low bandwidth. With the present state of the art, the applications either risk compromise or forgo the benefits of multilevel systems because multilevel systems without covert channels are grossly impractical. We believe that the presence of covert channels should no longer be taken for granted in multilevel systems.

Many covert channels are inherent in the strategies that multilevel systems use to allocate resources among their various levels. Alternative strategies would produce some sacrifice of efficiency but no inherent covert channels. Even these strategies are insufficient for general-purpose processor designs that are both practical and multilevel secure.

The implications for multilevel system architectures are far-reaching. Systems with multilevel processors seem to be inherently either impractical or insecure. Research and development efforts directed toward developing multilevel processors for use in building multilevel systems should be redirected toward developing multilevel disk drives and multilevel network interface units for use with only single-level processors in building multilevel distributed operating systems and multilevel distributed database management systems. We find that distributed systems are much easier to make both practical and secure than nondistributed systems. The appropriate distributed architectures are, however, radically different from those of current prototype developments.

Keywords: covert channels, distributed systems, multilevel security, system architecture.


Introduction

This introduction describes covert channels and their exploitation. The next section gives some background on covert channel research and relevant standards. After that, we identify the circumstances in which covert channels need not be avoided when designing a system for an installation. First, we consider various reasons why covert channels might be tolerable in a multilevel system. Then, since covert channels are found only in multilevel systems, we consider when alternatives to multilevel systems are appropriate for an installation. This seems to leave a large class of installations that would want multilevel systems free of covert channels.

We next turn our attention to various reasons why multilevel systems have covert channels and consider how the needs of applications can be met without producing covert channels. We consider in particular how a multilevel system can allocate resources among levels without covert channels that compromise security and without inefficiencies that leave the system impractical. We describe the problems with dynamic allocation and identify three alternative strategies for secure and practical resource allocation: static allocation, delayed allocation, and manual allocation.

We describe some practical approaches to multilevel allocation for various devices, including multilevel disk drives, and explain why allocating software resources among levels is so troublesome. Finally, we present the implications for multilevel system architectures and suggest new directions for research and development.

To the Reader

Earlier versions of this paper were misinterpreted by some very knowledgeable readers, leading us to clarify the exposition. Nevertheless, we warn readers familiar with the problems of covert channels in multilevel systems that, because we are questioning some popular assumptions about covert channels, what you already know about covert channels may cause you to misunderstand our main points. Thus, please forgive our belaboring certain central issues and slighting other fascinating topics that seemed less central to the discussion.

Covert Channels

Covert channels are flaws in the multilevel security of a system. [NOTE: Similar flaws in other aspects of security are sometimes called covert channels, too, but a covert channel in this paper is always a communication channel in violation of the intended multilevel policy of the system.] The channels are found only in multilevel systems. A malicious user can exploit a covert channel to receive data that is classified beyond the user's clearance. Although a covert channel is a communication channel, it is generally not intended to be one and may require some sophistication to exploit. It may take considerable processing to send one bit of data through the channel; error control coding is needed to signal reliably through a noisy covert channel. Exploitation may require the help of two Trojan horses. One runs at a high level and feeds high data into the channel, and the other Trojan horse runs at a lower level and reconstructs the high data for the malicious user from the signals received through the covert channel. The low Trojan horse is not needed if the high one can send a straightforward signal that can be directly interpreted. Also, as we explain later, malicious users can exploit some special kinds of covert channels directly without using any Trojan horses at all.

A covert channel is typically a side effect of the proper functioning of software in the trusted computing base (TCB) of a multilevel system. Trojan horses are untrusted programs that malicious users have written or otherwise introduced into the system. A Trojan horse introduced at a low level can usually execute at any higher levels. [NOTE: If a program could run only at the level where it was installed, it would be harder for a malicious user with a low clearance to introduce the high-level Trojan horse. It would also be inconvenient to install legitimate software.]

A malicious user with a high clearance does not need to use covert channels to compromise high data. The mandatory access controls would permit reading the high data directly. Ordinary reading is certainly an easier way to receive the data if the discretionary access controls permit ordinary reading. If not, it is easier for a Trojan horse to copy the data into another place where the discretionary controls do permit the malicious user to read it than to exploit a covert channel to transmit the data.

The levels that concern us here are not necessarily hierarchical confidentiality levels. They may instead be partially ordered combinations of hierarchical levels with sets of compartments. We assume that a level might have some compartments. This means that two different levels may be comparable or incomparable. If comparable, one level is higher and the other is lower. If incomparable, neither level is higher or lower. A higher level denotes greater intended secrecy or confidentiality. [NOTE: For simplicity, we assume that levels are for confidentiality although they could instead be for integrity or for both integrity and confidentiality. The levels for mandatory integrity are duals of confidentiality levels; covert channels can compromise mandatory integrity in a direct parallel to their compromise of mandatory confidentiality. For example, a Trojan horse running at a low integrity level might covertly contaminate high integrity data where overt contamination was prevented by multilevel integrity.]

Noise in Covert Channels

The bandwidth of a covert channel is the rate at which information or data passes through it. A noisy channel intentionally or accidentally corrupts the data signal with errors so that the information rate is slower than the data rate. A very noisy channel with an apparent bandwidth of one bit of data per second might actually leak only one millionth of a bit of usable information per second. Such a low bandwidth is beneath the notice of some. A malicious user who might have received the classified answer to a yes-or-no question almost immediately if the channel had no noise would expect to wait almost twelve days for the answer. Of course, the channel still compromises security even though extremely high noise makes for an extremely low effective bandwidth.

Noise in a covert channel may also make its information probabilistic. For example, consider a slower covert channel with a bandwidth of a thousandth of a data bit per second where each bit received has a seventy-five percent chance of being the same as what was sent and a twenty-five percent chance of being wrong. A malicious user exploiting the channel must receive the answer to a yes-or-no question many times before believing whichever answer was received more often. The expected wait for each answer is about seventeen minutes, but it takes around five hours for confidence in the answer to reach ninety-nine percent. Here again, compromise of security is postponed but not prevented.
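The arithmetic in the last two paragraphs can be checked with a short calculation. The sketch below is ours, not from the paper: it computes the probability that a majority vote over n received bits is correct when each bit has a seventy-five percent chance of being right, then finds how many repetitions are needed before confidence reaches ninety-nine percent at one data bit per thousand seconds.

```python
from math import comb

def majority_confidence(n, p=0.75):
    """Probability that the majority of n independently received bits
    matches the bit that was sent, given per-bit accuracy p (n odd)."""
    assert n % 2 == 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Repeat the yes-or-no answer until a majority vote is 99% reliable.
n = 1
while majority_confidence(n) < 0.99:
    n += 2

hours = n * 1000 / 3600  # one data bit every thousand seconds
print(n, round(hours, 1))  # 19 repetitions, roughly 5.3 hours
```

The result, nineteen repetitions taking a little over five hours, agrees with the "around five hours" figure above.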

Background

Various approaches exist for detecting and analyzing covert storage channels [2,12] and for avoiding some of them [5]. For covert timing channels, additional approaches exist for detection, analysis, and avoidance [4,14]. Some approaches attempt to address both types of covert channels [3]. The notions of restrictiveness and composability [8] seek to preserve the absence of covert channels under composition, assuming their absence in the underlying components.

At the end of this paper, we discuss some new directions for multilevel system designs that avoid all covert channels. The architectures themselves are not new, of course. Others have considered similar architectures for somewhat different reasons [11,13].

Much of the research and development in covert channels for practical systems has been devoted to reducing bandwidths to what some consider to be slow rates. Sometimes delays are introduced to lower bandwidth, and sometimes noise is added to lower the usable bandwidth. These approaches merely ensure that malicious users exploiting the channels do not enjoy the same quick response times to their queries as legitimate users enjoy. The assumption may be that if it takes hours or days for an answer to a simple illicit question, malicious users will ignore the covert channel and prefer more traditional methods of compromise, such as blackmailing or bribing cleared users. Although we do recognize some situations where covert channels are tolerable, we believe the reason is rarely because of low bandwidths. For most installations, we believe that all covert channels should be completely avoided, not simply made small. A clever, malicious user can generally compromise classified information with even the narrowest covert channel.

Other research in covert channels for practical systems has addressed the elimination of some specific varieties of channels. The other varieties, typically including all timing channels, are permitted in a multilevel system because the developers could not find a way to eliminate them without rendering the system impractical for its legitimate functions. The assumption may be that any channel that is too hard for a developer to eliminate must also be too hard for a malicious user to exploit, but this assumption is so clearly fallacious that it is never explicit.

A cynical interpretation of this willingness to tolerate residual channels is that, because many users have simply accepted systems with covert channels despite the potential for security violations, developers treat a multilevel security policy as an ideal to approach, not as a requirement to meet. A more generous interpretation is that the developers intend to eliminate more and more kinds of covert channels with each new generation of multilevel designs, hoping that someday they can actually design a system with no covert channels. We wish they would go straight for systems free of covert channels, and we believe the goal can be reached.


The U.S. Defense Department standards in the Trusted Computer System Evaluation Criteria [9], also known as the Orange Book, place restrictions on covert channels in secure systems. Systems evaluated at classes C1 and C2 would have no covert channels simply because they would always be run at a single level. There are no restrictions on covert channels in a class B1 system, even though the system would probably have plenty of them.

For a class B2 system, an attempt must be made to identify the covert storage channels, measure their bandwidths, and identify events associated with exploitation of the channels. The design must avoid all storage channels with bandwidths over one bit per second, and the audit must be able to record the exploitation events for any storage channels with bandwidths over one tenth of a bit per second. There are no restrictions on covert timing channels. In a class B3 system, the criteria for covert channels are extended to the timing channels.

In a class A1 system, the attempt to identify covert channels must use formal methods, but the criteria are otherwise the same as for class B3. The requirement of formal methods does imply that the informal methods acceptable for classes B2 and B3 may miss some covert channels. Among the channels that formal methods themselves tend to miss are the timing channels.

The criteria for covert channels in other security standards are similar to the Orange Book criteria. Although no standards require avoiding all covert channels, considerable theoretical work has been done on hypothetical systems free of covert channels. This is in part because absolute multilevel security would be better than multilevel security with potential compromise through covert channels. Another reason is surely that absolute security is far easier to express in a mathematical model than is compromised security.

We feel that the tolerance of covert channels in security standards is unnecessary and therefore inappropriate for most multilevel systems. In fairness, when the Orange Book was written, covert channels were believed to be inevitable. This belief remains widespread today. We do not accept the inevitability of covert channels in practical multilevel systems, and we fear that the current tolerance of covert channels is itself a major threat to classified information. The Orange Book and other standards are meant to promote the development of secure systems. The standards should not be used as excuses for developing systems with unnecessary flaws.

Tolerating Covert Channels

A malicious user who is cleared for certain classified data can always compromise the secrecy of the data. The problem with covert channels is that a malicious user with the help of one or more Trojan horse programs can exploit a covert channel to compromise data classified beyond the user's clearance. Installations without malicious users or without Trojan horses can tolerate whatever covert channels a multilevel system might have because the channels would not be exploited.

No Malicious Users

Of course, at any installation with more than one user, one can never be certain that no users are malicious, but a system-high installation might reasonably ignore its covert channels as if there were none. Since running system-high requires that all users be cleared for every level, the security officers of the installation would not expect users to exploit covert channels. To compromise any data in the system, a malicious user does not need a covert channel. Covert channels are tolerable in system-high installations because they do not increase the system vulnerability.

No Trojan Horses

The security officers of some installations will assume that they have no Trojan horses. They may be right, but only because conventional compromise remains easier than exploiting Trojan horses when malicious users have limited technical skills. Few malicious hackers have access to multilevel systems, and few multilevel systems are exposed to malicious hackers. But security officers cannot know whether their installations are among the unfortunate few.

An installation cannot reasonably be assumed free of Trojan horses unless appropriately trained people rigorously check all the programs that run on the system to be sure that none harbor Trojan horses. All new applications and all changes to existing applications must be reviewed. Rigorous reviews are so expensive and time-consuming that the software on the system must be fairly stable. Also, the system must not have any compilers, command interpreters, or similar programs able to create code and bypass the review procedures. Because no Trojan horses are available to exploit them, most covert channels are tolerable in an installation that can afford to ensure that all software is trusted not to contain a Trojan horse. Such a multilevel installation, if any exists, is probably dedicated to one modest-size application program running on a bare processor.

Malicious users can exploit some special covert channels to compromise certain kinds of classified data without employing Trojan horses. Typically, the data might indicate how busy the system currently is at various levels. If the data were only nominally classified, its leakage would not be serious, but release of such data at lower levels could constitute a real compromise of some systems. These special covert channels are, of course, intolerable even when an installation is known to be free of Trojan horses.

Low Bandwidth

It may also be the case that leaks through covert channels are tolerable at some installations, provided the leaks are slow enough. The Orange Book suggests that covert channels with bandwidths under one bit per second are "acceptable in most application environments." This acceptability may simply be a concession to the sorry state of the art where some covert channels are sure to be present in any multilevel system and where merely identifying all the covert channels is generally infeasible.

It is difficult to believe that many security officers worry about how quickly data is compromised instead of worrying about whether it is compromised. Surely most worry about both problems. Nevertheless, a sufficiently low bandwidth could reasonably make covert channels tolerable at installations with special situations. Where all classified data is tactical data with ephemeral classifications, slow covert channels are tolerable if data would no longer be classified by the time it had been released. If leaking the answer to one crucial yes-or-no question is enough to compromise the system, either the classification of that answer must last only a split second or all covert channels must have extremely low bandwidth.

Similarly, at installations where a price tag can be placed on all classified data, some covert channels are tolerable because no Trojan horses to exploit the channels would be cost-effective or because any alternative without covert channels would be too expensive. If covert channel bandwidths are important in performing the cost-benefit analysis, some covert channels may be tolerable because of their low bandwidths. Where data is classified to protect national security, assigning prices is foolish and perhaps illegal.

Lack of Alternatives

Many installations tolerate covert channels simply because every multilevel system under consideration has some and because those in charge feel they need multilevel systems. Fortunately, these difficulties can be overcome. We believe that there can be multilevel systems without covert channels and that there are often suitable alternatives to multilevel systems. The accreditors of automated systems for multilevel applications should not have to tolerate covert channels.

Alternatives to Multilevel Systems

Not all applications have to run on multilevel systems. We mention first two unattractive options that must sometimes be taken. One is not to implement the application at all, and the other is to implement it with manual procedures only. The remaining alternatives are all automated implementations. The potential benefits of automation include convenience, accuracy, speed, and lower costs. These benefits have permitted the implementation of many applications that were infeasible before the advent of computers.

When an application involves only one level of data or when all users are cleared for every level of data, the best alternatives are a single-level system or a system-high system, respectively. But the applications that interest us here have some data classified at levels beyond the clearances of some users of the automated system. A single-level or system-high system cannot accommodate these applications, but a multilevel system is not the only alternative left. Another possibility is a system with an independent subsystem per level (ISPL). ISPL systems tend to be inefficient, but at least they are intrinsically free of covert channels. We present the ISPL architecture mostly because it is useful later for comparisons with more attractive alternatives.

In an ISPL system, there is a separate subsystem for any level where the system as a whole could have some data. Data is stored on the subsystem for the level matching the classification of the data. Additional upgraded copies of the data might be stored on some other subsystems at higher levels. A user has access to a subsystem only if its level is dominated by the user's clearance.

The subsystems are electronically independent. Each subsystem has its own hardware, and the hardware for the subsystem at one level is not connected to any hardware for subsystems at other levels. The subsystems are not completely independent, however. They are parts of a whole system with multiple levels because users sometimes refer to data on a lower subsystem in order to modify data on a higher subsystem. Users might also manually reenter data from a low subsystem into a high one, or operators might transfer data storage media to higher subsystems.

Like single-level systems, ISPL systems are inherently free of covert channels. Multilevel security is compromised only when people fail to follow proper procedures. The automated parts of the system cannot themselves reveal data to a user not cleared for it. However, trying to overcome some of the limitations of an ISPL system may lead to complex procedures, and the complexity brings serious dangers that accidental compromise would become frequent and that malicious compromise would become easy to arrange.

Because the subsystems are independent of each other, none of the coordination among subsystems can be automated. This tends to diminish all the potential benefits of automation. Unless the required coordination among subsystems is minor, an ISPL system may well be too inconvenient, inaccurate, slow, or expensive for an application. An integrated multilevel system may then be the only practical option. Unfortunately, multilevel systems typically have many covert channels.

Some Reasons for Covert Channels

Our aim is to avoid all covert channels in multilevel systems. Present experience, however, is that any practical multilevel system contains many covert channels, despite the attempts of developers to eliminate them. It has been so difficult to avoid covert channels because several highly desirable functions of a multilevel system seem to produce covert channels as a side effect. Fortunately, the essential multilevel functions can be implemented without building covert channels into the system.

The differences in functional capabilities between ISPL systems and multilevel systems highlight the major sources of covert channels in multilevel systems. In an ISPL system, which cannot have covert channels, the absence of connections among the independent subsystems for each level prevents the system from doing all that a multilevel system can do. Among the services requiring some manual assistance in an ISPL system are reading consistent data from lower levels, downgrading overclassified data, writing up reliably, and maintaining consistency among the values of data items at different levels. A multilevel system needs no manual assistance with these services, but the implementation techniques generally introduce covert channels.

Reading Down

An automated system might allow one process to change data that another process is currently reading. Then, the value the reading process receives could reflect neither the value before the change nor the value after the change, but some useless mixture of the two values. Such mixed results from reading are unacceptable in most applications. The usual technique to prevent the problem is for the reading process to lock the data before reading it. The lock is not granted if any other process is currently writing the data, but once the lock is granted, no other process is permitted to write the data until the reading process releases the lock.

In a multilevel system with support for reading down, this technique produces a covert channel. Lower-level processes can detect when a higher process reads down to lower data because the higher process holds a lock that prevents the lower processes from writing the lower-level data. Data cannot be locked for reading down without producing a covert channel.
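To make the channel concrete, here is a toy single-threaded model of our own devising (real exploitation would involve genuine lock primitives and timing): a high Trojan horse holds or releases a read lock on low data once per agreed interval, and a low Trojan horse recovers one bit per interval by testing whether a write would currently be allowed.

```python
class ReadLock:
    """Toy model of a lock on low-level data that a high-level
    reader may hold while reading down."""
    def __init__(self):
        self.held = False
    def acquire(self):
        self.held = True
    def release(self):
        self.held = False
    def try_write(self):
        # A low-level writer is refused while any reader holds the lock.
        return not self.held

lock = ReadLock()

def high_trojan_send(bit):
    # High Trojan horse encodes one bit per interval: lock held = 1.
    if bit:
        lock.acquire()
    else:
        lock.release()

def low_trojan_receive():
    # Low Trojan horse senses the bit by testing whether a write succeeds.
    return 0 if lock.try_write() else 1

message = [1, 0, 1, 1, 0, 0, 1]   # the high data to be leaked
received = []
for bit in message:               # one lock/unlock cycle per interval
    high_trojan_send(bit)
    received.append(low_trojan_receive())
print(received)  # the message, reconstructed below the high level
```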

Different techniques free of covert channels can ensure that high processes do not read inconsistent data [1,6,7]. The most popular technique is for the high process to check whether any lower process may have written the data between the time when the high process started to read the data and the time when it stopped. If so, the read is potentially inconsistent, and the high process repeats the entire read until it is sure that no lower process wrote the data while it was being read. With this technique, there is a serious risk for some applications that a high process trying to read a lengthy and volatile data item may keep retrying for a long time without ever succeeding. Other techniques may be appropriate for those applications.
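The retry technique might be sketched as follows, assuming each low item carries a version counter that lower-level writers increment; the interface is hypothetical, not taken from any particular system.

```python
class LowItem:
    """Low-level data item read piecewise by higher levels.
    Lower-level writers bump a version counter on every write."""
    def __init__(self, parts):
        self.parts = list(parts)
        self.version = 0
    def write(self, parts):
        self.parts = list(parts)
        self.version += 1

def read_down(item):
    """Lock-free read from a higher level: retry until no lower-level
    write occurred during the read, so the copy is consistent."""
    while True:
        before = item.version
        snapshot = list(item.parts)      # the piecewise read
        if item.version == before:       # no writer intervened
            return snapshot
        # otherwise a low write happened mid-read: discard and retry

item = LowItem(["position", "heading"])
print(read_down(item))  # a consistent copy, with no lock ever held
```

In this single-threaded sketch the version can never change mid-read; in a real system the check is what guards against concurrent lower-level writers, and a volatile item can force repeated retries, as noted above.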


Downgrading

All downgrading is inherently an exploitation of a covert channel. When the downgrading is legitimate, one could say that the channel is not really "covert," but the intended downgrade of overclassified data is often accompanied by some incidental and unacknowledged downgrading of other data. A Trojan horse might exploit the channel by manipulating the other data. It may also be possible for a Trojan horse to hide other data in the overclassified data. Multilevel system designs cannot provide legitimate automated downgrading and still avoid all covert channels.

Writing Up

When a user working at a low level upgrades low data to a higher level, the data is said to be written up. [NOTE: If the user were working at the higher level, the upgrade is from reading down, not writing up.] To make the writing reliable, the low user might be notified whether sufficient resources at the higher level are currently available to support the writing up. This notification produces an exploitable covert channel. Suppressing the notification makes writing up unreliable; the user or program that wants to upgrade data never knows whether the writing up worked or not. Applications that need writing up typically need reliable writing up, not hit-or-miss writing up. Reliable writing up can be achieved without covert channels by reserving sufficient resources at higher levels to accommodate all potential requests to write up. This is not easy to implement, and reserving the high resources may constitute a serious loss of efficiency. A practical multilevel system apparently cannot provide reliable writing up without covert channels.
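One way to picture the reservation approach (a sketch under our own assumptions, not a design from the paper): a fixed quota of high-level buffer slots is set aside at configuration time for low-to-high messages, and the acknowledgment a low writer sees depends only on that static quota, never on current high-level activity.

```python
class WriteUpBuffer:
    """Sketch of reliable writing up without a covert channel.
    A fixed quota of high-level buffer slots is reserved at system
    configuration time for low-to-high messages. Acceptance depends
    only on this static quota, never on current high-level activity,
    so the acknowledgment reveals nothing about the high level.
    (Hypothetical interface, for illustration only.)"""
    def __init__(self, reserved_slots):
        self.free = reserved_slots   # fixed at configuration time
        self.pending = []            # held until the high level drains it
    def write_up(self, data):
        # The answer depends only on the low side's own quota.
        if self.free == 0:
            return False
        self.free -= 1
        self.pending.append(data)
        return True

buf = WriteUpBuffer(reserved_slots=2)
print(buf.write_up("report-1"))  # True: a reserved slot was free
print(buf.write_up("report-2"))  # True
print(buf.write_up("report-3"))  # False: the low level's quota is spent
```

The cost shows up exactly where the paragraph above says it does: enough high-level slots must be reserved to accommodate all potential requests, whether or not they ever arrive.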

Consistency Across Levels

When an application requires consistent values in two data items, a change to one may force a change to the other to keep them consistent, or alternatively, a change to one may be forbidden until after the other is changed to a consistent value. This can be problematic in a multilevel system when the two data items are classified at different levels [1]. If the levels are comparable, one approach is secure and the other produces a covert channel. Which is which depends on whether the data item changed is at the lower or higher level. Neither approach is secure if the levels are incomparable due to differing compartment sets.

Fortunately, one result of a rational classification of data is that any criterion of consistency applies to data items that are all at the same level. A data item would never have to be consistent with data items at any other levels. A requirement for consistency with a higher item implies that a user cleared to read the lower item can infer something about the higher item, which must have a consistent value. The existence of the inference suggests either that the lower data should be classified at the higher level or that the higher data should be classified at the lower level. If data were classified rationally, users cleared just for lower data could not infer anything about higher data.

In practice, however, classification is not purely rational, and some applications really may need consistency across levels. This can be achieved without covert channels, provided that reliable writing up is properly implemented and the levels involved are all comparable. The likely cost is gross inefficiency from keeping the writing up reliable and some inconvenience because users must always change the lowest items first. Data consistency across levels, freedom from covert channels, and practicality seem to be incompatible in a multilevel system.
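Assuming reliable writing up is available and the two levels are comparable, the change-the-lowest-item-first discipline might be sketched like this (all names are illustrative):

```python
from collections import deque

# Sketch: consistency between a low item and a comparable high item,
# maintained without a covert channel. Changes go to the lower item
# first and are then carried upward by write-up; nothing flows down.
low_item = {"value": "v1"}
high_item = {"value": "v1"}
write_up_queue = deque()     # reliable write-up channel, low to high

def change(value):
    low_item["value"] = value        # users change the lowest item first
    write_up_queue.append(value)     # then request upward propagation

def high_level_drain():
    while write_up_queue:            # high side applies pending write-ups
        high_item["value"] = write_up_queue.popleft()

change("v2")
high_level_drain()
print(low_item["value"], high_item["value"])  # v2 v2: consistent again
```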

Resource Allocation among Levels

We turn next to another distinction between ISPL systems and multilevel systems, their different abilities to allocate resources among levels. In an ISPL system, the allocation for a level is the hardware in the subsystem for the level. In order to change the allocation for a level, some piece of equipment must be replaced, and reallocating resources from one level to another is likely to involve bringing down two subsystems for a while. In a multilevel system, reallocating resources is more convenient. Resources can often be allocated to whichever level can make the best use of them at the time. This can greatly increase the efficiency of the system. With a multilevel system instead of an ISPL system, the users can get more service from the same hardware or equivalent service from less hardware.

Reading down, downgrading, writing up, and data consistency across levels, as we explained before, are not just functional distinctions between ISPL systems and multilevel systems, but also common reasons for covert channels in multilevel systems. Similarly, resource allocation is a common reason why multilevel systems have covert channels, as well as being a functional difference from ISPL systems.

Because a system often has many kinds of resources, resource allocation may be the reason for most of the covert channels in a multilevel system. Among the space resources to be allocated are physical memory, entries in operating system tables, storage on disk, and bandwidth in a network connection. The allocable time resources include processor time (CPU time), service time from the operating system, disk access time, and access time to other multilevel devices such as terminals, printers, tape drives, and network interface units. Resource allocation is a primary function of operating systems, but multilevel networks, database management systems, and even applications have resources of their own to allocate among levels.

We consider four general strategies for resource allocation among levels: static allocation, dynamic allocation, delayed allocation, and manual allocation. Dynamic allocation is the most efficient but inherently produces covert channels. The other three strategies are free of covert channels but can be inefficient to the point of complete impracticality when used for the wrong resources. Static allocation is the simplest strategy and the least efficient. It is usually as inefficient as an ISPL system. Delayed allocation and manual allocation are more efficient, sometimes approaching the efficiency of dynamic allocation. Delayed allocation is better suited to some resources, manual allocation is better for other resources, and a combination of both may be better than either one in some cases. We use the allocation of processor time as the main example to illustrate the four strategies.

Static Allocation

With static allocation, a fixed portion of a resource is allocated to each level that shares the resource. One level cannot borrow from another level even when the first level could use more than its share and the other level has idle capacity.

If processor time is statically allocated, the share of time allocated to a level is generally determined through the initial system configuration. The configuration might assign time slots to each level. The schedule would consist of a sequence of time slots that is repeated for as long as the processor runs. The share for a level is the length of its time slot in the sequence or, if the level has several slots, their combined length. Only processes at the proper level run during the time slot for a level. The level gives up the processor at the end of its time slot even if some processes at that level still want processing time. On the other hand, during the time slot for a level, the processor is left idle whenever every process at the level is waiting for I/O or whenever there are no current processes at the level. This means that the processor may be idle during the time slot for one level when there are processes at another level that could have been serviced.
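
The static schedule just described can be sketched as a small simulation; the slot sequence, level names, and workloads below are hypothetical illustrations, not part of the paper's design. The sketch shows slot time going idle at one level while another level still has pending work.

```python
# A minimal sketch of static allocation of processor time among levels.
from collections import deque

SCHEDULE = [("LOW", 4), ("HIGH", 2)]  # repeating (level, slot length) sequence

def run_static(schedule, workload, cycles):
    """workload: level -> deque of job lengths; returns total idle time."""
    idle = 0
    for _ in range(cycles):
        for level, slot in schedule:
            remaining = slot
            jobs = workload[level]
            while remaining and jobs:
                used = min(remaining, jobs[0])
                remaining -= used
                jobs[0] -= used
                if jobs[0] == 0:
                    jobs.popleft()
            # Any time left in the slot is wasted, even if another
            # level still has runnable jobs.
            idle += remaining
    return idle

workload = {"LOW": deque([1]), "HIGH": deque([10])}
print(run_static(SCHEDULE, workload, cycles=3))
# 11 units go idle in LOW slots while HIGH still has work pending
```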

Dynamic Allocation

At the cost of producing a covert channel, dynamic allocation avoids such waste of resources. Resources are allocated among levels based on the current needs at each level. The simplest algorithms allow one level to borrow freely as needed from other levels. More complicated dynamic allocation algorithms place some limits on how much can be shared or how frequently reallocation can occur.

If processor time is dynamically allocated, the current loads might freely determine the share of processor time for a level, or the system may adjust shares within configured limits. When the higher levels are busy, processes at lower levels cannot get as much processing time as when the higher levels are idle. Because lower processes can detect whether higher levels are relatively idle or relatively busy, there is an exploitable covert channel.

For example, a high Trojan horse could send a "one" bit during a particular period by requesting so much processor time that the processor would seem especially busy to the low Trojan horse receiving the signal. To send a "zero" bit instead, the high Trojan horse would refrain from requesting processor time so that the low Trojan horse would find the processor relatively idle. Irregular patterns of legitimate activity probably make the channel noisy, and the noise reduces the effective bandwidth of the channel. But the channel is not eliminated. Some bandwidth would still be available for leaking information to users who are not cleared to see it.
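
A hypothetical sketch of that Trojan horse pair: a high sender modulates its demand for processor time, and a low receiver decodes the bits from how busy the processor appears. The load model and decoding threshold are our own illustrative inventions, not measurements of any real system.

```python
# Sketch of a covert timing channel through dynamic allocation.
import random

def high_demand(bit):
    # Sender: heavy requests for a "one", near-idle for a "zero",
    # plus a little legitimate background activity (channel noise).
    base = 0.9 if bit else 0.1
    return base + random.uniform(0.0, 0.05)

def low_decode(observed_load):
    # Receiver: a busy-looking processor decodes as a "one".
    return 1 if observed_load > 0.5 else 0

message = [1, 0, 1, 1, 0]
received = [low_decode(high_demand(b)) for b in message]
print(received)  # matches the sent bits while the noise stays small
```

With heavier legitimate activity the threshold would misfire sometimes, reducing the effective bandwidth, but as the paper notes, noise alone does not eliminate the channel.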

The covert channel from dynamic allocation is exploited by exhausting the resource. Processor time, like any resource, is finite, but in some cases, processor time is effectively inexhaustible. If the heaviest possible load on the processor would not consume all the available time, there is always time available whenever a level wants some. This eliminates the covert channel, but it makes dynamic and static allocation equally inefficient. Ensuring that processing time is always available with dynamic allocation would ensure that time is always available with static allocation, too. The same amount of processing time would go idle either way. [NOTE: In some circumstances, dynamic allocation might always give enough time even though static allocation of the same total capacity did not always give enough. This may occur if the limits on the load yield a maximum combined load for all levels that is less than the sum of the maximum loads for individual levels. The most likely reason for such a pattern of loads is that some other dynamically allocated resources are exhausted. The allocation routines for the other resources would then have exploitable covert channels even though the allocation routine for processor time did not.]

Delayed Allocation

Allocating resources to one level may entail denying the same resources to other levels that request them later. A dynamic allocation strategy that could support instant reclamation of resources need not have a covert channel. Each level would have a basic allocation, but when a lower level was not using all of its basic allocation, a higher level wanting more than its own allocation could borrow from the unused portion of the lower level allocation. If the lower level later became busy enough to want some of the borrowed allocation back, enough would be instantly reclaimed for the lower level.

Similarly, if an intermediate level wanted more than its allocation, it could also borrow from the lower level. When a higher level had already borrowed from the lower level, that would not influence how much the intermediate level could borrow. If necessary, resources that were borrowed for the higher level would be instantly reclaimed and reallocated to the intermediate level.

A higher level could not borrow resources from a lower level while the lower level was using them or while any intermediate level was already borrowing them. Also, a lower level could never borrow from a higher level although it would sometimes reclaim its own basic allocation from the higher level or usurp the resources of a still lower level that the higher level happened to be borrowing. [NOTE: When a system involves incomparable levels, the rules for borrowing are more complex. Incomparable levels cannot borrow from each other, nor can they compete to borrow from another level lower than them. One way to avoid competition among incomparable levels is to allow only some of the higher levels to borrow from a lower level. The system configuration would select which higher levels can borrow from a level. The levels selected to have borrowing privileges for a resource at a lower level must be mutually comparable. For any two incomparable levels, the selections for a lower resource might contain one or the other of the two incomparable levels, or perhaps neither, but certainly not both. Because any level not selected could not borrow the lower resource at all, it would never compete for the resource with another incomparable level that was selected.]

When a process at a level is given resources, it might be told whether they come from the basic allocation for its own level, and if not, it could be told from which lower level it is borrowing them. It must not be informed whether the resources were reclaimed from a higher level. There is no covert channel because the borrowings of higher levels do not affect the resource amounts available for a lower level.

When requests for resources are satisfied, the resources are allocated with the same speed whether the resources are currently free or currently being borrowed at a higher level. If free resources might be allocated instantaneously, then borrowed resources must be reallocable to a lower level instantaneously, too. Since instantaneous reallocation is not feasible for most resources, instantaneous allocation of free resources usually cannot be provided either. If borrowed resources can be reallocated only slowly, free resources must be allocated just as slowly. The delayed allocation strategy is named for the sometimes substantial delays the strategy can introduce in the allocation of resources.

For a delayed allocation of processor time in a system with only comparable levels, throughput could be maximized by making a basic allocation of all the processor time to the lowest level. Each level would seem to have available to it all the time that lower levels were not already using. At the end of each time slice, the processor would be allocated to the lowest level with a process ready to run. [NOTE: All levels except the lowest level are borrowing their time from the basic allocation to the lowest level. Since two incomparable levels cannot compete for the same resource, a system with incomparable levels needs some changes to the algorithm. The simplest variation is to specify a repeating sequence of time slices. The slices in the sequence need not all be the same length of time, but for each cycle through the time slices, each slice must be the same length as it was in the first cycle. All the time slices would still be in the basic allocation for the lowest level, but different sets of borrowing levels should be selected for different time slices in the sequence to ensure that each incomparable level has chances to borrow processor time.] An interrupt for the currently allocated level could be serviced promptly, but an interrupt for another level would not be serviced until the next time slice when no lower tasks were pending. With all time slices being of equal duration, this delay in servicing interrupts conceals whether the processor was idle when the interrupt occurred or was busy servicing a higher level. The delay clearly wastes some processor time in order to avoid the covert channel found with dynamic allocation.
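
The slice-granting rule just described can be sketched in a few lines for a system whose levels are all comparable: each fixed-length time slice goes to the lowest level with a process ready to run. The level names are hypothetical.

```python
# Sketch of delayed allocation of processor time with comparable levels.
LEVELS = ["LOW", "MID", "HIGH"]  # ordered lowest to highest

def next_slice(ready):
    """ready: level -> has a runnable process? Returns the level served.
    What any level observes never depends on demand from higher levels,
    so the dynamic-allocation covert channel is absent."""
    for level in LEVELS:
        if ready[level]:
            return level
    return None  # the processor idles for the whole slice

print(next_slice({"LOW": False, "MID": True, "HIGH": True}))   # MID
print(next_slice({"LOW": True, "MID": False, "HIGH": True}))   # LOW
```

The wasted time appears when `next_slice` returns `None` or when an interrupt for one level must wait out a slice granted to a lower level.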

Since a lower level would not be prevented from consuming all the time and shutting out all higher levels, some installations may prefer instead to give each level a basic allocation in order to guarantee some time for each level. This fairness comes at the cost of lower overall efficiency. Whenever multiple levels compete for a shared resource, any strategy to prevent denial of service to high levels will either require more resources or produce a covert channel, entailing compromise of multilevel security.

The advantage of dynamic allocation is its more efficient use of processor time than with static allocation. In fortunate circumstances, delayed allocation is essentially as efficient as dynamic allocation. But in ordinary circumstances, the delays introduced to conceal processor loads at higher levels make delayed allocation less efficient than dynamic allocation. And in unfortunate circumstances, delayed allocation could be even less efficient than static allocation.

Manual Allocation

A contributing factor in producing a covert channel with dynamic allocation is that the allocation is changed automatically based on data from untrusted software. Changes in the allocation based on trustworthy data do not necessarily produce a covert channel. The operators of a multilevel system could sometimes switch the system manually among a variety of different multilevel allocations appropriate for different situations. The operators would choose an allocation based on their expectations of the upcoming resource needs at each level. They must be careful to use information from outside the system, not simply the current loads at each level. Those loads may reflect the influence of Trojan horses instead of legitimate activity.

More automated variants of manual allocation are also possible. Some information within the system could be used for automatic changes in the allocations of resources among levels. The information that is safe to use is information that users or operators input manually and that comes through trusted paths to ensure freedom from the influence of any untrusted software. On a multilevel system, safe inputs may include user logins, user logouts, user requests to change to a new level, and possibly some other inputs through an operator console.

These inputs must follow trusted paths from the user or operator to the TCB. There is no covert channel because Trojan horses are incapable of spoofing what a user does through a trusted path. That is precisely what makes a path qualify as a trusted path. Since Trojan horses cannot produce any of the manual inputs that determine how allocations are updated in the manual allocation strategy, they cannot influence the changes in allocation to any level. It is crucial that the only information used to adjust the allocations is information the operating system receives directly from users through trusted paths.

Manual allocation of processor time can be reasonably efficient in a multilevel system used primarily for online processing. If the user inputs for logging in, logging out, and changing level all come via a trusted path, the allocation of processor time for a level can be proportional to the number of user sessions currently logged in at a particular level. This is often a fair measure of the expected load at that level. No time would go to levels with no current user sessions. When all current sessions are at one level, that level would be allocated all the processor time. Allocations would be subject to change each time a user logged in or out or changed from one level to another.
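
A minimal sketch of that bookkeeping, assuming every login, logout, and level change arrives via a trusted path; the level names are hypothetical. Because only trusted-path events reach the session table, untrusted software cannot modulate the allocation.

```python
# Sketch of manual allocation driven by trusted-path session events.
from collections import Counter

sessions = Counter()  # level -> number of logged-in user sessions

def login(level):
    sessions[level] += 1

def logout(level):
    sessions[level] -= 1
    if sessions[level] == 0:
        del sessions[level]

def shares():
    """Processor-time share per level, proportional to session counts.
    Levels with no current sessions receive no time at all."""
    total = sum(sessions.values())
    return {level: n / total for level, n in sessions.items()}

login("SECRET"); login("SECRET"); login("UNCLASSIFIED")
print(shares())  # SECRET gets 2/3 of the time, UNCLASSIFIED 1/3
```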

The ratio of the number of current user sessions at a level to the total number of current sessions is a secure basis for manual allocation only on a system where the total number of users logged in is unclassified. If users with low clearances must not know how many users are logged in at higher levels, then the ratio determining the allocation for a level should instead compare the current sessions at the level to the sessions at or below the level. Manual allocation based on this ratio would be somewhat less efficient.

Efficiency might be enhanced by taking into account some other information about current user sessions that the trusted paths have validated. The user's name, the time of day, and, if the system is distributed, the processor supporting the user session could be used to anticipate different loads from different sessions and calculate allocations based on those expectations. The weights for the calculations should come from tables the operators have prepared in advance, not from the current demands of the sessions.

In a multilevel system where online processing predominates but there is some background or batch processing, this approach should be modified so that some time is allocated to levels that may have offline processing. Otherwise, offline processing at a level would cease whenever there happened to be no current user sessions at the level.

Reallocation based solely on manual inputs would not be as efficient as dynamic allocation based on all available information. It should still be more efficient than a static allocation that never changes. Manual allocation, like delayed allocation, is less efficient than dynamic allocation. Both allocation strategies are compromises between dynamic allocation and static allocation.

Manual and delayed allocation can be combined. The same kinds of inputs as the manual strategy uses to update allocations can be used to update the basic allocations for the delayed strategy. The hybrid allocation strategy improves the efficiency of delayed allocation, and with resources for which delayed allocation is appropriate, the hybrid strategy is more efficient than manual allocation, too. The hybrid strategy cannot outperform the best dynamic allocation algorithm, nor is it likely even to be equally efficient. However, the covert channels of dynamic allocation are absent from a combination of manual and delayed allocation, just as they are with static allocation, simple delayed allocation, and simple manual allocation.

Allocating Device Resources

We call a device multilevel if it ever stores or transmits data for more than one level. At one extreme, the device may always handle hundreds of levels, or at the other extreme, it may handle one level on some days and another level on the other days.

As a first example of a multilevel device, we consider a multilevel terminal. It is inconvenient for a user to move to a different terminal in order to work at a different level or for the user to have as many terminals on one desk as there are levels of work to do. With one multilevel terminal, terminal access time could be allocated to whichever level the user currently wants. Multilevel terminals would cost more than single-level terminals, but the convenience may justify the added cost. And if one multilevel terminal fully replaces several other terminals, there may even be a cost savings.

The multilevel terminal would need some special manual inputs for selecting the level where the user wants to allocate the terminal access time. A reset button, a dial or switch for indicating a level, and a ready button would be enough. When the user presses the reset button, the terminal clears its screen and any volatile memory, locks the keyboard, and unlocks the level dial. Then, the user can set the dial to the new level. When the user presses the ready button, the terminal locks the dial, selects the single-level communication line at the level corresponding to the setting of the dial, and unlocks the keyboard.

When the terminal is installed, the security administrators should make sure that the dial settings correctly label the processors that can be accessed through the corresponding single-level lines. The terminal must also be protected from sabotage, of course. We caution against making the multilevel terminal too sophisticated. A multilevel workstation is far less likely to be implemented free of covert channels than is a basic multilevel terminal. Pushing the reset button must remove all traces of whatever had been done before.

A similar approach would work for a multilevel printer or multilevel tape drive. The reset button of a printer must clear all physical traces of what was printed at the previous level. The justification for a multilevel printer or tape drive is probably lower cost or greater convenience again.

Trusted Network Interfaces

A network of multilevel lines is more convenient for operators to install and maintain than are separate networks of single-level lines for a variety of levels. The convenience may justify the cost of the trusted network interface (TNI) units to connect each single-level communication line to a multilevel line. Especially in a wide-area network, the savings from having fewer cables may also offset the cost of TNI units.

If a multilevel line is a radio-frequency cable, each level can be statically allocated its own frequency band. A TNI unit would tune to a band based on its control settings. Whoever installs or maintains a unit connecting a multilevel line to a single-level line must check that the control settings of the unit agree with the level of the single-level line.

TNI units should be connected to the communication lines of single-level processors and devices so that they can communicate over the multilevel network lines. Rather than having TNI units connected to the various single-level lines for a multilevel device such as the terminal described earlier, one TNI unit could be embedded in the multilevel device so that one multilevel line could replace all its single-level lines. The terminal would retune its frequency based on the current dial setting when the user pushed the ready button. Embedding a TNI unit is also an option for a multilevel printer or multilevel tape drive.

A network of multilevel lines with TNI units wherever processors and devices connect to the network is functionally equivalent to separate single-level networks. A single-level processor could communicate with other single-level processors and devices only if they are at the same level. It could communicate with the multilevel devices we described only when they were currently allocated to the same level, too.

More complex TNI units might support multiple single-level lines or support an allocation strategy for the multilevel lines more efficient than static allocation of frequency bands to levels. We suspect the added efficiency would not offset the problems of the extra complexity: a higher cost per unit and reduced assurance of multilevel security.

Cryptographic methods can supplement such TNI units but are never a substitute. If network lines are vulnerable, encryption can help preserve the confidentiality and integrity of messages transmitted over the network. However, if the network does not carefully allocate resources based on the levels of the decrypted messages, there are covert channels. Users communicating at low levels could detect heavier and lighter loads on the network from activity at higher levels, possibly due to Trojan horses. Encrypting messages does nothing to eliminate this covert channel.

Multilevel Disk Drives

Any multilevel application requires some support for reading down. Reading down can be implemented with multilevel processors, multilevel disk drives, some other multilevel storage media, or a combination. Disks are more generally useful for reading down than are other storage devices. Also, we believe that multilevel disk drives are much easier to build free of covert channels than are multilevel processors. We are not certain that multilevel drives really can be implemented without covert channels, as nobody has yet tried, but we sketch a design that seems feasible.

The design uses manual allocation of the storage space on the disk and uses a combination of delayed and manual allocation for the access time to the disk drive. The interface for the operator has a reset button, a restore button, an accept button, and various browsing buttons to control a display panel. The interface to the rest of the multilevel system is through separate single-level lines for each level the drive supports. [NOTE: As before, the single-level lines could be replaced with an embedded TNI unit and a multilevel line.]

A special single-level line connects the disk drive to a single-level processor with a configuration table that the operator maintains. The table shows
(1) the levels of the other single-level lines,
(2) which levels are higher or lower than other levels, [NOTE: The level of the special line should be lower than the levels of the other lines.]
(3) what level of data is to be stored in each sector of the disk,
(4) how long each period in the access time schedule lasts,
(5) which level is the basic level for each time period in the schedule,
(6) which higher levels may borrow time during each time period, [NOTE: If the disk supports some incomparable levels, the borrowing levels for a time period must be chosen to be mutually comparable.] and
(7) what position the disk arm is to be in at the end of each time period.
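
The configuration table of items (1) through (7) might be represented as follows; the field and level names are our own hypothetical choices, not part of the design itself.

```python
# Sketch of the multilevel disk drive's configuration table.
from dataclasses import dataclass

@dataclass
class TimePeriod:
    length_ms: int         # (4) how long the period lasts
    basic_level: str       # (5) level served at the start of the period
    borrowers: list        # (6) mutually comparable higher levels
    end_arm_position: int  # (7) required disk arm position at the end

@dataclass
class DiskConfiguration:
    line_levels: dict      # (1) single-level line -> its level
    dominance: dict        # (2) level -> set of levels it dominates
    sector_levels: dict    # (3) sector number -> level of stored data
    schedule: list         # (4)-(7) repeating sequence of TimePeriod

config = DiskConfiguration(
    line_levels={0: "UNCLASSIFIED", 1: "SECRET"},
    dominance={"SECRET": {"UNCLASSIFIED"}, "UNCLASSIFIED": set()},
    sector_levels={s: "UNCLASSIFIED" for s in range(512)},
    schedule=[TimePeriod(50, "UNCLASSIFIED", ["SECRET"], 0),
              TimePeriod(50, "SECRET", [], 0)],
)
```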

When a configuration table takes effect, the allocation of storage space to a level is the sectors that the configuration assigns to that level. The allocation strategy for access time is a hybrid of delayed and manual allocation. The effective configuration gives the parameters for delayed allocation. The basic allocation of access time to a level is the time periods where that level is the basic level.

While the disk drive is providing its regular reading and writing services, the drive rejects any requests to change its internal configuration table. When the operator pushes the reset button, the disk drive locks all the buttons, stops regular reading and writing services, and waits to receive a new configuration table through its special line. The operator working on the processor where configuration tables are maintained should request a change to the new configuration. If the disk drive finds the new configuration unacceptable, it shows an error code in its display panel and unlocks the reset and restore buttons. The operator has a choice of fixing and resubmitting the new configuration or restoring the old configuration.

If the drive would accept the new configuration, it unlocks all buttons and prompts the operator to double check the changes. The operator uses the browsing buttons to check all parts of the new configuration and perhaps also the old configuration to be sure that the configuration the disk drive received is exactly as intended. This precaution means that the single-level processor where the table is maintained and the path connecting the processor and disk drive do not have to be completely trusted.

If the configuration does not look right, the operator pushes the restore button. The disk drive locks the restore and accept buttons, discards the new configuration, and resumes regular service with the old configuration. If the operator pushes the accept button instead, the restore and accept buttons are still locked, but it is the old configuration that is discarded and the new configuration that is used to resume regular services. Also, before resuming regular reading and writing services with a new configuration, the drive clears any disk sectors then allocated to levels lower than before. [NOTE: Any sector allocated to a level incomparable to its old level is also cleared. If the level of a sector is left unchanged, its contents are kept. The contents are also kept in a sector whose level increases. In such a sector, the contents are effectively upgraded to the higher level.] During regular services, the reset and browsing buttons remain unlocked.

While the disk drive serves a level, it accepts inputs and returns outputs through the communication line for the level. The other communication lines are ignored. The drive honors any requests to read or write sectors at the current level. To support reading down, the drive also honors requests to read sectors at lower levels.

Within the disk drive itself, there is a scheduler that determines which level to serve and for how long. The scheduler cycles through the schedule of time periods in the current configuration. At the beginning of a time period, it serves the basic level for the period. When appropriate, the scheduler may change level before the period ends and allocate whatever remains of the period to the lowest level that can borrow time in the period. It may also change level more times and allocate the remainder of the period to the next highest borrowing level. [NOTE: Because levels that may borrow time within a period are chosen to be mutually comparable even when the drive supports incomparable levels, the next highest borrowing level is uniquely defined until the highest borrowing level is reached.] If the highest borrowing level for the period is reached, the level stays the same until the start of the next period -- when it becomes the basic level for that period.

The scheduler in the disk drive changes to the next highest borrowing level when the current level has no more disk accesses to make. If the current level is already the highest borrowing level, the drive waits idly until the period ends or more requests are received at the highest level. The drive does not change level if there would not be enough time to establish the new higher level and still position the disk arm as the configuration requires before the period ends. Similarly, as the period draws to its end, the disk drive rejects any access request that could not be completed in time to position the disk arm properly afterward.
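
The in-period level-change rule might be sketched as follows: the drive serves the basic level first, then ascends one step at a time through the mutually comparable borrowing levels as each lower level runs out of requests. The names and data shapes are hypothetical.

```python
# Sketch of the disk scheduler's within-period level changes.
from collections import namedtuple

Period = namedtuple("Period", ["basic_level", "borrowers"])

def next_level(current, pending, period):
    """current: level now being served; pending: level -> has requests?
    Returns the level the drive should serve next within the period."""
    ladder = [period.basic_level] + period.borrowers  # lowest to highest
    i = ladder.index(current)
    if pending[current] or i == len(ladder) - 1:
        return current  # keep serving, or wait idly at the highest level
    return ladder[i + 1]  # release the rest of the period upward

p = Period("UNCLASSIFIED", ["SECRET", "TOP SECRET"])
print(next_level("UNCLASSIFIED",
                 {"UNCLASSIFIED": False, "SECRET": True, "TOP SECRET": False},
                 p))  # SECRET
```

Note the release is one-way within a period: a lower level regains service only when the next period whose basic level it is begins.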

The covert channel that would be produced by a dynamic allocation of access time is not found in this design. The allocations of storage space on the disk and the parameters used for delayed allocation of access time change only when the configuration changes, and that is only when the operator pushes the appropriate buttons. While the configuration remains unchanged, the performance of a disk drive in one time period has no effect on its performance in later time periods. Within a time period, the service to a level depends just on the requests from that level and lower levels. The higher borrowing levels receive no service until the lower levels voluntarily release their claims on the time period.

The sometimes long delays while a multilevel disk drive is inaccessible from a level make the drive inappropriate for the I/O of many ordinary processes. We suggest that most data be kept on single-level disks and accessed there primarily. Multilevel disks would hold only replicas of data that is sometimes read down. The following scenario explains how this might work.

A Scenario with Upgraded Replicas

An ordinary process running on a single-level processor at some low level writes to a file stored on a single-level disk at the low level. When the process releases its write lock, a new value of the file is available for other processes at the low level to read from the same disk. But if the file header indicates the file is replicated, the replicas do not yet have the new value.

A replica management (RM) process on the same processor sends the updates to RM processes for any other disks that the file header indicates keep replicas at the low level. Although some of these RM processes may run on other processors, all run on single-level processors at the low level. The RM processes update the replicas on their disks to reflect the new value of the file. Multiple copies at the low level increase the availability of the file to users throughout the system. If its disk is multilevel, an RM process also records the new time stamp of the updated replica in a special disk segment for the low level.

Periodically, each process of another kind, the upgraded replica management (URM) processes, reads down on a multilevel disk in the time stamp segments for any levels lower than the level of the processor where the URM process runs. For each file with an upgraded replica at the high level of its processor, the URM process checks whether the time stamp of the lower replica has changed since last checked. If so, the URM process reads the updated lower replica of the file. It is again reading down on the multilevel disk.

The URM process sends the updates to the appropriate RM processes at the high level. As before, the RM processes write the new value of the file into the replicas on their disks at the high level. If any of these disks are multilevel, that may trigger another round of propagating the updates to replicas at still higher levels.
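
The URM polling step might be sketched as follows: read down into a lower level's time stamp segment, then pull any replica whose stamp has advanced since the last poll. All names here, including the mock read-down interface, are hypothetical.

```python
# Sketch of a URM process polling a lower level on a multilevel disk.
def poll_lower_level(disk, low_level, last_seen, upgraded_files):
    """last_seen: file -> stamp at the previous poll.
    Returns (name, contents) pairs for replicas needing an upgrade."""
    updates = []
    stamps = disk.read_down_stamps(low_level)  # reading down on the disk
    for name in upgraded_files:
        stamp = stamps.get(name)
        if stamp is not None and stamp != last_seen.get(name):
            updates.append((name, disk.read_down(low_level, name)))
            last_seen[name] = stamp
    return updates  # the URM process forwards these to high-level RMs

class MockDisk:
    """Stand-in for a multilevel disk drive's read-down interface."""
    def __init__(self, stamps, data):
        self.stamps, self.data = stamps, data
    def read_down_stamps(self, level):
        return self.stamps[level]
    def read_down(self, level, name):
        return self.data[level][name]

disk = MockDisk({"LOW": {"plan": 2, "map": 1}},
                {"LOW": {"plan": "v2", "map": "v1"}})
last_seen = {"plan": 1, "map": 1}
print(poll_lower_level(disk, "LOW", last_seen, ["plan", "map"]))
# only "plan" has a new stamp, so only it is re-read
```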

The new value of the file becomes available to ordinary processes running on single-level processors at a variety of levels. A process running on a processor at one of those levels can read any replica of the file found on a single-level disk at the same level. [NOTE: If the single-level disk is remote from the process, processes on other processors at the same level would help with the reading.]

In the scenario above, all processes can run on single-level processors. Ordinary processes can do all their reading and writing on single-level disks. The only processes that must access multilevel disks are the replica management (RM) and upgraded replica management (URM) processes. An RM process reads and writes time stamp segments and replicas at its own level, and the URM processes read down to lower time stamp segments and lower replicas. [NOTE: A disk controller process on the same processor as the RM or URM process might mediate its reading and writing of the multilevel disk.]

The inefficiencies of the allocation strategy for access time to the multilevel disk drives may hinder the upgrading of new or changed files. To update the upgraded replicas at the same time as the changes are made in the file itself would require reliable writing up, not just reading down. Because a covert-channel-free system is not expected to have reliable writing up, there will be some lag between the writing of a file and the updating of the upgraded replicas. The choice of an allocation strategy for the multilevel disk drives would affect only how long that lag can be. It does not affect any other processing. In particular, the I/O of ordinary processes and the propagation of replicas within a level are unaffected. They can benefit from all the efficiencies of high-performance, single-level disks.

Allocating Software Resources

While discussing multilevel devices, we have ignored multilevel processors and assumed that the multilevel devices would have to communicate with single-level processors. We now consider some of the resources of a multilevel processor. A multilevel processor has a trusted computing base (TCB), typically consisting of a kernel and some trusted processes. The software for the kernel and most trusted processes runs multilevel. The resources of that software are allocated among the various levels that the software serves.

As with hardware resources, dynamically allocating these resources on the basis of current demand creates an exploitable covert channel. Since the resources are limited, a low process employing the services of the multilevel software can detect how much has been allocated to higher levels, and a high process can send signals by modulating its demands on the multilevel software services. Static, delayed, or manual allocation, on the other hand, would produce no covert channels. Static allocation is feasible for most TCB software resources but is relatively inefficient. Manual allocation is often feasible and more efficient. Delayed allocation is also more efficient but would be too difficult to implement correctly for many software resources.
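The exploitation just described is easy to make concrete. In this Python sketch (the pool size, demand values, and decoding threshold are ours for illustration), a high process encodes bits by modulating its demand on a dynamically allocated pool, and a low process decodes them from the share it receives; a statically allocated share is fixed and carries no signal.

```python
# Hypothetical demonstration of a demand-modulation covert channel.
POOL_SIZE = 8

def low_share_dynamic(high_demand):
    """Dynamic allocation: low receives whatever high has not claimed."""
    return POOL_SIZE - high_demand

def low_share_static(_high_demand):
    """Static allocation: low's share is fixed, so no signal can leak."""
    return POOL_SIZE // 2

def transmit(bits):
    """High modulates demand per bit; low decodes from its observed share."""
    received = []
    for bit in bits:
        high_demand = 6 if bit else 2                        # encode
        share = low_share_dynamic(high_demand)
        received.append(1 if share < POOL_SIZE // 2 else 0)  # decode
    return received

print(transmit([1, 0, 1, 1, 0]))  # low recovers [1, 0, 1, 1, 0] exactly
```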

Kernel Resources

The innermost layers of a trusted operating system for a multilevel processor are called a trusted executive or kernel. The layers that concern us include the layer presenting the abstraction of processes and all lower layers. These are the layers that do not run as processes. The kernel is inherently multilevel, and many of its resources are also multilevel. The execution time of the kernel is allocated among the levels. An allocation of processor time to a level includes the time the kernel spends serving that level, not just the execution time of single-level processes at the level. The storage resources of the multilevel kernel in a multilevel processor include most of the system data space. At any given moment, some of these resources would be fully allocated to the same level as is the processor time. Other storage resources might be partially allocated among levels.

It is extremely difficult to avoid every covert channel in the allocation of kernel time and storage in a multilevel processor. Some kernel resources can easily be allocated among levels using a static or manual allocation strategy, but it is unlikely that all resources of a practical multilevel kernel would be so safely allocated, especially in the lowest layers of the kernel.

A multilevel processor embedded in a special-purpose device such as a disk drive, printer, terminal, or network interface unit should need such a simple executive that safe allocation of all resources can be achieved without sacrificing practicality. The executive probably would not even support real processes.

A more general-purpose multilevel processor supporting user processes, however, seems doomed to have some covert channels at least within its kernel. The service time and data spaces for the lowest kernel layers could not avoid load-influenced dynamic allocation. The covert channels might all have small bandwidths or high noise, but they would still be there for malicious users to exploit, however slowly. Even some special-purpose multilevel processors, such as file servers, may be too sophisticated to be reliably free of covert channels.

To date, no designers have even come close to producing a covert-channel-free kernel for a multilevel operating system. In a typical design for a multilevel kernel, many low-bandwidth covert channels are not even identified.

Trusted Process Resources

Secure allocation among levels is somewhat easier for the resources of multilevel trusted processes than for kernel resources. This may be largely irrelevant, however, because multilevel processes exist only on multilevel processors with more sophisticated kernels. Since the kernels already would have introduced some covert channels, the effort to avoid all covert channels in the trusted processes may be futile. The result would still be a TCB with some covert channels.

As with the kernel, the allocation of the execution time of a trusted process to a level must be considered part of the allocation of processor time to the level. Static allocation of trusted process time is simpler, but the efficiencies of manual allocation might justify the extra complexity.

The virtual address space of a trusted process in a multilevel processor gives it storage resources that can be allocated among the levels that the process serves. Some variables in the address space would be fully allocated at any moment to the same level as the process time. Other storage resources, especially structures such as tables, lists, and buffers, might be partially allocated among levels based on a static allocation, or perhaps a manual allocation. Dynamic allocation based on current demand would create a covert channel, of course.

Memory management for the address spaces of trusted processes differs from the memory management for single-level process address spaces. Because the storage resources of a trusted multilevel process are allocated among multiple levels, it is not safe to handle them like those of untrusted single-level processes. The level of an untrusted process labels its whole address space, but the labeling of trusted process storage is not so simple.

The data of a trusted process must always be clearly labeled when it is stored in physical memory, when it is communicated over the memory bus, when it is kept on a paging disk, or when it is sent over communication lines between the processor and the paging disk. Otherwise, it becomes impossible to maintain control over the allocations among levels for various resources, including space in physical memory, access time to the memory bus, storage space on the paging disk, access time to the paging disk, and access time to the lines connecting the processor and the disk. Without explicit labels on trusted process data at all times, current demands would influence the allocation of those resources. Their allocation strategies would degenerate into some variety of dynamic allocation with covert channels and compromise of multilevel security.

Architectural Implications

Avoiding all covert channels in multilevel processors would require static, delayed, or manual allocation of all the following resources: processor time, space in physical memory, service time from the memory bus, kernel service time, service time from all multilevel processes, and all storage within the address spaces of the kernel and the multilevel processes. We doubt that this can be achieved in a practical, general-purpose processor. Perhaps the simplest strategy, static allocation, would be possible, but then the multilevel processor is not significantly more efficient than a set of single-level processors. It would be better to replace it with single-level processors and have real assurance of freedom from covert channels in processors. We suggest that multilevel systems not have any multilevel processors.
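The static strategy for processor time can be sketched concretely. In this illustrative Python fragment (the levels and slice length are our assumptions), each level receives a fixed slice in a fixed rotation whether or not it has runnable work, so the schedule observed at any level reveals nothing about demand at the others.

```python
# Illustrative static allocation of processor time among levels.
LEVELS = ["unclassified", "secret", "top secret"]
SLICE_MS = 10  # fixed slice length, independent of demand

def static_schedule(num_slices):
    """The slices a static scheduler runs; demand never enters the formula."""
    return [(LEVELS[i % len(LEVELS)], SLICE_MS) for i in range(num_slices)]

# The schedule is identical whether every level is busy or all are idle,
# which is why it carries no covert signal -- and why it is wasteful.
print(static_schedule(6))
```

The waste is visible in the sketch: idle levels consume their slices anyway, which is the inefficiency that makes a statically allocated multilevel processor no better than a set of single-level processors.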

Having no multilevel processors certainly helps to minimize the TCB for mandatory security. This is especially appropriate for the high-assurance systems at the Orange Book classes B3 and A1. Because of the rapid drop in prices for processors and memories and the relatively wide selection of secure single-level processors, limiting a multilevel system to single-level processors may impose little or no penalty in efficiency. We believe the best architecture for most multilevel applications is a Distributed, Single-level-processor, Multilevel-secure (DSM) system. Even if a multilevel application does not need a distributed architecture for any other reason, we feel it should be distributed in order to be multilevel secure.

The network in a DSM system must not introduce covert channels. A simple option is a separate network for each level to connect the single-level processors at that level. A potentially less costly network has multilevel lines connecting all the processors and has the trusted network interface (TNI) units sketched earlier ensuring covert-channel-free allocation of the lines. The two options are functionally equivalent. The difference is in the number and capacity of the lines and in the hardware at the interface between the processors and the network.

Multilevel System Benefits in DSM Systems

Each processor of a DSM system handles just one level, as in an ISPL system. An important question is whether a DSM system is as limited in its functionality as an ISPL system.

Downgrading, writing up reliably, and maintaining data consistency across levels cannot be fully automated as they can be in systems with multilevel processors and covert channels, but they can at least be more automated than in an ISPL system. Many, perhaps most, multilevel applications require none of these functions, but some do need one or more of them. Manual contributions to reliable writing up or to data consistency are inconvenient, but the only practical alternatives compromise multilevel security. Downgrading is so fraught with risk that it is reasonable to insist that some critical step be performed manually. The inconvenience is worthwhile.

Reading down is the essence of multilevel processing. Users perceive a system as multilevel if they have a choice of levels at which to work and if they can refer to the data at lower levels while creating or updating data at the current working level. Reading down and ordinary single-level services are sufficient for most multilevel applications. DSM systems need not have the same problems with reading down as ISPL systems do. Reading down can be supported with multilevel disk drives similar to those described earlier. However, most disk drives in a practical DSM system should probably still each service a single level.

Some multilevel hardware in DSM systems can also escape the limitations on resource allocation in ISPL systems. Cost and convenience arguments justify static allocation of multilevel network lines and manual allocation of such resources as terminals, tape drives, and printers.

Partitioning Levels

In the classification scheme of the U.S. Department of Defense, there are four hierarchical levels: unclassified, confidential, secret, and top secret. A level at which data is classified is one of the four hierarchical levels plus a set of nonhierarchical compartments, possibly empty. Many other classification schemes are similar. A user's clearance is the highest level of data the user may see: the hierarchical level to which the user is cleared plus any compartments for which the user is cleared.
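The ordering sketched above is the usual dominance relation, which can be stated precisely in a few lines. The Python below is our illustration; the compartment names are invented.

```python
# The dominance relation implied above: a clearance dominates a data level
# when its hierarchical level is at least as high and its compartment set
# contains all of the level's compartments.  Compartment names are invented.
HIERARCHY = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(clearance, level):
    (c_hier, c_comps), (l_hier, l_comps) = clearance, level
    return HIERARCHY[c_hier] >= HIERARCHY[l_hier] and l_comps <= c_comps

user = ("secret", {"crypto", "nato"})
print(dominates(user, ("confidential", {"crypto"})))  # True
print(dominates(user, ("secret", {"nuclear"})))       # False: compartment missing
print(dominates(user, ("top secret", set())))         # False: hierarchy too high
```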

As noted above, it is best to run a multilevel application as system high if every user has the same clearance, covering all data levels in the application, no matter how many. However, a DSM system is appropriate when some users have different clearances and data is classified over a range of levels. Normally, a DSM system has different processors for each different data level. This is practical for the many multilevel applications with data at only two or a few levels. Other applications, though, involving various nonhierarchical compartments use dozens or even hundreds of data levels. Processor prices may be falling, but a DSM system with at least one single-level processor for each of hundreds of levels would be impractical. However, a DSM solution may still be reasonable, provided that the number of different user clearances is fairly small, even though the number of different data levels is large.

We describe a DSM system with many data levels, many users, and a handful of different user clearances. A few users, perhaps just the system administrators, might be cleared for all levels, but most would have limited clearances. Probably, those clearances differ in their sets of compartments. The data levels are partitioned based on the overlaps and differences between pairs of clearances. Each partition contains one or more data levels; each data level belongs in one partition; and each clearance includes one or more complete partitions. In the best case, there are exactly as many partitions as clearances, but usually there would be more partitions. [NOTE: In the worst case, n mutually incomparable clearances form 2**n - 1 partitions. Probably, the levels in most of those partitions would never be used to classify any data in the system and so would never need resources. Partitions with no resource needs can be ignored.]
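The partitioning rule can be made concrete: two data levels belong to the same partition exactly when the same set of clearances dominates both. The Python sketch below uses invented levels and clearances; with two incomparable clearances it yields the three partitions (2**2 - 1) that the footnote's worst case predicts.

```python
# Illustrative partitioning of data levels by which clearances dominate them.
HIERARCHY = {"C": 1, "S": 2, "TS": 3}

def dominates(clearance, level):
    (c_hier, c_comps), (l_hier, l_comps) = clearance, level
    return HIERARCHY[c_hier] >= HIERARCHY[l_hier] and l_comps <= c_comps

def partition_levels(levels, clearances):
    """Group levels by the set of clearances that include them."""
    groups = {}
    for lvl in levels:
        key = frozenset(i for i, c in enumerate(clearances) if dominates(c, lvl))
        groups.setdefault(key, []).append(lvl)
    return list(groups.values())

# Two incomparable clearances: secret with compartment x, secret with y.
clearances = [("S", frozenset({"x"})), ("S", frozenset({"y"}))]
levels = [("C", frozenset()), ("S", frozenset({"x"})), ("S", frozenset({"y"}))]
for part in partition_levels(levels, clearances):
    print(part)  # three partitions: {C}, {S,x}, and {S,y}
```

Confidential data, dominated by both clearances, forms its own partition; each compartmented level is included by only one clearance and so sits alone.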

The processors are allocated, not to a single level, but rather to a single partition. A processor may handle data at every level within its partition and may communicate with any other processors sharing the same partition. It should have functionality similar to that required for class B1 in the Orange Book.

A user of a single-partition processor could be anybody whose clearance includes the partition. Because of how the levels are partitioned, the user's clearance will include all or none of the levels in the partition. This is why multilevel security is not compromised even though we expect the processor to have plenty of covert channels. The channels are tolerable because their exploitation could leak information only between levels in the same partition. A malicious user cleared for one level in a partition would not bother to exploit a covert channel in order to access another level in the partition because the user's clearance must include the other level, too.

Because covert channels can still leak within a partition, printed output from a partitioned DSM system can safely be released without review only if the label that the system generated is the highest level of the partition. Users can release output with other labels after manually confirming the labels.


Conclusions

Until feasible techniques are found to develop a covert-channel-free TCB for a practical multilevel processor, most multilevel systems should be DSM systems with some multilevel disks and perhaps other multilevel devices, but with no general-purpose, multilevel processors. The current research and development efforts on multilevel systems seem to focus on operating systems for multilevel processors, database management systems for multilevel processors, multilevel networks among multilevel processors, and distributed operating systems with multilevel processors. These systems are suitable only for installations that can really tolerate compromises of multilevel security through covert channels.

Promising directions for new efforts to serve secure installations include the development of multilevel disk drives and trusted network interfaces without covert channels. Other efforts should examine how single-level processors can use the multilevel disks and networks to build basic DSM systems that provide reading down in addition to the regular services of single-level distributed systems. Further efforts should enhance the basic DSM systems to build more sophisticated DSM systems or multilevel database management systems.

Because these implications for multilevel system architectures represent such a radical shift from the predominant direction of research and development, we encourage readers to dispute our conclusions. Optimists may wish to explain why most installations should tolerate covert channels or how a practical, general-purpose, multilevel processor can be developed with no covert channels. Pessimists may wish to explain why multilevel disk drives or trusted network interfaces cannot be developed without covert channels or why they could not be used to build practical DSM systems. We feel that avoiding all covert channels makes good sense for multilevel systems, that the current dismal state of the art is sufficient evidence of the unsuitability of architectures with multilevel processors, and that it is worth a serious effort to build a prototype of a covert-channel-free, multilevel system that has multilevel disk drives and single-level processors instead of multilevel processors.


References

1. A. Downing, I. Greenberg, and T. Lunt. Issues in distributed system security. In Proc. 5th Aerospace Computer Security Conference, December 1989.

2. R.J. Feiertag. A technique for proving specifications are multilevel secure. Technical Report CSL-109, Computer Science Laboratory, SRI International, Menlo Park, CA, January 1980.

3. J.W. Gray III. Toward a mathematical foundation for information flow security. In Proc. 1991 Symposium on Research in Security and Privacy, pages 21--34, Oakland, CA, May 1991. IEEE Computer Society.

4. W.-M. Hu. Reducing timing channels with fuzzy time. In Proc. 1991 Symposium on Research in Security and Privacy, pages 8--20, Oakland, CA, May 1991. IEEE Computer Society.

5. P.A. Karger and J.C. Wray. Storage channels in disk arm optimization. In Proc. 1991 Symposium on Research in Security and Privacy, pages 52--61, Oakland, CA, May 1991. IEEE Computer Society.

6. T.F. Keefe and W.T. Tsai. Multiversion concurrency control for multilevel secure database systems. In Proc. 1990 Symposium on Research in Security and Privacy, pages 369--383, Oakland, CA, May 1990. IEEE Computer Society.

7. W.T. Maimone and I.B. Greenberg. Single-level multiversion schedulers for multilevel secure database systems. In Proc. 6th Annual Computer Security Applications Conference, December 1990.

8. D. McCullough. A hookup theorem for multilevel security. IEEE Trans. Software Engineering, 16(6), June 1990.

9. NCSC. Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). National Computer Security Center, December 1985. DOD-5200.28-STD, Orange Book.

10. P.G. Neumann, N.E. Proctor, and T.F. Lunt. Preventing security misuse in distributed systems. Technical report, Computer Science Laboratory, SRI International, Menlo Park, CA, March 1992. Project 1021, Final Report.

11. R. Pike, D. Presotto, K. Thompson, H. Trickey, T. Duff, and G. Holzmann. Plan 9: The early papers. Technical report, AT&T Bell Laboratories, Murray Hill, NJ, July 1991. Computing Science Technical Report No. 158. This report contains 7 conference papers presented during 1990 and 1991.

12. P.A. Porras and R.A. Kemmerer. Analyzing covert storage channels. In Proc. 1991 Symposium on Research in Security and Privacy, pages 36--51, Oakland, CA, May 1991. IEEE Computer Society.

13. J.M. Rushby and B. Randell. A distributed secure system. IEEE Computer, 16(7):55--67, July 1983.

14. J.C. Wray. An analysis of covert timing channels. In Proc. 1991 Symposium on Research in Security and Privacy, pages 2--7, Oakland, CA, May 1991. IEEE Computer Society.